Questions regarding Hash files and hash file stage
Moderators: chulett, rschirm, roy
Ray / Craig,
Once I've optimized the hashed file and have all the OVER.30 data moved over to DATA.30, do I have to modify the existing DataStage jobs? By modify I mean change the modulus and other values in the hashed file stage.
Below are the various jobs that touch this hashed file:
1) Initial job - clears the hashed file by inserting one record (with @NULL values for all fields)
2) Lookup file creation job - inserts all the records into the hashed file
3) Normal job - the regular ETL job that performs lookups against the hashed file
Jobs (1) and (2) run each weekend.
Job (3) runs Monday through Friday.
Thanks
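As an aside, the arithmetic behind a "minimum modulus" suggestion for a static hashed file can be sketched in a few lines. This is a hypothetical illustration only, not HFC's actual algorithm: the 512-byte separation unit is standard UniVerse convention, but the 80% target load, the round-up-to-a-prime rule, and the function names are all assumptions.

```python
# Hypothetical sketch of the kind of arithmetic a sizing tool such as HFC
# performs when suggesting a minimum modulus for a *static* hashed file.
# The 512-byte separation unit is standard; the 80% target load and the
# round-up-to-a-prime rule here are assumptions for illustration only.

def next_prime(n: int) -> int:
    """Smallest prime >= n (trial division; fine for modulus-sized numbers)."""
    def is_prime(k: int) -> bool:
        if k < 2:
            return False
        i = 2
        while i * i <= k:
            if k % i == 0:
                return False
            i += 1
        return True
    while not is_prime(n):
        n += 1
    return n

def suggest_modulus(total_data_bytes: int, separation: int = 4,
                    target_load: float = 0.8) -> int:
    """Groups of separation*512 bytes, filled to ~target_load, prime modulus."""
    group_bytes = separation * 512
    usable = int(group_bytes * target_load)
    raw = -(-total_data_bytes // usable)      # ceiling division
    return next_prime(max(raw, 1))

print(suggest_modulus(1_000_000))             # suggested modulus for ~1 MB of data
```

The point of the sketch is that the modulus is a property of the file on disk, set at creation time, which is why the answers below focus on whether the file is cleared versus dropped and recreated.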
Not sure if you have a legitimate need to separate #1 and #2 or not, but you certainly don't need to insert any kind of null record to clear a hashed file. Just setting the 'Clear file before writing' option is all that is needed; the clear happens whether you write a record to it or not.
-craig
"You can never have too many knives" -- Logan Nine Fingers
I don't believe so; the creation settings should remain intact even when the hashed file is cleared. If that's wrong, I'm sure Ray will be along shortly with a correction.
Now, if you dropped and recreated it each time that would be a different story... there you would have to ensure they were set properly in the Options section of the hashed file stage.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hello,
1) Why does HFC show a separation value for a dynamic hashed file? Please correct me if I'm wrong, but I thought separation values apply only to static hashed files, and for dynamic hashed files we can only specify a group size.
If I'm wrong, then how do I specify a separation value for a dynamic hashed file? I don't see an option in Designer. Should it be done via TCL?
2) Does optimizing a dynamic hashed file help only when a subset of records is selected from the hashed file?
I ask this question based on one of my posts - viewtopic.php?p=387741
Thanks.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
GROUP.SIZE 1 = separation 4
GROUP.SIZE 2 = separation 8
Any other value in HFC will cause a warning message to be displayed.
The CREATE.TABLE statement generated by HFC will include a GROUP.SIZE 2 clause if required. GROUP.SIZE 1 is the default, so does not have to be included.
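The equivalence above follows directly from the units involved: separation is counted in 512-byte blocks, while GROUP.SIZE n gives a dynamic file groups of n × 2048 bytes. A minimal Python sketch of the conversion (the helper name is mine, not anything in DataStage):

```python
# Separation is counted in 512-byte blocks; a dynamic file's GROUP.SIZE n
# gives groups of n * 2048 bytes, hence the two separation values quoted.
def separation_for(group_size: int) -> int:
    if group_size not in (1, 2):                 # only 1 and 2 are valid
        raise ValueError("GROUP.SIZE must be 1 or 2 for dynamic hashed files")
    return group_size * 2048 // 512              # GROUP.SIZE 1 -> 4, 2 -> 8

print(separation_for(1), separation_for(2))      # 4 8
```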
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Craig,
Existing job design below
It's all in a single job, NOT in two separate jobs.
Code:
DRS
|
|
Hashed File
|
|
DRS-------->Transformer--------->DRS