creating hash file from sequential file
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 62
- Joined: Thu Dec 28, 2006 11:54 pm
I am looking to create a hash file from a sequential file in a server routine (or any other generic way). Basically I need the data to be copied from the sequential file to the hash file without having to create a DataStage job. Is this even possible? If it is possible, could you please guide me as to how to achieve this? How do I specify the key columns for the hash file? If I want only three columns out of the 6 columns (in the sequential file) to be populated to the hash file, how can I do this?
From my research I found a way to create a blank hash file, but I could not find any way to create a hash file using data from a sequential file.
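For what it's worth, copying rows from a sequential file into an existing hashed file can be sketched in DataStage (UniVerse) BASIC. This is only a rough illustration with assumed names (the source path, the hashed file name MYHASH, a comma delimiter, and the chosen column positions are all placeholders), not tested production code:

```
* Sketch: copy selected columns from a sequential file to a hashed file.
* Assumes a comma-delimited source, column 1 as the key, and that the
* hashed file MYHASH already exists in the project/account.

      OPENSEQ "/path/to/source.txt" TO SeqFile ELSE
         Ans = "Cannot open sequential file"
         RETURN
      END

      OPEN "", "MYHASH" TO HashFile ELSE
         Ans = "Cannot open hashed file"
         RETURN
      END

      LOOP
         READSEQ Line FROM SeqFile ELSE EXIT
         * Keep 3 of the 6 delimited columns; adjust positions to suit.
         Key = FIELD(Line, ",", 1)
         Rec = ""
         Rec<1> = FIELD(Line, ",", 3)
         Rec<2> = FIELD(Line, ",", 5)
         WRITE Rec TO HashFile, Key
      REPEAT

      CLOSESEQ SeqFile
      CLOSE HashFile
      Ans = "OK"
```

The key is whatever you write as the record ID (the second argument to WRITE); concatenate fields with a separator if you need a multi-column key.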
-
- Premium Member
- Posts: 62
- Joined: Thu Dec 28, 2006 11:54 pm
Craig, can you guide me to any basic commands that can be used to do that? The reason we want to replace the jobs with a routine is so that we do not have to write hundreds of them. Instead we want to create a utility that can be passed a parameter and will do the needful. We do something similar for some other jobs.
So, you have hundreds of files that you need to create hundreds of hashed files from? And these files... will their metadata be identical or could they all possibly be different? Will the hashed file structures be different? Just trying to properly gauge the effort here.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Yes, it can be done with a DataStage BASIC routine, but, first, how do you plan to handle the metadata (the field definitions)? You cannot create a hashed file without at least one key column.
Secondly, would you like to create the hashed file as a UniVerse "table", or build the dictionary portion manually?
"The needful" is a little more than you have had in mind.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Premium Member
- Posts: 62
- Joined: Thu Dec 28, 2006 11:54 pm
Okay... why the need for them to be created in separate processes involving PX jobs, sequential files and a custom routine? It's easy enough to have any of the downstream jobs create them when they run: just hook the sequential file up to the hashed file used as the lookup. Or, when we had a plethora of shared hashed files, the first thing each 'batch run' did was run a series of Server jobs that created them by reading their source and reloading them for the current run. Then each downstream process had whatever hashed file lookups it needed, ready to go.
Just trying to see if perhaps your solution is a bit... over-engineered? And if there is a better way to get you where you need to be.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers