creating hash file from sequential file

Posted: Mon May 14, 2018 8:55 am
by abhinavsuri
I am looking to create a hash file from a sequential file in a server routine (or any other generic way). Basically I need the data to be copied from the sequential file to the hash file without having to create a DataStage job. Is this even possible? If it is possible, could you please guide me as to how to achieve this? How do I specify the key columns for the hash file? And if I want only three columns out of the six columns in the sequential file to be populated to the hash file, how can I do this?

From my research I found a way to create a blank hash file, but could not find any way to create a hash file using data from a sequential file.
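
For what it's worth, the blank-file part I found boils down to something like this from a routine (MYHASH and the type 30 dynamic file are just from my test):

    * Creates an empty dynamic hashed file in the project account.
    Call DSExecute("UV", "CREATE.FILE MYHASH 30", Output, SysRet)

That gives me an empty file, but nothing about loading it from the sequential file.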

Posted: Mon May 14, 2018 9:10 am
by chulett
While I'm sure a custom routine can be created to do this, why not just create a Server job? That would be... quick and easy.

Posted: Mon May 14, 2018 2:39 pm
by abhinavsuri
Craig, can you guide me to any BASIC commands that can be used to do that? The reason we want to replace this with a routine is so that we do not have to write hundreds of jobs. Instead we want to create a utility which can be passed a parameter and will do the needful. We do something similar for some other jobs.

Posted: Mon May 14, 2018 2:50 pm
by chulett
So, you have hundreds of files that you need to create hundreds of hashed files from? And these files... will their metadata be identical or could they all possibly be different? Will the hashed file structures be different? Just trying to properly gauge the effort here.

Posted: Mon May 14, 2018 4:45 pm
by ray.wurlod
Yes, it can be done with a DataStage BASIC routine but, first, how do you plan to handle the metadata (the field definitions)? You cannot create a hashed file without at least one key column.

Secondly, would you like to create the hashed file as a UniVerse "table", or build the dictionary portion manually?

"The needful" is a little more than you have had in mind.

Posted: Tue May 15, 2018 10:07 am
by abhinavsuri
All these files will have different sets of columns and also different key fields, since the data comes from a different table for each file. Can you point me in a direction where I can get information on how to create the hash file while defining the metadata and keys?

Posted: Tue May 15, 2018 1:26 pm
by abhinavsuri
Just to add more detail: I am using RCP to create a seq. file from the table. Now I need to use this seq. file to create a hash file, but I need to define the keys.
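
Since every file has different keys, what I had in mind was passing the key position into the routine as an argument rather than hard-coding it, roughly like this (the argument name is invented):

    * KeyPos would be a routine argument, e.g. 1 for the first field.
    Key = FIELD(Line, ",", KeyPos)
    WRITE Rec ON F.Hash, Key

A multi-column key would presumably mean concatenating the fields into a single record ID, in whatever form the downstream lookups expect.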

Posted: Tue May 15, 2018 1:48 pm
by chulett
I don't want to derail this conversation but have been wondering... can you let us know what these hashed files will be used for?

Posted: Wed May 16, 2018 7:37 am
by abhinavsuri
These hash files will be used as lookups in other jobs downstream.

Posted: Wed May 16, 2018 8:54 am
by chulett
Okay... why the need for them to be created in separate processes that involve PX jobs, sequential files and a custom routine? It's easy enough to have any of the downstream jobs create them when they run: just hook the sequential file up to the hashed file used as the lookup. Or, when we had a plethora of shared hashed files, the first thing each 'batch run' did was run a series of Server jobs that created them by reading their source and reloading them for the current run. Then each downstream process had whatever hashed file lookups it needed, ready to go.

Just trying to see if perhaps your solution is a bit... over-engineered? And if there is a better way to get you where you need to be.