Hash File Calculator? Where is it?

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Jahnavi
Participant
Posts: 6
Joined: Wed Oct 12, 2005 2:24 pm

Hash File Calculator? Where is it?

Post by Jahnavi »

Hi,

I am new to DataStage. I am trying to write to/update and read from a hashed file at the same time. I have seen several posts about this. The reference hashed file is set to "Disabled, Lock for Updates" (I also tried just "Disabled"). "Allow stage write cache" is not checked in the hashed file stage I write to, and the "Enable row buffering" option is also unchecked in the job properties.

I have only 30,000 rows, 8 columns, and 1 key column, and yet performance is very slow: the whole job takes nearly 10 minutes to complete. I left all the other dynamic hashed file properties at their default values. Any help or tips to improve performance would be appreciated.

I was looking at changing the properties of the hashed file, and several posts refer to a Hash File Calculator, which we seem to be missing. I talked to our admin; he looked through the DataStage CD and was not able to find it. Where are we supposed to find it?
Please help.

Thank You,
Jahnavi
Sunshine2323
Charter Member
Posts: 130
Joined: Mon Sep 06, 2004 3:05 am
Location: Dubai,UAE

Post by Sunshine2323 »

Hi Jahnavi,

Check inside the Utilities/Unsupported folder on the DataStage CD.
Warm Regards,
Amruta Bandekar

<b>If A equals success, then the formula is: A = X + Y + Z, X is work. Y is play. Z is keep your mouth shut. </b>
--Albert Einstein
Jahnavi
Participant
Posts: 6
Joined: Wed Oct 12, 2005 2:24 pm

Post by Jahnavi »

Thanks! Found it.
anu123
Premium Member
Posts: 143
Joined: Sun Feb 05, 2006 1:05 pm
Location: Columbus, OH, USA

Post by anu123 »

Jahnavi wrote: Thanks! Found it.
Could you please post your findings here?
Thank you,
Anu
Jahnavi
Participant
Posts: 6
Joined: Wed Oct 12, 2005 2:24 pm

Post by Jahnavi »

From the various posts I read, I think three things helped me:
1) I unchecked the "Clear file before writing" option, which I gather slows things down. Instead, I checked "Delete file before create".
2) I increased the minimum modulus from 1 to 1000.
3) Even then the job was still slow, so, as somebody suggested, I ran the job with only 1 row to clear out any existing rows, and then ran it again with all the rows.

Performance increased 10 times. Now it takes less than a minute to write to and read from the same hashed file!
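For anyone who can't locate the Hash File Calculator, here is a minimal sketch (in Python, for illustration only) of the kind of estimate it produces: a minimum modulus derived from row count, average record size, and group size. The function name, the 80% target load factor, and the 100-byte average record size are my assumptions, not values from this thread or from the actual utility.

```python
import math

def suggest_minimum_modulus(row_count, avg_record_bytes, group_size=1,
                            target_load=0.8):
    """Estimate a minimum modulus for a hashed file.

    group_size 1 = 2048-byte groups, 2 = 4096-byte groups
    (standard UniVerse-style hashed file group sizes).
    target_load leaves headroom so groups are not immediately
    pushed into overflow.
    """
    group_bytes = 2048 * group_size
    total_bytes = row_count * avg_record_bytes
    modulus = math.ceil(total_bytes / (group_bytes * target_load))
    return max(modulus, 1)

# Example: 30,000 rows at an assumed ~100 bytes per record,
# as in the job described above.
print(suggest_minimum_modulus(30000, 100))  # -> 1832 with these assumptions
```

With these (assumed) figures the estimate lands in the same ballpark as the modulus of 1000 used above, which is why bumping it up from the default of 1 made such a difference.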

Thanks to all the posters for the help!

Jahnavi