Tuning DataStage

A forum for discussing DataStage® basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

arun_nair24
Participant
Posts: 2
Joined: Mon Nov 20, 2006 10:52 pm
Location: India

Tuning DataStage

Post by arun_nair24 »

Hi,
I am new to DataStage; I am an Oracle DBA and have been asked to improve DataStage loading performance.

What should be considered when tuning DataStage? I am running a job which selects data from one table, does a basic transformation (mapping) and loads it into another table in another schema.


What I noticed is that when I start the job, the throughput is initially high (~1000 rows/sec) and then drops to around 600 rows/sec.
What could be the reason? I have enabled the row buffer and set it to 1024 KB. This is DataStage 7.5.1. I monitored the database side, and everything there seems normal.
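This is the sort of check I ran on the database side (a rough sketch only; the :ds_sid bind for the job's session is a placeholder):

Code: Select all

-- What is the DataStage session waiting on right now?
SELECT sid, event, state, seconds_in_wait
  FROM v$session_wait
 WHERE sid = :ds_sid;  -- :ds_sid = the job's Oracle session id (placeholder)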
Any help would be highly appreciated.
suresh.narasimha
Premium Member
Posts: 81
Joined: Mon Nov 21, 2005 4:17 am
Location: Sydney, Australia

Tuning DataStage

Post by suresh.narasimha »

Hi,

1. If you have a huge volume of data and there are no constraints on your target table, you can go for a bulk load stage.
2. You can use the Link Partitioner with multiple Transformers, depending on the number of CPUs on your server.
3. Select only the required columns at the source (see the example after this list).
4. Performance also depends on network traffic and the I/O load on the server.

So, take care of 1, 2 and 3; 4 is not in your hands.
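For point 3, the idea is simply an explicit column list instead of SELECT * (table and column names below are made up):

Code: Select all

-- Pull only the columns the job actually maps
SELECT cust_id, cust_name, order_dt   -- hypothetical columns
  FROM src_schema.orders;             -- hypothetical table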

Regards,
Suresh N
SURESH NARASIMHA
arun_nair24
Participant
Posts: 2
Joined: Mon Nov 20, 2006 10:52 pm
Location: India

Post by arun_nair24 »

Thanks for your reply, but I would not be able to use the bulk loader as the design here uses an ODBC source/destination (I can't change that) :(

Also, any idea why the rows/sec start out high and then drop off?
aakashahuja
Premium Member
Posts: 210
Joined: Wed Feb 16, 2005 7:17 am

Post by aakashahuja »

Most of the time, while loading to a table, DS starts off with a high row count and then it drops. One reason could be the filling of the log space. Modifying the commit interval could do the trick if this is the case; a quick way to check that theory is sketched below.

Apart from that, basic database tuning tips, if kept in mind, often yield good performance.

Also, you could modify the Array Size (and correspondingly the Transaction Size). Proper values for these also improve performance.
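A rough way to test the log-space theory is to watch how often the redo logs switch while the job is running (a sketch, assuming you can query v$log_history):

Code: Select all

-- Redo log switches per hour; a spike during the load points at log pressure
SELECT TRUNC(first_time, 'HH24') AS hr,
       COUNT(*)                  AS switches
  FROM v$log_history
 GROUP BY TRUNC(first_time, 'HH24')
 ORDER BY hr;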
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

A running DataStage job is simply another Oracle client application. It will either be doing inserts/updates, or will invoke sqlldr. Tune these as you would for any other client application. Beware that some DataStage developers will try to create 10 million row transactions, so make sure you have sufficient rollback segment space.
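For instance, a sketch of watching undo consumption per active transaction (assumes SELECT privilege on the v$ views):

Code: Select all

-- Undo blocks and records held by each active transaction
SELECT s.sid, s.username, t.used_ublk, t.used_urec
  FROM v$transaction t
  JOIN v$session s ON s.taddr = t.addr;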
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.