Selection Criteria for Datastage 7.0 or Datastage XE

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Mahesh
Participant
Posts: 3
Joined: Sun Sep 21, 2003 8:03 pm


Post by Mahesh »

Hi,

What are the possible scenarios in which I would decide to use DataStage 7.0 versus DataStage Parallel Extender?

Thanks in advance for all replies.

Thanks
Nat
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

Short answer:

PX for high-volume mass loads (high volume, to me, means a backfill or one-time load of 500+ million rows of source data)

XE for small- to medium-volume daily incrementals
kjanes
Participant
Posts: 144
Joined: Wed Nov 06, 2002 2:16 pm

Post by kjanes »

Another scenario may be new development (in general) vs. existing jobs.
Version 7 of DataStage PX offers some development improvements over standard DataStage. It could make sense to develop new jobs using PX, with the vision that PX will be more scalable over time as your business/data volumes grow.

If you already have an established base of jobs on DataStage, they will need to be re-written using the PX design palette if your objective is to run them under PX. Obviously, these conversions can range from easy to complex depending on the requirements.

You could make the decision as follows based on whether or not you have both tools:

New Development => PX
Existing Jobs => DataStage standard edition

There is no single best answer and possibly no single best tool (standard vs. PX) to meet all of your needs. It becomes a question of what fits your environment and who you have who can understand and code it. The design palette/process for PX differs considerably from standard DataStage.
Kevin Janes
kjanes
Participant
Posts: 144
Joined: Wed Nov 06, 2002 2:16 pm

Post by kjanes »

Here's another scenario that might help. Say you have a 12-hour batch window to accomplish certain processing, and you have tuned and tweaked all that you can in DataStage; it's as good as it's going to get. In the meantime, data volumes continue to grow but your window stays the same. You may have to convert jobs, or begin writing new ones under PX, to continue to meet your deadlines within that fixed batch window.

In our case, we're not talking about a few jobs in 12 hours; we're talking about hundreds of jobs in 12 hours, with data volumes continuing to grow.
Kevin Janes