Parameterizing table structures.

Post questions here related to DataStage Enterprise/PX Edition in such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

basu.ds
Participant
Posts: 118
Joined: Tue Feb 06, 2007 12:59 am
Location: Bangalore

Post by basu.ds »

Use schema files and parameterise the job.
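A schema file is just a plain text file in the Orchestrate schema format; a minimal sketch (the column names and types below are purely illustrative):

record
(
  // illustrative layout for one of the tables
  CUST_ID: int32;
  CUST_NAME: nullable string[max=50];
  CREATED_DT: nullable date;
)

Point the Sequential File stage's Schema File property at a file like this through a job parameter, and the job picks up the layout at run time instead of from the design-time column grid.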
jaysheel
Participant
Posts: 57
Joined: Mon Apr 07, 2008 1:54 am
Location: Bangalore

Post by jaysheel »

Thanks for the reply.

Could you please elaborate on that?

Thanks
- Jaysheel -
jaysheel
Participant
Posts: 57
Joined: Mon Apr 07, 2008 1:54 am
Location: Bangalore

Post by jaysheel »

Can anyone suggest something on this?
Ray, expecting a reply from you :)
- Jaysheel -
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Ten separate jobs, each using correct metadata, would be my preferred approach - easier to maintain when things change.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jaysheel
Participant
Posts: 57
Joined: Mon Apr 07, 2008 1:54 am
Location: Bangalore

Post by jaysheel »

That's right, Ray. But here the challenge is to reuse the job for 10 different tables. We are researching this. If reusability is possible, could you tell me how to approach it?


Thanks,
- Jaysheel -
mdbatra
Premium Member
Posts: 175
Joined: Wed Oct 22, 2008 10:01 am
Location: City of London

Post by mdbatra »

I think basu.ds has already provided you the solution:
use a schema file to pass the metadata, and supply parameters for the input file, target table, and schema file.
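For example (all names below are just illustrative), the stage properties could simply reference job parameters:

Sequential File stage: File = #pSourceDir#/#pSourceFile#
Sequential File stage: Schema File = #pSchemaDir#/#pSchemaFile#
Target DBMS stage: Table = #pTargetTable#

Each of the 10 tables then becomes just a different set of parameter values supplied at run time.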
Rgds,
MB
Karthi_sk
Participant
Posts: 5
Joined: Tue Jun 08, 2004 11:52 pm

Post by Karthi_sk »

Hi,
I am also having a similar issue.
I tried using the schema file as a parameter for the Sequential File stage (which is the source), and it works perfectly.
The problem starts when I try to parameterise the target table, as I don't see any option to use a schema file in any of the DBMS stages.
That means I am able to parameterise the source file name, source file definition, and the target table name, but not the target table definition, and under these conditions the metadata for my source and target do not match.
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

Ray's preferred approach would also be my preferred approach.

Maintainability is an important factor. Having the metadata for impact analysis and data lineage is extremely important.

It's true that Ab Initio can do this easily with a parameterized graph, and it would be fine if Ab Initio developers took the time to make sure that data lineage and impact analysis were not broken. Instead, they generally don't take care of the metadata, which is one reason (in my opinion) that Ab Initio's metadata solution is so weak.

Mike
mdbatra
Premium Member
Posts: 175
Joined: Wed Oct 22, 2008 10:01 am
Location: City of London

Post by mdbatra »

Karthi...
With RCP (Runtime Column Propagation) enabled, and with parameters provided for the input data file, schema file, and target table name, there is no need to worry about the
target table definition. It will get populated at run time.
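A minimal sketch of that setup, assuming a Sequential File source and a DBMS target (parameter names are only illustrative):

1. Job properties: tick "Enable Runtime Column Propagation for new links" (RCP must also be enabled for parallel jobs at the project level in the Administrator).
2. Sequential File stage: File = #pSourceFile#, Schema File = #pSchemaFile#; define no columns on the output link and leave RCP on for that link.
3. Target DBMS stage: Table = #pTargetTable#; the columns propagated from the schema file arrive at run time.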
Rgds,
MB
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

Does anyone know what MDB's suggestion does to impact analysis and data lineage? Is it broken? Can it be easily fixed? If it is broken and takes anything more than trivial developer effort to fix, then I personally will avoid this technique. I place much greater value on metadata and maintainability than I do on a minor productivity gain.

Mike
mdbatra
Premium Member
Posts: 175
Joined: Wed Oct 22, 2008 10:01 am
Location: City of London

Post by mdbatra »

Mike..
With all due respect, I fully agree with the approach of making 10 different jobs for the sake of impact analysis and data lineage.
But perhaps my post was misread: it was just to let Karthi, who was trying to achieve reusability (in a single job), know how to do that irrespective of the maintenance factor.

And Karthi, I trust you have got what you need to do :D
Rgds,
MB
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

MDB,

To the contrary, I really appreciate the suggestion that you offered. For my own education, I just want to understand the pros and cons of different design alternatives. I don't have a good feel for the current metadata capabilities in Information Server and haven't done anything with metadata since my return to the DataStage world. I suspect the "utility" approach would break data lineage and impact analysis in IIS as well... just want to confirm that suspicion and see if it is "repairable".

Mike
mdbatra
Premium Member
Posts: 175
Joined: Wed Oct 22, 2008 10:01 am
Location: City of London

Post by mdbatra »

There is no denying that the "utility" approach will surely hurt the maintainability of the metadata. Also, despite my short time with DataStage (2 years), I have never seen a shop give priority to the utility approach; it has always been maintainability that won.
Rgds,
MB
mdbatra
Premium Member
Posts: 175
Joined: Wed Oct 22, 2008 10:01 am
Location: City of London

Post by mdbatra »

Also, regarding repairability: in our DW/BI arena, we do not mind doing a little extra, more sophisticated work, but we do mind reworking, in fact a lot. That's what I have learnt.
People may differ!
Rgds,
MB