Suggest better design

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

svhari76
Participant
Posts: 48
Joined: Tue Nov 16, 2010 5:04 pm

Suggest better design

Post by svhari76 »

I am trying to design a job which does the following.

1) Read a source flat file (having approx. 1-2M records) and separate the records into two files (file1, file2) based on category.

2) The key columns are the same in both files. Load the file1 records into table1.

File2 contains the key columns plus comma-delimited fields (a variable number of fields, up to a maximum of 20).
Now I have to convert each of those columns (other than the key columns) into a row and
then load the data into table2 (a rough sketch of this pivot follows below).

The table1-to-table2 relationship is 1 to 0..* (zero to many).
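
To picture the column-to-row conversion, here is an illustrative Python sketch only, not DataStage code; the sample data and the assumption that the first two columns are the keys are mine:

Code:

import csv, io

# Stand-in for file2: two key columns followed by a variable number of fields.
file2 = io.StringIO("K1,A,red,green\nK2,B,blue\n")

out = []
for row in csv.reader(file2):
    key1, key2 = row[0], row[1]   # assumed: first two columns are the key columns
    for value in row[2:]:         # variable number of fields, max 20
        if value:                 # ignore empty trailing fields
            out.append((key1, key2, value))
print(out)  # one (key1, key2, value) row per original field -> loaded to table2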


What am I trying to do?

Since I have a large volume of data in the database, I don't want to look up against the database at runtime or load huge amounts of data into a hashed file;
that way I can reduce the loading and lookup time.

Load the data into table1 first and, if there are any failures (e.g. duplicates), write them to a hashed file FILE-A, i.e. load all master records first.

Then read each file2 record, look it up in FILE-A, and if not found, insert the record into table2 (detail records).
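
A minimal sketch of the reject-capture step, with sqlite3 standing in for the target database and a Python dict standing in for hashed file FILE-A (table and column names are my assumptions):

Code:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (key1 TEXT PRIMARY KEY, payload TEXT)")

rejected = {}                                       # plays the role of hashed file FILE-A
masters = [("K1", "a"), ("K1", "b"), ("K2", "c")]   # sample file1 (master) records
for key1, payload in masters:
    try:
        conn.execute("INSERT INTO table1 VALUES (?, ?)", (key1, payload))
    except sqlite3.IntegrityError:                  # e.g. duplicate key rejected
        rejected[key1] = True                       # remember the failed master key
conn.commit()
print(rejected)  # {'K1': True} -- the duplicate master, used later to filter details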



Is this the right way of loading the data? Am I supposed to load the master record and its corresponding details at once?
Hari
manoj_23sakthi
Participant
Posts: 47
Joined: Tue Feb 23, 2010 12:16 am
Location: CHENNAI

Re: Suggest better design

Post by manoj_23sakthi »

Hi,
I think we have met this scenario. Instead of converting columns into rows and transforming:
take <F1> and <F2>;
in <F2>, use a Transformer and split the records into different categories
(for example, 10 categories);
join all 10 categories with <F1>
and you can get the result.
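
If I understand the suggestion, it amounts to something like this rough Python illustration (not DataStage code; the record layouts are assumed):

Code:

from collections import defaultdict

f1_keys = {"K1", "K2"}                                      # keys taken from <F1>
f2 = [("K1", "cat01", "x"), ("K2", "cat02", "y"), ("K3", "cat01", "z")]

by_category = defaultdict(list)
for record in f2:                                           # split <F2> into categories
    by_category[record[1]].append(record)

for category, records in sorted(by_category.items()):
    matched = [r for r in records if r[0] in f1_keys]       # "join" each category with <F1>
    print(category, matched)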

Regards,
Manoj
Abhijeet1980
Participant
Posts: 81
Joined: Tue Aug 15, 2006 8:31 am
Location: Zürich

Post by Abhijeet1980 »

Hari,

I request you to rephrase your query and also support it with the WHY.

Some of our colleagues at DSXChange may then suggest a better design to you.

Please don't explain what stages you have used or intend to use; let the readers decide that for you.

I hope you get a quick answer.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

In terms of the original poster's final question, you can control a "grouped transaction" in an ODBC stage. As to the rest, please provide an English description of what you are trying to achieve.
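
For illustration only (sqlite3 standing in for the real ODBC target), grouping means the master row and its detail rows commit or roll back as one unit:

Code:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table1 (key1 TEXT PRIMARY KEY);
CREATE TABLE table2 (key1 TEXT, val TEXT);
""")
try:
    with conn:  # one transaction covering the whole master/detail group
        conn.execute("INSERT INTO table1 VALUES ('K1')")
        conn.execute("INSERT INTO table2 VALUES ('K1', 'v1')")
        conn.execute("INSERT INTO table2 VALUES ('K1', 'v2')")
except sqlite3.IntegrityError:
    pass  # on any failure the whole group rolls back; no partial load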
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
svhari76
Participant
Posts: 48
Joined: Tue Nov 16, 2010 5:04 pm

Post by svhari76 »

Thanks for the responses.

Please ignore my post; I have implemented the easiest possible solution.
Hari
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It would be courteous were you to share them with us.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
svhari76
Participant
Posts: 48
Joined: Tue Nov 16, 2010 5:04 pm

Post by svhari76 »

I insert the records into the parent table first and collect the rejected records into a hashed file. While inserting the child table records, I look up the hashed file for rejected records, and the corresponding child records are eliminated from the insertion.

Moreover, the parent and child records are not in a single transaction.
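
In Python terms the child-side filter is just this (a dict again standing in for the hashed file; the sample data is assumed):

Code:

rejected = {"K1": True}                   # built while loading the parent table

details = [("K1", "v1"), ("K2", "v2")]    # sample file2 rows after the pivot
to_insert = [row for row in details if row[0] not in rejected]
print(to_insert)  # [('K2', 'v2')] -- only children of loaded masters reach table2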
Hari