Update to Oracle problem

Archive of postings to DataStageUsers@Oliver.com. This forum is intended only as a reference and cannot be posted to.


admin
Posts: 8720
Joined: Sun Jan 12, 2003 11:26 pm

Update to Oracle problem

Post by admin »

Hi DataStage folks,

I have a job design that reads from an Oracle table called
"Customer_Data" and inserts the records into a Unidata table. After a
record has been inserted into the Unidata table, the job updates a
flag field in the source Oracle table, "Customer_Data", to indicate
that the record has been fetched and inserted into the Unidata table.
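
To make the flow concrete, here is a rough sketch of what the job does
for each record. The column names (cust_id, cust_name, fetched_flag)
and the Python/cx_Oracle calls are only illustrative; the real job is
built from DataStage stages, and the Unidata insert is reduced to a
stub here:

    import cx_Oracle  # illustrative driver; the real job uses DataStage stages

    conn = cx_Oracle.connect("user/password@oracle_host/service")  # placeholder DSN
    cur = conn.cursor()

    def insert_into_unidata(cust_id, cust_name):
        # stand-in for the Unidata insert stage; details omitted
        pass

    # Pick up source records that have not been transferred yet.
    cur.execute("SELECT cust_id, cust_name FROM Customer_Data WHERE fetched_flag = 'N'")

    for cust_id, cust_name in cur.fetchall():
        insert_into_unidata(cust_id, cust_name)

        # Mark the source row as fetched.
        upd = conn.cursor()
        upd.execute(
            "UPDATE Customer_Data SET fetched_flag = 'Y' WHERE cust_id = :1",
            [cust_id],
        )
        conn.commit()  # transaction size 1: one commit per record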

My problem is that, because of the update-to-Oracle stage, the job
takes more than 30 minutes to finish 2000 records if I set the array
size to 1 and the transaction size to 1.

If I set the array size to 3000 and the transaction size to 6000, the
job takes only about 5 minutes to finish.

Of course the second option, array size 3000 and transaction size
6000, gives much better performance.
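
That makes sense to me: transaction size 1 means one round trip and one
commit per record, i.e. 2000 commits for 2000 records, while with array
size 3000 and transaction size 6000 the whole 2000-record run fits in a
single batch and a single commit. A sketch of what those settings
amount to, using the same illustrative names as above:

    ARRAY_SIZE = 3000        # rows sent to Oracle per round trip
    TRANSACTION_SIZE = 6000  # rows per commit

    def chunks(seq, n):
        # split a list into consecutive slices of at most n items
        for i in range(0, len(seq), n):
            yield seq[i:i + n]

    fetched_ids = []  # in the real job: ids of records just written to Unidata

    pending = 0
    for batch in chunks(fetched_ids, ARRAY_SIZE):
        # one executemany call = one round trip for up to 3000 rows
        cur.executemany(
            "UPDATE Customer_Data SET fetched_flag = 'Y' WHERE cust_id = :1",
            [[i] for i in batch],
        )
        pending += len(batch)
        if pending >= TRANSACTION_SIZE:
            conn.commit()  # never reached in a 2000-record run
            pending = 0
    conn.commit()  # the whole run commits here, once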

However, my concern is that when a record is inserted into the Unidata
table, the corresponding flag update is not committed to the Oracle
table immediately if I use the settings of array size 3000 and
transaction size 6000. If the job fails after, say, 100 records have
been inserted into Unidata, those 100 flag updates may not have been
committed to the Oracle table, because the job normally waits for the
record count to reach 6000 before it commits the transaction. The
records that were inserted into Unidata would then be out of sync with
the Oracle table, where the job is supposed to update the flag field
to show that each record has been fetched. Therefore, I am concerned
about data integrity.
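
To spell out the failure window with those settings (same illustrative
names as in the sketches above; suppose the job dies after 100
records):

    try:
        for cust_id, cust_name in cur.fetchall():
            insert_into_unidata(cust_id, cust_name)  # durable in Unidata at once
            upd = conn.cursor()
            upd.execute(
                "UPDATE Customer_Data SET fetched_flag = 'Y' WHERE cust_id = :1",
                [cust_id],
            )
            # no commit here: the flags wait for 6000 rows to accumulate
    except Exception:
        conn.rollback()
        # Unidata keeps every row already inserted, but the uncommitted
        # flag updates are undone: fetched_flag is 'N' again for those
        # rows, so the next 15-minute run fetches and inserts them again.
        raise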

So I am in a dilemma between performance and data consistency. This
job is supposed to execute at 15-minute intervals.

Could anybody give me a suggestion on how to handle this problem?


Regards,
Melvin