I have a job which has 2 parallel streams of database DML:
Transformer ------> Update Table 1 (transaction handling = 0)
     |
     +------------> Insert Table 2 (transaction handling = 0)
I want the job not to do the commit (for database operations) unless both the database operations are completed.
If one fails, the other should not commit.
Can anyone tell me how we achieve this in a DataStage job?
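The behaviour being asked for is classic all-or-nothing transaction semantics: both DML operations share one transaction, and a single commit happens only after both succeed. Here is a minimal sketch of that semantics using Python's stdlib `sqlite3` (purely illustrative; table names are made up, and DataStage itself manages the real connections):

```python
# Illustrative sketch: two DML operations in ONE transaction on ONE
# connection, committed together or rolled back together.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO t1 VALUES (1, 'old')")
conn.commit()

try:
    # Both operations run inside the same open transaction...
    conn.execute("UPDATE t1 SET val = 'new' WHERE id = 1")
    conn.execute("INSERT INTO t2 VALUES (1, 'row')")
    conn.commit()       # ...and commit only when both have succeeded
except sqlite3.Error:
    conn.rollback()     # one failure undoes both operations
```

The key point is the single connection: as the discussion below notes, this is exactly what DataStage makes difficult when each link opens its own connection.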
Commit transactions only when both are successful
-
- Premium Member
- Posts: 103
- Joined: Tue Oct 14, 2003 4:07 am
Well, here's the party line...
As long as both links come from the same Transformer, you should see a new option on Transaction tab. In addition to Transaction Size, there should be one for Transaction Grouping. There you indicate what action you want to happen when one or the other of the actions fail.
Two things: 1) Not sure how well it generally works. 2) Not sure it's available in your version.
The other way to ensure this is to set your 'Abort job after X number of Warnings' down low - like to 1. If the job aborts neither link will commit. This can be set via Job Control.
-craig
"You can never have too many knives" -- Logan Nine Fingers
I never got it to work properly in DataStage; the problem seems to be that each "link" to a database is opened on a separate connection.
So if you put the transaction size to 1, your first link will already have committed before it even tries the second link.
Try it out yourself; it's been a while since version 4 when I last tried it. Nowadays we just implement our own "restarting", but this requires some more thought.
Ogmios
In theory there's no difference between theory and practice. In practice there is.
Each OCI stage is a separate connection; each link to the same OCI stage shares a single connection, therefore allowing for the limited transaction control previously described. It works, although slowly. Also, a "failed" Update statement is not necessarily the same thing as an Update statement that doesn't update anything.
Another possibility is to include, as a constraint of the second database link, the first's REJECTEDCODE. That way, if the first doesn't succeed, the second will not be exercised. All of this is available in version 5.x.
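The REJECTEDCODE idea amounts to gating the second operation on the outcome of the first, per row. A sketch of that gating logic in Python (the function name and the 0-means-success flag are illustrative stand-ins for the DataStage link variable, not real DataStage syntax):

```python
# Illustrative sketch: run the second DML only when the first succeeded,
# mimicking a REJECTEDCODE-style constraint on the second link.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY, val TEXT)")
conn.execute("INSERT INTO t1 VALUES (1, 'old')")

def update_t1(row_id, val):
    """First link: return 0 on success, 1 otherwise (like REJECTEDCODE).
    Note: rowcount == 0 conflates 'no matching row' with real failure,
    echoing the caveat above about updates that update nothing."""
    cur = conn.execute("UPDATE t1 SET val = ? WHERE id = ?", (val, row_id))
    return 0 if cur.rowcount == 1 else 1

for row_id, val in [(1, "new"), (99, "missing")]:
    rejected = update_t1(row_id, val)
    if rejected == 0:                  # constraint on the second link
        conn.execute("INSERT INTO t2 VALUES (?, ?)", (row_id, val))
conn.commit()
```

Only the row whose update succeeded reaches the second table; the row with no matching key is skipped.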
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
If I enable transaction grouping (the DataStage help says "A transaction group cannot use a Rows per transaction value other than 1."), then I will have to make the rows per transaction value 1. That means all the transactions before the current transaction have already been committed.
After a failure in the insert stream, I cannot roll back the insert and update streams completely to the state when the job started. I need to put restart logic in place to start from the point of failure.
Is there some other way than transaction grouping?
Thanks
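The restart logic mentioned above can be sketched simply: when rows are committed one at a time, record the last key successfully processed, and on rerun skip everything up to that checkpoint. A minimal illustration in Python (the checkpoint table and failure simulation are assumptions for the sketch, not a DataStage feature):

```python
# Illustrative sketch: restart from the point of failure when
# rows-per-transaction = 1, using a checkpoint of the last committed key.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE checkpoint (last_id INTEGER)")
conn.execute("INSERT INTO checkpoint VALUES (0)")

def run(rows, fail_at=None):
    last = conn.execute("SELECT last_id FROM checkpoint").fetchone()[0]
    for row_id in rows:
        if row_id <= last:
            continue                 # already committed on a prior run
        if row_id == fail_at:
            return "aborted"         # simulated mid-job failure
        conn.execute("INSERT INTO target VALUES (?)", (row_id,))
        conn.execute("UPDATE checkpoint SET last_id = ?", (row_id,))
        conn.commit()                # rows per transaction = 1
    return "finished"

run([1, 2, 3, 4], fail_at=3)         # first run dies at row 3
status = run([1, 2, 3, 4])           # rerun resumes after row 2
```

The rerun skips the rows already committed and finishes the remainder, which is the "start from the point of failure" behaviour described above.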
Good point. Also, it appears you want transaction control at the job level, not the row level. You'll have to set the commit rows value to 0. You'll need the REJECTEDCODE constraint for all links to the database. If any of these constraints evaluates to true, you'll have to abort the job. This will roll everything back and commit nothing. That's the way we do it for some of our MQ-Oracle jobs. Kind of a pain if you abort a lot. Perhaps you could write the bad rows to a file and continue with the rest?
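The job-level approach described here can be sketched as follows: with commit rows set to 0, nothing is committed until the whole job finishes, and any bad row aborts the job and rolls everything back. Illustrative Python, not DataStage syntax (the duplicate-key row stands in for any failing row):

```python
# Illustrative sketch of job-level transaction control: a single commit
# at end of job; any failure aborts and rolls back everything.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
conn.commit()

def run_job(rows):
    try:
        for row_id, val in rows:
            conn.execute("INSERT INTO target VALUES (?, ?)", (row_id, val))
        conn.commit()            # commit rows = 0: one commit at job end
        return "finished"
    except sqlite3.Error:
        conn.rollback()          # abort: nothing at all is committed
        return "aborted"

# The duplicate key mid-stream aborts the job; rows 1 and 2 roll back too.
status = run_job([(1, "a"), (2, "b"), (1, "dup")])
```

This is the all-or-nothing behaviour the original question asked for, at the cost of redoing the whole job after an abort, hence the suggestion to divert bad rows to a file instead.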