locking table when update and insert into the same table

yimwai
Participant
Posts: 46
Joined: Thu Jul 01, 2010 9:34 pm

locking table when update and insert into the same table

Post by yimwai »

How do I avoid locking the table when updating and inserting into one table at the same time?
The result data should be divided into two parts (the two parts have no shared data):
insert one part into the table, and update the other part in the same table.
I used a Transformer stage to get the result data and divide it into two parts through two links to DB2 Enterprise stages,
but then the table is locked and the job keeps running forever...
How do I handle this situation?
Can I make the DB2 Enterprise stages (from the one Transformer) run in order, or are there other good ideas?
ETLJOB
Participant
Posts: 87
Joined: Thu May 01, 2008 1:15 pm
Location: INDIA

Post by ETLJOB »

Do you see any reason why the "UPSERT" option would not work in your case?

Note: Post your questions in the appropriate forum.
HariK
Participant
Posts: 68
Joined: Thu May 17, 2007 1:50 am

Post by HariK »

What write methods are you using in the target stages?
yimwai
Participant
Posts: 46
Joined: Thu Jul 01, 2010 9:34 pm

Post by yimwai »

ETLJOB wrote: Do you see any reason why the "UPSERT" option would not work in your case?

Note: Post your questions in the appropriate forum.
Oh, that may be a good solution; I have never used upsert.
Do you mean that I can divide the result data by checking the input data in the WHERE clause of the user-defined SQL?
Such as "WHERE ORCHESTRATE.DIF_COD=1"?
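For example, something like the following two statements (just a sketch of what I have in mind; TGT_TBL, CUST_ID, AMT and DIF_COD are made-up names standing in for my real table and columns):

    -- hypothetical insert statement of the upsert
    INSERT INTO TGT_TBL (CUST_ID, AMT)
    VALUES (ORCHESTRATE.CUST_ID, ORCHESTRATE.AMT)

    -- hypothetical update statement of the upsert;
    -- the extra condition only lets input rows with DIF_COD = 1 perform the update
    UPDATE TGT_TBL
    SET AMT = ORCHESTRATE.AMT
    WHERE CUST_ID = ORCHESTRATE.CUST_ID
      AND ORCHESTRATE.DIF_COD = 1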
yimwai
Participant
Posts: 46
Joined: Thu Jul 01, 2010 9:34 pm

Post by yimwai »

I'm sorry, I think I described the situation incorrectly. In fact, the two parts of the result data do share data,
and the upsert only does one action per record. I also don't know how to divide the records in the DB2 Enterprise stage.
kwwilliams
Participant
Posts: 437
Joined: Fri Oct 21, 2005 10:00 pm

Post by kwwilliams »

You can't bulletproof yourself against locking the table when performing two simultaneous bulk actions at the same time. This is for many reasons. The database is potentially going to try to write an update and an insert into the same pages at the same time. This would be especially true if you have a clustered index on the table.

The indexes are being updated at the same time you are performing the inserts or updates, and that contention gets worse if you happen to be updating a key column (which shouldn't happen often, but I have seen it).

Finally, DB2 will begin to escalate the locks all the way to the table level depending upon how many locks are being held (from row to block and then all the way to the table level). Your DBA should be able to help you understand the locking that is occurring while your job is running. And regardless of what they tell you, they can change some parameters to allow more locks in the database before a lock is escalated. If they tell you otherwise, google "db2 lock escalation" and you should find some useful information for your confrontation ... I mean conversation. :wink:
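If you want to see it for yourself, something along these lines from the DB2 command line should do it (a sketch only; MYDB and the numbers are placeholders, so have your DBA pick the real values):

    # show the current lock-related settings (LOCKLIST is in 4 KB pages, MAXLOCKS is a percentage)
    db2 get db cfg for MYDB

    # allow more locks to be held before DB2 escalates row locks toward a table lock
    db2 update db cfg for MYDB using LOCKLIST 8192 MAXLOCKS 60

    # watch the locks being held while your job is running
    db2pd -db MYDB -locks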

In the end, I would drop one of the two links to a dataset and then process the actions sequentially. It's probably not worth the effort to get this working as it is, and there is no guarantee that it won't revert to its previous behavior in the future.
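For example (a sketch; the project and job names here are made up), a sequence or a small script can run the two passes in order with the dsjob command line:

    # run the job that does the inserts and wait for it to finish
    dsjob -run -jobstatus MYPROJECT LoadInserts

    # then run the job that reads the landed dataset and applies the updates
    dsjob -run -jobstatus MYPROJECT ApplyUpdates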