Deadlock detected while updating table

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

satheesh_color
Participant
Posts: 182
Joined: Thu Jun 16, 2005 2:05 am

Deadlock detected while updating table

Post by satheesh_color »

Hi All,

While updating a table that contains millions of rows, we are running into a deadlock. Even though we don't have duplicate values on the key column, we still hit this issue.


Fatal Error: The connector failed to execute the statement:
UPDATE FACT SET AMOUNT1 = ?, AMOUNT2 = ?, AMOUNT3 = ?, AMOUNT4 = ? WHERE KEY = ?. The reported errors are:
[SQLSTATE 61000] java.sql.BatchUpdateException: ORA-00060: deadlock detected while waiting for resource
[SQLSTATE 61000] java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource


The FACT table is created with INITRANS=16 and MAXTRANS=255.
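
In case it is relevant: whether the contention is ITL-related (as opposed to a genuine row-lock cycle) can be checked with a query along these lines, assuming access to the V$ views:

SELECT object_name, statistic_name, value
FROM v$segment_statistics
WHERE object_name = 'FACT'
AND statistic_name = 'ITL waits';  -- a non-zero VALUE points at ITL contention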

Kindly let me know your thoughts on the same.

Thanks & Regards,
Satheesh.R
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

What level of locking (transaction isolation) are you using?

How are your data partitioned?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
satheesh_color
Participant
Posts: 182
Joined: Thu Jun 16, 2005 2:05 am

Post by satheesh_color »

Hi Ray,

We are using Hash partitioning.

Regards,
Satheesh.R
satheesh_color
Participant
Posts: 182
Joined: Thu Jun 16, 2005 2:05 am

Post by satheesh_color »

Hi All,

We are using the JDBC Connector stage to connect to the target DB via the Oracle thin driver, with Read Committed as the isolation level and Hash partitioning. The issue still occurs when we try to update the table on 4 nodes.


Kindly let me know your thoughts on the same.


Regards,
Satheesh.R
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Sounds to me like you're not hashing properly. What are your hash key(s)?
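
If the rows are hashed on something other than the column(s) in your WHERE clause, two nodes can end up updating the same row, and this is exactly what that looks like. As a sanity check, something like the following (table name is a placeholder for your own) confirms whether the update key really is unique:

SELECT key, COUNT(*)
FROM update_source  -- placeholder: wherever the update rows come from
GROUP BY key
HAVING COUNT(*) > 1;  -- any rows back means duplicate keys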
-craig

"You can never have too many knives" -- Logan Nine Fingers
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

I haven't touched an Oracle database in years, so I don't know whether my comments still apply.

When I see a high-volume update statement (what percentage of the FACT table are you updating and what is the total volume?), I would also be concerned about how many indexes exist on the table. You normally would partition data by a primary key/clustering index, but it is frequently the locks on secondary index pages and the lock escalation policy of the database that can cause deadlocks.

Can you afford to drop and rebuild secondary indexes?
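
If so, the pattern would be roughly as follows; the index and column names are made up for illustration:

-- drop the secondary indexes before the mass update
DROP INDEX fact_ix1;
DROP INDEX fact_ix2;

-- ... run the 6 million updates here ...

-- rebuild afterwards
CREATE INDEX fact_ix1 ON fact (col_a);
CREATE INDEX fact_ix2 ON fact (col_b);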

Mike
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

I didn't ask about partitioning, I asked about locking.

Let me add an additional question about what array size and transaction size you are using.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
satheesh_color
Participant
Posts: 182
Joined: Thu Jun 16, 2005 2:05 am

Post by satheesh_color »

Hi All,

In the JDBC Connector we are using a Record Count of 100 and a Session/Batch Size of 100 (the default value is 2000). Even though we have reduced them, the deadlock still occurs.

Note: We are trying to update 6 million records, and the three foreign key columns each have an index. We are updating the table based on the primary key.
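
For reference, the index layout can be confirmed from the dictionary with something like this (assuming the table is owned by the connecting schema):

SELECT index_name, column_name, column_position
FROM user_ind_columns
WHERE table_name = 'FACT'
ORDER BY index_name, column_position;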


When I run the job on a single node (sequentially), it runs fine.

Regards,
Satheesh.R
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

ray.wurlod wrote:I didn't ask about partitioning, I asked about locking.
You asked about both, actually. :wink:
ray.wurlod wrote:What level of locking (transaction isolation) are you using?

How are your data partitioned?
And the fact that it runs fine on a single node still smells like a partitioning issue to me, the details of which are still unanswered. For whatever that is worth.
-craig

"You can never have too many knives" -- Logan Nine Fingers
Mike
Premium Member
Posts: 1021
Joined: Sun Mar 03, 2002 6:01 pm
Location: Tampa, FL

Post by Mike »

In addition to answering the multiple questions about how you are partitioning the data (by primary key?), how many rows are in your target table?

Updates are an expensive DML operation, so 6 million updates have a performance cost.

If your table size is 10 million, then you might benefit (as an example) by replacing the 6 million updates with a truncate/bulk load (which might have a lower overall performance cost). But if your table size is 100 million, then the 6 million updates might still be the lower cost alternative.
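
As a rough sketch of that alternative (table names are purely illustrative, and the NVL approach assumes the new values are never NULL):

-- 1. stage the replacement image of the table: current rows overlaid with the updates
CREATE TABLE fact_stage AS
SELECT f.key,
       NVL(u.amount1, f.amount1) AS amount1,
       NVL(u.amount2, f.amount2) AS amount2,
       NVL(u.amount3, f.amount3) AS amount3,
       NVL(u.amount4, f.amount4) AS amount4
FROM fact f
LEFT JOIN updates u ON u.key = f.key;  -- "updates" = wherever the 6 million new values live

-- 2. truncate and bulk (direct-path) load
TRUNCATE TABLE fact;
INSERT /*+ APPEND */ INTO fact SELECT * FROM fact_stage;
COMMIT;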

Right now your only working solution seems to be running 6 million updates serially (single node), and it does seem likely that there would be lower cost alternatives available to you.

Mike