Deadlock detected while updating table

Posted: Mon Feb 13, 2017 6:56 pm
by satheesh_color
Hi All,

While updating a table which contains millions of rows, we face a deadlock issue. Even though we don't have duplicate values in the key column, we come across this issue.


Fatal Error: The connector failed to execute the statement:
UPDATE FACT SET AMOUNT1 = ?, AMOUNT2 = ?, AMOUNT3 = ?, AMOUNT4 = ? WHERE KEY = ?. The reported errors are:
[SQLSTATE 61000] java.sql.BatchUpdateException: ORA-00060: deadlock detected while waiting for resource
[SQLSTATE 61000] java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource


Fact table INITRANS=16 and MAXTRANS=255.

Kindly let me know your thoughts on the same.

Thanks & Regards,
Satheesh.R

Posted: Mon Feb 13, 2017 8:45 pm
by ray.wurlod
What level of locking (transaction isolation) are you using?

How are your data partitioned?

Posted: Mon Feb 13, 2017 10:57 pm
by satheesh_color
Hi Ray,

We are using hash partitioning.

Regards,
Satheesh.R

Posted: Tue Feb 14, 2017 11:01 am
by satheesh_color
Hi All,

We are using the JDBC Connector stage to connect to the target DB via the Oracle thin client, with Read Committed as the isolation level and hash partitioning. Still, the issue occurs when we try to update the table with 4 nodes.


Kindly let me know your thoughts on the same.


Regards,
Satheesh.R

Posted: Tue Feb 14, 2017 11:29 am
by chulett
Sounds to me like you're not hashing properly. What are your hash key(s)?
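[Editor's note] The point behind this question can be sketched in a few lines: if rows are hash partitioned on the update key, every occurrence of a given KEY value is routed to the same node, so no two nodes can ever try to update the same row. This is only an illustration; the hash function and node count below are assumptions, not DataStage's actual internals.

```python
# Illustrative sketch of key-based hash partitioning (NOT DataStage's
# internal hash). Partitioning on the update key routes every row with
# the same KEY value to the same node, so two nodes never contend for
# the same row lock.
import zlib

NUM_NODES = 4  # assumed degree of parallelism

def partition_for(key: str, num_nodes: int = NUM_NODES) -> int:
    """Deterministically map a key to a node via a hash of the key."""
    return zlib.crc32(key.encode("utf-8")) % num_nodes

rows = [("K1", 10.0), ("K2", 20.0), ("K1", 30.0), ("K3", 40.0)]

# Group rows by target node: duplicate keys always land together.
nodes = {n: [] for n in range(NUM_NODES)}
for key, amount in rows:
    nodes[partition_for(key)].append((key, amount))
```

If the stage is instead partitioned on a column other than the primary key being updated (or left on a round-robin style default), duplicate keys can be split across nodes, and two sessions can block each other on the same row.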

Posted: Tue Feb 14, 2017 1:42 pm
by Mike
I haven't touched an Oracle database in years, so don't know if my comments would apply.

When I see a high-volume update statement (what percentage of the FACT table are you updating and what is the total volume?), I would also be concerned about how many indexes exist on the table. You normally would partition data by a primary key/clustering index, but it is frequently the locks on secondary index pages and the lock escalation policy of the database that can cause deadlocks.

Can you afford to drop and rebuild secondary indexes?

Mike

Posted: Wed Feb 15, 2017 2:45 am
by ray.wurlod
I didn't ask about partitioning, I asked about locking.

Let me add an additional question about what array size and transaction size you are using.

Posted: Wed Feb 15, 2017 6:22 am
by satheesh_color
Hi All,

In the JDBC Connector we are using a RecordCount of 100 and a Session/BatchSize of 100 (the default value is 2000). Even though we have reduced it, the deadlock still occurs.

Note: We are trying to update 6 million records; foreign keys 1, 2, and 3 have indexes. We are updating the table based on the primary key.


When I run the job on a single node / sequentially, it runs fine.

Regards,
Satheesh.R
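[Editor's note] The fact that a single node runs cleanly while four nodes deadlock fits the classic pattern: two sessions update overlapping rows in different orders, and each ends up holding a row lock the other needs. Besides fixing the partitioning, a common mitigation is to sort each batch by key so all sessions acquire row locks in the same global order. A minimal sketch of that ordering idea (illustrative only, not a DataStage setting):

```python
# Illustrative sketch: if session A updates K3 then K1 while session B
# updates K1 then K3, each can hold a row lock the other is waiting on,
# producing ORA-00060. Sorting every batch by key gives all sessions one
# global lock-acquisition order, which makes such a cycle impossible.

def order_batch(batch):
    """Sort a batch of (key, *values) rows by key before executing."""
    return sorted(batch, key=lambda row: row[0])

session_a = [("K3", 1.0), ("K1", 2.0)]
session_b = [("K1", 5.0), ("K3", 6.0)]

# After ordering, both sessions touch K1 before K3: no lock cycle.
ordered_a = order_batch(session_a)
ordered_b = order_batch(session_b)
```

Note this only helps if overlapping keys are actually possible across sessions; with correct partitioning on the primary key there should be no overlap in the first place.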

Posted: Wed Feb 15, 2017 8:00 am
by chulett
ray.wurlod wrote:I didn't ask about partitioning, I asked about locking.
You asked about both, actually. :wink:
ray.wurlod wrote:What level of locking (transaction isolation) are you using?

How are your data partitioned?
And the fact that it runs fine on a single node still smells like a partitioning issue to me, the details of which are still unanswered. For whatever that is worth.

Posted: Wed Feb 15, 2017 8:31 am
by Mike
In addition to answering the multiple questions about how you are partitioning the data (by primary key?), how many rows are in your target table?

Updates are an expensive DML operation, so 6 million updates have a performance cost.

If your table size is 10 million, then you might benefit (as an example) by replacing the 6 million updates with a truncate/bulk load (which might have a lower overall performance cost). But if your table size is 100 million, then the 6 million updates might still be the lower cost alternative.

Right now your only working solution seems to be running 6 million updates serially (single node), and it does seem likely that there would be lower cost alternatives available to you.

Mike