Deadlock detected while updating table
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 182
- Joined: Thu Jun 16, 2005 2:05 am
Hi All,
While updating a table that contains millions of rows, we face a deadlock issue. Even though we don't have duplicate values on the key column, we still hit this error.
Fatal Error: The connector failed to execute the statement:
UPDATE FACT SET AMOUNT1 = ?, AMOUNT2 = ?, AMOUNT3 = ?, AMOUNT4 = ? WHERE KEY = ?. The reported errors are:
[SQLSTATE 61000] java.sql.BatchUpdateException: ORA-00060: deadlock detected while waiting for resource
[SQLSTATE 61000] java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource
The fact table is defined with INITRANS=16 and MAXTRANS=255.
Kindly let me know your thoughts on the same.
Thanks & Regards,
Satheesh.R
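A common cause of ORA-00060 in this scenario is parallel sessions updating overlapping sets of rows in different orders. A minimal Python sketch (illustrative only, not DataStage code) of the usual mitigation: sort every batch by the key column so that all sessions acquire row locks in the same global order. Sessions may then block each other briefly, but they cannot deadlock.

```python
def make_batches(rows, batch_size=100):
    """Split update rows into batches, ordered by key.

    When every session acquires row locks in the same global key order,
    one session can wait on another but a lock-wait cycle (deadlock)
    cannot form. `rows` is a list of
    (key, amount1, amount2, amount3, amount4) tuples.
    """
    ordered = sorted(rows, key=lambda r: r[0])  # global key order
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

rows = [(42, 1.0, 2.0, 3.0, 4.0),
        (7, 5.0, 6.0, 7.0, 8.0),
        (19, 0.1, 0.2, 0.3, 0.4)]
batches = make_batches(rows, batch_size=2)
```

Whether this helps in practice depends on whether the connector lets you control row order per partition; sorting on the key before the target stage is the usual way to approximate it.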
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
I haven't touched an Oracle database in years, so don't know if my comments would apply.
When I see a high-volume update statement (what percentage of the FACT table are you updating and what is the total volume?), I would also be concerned about how many indexes exist on the table. You normally would partition data by a primary key/clustering index, but it is frequently the locks on secondary index pages and the lock escalation policy of the database that can cause deadlocks.
Can you afford to drop and rebuild secondary indexes?
Mike
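To illustrate Mike's point: each update locks not just the row but also a block in every index whose columns are touched, and two sessions that acquire those locks in opposite orders form exactly the wait cycle Oracle reports as ORA-00060. A toy wait-for-graph check in Python (all names hypothetical) showing the pattern:

```python
def has_deadlock(holds, waits):
    """Detect a cycle in a wait-for graph.

    holds: {session: set of locks that session currently holds}
    waits: {session: the lock it is waiting to acquire}
    Session A waits on session B if B holds the lock A wants.
    """
    edges = {a: {b for b, held in holds.items()
                 if waits.get(a) in held and b != a}
             for a in waits}

    def cycle_from(start, node, seen):
        for nxt in edges.get(node, ()):
            if nxt == start or (nxt not in seen
                                and cycle_from(start, nxt, seen | {nxt})):
                return True
        return False

    return any(cycle_from(s, s, {s}) for s in edges)

# Session 1 locked row R1 and index block I1; session 2 locked I2 and R2.
holds = {1: {"R1", "I1"}, 2: {"I2", "R2"}}
# Each now needs a lock the other holds: the classic ORA-00060 pattern.
waits = {1: "I2", 2: "I1"}
```

With a single global acquisition order (or fewer secondary indexes in play), the opposite-order interleaving above cannot occur.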
Hi All,
In the JDBC connector we are using Record count = 100 and Session/Batch size = 100 (the default is 2000). Even though we have reduced it, it still causes a deadlock.
Note: We are trying to update 6 million records, and the foreign key columns (1, 2, 3) each have an index. We are updating the table based on the primary key.
When I run the job on a single node / sequentially, it runs fine.
Regards,
Satheesh.R
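Running clean on a single node but deadlocking in parallel suggests the current partitioning lets two nodes touch the same rows or the same index blocks. A sketch (assuming you can choose the partitioning) of hash-partitioning the update rows by the primary key, so each node owns a disjoint set of rows:

```python
def partition_by_key(rows, num_nodes):
    """Assign each update row to exactly one node by hashing its key.

    Rows with the same key always land on the same node, so no two
    nodes ever update the same row (disjoint row-lock footprints).
    `rows` is a list of tuples whose first element is the key.
    """
    parts = [[] for _ in range(num_nodes)]
    for row in rows:
        key = row[0]
        parts[hash(key) % num_nodes].append(row)
    return parts

rows = [(k, k * 1.0) for k in range(10)]
parts = partition_by_key(rows, 4)
```

Note that disjoint rows alone do not guarantee no deadlocks: contention on the three foreign-key index blocks can still cross partition boundaries, which is why Mike's question about secondary indexes matters.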
ray.wurlod wrote: "I didn't ask about partitioning, I asked about locking."
You ask about both, actually.
ray.wurlod wrote: "What level of locking (transaction isolation) are you using? How are your data partitioned?"
And the fact that it runs fine on a single node still smells like a partitioning issue to me, the details of which are still unanswered. For whatever that is worth.
-craig
"You can never have too many knives" -- Logan Nine Fingers
In addition to answering the multiple questions about how you are partitioning the data (by primary key?), how many rows are in your target table?
Updates are an expensive DML operation, so 6 million updates have a performance cost.
If your table size is 10 million, then you might benefit (as an example) by replacing the 6 million updates with a truncate/bulk load (which might have a lower overall performance cost). But if your table size is 100 million, then the 6 million updates might still be the lower cost alternative.
Right now your only working solution seems to be running 6 million updates serially (single node), and it does seem likely that there would be lower cost alternatives available to you.
Mike
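Mike's break-even argument can be put into a toy cost model. The per-row costs below are made-up illustrative numbers, not measurements (an indexed UPDATE is assumed several times more expensive per row than a direct-path bulk load); the point is only how the totals compare:

```python
def cheaper_strategy(table_rows, rows_to_change,
                     update_cost=5.0, bulk_load_cost=1.0):
    """Compare in-place updates against truncate + full reload.

    Costs are illustrative per-row units. A truncate/reload must
    rewrite every row in the table, while updates touch only the
    changed rows, but at a higher per-row cost.
    """
    update_total = rows_to_change * update_cost
    reload_total = table_rows * bulk_load_cost  # must reload every row
    return "update" if update_total <= reload_total else "truncate+reload"

# 6 million updates against a 10M-row table vs a 100M-row table:
small = cheaper_strategy(10_000_000, 6_000_000)
large = cheaper_strategy(100_000_000, 6_000_000)
```

With these assumed costs the model reproduces Mike's point: against a 10M-row table the truncate/bulk-load wins, while against a 100M-row table the 6 million updates remain the cheaper option.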