database table deadlock

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

abyss
Premium Member
Posts: 172
Joined: Thu May 22, 2014 12:43 am

database table deadlock

Post by abyss »

Hi all,
I think this is more of a database question than a DataStage question, but someone here may be able to help.

I have a job with two SQL Server stages that update the same table, and they may update the same row; when they do, a deadlock occurs. The two stages run very different queries against the same source, so they can't be merged into one stage.

My question: is there a way to make one stage wait until the other finishes before it starts? (I know it's unlikely...) Or do I have to put one of them into another job?

Thanks
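
To illustrate the problem and the kind of serialization being asked about, here is a rough sketch outside DataStage. Python and pyodbc, the connection string, and the table and column names are all assumptions for illustration only; the point is simply that if the first set of updates commits and releases its locks before the second set starts, the two sessions can never hold overlapping row locks at the same time.

Code: Select all

# Sketch only: Python + pyodbc against a hypothetical dim_customer table.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=user;PWD=secret"
)
cur = conn.cursor()

# Two streams of updates that overlap on key 102.
type1_updates = [("new_attr", 101), ("new_attr", 102)]
type2_updates = [("2024-01-01", 102), ("2024-01-01", 103)]

# Stream 1 runs and commits first, releasing its row locks ...
cur.executemany("UPDATE dim_customer SET attr = ? WHERE cust_key = ?",
                type1_updates)
conn.commit()

# ... and only then does stream 2 start, so the shared row (key 102)
# is never locked by two sessions at once and no deadlock can occur.
cur.executemany("UPDATE dim_customer SET end_date = ? WHERE cust_key = ?",
                type2_updates)
conn.commit()
conn.close()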
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Can you give some details on the job design, please? I'm unclear if there is one main processing stream that splits to two separate target stages or if there are two distinct processing streams in the same job.
-craig

"You can never have too many knives" -- Logan Nine Fingers
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

Isn't a deadlock only going to happen when you try to update/insert data into a common table?

Not sure why you would craft a job that has two stages updating the same table. How do you account for who has priority? Two updates to the same record are dangerous unless you set a prioritized event.

Why not have one upsert event at the end of your flow and ensure that you control which row of data is applied before the other?

Data stream #2 is concatenated to data stream #1, and then an upsert is done to the table.

?!?
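
Roughly, the single-upsert idea above might look like this outside DataStage. Again, Python and pyodbc plus the table and column names are assumptions for illustration; the point is that both streams are funnelled into one ordered list so only one session ever touches the table, and for a key present in both streams the value applied last wins.

Code: Select all

# Sketch only: funnel both streams, then one session applies a single
# ordered upsert per row (T-SQL MERGE against a hypothetical table).
import pyodbc

conn = pyodbc.connect("DSN=mydsn")   # hypothetical DSN
cur = conn.cursor()

stream1 = [(101, "A"), (102, "B")]   # e.g. the Type 1 changes
stream2 = [(102, "C"), (103, "D")]   # e.g. the Type 2 changes

# Concatenation order decides priority: for key 102, stream #2's
# value is applied last and therefore wins.
for cust_key, attr in stream1 + stream2:
    cur.execute(
        """
        MERGE dim_customer AS tgt
        USING (SELECT ? AS cust_key, ? AS attr) AS src
           ON tgt.cust_key = src.cust_key
        WHEN MATCHED THEN UPDATE SET attr = src.attr
        WHEN NOT MATCHED THEN
            INSERT (cust_key, attr) VALUES (src.cust_key, src.attr);
        """,
        cust_key, attr,
    )
conn.commit()
conn.close()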
abyss
Premium Member
Posts: 172
Joined: Thu May 22, 2014 12:43 am

Post by abyss »

This is my job; it populates a Type 6 table:
[Screenshot of the job design]

Those two SQL Server stages update the same table: one applies the Type 1 changes and the other the Type 2 changes.
I have a kind of illegitimate workaround: set the array size to 1 to avoid the problem. Considering the whole table only has a couple of thousand rows, the performance is acceptable...
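
For what it's worth, one plausible way to picture what the array size change does: with a larger array size the stage sends many rows per database call, so a single statement can acquire locks on several rows before it completes, and two stages batching over overlapping rows in different orders can each end up waiting on a row the other already holds. With array size 1, each call touches one row. A rough sketch of the difference follows; Python, pyodbc, and the table name are assumptions for illustration, not the stage's actual implementation.

Code: Select all

# Sketch only: batched vs. row-at-a-time updates against a
# hypothetical dim_customer table.
import pyodbc

conn = pyodbc.connect("DSN=mydsn")   # hypothetical DSN
cur = conn.cursor()
rows = [("A", 101), ("B", 102), ("C", 103)]

# Array size > 1: one call carries the whole batch, so one statement
# can take locks on several rows before it finishes.
cur.executemany("UPDATE dim_customer SET attr = ? WHERE cust_key = ?", rows)
conn.commit()

# Array size = 1: one row per call, so each statement's lock footprint
# is a single row.
for attr, cust_key in rows:
    cur.execute("UPDATE dim_customer SET attr = ? WHERE cust_key = ?",
                attr, cust_key)
conn.commit()
conn.close()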
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Not sure why you would consider that an "illegitimate workaround"; that was the first thing I was going to suggest before I ended up simply asking about the job design.
-craig

"You can never have too many knives" -- Logan Nine Fingers
abyss
Premium Member
Posts: 172
Joined: Thu May 22, 2014 12:43 am

Post by abyss »

Haha, awesome.

Then I can mark it as resolved. :D :D :D
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Well... any further conversation would have involved your data volumes, and since you've confirmed they are small and the design is working for you, then yes - sounds like a resolution to me. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

But the fundamental question of which update stream has priority is still important, IMHO.

But that's for abyss to decide.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Of course.
-craig

"You can never have too many knives" -- Logan Nine Fingers