Segmentation Fault with a core dump

Post questions here relating to DataStage Enterprise/PX Edition, covering areas such as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

vjreddy65
Participant
Posts: 7
Joined: Tue Apr 06, 2004 7:34 pm

Segmentation Fault with a core dump

Post by vjreddy65 »

All,
I have a custom operator being used in my job. I am getting the error "Contents of phantom output file => RT_SC524/OshExecuter.sh[16]: 65112 Segmentation fault", and soon after that the job core dumps. Does anyone have any idea what this is? Could it have anything to do with the custom operator I am using?

Any ideas are appreciated.

-vj
gh_amitava
Participant
Posts: 75
Joined: Tue May 13, 2003 4:14 am
Location: California
Contact:

Post by gh_amitava »

Hi,

Are you using a BASIC Transformer in your job design? Using one is not recommended in a PX environment.

Regards
Amitava
Eric
Participant
Posts: 254
Joined: Mon Sep 29, 2003 4:35 am

Re: Segmentation Fault with a core dump

Post by Eric »

This sounds like it is connected with the custom operator. Perhaps there is a row of data that is not formed correctly and the operator does not know how to handle it?
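
If the operator parses records itself, it may be worth adding a defensive check before trusting each row. Here is a minimal sketch of the idea in plain C++ (illustrative only, not actual DataStage operator code; process_record and the pipe-delimited layout are made-up assumptions):

    #include <cstddef>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Hypothetical per-record handler: split a '|'-delimited record
    // and use the field at index 3.
    static const std::size_t EXPECTED_FIELDS = 4;

    bool process_record(const std::string &record) {
        std::vector<std::string> fields;
        std::istringstream ss(record);
        std::string field;
        while (std::getline(ss, field, '|'))
            fields.push_back(field);

        // Without this guard, a short or corrupt row makes fields[3]
        // undefined behaviour, which typically surfaces exactly like
        // this: a segmentation fault deep inside the operator.
        if (fields.size() < EXPECTED_FIELDS) {
            std::cerr << "Rejecting malformed record: " << record << std::endl;
            return false;  // reject the row instead of crashing
        }

        std::cout << "field 3 = " << fields[3] << std::endl;
        return true;
    }

Logging the last record read before the crash, or running the suspect data through a check like this, usually pinpoints the bad row quickly.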
gleblanc
Participant
Posts: 1
Joined: Mon Jul 28, 2003 5:24 am

Contents of phantom output file

Post by gleblanc »

I have the same problem: a job that ran successfully yesterday is failing today.

The job design is:
Sequential File -> Transformer stage -> TDMLoadPX stage

resulting in the following messages:

Contents of phantom output file =>
RT_SC33/OshExecuter.sh[16]: 29445 Memory fault
Contents of phantom output file =>
DataStage Job 33 Phantom 29446
Parallel job reports failure (code 139)

Any ideas?
Thanks,
Gilles
leo_t_nice
Participant
Posts: 25
Joined: Thu Oct 02, 2003 8:57 am

Post by leo_t_nice »

Hi

How long did this job run for?

Did it fall over immediately, or did it process a number of rows first (and if so, how many)? I had a similar problem (though I forget the exact message, it did involve a segmentation violation) when using the TDMLoad stage. Incidentally, the "code 139" in your log is 128 + signal 11 (SIGSEGV), so your job really did die with a segmentation fault. In our case (PX 7.0.1, HP-UX) it was caused by an apparent memory leak. Try running "top" on your server and watch the memory used by your job. We could run until the memory used by the job reached around 800 MB, then bang!
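
If a stage (or a custom operator) leaks a little memory per row, the job runs fine until the process hits a size limit and dies. A trivial illustration of the pattern in plain C++ (made-up names, not TDMLoad's actual code):

    #include <cstring>
    #include <string>

    // Hypothetical per-record routine that leaks: each call allocates a
    // work buffer and never frees it, so memory grows with every row.
    void handle_record_leaky(const std::string &rec) {
        char *buf = new char[rec.size() + 1];
        std::strcpy(buf, rec.c_str());
        // ... use buf ...
        // missing: delete[] buf;  -- after enough rows the process
        // exhausts its memory limit and is killed mid-allocation
    }

    // Fixed version: a stack object owns the storage, so it is
    // released automatically when the call returns.
    void handle_record(const std::string &rec) {
        std::string buf(rec);
        // ... use buf ...
    }

Watching the process size in "top" while the job runs makes this pattern obvious: steady growth per row, then a crash at a consistent size.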

Our solution was to use a server shared container with the TDMLoad stage. The performance is pretty good and the job doesn't fail :)

Hope this helps