Heap Size Limit Environment variable

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

bensonian
Participant
Posts: 42
Joined: Tue Nov 22, 2005 2:12 pm

Heap Size Limit Environment variable

Post by bensonian »

Is there an environment variable we can set at the project level to increase the 'heap size limit'? The error message is below:

The current soft limit on the data segment (heap) size (1610612736) is less than the hard limit (2147483647), consider increasing the heap size limit

Any help would be appreciated.
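For reference, the numbers in that warning are in bytes: the soft limit 1610612736 works out to 1.5 GiB and the hard limit 2147483647 is one byte short of 2 GiB (the 32-bit signed maximum). A quick shell sketch to confirm the arithmetic and to inspect the shell's own data-segment limits (note that `ulimit -d` reports kilobytes on most platforms):

```shell
# The values in the PX warning are bytes:
# soft 1610612736 = 1536 MiB = 1.5 GiB; hard 2147483647 = 2 GiB - 1 byte.
echo "soft: $((1610612736 / 1024 / 1024)) MiB"
echo "hard: $(((2147483647 + 1) / 1024 / 1024)) MiB"

# Current data-segment (heap) limits for this shell, in kilobytes:
ulimit -Sd   # soft limit
ulimit -Hd   # hard limit
```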
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That's controlled by the operating system via the ulimit function. Have a chat with your SysAdmin and see if they're willing to bump that up for your user.
-craig

"You can never have too many knives" -- Logan Nine Fingers
bensonian
Participant
Posts: 42
Joined: Tue Nov 22, 2005 2:12 pm

Post by bensonian »

chulett wrote:That's controlled by the operating system via the ulimit function. Have a chat with your SysAdmin and see if they're willing to bump that up for your user. ...
Thanks for the quick response. However, the current ulimit is set to unlimited. I believe this error is more specific to DataStage and is about some environment variable we are not too sure about.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

How are you checking? You can't use "the same user" and check from the command line, you need to check from a job's environment. Take any job and add an "ExecSH" before-job call to "ulimit -a" and see what it logs... unless you've already done that?
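As a sketch of the check being suggested: in the job properties, set the Before-job subroutine to ExecSH with the input value `ulimit -a`, and the command's output appears in the job log. Running the same command in a script as the job's user approximates it (the exact `-a` output format varies by platform):

```shell
# What the ExecSH before-job call would run; in DataStage the output
# lands in the job log rather than on a terminal.
ulimit -a

# The data-segment line is the one the PX heap warning refers to.
ulimit -a | grep -i data
```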
-craig

"You can never have too many knives" -- Logan Nine Fingers
bensonian
Participant
Posts: 42
Joined: Tue Nov 22, 2005 2:12 pm

Post by bensonian »

chulett wrote:How are you checking? You can't use "the same user" and check from the command line, you need to check from a job's environment. Take any job and add an "ExecSH" before-job call to "ulimit -a" and s ...
Actually, the abort we are encountering is in the 'PROD' environment. A 'batch id' is currently used to run these jobs. Running as a user other than the 'batch id', these are the results after executing 'ulimit -a':

ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 32768
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) 2048
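If the job's environment turns out to have a lower soft data limit than the interactive shell above, one common approach (an assumption on my part, not something confirmed in this thread) is to set the limit in the environment that launches the DataStage engine, e.g. the dsenv file, so every job inherits it. A process may raise its soft limit up to, but never beyond, its hard limit. A small demonstration of how soft limits behave, done in a subshell so the parent shell is untouched:

```shell
# Hedged sketch: soft limits are per-process and inherited by children,
# which is why setting ulimit in the engine's startup environment (e.g.
# dsenv) affects jobs, while setting it in your login shell does not.
(
  ulimit -Sd 1048576                      # set soft data limit to 1 GiB (in KB)
  echo "subshell soft limit: $(ulimit -Sd) KB"
)
echo "parent soft limit: $(ulimit -Sd)"   # parent shell is unchanged
```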