Heap Allocation Failed-Increasing Heap Size Doesn't Help

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rubik
Participant
Posts: 3
Joined: Fri Dec 07, 2007 5:24 am

Heap Allocation Failed-Increasing Heap Size Doesn't Help

Post by rubik »

There are already numerous posts regarding this subject, but they do not seem to address the situation we are facing.

We have a job that failed with the following error:
The current soft limit on the data segment (heap) size (2147483645) is less than the hard limit (2147483647), consider increasing the heap size limit
Message:: DB2_UDB_API_0,0: Current heap size: 1,598,147,728 bytes in 68,660,973 blocks
Message:: DB2_UDB_API_0,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.
What we have done:
1. Increased the heap (data segment) size hard and soft limits to unlimited. Added "ulimit -aH; ulimit -aS;" to the BeforeJob ExecSH, and below is the output confirming that the limits have been changed (a consolidated check for steps 1 and 2 is sketched after step 3):
ulimit -aH
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited

ulimit -aS
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) 2097151
nofiles(descriptors) unlimited

2. Enabled the large address space model, allowing the DataStage engine (osh) to address up to 2GB of memory:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh

3. Monitored the job execution through "svmon".
It seems that the job can use up to 2GB of working storage memory (8 segments of 256MB each), meaning that the large address space model works.

However, it seems that the job requires >2GB memory and aborts whenever the osh process tries to utilize more than 2GB of memory (verified through svmon).
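
For reference, the checks for steps 1 and 2 can be run from the command line roughly as below. This is only a sketch from our AIX box: the dump utility path, and the assumption that osh is on the PATH, may differ on other systems.

# effective soft and hard data segment (heap) limits for the engine user
ulimit -Sd
ulimit -Hd

# maxdata value stamped on the osh binary by ldedit
# (dump -ov prints the XCOFF auxiliary header, which includes maxDATA)
/usr/ccs/bin/dump -ov `which osh` | grep -i maxdata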

My understanding is that DataStage is a 32-bit application and therefore can only use up to 2GB of memory. If this is the case, how can we work around the error? This is a simple job that reads two database tables (1.5 million records and 10 million records), performs a join on a key, and writes the output to another staging table.

Any help is much appreciated!
sky_sailor
Participant
Posts: 1
Joined: Wed Jun 30, 2010 11:31 pm

Re: Heap Allocation Failed-Increasing Heap Size Doesn't Help

Post by sky_sailor »

We recently hit this problem; it not only aborted the job but also brought the server down.
We found that when you load a table definition there is an optional choice, "Ensure all char columns use unicode". If we import the table layout with this option unchecked, the job works fine.
My theory is that when this option is enabled, the data conversion from ASCII to Unicode is done during the job run. That conversion can push memory use up to the 2GB limit, beyond what DataStage can handle, causing the job to be killed and even the server to go down.
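
A rough back-of-envelope illustrates why this can matter; the 10 million row count is from the original post, and the 100 bytes of char data per row is just an assumed figure for the example:

10,000,000 rows x 100 bytes of char data  ~ 1.0 GB held as single-byte characters
10,000,000 rows x 200 bytes               ~ 2.0 GB once the same columns are converted to Unicode (2 bytes per character)

That second figure is right at the ceiling a 32-bit osh process can address, which would match the symptoms above.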
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Re: Heap Allocation Failed-Increasing Heap Size Doesn't Help

Post by ray.wurlod »

rubik wrote:My understanding is that DataStage is a 32-bit application and therefore can only use up to 2GB of memory.
This is not the case on 64-bit AIX systems (such as version 6.1); I'm not sure whether version 5.3 is 32-bit or 64-bit. Another thing you might look at is the setting of LDR_CNTRL.
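
For example, one way to set it is to export it in dsenv so that the engine processes inherit it. The value below is only illustrative, and whether you want the @DSA suffix (dynamic segment allocation) should be checked against IBM's documentation for your AIX and DataStage release:

LDR_CNTRL=MAXDATA=0x80000000@DSA
export LDR_CNTRL

It has to be in place before the engine is started for osh to pick it up.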
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

You've hit the limit set by your ldedit value. The quick solution is to change all Char and bounded VarChar columns to VarChar columns with no size limit; this can reduce your memory use per row. If the data width cannot be decreased, then I would consider either changing your lookup to a join or splitting your lookup stage into two distinct ones, each with just a subset of the data.
kurapatisrk
Premium Member
Posts: 15
Joined: Wed Feb 24, 2010 6:37 pm
Location: United States

Re: Heap Allocation Failed-Increasing Heap Size Doesn't Help

Post by kurapatisrk »

Hi,

I am getting this error. I have tried everything else except increasing the heap size. Can you tell me how to increase the heap size to unlimited?


Thanks in Advance.
Thanks
Ksrk
prasanna_anbu
Participant
Posts: 42
Joined: Thu Dec 28, 2006 1:39 am

Post by prasanna_anbu »

ArndW wrote:You've hit the limit set by your ldedit value. The quick solution is to change all Char and bounded VarChar columns to VarChar columns with no size limit; this can reduce your memory use per row. If the data ...
Have you resolved this issue? If so, please help me with this.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

prasanna_anbu wrote:Have you resolved this issue? If so, please help me with this.
Rather than jump on the end of an old thread, why not start your own post on the subject? Give us the details of your problem.
-craig

"You can never have too many knives" -- Logan Nine Fingers
koolsun85
Participant
Posts: 36
Joined: Tue Jun 15, 2010 3:30 pm
Location: Tampa

Post by koolsun85 »

Change all the Char datatypes to Varchar and re-run the job. It worked for me.
Thanks
koolsun85
Participant
Posts: 36
Joined: Tue Jun 15, 2010 3:30 pm
Location: Tampa

Post by koolsun85 »

Also, remove the stage and redesign with the same stage type, as the stage might have become corrupted. Rebuilding it might solve the issue.
Thanks
abhijain
Participant
Posts: 88
Joined: Wed Jun 13, 2007 1:10 pm
Location: India

Post by abhijain »

Also, try giving your VarChar fields a definite length (e.g. VarChar(200)) rather than using VarChar().

When we define a column as VarChar() with no length, it defaults to the maximum possible length for the column.

We also faced similar issues and they were crashing our servers. We modified the job using the above resolution and it helped us a lot.
Rgrds,
Abhi