We have a job that failed with the following error:
Message: DB2_UDB_API_0,0: The current soft limit on the data segment (heap) size (2147483645) is less than the hard limit (2147483647), consider increasing the heap size limit
Message: DB2_UDB_API_0,0: Current heap size: 1,598,147,728 bytes in 68,660,973 blocks
Message: DB2_UDB_API_0,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.
What we have done:
1. Increased the heap (data segment) size hard and soft limits to unlimited. We added "ulimit -aH; ulimit -aS;" to the BeforeJob ExecSH, and the output below confirms that the ulimits have changed (a persistent alternative is sketched after the output):
ulimit -aH
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited
ulimit -aS
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) 2097151
nofiles(descriptors) unlimited
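
For reference, since ulimit changes made in a BeforeJob shell apply only to that shell and its children, the same limits can also be set persistently for the engine user in /etc/security/limits on AIX. A minimal sketch, assuming the engine runs as dsadm (the username is an assumption; -1 means unlimited, and the engine typically has to be restarted to pick up the new limits):

* -1 means unlimited; the *_hard attributes set the corresponding hard limits
dsadm:
        data = -1
        data_hard = -1
        stack = -1
        stack_hard = -1
        rss = -1
        rss_hard = -1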
2. Enabled the large address space model, allowing the DataStage engine (osh) to address up to 2 GB of memory:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh
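
To confirm the edit took effect, the maxdata value can be read back from the binary's loader header; and as a per-process alternative that avoids editing the binary, AIX also honors the LDR_CNTRL environment variable. A sketch (standard AIX commands; the field may print as maxDATA, hence the case-insensitive grep):

/usr/ccs/bin/dump -ov osh | grep -i maxdata    # show the maxdata field of the auxiliary header
export LDR_CNTRL=MAXDATA=0x80000000@DSA        # same effect as ldedit, for processes started from this shell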
3. Monitored the job execution with "svmon".
It appears the job can use up to 2 GB of working storage (8 segments of 256 MB each), meaning the large address space model is working.
However, the job seems to need more than 2 GB: it aborts whenever the osh process tries to use more than 2 GB of memory (verified through svmon).
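
For reference, this is roughly how we watched the process (the pid placeholder is ours; svmon reports the 256 MB working-storage segments per process):

ps -ef | grep osh    # find the pid of the running osh process
svmon -P <pid>       # per-process report of the memory segments in use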
My understanding is that DataStage is a 32-bit application and can therefore address at most 2 GB of memory. If that is the case, how can we work around the error? This is a simple job that reads two database tables (1.5 million and 10 million records), joins them on a key, and writes the output to another staging table.
Any help is much appreciated!