
Maximum Size a Folder Stage can handle

Posted: Wed Feb 17, 2016 12:06 pm
by PeteM2
What is the maximum file size that the Folder stage can process in a server job?
Our application runs on AIX 7.1 and we use DataStage version 9.1.
One of the posts dating back to 2006 says that it varies with the OS and DataStage version.
Is this documented anywhere now?
Our intention is to read the content of the file and load it into a record in an Oracle (version 11g) table with a column of CLOB type.
We successfully tested a file of up to 512 MB in our environment but failed to process anything beyond that.

We would be interested to know which factors/parameters determine the maximum file size the Folder stage can handle.

Posted: Wed Feb 17, 2016 1:10 pm
by qt_ky
I would assume it's documented only in terms of your operating system and file system type.

At the operating system level, you may want to look into the ulimit command and make sure the file size limit defaults to unlimited for all of the users who execute jobs, such as the dsadm user.
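For example, on AIX the per-user limits are normally kept in /etc/security/limits (the fsize attribute, where -1 means unlimited). A rough sketch of how you might check and raise the limit for dsadm, assuming you have root access and that dsadm is the user running the jobs:

Code: Select all

# Current file size limit for dsadm (ulimit -f reports 512-byte blocks on AIX)
su - dsadm -c 'ulimit -f'

# Show the fsize attribute as defined in /etc/security/limits
lsuser -a fsize dsadm

# Set it to unlimited (-1); new sessions pick it up, so you will likely
# need to restart the DataStage engine for running jobs to see the change
chuser fsize=-1 dsadm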

A simple way to confirm that at run time is to put a ulimit -a command in the before-job or after-job subroutine (ExecSH) value in your job properties, run the job, and view the detailed job log. You should find something like this:

Code: Select all

-- ulimit -a output:
time(seconds)        unlimited
file(blocks)         unlimited
data(kbytes)         unlimited
stack(kbytes)        unlimited
memory(kbytes)       unlimited
coredump(blocks)     unlimited
nofiles(descriptors) 131072
threads(per process) unlimited
processes(per user)  unlimited
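For instance, the job properties entry could look something like this (the exact field labels may differ slightly by Designer version):

Code: Select all

Before-job subroutine: ExecSH
Input value:           ulimit -a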