Consumed more than 100000 bytes looking for record delimiter
Posted: Wed Sep 19, 2012 8:29 am
I know this topic has been approached many times, and the answer is always "DataStage couldn't find the record delimiter".
However, in my case, for testing purposes, I am building a file on AIX whose length goes from 31990 to 32010 bytes, incrementing by 1 each time.
I run the job after building each file. The file contains nothing but the letter 'A', repeated enough times to reach the target length.
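For reference, this is roughly how I build each file (a minimal ksh sketch; the awk one-liner is just my illustration, any equivalent generator works):

    # build a one-record file of n 'A's plus a newline, for n = 31990..32010
    n=31990
    while [ $n -le 32010 ]; do
        awk -v len=$n 'BEGIN { while (len-- > 0) printf "A"; printf "\n" }' > temp.out
        # ... run the DataStage job against temp.out here ...
        n=$((n + 1))
    done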
When I get to record length 32001 (32000 A's plus one byte for the end-of-line marker), DataStage abends with the message:
Consumed more than 100000 bytes looking for record delimiter; aborting
Here is the file I am building and reading:
wc -l temp.out
1 temp.out
ls -altr temp.out
-rw-r--r-- 1 dsxxx dstage 32001 Sep 19 09:58 temp.out
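To confirm the trailing newline really is there, I can dump the last few bytes (od -c prints the \n explicitly):

    # last bytes of the file; the trailing \n is the record delimiter
    od -c temp.out | tail -2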
Obviously, the problem isn't the setting of the record delimiter, since the setting is identical for file lengths 31990 through 32000, and all of those read fine.
I read the one-record file into a single column in the DataStage Sequential File stage, defined as LongVarChar 45000.
There have been posts about adding APT environment variables, but the documentation states they default to 100K or more, so I don't see how that is the problem. When I run the job, those variables are not listed in the log, so I am not sure how to see what their values really are.
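The only check I know of is on the server side, outside the job (sketch; $DSHOME is the engine install directory and dsenv is the engine-wide environment file):

    # is the variable set anywhere in the engine environment?
    grep APT_MAX_DELIMITED_READ_SIZE $DSHOME/dsenv
    echo $APT_MAX_DELIMITED_READ_SIZE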
Obviously, when it dies at 32001, one has to think there is a limit of 32000 bytes set somewhere, but I am not sure how to track it down. One might expect DataStage to report this better, rather than claiming it read 100000 bytes looking for the record delimiter when the file doesn't even contain that much data.
Also, if it really is $APT_MAX_DELIMITED_READ_SIZE, I cannot find documentation on how to set it correctly. I tried adding it through Edit -> Job Properties -> Parameters -> Add Environment Variable. That put it in the User Defined list with a value of 100000, and either it didn't work, or that wasn't the problem (I still get the error), or I added it in the wrong section. I would have thought it should be a real environment variable rather than one I create.
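If it does have to be a real environment variable, I assume the dsenv route would look something like this (untested sketch; 200000 is an arbitrary value above the reported 100000 default, and I believe the engine has to be restarted to pick it up):

    # in $DSHOME/dsenv (sh syntax)
    APT_MAX_DELIMITED_READ_SIZE=200000
    export APT_MAX_DELIMITED_READ_SIZE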
Also, I have read in posts that DataStage doesn't have limits on record length, yet with all these variables to set, there must be some limit somewhere.