Missing record delimiter "\n", saw EOF instead

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Missing record delimiter "\n", saw EOF instead

Post by kumar_s »

Hi Dsxians,

I am getting the warning
SEQ,0: Missing record delimiter "\n", saw EOF instead
When a similar job reads the same file in another project, it works fine,
so I could conclude there is no problem with the input file. The format and delimiter settings for the Sequential File stages used in both jobs to read the file are exactly the same.

May I know what could be the reason for this warning to pop up?

regards
kumar
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

There is no line terminator ("final delimiter") on the final row in your file. It's a data problem.
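A quick way to confirm that at the Unix level (just a sketch; input.dat is a placeholder for your actual file name) is to look at the last byte of the file:

tail -c 1 input.dat | od -c

If od shows anything other than \n, the final record has no line terminator.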
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Hi Ray,
I understand. But now I am running another job which reads the same file, and it works fine. :!:

regards
kumar
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Kumar,

You can have the Sequential File stage ignore a missing final delimiter. One job has it set and the other does not.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Check - thoroughly - that both Sequential File stages have exactly the same settings for delimiters, both at the file and record level.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Hi,
Both are exactly the same job, one in the SIT project and the other in the Development project, reading the same file. SIT gives the problem and Development doesn't.

Will a rerun of the SIT job work? :roll:

Regards
kumar
sri1dhar
Charter Member
Charter Member
Posts: 54
Joined: Mon Nov 03, 2003 3:57 pm

Post by sri1dhar »

kumar_s wrote:Hi,
Both are exactly the same job, one in the SIT project and the other in the Development project, reading the same file. SIT gives the problem and Development doesn't.

Will a rerun of the SIT job work? :roll:

Regards
kumar
Kumar - Can you please let me know if you found what the issue was? I have the same problem now.
keshav0307
Premium Member
Premium Member
Posts: 783
Joined: Mon Jan 16, 2006 10:17 pm
Location: Sydney, Australia

Post by keshav0307 »

I encountered a similar problem. I don't remember exactly, but it is related to the final delimiter only. Try using a different type of final delimiter; setting the final delimiter value to 'none' will probably solve the problem.
sri1dhar
Charter Member
Charter Member
Posts: 54
Joined: Mon Nov 03, 2003 3:57 pm

Post by sri1dhar »

keshav0307 wrote:I encountered a similar problem. I don't remember exactly, but it is related to the final delimiter only. Try using a different type of final delimiter; setting the final delimiter value to 'none' will probably solve the problem.
My job was actually working fine until a few days ago. Then, when I was testing yesterday, it failed. I restored the old job from the backup and even that job fails. I'm using the same data files that I used when the job ran successfully earlier. The only change I see in the environment is that we installed a patch for the Oracle Enterprise stage. That shouldn't affect the Sequential File stage libraries, but with my experience with this product, who knows?

The record-level settings in the Sequential File stage Format tab are: Final delimiter = end; Record delimiter = UNIX newline.

I am also using Read Method = "File Pattern", i.e. reading from multiple files that match a pattern.

The job acts strangely. I actually have 4 input files. When I run the job individually for each file there is no issue. I tried other combinations: 2 files, 3 files, all 4 files. I only have the issue when the number of files is > 3, and only one of the files is common to all cases.
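One thing that might be worth checking (a sketch only; /data/in/*.dat is a placeholder for your actual file pattern) is whether every file matched by the pattern ends with a newline, since a file with a missing final newline could produce exactly this warning when the files are read together:

for f in /data/in/*.dat; do
    [ -n "$(tail -c 1 "$f")" ] && echo "$f: missing final newline"
done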
sri1dhar
Charter Member
Charter Member
Posts: 54
Joined: Mon Nov 03, 2003 3:57 pm

Post by sri1dhar »

After multiple tests this is what I noticed:

The Sequential File stage reads the first file successfully. From the second file onwards, the warning below repeats every 830 rows. Please note that I am using the "File Pattern" read method so I can load multiple files:

Warning: SeqReutersPricingFile,0: Missing record delimiter "\n", saw EOF instead

I tried with a different file that has a different layout (longer records than the previous one). Now the warning repeats every 775 rows.

Looks like it's the number of bytes rather than the number of records; the warning repeats roughly every ~130K bytes.

In another case, the job read the first 2 files, ignored the other files, and finished with a successful status.
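To test the bytes-rather-than-rows theory, you could compare the record length against the interval at which the warning fires (the file name is a placeholder):

head -1 pricing_file_1.dat | wc -c     # bytes in one record, including the newline
wc -c pricing_file_1.dat               # total bytes in the file

130,000 / 830 is roughly 157 bytes per record for the first layout and 130,000 / 775 is roughly 168 bytes for the longer one, so both observations would be consistent with a fixed byte boundary of about 130K.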
DSguru2B
Charter Member
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Is this a DOS file that you are FTPing over to Unix and then trying to load? At the Unix level, try the command
dos2unix filename
and see if it fixes the file.
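To check whether the file really has DOS line endings before converting it (filename is the same placeholder as above), dump the first record and look at how it ends:

head -1 filename | od -c

If the line ends in \r \n rather than just \n, dos2unix should clean it up.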
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
sri1dhar
Charter Member
Charter Member
Posts: 54
Joined: Mon Nov 03, 2003 3:57 pm

Post by sri1dhar »

DSguru2B wrote:Is this a DOS file that you are FTPing over to Unix and then trying to load? At the Unix level, try the command
dos2unix filename
and see if it fixes the file.
These are files FTPed from a Unix server. The job worked with the same files before. We installed a patch for the Oracle Enterprise stage after that, and I am trying to investigate whether that changed any libraries related to the Sequential File stage.

Also, when I run the job for each individual file, it works with no problem. That probably proves that there is nothing wrong with the files. It's only when each node reads from more than one file that I get the warnings.
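For what it's worth, that pattern could point to a missing final newline on one of the files: when several files are read back to back, the last record of one file runs straight into the first record of the next. A throw-away illustration at the shell (f1 and f2 are scratch files, not your data):

printf 'last_record_of_file1' > f1      # deliberately no trailing newline
printf 'first_record_of_file2\n' > f2
cat f1 f2                               # prints: last_record_of_file1first_record_of_file2
rm f1 f2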
rajan.n
Premium Member
Premium Member
Posts: 96
Joined: Mon Oct 09, 2006 7:47 am

Post by rajan.n »

Hi all,
I am facing the same problem here. I am reading an XML file which is created at run time for the job name given at run time; it is a log file written as XML.
The properties go like this:
Read method: specific file
Options: all default settings
Format:
Record delimiter: null
Field delimiter: none
The column type is char.

This works fine with some jobs in the same project, giving only the warning
XML,0: Missing record delimiter "\n", saw EOF instead

but not with others.
With those I get a warning and a fatal error like this:

XML,0: Consumed more than 100,000 bytes looking for record delimiter; aborting
Can anybody help me with this issue?

Thanks much in advance.
kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Sorry, I don't remember what I did to get rid of this issue.
But check whether File Pattern is used; at the end of each file and the start of the next file during concatenation, the error occurs.
"Consumed more than 100,000 bytes looking for record delimiter"
should refer to the missing delimiter.
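For the XML case above, a quick sanity check (the path is a placeholder) is to see how many newlines the log file actually contains and how big it is:

wc -lc /path/to/job_log.xml     # newline count and byte count

If the stage is looking for "\n", the newline count is 0, and the file is bigger than 100,000 bytes, it will hit the 100,000-byte limit without ever finding a record delimiter, which matches the fatal error above.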
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Krazykoolrohit
Charter Member
Charter Member
Posts: 560
Joined: Wed Jul 13, 2005 5:36 am
Location: Ohio

Post by Krazykoolrohit »

For all your varchar columns except the last column, change "contains terminators" to YES.

Try this and let me know if it helps. If not, you may have to change your record delimiter to some other ASCII value.

You can see Contains Terminators at the Sequential File stage -> Input -> Columns; right-click, select Properties, and tick the check box to show the field.