Reading a sequential file from an EE job into a server job
Hello Everyone,
I have searched the forum and could not find this specific problem, so if anyone can help out, that would be much appreciated.
I have a Parallel job which ends up creating a sequential file with the following Format settings:
Final Delimiter: None
Delimiter: Comma
Quote: Double
This file is intended to be read further downstream by a Server job. I have the settings for the Sequential File stage in the server job set to:
Delimiter: ,
Quote character: "
Both sequential stages are set up to use the same Table Definition.
The problem is that the EE job runs fine and produces a file that I can View Data on easily in DataStage. However, when I try to view the file in the Server job it displays very strangely, with odd control characters.
Can EE job created sequential files be read into Server jobs?
Thanks in advance.
Hi kumar_s,
Thanks for your reply, and for the welcome! :)
The metadata is definitely the same. When I view the file in the Server job all the columns are displayed, but the data seems to spill over into the next rows.
The file is on the same server - it might help to tell you that I used a join stage in the EE job - input to the job was four datasets, output is the sequential file.
The control characters don't just appear at the end of each line; they are all over the place!
Thanks again.
What does the file look like when you view the contents from UNIX?
Hi ArndW,
The file when viewed in vi contains all kind of junk like:
"0^M" - I was expecting "0" - EDITED: checked the input and there are control characters where-ever the ^M appears. The stuff below is still a mystery though.
and
^@28/12/2003^F^@137064^B^@NA^B^
where I would have expected:
"28/12/2003","137064","NA"
It looks to me, therefore, that the join of datasets into a sequential file isn't working. Is this something that could cause problems and should be avoided?
Thanks
Re: Reading a sequential file from an EE job into a server job
I am not sure, but one possibility is that the parallel job is creating the file as a DOS file, and when you try to view it in the server job you are reading it as a UNIX file. Please check that you create and read the file as a UNIX file.
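If that is the cause, the ^M bytes are DOS carriage returns and can be stripped before the Server job reads the file. A minimal sketch (file paths and the conversion step are an assumption; `dos2unix` on the server would do the same job):

```python
# Convert DOS line endings (\r\n, shown as ^M in vi) to UNIX (\n).
# Works on raw bytes so quoted field data is untouched.
def to_unix_newlines(data: bytes) -> bytes:
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

# Hypothetical sample row with DOS endings.
sample = b'"0"\r\n"1"\r\n'
print(to_unix_newlines(sample))  # → b'"0"\n"1"\n'
```

In practice you would read the sequential file as bytes, convert, and write it back before the Server job picks it up.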
GD
What stage type did you use to write the sequential file in the Parallel job?
Try changing the format of the file so that it uses UNIX-style record delimiter. No record delimiter is contra-indicated if the data are variable length and/or delimited format.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks Ray.
The sequential file is being written as the output of a Join stage. The input to the Join stage is four separate datasets, all with the same layout.
I tried setting the format record delimiter to UNIX newline, but no luck.
The other odd thing is that there are 601 rows input, and the stats show 601 lines output from the Join, but when I view data on the sequential file, only 24 of the rows are shown!
Cheers
[quote="DSguru2B"]24 must be the limit on the sequential file stage when you hit view data.[/quote]
Thanks DSguru2B, but I have set the Limit to 1000. The display still only shows 24. They are also 24 from various places on the file, not just the first 24 rows.
I put in a peek and the output of the peek shows all 601 rows are being passed in the link.
Thanks
Hello again all,
I have been through the job with a fine-toothed comb and cannot for the life of me work out why there is "{prefix=2}" after the fields in the OSH. This seems likely to be the problem, or at least indicative of it.
Can anyone advise me please as to where this prefix is set?
Many thanks in advance.
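For what it's worth, `{prefix=2}` in an OSH record schema normally means each field is written with a 2-byte length prefix instead of being delimited and quoted, which would also explain the ^F^@ and ^B^@ bytes in the vi output earlier in the thread (e.g. \x06\x00 before the six-character "137064"). A sketch decoding such fields, assuming little-endian prefixes; the sample bytes are reconstructed from the vi output, not taken from the actual file:

```python
import struct

# Decode fields written with a 2-byte little-endian length prefix,
# as an OSH schema entry like {prefix=2} would produce.
def read_prefixed_fields(data: bytes) -> list[str]:
    fields, pos = [], 0
    while pos + 2 <= len(data):
        (length,) = struct.unpack_from("<H", data, pos)  # 2-byte prefix
        pos += 2
        fields.append(data[pos:pos + length].decode("ascii"))
        pos += length
    return fields

# Hypothetical sample modelled on ^@28/12/2003^F^@137064^B^@NA.
sample = b"\x0a\x0028/12/2003\x06\x00137064\x02\x00NA"
print(read_prefixed_fields(sample))  # → ['28/12/2003', '137064', 'NA']
```

If that reading is right, the fix would be to stop the prefix being generated, i.e. make sure the stage's format properties actually produce delimited, quoted output rather than prefixed fields.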
Hi All,
Managed to get around this by joining the datasets into a complex flat file, then loading the CFF to a sequential file, at which point it is readable in a Server job.
Still perplexed by the original problem, but am happy with a work-around, so onwards and upwards..
Thanks again to anyone that offered help.