Writing to Sequential file error

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

mandyli
Premium Member
Posts: 898
Joined: Wed May 26, 2004 10:45 pm
Location: Chicago

Writing to Sequential file error

Post by mandyli »

Hi

I am writing the output of a Transformer stage to a Sequential File stage. At run time I am getting the following error:

"Job aborting due to row limit being reached on output link:"

Please reply.

Eric
Participant
Posts: 254
Joined: Mon Sep 29, 2003 4:35 am

Post by Eric »

That sounds like a constraint in the Transformer stage that tells the job to stop after a given number of rows have met the condition in that constraint.
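
Conceptually, that row-limit behaviour works like the minimal Python sketch below. This is illustrative only, not DataStage code; the function `run_link` and the parameter `abort_after_rows` are made-up names standing in for the Transformer constraint's "Abort After Rows" setting.

    # Minimal sketch of an "abort after N matching rows" limit -- not DataStage internals.
    def run_link(rows, constraint, abort_after_rows):
        """Yield rows satisfying the constraint; abort once the limit is reached."""
        matched = 0
        for row in rows:
            if constraint(row):
                matched += 1
                yield row
                if abort_after_rows and matched >= abort_after_rows:
                    raise RuntimeError(
                        "Job aborting due to row limit being reached on output link"
                    )

    # Abort as soon as one row meets the constraint (e.g. one lookup reject).
    rejects = [{"id": 42, "status": "REJECTED"}]
    try:
        for r in run_link(rejects, lambda row: row["status"] == "REJECTED", 1):
            print("written to output link:", r)
    except RuntimeError as err:
        print("job aborted:", err)

Running this prints the matching row and then reports the abort, which mirrors the message in the original post.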
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

It certainly does. Can you review the contents of your Constraints grid in the Transformer stage and post the contents of the rightmost column?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
mandyli
Premium Member
Posts: 898
Joined: Wed May 26, 2004 10:45 pm
Location: Chicago

Post by mandyli »

Thanks

It works now.
rsrikant
Participant
Posts: 58
Joined: Sat Feb 28, 2004 12:35 am
Location: Silver Spring, MD

Post by rsrikant »

Actually, I had a similar problem too. A bit strange.

In my job I am writing from the Lookup rejects link to a Column Generator, then to a Transformer, and finally to a Sequential File stage.

As Eric and Ray mentioned, I have a constraint on the Transformer stage to abort the job after 1 row is rejected from the Lookup stage, and that 1 row is written to the sequential file.

lookup --------> col generator --------> transformer --------> Seq. File

This job runs inconsistently: one time it works fine, the next time it does not. I am running it on multiple nodes with the $APT_BUFFERING_POLICY environment variable set to force buffering.

We have a few more jobs like this, and all of them run fine except this one. I don't see any difference between this job and the other jobs that run fine.

Finally, after some trial and error, I changed the Column Generator stage's execution mode from parallel to sequential. Then the job started working fine.

I was quite happy that the problem was solved, but I never understood what actually fixed it.

Any idea what's happening?

Thanks,
Srikanth
rsrikant
Participant
Posts: 58
Joined: Sat Feb 28, 2004 12:35 am
Location: Silver Spring, MD

Post by rsrikant »

To add more detail and make it clearer:

The actual problem is not the error "Job aborting due to row limit being reached on output link:".

In fact, I want the job to abort with this error if any row gets rejected from the Lookup stage.

The problem is that when a row gets rejected from the Lookup, the job aborts, but the row is not written to the sequential file.

When the Column Generator stage's execution mode is changed from parallel to sequential, the job aborts and the row also gets written to the sequential file.

All other jobs have the Column Generator stage running in parallel, but they still abort properly and write the row to the sequential file.

Thanks,
Srikanth
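
The intended ordering Srikanth describes — write the reject row to the file first, then abort — can be pictured with this minimal Python sketch. It is illustrative only; `handle_lookup_reject` and the file name are made up, and this is not how the parallel framework is actually implemented.

    # Minimal sketch of the intended ordering -- illustrative only, not DataStage code.
    def handle_lookup_reject(reject_row, reject_file="lookup_rejects.txt"):
        # 1. First, append the rejected row to the sequential file
        #    (the step that was being skipped in the failing runs).
        with open(reject_file, "a") as f:
            f.write(",".join(str(v) for v in reject_row.values()) + "\n")

        # 2. Only then abort, mirroring the Transformer constraint's row limit of 1.
        raise RuntimeError("Job aborting due to row limit being reached on output link")

    try:
        handle_lookup_reject({"key": 101, "status": "NOT FOUND IN LOOKUP"})
    except RuntimeError as err:
        print("job aborted:", err)  # the reject row is already on disk at this point

In the failing runs the job aborted without the write ever landing in the file; forcing the Column Generator to run sequentially restored this ordering.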