Failed to open file in datastage status=13

Post questions here related to DataStage Server Edition, for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Failed to open file in datastage status=13

Post by thurmy34 »

Hi gurus,
For the past few days we have randomly been getting the following error:
failed to open file in datastage status=13
It happens inside jobs that write and then read a file.
The write is OK but the read fails.
After the abort we can relaunch the job and it works.
Can you help me?
Thank you
Hope This Helps
Regards
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Check the permissions on the file and its parent folder, and determine under which user ID the jobs run (you may also need to check group membership).
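A quick way to confirm this (a rough sketch only, assuming a Unix host; the path is a placeholder - substitute the file your job opens) is to run something like the following as the same user ID under which the jobs execute:

    # Rough sketch (Python): check whether the job's user can read the file.
    # The path below is a placeholder - substitute the file the job opens.
    import grp
    import os
    import pwd
    import stat

    path = "/data/project/target_file.txt"  # placeholder path

    st = os.stat(path)
    print("owner:", pwd.getpwuid(st.st_uid).pw_name)
    print("group:", grp.getgrgid(st.st_gid).gr_name)
    print("mode :", stat.filemode(st.st_mode))

    # os.access() answers for the user running this script, so run it under
    # the same user ID as the DataStage jobs.
    print("file readable      :", os.access(path, os.R_OK))
    print("parent dir entered :", os.access(os.path.dirname(path), os.X_OK))

If the read fails only sometimes, pay particular attention to group membership and to the permissions on the parent directory.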

Bonne chance!
Last edited by ray.wurlod on Wed Jun 08, 2016 10:54 pm, edited 1 time in total.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

Is it a cluster environment where your job gets sent to a different host that doesn't have the mount?
thurmy34
Premium Member
Posts: 198
Joined: Fri Mar 31, 2006 8:27 am
Location: Paris

Post by thurmy34 »

Hi all
Thank you for your answers.
The weird thing is that we can write the file but not open it right afterwards, inside the same job.
Hope This Helps
Regards
Teej
Participant
Posts: 677
Joined: Fri Aug 08, 2003 9:26 am
Location: USA

Post by Teej »

You should not be reading and writing to the same file at the same time. It just does not work well that way due to the locks that we impose while writing to the file.

I would suggest that a review of the job design is in order.
Developer of DataStage Parallel Engine (Orchestrate).
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Actually, Teej, such blocking operations are perfectly possible, feasible and even normal in server jobs.

Put another way, in server jobs a passive stage can have an input link (writing) AND an output link (reading). When the job runs, the output link is not opened until the input link has been closed.

One use case is pre-populating hashed files in the same job in which they're used to deliver reference data.

Check also that the file pathname is EXACTLY the same on the input and the output link - you may be trying to open a non-existent file. That it works sometimes suggests that you may be using a parameter that sometimes has an incorrect value, or it may be a timing issue in the underlying file system (which may be a bit slow to effect the close, because it has to update date/time modified in a large directory and date/time accessed in all parent directories, for example).
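If a slow close turns out to be the culprit, one pragmatic workaround (a rough sketch only - the path, retry count and delay are placeholders, not anything taken from your job) is to poll for the file before the read step:

    # Rough sketch (Python): confirm the resolved pathname exists and tolerate
    # a slow close by retrying briefly. Path and timings are placeholders.
    import os
    import time

    path = "/data/project/target_file.txt"  # the pathname the output link resolves to

    def wait_for_file(p, attempts=5, delay=2.0):
        """Return True once the file exists and is non-empty, else False."""
        for _ in range(attempts):
            if os.path.isfile(p) and os.path.getsize(p) > 0:
                return True
            time.sleep(delay)
        return False

    if not wait_for_file(path):
        raise SystemExit("File not visible after retries: " + path)
    print("File ready:", path)

Logging the resolved pathname on both links before opening them will also show immediately whether a parameter is occasionally expanding to the wrong value.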
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.