Unable to delete hashed file - job aborts

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Nic
Charter Member
Posts: 24
Joined: Mon Sep 26, 2005 1:08 pm
Location: UK

Unable to delete hashed file - job aborts

Post by Nic »

Error message - every job that contains a hashed file aborts:

FSSValidationJob..HshFSS.LnkToHshFSS: DSD.UVOpen rm: Unable to remove directory HshFSS: File exists
Unable to DELETE file "HshFSS".
"HshFSS" is already in your VOC file as a file definition record.
File name =
File not created.
.

I have a batch job that had been running fine on Project A for about 2-3 weeks until it suddenly started aborting with the above message. Every job that had a hashed file aborted. When I exported the batch and ran it in another project, it worked fine.
A month later the same problem occurred on the other project. The 3am schedule was running fine, but when we came to run the batch job manually in the morning it started aborting. The batch job has now been scheduled for 3am every night for about a week, and we have also been running it 3-4 times a day without any problems.
I am using dynamic hashed files that are deleted and recreated in the default project directory every time the job runs. Apparently this isn't a space issue. There is other development work going on in the same project, but the file names are different.
Any idea what could be causing this problem? At this stage I am more interested in preventing it than in resolving it.
Thanks very much for your help.
jdulaney
Charter Member
Posts: 13
Joined: Thu Feb 02, 2006 1:32 pm

File permissions?

Post by jdulaney »

Have you verified that the file permissions haven't changed on the hashed files? My security people are not at all happy with the permissions created on the files and want them changed. Perhaps something is running which modifies the permissions?

Jeanne
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

You have specified "delete file" in the create options for your hashed files, and the part that executes the UNIX "rm" to get rid of the file is failing. This is because of UNIX permissions, as Jeanne has already posted. You need to find out whether the umask for your user has changed, or whether you are perhaps running this under a different userid from the one that created the files in the first place.
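A dynamic hashed file is an operating system directory, so deleting it requires write permission on the directory itself. A small sketch of the permission behaviour (using a scratch directory as a stand-in for the real "HshFSS" in the project directory, which is an assumption, not the actual path):

```shell
#!/bin/sh
# Sketch: show how the group write bit on a hashed-file directory
# decides whether other users in the group can delete its contents.
# The directory below is a temporary stand-in, not a real hashed file.
d=$(mktemp -d)
mkdir "$d/HshFSS"

# Without the group write bit, only the owner can remove files inside:
chmod 750 "$d/HshFSS"
ls -ld "$d/HshFSS" | cut -c1-10     # drwxr-x---

# With the group write bit set, other group members can remove them too:
chmod 770 "$d/HshFSS"
ls -ld "$d/HshFSS" | cut -c1-10     # drwxrwx---

rm -r "$d"
```

Running `ls -ld` and `id` against the real hashed file in the project directory will show whether the userid that runs the job actually has write access to it.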
Nic
Charter Member
Posts: 24
Joined: Mon Sep 26, 2005 1:08 pm
Location: UK

Looks like it is permissions

Post by Nic »

Thanks for all your help; it pointed me in the right direction. It looks like that is what was causing the problem, as the person who scheduled the jobs can run them with the same userid whereas some other people can't. I will now investigate the groups and permissions, and if they are not the same I will change them to see if that resolves the problem.
Thanks again.
Nic
Charter Member
Posts: 24
Joined: Mon Sep 26, 2005 1:08 pm
Location: UK

Re: Unable to delete hashed file - job aborts

Post by Nic »

The problem was that the users were in different groups that had been set up with different permissions. We created a functional user id and used it to schedule the jobs, which solved the problem.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Make sure all DataStage users' umask is set to 002, and that they belong to appropriate operating system groups.
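To illustrate why umask 002 matters here: a minimal sketch, assuming the hashed file is created as a new directory (the directory names below are made up for the demonstration). Under umask 002 a new directory comes up group-writable (775), so any user in the same group can later delete the hashed file; under the common default of 022 it comes up 755 and only the owner can.

```shell
#!/bin/sh
# Sketch: compare directory permissions produced under umask 002 vs 022.
d=$(mktemp -d)

( umask 002; mkdir "$d/hash_group_writable" )
( umask 022; mkdir "$d/hash_owner_only" )

ls -ld "$d/hash_group_writable" | cut -c1-10   # drwxrwxr-x (775)
ls -ld "$d/hash_owner_only"     | cut -c1-10   # drwxr-xr-x (755)

rm -r "$d"
```

The umask is typically set in the user's login profile (and, for scheduled jobs, in whatever environment the scheduler gives the job), so both places need checking.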
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.