Multi Instances not able to view in director
Moderators: chulett, rschirm, roy
Hi All,
I have a multi-instance job that creates around 67 instances. After the sequence completes, all of the instances have vanished from Director. What setting do I need to change at the Administrator level? Can anybody help with this, please?
You don't.
You need to change the setting to 2 days or 3 days to be safe.
Having a 2-instance setting on a multi-instance job is very bad. It often leads to job failures. You can basically only have 2 iterations running at the same time. If you spin up #3 while #1 and #2 are running, job #1 will fail on its next attempt to write to the log (my guess would be a BLINK error).
Job log auto-purging only happens when the job executes (or there is write activity to the log).
You can have a 2-day purge setting and have a log sit there for months; on its next execution, it will kick off the log purge process.
But yes, I do see your point. You want the ability to have a 2-iteration retention, yet for multi-instance jobs you want X amount of days.
Pick one.
I do not believe you can override the setting at a job level.
I would pick the 3-day retention method.
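To see why a run-count retention setting starves a multi-instance job, here is a toy Python sketch. This is my own illustration of the behavior described above, not DataStage's actual code: it assumes every new run triggers a purge that keeps only the most recent N run logs, so 67 instance runs leave only the last 2 behind.

```python
# Toy model: "auto-purge up to N previous runs" applied to a
# multi-instance job whose instances all log against the same job.
from collections import deque

def run_instances(num_instances, keep_previous_runs):
    """Simulate N instance runs under a run-count purge setting."""
    log = deque()
    for i in range(1, num_instances + 1):
        log.append(f"Job.Instance{i:03d}")
        # Purge fires on each run: keep only the most recent entries.
        while len(log) > keep_previous_runs:
            log.popleft()
    return list(log)

# 67 instances with the project set to keep 2 previous runs:
print(run_instances(67, keep_previous_runs=2))
# Only the last 2 instances' logs survive; the other 65 have "vanished".
```

A day-based retention setting avoids this because purging is driven by age rather than by how many sibling instances have run since.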
Hi Paul,
I found the info below on the IBM website. Will this fix resolve my issue?
Problem summary
****************************************************************
USERS AFFECTED:
Customers using Auto-Purging that require failed jobs to not be
purged so that the status can be investigated.
****************************************************************
PROBLEM DESCRIPTION:
When Auto-purging is in use, all instances of a job will be
purged, without checking for the success or fail status.
****************************************************************
RECOMMENDATION:
This change is included in 8.0.1 Fix Pack 2.
****************************************************************
Problem conclusion
The Auto-Purge functionality has been enhanced so that jobs
which have failed do not have their Status or Log records
removed, so that subsequent evaluation of the failure can
occur. This is enabled by adding an environment variable for the
relevant project in the DataStage Administrator.
DS_LOG_AUTOPURGE_IGNORE_STATUS should contain a comma-separated
list of the status codes to ignore. A failed job has a status of
3, so the default value of this environment should be 3. By
default this environment variable will not exist and will have
no impact on existing Auto-Purge logic. The variable should be
added in the User-defined section and is likely to slow the
operation of the Auto-purge code when a job completes. Job Logs
& Status can still be completely cleared from the DataStage
Director.
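As a hedged sketch (my own illustration, not IBM's implementation), the purge filter the APAR describes amounts to skipping any run whose status code appears in the comma-separated DS_LOG_AUTOPURGE_IGNORE_STATUS list; when the variable is absent, the existing purge logic is unchanged:

```python
import os

def purge_candidates(runs, ignore_statuses_env):
    """Return the runs eligible for auto-purge.

    runs: list of dicts with a numeric "status" key.
    ignore_statuses_env: value of DS_LOG_AUTOPURGE_IGNORE_STATUS, or None.
    """
    if not ignore_statuses_env:
        # Variable not set: purge everything, as before the fix.
        return list(runs)
    ignored = {int(s) for s in ignore_statuses_env.split(",") if s.strip()}
    return [run for run in runs if run["status"] not in ignored]

runs = [
    {"invocation": "Inst001", "status": 1},  # finished OK
    {"invocation": "Inst002", "status": 3},  # aborted (status 3 per the APAR)
]
env = os.environ.get("DS_LOG_AUTOPURGE_IGNORE_STATUS", "3")
print(purge_candidates(runs, env))  # the aborted run is excluded from purging
```

Note that this fix only protects failed runs; successfully finished instances would still be purged, so it would not by itself keep your 67 successful instance logs visible.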
Just to clarify, by default job log auto purge only occurs when the job finishes WITHOUT FAILURE.
If the job aborts, crashes or is stopped, no entries are purged from the job log.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi Ray,
I found one more option: in Clear Log in Director I can set Auto Purge to older than a day or two. I tested this and it worked fine. I have a couple of questions on this:
1) Will this option be retained if I compile the job?
2) If I migrate this job to another environment, will this option be retained, or do I need to set auto-purge in Director again when the code goes to the other environment?
3) Will this option work reliably, or will I face any issues if this code is moved into production?
4) Is there any parameter to override the project-level setting at the job level?
The purge settings set specifically for a job remain even when editing and compiling the job. The purge settings are not exported with a job and thus won't be imported into another project.
Not setting purge-value defaults at a project level has very often caused incredibly slow performance.
If you specifically set the purge settings on a job, the project-level attributes are overridden. So if you turn off auto-purge on a job, any project-level purging is disabled for that job.
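The precedence described above can be summarized in a small sketch. This is assumed semantics based on this post, not documented DataStage internals: a job-level setting, when present, completely replaces the project default.

```python
def effective_purge(project_setting, job_setting=None):
    """Job-level purge setting, when present, wins over the project default."""
    return job_setting if job_setting is not None else project_setting

# Project purges logs older than 3 days; job A has no override, job B
# explicitly disables auto-purge.
print(effective_purge({"days": 3}))                         # project default applies
print(effective_purge({"days": 3}, {"auto_purge": False}))  # job disables purging
```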