Unable to open project 'XYZ' - 81016. ( Job Migrated )
I have a job design that was migrated from Sun Solaris/DS 7.5.1 to AIX/DS 7.5.3.
The job design is as below:
Oracle---Transformer (4 links coming out)---Funnel---Seq File
In the Transformer I call a Server Routine which has 4 arguments: job name, table name, source name and file name.
Here we capture the table count, tables loaded, tables not loaded, etc.
The job is aborting with the fatal error below:
Trns,0: Unable to open project 'XYZ' - 81016.
Trns,0: The runLocally() of the operator failed. [api/operator_rep.C:4069]
Trns,0: Operator terminated abnormally: runLocally did not return APT_StatusOk [processmgr/rtpexecutil.C:167]
main_program: Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420]
Unexpected exit status 1 [processmgr/slprocess.C:420]
I searched the forum for
Unable to open project 'XYZ' - 81016.
but could not find any solution; the error looks misleading.
Any ideas?
Thanks in advance.
Thanks,
Pavan
How was the actual migration performed?
SELECT * FROM SYS.MESSAGE WHERE @ID = '081016';
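If you are not sure where to run that query, it goes against the engine's SYS.MESSAGE table, either from the Administrator client's Command window or from the engine shell on the server. A sketch of the server-side route (the $DSHOME path here is an assumption based on the install path shown later in this thread):

```
# Sketch, assuming $DSHOME points at the DataStage engine directory,
# e.g. /apps/Ascential/DataStage/DSEngine on this server.
cd $DSHOME
. ./dsenv        # set up the engine environment
bin/uvsh         # then type the SELECT at the uvsh prompt:
SELECT * FROM SYS.MESSAGE WHERE @ID = '081016';
```

This returns the full text of message 081016, which can make a terse error number considerably less misleading.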
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks Ray for the quick reply.
We manually exported/imported the .dsx from Sun Solaris/DS 7.5.1 to AIX/DS 7.5.3.
We have all the necessary config files created on the new AIX server, and the rest of the code is working fine.
This is the only job with a Basic Transformer calling a server routine, and it is the only one aborting.
Thanks,
Pavan
I created a simple job (Oracle---Basic Transformer---Seq File) with a Basic Transformer in it. Even this aborts with the same error:
BASIC_Transformer_10,1: Unable to open project 'EDW' - 81016.
When I call a Server Routine in the Basic Transformer, the job aborts again with the same error:
BASIC_Transformer_10,1: Unable to open project 'EDW' - 81016.
Thanks,
Pavan
I'm not aware of what circumstances would cause the failure of the BASIC Transformer in a PX job like that. I would think the next step would be to contact your official support provider and see if they can resolve it, then post the results back here.
As to a 'workaround', to me that would be to not use that stage. Convert whatever the server routine does into its PX equivalent.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Have you tried a forced compile on the job? Is "XYZ" a project name on the old machine?
Craig - I missed that part. I wonder if this could be a distributed installation... what does the apt_config file look like?
The project name is the same on the old and new machines.
I tried a force compile as well, but it didn't work.
The config file we are using is:
main_program: APT configuration file: /apps/Ascential/DataStage/Configurations/XYZDefault.apt
We have one main server and two cluster servers.
The job aborts on the same single server, which is our Dev server.
Let me know if I can provide any further information.
/* DataStage Configuration File - Project=XYZ
File automatically generated - 2009/07/16 09:25
*/
{
node "Conductor_01"
{
fastname "dseax006"
pools "Conductor"
resource disk "/worknode/datasets" {pools ""}
resource scratchdisk "/worknode/scratch" {pools ""}
}
node "etlax007_01"
{
fastname "etlax007"
pools ""
resource disk "/worknode07/datasets" {pools ""}
resource disk "/worknode01/datasets" {pools ""}
resource disk "/worknode08/datasets" {pools ""}
resource disk "/worknode05/datasets" {pools ""}
resource disk "/worknode03/datasets" {pools ""}
resource disk "/worknode04/datasets" {pools ""}
resource disk "/worknode06/datasets" {pools ""}
resource disk "/worknode02/datasets" {pools ""}
resource scratchdisk "/worknode07/scratch" {pools ""}
resource scratchdisk "/worknode01/scratch" {pools ""}
resource scratchdisk "/worknode08/scratch" {pools ""}
resource scratchdisk "/worknode05/scratch" {pools ""}
resource scratchdisk "/worknode03/scratch" {pools ""}
resource scratchdisk "/worknode04/scratch" {pools ""}
resource scratchdisk "/worknode06/scratch" {pools ""}
resource scratchdisk "/worknode02/scratch" {pools ""}
}
node "etlax008_02"
{
fastname "etlax008"
pools ""
resource disk "/worknode07/datasets" {pools ""}
resource disk "/worknode01/datasets" {pools ""}
resource disk "/worknode08/datasets" {pools ""}
resource disk "/worknode05/datasets" {pools ""}
resource disk "/worknode03/datasets" {pools ""}
resource disk "/worknode04/datasets" {pools ""}
resource disk "/worknode06/datasets" {pools ""}
resource disk "/worknode02/datasets" {pools ""}
resource scratchdisk "/worknode07/scratch" {pools ""}
resource scratchdisk "/worknode01/scratch" {pools ""}
resource scratchdisk "/worknode08/scratch" {pools ""}
resource scratchdisk "/worknode05/scratch" {pools ""}
resource scratchdisk "/worknode03/scratch" {pools ""}
resource scratchdisk "/worknode04/scratch" {pools ""}
resource scratchdisk "/worknode06/scratch" {pools ""}
resource scratchdisk "/worknode02/scratch" {pools ""}
}
node "etlax007_03"
{
fastname "etlax007"
pools ""
resource disk "/worknode07/datasets" {pools ""}
resource disk "/worknode01/datasets" {pools ""}
resource disk "/worknode08/datasets" {pools ""}
resource disk "/worknode05/datasets" {pools ""}
resource disk "/worknode03/datasets" {pools ""}
resource disk "/worknode04/datasets" {pools ""}
resource disk "/worknode06/datasets" {pools ""}
resource disk "/worknode02/datasets" {pools ""}
resource scratchdisk "/worknode07/scratch" {pools ""}
resource scratchdisk "/worknode01/scratch" {pools ""}
resource scratchdisk "/worknode08/scratch" {pools ""}
resource scratchdisk "/worknode05/scratch" {pools ""}
resource scratchdisk "/worknode03/scratch" {pools ""}
resource scratchdisk "/worknode04/scratch" {pools ""}
resource scratchdisk "/worknode06/scratch" {pools ""}
resource scratchdisk "/worknode02/scratch" {pools ""}
}
node "etlax008_04"
{
fastname "etlax008"
pools ""
resource disk "/worknode07/datasets" {pools ""}
resource disk "/worknode01/datasets" {pools ""}
resource disk "/worknode08/datasets" {pools ""}
resource disk "/worknode05/datasets" {pools ""}
resource disk "/worknode03/datasets" {pools ""}
resource disk "/worknode04/datasets" {pools ""}
resource disk "/worknode06/datasets" {pools ""}
resource disk "/worknode02/datasets" {pools ""}
resource scratchdisk "/worknode07/scratch" {pools ""}
resource scratchdisk "/worknode01/scratch" {pools ""}
resource scratchdisk "/worknode08/scratch" {pools ""}
resource scratchdisk "/worknode05/scratch" {pools ""}
resource scratchdisk "/worknode03/scratch" {pools ""}
resource scratchdisk "/worknode04/scratch" {pools ""}
resource scratchdisk "/worknode06/scratch" {pools ""}
resource scratchdisk "/worknode02/scratch" {pools ""}
}
}
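The BASIC Transformer runs through the server engine on whichever node executes it, so one way to rule out the cluster layout is to rerun the job with a minimal single-node configuration pinned to the conductor host. This is only a sketch; it reuses the fastname and paths already shown above, and the file would be saved alongside XYZDefault.apt and selected via APT_CONFIG_FILE for the test run:

```
/* Minimal single-node test config - pins all work to the conductor host */
{
	node "node1"
	{
		fastname "dseax006"
		pools ""
		resource disk "/worknode/datasets" {pools ""}
		resource scratchdisk "/worknode/scratch" {pools ""}
	}
}
```

If the job succeeds with this one-node file but fails with the clustered one, that would suggest the compute hosts (etlax007/etlax008) cannot attach to the project, which would fit the "Unable to open project" text.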
Thanks,
Pavan