Error in build output schema from DataSetDef
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 288
- Joined: Tue May 27, 2008 3:42 am
- Location: Luxembourg
I have a job with 2 Oracle connectors. The source and target databases are the same. The source extracts data from many schemas and the query is like:
SELECT DISTINCT
TO_DATE('#L_PERD_ARRT_INFO#'|| '01','YYYYMMDD') AS DATE_TRT,
TO_DATE('#L_PERD_ARRT_INFO#'|| '01','YYYYMMDD') AS DATE_DERN_SITT,
RCOFP.LIBL_ORGN_FINN AS LIBL_ORGN_FINN,
DPP.CODE_CIVL AS CODE_CIVL,
DPP.DATE_NAIS AS DATE_NAIS,
DPP.CODE_INSEE_COMM_NAIS AS CODE_INSEE_VILLE,
...etc
FROM
DCLI.DCLI_PROFIL_PARTIC DPP ,
RDWH.RDWH_CODE_ORGN_FINN_PILT RCOFP,
... etc
Since I'm using RCP (runtime column propagation) and my target table's column names are identical to the names declared after each 'AS', I have not found it necessary to define any columns explicitly.
The funny thing is that while this works perfectly with one Oracle user, it fails with another.
To note:
This abort occurs only in DataStage.
If I run the query in my RDBMS (connecting with the same account) it works fine.
Both Oracle users have the same privileges and can see both the source and target tables.
The DataStage error I get is:
Error creating schema ( record( DATE_TRT:nullable timestamp; ..... ) from DataSetDef provided by connector: In field "CODE_INSEE_VILLE": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NOM_MART": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NOM_PATR": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "PRNM": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_2EME_LIGN_ADRS_AFNR_V2": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_3EME_LIGN_ADRS_AFNR_V3": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_4EME_LIGN_ADRS_AFNR_V4": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_5EME_LIGN_ADRS_AFNR_V5": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
What is abnormal is that everything works fine using one Oracle account.
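In case it helps, since string[0] suggests the connector picked up a zero length from metadata, one check would be to run a query like the following as each of the two accounts and compare what the Oracle dictionary reports. This is only a sketch; the column list below is just the failing fields:

```sql
-- Run as each Oracle account and compare: if CHAR_LENGTH (or DATA_LENGTH)
-- comes back as 0 for one account, the connector's string[0] matches what
-- that account sees in the dictionary.
SELECT owner, table_name, column_name, data_type, data_length, char_length
FROM   all_tab_columns
WHERE  table_name IN ('DCLI_PROFIL_PARTIC', 'DCLI_ADRESSES')
AND    column_name IN ('CODE_INSEE_COMM_NAIS', 'NOM_MART', 'NOM_PATR', 'PRNM',
                       'NUMR_2EME_LIGN_ADRS_AFNR_V2');
```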
Any ideas guys?
Thanks in advance
Tony
BI Consultant - Datastage
Could you turn on "$OSH_PRINT_SCHEMAS" and see if the displayed internal schemas differ between the two users, and post any differences to the thread?
This variable won't change the outcome. But it does add an entry to the log file which shows the schemas used for each link.
Run the job with one user, store the schemas in a file. Run it again with the other user, store that schema in a file. Compare the two schemas for the output to see if there are any differences.
The environment variable OSH_PRINT_SCHEMAS does not seem to print anything extra in the Director log. Whether this variable is set to True or False, all I get is the "OSH script" line in my Director log.
And for both runs of the job (the one using the Oracle account that isn't giving any trouble and the one using the Oracle account that crashes), the detail in the OSH script is identical.
The messages following the abort are:
ORA_DCLI_PROFIL_PARTIC: Error creating schema ( record( DATE_TRT:nullable timestamp; DATE_DERN_SITT:nullable timestamp; SGK_PERS:decimal[9, 0]; CODE_ORGN_FINN:string[5]; LIBL_ORGN_FINN:nullable string[max=40]; NUMR_PERS:decimal[9, 0]; CODE_CIVL:nullable string[1]; DATE_NAIS:nullable timestamp; CODE_INSEE_VILLE:nullable string[0]; LIBL_INSEE_VILLE:nullable string[max=80]; CODE_DEPT_NAIS:nullable string[2]; CODE_NATN_REGL:nullable string[1]; LIBL_NATN_REGL:nullable string[max=80]; NB_ENFN_CHRG:nullable decimal[2, 0]; CODE_CSP:nullable string[4]; LIBL_CSP:nullable string[max=80]; CODE_CONT_TRVL_CLNT:nullable string[1]; LIBL_CONT_TRVL_CLNT:nullable string[max=80]; NOM_MART:nullable string[0]; NOM_PATR:nullable string[0]; PRNM:nullable string[0]; CODE_CAPC_JURD:nullable string[2]; LIBL_CAPC_JURD:nullable string[max=160]; DATE_EMBC:nullable timestamp; NUMR_SIREN_EMPL:nullable string[9]; NOM_EMPL:nullable string[max=32]; CODE_NAF_REV2:nullable string[5]; LIBL_NAF_REV2:nullable string[max=100]; NUMR_2EME_LIGN_ADRS_AFNR_V2:nullable string[0]; NUMR_3EME_LIGN_ADRS_AFNR_V3:nullable string[0]; NUMR_4EME_LIGN_ADRS_AFNR_V4:nullable string[0]; NUMR_5EME_LIGN_ADRS_AFNR_V5:nullable string[0]; NUMR_6EME_LIGN_ADRS_AFNR_V6:nullable string[max=32]; MT_CUML_DEPO:nullable decimal[15, 0]; MT_CUML_EPAR_LIQD:nullable decimal[15, 0]; MT_CUML_EPAR_CONT:nullable decimal[15, 0]; MT_CUML_EPAR_INVS:nullable decimal[15, 0]; NUMR_CHRN_PRFS:decimal[3, 0]; ) ) from DataSetDef provided by connector: In field "CODE_INSEE_VILLE": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NOM_MART": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NOM_PATR": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "PRNM": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_2EME_LIGN_ADRS_AFNR_V2": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_3EME_LIGN_ADRS_AFNR_V3": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_4EME_LIGN_ADRS_AFNR_V4": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
In field "NUMR_5EME_LIGN_ADRS_AFNR_V5": Parsing parameters "0" for schema type "string": Length must be positive, got: "0"
So apparently it expects a length > 0 for certain fields.
Tony
BI Consultant - Datastage
You need to set this print-schemas variable to True; it will then create an entry in the Director log shown as "main_program: Schemas:" containing the detailed schemas.
The column "NUMR_2EME_LIGN_ADRS_AFNR_V2:nullable string[0];" is certainly wrong. Where do you define it, and as what?
Actually, I have not defined any of the columns. I'm using RCP to read from the source Oracle tables and write to the target Oracle tables.
So, isolating just the columns for which the error occurs, my source query is:
SELECT DISTINCT
DPP.CODE_INSEE_COMM_NAIS AS CODE_INSEE_VILLE,
DPP.NOM_MART AS NOM_MART,
DPP.NOM_PATR AS NOM_PATR,
DPP.PRNM AS PRNM,
DA.NUMR_2EME_LIGN_ADRS_AFNR_V2 AS NUMR_2EME_LIGN_ADRS_AFNR_V2,
DA.NUMR_3EME_LIGN_ADRS_AFNR_V3 AS NUMR_3EME_LIGN_ADRS_AFNR_V3,
DA.NUMR_4EME_LIGN_ADRS_AFNR_V4 AS NUMR_4EME_LIGN_ADRS_AFNR_V4,
DA.NUMR_5EME_LIGN_ADRS_AFNR_V5 AS NUMR_5EME_LIGN_ADRS_AFNR_V5
FROM
S9958014.SACCEF_PERM_PERS T,
DCLI.DCLI_PROFIL_PARTIC DPP ,
(SELECT NUMR_PERS,NUMR_CHRN_PRFS,CODE_ORGN_FINN,
MT_TRSR_REEL_MOIS,
MT_EPRG_LIQD_REEL_MOIS,
MT_EPRG_CONT_REEL_MOIS,
MT_EPRG_INVS_MOIS
FROM DCLI.DCLI_SYNT_PARTIC_HIST WHERE PERD_ARRT_INFO='201205') DSPH,
RDWH.RDWH_CODE_ORGN_FINN_PILT RCOFP,
DCLI.DCLI_INSEE_COMMUNE DIC,
RDWH.RDWH_NATN_REGL RNR,
RDWH.RDWH_BPCE_CODE_CSP RBCC,
RDWH.RDWH_CODE_CONT_TRVL_CLNT RCCTC,
RDWH.RDWH_CODE_TYPE_CAPC_JURD RCTCJ,
RDWH.RDWH_BPCE_CODE_NAF_REV2 RBCNR,
DCLI.DCLI_ADRESSES DA
WHERE
T.NUMR_CHRN_PRFS=DPP.NUMR_CHRN_PRFS(+) AND
T.CODE_ORGN_FINN = DPP.CODE_ORGN_FINN(+) AND
T.NUMR_PERS = DPP.NUMR_PERS(+) AND
DPP.NUMR_PERS = DSPH.NUMR_PERS(+) AND
DPP.CODE_ORGN_FINN = DSPH.CODE_ORGN_FINN(+) AND
DPP.NUMR_CHRN_PRFS=DSPH.NUMR_CHRN_PRFS(+) AND
DPP.CODE_ORGN_FINN = RCOFP.CODE_ORGN_FINN(+) AND
NVL(DPP.CODE_INSEE_COMM_NAIS,'*') = DIC.CODE_INSEE_COMM(+) AND
NVL(DPP.CODE_NATN_REGL,'*') = RNR.CODE_NATN_REGL(+) AND
NVL(DPP.CODE_CSP,'*') = RBCC.CODE_CSP(+) AND
NVL(DPP.CODE_TYPE_CONT_TRVL_CLNT,'*') = RCCTC.CODE_TYPE_CONT_TRVL_CLNT(+) AND
NVL(DPP.CODE_TYPC_CAPC_JURD,'*') = RCTCJ.CODE_TYPE_CAPC_JURD(+) AND
NVL(DPP.CODE_NAF_REV2,'*') = RBCNR.CODE_NAF_REV2_PROF(+) AND
NVL(DPP.IDNT_ADRS_LEGL_FISC,0) = DA.IDNT_ADRS(+) AND
DPP.CODE_ORGN_FINN = DA.CODE_ORGN_FINN(+);
My target table contains the fields named after each AS; I verified with the following query against the target table, and it runs fine:
select
CODE_INSEE_VILLE,
NOM_MART,
NOM_PATR,
PRNM,
NUMR_2EME_LIGN_ADRS_AFNR_V2,
NUMR_3EME_LIGN_ADRS_AFNR_V3,
NUMR_4EME_LIGN_ADRS_AFNR_V4,
NUMR_5EME_LIGN_ADRS_AFNR_V5
from S9958014.SACCEF_PERS
Puzzling
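Since both accounts can query the tables yet one of them ends up with string[0], it might also be worth checking whether the two accounts resolve the DCLI names to the same objects; a private synonym or a view could expose different column definitions. A sketch, run as each account, using table names taken from the failing columns:

```sql
-- Which objects does the current account see under these names?
SELECT owner, object_name, object_type
FROM   all_objects
WHERE  object_name IN ('DCLI_PROFIL_PARTIC', 'DCLI_ADRESSES');

-- Are there synonyms pointing those names somewhere else?
SELECT owner, synonym_name, table_owner, table_name
FROM   all_synonyms
WHERE  synonym_name IN ('DCLI_PROFIL_PARTIC', 'DCLI_ADRESSES');
```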
Tony
BI Consultant - Datastage
So now we come back to the internal schemas. See what the schema definitions coming out of your source look like in both the "good" and the "bad" scenarios.
Since the only difference is the Oracle user ID, perhaps the one user doesn't have sufficient access rights to read the DDL.
I'm going to stop posting on this thread, since I don't enjoy repeating myself again and again. Perhaps someone else can assist. If you don't look at the schemas to determine where the varchar(0) comes from, you won't make much progress.
Can both of your users do a "describe {table}" in TOAD?
I just realized that sometimes when jobs abort the schemas are not displayed. Try a test job with the output database stage replaced with a Peek, and see if the schemas from the source differ between the two user IDs. I'd wager that they do.
Yes. I just verified that both users can do a DESCRIBE on all the source tables as well as the target table.
I know it sounds as if I'm not doing something right. It's puzzling, too, since the log entries seem identical.
Sorry if I'm irritating you, but I'm kind of lost here.
Tony
BI Consultant - Datastage
<sigh> Have you done the comparison of the schemas displayed when you set $OSH_PRINT_SCHEMAS?
The columns getting a zero length are from the DCLI schema; did you try a DESCRIBE against these particular tables with the failing user?
Also try doing a table import against these tables. I have no idea whether the process DataStage follows for an import differs from how it determines a schema at run time, but it's something to try...
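For the DESCRIBE, the dictionary view gives more detail than a GUI display. A sketch, run as the failing user (the table name below is just one of the DCLI tables as an example):

```sql
-- Dictionary-level describe: CHAR_LENGTH vs DATA_LENGTH and CHAR_USED
-- ('B' = byte, 'C' = character semantics) can differ from what a tool shows,
-- and a zero or NULL length here would explain the connector's string[0].
SELECT column_name, data_type, data_length, char_length, char_used, nullable
FROM   all_tab_columns
WHERE  owner = 'DCLI'
AND    table_name = 'DCLI_ADRESSES'
ORDER  BY column_id;
```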