Is it possible to generate the schema file using ds job
Hi All,
Is it possible to generate the schema file using DataStage 8.5?
My requirement is as follows.
We want to create delimited files using the Data Set or Oracle stage, which we want to reuse in a different project. To avoid the rework, is there any option to create the schema files from the Data Set or Oracle stage, so that I can use the RCP concept to load the data?
Thanks in Advance.
Thanks for your reply.
My question: do we have an option to create the schema file in 8.5 using the metadata?
EX :
Data set metadata
No , decimal 11
Name , varchar2(10)
Address, varchar2(50)
We want to create the schema file for the above metadata through a DataStage job, as below:
record
{record_delim='\n', record_length=fixed, delim=none}
(
No:nullable decimal[11,0] {width=11, null_field=' '};
Name:nullable string[10] {width=10, null_field=' '};
Address :nullable string[50] {width=50, null_field=' '};
)
Vino
You need a table containing the metadata of your sequential files (column names, datatypes, length, scale, nullability etc.).
Then you can run a query to produce a file in the format you need. You will need some UNION statements to produce the header lines and the closing brackets. You may need to experiment a bit to find out which datatypes need which options, and how you have to modify your statement so that the resulting schema files can actually read your files. So there will be a number of CASE statements as well.
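As a rough sketch of the same idea outside the database (this is not a DataStage feature, just an illustration): assuming the metadata has been extracted as (name, datatype, length, scale, nullable) rows, the schema text could be generated like this. The type mapping is an assumption and covers only decimal and varchar-style columns.

```python
# Hypothetical sketch: turn extracted column metadata into an
# Orchestrate-style schema definition. The datatype mapping below is an
# assumption and handles only decimal and varchar/varchar2/char.

def to_schema(columns):
    """columns: iterable of (name, datatype, length, scale, nullable)."""
    lines = [
        "record",
        "{record_delim='\\n', record_length=fixed, delim=none}",
        "(",
    ]
    for name, dtype, length, scale, nullable in columns:
        null = "nullable " if nullable else ""
        d = dtype.lower()
        if d == "decimal":
            typ = f"decimal[{length},{scale}]"
        elif d in ("varchar", "varchar2", "char"):
            typ = f"string[{length}]"
        else:
            raise ValueError(f"no mapping for datatype: {dtype}")
        # {{ and }} emit literal braces in an f-string
        lines.append(f"{name}:{null}{typ} {{width={length}, null_field=' '}};")
    lines.append(")")
    return "\n".join(lines)

cols = [
    ("No", "decimal", 11, 0, True),
    ("Name", "varchar2", 10, None, True),
    ("Address", "varchar2", 50, None, True),
]
print(to_schema(cols))
```

Run against the three columns above, this prints a schema in the shape the OP asked for; in practice you would extend the mapping with one branch (or CASE) per datatype in your metadata.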
Datasets do not have an option to attach a schema-file because the schema-definition is contained in the descriptor-file of the dataset.
You do not need schema files to run queries against databases either. Just enable Runtime Column Propagation on the stage and run a valid SQL statement like "select * from #db_schema#.#db_table#", or an SQL statement from a file. DataStage will use metadata from the system tables of the database to propagate the result downstream.
"It is not the lucky ones who are grateful.
It is the grateful ones who are happy." Francis Bacon
ray.wurlod wrote: Right click in the columns grid of the table definition you have already in DataStage and choose Save As to save as a schema file. ...
I can't find this feature on my table definitions; I am using a 9.1 parallel job. Am I doing something wrong? I want to create schema files from many database tables.
Hi Ray,
I also need to create a schema file systematically, as it is difficult to know all the Orchestrate keywords needed to create one manually.
As per this thread, I created a table in a Teradata database and imported it into the table definition repository. But when I looked for the option to create a schema file, I couldn't find any. Please let me know if that feature was removed in 11.3 or is present somewhere else.
Thanks
How is a manual cut and paste and a file edit an "Automatic" activity?
My 2 cents:
To automate the activity as the OP (back in 2012) originally asked, I would modify the job to output a one-row dataset. I would then write an after-job routine that uses the orchadmin command to dump the schema of that dataset. You could wrap the orchadmin command with some awk/sed and echo statements to produce a proper schema file, then orchadmin rm that dataset afterwards.
Something like this:
echo "record" > #seqfilepath##ds_filename#.schema
echo "{record_delim='\n', final_delim=end, delim=',', quote=double}" >> #seqfilepath##ds_filename#.schema
$DSHOME/../PXEngine/bin/orchadmin.exe describe -s #dataset_path##ds_filename# 2>/dev/null | sed 1,11d >> #seqfilepath##ds_filename#.schema
echo #seqfilepath##ds_filename#.schema
$DSHOME/../PXEngine/bin/orchadmin.exe rm #dataset_path##ds_filename#
where:
#seqfilepath# is the path where you want to drop the schema file.
#ds_filename# is the dataset filename.
Remember to ensure that your orchadmin.exe has the correct LD_LIBRARY_PATH or equivalent for your OS set.
Not sure if the above command is 100% syntactically correct, but you can figure it out.
The OP asked for an AUTOMATED mechanism that accounted for RCP being used in his/her job. That means the person wanted an arbitrary table to be passed to a job at runtime.
Automation is not clicky clicky on a gui.