All,
Is there any way to write the Orchestrate schema of any stage's output to a physical file?
I am using Column Export to group all columns coming in from the input table into a single field. I want to reuse this job for various tables, so I will use the schema file option in Column Export to group the fields, and I will change the schema file at runtime.
Regards,
-Vikas
Write Orchestrate schema of any stage output to physical file
Welcome aboard. :D
There are several environment variables that can cause this to happen, either directly or indirectly. Most of them dump the schema into the job log, but you can use Print from Director to get it into a file from there.
One or two (such as DS_PX_DEBUG) can direct the information to a directory on the file system.
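As a sketch of the environment-variable route (the project and job names here are made up, so adjust to your install), you could pass the dump variables as job parameters when invoking the job with dsjob:

```shell
#!/bin/sh
# Hedged sketch (MyProject and MyExportJob are hypothetical names):
# $APT_DUMP_SCHEMA asks the PX framework to write each operator's
# Orchestrate schema to the job log, from which Director's Print can
# save it to a file; DS_PX_DEBUG can direct debug output to a
# directory on the file system instead.
PROJECT=MyProject
JOB=MyExportJob
CMD="dsjob -run -param \$APT_DUMP_SCHEMA=1 -param \$DS_PX_DEBUG=1 $PROJECT $JOB"
# Shown rather than executed here, since it needs a live DataStage server:
echo "$CMD"
```

The variables must already be defined (or promoted to job parameters) in the job for the `-param` overrides to take effect.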
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Not sure if this is runtime enough for you, but:
1. Save a Table Definition from the Columns tab.
2. Edit the Table Definition to suit.
3. Go to the Parallel tab and right-click to save it as a *.osh schema file.
4. Put these where the job can access them. In my case that meant FTPing them to UNIX.
5. Run the job with a schema parameter so any particular schema drives the write to the Sequential File stage from the table.
So in your case, I would use the above to save schemas in each of the required table formats, get them out to where the job can use them, and call the job with a table/schema parameter pair. Then go back, re-compile the job with RCP (runtime column propagation) turned on, and re-run. VOILA!
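For anyone who hasn't seen one, here is what a saved *.osh schema file looks like. This is a hedged sketch: the field names and types are made up for illustration, and a real file would come from step 3 above rather than being hand-written.

```shell
#!/bin/sh
# Hedged sketch: a hypothetical customer table rendered in Orchestrate
# record schema syntax. In practice you would save this from Designer;
# writing it by hand here just shows the format.
cat > customer.osh <<'EOF'
record
(
  customer_id: int32;
  customer_name: string[max=50];
  balance: decimal[10,2];
)
EOF
# The job's schema parameter would then be set to this file's path.
cat customer.osh
```

With RCP on and the schema file supplied as a parameter, the same compiled job can read and export any table whose schema file you drop in that directory.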