Search found 199 matches
- Mon Feb 15, 2010 7:25 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Import metadata to Fast Track
- Replies: 14
- Views: 10541
Ernie, here is how it looks after I create the shared table using the Shared Table Wizard in DataStage. We don't have MWB, so I am not sure how it looks in that workbench. DSR.HR_UNITS Source Table Definition: Data source type: ODBC Data source name: DSO Table/file name: DSR.HR_UNITS Description: C...
- Sat Feb 13, 2010 3:54 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Import metadata to Fast Track
- Replies: 14
- Views: 10541
Thanks Ernie, I was actually able to create shared tables from the table definitions in DataStage with no problem at all. But my problem was accessing those tables from Fast Track. In the Fast Track tool, there seem to be only two ways to import metadata: 1) Import from Metadata Repository 2) Source C...
- Fri Feb 12, 2010 12:00 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Import metadata to Fast Track
- Replies: 14
- Views: 10541
- Wed Feb 10, 2010 4:00 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Import metadata to Fast Track
- Replies: 14
- Views: 10541
Thanks for your response, Ernie. We are using the regular Orchestrate import option in DataStage, and all works fine in DataStage. Per your suggestion, I tried to use the Oracle Connector (11) in DataStage and got the exact same error message as I got in Fast Track. I suppose we can try the ODBC Conne...
- Wed Feb 10, 2010 8:56 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Import metadata to Fast Track
- Replies: 14
- Views: 10541
Import metadata to Fast Track
Hi, we just bought the Fast Track tool and I am trying to import the metadata (source and target tables) into it. But I am stuck on the Source configuration interface, unable to figure out which option to use to make it work. Basically, our application server is on Linux, and our data resides on an Oracle 11g datab...
- Tue Mar 24, 2009 12:42 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Missing Jobs from project
- Replies: 7
- Views: 2319
Well, our team was in a hurry to restore it ASAP, so I didn't have enough time to work on this repair; I would love to know more about how to do this, though. What we finally ended up doing was restoring from a system backup, where we lost a week's worth of development (not a whole lot in our case, since it w...
- Mon Mar 23, 2009 3:14 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Missing Jobs from project
- Replies: 7
- Views: 2319
- Mon Mar 23, 2009 3:11 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Missing Jobs from project
- Replies: 7
- Views: 2319
- Mon Mar 23, 2009 2:55 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Missing Jobs from project
- Replies: 7
- Views: 2319
- Mon Mar 23, 2009 2:40 pm
- Forum: IBM® Infosphere DataStage Server Edition
- Topic: Missing Jobs from project
- Replies: 7
- Views: 2319
Missing Jobs from project
So, we don't know how it happened, but all jobs in a project are now missing. When we log in to Designer or Director, we just don't see any jobs there. I did a LIST DS_JOBS and all of them show up, so I am kind of hopeful that the error is related to some dictionary, repository listing, or index issue....
- Wed Aug 13, 2008 12:43 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Commit frequency when using Oracle Load/Append
- Replies: 15
- Views: 11410
Commit frequency when using Oracle Load/Append
Hi, I have a job that reads about 92 million records from a source table and loads them into an Oracle table. The target stage in this case uses the Load/Append option. I did not set DIRECT=TRUE. This job takes forever, and I am trying to figure out if there is a way to speed up this load. Th...
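For anyone hitting the same slowness: a direct-path load is usually the first thing to try. The sketch below shows how the Oracle Enterprise stage is typically configured for this; the exact property names vary by stage version, so verify them against your installation before relying on them.

```text
Oracle Enterprise stage (target) -- sketch of typical direct-path settings:
  Write Method : Load
  Write Mode   : Append
  Options      : DIRECT=TRUE, PARALLEL=TRUE   # passed through to sqlldr
```

With a conventional-path load, every row goes through the SQL engine and redo logging; direct path writes formatted blocks above the high-water mark, which is generally much faster for bulk appends of this size.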
- Mon Jul 21, 2008 12:56 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: strange compiling problem in transformer
- Replies: 8
- Views: 5160
- Mon Jul 21, 2008 12:30 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Surrogate Key generator using DB SEQ option
- Replies: 6
- Views: 5017
Re: Surrogate Key generator using DB SEQ option
[quote="bensonian"]
Source Name = #db_server#.#db_name#.#schema#.<table_name>
[/quote]
The Source Name here should not be the <table_name> but the Oracle sequence name. The Oracle sequence must already exist in the database.
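As a concrete illustration (the schema and sequence names here are hypothetical), the sequence referenced by the Source Name would be created in the database up front:

```sql
-- Hypothetical names: schema HR, sequence EMP_KEY_SEQ
CREATE SEQUENCE HR.EMP_KEY_SEQ
  START WITH 1
  INCREMENT BY 1
  CACHE 100;   -- caching reduces sequence round trips under parallel load
```

The Source Name would then point at the sequence, e.g. #db_server#.#db_name#.HR.EMP_KEY_SEQ, rather than at a table name.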
- Mon Jul 21, 2008 12:26 pm
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Describe failed because of a missing column, Oracle stage
- Replies: 5
- Views: 6180
I had a similar problem. Though it doesn't quite make sense, I added a Modify stage between the Transformer and the Oracle load, with just a dummy specification like <in_col_name> = <out_col_name>, and it works for me, no more errors. For example, in your case, if the column name coming from the sequential file is FI...
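For anyone trying this workaround: the dummy Modify specification is just a pass-through mapping of the output column to the input column. A minimal sketch, using a hypothetical column name CUST_ID:

```text
Modify stage -> Specification property:
  CUST_ID = CUST_ID
```

Nothing is actually converted; inserting the Modify stage simply forces an explicit schema mapping before the Oracle stage, which in practice clears the "describe failed" column-matching error.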
- Tue Jul 15, 2008 11:06 am
- Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
- Topic: Oracle Enterprise - Reading in parallel
- Replies: 9
- Views: 5641
Yes hello105, you are right. When I set the partition table option while using a user-defined SQL query that joins multiple tables, I end up with duplicates. The data read becomes faster, but if we add a Remove Duplicates stage to avoid them, it slows down again, negating the speed that I achieve...
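One way to keep the parallel read without the duplicates is to partition the user-defined query manually, so each reader gets a disjoint slice of the join result. This is only a sketch under the assumption that the join carries a numeric key column; the table and column names below are hypothetical:

```sql
-- Reader 0 of 4; the other readers use MOD(...) = 1, 2, 3.
SELECT t1.id, t1.val, t2.descr
FROM   t1
JOIN   t2 ON t1.id = t2.id
WHERE  MOD(t1.id, 4) = 0
```

Because each row satisfies exactly one MOD predicate, the four readers together return every joined row exactly once, so no Remove Duplicates stage is needed afterwards.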