Search found 60 matches

by Gaurav.Dave
Fri May 05, 2006 12:12 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: delete large number of records in DB2
Replies: 17
Views: 16447

RobinM wrote: Old thread I know, but...

Could you use REORG DISCARD in your environment?
I am not familiar with this. Can you give me some more information about it?

Thanks,
Gaurav
by Gaurav.Dave
Mon Mar 20, 2006 3:28 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: how to deal with multiple header in a same file
Replies: 6
Views: 3674

Well, the header record count will not be fixed; it will change. Here is some sample data from the file: 1Q05 2Q05 3Q05 4Q05 06TGT Client Team TELE OO Fed/Exce GMR PUI 804Top Valid Revenue 8.926070 41.575685 12.089471 10.110442 06TGT Client Team TELE OO Fed/Exce GMR GS 804Top Valid Revenu...
by Gaurav.Dave
Mon Mar 20, 2006 2:56 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: how to deal with multiple header in a same file
Replies: 6
Views: 3674

Thanks for your quick response. There will be a distinct header for each subset. For example, in a single file I will be getting header1 ---------------> "1Q05", "2Q05", "3Q05", "4Q05" underlying Data1------> header2----------------->"1Q06", "2Q0...
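The layout described in the thread (a repeating header row, each introducing its own block of data) can be split into per-header groups with a small script. A minimal sketch, assuming (this is not stated in the thread) that a header row is one whose first field looks like a quarter label such as "1Q05":

```python
# Split a file whose header rows repeat before each data block.
# Assumption: a header row starts with a quarter label like "1Q05" or "2Q06".
import re

HEADER_TOKEN = re.compile(r'^"?[1-4]Q\d{2}"?$')

def split_on_headers(lines):
    """Yield (header, data_rows) groups from an iterable of lines."""
    header, rows = None, []
    for line in lines:
        first = line.split(",")[0].strip()
        if HEADER_TOKEN.match(first):
            if header is not None:
                yield header, rows        # emit the previous block
            header, rows = line.rstrip("\n"), []
        elif header is not None:
            rows.append(line.rstrip("\n"))
    if header is not None:
        yield header, rows                # emit the final block

sample = [
    '"1Q05", "2Q05", "3Q05", "4Q05"\n',
    'data-row-1\n',
    'data-row-2\n',
    '"1Q06", "2Q06"\n',
    'data-row-3\n',
]
groups = list(split_on_headers(sample))   # two (header, rows) groups
```

Each group could then be written to its own file and processed by a separate job, which sidesteps the multiple-header problem entirely.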
by Gaurav.Dave
Wed Mar 01, 2006 9:04 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: packed decimal value, how to suppress zeros
Replies: 5
Views: 3915

Thanks Ray

Still I am not able to find a solution. I wonder if I need to edit the properties for that particular column (the decimal one)...

Regards,
Gaurav
by Gaurav.Dave
Wed Mar 01, 2006 3:16 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: packed decimal value, how to suppress zeros
Replies: 5
Views: 3915

packed decimal value, how to suppress zeros

Hello, I have a set of four DataStage jobs running in sequence to achieve the desired target values: (1) DB2 --> xfm --> Datasets, (2) Datasets --> join --> Datasets, (3) Datasets --> join --> xfm --> Sequential file, and then (4) the DB2 loader to load into the target DB2 database. When I check the Sequenti...
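For context on what "suppressing zeros" means here: a packed-decimal column such as DECIMAL(10,6) is typically written to a flat file zero-padded to its full precision, and the fix is to reformat the value on output. A minimal sketch of that reformatting step (outside DataStage; the sample value is made up):

```python
# Drop the leading-zero padding of a fixed-precision decimal field while
# keeping a fixed number of fraction digits (here, scale 6).
from decimal import Decimal

def unpadded(value, scale=6):
    """Render a decimal without leading zeros, keeping `scale` fraction digits."""
    q = Decimal(value).quantize(Decimal(1).scaleb(-scale))
    # normalize() would also strip trailing zeros; here the scale stays fixed
    return str(q)

padded = "0008.926070"        # how a zero-padded DECIMAL(10,6) might appear
print(unpadded(padded))       # -> 8.926070
```

In a DataStage job the equivalent would be an output format/edit on that decimal column rather than a script, but the transformation itself is the same.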
by Gaurav.Dave
Mon Feb 20, 2006 1:56 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: long varchar to varchar
Replies: 8
Views: 2513

Thanks Kenneth for your guidance..

Regards,
Gaurav
by Gaurav.Dave
Mon Feb 20, 2006 1:55 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: long varchar to varchar
Replies: 8
Views: 2513

How about creating a dummy column with a truncated & trimmed value from the LongVarChar into your target VarChar(315) in your SELECT statement directly? This would work for both Server and Parallel jobs. I have used the Trim function for that particular column in my x'former stage between source DB...
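The suggestion quoted above amounts to trimming and truncating the LONG VARCHAR before it reaches the VARCHAR(315) target, roughly TRIM(SUBSTR(col, 1, 315)) in SQL. A minimal sketch of the same logic (trim first, then truncate, so padding spaces do not eat into the 315 characters):

```python
# Derive a VARCHAR(315)-safe value from an over-long source string.
def to_varchar(value: str, width: int = 315) -> str:
    """Trim surrounding whitespace, then truncate to the target column width."""
    return value.strip()[:width]

long_text = "  " + "x" * 400 + "  "   # a 404-character padded source value
short = to_varchar(long_text)
print(len(short))                     # -> 315
```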
by Gaurav.Dave
Mon Feb 20, 2006 12:53 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: long varchar to varchar
Replies: 8
Views: 2513

How about creating a dummy column with a truncated & trimmed value from the LongVarChar into your target VarChar(315) in your SELECT statement directly? This would work for both Server and Parallel jobs. I have used the Trim function for that particular column in my x'former stage between source DB...
by Gaurav.Dave
Mon Feb 20, 2006 12:50 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: long varchar to varchar
Replies: 8
Views: 2513

kcbland wrote: Server or Parallel job, which stages (ODBC, CLI, PX)?

Kenneth,

I have tried both Server and Parallel (in sequential mode). I get the warning in my target stage, which is DB2 API.

Thanks,
Gaurav
by Gaurav.Dave
Mon Feb 20, 2006 11:59 am
Forum: IBM® Infosphere DataStage Server Edition
Topic: long varchar to varchar
Replies: 8
Views: 2513

long varchar to varchar

Hello, in my DataStage job I am extracting data from Japan MVS DB2 database tables and loading it into DB2 tables on a Unix box. It's a straight move. We don't want to export an .ixf file first and then load it into our target tables. Instead we want to load data directly from the MVS database to the target DB2 database t...
by Gaurav.Dave
Thu Feb 09, 2006 4:02 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: sorted input to Join
Replies: 7
Views: 2294

Well, with sequential files it behaves differently....

But when you use Datasets it's partition based; you need to key-partition and sort the data before you input it to your Join stage...

Gaurav Dave
by Gaurav.Dave
Fri Nov 04, 2005 9:13 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: converting character to number
Replies: 2
Views: 1552

Thanks Ray!!

Let me look at the Modify stage properties...

Gaurav
by Gaurav.Dave
Thu Nov 03, 2005 3:39 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: converting character to number
Replies: 2
Views: 1552

converting character to number

Hi... In my job I am doing some aggregation, part of which is to find the MAX value of an input field which is CHAR and contains the last name of the employee. The source is flat files and the output is also a flat file. I have designed my job in the PX environment. When I pass this column...
by Gaurav.Dave
Tue Oct 11, 2005 1:18 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: delete large number of records in DB2
Replies: 17
Views: 16447

You can use a stored procedure. Here's how I'd do it in Oracle; you can figure out the method for UDB:

declare
   max_rows integer := 1000;
begin
   loop
      delete from mytable
      where mydate between begindate and enddate
        and rownum <= max_rows;
      exit when sql%rowcount = 0;
      commit;
   end loop;
end;

Great thanks ...
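The same delete-in-batches pattern as the quoted PL/SQL can be sketched outside a stored procedure too: delete a bounded batch, commit, and repeat until no rows are affected, so no single transaction fills the logs. A minimal sketch using sqlite3 as a stand-in for DB2/UDB (table and column names are made up for the demo):

```python
# Batched delete: remove at most BATCH rows per transaction until none remain.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, mydate TEXT)")
conn.executemany(
    "INSERT INTO mytable VALUES (?, ?)",
    [(i, "2005-10-01") for i in range(2500)],
)
conn.commit()

BATCH = 1000
while True:
    cur = conn.execute(
        "DELETE FROM mytable WHERE rowid IN "
        "(SELECT rowid FROM mytable WHERE mydate = ? LIMIT ?)",
        ("2005-10-01", BATCH),
    )
    conn.commit()            # keep each unit of work (and its log usage) small
    if cur.rowcount == 0:
        break                # nothing left to delete

remaining = conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0]
print(remaining)             # -> 0
```

The DB2 equivalent would bound the batch with FETCH FIRST n ROWS ONLY rather than LIMIT/rownum, but the loop structure is the same.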
by Gaurav.Dave
Tue Oct 11, 2005 12:03 pm
Forum: IBM® Infosphere DataStage Server Edition
Topic: delete large number of records in DB2
Replies: 17
Views: 16447

UDB is a different creature than Oracle: as a partitioned database it spreads its data across different nodes in an attempt to distribute the data as evenly as possible. Partitioning by data ranges is not a common method, as data would group to specific nodes and therefore a single query could bottl...
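The point above can be illustrated with a toy model (this is not UDB internals; node counts and the hash function are invented for the demo): a hash scheme spreads a date-bounded query across all nodes, while a range-by-date scheme funnels it onto one node.

```python
# Toy comparison of hash partitioning vs. range-by-date partitioning.
NODES = 4
keys = [f"2005-{m:02d}-{d:02d}" for m in range(1, 13) for d in range(1, 29)]

def hash_node(k):
    """Toy hash partitioner: treat the date as a number, mod node count."""
    return int(k.replace("-", "")) % NODES

def range_node(k):
    """Toy range partitioner: each quarter of the year maps to one node."""
    month = int(k[5:7])
    return (month - 1) * NODES // 12

# A query bounded to one month touches every hash partition, but only a
# single range partition -- which is why range partitioning can bottleneck.
march = [k for k in keys if k.startswith("2005-03")]
print(len({hash_node(k) for k in march}))     # -> 4 (fans out to all nodes)
print(len({range_node(k) for k in march}))    # -> 1 (bottlenecks on one node)
```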