Search found 353 matches

by chandra.shekhar@tcs.com
Fri May 24, 2013 12:16 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Credibility of Checksum and Alternatives
Replies: 14
Views: 6312

@Jerome
If you are using the Checksum stage then you have the luxury of dealing with huge data volumes.
The Checksum stage produces a 32-character hexadecimal code (datatype Char(32)).
So the number of possible combinations is 16^32 = 3.4028236692093846346337460743177e+38.
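Spelling out the arithmetic: each of the 32 hexadecimal characters can take 16 values, so

Code:

16^32 = (2^4)^32 = 2^128 ≈ 3.4028 x 10^38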
by chandra.shekhar@tcs.com
Thu May 16, 2013 4:10 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Interview Question
Replies: 9
Views: 5307

The answer you gave is also correct: calculate the count in the Aggregator stage and then, in the Filter stage, use the constraints count > 1 and count = 1.
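As a sketch of that approach (assuming the Aggregator writes its count to a column named rec_count, a name used here only for illustration), the Filter stage would have two output links with Where Clauses along these lines:

Code:

Link 1 (unique rows):     rec_count = 1
Link 2 (duplicate rows):  rec_count > 1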
by chandra.shekhar@tcs.com
Wed May 15, 2013 11:41 pm
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: not able to convert string to time stamp
Replies: 4
Views: 2555

Use

Code:

StringToDate(DSLink4.CRE_DTTM,"%dd-%mmm-%yy")
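If the target column is actually a Timestamp (as the topic title suggests) and the incoming string also carries a time portion, the corresponding StringToTimestamp() function takes the same kind of format string; the time tokens below are only an assumption about the data:

Code:

StringToTimestamp(DSLink4.CRE_DTTM, "%dd-%mmm-%yy %hh:%nn:%ss")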
by chandra.shekhar@tcs.com
Wed May 15, 2013 2:50 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Junk Characters in datastage
Replies: 8
Views: 13316

You can use a double Convert function in the Transformer; it will remove all the junk characters, i.e. everything other than letters and digits.
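For reference, the usual double-Convert idiom looks like this (the column name DSLink.In_Col is just a placeholder): the inner Convert deletes every letter and digit from the value, leaving only the junk characters that actually occur in it, and the outer Convert then deletes exactly those junk characters from the original value.

Code:

Convert(Convert("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789", "", DSLink.In_Col), "", DSLink.In_Col)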
by chandra.shekhar@tcs.com
Wed May 15, 2013 12:10 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Logic Issue
Replies: 7
Views: 2367

Whatever you have described looks fine.
Instead of using the Sort and Remove Duplicates stages, you can use just a Transformer stage to achieve the same result.
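A rough sketch of the Transformer-only approach, assuming the input is hash-partitioned and sorted on the key column (called KeyCol here purely for illustration): compare each row's key with the previous row's key in stage variables and let only the first row of each group through.

Code:

svIsFirst = If In.KeyCol <> svPrevKey Then 1 Else 0
svPrevKey = In.KeyCol

Output link constraint: svIsFirst = 1

Keep svIsFirst above svPrevKey so it still sees the previous row's key, and give svPrevKey an initial value that cannot occur in the data so the very first row is kept as well.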
by chandra.shekhar@tcs.com
Mon May 06, 2013 12:33 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Sort Stage Question
Replies: 4
Views: 1651

If removing duplicates is your primary goal then you can use any of these:
Sort stage, Remove Duplicates, Transformer, Copy.
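In case it helps, the settings I have in mind for the first two (property names quoted from memory, so treat this as a sketch):

Code:

Sort stage:              Allow Duplicates = False
Remove Duplicates stage: Duplicate To Retain = First (or Last)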
by chandra.shekhar@tcs.com
Mon Apr 29, 2013 1:16 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: Populate all rows in a single column
Replies: 4
Views: 2805

If your source is a Sequential File then read all the rows into a single column, i.e. set the Final Delimiter to "none".
Then, in the Transformer, use the Convert function to remove the newline characters.

Code:

Convert(char(10) : char(13),'',<SRC_COL>)
by chandra.shekhar@tcs.com
Fri Apr 26, 2013 7:23 am
Forum: IBM® DataStage Enterprise Edition (Formerly Parallel Extender/PX)
Topic: how to get 30 days old date from current date
Replies: 5
Views: 4769

As suggested by Craig,
for DataStage 8.5+

Code:

DateOffsetByComponents(CurrentDate(),0,0,-30)
For earlier versions of DataStage

Code:

DateFromDaysSince(-30, CurrentDate())
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 1:19 pm
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

@Eric, I have fired these commands on the server through PuTTY and got the expected result in both cases, which means the commands are working absolutely fine. For your confirmation, the output of these commands is the list of the files which are to be read in the stage. The o/p of the commands when us...
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 7:53 am
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

@Roland
Yes, I am not using both of them together; I have tried them separately. :(

Just wanted to let you all know, as I mentioned earlier, that I am able to view the data from the stage, which means I have used the code correctly.
But while running the job... :cry:
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 7:39 am
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

find /data/finrecon/SBIRECON/`echo "#$JpBusinessDate#" | tr -d '-'`/GLCCCTA*
OR
ls /data/finrecon/SBIRECON/`echo "#$JpBusinessDate#" | tr -d '-'`/GLCCCTA*
The actual path is /data/finrecon/SBIRECON/20130419/GLCCCTA* and I am passing the Business Date as 2013-04-19. I have modified t...
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 7:11 am
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

@Craig/Eric, yes, I have put the command lines where you mentioned: "You should have Source => Read Method => File Pattern, and Source => File Pattern => the command line I gave you, adapted to your needs. It seems that you are using a filter command instead of a file pattern command." I haven't used the filter pr...
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 4:26 am
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

Eric,
I have used the commands you suggested earlier.
As I mentioned, while viewing the data from the Sequential File stage, the code works fine.
But when I run the job, it throws an error while reading the last row.
I have pasted the error too.
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 3:02 am
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

@Ray,
Well, to be honest, UNIX was never my forte :(
I have tried many ways to run the job, setting different properties within the Sequential File stage.
I even searched here on DSXchange, but it was of no use :(
by chandra.shekhar@tcs.com
Thu Apr 25, 2013 1:57 am
Forum: General
Topic: Sequential File path query
Replies: 34
Views: 12353

Hi, I tried both of your approaches. While viewing the data, your code works like a charm. :) But when I run the job, it aborts while fetching the last row with the below error: Source subproc: cat: 0652-050 Cannot open ls. Filter status 512; filter process failed: 2; Import error at record 100. My file ha...