DSXchange: DataStage and IBM Websphere Data Integration Forum
rumu
Participant



Joined: 06 Jun 2005
Posts: 282

Points: 2830

Posted: Tue Nov 20, 2018 10:19 am

OK, Frank. So the Transformer stage does not allow parsing the input binary string with a substring and throws a compilation error. I tried to use

Code:
DecimalToDecimal(inputlink.Rawcolumn[12,4])

but it throws a compilation error.

Now I am using a Column Import stage, where I read Rawcolumn as VarBinary and try to parse the fields as per the column layout.

If a column is defined as below:

CHD-NO-HRSK-ACS-SEGS PIC S9(4)V COMP-3

I am using field length 4, type Decimal, and setting the property Packed=Yes.

Do I need to set a default value? Sometimes it gives an error such as
import error and no default value, data {00 01 00} at offset 83.

Is it always required to set a default value?

Also, as the file has multiple record types, I placed a Transformer before the Column Import stage to select only the required record type.
I first use RawToString(inputlink.Rawcolumn)[1,5] and then check whether

RawToString(inputlink.Rawcolumn)[1,5]='01CCD'

since we need to consider only the 01CCD type of record. But the constraint is not being evaluated, and all the records are being pushed to the Column Import stage.

I used a Sequential File stage to view the output of RawToString(inputlink.Rawcolumn)[1,5] and could see the value 01CCD. Why is the Transformer constraint failing?
Is RawToString not enough for the conversion?
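As a side note, the record-type check being attempted can be sketched in Python (a hypothetical illustration, not DataStage code; the 75-byte record and the ASCII encoding are assumptions). The point is that string equality only succeeds if the decoded prefix matches exactly, so a different character set on the file (e.g. EBCDIC) makes an apparently identical value compare unequal:

```python
# Hypothetical sketch: check a 5-byte record-type prefix on a raw record.
# The '01CCD' record id comes from the post; the record contents and the
# encoding choice are assumptions -- mainframe files are often EBCDIC.

def is_record_type(raw: bytes, record_id: str, encoding: str = "ascii") -> bool:
    """Decode the first len(record_id) bytes and compare to record_id."""
    prefix = raw[:len(record_id)].decode(encoding, errors="replace")
    return prefix == record_id

record = b"01CCD" + b"\x00" * 70           # fake 75-byte record, ASCII prefix
print(is_record_type(record, "01CCD"))     # True for ASCII data

# The same bytes read as EBCDIC (code page 037) decode to something else
# entirely, which is one way a "visible" match can still fail an equality test.
print(is_record_type(record, "01CCD", encoding="cp037"))   # False
```

The takeaway is that the bytes on disk, not what a viewer displays, decide whether the constraint matches.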

_________________
Rumu
IT Consultant
FranklinE



Group memberships:
Premium Members

Joined: 25 Nov 2008
Posts: 739
Location: Malvern, PA
Points: 7018

Posted: Tue Nov 20, 2018 10:29 am

Rumu,

Read the using mainframe FAQ linked below for details on handling packed decimal.

Your column length for PIC S9(4)V COMP-3 is 3 bytes: counting the integer places, you would see 01 23 4F in the file.

As for parsing, you can define the file stage's input as a single Char column, even though it doesn't fit the formats of the individual columns. That will get you past your error.

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596 Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872
rumu

Posted: Tue Nov 20, 2018 11:02 am

Thanks Franklin.

I changed the column type to Char, but the job ran for more than 30 minutes and rejected all the records.

For packed decimal fields, the data dictionary always shows a field length of 2 for PIC S9(4)V COMP-3 fields. If I define the length as 3 in the output column of the Column Import stage, how does DataStage identify the starting position of the next field?

FranklinE

Posted: Tue Nov 20, 2018 11:17 am

The reference you found is incorrect.

COMP-3 storage length (the bytes it takes up on the file) is the number of integer places divided by two, plus one, rounding up for an odd number of places.

PIC S9(4) COMP-3: 4 divided by 2, add 1, storage length is 3.
Storage, hexadecimal representation: 1,234 (or 12.34; the decimal point is never stored) is 01 23 4C.

In the FAQ example, PIC S9(5) COMP-3 takes 4 bytes. 12,345 is 00 12 34 5C.
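To make the arithmetic concrete, here is a minimal Python sketch (not DataStage code) that applies the storage-length rule for the S9(4) case and unpacks the example bytes. The nibble layout is standard packed decimal: one digit per nibble, with the final nibble as the sign (C or F positive, D negative). Storage conventions can vary by shop, so always verify against the actual file.

```python
def comp3_storage_length(digits: int) -> int:
    """Bytes on file for PIC S9(n) COMP-3: n/2 digits per byte plus a sign nibble."""
    return digits // 2 + 1

def unpack_comp3(data: bytes) -> int:
    """Decode packed decimal: one digit per nibble, sign in the last nibble."""
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign = nibbles.pop()                 # last nibble is the sign
    value = 0
    for d in nibbles:
        value = value * 10 + d
    return -value if sign == 0x0D else value

print(comp3_storage_length(4))           # 3 bytes for PIC S9(4) COMP-3
print(unpack_comp3(b"\x01\x23\x4C"))     # 1234
print(unpack_comp3(b"\x00\x12\x34\x5C")) # 12345
print(unpack_comp3(b"\x01\x23\x4D"))     # -1234 (D sign nibble)
```

This is only a decoding illustration; in the job itself the Column Import or CFF stage does the unpacking once the lengths are right.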

Please start from the beginning with the entire record and "map" it to the copybook. Manually determine the starting position and length of every field. That's the only way to be sure that your [position,length] values in the derivation are accurate.

rumu

Posted: Tue Nov 20, 2018 11:50 am

Hi Frank,

I am sharing part of the data dictionary here (the starting part):

FROM TO FIELD LENGTH PICTURE
1 75 CHD-RECORD 75 X(75)
76 77 CHD-NO-SEG 2 S9(4)V COMP
78 79 CHD-ALP-SEG 2 S9(4)V COMP

When I read the above layout, I define the Column Import stage output columns as below:

CHD-RECORD Char 75
CHD-NO-SEG Decimal 3
CHD-ALP-SEG Decimal 3

After CHD-RECORD, when I read CHD-NO-SEG I will define the length as 3 and Packed=Yes, but in that case, where will CHD-ALP-SEG start: position 78 or 79?

I cannot handle packed decimal in the Transformer stage: if I read the input as Char, the job rejects all the records; if I use VarBinary, the Char fields are read correctly but the Decimal fields raise an error asking to use the RawToString function, and if I use that function it mangles the data.

So I switched to a Column Import stage. Please suggest how I should define the length of the packed decimal data in the Column Import output columns. If I go by the calculation, will it impact the starting positions of the subsequent fields?
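That bookkeeping can be sketched in Python (field names from the data dictionary above; the storage lengths are assumptions to be verified against the file): compute every start from accumulated storage lengths rather than trusting the FROM/TO columns. If CHD-NO-SEG really occupies 2 bytes on file (as FROM/TO 76-77 implies), CHD-ALP-SEG starts at 78; if it actually occupies 3 bytes, every later start shifts by one.

```python
# Sketch: derive [start, length] pairs (1-based, as in a Transformer
# derivation) by accumulating on-file storage lengths field by field.
# The lengths are assumptions: 75 for the Char field, and 2 bytes for
# each S9(4)V COMP field, matching the FROM/TO columns in the dictionary.

fields = [
    ("CHD-RECORD", 75),
    ("CHD-NO-SEG", 2),
    ("CHD-ALP-SEG", 2),
]

start = 1
layout = []
for name, length in fields:
    layout.append((name, start, length))
    start += length

for name, pos, length in layout:
    print(f"{name}: starts at {pos}, length {length}")
```

With these assumed lengths, the starts come out as 1, 76, and 78, agreeing with the FROM column; any change to one storage length ripples through every later position.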

rumu

Posted: Tue Nov 20, 2018 12:14 pm

You said "Please start from the beginning with the entire record, and map it to the copybook."

Shall I use a sequential file to read the entire record in one single field? Initially I was using that with a VarBinary field. While splitting the record in transformer as per the data dictionary, I faced an issue with packed decimal fields.

For Char fields I used RawToString(input_column)[start,length].

But this derivation is not applicable to packed decimal. Can you please tell me what exactly I need to use to split out packed decimal fields?

My copybook is not good: for this multiple-record-type file, the copybook has a definition for only one type of record, without defining the variable part within it.

chulett

Premium Poster


since January 2006

Group memberships:
Premium Members, Inner Circle, Server to Parallel Transition Group

Joined: 12 Nov 2002
Posts: 43011
Location: Denver, CO
Points: 221947

Posted: Tue Nov 20, 2018 3:08 pm

Maybe I missed it in all of this, but why are you not using the Complex Flat File stage for this?

_________________
-craig

The muffin man is seated at the table in the laboratory of the Utility Muffin Research Kitchen reaching for an oversized chrome spoon
rumu

Posted: Tue Nov 20, 2018 3:23 pm

Hi Craig,

The file I am reading has 11 types of records, and each record type has a variable length based on its segments.
The copybook that I received has the layout for only one type of record, and it does not account for the variable-length segments.
While reading the data using that copybook, I had to uncheck "multiple record types", as the copybook has only one 01 level.
While reading the data, as expected, lots of warnings were received saying "input buffer overrun". Ultimately only 6 records were processed, with all packed decimal fields as 00000; the expected number of records is much, much higher.
As nobody can rectify the copybook, I thought of using a Sequential File stage to read the binary data as a single column and then parsing it, but there I am also getting issues with packed decimal fields.
I am stuck. Any help is much appreciated. Thanks.

rumu

Posted: Wed Nov 21, 2018 9:41 am

Hi,

When I try to read the binary file in a Sequential File stage, I read the entire row into one VarBinary column and then use the RawToString function to convert it and filter the specific record type on the first field. I can view the record ID in the DataStage viewer and in UNIX, but the string equality is not working in the Transformer. I tried the Downcase function on that field and found some strange characters in the viewer and in UNIX.

If I use the single column as Char, then I am not able to read any fields. I used Conversion with MB and options to convert from Binary to Decimal, but that did not work. Can you please help me with how to read a binary file using the Sequential File stage: VarBinary or Char? And what functions should be used to convert string and packed decimal fields?

asorrell
Site Admin

Group memberships:
Premium Members, DSXchange Team, Inner Circle, Server to Parallel Transition Group

Joined: 04 Apr 2003
Posts: 1694
Location: Colleyville, Texas
Points: 23058

Posted: Wed Nov 21, 2018 10:04 am

When Franklin said "Map it to the copybook" I believe he meant FIX the copybook to make it reflect reality. Then use the Complex Flat File stage to read the data using the new copybook.

You will probably need assistance from a COBOL programmer on the mainframe to help you do that, but it can be done.

I have never had success loading a complex COBOL file without an actual working copybook and the CFF stage. I have had to modify the copybook a few times, but I can do that, as I was a COBOL programmer during college and afterwards. (Yes, that's how old I am!)

Even if the mainframe doesn't have the copybooks anymore, there are commercial third-party tools that can parse load modules and rebuild them. I've never used one, but I have heard of customers using them.

_________________
Andy Sorrell
Certified DataStage Consultant
IBM Analytics Champion 2009 - 2017
rumu

Posted: Wed Nov 21, 2018 12:41 pm

So without a copybook, I can't read a binary file? I saw one quite old thread (from 2008) that mentioned using a Sequential File stage and a Column Import stage, but it gave no further details.

rumu

Posted: Thu Nov 22, 2018 4:17 pm

Hi,

I finally got an actual copybook for the first type of record, which is the one we need to load; the file has 10 other types of records, but as those are not used, no copybooks are available for them.

I imported the copybook and configured the CFF stage.
This time the job ran with warnings, but the correct number of output records (i.e. the correct number of first-type records) was received. As the file has 10 other record types of varying length, warnings were generated: "Import warning for short records". The last step pending is to filter for the first record type before the CFF stage, so that the CFF stage processes only the first type of record according to the layout.
I used a filter command in the CFF stage:

Code:
grep -a '02CHK' <Filename>


02CHK is the record ID of the first type.
But this does not work. I also tried the Constraint tab in the output tab of the CFF stage, but that does not work either.

Could you kindly help me with this last step: filtering out the '02CHK' records from the input binary file before passing them to the CFF stage?

The input file name is MoneyMST.<datefield>

chulett

Posted: Thu Nov 22, 2018 7:02 pm

We would need more than "does not work" to help. Exactly what doesn't work about it? If you manually run that grep from the command line, do you get the output you expect, i.e. the proper number of lines with the values unchanged? I ask the latter because I'm not familiar with that "-a" (read binary as text) option, so I have no clue what the ramifications of using it are.

rumu

Posted: Fri Nov 23, 2018 6:16 am

Hi Craig,

I ran the command on the command line; it did not work, returning 0 rows.
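For what it's worth, one plausible explanation (an assumption, not something confirmed in the thread) for grep finding nothing is that the file is EBCDIC: a grep pattern typed as the ASCII literal '02CHK' can never match EBCDIC bytes. A quick Python check using code page 037, a common EBCDIC code page, shows the two byte sequences have nothing in common:

```python
# Assumption: the mainframe file is EBCDIC (code page 037).
# The ASCII and EBCDIC byte sequences for the same text differ entirely,
# so grep with an ASCII pattern cannot match an EBCDIC file.

record_id = "02CHK"
ascii_bytes = record_id.encode("ascii")
ebcdic_bytes = record_id.encode("cp037")

print(ascii_bytes.hex())             # 303243484b
print(ebcdic_bytes.hex())            # f0f2c3c8d2
print(ascii_bytes == ebcdic_bytes)   # False
```

If that is the situation here, the filter would have to match the EBCDIC byte values (or the file would have to be converted first) rather than the ASCII text.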

FranklinE

Posted: Fri Nov 23, 2018 7:35 am

Rumu,

Please confirm that the following is accurate:
Code:
FROM TO FIELD LENGTH PICTURE
 1 75 CHD-RECORD 75 X(75)
 76 77 CHD-NO-SEG 2 S9(4)V COMP
 78 79 CHD-ALP-SEG 2 S9(4)V COMP


If it is accurate, those are not packed decimal fields. They are binary numeric fields, with an SQL type in the integer or double category.

COMP is not packed decimal. Only COMP-3 is packed decimal.
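To make the distinction concrete, here is a small Python sketch (illustrative bytes, not DataStage code) decoding the same value 1234 from both representations; the byte values are assumptions consistent with big-endian mainframe storage:

```python
# COMP (binary): a PIC S9(4) COMP field is typically a 2-byte
# big-endian signed integer on the mainframe.
comp_bytes = b"\x04\xD2"                              # 0x04D2 = 1234
print(int.from_bytes(comp_bytes, "big", signed=True)) # 1234

# COMP-3 (packed decimal): 1234 as S9(4) COMP-3 takes 3 bytes,
# one digit per nibble plus a trailing sign nibble (C = positive).
packed = b"\x01\x23\x4C"
nibbles = "".join(f"{b:02X}" for b in packed)         # "01234C"
print(int(nibbles[:-1]))                              # 1234
```

This is why the field length and the stage's Packed setting both have to match the actual usage clause: the same logical value occupies different bytes under COMP and COMP-3.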
