DSXchange: DataStage and IBM Websphere Data Integration Forum
FranklinE
Posted: Tue Nov 13, 2018 12:56 pm

Rumu,

Your need to split the REDEFINES to a subordinate item (higher level number) is very odd. I have many successfully imported copybooks that have the PIC clause on the REDEFINES item.
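For instance, this pattern (a minimal sketch with illustrative names, not taken from your copybook) imports cleanly for me:

Code:
05  ACCT-KEY       PIC X(10).
05  ACCT-KEY-NUM   REDEFINES ACCT-KEY
                   PIC 9(10).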

The expected period error is not one I can help you with. Sometimes, if you used copy/paste for the line, retyping it manually will clear that up. Otherwise, it could just be gremlins in the system giving you a hard time. ;)

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596
Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872
rumu
Posted: Tue Nov 13, 2018 4:26 pm

Hi Frank,

I also found that resolution of adding a new level odd. However, would you be able to send me a sample COBOL copybook for reference?
This is the first time I am handling a COBOL copybook, so I am struggling a lot.

Thanks.

_________________
Rumu
IT Consultant
rumu
Posted: Wed Nov 14, 2018 10:10 am

Hi All,

The third copybook that I received today is really challenging.
It corresponds to a binary file that has 4 types of records, with variable length.
Each record type can have a maximum of 12 segments. Of those 12 segments, 10 are always present and the remaining 2 are optional. When an optional segment is missing, the following segments shift back to take its place.
Of the 12 segments, only 4 are being used, so I need to map only those 4. These 4 segments are always present, not optional, but one of them is placed after an optional segment. Hence it is possible that, when the optional segment is not present, the next segment takes the position of the optional segment.
The max record length is 16326. The first 3 segments are of length 3254, 380 and 200, and I need those. My last segment is of length 1200 and is last in the record. Hence I need a FILLER of length 11292 after the first 3 segments and before the last segment.
As each of these 4 segments goes to its own table, my architect has given me 4 copybooks, one per segment.
While reading the first segment, I used its 3254 bytes of fields and a FILLER for the rest to make it 16326. Similarly, while reading the copybook for the second segment, I used a FILLER of 3254, then the fields for the 380 bytes, followed by a FILLER, as sketched below.
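A sketch of that second-segment copybook (the item names are illustrative; only the lengths come from my layout):

Code:
01  SEG2-RECORD.
    05  FILLER        PIC X(3254).    *> skip segment 1
    05  SEG2-FIELDS   PIC X(380).     *> the fields I need
    05  FILLER        PIC X(12692).   *> pad out to 16326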
I am stuck with the last segment.
Had this record been fixed format, I could have used a FILLER of 15126 at the beginning and then the fields of length 1200. As this last segment sits after the optional segments, I could not figure out how to change the copybook layout for it.
Is there an option to read the file using a starting position, as one can in a programming language such as SAS?
Please help.

_________________
Rumu
IT Consultant
rumu
Posted: Wed Nov 14, 2018 12:29 pm

I found a resolution.
While reading the copybook for the segment that has the optional segments ahead of it, I read each optional segment as a FILLER of its length with an OCCURS DEPENDING ON clause, using counts from the data dictionary.

Code:
01  CHD-CLIENT-PRODUCT-SEG.
    05  FILLER                 PIC X(57).
    05  CHD-NO-HRSK-ACS-SEGS   PIC S9(4)V COMP-3.  *> count of optional 200-byte segments (0 or 1)
    05  FILLER                 PIC X(6).
    05  CHD-NO-SMALL-SEGS      PIC S9(4)V COMP-3.  *> count of optional 100-byte segments (0 to 4)
    05  FILLER                 PIC X(3567).
    05  FILLER                 OCCURS 0 TO 1 TIMES
            DEPENDING ON CHD-NO-HRSK-ACS-SEGS
                               PIC X(200).
    05  FILLER                 OCCURS 0 TO 4 TIMES
            DEPENDING ON CHD-NO-SMALL-SEGS
                               PIC X(100).
    05  FILLER                 PIC X(8).
    05  CHD-CURR-STRT-EVNT-DT  PIC S9(9)V COMP-3.


Here CHD-NO-HRSK-ACS-SEGS is a field from the first mandatory segment; it indicates whether the optional segment is present.

I could import the metadata successfully, and the Columns tab shows the OCCURS accurately.

Is this fine? As this is the first time I am changing a copybook, and my architect is not familiar with OCCURS DEPENDING ON, I am asking for your help to confirm that my approach is correct.
Thanks.

_________________
Rumu
IT Consultant
rumu
Posted: Wed Nov 14, 2018 6:47 pm

Frank, Craig,

Any comments on my approach? Is it correct?

_________________
Rumu
IT Consultant
FranklinE
Posted: Thu Nov 15, 2018 8:27 am

There's a level of detail in your situation, which you can't realistically provide here, and that prevents me from offering any specific feedback. If you have successful reads and processing, then you have the right solution.

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596
Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872
chulett
Posted: Thu Nov 15, 2018 8:38 am

Thinking we should start a pool as to when this hits five pages. Any takers? ;)

_________________
-craig

Space Available
rumu
Posted: Thu Nov 15, 2018 11:47 am

Hi Frank, Craig

I know this issue has been going on for a long time, and I am sorry about that.
I am really facing some challenges, and as there is no one on my project to give me any information, I depend on you, hence coming back again and again.
While reading the file, I can see some weird characters like
rr?rr? and ???- ?l%
 ?? C 5518422M3SJ2BR222


I used the default native-endian byte order, and NLS set to the project default UTF-8.
I tried ISO_8859-1:1987 as well, but no luck.
Can you please help? Thanks.

_________________
Rumu
IT Consultant
FranklinE
Posted: Mon Nov 19, 2018 7:04 am

Rumu,

Craig likes to tease. You don't need to apologize. He has me to answer mainframe data questions, and ever since I wrote an FAQ for him he's teased even more. Ahem. 8)

Your latest problem needs further definition. What fields are involved with your "weird" characters, and what are their PIC clauses? Showing the output is not going to be enough, because EBCDIC-to-ASCII conversion is often inconsistent, and not for any obvious reasons.

If you can, line up the data with the fields. Show the field, PIC clause and the hexadecimal value of each byte. I don't promise an answer, but I might be able to eliminate some causes.
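For example, in this made-up illustration of the format I mean (EBCDIC digits encode as X'F0' through X'F9'):

Code:
Field     PIC clause   Bytes (hex)
CUST-ID   PIC X(6)     F5 F5 F1 F8 F4 F2   <- EBCDIC text '551842'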

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596
Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872
chulett
Posted: Mon Nov 19, 2018 11:11 am

Right... apologies. The smiley was meant to convey the fact that I was kidding. Or teasing, it would seem. ;)

I would also be curious how the file is being transferred to the DataStage server. Some transfers can do automatic EBCDIC to ASCII conversions and those will destroy packed fields. FYI. And that's probably in the FAQ entries somewhere too!
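As an aside, a transfer made in image (binary) mode avoids that translation. With plain FTP, for example (the dataset name here is hypothetical):

Code:
ftp> binary
ftp> get 'PROD.CARD.MASTER' extract.bin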

_________________
-craig

Space Available
rumu
Posted: Mon Nov 19, 2018 11:43 am

Hi Frank, Craig,

The fields I am facing issues with are PIC X. I am writing the output to a sequential file with the record delimiter set to 'end'. But when I view the file in Unix using the head command to select a few records, I am not able to do so, as it shows all the records.
I think the issue is with the COBOL file definition: there are 11 types of records in the actual file, and the metadata file has the definition for only one type, hence the 'input buffer overrun' warnings.
I tried to use the Constraint tab in the CFF stage to select only the one record type, but that is not working, as the identifying filter is 6 bytes while the actual column is 10 bytes. In the Constraint tab there is no option to use a substring.
I want to filter out the one type of record first and then pass it to the CFF stage. Is that possible? Could I ask for the EBCDIC file to be converted to ASCII, so that Unix can read the file and filter out one record type?

I am getting no clue how to proceed. Is it possible to read the binary file with something other than the CFF stage and then take each field by offset position, as can be done in SAS?

_________________
Rumu
IT Consultant
rumu
Posted: Mon Nov 19, 2018 1:16 pm

OK, I tried using a Sequential File stage. I read the entire record as VarBinary, with the maximum field length set to that of the longest record type.
I used the following format properties in the input Sequential File stage:

Record delimiter = end
Field delimiter = none
Character set = EBCDIC
Byte order = native-endian
String: Export EBCDIC to ASCII
Decimal: Packed = yes

Inside the transformer, I read the first 6 characters to identify the record type, using the RawToString function; see the sketch below.
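For example, the stage-variable derivation looks like this (a sketch; lnk_in.REC_RAW stands for my actual input link and column):

Code:
svRecType = RawToString(lnk_in.REC_RAW)[1,6]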

In the output Sequential File stage I am using the following format:

String: Export EBCDIC to ASCII
Decimal: Packed = yes
Date is Julian.

I can read the first 6 bytes to identify the record type. My requirement is to read all the fields as character, as the first landing zone is a text file.

Can I read packed decimal data using RawToString?

_________________
Rumu
IT Consultant
FranklinE
Posted: Mon Nov 19, 2018 1:25 pm

Packed decimal is an exact format. It cannot be translated except by "unpacking" it according to the format.

The first problem is that it's not hexadecimal. Each half-byte contains a decimal value -- 0 to 9 -- with the final half-byte reserved for the sign -- C for positive, D for negative and F for unsigned.

Treating it as binary and using raw-to-string will give you unusable data. You must read it with a dedicated, properly defined column, or parse it as text and use position and length to put it into a properly defined column or stage variable. You can then use DecimalToDecimal to convert it to a numeric value, then DecimalToString to convert it to text.
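As a worked example (my own values, not from your file): a field defined as PIC S9(7)V99 COMP-3 occupies 5 bytes -- nine digit half-bytes plus the sign half-byte -- so the value +12345.67 is stored as:

Code:
*> PIC S9(7)V99 COMP-3 holding +12345.67
*> hex bytes:  00 12 34 56 7C
*> nibbles:    0 0 1 2 3 4 5 6 7 | C
*>             nine decimal digits | sign (C = positive)
05  WS-AMOUNT   PIC S9(7)V99 COMP-3.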

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596
Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872
rumu
Posted: Mon Nov 19, 2018 1:57 pm

Hi Frank,
I am not able to understand the following part:

"You must read it with a dedicated column properly defined, or parse it as text and use position and length"

I am reading the entire record into one column and then, in the transformer, using RawToString followed by a substring for position.
I understand that this will not work for packed decimal fields.
I tried to read the input column without any raw conversion function, but compilation fails with an error asking me to use RawToString.
There is no RawToNumber or RawToDecimal function. What should I use to extract the packed decimal fields?

_________________
Rumu
IT Consultant
FranklinE
Posted: Tue Nov 20, 2018 8:55 am

You need to parse the packed decimal fields before using raw-to-string. Use position and length to derive each one into a decimal column defined for its format. Raw-to-string is always going to fail for packed decimal.
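In copybook terms, that means giving the packed field its own properly typed item at the correct offset instead of leaving it buried inside a PIC X filler. A sketch, with illustrative names and lengths:

Code:
01  SEG-REC.
    05  FILLER          PIC X(3260).           *> bytes ahead of the packed field
    05  WS-PACKED-AMT   PIC S9(7)V99 COMP-3.   *> read as a decimal column, not as raw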

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596
Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872