COBOL file without copybook
Hi All,
Our requirement is to migrate a process to DataStage: the old process reads COBOL binary files through SAS code and produces CSV files. DataStage will replace the SAS code, reading the binary files and converting them into CSV files.
The main issue is that the vendor is not willing to share the COBOL copybook; instead they shared the SAS code, which we must interpret in order to reconstruct the copybook. The SAS code reads only 25% of the columns.
Could you please suggest if any alternative method exists?
Thanks for your help.
Rumu
IT Consultant
Sadly, probably not, especially if you only have information for 25% of the fields. I guess the missing bits can just be treated as FILLER.
IMHO, it's a bit ridiculous they won't share the copybook, boggles my mind a bit and will make it very difficult to do a proper job of processing their files.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Well... yes... as long as they include FILLER for all the secret stuff. The byte length for the entire record and the byte layout (position, type, size) for the fields you're allowed to know about must still be correct. Hopefully their architect is aware of this. Normally I wouldn't question that but this instance seems a bit special.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yup. And PIC X(nnn) FILLER should work just fine, one for each section/collection of Contiguous Secret Stuff.
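If it helps to sanity-check that arithmetic, here's a quick Python sketch (not DataStage, and the field names are hypothetical) that collapses every gap between the fields you're allowed to know about into a single FILLER of the right size:

```python
def filler_plan(kept_fields, record_len):
    """Given 1-based (name, start, length) positions of the known fields,
    return a copybook-style plan where every gap between them becomes a
    single FILLER entry of the appropriate total length."""
    plan, pos = [], 1
    for name, start, length in sorted(kept_fields, key=lambda f: f[1]):
        if start > pos:                          # secret stuff before this field
            plan.append(("FILLER", pos, start - pos))
        plan.append((name, start, length))
        pos = start + length
    if pos <= record_len:                        # trailing secret stuff
        plan.append(("FILLER", pos, record_len - pos + 1))
    return plan

# Two known fields in a (hypothetical) 3000-byte record:
for name, start, length in filler_plan(
        [("FIELD-A", 231, 10), ("FIELD-B", 340, 10)], 3000):
    print(name, start, length)
```

The lengths in the plan always sum to the full record length, which is the property that has to hold for the stage to stay in step with the file.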
PS: No need to quote me all the time, just use the lovely Reply to topic option at the top/bottom of each page rather than "Reply with quote". Save me from cleaning up behind you.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Thanks Craig.
So, if the source COBOL file has 4 segments, each segment is 3000 bytes, and we are interested in 1000 bytes of each segment, then the remaining 2000 bytes should be FILLER PIC X(2000) in each segment.
You mentioned contiguous secret stuff; does that mean the unused columns have to be adjacent? Our columns of interest are not contiguous but scattered. For example, one column of width 10 is taken at position 231, and the next column is read from position 340. Is that OK?
Rumu
IT Consultant
The short answer is yes. The medium answer is let your architect put the copybook together for you and see if it works. If they're worth their salt, they'll handle all that and it will work. Or it won't.
On the "contiguous" question, all I meant was any group of skipped fields between the ones you need to know about - no matter if that's one or one hundred fields - can be represented by a single FILLER field of the appropriate total length. You don't care how many fields are technically in there since they are Verboten! to you. So for your specific example:
FieldA starts at 231 and is 10 characters (ends at 240)
FILLER starts at 241 for 99 characters (ends at 339)
FieldB starts at 340
Lather, rinse, repeat for all the stuff you're not allowed to parse out. Total record length should match reality. Of course, it's all still there for the viewing but DataStage can be told to ignore them.
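That layout can be sketched in a few lines of Python (COBOL positions are 1-based, so position 231 becomes offset 230 in a byte string; FieldB's length of 10 is just an assumption for the demo):

```python
# Dummy 349-byte record: FieldA at positions 231-240, a 99-byte
# secret region at 241-339, FieldB at 340-349 (assumed length).
record = b" " * 230 + b"A" * 10 + b"." * 99 + b"B" * 10

field_a = record[230:240]   # position 231, length 10 (1-based -> 0-based)
field_b = record[339:349]   # position 340, assumed length 10

print(field_a)   # b'AAAAAAAAAA'
print(field_b)   # b'BBBBBBBBBB'
```

The secret region in between is simply never sliced out, which is exactly what a FILLER field does in the copybook.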
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hi Craig,
I received the first cut of the copybook today. The source file has 5 segments, of which only 4 are of interest to us. He created one copybook per segment; I won't go into more detail on that. The first question that arises is: can we use 4 different copybooks to read the same file, or would it be better to create one copybook with a REDEFINES clause?
Since the source file has multiple segments, i.e. the COBOL data file contains multiple record types, shouldn't each record type be at the 01 level? I am not finding this in our copybook.
Here is a snapshot of the copybook. Not sure what "5 RDT-HEAD" is; similarly he has defined RDT-BODY as well.
I think instead of "1 RDT-DATA" it should be "01 RDT-DATA".
Code:
1 RDT-DATA.
   5 RDT-HEAD.
      10 RDT-REC-CODE-BYTES.
         15 RDT-REC-CODE              PIC X(2).
      10 RDT-REC-CODE-BYTES-DETAIL REDEFINES RDT-REC-CODE-BYTES.
         15 RDT-REC-CODE-KEY          PIC X.
         15 RDT-REC-TYPE-CONTROL      PIC X.
      10 RDT-NO-POST-REASON           PIC S9(3)V COMP-3.
      10 RDT-SC-1                     PIC X.
      10 RDT-SC-2                     PIC X.
      10 RDT-SC-3                     PIC X.
      10 RDT-SC-4                     PIC X.
      10 RDT-SC-5                     PIC X.
      10 RDT-SC-6                     PIC X.
      10 RDT-SC-7                     PIC X.
      10 RDT-SC-8                     PIC X.
      10 RDT-CHD-SYSTEM-NO            PIC X(4).
      10 RDT-CHD-PRIN-BANK            PIC X(4).
      10 RDT-CHD-AGENT-BANK           PIC X(4).
      10 RDT-CHD-ACCOUNT-NUMBER       PIC X(16).
      10 RDT-TRANSACTION-CODE         PIC S9(3)V COMP-3.
      10 RDT-MRCH-SYSTEM-NO           PIC X(4).
      10 RDT-MRCH-PRIN-BANK           PIC X(4).
      10 RDT-MRCH-AGENT-NO            PIC X(4).
      10 RDT-MRCH-ACCOUNT-NUMBER      PIC X(16).
      10 RDT-DR-MERCHANT-NUMBER REDEFINES RDT-MRCH-ACCOUNT-NUMBER.
      10 RDT-CHD-EXT-STATUS           PIC X.
      10 RDT-CHD-INT-STATUS           PIC X.
      10 RDT-TANI                     PIC X.
      10 RDT-TRANSFER-FLAG            PIC X.
      10 RDT-ITEM-ASSES-CODE-NUM      PIC 9.
      10 RDT-MRCH-SIC-CODE            PIC S9(5)V COMP-3.
      10 RDT-TRANSACTION-DATE         PIC S9(7)V COMP-3.
      10 RDT-DR-DATE-OF-ITEM REDEFINES RDT-TRANSACTION-DATE.
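Side note: several of those fields are COMP-3 (packed decimal), so they won't be readable as plain text if you ever inspect the raw bytes outside DataStage. As a rough illustration of the encoding (my own helper, not a DataStage or SAS function), a minimal Python decoder looks like:

```python
def unpack_comp3(data: bytes) -> int:
    """Decode an IBM packed-decimal (COMP-3) field: two digits per byte,
    with the low nibble of the last byte holding the sign (0xD = negative)."""
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)      # high nibble
        nibbles.append(b & 0x0F)    # low nibble
    sign = nibbles.pop()            # last nibble is the sign, not a digit
    value = 0
    for d in nibbles:
        value = value * 10 + d
    return -value if sign == 0x0D else value

# PIC S9(3)V COMP-3 occupies 2 bytes: 3 digits plus a sign nibble.
print(unpack_comp3(b"\x12\x3c"))     # +123
print(unpack_comp3(b"\x00\x45\x6d")) # -456
```

This ignores the implied decimal point (the V), which for S9(3)V is zero decimal places anyway; the CFF stage handles all of this for you once the copybook is right.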
Rumu
IT Consultant
I'm going to step away from this conversation and let the people here who actually work with stuff like this day after day take over. Been WAY too long.
And from what I remember, DataStage and the CFF stage do not support REDEFINES but that may no longer be true for all I know. And you may not be in a "redefines" scenario with your pieces but perhaps a "concatenate" one. Only you and your architect would know right now, and some of those questions would need to go to him/them.
To the best of my knowledge you are going to need one redefines-less copybook (even if you have to manually put the pieces together) to import as metadata for the stage.
Edited to add: Okay, never mind on the REDEFINES restriction, looks like they are supported according to the docs:
https://www.ibm.com/support/knowledgece ... Stage.html
-craig
"You can never have too many knives" -- Logan Nine Fingers