DSXchange: DataStage and IBM Websphere Data Integration Forum
rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Tue Oct 30, 2018 7:19 am

Maybe a stupid question: the copybook's extension is .docx. Can DataStage read that metadata file, or does it have to be changed to .cfd?

Also, the metadata file starts with levels 1 and 5; I'm guessing these have to be changed to 01 and 05?

_________________
Rumu
IT Consultant
FranklinE



Group memberships:
Premium Members

Joined: 25 Nov 2008
Posts: 706
Location: Malvern, PA
Points: 6714

Posted: Tue Oct 30, 2018 7:41 am

Rumu,

There's a link to the FAQ for using mainframe files at the bottom of my post.

To answer your last question first: the import wizard for COBOL FD looks for .cfd file names. I've never tried using a formatted file like a Word doc, but the wizard does a plain-text parse, so you are well-advised to convert the .docx to a plain text file, edit it to prepare it for import, and save it as .cfd.

From your description, you have a multiple record type file. The record type is identified by the first two bytes of each record. CFF can handle that with one copybook that shows each record type's layout under a redefines of the record from the third byte on. The CFF FAQ can help with that as well.

This is a learn by doing task. Expect there to be mistakes and glitches. Post if you get frustrated and I might be able to help.

_________________
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: http://www.dsxchange.com/viewtopic.php?t=143596 Using CFF FAQ: http://www.dsxchange.com/viewtopic.php?t=157872
chulett

Premium Poster


since January 2006

Group memberships:
Premium Members, Inner Circle, Server to Parallel Transition Group

Joined: 12 Nov 2002
Posts: 42792
Location: Denver, CO
Points: 220559

Posted: Tue Oct 30, 2018 9:19 am

Yup, plain text only, formatting free. Well, other than indenting, I assume. And thanks Franklin!

_________________
-craig

"I don't mind you comin' here and wastin' all my time time"
rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Tue Oct 30, 2018 10:49 am

Thanks Frank for your help. For the COBOL copybook, I saved the .docx file as plain text. Should I rename it to a .cfd extension?
I changed level 1 to 01 and 5 to 05. While converting to plain text it asked about CRLF, but I guess in COBOL a trailing period is the record delimiter in the copybook file.

FranklinE



Group memberships:
Premium Members

Joined: 25 Nov 2008
Posts: 706
Location: Malvern, PA
Points: 6714

Posted: Tue Oct 30, 2018 11:00 am

I'm not sure what you're asking, so let me know if I missed what you need.

The copybook text file can have any extension. The import wizard for COBOL FD looks for copybook files with the default extension .cfd, but you can point it at any other extension.

The copybook is plain text that follows COBOL syntax requirements. The first byte must be in position 8 or higher. Good practice is to end every line with a period, but it's not required. Be very careful with that. Even plain text has control characters in it, but they are valid to the O/S and have no bearing on the copybook import.

The wizard is COBOL-smart: it will parse each line correctly as long as it complies with the syntax requirements.
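The column-8 and trailing-period rules above can be pre-checked before import with a small script. This is just an illustrative helper I might write myself, not part of DataStage, sketched in Python under the assumption that the copybook has already been saved as plain text:

```python
# Sanity-check a copybook saved as plain text before running the
# COBOL FD import wizard: code must start in column 8 or later, and
# (good practice, per the post above) each line should end in a period.
# This is an illustrative helper, not a DataStage utility.

def check_copybook_lines(lines):
    """Yield (line_no, problem) for lines that break the column-8 rule
    or are missing a trailing period."""
    for n, line in enumerate(lines, start=1):
        stripped = line.rstrip()
        if not stripped:
            continue  # blank lines are fine
        indent = len(line) - len(line.lstrip())
        if indent < 7:  # column 8 means at least 7 leading spaces
            yield n, "starts before column 8"
        elif not stripped.endswith("."):
            yield n, "missing trailing period"

# Hypothetical sample lines, two good and two bad:
sample = [
    "       01 RDT-DATA.",
    "          05 RDT-HEAD.",
    "   05 TOO-FAR-LEFT.",      # starts in column 4
    "          05 NO-PERIOD",   # no trailing period
]
problems = list(check_copybook_lines(sample))
```

Running the checker over a converted .docx would flag exactly the lines the wizard is likely to choke on.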

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Tue Oct 30, 2018 11:01 am

Sorry Craig, I missed your response. So a plain text file, i.e. .txt, is fine as a copybook?

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Tue Oct 30, 2018 11:39 am

Frank, Craig,

Let me describe the COBOL source file here:

1) The file is fixed-width, with a record length of 5636 for each record type.
2) There are four record types.
3) For each record type, the first 82 bytes are the same; field lengths and positions change from position 83. For one record type the field at @83 is 9 bytes long, whereas for a second it is 11 bytes long. From @584 onward, the field lengths and positions are the same again.

So what our architect has done is create a common group for the first 82 bytes, then from @83 to @583 a record body group for each of the 4 types using a REDEFINES clause, and then one common trailer group from @584.

Code:
1 RDT-DATA.

    5 RDT-HEAD.
        10 RDT-REC-CODE-BYTES.
            15 RDT-REC-CODE PIC X(2).
        10 RDT-REC-CODE-BYTES-DETAIL REDEFINES RDT-REC-CODE-BYTES.
            15 RDT-REC-CODE-KEY PIC X.
            15 RDT-REC-TYPE-CONTROL PIC X.
        10 RDT-NO-POST-REASON PIC S9(3)V COMP-3.
        10 RDT-SC-1 PIC X.
        10 RDT-SC-2 PIC X.
        10 RDT-SC-3 PIC X.
        10 RDT-SC-4 PIC X.
        10 RDT-SC-5 PIC X.
        10 RDT-SC-6 PIC X.
        10 RDT-SC-7 PIC X.
        10 RDT-SC-8 PIC X.
        10 RDT-CHD-SYSTEM-NO PIC X(4).
        10 RDT-CHD-PRIN-BANK PIC X(4).
        10 RDT-CHD-AGENT-BANK PIC X(4).
        10 RDT-CHD-ACCOUNT-NUMBER PIC X(16).
        10 RDT-TRANSACTION-CODE PIC S9(3)V COMP-3.
        10 RDT-MRCH-SYSTEM-NO PIC X(4).
        10 RDT-MRCH-PRIN-BANK PIC X(4).
        10 RDT-MRCH-AGENT-NO PIC X(4).
        10 RDT-MRCH-ACCOUNT-NUMBER PIC X(16).
        10 RDT-DR-MERCHANT-NUMBER REDEFINES RDT-MRCH-ACCOUNT-NUMBER.
        10 RDT-CHD-EXT-STATUS PIC X.
        10 RDT-CHD-INT-STATUS PIC X.
        10 RDT-TANI PIC X.
        10 RDT-TRANSFER-FLAG PIC X.
        10 RDT-ITEM-ASSES-CODE-NUM PIC 9.
        10 RDT-MRCH-SIC-CODE PIC S9(5)V COMP-3.
        10 RDT-TRANSACTION-DATE PIC S9(7)V COMP-3.
        10 RDT-DR-DATE-OF-ITEM REDEFINES RDT-TRANSACTION-DATE.

    5 RDT-BODY-1.
        10 RDT-BATCH-TYPE PIC S9V COMP-3.
        10 RDT-JULIAN-POST-DATE PIC S9(5)V COMP-3.
        10 RDT-ENTRY-TYPE PIC X.
        10 RDT-ENTRY-SYS-4 PIC X(4).
        10 RDT-ENTRY-SYS-2 PIC X(2).
        10 RDT-ENTRY-DATE PIC X(2).
        10 RDT-ENTRY-2 PIC X(2).
        10 RDT-ENTRY-LAST-6 PIC X(6).
        10 RDT-BKDT-ADDITIONAL-INT PIC S9(15)V99 COMP-3.
        10 RDT-BACKDATED-TRAN-FLAG PIC X.
        10 RDT-CBRN-TRAN-ID PIC S9(4)V COMP-3.
        10 RDT-TRANSACTION-AMOUNT PIC S9(15)V99 COMP-3.
        10 RDT-AUDIT-TRAIL-DATE PIC S9(7)V COMP-3.

    5 RDTCO-BODY-1 REDEFINES RDT-BODY-1.
        10 RDT-DCX-BATCH-TYPE PIC S9V COMP-3.
        10 RDT-DCX-JULIAN-DATE PIC S9(5)V COMP-3.
        10 RDT-DCX-TYPE PIC X.
        10 RDT-DCX-SYS-4 PIC X(4).
        10 RDT-DCX-SYS-2 PIC X(2).
        10 RDT-DCX-DATE PIC X(2).
        10 RDT-DCX-2 PIC X(2).
        10 RDT-DCX-LAST-6 PIC X(6).
        10 RDT-DCX-ADJ-SALE-FEE-AM PIC S9(15)V99 COMP-3.
        10 RDT-BACKDATED-TRAN-FLAG PIC X.
        10 RDT-CBRN-TRAN-ID PIC S9(4)V COMP-3.
        10 RDT-DCX-MON-TRAN-JOURNAL-AMT PIC S9(15)V99 COMP-3.
        10 RDT-DCX-AUDIT-TRAIL-DATE PIC S9(7)V COMP-3.
        10 RDT-DCX-PRIN-TOTAL PIC S9(13)V99 COMP-3.
        10 RDT-DCX-PRIN-CASH PIC S9(13)V99 COMP-3.
        10 RDT-DCX-PRIN-MRCH PIC S9(13)V99 COMP-3.
        10 RDT-DCX-STMT-FIN-CHG-OFF PIC S9(11)V99 COMP-3.
        10 RDT-DCX-MTJ-FIN-CHG-OFF PIC S9(11)V99 COMP-3.

This is part of the code; RDT and RDTCO are 2 of the 4 record types.
Is this a correct way to define the metadata? Or should he create 4 different copybooks?
One question: for the first record type, at @83 the column length and name are different from those for record type 2. This can be handled with a REDEFINES clause, right? Also, in the view tab of the source file, will all the fields be visible, i.e. columns for type 1 and columns for type 2? The output of the CFF stage should have all the columns so that the proper columns can be selected for the respective output links.

FranklinE



Group memberships:
Premium Members

Joined: 25 Nov 2008
Posts: 706
Location: Malvern, PA
Points: 6714

Posted: Tue Oct 30, 2018 11:52 am

That's a good start.

You have 4 types of records, each with a unique value in RDT-REC-CODE. Every record type has a common area, 82 bytes "header" and the trailer.

Review your copybook on import to make sure you've correctly loaded a table definition for each record type.

Loading the full, all record types table definition to CFF is your first step, having made sure CFF is configured for multiple record types. After the table definition is loaded, the Record ID tab will be active and you'll enter there how each record type is identified.

Follow the using CFF FAQ. It will get you there, and anything left will be things you need to clean up. Give it a try, and let us know how it turns out.

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Tue Oct 30, 2018 12:03 pm

Thanks Frank.

Here, only one level 01 is defined in the copybook, and the remaining common groups are level 05. I assume that for multiple record types there should be as many level 01 entries as there are record types; in my case, 4 level 01s. Is that correct? In our copybook we have:

01 RDT
   05 Header

   05 Body-1
   ...
   05 Body-2 REDEFINES Body-1
   ...
   05 Body-3 REDEFINES Body-1
   ...
   05 Body-4 REDEFINES Body-1
   05 Trailer

My DataStage access is still pending, so I cannot test this copybook.

Can you please suggest whether this copybook will give 4 record types upon importing, or only one record type named RDT?
I want to see 4 tables under the metadata import screen, but I think only 1 table name will be displayed, i.e. RDT, as there is only one 01 level.

FranklinE



Group memberships:
Premium Members

Joined: 25 Nov 2008
Posts: 706
Location: Malvern, PA
Points: 6714

Posted: Tue Oct 30, 2018 12:49 pm

Based on a copybook I use that is similar to yours, it should work.

You only need one 01 line and one 05 header section. Your import will show the REDEFINES for the 05 body sections. The 05 trailer will show at the end.

When you load the table definition to CFF, and you define it as multiple record type, CFF will "help" you from there.

Again, this is learning by doing. Unless I see your requirements and coding, I can't go into more detail, and that's as it should be. Follow the FAQ as best as you can, and post if you hit an obstacle.

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Tue Oct 30, 2018 2:49 pm

Thanks Frank. I will load this once my DS is installed and let you know.

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Wed Nov 07, 2018 8:35 am

The source binary file resides on an AS400 server. In order to use the CFF stage to read the file, the first step is to bring the file onto the DataStage server in binary mode and then read it. Can I use the FTP stage to connect to the AS400 and land the file? In the FTP stage, how do I define the metadata? Read the data into one single column?

FranklinE



Group memberships:
Premium Members

Joined: 25 Nov 2008
Posts: 706
Location: Malvern, PA
Points: 6714

Posted: Wed Nov 07, 2018 8:50 am

My implementation is z/OS mainframe, and we handle what you describe as you describe it.

Code:
FTP Enterprise stage for input:
Format tab:
Record type = implicit
Delimiter = none
Character set = EBCDIC
Data format = binary
Allow all zeros = yes
Packed = yes -- Check = no

One column, SQL type Binary, Extended Length and Scale attributes blank, Nullable No

Sequential file stage for output: Format and Column identical to FTP stage.


This two-stage parallel job is multi-instance and used for over 200 datasets of every description.

The CFF stage takes your table definition from the copybook. The record read is controlled by the table definition's total record length. If your original file has a delimiter, you just adjust everything to it.
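For illustration only, the fixed-length read described here can be sketched outside DataStage: with no delimiters, records are cut purely by the copybook's total record length, and the record type comes from the first bytes of each record. The 10-byte toy records and type codes below are placeholders, not the thread's real 5636-byte layout:

```python
# Minimal sketch (not DataStage) of a fixed-length record split:
# an undelimited binary stream is cut by total record length, and
# the record type is read from the first two bytes (the thread's
# RDT-REC-CODE). The 10-byte length and codes are made up here.

def split_records(data: bytes, record_length: int):
    """Cut an undelimited binary stream into fixed-length records."""
    if len(data) % record_length != 0:
        raise ValueError("stream length is not a multiple of the record length")
    return [data[i:i + record_length] for i in range(0, len(data), record_length)]

def record_type(record: bytes) -> bytes:
    """The record type lives in the first two bytes of the record."""
    return record[:2]

# Toy demonstration with 10-byte records instead of 5636:
stream = b"01AAAAAAAA" + b"02BBBBBBBB"
records = split_records(stream, 10)
types = [record_type(r) for r in records]
```

The ValueError branch mirrors what goes wrong in practice when the copybook length and the actual record length disagree: the stream no longer divides evenly and every record after the first is misaligned.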

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Wed Nov 07, 2018 9:11 am

Thanks Frank.
I just talked to the mainframe guy and asked him to push the file to the DataStage server so that the CFF stage can read it directly from there. He said he will try, as the file is huge. If that does not work, the fallback is to use the FTP stage to read it as you have described here.
In the second approach, the first job will create a sequential file with one column that holds the entire record, and in the next job the CFF stage will read that file using the COBOL copybook. The binary file has no delimiter.

rumu
Participant



Joined: 06 Jun 2005
Posts: 231

Points: 2236

Posted: Fri Nov 09, 2018 10:51 am

Hi,

I received the COBOL copybook created by the client architect, and I was asked to review it. I found 2 issues:

1) The source file has an optional header record which contains a few dates, and we need those dates in the output file if the header is present.
The copybook does not include that header part, so I am thinking of cutting the first record and dumping it to a sequential file to get the date information, then adding those optional fields back after processing the actual data file. Is that fine?

2) Each data record is prefixed with a 10-byte Record_ID. The copybook does not account for those 10 bytes. I assume the copybook cannot read the data file correctly, as the first 10 bytes are omitted from the copybook. Is this assumption correct?

We have a meeting with the client architect next Wednesday, where I will raise the above questions.
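As a sanity check on issue 2, here is a small Python sketch (my own illustration, not DataStage behavior) of why an undeclared 10-byte prefix shifts every field the copybook maps. The Record_ID value and the 2-byte record code at offset 0 are made-up stand-ins, not the real RDT layout:

```python
# Issue 2 sketch: if each record carries a 10-byte Record_ID prefix
# that the copybook does not declare, every field the copybook maps
# is shifted 10 bytes to the right, so the import reads garbage.

PREFIX_LEN = 10  # assumed Record_ID width from the post above

raw = b"RECID00001" + b"01" + b"rest-of-record"

# The copybook maps the record code at offset 0, so without the prefix
# declared it reads the first two bytes of the Record_ID instead:
misread_code = raw[0:2]      # b"RE", which is wrong

# Either declare a 10-byte FILLER at the top of the copybook, or
# strip the prefix before the CFF read:
stripped = raw[PREFIX_LEN:]
code = stripped[0:2]         # b"01", the actual record code
```

So the assumption in the post looks right: either the copybook gains a 10-byte filler field, or the prefix has to be removed upstream.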
