Junk characters issue

hemaarvind1
Participant
Posts: 50
Joined: Mon Jan 21, 2008 9:35 am

Junk characters issue

Post by hemaarvind1 »

Hi Everyone,

We are having an issue while loading data from a source mainframe file into a Netezza database.

We are reading the data in EBCDIC format, and View Data shows it as required. However, when we try to load the data into Netezza, the load fails with the error "bad rows limit exceeded".

When we checked the data on the mainframe manually, we found a few special characters in the file. However, when viewing the data through the CFF stage, they are not shown and the data displays normally.

Could you please explain how the CFF stage handles such characters when passing records to its output link, and how we can identify exactly which special characters are coming in?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

There is no such thing as "junk data". Any data in your client's database is your client's data.

Since the data are being read successfully, the values are valid. Somewhere in your job design, possibly in the loading phase, you have a character-mapping issue, and you need to resolve it. Try writing to a text file instead, which you can inspect with a hex editor of some kind to learn what the data actually look like. Specify a map of NONE for this file, at least initially.
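If a hex editor is not to hand, a small script can do the same inspection. Here is a minimal sketch in Python, assuming the job has written its output (with map NONE) to a file; the file name "dump.txt" and the choice of printable-byte range are illustrative assumptions, not anything taken from your job:

#!/usr/bin/env python3
# Scan a file for bytes outside the printable ASCII range and report
# where they occur and how often each value appears.
from collections import Counter

PRINTABLE = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}  # text, tab, LF, CR

with open("dump.txt", "rb") as f:  # "dump.txt" is a placeholder name
    data = f.read()

for offset, byte in enumerate(data):
    if byte not in PRINTABLE:
        print(f"offset {offset}: 0x{byte:02X}")

# Summary of the distinct non-printable byte values found
counts = Counter(b for b in data if b not in PRINTABLE)
for value, n in sorted(counts.items()):
    print(f"0x{value:02X} occurs {n} time(s)")

Each byte value it reports can then be compared against the character map your job is using, to see which source characters are failing to translate.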
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.