Exceeding Internal Limit

jackson.eyton
Premium Member

Exceeding Internal Limit

Post by jackson.eyton »

Hi everyone,
I have a rather simple job pulling some data from an Excel file and placing it in a table. No real transformations are done to the data other than trimming leading and trailing spaces. The column definitions coming in from the spreadsheet are varchar(256), and those going out to my table are also varchar(256). The issue is that when the job runs, it attempts to create the table and fields as varchar(64000), which then leads to an error that the table exceeds the 65535-byte internal limit. Full error here:
STAGE_GEOGRAPHIC_AREA_HIERARCHY: Failed to execute SQL statement: CREATE TABLE STAGE_GEOGRAPHIC_AREA_HIERARCHY (Population_Timestamp TIMESTAMP NOT NULL, PROCESSED_LAST_DATE DATE, Country_Short_Code VARCHAR(64000), Zip_Code VARCHAR(64000), Ttype VARCHAR(64000), Primary_City VARCHAR(64000), State_Code VARCHAR(64000), County_Name VARCHAR(64000), ISO_Code VARCHAR(64000), Country_Name VARCHAR(64000), State_Abbreviation VARCHAR(64000), State_Name VARCHAR(64000)) DISTRIBUTE ON RANDOM. Reason: [SQLCODE=HY000][Native=46] ERROR: Table 'STAGE_GEOGRAPHIC_AREA_HIERARCHY' record size 640012 exceeds internal limit of 65535 bytes (CC_NZConnectionRep::executeDirect, file CC_NZConnectionRep.cpp, line 201)
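For what it's worth, the record size in the error roughly adds up if each VARCHAR character costs one byte (this breakdown is my own rough arithmetic from the error text, assuming single-byte characters and no extra per-row overhead):

    10 VARCHAR(64000) columns x 64000 bytes = 640000 bytes
    TIMESTAMP (8 bytes) + DATE (4 bytes)    =     12 bytes
    total                                   = 640012 bytes, versus the 65535-byte limit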
The Job's OSH can be reviewed here:
https://github.com/jacksoneyton/DataSta ... master/OSH

I don't understand why it's deciding that the fields are varchar(64000), so I'm unsure how to resolve this. Any advice would be greatly appreciated, thanks!!
-Me
jackson.eyton
Premium Member

Post by jackson.eyton »

Hmm, looks like no one has any thoughts on this. I will rebuild this job from scratch and see if the issue persists. I'll update again once I have more.

So odd....
-Me
chulett
Charter Member

Post by chulett »

Well... I had some thoughts about unbounded fields but didn't see anything like that in your output, so I did not post. And if you search for that term, all you get are XML-related results. :(

I did, however, do an exact search for your magic number of 64000 this morning and several topics turned up, including this one. Perhaps one of them will shed some light on what is going on.
-craig

"You can never have too many knives" -- Logan Nine Fingers
jackson.eyton
Premium Member

Nevermind... I'm an idiot...

Post by jackson.eyton »

Never mind everyone.... user error. I copied the Unstructured Data stage from another job and must have propagated the column values incorrectly: I had my column length in the Scale field and left Length blank.

Thank you Chulett for the link there; that gave me enough information to realize it was using the default max value for Netezza and therefore was likely not set correctly. My eyes missed it the first time: I just saw the numbers I was expecting to see, not that those numbers were in the wrong column... My bad!!! :oops:
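For anyone hitting the same thing: with the length back in the Length column, the generated DDL comes out the way I expected, something like this (a sketch based on the statement from the error above, with every string column at VARCHAR(256)):

CREATE TABLE STAGE_GEOGRAPHIC_AREA_HIERARCHY (Population_Timestamp TIMESTAMP NOT NULL, PROCESSED_LAST_DATE DATE, Country_Short_Code VARCHAR(256), Zip_Code VARCHAR(256), Ttype VARCHAR(256), Primary_City VARCHAR(256), State_Code VARCHAR(256), County_Name VARCHAR(256), ISO_Code VARCHAR(256), Country_Name VARCHAR(256), State_Abbreviation VARCHAR(256), State_Name VARCHAR(256)) DISTRIBUTE ON RANDOM

That works out to 10 x 256 = 2560 bytes of VARCHAR data per row, comfortably under the 65535-byte limit.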
-Me
chulett
Charter Member

Post by chulett »

Aha! Was wondering why all of your sizes were in the "scale" property but figured it was some sort of secret Netezza thing. Glad you spotted that. :D
-craig

"You can never have too many knives" -- Logan Nine Fingers