Loading a large record into a Teradata table

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

just4u_sharath
Premium Member
Posts: 236
Joined: Sun Apr 01, 2007 7:41 am
Location: Michigan

Loading a large record into a Teradata table

Post by just4u_sharath »

Hello,
I am trying to load data into a Teradata table from a file. One field has a size of about 90 KB and is being loaded into a column defined as a CLOB. I have defined it as LongVarChar in DataStage. I am able to load records smaller than 64 KB, but it fails for records larger than 64 KB.
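For illustration only, the target is roughly of this shape; the database, table, and column names below are placeholders rather than the actual DDL, and the CLOB length is just an assumption large enough to hold the ~90 KB values:

CREATE TABLE sandbox_db.stage_target      /* placeholder names */
(
  rec_id   INTEGER NOT NULL,
  payload  CLOB(2M)                       /* large-object column, well above 64 KB */
)
PRIMARY INDEX (rec_id);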

The job is aborting with the following error.

The message's length (91669) exceeds the limit (65466) and it can't be broken at newlines; the message is truncated.

thank you
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That just looks like a message about the error, not the error itself.
-craig

"You can never have too many knives" -- Logan Nine Fingers
just4u_sharath
Premium Member
Posts: 236
Joined: Sun Apr 01, 2007 7:41 am
Location: Michigan

Post by just4u_sharath »

Thank you for the response.

Following is the error:

TD_etl_unq,0: Field col contains a value that has a byte length of 91,456 which exceeds the allocated length of 64,000 (the field value is '{"ptyLocRtf":[{"actnCd":"I","locUsgId":"0","addrSeqNbr":"453564706","callInd":"Y","policy":null,"persId":null,"addrTyp":null,"ptyLocEffDt":"2003-06-01","ptyLocEndDt":"9999-12-31","ptyKeyTyp":"MEMBER","ptySrcKey":"IL1^000055^296099546^....

It prints the entire value of the field and ends with ...[Message split, exceeds 64000 characters]...

Next in the log it shows:
TD_etl_rtf_unq,0: RDBMS code 3514: User-generated transaction ABORT. SQL statement: ROLLBACK
TD_etl_rtf_unq,0: Number of rows inserted: 0

buffer(0),0: Fatal Error: APT_BufferOperator::writeAllData() write failed. This is probably due to a downstream operator failure.
just4u_sharath
Premium Member
Posts: 236
Joined: Sun Apr 01, 2007 7:41 am
Location: Michigan

Post by just4u_sharath »

I am able to load into the Teradata table using a BTEQ script, so there is no issue on the DB side.
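(For illustration only; the actual script is not shown in the thread. One way BTEQ gets a value past the 64 KB row limit is deferred LOB mode, where each record of the import file carries the name of a file holding the CLOB value. The names, delimiter, and exact placement of the LOB option below are assumptions; check the BTEQ reference for your TTU release. A record in manifest.txt would look like /tmp/clob_0001.txt,1001.)

.LOGON tdpid/dsuser,password
.IMPORT VARTEXT ',' LOBCOLS=1 FILE=manifest.txt
.REPEAT *
USING (payload_file CLOB AS DEFERRED BY NAME, rec_id VARCHAR(18))
INSERT INTO sandbox_db.stage_target (rec_id, payload)
VALUES (:rec_id, :payload_file);
.LOGOFF
.QUIT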
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You haven't specified your target stage - is it a Teradata Connector or one of the legacy stages? This may help in either case. If you are going to need more help, you'll need to provide more details about the stage and settings / properties you are using on the target side.
-craig

"You can never have too many knives" -- Logan Nine Fingers
just4u_sharath
Premium Member
Posts: 236
Joined: Sun Apr 01, 2007 7:41 am
Location: Michigan

Post by just4u_sharath »

1. I am using the Teradata Connector stage.
2. I am using the Immediate option with an array size of 1; the Bulk option is not working.
3. Write mode: Insert
4. Table action: Append
5. Record count: 100
6. Enable LOB references: No
7. The target column that I have the issue with is defined as a CLOB (see the query sketch below). I am able to insert field values smaller than 64 KB, but not larger. I am able to insert using a BTEQ script, but it fails when inserting through DataStage.
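A quick way to double-check the column definition on the database side is the dictionary view below; the database and table names are placeholders:

SELECT ColumnName, ColumnType, ColumnLength
FROM   DBC.ColumnsV
WHERE  DatabaseName = 'sandbox_db'    /* placeholder */
  AND  TableName    = 'stage_target'  /* placeholder */
ORDER BY ColumnId;
/* ColumnType 'CO' indicates CLOB; ColumnLength shows the declared maximum. */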

thanks
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

What are the memory buffer settings you have on the stage?

Also... not sure if this will help (probably not, but at this point it can't hurt to try)... APT_TERA_64K_BUFFERS? Did you set it?

Another one you may investigate is APT_DEFAULT_TRANSPORT_BLOCK_SIZE.


Honestly... just guessing at this point.
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

The Bulk method only supports up to 64 KB in an array; if a single record exceeds that, you may not be able to use bulk at all.


This link should help you.
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
just4u_sharath
Premium Member
Posts: 236
Joined: Sun Apr 01, 2007 7:41 am
Location: Michigan

Post by just4u_sharath »

Thanks for the reply.

1. Yes, I am aware of the limitation of the bulk mode.
2. Yes, I did set the APT_DEFAULT_TRANSPORT_BLOCK_SIZE environment variable, with a value of 1280000.
3. I also set APT_TSORT_STRESS_BLOCKSIZE to 3000000, as per the IBM link below.
4. On the buffer side I tried:
a) Default
b) Auto buffer: 6145728, buffer free run 95%, and disk increment 2 MB
c) Buffer: 6145728, buffer free run 95%, and disk increment 2 MB

It still fails with the same error.
http://www-01.ibm.com/support/docview.w ... wg21660153

thanks
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

There should be a "Max buffer size" row in the Teradata Connector properties under the Limits section, which is by default set to 0 to use the default buffer size of 64 KB. You will have to change that to push records larger than 64 KB.
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
just4u_sharath
Premium Member
Posts: 236
Joined: Sun Apr 01, 2007 7:41 am
Location: Michigan

Post by just4u_sharath »

I have tried setting max buffer size = 12800, but it is still failing with the same issue.

Thanks