SQL Datatype issues (LongNVarChar and Bit)
Hello,
I am somewhat new to DataStage and am running into an issue.
Unfortunately, I am not able to find any helpful information regarding this issue, and I wanted to post here to see what kind of help I can get.
I have a job that reads from a SQL Server table. Some of the datatypes on the source table are causing the job to fail (that much I was able to find online). The datatypes are LongNVarchar and BIT. If I copy the table and change the copied table's datatypes to something more usable (e.g., char and int), the job runs fine. Unfortunately, that is not an option, because an external application depends on this table keeping these datatypes.
My questions are: how can I write a DS job that reads these columns and writes the data to another table and/or a CSV file? Can I convert the columns with these datatypes somehow (for example, with to_char() or something similar)? What is the best way to resolve the datatype issue so that I can move forward?
Thank you
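One common workaround (a sketch only, not tested against this particular table) is to let SQL Server do the conversion in the Connector's user-defined SELECT, so DataStage only ever sees plain types. LongNVarchar is typically how ODBC reports an NVARCHAR(MAX)/NTEXT column. Table and column names below are invented for illustration:

```sql
-- Hypothetical source table: dbo.SourceTable(id INT, big_text NVARCHAR(MAX), flag BIT)
-- Cast the LOB down to a bounded NVARCHAR and the BIT up to an INT
-- so the ODBC Connector binds ordinary buffers instead of LOB locators.
SELECT  id,
        CAST(big_text AS NVARCHAR(4000)) AS big_text,
        CAST(flag     AS INT)            AS flag
FROM    dbo.SourceTable;
```

Note that CAST to NVARCHAR(4000) silently truncates anything longer, so this only helps if the LOB column is known to hold short values.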
Here are the error messages that I am seeing after attempting to run the DS job:
source,0: The array size must be set to 1 when reading LOBs (CC_OdbcDBRecordDataSetProducer::dbsAllocateBindBuffers, file CC_OdbcDBRecordDataSetProducer.cpp, line 587)
source,0: The runLocally() of the operator failed.
source,0: Operator terminated abnormally: runLocally() did not return APT_StatusOk
node_node1: Player 1 terminated unexpectedly.
main_program: APT_PMsectionLeader(1, node1), player 1 - Unexpected termination by Unix signal 15(SIGTERM).
Thanks for your help!
So, the ODBC Connector. Sorry, but I have to ask: did you try setting the Array Size to 1? Is there only a single LOB column in the table? I also believe you'll need to move the LOB field to the end of the column metadata, but others can confirm that.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
Thank you all for your responses.
I found where to change the array size: it is in the ODBC Connector within the DS job. My job has both a lookup table and a target table, so each needed to be set to an array size of 1. I can now insert new records, but when I change the job's action to update existing rows then insert new rows, the job fails with the error messages provided previously. Why is this?
Can't really answer that with any authority, but then I don't generally advocate using those "dual action" write modes. If you are doing a lookup, don't you already know whether a row needs an update versus an insert? In that case, direct the result to two separate targets, one for each action.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
One approach might be to create a view of your table using "Standard" datatypes and work with the view.
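That view approach might look something like the sketch below (names are invented, and the 4000-character cap is an assumption about the data):

```sql
-- Hypothetical wrapper view exposing only plain, ODBC-friendly types.
-- The underlying table keeps its LongNVarchar/BIT columns untouched,
-- so the external application that depends on them is unaffected.
CREATE VIEW dbo.SourceTable_ds AS
SELECT  id,
        CAST(big_text AS NVARCHAR(4000)) AS big_text,
        CAST(flag     AS INT)            AS flag
FROM    dbo.SourceTable;
```

The DS job would then select from dbo.SourceTable_ds rather than the base table.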
Thank you all for your responses!
I am trying a different approach, to test a few things out.
I have been successful in loading the contents of a CSV file through a Transformer stage to an ODBC Connector target. All columns insert the data as expected, except for the date information that is within the CSV file.
The date information in the CSV file is in the following format: yyyy-mm-dd hh:mm:ss. I have tried changing the target table (SQL Server) to use datetime and timestamp datatypes, but no luck. The following is an example of the logic that I have tried:
StringToTimestamp(DSLink4.CreateDate,"%yyyy-%mm-%dd %hh:%nn:%ss.3")
How does one take incoming date information from a CSV file and load it into a SQL Server table with a datetime datatype? I have been racking my brain with Google searches, but no luck yet.
Thanks!
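For what it's worth, the sample data (yyyy-mm-dd hh:mm:ss) carries no fractional seconds, while the ".3" at the end of the mask tells StringToTimestamp to expect three fractional digits, so that mismatch alone may be what makes the conversion fail. Another sketch is to pass the string through unchanged and let SQL Server parse it: style 120 is SQL Server's built-in mask for exactly this format. Table and column names here are invented, and the statement assumes a user-defined insert in the ODBC Connector with the CSV column bound to the parameter marker:

```sql
-- Hypothetical target: dbo.TargetTable(CreateDate DATETIME, ...)
-- CONVERT style 120 = ODBC canonical 'yyyy-mm-dd hh:mi:ss',
-- which matches the format in the CSV file, so no Transformer
-- conversion is needed; ? is bound to the incoming VarChar column.
INSERT INTO dbo.TargetTable (CreateDate)
VALUES (CONVERT(datetime, ?, 120));
```

Alternatively, dropping the ".3" from the mask, i.e. StringToTimestamp(DSLink4.CreateDate, "%yyyy-%mm-%dd %hh:%nn:%ss"), should make the Transformer expression match the data as shown.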