Insert New Rows Only using ODBC Connector to MSSQL

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Insert New Rows Only using ODBC Connector to MSSQL

Post by ShaneMuir »

Hi All

I am having a little bit of an issue when executing a process to update a target MSSQL DB using an ODBC connector stage with the Write mode set to Insert New Rows Only.

The issue is that it works for some target tables and not others. We have checked for differences in table setup between the tables that worked and those that didn't, and can see no obvious difference.

Further Detail:
Job Design: The job design is very simple.
1. It reads from Oracle using an ODBC Connector with RCP on, i.e. no columns are specified. The source tables do not have key values specified.
2. The data passes through a transformer stage (this was added to capture rejects, but it too uses RCP with no columns specified).
3. The target is an ODBC Connector, with parameters for table name and Table Action. The Write Mode is set to 'Insert new rows only'. The target DB tables have primary keys. The array size and record count are both set to 2000, and the Isolation level is set to 'Read Uncommitted'.


The idea behind the job is that it can be run once for a given table with the table action set to Truncate. This loads an initial view of the data.
A second pass of the table then happens (the source table having since been updated). The second pass has the target table action set to 'Append'. This, in conjunction with the Write mode 'Insert new rows only', should in theory check the target DB metadata for the key values and insert new rows only (or at the very least, fail on the PK violation and not insert existing records).
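
In effect, 'Insert new rows only' should behave like the following T-SQL (just a sketch with placeholder table and column names, not the connector's actual generated statement):

    INSERT INTO dbo.TARGET_TABLE (PK_COL, COL_A)
    SELECT s.PK_COL, s.COL_A
    FROM #incoming_rows s          -- the rows arriving on the input link
    WHERE NOT EXISTS (             -- skip rows whose key already exists
        SELECT 1
        FROM dbo.TARGET_TABLE t
        WHERE t.PK_COL = s.PK_COL
    );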

Currently this job is used to move some 350-odd tables. However, when running the second pass, sometimes the job will work and other times it will just hang. The behaviour is consistent per table: the same target tables always hang and the same ones always work.

I have talked to a DBA (of sorts) and we have found the following:
* There is no obvious difference between the tables' set-up (a comparison query is sketched after this list)
* There are no foreign key relationships
* There are no other processes running against the tables.
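
For reference, a comparison along these lines can be run against the catalog views (a sketch; the two table names are placeholders):

    -- Index/PK metadata for both tables side by side
    SELECT t.name AS table_name, i.name AS index_name,
           i.type_desc, i.is_primary_key, i.is_unique
    FROM sys.tables t
    JOIN sys.indexes i ON i.object_id = t.object_id
    WHERE t.name IN ('WORKING_TABLE', 'HANGING_TABLE');

    -- Constraints for both tables
    SELECT TABLE_NAME, CONSTRAINT_NAME, CONSTRAINT_TYPE
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
    WHERE TABLE_NAME IN ('WORKING_TABLE', 'HANGING_TABLE');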

We have also tried the following:
* Various array and record count sizes, changing the Isolation level between Serializable and Read Committed - no effect - the jobs still hang for the offending tables. In some instances this caused tables that once worked to no longer work.
* Drop and recreate the table - no effect - job will still hang.
* Create a copy of the table with the same structure - job will work as it is supposed to.
* Create the table on a new DB (same server) - job still hangs.
* It works for tables with 160,000 rows but can fail on a table with 13 rows, so the number of rows does not seem to be a factor.
* It fails on the offending tables whether there are new records to insert or not. At first I thought it was failing only when there were new records, but testing has shown this to be incorrect.


These tests suggest to me that there must be something attached to the offending tables that is preventing the update.

OR

That the way that MSSQL is determining the record locks is for some reason causing a deadlock.

Has anybody seen such behaviour before? Is there a setting that I have potentially missed, either on the ODBC connector stage or in the odbc.ini file? Can anybody share how MSSQL determines its record locks (i.e. can it differ at the table level as opposed to the DB level)?
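
For anyone wanting to look at the locks while the job is hanging, something like the following against the standard DMVs should show who is blocking whom (a sketch; the database name is a placeholder):

    -- Sessions that are currently blocked, and by whom
    SELECT r.session_id, r.blocking_session_id,
           r.wait_type, r.wait_time, r.command
    FROM sys.dm_exec_requests r
    WHERE r.blocking_session_id <> 0;

    -- Locks held or requested in the target database
    SELECT request_session_id, resource_type,
           request_mode, request_status
    FROM sys.dm_tran_locks
    WHERE resource_database_id = DB_ID('TargetDB');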

EDIT:
We have done further investigations and tests, and have found that the issue is being caused by the primary key name on the tables. If we change the primary key name the process seems to work. No idea as to why this would be the case. I am thinking that it might be the actual ODBC connector somehow.

I did update the job to use a target DRS stage, but that too has issues. The 'Insert new rows only' option does not seem to work: the job finishes OK but doesn't actually update anything in the target DB.

Any comments are welcome at this point.

Regards
Shane
Last edited by ShaneMuir on Wed Feb 11, 2015 11:45 am, edited 1 time in total.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: Insert New Rows Only using ODBC Connector to MSSQL

Post by chulett »

ShaneMuir wrote:* Create a copy of the table with the same structure - job will work as it is supposed to.
* Create the table on a new DB (same server) - job still hangs.
These two statements seem... contradictory. The first would also seem to imply that the 'copy' was in fact not identical - any chance of that? :? Curious what the difference would be between them; it seems like there must be something.

Wish I knew more about MSSQL and whatever arcane locking scheme it uses under the covers. Also sorry to hear you have an out of sorts DBA... oh, wait - a DBA of sorts. Interesting concept. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

DBA of sorts = not a DBA but knows a lot more about MSSQL than I do (mind you this is not difficult).

I too am thrown as to why a copy of the table structure works but creating the same table in a new DB doesn't. It suggests that something is happening at a higher level than the DB, acting on a named table (a different table name with the same structure works, but the same table name in a different DB does not). But we cannot find any such thing.

Thinking about it some more, the fact that it works with the same table structure under a different name implies that it cannot be the way MSSQL takes its record locks.
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

FYI:

We have done further investigations and tests, and have found that the issue is being caused by the primary key name on the tables. If we change the primary key name the process seems to work. No idea as to why this would be the case. I am thinking that it might be the actual ODBC connector somehow.
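
For reference, renaming a primary key constraint in SQL Server is a one-liner with sp_rename (the constraint names here are placeholders):

    -- 'OBJECT' tells sp_rename the name is an object-level name,
    -- which covers constraints
    EXEC sp_rename 'dbo.PK_SomeTable', 'PK_SomeTable_renamed', 'OBJECT';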

I did update the job to use a target DRS stage, but that too has issues. The 'Insert new rows only' option does not seem to work: the job finishes OK but doesn't actually update anything in the target DB.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Hopefully you've involved support in this.

As to your update issue, is this still with RCP enabled?
-craig

"You can never have too many knives" -- Logan Nine Fingers
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

Yes Craig, everything is still RCP.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

OK, as far as I know it can't properly do updates via RCP as it doesn't know the key columns to bind into the where clause. I seem to recall an interesting post from Arnd on the subject and how he had to dynamically generate a schema at runtime with the key information to get their 'generic' job to work properly. I'll see if I can dig it up.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ShaneMuir
Premium Member
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

Thanks for the input Craig

I too was sceptical that RCP could work for an 'update'. If you actually choose any sort of update action it does not work; DS will kick up an error before you even run the job.

However it seems that with the ODBC connector it utilises the target metadata to determine what the key values are and inserts accordingly. Either that or it's utilising the primary key violation error and just not reporting it.
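
Presumably the connector asks the driver for the primary key columns; the ODBC API exposes this as SQLPrimaryKeys, which on SQL Server maps to roughly the following (the table name is a placeholder):

    -- What SQLPrimaryKeys returns for a table
    EXEC sp_pkeys @table_name = 'TARGET_TABLE';

    -- Or the equivalent via the catalog views
    SELECT kcu.COLUMN_NAME
    FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
    JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
      ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
     AND kcu.TABLE_NAME = tc.TABLE_NAME
    WHERE tc.CONSTRAINT_TYPE = 'PRIMARY KEY'
      AND tc.TABLE_NAME = 'TARGET_TABLE';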

The same logic however doesn't work with the DRS stage; for whatever reason it does not seem to use the target key values when you set it to insert new rows only. It just says it completed successfully but hasn't actually inserted anything (even if the target table is empty).

At this point, because we have it so close to working using the ODBC stage, I think I will just get the owners of the SQL Server tables to change the key names - they assure me the names aren't used in any way other than on the tables themselves, so they can be easily changed with no impact.
cppwiz
Participant
Posts: 135
Joined: Tue Sep 04, 2007 11:27 am

Re: Insert New Rows Only using ODBC Connector to MSSQL

Post by cppwiz »

ShaneMuir wrote:Currently this job is used to move some 350 odd tables.
Have you looked at implementing InfoSphere Change Data Capture?

http://www-03.ibm.com/software/products ... andatacapt

Rather than looping through a DataStage job, you could truncate and load these tables in parallel through CDC for a faster and trouble-free process. You could also keep the tables continuously in sync should that requirement arise in the future.

Loading hundreds of tables with a DataStage RCP job seems like an outdated concept that has been replaced with better tools to replicate data between heterogeneous data sources.
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Re: Insert New Rows Only using ODBC Connector to MSSQL

Post by ShaneMuir »

Yes CDC could be of use here, but for the actual circumstance it would be like using a sledgehammer to crack a nut.
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

UPDATE:

So further investigations seem to show that it is to do with the ODBC driver.

We performed an extra test where we tried to replicate the error in Oracle. When using the ODBC stage to write to a duplicate Oracle table, we hit the exact same problem: the process hung. However, when we switched to an Oracle Connector the process worked as expected.

So as a workaround we've recreated the tables in Oracle, merging the data there before pushing it to SQL Server as a truncate and load.
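
In rough Oracle SQL terms the merge step looks something like this (an insert-only MERGE matching the 'new rows only' behaviour; all table and column names are placeholders):

    MERGE INTO merged_copy t
    USING new_extract s
    ON (t.pk_col = s.pk_col)
    WHEN NOT MATCHED THEN
      INSERT (pk_col, col_a)
      VALUES (s.pk_col, s.col_a);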

We have also tested with an updated ODBC driver, and the problem still persists. So we will raise a case with IBM.