
JDBC Connector for DB2 Z/OS optimization

Posted: Tue Jul 18, 2017 7:09 am
by clarcombe
In production we are noticing that the updates and inserts to our Z/OS
tables are performing slowly.

Scenario
We are deleting, inserting and updating many tables, with a small number of rows each time.

I have included screenshots of our parameters (a rough JDBC sketch of what these settings control follows the links below). The transaction size is set to 10000, having previously been 20000, but we had to downgrade it because of a z/OS timeout issue.

Insert
https://ibb.co/eoms4a

Update
https://ibb.co/gqSzja

Delete
https://ibb.co/b32X4a
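For anyone not familiar with those settings, here is a rough plain-JDBC sketch of what the transaction size and batch settings control. The connection URL, credentials and table/column names are made up, and the real work is of course done by the connector stage, not hand-written code; this is only to illustrate the knobs.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchInsertSketch {
        public static void main(String[] args) throws Exception {
            // Placeholders only: host, port, location, credentials and table are invented.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:db2://zoshost:446/DB2LOC", "user", "pass")) {
                con.setAutoCommit(false);          // the job decides when commits happen
                int transactionSize = 10000;       // rows per commit ("transaction size")
                int batchSize = 1000;              // rows sent to DB2 per round trip

                try (PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO MYSCHEMA.MYTABLE (KEY_COL, VAL_COL) VALUES (?, ?)")) {
                    for (int row = 1; row <= 30000; row++) {
                        ps.setInt(1, row);
                        ps.setString(2, "value " + row);
                        ps.addBatch();
                        if (row % batchSize == 0) {
                            ps.executeBatch();     // one network round trip per batch
                        }
                        if (row % transactionSize == 0) {
                            con.commit();          // bigger value = fewer commits, longer-held locks
                        }
                    }
                    ps.executeBatch();             // flush the remainder
                    con.commit();
                }
            }
        }
    }

The trade-off is that a larger transaction size means fewer commits but locks and log space held for longer, which is presumably what triggered the z/OS timeout that made us drop it from 20000 to 10000.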

For example:
A delete of 30000 rows runs at 148 rows/second with a unique index on the key. Surely it should run faster than that?
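To make that concrete, the delete amounts to something like the following sketch (table and column names invented); since KEY_COL has a unique index, each delete should be a single index probe:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BatchDeleteSketch {
        // Delete the given keys in batches of 1000; KEY_COL has a unique index in our case.
        static void deleteKeys(Connection con, List<Integer> keys) throws SQLException {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "DELETE FROM MYSCHEMA.MYTABLE WHERE KEY_COL = ?")) {
                int n = 0;
                for (int key : keys) {             // roughly 30000 keys in our runs
                    ps.setInt(1, key);
                    ps.addBatch();
                    if (++n % 1000 == 0) {
                        ps.executeBatch();         // one round trip per 1000 deletes
                        con.commit();
                    }
                }
                ps.executeBatch();                 // flush any remainder
                con.commit();
            }
        }
    }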

Does anyone know of any optimization parameters that can be set to speed this up?

Thanks

Posted: Tue Jul 18, 2017 8:18 am
by PaulVL
Have you tried a test job using the DB2 Connector stage?

(your pics didn't work)

Posted: Tue Jul 18, 2017 9:06 am
by clarcombe
We did try the DB2 Connector but for some reason (I don't remember why) we moved to the JDBC Connector. I think it was a warning issue.

I don't know why the pics don't work. They work well enough if you copy and paste them. Maybe it's the https.

Posted: Tue Jul 18, 2017 1:01 pm
by chulett
Just as an FYI, they don't work because the forum software doesn't recognize them as valid image links; pretty sure they have to end in something like .png/.jpg/.gif. I took the tags off so people can click on them... they still don't work for me here, but they block a ton of sites at work. :(

Posted: Wed Aug 23, 2017 6:13 am
by clarcombe
We believe this is due to having numerous triggers on each table. We are investigating.

Posted: Wed Aug 23, 2017 8:47 am
by PaulVL
How exactly did you calculate the rows per second?

The monitor will show you rows per second based on the start time of the job, not the actual speed of the rows once the stage gets the data. For example, 30000 rows at a reported 148 rows/second implies roughly 203 seconds of elapsed time, and part of that elapsed time is job startup rather than delete work.

I would not recommend using a JDBC connector if the DB2 Connector stage is available. If you are getting a warning message... address it.

Also, turning off the table mismatch check is not a wise thing to do. It's minimal overhead for the job but can save your bacon when the need arises.

Posted: Fri Aug 25, 2017 2:57 am
by clarcombe
I'm looking at the notes in the job. I think it was to do with an RCP (runtime column propagation) issue. DB2 didn't like RCP.

Posted: Fri Aug 25, 2017 5:13 am
by clarcombe
From the original developer:

JDBC was chosen because not all the records were processed with the DB2 Connector, and it gave no message or failure.

We couldn't wait for a patch, so we went with JDBC instead and this worked.
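For what it's worth, one way to catch that kind of silent shortfall is to compare the row count the job reports with a count on the target table after the load. A rough sketch (table name invented; assumes the table was empty before the load, otherwise count only the keys just loaded):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class RowCountCheckSketch {
        // Compare the rows the job reports writing against what actually landed in DB2.
        static void verify(Connection con, long rowsSentByJob) throws SQLException {
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM MYSCHEMA.MYTABLE")) {
                rs.next();
                long rowsInTable = rs.getLong(1);
                if (rowsInTable != rowsSentByJob) {
                    throw new IllegalStateException(
                            "Job reported " + rowsSentByJob + " rows but table has " + rowsInTable);
                }
            }
        }
    }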