I am facing an issue with one DataStage job that reads data (it has fields containing JSON, defined as a BLOB datatype in the Oracle table) and performs a bulk load into an Oracle table. Note that the target table does not have any constraints defined, so ideally the load should run faster.
The job takes 15 minutes to load 5 million records. The commit count/array size gets defaulted to 1 because of the LongVarBinary field. Can someone please suggest a better way to handle this scenario so that the loads run faster?
Flow:
Sequential File ----> Oracle Load (using Oracle Connector)
JSON data ---> defined as LongVarBinary in DataStage
JSON is usually pure text, so the column could be defined as CLOB in Oracle and treated by DataStage as a LongVarChar data type. Then you should be able to raise the array size/commit count above 1 and get batched inserts again.
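For illustration only: the sketch below uses the standalone python-oracledb driver, not the DataStage Oracle Connector, and the table name, column names, and connection details are all made up. It shows the effect of making the JSON column character-typed (CLOB): the client can bind many rows per round trip, which is essentially what raising the connector's array size above 1 does.

import oracledb

# Hypothetical staging table: stage_json (id NUMBER, payload CLOB)
rows = [(i, '{"id": %d, "status": "ok"}' % i) for i in range(1, 10001)]

with oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb") as conn:
    with conn.cursor() as cur:
        # Declare the second bind variable as CLOB; because it is character
        # data, the driver can still send the whole batch in one
        # executemany() call instead of one row per round trip.
        cur.setinputsizes(None, oracledb.DB_TYPE_CLOB)
        cur.executemany(
            "INSERT INTO stage_json (id, payload) VALUES (:1, :2)",
            rows,
        )
    conn.commit()

With the column left as BLOB/LongVarBinary, the array size stays pinned at 1, which is why the load crawls.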
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.