ArndW wrote:The important part is visible. Perform an explicit truncation or rounding if you wish to get rid of the warning, or write to a column that supports decimals.
ArndW wrote:If your target column is an integer then, as you suspected, the real number passed to it gets truncated. If you cannot make the target field contain decimals then I would explicitly perform the real -> ...
Hello all, I am facing the following issue with one of my jobs: [IBM][CLI Driver] CLI0182W Fractional truncation. SQLSTATE In the job there is a column which is generated by multiplication of an integer column (int 10) and a decimal (decimal 13) column in the source SQL. The result is stored in an int...
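To see what the CLI0182W warning is complaining about, here is a minimal Python sketch of the same situation: an integer times a decimal produces a fractional result, and writing it to an integer column silently drops the fraction unless you round or truncate explicitly. The values are invented for illustration.

```python
from decimal import Decimal, ROUND_HALF_UP

# Hypothetical stand-ins for the job's columns:
# an integer (int 10) source column times a decimal (decimal 13) column.
int_col = 7
dec_col = Decimal("3.25")

product = int_col * dec_col          # Decimal('22.75')

# Storing this into an integer target silently discards the fraction,
# which is exactly what the driver's fractional-truncation warning flags.
truncated = int(product)             # 22

# An explicit rounding step makes the behaviour deliberate, the same
# idea as adding a ROUND/TRUNC in the job or the source SQL.
rounded = int(product.quantize(Decimal("1"), rounding=ROUND_HALF_UP))  # 23
```

The point is not the Python itself but the choice: decide whether you want truncation or rounding and express it explicitly, rather than letting the driver do it for you with a warning.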
There are many different kinds of CDC (change data capture). Ideally the database itself will maintain a log of deleted records, perhaps via a trigger or through the application itself, and you can l ... Well yes, I have come across an article which says that one of the efficient methods to do CDC i...
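The trigger-maintained delete log mentioned above can be sketched in a few lines. This uses SQLite purely for illustration; the table and column names (customers, deleted_customers) are invented, and a real deployment would use the equivalent trigger syntax of the source database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE deleted_customers (cust_id INTEGER, deleted_at TEXT);

-- The trigger records every delete, so a downstream ETL job can read
-- removed keys from the log instead of rescanning the whole table.
CREATE TRIGGER log_customer_delete
AFTER DELETE ON customers
BEGIN
    INSERT INTO deleted_customers VALUES (OLD.cust_id, datetime('now'));
END;
""")
conn.execute("INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob')")
conn.execute("DELETE FROM customers WHERE cust_id = 2")

deleted = [r[0] for r in conn.execute("SELECT cust_id FROM deleted_customers")]
print(deleted)  # [2]
```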
Hello Ray sir, Thanks for the motivation :) I was interested in knowing the first part: detecting the deleted rows. I had come across CDC as a viable method, so I wanted to know more about it. Is there any place you can point out where I can find some related reading about its implementation? Best regards, Sa...
Hello sir, I have mentioned that we do not intend to actually delete the records. We plan to mark records with a flag, say 'D' for deleted. We have huge tables with millions of rows in them, so we cannot afford to load them every time. That is why we are looking at a CDC kind of approach. We have a constrai...
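One common way to implement the 'D' flag described above, without a full reload, is an anti-join: any key in the target that no longer appears in the latest source extract gets its status flipped to 'D'. A minimal sketch, again with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE target (id INTEGER PRIMARY KEY, status TEXT DEFAULT 'A');
CREATE TABLE source_keys (id INTEGER PRIMARY KEY);
""")
conn.executemany("INSERT INTO target (id) VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO source_keys VALUES (?)", [(1,), (3,)])  # 2 vanished

# Soft-delete: flag rows missing from today's source instead of
# physically deleting them.
conn.execute("""
UPDATE target SET status = 'D'
WHERE id NOT IN (SELECT id FROM source_keys)
  AND status <> 'D'
""")

flags = dict(conn.execute("SELECT id, status FROM target ORDER BY id"))
print(flags)  # {1: 'A', 2: 'D', 3: 'A'}
```

Note this still needs the full set of current source keys; on very large tables you would typically restrict the comparison to one partition or key range at a time.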
Hello vmcburney, You could try multiple instance jobs. Partition your data on the natural key using a Transformer across multiple instances. Split your large hashed files to match those partitions. F ... Hey, this sounds interesting! But to implement this we need to divide our da...
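The partition-on-the-natural-key idea can be sketched like this: hash the key and take it modulo the number of instances, so each instance (and its matching hashed-file segment) always sees the same disjoint slice of the data. `zlib.crc32` here just stands in for any stable hash; the key values and instance count are invented.

```python
import zlib

NUM_INSTANCES = 4

def instance_for(natural_key: str) -> int:
    # Stable hash of the key modulo the instance count: the same key
    # always routes to the same instance, run after run.
    return zlib.crc32(natural_key.encode("utf-8")) % NUM_INSTANCES

rows = ["CUST0001", "CUST0002", "CUST0003", "CUST0004"]
partitions = {}
for key in rows:
    partitions.setdefault(instance_for(key), []).append(key)

# Every row lands in exactly one partition, deterministically.
assert sum(len(v) for v in partitions.values()) == len(rows)
```

The stability matters: if the partitioning function changed between runs, a key's lookup could land on an instance whose hashed file has never seen it.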
Hello vmcburney, There is a change data capture solution for Server jobs using the CRC32 function that has been discussed in a lot of threads. It is very fast and not difficult to build, but there has been ongoing deb ... We have already implemented CRC32 for CDC. We are facing performance issues bec...
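For readers who have not seen the CRC32 approach in the other threads, the core idea is: concatenate the non-key columns, checksum them, and compare against the checksum stored from the previous run to classify each row as insert, update, or unchanged. A minimal sketch with made-up keys and columns:

```python
import zlib

def row_crc(columns):
    # Join the non-key columns with a delimiter and checksum the result.
    # The delimiter guards against adjacent columns running together.
    return zlib.crc32("|".join(str(c) for c in columns).encode("utf-8"))

# Checksums remembered from the previous run (e.g. from a hashed file).
previous = {"K1": row_crc(["Alice", "NY", 100])}

incoming = {"K1": ["Alice", "NY", 150],   # existing key, data changed
            "K2": ["Bob", "LA", 200]}     # brand-new key

inserts, updates = [], []
for key, cols in incoming.items():
    crc = row_crc(cols)
    if key not in previous:
        inserts.append(key)
    elif previous[key] != crc:
        updates.append(key)

print(inserts, updates)  # ['K2'] ['K1']
```

The ongoing debate the quoted post alludes to is about collision risk: CRC32 is a 32-bit checksum, so distinct rows can in principle hash alike and a change can slip through undetected.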
Hello Ray, "We have no idea what your job design is" — our job is designed like this: source ----> row merge ----> CRC ----> Transformer ----> row split ----> Target. We do the CRC hash lookup in the Transformer stage, identifying the new rows. Further, we have used two paths to load inserts and updates into the tar...
Hello all, I am facing a performance issue with one of our CRC-based jobs: it is taking 10 hours to load. Our source has 15 million rows. We cannot do the change data capture with a timestamp because there isn't a timestamp field which tracks changed rows. We have changed the hashed file to 64-bit to acco...
If your source has a ROWlastupdated timestamp column then you can try the following: store the job's last run time in a shared container. In the job, use the selection tab to query row.LASTUPD_DTTM > %DateTimeIn('#LastModifiedDateTime#'). This would only pick up new or updated rows since the last run. (Would s...
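The selection above boils down to a plain range predicate against a stored last-run time. Here is the same idea sketched with SQLite, where a one-row control table stands in for the shared container; all names and timestamps are invented.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE src (id INTEGER, lastupd_dttm TEXT);
CREATE TABLE job_control (last_run TEXT);
INSERT INTO job_control VALUES ('2024-01-01 00:00:00');
""")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, '2023-12-31 23:00:00'),   # older than last run
                  (2, '2024-01-02 08:00:00')])  # updated since last run

# Only rows touched after the stored last-run time are extracted.
changed = [r[0] for r in conn.execute("""
    SELECT id FROM src
    WHERE lastupd_dttm > (SELECT last_run FROM job_control)
""")]
print(changed)  # [2]
```

After a successful run, the job would update job_control with the new run time so the next extract starts from there.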
Hello Ray, Welcome aboard. :D 1> Yes and yes 2> Yes*** 3> No - must be one by one A> As mentioned in the second point, could you please tell us what the other methods to do the same would be? B> For the third one I thought I would simplify the process (kind of automate it) by reading a sequential file w...
Hello all, I am trying to move some of the hashed files from one project to a new one by copying the hashed file folders from the project directory and pasting them into the new project directory on my machine. After that I set pointers to them in the new project with the SETFILE command. Then if I ru...
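For reference, the usual shape of the SETFILE command (run at the TCL/Command window of the project that should gain the pointer) is shown below. The path and name here are placeholders, not taken from the post; check the UniVerse/DataStage documentation for your release before relying on the exact syntax.

```
SETFILE /path/to/new_project/MyHashedFile MyHashedFile OVERWRITING
```

This only creates a VOC pointer; the copied hashed file directory itself must already be complete (both the DATA.30 and OVER.30 portions for a dynamic file), or jobs reading through the pointer will fail.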