Internal Error: (blockSizeActual >= v4BlockHeader::size (

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ann_nalinee
Premium Member
Posts: 22
Joined: Wed Sep 17, 2003 12:21 pm
Location: Sydney

Internal Error: (blockSizeActual >= v4BlockHeader::size (

Post by ann_nalinee »

Got the error below while trying to read an input dataset. It seems the input dataset was somehow corrupted.

Internal Error: (blockSizeActual >= v4BlockHeader::size ()): datamgr/partition.C: 474

Also, we get this error message when trying to view the dataset from Data Set Management:

Unknown error reading data

Everything goes back to normal once the dataset is re-generated.

However, we would like to identify the root cause of the issue so that it can be permanently fixed. Has anybody experienced this issue before?
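
In case it is useful, the dataset descriptor and its data files can also be inspected from the engine command line with orchadmin. A rough sketch only: the path is just a placeholder and the exact subcommands and options vary by version.

  rem summarise the descriptor file (schema, partitions, data files) - path is hypothetical
  orchadmin describe D:/Datasets/source_extract.ds

  rem try to read the records back; if a data segment is corrupt this tends to fail with a similar error
  orchadmin dump D:/Datasets/source_extract.ds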
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

So it was corrupted somehow, it would seem. Perhaps a support case is in order, in case this is something they are aware of on your platform / version. Otherwise, it is hard to say... did the job creating it finish without error? Did you run out of disk space wherever it is being created? Being on Windows, are you perhaps running anti-virus software on the server? It's been known to wreak a bit o' havoc on DataStage jobs.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ann_nalinee
Premium Member
Posts: 22
Joined: Wed Sep 17, 2003 12:21 pm
Location: Sydney

Post by ann_nalinee »

Chulett,

Thanks for your response.

This dataset was created successfully last week and we continued using it as the source data without any issue.

However, when we tried to read the file again on Monday, we got this error.

We have already logged this with support and are waiting for them to investigate.

Anti-virus is something we haven't looked into yet, as it is usually handled by the IT support team. We will need to check whether there is any anti-virus application running on the server.

Cheers,
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Was curious if the dataset was ever valid (and then was corrupted later) or if it started off life that way. Let us know what you find out about AV on the server. If it is there, it might be as simple as excluding the directories where DataStage components live from being scanned.

Another thought - we've seen situations where an unrelated process uses the same name and corrupts existing datasets. It can be hard to track down, especially if the corruptor doesn't run on a daily basis.
-craig

"You can never have too many knives" -- Logan Nine Fingers
PaulVL
Premium Member
Posts: 1315
Joined: Fri Dec 17, 2010 4:36 pm

Post by PaulVL »

You might also want to generate some dummy data and test your theories outside of your regular prod flow.

Job #1: RowGen up a million rows of data into a dataset.

Job #2: read that dataset back.

Make sure you are using the APT configuration files that were associated with your regular prod runs. Check whether Job 1 is using the same APT file as Job 2.

If you are in a cluster / grid environment, make sure that all resource disk mounts are accessible by all hosts in your cluster/grid.
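
For reference, a minimal sketch of what a two-node APT configuration file might look like. The node names, fastname and paths are hypothetical and have to match your environment; the point is that every resource disk and scratchdisk path must exist and be readable and writable on the host named in fastname.

  {
      node "node1"
      {
          fastname "etlserver"
          pools ""
          resource disk "D:/Datasets" {pools ""}
          resource scratchdisk "D:/Scratch" {pools ""}
      }
      node "node2"
      {
          fastname "etlserver"
          pools ""
          resource disk "D:/Datasets" {pools ""}
          resource scratchdisk "D:/Scratch" {pools ""}
      }
  }

If the job that writes the dataset and the job that reads it run under different configurations, and a resource disk path from the writer's file is missing or not mounted for the reader, you can end up with read errors like the one above.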
ann_nalinee
Premium Member
Posts: 22
Joined: Wed Sep 17, 2003 12:21 pm
Location: Sydney

Post by ann_nalinee »

Thank you for the replies, guys.

It turned out there were some issues with the hard disk, which we found when we ran Disk Error Checking. After scanning the disk and letting Windows fix the errors itself, the problem seems to be gone.
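
For anyone hitting the same thing, the command-line equivalent of Disk Error Checking is roughly the following, run from an elevated prompt, where D: stands in for whichever volume holds the dataset data files:

  rem read-only scan of the volume
  chkdsk D:

  rem scan and fix file system errors (may need the volume dismounted or a reboot)
  chkdsk D: /f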

Your help and ideas are much appreciated.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Well, that would certainly do it. :wink:

Thanks for posting your resolution.
-craig

"You can never have too many knives" -- Logan Nine Fingers