Type 30 Descriptor Table Full - Windows

Post questions here relating to DataStage Server Edition, covering such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

ecwolf
Charter Member
Posts: 3
Joined: Thu Jun 17, 2004 8:44 am
Location: Edmonton, Alberta, Canada

Type 30 Descriptor Table Full - Windows

Post by ecwolf »

Good Day!!

We are using DataStage 7.5 Server Edition on a Windows platform and we have a number of sequences which are running in parallel.

Recently we have been running into the famous "Unable to allocate Type 30 descriptor, table is full" error. I have been researching the forum and from what I understand, there is a T30FILE setting that needs to be adjusted. I have two questions:

1. From what I've read, this setting can be modified in a UNIX configuration file. Is there a similar configuration file on Windows? If so, where is it? I have not been able to find a definitive answer. If I've missed a post somewhere, please let me know.

2. Is there any way of cleaning out the T30FILE table? We have been doing a lot of development and I fear that this table is filled with old and obsolete entries. Again, any advice would be appreciated.

Thanks and Cheers!!

Eric
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

uvconfig is in the DSEngine directory on Windoze as well. Check it out.

T30FILE is not a true table, but a shared-memory structure, so there are no old and obsolete entries to clean out. (A reboot once a month never hurts, though.) Your issue is too many dynamic hashed files in use simultaneously; just raise the number and you'll be fine. Follow the same directions as on UNIX: get users out, log off all clients, stop DataStage, go to the server command line, modify uvconfig, run uvregen, then reboot the server (a good idea) or else restart DataStage.
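As a rough sketch of those steps on Windows (the install path, service name, and suggested value below are assumptions, not taken from this thread; check your own installation before running anything):

```
REM 1. Get all users out and log off all clients, then stop DataStage
REM    (service names vary by release; "DSRPC Service" is one common name)
net stop "DSRPC Service"

REM 2. In the engine directory, edit uvconfig and raise the T30FILE line, e.g.
REM      T30FILE 512
cd C:\Ascential\DataStage\Engine
notepad uvconfig

REM 3. Regenerate the engine's binary configuration from uvconfig
bin\uvregen

REM 4. Reboot the server (recommended) or restart the services
net start "DSRPC Service"
```

The edited uvconfig has no effect on its own; uvregen rewrites the configuration the engine reads at startup, so the new T30FILE value only takes effect after the restart.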
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

If you don't believe that you are using that many hashed files, remember that most of the Repository tables are also hashed files. Every job you create engenders some more, such as RT_CONFIGnn, RT_STATUSnn and RT_LOGnn. T30FILE sets an upper limit on the number that can be open simultaneously: the default is 200; raise it to 500 or even 1000. Each entry in the table on Windows is only 112 bytes, so even 1000 entries take only about 110 KB of memory.
Entries are removed from the T30FILE memory table when a dynamic hashed file is closed and no other process has it open. The T30FILE table stores the sizing information needed (immediately) by every process to decide whether an update to the hashed file should trigger a split or a merge, and the current modulus used in calculating group addresses with the hashing algorithm. Keeping this information in shared memory ensures its immediacy.
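To make the memory cost concrete, a quick back-of-the-envelope check (a sketch, using the 112-bytes-per-entry figure quoted above):

```shell
# Approximate memory cost of the T30FILE table on Windows,
# assuming ~112 bytes per entry, for a few candidate settings.
for n in 200 500 1000; do
  echo "T30FILE=$n -> $((n * 112)) bytes"
done
```

Even at 1000 entries that is only about 110 KB, so raising the limit is cheap compared with the cost of jobs aborting on the error.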
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
datastage
Participant
Posts: 229
Joined: Wed Oct 23, 2002 10:10 am
Location: Omaha

Post by datastage »

ray.wurlod wrote:If you don't believe that you are using that many hashed files, remember that most of the Repository tables are also hashed files. Every job you create engenders some more, such as RT_CONFIGnn, RT_STATUSnn and RT_LOGnn. T30FILE sets an upper limit on the number that can be open simultaneously: the default is 200, raise it to 500 or even 1000.
In the past I only considered what jobs were running at the point in time the error occurred, but this makes me think: open DS Director sessions could also have an influence, right? Wouldn't they basically be opening the RT_LOGnn files to display in the client window?
Byron Paul
WARNING: DO NOT OPERATE DATASTAGE WITHOUT ADULT SUPERVISION.

"Strange things are afoot in the reject links" - from Bill & Ted's DataStage Adventure
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Yes, the Director has a number of files open, but it won't have several log files open concurrently. Also remember that these file units are shared across all users, so if several Director sessions and running jobs use the same hashed files, they share units.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Director also has RT_CONFIGnnn and RT_STATUSnnn open for the currently selected job, and DS_JOBS and its CATEGORY index.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.