Are there any rules of thumb (ROTs) with respect to hash file performance?
For example, if I have a hash file with the following attributes:
Record length = 180 approx (includes a 100-char varchar name field)
Records - 750,000
Dynamic, Min modulus = 1.
DATA.30 file size: 30 MB approx
OVER.30 file size: 9 MB approx.
Assuming that accesses to this file will have an even spread of key field values (I know, bad assumption; I just want gut feelings here), what sort of speed improvement might I see if the overflow file were reduced or eliminated? I have been reading the manuals to find out more about hash files...
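For what it's worth, the sizes quoted above already give a rough feel for how much of the file lives in overflow. A quick sketch (treating the OVER.30 size as pure overflow, and remembering these figures are approximations):

```python
# Rough overflow share for the dynamic hashed file described above.
# Inputs are the poster's approximate file sizes, not measured values.
data_mb = 30.0   # DATA.30 (primary groups)
over_mb = 9.0    # OVER.30 (overflow)

overflow_share = over_mb / (data_mb + over_mb)
print(f"Overflow holds about {overflow_share:.0%} of the file")  # ~23%
```

Roughly a quarter of the data sitting in overflow means a meaningful fraction of lookups pay the extra sequential-scan cost, which is why the replies below focus on eliminating overflow.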
TIA
Neil
Hash file performance questions
There's been a number of posts on this topic, this one for example. It links to another post containing an excellent example of YARLKBP - Yet Another Rather Lengthy Kenneth Bland Post. Always worth adding to your list of 'Favorites'.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Read this:
viewtopic.php?t=85364
You want no overflow. Every time a row can't be found in the data file, the overflow must be scanned sequentially.
If this is your average hash file size, then I suggest you increase the minimum modulus until the file no longer dynamically resizes and always keeps the same data file size as that minimum. Then bump it by 10%, or whatever you feel is approximately adequate, to keep performance from degrading on abnormally large data runs.
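The advice above can be turned into a back-of-envelope minimum modulus estimate. This is only a sketch: the 2048-byte group size and 80% target load are my assumptions (check the file's actual GROUP.SIZE and tune to taste), not figures from the thread.

```python
import math

# Back-of-envelope minimum modulus for a hashed file, per the advice
# above: size it so the file stops dynamically splitting, then add
# ~10% headroom for abnormally large data runs.
records = 750_000         # from the original post
avg_record_len = 180      # bytes, from the original post
group_size = 2048         # bytes; ASSUMED default, check GROUP.SIZE
target_load = 0.80        # ASSUMED: aim for ~80%-full groups

records_per_group = (group_size * target_load) // avg_record_len   # 9
min_modulus = math.ceil(records / records_per_group)
with_headroom = math.ceil(min_modulus * 1.10)

print(f"Minimum modulus ~{min_modulus}, with 10% headroom ~{with_headroom}")
```

In practice you would round the result to a suitable value and verify with the file-analysis tools before and after resizing, rather than trusting the arithmetic alone.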
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Kia ora. If there is a fairly constant number of rows, then a static hashed file, rather than the default (dynamic), may perform better. You might like to follow that train of thought a little further in your research. The good news is that there are quite a number of (UniVerse) hashed file experts in Aotearoa.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.