We are running DataStage (9.1.2) on a Windows 2008 R2 server.
The server is a Hyper-V guest, currently set up with 8 CPUs and 24 GB of memory.
We have three disks: C (100 GB), D (500 GB), and E (50 GB).
I am looking at some tech notes about node setup and I am a bit confused. Some say you need half as many nodes as CPUs, some say one node per disk, while others say it is a matter of testing.
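Just to frame the trade-off for myself, here is a rough sketch of what those rules of thumb would give on this box (my own arithmetic, not official guidance):

```python
# Rules of thumb mentioned in the tech notes, applied to this server:
# - "half as many nodes as CPUs" -> 8 CPUs / 2 = 4 nodes
# - "one node per disk"          -> 3 disks  = 3 nodes
cpus = 8
data_disks = ["C:", "D:", "E:"]

half_cpu_rule = cpus // 2        # 4 nodes
per_disk_rule = len(data_disks)  # 3 nodes

print(half_cpu_rule, per_disk_rule)
```

So the two heuristics land in roughly the same range here, which is partly why I am unsure which one to follow.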
I have set up the following as a test:
Code:
{
  node "node1"
  {
    fastname "DSSRV01"
    pools ""
    resource disk "C:/IBM_NODE_CONFIG/Datasets" {pools ""}
    resource scratchdisk "C:/IBM_NODE_CONFIG/Scratch" {pools ""}
  }
  node "node2"
  {
    fastname "DSSRV01"
    pools ""
    resource disk "D:/IBM_NODE_CONFIG/Datasets" {pools ""}
    resource scratchdisk "D:/IBM_NODE_CONFIG/Scratch" {pools ""}
  }
  node "node3"
  {
    fastname "DSSRV01"
    pools ""
    resource disk "E:/IBM_NODE_CONFIG/Datasets" {pools ""}
    resource scratchdisk "E:/IBM_NODE_CONFIG/Scratch" {pools ""}
  }
}
It all works just fine. I have also tested with the default two-node configuration, but I cannot see any difference (then again, I might be checking things the wrong way).
I have also read about setting up nodes with several resource disks, like:
{
  node "dev1"
  {
    fastname "etltools-dev"
    pools ""
    resource disk "/data/etltools-tutorial/d1" { }
    resource disk "/data/etltools-tutorial/d2" { }
    resource scratchdisk "/data/etltools-tutorial/temp" { }
  }
  node "dev2"
  {
    fastname "etltools-dev"
    pools ""
    resource disk "/data/etltools-tutorial/d1" { }
    resource scratchdisk "/data/etltools-tutorial/temp" { }
  }
}
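To double-check what a config file actually defines before handing it to the engine, I knocked together this rough Python sketch (my own script, not an IBM tool, and it assumes the simple layout shown above rather than being a full parser):

```python
import re

def summarize_config(text):
    """Count node definitions, resource disks, and scratch disks in an
    APT configuration file. Rough sanity check only: it matches the
    simple quoted-string layout used in the examples above."""
    nodes = re.findall(r'node\s+"([^"]+)"', text)
    disks = re.findall(r'resource disk\s+"([^"]+)"', text)
    scratch = re.findall(r'resource scratchdisk\s+"([^"]+)"', text)
    return {"nodes": nodes, "disks": disks, "scratch": scratch}

sample = '''
{
  node "node1"
  {
    fastname "DSSRV01"
    pools ""
    resource disk "C:/IBM_NODE_CONFIG/Datasets" {pools ""}
    resource scratchdisk "C:/IBM_NODE_CONFIG/Scratch" {pools ""}
  }
}
'''
print(summarize_config(sample))
```

At least that way I can see at a glance how many nodes and resource disks each test configuration really has.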
Any comments on best practice?