value files

A forum for discussing DataStage<sup>®</sup> basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

karthi_gana
Premium Member
Posts: 729
Joined: Tue Apr 28, 2009 10:49 pm

value files

Post by karthi_gana »

All,

I have read about value files here, but I can't quite work out their exact usage.

Do we need to create the value files manually? If yes,

a) Do we need to enter the path of that file name?
b) What is the format of that file?

I have both parameters and value files in my parameter set. Which one will take precedence?

I have never used value files in any of my projects; I have only just heard of this concept. So it may be too basic a question, but please excuse me and explain it in detail.
Karthik
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

Do you need to create the value files manually? No, but you can.

a) No, just the filename. Value Files are stored in <IS Project Directory>/ParameterSets/<ParameterSetName>/<ValueFileName>
b) Text file containing ParameterName=Value lines of text. Example:

Code:

SOURCE_FOLDER=/u01/source_data
TARGET_FOLDER=/u01/target_data

Both parameters and value files: the values stored in the parameter set definition are "default values" (the "as defined" option at runtime), similar to default values for job parameters, and the precedence rules are the same. If you specify a value file name, its values take precedence when the file is present; otherwise you get the default values defined in the parameter set definition.
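As a concrete sketch of (a) and (b) above (the project directory and parameter set names below are made up; the real path depends on your Information Server install):

```shell
# Hypothetical project and parameter set names; substitute your own.
PSET_DIR="$HOME/demo_project/ParameterSets/DirectoryPaths"
mkdir -p "$PSET_DIR"

# A value file is plain text: one ParameterName=Value line per parameter.
cat > "$PSET_DIR/DEV" <<'EOF'
SOURCE_FOLDER=/u01/source_data
TARGET_FOLDER=/u01/target_data
EOF
```

At runtime you would supply just the file name (DEV here), not the full path.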

You already know what parameter sets are. Value files contain text which sets the values for those parameters to something other than the defined default values. For example, you could create three value files containing parameter values for running test vs. QA vs. production jobs. Just supply the appropriate value file name at job runtime.
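The three-environment setup could be sketched like this (parameter set name, paths, and folder layout are all illustrative):

```shell
# Hypothetical parameter set "DirectoryPaths" with per-environment value files.
PSET_DIR="$HOME/demo_project/ParameterSets/DirectoryPaths"
mkdir -p "$PSET_DIR"
for env in test qa prod; do
  # One value file per environment: TEST, QA, PROD.
  printf 'SOURCE_FOLDER=/u01/%s/source_data\nTARGET_FOLDER=/u01/%s/target_data\n' \
    "$env" "$env" > "$PSET_DIR/$(echo "$env" | tr 'a-z' 'A-Z')"
done
# At runtime, pass just the value file name, e.g.:
#   dsjob -run -param DirectoryPaths=PROD MyProject MyJob
```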

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.
PhilHibbs
Premium Member
Posts: 1044
Joined: Wed Sep 29, 2004 3:30 am
Location: Nottingham, UK
Contact:

Post by PhilHibbs »

My suggestion - and I don't know if this is normal practice or not - is to always use value files. If you have a Parameter Set that is common to all your jobs, and you just want the values to be the same for all of them, then still make a value set file called "Default" or "Common" or some such, and supply that to the job in the shell script that runs dsjob. That way you can override the values by just creating a new set and supplying that instead, and your different environments can have different config (e.g. commit frequency) without having to re-compile your jobs.
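A minimal sketch of such a wrapper script (the project, job, and parameter set names are assumptions; the dsjob call is only echoed when the client isn't on the PATH):

```shell
# Hypothetical wrapper: always supply a value file name to dsjob,
# so environments differ only in which file they pass.
PROJECT="MyProject"
JOB="MyJob"
VALUE_FILE="${1:-Default}"   # e.g. ./run_job.sh QA to override

CMD="dsjob -run -param CommonParams=$VALUE_FILE $PROJECT $JOB"
if command -v dsjob >/dev/null 2>&1; then
  $CMD
else
  echo "would run: $CMD"
fi
```

Switching environments then means changing one argument to the wrapper, with no recompile.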

In answer to "how to create the files", a file is automatically created whenever you add a new named entry in the Values tab.

What I usually do for large Parameter Sets is this:

1. Add the new Value Set entry name with blank values in DataStage
2. Close the Parameter Set
3. Go to the directory in a shell
4. Copy an existing file over the top of the new file
5. Open the Parameter Set in DataStage and edit the Value Set entry

This avoids having to copy-and-paste loads of defaults from another entry into the new entry. Just overwrite the file with a set that is close to the set that you want and then customise the new set.
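The shell part of those steps might look like this (all names are hypothetical, and the first two lines only simulate what DataStage would have created; note that `sed -i` as written is GNU sed):

```shell
# Simulated setup: PROD is an existing value file, NEWENV is the blank
# entry DataStage just created when you added the new Value Set name.
PSET_DIR="$HOME/demo_project/ParameterSets/BigParamSet"
mkdir -p "$PSET_DIR"
printf 'COMMIT_ROWS=2000\nDB_SCHEMA=prod\n' > "$PSET_DIR/PROD"
: > "$PSET_DIR/NEWENV"

# Overwrite the blank file with the closest existing set, then tweak
# only the values that differ before finishing the edit in DataStage.
cp "$PSET_DIR/PROD" "$PSET_DIR/NEWENV"
sed -i 's/^DB_SCHEMA=.*/DB_SCHEMA=newenv/' "$PSET_DIR/NEWENV"
```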

Also, back up your value set files!
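Backing them up can be as simple as archiving the ParameterSets directory (the layout below is made up; point this at your real project directory):

```shell
# Hypothetical project layout with one parameter set and one value file.
PROJ_DIR="$HOME/demo_project"
mkdir -p "$PROJ_DIR/ParameterSets/Example"
echo 'COMMIT_ROWS=2000' > "$PROJ_DIR/ParameterSets/Example/Default"

# Archive every value file under the project in one tarball.
tar -czf "$HOME/parametersets_backup.tar.gz" -C "$PROJ_DIR" ParameterSets
```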
Phil Hibbs | Capgemini
Technical Consultant