SORT produces an INFO message but no rows passed through

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.


AGStafford
Premium Member
Posts: 30
Joined: Thu Jan 16, 2003 2:51 pm

SORT produces an INFO message but no rows passed through

Post by AGStafford »

I have a Server Shared Container that performs a sort.
The Sort is producing an "INFO" message for every row processed, and the message itself carries no information:
330876905 INFO Fri Jul 11 03:07:12 2014
jLandMF025_POD_ALL_200_TNCB101A_1File.DID_LandMF025_2_POD__fzb_TNCB101_2.scServerROWCOUNTGeneric.Sort_By_GROUP_KEYS: %s
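
To illustrate how a log line can end in a literal "%s", here is a minimal Python sketch (an analogy only, not DataStage internals): with C-style message templates, the placeholder survives verbatim whenever the logger is never handed the value to substitute. The template text is borrowed from the log above; everything else is an assumption.

import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# With a matching argument, the %s placeholder is substituted as expected.
logging.info("Sort_By_GROUP_KEYS: %s", "some row detail")
# -> INFO Sort_By_GROUP_KEYS: some row detail

# With the argument omitted, the template is logged untouched, so the
# literal "%s" shows up in the log -- the same symptom as above.
logging.info("Sort_By_GROUP_KEYS: %s")
# -> INFO Sort_By_GROUP_KEYS: %s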


And no rows are getting past the sort:
Stage: scServerROWCOUNTGeneric.Sort_By_GROUP_KEYS, 368222 rows input
Stage start time=2014-07-11 02:38:28, end time=2014-07-11 03:10:24, elapsed=00:31:56
Link: Write_To_SORT, 368222 rows
Link: Write_To_AGGREGATOR, 0 rows
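
For context on why the output link can legitimately sit at zero for a long time: a sort is a blocking operation, so nothing can be written to Write_To_AGGREGATOR until every row has arrived via Write_To_SORT. A hypothetical Python model of that behavior (an assumption about the mechanics, not DataStage code):

def blocking_sort(rows):
    # Every row is buffered (and counted on the "input link") before
    # any sorting or output can happen.
    buffered = []
    for row in rows:
        buffered.append(row)
    print(f"input link: {len(buffered)} rows, output link so far: 0 rows")
    buffered.sort()      # the whole dataset sorts in one pass
    return buffered      # only now would the "output link" see rows

sample = ['"F5"', '"ALL"', '"F4"', '"ALL"', '"FF"']
out = blocking_sort(sample)
print(f"output link: {len(out)} rows")

If the stage stalls or the job dies while still buffering or sorting, the monitor would show the full count on the input link and exactly 0 on the output link, which matches the figures above.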


The data being sorted is from a flat file which looks like:
"F5"
"ALL"
"F4"
"ALL"
"FF"
"ALL"
"FF"
"ALL"
"FF"
"ALL"
"09"
"ALL"
"FF"
"ALL"
"FF"
"ALL"
"FF"


Does anyone have any idea why I am getting so many INFO messages rather than WARNING messages, and why absolutely no rows are getting through the sort?

The job runs during the busiest part of the schedule, when we are maxing out the CPU. I cannot tell about memory.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

A couple of questions. Is this a Server Shared Container in a Parallel job? And is this behavior repeatable, meaning does it always produce no output from the aggregator?
-craig

"You can never have too many knives" -- Logan Nine Fingers
AGStafford
Premium Member
Posts: 30
Joined: Thu Jan 16, 2003 2:51 pm

Post by AGStafford »

The shared container is run in a Server job.
AGStafford
Premium Member
Posts: 30
Joined: Thu Jan 16, 2003 2:51 pm

Post by AGStafford »

It ran this way for 10 months (5 days/week), so the behavior was repeatable.
However, when I manually ran the job this morning it did not have the same problem.
I will see whether the problem reappears in the production run tonight.

I did not make any changes to the job.
I did clear out the log (it was 1.1 GB) before I reran it manually.