File Connector stage: Java-related errors

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

TNZL_BI
Premium Member
Posts: 24
Joined: Mon Aug 20, 2012 5:15 am
Location: NZ

File Connector stage: Java-related errors

Post by TNZL_BI »

Hi All,

I am using a File Connector stage to write to the HDFS ecosystem, connecting via the HttpFS method. However, when I run the job I am getting a timeout error.

The exact details are given below:

File_Connector_31,0: com.ascential.e2.common.CC_Exception: An exception occurred: java.net.SocketTimeoutException: Read timed out
    at com.ibm.iis.cc.filesystem.FileSystemLogger.createCCException(FileSystemLogger.java:194)
    at com.ibm.iis.cc.filesystem.FileSystem.process(FileSystem.java:743)
    at com.ibm.is.cc.javastage.connector.CC_JavaAdapter.run(CC_JavaAdapter.java:443)
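
For context, this looks like the stock JDK read timeout. Below is a minimal sketch of the kind of HttpFS REST call involved, just to show where the two timeout knobs normally live; the host, port, path and user are placeholders I made up, and the connector's real internals may differ:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpFsTimeoutCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host; HttpFS listens on port 14000 by default.
        URL url = new URL("http://namenode.example.com:14000/webhdfs/v1/tmp"
                + "?op=LISTSTATUS&user.name=hdfs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(10000); // limit on establishing the TCP connection, ms
        conn.setReadTimeout(60000);    // "Read timed out" fires when this elapses
        conn.setRequestMethod("GET");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}

In a call like this, "Read timed out" means the connection was established but the server did not send a response within the read window.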


Has anyone encountered such an issue before? Any inputs here will be really helpful.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

A search online for "java.net.SocketTimeoutException" overwhelmingly suggests you increase the timeout value. Where do you do that? I have no idea but I would imagine someone here does.
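
If the stage itself doesn't expose a timeout property, the JDK has process-wide default timeouts for its built-in URL protocol handlers that might be worth experimenting with. Whether the connector's HTTP client honours them is an assumption on my part, so treat this as a sketch:

public class RaiseJdkHttpTimeouts {
    public static void main(String[] args) {
        // Process-wide defaults (milliseconds) for the JDK's built-in URL
        // protocol handlers. Equivalent to passing
        //   -Dsun.net.client.defaultConnectTimeout=30000
        //   -Dsun.net.client.defaultReadTimeout=120000
        // on the java command line.
        System.setProperty("sun.net.client.defaultConnectTimeout", "30000");
        System.setProperty("sun.net.client.defaultReadTimeout", "120000");
        System.out.println("Defaults raised for subsequent URLConnection calls.");
    }
}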
-craig

"You can never have too many knives" -- Logan Nine Fingers
TNZL_BI
Premium Member
Posts: 24
Joined: Mon Aug 20, 2012 5:15 am
Location: NZ

Post by TNZL_BI »

Thanks Chulett.

That's what I am thinking too, but I'm not sure where to change it. Some sources online suggest logging on to the WebSphere console and changing the JVM settings there. Let me start looking in that direction.
TNZL_BI
Premium Member
Posts: 24
Joined: Mon Aug 20, 2012 5:15 am
Location: NZ

Post by TNZL_BI »

Hi All, just wanted to close this thread by saying that I can now connect to Hadoop using the File Connector stage.

There were two main issues. The Hadoop cluster had one NameNode and two DataNodes. Ideally, opening the firewall for the NameNode and the corresponding NameNode port should suffice. However, the job was failing, reporting that it couldn't connect to the DataNode port, which was strange. We went ahead and opened all ports on all nodes (NameNode and DataNodes). Doing this fixed the issue and the job started running fine; the sketch below shows why the DataNode ports can matter.
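
For anyone hitting the same thing: plain WebHDFS (unlike a pure HttpFS gateway on port 14000, which proxies everything itself) does writes in two steps, and the second step goes straight to a DataNode, which would explain why the DataNode ports mattered for us. A rough sketch of the protocol (hostnames, ports and paths are placeholders, not our actual setup; 9870 is the default NameNode HTTP port on Hadoop 3.x, 50070 on 2.x):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class WebHdfsTwoStepWrite {
    public static void main(String[] args) throws Exception {
        // Step 1: ask the NameNode where to write. WebHDFS answers with a
        // 307 redirect whose Location header points at a DataNode, so the
        // DataNode's HTTP port must be reachable from the client too.
        URL nameNode = new URL("http://namenode.example.com:9870/webhdfs/v1"
                + "/tmp/test.txt?op=CREATE&user.name=hdfs&overwrite=true");
        HttpURLConnection step1 = (HttpURLConnection) nameNode.openConnection();
        step1.setRequestMethod("PUT");
        step1.setInstanceFollowRedirects(false); // capture the redirect ourselves
        String dataNodeUrl = step1.getHeaderField("Location");
        step1.disconnect();

        // Step 2: send the actual bytes to the DataNode we were redirected to.
        HttpURLConnection step2 =
                (HttpURLConnection) new URL(dataNodeUrl).openConnection();
        step2.setRequestMethod("PUT");
        step2.setDoOutput(true);
        try (OutputStream out = step2.getOutputStream()) {
            out.write("hello hdfs".getBytes("UTF-8"));
        }
        System.out.println("DataNode responded: " + step2.getResponseCode()); // expect 201
    }
}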

Also note that it is important to know which Hadoop distribution you are working with. In our case it was Apache Hadoop with neither SSL nor Kerberos enabled, which actually made it a straightforward process.
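
To illustrate what "straightforward" means here: with simple (non-Kerberos) authentication the caller's identity is just a query parameter, whereas a Kerberized cluster would require a SPNEGO handshake first. A sketch, again with a made-up host and user:

import java.net.HttpURLConnection;
import java.net.URL;

public class SimpleAuthStatusCheck {
    public static void main(String[] args) throws Exception {
        // Simple auth: identity travels as the user.name query parameter.
        // A Kerberized cluster would instead need a SPNEGO handshake, e.g.
        // via Hadoop's
        // org.apache.hadoop.security.authentication.client.AuthenticatedURL.
        URL url = new URL("http://namenode.example.com:14000/webhdfs/v1/tmp"
                + "?op=GETFILESTATUS&user.name=hdfs"); // placeholder host/user
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("HTTP " + conn.getResponseCode());
    }
}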