Spark jobs (e.g. Analytics) appear to hang when run on remote worker nodes

In a distributed HBase and Spark installation, you may encounter a scenario where jobs submitted to Spark hang and eventually time out.

A possible cause of this is that the HBase parameter "hbase.zookeeper.quorum" is not configured properly in the hbase-site.xml file. Ensure that it points to the correct hostname of the HQuorumPeer server (if HBase is managing its own ZooKeeper instance), or to the hostname of your externally managed ZooKeeper instance.
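For reference, the property in hbase-site.xml might look like the following sketch. The hostname zk-host-1.example.com is a placeholder; substitute the actual HQuorumPeer or ZooKeeper host for your environment:

```xml
<!-- hbase-site.xml: points HBase clients at the ZooKeeper quorum.
     "zk-host-1.example.com" is a placeholder hostname. -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk-host-1.example.com</value>
</property>
```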

Furthermore, ensure that this file has been properly pushed out to the /opt/interset/hbase/conf directory of all HBase region servers in the environment.
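One way to push the file out is with scp. This is only a sketch, assuming passwordless SSH is configured; region1 and region2 are placeholder hostnames for your region servers:

```shell
# Copy the updated hbase-site.xml to each region server.
# "region1" and "region2" are placeholder hostnames.
for host in region1 region2; do
  scp /opt/interset/hbase/conf/hbase-site.xml \
      "$host":/opt/interset/hbase/conf/hbase-site.xml
done
```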

Once this has been done, restart the HBase Region Server instances and resubmit the job (i.e. rerun the Analytics job).
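On each region server, the restart can be performed with the hbase-daemon.sh script shipped with HBase. The path below assumes the /opt/interset/hbase install location mentioned above:

```shell
# Restart the region server process on this host.
/opt/interset/hbase/bin/hbase-daemon.sh stop regionserver
/opt/interset/hbase/bin/hbase-daemon.sh start regionserver
```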
