Not able to launch Spark with YARN

Hi,

I am getting an error while launching Spark with YARN, on both the 1.6 and 2.x.x versions.

Is there any ongoing problem?

Exception in thread "main" java.lang.Exception: When running with master 'yarn' either HADOOP_CONF_DIR or YARN_CONF_DIR must be set in the environment.
at org.apache.spark.deploy.SparkSubmitArguments.validateSubmitArguments(SparkSubmitArguments.scala:251)
at org.apache.spark.deploy.SparkSubmitArguments.validateArguments(SparkSubmitArguments.scala:228)
at org.apache.spark.deploy.SparkSubmitArguments.&lt;init&gt;(SparkSubmitArguments.scala:109)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:114)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Traceback (most recent call last):
File "/usr/spark1.6/python/pyspark/shell.py", line 43, in <module>
sc = SparkContext(pyFiles=add_files)
File "/usr/spark1.6/python/pyspark/context.py", line 112, in __init__
SparkContext._ensure_initialized(self, gateway=gateway)
File "/usr/spark1.6/python/pyspark/context.py", line 245, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway()
File "/usr/spark1.6/python/pyspark/java_gateway.py", line 94, in launch_gateway
raise Exception("Java gateway process exited before sending the driver its port number")
Exception: Java gateway process exited before sending the driver its port number
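From the first exception it looks like Spark cannot locate the Hadoop/YARN configuration, and the Java gateway then exits before PySpark can connect to it. As far as I understand, exporting HADOOP_CONF_DIR (or YARN_CONF_DIR) to point at the cluster's configuration directory before launching should satisfy that check. A minimal sketch of what I mean, assuming a typical layout where the config lives under /etc/hadoop/conf (the actual path on a given cluster may differ):

export HADOOP_CONF_DIR=/etc/hadoop/conf   # directory holding core-site.xml, yarn-site.xml, etc. (assumed path)
/usr/spark1.6/bin/pyspark --master yarn   # same PySpark launch that produced the traceback above

With HADOOP_CONF_DIR set, spark-submit reads the ResourceManager address from the YARN config files instead of failing the validation above. Is setting that variable really all that is required here, or is something else going on?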