Unable to load mysql table to spark Dataframe

scala> val df_mysql = sqlContext.read.format("jdbc").option("driver","com.mysql.jdbc.Driver").option("url","cxln2.c.thelab-240901.internal").option("dbtable","retail_db.employee").option("user","sqoopuser").option("password","NHkkP876rp").load()
java.lang.NullPointerException
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:72)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.&lt;init&gt;(JDBCRelation.scala:114)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:45)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:330)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:125)
… 48 elided


Please note: the retail_db.employee table exists in MySQL.
mysql> select * from employee;
+-------+------+------+
| empid | sal  | year |
+-------+------+------+
|     1 |  100 | 2001 |
|     1 |  200 | 2002 |
|     1 |  300 | 2003 |
|     2 |  100 | 2001 |
|     3 |  100 | 2001 |
|     3 |  100 | 2002 |
|     3 |  100 | 2003 |
|     2 |  200 | 2002 |
|     2 |  300 | 2002 |
|     2 |  300 | 2003 |
+-------+------+------+
10 rows in set (0.00 sec)

I found the fix for the above problem.

Follow:
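For reference, the usual cause of this NullPointerException in `JDBCRDD.resolveTable` is that the `url` option is a bare hostname rather than a full JDBC URL: the driver needs the `jdbc:mysql://` scheme (and typically the port and database) to resolve the table. A sketch of the corrected read, assuming MySQL is listening on the default port 3306 and the host, database, and credentials from the question are otherwise correct:

```scala
// Sketch, assuming default MySQL port 3306.
// The jdbc:mysql:// scheme is required; a bare hostname makes the
// driver fail to resolve the table, which surfaces as an NPE in Spark.
val df_mysql = sqlContext.read
  .format("jdbc")
  .option("driver", "com.mysql.jdbc.Driver")
  .option("url", "jdbc:mysql://cxln2.c.thelab-240901.internal:3306/retail_db")
  .option("dbtable", "employee")   // database is already in the URL
  .option("user", "sqoopuser")
  .option("password", "NHkkP876rp")
  .load()

df_mysql.show()
```

Alternatively, keep `dbtable` as `retail_db.employee` and drop the database from the URL (`jdbc:mysql://cxln2.c.thelab-240901.internal:3306`); either form works, the essential fix is the `jdbc:mysql://` prefix.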