PySpark read.csv not working in Spark 2.0.1

I am trying to read a CSV file in JupyterLab with Spark 2.0.1 using the code below, but I am getting an error. Please help:

from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import *

Spark_Session = (SparkSession.builder
    .enableHiveSupport()
    .master("local")
    .appName("Spark Sql Trial")
    .getOrCreate())

taxisub_df = Spark_Session.read.csv(
    path="/home/biswajithalder127715/Sample_data/Data/LabData/nycweather.csv",
    header="true",
    inferSchema="true")
taxisub_df.show()

Error message:

Py4JJavaError Traceback (most recent call last)
in ()
3 from pyspark import SparkContext
4 from pyspark.sql.types import *
----> 5 taxisub_df=Spark_Session.read.csv(path="/home/biswajithalder127715/Sample_data/Data/LabData/nycweather.csv",header="true",inferSchema="true")

Py4JJavaError: An error occurred while calling o180.csv.
: java.lang.RuntimeException: Multiple sources found for csv (org.apache.spark.sql.execution.datasources.csv.CSVFileFormat, com.databricks.spark.csv.DefaultSource15), please specify the fully qualified class name.
at scala.sys.package$.error(package.scala:27)
at org.apache.spark.sql.execution.datasources.DataSource.lookupDataSource(DataSource.scala:170)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:79)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:79)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:325)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:149)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:413)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:237)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:745)
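For context on what the top of the traceback means: `DataSource.lookupDataSource` resolves a short name like `csv` to a registered provider class, and the `RuntimeException` fires when two classes claim the same short name (here the built-in Spark 2.x CSV source and the old `com.databricks:spark-csv` package). A minimal pure-Python sketch of that resolution behaviour (the function and dictionary below are hypothetical illustrations, not Spark APIs):

```python
def lookup_data_source(short_name, providers):
    """Resolve a data-source short name to a single provider class.

    `providers` maps each short name to the fully qualified class names
    that registered it. Mirrors the behaviour in the traceback: zero
    matches fails, one match resolves, several matches raise and force
    the caller to use a fully qualified class name instead.
    """
    matches = providers.get(short_name, [])
    if not matches:
        raise ValueError("Failed to find data source: %s" % short_name)
    if len(matches) > 1:
        raise RuntimeError(
            "Multiple sources found for %s (%s), please specify the "
            "fully qualified class name." % (short_name, ", ".join(matches)))
    return matches[0]


# Both CSV implementations named in the error register the short name "csv",
# which reproduces the conflict:
registered = {
    "csv": [
        "org.apache.spark.sql.execution.datasources.csv.CSVFileFormat",
        "com.databricks.spark.csv.DefaultSource15",
    ]
}
```

As the error text itself suggests, the usual ways out are to pass the fully qualified class name (e.g. `spark.read.format("org.apache.spark.sql.execution.datasources.csv.CSVFileFormat")`) or to remove the conflicting `spark-csv` jar from the classpath, since Spark 2.x ships its own CSV reader.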

I looked for the CSV jar in the /usr/spark2.0.1/jars folder and found two files:
opencsv-2.3.jar
super-csv-2.2.0.jar

Is that OK?