Hi everyone,
Just to follow up: I have "kind of" solved the problem. I say "kind of" because something strange still happens, and I would appreciate any advice. Thank you very much. The code looks like this:
library(sparklyr)  # load the sparklyr package

# Point the Spark driver at the MySQL JDBC connector jar, which I put
# under spark_home/jars/. The config must be built before connecting
# and passed to spark_connect(), otherwise it has no effect.
jdbc.config = spark_config()
jdbc.config$`sparklyr.shell.driver-class-path` = "/Users/ya/Downloads/soft/spark-2.4.3-bin-hadoop2.7/jars/mysql-connector-java-8.0.16.jar"

# Connect sparklyr to Spark (this only works for me with JDK 8)
sc = spark_connect(master = "local",
                   spark_home = "/Users/ya/Downloads/soft/spark-2.4.3-bin-hadoop2.7",
                   config = jdbc.config)

# Read the student1 table from the learnsql MySQL database over JDBC
query1 = spark_read_jdbc(sc, name = 'student1',
                         options = list(url = 'jdbc:mysql://localhost:3306/learnsql',
                                        user = 'root',
                                        password = 'ya',
                                        dbtable = 'student1'))
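In case it is relevant, one variant I have seen suggested but have not verified myself is to name the JDBC driver class explicitly in the options list, so that Java's DriverManager does not have to discover it on its own ('com.mysql.cj.jdbc.Driver' is the class shipped with Connector/J 8.x):

```r
# Untested variant: specify the driver class explicitly so DriverManager
# does not need to auto-detect it. 'com.mysql.cj.jdbc.Driver' is the
# driver class bundled in mysql-connector-java 8.x.
query1 = spark_read_jdbc(sc, name = 'student1',
                         options = list(url = 'jdbc:mysql://localhost:3306/learnsql',
                                        user = 'root',
                                        password = 'ya',
                                        dbtable = 'student1',
                                        driver = 'com.mysql.cj.jdbc.Driver'))
```

I do not know whether this removes the need for the second run, so I would be glad to hear from anyone who has tried it.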
The strange part is that spark_read_jdbc() has to be run TWICE for this to work. The first run fails with "Error: java.sql.SQLException: No suitable driver" (among other output), and the second run succeeds. Does anybody know why sparklyr behaves like this?
Thank you very much.
Best regards,
YA