I'm not sure how to do this in SparkR, but in sparklyr you would do something like the following. Note that you need a JDBC driver properly installed on your cluster; this example uses the MySQL Connector/J, but there are many others that might suit your needs.
sc <- spark_connect(
  master = "local",
  # Point Spark at the JDBC driver jar (MySQL Connector/J here)
  config = list(
    "sparklyr.shell.driver-class-path" = "~/Downloads/mysql-connector-java-5.1.41/mysql-connector-java-5.1.41-bin.jar"
  )
)

# Register the "person" table in Spark under the name "person_jdbc"
spark_read_jdbc(sc, "person_jdbc", options = list(
  url = "jdbc:mysql://localhost:3306/sparklyr",
  user = "root", password = "<password>",
  dbtable = "person"))
Instead of passing a SQL statement as the dbtable option, I would rather register each table you need to access in Spark individually and then join them as needed.
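For example, a sketch of that approach might look like this; the second table ("address") and the join key ("person_id") are hypothetical placeholders for your own schema:

```r
library(sparklyr)
library(dplyr)

# Register each source table separately in Spark
person <- spark_read_jdbc(sc, "person_jdbc", options = list(
  url = "jdbc:mysql://localhost:3306/sparklyr",
  user = "root", password = "<password>",
  dbtable = "person"))

address <- spark_read_jdbc(sc, "address_jdbc", options = list(
  url = "jdbc:mysql://localhost:3306/sparklyr",
  user = "root", password = "<password>",
  dbtable = "address"))

# The join is translated to Spark SQL and runs inside Spark, not in R
person %>% inner_join(address, by = "person_id")
```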
Additionally, you can connect to data sources directly from R using the odbc package, without going through Spark at all. If you are an RStudio Pro customer, you can also consider the RStudio Professional Drivers.
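A minimal sketch of the direct route via DBI + odbc; the driver name, credentials, and port are placeholders you would replace with your own ODBC setup:

```r
library(DBI)

# Open an ODBC connection to MySQL directly, with no Spark involved
con <- dbConnect(odbc::odbc(),
                 Driver   = "MySQL ODBC 5.3 Unicode Driver",  # name from your odbcinst.ini
                 Server   = "localhost",
                 Database = "sparklyr",
                 UID      = "root",
                 PWD      = "<password>",
                 Port     = 3306)

# Pull the table into an ordinary R data frame
person <- dbReadTable(con, "person")

dbDisconnect(con)
```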