Sparklyr: options for spark_write_parquet

Spark has options to write out files by partition, bucket, and sort order.

Python:
df.write.partitionBy("date").parquet("py.par")

Scala:
df.write.format("....").bucketBy(8, "j", "k").sortBy("j", "k").saveAsTable("sc.par")

How are these options specified in sparklyr without using custom invoke code?
Thanks for any insight!

# This works:
spark_write_parquet(spark_df, '/tmp/b2.par', partition_by = 'ts')

# More generic form, but this did not partition:
spark_write_parquet(spark_df, '/tmp/b3.par', options = list(partitionBy = 'ts'))

# This raises an error:
# java.lang.Exception: No matched method found for class org.apache.spark.sql.DataFrameWriter.option
spark_write_parquet(spark_df, '/tmp/b4.par', partition_by = 'ts', options = list(bucketBy = list(10, 'pkey')))
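
As far as I can tell, spark_write_parquet() only exposes partition_by; there is no bucketing argument, and entries in options = list(...) are forwarded to DataFrameWriter.option(key, value), which expects simple string values - passing a list is why no matching method is found. Also note that Spark itself only supports bucketBy()/sortBy() together with saveAsTable(), not with a path-based parquet() write. So a bucketed write seems to need invoke after all; here is a sketch under those assumptions (sc is a live connection, and the column and table names are made up for illustration):

```r
library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")           # assumed local connection
spark_df <- sdf_copy_to(sc, iris, "iris_tbl")   # example data

# Bucketed, sorted parquet write via the underlying DataFrameWriter.
# bucketBy/sortBy require saveAsTable(), so this creates a managed table.
spark_df %>%
  spark_dataframe() %>%
  invoke("write") %>%
  invoke("format", "parquet") %>%
  invoke("partitionBy", list("Species")) %>%                # hypothetical partition column
  invoke("bucketBy", 8L, "Sepal_Length", list()) %>%        # 8 buckets on one column
  invoke("sortBy", "Sepal_Length", list()) %>%
  invoke("saveAsTable", "iris_bucketed")                    # hypothetical table name
```

The trailing list() arguments stand in for the Scala varargs parameters of bucketBy/sortBy; I have not verified this against every sparklyr version, so treat it as a starting point rather than a guaranteed recipe.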