Conversion from JSON to CSV through R involves two steps: reading and writing. Reading is generally the hard step, and the linked page shows one way to do it in its second half:
library(sergeant)
library(tidyverse)
db <- src_drill("localhost")
scorecards <- tbl(db, "dfs.tmp.`/college-scorecard/*.json.gz`")
Getting this to work requires installing and starting Apache Drill first, which the first half of the page covers. To be clear, jsonlite::stream_in is a simpler approach; Drill just scales more flexibly.
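For comparison, the jsonlite route looks roughly like this. This is a sketch, not the linked page's code: the file name is illustrative, and stream_in expects newline-delimited JSON (one record per line):

```r
library(jsonlite)

# stream_in() parses NDJSON record by record from a connection,
# so the raw text never has to sit in memory all at once.
# gzfile() decompresses the .gz on the fly.
scorecards <- stream_in(gzfile("MERGED1996_97_PP.json.gz"))
```

If the JSON is a single large array rather than one record per line, fromJSON() on the whole file is the fallback, at the cost of holding everything in memory.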
Once data is read into R, saving it as a CSV is comparatively straightforward, and can be as simple as a call to write.csv, or better, readr::write_csv or data.table::fwrite.
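The writing side, assuming the data is now an ordinary data frame (if stream_in returned nested columns, jsonlite::flatten() first, since CSV writers reject list columns):

```r
# Base R works, but quotes and converts more slowly on big tables:
write.csv(scorecards, "scorecards.csv", row.names = FALSE)

# readr and data.table are faster and have saner defaults:
readr::write_csv(scorecards, "scorecards.csv")
data.table::fwrite(scorecards, "scorecards.csv")
```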
The top of the linked page suggests another possibility: using Drill to both read and write, without touching R at all. (You could still run the SQL from R if you like.) The example there goes the other direction, from CSV to JSON, but Drill handles JSON to CSV just as well; it only requires writing SQL that inverts the following:
0: jdbc:drill:> ALTER SESSION SET `store.format`='json';
0: jdbc:drill:> CREATE TABLE dfs.tmp.`/1996-97` AS SELECT * FROM dfs.root.`/Users/bob/Data/CollegeScorecard_Raw_Data/MERGED1996_97_PP.csv`;
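Inverted, the session would look roughly like this (the output table name is illustrative; the JSON path matches the tbl() call above):

```sql
0: jdbc:drill:> ALTER SESSION SET `store.format`='csv';
0: jdbc:drill:> CREATE TABLE dfs.tmp.`/scorecard-csv` AS SELECT * FROM dfs.tmp.`/college-scorecard/*.json.gz`;
```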
The advantage of this approach is that Drill is quite clever about memory management. You can still subset along the way, if you like.
The disadvantage is that, while the APIs are well designed, this goes beyond basic R and requires some system setup. If jsonlite can handle everything and this is a one-time job, Drill is overkill.