This is very interesting. I'm using the qs format, which is also much faster than the (admittedly low baseline of) CSV files. The upside is that it can store R objects natively, such as lists, which is very useful; the downside is that you have to load the whole file. Parquet might be better for loading individual columns from a larger file. I ran a quick, off-the-cuff test of parquet against qs on a file with 500 million rows and 20 columns, and qs (with the slowest compression setting) produced a file one third the size of write_parquet (with default settings). I haven't tested speed yet.
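For anyone who wants to reproduce a comparison like this, here is a minimal sketch. The qs and arrow packages are assumed to be installed, the data frame is a stand-in for real data, and actual size ratios will depend heavily on the data's contents:

```r
library(qs)
library(arrow)

# Toy data frame standing in for a real dataset
df <- data.frame(x = rnorm(1e6), y = sample(letters, 1e6, replace = TRUE))

# qs with the "archive" preset: the slowest, highest-compression setting
qsave(df, "data.qs", preset = "archive")

# parquet with default settings
write_parquet(df, "data.parquet")

# Compare on-disk sizes in bytes
file.size("data.qs")
file.size("data.parquet")
```

Note that qs serializes the whole object, while parquet's columnar layout is what enables reading a subset of columns later.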
Have you thought about using data.table's fread() for the CSVs?
That would be a better baseline, I'd say. You can supply fread() with shell commands such as sed and awk, which means you can select columns and filter rows before the data ever enters R. This makes CSV files much more interesting again for large data. I have yet to see people use fread's command-line capabilities to their full potential. I should probably write a blog post myself where I do that.
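As a quick illustration of the idea, a sketch with a hypothetical file big.csv whose third column holds a country code; the cmd argument pipes the file through the shell before fread parses the result, so the filtering never touches R's memory:

```r
library(data.table)

# awk keeps the header row (NR==1) plus rows where column 3 equals "DE";
# fread then parses only what the shell command emits, and `select`
# restricts parsing to the named columns.
dt <- fread(cmd = "awk -F',' 'NR==1 || $3==\"DE\"' big.csv",
            select = c("id", "value"))
```

For a large file where only a small fraction of rows survives the filter, this can make the difference between fitting in RAM and not.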
Also, there is the HDF5 format. I think a comparison covering more formats would be beneficial.
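One relevant property for such a comparison: HDF5, like parquet, supports partial reads. A minimal sketch using the Bioconductor package rhdf5 (assumed installed; the file and dataset names are placeholders):

```r
library(rhdf5)

# Write a 10x10 matrix into a new HDF5 file under the dataset name "mydata"
h5createFile("data.h5")
h5write(matrix(rnorm(100), 10, 10), "data.h5", "mydata")

# Read back only the first five rows: the `index` argument selects a
# slice per dimension (NULL = all), without loading the full dataset
slice <- h5read("data.h5", "mydata", index = list(1:5, NULL))
```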
Also, use the bench package for benchmarking, so you can see memory allocations alongside timings.
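For example, bench::mark() reports a mem_alloc column next to the timings. A sketch, assuming data.csv and data.qs already exist on disk:

```r
library(bench)

# check = FALSE because the two readers return different object types
# (a data.table versus whatever was serialized into the qs file)
res <- bench::mark(
  csv = data.table::fread("data.csv"),
  qs  = qs::qread("data.qs"),
  check = FALSE
)

# Median time and total memory allocated per expression
res[, c("expression", "median", "mem_alloc")]
```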