Does your computer have several hundred GB of RAM? From a cursory look, most of the memory in that JSON is used by actual data (not just boilerplate), so even if you streamed it, I don't see how you could keep the contents of the whole file in memory.
Assuming you are working on a desktop computer, I see two possibilities: either you are trying to recover a few specific values (so you could just stream the file, picking up what is needed and discarding the rest), or you want access to the entire dataset.
Entire dataset
In the second case, I think the easiest approach is to use a database like SQLite or DuckDB. It requires downloading the entire file and storing it on disk, but then you can run targeted queries to extract anything you need. You can find many ways to do it here.
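For instance, with the duckdb package (assumed installed), DuckDB can query a JSON file directly on disk, so only the query result comes into R's memory. A minimal sketch, using a tiny made-up JSON file in place of the real download (the column names here are invented):

```r
library(DBI)

# tiny stand-in for the real file you would have downloaded
tmp <- tempfile(fileext = ".json")
writeLines('[{"code": "A", "rate": 10.5}, {"code": "B", "rate": 12.0}]', tmp)

con <- dbConnect(duckdb::duckdb())

# recent DuckDB versions autoload the json extension when
# read_json_auto() is used; only matching rows are returned to R
res <- dbGetQuery(con, sprintf(
  "SELECT code, rate FROM read_json_auto('%s') WHERE rate > 11", tmp))
res

dbDisconnect(con, shutdown = TRUE)
```

The point is that the filtering happens inside the database engine, not in R, so the full dataset never has to fit in RAM.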
Extracting values on-the-fly
It seems to me that the file is not in a format like ndjson, where each line can be interpreted independently. So yes, you can read parts of the file, but parsing the JSON might be a problem since you will end up with unmatched { or }.
If the data were in ndjson, you could use jsonlite::stream_in(). But when I try it on this link, I get an error similar to that one.
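For reference, this is how stream_in() would work if the file were valid ndjson. A minimal sketch using a small made-up file: the handler is called on each batch, so you can filter as you go and never hold the whole dataset in memory at once.

```r
library(jsonlite)

# made-up ndjson file: one JSON object per line
tmp <- tempfile(fileext = ".ndjson")
writeLines(c('{"id": 1, "rate": 10.5}',
             '{"id": 2, "rate": 12.0}'), tmp)

# collect only the records we care about, batch by batch
kept <- list()
stream_in(file(tmp), handler = function(df) {
  kept[[length(kept) + 1]] <<- df[df$rate > 11, ]
}, pagesize = 1, verbose = FALSE)

result <- do.call(rbind, kept)
result
```

With a realistic pagesize (the default is 500), peak memory use is bounded by the batch size rather than the file size.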
An alternative is to use scan() to directly read the text:
url <- "https://d25kgz5rikkq4n.cloudfront.net/cost_tra..."
mytext <- scan(gzcon(url(url)),
what = character(),
sep = "\n",
skip = 1L,
n = 2L)
nchar(mytext)
#> [1] 1124838 98110
substr(mytext[[1]], 1, 9)
#> [1] "{\"negotia"
myjson <- jsonlite::fromJSON(mytext[[2]])
#> Error: parse error: trailing garbage
#> ditional_information":""}]}]},
#> (right here) ------^
Created on 2022-07-12 by the reprex package (v2.0.1)
But you still have a problem when trying to parse the JSON line by line. However, if you know what you are looking for, you can use sep = "," and inspect each JSON fragment as it passes by, saving it only if needed. That would take some programming.
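A rough sketch of that idea, using a small local gzipped file in place of the remote URL (the "billing_code" key and its value are made up; adjust the pattern to whatever you are actually after):

```r
# stand-in for the gzipped download
tmp <- tempfile(fileext = ".gz")
writeLines('{"a":{"billing_code":"123","x":1},"b":{"billing_code":"456","x":2}}',
           gzfile(tmp))

# open the connection explicitly so each scan() call resumes where
# the previous one stopped
con <- gzfile(tmp, "r")

matches <- character()
repeat {
  # read a batch of comma-delimited fragments; quote = "" stops scan()
  # from treating the JSON double quotes as string delimiters
  chunk <- scan(con, what = character(), sep = ",", quote = "",
                n = 1000, quiet = TRUE)
  if (length(chunk) == 0) break
  # keep only fragments mentioning the value of interest
  matches <- c(matches, grep('"billing_code":"123', chunk,
                             fixed = TRUE, value = TRUE))
}
close(con)
matches
```

The fragments you keep will still have unmatched braces, so you would need some extra logic to stitch complete records back together, but this keeps memory use constant regardless of file size.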