The example treatment is not done within R; it is performed in a terminal using standard *nix shell commands.
This command operates on the file data.json in the ./import subdirectory of the current directory, splitting it into numbered pieces of 50,000 lines each, with names beginning import/tweets_. Rather than displaying anything on the screen, split writes each piece to its own file.
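The original command is not reproduced here, but a minimal sketch of this kind of split looks like the following (the file names and the tiny 2-line chunk size, standing in for 50,000, are assumptions for illustration):

```shell
# Create a small sample file standing in for import/data.json
mkdir -p import
printf '%s\n' line1 line2 line3 line4 line5 > import/data.json

# Split into pieces of 2 lines each (50,000 in the real case);
# split writes files named import/tweets_aa, import/tweets_ab, ...
split -l 2 import/data.json import/tweets_

ls import/tweets_*
```

With 5 input lines and 2 lines per piece, this produces tweets_aa and tweets_ab with two lines each and tweets_ac with the remaining line.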
The next command operates on the file import/tweets_da (within a subdirectory of the directory from which the shell is operating), extracting the first line with head -1 and then piping the result, using the | operator (the equivalent of %>% in R), into a grep command that searches for sequences of upper- and lower-case letters, numbers, and the - and _ characters. Without more, this simply displays to the screen.
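A sketch of that pipeline follows; the sample file contents and the exact grep pattern are assumptions, since the original command is not shown here:

```shell
# Sample piece file standing in for import/tweets_da
mkdir -p import
printf '{"id":"abc-123_X","text":"hi"}\n{"id":"def"}\n' > import/tweets_da

# Take only the first line, then pull out runs of letters, digits,
# hyphen and underscore; -o prints each match on its own line
head -1 import/tweets_da | grep -oE '[A-Za-z0-9_-]+'
```

Here grep acts only on the single line that head passes along, which is why this is cheap even when the underlying file is huge.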
For the reasons given in the linked post, preprocessing the data outside R may be required unless very large RAM resources are available. As noted, the data arrive as a single-line file. An alternative approach is to use awk, sed, or a custom parser in flex/bison, C/C++, Golang, Haskell, or another language that reads from stdin and writes to stdout; because such tools act in a streaming fashion, they largely surmount the difficulties with large files.
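As a sketch of the streaming idea (the field name "lang" and the output file name are assumptions for illustration), an awk one-liner reads stdin a line at a time and writes to stdout, so memory use is bounded by the longest line rather than the file size:

```shell
# Streaming filter: reads stdin line by line, writes matching lines to
# stdout; nothing is ever held in memory beyond the current line
printf '%s\n' '{"lang":"en","text":"a"}' '{"lang":"fr","text":"b"}' |
awk '/"lang":"en"/ { print }' > english_only.json

cat english_only.json
```

The same shape works with sed or a compiled filter: put the program between two pipes (or redirections) and let the operating system feed it the file a chunk at a time.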
If none of this makes sense, it is probably due to unfamiliarity with the Linux/macOS programming environment. That environment is well worth learning, but it isn't something that can be picked up in a few hours.
As a possible alternative, look into the {ndjson} package for "streaming" json, which addresses the same issue in a similar way but from within R.