In this case, I'd highly recommend taking a look at the existing packages that wrap the various agency APIs:
There's a handy post, Scraping Responsibly with R, linked below, that walks through the nitty-gritty of checking whether a site permits scraping; one rule of thumb is that if there's an API, you should avoid scraping:
If you do need to scrape a site, the best tool for the job depends on how the site is generated; rvest is probably the most common one you'll see, and it works well for static HTML.
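To give a feel for rvest, here's a minimal sketch that parses a static page. The HTML string and the `#rates` id are made up for illustration; in practice you'd pass a URL to `read_html()` (ideally after confirming the site allows it, e.g. via its robots.txt):

```r
library(rvest)

# Stand-in for a real page; with a live site you'd use read_html("https://...")
page <- minimal_html('
  <table id="rates">
    <tr><th>Agency</th><th>Value</th></tr>
    <tr><td>FCC</td><td>42</td></tr>
  </table>
')

# html_table() converts a <table> element into a data frame
tbl <- page |> html_element("#rates") |> html_table()
print(tbl)
```

The same `html_element()`/`html_elements()` + extractor pattern (`html_text2()`, `html_attr()`, `html_table()`) covers most static-site scraping jobs.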
hrbrmstr has a great collection of posts using various stacks to scrape sites with dynamic content:
https://rud.is/b/category/web-scraping/
Edit: Also a hrbrmstr-recommended link: