Even if I simply copy/paste the url incrementing &page=1 to &page=2, the site automatically redirects to &page=1. Only if I click the button to page 2, will the URL say &page=2 and show results accordingly
Then the site is using JavaScript to (at the very least) enforce a navigation path through the data.
It's a common anti-webscraping technique.
If you want to automate scraping the data you'll need to do it in a way which mimics an actual user moving through the pages.
The way to do this in R is with RSelenium. It's a fairly involved process to get up and running, and though I've done it a bunch and led a few seminars on it, I'm unfortunately not in a position to walk you through it myself.
The best I can do is point you to a few online resources, and if you have smaller questions along the way, someone (maybe even I) will probably be able to point you in the right direction.
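To give you a sense of the shape of it, here's a minimal sketch of driving the pagination the way a user would. It assumes a Selenium server is already running on port 4444; the URL and the button's CSS selector are placeholders you'd replace with the real ones from the site.

```r
library(RSelenium)

# Connect to a Selenium server already running on localhost:4444
remDr <- remoteDriver(remoteServerAddr = "localhost",
                      port = 4444L,
                      browserName = "chrome")
remDr$open()

remDr$navigate("https://example.com/results?page=1")  # placeholder URL

# Click the "page 2" button the way a user would, instead of
# rewriting the query string (which the site redirects back).
btn <- remDr$findElement(using = "css selector", "a.pagination-next")
btn$clickElement()

# Give the page a moment to render, then grab the resulting HTML
Sys.sleep(2)
html <- remDr$getPageSource()[[1]]

remDr$close()
```

From there you can hand `html` to rvest or whatever parser you prefer; the point is that the navigation itself happens inside a real browser session.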
Scraping an adversarial website can be very fun if you like this sort of thing. Know that at times it's more art than science: you'll need to probe quite a bit to see how they try to thwart you, and develop a response to each trick.
Things to watch out for:
- Popups which render the rest of the page inaccessible until you dismiss them.
- Elements given randomly generated names or IDs, or dummy elements inserted into the page structure, so you can't reliably target them with a standardized XPath or CSS selector.
- Pages which load a static shell that then loads a completely dynamically generated page to display.
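One hedge against the last two: match on visible text rather than randomized classes or IDs, and poll until the element actually appears. This is just a sketch, assuming `remDr` is an open RSelenium `remoteDriver` and "Next" is the button's real label:

```r
# Poll for an element whose visible text matches, up to a timeout.
# Text is usually more stable than randomized class names, and the
# polling loop also covers dynamically loaded content.
find_by_text <- function(remDr, text, timeout = 10) {
  xpath <- sprintf("//*[normalize-space(text()) = '%s']", text)
  deadline <- Sys.time() + timeout
  while (Sys.time() < deadline) {
    hits <- remDr$findElements(using = "xpath", xpath)
    if (length(hits) > 0) return(hits[[1]])
    Sys.sleep(0.5)
  }
  stop("Element with text '", text, "' not found within timeout")
}

next_btn <- find_by_text(remDr, "Next")  # placeholder label
next_btn$clickElement()
```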
You may need to spend a lot of time inspecting pages. I recommend you browse using Chrome and get used to inspecting elements (Ctrl + Shift + I) and maybe brush up on some JavaScript yourself so you can inject code in the Developer Console (Ctrl + Shift + J).
You'll also have a MUCH, MUCH, MUCH easier time if you can watch your "headless" browser at work, so make sure you read about using VNC with RSelenium.
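One common setup for this: the `-debug` variants of the official Selenium Docker images bundle a VNC server (ports and the default password below are the usual defaults, but check the image's docs for your version):

```shell
# Selenium on 4444 for RSelenium, VNC on 5900 for watching the browser
docker run -d -p 4444:4444 -p 5900:5900 selenium/standalone-chrome-debug

# Then point any VNC client at localhost:5900 (default password: "secret")
# while RSelenium drives the browser through port 4444.
```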