The problem here looks to be the website itself. If you try each URL in succession, you end up with this:
It looks like you need to supply login credentials and ensure that your connection from R has permission to view the pages you are accessing. This can be done in a number of ways. I hate to point you down the path of RSelenium, but that is an option. Since this is static content, there is probably a way to set a header / cookie manually in your program to pass your login credentials along. I would also look into the xml2 package for related functions.
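A minimal sketch of the header / cookie idea, using httr. The URL, cookie name, and value here are placeholders -- after logging in through your browser, copy the real session cookie out of its dev tools and substitute it in:

```r
library(httr)
library(xml2)

# Placeholder cookie: replace the name/value with the real session cookie
# your browser sends after you log in.
auth <- set_cookies(sessionid = "YOUR_SESSION_COOKIE")

# Then attach it to each request (URL is a placeholder):
# resp <- GET("https://example.com/page1.html", auth)
# page <- read_html(content(resp, as = "text", encoding = "UTF-8"))
```

If the site uses HTTP basic auth instead of cookies, `authenticate("user", "pass")` in place of `set_cookies(...)` does the same job.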
Since there are only 4 pages, you could obviously download the HTML files yourself and then access them locally. That doesn't help if you want this process to be automated / reproducible, though.
In general, I encourage you to keep your URL labels with the data they came from, as that would have made it clear you were hitting problems on the successive pages. The other approach is to try a handful of URLs manually (e.g. fetchData(1), fetchData(4)) and see what you get before firing off the ol' ldply.
EDIT: I basically just said the same thing as @danr. Oops 