Scraping 400 pages using rvest and purrr

Thanks for the help. I am trying to apply text sentiment analysis to the editorials, which is why I want to download that many articles. No scraping for the sake of scraping.

The polite option worked well. With plain rvest, even when I use Sys.sleep() I keep getting 403 errors after a while. Both approaches are slow, and polite is the slower of the two, but it's still good to have.
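For anyone else reading, a minimal sketch of the polite workflow I mean (the CSS selector here is hypothetical, just for illustration):

```r
library(polite)
library(rvest)

# bow() reads the site's robots.txt and establishes a crawl delay;
# scrape() then fetches the page while respecting that delay.
session <- bow("https://www.dawn.com/archive/2019-05-22")
page    <- scrape(session)

# Hypothetical selector: extract headline links from the archive page.
titles <- html_text(html_elements(page, "h2 a"))
```

Because scrape() throttles itself per robots.txt, you don't need your own Sys.sleep() calls, which is likely why it avoids the 403s.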

One more thing I am trying to grapple with: how can I supply a two-digit number to the paste0() function? I have some websites where I need to fill in three missing pieces, e.g. the date at the end of this URL: https://www.dawn.com/archive/2019-05-22

I would want to supply the year, month, and day parts separately.
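To answer my own question partially: the trick seems to be zero-padding the month and day with sprintf("%02d", ...) before handing them to paste0(). A minimal sketch (make_url is just a name I made up):

```r
# Build a Dawn archive URL from separate year, month, and day values.
# sprintf("%02d", x) zero-pads single-digit numbers, so 5 becomes "05".
make_url <- function(year, month, day) {
  paste0("https://www.dawn.com/archive/",
         year, "-",
         sprintf("%02d", month), "-",
         sprintf("%02d", day))
}

make_url(2019, 5, 22)
#> [1] "https://www.dawn.com/archive/2019-05-22"

# Alternatively, generate a run of daily URLs directly from Date objects,
# since format() handles the zero-padding for you:
dates <- seq(as.Date("2019-05-01"), as.Date("2019-05-22"), by = "day")
urls  <- paste0("https://www.dawn.com/archive/", format(dates, "%Y-%m-%d"))
```

The vector of URLs can then be fed straight to purrr::map() for the actual scraping.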