The rvest package's own guide, "Harvesting the web with rvest", covers this topic in detail.
Gathering and cleaning all of this data will take some effort, but here is example code that extracts the text of the first profile:
library(rvest)
library(dplyr)

# Read the search-results page for São Paulo
URL_test <- read_html('https://www.doctoralia.com.br/pesquisa?q=&loc=S%C3%A3o%20Paulo')

# Pull the text of the first .media-body element (the first profile card)
CSS_pull2 <- URL_test %>%
  html_node('.media-body') %>%
  html_text()

# Replace tabs and newlines with spaces
gsub('\t', ' ', gsub('\n', ' ', CSS_pull2))
The following retrieves every .media-body node on the page, returning one text string per profile. Again, the text is not clean:
# Select all profile cards, extract their text, and trim leading/trailing whitespace
CSS_pull <- html_nodes(URL_test, '.media-body')
CSS_text <- html_text(CSS_pull, trim = TRUE)
gsub('\t', ' ', gsub('\n', ' ', CSS_text))
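Rather than nesting one gsub() call per character, a single regex can collapse every run of whitespace (tabs, newlines, repeated spaces) into one space. A minimal sketch in base R, using a made-up string standing in for scraped profile text (the raw value below is hypothetical, not actual Doctoralia output):

```r
# Hypothetical scraped text with embedded newlines and tabs
raw <- "Dr. Example\n\tCardiologist\n\tSão Paulo"

# Collapse any run of whitespace into a single space, then trim the ends
clean <- trimws(gsub('\\s+', ' ', raw))
clean
```

This scales to the vector returned by html_text() as well, since gsub() and trimws() are vectorized; stringr::str_squish() does the same thing in one call if you prefer the tidyverse.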