What Quarto does is compile formatting markup, interleaved with output from code chunks, into a static HTML/CSS/JS site — one big page whose content doesn't change between regenerations. All the host server does is take the HTTP request and serve the content unchanged from the time it was originally uploaded.
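Concretely, a Quarto source file is just markdown with embedded code chunks; rendering runs the chunks once and freezes their output into the page. A minimal sketch (the title and chunk contents are made up for illustration):

````markdown
---
title: "Dictionary"
format: html
---

Some formatting markup, then a code chunk whose output is
baked into the page at render time:

```{python}
print("This output is computed once, at render, not per request.")
```
````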
This contrasts with frameworks like Python's Django, which use an SQL database as a backend so content can be read continuously as it is updated, without regenerating the whole site. Query operations are translated into SQL, sent to the database server, and the results are brought back, reformatted, and served up. Quarto can be set up to do searches and table sorts, but those are "internal" to the document as it comes off the knitting machine.
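A minimal sketch of that dynamic pattern, using Python's standard `sqlite3` module rather than Django itself (the table and column names are made up for illustration):

```python
import sqlite3

# Toy "backend": an in-memory SQL database standing in for the ORM layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (term TEXT, definition TEXT)")
conn.execute("INSERT INTO entries VALUES ('quarto', 'a static-site generator')")
conn.commit()

def serve_entry(term):
    """Handle a 'request': query the live database and format HTML on the fly."""
    row = conn.execute(
        "SELECT term, definition FROM entries WHERE term = ?", (term,)
    ).fetchone()
    if row is None:
        return "<p>Not found</p>"
    return f"<p><b>{row[0]}</b>: {row[1]}</p>"

# Each request reflects the current database state -- no site regeneration.
print(serve_entry("quarto"))
```

Updating a row changes what the next request serves, which is exactly what a static render cannot do without re-knitting.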
For serving dynamic web content, R has other tools, such as the R/Apache Integration project, which embeds the R interpreter inside the Apache 2.0 web server. That allows web applications to be written in R, handling HTTP requests and responses directly.
If the update frequency is relatively low, just re-knitting the updated document and posting it covers most of what's required. The one qualification is a possible hiccup with the client-side browser cache: if it isn't invalidated, visitors keep seeing the stale copy instead of requesting the updated content. It's been about 25 years since I've had to worry about this, and the HTTP ecosystem has likely developed tools to address cache invalidation, but it's something to keep in mind.
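One common workaround is content-hashed filenames ("cache busting"): every regenerated asset gets a new name derived from its contents, so a browser's cached copy can never be wrongly reused. A minimal sketch, assuming a simple hash-in-the-filename naming scheme (the file here is a throwaway demo):

```python
import hashlib
import tempfile
from pathlib import Path

def hashed_name(path: Path) -> str:
    """Return a cache-busting filename such as style.3a7f2c1b.css,
    derived from a short hash of the file's content."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()[:8]
    return f"{path.stem}.{digest}{path.suffix}"

# Demo with a throwaway file: changing the content changes the published
# name, so a returning browser cannot be served a stale cached copy.
with tempfile.TemporaryDirectory() as tmp:
    css = Path(tmp) / "style.css"
    css.write_text("body { color: red }")
    name_v1 = hashed_name(css)
    css.write_text("body { color: blue }")
    name_v2 = hashed_name(css)
```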
In sum, something as simple as reading in your dictionary as CSV and spitting it out as an HTML table should work.
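That last step could look something like this in a Python code chunk (the two-column CSV layout and its contents are assumptions for illustration; in practice a table helper from your rendering toolchain would do the same job):

```python
import csv
import io
from html import escape

def csv_to_html_table(csv_text: str) -> str:
    """Render CSV text as an HTML table; the first row is treated as headers."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    parts = ["<table>"]
    parts.append("  <tr>" + "".join(f"<th>{escape(h)}</th>" for h in header) + "</tr>")
    for row in body:
        parts.append("  <tr>" + "".join(f"<td>{escape(c)}</td>" for c in row) + "</tr>")
    parts.append("</table>")
    return "\n".join(parts)

# Example: a tiny two-column dictionary (contents are made up).
sample = "term,definition\nquarto,a static-site generator\n"
print(csv_to_html_table(sample))
```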