Pandoc Gets Stuck and Consumes High Memory

Hi, everyone

I've been trying to convert an R Markdown file into HTML and PDF. The HTML conversion works fine, and I had a little R code to embed/view an HTML file in the R Markdown document using the htmltools library.

With this one line, the pandoc process gets stuck while converting to PDF and starts eating more memory every second. It went up to 8.5 GB before I killed the pandoc process.

htmltools::includeHTML("foo.html")

This code is of course irrelevant for the PDF output, and I expected pandoc to simply ignore it instead of trying to convert it.
I thought this might be useful to report.

Are you inserting this code into an R code chunk?

You could conditionally skip this chunk when creating PDF files.
Something like:

```{r, eval=knitr::is_html_output()}
htmltools::includeHTML("foo.html")
```
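For reference, a minimal sketch of how knitr's output-format helper behaves; outside of knitting you can test it interactively by passing a format name explicitly (assuming a reasonably recent knitr):

```r
# is_html_output() detects whether the current output format is HTML-based.
# During knitting it picks up the format automatically; at the console you
# can pass the pandoc format name yourself.
knitr::is_html_output("html")   # TRUE, so the chunk runs for HTML output
knitr::is_html_output("latex")  # FALSE, so the chunk is skipped for PDF
```

Inside a document the helper needs no arguments, so `eval=knitr::is_html_output()` in the chunk header is enough.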

See details and more examples here:

Hope it helps


Perfect, thanks. I'm very new to bookdown.

This is an R Markdown & knitr trick.

I advise you to look at the R Markdown intro if you are new to this. The books are great resources.

Then, if you are interested in bookdown:

And the last one above.


OK, this one is off-topic, but this limitation is not cool.


@moderators

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.

If you have a query related to it or one of the replies, start a new topic and refer back with a link.