I don't know the right answer for you, but I've been through some similar processes and can share a few observations.
The way I've incrementally ported SQL into dplyr is to first make sure I understand the logic of the SQL before changing anything. If the SQL has natural logical breaks, I put each "related lump" of SQL in its own file and call it in a parameterized way using glue_sql from the glue package. That leaves most of my extraction code feeding into tibbles via SQL, and I write my reporting bits in RMarkdown.
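A minimal sketch of that first step, assuming a SQLite connection and a hypothetical `sql/monthly_sales.sql` file (the file name and the `start_date` parameter are just illustrations):

```r
library(DBI)
library(RSQLite)
library(glue)

con <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")

# Read one "related lump" of SQL from its own file, then fill in
# parameters; glue_sql() quotes values appropriately for the connection,
# so this is safe against injection in a way that paste() is not.
sql <- glue::glue_sql(
  paste(readLines("sql/monthly_sales.sql"), collapse = "\n"),
  start_date = "2023-01-01",
  .con = con
)

# The result lands in a data frame, ready to become a tibble.
result <- DBI::dbGetQuery(con, sql)
```

The payoff is that the SQL files stay runnable on their own (you can paste them into a database console to debug), while the R side only supplies parameters.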
So at first pass maybe my code is 90% SQL and 10% R. That's a good start. Then I look for design patterns in all my SQL. If there's a lot of repeated logic, I consider writing a function that abstracts it away. One easy win I've repeated a few times is a function that does ALL the joins for a given star schema and returns a lazy tibble pointing at the joined-up result; that cuts the preamble from every query. I then pass that big lazy tibble through a series of filter statements to get it down to a reasonable size, maybe do some summarizing, and then collect. From there I incrementally refactor, moving things into functions and tying them together with dplyr pipes.
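The star-schema helper might look something like this sketch, assuming a dbplyr-backed connection and hypothetical table and column names (`fact_sales`, `dim_date`, `dim_product`, and their keys are made up for illustration):

```r
library(dplyr)
library(dbplyr)

# Do ALL the joins for the star schema once, in one place.
# Returns a lazy tbl: nothing is pulled from the database yet,
# so every downstream query starts from this without the join preamble.
sales_star <- function(con) {
  tbl(con, "fact_sales") %>%
    left_join(tbl(con, "dim_date"),    by = "date_key") %>%
    left_join(tbl(con, "dim_product"), by = "product_key")
}

# Filters and summaries are translated to SQL and pushed to the
# database; collect() is the only step that materializes rows in R.
monthly <- sales_star(con) %>%
  filter(year == 2023) %>%
  group_by(month, category) %>%
  summarise(revenue = sum(amount, na.rm = TRUE), .groups = "drop") %>%
  collect()
```

Because everything before `collect()` is lazy, the database does the heavy lifting and only the small summarized result crosses the wire.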
I used to have a "boil the ocean" approach where I'd try to fully refactor and solve everything at once. As I've aged, I've shifted to incremental improvement: keeping the code running while refactoring it bit by bit is usually (but not always) the better model.
Good luck!