The SGD approach to solving SVMs is fairly new -- classically you solve the (dual) quadratic programming problem, and it wasn't until somewhat recently (the best-known example is probably Pegasos, Shalev-Shwartz et al. 2007) that people popularized minimizing the hinge loss directly with gradient descent. Strictly speaking the hinge loss isn't differentiable at the hinge, so you step along a subgradient.
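Concretely, the primal problem and the resulting per-example update look like this (my notation; $\eta$ is the learning rate, $\lambda$ the regularization strength):

$$\min_{w}\ \frac{\lambda}{2}\|w\|^2 + \frac{1}{n}\sum_{i=1}^{n}\max\bigl(0,\ 1 - y_i\, w^\top x_i\bigr)$$

with the subgradient step on example $i$:

$$w \leftarrow \begin{cases} (1-\eta\lambda)\,w + \eta\, y_i x_i & \text{if } y_i\, w^\top x_i < 1 \\ (1-\eta\lambda)\,w & \text{otherwise.}\end{cases}$$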
I'm not sure what the actual tradeoffs are between the two methods in terms of computational time / efficiency. I imagine something fancier like ADMM, or a dedicated QP solver such as SMO (what libsvm uses), beats plain SGD on small-to-medium problems. My guess is the advantage of SGD is that it scales to large datasets, you can do early stopping, and you can fit the SVM inside a neural net framework as just another model trained with hinge loss. Given that the advantage isn't entirely clear, it's not surprising that there isn't an SGD SVM implementation in R.
I have a naive NumPy implementation of the SGD approach if you want to translate it into R, but again, I imagine this would be more for didactic purposes than anything practical.
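For reference, a minimal sketch of that kind of implementation (this is my own toy version, not a polished solver; the function name and hyperparameters are arbitrary) looks like:

```python
import numpy as np

def sgd_svm(X, y, lam=0.01, lr=0.1, epochs=20, seed=0):
    """Linear SVM fit by SGD on the L2-regularized hinge loss.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns weights w and intercept b.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        # visit examples in a fresh random order each epoch
        for i in rng.permutation(n):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:
                # subgradient of lam/2 * ||w||^2 + max(0, 1 - margin)
                w -= lr * (lam * w - y[i] * X[i])
                b += lr * y[i]
            else:
                # only the regularizer contributes here
                w -= lr * lam * w
    return w, b
```

Translating it to R should be mostly mechanical (the inner loop maps onto plain vector arithmetic), but a real implementation would at least want a decaying learning rate schedule, which this sketch omits.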