Stochastic Gradient Descent on Custom Functions

I am working with the R programming language.

I am trying to see if there are any popular implementations for performing Stochastic Gradient Descent on custom-defined functions.

For instance, here is an example of using Gradient Descent to optimize a custom function (using the well-established "pracma" library):

# define function:

Rastrigin <- function(x) {
    return(20 + x[1]^2 + x[2]^2 - 10 * (cos(2*pi*x[1]) + cos(2*pi*x[2])))
}

# run gradient descent:

library(pracma)

> steep_descent(c(1, 1), Rastrigin)

$xmin
[1] 0.9949586 0.9949586

$fmin
[1] 1.989918

$niter
[1] 3

Now, I am trying to run Stochastic Gradient Descent on this same function. I found a package that allows for Stochastic Gradient Descent (the sgd package on RDocumentation), but it seems to be more suited to objective functions arising from pre-existing statistical models. I also looked for popular variants of Stochastic Gradient Descent such as ADAGRAD or RMSPROP, but there does not seem to be any straightforward way to apply them to custom-defined functions.

For instance - suppose I wanted to run Stochastic Gradient Descent on the "Rastrigin" function that I defined above: can someone please show me how to do this?
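For context, here is the sort of thing I have in mind: a minimal hand-rolled SGD-style loop (just a sketch, not a library API). The gradient is estimated numerically with pracma::grad, and noise is injected into the evaluation point to make the updates stochastic; the learning rate, noise level, and iteration count below are arbitrary choices.

```r
library(pracma)  # for the numerical gradient grad()

Rastrigin <- function(x) {
    return(20 + x[1]^2 + x[2]^2 - 10 * (cos(2*pi*x[1]) + cos(2*pi*x[2])))
}

sgd <- function(f, x0, lr = 0.001, noise = 0.05, iters = 500) {
    x <- x0
    for (i in seq_len(iters)) {
        # evaluate the numerical gradient at a randomly perturbed point;
        # the perturbation is what makes the gradient estimate "stochastic"
        g <- grad(f, x + rnorm(length(x), sd = noise))
        x <- x - lr * g  # standard (stochastic) gradient descent update
    }
    list(xmin = x, fmin = f(x))
}

set.seed(1)  # the updates are random, so fix the seed for reproducibility
sgd(Rastrigin, c(1, 1))
```

With noise = 0 this reduces to plain gradient descent and converges to the same local minimum that steep_descent found above.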

Thanks!

Check out the CRAN Task View on Optimization; several of the packages listed there handle stochastic optimization.
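If the goal is stochastic optimization of the Rastrigin function rather than SGD specifically, base R's stats::optim already ships a stochastic, derivative-free method: simulated annealing ("SANN"). A quick sketch (the seed and the maxit value are arbitrary choices):

```r
Rastrigin <- function(x) {
    return(20 + x[1]^2 + x[2]^2 - 10 * (cos(2*pi*x[1]) + cos(2*pi*x[2])))
}

set.seed(123)  # "SANN" is stochastic, so fix the seed for reproducibility
res <- optim(par = c(1, 1), fn = Rastrigin, method = "SANN",
             control = list(maxit = 20000))
res$par    # candidate minimizer (the global minimum is at c(0, 0))
res$value  # objective value at that point
```

Unlike gradient-based methods, "SANN" only uses function values, so it can escape the local minima that trap steep_descent.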


Thank you so much for your reply! There are so many choices for stochastic optimization - do you know which one would be best suited for this problem? Thank you so much!
