How to use optim in R

A friend of mine asked me the other day how she could use the function optim in R to fit data. Of course there are functions for fitting data in R and I wrote about this earlier. However, she wanted to understand how to do this from scratch using optim.

The function optim provides algorithms for general-purpose optimisation, and the documentation is perfectly reasonable, but I remember that it took me a little while to get my head around how to pass data and parameters to optim. Thus, here are two simple examples.

I start with a linear regression by minimising the residual sum of squares and then discuss how to carry out a maximum likelihood estimation in the second example.

Minimise residual sum of squares

I start with an x-y data set that I believe has a linear relationship, so I'd like to fit y against x by minimising the residual sum of squares.
dat <- data.frame(x = c(1, 2, 3, 4, 5, 6),
                  y = c(1, 3, 5, 6, 8, 12))
Next, I create a function that calculates the residual sum of squares of my data against a linear model with two parameters. Think of y = par[1] + par[2] * x.
min.RSS <- function(data, par) {
  # residual sum of squares of the linear model y = par[1] + par[2] * x
  with(data, sum((par[1] + par[2] * x - y)^2))
}
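As a quick sanity check, I can evaluate min.RSS at a guessed set of parameters, say an intercept of 0 and a slope of 2:
min.RSS(dat, c(0, 2))
## [1] 11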
The function optim minimises a function by varying its parameters. The first argument of optim is the set of parameters I'd like to vary, par in this case; the second argument is the function to be minimised, min.RSS. The tricky bit is to understand how to apply optim to your data. The solution is the ... argument of optim, which allows me to pass other arguments through to min.RSS, here my data. Therefore I can use the following statement:
result <- optim(par = c(0, 1), min.RSS, data = dat)
# I find the optimised parameters in result$par
# the minimised RSS is stored in result$value
result
## $par
## [1] -1.267 2.029
##
## $value
## [1] 2.819
##
## $counts
## function gradient
## 89 NA
##
## $convergence
## [1] 0
##
## $message
## NULL
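By default optim uses the Nelder-Mead algorithm, which doesn't require gradients; that is why the gradient count above is NA. A gradient-based method can be selected via the method argument, and for a smooth problem like this one it should arrive at essentially the same estimates, e.g.:
optim(par = c(0, 1), min.RSS, data = dat, method = "BFGS")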
Let me plot the result:
plot(y ~ x, data = dat)
abline(a = result$par[1], b = result$par[2], col = "red")
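Of course, lm fits the same model directly by least squares, so its coefficients provide a handy cross-check; they should agree with result$par up to optim's convergence tolerance:
# intercept and slope should be close to result$par
coef(lm(y ~ x, data = dat))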
