argument matches multiple formal arguments

When I run the code below, the following error pops up.

constrOptim(theta = para.cDCC, f = QLL.cDCC, grad = grad.lik, ui = ui, ci = ci, mu = 1e-05,
outer.iterations = 20, outer.eps = 0.01,
control = list(maxit = 10, reltol = 1e-05),
sigma = sigma, r = rt, t = t, S = S)

Error in R(theta, theta, ...) :
argument 5 matches multiple formal arguments

As far as I know, this means that argument 5 (ci in this case) could not be matched unambiguously. However, there is no other argument whose name starts with ci, and ci is fully specified.
(I googled a lot, but haven't been able to identify the root cause)

It would be appreciated if somebody could help me with the following:

  1. how to interpret the error message. I am not sure exactly what is wrong or what the cause is.
  2. how to fix it.

Even answering only 1) above would be welcome.

Hi!

To help us help you, could you please prepare a reproducible example (reprex) illustrating your issue? Please have a look at this guide to see how to create one.

R has partial matching of function arguments. It's weird. There's an example in this link.

I think in this case you're sending an argument with the wrong name to a function, and R is trying to do a partial match but it matches multiple formal arguments. The error may not be in constrOptim itself but rather in a function that it calls. Do you have a line in your code that looks like R(theta, theta, ...)? That might be the issue.
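
For what it's worth, here is a minimal, self-contained sketch of that mechanism; the function and argument names are invented for illustration and are not taken from the code above.

f2 = function(value, verbose, ...) value
f2(1, v = TRUE)
# Error in f2(1, v = TRUE) : argument 2 matches multiple formal arguments

The named argument v is a prefix of both value and verbose, so R cannot decide which formal it belongs to. The number in the message is the position of the offending argument within that particular call, which is why it can point at a call you never wrote yourself.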

@arthur.t
Thank you very much for your reply.

I don't even understand which part of the code below corresponds to "R(theta, theta, ...)". Based on your advice, could the grad.lik function be the issue, because it has an argument named theta?

QLL.cDCC = function(para, sigma, r, t, S){
  #parameters
  alpha = para[1] 
  beta = para[2]  
  A = diag( sqrt(alpha), 3, 3)
  B = diag( sqrt(beta), 3, 3)
  
  #initial value
  loglik = 0
  Qt = S 
  
  for (i in 2:t){
    rt = c( r[i, 1], r[i, 2], r[i, 3] ) #rt
    rt.L = c( r[i-1, 1], r[i-1, 2], r[i-1, 3] ) #r_{t-1}
    
    Dt = diag( c(sigma[i, 1], sigma[i, 2], sigma[i, 3]) ) #D_t
    Dt.L = diag( c(sigma[i-1, 1], sigma[i-1, 2], sigma[i-1, 3]) ) #D_{t-1}
    
    Qt.star.L = diag( diag(Qt) ) #Q*_{t-1}
    
    et.L = solve(Dt.L) %*% rt.L #epsilon_{t-1}
    
    Qt = S - A %*% S %*% t(A) - B %*% S %*% t(B) + 
      A %*% ( matpow(Qt.star.L, 1/2) %*% et.L %*% t(et.L) %*% matpow(Qt.star.L, 1/2) ) %*% t(A) + 
      B %*% Qt %*% t(B)
   
    Qt.star = diag (diag(Qt)) #Q*_t
    
    Rt = matpow(Qt.star, -1/2) %*% Qt %*% matpow(Qt.star, -1/2) #Rt
    
    loglik = loglik + log(det(Rt)) + t(rt) %*% solve(Dt) %*% solve(Rt) %*% solve(Dt) %*% rt
  }
  
  return(loglik)  
}

# Initial value
para.cDCC = c(0.2, 0.7) 

# Constraints
ui = rbind(c(-1, -1), diag(2))
ci = c(-1, 0, 0)

# Gradient
grad.lik = function(theta, sigma, r, t, S, d = 1e-05) {
  npara = length(theta) 
  id = d * diag(npara) 
  para1 = theta + id[, 1] 
  para2 = theta + id[, 2]
  lik0 = QLL.cDCC(para = c(theta[1], theta[2]), sigma, r, t, S)
  lik1 = QLL.cDCC(para = c(para1[1], para1[2]), sigma, r, t, S)
  lik2 = QLL.cDCC(para = c(para2[1], para2[2]), sigma, r, t, S) 
  c( sum ( (lik1 - lik0) / d ), sum( (lik2 - lik0) / d ) )  
}

From the documentation of constrOptim, on the ... argument.

... Other named arguments to be passed to f and grad : needs to be passed through optim so should not match its argument names.

Because the ... arguments will be passed to both f and grad, maybe they have to have different variable names? I would try renaming sigma, r, t, S to sigma1, r1, t1, S1 in one of the two functions and passing those values as well through constrOptim.
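
For reference, here is a minimal sketch of how named extras flow through ... to both f and grad. The toy objective fr, its gradient gr, and the extra argument name scale are all invented for illustration; they are not from the original code.

fr = function(x, scale) scale * sum((x - 1)^2)  # toy objective
gr = function(x, scale) scale * 2 * (x - 1)     # its gradient, same extra argument
constrOptim(theta = c(2, 2), f = fr, grad = gr,
            ui = diag(2), ci = c(0, 0),  # feasible region: x1 > 0, x2 > 0
            scale = 3)                   # forwarded via ... to both fr and gr

In this sketch a single shared name is enough, because both fr and gr accept it and scale does not collide with any argument name used by optim.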

@arthur.t
Thanks for your advice.
I changed the code for grad as below, yet came across an error again. I know it is likely a matter of how the arguments are passed, but the help page of constrOptim does not help me understand the cause.

QLL.cDCC = function(para, sigma, r, t, S){
  #parameters
  alpha = para[1] 
  beta = para[2]  
  A = diag( sqrt(alpha), 3, 3)
  B = diag( sqrt(beta), 3, 3)
  
  #initial value
  loglik = 0
  Qt = S 
  
  for (i in 2:t){
    rt = c( r[i, 1], r[i, 2], r[i, 3] ) #rt
    rt.L = c( r[i-1, 1], r[i-1, 2], r[i-1, 3] ) #r_{t-1}
    
    Dt = diag( c(sigma[i, 1], sigma[i, 2], sigma[i, 3]) ) #D_t
    Dt.L = diag( c(sigma[i-1, 1], sigma[i-1, 2], sigma[i-1, 3]) ) #D_{t-1}
    
    Qt.star.L = diag( diag(Qt) ) #Q*_{t-1}
    
    et.L = solve(Dt.L) %*% rt.L #epsilon_{t-1}
    
    Qt = S - A %*% S %*% t(A) - B %*% S %*% t(B) + 
      A %*% ( matpow(Qt.star.L, 1/2) %*% et.L %*% t(et.L) %*% matpow(Qt.star.L, 1/2) ) %*% t(A) + 
      B %*% Qt %*% t(B)
   
    Qt.star = diag (diag(Qt)) #Q*_t
    
    Rt = matpow(Qt.star, -1/2) %*% Qt %*% matpow(Qt.star, -1/2) #Rt
    
    loglik = loglik + log(det(Rt)) + t(rt) %*% solve(Dt) %*% solve(Rt) %*% solve(Dt) %*% rt
  }
  
  return(loglik)  
}

# Initial value
para.cDCC = c(0.2, 0.7) 

# Constraints
ui = rbind(c(-1, -1), diag(2))
ci = c(-1, 0, 0)

# Gradient
grad.lik = function(theta, sig, rt, time, Qbar, d = 1e-05) {
  npara = length(theta)
  id = d * diag(npara)
  para1 = theta + id[, 1] 
  para2 = theta + id[, 2]
  lik0 = QLL.cDCC(para = c(theta[1], theta[2]), sig, rt, time, Qbar)
  lik1 = QLL.cDCC(para = c(para1[1], para1[2]), sig, rt, time, Qbar) 
  lik2 = QLL.cDCC(para = c(para2[1], para2[2]), sig, rt, time, Qbar)
  c( sum ( (lik1 - lik0) / d ), sum( (lik2 - lik0) / d ) )  
}

constrOptim(theta = para.cDCC, f = QLL.cDCC, grad = grad.lik,
            ui = ui, ci = ci, mu = 1e-05,
            outer.iterations = 20, outer.eps = 0.01,
            control = list(maxit = 10, reltol = 1e-05),
            sigma = sigma, r = rt, t = t, S = S,
            sig = sigma, rt = rt, time = t, Qbar = S)
Error in f(theta, ...) : 
  unused arguments (sig = c(0.105576566727196, 0.0331946284335895, 0.0138857397581752, 0.0217326984284847, 0.0231534438380203, 0.0153916301908369, 0.0292855662102406, 0.0172160094908594, 0.0116638977051967, 0.00959788709912503, 0.0105309714058963, 0.00947354394759166, 0.0107452076116813, 0.0104776481542499, 0.0116587846490399, 0.0118950384480802, 0.0104006773054768, 0.0141335692188089, 0.00992035495286316, 0.0100342072798916, 0.0100836007574604, 0.0120156618050858, 0.0108686998952507, 0.00963564418914747, 0.0110824622777232, 
0.0134996400739264, 0.0116200885870828, 0.0108018268030357, 0.010562977383885, 0.0145633435386395, 0.0111702081027262, 0.00978245251114175, 0.0108463641240292, 0.00947589877191595, 0.0108067875434139, 0.00971272341378489, 0.00939819414748299, 0.00934838477547327, 0.0129419773997132, 0.0101549716077007, 0.0130479753872755, 0.0116677021658086, 0.0113460605588272, 0.0107918736919542, 0.00950051887278545, 0.00976532349560294, 0.010026855691866, 0.0157524164894

Hm. Maybe the problem is theta. Both constrOptim and your user-defined functions have a variable named theta. Try renaming it in your functions.
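
One way to check that guess is to look at constrOptim's own source, since the call in the error message, R(theta, theta, ...), comes from inside constrOptim itself. The snippet below only prints the function so you can inspect it; the comments describe what I believe you should see there.

print(stats::constrOptim)
# Near the top there is an internal helper along the lines of
#   R <- function(theta, theta.old, ...) { ... }
# so any name passed through ... that is a prefix of "theta" or "theta.old"
# could trigger "argument ... matches multiple formal arguments" inside
# that helper rather than inside your own functions.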

@arthur.t
Thanks. I renamed it, to no avail. After trying your suggestion, I found this in the help for constrOptim: "The gradient function grad must be supplied except with method = "Nelder-Mead". It should take arguments matching those of f and return a vector containing the gradient." Therefore, f and grad should take the same arguments.

I ran your code and got

object `S` not found

Can you review?

@nirgrahamuk

Thank you very much for your comment. Please find the whole code below.
Do you know how to upload a csv file here?

### DCC estimation

## package
library(tidyverse) 
library(tibble) 
library(lubridate) 
library(quantmod)  
library(PerformanceAnalytics) 
library(rugarch) 
library(xdcclarge)
library(nloptr) 

## Clear Variables
rm(list = ls())

# Matrix power via eigendecomposition (used later for Q*^(1/2) and Q*^(-1/2))
matpow = function(x, pow){
  y = eigen(x)
  y$vectors %*% diag( (y$values)^pow ) %*% solve(y$vectors) 
}
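
# Optional sanity check of matpow on a small symmetric positive-definite
# matrix (illustrative values only); both comparisons should return TRUE
X.check = matrix(c(2, 1, 1, 2), 2, 2)
all.equal(matpow(X.check, 1/2) %*% matpow(X.check, 1/2), X.check)
all.equal(matpow(X.check, -1), solve(X.check))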

###########
#Read Data#
###########

## Futures Data
futures = read_csv("Futures20210411.csv", col_types = cols(US = col_double(), EU = col_double()))

## Create xts Data
date = unlist(futures[,1]) %>% as.Date()    
data = futures[, 2:4]
x.data = xts(data, order.by = date) 

## logreturn
return = diff(log(x.data[,1:3])) %>% 
  na.omit()

return = return[-which(is.infinite(return[,3])), ] #exclude rows where the EU log return is somehow infinite

## Exclude Outliers
outlier_index = apply(return, 2, function(x) abs(x) > mean(x) + 3 * sd(x)) %>%  
  rowSums() 

outlier_index = outlier_index != 0

return = return[!outlier_index] 

####################
#1st stage estimate#
####################

# mean adjusted return
rt = apply(return, 2, function(x) x - mean(x))

# log likelihood function
t = length(rt[,1])
n = 3 #number of assets

QLL = function(para, x, t, n){
  # Parameters
  omega = rep(0,n)
  a = rep(0,n)
  b = rep(0,n)
  
  omega[1:n] = para[1:n]
  a[1:n] = para[(n + 1) : (2 * n)]
  b[1:n] = para[(2*n + 1) : (3 * n)]
  
  # Quasi Log Likelihood(QLL) initialization
  loglik=0
  
  # Start of the loop
  for (j in 1:n){
    h=var(x[,j]) 
    
    for (i in 2:t){
      h = omega[j] + a[j] * x[i-1,j]^2 + b[j] * h 
      loglik = loglik + log(h) + x[i,j]^2/h  
    }
  }
  
  return(loglik) 
}
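
# Optional smoke test of QLL on simulated returns (illustrative values only,
# not the real futures data); it should simply return a finite number
x.sim = matrix(rnorm(300 * 3, sd = 0.01), 300, 3)
QLL(para = c(rep(1e-5, 3), rep(0.1, 3), rep(0.8, 3)), x = x.sim, t = 300, n = 3)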


## Maximization of log likelihood function

# initial value
para.garch = c(rep(0.2,n), rep(0.5,n), rep(0.2,n)) 

# Constraint
cons.garch = function(para, x, t, n){
  cons = c(para[n+1] + para[2*n+1] - 1, para[n+2] + para[2*n+2] - 1, para[n+3] + para[2*n+3] - 1)
  return(cons) #a + b < 1
}

lb.garch = rep(0, 9) # a, b >0
ub.garch = rep(Inf, 9) 

# Algorithm
opts = list("algorithm" = "NLOPT_LN_COBYLA", 
            "xtol_rel"=1.0e-4, 
            "maxeval" = 1000)  

# Maximization with constraint
mlef.garch = nloptr(x0 = para.garch, 
                    eval_f = QLL, 
                    lb = lb.garch, 
                    ub = ub.garch, 
                    eval_g_ineq = cons.garch, 
                    opts = opts, 
                    x = rt,
                    t = t,
                    n = n
)

mlef.garch
garch.e = mlef.garch$solution 

# Conditional variance based on the model
garch.ht = function(para, x, n){
  
  ht = matrix(0, nrow = length(x[,1]), ncol = n) 
  ut = matrix(0, nrow = length(x[,1]), ncol = n) 
  
  for (j in 1:n){
    ht[1,j] = var(x[,j]) 
    ut[,j] = x[,j] 
    
    for (i in 2:length(x[,j])){
      ht[i,j] = para[n - (n-j)] + para[2*n - (n-j)] * ut[i-1, j]^2 + para[3*n - (n-j)] * ht[i-1, j]
    }
  }
  
  return(ht)
}

# Check the fit of GARCH
ht = garch.ht(para = garch.e, x = rt, n = n)
ut.AU = rt[,1]
ht.AU = ht[,1] 
vt.AU = ut.AU/sqrt(ht.AU)

plot(vt.AU, type="l")
acf(vt.AU) 

#####################################
#Preparation for 2nd step estimation#
#####################################

p = 3 #number of parameters

ht = matrix(rep(0, n*t),t,n) 

for (j in 1:n){
  ht[1,j] = sqrt(var(rt[,j])) 
  ut = rt[,j] 
  
  omega = garch.e[j]
  a = garch.e[j + p]
  b = garch.e[j + p*2]
  
  for (i in 2:length(rt[,j])){
    ht[i,j] = omega + a * ut[i-1]^2 + b * ht[i-1,j]
  }
}

sigma = sqrt(ht) 

fun.S = function(x, sigma, t){
  
  sum.S = 0
  
  for (i in 1:t){
    rt = c(x[i,1], x[i,2], x[i,3]) 
    Qt.star = diag( c(ht[i,1], ht[i,2], ht[i,3]) ) 
    Dt = diag( c(sigma[i,1], sigma[i,2], sigma[i,3]) ) 
    et = solve(Dt) %*% rt 
    sum.S = sum.S + matpow(Qt.star, 1/2) %*% et %*% t(et) %*% matpow(Qt.star, 1/2)
  }
  S = sum.S/t
  return(S)
  
} 

S = fun.S(x = rt, sigma = sigma, t = t) 

####################
#Estimation of cDCC#
####################

# Define log likelihood function
QLL.cDCC = function(para, sigma, r, t, S){
  #parameters
  alpha = para[1] 
  beta = para[2]  
  A = diag( sqrt(alpha), 3, 3)
  B = diag( sqrt(beta), 3, 3)
  
  #initial value
  loglik = 0
  Qt = S 
  
  for (i in 2:t){
    rt = c( r[i, 1], r[i, 2], r[i, 3] ) #rt
    rt.L = c( r[i-1, 1], r[i-1, 2], r[i-1, 3] ) #r_{t-1}
    
    Dt = diag( c(sigma[i, 1], sigma[i, 2], sigma[i, 3]) ) #D_t
    Dt.L = diag( c(sigma[i-1, 1], sigma[i-1, 2], sigma[i-1, 3]) ) #D_{t-1}
    
    Qt.star.L = diag( diag(Qt) ) #Q*_{t-1}
    
    et.L = solve(Dt.L) %*% rt.L #epsilon_{t-1}
    
    Qt = S - A %*% S %*% t(A) - B %*% S %*% t(B) + 
      A %*% ( matpow(Qt.star.L, 1/2) %*% et.L %*% t(et.L) %*% matpow(Qt.star.L, 1/2) ) %*% t(A) + 
      B %*% Qt %*% t(B)
    
    Qt.star = diag (diag(Qt)) #Q*_t
    
    Rt = matpow(Qt.star, -1/2) %*% Qt %*% matpow(Qt.star, -1/2) #Rt
    
    loglik = loglik + log(det(Rt)) + t(rt) %*% solve(Dt) %*% solve(Rt) %*% solve(Dt) %*% rt
  }
  
  return(loglik)
}

QLL.cDCC(para = c(0.2, 0.7), sigma = sigma, r = rt, t = t, S = S) #check that the log likelihood function works

# initial value
para.cDCC = c(0.2, 0.7) 

# constraint
resta = rbind(c(-1, -1), diag(2))
restb = c(-1, 0, 0)

# Gradient

grad.lik = function(para, sigma, r, t, S) {
  d = 1e-05
  npara = length(para) 
  id = d * diag(npara) 
  para1 = para + id[, 1] 
  para2 = para + id[, 2] 
  lik0 = QLL.cDCC(para = c(para[1], para[2]), sigma, r, t, S)
  lik1 = QLL.cDCC(para = c(para1[1], para1[2]), sigma, r, t, S) 
  lik2 = QLL.cDCC(para = c(para2[1], para2[2]), sigma, r, t, S) 
  c( sum ( (lik1 - lik0) / d ), sum( (lik2 - lik0) / d ) )  
}

grad.lik(para = para.cDCC, sigma = sigma, r = rt, t = t, S = S) 

# Optimization

constrOptim(theta = para.cDCC, f = QLL.cDCC, grad = grad.lik,
            ui = resta, ci = restb, mu = 1e-05,
            outer.iterations = 20, outer.eps = 0.01,
            control = list(maxit = 10, reltol = 1e-05),
            sigma = sigma, r = rt, t = t, S = S)

