Forecast horizon accuracy with cross-validation

In FPP3 section 5.10 on time series cross-validation, an example evaluates the forecasting performance of 1- to 8-step-ahead drift forecasts. The earlier section on evaluating point forecast accuracy recommends that the test set ideally be at least as large as the maximum forecast horizon required, and preferably comprise at most 20% of the total number of observations.

Here is a snapshot of the code from that section:

library(fpp3)  # google_2015 (GOOG closing prices for 2015) is constructed earlier in the chapter

google_2015_tr <- google_2015 %>%
  stretch_tsibble(.init = 3, .step = 1)    # expanding training windows, starting from 3 observations
fc <- google_2015_tr %>%
  model(RW(Close ~ drift())) %>%
  forecast(h = 8)                          # 1- to 8-step-ahead forecasts from each window
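
Something along these lines (my sketch, not a verbatim quote from the book) then gives accuracy by forecast horizon; .id is the fold identifier that stretch_tsibble() adds, and as_fable() restores the fable class after the grouped mutate:

fc %>%
  group_by(.id) %>%                 # one group per training fold
  mutate(h = row_number()) %>%      # step-ahead horizon within the fold
  ungroup() %>%
  as_fable(response = "Close", distribution = Close) %>%
  accuracy(google_2015, by = c("h", ".model"))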

To incorporate the 20% guideline, don't we need .init = 40 or so, so that the 8-step forecast horizon makes up at most 20% of the data in each fold? I also seem to have trouble fitting ETS or ARIMA models without a much larger .init anyway.
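
For concreteness, here is the arithmetic I have in mind, reading the guideline as the horizon being at most 20% of the training window (reading it as 20% of the combined training-plus-test fold would give .init = 32 instead):

h <- 8
min_init <- h / 0.20                       # = 40 under this reading

google_2015_tr <- google_2015 %>%
  stretch_tsibble(.init = 40, .step = 1)   # larger windows also make ETS/ARIMA estimation feasible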

In addition, say we wanted to forecast 12 months ahead. Does it make sense simply to do time series cross-validation with 1- to 12-step-ahead forecasts, or is there an argument for going further?
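
Concretely, I imagine something like this hypothetical monthly example (aus_retail ships with fpp3; the series choice and window sizes are purely illustrative), forecasting out to h = 24 so that accuracy at h = 12 can be seen in context:

library(fpp3)

takeaway <- aus_retail %>%
  filter(State == "Victoria", Industry == "Takeaway food services")

fc <- takeaway %>%
  stretch_tsibble(.init = 120, .step = 12) %>%   # start with 10 years, roll forward a year per fold
  model(ETS(Turnover)) %>%
  forecast(h = 24)                               # evaluate past the 12-month target horizon

fc %>%
  group_by(.id) %>%
  mutate(h = row_number()) %>%
  ungroup() %>%
  as_fable(response = "Turnover", distribution = Turnover) %>%
  accuracy(takeaway, by = c("h", ".model"))      # folds running past the end of the data trigger an incomplete-data warning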

