Summarise the performance of a model using accuracy measures. For model objects, accuracy measures are computed directly from the one-step-ahead fitted residuals. When evaluating the accuracy of forecasts, you will need to provide a complete dataset that contains both the future (test) data and the data used to train the model.

# S3 method for mdl_df
accuracy(object, measures = point_accuracy_measures, ...)

# S3 method for mdl_ts
accuracy(object, measures = point_accuracy_measures, ...)

# S3 method for fbl_ts
accuracy(object, data, measures = point_accuracy_measures, ..., by = NULL)

Arguments

object

A model or forecast object

measures

A list of accuracy measure functions to compute (such as point_accuracy_measures, interval_accuracy_measures, or distribution_accuracy_measures)

...

Additional arguments to be passed to the measures that use them.

data

A dataset containing the complete model dataset (both training and test data). The training portion of the data will be used in the computation of some accuracy measures, and the test data is used to compute the forecast errors.

by

Variables over which the accuracy is computed (useful for computing across forecast horizons in cross-validation). If by is NULL, groups will be chosen automatically from the key structure.

Examples

library(fable)
library(tsibble)
#> 
#> Attaching package: ‘tsibble’
#> The following objects are masked from ‘package:base’:
#> 
#>     intersect, setdiff, union
library(tsibbledata)
library(dplyr)
#> 
#> Attaching package: ‘dplyr’
#> The following objects are masked from ‘package:stats’:
#> 
#>     filter, lag
#> The following objects are masked from ‘package:base’:
#> 
#>     intersect, setdiff, setequal, union

fit <- aus_production %>%
  filter(Quarter < yearquarter("2006 Q1")) %>% 
  model(ets = ETS(log(Beer) ~ error("M") + trend("Ad") + season("A")))

# In-sample training accuracy does not require extra data provided.
accuracy(fit)
#> # A tibble: 1 × 10
#>   .model .type       ME  RMSE   MAE   MPE  MAPE  MASE RMSSE   ACF1
#>   <chr>  <chr>    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>  <dbl>
#> 1 ets    Training 0.321  15.8  12.0 0.103  2.87 0.752 0.795 -0.177
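A custom list of measures can also be supplied. The following sketch (assuming the `fit` object above, and the `RMSE` and `MAE` measure functions exported by fabletools) computes only those two measures, with the list names used as the output column names:

```r
# Compute only RMSE and MAE for the fitted model by passing a named
# list of measure functions instead of the default set.
fit %>% 
  accuracy(measures = list(rmse = RMSE, mae = MAE))
```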

# Out-of-sample forecast accuracy requires the future values to compare with.
# All available future data will be used, and a warning will be given if some
# data for the forecast window is unavailable.
fc <- fit %>% 
  forecast(h = "5 years")
fc %>% 
  accuracy(aus_production)
#> Warning: The future dataset is incomplete, incomplete out-of-sample data will be treated as missing. 
#> 2 observations are missing between 2010 Q3 and 2010 Q4
#> # A tibble: 1 × 10
#>   .model .type    ME  RMSE   MAE   MPE  MAPE  MASE RMSSE  ACF1
#>   <chr>  <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 ets    Test   5.01  9.65  7.85  1.13  1.83 0.491 0.487 0.320
  
# It is also possible to compute interval and distributional measures of
# accuracy for models and forecasts which give forecast distributions.
fc %>% 
  accuracy(
    aus_production,
    measures = list(interval_accuracy_measures, distribution_accuracy_measures)
  )
#> Warning: The future dataset is incomplete, incomplete out-of-sample data will be treated as missing. 
#> 2 observations are missing between 2010 Q3 and 2010 Q4
#> # A tibble: 1 × 7
#>   .model .type winkler pinball scaled_pinball percentile  CRPS
#>   <chr>  <chr>   <dbl>   <dbl>          <dbl>      <dbl> <dbl>
#> 1 ets    Test     101.    3.87          0.121       7.62  7.56
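The by argument is useful for summarising accuracy across forecast horizons in time series cross-validation. A minimal sketch, assuming tsibble's stretch_tsibble() to create the cross-validation folds and an illustrative horizon column h added per fold:

```r
# Cross-validation: re-fit the model on stretching windows (folds keyed
# by .id), forecast ahead, then summarise accuracy by forecast horizon.
cv_fc <- aus_production %>%
  filter(Quarter < yearquarter("2006 Q1")) %>%
  stretch_tsibble(.init = 100, .step = 8) %>%
  model(ets = ETS(log(Beer))) %>%
  forecast(h = "1 year") %>%
  group_by(.id) %>%
  mutate(h = row_number()) %>%  # horizon (1-4 quarters) within each fold
  ungroup() %>%
  as_fable(response = "Beer", distribution = Beer)

cv_fc %>%
  accuracy(aus_production, by = c("h", ".model"))
```

With by = c("h", ".model"), one row of accuracy measures is returned per horizon, averaged over the folds, rather than one row per key.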