Type: Package
Title: Tests of Equal Predictive Accuracy for Panels of Forecasts
Version: 1.2
Depends: R (≥ 2.10)
Imports: Rcpp
LinkingTo: Rcpp, RcppArmadillo
Date: 2025-03-07
Author: Krzysztof Drachal [aut, cre] (Faculty of Economic Sciences, University of Warsaw, Poland)
Maintainer: Krzysztof Drachal <kdrachal@wne.uw.edu.pl>
Description: Performs tests of equal predictive accuracy for panels of forecasts. Main references: Qu et al. (2024) <doi:10.1016/j.ijforecast.2023.08.001> and Akgun et al. (2024) <doi:10.1016/j.ijforecast.2023.02.001>.
License: GPL-3
LazyData: TRUE
URL: https://CRAN.R-project.org/package=pEPA
Note: Research funded by the grant of the National Science Centre, Poland, under the contract number DEC-2018/31/B/HS4/02021.
NeedsCompilation: yes
Packaged: 2025-03-07 19:58:07 UTC; Krzysiek
Repository: CRAN
Date/Publication: 2025-03-07 20:30:01 UTC
Computes Test for Cross-Sectional Clusters.
Description
This function computes the test of equal predictive accuracy for cross-sectional clusters. It corresponds to the C^{(1)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that a pair of forecasts has the same expected accuracy within cross-sectional clusters: predictive accuracy may differ across clusters, but is the same within each cluster. The test is suitable for situations with cross-sectional independence.
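Schematically, writing \Delta L_{it} for the loss differential between the two forecasts for unit i at time t (illustrative notation, not taken from the package), the null hypothesis of equal predictive accuracy can be sketched as

H_0: \mathrm{E}[\Delta L_{it}] = 0 \quad \text{for every unit } i \text{ and time } t ,

while under the alternative the expected loss differential may take a different (non-zero) value in each cross-sectional cluster.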
Usage
csc.C1.test(evaluated1,evaluated2,realized,loss.type="SE",cl)
Arguments
evaluated1: same as in pool_av.test
evaluated2: same as in pool_av.test
realized: same as in pool_av.test
loss.type: same as in pool_av.test
cl: a vector of the indices of the first cross-sectional unit in each cluster (see Examples)
Value
An object of class htest, being a list of:
statistic: test statistic
parameter:
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
See Also
Examples
data(forecasts)
y <- t(observed)
# just to save time
y <- y[,1:40]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][1:40,1]
  f.dma[i,] <- predicted[[i]][1:40,9]
}
# 2 cross-sectional clusters: energy commodities and non-energy commodities
cs.cl <- c(1,9)
t <- csc.C1.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
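The returned object can be inspected like any other htest result; for instance (a usage sketch, assuming the example above has been run):

print(t)     # prints the test statistic, the p-value and the hypotheses
t$statistic  # test statistic only
t$p.value    # p-value only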
Computes Test for Cross-Sectional Clusters.
Description
This function computes the test of equal predictive accuracy for cross-sectional clusters. It corresponds to the C^{(3)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that a pair of forecasts has the same expected accuracy within cross-sectional clusters: predictive accuracy may differ across clusters, but is the same within each cluster. The test allows for strong cross-sectional dependence.
Usage
csc.C3.test(evaluated1,evaluated2,realized,loss.type="SE",cl)
Arguments
evaluated1: same as in pool_av.test
evaluated2: same as in pool_av.test
realized: same as in pool_av.test
loss.type: same as in pool_av.test
cl: a vector of the indices of the first cross-sectional unit in each cluster (see Examples)
Value
An object of class htest, being a list of:
statistic: test statistic
parameter:
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
See Also
Examples
data(forecasts)
y <- t(observed)
# restrict to energy commodities only, just to reduce computation time
y <- y[1:8,]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=8)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:8) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
# 2 cross-sectional clusters: crude oil and other energy commodities
cs.cl <- c(1,4)
t <- csc.C3.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
Computes Test for Cross-Sectional Clusters.
Description
This function computes the test of equal predictive accuracy for cross-sectional clusters. The null hypothesis is that a pair of forecasts has the same expected accuracy within cross-sectional clusters: predictive accuracy may differ across clusters, but is the same within each cluster. The test is suitable if either K \ge 2 and the significance level is \le 0.08326, or 2 \le K \le 14 and the significance level is \le 0.1, or K \in \{ 2,3 \} and the significance level is \le 0.2, where K denotes the number of cross-sectional clusters.
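In practice K corresponds to the length of the cl vector (one entry per cluster), so the conditions above can be verified before running the test; a small sketch, with alpha an illustrative name for the chosen significance level:

cs.cl <- c(1,9)    # two cross-sectional clusters, as in the example below
K <- length(cs.cl)
alpha <- 0.05
(K >= 2 && alpha <= 0.08326) || (K >= 2 && K <= 14 && alpha <= 0.1) || (K %in% c(2,3) && alpha <= 0.2)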
Usage
csc.test(evaluated1,evaluated2,realized,loss.type="SE",cl,dc=FALSE)
Arguments
evaluated1: same as in pool_av.test
evaluated2: same as in pool_av.test
realized: same as in pool_av.test
loss.type: same as in pool_av.test
cl: a vector of the indices of the first cross-sectional unit in each cluster (see Examples)
dc: logical, with the default dc=FALSE
Value
An object of class htest, being a list of:
statistic: test statistic
parameter:
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.
See Also
Examples
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
# 2 cross-sectional clusters: energy commodities and non-energy commodities
cs.cl <- c(1,9)
t <- csc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=cs.cl)
Sample Panel of Commodities Spot Prices.
Description
Observed spot prices of various commodities.
Usage
data(forecasts)
Format
observed is a matrix object whose columns correspond to the spot prices of 56 selected commodities.
Details
The data cover the period between 1996 and 2021 at monthly frequency. Variable names are the same as in the paper by Drachal and Pawłowski (2024). The observed prices were taken from The World Bank (2022).
References
Drachal, K., Pawłowski, M. 2024. Forecasting selected commodities' prices with the Bayesian symbolic regression. International Journal of Financial Studies 12, 34. doi:10.3390/ijfs12020034.
The World Bank. 2022. Commodity Markets. https://www.worldbank.org/en/research/commodity-markets
See Also
Examples
data(forecasts)
# WTI prices
t1 <- observed[,3]
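The layout of observed can be inspected directly; a small sketch (assuming that observation dates and variable names are stored as row and column names, as suggested by the examples for tc.test):

dim(observed)             # rows: monthly observations, columns: 56 commodities
head(rownames(observed))  # observation dates
head(colnames(observed))  # commodity (variable) names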
Computes Test for Overall Equal Predictive Ability.
Description
This function computes the test of equal predictive accuracy for the pooled average. It corresponds to the S^{(1)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that the pooled average loss is equal in expectation for a pair of forecasts from the two considered methods. The alternative is that the loss differences do not average out across the cross-sectional and time-series dimensions. The test is suitable for situations with cross-sectional independence.
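Schematically, with \Delta L_{it} denoting the loss differential between the two forecasts for unit i at time t (illustrative notation, not taken from the package), the null hypothesis can be written as

H_0: \frac{1}{nT} \sum_{i=1}^{n} \sum_{t=1}^{T} \mathrm{E}[\Delta L_{it}] = 0 ,

i.e., the loss differences average out across the n cross-sectional units and T time periods.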
Usage
pool_av.S1.test(evaluated1,evaluated2,realized,loss.type="SE")
Arguments
evaluated1: same as in pool_av.test
evaluated2: same as in pool_av.test
realized: same as in pool_av.test
loss.type: same as in pool_av.test
Value
An object of class htest, being a list of:
statistic: test statistic
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
See Also
Examples
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
t <- pool_av.S1.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
Computes Test for Overall Equal Predictive Ability.
Description
This function computes the test of equal predictive accuracy for the pooled average. It corresponds to the S^{(3)}_{nT} statistic in the referenced paper by Akgun et al. (2024). The null hypothesis is that the pooled average loss is equal in expectation for a pair of forecasts from the two considered methods. The alternative is that the loss differences do not average out across the cross-sectional and time-series dimensions. The test allows for strong cross-sectional dependence.
Usage
pool_av.S3.test(evaluated1,evaluated2,realized,loss.type="SE")
Arguments
evaluated1: same as in pool_av.test
evaluated2: same as in pool_av.test
realized: same as in pool_av.test
loss.type: same as in pool_av.test
Value
An object of class htest, being a list of:
statistic: test statistic
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Akgun, O., Pirotte, A., Urga, G., Yang, Z. 2024. Equal predictive ability tests based on panel data with applications to OECD and IMF forecasts. International Journal of Forecasting 40, 202–228.
See Also
Examples
data(forecasts)
y <- t(observed)
# shorten the time series, just to reduce computation time
y <- y[,1:40]
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][1:40,1]
  f.dma[i,] <- predicted[[i]][1:40,9]
}
t <- pool_av.S3.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
Computes Test for the Pooled Average.
Description
This function computes the test of equal predictive accuracy for the pooled average. The null hypothesis is that the pooled average loss is equal in expectation for a pair of forecasts from the two considered methods. The alternative is that the loss differences do not average out across the cross-sectional and time-series dimensions.
Usage
pool_av.test(evaluated1,evaluated2,realized,loss.type="SE",J=NULL)
Arguments
evaluated1: a matrix of forecasts from the first method; rows correspond to the cross-sectional units and columns to the time indices
evaluated2: a matrix of forecasts from the second method, of the same dimensions as evaluated1
realized: a matrix of the observed (realized) values, of the same dimensions as evaluated1
loss.type: a method to compute the loss function; the default loss.type="SE" uses squared errors
J:
Value
An object of class htest, being a list of:
statistic: test statistic
parameter:
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Hyndman, R.J., Koehler, A.B. 2006. Another look at measures of forecast accuracy. International Journal of Forecasting 22, 679–688.
Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.
Taylor, S.J. 2005. Asset Price Dynamics, Volatility, and Prediction. Princeton University Press.
Triacca, U. 2024. Comparing Predictive Accuracy of Two Forecasts. https://www.lem.sssup.it/phd/documents/Lesson19.pdf
Examples
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
t <- pool_av.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE")
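As a rough illustration of the quantity being tested (assuming that loss.type="SE" corresponds to squared errors), the sample counterpart of the pooled average loss differential can be computed directly from the objects created above:

d <- (y - f.bsr)^2 - (y - f.dma)^2   # squared-error loss differentials, units x time
mean(d)                              # pooled average of the loss differentials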
Sample Panels of Commodities Spot Prices Forecasts.
Description
Forecasts obtained from various methods applied to various commodities prices.
Usage
data(forecasts)
Format
predicted is a list of forecasts of the spot prices of 56 selected commodities. For each commodity, a matrix of forecasts generated by various methods is provided; its columns correspond to the methods.
Details
The forecasts were taken from Drachal and Pawłowski (2024). They cover the period between 1996 and 2021 at monthly frequency. Variable and method names are the same as in that paper, where they are described in detail.
References
Drachal, K., Pawłowski, M. 2024. Forecasting selected commodities' prices with the Bayesian symbolic regression. International Journal of Financial Studies 12, 34. doi:10.3390/ijfs12020034.
See Also
Examples
data(forecasts)
# WTI prices predicted by BSR rec method
t2 <- predicted[[3]][,1]
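The structure of predicted can be examined in the usual way; a small sketch (assuming, as in the example above, that the third list element corresponds to WTI and that method names are stored as column names):

length(predicted)         # number of commodities (56)
dim(predicted[[3]])       # forecasts for WTI: rows are time periods, columns are methods
colnames(predicted[[3]])  # method names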
Computes Test for Time Clusters.
Description
This function computes the test of equal predictive accuracy for time clusters. The null hypothesis is that equal predictive accuracy of the two methods holds within each of the time clusters. The test is suitable if either K \ge 2 and the significance level is \le 0.08326, or 2 \le K \le 14 and the significance level is \le 0.1, or K \in \{ 2,3 \} and the significance level is \le 0.2, where K denotes the number of time clusters.
Usage
tc.test(evaluated1,evaluated2,realized,loss.type="SE",cl)
Arguments
evaluated1: same as in pool_av.test
evaluated2: same as in pool_av.test
realized: same as in pool_av.test
loss.type: same as in pool_av.test
cl: a vector of the indices of the first time observation in each cluster (see Examples)
Value
An object of class htest, being a list of:
statistic: test statistic
parameter:
alternative: alternative hypothesis of the test
p.value: p-value
method: name of the test
data.name: names of the tested data
References
Qu, R., Timmermann, A., Zhu, Y. 2024. Comparing forecasting performance with panel data. International Journal of Forecasting 40, 918–941.
See Also
Examples
data(forecasts)
y <- t(observed)
f.bsr <- matrix(NA,ncol=ncol(y),nrow=56)
f.dma <- f.bsr
# extract prices predicted by BSR rec and DMA methods
for (i in 1:56) {
  f.bsr[i,] <- predicted[[i]][,1]
  f.dma[i,] <- predicted[[i]][,9]
}
# 3 time clusters: Jun 1996 -- Nov 2007, Dec 2007 -- Jun 2009, Jul 2009 -- Aug 2021
# rownames(observed)[1]
# rownames(observed)[139]
# rownames(observed)[158]
t.cl <- c(1,139,158)
t <- tc.test(evaluated1=f.bsr,evaluated2=f.dma,realized=y,loss.type="SE",cl=t.cl)
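The example above uses K = 3 time clusters, so by the conditions in the Description the resulting p-value can be compared against a significance level of up to 0.2 (a stricter level such as 0.1 also satisfies them); for instance:

t$p.value        # p-value of the test
t$p.value < 0.1  # reject the null of equal predictive accuracy within each time cluster at the 10% level?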