utils module

class utils.AS14_test(one_side=False, n_boot=10000)[source]

Bases: bootstrap_mean_test

Acerbi-Szekely test for assessing the goodness of the Expected Shortfall estimate, with both Z1 and Z2 statistics, as described in:

Acerbi, C., & Szekely, B. (2014). Back-testing expected shortfall. Risk, 27(11), 76-81.

The null hypothesis is H0: Q, E are the correct (latent) quantile and expected shortfall estimates for the observed time series Y.

Parameters:

  • one_side: bool, optional

    if True, the test is one-sided (i.e. H0: mu >= mu_target). Default is False

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000

Example of usage

import numpy as np
from utils import AS14_test

y = np.random.randn(250)*1e-2  #Replace with price returns
qf = np.random.uniform(-1, 0, 250)*1e-1  #Replace with quantile forecasts
ef = np.random.uniform(-1, 0, 250)*1e-1  #Replace with expected shortfall forecasts
theta = 0.05 #Set the desired confidence level

# Compute the Acerbi-Szekely test with Z1 statistic
AS14_test()(qf, ef, y, test_type='Z1', theta=theta, seed=2)
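
# The Z2 statistic is computed in the same way by changing test_type
AS14_test()(qf, ef, y, test_type='Z2', theta=theta, seed=2)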

Methods:

__call__(Q, E, Y, theta, test_type='Z1', seed=None)[source]

Compute the test

INPUTS:
  • Q: ndarray

    the quantile estimates

  • E: ndarray

    the expected shortfall estimates

  • Y: ndarray

    the actual time series

  • test_type: str, optional

    the type of test to perform. It must be either ‘Z1’ or ‘Z2’. Default is ‘Z1’

  • seed: int, optional

    the seed for the random number generator. Default is None

OUTPUTS:
  • statistic: float

    the test statistic

  • p_value: float

    the p-value of the test

class utils.DMtest(loss_func, h=1)[source]

Bases: object

Diebold-Mariano test for the equality of forecast accuracy. The null H0: E[loss_func(Q1, E1, Y)] == E[loss_func(Q2, E2, Y)] is tested.

Parameters:

  • loss_func: callable

    the loss function to compute the forecast accuracy

  • h: int, optional

    the maximum lag to compute the autocovariance. Default is 1

Example of usage

import numpy as np
from utils import DMtest, patton_loss

y = np.random.randn(250)*1e-2  #Replace with price returns
qf_1 = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts of algorithm 1
ef_1 = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts of algorithm 1
qf_2 = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts of algorithm 2
ef_2 = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts of algorithm 2
theta = 0.05 #Set the desired confidence level

DMtest(patton_loss(theta, ret_mean=False))(qf_1, ef_1, qf_2, ef_2, y) #Compute the Diebold-Mariano test (with Patton loss)

Methods:

__call__(Q1, E1, Q2, E2, Y)[source]
INPUTS:
  • Q1: ndarray

    the first set of quantile predictions

  • E1: ndarray

    the first set of expected shortfall predictions

  • Q2: ndarray

    the second set of quantile predictions

  • E2: ndarray

    the second set of expected shortfall predictions

  • Y: ndarray

    the actual time series

OUTPUTS:
  • stat: float

    the test statistic

  • p_value: float

    the p-value of the test

  • mean_difference: float

    the mean difference of the losses

class utils.Encompassing_test(loss, n_boot=10000)[source]

Bases: bootstrap_mean_test

Encompassing test to assess whether the first sample of losses is statistically lower than the second, as described in:

Kışınbay, T. (2010). The use of encompassing tests for forecast combinations. Journal of Forecasting, 29(8), 715-727.

The null hypothesis is H0: E[loss(Q_new, E_new, Y)] >= E[loss(Q_bench, E_bench, Y)].

Parameters:

  • loss: callable

    the loss function to compute the forecast accuracy

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000

Example of usage

import numpy as np
from utils import Encompassing_test, patton_loss

y = np.random.randn(250)*1e-2  #Replace with price returns
qf_1 = np.random.uniform(-1, 0, 250)*1e-1  #Replace with quantile forecasts of algorithm 1
ef_1 = np.random.uniform(-1, 0, 250)*1e-1  #Replace with expected shortfall forecasts of algorithm 1
qf_2 = np.random.uniform(-1, 0, 250)*1e-1  #Replace with quantile forecasts of algorithm 2
ef_2 = np.random.uniform(-1, 0, 250)*1e-1  #Replace with expected shortfall forecasts of algorithm 2
theta = 0.05 #Set the desired confidence level

Encompassing_test(patton_loss(theta, ret_mean=False))(qf_1, ef_1, qf_2, ef_2, y) #Compute the Encompassing test (with Patton loss)

Methods:

__call__(Q_new, E_new, Q_bench, E_bench, Y, seed=None)[source]
INPUTS:
  • Q_new: ndarray

    the first set of quantile predictions

  • E_new: ndarray

    the first set of expected shortfall predictions

  • Q_bench: ndarray

    the second set of quantile predictions

  • E_bench: ndarray

    the second set of expected shortfall predictions

  • Y: ndarray

    the actual time series

  • seed: int, optional

    the seed for the random number generator. Default is None

OUTPUTS:
  • statistic: float

    the test statistic

  • p_value: float

    the p-value of the test

class utils.LossDiff_test(loss, n_boot=10000)[source]

Bases: bootstrap_mean_test

Loss difference test to assess whether the first sample of losses is statistically lower than the second. The null hypothesis is H0: E[loss(Q_new, E_new, Y)] >= E[loss(Q_bench, E_bench, Y)].

Parameters:

  • loss: callable

    the loss function to compute the forecast accuracy

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000

Example of usage

import numpy as np
from utils import LossDiff_test, patton_loss

y = np.random.randn(250)*1e-2  #Replace with price returns
qf_1 = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts of algorithm 1
ef_1 = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts of algorithm 1
qf_2 = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts of algorithm 2
ef_2 = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts of algorithm 2
theta = 0.05 #Set the desired confidence level

LossDiff_test(patton_loss(theta, ret_mean=False))(qf_1, ef_1, qf_2, ef_2, y) #Compute the Loss Difference test (with Patton loss)

Methods:

__call__(Q_new, E_new, Q_bench, E_bench, Y, seed=None)[source]

Compute the test

INPUTS:
  • Q_new: ndarray

    the first set of quantile predictions

  • E_new: ndarray

    the first set of expected shortfall predictions

  • Q_bench: ndarray

    the second set of quantile predictions

  • E_bench: ndarray

    the second set of expected shortfall predictions

  • Y: ndarray

    the actual time series

  • seed: int, optional

    the seed for the random number generator. Default is None

OUTPUTS:
  • statistic: float

    the test statistic

  • p_value: float

    the p-value of the test

class utils.McneilFrey_test(one_side=False, n_boot=10000)[source]

Bases: bootstrap_mean_test

McNeil-Frey test for assessing the goodness of the Expected Shortfall estimate, as described in:

McNeil, A. J., & Frey, R. (2000). Estimation of tail-related risk measures for heteroscedastic financial time series: an extreme value approach. Journal of Empirical Finance, 7(3-4), 271-300.

The null hypothesis is H0: the risk is not underestimated.

Parameters:

  • one_side: bool, optional

    if True, the test is one-sided (i.e. H0: mu >= mu_target). Default is False

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000

Example of usage

import numpy as np
from utils import McneilFrey_test

y = np.random.randn(250)*1e-2  #Replace with price returns
qf = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts
ef = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts

McneilFrey_test(one_side=True)(qf, ef, y, seed=2) #Compute the McNeil-Frey test

Methods:

__call__(Q, E, Y, seed=None)[source]

Compute the test

INPUTS:
  • Q: ndarray

    the quantile estimates

  • E: ndarray

    the expected shortfall estimates

  • Y: ndarray

    the actual time series

  • seed: int, optional

    the seed for the random number generator. Default is None

OUTPUTS:
  • statistic: float

    the test statistic

  • p_value: float

    the p-value of the test

class utils.PinballLoss(theta, ret_mean=True)[source]

Bases: object

Pinball (a.k.a. Quantile) loss function

Parameters:

  • theta: float

    the target confidence level

  • ret_mean: bool, optional

    if True, the function returns the mean of the loss, otherwise the loss point-by-point. Default is True

Example of usage

import numpy as np
from utils import PinballLoss

y = np.random.randn(250)*1e-2  #Replace with price returns
qf = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts
theta = 0.05 #Set the desired confidence level

PinballLoss(theta)(qf, y) #Compute the pinball loss

Methods:

__call__(y_pred, y_true)[source]

Compute the pinball loss

INPUTS:
  • y_pred: ndarray

    the predicted values

  • y_true: ndarray

    the true values

OUTPUTS:
  • loss: float or ndarray

    the mean loss if ret_mean is True; otherwise, the loss for each observation
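
For reference, a minimal NumPy sketch of the standard pinball loss is given below (hypothetical helper, not the class implementation; averaging it over observations should correspond to the ret_mean=True behaviour):

import numpy as np

def pinball_sketch(q, y, theta):
    # standard pinball/quantile loss at level theta:
    # theta*(y - q) when y >= q, (theta - 1)*(y - q) otherwise
    diff = y - q
    return np.maximum(theta*diff, (theta - 1)*diff)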

class utils.barrera_loss(theta, ret_mean=True)[source]

Bases: object

Barrera loss function. Eq. (2.13) in:

Barrera, D., Crépey, S., Gobet, E., Nguyen, H. D., & Saadeddine, B. (2022). Learning value-at-risk and expected shortfall. arXiv preprint arXiv:2209.06476.

Parameters:

  • theta: float

    the target confidence level

  • ret_mean: bool, optional

    if True, the function returns the mean of the loss, otherwise the loss point-by-point. Default is True

Example of usage

import numpy as np
from utils import barrera_loss

y = np.random.randn(250)*1e-2  #Replace with price returns
qf = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts
ef = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts
theta = 0.05 #Set the desired confidence level

barrera_loss(theta)(qf, ef, y) #Compute the barrera loss

Methods:

__call__(v, e, y)[source]
INPUTS:
  • v: ndarray

    the quantile estimate

  • e: ndarray

    the expected shortfall estimate

  • y: ndarray

    the actual time series

OUTPUTS:
  • loss: float or ndarray

    the mean loss if ret_mean is True; otherwise, the loss for each observation

class utils.bootstrap_mean_test(mu_target, one_side=False, n_boot=10000)[source]

Bases: object

Bootstrap test for assessing whether the mean of a sample is == or >= a target value

Parameters:

  • mu_target: float

    the mean to test against

  • one_side: bool, optional

    if True, the test is one-sided (i.e. H0: mu >= mu_target), otherwise it is two-sided (i.e. H0: mu == mu_target). Default is False

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000

__call__(data, seed=None)[source]

Compute the test

INPUTS:
  • data: ndarray

    the original sample

  • seed: int, optional

    the seed for the random number generator. Default is None

OUTPUTS:
  • statistic: float

    the test statistic

  • p_value: float

    the p-value of the test
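
Example of usage

A minimal sketch based on the signatures above; replace the synthetic data with the sample whose mean is under test.

import numpy as np
from utils import bootstrap_mean_test

data = np.random.randn(500)*1e-2  #Replace with the sample to test
bootstrap_mean_test(mu_target=0., one_side=True)(data, seed=2) #Test H0: mu >= 0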

utils.cr_t_test(errorsA, errorsB, train_len, test_len)[source]

Corrected resampled t-test for comparing forecast accuracy. The null H0: E[errorsA] >= E[errorsB] is tested.

INPUTS:
  • errorsA: ndarray

    the first set of forecast errors

  • errorsB: ndarray

    the second set of forecast errors

  • train_len: int

    the length of the training set

  • test_len: int

    the length of the test set

OUTPUTS:
  • stat: float

    the test statistic

  • p_value: float

    the p-value of the test

Example of usage
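
A minimal sketch with synthetic inputs (assuming errorsA and errorsB hold the out-of-sample losses of two competing models over test_len observations):

import numpy as np
from utils import cr_t_test

errorsA = np.abs(np.random.randn(50))  #Replace with the forecast errors of algorithm A
errorsB = np.abs(np.random.randn(50))  #Replace with the forecast errors of algorithm B

cr_t_test(errorsA, errorsB, train_len=200, test_len=50) #Compute the corrected resampled t-test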

utils.gaussian_tail_stats(theta, loc=0, scale=1)[source]

Compute the Value at Risk and Expected Shortfall for a Gaussian distribution

INPUTS:
  • theta: float

    the quantile level at which to compute the statistics

  • loc: ndarray, optional

    the mean of the distribution

  • scale: ndarray, optional

    the standard deviation of the distribution

OUTPUTS:
  • var: ndarray

    the Value at Risk for a normal distribution with mean=loc and standard deviation=scale

  • es: ndarray

    the Expected Shortfall for a normal distribution with mean=loc and standard deviation=scale

Example of usage

import numpy as np
from utils import gaussian_tail_stats

res = gaussian_tail_stats(theta=0.05, loc=0, scale=1e-2) #Compute the VaR and the Expected Shortfall
print('VaR =', res['var'], '    ES =', res['es'])
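
For reference, the standard closed-form expressions for the lower tail of a Gaussian can be checked directly with scipy (a sketch; the function's sign conventions may differ):

from scipy import stats

theta, loc, scale = 0.05, 0., 1e-2
z = stats.norm.ppf(theta)
var = loc + scale*z                       # theta-quantile of the distribution
es = loc - scale*stats.norm.pdf(z)/theta  # E[X | X <= var]
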
class utils.patton_loss(theta, ret_mean=True)[source]

Bases: object

Patton loss function. Eq. (6) in:

Patton, A. J., Ziegel, J. F., & Chen, R. (2019). Dynamic semiparametric models for expected shortfall (and value-at-risk). Journal of Econometrics, 211(2), 388-413.

Parameters:

  • theta: float

    the target confidence level

  • ret_mean: bool, optional

    if True, the function returns the mean of the loss, otherwise the loss point-by-point. Default is True

Example of usage

import numpy as np
from utils import patton_loss

y = np.random.randn(250)*1e-2  #Replace with price returns
qf = np.random.uniform(-1, 0, 250)  #Replace with quantile forecasts
ef = np.random.uniform(-1, 0, 250)  #Replace with expected shortfall forecasts
theta = 0.05 #Set the desired confidence level

losses = patton_loss(theta, ret_mean=False)(qf, ef, y) #Compute the patton loss

Methods:

__call__(v, e, y)[source]
INPUTS:
  • v: ndarray

    the quantile estimate

  • e: ndarray

    the expected shortfall estimate

  • y: ndarray

    the actual time series

OUTPUTS:
  • loss: float or ndarray

    the mean loss if ret_mean is True; otherwise, the loss for each observation
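
For reference, the FZ0 loss studied in the reference can be sketched as follows (hypothetical helper; the library's patton_loss may differ in scaling or input conventions):

import numpy as np

def fz0_sketch(v, e, y, theta):
    # FZ0-type loss; assumes negative expected shortfall forecasts (e < 0)
    hit = (y <= v).astype(float)
    return -hit*(v - y)/(theta*e) + v/e + np.log(-e) - 1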

utils.tstudent_tail_stats(theta, df, loc=0, scale=1)[source]

Compute the Value at Risk and Expected Shortfall for a Student’s t distribution

INPUTS:
  • theta: float

    the quantile level at which to compute the statistics

  • df: int

    the degrees of freedom of the distribution

  • loc: ndarray, optional

    the mean of the distribution

  • scale: ndarray, optional

    the standard deviation of the distribution

OUTPUTS:
  • var: ndarray

    the Value at Risk for the t-distribution

  • es: ndarray

    the Expected Shortfall for the t-distribution

Example of usage

import numpy as np
from utils import tstudent_tail_stats

res = tstudent_tail_stats(theta=0.05, df=5, loc=0, scale=1e-2) #Compute the VaR and the Expected Shortfall
print('VaR =', res['var'], '    ES =', res['es'])