eval_utils module

class eval_utils.AS14_test(one_side=False, n_boot=10000)[source]

Bases: bootstrap_mean_test

Acerbi-Szekely test for assessing the quality of an Expected Shortfall estimate, supporting both the Z1 and Z2 test statistics, as described in:

Acerbi, C., & Szekely, B. (2014). Back-testing expected shortfall. Risk, 27(11), 76-81.

Parameters:

  • one_side: bool, optional

    if True, the test is one-sided (i.e. H0: mu >= mu_target), otherwise it is two-sided (i.e. H0: mu == mu_target). Default is False

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000

Example of usage

import numpy as np
from eval_utils import AS14_test

y = np.random.randn(250)*1e-2  # Replace with price returns
qf = np.random.uniform(-1, 0, 250)*1e-1  # Replace with quantile forecasts
ef = np.random.uniform(-1, 0, 250)*1e-1  # Replace with expected shortfall forecasts
theta = 0.05  # Set the desired confidence level

# Compute the Acerbi-Szekely test with Z1 statistic
AS14_test()(qf, ef, y, test_type='Z1', theta=theta, seed=2)
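# Compute the Acerbi-Szekely test with Z2 statistic (assuming 'Z2' is the
# corresponding test_type value, as suggested by the class description)
AS14_test()(qf, ef, y, test_type='Z2', theta=theta, seed=2)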


class eval_utils.PinballLoss(theta, ret_mean=True)[source]

Bases: object

Pinball (a.k.a. Quantile) loss function

Parameters:

  • theta: float

    the target confidence level

  • ret_mean: bool, optional

    if True, the function returns the mean of the loss, otherwise the loss point-by-point. Default is True

Example of usage

import numpy as np
from eval_utils import PinballLoss

y = np.random.randn(250)*1e-2  # Replace with price returns
qf = np.random.uniform(-1, 0, 250)  # Replace with quantile forecasts
theta = 0.05  # Set the desired confidence level

PinballLoss(theta)(qf, y)  # Compute the pinball loss
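
For reference, the quantile (pinball) loss corresponds to the standard definition (theta - 1{y < q}) * (y - q). The snippet below is a minimal NumPy sketch of that formula; it illustrates the definition and is not necessarily this class's exact implementation:

import numpy as np

def pinball_loss_ref(qf, y, theta, ret_mean=True):
    # Standard quantile loss: (theta - 1{y < qf}) * (y - qf)
    qf, y = np.asarray(qf), np.asarray(y)
    loss = (theta - (y < qf)) * (y - qf)
    return loss.mean() if ret_mean else loss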


class eval_utils.barrera_loss(theta, ret_mean=True)[source]

Bases: object

Barrera loss function

class eval_utils.bootstrap_mean_test(mu_target, one_side=False, n_boot=10000)[source]

Bases: object

Bootstrap test for assessing whether the mean of a sample is equal to (==) or greater than or equal to (>=) a target value

Parameters:

  • mu_target: float

    the mean to test against

  • one_side: bool, optional

    if True, the test is one sided (i.e. H0: mu >= mu_target), otherwise it is two-sided (i.e. H0: mu == mu_target). Default is False

  • n_boot: int, optional

    the number of bootstrap replications. Default is 10_000
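
The idea behind such a test can be sketched as follows: resample the data (centred so that H0 holds), build the bootstrap distribution of the sample mean, and compare the observed mean against it. The standalone function below illustrates this scheme; it is a sketch of the technique under those assumptions, not necessarily this class's internals:

import numpy as np

def bootstrap_mean_pvalue(x, mu_target, one_side=False, n_boot=10_000, seed=None):
    # Bootstrap p-value for H0: mu == mu_target (two-sided) or
    # H0: mu >= mu_target (one-sided, rejected for small sample means)
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    obs = x.mean() - mu_target
    x0 = x - x.mean() + mu_target  # centre the sample so that H0 holds
    boot = rng.choice(x0, size=(n_boot, x.size), replace=True).mean(axis=1) - mu_target
    if one_side:
        return np.mean(boot <= obs)
    return np.mean(np.abs(boot) >= np.abs(obs))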

class eval_utils.patton_loss(theta, ret_mean=True)[source]

Bases: object

Patton loss function
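
The name presumably refers to the FZ0 loss of Patton, Ziegel, and Chen (2019). The snippet below is a standalone NumPy sketch of that formula; it illustrates the FZ0 loss itself and is not necessarily this class's exact call signature or scaling:

import numpy as np

def fz0_loss(qf, ef, y, theta, ret_mean=True):
    # FZ0 loss of Patton, Ziegel & Chen (2019); requires ef < 0
    qf, ef, y = np.asarray(qf), np.asarray(ef), np.asarray(y)
    loss = -(y <= qf) * (qf - y) / (theta * ef) + qf / ef + np.log(-ef) - 1
    return loss.mean() if ret_mean else loss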