Entropic Value at Risk
In financial mathematics and stochastic optimization, the concept of risk measure is used to quantify the risk involved in a random outcome or risk position. Many risk measures have been proposed, each with certain characteristics. The entropic value at risk (EVaR) is a coherent risk measure introduced by Ahmadi-Javid,[1][2] which is an upper bound for the value at risk (VaR) and the conditional value at risk (CVaR), obtained from the Chernoff inequality. The EVaR can also be represented by using the concept of relative entropy. Because of its connection with the VaR and the relative entropy, this risk measure is called "entropic value at risk". The EVaR was developed to tackle some computational inefficiencies of the CVaR. Drawing inspiration from the dual representation of the EVaR, Ahmadi-Javid[1][2] developed a wide class of coherent risk measures, called g-entropic risk measures. Both the CVaR and the EVaR are members of this class.
Definition
Let $(\Omega, \mathcal{F}, P)$ be a probability space with $\Omega$ a set of all simple events, $\mathcal{F}$ a $\sigma$-algebra of subsets of $\Omega$, and $P$ a probability measure on $\mathcal{F}$. Let $X$ be a random variable and $\mathbf{L}_{M^+}$ be the set of all Borel measurable functions $X : \Omega \to \mathbb{R}$ whose moment-generating function $M_X(z)$ exists for all $z \geq 0$. The entropic value at risk (EVaR) of $X \in \mathbf{L}_{M^+}$ with confidence level $1-\alpha$ is defined as follows:

$$\text{EVaR}_{1-\alpha}(X) := \inf_{z>0}\left\{ z^{-1} \ln\left(\frac{M_X(z)}{\alpha}\right) \right\}. \qquad (1)$$
In finance, the random variable $X \in \mathbf{L}_{M^+}$ in the above equation is used to model the losses of a portfolio.
Consider the Chernoff inequality

$$\Pr(X \geq a) \leq e^{-za} M_X(z), \qquad \forall z > 0. \qquad (2)$$

Solving the equation $e^{-za} M_X(z) = \alpha$ for $a$ results in $a_X(\alpha, z) := z^{-1} \ln\left(\frac{M_X(z)}{\alpha}\right)$. Comparing with (1), we see that

$$\text{EVaR}_{1-\alpha}(X) = \inf_{z>0}\{ a_X(\alpha, z) \},$$

which shows the relationship between the EVaR and the Chernoff inequality. It is worth noting that $a_X(1, z)$ is the entropic risk measure or exponential premium, a concept used in finance and insurance, respectively.
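Since (1) is a one-dimensional minimization over $z$, it can be evaluated directly. The following is a minimal sketch (Python with NumPy/SciPy; the function name and the choice of a normal loss are illustrative, not part of the original presentation) that minimizes $a_X(\alpha, z)$ numerically and checks the result against the closed form for the normal case given in (8) below:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def evar_normal(alpha, mu=0.0, sigma=1.0):
        """EVaR_{1-alpha} for X ~ N(mu, sigma^2) by minimizing a_X(alpha, z) over z > 0."""
        # For the normal distribution, ln M_X(z) = mu*z + sigma^2 * z^2 / 2.
        def a_X(z):
            return (mu * z + 0.5 * (sigma * z) ** 2 - np.log(alpha)) / z
        return minimize_scalar(a_X, bounds=(1e-8, 1e4), method="bounded").fun

    alpha = 0.05
    print(evar_normal(alpha))           # numerical value of (1), approx. 2.4477
    print(np.sqrt(-2 * np.log(alpha)))  # closed form mu + sigma*sqrt(-2 ln alpha)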
Let $\mathbf{L}_M$ be the set of all Borel measurable functions $X : \Omega \to \mathbb{R}$ whose moment-generating function $M_X(z)$ exists for all $z$. The dual representation (or robust representation) of the EVaR is as follows:

$$\text{EVaR}_{1-\alpha}(X) = \sup_{Q \in \Im} \text{E}_Q(X), \qquad (3)$$

where $X \in \mathbf{L}_M$ and $\Im$ is a set of probability measures on $(\Omega, \mathcal{F})$ with $\Im = \{ Q \ll P : D_{KL}(Q\|P) \leq -\ln \alpha \}$. Note that

$$D_{KL}(Q\|P) := \int \frac{dQ}{dP} \ln\left(\frac{dQ}{dP}\right) dP$$

is the relative entropy of $Q$ with respect to $P$, also called the Kullback–Leibler divergence. The dual representation of the EVaR discloses the reason behind its naming.
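The dual representation (3) can be checked numerically on a small example. The sketch below (Python; the discrete distribution is an illustrative placeholder) relies on the standard fact that, for this kind of example, the supremum over the Kullback–Leibler ball is attained on the exponential-tilt family $dQ/dP \propto e^{zX}$, and compares the dual value with the primal definition (1):

    import numpy as np

    # Illustrative discrete loss distribution P: values x with probabilities p.
    x = np.array([-1.0, 0.0, 2.0, 5.0])
    p = np.array([0.4, 0.3, 0.2, 0.1])
    alpha = 0.2

    # Primal value (1): inf_{z>0} z^{-1} ln(M_X(z)/alpha), scanned on a grid of z.
    z = np.linspace(1e-4, 20.0, 20001)
    M = np.exp(np.outer(z, x)) @ p
    primal = np.min((np.log(M) - np.log(alpha)) / z)

    # Dual value (3): scan exponentially tilted measures dQ/dP proportional to e^{zX}
    # and keep those satisfying the constraint D_KL(Q||P) <= -ln(alpha).
    W = np.exp(np.outer(z, x)) * p
    Q = W / W.sum(axis=1, keepdims=True)
    kl = (Q * np.log(Q / p)).sum(axis=1)
    dual = (Q @ x)[kl <= -np.log(alpha)].max()

    print(primal, dual)  # the two values agree up to grid resolution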
Properties
The EVaR is a coherent risk measure. Moreover, the moment-generating function $M_X(z)$ can be represented by the EVaR: for all $X \in \mathbf{L}_{M^+}$ and $z > 0$,

$$M_X(z) = \sup_{0 < \alpha \leq 1} \left\{ \alpha \exp\left(z\,\text{EVaR}_{1-\alpha}(X)\right) \right\}. \qquad (4)$$
For $X, Y \in \mathbf{L}_M$, $\text{EVaR}_{1-\alpha}(X) = \text{EVaR}_{1-\alpha}(Y)$ for all $\alpha \in \,]0,1]$ if and only if $F_X(b) = F_Y(b)$ for all $b \in \mathbb{R}$.
The entropic risk measure with parameter $\theta$ can be represented by means of the EVaR: for all $X \in \mathbf{L}_{M^+}$ and $\theta > 0$,

$$\theta^{-1} \ln M_X(\theta) = \sup_{0 < \alpha \leq 1} \left\{ \text{EVaR}_{1-\alpha}(X) + \theta^{-1} \ln \alpha \right\}. \qquad (5)$$
The EVaR with confidence level $1-\alpha$ is the tightest possible upper bound that can be obtained from the Chernoff inequality for the VaR and the CVaR with confidence level $1-\alpha$:

$$\text{VaR}_{1-\alpha}(X) \leq \text{CVaR}_{1-\alpha}(X) \leq \text{EVaR}_{1-\alpha}(X). \qquad (6)$$
The following inequality holds for the EVaR:

$$\text{E}(X) \leq \text{EVaR}_{1-\alpha}(X) \leq \text{esssup}(X), \qquad (7)$$

where $\text{E}(X)$ is the expected value of $X$ and $\text{esssup}(X)$ is the essential supremum of $X$, i.e., $\inf_{t \in \mathbb{R}}\{ t : \Pr(X \leq t) = 1 \}$.
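As a concrete check of the ordering (6), for a standard normal loss with $\alpha = 0.05$ all three measures have closed forms (a short Python sketch; the VaR and CVaR formulas for the normal distribution are standard, and the EVaR uses (8) below):

    import numpy as np
    from scipy.stats import norm

    alpha = 0.05
    var = norm.ppf(1 - alpha)                     # VaR_{0.95}(X)  ~ 1.645
    cvar = norm.pdf(norm.ppf(1 - alpha)) / alpha  # CVaR_{0.95}(X) ~ 2.063
    evar = np.sqrt(-2 * np.log(alpha))            # EVaR_{0.95}(X) ~ 2.448, equation (8)
    assert var <= cvar <= evar
    print(var, cvar, evar)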
Examples
[Figure 1: Comparison of the VaR, CVaR and EVaR for the standard normal distribution.]
[Figure 2: Comparison of the VaR, CVaR and EVaR for the uniform distribution over the interval (0,1).]
For $X \sim N(\mu, \sigma^2)$,

$$\text{EVaR}_{1-\alpha}(X) = \mu + \sigma\sqrt{-2\ln\alpha}. \qquad (8)$$

For $X \sim U(a, b)$, inserting the moment-generating function $M_X(z) = \frac{e^{zb} - e^{za}}{z(b-a)}$ into (1) gives

$$\text{EVaR}_{1-\alpha}(X) = \inf_{z>0}\left\{ z^{-1} \ln\left(\frac{e^{zb} - e^{za}}{z(b-a)\,\alpha}\right) \right\}. \qquad (9)$$

Figures 1 and 2 compare the VaR, CVaR and EVaR for $N(0,1)$ and $U(0,1)$.
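Equation (8) can be evaluated directly; (9) has no elementary closed form but is a cheap one-dimensional minimization. A short sketch (Python; the helper name is illustrative, and the log of the MGF is computed in a numerically stable form to avoid overflow at large $z$):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def evar_uniform(alpha, a=0.0, b=1.0):
        """Evaluate (9): EVaR_{1-alpha} for X ~ U(a, b) by minimizing over z > 0."""
        def objective(z):
            # ln M_X(z) = z*b + ln(1 - e^{-z(b-a)}) - ln(z(b-a)), stable for large z
            log_mgf = z * b + np.log1p(-np.exp(-z * (b - a))) - np.log(z * (b - a))
            return (log_mgf - np.log(alpha)) / z
        return minimize_scalar(objective, bounds=(1e-8, 1e4), method="bounded").fun

    for alpha in (0.5, 0.1, 0.01):
        print(alpha, evar_uniform(alpha))  # approaches esssup(X) = b as alpha -> 0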
Optimization
Let $\rho$ be a risk measure. Consider the optimization problem

$$\min_{\boldsymbol{w} \in \boldsymbol{W}} \rho\left(G(\boldsymbol{w}, \boldsymbol{\psi})\right), \qquad (10)$$

where $\boldsymbol{w} \in \boldsymbol{W} \subseteq \mathbb{R}^n$ is an $n$-dimensional decision vector, $\boldsymbol{\psi}$ is an $m$-dimensional random vector with a known probability distribution, and $G(\boldsymbol{w}, \cdot) : \mathbb{R}^m \to \mathbb{R}$ is a Borel measurable function for all $\boldsymbol{w} \in \boldsymbol{W}$. Taking $\rho = \text{EVaR}_{1-\alpha}$ in (10) and using the definition (1) with $t = z^{-1}$ gives

$$\min_{\boldsymbol{w} \in \boldsymbol{W},\, t > 0}\left\{ t \ln\left(\frac{M_{G(\boldsymbol{w}, \boldsymbol{\psi})}(t^{-1})}{\alpha}\right) \right\}. \qquad (11)$$

Let $\boldsymbol{S}_{\boldsymbol{\psi}}$ be the support of the random vector $\boldsymbol{\psi}$. If $G(\cdot, \boldsymbol{s})$ is convex for all $\boldsymbol{s} \in \boldsymbol{S}_{\boldsymbol{\psi}}$, then the objective function of the problem (11) is also convex. If $G(\boldsymbol{w}, \boldsymbol{\psi})$ has the form
$$G(\boldsymbol{w}, \boldsymbol{\psi}) = g_0(\boldsymbol{w}) + \sum_{i=1}^m g_i(\boldsymbol{w})\, \psi_i, \qquad g_i : \mathbb{R}^n \to \mathbb{R},\; i = 0, 1, \ldots, m, \qquad (12)$$

and $\psi_1, \ldots, \psi_m$ are independent random variables in $\mathbf{L}_M$, then (11) becomes

$$\min_{\boldsymbol{w} \in \boldsymbol{W},\, t > 0}\left\{ g_0(\boldsymbol{w}) + t \sum_{i=1}^m \ln M_{\psi_i}\!\left(\frac{g_i(\boldsymbol{w})}{t}\right) - t \ln \alpha \right\}, \qquad (13)$$

which is computationally tractable. But for this case, if one uses the CVaR in problem (10), the resulting problem becomes

$$\min_{\boldsymbol{w} \in \boldsymbol{W},\, t \in \mathbb{R}}\left\{ t + \frac{1}{\alpha}\, \text{E}\left[ g_0(\boldsymbol{w}) + \sum_{i=1}^m g_i(\boldsymbol{w})\, \psi_i - t \right]_+ \right\}, \qquad (14)$$

where $[x]_+ = \max\{x, 0\}$.
It can be shown that, as the dimension of $\boldsymbol{\psi}$ increases, problem (14) becomes computationally intractable even for simple cases. For example, assume that $\psi_1, \ldots, \psi_m$ are independent discrete random variables that take $k$ distinct values. For fixed values of $\boldsymbol{w}$ and $t$, the complexity of computing the objective function given in problem (13) is of order $mk$, while the computing time for the objective function of problem (14) is of order $k^m$. For illustration, assume that $k = 2$, $m = 100$, and that the summation of two numbers takes $10^{-12}$ seconds. Computing the objective function of problem (14) then needs about $4 \times 10^{10}$ years, whereas the evaluation of the objective function of problem (13) takes about $10^{-10}$ seconds. This shows that the formulation with the EVaR outperforms the formulation with the CVaR (see [2] for more details).
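The source of this tractability gap is visible in code. The sketch below (Python; the data, weights and function names are illustrative placeholders, not part of the original formulation) evaluates the objective of (13) for independent discrete $\psi_i$ at a cost of $O(mk)$, one $k$-term moment-generating-function sum per factor:

    import numpy as np

    def evar_objective_13(w, t, g0, g, vals, probs, alpha):
        """Objective of (13): g0(w) + t * sum_i ln M_{psi_i}(g_i(w)/t) - t * ln(alpha).

        vals[i] and probs[i] hold the k support points and probabilities of psi_i,
        so each MGF is a k-term sum and the total cost is O(m*k) -- in contrast to
        the O(k^m) enumeration of joint scenarios needed for the CVaR form (14).
        """
        gi = g(w)  # the vector (g_1(w), ..., g_m(w))
        log_mgfs = [np.log(probs[i] @ np.exp(gi[i] * vals[i] / t))
                    for i in range(len(vals))]
        return g0(w) + t * sum(log_mgfs) - t * np.log(alpha)

    # Illustrative instance: m = 100 Bernoulli factors (k = 2), G(w, psi) = w . psi.
    m = 100
    vals = [np.array([0.0, 1.0])] * m
    probs = [np.array([0.5, 0.5])] * m
    w = np.full(m, 1.0 / m)
    print(evar_objective_13(w, t=1.0, g0=lambda w: 0.0, g=lambda w: w,
                            vals=vals, probs=probs, alpha=0.05))

Minimizing this objective jointly over $\boldsymbol{w} \in \boldsymbol{W}$ and $t > 0$ gives the EVaR formulation (13); the CVaR counterpart (14) would instead require an expectation over all $k^m$ joint scenarios.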
Generalization (g-entropic risk measures)
Drawing inspiration from the dual representation of the EVaR given in (3), one can define a wide class of information-theoretic coherent risk measures, introduced in [1][2]. Let $g$ be a convex proper function with $g(1) = 0$ and let $\beta$ be a non-negative number. The $g$-entropic risk measure with divergence level $\beta$ is defined as

$$\text{ER}_{g,\beta}(X) := \sup_{Q \in \Im} \text{E}_Q(X), \qquad (15)$$
where $\Im = \{ Q \ll P : H_g(P,Q) \leq \beta \}$, in which $H_g(P,Q) := \int g\left(\frac{dQ}{dP}\right) dP$ is the generalized relative entropy of $Q$ with respect to $P$. A primal representation of the class of $g$-entropic risk measures can be obtained as follows:
$$\text{ER}_{g,\beta}(X) = \inf_{t>0,\, \mu \in \mathbb{R}}\left\{ t\left[ \mu + \text{E}_P\left( g^*\left( \frac{X}{t} - \mu + \beta \right) \right) \right] \right\}, \qquad (16)$$

where $g^*$ is the conjugate of $g$.
The CVaR with confidence level $1-\alpha$ is a $g$-entropic risk measure, given by

$$g(x) = \begin{cases} 0 & 0 \leq x \leq \alpha^{-1} \\ +\infty & \text{otherwise} \end{cases}, \qquad \beta = 0, \qquad (17)$$

and the EVaR with confidence level $1-\alpha$ is a $g$-entropic risk measure, given by

$$g(x) = x \ln x, \qquad \beta = -\ln \alpha. \qquad (18)$$

This shows that both the CVaR and the EVaR belong to the class of $g$-entropic risk measures.
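To see that (18) indeed recovers the EVaR, substitute $g(x) = x \ln x$ into the definition of the generalized relative entropy (a one-line check):

$$H_g(P,Q) = \int g\left(\frac{dQ}{dP}\right) dP = \int \frac{dQ}{dP} \ln\left(\frac{dQ}{dP}\right) dP = D_{KL}(Q\|P),$$

so with $\beta = -\ln\alpha$ the feasible set $\{ Q \ll P : H_g(P,Q) \leq \beta \}$ coincides with the set $\Im$ in the dual representation (3), and hence $\text{ER}_{g,\beta}(X) = \text{EVaR}_{1-\alpha}(X)$.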
See also
Stochastic optimization
Risk measure
Value at risk
Expected shortfall
Kullback–Leibler divergence
References
1. Ahmadi-Javid, Amir (2011). "An information-theoretic approach to constructing coherent risk measures". Proceedings of the IEEE International Symposium on Information Theory, St. Petersburg, Russia. pp. 2125–2127. doi:10.1109/ISIT.2011.6033932.
2. Ahmadi-Javid, Amir (2012). "Entropic value-at-risk: A new coherent risk measure". Journal of Optimization Theory and Applications. 155 (3): 1105–1123. doi:10.1007/s10957-011-9968-2.
3. Ahmadi-Javid, Amir (2012). "Addendum to: Entropic Value-at-Risk: A New Coherent Risk Measure". Journal of Optimization Theory and Applications. 155 (3): 1124–1128. doi:10.1007/s10957-012-0014-9.
4. Breuer, Thomas; Csiszár, Imre (2013). "Measuring Distribution Model Risk". arXiv:1301.4832v1.