Bayesian Inference Methods for Univariate and Multivariate GARCH Models: A Survey
Abstract. This survey reviews the existing literature on the most relevant Bayesian inference
methods for univariate and multivariate GARCH models. The advantages and drawbacks of each
procedure are outlined as well as the advantages of the Bayesian approach versus classical procedures.
The paper places emphasis on recent Bayesian non-parametric approaches for GARCH models that
avoid imposing arbitrary parametric distributional assumptions. These novel approaches assume an
infinite mixture of Gaussian distributions for the standardized returns, which has been shown
to be more flexible and to describe better the uncertainty about future volatilities. Finally, the survey
presents an illustration using real data to show the flexibility and usefulness of the non-parametric
approach.
Keywords. Bayesian inference; Dirichlet process mixture; Financial returns; GARCH models;
Multivariate GARCH models; Volatility
1. Introduction
Understanding, modelling and predicting the volatility of financial time series has been extensively
researched for more than 30 years and the interest in the subject is far from decreasing. Volatility
prediction has a very wide range of applications in finance, for example, in portfolio optimization, risk
management, asset allocation, asset pricing. The two most popular approaches to model volatility are
based on the Autoregressive Conditional Heteroscedasticity (ARCH)-type and Stochastic Volatility (SV)-
type models. The seminal paper of Engle (1982) proposed the primary ARCH model while Bollerslev
(1986) generalized the purely autoregressive ARCH into an ARMA-type model, called the Generalized
Autoregressive Conditional Heteroscedasticity (GARCH) model. Since then, there has been a very large
amount of research on the topic, stretching to various model extensions and generalizations. Meanwhile,
the researchers have been addressing two important topics: looking for the best specification for the errors
and selecting the most efficient approach for inference and prediction.
Besides selecting the best model for the data, distributional assumptions for the returns are equally
important. It is well known that every prediction, in order to be useful, has to come with a certain precision
measurement. In this way the agent can know the risk she is facing, that is, the uncertainty. Distributional
assumptions make it possible to quantify this uncertainty about the future. Traditionally, the errors have been
assumed to be Gaussian, however, it has been widely acknowledged that financial returns display fat
tails and are not conditionally Gaussian. Therefore, it is common to assume a Student-t distribution, see
Bollerslev (1987), He and Teräsvirta (1999) and Bai et al. (2003), among others. However, the assumption
of Gaussian or Student-t distributions is rather restrictive. An alternative approach is to use a mixture
of distributions, which can approximate any distribution arbitrarily well given a sufficient number of mixture
components. A mixture of two Normals was used by Bai et al. (2003), Ausín and Galeano (2007) and
Giannikis et al. (2008), among others. These authors have shown that models with a mixture
distribution for the errors outperform the Gaussian one and, unlike the Student-t, do not require additional
restrictions on the degrees of freedom parameter.
As for the inference and prediction, the Bayesian approach is especially well-suited for GARCH
models and provides some advantages compared to classical estimation techniques, as outlined by Ardia
and Hoogerheide (2010). Firstly, the positivity constraints on the parameters needed to ensure a positive
variance may encumber some optimization procedures. In the Bayesian setting, constraints on the model parameters
can be incorporated via priors. Secondly, in most cases we are interested not in the model
parameters directly, but in some non-linear functions of them. In the maximum likelihood (ML) setting,
it is quite troublesome to perform inference on such quantities, while in the Bayesian setting it is
usually straightforward to obtain the posterior distribution of any non-linear function of the model
parameters. Furthermore, in the classical approach it is difficult to compare models by any means other
than the likelihood. In the Bayesian setting, marginal likelihoods and Bayes factors allow for consistent
comparison of non-nested models while incorporating Occam's razor for parsimony. Also, Bayesian
estimation provides reliable results even for finite samples. Finally, Hall and Yao (2003) add that the ML
approach presents some limitations when the errors are heavy tailed: the convergence rate is slow
and the estimators may not be asymptotically Gaussian.
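To make the point about non-linear functions concrete, the following minimal sketch (all parameter draws are synthetic stand-ins for real MCMC output) shows how a posterior sample of GARCH(1,1) parameters translates directly into a posterior sample, and hence credible intervals, for the unconditional variance ω/(1 − α − β):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MCMC draws of GARCH(1,1) parameters (omega, alpha, beta);
# in practice these would come from a posterior sampler.
omega = rng.uniform(0.01, 0.05, size=5000)
alpha = rng.uniform(0.03, 0.10, size=5000)
beta = rng.uniform(0.80, 0.88, size=5000)

# Posterior draws of the unconditional variance, a non-linear function
# of the parameters: sigma2 = omega / (1 - alpha - beta).
sigma2 = omega / (1.0 - alpha - beta)

# Any posterior summary (mean, quantile-based credible interval) follows directly.
post_mean = sigma2.mean()
ci_95 = np.quantile(sigma2, [0.025, 0.975])
```

In the ML setting, by contrast, a credible statement about ω/(1 − α − β) would require the delta method or a bootstrap.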
This survey reviews the existing Bayesian inference methods for univariate and multivariate GARCH
models, paying particular attention to their error specifications. The main emphasis of the paper is on the recent
development of an alternative inference approach for these models using Bayesian non-parametrics. The
classical parametric modelling, relying on a finite number of parameters, although widely used, has
certain drawbacks. Since the number of parameters in any model is fixed, one can encounter
underfitting or overfitting, arising from a mismatch between the amount of data available and the number
of parameters to be estimated. Therefore, in order to avoid assuming wrong parametric distributions, which may lead
to inconsistent estimators, it is better to consider a semi- or non-parametric approach. Bayesian
non-parametrics may lead to less constrained models than classical parametric Bayesian statistics and provide
an adequate description of the data, especially when the conditional return distribution is far from
Gaussian.
To our knowledge, there have been very few papers using Bayesian non-parametrics for GARCH
models. These are Ausín et al. (2014) for univariate GARCH, and Jensen and Maheu (2013) and Virbickaite
et al. (2013) for MGARCH. All of them have considered infinite mixtures of Gaussian distributions with
a Dirichlet process (DP) prior over the mixing distribution, which results in DP mixture (DPM) models.
This approach has so far proved to be the most popular Bayesian non-parametric modelling procedure. The
results across these papers have been consistent: the Bayesian non-parametric approach leads to more flexible
models and better explains heavy-tailed return distributions, which parametric models cannot fully
capture.
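As a brief illustration of the building block behind DPM models, the following sketch draws from a truncated stick-breaking representation of a DP mixture of Gaussians; the base measure and all hyperparameters are arbitrary choices for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def dpm_gaussian_sample(n, concentration=1.0, trunc=50):
    """Draw n points from a (truncated) Dirichlet process mixture of
    Gaussians: weights via stick-breaking, atoms from a base measure."""
    # Stick-breaking weights: w_k = v_k * prod_{j<k} (1 - v_j)
    v = rng.beta(1.0, concentration, size=trunc)
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    w = w / w.sum()                      # renormalize after truncation
    # Base measure G0 for the component means and scales (assumed here)
    mu = rng.normal(0.0, 2.0, size=trunc)
    sigma = rng.gamma(2.0, 0.5, size=trunc)
    # Pick a component for each draw, then sample from that Gaussian.
    comp = rng.choice(trunc, size=n, p=w)
    return rng.normal(mu[comp], sigma[comp])

draws = dpm_gaussian_sample(2000)
```

With a small concentration parameter, a few components carry most of the weight, so the mixture adapts its effective complexity to the data rather than fixing it in advance.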
The outline of this survey is as follows. Section 2 briefly introduces univariate GARCH models and
different inference and prediction methods. Section 3 overviews the existing models for multivariate
GARCH and different inference and prediction approaches. Section 4 introduces the Bayesian non-
parametric modelling approach and reviews the limited literature of this area in time-varying volatility
models. Section 5 presents a real data application. Finally, Section 6 concludes.
2. Univariate GARCH
As mentioned earlier, the two most popular approaches to model volatility are GARCH-type and SV-type
models. In this survey we focus on GARCH models; therefore, SV models will not be discussed further.
Journal of Economic Surveys (2015) Vol. 29, No. 1, pp. 76–96
© 2013 John Wiley & Sons Ltd
78 VIRBICKAITE ET AL.
Also, we are not going to enter into the technical details of the Bayesian algorithms; we refer to Robert
and Casella (2004) for a detailed description of Bayesian techniques and of the Markov Chain Monte Carlo
(MCMC) methods that have facilitated their use.
The standard Gibbs sampler cannot be used directly because, due to the recursive nature of the
conditional variance, the conditional posterior distributions of the model parameters are not of a
simple form. One of the alternatives is the Griddy–Gibbs sampler, as in Bauwens and Lubrano
(1998). They discuss that the previously used importance sampling and Metropolis algorithms have certain
drawbacks, such as requiring a careful choice of a good approximation of the posterior density.
The authors propose a Griddy–Gibbs sampler which exploits the analytical properties of the posterior density
as much as possible. In their paper the GARCH model has Student-t errors, which allow for fat tails.
The authors choose flat (uniform) priors on the parameters (ω, α, β) over whatever region is needed
to ensure the positivity of the variance; however, a flat prior for the degrees of freedom cannot be used,
because then the posterior density is not integrable. Instead, they choose a half Cauchy (the right half). The
posteriors of the parameters were found to be skewed, which is a disadvantage for the commonly used
Gaussian approximation. On the other hand, Ausín and Galeano (2007) modelled the errors of a GARCH
model with a mixture of two Gaussian distributions. The advantage of this approach over Student-t
errors is that, for the latter, some moments may not exist if the number of degrees of freedom is very small
(less than 5). The authors chose flat priors for all the parameters and discovered little
sensitivity to the change in the prior distributions (from uniform to Beta), unlike in Bauwens and Lubrano
(1998), where the sensitivity to the prior choice for the degrees of freedom is high. More articles using a
Griddy–Gibbs sampling approach are by Bauwens and Lubrano (2002), who modelled asymmetric
volatility with Gaussian innovations and used uniform priors for all the parameters, and by Wago
(2004), who explored an asymmetric GARCH model with Student-t errors.
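The core of a Griddy–Gibbs update can be sketched as follows: the conditional posterior of one parameter is evaluated on a grid, integrated numerically into a CDF, and a draw is obtained by inverse-CDF interpolation. The data, grid and conditioning values below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0, 1, 300)  # stand-in return series

def garch_loglik(y, omega, alpha, beta):
    """Gaussian GARCH(1,1) log-likelihood, with h_1 set to the sample variance."""
    h = np.empty_like(y)
    h[0] = y.var()
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + y ** 2 / h)

def griddy_gibbs_step(y, grid, alpha, beta):
    """One Griddy-Gibbs update of omega given (alpha, beta) under a flat
    prior: evaluate the conditional posterior on a grid, build the CDF by
    numerical integration, and draw by inverse-CDF interpolation."""
    logp = np.array([garch_loglik(y, w, alpha, beta) for w in grid])
    p = np.exp(logp - logp.max())        # stabilize before normalizing
    cdf = np.cumsum(p)
    cdf = cdf / cdf[-1]
    u = rng.uniform()
    return np.interp(u, cdf, grid)

grid = np.linspace(0.01, 2.0, 60)
omega_draw = griddy_gibbs_step(y, grid, alpha=0.05, beta=0.90)
```

A full sampler would cycle through ω, α and β (and the degrees of freedom, for Student-t errors) in this fashion; the cost of the grid evaluations at every sweep explains the large computational time mentioned below.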
Another MCMC algorithm used in estimating GARCH model parameters is the Metropolis–Hastings
(MH) method, which samples from a candidate density and then accepts or rejects the draws depending
on a certain acceptance probability. Ardia (2006) modelled the errors as Gaussian with zero
mean and unit variance, chose Gaussian priors, and used an MH algorithm to
draw samples from the joint posterior distribution. The author carried out a comparative analysis
between the ML and Bayesian approaches, finding, as in other papers, that some posterior distributions of
the parameters were skewed, thus warning against the abusive use of the Gaussian approximation. Also,
Ardia (2006) performed a sensitivity analysis of the prior means and scale parameters and concluded
that the initial priors in this case are vague enough. This approach has also been used by Müller and Pole
(1998), Nakatsuma (2000) and Vrontos et al. (2000), among others. A special case of the MH method
is the random walk Metropolis–Hastings (RWMH) where the proposal draws are generated by randomly
perturbing the current value using a spherically symmetric distribution. A usual choice is to generate
candidate values from a Gaussian distribution where the mean is the previous value of the parameter and
the variance can be calibrated to achieve the desired acceptance probability. This procedure is repeated
at each MCMC iteration. Ausín and Galeano (2007) also carried out a comparison of estimation
approaches: Griddy–Gibbs, RWMH and ML. It turns out that RWMH has difficulties in exploring the tails of
the posterior distributions, and ML estimates may be rather different for those parameters whose posterior
distributions are skewed.
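A minimal RWMH sampler for a Gaussian GARCH(1,1), with flat priors enforcing the positivity and stationarity constraints through the prior support, can be sketched as follows (the data, starting values and step size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0, 1, 300)  # stand-in return series

def log_post(theta, y):
    """Log-posterior under flat priors on the constrained region."""
    omega, alpha, beta = theta
    if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
        return -np.inf                   # prior support enforces the constraints
    h = np.empty_like(y)
    h[0] = y.var()
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * h) + y ** 2 / h)

def rwmh(y, n_iter=2000, step=0.02):
    theta = np.array([0.1, 0.05, 0.8])   # starting values (assumed)
    lp = log_post(theta, y)
    draws, accepted = [], 0
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step, 3)     # spherical Gaussian proposal
        lp_prop = log_post(prop, y)
        if np.log(rng.uniform()) < lp_prop - lp:  # MH acceptance step
            theta, lp = prop, lp_prop
            accepted += 1
        draws.append(theta.copy())
    return np.array(draws), accepted / n_iter

draws, acc_rate = rwmh(y)
```

The step size would in practice be calibrated so that the acceptance rate falls in the usual target range; too small a step explores the tails poorly, which is exactly the difficulty noted above.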
In order to select one of the algorithms, one might consider some criteria, such as fast convergence for
example. Asai (2006) numerically compares some of these approaches in the context of GARCH. The
Griddy–Gibbs method is capable of capturing the shape of the posterior using smaller MCMC outputs
than other methods; it is also flexible regarding the parametric specification of the model. However,
it can require a lot of computational time. This author also investigates MH, adaptive rejection Metropolis
sampling (ARMS), proposed by Gilks et al. (1995), and the acceptance–rejection MH algorithm (ARMH),
proposed by Tierney (1994). For more detail about each method in GARCH models see Nakatsuma
(2000) and Kim et al. (1998), among others. Using simulated data, Asai (2006) calculated geometric
averages of inefficiency factors for each method. The inefficiency factor is just the inverse of the Geweke (1992)
efficiency factor. According to this, the ARMH algorithm performed the best. Also, computational time
was taken into consideration, where ARMH clearly outperformed MH and ARMS, while Griddy–Gibbs
stayed just a bit behind. The author observes that even though the ARMH method showed the best results,
the posterior densities for each parameter did not quite explore the tails of the distributions, as desired. In
this case Griddy–Gibbs performs better; it also requires fewer draws than ARMH. Bauwens and Lubrano
(1998) investigate one more convergence criterion, proposed by Yu and Mykland (1998), which is based on
cumulative sum (cumsum) statistics. It basically shows that if the MCMC is converging, the graph of a certain
cumsum statistic against time should approach zero. Their Griddy–Gibbs algorithm converged quite fast
in all four parameters. The authors then explored the advantages and disadvantages of alternative
approaches: importance sampling and the MH algorithm. For importance sampling, one of
the main disadvantages, as mentioned before, is finding a good approximation of the posterior density
(the importance function). Also, compared with the Griddy–Gibbs algorithm, importance sampling requires
many more draws to get smooth graphs of the marginal densities. For the MH algorithm, as in
importance sampling, a good approximation needs to be found. Also, compared to Griddy–Gibbs, the
MH algorithm did not fully explore the tails of the distribution unless a very large number of draws was used.
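The two diagnostics mentioned above can be sketched as follows: the inefficiency factor estimated as one plus twice the sum of sample autocorrelations (the inverse of Geweke's relative numerical efficiency), and a simplified cumsum-type statistic in the spirit of Yu and Mykland (1998), illustrated on an i.i.d. chain versus a positively autocorrelated AR(1) chain that mimics MCMC output:

```python
import numpy as np

def inefficiency_factor(chain, max_lag=50):
    """Inefficiency factor 1 + 2 * sum of autocorrelations, i.e. the inverse
    of Geweke's (1992) relative numerical efficiency (truncated at max_lag)."""
    x = chain - chain.mean()
    var = x @ x / len(x)
    rho = np.array([(x[:-k] @ x[k:]) / (len(x) * var)
                    for k in range(1, max_lag + 1)])
    return 1.0 + 2.0 * rho.sum()

def cumsum_diagnostic(chain):
    """Simplified cumsum-type statistic: deviation of the running mean from
    the overall mean, scaled by the chain's standard deviation; it should
    shrink towards zero as the chain converges."""
    running_mean = np.cumsum(chain) / np.arange(1, len(chain) + 1)
    return (running_mean - chain.mean()) / chain.std()

rng = np.random.default_rng(4)
iid_chain = rng.normal(size=5000)
# An AR(1) chain mimics the positive autocorrelation of MCMC output.
ar_chain = np.empty(5000)
ar_chain[0] = 0.0
for t in range(1, 5000):
    ar_chain[t] = 0.9 * ar_chain[t - 1] + rng.normal()

iid_if = inefficiency_factor(iid_chain)   # close to 1
ar_if = inefficiency_factor(ar_chain)     # much larger than iid_if
```

The geometric averages reported by Asai (2006) are averages of such factors across parameters: the higher the factor, the more draws a sampler needs for the same effective sample size.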
Another important aspect of the Bayesian approach, as commented before, is its advantage in model
selection compared to classical methods. Miazhynskaia and Dorffner (2006) review some Bayesian
model selection methods using MCMC for GARCH-type models, which allow for the estimation of
marginal model likelihoods, Bayes factors or posterior model probabilities. These are compared with the
classical model selection criteria, showing that the Bayesian approach clearly considers model complexity
in a more unbiased way. Also, Chen et al. (2009) include a review of Bayesian selection methods for
asymmetric GARCH models, such as the GJR–GARCH and threshold GARCH. They show how, using
the Bayesian approach, it is possible to compare complex and non-nested models to choose, for example,
between GARCH and SV models, between symmetric and asymmetric GARCH models, or to determine
the number of regimes in threshold processes, among others.
An alternative to the previous parametric specifications is the use of Bayesian non-parametric
methods, which allow modelling the errors as an infinite mixture of Gaussians, as seen in the paper by Ausín
et al. (2014). The Bayesian non-parametric approach for time-varying volatility models will be discussed
in detail in Section 4.
To sum up, the number of articles published quite recently on estimating univariate GARCH models
using MCMC methods indicates still-growing interest in the area. Although numerous GARCH-family
models have been investigated using different MCMC algorithms, there are still many areas that need
further research and development.
3. Multivariate GARCH
Returns and volatilities depend on each other, so multivariate analysis is a more natural and useful
approach. The starting point of multivariate volatility models is univariate GARCH, thus the simplest
MGARCH models can be viewed as direct generalizations of their univariate counterparts. Consider a
multivariate return series {r_t}_{t=1}^T of size K × 1. Then

r_t = μ_t + a_t = μ_t + H_t^{1/2} ε_t

where μ_t = E[r_t | I_{t−1}], a_t are the mean-corrected returns, ε_t is a random vector such that E[ε_t] = 0 and
Cov[ε_t] = I_K, and H_t^{1/2} is a positive definite matrix of dimensions K × K such that H_t is the conditional
covariance matrix of r_t, that is, Cov[r_t | I_{t−1}] = H_t^{1/2} Cov[ε_t] (H_t^{1/2})′ = H_t. There is a wide range of
MGARCH models, most of which differ in how they specify H_t. In the rest of this section we will review
the most popular and widely used ones, and the different Bayesian approaches to make inference and prediction.
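The role of H_t^{1/2} can be illustrated with a short simulation: taking the Cholesky factor as the square root, returns generated as r_t = μ_t + H_t^{1/2} ε_t with Cov[ε_t] = I_K have conditional covariance H_t. The covariance matrix and mean below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(5)

# A hypothetical 2x2 conditional covariance matrix H_t and mean mu_t.
H_t = np.array([[1.0, 0.3],
                [0.3, 0.5]])
mu_t = np.array([0.01, 0.02])

# Any square root with H_t^{1/2} (H_t^{1/2})' = H_t works;
# the Cholesky factor is the usual choice.
L = np.linalg.cholesky(H_t)
eps = rng.standard_normal((10000, 2))      # E[eps] = 0, Cov[eps] = I_K
r = mu_t + eps @ L.T                       # r_t = mu_t + H_t^{1/2} eps_t

# The sample covariance of the simulated returns approximates H_t.
S = np.cov(r, rowvar=False)
```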
For general reviews on MGARCH models, see Bauwens et al. (2006), Silvennoinen and Teräsvirta (2009)
and Tsay (2010, chapter 10), among others.
Regarding inference, the same arguments provided for the univariate GARCH case above also apply.
ML estimation for MGARCH models can be carried out using numerical optimization
algorithms, such as Fisher scoring and Newton–Raphson. Vrontos et al. (2003b) estimated several
bivariate ARCH and GARCH models and found that some classical estimates of the parameters were quite
different from their Bayesian counterparts, due to the non-normality of the parameters. Thus, the
authors suggest careful interpretation of the classical estimates. Also, Vrontos et al. (2003b)
found it difficult to evaluate the classical estimates under the stationarity conditions, and estimation
carried out ignoring the stationarity constraints produced non-stationary estimates.
These difficulties can be overcome using the Bayesian approach.
3.1 VEC and BEKK

In the diagonal VEC (DVEC) model, the conditional covariance matrix follows

H_t = Ω + A ⊙ (a_{t−1} a′_{t−1}) + B ⊙ H_{t−1}

where ⊙ indicates the Hadamard product and Ω, A and B are symmetric K × K matrices. As noted in
Bauwens et al. (2006), H_t is positive definite provided that Ω, A, B and the initial matrix H_0 are positive
definite. However, these are quite strong restrictions on the parameters. Also, the DVEC model does not
allow for dynamic dependence between the volatility series. In order to avoid such strong restrictions on the
parameter matrices, Engle and Kroner (1995) propose the BEKK (Baba, Engle, Kraft and Kroner) model,
which is a special case of the VEC and, consequently, less general. It has the attractive property that
the conditional covariance matrices are positive definite by construction. The model looks as follows:
H_t = Ω* Ω*′ + A*′ (a_{t−1} a′_{t−1}) A* + B*′ H_{t−1} B*   (1)

where Ω* is a lower triangular matrix and A* and B* are K × K matrices. In the BEKK model it is easy
to impose the positive definiteness of the H_t matrix. However, the parameter matrices A* and B* do not
have direct interpretations, since they do not directly represent the size of the impact of the lagged values
of volatilities and squared returns.
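Equation (1) translates directly into a recursion; the following sketch (with arbitrary illustrative parameter matrices) also verifies the positive definiteness that holds by construction:

```python
import numpy as np

def bekk_covariances(a, Omega_star, A_star, B_star, H0):
    """Conditional covariances from the BEKK recursion in equation (1):
    H_t = Omega* Omega*' + A*' (a_{t-1} a'_{t-1}) A* + B*' H_{t-1} B*."""
    T, K = a.shape
    H = np.empty((T, K, K))
    H[0] = H0
    C = Omega_star @ Omega_star.T
    for t in range(1, T):
        outer = np.outer(a[t - 1], a[t - 1])
        H[t] = C + A_star.T @ outer @ A_star + B_star.T @ H[t - 1] @ B_star
    return H

rng = np.random.default_rng(6)
a = rng.normal(0, 1, (200, 2))                    # stand-in mean-corrected returns
Omega_star = np.array([[0.3, 0.0], [0.1, 0.3]])   # lower triangular
A_star = 0.3 * np.eye(2)
B_star = 0.9 * np.eye(2)
H = bekk_covariances(a, Omega_star, A_star, B_star, H0=np.eye(2))

# Positive definiteness by construction: all eigenvalues stay positive.
eigs = np.array([np.linalg.eigvalsh(H[t]).min() for t in range(len(H))])
```

No parameter constraints beyond the shapes are needed for positive definiteness, in contrast with the DVEC restrictions above.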
Osiewalski and Pipien (2004) compare the performance of various bivariate
ARCH and GARCH models, such as VEC and BEKK, estimated using Bayesian techniques. As the authors
observe, they are the first to perform model comparison using Bayes factors and posterior odds in the
MGARCH setting. The algorithm used for parameter estimation and inference is MH, and to check
for convergence they rely on the cumsum statistics introduced by Yu and Mykland (1998) and used by
Bauwens and Lubrano (1998) in the univariate GARCH setting. Using real data, the authors found that
the t-BEKK models performed the best, leaving t-VEC not far behind; the t-VEC model, sometimes also
called t-VECH, is a more general form of the DVEC, seen above, where the mean-corrected returns follow a
Student-t distribution. The name comes from the function vech, which reshapes the lower triangular
portion of a symmetric variance–covariance matrix into a column vector. To sum up, the authors choose
the t-BEKK model as clearly better than the t-VEC, because it is relatively simple and has fewer parameters
to estimate.
On the other hand, Hudson and Gerlach (2008) developed a prior distribution for a VECH specification
that directly satisfies both necessary and sufficient conditions for positive definiteness and covariance
stationarity, while remaining diffuse and non-informative over the allowable parameter space. These
authors employed MCMC methods, including MH, to help enforce the conditions in this prior.
More recently, Burda and Maheu (2013) use the BEKK–GARCH model to show the usefulness of a
new posterior sampler called the Adaptive Hamiltonian Monte Carlo (AHMC). Hamiltonian Monte Carlo
(HMC) is a procedure to sample from complex distributions. The AHMC is an alternative inferential
method based on HMC that is both fast and locally adaptive. The AHMC appears to work very well when
the dimension of the parameter space is very high. Model selection based on marginal likelihood is used
to show that full BEKK models are preferred to restricted diagonal specifications. Additionally, Burda
(2013) suggests an approach called Constrained Hamiltonian Monte Carlo (CHMC) in order to deal with
high-dimensional BEKK models with targeting, which allows for a reduction in the parameter dimension
without compromising the model fit, unlike the diagonal BEKK. Model comparison of the full BEKK and
the BEKK with targeting is performed, indicating that the latter dominates the former in terms of marginal
likelihood.
3.2 Factor-GARCH
Factor-GARCH was first proposed by Engle et al. (1990) to reduce the dimension of the multivariate
model of interest using an accurate approximation of the multivariate volatility. The definition of the
Factor-GARCH model, proposed by Lin (1992), says that the BEKK model in (1) is a Factor-GARCH if A*
and B* have rank one and the same left and right eigenvectors: A* = αwλ′, B* = βwλ′, where α and β
are scalars and w and λ are the eigenvectors. Several variants of the factor model have been proposed. One
of them is the full-factor multivariate GARCH by Vrontos et al. (2003a):
r_t = μ + a_t
a_t = W X_t
X_t | I_{t−1} ∼ N_K(0, Σ_t)

where W is a K × K parameter matrix and Σ_t is a diagonal matrix whose elements follow univariate
GARCH processes. Vrontos et al. (2003a) carry out an extensive Bayesian analysis of this full-factor
MGARCH model considering not only parameter uncertainty, but model uncertainty as well.
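The full-factor construction can be sketched as follows, with each factor variance following its own univariate GARCH(1,1); the matrix W and the GARCH parameters are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(7)
T, K = 500, 2

# W: assumed lower triangular with unit diagonal, as in full-factor models.
W = np.array([[1.0, 0.0],
              [0.5, 1.0]])

# Each factor variance follows its own univariate GARCH(1,1).
omega = np.array([0.05, 0.02])
alpha = np.array([0.05, 0.10])
beta = np.array([0.90, 0.85])

sigma2 = np.empty((T, K))
X = np.empty((T, K))
sigma2[0] = omega / (1 - alpha - beta)      # start at the unconditional variance
X[0] = rng.normal(0, np.sqrt(sigma2[0]))
for t in range(1, T):
    sigma2[t] = omega + alpha * X[t - 1] ** 2 + beta * sigma2[t - 1]
    X[t] = rng.normal(0, np.sqrt(sigma2[t]))

a = X @ W.T                                  # a_t = W X_t
# Conditional covariance of a_t: H_t = W Sigma_t W'.
H_last = W @ np.diag(sigma2[-1]) @ W.T
```

The dimension reduction is visible here: K univariate GARCH recursions plus the free elements of W replace a full multivariate covariance recursion.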
As already discussed, a very common stylized feature of financial time series is asymmetric
volatility. Dellaportas and Vrontos (2007) have proposed a new class of tree-structured MGARCH models
that explore the asymmetric volatility effect. As in the paper by Vrontos et al. (2003a), the authors
consider not only parameter-related uncertainty, but also the uncertainty corresponding to model selection.
In this case the Bayesian approach becomes particularly useful, because an alternative method based
on maximizing the pseudo-likelihood can only work after a single model has been selected. The authors
develop an MCMC stochastic search algorithm that generates candidate tree structures and their posterior
probabilities. The proposed algorithm converged fast. Such a modelling and inference approach leads to
more reliable and more informative results concerning model selection and individual parameter inference.
There are more models that are nested in BEKK, such as the Orthogonal GARCH for example,
see Alexander and Chibumba (1997) and Van der Weide (2002), among others. All of them fall into
the class of direct generalizations of univariate GARCH or linear combinations of univariate GARCH
models. Another class of models consists of the non-linear combinations of univariate GARCH models, such as the
constant conditional correlation (CCC), dynamic conditional correlation (DCC), general dynamic covariance (GDC)
and Copula–GARCH models. A very recent alternative approach that also considers Bayesian estimation
can be found in Jin and Maheu (2013), who propose new dynamic component models of returns and
realized covariance (RCOV) matrices based on time-varying Wishart distributions. In particular, Bayesian
estimation and model comparison are conducted against an existing range of multivariate GARCH models
and RCOV models.
3.3 CCC
The CCC model, proposed by Bollerslev (1990) and the simplest in its class, is based on the decomposition
of the conditional covariance matrix into conditional standard deviations and correlations. The
conditional covariance matrix H_t looks as follows:

H_t = D_t R D_t

where D_t is a diagonal matrix with the K conditional standard deviations and R is a time-invariant
conditional correlation matrix such that R = (ρ_ij) and ρ_ij = 1, ∀i = j. The CCC approach can be applied to a wide range of
univariate GARCH family models, such as the exponential GARCH or GJR–GARCH, for example.
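The decomposition H_t = D_t R D_t is straightforward to implement; the sketch below (with arbitrary illustrative values) shows that the implied correlation stays constant whatever the conditional standard deviations are:

```python
import numpy as np

def ccc_covariance(sd_t, R):
    """CCC conditional covariance H_t = D_t R D_t, where D_t is the diagonal
    matrix of conditional standard deviations and R is time invariant."""
    D_t = np.diag(sd_t)
    return D_t @ R @ D_t

R = np.array([[1.0, 0.4],
              [0.4, 1.0]])
sd_t = np.array([0.8, 1.5])                 # e.g. from univariate GARCH fits
H_t = ccc_covariance(sd_t, R)

# The implied correlation is constant regardless of sd_t.
corr = H_t[0, 1] / np.sqrt(H_t[0, 0] * H_t[1, 1])
```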
Vrontos et al. (2003b) have fitted a variety of bivariate ARCH and GARCH models to real data
in order to select the best model specification and to compare the Bayesian parameter estimates
with those of ML. These authors considered three ARCH and three GARCH models, all of them
with constant conditional correlations. They used an MH algorithm, which allows simulating from the joint posterior
distribution of the parameters. For model comparison and selection, Vrontos et al. (2003b) obtained
predictive distributions and assessed the comparative validity of the analysed models, according to which the
CCC model with diagonal covariance matrix performed the best for one-step-ahead predictions.
3.4 DCC
A natural extension of the simple CCC model is given by the DCC models, first proposed by Tse and Tsui (2002)
and Engle (2002a). The DCC approach is more realistic, because the dependence between returns is likely
to be time varying.
The models proposed by Tse and Tsui (2002) and Engle (2002a) consider that the conditional covariance
matrix takes the form H_t = D_t R_t D_t, where R_t is now a time-varying correlation matrix at time t. The
models differ in the specification of R_t. In the paper by Tse and Tsui (2002), the conditional correlation matrix is
R_t = (1 − θ_1 − θ_2)R + θ_1 R_{t−1} + θ_2 Ψ_{t−1}, where θ_1 and θ_2 are non-negative scalar parameters such that
θ_1 + θ_2 < 1, R is a positive definite matrix with ρ_ii = 1, and Ψ_{t−1} is a K × K sample correlation
matrix of the past m standardized mean-corrected returns u_t = D_t^{−1} a_t. On the other hand, in the paper
by Engle (2002a), the specification of R_t is R_t = (I ⊙ Q_t)^{−1/2} Q_t (I ⊙ Q_t)^{−1/2}, where Q_t = (1 − α −
β)Q̄ + α(u_{t−1} u′_{t−1}) + β Q_{t−1}, u_{i,t} = a_{i,t}/√h_{ii,t} are the mean-corrected standardized returns, α and β are
non-negative scalar parameters such that α + β < 1, and Q̄ is the unconditional covariance matrix of u_t.
As noted in Bauwens et al. (2006), the model by Engle (2002a) does not formulate the conditional correlation as a weighted
sum of past correlations, unlike the DCC model by Tse and Tsui (2002), seen earlier. The drawback
of both of these models is that θ_1, θ_2, α and β are scalar parameters, so all conditional correlations have the same dynamics.
However, as Tsay (2010) notes, the models are parsimonious.
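Engle's (2002a) correlation recursion can be sketched as follows (the standardized returns and parameter values are illustrative):

```python
import numpy as np

def dcc_correlations(u, alpha, beta):
    """Engle (2002a) DCC recursion: Q_t = (1-alpha-beta)*Qbar
    + alpha*u_{t-1}u'_{t-1} + beta*Q_{t-1}, then
    R_t = diag(Q_t)^{-1/2} Q_t diag(Q_t)^{-1/2}."""
    T, K = u.shape
    Qbar = np.cov(u, rowvar=False)           # unconditional covariance of u_t
    Q = Qbar.copy()
    R = np.empty((T, K, K))
    for t in range(T):
        if t > 0:
            Q = ((1 - alpha - beta) * Qbar
                 + alpha * np.outer(u[t - 1], u[t - 1]) + beta * Q)
        d = 1.0 / np.sqrt(np.diag(Q))        # rescale Q_t into a correlation
        R[t] = Q * np.outer(d, d)
    return R

rng = np.random.default_rng(8)
u = rng.normal(0, 1, (300, 2))               # stand-in standardized returns
R = dcc_correlations(u, alpha=0.05, beta=0.90)
```

The scalar α and β mentioned above are visible here: every pair of assets shares the same persistence in its correlation dynamics.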
Moreover, as financial returns display not only asymmetric volatility but also excess kurtosis, previous
research, as in the univariate case, has mostly considered using a multivariate Student-t distribution for the
errors. However, as already discussed, this approach has several limitations. Galeano and Ausín (2010)
propose an MGARCH–DCC model where the standardized innovations follow a mixture of Gaussian
distributions. This allows capturing heavy tails without being limited by the degrees of freedom constraint,
which has to be imposed in the Student-t distribution so that higher moments exist. The authors
estimate the proposed model using both the classical ML and Bayesian approaches. In order to estimate the model
parameters, the dynamics of single assets and of the dynamic correlations, and the parameters of the Gaussian
mixture, Galeano and Ausín (2010) rely on an RWMH algorithm. The BIC criterion was used for selecting
the number of mixture components, and performed well on simulated data. Using real data, the authors
provide an application to calculating the Value at Risk (VaR) and solving a portfolio selection problem.
The ML and Bayesian approaches performed similarly in point estimation; however, the Bayesian
approach, besides giving point estimates, allows the derivation of predictive distributions for the
portfolio VaR.
An extension of the DCC model of Engle (2002a) is the Asymmetric DCC (ADCC), also proposed by Engle
(2002a), which incorporates an asymmetric correlation effect: correlations between asset
returns decrease more in a bear market than they increase when the market performs well. Cappiello
et al. (2006) generalize the ADCC model into the AGDCC model, where the parameters of the correlation
equation are vectors rather than scalars. This allows for asset-specific correlation dynamics. In the AGDCC
model, the Q_t matrix in the DCC model is replaced with:

Q_t = (1 − κ̄² − λ̄² − δ̄²/2)S + diag(κ) u_{t−1} u′_{t−1} diag(κ) + diag(λ) Q_{t−1} diag(λ) + diag(δ) η_{t−1} η′_{t−1} diag(δ)

where u_t = D_t^{−1} a_t are the mean-corrected standardized returns, η_t = u_t ⊙ I(u_t < 0) selects just the negative
returns, 'diag' stands for either taking just the diagonal elements of a matrix or making a diagonal
matrix from a vector, S is a sample correlation matrix of u_t, κ, λ and δ are K × 1 vectors, κ̄ = K^{−1} Σ_{i=1}^{K} κ_i,
λ̄ = K^{−1} Σ_{i=1}^{K} λ_i and δ̄ = K^{−1} Σ_{i=1}^{K} δ_i. To ensure the positivity and stationarity of Q_t, it is necessary to
impose κ_i, λ_i, δ_i > 0 and κ_i² + λ_i² + δ_i²/2 < 1, ∀i = 1, …, K. The AGDCC of Cappiello et al. (2006)
is just the special case where κ_1 = ⋯ = κ_K, λ_1 = ⋯ = λ_K and δ_1 = ⋯ = δ_K.
To our knowledge, the only paper that considers the AGDCC model in the Bayesian setting is Virbickaite
et al. (2013), who propose to model the distribution of the standardized returns as an infinite scale mixture
of Gaussian distributions by relying on Bayesian non-parametrics. This approach is presented in more
detail in Section 4.
3.5 Copula–GARCH
The use of copulas is an alternative approach to study return time series and their volatilities. The main
convenience of using copulas is that individual marginal densities of the returns can be defined separately
from their dependence structure. Then, each marginal time series can be modelled using univariate
specification, and the dependence between the returns can be modelled by selecting an appropriate
copula function. A K-dimensional copula C(u_1, …, u_K) is a multivariate distribution function on the
unit hypercube [0, 1]^K with uniform [0, 1] marginal distributions. Sklar's Theorem (see Sklar, 1959)
states that every joint distribution F(x_1, …, x_K), whose marginals are given
by F_1(x_1), …, F_K(x_K), can be written as F(x_1, …, x_K) = C(F_1(x_1), …, F_K(x_K)), where C is a copula
function of F, which is unique if the marginal distributions are continuous.
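Sklar's theorem can be illustrated by construction: sample from a copula and push the uniforms through two different inverse marginals. The sketch below uses a Clayton copula (sampled via the Marshall–Olkin frailty algorithm) with exponential and logistic marginals chosen purely for illustration; the rank correlation is driven by the copula alone:

```python
import numpy as np

rng = np.random.default_rng(9)

# Sample from a Clayton copula via the Marshall-Olkin frailty algorithm,
# then push the uniforms through two different inverse marginals.
theta = 2.0                                   # Clayton dependence parameter
n = 20000
w = rng.gamma(1.0 / theta, 1.0, size=n)       # shared frailty variable
e = rng.exponential(1.0, size=(n, 2))
u = (1.0 + e / w[:, None]) ** (-1.0 / theta)  # copula sample on (0,1)^2

# Sklar's theorem in action: same copula C, two very different marginals.
x1 = -np.log(1.0 - u[:, 0])                   # F_1^{-1}: exponential marginal
x2 = np.log(u[:, 1] / (1.0 - u[:, 1]))        # F_2^{-1}: logistic marginal

# Rank correlation depends on C only, not on the chosen marginals
# (for Clayton, Kendall's tau = theta / (theta + 2) = 0.5 here).
rank1 = np.argsort(np.argsort(x1))
rank2 = np.argsort(np.argsort(x2))
spearman = np.corrcoef(rank1, rank2)[0, 1]
```

The Clayton copula also exhibits lower-tail dependence, which is one reason such families are attractive for joint crashes in return data.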
The most popular approach to volatility modelling through copulas is called the Copula–GARCH
model, where univariate GARCH models are specified for each marginal series and the dependence
structure between them is described using a copula function. A very useful feature of copulas, as noted by
Patton (2009), is that the marginal distributions of the random variables do not need to be similar to each
other. This is very important in modelling time series, because each series might follow a different
distribution. The choice of copulas ranges from a simple Gaussian copula to more flexible ones, such as
the Clayton, Gumbel and mixed Gaussian copulas. In the existing literature, different parametric and non-parametric
specifications can be used for the marginals and the copula function C. Also, the copula function can be
assumed to be constant or time varying, as seen in Ausín and Lopes (2010), among others.
Estimation of Copula–GARCH models can be performed in a variety of ways. ML is the obvious
choice for fully parametric models. Estimation is generally based on a multistage method: first, the
parameters of the marginal univariate distributions are estimated, and these are then conditioned on when
estimating the parameters of the copula. Another approach is non- or semi-parametric estimation of the
univariate marginal distributions, followed by parametric estimation of the copula parameters. As Patton
(2006) showed, the two-stage ML approach leads to consistent, but not efficient, estimators.
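A minimal sketch of such a semi-parametric two-stage scheme follows, with empirical CDFs standing in for the marginal models (in a full Copula–GARCH application, stage 1 would fit univariate GARCH models and the transform would be applied to the standardized residuals); the simulated data and the Gaussian copula choice are illustrative, not the procedure of any particular cited paper.

```python
# Sketch: semi-parametric two-stage estimation of a Gaussian copula.
# Stage 1 (here): probability integral transform via empirical CDFs.
# Stage 2: estimate the copula correlation from the normal scores.
import numpy as np
from scipy.stats import norm

def empirical_pit(x):
    """Rescaled empirical CDF, keeping u strictly inside (0, 1)."""
    ranks = np.argsort(np.argsort(x)) + 1
    return ranks / (len(x) + 1.0)

def gaussian_copula_corr(x1, x2):
    """Gaussian copula correlation estimated as the sample correlation
    of the normal scores z_i = Phi^{-1}(u_i)."""
    z1 = norm.ppf(empirical_pit(x1))
    z2 = norm.ppf(empirical_pit(x2))
    return np.corrcoef(z1, z2)[0, 1]

# Illustrative data with known dependence (true correlation 0.7).
rng = np.random.default_rng(0)
sample = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=5000)
rho_hat = gaussian_copula_corr(sample[:, 0], sample[:, 1])
assert abs(rho_hat - 0.7) < 0.05
```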
An alternative is to employ a Bayesian approach, as done by Ausín and Lopes (2010). These authors
developed a one-step Bayesian procedure where all parameters are estimated at the same time using
the entire likelihood function, and provided a methodology for obtaining optimal portfolios and calculating
VaR and CVaR. Ausín and Lopes (2010) used a Gibbs sampler to sample from the joint posterior,
where each parameter is updated using a random walk Metropolis–Hastings (RWMH) step. In order to reduce
the computational cost, the model and copula parameters are updated not one-by-one, but rather in blocks
consisting of highly correlated vectors of model parameters.
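A toy sketch of a blocked RWMH update is given below; the bivariate Gaussian "posterior" and the proposal scale are hypothetical stand-ins for the correlated blocks of GARCH and copula parameters, not the actual Copula–GARCH posterior.

```python
# Toy sketch of a blocked random-walk Metropolis-Hastings (RWMH) update:
# a block of highly correlated parameters is proposed and accepted
# jointly rather than one-by-one. The target and proposal scale below
# are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9
prec = np.linalg.inv(np.array([[1.0, rho], [rho, 1.0]]))  # precision matrix

def log_post(theta):
    """Log-density of the toy target (up to an additive constant)."""
    return -0.5 * theta @ prec @ theta

theta = np.zeros(2)
draws = []
for _ in range(20000):
    prop = theta + 0.5 * rng.standard_normal(2)   # joint (block) proposal
    # Standard Metropolis accept/reject on the whole block at once.
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    draws.append(theta)
draws = np.array(draws)[5000:]                    # discard burn-in
assert abs(draws.mean()) < 0.2                    # chain centred near zero
```

Updating strongly correlated parameters jointly in this way tends to mix much better than single-site updates, which is the motivation given for the blocking.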
Arakelian and Dellaportas (2012) have also used Bayesian inference for Copula–GARCH models.
These authors proposed a methodology for modelling a dynamic dependence structure by allowing the
copula functions or copula parameters to change over time. The idea is to use a threshold approach so that
these changes, which are assumed to be unknown, do not evolve gradually but occur at distinct time points.
The authors also employed an RWMH algorithm for parameter estimation, together with a Laplace approximation.
The adoption of an MCMC algorithm allows the choice of different copula functions and/or different
parameter values between two time thresholds. Bayesian model averaging is considered for predicting
dependence measures such as Kendall's tau. They conclude that the new model performs well
and offers good insight into the time-varying dependencies between financial returns.
Hofmann and Czado (2010) developed Bayesian inference for a multivariate GARCH model where the
dependence is introduced by a D-vine copula on the innovations. A D-vine copula is a special case of vine
copulas, which provide a very flexible way to construct multivariate copulas because they allow the dependence
between pairs of margins to be modelled individually. Inference is carried out using a two-step MCMC method
closely related to the usual two-step maximum likelihood procedure for estimating Copula–GARCH models.
The authors then focus on estimating the VaR of a portfolio that shows asymmetric dependencies between
some pairs of assets and symmetric dependencies between others.
All the previously introduced methods rely on parametric assumptions for the distribution of the errors.
However, imposing a certain distribution can be rather restrictive and lead to underestimated uncertainty
about future volatilities, as seen in Virbickaite et al. (2013). Therefore, Bayesian non-parametric methods
become especially useful, since they do not impose any specific distribution on the standardized returns.
where the weights are obtained as before: ω_1 = β_1 and ω_m = β_m ∏_{i=1}^{m−1}(1 − β_i) for m = 2, 3, . . ., and where β_m ∼ Beta(1, α) and θ_m ∼ G_0.
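The stick-breaking weights can be simulated under truncation; this short sketch (with illustrative values of α and of the truncation level M) checks that the weights account for the whole stick up to a geometrically vanishing remainder.

```python
# Sketch: stick-breaking weights of a Dirichlet process, truncated at M
# components. omega_1 = beta_1, omega_m = beta_m * prod_{i<m}(1 - beta_i),
# with beta_m ~ Beta(1, alpha); alpha and M below are illustrative.
import numpy as np

rng = np.random.default_rng(42)
alpha, M = 2.0, 200
beta = rng.beta(1.0, alpha, size=M)
# Stick length remaining before each break: 1, (1-b1), (1-b1)(1-b2), ...
remaining = np.concatenate(([1.0], np.cumprod(1.0 - beta[:-1])))
omega = beta * remaining
# All weights plus the unbroken remainder account for the whole stick,
# and the remainder vanishes geometrically fast in M.
assert abs(omega.sum() + np.prod(1.0 - beta) - 1.0) < 1e-9
assert omega.sum() > 0.99
```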
Regarding inference algorithms, there are two main types of approaches. On the one hand, there are the marginal
methods, such as those proposed by Escobar and West (1995), MacEachern (1994) and Neal (2000),
which rely on the Polya urn representation. All these algorithms are based on integrating out the infinite-dimensional
part of the model. Recently, another class of algorithms, called conditional methods, has
been proposed. These approaches, based on the stick-breaking scheme, leave the infinite part in the
model and sample only a finite number of variables. They include the procedure of Walker (2007), who
introduces slice sampling schemes to deal with the infiniteness in DPM models, and the retrospective MCMC
method of Papaspiliopoulos and Roberts (2008), which was later combined by Papaspiliopoulos (2008) with
the slice sampling method of Walker (2007) to obtain a composite algorithm that is faster and
easier to implement. Generally, stick-breaking procedures produce better
mixing and simpler algorithms than Polya urn procedures.
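The Polya urn representation underlying the marginal methods can be sketched through its clustering (Chinese restaurant process) predictive rule; the values of α and n below are illustrative.

```python
# Sketch of the Polya urn (Chinese restaurant process) predictive rule
# exploited by the marginal methods once G is integrated out: observation
# n+1 joins existing cluster k with probability n_k / (n + alpha), or
# starts a new cluster with probability alpha / (n + alpha).
import numpy as np

rng = np.random.default_rng(7)
alpha, n = 1.0, 2000
counts = [1]                                  # first observation: new cluster
for n_seen in range(1, n):
    probs = np.array(counts + [alpha]) / (n_seen + alpha)
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):
        counts.append(1)                      # open a new cluster
    else:
        counts[k] += 1                        # join an existing cluster
assert sum(counts) == n
# The number of occupied clusters grows like alpha * log(n), so only a
# handful of mixture components are active in practice.
assert 2 < len(counts) < 30
```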
[Figure 1. Log-return series (left) and histograms (right) of the FTSE100 (top) and S&P500 (bottom) indices.]
Table 2. Estimation Results for FTSE100 (Subindex 1) and S&P500 (Subindex 2) Log-Returns, with 30,000
iterations plus 10,000 Burn-In Iterations.
As the multivariate model, a GJR–ADCC specification was chosen, allowing for asymmetric volatilities and asymmetric time-varying
correlations. Moreover, the authors carried out a simulation study that illustrated the adaptability
of the DPM model. Finally, the authors provided a real-data application to a portfolio decision problem,
concluding that DPM models are less restrictive, adapt to whatever distribution the data
come from and, therefore, can better capture the uncertainty about financial decisions.
To sum up, the findings in the above-mentioned papers are consistent: the Bayesian semi-parametric
approach leads to more flexible models and is better at explaining heavy-tailed return distributions,
which parametric models cannot fully capture. The parameter estimates are less precise, that is, wider Bayesian
credible intervals are observed, because the semi-parametric models are less restricted. This provides a
more adequate measure of uncertainty: if, in the Gaussian setting, the credible intervals are very narrow
while the real data are not Gaussian, the agent becomes overconfident about her decisions and takes
more risk than she intends to assume. Steel (2008) observes that the combination of Bayesian methods
and MCMC computational algorithms provides new modelling possibilities and calls for more research
on non-parametric Bayesian time series modelling.
[Figure 2. Log-Predictive Densities of the One-Step-Ahead Returns r_{1,t+1} and r_{2,t+1} for the Bayesian Gaussian and DPM Models.]
Table 3. Estimated Means, Medians and 95% Credible Intervals of One-Step-Ahead Volatilities of FTSE100
and S&P500 Log-Returns.
H*_{T+1}(1,1): 1.7164 0.4007 0.4098 (0.3681, 0.4538) 0.3996 (0.3550, 0.4512) 0.4099 0.0857 0.3983 0.0962
H*_{T+1}(1,2): 1.1120 0.2911 0.2800 (0.2571, 0.3077) 0.2751 (0.2421, 0.3123) 0.2790 0.0506 0.2742 0.0702
H*_{T+1}(2,2): 1.9617 0.4939 0.4635 (0.4159, 0.5193) 0.4431 (0.3912, 0.5059) 0.4606 0.1034 0.4408 0.1146
5. Illustration
This illustration using real data has two goals: first, to show the advantages of the
Bayesian approach, such as the ability to obtain posterior densities of quantities of interest and the ease
of incorporating various constraints on the parameters; and second, to illustrate the flexibility of the Bayesian
non-parametric approach for GARCH modelling.
The data used for estimation are the log-returns (in percentages), obtained from close prices adjusted
for dividends and splits, of two market indices, FTSE100 and S&P500, from 10 November 2004 to 10
December 2012, resulting in a sample size of 2000 observations. FTSE100 is a share index of the 100
companies listed on the London Stock Exchange with the highest market capitalization. S&P500 is a
stock market index based on the common stock prices of 500 top publicly traded American companies.
The data were obtained from Yahoo Finance. Figure 1 and Table 1 present the basic plots and descriptive
statistics of the two log-return series.
As seen from the plot and the descriptive statistics, the data are slightly skewed and exhibit high kurtosis;
therefore, assuming a Gaussian distribution for the standardized returns would be inappropriate. We thus
estimate this bivariate time series using the ADCC model of Engle (2002a), presented in Section 3.4,
which incorporates an asymmetric correlation effect. The univariate series are assumed to follow GJR-GARCH(1, 1)
models in order to incorporate the leverage effect in volatilities. As for the errors, we use
an infinite scale mixture of Gaussian distributions; we therefore call the final model GJR–ADCC–DPM.
Inference and prediction are carried out using Bayesian non-parametric techniques, as in Virbickaite
et al. (2013). The selection of the MGARCH specification is arbitrary and other models might work
equally well. For the sake of comparison, we also estimate a restricted GJR–ADCC–Gaussian model using
ML and Bayesian approaches. The estimation results are presented in Table 2.
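The GJR-GARCH(1,1) variance recursion assumed for each univariate marginal can be sketched as follows; the parameter values below are hypothetical, chosen only to satisfy positivity and stationarity constraints, and are not the estimates of Table 2.

```python
# Sketch of the GJR-GARCH(1,1) variance recursion:
# h_t = omega + (alpha + gamma * 1{eps_{t-1} < 0}) * eps_{t-1}^2
#       + beta * h_{t-1},
# so negative shocks add the extra leverage term gamma.
import numpy as np

def gjr_garch_variance(eps, omega=0.05, alpha=0.04, gamma=0.10, beta=0.90):
    h = np.empty(len(eps))
    # Initialize at the unconditional variance (symmetric innovations).
    h[0] = omega / (1.0 - alpha - 0.5 * gamma - beta)
    for t in range(1, len(eps)):
        leverage = gamma if eps[t - 1] < 0 else 0.0
        h[t] = omega + (alpha + leverage) * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(3)
h = gjr_garch_variance(rng.standard_normal(1000))
assert np.all(h > 0)
# Leverage effect: a negative shock raises next-period variance more
# than a positive shock of the same magnitude.
h_neg = gjr_garch_variance(np.array([-1.0, 0.0]))
h_pos = gjr_garch_variance(np.array([+1.0, 0.0]))
assert h_neg[1] > h_pos[1]
```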
The estimated parameters are very similar across all three approaches, except for α and the asymmetric
correlation coefficient δ. Since α and δ are so close to zero, ML has some trouble estimating these
parameters. Overall, δ is small, indicating little evidence of asymmetric behaviour in the correlations.
Figure 2 shows the estimated marginal predictive densities of the one-step-ahead returns, in log scale,
using the Bayesian approach. We can observe the differences in the tails arising from the different specifications
of the errors. The DPM model allows for a more flexible distribution and, therefore, for more extreme returns,
that is, fatter tails. The estimated densities were obtained using the procedure described in Virbickaite
et al. (2013).
[Figure 3. Posterior distributions of the one-step-ahead volatilities and correlation for the Gaussian and DPM models.]
Table 3 presents the estimated means, medians and 95% credible intervals of the one-step-ahead volatility
matrices in the Bayesian context. The matrix element (1,1) represents the volatility of the FTSE100 series,
the element (2,2) that of the S&P500, and the off-diagonal elements (1,2) and (2,1) represent the covariance between
the two financial returns. Figure 3 draws the posterior distributions of the volatilities and the correlation. The estimated
mean volatilities for both the DPM and Gaussian approaches are very similar; the main differences
arise from the shape of the posterior distributions. The 95% credible intervals for the correlation under the DPM
model are wider, providing a more realistic measure of uncertainty about future correlations between the
two assets. This has an important implication in a financial setting: an investor who wrongly assumes
Gaussianity would be overconfident about her decision and unable to adequately measure the risk she
is facing. See Virbickaite et al. (2013) for a more detailed comparison of the DPM and alternative parametric
approaches in portfolio decision problems.
To sum up, this illustration has shown the main differences between the standard estimation procedures
and the new non-parametric approach. Even though the point estimates for the parameters and the one-
step-ahead volatilities are very similar, the main differences arise from the thickness of tails of predictive
distributions of one-step-ahead returns and the shape of the posterior distribution for the one-step-ahead
volatilities.
6. Conclusions
In this paper, we have reviewed univariate and multivariate GARCH models and inference methods, putting
emphasis on the Bayesian approach. We have surveyed the existing literature on Bayesian inference methods
for MGARCH models, outlining the advantages of the Bayesian approach over classical procedures.
We have also discussed in more detail recent Bayesian non-parametric
methods for GARCH models, which avoid imposing arbitrary parametric distributional assumptions. This
new approach is more flexible and can better describe the uncertainty about future volatilities and returns,
as has been illustrated using real data.
Acknowledgements
We are grateful to an anonymous referee for helpful comments. The first and second authors are grateful for
the financial support from MEC grant ECO2011-25706. The third author acknowledges financial support from
MEC grant ECO2012-38442.
References
Alexander, C.O. and Chibumba, A.M. (1997) Multivariate orthogonal factor GARCH. Mimeo, University of
Sussex.
Antoniak, C.E. (1974) Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems.
Annals of Statistics 2: 1152–1174.
Arakelian, V. and Dellaportas, P. (2012) Contagion determination via copula and volatility threshold models.
Quantitative Finance 12: 295–310.
Ardia, D. (2006) Bayesian estimation of the GARCH (1,1) model with normal innovations. Student 5: 1–13.
Ardia, D. and Hoogerheide, L.F. (2010) Efficient Bayesian estimation and combination of GARCH-type
models. In K. Bocker (ed.), Rethinking Risk Measurement and Reporting: Examples and Applications
from Finance, Vol. II, Chapter 1. London: RiskBooks.
Asai, M. (2006) Comparison of MCMC methods for estimating GARCH models. Journal of the Japan Statistical
Society 36: 199–212.
Ausín, M.C. and Galeano, P. (2007) Bayesian estimation of the Gaussian mixture GARCH model.
Computational Statistics & Data Analysis 51: 2636–2652.
Ausín, M.C. and Lopes, H.F. (2010) Time-varying joint distribution through copulas. Computational Statistics
& Data Analysis 54: 2383–2399.
Ausín, M.C., Galeano, P. and Ghosh, P. (2014) A semiparametric Bayesian approach to the analysis of financial
time series with applications to value at risk estimation. European Journal of Operational Research 232:
350–358.
Bai, X., Russell, J.R. and Tiao, G.C. (2003) Kurtosis of GARCH and stochastic volatility models with non-
normal innovations. Journal of Econometrics 114: 349–360.
Bauwens, L. and Lubrano, M. (1998) Bayesian inference on GARCH models using the Gibbs sampler.
Econometrics Journal 1: 23–46.
Bauwens, L. and Lubrano, M. (2002) Bayesian option pricing using asymmetric GARCH models. Journal of
Empirical Finance 9: 321–342.
Bauwens, L., Laurent, S. and Rombouts, J.V.K. (2006) Multivariate GARCH models: a survey. Journal of
Applied Econometrics 21: 79–109.
Bera, A.K. and Higgins, M.L. (1993) ARCH models: properties, estimation and testing. Journal of Economic
Surveys 7: 305–362.
Black, F. (1976) Studies of stock market volatility changes. Proceedings of the American Statistical Association;
Business and Economic Statistics Section 177–181.
Bollerslev, T. (1986) Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31:
307–327.
Bollerslev, T. (1987) A conditionally heteroskedastic time series model for speculative prices and rates of
return. The Review of Economics and Statistics 69: 542–547.
Bollerslev, T. (1990) Modelling the coherence in short-run nominal exchange rates: a multivariate generalized
ARCH model. The Review of Economics and Statistics 72: 498–505.
Bollerslev, T., Chou, R.Y. and Kroner, K.F. (1992) ARCH modeling in finance: a review of the theory and
empirical evidence. Journal of Econometrics 52: 5–59.
Bollerslev, T., Engle, R.F. and Nelson, D.B. (1994) ARCH models. In R.F. Engle and D. McFadden (eds),
Handbook of Econometrics, Vol. 4 (pp. 2959–3038). Amsterdam: Elsevier.
Bollerslev, T., Engle, R.F. and Wooldridge, J.M. (1988) A capital asset pricing model with time-varying
covariances. Journal of Political Economy 96: 116–131.
Burda, M. (2013) Constrained Hamiltonian Monte Carlo in BEKK GARCH with targeting. Working Paper,
University of Toronto.
Burda, M. and Maheu, J.M. (2013) Bayesian adaptively updated Hamiltonian Monte Carlo with an application to
high-dimensional BEKK GARCH models. Studies in Nonlinear Dynamics & Econometrics 17: 345–372.
Cappiello, L., Engle, R.F. and Sheppard, K. (2006) Asymmetric dynamics in the correlations of global equity
and bond returns. Journal of Financial Econometrics 4: 537–572.
Chen, C.W., Gerlach, R. and So, M. K. (2009) Bayesian model selection for heteroscedastic models. Advances
in Econometrics 23: 567–594.
Dellaportas, P. and Vrontos, I.D. (2007) Modelling volatility asymmetries: a Bayesian analysis of a class of
tree structured multivariate GARCH models. The Econometrics Journal 10: 503–520.
Ding, Z. and Engle, R.F. (2001) Large scale conditional covariance matrix modeling, estimation and testing.
Academia Economic Papers 29: 157–184.
Engle, R.F. (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of United
Kingdom inflation. Econometrica 50: 987–1008.
Engle, R.F. (2002a) Dynamic conditional correlation. Journal of Business & Economic Statistics 20: 339–350.
Engle, R.F. (2002b) New frontiers for ARCH models. Journal of Applied Econometrics 17: 425–446.
Engle, R.F. (2004) Risk and volatility: econometric models and financial practice. The American Economic
Review 94: 405–420.
Engle, R.F. and Kroner, K.F. (1995) Multivariate simultaneous generalized ARCH. Econometric Theory 11:
122–150.
Engle, R.F., Ng, V.K. and Rothschild, M. (1990) Asset pricing with a factor-ARCH covariance structure.
Journal of Econometrics 45: 213–237.
Escobar, M.D. and West, M. (1995) Bayesian density estimation and inference using mixtures. Journal of the
American Statistical Association 90: 577–588.
Ferguson, T.S. (1973) A Bayesian analysis of some nonparametric problems. Annals of Statistics 1: 209–230.
Galeano, P. and Ausı́n, M.C. (2010) The Gaussian mixture dynamic conditional correlation model: parameter
estimation, value at risk calculation, and portfolio selection. Journal of Business and Economic Statistics
28: 559–571.
Geweke, J. (1992) Evaluating the accuracy of sampling-based approaches to the calculation of posterior
moments. Bayesian Statistics 4: 169–193.
Giannikis, D., Vrontos, I.D. and Dellaportas, P. (2008) Modelling nonlinearities and heavy tails via threshold
normal mixture GARCH models. Computational Statistics & Data Analysis 52: 1549–1571.
Gilks, W.R., Best, N.G. and Tan, K.K.C. (1995) Adaptive rejection metropolis sampling within Gibbs sampling.
Journal of the Royal Statistical Society Series C 44: 455–472.
Glosten, L.R., Jagannathan, R. and Runkle, D.E. (1993) On the relation between the expected value and the
volatility of the nominal excess return on stocks. The Journal of Finance 48: 1779–1801.
Greyserman, A., Jones, D.H. and Strawderman, W.E. (2006) Portfolio selection using hierarchical Bayesian
analysis and MCMC methods. Journal of Banking & Finance 30: 669–678.
Hall, P. and Yao, Q. (2003) Inference in ARCH and GARCH models with heavy-tailed errors. Econometrica
71: 285–317.
He, C. and Teräsvirta, T. (1999) Properties of moments of a family of GARCH processes. Journal of
Econometrics 92: 173–192.
Hofmann, M. and Czado, C. (2010) Assessing the VaR of a portfolio using D-vine copula based multivariate
GARCH models. Working Paper, Technische Universität München Zentrum Mathematik.
Hudson, B.G. and Gerlach, R.H. (2008) A Bayesian approach to relaxing parameter restrictions in multivariate
GARCH models. Test 17: 606–627.
Ishida, I. and Engle, R.F. (2002) Modeling variance of variance: the square-root, the affine, and the CEV
GARCH models. Working Paper, New York University, pp. 1–47.
Jacquier, E. and Polson, N.G. (2013) Asset allocation in finance: a Bayesian perspective. In P. Damien,
P. Dellaportas, N.G. Polson and D.A. Stephen (eds), Bayesian Theory and Applications, (pp. 501–516).
Oxford: Oxford University Press.
Jensen, M.J. and Maheu, J.M. (2013) Bayesian semiparametric multivariate GARCH modeling. Journal of
Econometrics 176: 3–17.
Jin, X. and Maheu, J.M. (2013) Modeling realized covariances and returns. Journal of Financial Econometrics
11: 335–369.
Jorion, P. (1986) Bayes-stein estimation for portfolio analysis. The Journal of Financial and Quantitative
Analysis 21: 279–292.
Kang, L. (2011) Asset allocation in a Bayesian copula-GARCH framework: an application to the ‘passive funds
versus active funds’ problem. Journal of Asset Management 12: 45–66.
Kim, S., Shephard, N. and Chib, S. (1998) Stochastic volatility: likelihood inference and comparison with
ARCH Models. Review of Economic Studies 65: 361–393.
Lin, W.L. (1992) Alternative estimators for factor GARCH models: a Monte Carlo comparison. Journal of
Applied Econometrics 7: 259–279.
MacEachern, S.N. (1994) Estimating normal means with a conjugate style Dirichlet process prior.
Communications in Statistics: Simulation and Computation 23: 727–741.
Miazhynskaia, T. and Dorffner, G. (2006) A comparison of Bayesian model selection based on MCMC with
application to GARCH-type models. Statistical Papers 47: 525–549.
Müller, P. and Pole, A. (1998) Monte Carlo posterior integration in GARCH models. Sankhya, Series B 60:
127–14.
Nakatsuma, T. (2000) Bayesian analysis of ARMA-GARCH models: a Markov chain sampling approach.
Journal of Econometrics 95: 57–69.
Neal, R.M. (2000) Markov chain sampling methods for Dirichlet process mixture models. Journal of
Computational and Graphical Statistics 9: 249–265.
Nelson, D.B. (1991) Conditional heteroskedasticity in asset returns: a new approach. Econometrica 59: 347–
370.
Osiewalski, J. and Pipien, M. (2004) Bayesian comparison of bivariate ARCH-type models for the main
exchange rates in Poland. Journal of Econometrics 123: 371–391.
Papaspiliopoulos, O. (2008) A note on posterior sampling from Dirichlet mixture models. Working Paper,
University of Warwick. Centre for Research in Statistical Methodology, Coventry, pp. 1–8.
Papaspiliopoulos, O. and Roberts, G.O. (2008). Retrospective Markov chain Monte Carlo methods for Dirichlet
process hierarchical models. Biometrika 95: 169–186.
Patton, A.J. (2006) Modelling asymmetric exchange rate dependence. International Economic Review 47:
527–556.
Patton, A.J. (2009) Copula-based models for financial time series. In T.G. Andersen, T. Mikosch, J.P. Kreiß,
and R.A. Davis (eds), Handbook of Financial Time Series, Chapter 36, (pp. 767–785). Berlin, Heidelberg:
Springer.
Robert, C.P. and Casella, G. (2004) Monte Carlo Statistical Methods. Springer Texts in Statistics, 2nd edn. New
York: Springer.
Silvennoinen, A. and Teräsvirta, T. (2009) Multivariate GARCH models. In T.G. Andersen, T. Mikosch, J.P.
Kreiß, and R.A. Davis (eds), Handbook of Financial Time Series, (pp. 201–229). Berlin, Heidelberg:
Springer.
Sklar, A. (1959) Fonctions de répartition à n dimensions et leurs marges. Publications de l'Institut de Statistique
de l'Université de Paris 8: 229–231.
Steel, M. (2008) Bayesian time series analysis. In S. Durlauf and L. Blume (eds), The New Palgrave Dictionary
of Economics, 2nd edn. London: Palgrave Macmillan.
Teräsvirta, T. (2009) An introduction to univariate GARCH models. In T.G. Andersen, T. Mikosch, J.P. Kreiß,
and R.A. Davis (eds), Handbook of Financial Time Series, (pp. 17–42). Berlin, Heidelberg: Springer.
Tierney, L. (1994) Markov chains for exploring posterior distributions. Annals of Statistics 22: 1701–1728.
Tsay, R.S. (2010) Analysis of Financial Time Series, 3rd edn. Hoboken: John Wiley & Sons, Inc.
Tse, Y.K. and Tsui, A.K.C. (2002) A multivariate generalized autoregressive conditional heteroscedasticity
model with time-varying correlations. Journal of Business & Economic Statistics 20: 351–362.
Van der Weide, R. (2002) GO-GARCH: a multivariate generalized orthogonal GARCH model. Journal of
Applied Econometrics 17: 549–564.
Virbickaite, A., Ausín, M.C. and Galeano, P. (2013) A Bayesian non-parametric approach to asymmetric
dynamic conditional correlation model with application to portfolio selection. Working paper,
arXiv:1301.5129v1 [q-fin.PM].
Vrontos, I.D., Dellaportas, P. and Politis, D.N. (2000) Full Bayesian inference for GARCH and EGARCH
models. Journal of Business and Economic Statistics 18: 187–198.
Vrontos, I.D., Dellaportas, P. and Politis, D.N. (2003a) A full-factor multivariate GARCH model. Econometrics
Journal 6: 312–334.
Vrontos, I.D., Dellaportas, P. and Politis, D.N. (2003b) Inference for some multivariate ARCH and GARCH
models. Journal of Forecasting 22: 427–446.
Wago, H. (2004) Bayesian estimation of smooth transition GARCH model using Gibbs sampling. Mathematics
and Computers in Simulation 64: 63–78.
Walker, S.G. (2007) Sampling the Dirichlet mixture model with slices. Communications in Statistics: Simulation
and Computation 36: 45–54.
Yu, B. and Mykland, P. (1998) Looking at Markov samplers through cusum path plots: a simple diagnostic
idea. Statistics and Computing 8: 275–286.
Zakoian, J.M. (1994) Threshold heteroskedastic models. Journal of Economic Dynamics and Control 18:
931–955.