Linear Mixed Models

Linear mixed models (LMMs) are a generalisation of linear models [1] that allows the outcome to be described as a sum of both fixed and random effects [2]. LMM inference is implemented by the glimix_core.lmm module and described here.

Note

Please refer to the Variables section for the definition of the otherwise-unspecified mathematical symbols.

Introduction

An LMM can be described as

(1)\[\mathbf y = \mathrm X\boldsymbol\beta + \mathrm G\mathbf u + \boldsymbol\epsilon,\]

where \(\mathbf u \sim \mathcal N(\mathbf 0, v_0\mathrm I_k)\) is a vector of random effects and the \(\epsilon_i\) are iid Normal random variables, each with zero mean and variance \(v_1\). The outcome vector is thus distributed according to

\[\mathbf y \sim \mathcal N(\mathrm X\boldsymbol\beta, v_0 \mathrm G \mathrm G^{\intercal} + v_1\mathrm I_n)\]

We refer to \(\mathrm G \mathrm G^{\intercal}\) as \(\mathrm K\), the covariance-matrix.
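
For concreteness, here is a small sketch (with hypothetical values for \(v_0\) and \(v_1\)) of how the marginal covariance of \(\mathbf y\) can be assembled from \(\mathrm G\):

>>> from numpy import array, eye
>>>
>>> G = array([[1, 2], [3, -1]], float)  # two samples, two random effects
>>> K = G @ G.T                          # K = G G^T
>>> v0, v1 = 0.5, 0.1                    # hypothetical variances
>>> cov = v0 * K + v1 * eye(2)           # covariance of the outcome vector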

The LMM class provides a FastLMM [3] implementation to perform inference over the variance parameters \(v_0\) and \(v_1\) and over the vector \(\boldsymbol\beta\) of fixed-effect sizes. An instance of this class is created by providing the outcome y, the covariates X, and the covariance K via its economic eigendecomposition. Here is an example:

>>> from numpy import array, ones
>>> from numpy_sugar.linalg import economic_qs_linear
>>> from glimix_core.lmm import LMM
>>>
>>> G = array([[1, 2], [3, -1], [1.1, 0.5], [0.5, -0.4]], float)
>>> QS = economic_qs_linear(G)
>>> X = ones((4, 1))
>>> y = array([-1, 2, 0.3, 0.5])
>>> lmm = LMM(y, X, QS)
>>> lmm.fit(verbose=False)
>>> lmm.lml()  
-2.2726234086180557

The method LMM.fit() is called to optimise the marginal likelihood over the fixed-effect sizes \(\boldsymbol\beta\) and over the variances \(v_0\) and \(v_1\). The resulting values for the above inference are:

>>> lmm.beta[0]  
0.0664650291693258
>>> lmm.v0  
0.33736446158226896
>>> lmm.v1  
0.012503600451739165
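
As a sanity check (this is not part of the library's API, and scipy is assumed to be available), the fitted parameters should reproduce lmm.lml() when plugged into the multivariate normal log-density given above, up to numerical precision:

>>> from numpy import eye
>>> from scipy.stats import multivariate_normal
>>>
>>> K = G @ G.T                          # covariance used by the model
>>> cov = lmm.v0 * K + lmm.v1 * eye(4)   # fitted marginal covariance
>>> mean = X @ lmm.beta                  # fitted mean X*beta
>>> lml = multivariate_normal(mean, cov).logpdf(y)  # should be close to lmm.lml()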

This module also provides FastScanner, an approximated LMM implementation for performing even faster inference across several (millions, for example) covariates independently. More detail about this approach is given in the next section.

Association scan

Let \(\mathrm X\) be an \(n\)-by-\(d\) matrix, \(\mathrm M\) an \(n\)-by-\(c\) matrix, and \(\mathbf y\) an array of outcomes:

\[\mathbf y \sim \mathcal N\big(~ \mathrm X\mathbf b + \mathbf{m}_j \alpha_j,~ s (\mathrm K + v \mathrm I_n) ~\big)\]

Note that \(\alpha_j\) is a scalar multiplying a column-matrix \(\mathbf{m}_j\). The variable \(s\) is a scaling factor that, if not set, is jointly adjusted with \(\alpha_j\) in order to maximise the marginal likelihood. The variable \(v\) is held fixed, which is why this inference can be performed quickly over millions of candidates.

The user provides the outcome y, the covariates X, the covariance K via its eigendecomposition, and the variance v to create an instance of the FastScanner class. After that, the user is able to call the FastScanner.fast_scan() method and pass the matrix of candidates M to compute the log of the marginal likelihood and the fixed-effect size corresponding to each column of M. For example:

>>> from glimix_core.lmm import FastScanner
>>>
>>> scanner = FastScanner(y, X, QS, lmm.v1)
>>> M = array([[1, 3, -1.5], [0, -2, 4], [-2, -2.5, 3], [0.2, +2, 2]])
>>> lmls, effect_sizes = scanner.fast_scan(M, verbose=False)
>>>
>>> lmls[0]  
0.25435129186857885
>>> lmls[1]  
0.4399332250146468
>>> lmls[2]  
0.8654753462599309
>>> effect_sizes[0]  
-0.07463997107886489
>>> effect_sizes[1]  
-0.044137681512885746
>>> effect_sizes[2]  
0.09065047700251282

API

class glimix_core.lmm.LMM(y, X, QS=None)[source]

Fast Linear Mixed Models inference via maximum likelihood.

It performs inference on the model (1) explained in the Introduction section.

Parameters:
  • y (array_like) – Outcome.
  • X (array_like) – Covariates as a two-dimensional array.
  • QS (tuple) – Economic eigendecomposition ((Q0, Q1), S0) of the covariance matrix K.

Examples

>>> from numpy import array
>>> from numpy_sugar.linalg import economic_qs_linear
>>> from glimix_core.lmm import LMM
>>>
>>> X = array([[1, 2], [3, -1]], float)
>>> QS = economic_qs_linear(X)
>>> covariates = array([[1], [1]])
>>> y = array([-1, 2], float)
>>> lmm = LMM(y, covariates, QS)
>>> lmm.fit(verbose=False)
>>> print('%.3f' % lmm.lml())
-3.649

One can also specify which parameters should be fitted:

>>> from numpy import array
>>> from numpy_sugar.linalg import economic_qs_linear
>>> from glimix_core.lmm import LMM
>>>
>>> X = array([[1, 2], [3, -1]], float)
>>> QS = economic_qs_linear(X)
>>> covariates = array([[1], [1]])
>>> y = array([-1, 2], float)
>>> lmm = LMM(y, covariates, QS)
>>> lmm.fix('delta')
>>> lmm.fix('scale')
>>> lmm.delta = 0.5
>>> lmm.scale = 1
>>> lmm.fit(verbose=False)
>>> print('%.3f' % lmm.lml())
-3.832
>>> lmm.unfix('delta')
>>> lmm.fit(verbose=False)
>>> print('%.3f' % lmm.lml())
-3.713
X

Covariates set by the user.

It has to be a matrix of number-of-samples by number-of-covariates.

Returns:Covariates.
Return type:numpy.ndarray
beta

Get or set fixed-effect sizes.

copy()[source]

Return a copy of this object.

This is useful for performing new inference based on the results of the copied object, as the new LMM object will start its inference from the initial solution found in the copied object.

Returns:Copy of this object.
Return type:LMM
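
A minimal usage sketch, continuing from the Examples above: the copy starts from the solution already found, so refitting after changing a constraint is typically cheap.

>>> lmm2 = lmm.copy()          # new LMM starting from the fitted solution
>>> lmm2.fix('delta')          # keep the current delta value fixed
>>> lmm2.fit(verbose=False)    # re-optimise the remaining parameters
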
fit(verbose=True)[source]

Maximise the marginal likelihood.

Parameters:verbose (bool, optional) – True for progress output; False otherwise. Defaults to True.
fix(var_name)[source]

Disable the optimisation of a given variable.

Parameters:var_name (str) – Possible values are "delta" and "scale".
fixed_effects_variance

Variance of the fixed-effects.

It is defined as the empirical variance of the prior mean.

Returns:Estimated variance of the fixed-effects.
Return type:numpy.ndarray
get_fast_scanner()[source]

Return FastScanner for association scan.

Returns:Instance of a class designed to perform very fast association scan.
Return type:FastScanner
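
A short sketch of how this could be used, continuing from the Examples above (the candidate matrix M1 below is hypothetical):

>>> from numpy import array
>>>
>>> scanner = lmm.get_fast_scanner()
>>> M1 = array([[1.0, -0.5], [0.3, 2.0]])  # hypothetical candidates for the two samples
>>> lmls, effsizes = scanner.fast_scan(M1, verbose=False)
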
isfixed(var_name)[source]

Return whether a variable is fixed or not.

Parameters:var_name (str) – Possible values are "delta" and "scale".
Returns:True if fixed; False otherwise.
Return type:bool
lml()[source]

Log of the marginal likelihood.

Returns:Log of the marginal likelihood.
Return type:float
mean

Estimated mean \(\mathrm X\boldsymbol\beta\).

Returns:Mean of the prior.
Return type:numpy.ndarray
scale

Scaling factor.

Returns:Current scale if fixed; optimal scale otherwise.
Return type:float
unfix(var_name)[source]

Enable the optimisation of a given variable.

Parameters:var_name (str) – Possible values are "delta" and "scale".
v0

First variance.

Returns:\(s (1 - \delta)\)
Return type:float
v1

Second variance.

Returns:\(s \delta\)
Return type:float
class glimix_core.lmm.FastScanner(y, X, QS, v)[source]

Approximated fast inference over several covariates.

Specifically, it computes the log of the marginal likelihood

\[\log p(\mathbf y)_j = \log \mathcal N\big(~ \mathrm X\mathbf b_j^* + \mathbf{m}_j \alpha_j^*,~ s_j^* (\mathrm K + v \mathrm I) ~\big),\]

for optimal \(\mathbf b_j^*\), \(\alpha_j^*\), and \(s_j^*\) values. Vector \(\mathbf{m}_j\) is the candidate defined by the j-th column of the matrix M provided to the method fast_scan(). Variance \(v\) is not optimised, for performance reasons; the method assumes the user has provided a reasonable value for it.

Parameters:
  • y (array_like) – Real-valued outcome.
  • X (array_like) – Matrix of covariates.
  • QS (tuple) – Economic eigendecomposition ((Q0, Q1), S0) of K.
  • v (float) – Variance due to iid effect.

Notes

The implementation requires further explanation as it is somewhat obscure. Let \(\mathrm B_i=\mathrm Q_i\mathrm D_i^{-1}\mathrm Q_i^{\intercal}\) for \(i \in \{0, 1\}\) and \(\mathrm E_j = [\mathrm X;~ \mathbf m_j]\). The matrix resulting from \(\mathrm E_j^{\intercal}\mathrm B_i\mathrm E_j\) is represented by the variable ETBE, and four views of this matrix are given by the variables XTBX, XTBM, MTBX, and MTBM. These views represent \(\mathrm X^{\intercal}\mathrm B_i\mathrm X\), \(\mathrm X^{\intercal}\mathrm B_i\mathbf m_j\), \(\mathbf m_j^{\intercal}\mathrm B_i\mathrm X\), and \(\mathbf m_j^{\intercal}\mathrm B_i\mathbf m_j\), respectively.
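
A rough numpy sketch of these quantities (all values and names below are illustrative; they are not the internal attributes of the class):

>>> from numpy import array, eye, hstack
>>>
>>> X_ = array([[1.0, 0.5], [0.2, -1.0], [0.3, 0.7]])  # hypothetical covariates, n=3, d=2
>>> m_j = array([0.1, -0.4, 2.0])                      # one hypothetical candidate
>>> B = eye(3)                                         # stand-in for Q_i D_i^{-1} Q_i^T
>>> E = hstack([X_, m_j.reshape(-1, 1)])               # E_j = [X; m_j]
>>> ETBE = E.T @ B @ E
>>> XTBX, XTBM = ETBE[:2, :2], ETBE[:2, 2:]            # X^T B_i X and X^T B_i m_j
>>> MTBX, MTBM = ETBE[2:, :2], ETBE[2:, 2:]            # m_j^T B_i X and m_j^T B_i m_j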

fast_scan(M, verbose=True)[source]

LML and fixed-effect sizes of each marker.

If the scaling factor s is not set by the user via method set_scale(), its optimal value will be found and used in the calculation.

Parameters:
  • M (array_like) – Matrix of fixed-effects across columns.
  • verbose (bool, optional) – True for progress information; False otherwise. Defaults to True.
Returns: Log of the marginal likelihoods and the fixed-effect sizes, one per column of M.

null_lml()[source]

Log of the marginal likelihood for the null hypothesis.

Returns:Log of the marginal likelihood.
Return type:float
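
Continuing from the scan example in the Association scan section, the null-model likelihood can be contrasted with each candidate's likelihood; the likelihood-ratio statistic below is only illustrative:

>>> null = scanner.null_lml()      # log marginal likelihood without any candidate
>>> lr = 2 * (lmls[0] - null)      # likelihood-ratio statistic for the first candidate
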
set_scale(scale)[source]

Set the scaling factor.

Calling this method disables the automatic scale learning.

Parameters:scale (float) – Scaling factor.
unset_scale()[source]

Unset the scaling factor.

If called, it enables the scale learning again.

Implementation

Single-trait

The LMM model (1) can be equivalently written as

\[\mathbf y \sim \mathcal N\Big(~ \mathrm X\boldsymbol\beta;~ s \big( (1-\delta) \mathrm K + \delta \mathrm I \big) ~\Big),\]

and we thus have \(v_0 = s (1 - \delta)\) and \(v_1 = s \delta\). Consider the economic eigendecomposition of \(\mathrm K\):

\[\begin{split}\overbrace{[\mathrm Q_0 \quad \mathrm Q_1]}^{\mathrm Q} \overbrace{\left[\begin{array}{cc} \mathrm S_0 & \mathbf 0\\ \mathbf 0 & \mathbf 0 \end{array}\right]}^{\mathrm S} \left[\begin{array}{c} \mathrm Q_0^{\intercal} \\ \mathrm Q_1^{\intercal} \end{array}\right] = \mathrm K\end{split}\]

and let

\[\begin{split}\mathrm D = \left[ \begin{array}{cc} (1-\delta)\mathrm S_0 + \delta\mathrm I_r & \mathbf 0\\ \mathbf 0 & \delta\mathrm I_{n-r} \end{array} \right].\end{split}\]

We thus have

\[((1-\delta)\mathrm K + \delta\mathrm I_n)^{-1} = \mathrm Q \mathrm D^{-1} \mathrm Q^{\intercal}.\]
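
A quick numeric check of this identity (using the G from the first example and a hypothetical \(\delta\)):

>>> from numpy import array, allclose, diag, eye
>>> from numpy.linalg import eigh, inv
>>>
>>> G = array([[1, 2], [3, -1], [1.1, 0.5], [0.5, -0.4]], float)
>>> K = G @ G.T
>>> delta = 0.3                              # hypothetical value
>>> S, Q = eigh(K)                           # full eigendecomposition K = Q S Q^T
>>> D = (1 - delta) * S + delta              # diagonal of D
>>> allclose(inv((1 - delta) * K + delta * eye(4)), Q @ diag(1 / D) @ Q.T)
True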

A diagonal covariance-matrix can then be used to define an equivalent marginal likelihood:

\[\mathcal N\left(\mathrm Q^{\intercal} \mathbf y ~|~ \mathrm Q^{\intercal} \mathrm X\boldsymbol\beta,~ s \mathrm D \right).\]

Taking the logarithm and expanding it gives us

\[\begin{split}\log p(\mathbf y) &= -\frac{n}{2} \log 2\pi - \frac{n}{2} \log s - \frac{1}{2}\log|\mathrm D|\\ &- \frac{1}{2} (\mathrm Q^{\intercal}\mathbf y)^{\intercal} s^{-1} \mathrm D^{-1}(\mathrm Q^{\intercal} \mathbf y)\\ &+ (\mathrm Q^{\intercal}\mathbf y)^{\intercal} s^{-1} \mathrm D^{-1} (\mathrm Q^{\intercal} \mathrm X \boldsymbol\beta)\\ &- \frac{1}{2} (\mathrm Q^{\intercal} \mathrm X \boldsymbol\beta)^{\intercal} s^{-1} \mathrm D^{-1} (\mathrm Q^{\intercal} \mathrm X \boldsymbol\beta).\end{split}\]

Setting the derivative of \(\log p(\mathbf y)\) with respect to the effect sizes equal to zero leads to the solution \(\boldsymbol\beta^*\) of the equation

\[(\mathrm Q^{\intercal}\mathrm X \boldsymbol\beta^*)^{\intercal} \mathrm D^{-1} (\mathrm Q^{\intercal} \mathrm X) = (\mathrm Q^{\intercal}\mathbf y)^{\intercal}\mathrm D^{-1} (\mathrm Q^{\intercal}\mathrm X).\]
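
Transposing both sides, this is the linear system \((\mathrm Q^{\intercal}\mathrm X)^{\intercal}\mathrm D^{-1}(\mathrm Q^{\intercal}\mathrm X)\boldsymbol\beta^* = (\mathrm Q^{\intercal}\mathrm X)^{\intercal}\mathrm D^{-1}(\mathrm Q^{\intercal}\mathbf y)\). Continuing the sketch above, with hypothetical X and y:

>>> from numpy import array, ones
>>> from numpy.linalg import solve
>>>
>>> X = ones((4, 1))                   # hypothetical covariates (intercept only)
>>> y = array([-1, 2, 0.3, 0.5])
>>> QtX, Qty = Q.T @ X, Q.T @ y
>>> A = QtX.T @ (QtX / D[:, None])     # (Q^T X)^T D^{-1} (Q^T X)
>>> b = QtX.T @ (Qty / D)              # (Q^T X)^T D^{-1} (Q^T y)
>>> beta_star = solve(A, b)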

Substituting it back into the log of the marginal likelihood gives the simpler formula

\[\begin{split}\log p(\mathbf y) &= -\frac{n}{2} \log 2\pi - \frac{n}{2} \log s - \frac{1}{2}\log|\mathrm D|\\ & +\frac{1}{2} (\mathrm Q^{\intercal}\mathbf y)^{\intercal} s^{-1} \mathrm D^{-1}\mathrm Q^{\intercal} (\mathrm X\boldsymbol\beta^* - \mathbf y).\end{split}\]

In the extreme case where \(\boldsymbol\beta^*\) is such that \(\mathbf y = \mathrm X\boldsymbol\beta^*\), the maximum is attained as \(s \rightarrow 0\).

Setting the derivative of \(\log p(\mathbf y; \boldsymbol\beta^*)\) with respect to the scale equal to zero leads to the maximum

\[s^* = n^{-1} (\mathrm Q^{\intercal}\mathbf y)^{\intercal} \mathrm D^{-1}\mathrm Q^{\intercal} (\mathbf y - \mathrm X\boldsymbol\beta^*).\]

We offer the possibility to use either \(s^*\) found via the above equation or a scale defined by the user. In the first case we have a further simplification of the log of the marginal likelihood:

\[\begin{split}\log p(\mathbf y; \boldsymbol\beta^*, s^*) &= -\frac{n}{2} \log 2\pi - \frac{n}{2} \log s^* - \frac{1}{2}\log|\mathrm D| - \frac{n}{2}\\ &= \log \mathcal N(\text{Diag}(\sqrt{s^*\mathrm D}) ~|~ \mathbf 0, s^*\mathrm D).\end{split}\]
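
Continuing the same sketch (and assuming scipy is available), the optimal scale and the simplified log marginal likelihood can be checked against the direct multivariate normal log-density:

>>> from numpy import allclose, eye, log, pi
>>> from scipy.stats import multivariate_normal
>>>
>>> resid = y - X @ beta_star
>>> s_star = (Qty / D) @ (Q.T @ resid) / 4   # optimal scale, n = 4
>>> lml = -2 * log(2 * pi) - 2 * log(s_star) - 0.5 * log(D).sum() - 2
>>> cov = s_star * ((1 - delta) * K + delta * eye(4))
>>> direct = multivariate_normal(X @ beta_star, cov).logpdf(y)
>>> allclose(lml, direct)
True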

Multi-trait

The extension to multiple traits becomes easy under the assumption, made throughout this section, that the traits are uncorrelated. Let \(m\) be the number of traits. We stack all the traits into

\[\mathbf y = \text{vec}\left(\left[ \mathbf y_1 ~ \mathbf y_2 ~\cdots~ \mathbf y_m \right] \right)\]

Similarly, we have the covariates, fixed-effect sizes, and the assembled covariance matrix \(\tilde{\mathrm K}\) as

\[\mathrm X = \text{vec}\left(\left[ \mathrm X_1 ~ \mathrm X_2 ~\cdots~ \mathrm X_m \right] \right),\]
\[\boldsymbol\beta = \text{vec}\left( \left[ \boldsymbol\beta_1 ~ \boldsymbol\beta_2 ~\cdots~ \boldsymbol\beta_m \right] \right),\]

and

\[\begin{split}\tilde{\mathrm K} = \left[ \begin{array}{ccc} \mathrm K & \mathbf 0 & \cdots \\ \vdots & \ddots & \\ \mathbf 0 & & \mathrm K \end{array} \right],\end{split}\]

where \(\mathrm K\) is repeated \(m\) times in \(\tilde{\mathrm K}\). We thus consider the model

(2)\[\mathbf y \sim \mathcal N\Big(~ \mathrm X\boldsymbol\beta;~ v_0 \tilde{\mathrm K} + v_1 \mathrm I_{nm} ~\Big),\]

which is the model (1) with multi-trait structure and uncorrelated traits.

We use the fact that the eigendecomposition of \(\tilde{\mathrm K}\) can be computed as fast as the eigendecomposition of \(\mathrm K\):

\[\begin{split}\overbrace{[\mathrm Q ~ \cdots ~ \mathrm Q]}^{\tilde{\mathrm Q}} \overbrace{\left[\begin{array}{ccc} \mathrm S & \mathbf 0 & \cdots \\ \vdots & \ddots & \\ \mathbf 0 & & \mathrm S \end{array}\right]}^{\tilde{\mathrm S}} \left[\begin{array}{c} \mathrm Q^{\intercal} \\ \vdots \\ \mathrm Q^{\intercal} \end{array}\right] = \tilde{\mathrm K}.\end{split}\]
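
One way to check this numerically: since \(\tilde{\mathrm K}\) is block diagonal, its eigenvectors and eigenvalues are simply those of \(\mathrm K\) arranged block-wise, so no additional decomposition is needed. A small sketch (hypothetical \(m = 2\), reusing K, Q, and S from the single-trait sketch):

>>> from numpy import allclose, diag, eye, kron
>>>
>>> m = 2
>>> K_tilde = kron(eye(m), K)        # block diagonal with m copies of K
>>> Q_tilde = kron(eye(m), Q)        # block-diagonal arrangement of Q
>>> S_tilde = kron(eye(m), diag(S))  # eigenvalues of K repeated m times
>>> allclose(Q_tilde @ S_tilde @ Q_tilde.T, K_tilde)
True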

Let

\[\begin{split}\tilde{\mathrm D} = \left[ \begin{array}{cc} (1-\delta)\tilde{\mathrm S}_0 + \delta\mathrm I_r & \mathbf 0\\ \mathbf 0 & \delta\mathrm I_{n-r} \end{array} \right].\end{split}\]

We thus have

\[\begin{split}((1-\delta)\tilde{\mathrm K} + \delta\mathrm I_{nm})^{-1} = \left[\begin{array}{ccc} \mathrm Q\mathrm D^{-1}\mathrm Q^{\intercal} & \mathbf 0 & \cdots\\ \vdots & \ddots & \\ \mathbf 0 & & \mathrm Q\mathrm D^{-1}\mathrm Q^{\intercal} \end{array}\right]\end{split}\]

A diagonal covariance-matrix can then be used to define an equivalent marginal likelihood:

\[\begin{split}\mathcal N\left(\tilde{\mathrm Q}^{\intercal} \mathbf y ~|~ \tilde{\mathrm Q}^{\intercal} \mathrm X\boldsymbol\beta ,~ s\left[\begin{array}{ccc} \mathrm D & \mathbf 0 & \cdots\\ \vdots & \ddots & \\ \mathbf 0 & & \mathrm D \end{array}\right] \right).\end{split}\]

The optimal effect sizes are solutions to the equation

\[\left[(\mathrm Q^{\intercal}\mathrm X_i\boldsymbol\beta_i^*)^{\intercal} \mathrm D^{-1} (\mathrm Q^{\intercal} \mathrm X_i)\right] = \left[(\mathrm Q^{\intercal}\mathbf y_i)^{\intercal}\mathrm D^{-1} (\mathrm Q^{\intercal}\mathrm X_i)\right],\]

for \(i \in \{1, \dots, m\}\). Setting the derivative of \(\log p(\mathbf y; \boldsymbol\beta^*)\) with respect to the scale equal to zero leads to the maximum

\[s^* = (nm)^{-1} \left[(\mathrm Q^{\intercal}\mathbf y_i)^{\intercal} \mathrm D^{-1}\right]\left[\mathrm Q^{\intercal} (\mathbf y_i - \mathrm X_i\boldsymbol\beta_i^*)\right].\]

Using the above optimal scale leads to a further simplification of the log of the marginal likelihood:

\[\begin{split}\log p(\mathbf y; \boldsymbol\beta^*, s^*) &= -\frac{nm}{2} \log 2\pi - \frac{nm}{2} \log s^* - \frac{1}{2}\log|\tilde{\mathrm D}| - \frac{nm}{2}\\ &= \log \mathcal N(\text{Diag}(\sqrt{s^*\tilde{\mathrm D}}) ~|~ \mathbf 0, s^*\tilde{\mathrm D}).\end{split}\]

Variables

\(n\):Number of samples.
\(m\):Number of traits.
\(c\):Number of candidates.
\(d\):Number of covariates.
\(k\):Number of random effects.
\(r\):Covariance-matrix rank.

References

[1]Wikipedia contributors. (2018, May 22). Linear model. In Wikipedia, The Free Encyclopedia. Retrieved 16:00, August 5, 2018, from https://en.wikipedia.org/w/index.php?title=Linear_model&oldid=842479751.
[2]Introduction to linear mixed models. UCLA: Institute for Digital Research and Education. Retrieved August 5, 2018, from https://stats.idre.ucla.edu/other/mult-pkg/introduction-to-linear-mixed-models/.
[3]Lippert, Christoph, Listgarten, Jennifer, Liu, Ying, Kadie, Carl M, Davidson, Robert I & Heckerman, David (2011). FaST linear mixed models for genome-wide association studies. Nature methods, 8, 833-835.