Gaussian Process

Introduction

A Gaussian Process (GP) is a stochastic process \(Y_t\), \(t\in\mathrm T\), for which any finite linear combination of samples has a joint Gaussian distribution [1] [2]. A member of this class of processes is defined by a mean function \(m(\cdot)\) and a covariance function \(k(\cdot, \cdot)\), whose domains are \(\mathrm T\) and \(\mathrm T\times\mathrm T\), respectively.

A finite collection of samples from such a process is distributed as

\[\mathbf y \sim \mathcal N(\mathbf m, \mathrm K),\]

where \(m_i = m(t_i)\) and \(\mathrm K_{ij} = k(t_i, t_j)\).
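Concretely, the mean vector and covariance matrix are built by evaluating \(m(\cdot)\) and \(k(\cdot,\cdot)\) on the finite index set. A minimal NumPy sketch of this construction (the constant-offset mean and linear-plus-noise kernel below are illustrative assumptions, not glimix_core's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite index set t_1, ..., t_n.
t = np.linspace(0.0, 1.0, 5)

# Illustrative mean and covariance functions (assumed for this sketch).
def m(ti):
    return 0.5  # constant-offset mean

def k(ti, tj):
    # Linear kernel plus unit noise on the diagonal.
    return ti * tj + (1.0 if ti == tj else 0.0)

# Evaluate the functions on the index set to obtain m and K.
mean = np.array([m(ti) for ti in t])
K = np.array([[k(ti, tj) for tj in t] for ti in t])

# Draw one finite sample y ~ N(m, K).
y = rng.multivariate_normal(mean, K)
```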

The GP class performs inference over the mean and covariance parameters via maximum likelihood and the Expectation Propagation [3] approximation.

Usage

class glimix_core.gp.GP(y, mean, cov)[source]

Gaussian Process inference via maximum likelihood.

Parameters
  • y (array_like) – Outcome variable.

  • mean (function) – Mean function. (Refer to Mean functions.)

  • cov (function) – Covariance function. (Refer to Covariance functions.)

Example

>>> from numpy.random import RandomState
>>>
>>> from glimix_core.example import offset_mean
>>> from glimix_core.example import linear_eye_cov
>>> from glimix_core.gp import GP
>>> from glimix_core.random import GPSampler
>>>
>>> random = RandomState(94584)
>>>
>>> mean = offset_mean()
>>> cov = linear_eye_cov()
>>>
>>> y = GPSampler(mean, cov).sample(random)
>>>
>>> gp = GP(y, mean, cov)
>>> print('Before: %.4f' % gp.lml())
Before: -15.5582
>>> gp.fit(verbose=False)
>>> print('After: %.4f' % gp.lml())
After: -13.4791
>>> print(gp)  
GP(...)
  lml: -13.47907874997517
  OffsetMean(): OffsetMean
    offset: 0.7755803668772308
  SumCov(covariances=...): SumCov
    LinearCov(): LinearCov
      scale: 2.061153622438558e-09
    EyeCov(dim=10): EyeCov
      scale: 0.8675680523425126
fit(verbose=True, factr=100000.0, pgtol=1e-07)[source]

Maximise the marginal likelihood.

Parameters
  • verbose (bool) – True for progress output; False otherwise. Defaults to True.

  • factr (float, optional) – The iteration stops when (f^k - f^{k+1})/max{|f^k|,|f^{k+1}|,1} <= factr * eps, where eps is the machine precision.

  • pgtol (float, optional) – The iteration will stop when max{|proj g_i| : i = 1, ..., n} <= pgtol, where proj g_i is the i-th component of the projected gradient.

Notes

Refer to scipy.optimize.fmin_l_bfgs_b() for further information about factr and pgtol.
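As a toy illustration of how those two tolerances trade accuracy for iterations, the snippet below minimises an arbitrary quadratic (an assumption made purely for this sketch) directly with SciPy's L-BFGS-B routine, once with loose tolerances and once with tight ones:

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# Toy objective f(x) = ||x - 1||^2, with gradient 2 * (x - 1).
def f(x):
    return np.sum((x - 1.0) ** 2)

def grad(x):
    return 2.0 * (x - 1.0)

x0 = np.zeros(3)

# Loose tolerances: the solver is allowed to stop early.
x_loose, f_loose, info_loose = fmin_l_bfgs_b(f, x0, fprime=grad, factr=1e12, pgtol=1e-2)

# Tight tolerances: the solver keeps iterating until the projected
# gradient and the relative decrease in f are very small.
x_tight, f_tight, info_tight = fmin_l_bfgs_b(f, x0, fprime=grad, factr=1e1, pgtol=1e-10)
```

On this easy problem both runs land near the minimiser \(x = (1, 1, 1)\); on harder likelihood surfaces the tight settings typically cost extra iterations in exchange for a more precise optimum.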

lml()[source]

Log of the marginal likelihood.

Returns

\(\log p(\mathbf y)\)

Return type

float
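The returned value is the Gaussian log-density of the observed outcome under the current mean vector and covariance matrix. A minimal sketch of that computation, cross-checked against SciPy (the particular mean and covariance below are illustrative assumptions):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

n = 4
mean = np.full(n, 0.5)           # mean vector m
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)      # an arbitrary positive-definite covariance
y = rng.multivariate_normal(mean, K)

# log p(y) = -0.5 * (n log 2*pi + log|K| + (y - m)^T K^{-1} (y - m))
r = y - mean
sign, logdet = np.linalg.slogdet(K)
lml = -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(K, r))

# The closed form agrees with SciPy's multivariate normal log-density.
assert np.isclose(lml, multivariate_normal(mean, K).logpdf(y))
```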