Lacey Etzkorn
December 3, 2019
What is Gaussian quadrature?
How can I perform GQ integrations in R?
Adaptive Gaussian Quadrature
Solves definite integrals of smooth, univariate functions using a deterministic method.
\[ \text{Gauss Quad.} \subsetneq \text{Quadrature Rules} \approx \text{Numerical Integration} \]
A more familiar, non-deterministic example:
\[ \text{Monte Carlo Integration} \subsetneq \text{Numerical Integration} \]
\[ \int_{-1}^1 \frac{1}{\sqrt{2\pi\sigma^2}} \exp\bigg(-\frac{x^2}{2\sigma^2}\bigg) dx \]
\[ \text{Lower Sum} = \sum_{j = 1}^J f(t_j)\,(t_j - t_{j-1})\\ \text{Trapezoids} = \sum_{j = 1}^J \frac{f(t_j) + f(t_{j-1})}{2}\,(t_j - t_{j-1})\\ \text{Monte Carlo} = \sum_{j = 1}^J \frac{1\{y_j\in[-1,1]\}}{J}\\ \text{General (GQ)} = \sum_{j = 1}^J w_j \cdot f(t_j) \]
where \( \{t_j\}_{j=1}^J \) is a monotone increasing sequence in \( [-1,1] \).
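Each of these rules can be checked numerically. A quick base-R sketch for the normal integral above with \( \sigma = 0.5 \) (the grid size and Monte Carlo sample size are arbitrary choices for illustration):

```r
# Estimate the N(0, 0.5^2) integral over [-1, 1] with each rule (base R only).
f <- function(x) dnorm(x, sd = 0.5)
exact <- pnorm(1, sd = 0.5) - pnorm(-1, sd = 0.5)     # ~0.9545

t <- seq(-1, 1, length.out = 101)                     # 100 subintervals
h <- diff(t)
riemann   <- sum(f(t[-1]) * h)                        # right-endpoint sum
trapezoid <- sum((f(t[-1]) + f(t[-length(t)])) / 2 * h)

set.seed(1)
J <- 1e5
y <- rnorm(J, sd = 0.5)                               # Monte Carlo: draw from the
monte_carlo <- mean(abs(y) <= 1)                      # density, count hits in [-1, 1]
```

All three converge to `pnorm(1, sd = .5) - pnorm(-1, sd = .5)`, but at very different rates in the number of function evaluations.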
How do we choose \( \{w_j, t_j\}_{j=1}^J \) for a given \( J \)?
For \( J \) nodes and a polynomial function \( g(x) \) of degree \( 2J-1 \),
we can choose \( \{w_j, t_j\}_{j=1}^J \) so that the following approximation is an equality.
\[ \int_{-1}^1 g(t)\,dt = \sum_{j = 1}^J w_j\, g(t_j) \]
To extend to non-polynomials:
find \( J \) large enough so there exists a polynomial \( g(t) \) of degree \( 2J-1 \) that approximates \( f(t) \) well.
\[ J = 2 \text{ nodes } \Rightarrow \text{polynomial of degree } 2J-1 = 3 \]
Approximate \( f(t) \) with \( g(t) = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 \).
Solve for \( t_1, t_2 \) and \( w_1, w_2 \) :
\[ \text{Constant Fn. } \int_{-1}^1 \beta_0 dt = w_1\beta_0 + w_2\beta_0 \ \ \ \ \Rightarrow \ \ \ \ 2 = w_1 + w_2 \]
\[ \text{Linear Fn. } \int_{-1}^1 (\beta_0 + \beta_1 t)\, dt = w_1 (\beta_0 + \beta_1 t_1) + w_2(\beta_0 + \beta_1 t_2) \ \ \ \ \Rightarrow \ \ \ \ w_1 t_1 = - w_2 t_2 \]
\[ \text{Quadratic Fn. } \int_{-1}^1 (\beta_0 + \beta_1 t + \beta_2 t^2)\, dt = w_1 (\beta_0 + \beta_1 t_1 + \beta_2 t_1^2) + w_2(\beta_0 + \beta_1 t_2 + \beta_2 t_2^2 ) \\ \Big[ \beta_0 t + \frac{\beta_1}{2} t^2 + \frac{\beta_2}{3} t^3 \Big]_{-1}^{1} = w_1 (\beta_0 + \beta_1 t_1 + \beta_2 t_1^2) + w_2(\beta_0 + \beta_1 t_2 + \beta_2 t_2^2 ) \ \ \ \ \Rightarrow \ \ \ \ \frac{2}{3} = w_1 t_1^2 + w_2 t_2^2 \\ \text{Cubic Fn. } \ldots \]
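We can verify the resulting 2-node rule, \( t_{1,2} = \mp 1/\sqrt{3} \) and \( w_1 = w_2 = 1 \), on an arbitrary cubic (the coefficients below are just an example):

```r
# 2-node Gauss-Legendre rule solved above: nodes +/- 1/sqrt(3), weights 1, 1.
t <- c(-1, 1) / sqrt(3)
w <- c(1, 1)

g <- function(t, b) b[1] + b[2]*t + b[3]*t^2 + b[4]*t^3
b <- c(4, -2, 3, 7)                      # arbitrary cubic coefficients
gq    <- sum(w * g(t, b))
exact <- 2*b[1] + (2/3)*b[3]             # int_{-1}^{1} g(t) dt; odd terms vanish
gq - exact                               # 0 (exact up to floating point)
```

Two evaluations of \( g \) integrate every cubic exactly, as promised by the \( 2J-1 \) rule.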
gauss.quad(2, kind="legendre")
$nodes
[1] -0.5773503 0.5773503
$weights
[1] 1 1
method   nodes5               nodes20
trap     0.9368746959529075   0.9537026804700616
GQ       0.9547606259052610   0.9544997361036420
pnorm    0.9544997361036416   0.9544997361036416
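The 5-node column can be reproduced directly. The Legendre nodes and weights below are hardcoded from statmod's `gauss.quad(5, kind="legendre")` so the sketch is self-contained, and the trapezoid rule is assumed to use 5 equally spaced points on \([-1, 1]\):

```r
f <- function(x) dnorm(x, sd = 0.5)

# 5-point Gauss-Legendre nodes/weights (values from gauss.quad(5, "legendre"))
t5 <- c(-0.9061798459, -0.5384693101, 0, 0.5384693101, 0.9061798459)
w5 <- c(0.2369268851, 0.4786286705, 0.5688888889, 0.4786286705, 0.2369268851)
gq5 <- sum(w5 * f(t5))                                      # ~0.95476

# Trapezoid rule on 5 equally spaced points
grid  <- seq(-1, 1, length.out = 5)
trap5 <- sum((f(grid[-1]) + f(grid[-5])) / 2 * diff(grid))  # ~0.93687

pnorm(1, sd = 0.5) - pnorm(-1, sd = 0.5)                    # exact: ~0.95450
```

With only 5 evaluations, GQ is already within \( 3 \times 10^{-4} \) of the truth, while the trapezoid rule is off in the second decimal.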
We did not use ANY information about the function to define the weights \( w_j \) or nodes \( t_j \).
Extensions exist for improper integrals, integrands with unbounded derivatives, and integrals over a small number of dimensions.
library(statmod)
gauss.quad(5, kind="legendre")
$nodes
[1] -9.061798459386639e-01 -5.384693101056830e-01 1.097392509692326e-16
[4] 5.384693101056831e-01 9.061798459386637e-01
$weights
[1] 0.2369268850561891 0.4786286704993662 0.5688888888888879
[4] 0.4786286704993672 0.2369268850561891
gq20 <- gauss.quad(20, kind="legendre")
sum(gq20$weights * dnorm(gq20$nodes, sd = .5))
[1] 0.954499736103642
Gauss-Laguerre Quadrature:
\[ \int_0^\infty g(x) dx = \int_0^\infty x^\alpha e^{-x} f(x) dx \approx \sum_{j =1}^J w_j f(x_j) \]
Gamma Expected Value:
\[ \mathbb{E}[X] = \int_0^\infty \frac{b^{-a}}{\Gamma(a)} \cdot x^{a} \cdot \exp\Big(x - x -\frac xb\Big)\, dx \approx \sum_{j=1}^J w_j \frac{\exp\Big( x_j-x_j b^{-1}\Big)}{b^a\cdot\Gamma(a)} \]
shape = 2; scale = 2 # implies mean = 4
glq10 <- gauss.quad(10, kind="laguerre", alpha = shape)
sum(glq10$weights /(scale^shape*gamma(shape)) * exp(glq10$nodes*(1-1/scale)))
[1] 3.999999775854667
Gauss-Hermite Quadrature:
\[ \int_{-\infty}^\infty e^{-x^2} f(x) dx \approx \sum_{j =1}^J w_j f(x_j) \]
Normal Integral \( (\mu = 0, \sigma = 1) \):
\[ \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi}} \exp\bigg(x^2 - x^2 -\frac{x^2}{2}\bigg) dx \approx \sum_{j=1}^J w_j \frac{1}{\sqrt{2\pi}} \exp\bigg(x_j^2 -\frac{x_j^2}{2}\bigg) \]
ghq10 <- gauss.quad(10, kind="hermite")
sum(ghq10$weights /(2*pi)^.5 * exp(ghq10$nodes^2 - ghq10$nodes^2/2))
[1] 0.9999876390643322
Generalized Linear Mixed Models (GLMM)
Pairs with EM or Marginal MLE approaches (a bit cleaner with EM)
Adaptive Gaussian Quadrature (AGQ)
https://www.r-bloggers.com/mixed-models-with-adaptive-gaussian-quadrature/
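A minimal sketch of the adaptive idea on a toy integrand (not the GLMM likelihood itself): plain Gauss-Hermite places its nodes near 0 and can miss an integrand peaked elsewhere, while AGQ recenters and rescales the nodes at the mode via the substitution \( x = \mu + \sqrt{2}\sigma t \). Hermite nodes are computed here in base R with the Golub-Welsch eigenvalue method, which gives the same rule as `gauss.quad(n, kind="hermite")`:

```r
# Gauss-Hermite nodes/weights via Golub-Welsch (base R, weight exp(-x^2)).
gauss_hermite <- function(n) {
  k <- 1:(n - 1)
  Jmat <- diag(0, n)
  Jmat[cbind(k, k + 1)] <- Jmat[cbind(k + 1, k)] <- sqrt(k / 2)
  e <- eigen(Jmat, symmetric = TRUE)
  list(nodes = e$values, weights = sqrt(pi) * e$vectors[1, ]^2)
}

gh <- gauss_hermite(10)
f  <- function(x) dnorm(x, mean = 5, sd = 0.1)   # sharp peak far from 0

# Plain GH: int f = int exp(-t^2) [exp(t^2) f(t)] dt -- nodes miss the peak.
plain <- sum(gh$weights * exp(gh$nodes^2) * f(gh$nodes))

# Adaptive GH: substitute x = mu + sqrt(2) * sigma * t, centered at the mode.
mu <- 5; sigma <- 0.1
x  <- mu + sqrt(2) * sigma * gh$nodes
adaptive <- sum(gh$weights * sqrt(2) * sigma * exp(gh$nodes^2) * f(x))
# plain is ~0 (all nodes fall where f vanishes); adaptive recovers the integral, 1
```

In a GLMM, \( \mu \) and \( \sigma \) come from the mode and curvature of the conditional density of each random effect, so the nodes are recentered per subject per iteration.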