This document gives an overview of the parametric count data distributions implemented within countreg.
A distribution is commonly determined by its density function \(f(y\,|\,\theta)\), where \(y\) is a realization of a random variable \(\mathrm{Y}\) and \(\theta\) is a vector of parameters allowing the location, scale, and shape of the distribution to vary.
The main motivation for the use of parametric distributions within countreg is to solve regression problems. For maximum likelihood estimation the objective function is the log-likelihood,
\[\begin{equation} \ell(\theta\,|\,y) = \sum_{i=1}^{n} \log\, f(y_i\,|\,\theta). \end{equation}\]To solve this optimization problem numerically, algorithms of the Newton-Raphson type are employed, which require the first and second derivatives of the objective function, i.e., the score function \(s\) and the hessian \(h\), respectively,
\[\begin{equation} s_\theta = \frac{\partial \ell}{\partial \theta} \quad \text{and} \quad h_{\theta\theta} = \frac{\partial^2 \ell}{\partial \theta^2}. \end{equation}\]Note that in many cases it is less burdensome to determine the second derivative numerically rather than to derive and implement an analytical solution.
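As a minimal illustration (hand-coded for a simulated Poisson sample, using only base R, not countreg), the analytical score can be supplied as the gradient to a quasi-Newton optimizer, while the hessian is determined numerically:
## negative log-likelihood and negative score for a Poisson sample
set.seed(1)
y <- rpois(100, lambda = 3)
nll <- function(lambda) -sum(dpois(y, lambda, log = TRUE))
ngr <- function(lambda) -sum(y/lambda - 1)
## quasi-Newton optimization with analytical gradient and numerical hessian
fit <- optim(par = 1, fn = nll, gr = ngr, method = "BFGS", hessian = TRUE)
fit$par       # ML estimate of lambda (close to mean(y))
fit$hessian   # numerically determined second derivative of the negative log-likelihood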
For prediction purposes it is convenient to have functions on hand that allow the computation of the expected value and the variance given a set of parameters.
These two points, numerical optimization and prediction, motivate extending the infrastructure for the distributions implemented in countreg.
The standard infrastructure within stats provides four functions for each distribution. The prefixes 'd', 'p', 'q', and 'r' indicate the density, the cumulative distribution function (CDF), the quantile function, and a simulator for random deviates, respectively. The implementation in countreg aims at extending this infrastructure by the score function sxxx, the hessian hxxx, the mean mean_xxx, and the variance var_xxx.
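For the Poisson distribution, for example, the four standard functions from stats look as follows (base R only; the extended score, hessian, mean, and variance functions follow the naming scheme just described):
dpois(0:3, lambda = 2)                   # density
ppois(0:3, lambda = 2)                   # cumulative distribution function
qpois(c(0.25, 0.5, 0.75), lambda = 2)    # quantile function
rpois(5, lambda = 2)                     # random deviates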
The interface of the score function looks as follows:
sxxx(x, theta1, theta2, parameter = c("theta1" ,"theta2"), drop = TRUE)
The first argument x is a vector of quantiles, theta1 and theta2 are vectors of the parameters specifying the distribution (names and number of parameters are chosen as an example), the argument parameter takes a character string (or a vector of character strings) indicating wrt which parameter(s) the score should be computed, and the logical drop indicates whether the result should be a matrix or whether the dimension should be dropped. The interface of the hessian hxxx is analogous.
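A minimal hand-coded sketch of such a score function, here for the Poisson distribution with its single parameter lambda (hypothetical name sxpois_demo, using the Poisson score \(y/\lambda - 1\) given further below), illustrates the interface:
## hypothetical sketch, not part of countreg
sxpois_demo <- function(x, lambda, parameter = "lambda", drop = TRUE) {
  s <- cbind(lambda = x/lambda - 1)    # per-observation score wrt lambda
  s <- s[, parameter, drop = FALSE]    # keep only the requested parameter(s)
  if (drop) drop(s) else s
}
sxpois_demo(c(0, 2, 5), lambda = 2)                 # plain vector
sxpois_demo(c(0, 2, 5), lambda = 2, drop = FALSE)   # one-column matrix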
The interface of mean_xxx and var_xxx is
mean_xxx(theta1, theta2, drop = TRUE)
var_xxx(theta1, theta2, drop = TRUE)
Poisson ("xpois")
The Poisson distribution with parameter \(\lambda\) has the density
\[\begin{equation}
f_{Pois}(y\,|\,\lambda) = \frac{\lambda^{y} \, e^{-\lambda}}{y!}, \quad \text{for} \quad y = 0, 1, 2, \ldots,
\end{equation}\]
with expected value \(\mathsf{E}(\mathrm{Y}) = \lambda\) and variance \(\mathsf{VAR}(\mathrm{Y}) = \lambda\).
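The interface of mean_xxx and var_xxx described above can be sketched for this case (hypothetical names, hand-coded from \(\mathsf{E}(\mathrm{Y}) = \mathsf{VAR}(\mathrm{Y}) = \lambda\)):
## hypothetical sketches, not part of countreg
mean_xpois_demo <- function(lambda, drop = TRUE) if (drop) lambda else cbind(mean = lambda)
var_xpois_demo  <- function(lambda, drop = TRUE) if (drop) lambda else cbind(var = lambda)
mean_xpois_demo(c(1.5, 3))                # vector of expected values
var_xpois_demo(c(1.5, 3), drop = FALSE)   # one-column matrix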
The score function is \[\begin{equation} s(\lambda\,|\,y) = \frac{y}{\lambda} - 1. \end{equation}\] The hessian is \[\begin{equation} h(\lambda\,|\,y) = - \frac{y}{\lambda^2}. \end{equation}\]
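These expressions are easy to verify against the log-density from stats via central finite differences (base R only):
## numerical check of the Poisson score and hessian at y = 3, lambda = 2
y <- 3; lambda <- 2; eps <- 1e-4
(dpois(y, lambda + eps, log = TRUE) - dpois(y, lambda - eps, log = TRUE)) / (2 * eps)
y/lambda - 1       # analytical score, same value up to numerical error
(dpois(y, lambda + eps, log = TRUE) - 2 * dpois(y, lambda, log = TRUE) +
   dpois(y, lambda - eps, log = TRUE)) / eps^2
-y/lambda^2        # analytical hessian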
Binomial ("xbinom")
The binomial distribution with parameters size \(= n\) and prob \(= \pi\) has the density
\[\begin{equation}
f_{Binom}(y\,|\,\pi,n) = {n \choose y} {\pi}^{y} {(1-\pi)}^{n-y}, \quad \text{for} \quad y = 0, \ldots, n,
\end{equation}\]
with expected value \(\mathsf{E}(\mathrm{Y}) = n \cdot \pi\) and variance \(\mathsf{VAR}(\mathrm{Y}) = n \cdot \pi \cdot (1 - \pi)\).
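A quick simulation with the standard stats functions illustrates these moments:
## compare empirical and theoretical moments for size = 10, prob = 0.3
set.seed(42)
x <- rbinom(1e5, size = 10, prob = 0.3)
c(mean = mean(x), var = var(x))             # empirical
c(mean = 10 * 0.3, var = 10 * 0.3 * 0.7)    # n * pi and n * pi * (1 - pi)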
The score function is \[\begin{equation} s(\pi\,|\,y,n) = \frac{y}{\pi} - \frac{n-y}{1-\pi}. \end{equation}\] The hessian is \[\begin{equation} h(\pi\,|\,y,n) = - \frac{y}{\pi^2} - \frac{n-y}{(1-\pi)^2}. \end{equation}\]
Negative binomial ("xnbinom")
The negative binomial distribution with mean \(\mu\) and shape parameter \(\theta\) has the density
\[\begin{equation}
f_{NB}(y\,|\,\mu,\theta) = \frac{\Gamma(y + \theta)}{\Gamma(\theta) \cdot y!} \cdot \frac{\theta^{\theta} \cdot \mu^{y}}{(\mu + \theta)^{\theta + y}}, \quad \text{for} \quad y = 0, 1, 2, \ldots,
\end{equation}\]
with expected value \(\mathsf{E}(\mathrm{Y}) = \mu\) and variance \(\mathsf{VAR}(\mathrm{Y}) = \mu + \mu^2 / \theta\).
The score functions are: \[\begin{equation} s_{\mu}(\mu,\theta\,|\,y) = \frac{y}{\mu} - \frac{y + \theta}{\mu + \theta} \end{equation}\begin{equation} s_{\theta}(\mu,\theta\,|\,y) = \psi_0(y + \theta) - \psi_0(\theta) + \log(\theta) + 1 - \log(\mu + \theta) - \frac{y + \theta}{\mu + \theta} \end{equation}\]where \(\psi_0\) is the digamma function.
The elements of the hessian are \[\begin{equation} h_{\mu\mu}(\mu,\theta\,|\,y) = - \frac{y}{\mu^2} + \frac{y + \theta}{(\mu + \theta)^2} \end{equation}\begin{equation} h_{\theta\theta}(\mu,\theta\,|\,y) = \psi_1(y + \theta) - \psi_1(\theta) + \frac{1}{\theta} - \frac{2}{\mu + \theta} + \frac{y + \theta}{(\mu + \theta)^2} \end{equation}\] where \(\psi_1\) is the trigamma function, and \[\begin{equation} h_{\mu\theta}(\mu,\theta\,|\,y) = \frac{y - \mu}{(\mu + \theta)^2}. \end{equation}\]
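As in the one-parameter case, these expressions can be verified against the log-density from stats; dnbinom() provides the (mu, size) parametrization with size \(= \theta\) (hand-coded check, base R only):
## numerical check of the negative binomial scores at y = 4, mu = 2, theta = 1.5
y <- 4; mu <- 2; theta <- 1.5; eps <- 1e-4
ll <- function(mu, theta) dnbinom(y, mu = mu, size = theta, log = TRUE)
(ll(mu + eps, theta) - ll(mu - eps, theta)) / (2 * eps)    # numerical s_mu
y/mu - (y + theta)/(mu + theta)                            # analytical s_mu
(ll(mu, theta + eps) - ll(mu, theta - eps)) / (2 * eps)    # numerical s_theta
digamma(y + theta) - digamma(theta) + log(theta) + 1 -
  log(mu + theta) - (y + theta)/(mu + theta)               # analytical s_theta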
Zero-truncated Poisson ("xztpois")
Two parametrizations, either in terms of \(\lambda\) ("lambda") or the mean \(\mu\) ("mean"), are implemented. Thus, the score functions can be calculated either wrt \(\lambda\) ("lambda") or \(\mu\) ("mean"):
\[\begin{equation}
s_{\lambda}(\lambda\,|\,y) = \frac{y}{\lambda} - 1 - \frac{e^{-\lambda}}{1 - e^{-\lambda}}
\end{equation}\begin{equation}
s_{\mu}(\lambda\,|\,y) = s_{\lambda} \cdot \frac{\lambda}{\mu \cdot (\lambda + 1 - \mu)}
\end{equation}\]
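The score wrt \(\lambda\) can be checked numerically by hand-coding the zero-truncated log-density from the Poisson one (no countreg functions involved); the mean parametrization is linked to \(\lambda\) via \(\mu = \lambda / (1 - e^{-\lambda})\):
## hand-coded zero-truncated Poisson log-density and score check at y = 2, lambda = 1.5
ll_ztpois <- function(y, lambda) dpois(y, lambda, log = TRUE) - log(1 - exp(-lambda))
y <- 2; lambda <- 1.5; eps <- 1e-4
(ll_ztpois(y, lambda + eps) - ll_ztpois(y, lambda - eps)) / (2 * eps)   # numerical s_lambda
y/lambda - 1 - exp(-lambda)/(1 - exp(-lambda))                          # analytical s_lambda
lambda / (1 - exp(-lambda))                                             # implied mean mu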
The hessian is
\[\begin{equation}
h_{\lambda\lambda}(\lambda\,|\,y) = - \frac{y}{\lambda^2} + \frac{e^{-\lambda}}{(1 - e^{-\lambda})^2}.
\end{equation}\]
Zero-truncated negative binomial ("xztnbinom")
Hurdle Poisson ("xhpois")
In the following, \(\mathbf{I}_{\{0\}}(y)\) denotes an indicator function which takes the value one if \(y\) equals zero, and zero otherwise.
The elements of the hessian are \[\begin{equation} h_{\pi\pi}(\pi, \lambda \,|\, y) = (\mathbf{I}_{\{0\}}(y) - 1) \cdot \frac{1}{\pi^2} - \mathbf{I}_{\{0\}}(y) \cdot \frac{1}{(1 - \pi)^2}, \end{equation}\begin{equation} h_{\lambda\lambda}(\pi, \lambda \,|\, y) = (1-\mathbf{I}_{\{0\}}(y)) \cdot \left( - \frac{y}{\lambda^2} + \frac{e^{-\lambda}}{(1 - e^{-\lambda})^2} \right) \end{equation}\] and \[\begin{equation} h_{\pi\lambda}(\pi, \lambda \,|\, y) = 0. \end{equation}\]
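The role of the indicator becomes transparent when the hurdle Poisson log-likelihood is written down directly; the following hand-coded sketch assumes, consistent with the sign pattern of \(h_{\pi\pi}\) above, that \(\pi\) denotes the probability of a positive count:
## hand-coded hurdle Poisson log-likelihood (assumption: pi = P(Y > 0))
ll_hpois <- function(y, pi, lambda) {
  zero <- y == 0                                   # indicator I_{0}(y)
  ifelse(zero,
         log(1 - pi),                              # binary part for zeros
         log(pi) + dpois(y, lambda, log = TRUE) -
           log(1 - exp(-lambda)))                  # zero-truncated count part
}
ll_hpois(c(0, 0, 1, 3), pi = 0.6, lambda = 2)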
Hurdle negative binomial ("xhnbinom")
In the following, \(s_{\star,\,NB}(\cdot)\) denote the score functions and \(h_{\star\star,\,NB}(\cdot)\) the hessian elements of the zero-truncated negative binomial.
The elements of the hessian are \[\begin{equation} h_{\pi\pi}(\pi, \mu, \theta\,|\,y) = (\mathbf{I}_{\{0\}}(y) - 1) \cdot \frac{1}{\pi^2} - \mathbf{I}_{\{0\}}(y) \cdot \frac{1}{(1 - \pi)^2}, \end{equation}\] \[\begin{equation} h_{\mu\mu}(\pi, \mu, \theta\,|\,y) = (1-\mathbf{I}_{\{0\}}(y)) \cdot h_{\mu\mu,\,NB}(\mu,\theta\,|\,y) \end{equation}\] \[\begin{equation} h_{\theta\theta}(\pi, \mu, \theta\,|\,y) = (1-\mathbf{I}_{\{0\}}(y)) \cdot h_{\theta\theta,\,NB}(\mu,\theta\,|\,y) \end{equation}\] \[\begin{equation} h_{\mu\theta}(\pi, \mu, \theta\,|\,y) = (1-\mathbf{I}_{\{0\}}(y)) \cdot h_{\mu\theta,\,NB}(\mu,\theta\,|\,y) \end{equation}\] \[\begin{equation} h_{\pi\mu}(\pi, \mu, \theta\,|\,y) = h_{\pi\theta}(\pi, \mu, \theta\,|\,y) = 0. \end{equation}\]
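To make the block structure explicit, the per-observation hessian can be assembled into a symmetric \(3 \times 3\) matrix. The helper below is purely illustrative (hypothetical name, not a countreg function) and takes the zero-truncated negative binomial pieces \(h_{\mu\mu,\,NB}\), \(h_{\theta\theta,\,NB}\), and \(h_{\mu\theta,\,NB}\) as given arguments:
## hypothetical helper assembling the hurdle negative binomial hessian for one observation
hess_hnbinom_demo <- function(y, pi, h_mumu, h_thth, h_muth) {
  I0 <- as.numeric(y == 0)                         # indicator I_{0}(y)
  h_pipi <- (I0 - 1)/pi^2 - I0/(1 - pi)^2          # binary (hurdle) part
  matrix(c(h_pipi, 0, 0,
           0, (1 - I0) * h_mumu, (1 - I0) * h_muth,
           0, (1 - I0) * h_muth, (1 - I0) * h_thth),
         nrow = 3, byrow = TRUE,
         dimnames = list(c("pi", "mu", "theta"), c("pi", "mu", "theta")))
}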