Using Fisher Scoring to Fit Extended Poisson Process Models

Summary

The extended Poisson process model (EPPM) is a generalization of the simple Poisson process that allows the construction of distributions which are over-dispersed or under-dispersed relative to the Poisson distribution; it includes the Poisson and negative binomial distributions as special cases. A broad class of dispersion models can thus be characterized, in an intuitive and appealing way, by a simple parameterisation. In this paper a Fisher scoring algorithm is developed for fitting EPPMs with covariate-dependent means.

References

  • Aitkin, M. (1978), The analysis of unbalanced cross classifications (with discussion). Journal of the Royal Statistical Society, A 141, 195–223.

  • Ball, F. (1995), A note on variation in birth processes. Mathematical Scientist, 20, 50–55.

  • Breslow, N.E. and Clayton, D.G. (1993), Approximate inference in generalised linear mixed models. Journal of the American Statistical Association, 88, 9–25.

  • Broyden, C.G. (1970), The convergence of a class of double-rank minimization algorithms. Journal of the Institute of Mathematics and its Applications, 6, 76–90.

  • Clayton, D.G. (1996), Generalized linear mixed models, in Markov Chain Monte Carlo in Practice, eds. W.R. Gilks, S. Richardson and D.J. Spiegelhalter. Chapman & Hall: Boca Raton.

  • Cox, D.R. and Miller, H.D. (1965), The Theory of Stochastic Processes, Methuen: London.

  • Faddy, M.J. (1997a), Extended Poisson process modelling and analysis of count data. Biometrical Journal, 39, 431–440.

  • Faddy, M.J. (1997b), On extending the negative binomial distribution, and the number of weekly winners of the UK national lottery. Mathematical Scientist, 22, 77–82.

  • Feller, W. (1971), An Introduction to Probability Theory and its Applications, Vol. 2, John Wiley & Sons, Inc: New York.

  • Fletcher, R. (1970), A new approach to variable metric algorithms. Computer Journal, 13, 317–322.

  • GAUSS™ Command Reference (1996), Aptech Systems, Inc., Maple Valley, Washington.

  • Goldfarb, D. (1970), A family of variable metric methods derived by variational means. Mathematics of Computation, 24, 23–26.

  • Lee, Y. and Nelder, J.A. (1996), Hierarchical generalized linear models (with discussion). Journal of the Royal Statistical Society, B 58, 619–678.

  • Lee, Y. and Nelder, J.A. (2001), Hierarchical generalised linear models: A synthesis of generalised linear models, random-effects models and structured dispersions. Biometrika, 88, 987–1006.

  • McCullagh, P. and Nelder, J.A. (1989), Generalized Linear Models, 2nd edn., Chapman & Hall: London.

  • MATLAB® (1996), Using MATLAB. The MathWorks, Inc., Natick, Massachusetts.

  • Nelder, J.A. and Pregibon, D. (1987), An extended quasi-likelihood function. Biometrika, 74, 221–231.

  • Podlich, H.M., Faddy, M.J. and Smyth, G.K. (1999), Likelihood computations for extended Poisson process models. InterStat, http://interstat.stat.vt.edu/interstat/Articles/1999/abstracts/S99001.html-ssi.

  • Podlich, H.M., Faddy, M.J. and Smyth, G.K. (2002), A general approach to modeling and analysis of species abundance data with extra zeros. Journal of Agricultural, Biological and Environmental Statistics, 7, 324–334.

  • Rose, C. and Smith, M.D. (2002), Mathematical Statistics with Mathematica, Springer: New York.

  • Severini, T.A. (2000), Likelihood Methods in Statistics, Oxford University Press: New York.

  • Shanno, D.F. (1970), Conditioning of quasi-Newton methods for function minimization. Mathematics of Computation, 24, 647–656.

  • Strang, G. (1980), Linear Algebra and its Applications, 2nd edn., Academic Press: New York.

  • Toscas, P.J., Faddy, M.J. and Burridge, C.Y. (2003), Analysis of the impact of prawn trawling on benthic species in the Great Barrier Reef Marine Park. Environmetrics, in press.

  • Venables, W.N. and Ripley, B.D. (2002), Modern Applied Statistics with S, 4th edn., Springer: New York.

  • Wedderburn, R.W.M. (1974), Quasi-likelihood functions, generalized linear models, and the Gauss-Newton method. Biometrika, 61, 439–447.

Acknowledgements

The preparation of this paper has benefited from discussions with Yun Li and Peter Jones. The authors also thank the referee for helpful comments that improved the paper.

Appendix

To find the first order derivative of the log-likelihood with respect to \({\beta_j}\), note that

$$\frac{{\partial \log L}}{{\partial {\beta _j}}} = \sum\limits_{i = 1}^m {\frac{{\partial {p_{i{y_i}}}}}{{\partial {\beta _j}}}\frac{1}{{{p_{i{y_i}}}}}} = \sum\limits_{i = 1}^m {\frac{{\partial {a_i}}}{{\partial {\beta _j}}}\frac{{\partial {p_{i{y_i}}}}}{{\partial {a_i}}}\frac{1}{{{p_{i{y_i}}}}}} $$
(A.1)
since \(a_i\) in the rate (2.3) will be a function of \(\beta_j\) through (2.4) and (2.5).

Equation (A.1) shows that before \(\frac{{\partial \log L}}{{\partial {\beta _j}}}\) can be calculated, \(\frac{{\partial {a_i}}}{{\partial {\beta _j}}}\) and \(\frac{{\partial {p_{i{y_i}}}}}{{\partial {a_i}}}\), for \(i = 1, 2, \ldots, m\), need to be evaluated. From (2.5) and (2.4) note that

$$\frac{\partial \mu_i}{\partial \beta_j} = \sum\limits_{n = 0}^N n\frac{\partial p_{in}}{\partial \beta_j} = \frac{\partial a_i}{\partial \beta_j}\sum\limits_{n = 0}^N n\frac{\partial p_{in}}{\partial a_i} = x_{ij}e^{\text{x}'_i\beta}.$$
(A.2)

From (2.2), the probabilities \(p_{i0}, p_{i1}, p_{i2}, \ldots\), with rate (2.3), are calculated from the matrix exponential function

$${\text{p}'_i} = ({p_{i0}}\;\;{p_{i1}}\;\;{p_{i2}} \;\;\cdots \;\;{p_{iN}}) = \text{u}'{e^{{Q_i}}} = \text{u}'{e^{{a_i}{Q_{1i}}}},$$
(A.3)
where
$${Q_i} = {a_i}\left( {\begin{matrix} { - {b^c}}&{{b^c}}&0&0& \cdots &0 \\ 0&{ - {{(b + 1)}^c}}&{{{(b + 1)}^c}}&0& \cdots &0 \\ 0&0&{ - {{(b + 2)}^c}}&{{{(b + 2)}^c}}& \cdots &0 \\ \vdots & \vdots & \vdots & \vdots &{}& \vdots \\ 0&0&0&0& \cdots &{ - {{(b + N)}^c}} \end{matrix}} \right) = {a_i}{Q_{1i}}.$$
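For concreteness, (A.3) can be evaluated numerically as in the following sketch, given here in Python with NumPy/SciPy (the paper itself cites MATLAB and GAUSS for its computations). The function name eppm_probs and the choice of truncation point N are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.linalg import expm

def eppm_probs(a, b, c, N):
    """Sketch of (A.3): p' = u' exp(Q_i) for birth rates a*(b+n)^c."""
    rates = a * (b + np.arange(N + 1)) ** c        # lambda_n = a(b+n)^c, as in (2.3)
    # Truncated generator: -lambda_n on the diagonal, lambda_n on the
    # superdiagonal; N must be large enough that the probabilities sum to ~1.
    Q = np.diag(-rates) + np.diag(rates[:-1], k=1)
    u = np.zeros(N + 1)
    u[0] = 1.0                                     # process starts in state 0
    return u @ expm(Q)                             # p' = u' e^{Q_i}
```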

Using the matrix exponential definition

$${e^{{a_i}{Q_{1i}}}} = \text{I} + {a_i}{Q_{1i}} + \frac{{{{\left( {{a_i}{Q_{1i}}} \right)}^2}}}{{2!}} + \frac{{{{\left( {{a_i}{Q_{1i}}} \right)}^3}}}{{3!}} + \cdots ,$$
(A.4)
$$\frac{\partial e^{a_iQ_{1i}}}{\partial a_i} = Q_{1i} + \frac{2a_iQ_{1i}^2}{2!} + \frac{3a_i^2Q_{1i}^3}{3!} + \cdots = Q_{1i}\left( \text{I} + a_iQ_{1i} + \frac{\left(a_iQ_{1i}\right)^2}{2!} + \cdots \right) = Q_{1i}e^{a_iQ_{1i}}$$
(A.5)

(Strang 1980, Chapter 5). Using the result in (A.5), the derivative of (A.3) with respect to ai is

$$\frac{{\partial {{\text{p}'_i}}}}{{\partial {a_i}}} = \left( {\frac{{\partial {p_{i0}}}}{{\partial {a_i}}}\;\;\frac{{\partial {p_{i1}}}}{{\partial {a_i}}}\;\;\frac{{\partial {p_{i2}}}}{{\partial {a_i}}} \;\;\cdots \;\;\frac{{\partial {p_{iN}}}}{{\partial {a_i}}}} \right) = \text{u}'\frac{{\partial \left( {{e^{{a_i}{Q_{1i}}}}} \right)}}{{\partial {a_i}}} = \text{u}'{Q_{1i}}{e^{{a_i}{Q_{1i}}}}.$$
(A.6)

The analytical solution for \(\frac{{\partial {a_i}}}{{\partial {\beta _j}}}\) can be found by substituting the last term in (A.6) into (A.2) to give (3.10). Substituting (3.10) into (A.1) results in (3.4).
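Continuing the sketch above (same imports), (A.6) translates directly into code. The helper below, like its name eppm_dprobs_da, is an illustrative assumption rather than the authors' implementation.

```python
def eppm_dprobs_da(a, b, c, N):
    """Sketch of (A.6): dp'/da_i = u' Q_{1i} exp(a_i Q_{1i})."""
    rates1 = (b + np.arange(N + 1)) ** c           # rates of Q_{1i}, a_i factored out
    Q1 = np.diag(-rates1) + np.diag(rates1[:-1], k=1)
    u = np.zeros(N + 1)
    u[0] = 1.0
    return u @ Q1 @ expm(a * Q1)
```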

The first order derivative of the log-likelihood with respect to b is given in (3.5).

To calculate \(\frac{{\partial \log L}}{{\partial b}}\), \(\frac{{\partial {p_{i{y_i}}}}}{{\partial b}}\) needs to be evaluated; noting that

$$\frac{{\partial {{\text{p}'_i}}}}{{\partial b}} = \left( {\frac{{\partial {p_{i0}}}}{{\partial b}}\;\;\frac{{\partial {p_{i1}}}}{{\partial b}}\;\;\frac{{\partial {p_{i2}}}}{{\partial b}} \;\;\cdots \;\;\frac{{\partial {p_{iN}}}}{{\partial b}}} \right) = \text{u}'\frac{{\partial \left( {{e^{{a_i}{Q_{1i}}}}} \right)}}{{\partial b}},$$
the quantity \(\frac{{\partial \left( {{e^{{a_i}{Q_{1i}}}}} \right)}}{{\partial b}}\) now needs to be calculated. This can be done using the definition of the matrix exponential in (A.4):
$$\begin{array}{*{20}{l}} {\frac{{\partial \left( {{e^{{a_i}{Q_{1i}}}}} \right)}}{{\partial b}}}&{ = \left( {\frac{{\partial {a_i}}}{{\partial b}}{Q_{1i}} + \frac{{\partial {a_i}}}{{\partial b}}{a_i}Q_{1i}^2 + \frac{{\partial {a_i}}}{{\partial b}}\frac{{a_i^2Q_{1i}^3}}{{2!}} + \cdots } \right)} \\ {}&{ + \left( {{a_i}\frac{{\partial {Q_{1i}}}}{{\partial b}} + \frac{{a_i^2}}{{2!}}\frac{{\partial \left( {Q_{1i}^2} \right)}}{{\partial b}} + \frac{{a_i^3}}{{3!}}\frac{{\partial \left( {Q_{1i}^3} \right)}}{{\partial b}} + \cdots } \right)} \\ {}&{ = \frac{{\partial {a_i}}}{{\partial b}}{Q_{1i}}{e^{{a_i}{Q_{1i}}}} + \frac{{\partial \left( {{e^{{a_{i( - b)}}{Q_{1i}}}}} \right)}}{{\partial b}},} \end{array}$$
(A.7)
where ai(−b) here means that ai is treated as a constant when differentiating with respect to b in the second term. From (A.7) a solution for \(\frac{{\partial {a_i}}}{{\partial b}}\) can be found, since the mean \(\mu_i = e^{\text{x}'_i\beta}\) does not depend on b:
$$\frac{{\partial {\mu _i}}}{{\partial b}} = \sum\limits_{n = 0}^N {n\frac{{\partial {p_{in}}}}{{\partial b}}} = \text{u}'\frac{{\partial \left( {{e^{{a_i}{Q_{1i}}}}} \right)}}{{\partial b}}\text{v} = 0;$$
i.e.,
$$\frac{\partial a_i}{\partial b} = -\frac{\text{u}'\frac{\partial\left(e^{a_{i(-b)}Q_{1i}}\right)}{\partial b}\text{v}}{\text{u}'Q_{1i}e^{a_iQ_{1i}}\text{v}}.$$

Hence

$$\frac{{\partial {{p'_i}}}}{{\partial b}} = \text{u}'\left( {\frac{{\partial \left( {{e^{{a_{i( - b)}}{Q_{1i}}}}} \right)}}{{\partial b}} - \frac{{\text{u}'\frac{{\partial \left( {{e^{{a_{i( - b)}}{Q_{1i}}}}} \right)}}{{\partial b}}\text{v}}}{{\text{u}'{Q_{1i}}{e^{{a_i}{Q_{1i}}}}\text{v}}}{Q_{1i}}{e^{{a_i}{Q_{1i}}}}} \right).$$
(A.8)

The appropriate terms for the observed data from (A.8) can then be substituted into (3.5) to determine the first order derivative of the log-likelihood with respect to b.

The calculation of \(\text{u}'\frac{{\partial \left( {{e^{{a_{i( - b)}}{Q_{1i}}}}} \right)}}{{\partial b}}\) now needs to be addressed. Equation (2.2) can be written as the solution of the Chapman-Kolmogorov matrix equations (Faddy 1997a)

$$\frac{\partial \text{p}'_i(t)}{\partial t} = \left( \frac{\partial p_{i0}(t)}{\partial t}\;\;\frac{\partial p_{i1}(t)}{\partial t}\;\;\frac{\partial p_{i2}(t)}{\partial t}\;\;\cdots\;\;\frac{\partial p_{iN}(t)}{\partial t} \right) = \text{p}'_i(t)\,{a_{i(-b)}}{Q_{1i}}$$
(A.9)
at t = 1. Podlich, Faddy and Smyth (1999) noted that a computationally convenient approach to determining the derivative \(\frac{{\partial \text{p}'(t)}}{{\partial b}}\) is to differentiate (A.9) with respect to b and then solve the resulting differential equations. Doing this, and treating ai(−b) as a constant when differentiating
$$\frac{\partial }{{\partial b}}\left( {\frac{{\partial {{p'_i}}}}{{\partial t}}} \right) = \frac{{\partial {{p'_i}}}}{{\partial b}}{a_{i( - b)}}{Q_{1i}} + {p'_i}{a_{i( - b)}}\frac{{\partial {Q_{1i}}}}{{\partial b}} = \text{u}'\frac{{\partial \left( {{e^{{a_{i( - b)}}{Q_{1i}}}}} \right)}}{{\partial b}}{a_{i( - b)}}{Q_{1i}} + {p'_i}{a_{i( - b)}}\frac{{\partial {Q_{1i}}}}{{\partial b}};$$
reversing the order of differentiation and using (A.9) gives
$$\begin{aligned} \frac{\partial}{\partial t}\left( \text{p}'\;\;\frac{\partial \text{p}'}{\partial b} \right) &= \left( \text{p}'\;\;\frac{\partial \text{p}'}{\partial b} \right){a_i}\left( \begin{matrix} Q_{1i} & \frac{\partial Q_{1i}}{\partial b} \\ 0 & Q_{1i} \end{matrix} \right) \\ &= \left( \text{p}'\;\;\text{u}'\frac{\partial\left(e^{a_{i(-b)}Q_{1i}}\right)}{\partial b} \right){a_i}\left( \begin{matrix} Q_{1i} & \frac{\partial Q_{1i}}{\partial b} \\ 0 & Q_{1i} \end{matrix} \right) \\ &= \left( \text{p}'\;\;\text{u}'\frac{\partial\left(e^{a_{i(-b)}Q_{1i}}\right)}{\partial b} \right){a_i}Q_{1i(b)}^*, \end{aligned}$$
(A.10)
say. The solution to (A.10) at t = 1 is given by (3.6).
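The block-triangular structure of \(Q_{1i(b)}^*\) is computationally convenient: a single exponential of the augmented generator yields \(\text{p}'\) and \(\text{u}'\frac{\partial\left(e^{a_{i(-b)}Q_{1i}}\right)}{\partial b}\) together. A minimal sketch of this device, continuing the Python setup above and assuming b > 0 so the rate derivatives are well defined; the function name probs_and_db is hypothetical.

```python
def probs_and_db(a, b, c, N):
    """Sketch of (A.10)/(3.6): the top row of exp(a*Q*) stacks p' and
    the b-derivative of the matrix exponential with a_i held fixed."""
    n = np.arange(N + 1)
    lam = (b + n) ** c
    dlam = c * (b + n) ** (c - 1.0)                # d/db of (b+n)^c
    Q1 = np.diag(-lam) + np.diag(lam[:-1], k=1)
    dQ1 = np.diag(-dlam) + np.diag(dlam[:-1], k=1)
    Qstar = np.block([[Q1, dQ1],
                      [np.zeros_like(Q1), Q1]])    # augmented generator of (A.10)
    u_aug = np.zeros(2 * (N + 1))
    u_aug[0] = 1.0                                 # augmented start vector (u', 0')
    top = u_aug @ expm(a * Qstar)
    return top[:N + 1], top[N + 1:]                # p' and u' d(e^{a Q_{1i}})/db
```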

Using a similar approach for the parameter c, the first order derivative of the log-likelihood with respect to c is given by (3.7), with

$$\frac{\partial\left(e^{a_iQ_{1i}}\right)}{\partial c} = \frac{\partial a_i}{\partial c}Q_{1i}e^{a_iQ_{1i}} + \frac{\partial\left(e^{a_{i(-c)}Q_{1i}}\right)}{\partial c}, \qquad \frac{\partial a_i}{\partial c} = -\frac{\text{u}'\frac{\partial\left(e^{a_{i(-c)}Q_{1i}}\right)}{\partial c}\text{v}}{\text{u}'Q_{1i}e^{a_iQ_{1i}}\text{v}},$$
$$\frac{\partial \text{p}'_i}{\partial c} = \text{u}'\left( \frac{\partial\left(e^{a_{i(-c)}Q_{1i}}\right)}{\partial c} - \frac{\text{u}'\frac{\partial\left(e^{a_{i(-c)}Q_{1i}}\right)}{\partial c}\text{v}}{\text{u}'Q_{1i}e^{a_iQ_{1i}}\text{v}}Q_{1i}e^{a_iQ_{1i}} \right),$$
and ai(−c) here means that ai is treated as a constant when differentiating with respect to c. The quantity \(\text{u}'\frac{{\partial \left( {{e^{{a_{i( - c)}}{Q_{1i}}}}} \right)}}{{\partial c}}\) can be calculated using (3.8).
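The c-derivatives need only the elementwise derivative of the rates, \(\partial (b+n)^c/\partial c = (b+n)^c \log(b+n)\); substituting the matrix below for \(\partial Q_{1i}/\partial b\) in the augmented-generator sketch above gives \(\text{u}'\frac{\partial\left(e^{a_{i(-c)}Q_{1i}}\right)}{\partial c}\). Again an illustrative sketch, assuming b > 0.

```python
def dQ1_dc(b, c, N):
    """Sketch: elementwise d(Q_{1i})/dc, for use in the augmented generator."""
    n = np.arange(N + 1)
    dlam = (b + n) ** c * np.log(b + n)            # d/dc of (b+n)^c
    return np.diag(-dlam) + np.diag(dlam[:-1], k=1)
```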

To find \(\text{E}\left( {\frac{{\partial \log L}}{{\partial {\beta _j}}}\frac{{\partial \log L}}{{\partial {\beta _k}}}} \right)\), note from (A.1) that

$$\begin{aligned} \text{E}\left( \frac{\partial \log L}{\partial \beta_j}\frac{\partial \log L}{\partial \beta_k} \right) &= \text{E}\left\{ \left( \sum\limits_{i = 1}^m \frac{\partial a_i}{\partial \beta_j}\frac{\partial p_{iy_i}}{\partial a_i}\frac{1}{p_{iy_i}} \right)\left( \sum\limits_{l = 1}^m \frac{\partial a_l}{\partial \beta_k}\frac{\partial p_{ly_l}}{\partial a_l}\frac{1}{p_{ly_l}} \right) \right\} \\ &= \sum\limits_{i = 1}^m \frac{\partial a_i}{\partial \beta_j}\frac{\partial a_i}{\partial \beta_k}\,\text{E}\left\{ \frac{1}{p_{iy_i}^2}\left( \frac{\partial p_{iy_i}}{\partial a_i} \right)^2 \right\} \\ &\quad + \sum\limits_{i = 1}^m \sum\limits_{l = 1,\,l \ne i}^m \frac{\partial a_i}{\partial \beta_j}\frac{\partial a_l}{\partial \beta_k}\,\text{E}\left\{ \left( \frac{1}{p_{iy_i}}\frac{\partial p_{iy_i}}{\partial a_i} \right)\left( \frac{1}{p_{ly_l}}\frac{\partial p_{ly_l}}{\partial a_l} \right) \right\}. \end{aligned}$$
(A.11)

For independent observations \(y_i\) and \(y_l\), \(i \ne l\),

$$\text{E}\left\{ \left( \frac{1}{p_{iy_i}}\frac{\partial p_{iy_i}}{\partial a_i} \right)\left( \frac{1}{p_{ly_l}}\frac{\partial p_{ly_l}}{\partial a_l} \right) \right\} = \text{E}\left( \frac{1}{p_{iy_i}}\frac{\partial p_{iy_i}}{\partial a_i} \right)\text{E}\left( \frac{1}{p_{ly_l}}\frac{\partial p_{ly_l}}{\partial a_l} \right).$$
(A.12)

Each factor on the right of (A.12) is in fact zero: putting \(\text{1}' = (1\;\;1\;\;1\;\;\cdots\;\;1)\) and letting \(p_{ijn}\) represent the (j, n) element of the matrix exponential \(e^{{a_i}{Q_{1i}}}\),

$$\text{E}\left( \frac{1}{p_{iy_i}}\frac{\partial p_{iy_i}}{\partial a_i} \right) = \sum\limits_{n = 0}^N \frac{\partial p_{in}}{\partial a_i} = \text{u}'Q_{1i}e^{a_iQ_{1i}}\text{1} = -b^c\sum\limits_{n = 1}^{N + 1} p_{i1n} + b^c\sum\limits_{n = 1}^{N + 1} p_{i2n} = 0,$$
(A.13)
since \(\sum\limits_{n = 1}^{N + 1} p_{i1n} = \sum\limits_{n = 1}^{N + 1} p_{i2n} = 1\) (to any desired approximation by choice of N in (2.2)).

As a result of (A.13) and (A.12), (A.11) becomes

$$\text{E}\left( {\frac{{\partial \log L}}{{\partial {\beta _j}}}\frac{{\partial \log L}}{{\partial {\beta _k}}}} \right) = \sum\limits_{i = 1}^m {\frac{{\partial {a_i}}}{{\partial {\beta _j}}}\frac{{\partial {a_i}}}{{\partial {\beta _k}}}} \text{E}\left\{ {\frac{1}{{p_{i{y_i}}^2}}{{\left( {\frac{{\partial {p_{i{y_i}}}}}{{\partial {a_i}}}} \right)}^2}} \right\},$$
(A.14)
where \(\frac{{\partial {a_i}}}{{\partial {\beta _j}}}\) and \(\frac{{\partial {a_i}}}{{\partial {\beta _k}}}\) are given by (3.10). The right-hand side of (A.14) can also be written as in (3.9).
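Numerically, the expectation in (A.14) reduces to a finite sum over the truncated support, since \(\text{E}\left\{ \frac{1}{p_{iy_i}^2}\left( \frac{\partial p_{iy_i}}{\partial a_i} \right)^2 \right\} = \sum_{n=0}^N \left( \frac{\partial p_{in}}{\partial a_i} \right)^2 / p_{in}\). A sketch reusing the hypothetical helpers above; the guard against vanishing probabilities is an added assumption.

```python
def info_term_a(a, b, c, N):
    """Sketch of the expectation in (A.14) as a finite sum over the support."""
    p = eppm_probs(a, b, c, N)
    dp = eppm_dprobs_da(a, b, c, N)
    keep = p > 1e-12                # drop numerically vanishing terms
    return np.sum(dp[keep] ** 2 / p[keep])
```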

For \(\text{E}\left( {\frac{{\partial \log L}}{{\partial {\beta _j}}}\frac{{\partial \log L}}{{\partial b}}} \right)\), note from (A.1) and (3.5) that

$$\begin{aligned} \text{E}\left( \frac{\partial \log L}{\partial \beta_j}\frac{\partial \log L}{\partial b} \right) &= \text{E}\left\{ \left( \sum\limits_{i = 1}^m \frac{\partial a_i}{\partial \beta_j}\frac{\partial p_{iy_i}}{\partial a_i}\frac{1}{p_{iy_i}} \right)\left( \sum\limits_{l = 1}^m \frac{\partial p_{ly_l}}{\partial b}\frac{1}{p_{ly_l}} \right) \right\} \\ &= \sum\limits_{i = 1}^m \frac{\partial a_i}{\partial \beta_j}\,\text{E}\left( \frac{\partial p_{iy_i}}{\partial a_i}\frac{\partial p_{iy_i}}{\partial b}\frac{1}{p_{iy_i}^2} \right) \\ &\quad + \sum\limits_{i = 1}^m \sum\limits_{l = 1,\,l \ne i}^m \frac{\partial a_i}{\partial \beta_j}\,\text{E}\left\{ \left( \frac{\partial p_{iy_i}}{\partial a_i}\frac{1}{p_{iy_i}} \right)\left( \frac{\partial p_{ly_l}}{\partial b}\frac{1}{p_{ly_l}} \right) \right\}. \end{aligned}$$
(A.15)

Using (A.13) and independence of observations \(y_i\) and \(y_l\), \(i \ne l\), (A.15) becomes (3.11). A similar approach can be used to obtain (3.12).

For \(\text{E}\left( {\frac{{\partial \log L}}{{\partial b}}\frac{{\partial \log L}}{{\partial c}}} \right)\), note from (3.5) and (3.7) that

$$\begin{aligned} \text{E}\left( \frac{\partial \log L}{\partial b}\frac{\partial \log L}{\partial c} \right) &= \text{E}\left\{ \left( \sum\limits_{i = 1}^m \frac{\partial p_{iy_i}}{\partial b}\frac{1}{p_{iy_i}} \right)\left( \sum\limits_{l = 1}^m \frac{\partial p_{ly_l}}{\partial c}\frac{1}{p_{ly_l}} \right) \right\} \\ &= \sum\limits_{i = 1}^m \text{E}\left( \frac{\partial p_{iy_i}}{\partial b}\frac{\partial p_{iy_i}}{\partial c}\frac{1}{p_{iy_i}^2} \right) \\ &\quad + \sum\limits_{i = 1}^m \sum\limits_{l = 1,\,l \ne i}^m \text{E}\left\{ \left( \frac{\partial p_{iy_i}}{\partial b}\frac{1}{p_{iy_i}} \right)\left( \frac{\partial p_{ly_l}}{\partial c}\frac{1}{p_{ly_l}} \right) \right\}. \end{aligned}$$
(A.16)

From independence of the observations \(y_i\) and \(y_l\), \(i \ne l\), and noting that

$$\text{E}\left( \frac{\partial p_{iy_i}}{\partial b}\frac{1}{p_{iy_i}} \right) = \sum\limits_{n = 0}^N \frac{\partial p_{in}}{\partial b} = 0,$$
since \(\sum\limits_{n = 0}^N p_{in} = 1\) (to any desired approximation by choice of N in (2.2)), (A.16) becomes (3.13).

Using a similar approach to that for \(\text{E}\left( {\frac{{\partial \log L}}{{\partial b}}\frac{{\partial \log L}}{{\partial c}}} \right)\), (3.14) and (3.15) can be derived.

Cite this article

Toscas, P.J., Faddy, M.J. Using Fisher Scoring to Fit Extended Poisson Process Models. Computational Statistics 19, 425–443 (2004). https://doi.org/10.1007/BF03372105
