### Posts Tagged ‘optimization’

## Programming an estimation command in Stata: Using optimize() to estimate Poisson parameters


This is the eighteenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

Using optimize()

There are many optional choices that one may make when solving a nonlinear optimization problem, but there are very few that one must make. The optimize*() functions in Mata handle this problem by making a set of default choices for you, requiring that you specify a few things, and allowing you to change any of the default choices.

When I use optimize() to solve a … Read more…
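The pattern described above — supply only an objective function, starting values, and data, and let the optimizer's defaults handle everything else — is common to most numerical optimizers. As a rough analogue of the Poisson example the post works through (a Python sketch with simulated data, not the Mata code from the post), `scipy.optimize.minimize` follows the same design:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data for illustration only (not from the post)
rng = np.random.default_rng(12345)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y = rng.poisson(np.exp(X @ beta_true))

def neg_loglik(b, y, X):
    """Negative Poisson log likelihood (the constant ln y! term is dropped)."""
    xb = X @ b
    return -np.sum(y * xb - np.exp(xb))

# We must specify the objective, starting values, and data; the optimizer
# makes default choices (algorithm, tolerances, numerical derivatives)
# that we could override -- the same division of labor as optimize() in Mata.
res = minimize(neg_loglik, x0=np.zeros(2), args=(y, X))
print(res.x)  # close to beta_true
```

Here the quasi-Newton default is used because no derivatives are supplied; passing an analytic gradient via `jac=` is the analogue of moving from a type-d0 to a type-d1 evaluator in Mata.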

Categories: Programming

## Programming an estimation command in Stata: A review of nonlinear optimization using Mata


This is the seventeenth post in the series Programming an estimation command in Stata. I recommend that you start at the beginning. See Programming an estimation command in Stata: A map to posted entries for a map to all the posts in this series.

A quick review of nonlinear optimization

We want to maximize a real-valued function $$Q(\thetab)$$, where $$\thetab$$ is a $$p\times 1$$ vector of parameters. Minimization is done by maximizing $$-Q(\thetab)$$. We require that $$Q(\thetab)$$ be twice continuously differentiable, so that we can use a second-order Taylor series to approximate $$Q(\thetab)$$ in a neighborhood of the point $$\thetab_s$$,

$Q(\thetab) \approx Q(\thetab_s) + \gb_s'(\thetab -\thetab_s) + \frac{1}{2} (\thetab -\thetab_s)'\Hb_s (\thetab -\thetab_s) \tag{1}$

where $$\gb_s$$ is the $$p\times 1$$ vector of first derivatives of $$Q(\thetab)$$ evaluated at $$\thetab_s$$ and $$\Hb_s$$ is the $$p\times p$$ matrix of second derivatives of $$Q(\thetab)$$ evaluated at $$\thetab_s$$, known as the Hessian matrix.
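Maximizing the right-hand side of (1) with respect to $$\thetab$$ yields the Newton update $$\thetab_{s+1} = \thetab_s - \Hb_s^{-1}\gb_s$$, the core of Newton-type algorithms. A minimal NumPy sketch of this iteration (Python rather than Mata, purely for illustration; the function names are my own):

```python
import numpy as np

def newton_max(Q_grad, Q_hess, theta, tol=1e-9, maxiter=100):
    """Maximize Q by Newton-Raphson: theta_{s+1} = theta_s - H_s^{-1} g_s."""
    for _ in range(maxiter):
        g = Q_grad(theta)                 # p x 1 gradient at theta_s
        H = Q_hess(theta)                 # p x p Hessian at theta_s
        if np.max(np.abs(g)) < tol:       # stop when the gradient is near zero
            break
        theta = theta - np.linalg.solve(H, g)   # H_s^{-1} g_s without forming the inverse
    return theta

# Example: Q(theta) = -(theta1 - 1)^2 - 2*(theta2 + 3)^2, maximized at (1, -3)
grad = lambda t: np.array([-2.0 * (t[0] - 1.0), -4.0 * (t[1] + 3.0)])
hess = lambda t: np.array([[-2.0, 0.0], [0.0, -4.0]])
print(newton_max(grad, hess, np.array([0.0, 0.0])))   # maximizer: approximately (1, -3)
```

Because this example's objective is exactly quadratic, the approximation in (1) is exact and the iteration converges in a single step; for general objectives, production optimizers such as Mata's optimize() add step-size control and alternative Hessian approximations around this same core update.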