How can I make SNOPT take less computer time?
The default values of the optional parameters
are chosen to provide maximum robustness.
These choices often cause SNOPT to take more
computer time on easy-to-solve problems. In
many cases it is possible to reduce the time
needed to solve a problem by carefully choosing
the optional parameters to match the problem
being solved. The parameters that often
influence run time are the crash option and
the number of limited-memory Hessian updates.
The best values
for these parameters will vary with each problem,
but the following values often make problems
of easy-to-moderate difficulty solve more quickly:
   Crash option      3
   Hessian updates   5
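These settings can be supplied in a SNOPT Specs (options) file. The fragment below is a minimal sketch using the standard Begin/End wrapper; the option names and values are the ones suggested above:

```
Begin options
   Crash option      3
   Hessian updates   5
End options
```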
Why does SNOPT perform minor iterations on
an unconstrained problem?
The QP subproblem sometimes
performs more than one minor iteration for
each major iteration. I had assumed that minor
iterations are only possible when there are
constraints.
SNOPT starts the first QP with all variables
fixed at their initial values. These are known
as "temporary bounds", or "temporary
constraints". The first QP will perform
minor iterations as these variables are allowed
to change from their initial values.
Why does SNOPT sometimes do one or two
minor iterations when my problem only has
one degree of freedom?
If the equations for the search direction
are ill-conditioned, an iterative refinement
scheme is used to improve the accuracy. The
iterative refinement iterations are counted
as minor iterations.
If I attempt to solve my problem using SNOPT,
I get the error message: EXIT -- the current
point cannot be improved. Is there a bug in
SNOPT?
This type of error is almost always caused
by a problem with the model or calling subroutine.
There are two steps that should be followed
if this error occurs:
(a) Check your main program
using some Fortran syntax checker. I strongly
recommend FTNCHEK, which is in the public
domain and can be downloaded from various
sites. See, e.g., http://wwwcn.cern.ch/dci/asis/products/MISC/ftnchek.html.
I never run a new code without using FTNCHEK
first. It has saved me many months of painful
debugging.
(b) If step (a) shows that
your code is clean, try running SNOPT with
the option ``Verify level 3''. This
will check that you have coded the derivatives
correctly.
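The verify option can also be placed in a Specs file; the Begin/End lines below are the standard Specs-file wrapper:

```
Begin options
   Verify level   3
End options
```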
I have installed SNOPT on a DEC Alpha
as directed in the README.install file. However,
when running the tests, some of the problems
fail to converge or get the wrong answer when
I run through the standard cases. Can you
offer any advice?
The problem is most likely with the options
that determine the level of compiler optimization.
If the level is too high, you may see inconsistent
results. For example, the following options
do and do not work for SNOPT on a DEC Alpha:
   Worked:  -fast -O4 -tune host -inline all
   Failed:  -fast -O5 -tune host -inline all
I am solving a parameter estimation problem
with SNOPT. In this type of problem the objective
and constraints are both functions of a complicated
vector-valued function v(x). SNOPT requires
the objective and constraint functions to
be defined in two separate subroutines. Although
it is possible to compute the problem functions
separately or pass information using the user
workspace parameters cu, iu and ru, this
is either very complicated or expensive in
CPU time. Is it possible to compute the objective
and constraints at the same time?
The distribution for SNOPT includes a subroutine
SNOPTM that is equivalent to SNOPT except
that the objective and constraint functions
can be computed in the same subroutine.
I have just implemented a test version of
my code, and I get the error message: 21 EXIT
-- error in basis package. What am I doing
wrong?
This error is usually caused by an error in
the definition of the input arrays a(*), ha(*)
and ka(*) that define the constraint matrix.
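As a concrete illustration (a hypothetical 2-by-2 example, not from the original FAQ), the column-wise sparse format stores the nonzeros of the constraint matrix column by column: a(*) holds the values, ha(*) the corresponding row indices, and ka(j) points to the start of column j, with ka(n+1) = ne + 1:

```fortran
*     Hypothetical sketch: column-wise storage of the matrix
*        ( 1  0 )
*        ( 3  2 )
*     ne = 3 nonzeros, n = 2 columns.
      double precision   a(3)
      integer            ha(3), ka(3)
      data a  / 1.0d+0, 3.0d+0, 2.0d+0 /
      data ha / 1, 2, 2 /
*     ka(j) = start of column j in a(*); ka(n+1) = ne + 1.
      data ka / 1, 3, 4 /
```

A common cause of the "error in basis package" exit is a ka(*) array whose entries are not monotonically increasing, or row indices in ha(*) that lie outside the declared row dimension.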
The derivatives for my problem are very
expensive and so I use the option ``Non derivative
line search'' to reduce the number of times
the derivatives are calculated. However, I
notice that the number of derivative calculations
doesn't seem to be any smaller. Why is this?
Two things need to be done when using a non
derivative line search. First, the option
``Non derivative line search'' must be set. Second,
the user must skip the computation of the
derivatives in funcon and funobj when SNOPT
sets the input variable mode = 0.
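A minimal funobj sketch of this pattern follows. The quadratic objective is hypothetical, and the argument list follows the usual funobj calling sequence; check your SNOPT version's documentation for the exact signature:

```fortran
      subroutine funobj
     &   ( mode, n, x, fObj, gObj, nState,
     &     cu, lencu, iu, leniu, ru, lenru )
      implicit           none
      integer            mode, n, nState, lencu, leniu, lenru
      integer            iu(leniu)
      double precision   fObj, x(n), gObj(n), ru(lenru)
      character*8        cu(lencu)
      integer            j

*     Hypothetical objective: fObj = sum of x(j)**2.
      fObj = 0.0d+0
      do 100 j = 1, n
         fObj = fObj + x(j)**2
  100 continue

*     When mode = 0, SNOPT wants the function value only,
*     so skip the (expensive) gradient computation.
      if (mode .gt. 0) then
         do 200 j = 1, n
            gObj(j) = 2.0d+0*x(j)
  200    continue
      end if

      end
```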
I am using SNOPT to optimize a model that
requires a significant amount of work to compute
the objective gradient. In testing various
scenarios, I would like to be able to fix
some of the variables for a particular run.
My question is the following: do I have to provide
the derivatives of the objective function
with respect to the fixed variables, or can
I simply use a dummy value for this gradient?
Computing the fixed gradients exactly will
mean considerable extra computation. Of
course, I can always discard the fixed variables
from the problem given to SNOPT, but this
will mean I will need to change the problem
functions for each run.
You can fix the jth variable at the value
const by including the constraint bl(j) =
bu(j) = const. If you assign a dummy value
to gObj(j), it should not make any difference
to the run, except that the reduced costs
(i.e., Lagrange multipliers) for any fixed
variables will be meaningless.
If you leave the gradient
undefined (i.e., you don't set the components
of gObj associated with the fixed variables)
then SNOPT will compute them by finite differences:
gObj(j) = (fObjJ - fObj)/delta, where fObjJ
is fObj evaluated at the perturbed point x
+ e_j delta. I assume that you DON'T want
to do this!
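The bound-fixing approach above can be sketched as follows. The names bl, bu and gObj follow the usual SNOPT argument names, the index and value are arbitrary, and the dummy gradient value is likewise an assumption:

```fortran
*     Sketch: fix variable j at the value const via equal bounds,
*     so SNOPT never moves it.
      integer            j
      double precision   const
      j     = 3
      const = 1.5d+0
      bl(j) = const
      bu(j) = const
*     In funobj, assign a harmless dummy gradient so SNOPT
*     does not estimate gObj(j) by finite differences:
      gObj(j) = 0.0d+0
```

Only the reduced cost (Lagrange multiplier) reported for the fixed variable is affected by the dummy value; the rest of the run is unchanged.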