View Full Version : JMSL Version for Matlab's lsqnonlin Routine



dtineo
05-18-2009, 12:48 PM
After a few days of testing with various permutations of the tolerance parameters, I've failed to find an accurate way of using the "BoundedLeastSquares" class in place of the MATLAB "lsqnonlin" routine. Aside from various differences in how the two are called (e.g., MATLAB expects an initial guess for the parameter vector x, while JMSL expects an initial guess at the solution), I've found one fundamental difference that I don't see handled in the JMSL version of the same routine...

Matlab lsqnonlin documentation:

By default lsqnonlin chooses the large-scale algorithm. This algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.
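For anyone unfamiliar with the method the quoted paragraph mentions, a preconditioned conjugate gradient (PCG) step can be sketched as below. This is a generic illustration on a small dense symmetric positive definite system with a Jacobi (diagonal) preconditioner; it is not MATLAB's or JMSL's internal code, and the class and method names are just for this example.

```java
// Illustrative preconditioned conjugate gradient (PCG) solver for a small
// symmetric positive definite system A x = b, using a Jacobi (diagonal)
// preconditioner M = diag(A). Generic sketch only, not MATLAB/JMSL internals.
public class PcgDemo {

    static double[] matVec(double[][] a, double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                y[i] += a[i][j] * x[j];
        return y;
    }

    static double dot(double[] u, double[] v) {
        double s = 0.0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }

    // Solve A x = b by PCG, starting from x = 0.
    static double[] pcg(double[][] a, double[] b, int maxIter, double tol) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();                       // residual r = b - A*0
        double[] z = new double[n];
        for (int i = 0; i < n; i++) z[i] = r[i] / a[i][i];  // z = M^-1 r
        double[] p = z.clone();
        double rz = dot(r, z);
        for (int k = 0; k < maxIter && Math.sqrt(dot(r, r)) > tol; k++) {
            double[] ap = matVec(a, p);
            double alpha = rz / dot(p, ap);
            for (int i = 0; i < n; i++) { x[i] += alpha * p[i]; r[i] -= alpha * ap[i]; }
            for (int i = 0; i < n; i++) z[i] = r[i] / a[i][i];
            double rzNew = dot(r, z);
            double beta = rzNew / rz;
            rz = rzNew;
            for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] a = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = pcg(a, b, 100, 1e-10);
        // Exact solution of this system is x = [1/11, 7/11]
        System.out.printf("x = [%.6f, %.6f]%n", x[0], x[1]);
    }
}
```

In the large-scale algorithm, a PCG loop like this approximately solves the trust-region subproblem at each outer iteration rather than a standalone linear system.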

It doesn't appear that JMSL has an implementation of the interior-reflective Newton method underlying what MATLAB terms the "large-scale" algorithm. Is there a way to handle this discrepancy using JMSL?

ed
05-19-2009, 10:41 AM
Both of the JMSL classes BoundedLeastSquares and NonlinLeastSquares use a modified Levenberg-Marquardt algorithm with an active set strategy. If you have a code example that fails, please get in touch with technical support so they can dig into it and find a solution. There may be other optimizers in the library you can try, but none of them implements the interior-reflective Newton method.
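For context, the Levenberg-Marquardt family that both JMSL classes are built on can be sketched as below. This is a minimal, self-contained illustration that fits y = a*exp(b*t) subject to box bounds, with simple projection of each trial step onto the bounds standing in for a real active-set strategy; it is not the JMSL implementation, and all names here are invented for the example.

```java
// Minimal Levenberg-Marquardt sketch for a bound-constrained least-squares
// fit of y = a * exp(b * t). Projection onto the box [lb, ub] is a crude
// stand-in for an active set strategy; illustration only, not JMSL's code.
public class LmDemo {

    // Residuals r_i = a*exp(b*t_i) - y_i and Jacobian rows [dr/da, dr/db].
    static void residualsAndJacobian(double a, double b, double[] t, double[] y,
                                     double[] r, double[][] j) {
        for (int i = 0; i < t.length; i++) {
            double e = Math.exp(b * t[i]);
            r[i] = a * e - y[i];
            j[i][0] = e;
            j[i][1] = a * t[i] * e;
        }
    }

    static double cost(double[] r) {
        double s = 0;
        for (double v : r) s += v * v;
        return s;
    }

    static double[] fit(double[] t, double[] y, double[] x0,
                        double[] lb, double[] ub) {
        double a = x0[0], b = x0[1], lambda = 1e-3;
        int m = t.length;
        double[] r = new double[m];
        double[][] j = new double[m][2];
        residualsAndJacobian(a, b, t, y, r, j);
        double c = cost(r);
        for (int iter = 0; iter < 200; iter++) {
            // Damped normal equations (J^T J + lambda*I) d = -J^T r,
            // solved for the 2x2 case by Cramer's rule.
            double h00 = lambda, h01 = 0, h11 = lambda, g0 = 0, g1 = 0;
            for (int i = 0; i < m; i++) {
                h00 += j[i][0] * j[i][0];
                h01 += j[i][0] * j[i][1];
                h11 += j[i][1] * j[i][1];
                g0 += j[i][0] * r[i];
                g1 += j[i][1] * r[i];
            }
            double det = h00 * h11 - h01 * h01;
            double da = (-g0 * h11 + g1 * h01) / det;
            double db = (-g1 * h00 + g0 * h01) / det;
            // Trial point, projected onto the bounds.
            double aNew = Math.min(ub[0], Math.max(lb[0], a + da));
            double bNew = Math.min(ub[1], Math.max(lb[1], b + db));
            double[] rNew = new double[m];
            residualsAndJacobian(aNew, bNew, t, y, rNew, j);
            double cNew = cost(rNew);
            if (cNew < c) {   // accept the step, trust the model more
                a = aNew; b = bNew; r = rNew; c = cNew; lambda *= 0.1;
            } else {          // reject the step, increase damping
                lambda *= 10;
                residualsAndJacobian(a, b, t, y, r, j); // restore Jacobian
            }
            if (c < 1e-20) break;
        }
        return new double[]{a, b};
    }

    public static void main(String[] args) {
        double[] t = {0, 1, 2, 3, 4};
        double[] y = new double[t.length];
        for (int i = 0; i < t.length; i++) y[i] = 2.0 * Math.exp(0.5 * t[i]);
        double[] x = fit(t, y, new double[]{1.0, 0.1},
                         new double[]{0.0, 0.0}, new double[]{10.0, 2.0});
        System.out.printf("a = %.4f, b = %.4f%n", x[0], x[1]);
    }
}
```

The real JMSL routines differ in the details (finite-difference Jacobian options, scaling, and a proper active-set treatment of binding bounds), but the damped Gauss-Newton step above is the core of the Levenberg-Marquardt approach, as opposed to the trust-region subspace iteration MATLAB's large-scale algorithm uses.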