dtineo
05-18-2009, 11:48 AM
After a few days of testing various permutations of the tolerance parameters, I've failed to find an accurate way of using the "BoundedLeastSquares" class in place of the MATLAB "lsqnonlin" routine. Aside from various differences in methodology (e.g. MATLAB expects an initial guess for the parameters x, while JMSL expects an initial guess at the solution), I've found one fundamental difference that doesn't appear to be available in the JMSL version of the same routine...
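For reference, here is a minimal sketch of roughly how the JMSL side is being set up. It assumes the com.imsl.math.BoundedLeastSquares API (constructor taking the residual function, the residual/parameter counts, a bound type flag, and the bound arrays, plus setGuess/solve/getSolution); the residual function and the bounds are placeholder values, and the exact interface and method names should be checked against your JMSL version and its documentation.

import com.imsl.math.BoundedLeastSquares;

public class LsqnonlinPort {
    public static void main(String[] args) throws Exception {
        // Residual function: fill f[] with the m residuals at the current
        // point x[], analogous to the function handle passed to lsqnonlin.
        // (Problem below is a toy example, not the actual model.)
        BoundedLeastSquares.Function residuals = new BoundedLeastSquares.Function() {
            public void compute(double[] x, double[] f) {
                f[0] = x[0] * x[0] + x[1] - 11.0;
                f[1] = x[0] + x[1] * x[1] - 7.0;
            }
        };

        int m = 2;                        // number of residuals
        int n = 2;                        // number of parameters
        int ibtype = 0;                   // user supplies both lower and upper bounds
        double[] xlb = {-5.0, -5.0};      // lower bounds (placeholder values)
        double[] xub = { 5.0,  5.0};      // upper bounds (placeholder values)

        BoundedLeastSquares solver =
            new BoundedLeastSquares(residuals, m, n, ibtype, xlb, xub);
        solver.setGuess(new double[] {1.0, 1.0});   // initial parameter guess
        solver.solve();
        double[] x = solver.getSolution();
        System.out.println("x = " + x[0] + ", " + x[1]);
    }
}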
MATLAB lsqnonlin documentation:
By default lsqnonlin chooses the large-scale algorithm. This algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1] and [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.
It doesn't appear that JMSL has an implementation of the interior-reflective Newton method to handle what MATLAB terms a "large-scale" algorithm. Is there a way to handle this discrepancy using JMSL?