Interim updates #232

Merged
rozyczko merged 22 commits into develop from interim_updates on Apr 28, 2026
Conversation

@rozyczko
Member

This pull request adds support for progress callbacks during fitting operations. The main goal is to allow users to receive progress updates and optionally cancel fits in progress.

  • Added an optional progress_callback parameter to the fit methods in Fitter, MinimizerBase, MultiFitter and all minimizer subclasses, allowing users to receive iterative progress updates and cancel fits.

  • Implemented progress callback integration in LMFit: added _create_iter_callback to wrap the user callback, constructed detailed progress payloads, and enabled fit cancellation by raising a new FitCancelled exception. Ensured parameter values are restored on cancellation or error.

  • Added the FitCancelled exception class to signal user-requested cancellation, and ensured proper restoration of parameter values on cancellation or error.
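A minimal sketch of how a caller might use the new progress_callback: the `Optional[Callable[[dict], Optional[bool]]]` signature and the return-True-to-cancel convention come from this PR, but the exact payload keys (`'iteration'`, `'chi2'`) are illustrative assumptions.

```python
# Hedged sketch of a progress callback, assuming a dict payload with
# 'iteration' and 'chi2' keys (the actual keys are defined by the PR's
# progress payload, not by this example).

def progress_callback(payload: dict) -> bool:
    """Receive a progress payload; return True to request cancellation."""
    print(f"iteration {payload.get('iteration')}: chi2 = {payload.get('chi2')}")
    # Illustrative cancellation condition: stop after 100 iterations.
    return payload.get('iteration', 0) >= 100

# Usage (assuming an easyscience Fitter instance named `fitter`):
# result = fitter.fit(progress_callback=progress_callback)
```

When the callback returns True, the minimizer raises FitCancelled internally and parameter values are restored, per the description above.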


@github-actions bot left a comment

This pull request does not contain a valid label. Please add one of the following labels: ['[scope] bug', '[scope] enhancement', '[scope] documentation', '[scope] significant', '[scope] maintenance']

@rozyczko
Member Author

Required for easyscience/EasyReflectometryApp#290

@rozyczko rozyczko added [scope] enhancement Adds/improves features (major.MINOR.patch) [priority] medium Normal/default priority [area] base classes Changes to or creation of new base classes labels Apr 13, 2026
@rozyczko rozyczko marked this pull request as ready for review April 17, 2026 08:01
Contributor

@damskii9992 damskii9992 left a comment


This PR needs an ADR explaining the software architecture design implemented and the rationale for why this design was chosen etc.
We probably also need a tutorial showcasing how the callback might be used. Both for ourselves as a kind of documentation but also for potential users.
I have not bothered looking at the unit tests.

Comment thread src/easyscience/fitting/minimizers/minimizer_base.py
Comment thread src/easyscience/fitting/minimizers/minimizer_base.py Outdated
Comment thread src/easyscience/fitting/minimizers/minimizer_bumps.py Outdated
Comment thread src/easyscience/fitting/minimizers/minimizer_bumps.py Outdated
Comment thread src/easyscience/fitting/minimizers/minimizer_bumps.py Outdated
Comment thread src/easyscience/fitting/minimizers/minimizer_dfo.py
Comment thread src/easyscience/fitting/minimizers/minimizer_dfo.py Outdated
results.y_obs = self._cached_model.y
results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
results.y_err = weights
results.n_evaluations = int(fit_results.nf)
Contributor


shouldn't this be fit_results.nfev?

Member Author


No, this is correct for DFO-LS: nf is the number of objective evaluations.
DFO-LS doesn't follow the SciPy-style OptimizeResult API.

Comment thread src/easyscience/fitting/minimizers/minimizer_dfo.py
chi2_val = self.chi2
reduced_val = self.reduced_chi2
if not np.isfinite(chi2_val) or not np.isfinite(reduced_val):
raise ValueError
Contributor


This error should probably have some context

Member Author


This ValueError is not a user-facing error. It is raised only to be caught immediately by the except Exception in line 54, after which chi2 and reduced_chi2 are rendered as N/A

Contributor


Even then, in the future we might change some code elsewhere to catch the exception and simply re-raise the error here. Without the error context, such re-raises would be hard to diagnose. It never hurts to have a descriptive error message, even if it isn't currently being used.

Member Author


I don't think this is necessary, but I will begrudgingly accept the request.
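The pattern the reviewers settled on can be sketched as follows. The error carries context even though it is caught internally and rendered as N/A; the function name and formatting here are illustrative, not the actual easyscience code.

```python
import numpy as np

# Hedged sketch: raise a ValueError with context even though it is only
# caught by a broad `except Exception` and rendered as 'N/A'. The helper
# name `format_chi2` is hypothetical.

def format_chi2(chi2_val: float, reduced_val: float) -> str:
    try:
        if not np.isfinite(chi2_val) or not np.isfinite(reduced_val):
            # Descriptive message: useful if this is ever re-raised.
            raise ValueError(
                f'Non-finite fit statistics: chi2={chi2_val}, '
                f'reduced chi2={reduced_val}'
            )
        return f'{chi2_val:.4f} / {reduced_val:.4f}'
    except Exception:
        return 'N/A'
```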

@rozyczko rozyczko requested a review from damskii9992 April 25, 2026 07:50
Contributor

@damskii9992 damskii9992 left a comment


I would still like to see a tutorial or something which showcases how the progress callback might actually be used, not just a MagicMock unittest.

results.y_obs = self._cached_model.y
results.y_calc = self.evaluate(results.x, minimizer_parameters=results.p)
results.y_err = self._cached_model.dy
results.n_evaluations = n_evaluations
Contributor


I probably wouldn't save n_evaluations here, but rather iterations, since what we want to report isn't how many times the fit has evaluated the model function, but rather how many iterations the fit has run.

Member


To be somewhat pedantic: both number of iterations and number of evaluations can be relevant parameters.

Member Author


Cross-backend consistency (what I said earlier): LMFit.nfev and DFO-LS.nf both count objective-function calls, so n_evaluations matches.
I would like to be consistent and report just this.
I will document explicitly in the BUMPS docstring that for BUMPS this is objective evaluations, not iterations.
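The cross-backend mapping described above can be summarized in a small table. The dictionary below is illustrative (the backend keys are mine); the attribute names come from the discussion.

```python
# Illustrative summary of which backend attribute feeds the shared
# FitResults.n_evaluations field; both count objective-function calls.
EVAL_COUNT_ATTR = {
    'lmfit': 'nfev',  # lmfit.MinimizerResult.nfev
    'dfols': 'nf',    # DFO-LS result object's nf
}
```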

Contributor


Then I would like to require that we ALSO report iterations in all of the minimizers. Because I only see a use-case for iterations, not for objective evaluations.

Member


Users may be interested in the number of objective evaluations if the objective function is very time-consuming to calculate. A completely hypothetical example could be if the function involves a full or partial reduction workflow for an ESS instrument ;)

Member


As hinted at above, I'm not against reporting both

Member Author


I don't really see the need to confuse users but OK. We can add this attribute.
Done

Comment thread src/easyscience/fitting/minimizers/minimizer_bumps.py
Comment thread src/easyscience/fitting/minimizers/minimizer_bumps.py Outdated
def __call__(self, history):
self.last_step = int(history.step[0])


Contributor


We are only supposed to have 1 class per python file. I could accept 1 additional small class, but now you're reaching 2-3 small additional classes, so I think they belong in separate files in separate sub-folders.

Member Author


Those are very small, single-task classes which apply directly to the Bumps class functionality. Splitting them out and placing them in, say, utils.py would imply we could reuse them elsewhere, which is not the case.
If, at some point, we find these can be instantiated by another class, I'd pull them out.

Contributor


I still think I'd prefer them pulled out into separate files in a separate folder, say BUMPS_utils, and then a file for each class. I like clean file structures.

Member Author


then we should probably spell it out in an ADR

Member Author


done

method_str = method_dict.get('method', self._method)
fitclass = self._resolve_fitclass(method_str)

step_counter = _StepCounterMonitor()
Contributor


Does this change with our latest changes #231? We don't need the minimal step counter for the FitResults now, right? Since we get the n_evaluations there.

Comment thread src/easyscience/fitting/minimizers/minimizer_bumps.py Outdated
progress_callback: Optional[Callable[[dict], Optional[bool]]] = None,
callback: Optional[Callable[[DFOCallbackState], None]] = None,
callback_every: int = 1,
callback_on_improvement_only: bool = False,
Contributor


You did not separate this out into a different PR, so I guess we should discuss it here. Shouldn't these options also be available in the other minimizers? :)

Comment thread src/easyscience/fitting/minimizers/minimizer_dfo.py Outdated
1 for parameter in params.values() if getattr(parameter, 'vary', False)
)
degrees_of_freedom = residual_array.size - varied_parameter_count
reduced_chi2 = chi2 / degrees_of_freedom if degrees_of_freedom > 0 else chi2
Contributor


Same question as for DFO, does LMFIT not calculate chi2 and can't you just use that instead of calculating it again every iteration?

Member Author


Nope. lmfit's iter_cb signature is (params, iter, resid, *args, **kwargs). There is no pre-computed chi2 passed in. LMFIT's MinimizerResult.chisqr is only available after the fit completes, not during iteration. So summing residuals**2 inside the callback is the correct approach.
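A sketch of such a callback, combining lmfit's iter_cb convention with the reduced-chi2 arithmetic shown in the diff context above. The logging list is illustrative; note that lmfit aborts the fit if iter_cb returns True, so this callback deliberately returns None.

```python
import numpy as np

progress_log = []  # collected (chi2, reduced_chi2), one entry per iteration

def iter_cb(params, iteration, resid, *args, **kws):
    # lmfit calls iter_cb(params, iter, resid, *args, **kws) on each
    # iteration; returning True would abort the fit, so return None.
    residual_array = np.asarray(resid, dtype=float)
    chi2 = float(np.sum(residual_array ** 2))
    varied = sum(1 for p in params.values() if getattr(p, 'vary', False))
    dof = residual_array.size - varied
    progress_log.append((chi2, chi2 / dof if dof > 0 else chi2))
    return None
```

Pass it to lmfit via `lmfit.minimize(..., iter_cb=iter_cb)`; `params` there is an `lmfit.Parameters` mapping, which is why the sketch iterates over `params.values()`.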

Comment thread tests/unit/fitting/minimizers/test_minimizer_bumps.py Outdated
@rozyczko
Member Author

I would still like to see a tutorial or something which showcases how the progress callback might actually be used, not just a MagicMock unittest.

Added progress-callback.ipynb

    f'final reduced chi2 = {res.reduced_chi2:.4f}, '
    f'n_evaluations = {res.n_evaluations}'
)
Contributor


I ran this notebook, and DFO returned:
final reduced chi2 = 0.0000
which is obviously wrong...

Member Author


In DFO, _gen_fit_results was assigning results.y_err = weights. But weights are the residual weights (1/σ): they get multiplied into the residuals by _make_model, while FitResults.chi2 computes ((y_obs - y_calc) / y_err)**2, expecting y_err to be σ. LMFit and Bumps both correctly invert weights to σ; DFO did not. With weights ≫ 1, chi² collapsed toward 0; with weights = 1, the bug was masked.

Contributor

@damskii9992 damskii9992 left a comment


Otherwise, pending answers/resolutions to my last 2 comments, I can finally approve this PR. You should also update the ADR to reflect the latest decisions :)

@rozyczko rozyczko merged commit 4be7c47 into develop Apr 28, 2026
33 checks passed

Labels

[area] base classes Changes to or creation of new base classes [priority] medium Normal/default priority [scope] enhancement Adds/improves features (major.MINOR.patch)


3 participants