In many areas of science, broadly understood, confronting models with experimental measurements is a crucial question. Answering it relies not only on experiments but also on the processing of measurements. This processing is often necessary because of instrument limitations (e.g., temporal, spatial, or spectral resolution, noise level,…) or because the measurements relate only indirectly to the quantity of interest (e.g., convolution, truncation,…). The processing then performs an inversion operation, whose input is the set of measurements and whose output is an estimated object.
In most cases, inversion is an ill-posed problem: based on the measurements alone, it is not possible to define a reconstructed object that is stable with respect to perturbations (e.g., acquisition or model errors,…). To cope with this difficulty, one relies on the concept of regularization: additional information about the unknown is taken into account in order to reformulate the problem as a well-posed one. With this in view, the course covers two main types of approaches:
- deterministic and variational: penalization and constraints, numerical optimization
- probabilistic: Bayesian approach and stochastic sampling,… (and optimization here again)
and describes the strong link between them. These approaches can include various types of information, such as the positivity or spatial regularity of an image, the presence of rare events (e.g., contours, impulses,…), as well as information provided by a set of examples, and they thereby produce high-resolution images. The course also addresses more advanced issues, grouped under the terms self-calibrated (myopic/blind) and adaptive (noise and object parameter estimation). Furthermore, emphasis is placed on a key issue, that of uncertainties.
This Learning Resources page is devoted to inverse problems, focusing on image deconvolution and resolution enhancement. It offers various documents (in PDF format): slides for the lectures, exercise sheets, and practical Matlab work.
Lectures
Introduction, motivation [*] — PDF — In this introductory part, we motivate the developments through imaging examples from various fields and modalities, and we introduce the notation and the convolution model.
- Fields of application: medical, astrophysics, geophysics, remote sensing, non-destructive testing,…
- Modalities: scanner, MRI, tomography, ultrasound, optics, interferometry,…
- Data science issues: denoising, restoration, deconvolution, reconstruction, super-resolution, segmentation,…
We thus address the issue of inversion and analyze its ill-posed/ill-conditioned nature.
Linear methods [*] — PDF — This first part concerns linear deconvolution methods, basic but fundamental tools. Emphasis is placed on the circulant case.
- Least squares approaches, inverse filtering, truncated eigenvalue/singular value decompositions,…
- Quadratic penalization, optimization, Wiener filtering, Wiener-Hunt, Phillips-Twomey-Tikhonov
Each method is interpreted in terms of filtering (frequency interpretation), numerical analysis (matrix conditioning), and statistics (bias-variance).
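To make the circulant case concrete, here is a minimal Matlab sketch of the quadratic-penalty (Wiener-Hunt) solution for a 1-D periodic convolution. The signal, blur kernel, difference operator, and regularization parameter mu are illustrative choices, not the course's exact setup.

```matlab
% Wiener-Hunt deconvolution in the circulant case (1-D, periodic model).
% Minimizes J(x) = ||y - h*x||^2 + mu*||d*x||^2, solved exactly in Fourier.
N  = 256;
x0 = zeros(N,1); x0(60:120) = 1; x0(150:180) = -0.5;   % toy piecewise-constant object
h  = exp(-0.5*((-10:10)'/3).^2); h = h/sum(h);          % Gaussian blur kernel
Hf = fft(h, N);                                         % transfer function
y  = real(ifft(Hf .* fft(x0))) + 0.02*randn(N,1);       % blurred, noisy data

Df = fft([1; -1], N);                                   % circulant first difference
mu = 0.05;                                              % regularization parameter

% Closed-form minimizer, frequency by frequency (Wiener-Hunt solution)
Xf   = conj(Hf) .* fft(y) ./ (abs(Hf).^2 + mu*abs(Df).^2);
xhat = real(ifft(Xf));
plot([x0, y, xhat]); legend('true', 'data', 'restored');
```

Because everything diagonalizes in the Fourier domain, the exact minimizer costs only a few FFTs, which is why the circulant case serves as the backbone for the later chapters.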
Self-tuned and self-adaptive [*] — PDF — This chapter addresses more advanced issues, grouped under the term “unsupervised”: self-learning aspects (noise and object parameter estimation) and self-calibration (instrument parameter estimation). The methodological framework is that of Bayesian statistics, and implementation is based on stochastic sampling and Markov chain Monte Carlo methods; a minimal sampler is sketched after the list below.
- Bayesian interpretation, posterior maximizer, and posterior mean.
- Optimality of the restoration function.
- Extended posterior distribution, full Bayes (noise and object parameters, instrument parameters,…).
- Gibbs and Metropolis-Hastings sampling.
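As an illustration of these conditionals, here is a minimal Matlab sketch of a Gibbs sampler in a toy circulant deconvolution model, assuming white Gaussian noise and object of precisions gn and gx with conjugate Gamma hyperpriors. The function gamrnd requires the Statistics Toolbox, and all numerical choices are illustrative.

```matlab
% Gibbs sampler for unsupervised deconvolution (toy 1-D circulant model).
% Model: y = h*x + n, n ~ N(0, I/gn), x ~ N(0, I/gx), Gamma priors on gn, gx.
N  = 256; h = exp(-0.5*((-10:10)'/3).^2); h = h/sum(h);
Hf = fft(h, N); x0 = zeros(N,1); x0(60:120) = 1;
y  = real(ifft(Hf.*fft(x0))) + 0.02*randn(N,1); Yf = fft(y);

gn = 1; gx = 1;                          % initial hyperparameter values
a0 = 1e-3; b0 = 1e-3;                    % vague Gamma(a0, b0) hyperpriors
for k = 1:500
    % x | gn, gx, y : Gaussian, diagonalized by the DFT
    prec = gn*abs(Hf).^2 + gx;                             % posterior eigenvalues
    mu   = real(ifft(gn*conj(Hf).*Yf ./ prec));            % posterior mean
    x    = mu + real(ifft(fft(randn(N,1)) ./ sqrt(prec))); % exact Gaussian sample
    % gn | x, y  and  gx | x : Gamma conditionals, by conjugacy
    r  = y - real(ifft(Hf.*fft(x)));
    gn = gamrnd(a0 + N/2, 1/(b0 + r'*r/2));   % gamrnd: Statistics Toolbox
    gx = gamrnd(a0 + N/2, 1/(b0 + x'*x/2));
end
% Average the x samples (after burn-in) to approximate the posterior mean.
```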
Convex penalty and edge preservation [*] — PDF — This section introduces a first family of nonlinear deconvolution methods, based on L2-L1 penalties (e.g., Huber) designed to preserve rare events (contours, edges,…). It draws on the previous linear case in terms of criteria, optimization, and coding. On the optimization side, the presented tools rely on the Legendre transform and the concept of convex conjugate, as well as on half-quadratic optimization (distinct from quadratic splitting). These methods enable a significant improvement in image quality, particularly in terms of resolution.
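As a concrete illustration (a sketch under simplifying assumptions, not the course's exact formulation), the following Matlab code applies a Huber penalty to first differences and minimizes the criterion by an additive half-quadratic alternation of Geman-Yang type. It relies on the identity Huber_T(u) = min_b [ (u-b)^2/2 + T|b| ], whose minimizer in b is soft thresholding.

```matlab
% Edge-preserving deconvolution: Huber penalty on first differences,
% minimized by additive half-quadratic (Geman-Yang-type) alternation.
% Uses Huber_T(u) = min_b [ (u-b)^2/2 + T*|b| ], argmin_b = soft(u,T).
N  = 256; h = exp(-0.5*((-10:10)'/3).^2); h = h/sum(h);
Hf = fft(h, N); x0 = zeros(N,1); x0(60:120) = 1;
y  = real(ifft(Hf.*fft(x0))) + 0.02*randn(N,1); Yf = fft(y);
Df = fft([1; -1], N);                        % circulant first difference
lambda = 0.1; T = 0.05;                      % penalty weight, Huber threshold
soft = @(u,t) sign(u).*max(abs(u)-t, 0);     % soft-thresholding operator

x = y;                                       % initialization
for k = 1:50
    b  = soft(real(ifft(Df.*fft(x))), T);    % auxiliary variable: b = soft(Dx, T)
    % x-step: quadratic in x, solved exactly in the Fourier domain
    Xf = (conj(Hf).*Yf + lambda*conj(Df).*fft(b)) ./ ...
         (abs(Hf).^2 + lambda*abs(Df).^2);
    x  = real(ifft(Xf));
end
```

Each iteration thus reduces to one thresholding and one Wiener-Hunt-like step, which is what "draws on the previous linear case" means in practice.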
Constraints, positivity and support [*] — PDF — This part focuses on taking constraints into account, particularly positivity and support. It also draws on previous results in terms of criteria, optimization, and coding. The methods are based on the family of augmented Lagrangian algorithms.
- Introduction to constraints, linear cases, properties, convexity.
- Concepts of optimization under constraints, augmented Lagrangian criterion.
- ADMM (Alternating Direction Method of Multipliers) algorithms.
This approach, which is particularly efficient, also allows for a significant improvement in image resolution, in addition to offering the possibility of complying with physical constraints of positivity.
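The following Matlab sketch illustrates one possible ADMM instance for this problem, assuming a 1-D circulant model, a quadratic smoothness penalty, and the simple splitting x = z with z >= 0; the penalty weight lambda and the augmented-Lagrangian parameter rho are illustrative.

```matlab
% Deconvolution under a positivity constraint via ADMM:
% minimize (1/2)||y - h*x||^2 + (lambda/2)||d*x||^2  subject to  x >= 0,
% with the splitting x = z, z >= 0, and scaled dual variable u.
N  = 256; h = exp(-0.5*((-10:10)'/3).^2); h = h/sum(h);
Hf = fft(h, N); x0 = zeros(N,1); x0(60:120) = 1;
y  = real(ifft(Hf.*fft(x0))) + 0.02*randn(N,1); Yf = fft(y);
Df = fft([1; -1], N); lambda = 0.05; rho = 1;

z = max(y, 0); u = zeros(N,1);
for k = 1:100
    % x-step: augmented-Lagrangian quadratic, exact Fourier solution
    Xf = (conj(Hf).*Yf + rho*fft(z - u)) ./ ...
         (abs(Hf).^2 + lambda*abs(Df).^2 + rho);
    x  = real(ifft(Xf));
    z  = max(x + u, 0);                      % projection onto the positive orthant
    u  = u + x - z;                          % scaled dual update
end
```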
Sparsity and deconvolution — Here we are interested in a category of objects with a particular characteristic: the vast majority of components are zero, and only a small number are non-zero. This is referred to as sparsity. This type of object is encountered in ultrasound or seismology when the size of an inhomogeneity or the thickness of an interface is small compared to the wavelength used. It is also encountered in spectroscopy or spectral line analysis,… and even in astronomy and many other fields. The methods presented are still based on a convex penalty, but this penalty has two characteristics: (i) it addresses the components separately, thus introducing no link between them, and (ii) it is non-differentiable at zero, thus favoring null components. However, this non-differentiability prohibits the use of algorithms founded on gradient or half-quadratic ideas. The focus is on a class of algorithms based on the concepts of subgradient and proximal operator. Coordinate descent algorithms and ADMM will also be considered.
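As an illustration of the proximal-operator viewpoint, here is a minimal Matlab sketch of sparse deconvolution by proximal gradient (ISTA), where the l1 prox is soft thresholding. The spike train, the weight lambda, and the iteration count are illustrative choices.

```matlab
% Sparse deconvolution by proximal gradient (ISTA):
% minimize (1/2)||y - h*x||^2 + lambda*||x||_1
% A gradient step handles the quadratic term; the l1 prox is soft thresholding.
N  = 256; h = exp(-0.5*((-10:10)'/3).^2); h = h/sum(h);
Hf = fft(h, N);
x0 = zeros(N,1); x0([40 90 91 170]) = [1 -0.7 0.9 0.5];  % toy spike train
y  = real(ifft(Hf.*fft(x0))) + 0.01*randn(N,1);
lambda = 0.01; L = max(abs(Hf).^2);          % L: Lipschitz constant of the gradient
soft = @(u,t) sign(u).*max(abs(u)-t, 0);     % proximal operator of t*||.||_1

x = zeros(N,1);
for k = 1:500
    r = y - real(ifft(Hf.*fft(x)));          % residual
    g = -real(ifft(conj(Hf).*fft(r)));       % gradient of the quadratic term
    x = soft(x - g/L, lambda/L);             % gradient step followed by the prox
end
```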
Model comparison — PDF — Here we are interested in the problem of model comparison. Not only are the image and certain parameters unknown, but there are also several candidate models for describing the instrument, the object, and the noise. The lecture presents an optimal decision strategy to select among candidate models, together with numerical tools for its implementation.
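As a generic reminder (a standard Bayesian statement, not the lecture's specific derivation), each candidate model M_k is scored by its evidence, obtained by marginalizing out its parameters theta_k, and the decision follows from the posterior model probabilities:

```latex
p(y \mid M_k) = \int p(y \mid \theta_k, M_k)\, p(\theta_k \mid M_k)\, \mathrm{d}\theta_k ,
\qquad
p(M_k \mid y) = \frac{p(y \mid M_k)\, p(M_k)}{\sum_j p(y \mid M_j)\, p(M_j)} .
```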
Deconvolution-Segmentation of textured images — PDF — This lecture addresses the issue of joint deconvolution and segmentation. A key idea is that image segmentation is easier if the image is sharp, and conversely, deconvolution is easier if the regions/contours are known. The aim is therefore to perform both operations jointly. The presentation focuses on the case of textured images with specific orientations.
Prior Diffusion Models [*] — PDF — This section focuses on the integration of information provided by a set of examples specific to the domain under investigation. A diffusion model is used to describe the probability density presumed to be the source of the set of examples. This density is then included as a prior to form a posterior in the Bayesian framework. Samples of the posterior are then produced using various algorithms:
- Diffusion Posterior Sampling and Pseudoinverse-guided Diffusion Models, in a way inspired by ancestral sampling,
- Filtering Posterior Sampling, inherited from Bayesian filtering,
- Twisted Diffusion Sampler based on Sequential Monte Carlo,
- And a (proposed) Gibbs Diffusion Posterior Sampling.
We then expect to obtain higher quality images since they are consistent with the specific set of examples.
Extras — This section contains a few miscellaneous complements.
- Quadratic minimization, system solvers, matrix inversion: a short and partial view — PDF
- An example in astronomy: bi-model, smoothness and parsimony, positivity and support, ADMM — PDF
- An example in radar imaging: high resolution and sparsity, ADMM — PDF
- A few concluding remarks and some perspectives — PDF
Exercises
- [PDF] Circulant matrices, quadratic functions, Legendre transform, linear constraints, sparsity, denoising,…
MatlabWork
- [PDF] Linear solutions and filtering: quadratic penalty and Wiener-Hunt.
- [PDF] Bayesian standpoint and stochastic sampling for hyperparameter estimation.
- [PDF] Convex penalty and edge preservation: Huber and half-quadratic approach.
- [PDF] Positivity constraints and ADMM.
Bibliography
- J.-F. Giovannelli and J. Idier, Eds., Regularization and Bayesian Methods for Inverse Problems in Signal and Image Processing, ISTE and John Wiley & Sons Inc., London, 2015.
- P.C. Hansen, Discrete Inverse Problems: Insight and Algorithms, SIAM, Philadelphia, USA, 2010.
- J. Idier, Ed., Bayesian Approach to Inverse Problems, ISTE Ltd and John Wiley & Sons Inc., London, 2008.
- J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer, Berlin, Germany, 2005.
- A. Tarantola, Inverse Problem Theory and Methods for Model Parameter Estimation, SIAM, Philadelphia, USA, 2005.
- R.C. Aster, B. Borchers and C.H. Thurber, Parameter Estimation and Inverse Problems, International Geophysics Series. Elsevier, Amsterdam, 2005.
- J.C. Santamarina and D. Fratta, Discrete Signals and Inverse Problems: An Introduction for Engineers and Scientists, Wiley-Blackwell, Chichester, England, 2005.
- B. Chalmond, Modeling and Inverse Problems in Image Analysis, Applied Mathematical Sciences. Springer, New York, USA, 2003.
- M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging, Taylor & Francis, Bristol and Philadelphia, USA, 2002.
| Author(s) | Jean-François Giovannelli |
| Contact | Giova@IMS-Bordeaux.fr |
| Language | English |
| Licence | Creative Commons |