In recent years, large convolutional neural networks have been widely used as tools for image deblurring because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem, whose solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and poor reconstructions. In addition, networks trained end-to-end do not necessarily account for the numerical formulation of the underlying imaging problem. In this paper, we propose strategies to improve the stability of deep-learning-based deblurring methods without sacrificing too much accuracy. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not severely amplify noise in the computed image. Second, we introduce a unified framework in which a pre-processing step compensates for the lack of stability of the subsequent neural-network-based step. Two different pre-processors are presented: the former implements a strong parameter-free denoiser, and the latter is a variational-model-based regularized formulation of the latent imaging problem. The framework is also formally characterized by mathematical analysis. Numerical experiments verify the accuracy and stability of the proposed approaches for image deblurring in the presence of unknown or unquantified noise; the results confirm that they improve the network's stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
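To make the two-step framework concrete, below is a minimal Python sketch of the variational pre-processing idea, assuming a simple Tikhonov regularizer as the model-based step; the names `tikhonov_preprocess`, `small_net`, and the parameter `lam` are illustrative, not the paper's actual implementation. The pre-processor supplies the stability, and the small network only has to correct a regularized estimate rather than invert the blur itself.

```python
import numpy as np

def tikhonov_preprocess(blurred, psf, lam=1e-2):
    """Variational pre-processing step (a sketch): Tikhonov-regularized
    deconvolution in the Fourier domain, solving
        x = argmin ||K x - y||^2 + lam * ||x||^2.
    `lam` controls the stability/accuracy trade-off; larger values damp
    noise amplification at the cost of a smoother estimate."""
    K = np.fft.fft2(psf, s=blurred.shape)        # transfer function of the blur
    Y = np.fft.fft2(blurred)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)  # closed-form Tikhonov solution
    return np.real(np.fft.ifft2(X))

def deblur(blurred, psf, small_net, lam=1e-2):
    """Stabilized pipeline: regularized pre-processing followed by a small
    network. `small_net` is a stand-in for any lightweight trained model
    mapping a pre-processed image to a refined reconstruction."""
    x0 = tikhonov_preprocess(blurred, psf, lam)
    return small_net(x0)
```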
In this paper, we study $L_1/L_2$ minimization on the gradient for imaging applications. Several recent works have demonstrated that $L_1/L_2$ is better than the $L_1$ norm when approximating the $L_0$ norm to promote sparsity. Consequently, we postulate that applying $L_1/L_2$ to the gradient is better than classic total variation (the $L_1$ norm on the gradient) at enforcing the sparsity of the image gradient. Numerically, we design a specific splitting scheme under which we can prove subsequential and global convergence of the alternating direction method of multipliers (ADMM) under certain conditions. Experimentally, we demonstrate visible improvements of $L_1/L_2$ over $L_1$ and other nonconvex regularizations for image recovery from low-frequency measurements and for two medical applications: magnetic resonance imaging and computed tomography reconstruction. Finally, we reveal empirical evidence of the superiority of $L_1/L_2$ over $L_1$ when recovering piecewise constant signals from low-frequency measurements, to shed light on future work.
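For concreteness, the regularizer can be written as a constrained model; the display below is a sketch under stated assumptions ($A$ the measurement operator, $f$ the data), and the splitting shown is one natural choice rather than necessarily the paper's exact scheme.

```latex
% Sketch of the L1/L2-on-the-gradient model (the exact data-fidelity
% or constraint handling in the paper may differ):
\[
  \min_{u}\; \frac{\|\nabla u\|_{1}}{\|\nabla u\|_{2}}
  \quad \text{subject to} \quad A u = f ,
\]
% A natural ADMM splitting introduces two copies of the gradient,
\[
  \min_{u,\,d,\,h}\; \frac{\|d\|_{1}}{\|h\|_{2}}
  \quad \text{subject to} \quad d = \nabla u,\; h = \nabla u,\; A u = f ,
\]
% so that each ADMM subproblem touches the nonconvex ratio through only
% one variable: d for the numerator, h for the denominator.
```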
Computed tomography (CT) techniques are well known for their ability to produce the high-quality images needed for medical diagnostic purposes. Unfortunately, standard CT machines are extremely large and heavy, require careful and regular calibration, and are expensive, which can limit their availability in point-of-care situations. An alternative is to use portable machines, but parameters related to the geometry of these devices (e.g., the distance between source and detector, or the orientation of the source relative to the detector) cannot always be precisely calibrated, and they may change slightly when the machine is adjusted during image acquisition. In this work, we describe the nonlinear inverse problem that models this situation and discuss algorithms that jointly estimate the geometry parameters and compute a reconstructed image. In particular, we propose a hybrid machine learning and block coordinate descent (ML-BCD) approach that uses an ML model to calibrate the geometry parameters, and uses BCD to refine the predicted parameters and reconstruct the imaged object simultaneously. We show through numerical experiments that our new method efficiently improves the accuracy of both the image and the geometry parameters.
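The alternating structure of the BCD refinement can be illustrated with a short Python sketch, assuming a user-supplied `forward(p)` that builds the system matrix $A(p)$ for geometry parameters $p$; the function name `bcd_calibrate`, the loop counts, and the crude coordinate-wise geometry search are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def bcd_calibrate(b, forward, x0, p0, n_outer=10, step=1e-3):
    """Sketch of block coordinate descent for joint calibration and
    reconstruction. `forward(p)` returns the system matrix A(p) for
    geometry parameters p; `p0` could come from an ML predictor, as in
    the proposed ML-BCD approach."""
    x, p = x0.copy(), p0.copy()
    for _ in range(n_outer):
        # Image block: for fixed geometry p, solve the linear
        # least-squares problem min_x ||A(p) x - b||.
        A = forward(p)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Geometry block: for fixed image x, a simple coordinate-wise
        # search that accepts perturbations of p reducing the residual.
        res = np.linalg.norm(A @ x - b)
        for i in range(p.size):
            for s in (+step, -step):
                q = p.copy()
                q[i] += s
                r = np.linalg.norm(forward(q) @ x - b)
                if r < res:
                    p, res = q, r
    return x, p
```

In practice the geometry block would use a derivative-based nonlinear solver rather than this coordinate search; the sketch only shows how the two blocks alternate around the shared residual.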