Multifocus image fusion using convolutional dictionary learning with adaptive contrast enhancement
Chengfang Zhang
Abstract

Multifocus image fusion integrates the in-focus regions of multiple source images, via a mathematical model, into a single fully focused, sharp image. Fusion methods based on convolutional sparse representation (CSR) learn translation-invariant filters over whole images, overcoming the patch-based method's neglect of global signal structure and its high redundancy. However, convolutional dictionary learning and CSR typically rely on the alternating direction method of multipliers and ignore model matching between the training and testing phases, so convergence suffers from tricky parameter tuning. The block proximal extrapolated gradient method using majorization with a gradient-based restarting scheme (reG-BPEG-M) adopts a momentum-coefficient formula and an adaptive restart rule to address this model mismatch. We introduce reG-BPEG-M into multifocus image fusion, updating the filters and sparse codes with two-block and multiblock schemes. Compared with other state-of-the-art fusion methods, our strategy reduces model mismatch and improves the convergence of fusion for gray and color multifocus images.
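To illustrate the CSR fusion step described above, the following is a minimal sketch of a generic CSR-based fusion rule: given sparse coefficient maps already computed for two source images (by any convolutional sparse coding solver), the maps are merged per pixel by the common max-L1 activity rule and the fused image is reconstructed as a sum of filter convolutions. The function name `csr_fuse` and all shapes are illustrative assumptions; this is not the paper's reG-BPEG-M pipeline, which concerns how the filters and codes themselves are updated.

```python
import numpy as np
from scipy.signal import convolve2d

def csr_fuse(coef_a, coef_b, filters):
    """Fuse two stacks of convolutional sparse coefficient maps.

    coef_a, coef_b: arrays of shape (K, H, W), one coefficient map per filter.
    filters: list of K small 2D kernels (the learned dictionary).
    Generic max-L1 CSR fusion sketch, not the paper's reG-BPEG-M method.
    """
    # Per-pixel activity level: L1 norm across the K coefficient maps.
    act_a = np.abs(coef_a).sum(axis=0)
    act_b = np.abs(coef_b).sum(axis=0)
    mask = act_a >= act_b  # choose-max fusion rule (ties favor image A)
    fused_coef = np.where(mask[None], coef_a, coef_b)
    # Reconstruct the fused image as sum_k d_k * x_k (2D convolutions).
    return sum(convolve2d(fused_coef[k], f, mode="same")
               for k, f in enumerate(filters))
```

In practice the coefficient maps would come from the trained convolutional dictionary, and the fused result is the full-focus output image.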

© 2021 SPIE and IS&T 1017-9909/2021/$28.00
Chengfang Zhang "Multifocus image fusion using convolutional dictionary learning with adaptive contrast enhancement," Journal of Electronic Imaging 30(5), 053016 (1 October 2021). https://doi.org/10.1117/1.JEI.30.5.053016
Received: 27 January 2021; Accepted: 17 May 2021; Published: 1 October 2021
KEYWORDS
Image fusion
Image enhancement
Associative arrays
Image contrast enhancement
Clocks
Convolution
Image filtering
