Multifocus image fusion uses a mathematical model to integrate multiple partially focused images into a single clear, all-in-focus image. Fusion based on convolutional sparse representation (CSR) learns translation-invariant filters, thereby avoiding the loss of signal structure and the high redundancy of patch-based methods. However, conventional convolutional dictionary learning and CSR rely on the alternating direction method of multipliers (ADMM) and ignore model matching between the training and testing phases, leading to convergence difficulties and tricky parameter tuning. The block proximal extrapolated gradient method using majorization with a gradient-based restarting scheme (reG-BPEG-M) adopts a momentum coefficient formula and an adaptive restart rule to resolve this model mismatch. We introduce reG-BPEG-M into multifocus image fusion to update the filters and sparse codes using two-block and multiblock schemes. Compared with other state-of-the-art fusion methods, our strategy reduces model mismatch and improves the convergence of fusion for gray and color multifocus images.
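To make the gradient-based restarting idea concrete, the sketch below applies an accelerated proximal gradient update with an adaptive (gradient-based) restart to a simple l1-regularized sparse-coding surrogate. This is only a minimal illustration of the restart rule, not the authors' reG-BPEG-M: the actual method uses per-block majorizers and alternates two-block/multiblock updates between the filters and the sparse codes, whereas here a single variable block is solved and the function names (`soft_threshold`, `fista_restart`) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista_restart(A, b, lam, n_iter=500):
    """Accelerated proximal gradient with momentum for
    min_x 0.5*||A x - b||^2 + lam*||x||_1, using a gradient-based
    adaptive restart: momentum is reset whenever the extrapolated
    step opposes the direction of descent."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # extrapolated (momentum) point
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        # adaptive restart: if the momentum step points against the
        # latest progress, drop the momentum (reset t to 1)
        if np.dot(y - x_new, x_new - x) > 0:
            t_next = 1.0
        x_prev, x, t = x, x_new, t_next
    return x

# Small synthetic demo: recover a sparse code from random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[:5] = 1.0
b = A @ x_true
x_hat = fista_restart(A, b, lam=0.1)
```

In reG-BPEG-M this kind of restarted, extrapolated proximal update is applied block by block, which is what allows the same optimization model to be used consistently in both the dictionary-learning (training) and sparse-coding (testing) phases.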
Cited by 2 scholarly publications.

Keywords: Image fusion, image enhancement, associative arrays, image contrast enhancement, clocks, convolution, image filtering