Abstract:
Research on deep learning-based methods for image fusion has become a current hotspot. Medical image fusion, however, suffers from a scarcity of training samples and lacks a unified end-to-end model that can accept different modal pairs as input. In this article, we propose a two-level dynamic adaptive network for medical image fusion, which addresses these two problems and provides a unified fusion framework that exploits different modal pairs. Specifically, we develop a dynamic meta-learning method at the task level, which transfers meta-knowledge from the heterogeneous task of multifocus image fusion to medical image fusion via dynamic convolution decomposition (DCD). We then provide an efficient adaptive fusion method at the multimodal feature level, which uses a dynamic attention mechanism and a dynamic channel fusion mechanism to fuse features of different aspects. For model evaluation, we conduct qualitative and quantitative tests on the transferred multifocus deep network and verify its superior fusion performance. On this basis, experiments on public datasets of the two most commonly used modal pairs (computerized tomography (CT)-magnetic resonance imaging (MRI) and positron emission tomography (PET)-MRI) show that our hierarchical model outperforms state-of-the-art methods in terms of both visual quality and quantitative measurement. Our code is publicly available at https://github.com/zhanglabNKU/TDAN.
Published in: IEEE Transactions on Instrumentation and Measurement ( Volume: 71)
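To make the dynamic mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch of dynamic convolution decomposition applied to a 1x1 convolution, where the static weight receives a low-rank, input-conditioned correction, W(x) = W0 + P Phi(x) Q^T, with Phi(x) an L x L dynamic channel-fusion matrix predicted from the input. The class name, the latent dimension, and the pooling-based predictor of Phi(x) are illustrative assumptions for this sketch, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn as nn


class DynamicConvDecomposition(nn.Module):
    """Sketch of DCD for a 1x1 conv: W(x) = W0 + P @ Phi(x) @ Q^T,
    where Phi(x) is an LxL dynamic channel-fusion matrix predicted
    from a pooled summary of the input feature map."""

    def __init__(self, in_ch: int, out_ch: int, latent_dim: int = 8):
        super().__init__()
        self.latent_dim = latent_dim
        # static weight and the low-rank projection matrices
        self.W0 = nn.Parameter(torch.randn(out_ch, in_ch) * 0.01)
        self.P = nn.Parameter(torch.randn(out_ch, latent_dim) * 0.01)
        self.Q = nn.Parameter(torch.randn(in_ch, latent_dim) * 0.01)
        # lightweight head predicting the LxL dynamic channel-fusion matrix
        self.phi = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_ch, latent_dim * latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C_in, H, W)
        b = x.shape[0]
        phi = self.phi(x).view(b, self.latent_dim, self.latent_dim)
        # per-sample dynamic weight: W(x) = W0 + P Phi(x) Q^T, shape (B, C_out, C_in)
        w_dyn = self.W0 + torch.einsum('ol,bls,is->boi', self.P, phi, self.Q)
        # apply the per-sample weight as a 1x1 convolution
        return torch.einsum('boi,bihw->bohw', w_dyn, x)


if __name__ == "__main__":
    layer = DynamicConvDecomposition(in_ch=16, out_ch=32)
    y = layer(torch.randn(2, 16, 64, 64))
    print(y.shape)  # torch.Size([2, 32, 64, 64])
```

The design choice illustrated here is that only the small LxL matrix is input-dependent, so the layer adapts per sample while keeping the parameter and compute overhead far below predicting a full dynamic kernel.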