Abstract:
Reconstructing clear scene images in the presence of foreground occlusions remains a formidable challenge for cameras constrained to a single viewpoint. Synthetic aperture imaging (SAI) has emerged as a solution by integrating visual information from multiple viewpoints, thus overcoming occlusions. Event cameras, characterized by their high temporal resolution, exceptional dynamic range, low power consumption, and resistance to motion blur, offer a promising avenue for capturing intricate details of background objects within a limited range of motion. Leveraging these capabilities, several event-camera-based SAI methods have been proposed to tackle dense occlusions effectively. Despite these advances, existing methods face obstacles stemming from the hybrid model they employ, comprising a spiking neural network (SNN) encoder and a convolutional neural network (CNN) decoder: information degrades within the SNN encoder because full-precision data are quantized into binary spikes, and the CNN decoder lacks sufficient training data to adequately learn feature extraction. In response, we present an enhanced event-based image de-occlusion approach. Our method introduces a novel full-precision leaky integrate-and-fire (FP-LIF) mechanism to mitigate information loss within the SNN encoder. Additionally, we propose an isomorphic network knowledge distillation method to strengthen the feature extraction capabilities of the CNN decoder. Experimental results demonstrate the efficacy of our approach in enhancing event-camera-based SAI.
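To make the quantization issue concrete: a standard LIF neuron integrates input into a membrane potential but emits only a binary spike once the threshold is crossed, discarding sub-threshold detail. The abstract does not specify the exact form of the proposed FP-LIF, so the sketch below is only an illustration of the general idea, assuming FP-LIF replaces the binary spike with a graded, full-precision output; the function name, time constant, and soft-reset rule are all hypothetical.

```python
import torch

def lif_encoder(x_seq, tau=0.5, v_th=1.0, full_precision=False):
    """Toy leaky integrate-and-fire step over a sequence of event bins.

    x_seq: tensor of shape (T, N) -- T time bins of input current.
    full_precision=False: standard LIF, emitting binary spikes and thus
    quantizing the membrane state. full_precision=True: a hypothetical
    FP-LIF stand-in that emits a graded output instead of a 0/1 spike.
    """
    v = torch.zeros_like(x_seq[0])           # membrane potential
    outputs = []
    for x in x_seq:                          # iterate over time bins
        v = tau * v + x                      # leaky integration
        if full_precision:
            out = torch.sigmoid(v - v_th)    # graded, full-precision output
            v = v - out * v_th               # soft reset scaled by the output
        else:
            out = (v >= v_th).float()        # binary spike (information loss)
            v = v * (1.0 - out)              # hard reset on spike
        outputs.append(out)
    return torch.stack(outputs)              # (T, N) code fed to the decoder

# Compare the two encodings on random input: the binary code collapses
# each step to {0, 1}, the graded code preserves sub-threshold structure.
x = torch.rand(8, 4)
print(lif_encoder(x, full_precision=False))
print(lif_encoder(x, full_precision=True))
```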
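Likewise, "isomorphic network knowledge distillation" suggests a teacher network with the same architecture as the CNN decoder guiding the student's features. The abstract gives no training recipe, so the following is a minimal feature-level distillation sketch under that assumption; SmallDecoder, the loss weighting alpha, and the input shapes are all illustrative, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallDecoder(nn.Module):
    """Tiny stand-in for the CNN decoder; returns (features, output)."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.feat = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 1, 3, padding=1)

    def forward(self, x):
        f = self.feat(x)
        return f, self.head(f)

# Isomorphic pair: identical architecture, independent weights. The teacher
# is assumed pre-trained (here it is just frozen for illustration).
student = SmallDecoder(in_ch=2)
teacher = SmallDecoder(in_ch=2)
teacher.eval()

def distill_loss(student_in, teacher_in, alpha=0.5):
    """Task loss on outputs plus an MSE term matching intermediate features."""
    with torch.no_grad():
        t_f, t_out = teacher(teacher_in)     # frozen teacher pass
    s_f, s_out = student(student_in)
    return F.l1_loss(s_out, t_out) + alpha * F.mse_loss(s_f, t_f)

loss = distill_loss(torch.rand(1, 2, 32, 32), torch.rand(1, 2, 32, 32))
loss.backward()
```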
Published in: IEEE Signal Processing Letters (Volume: 31)