Abstract:
Low-light images often suffer from severe detail loss in darker areas and non-uniform illumination across distinct regions. Structure modeling and region-specific illumination manipulation are therefore crucial for generating high-quality enhanced images. However, previous methods fall short in exploring robust structure priors and lack adequate modeling of illumination relationships among different regions, resulting in structure artifacts and color deviations. To alleviate these limitations, we propose a Segmentation-Guided Framework (SGF) that integrates constructed robust segmentation priors to guide the enhancement process. Specifically, SGF first constructs a robust image-level edge prior based on the segmentation results of the Segment Anything Model (SAM) in a zero-shot manner. Then, we generate a lighted-up, region-aware feature-level prior by incorporating region-aware dynamic convolution. To adequately model long-distance illumination interactions across distinct regions, we design a segmentation-guided transformer block (SGTB), which utilizes the lighted-up region-aware feature-level prior to guide its self-attention calculation. Arranging the SGTBs in a symmetric hierarchical structure yields a segmentation-guided enhancement module that operates under the guidance of both the image-level and feature-level priors. Comprehensive experimental results show that our SGF performs remarkably well in both quantitative evaluation and visual comparison.
Published in: IEEE Transactions on Multimedia (Volume: 26)
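To make the prior-guided attention idea concrete, below is a minimal PyTorch sketch of how a region-aware feature-level prior could steer self-attention inside a block like the SGTB. All names here (SegGuidedAttention, prior_proj) and the sigmoid-gating design are illustrative assumptions for exposition, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class SegGuidedAttention(nn.Module):
    """Hypothetical sketch of segmentation-guided self-attention.

    A region-aware prior (e.g., features derived from SAM segmentation
    masks) gates the query/key projections, so attention weights are
    biased toward tokens from regions with related illumination. This is
    an assumption-laden illustration, not the paper's exact SGTB.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        # Projects the region-aware prior into per-token gating weights.
        self.prior_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, region_prior: torch.Tensor) -> torch.Tensor:
        # x, region_prior: (batch, tokens, dim)
        b, n, d = x.shape
        h = self.num_heads
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        # Gate queries and keys with the prior so that tokens belonging to
        # similarly illuminated regions attend to each other more strongly.
        gate = torch.sigmoid(self.prior_proj(region_prior))
        q, k = q * gate, k * gate
        # Split heads: (batch, heads, tokens, dim_per_head).
        q, k, v = (t.reshape(b, n, h, d // h).transpose(1, 2) for t in (q, k, v))
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.out(out)


# Example: 64 patch tokens of a feature map, guided by a prior of the same shape.
x = torch.randn(1, 64, 32)
prior = torch.randn(1, 64, 32)
y = SegGuidedAttention(dim=32)(x, prior)  # -> (1, 64, 32)
```

Gating queries and keys is one simple way to inject region guidance into the attention map; the paper's actual guidance mechanism inside the SGTB may differ in where and how the prior is injected.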