Abstract:
Satellite image sequence prediction is a crucial and challenging task. Previous studies apply optical flow methods or existing deep learning models for spatial–temporal sequence prediction to this task. However, for long-term forecasting these approaches suffer either from oversimplified model assumptions or from blurry predictions and accumulated sequential errors. In this article, we propose a novel multiscale time conditional generative adversarial network (MSTCGAN). To avoid sequential error accumulation, MSTCGAN adopts a parallel prediction framework that produces future image sequences conditioned on a one-hot time input. In addition, a powerful multiscale generator with multihead axial attention is designed to carefully preserve fine-grained details for appearance consistency. Moreover, we develop a temporal discriminator to address the blurriness issue and maintain motion consistency in the predictions. Extensive experiments on the FengYun-4A satellite dataset demonstrate the effectiveness and superiority of the proposed method over state-of-the-art approaches.
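The key idea behind the parallel prediction framework can be illustrated with a minimal sketch: each future frame is generated independently from the same observed context, conditioned on a one-hot encoding of its lead time, instead of feeding each prediction back as input to the next step. The `toy_generator` below is a hypothetical stand-in, not the paper's actual multiscale GAN generator; only the one-hot conditioning and the parallel (non-autoregressive) call pattern reflect the described framework.

```python
import numpy as np

def one_hot(t, horizon):
    """One-hot time condition for lead step t (0-indexed) over `horizon` steps."""
    v = np.zeros(horizon, dtype=np.float32)
    v[t] = 1.0
    return v

def toy_generator(context, t_cond):
    """Hypothetical placeholder for the generator: it just offsets the last
    observed frame by the decoded lead time so the example is runnable."""
    lead = int(np.argmax(t_cond)) + 1
    return context[-1] + 0.1 * lead  # placeholder dynamics, not real physics

def predict_parallel(context, horizon, generator=toy_generator):
    """Every frame is produced from the same context and its own time
    condition, so errors from one lead time never feed into the next."""
    return [generator(context, one_hot(t, horizon)) for t in range(horizon)]

# 4 past frames of an 8x8 "satellite image" sequence
context = np.zeros((4, 8, 8), dtype=np.float32)
frames = predict_parallel(context, horizon=3)
```

By contrast, an autoregressive model would compute frame t+1 from its own prediction of frame t, which is where sequential error accumulation arises.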
Published in: IEEE Transactions on Geoscience and Remote Sensing ( Volume: 60)