Abstract:
Traditional uncertainty quantification (UQ) algorithms are mostly developed for a fixed time (term), such as hourly or daily predictions. Although a few UQ techniques can compute uncertainty over a time range, the quantified uncertainty is usually ever-increasing and non-smooth. In reality, however, uncertainty can be lower at certain future times. Therefore, this paper presents a neural network (NN) training procedure for both short-term and long-term uncertainty quantification to investigate the level of uncertainty over different terms. The training procedure is similar to the conventional lower-upper bound estimation (LUBE) method. The proposed input combination consists of traditional input components plus the term of the prediction, and the proposed output is the UQ for a sample at that term. Estimation of sub-sample values, initial training with rough targets, and quality balancing over different term ranges result in faster training and more uniform prediction intervals. According to the outputs of the trained NNs, uncertainty increases from the very short term to the short term, but it may decrease in the midterm or the long term. Moreover, the uncertainty may contain a periodic component over time in the long term. We also provide explanations of such periodicity in the uncertainty-versus-time curves.
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence (Volume: 5, Issue: 5, October 2021)
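The abstract describes a LUBE-style interval network whose input vector is augmented with the prediction term (horizon) and whose two outputs are the lower and upper bounds of the prediction interval at that term. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes a small one-hidden-layer network, a coverage-width-based cost (CWC-like), and a simple derivative-free random-search optimizer as a stand-in for the metaheuristics typically used to train LUBE models. All names, data, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): interval-predicting NN whose
# input includes the prediction term, trained against a CWC-like cost.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x = [traditional inputs..., term]; y = target observed at that term.
n, d_in, hidden = 256, 4, 16             # 3 "traditional" features + 1 term feature
X = rng.normal(size=(n, d_in))
X[:, -1] = rng.uniform(1, 48, size=n)    # term of prediction, e.g. hours ahead
y = X[:, 0] + 0.1 * X[:, -1] * rng.normal(size=n)  # noise grows with the term

def unpack(theta):
    """Split a flat parameter vector into MLP weights (1 hidden layer, 2 outputs)."""
    i = 0
    W1 = theta[i:i + d_in * hidden].reshape(d_in, hidden); i += d_in * hidden
    b1 = theta[i:i + hidden]; i += hidden
    W2 = theta[i:i + hidden * 2].reshape(hidden, 2); i += hidden * 2
    b2 = theta[i:i + 2]
    return W1, b1, W2, b2

def predict_interval(theta, X):
    """Forward pass producing [lower, upper] bounds per sample."""
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return np.minimum(out[:, 0], out[:, 1]), np.maximum(out[:, 0], out[:, 1])

def cwc_cost(theta, X, y, mu=0.90, eta=50.0):
    """Coverage-width criterion: penalize narrow intervals only when PICP < mu."""
    lower, upper = predict_interval(theta, X)
    picp = np.mean((y >= lower) & (y <= upper))           # coverage probability
    pinaw = np.mean(upper - lower) / (y.max() - y.min())  # normalized average width
    penalty = np.exp(-eta * (picp - mu)) if picp < mu else 0.0
    return pinaw * (1.0 + penalty)

# Derivative-free training loop (simple stand-in for LUBE's metaheuristic search).
n_params = d_in * hidden + hidden + hidden * 2 + 2
best = rng.normal(scale=0.1, size=n_params)
best_cost = cwc_cost(best, X, y)
for _ in range(2000):
    cand = best + rng.normal(scale=0.1, size=n_params)
    c = cwc_cost(cand, X, y)
    if c < best_cost:
        best, best_cost = cand, c

lo, up = predict_interval(best, X)
print(f"cost={best_cost:.3f}  PICP={np.mean((y >= lo) & (y <= up)):.2f}  "
      f"mean width={np.mean(up - lo):.2f}")
```

Because the term is an ordinary input feature, the trained network can be queried at any horizon, so the interval width as a function of the term need not be monotonically increasing, which is the behavior the abstract highlights for midterm and long-term predictions.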