Abstract:
In recent years, volumetric videos have brought immersive experiences to users. Existing viewport-based volumetric video streaming (VVS) systems prune the point cloud according to visibility to reduce bandwidth consumption, leading to better responsiveness. They also predict bandwidth and allocate bitrate to different parts of the video to enhance Quality-of-Experience (QoE). However, such designs sometimes result in drastic quality fluctuations in real-world deployment due to limited generalization performance. Our measurements show that these systems tend to suffer a significant accuracy loss under unseen Out-of-Distribution (OoD) environments. On the other hand, the open-world prediction/adaptation problem has been addressed in recent reinforcement learning advances, particularly through prompt-based few-shot and zero-shot learning. Inspired by this development, in this work we first reformulate volumetric bitrate adaptation (volumetric ABR) as a sequence prediction problem, and then design a volumetric causal transformer algorithm, FewVV, to solve it. We train our model on a large action-trajectory dataset and evaluate it on various OoD scenarios. The results show that FewVV consistently outperforms existing systems in both performance and generalization.
Published in: IEEE Internet of Things Journal (Volume: 11, Issue: 19, 01 October 2024)
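
To make the abstract's key idea concrete, the sketch below illustrates what "reformulating volumetric ABR as a sequence prediction problem solved by a causal transformer" can look like: a short history of per-step observations is embedded as a token sequence, a causally masked transformer encodes it, and a head predicts the next bitrate action. This is not the paper's implementation; the class name, feature choices, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (assumed names/features, not FewVV's actual architecture):
# casting volumetric bitrate adaptation as causal sequence prediction.
import torch
import torch.nn as nn

class CausalABRTransformer(nn.Module):
    def __init__(self, obs_dim=6, n_bitrates=5, d_model=64,
                 n_heads=4, n_layers=2, max_len=64):
        super().__init__()
        # Each timestep's observation (e.g., predicted bandwidth, buffer
        # occupancy, viewport-visible point count) becomes one token.
        self.obs_embed = nn.Linear(obs_dim, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Head scores the discrete bitrate levels at every step.
        self.action_head = nn.Linear(d_model, n_bitrates)

    def forward(self, obs_seq):
        # obs_seq: (batch, seq_len, obs_dim)
        b, t, _ = obs_seq.shape
        pos = torch.arange(t, device=obs_seq.device)
        x = self.obs_embed(obs_seq) + self.pos_embed(pos)
        # Causal mask: each step attends only to past context.
        mask = torch.triu(
            torch.full((t, t), float('-inf'), device=obs_seq.device),
            diagonal=1)
        h = self.encoder(x, mask=mask)
        return self.action_head(h)  # (batch, seq_len, n_bitrates) logits

# Usage: pick a bitrate for the current segment from the last 16 steps.
model = CausalABRTransformer()
history = torch.randn(1, 16, 6)           # hypothetical observation history
logits = model(history)
next_bitrate = logits[:, -1].argmax(-1)   # greedy action for the latest step
```

Training such a model on logged action trajectories and then conditioning it on a short in-context history at deployment is one common way to pursue the few-shot/zero-shot generalization the abstract targets.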