Abstract:
Hearing aids, and embedded devices more generally, have undergone significant evolution, becoming capable of substantial on-device computation. However, next-generation speech processing also relies on computationally expensive deep neural networks. It is therefore often preferable to execute the most demanding parts on a central, more powerful device, particularly for localisation, which has less strict latency requirements. This, however, requires wireless data transmission between the on-ear devices and the central unit, which can adversely affect battery life. In this work, we compare different strategies for bandwidth-efficient deep localisation. One strategy is to send the audio signals directly to the central device and use audio codecs, such as LC3plus, to minimise the bandwidth. An alternative is to adapt the cooperative localisation method to binaural hearing and investigate ways to reduce its bandwidth. The cooperative model first processes the microphone signals locally before transmitting features to the central processor for further analysis. We investigate quantisation, time compression, and dimensionality reduction of these features. The cooperative model proved slightly better in high-SNR scenarios, while the audio transmission model performed better in low-SNR cases.
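As a rough illustration of the trade-off the abstract describes, the sketch below (not the authors' implementation; feature dimension, frame rate, codec bitrate, and bit depths are hypothetical placeholder values) compares the per-second payload of streaming LC3plus-coded audio against transmitting uniformly quantised localisation features from the on-ear device.

```python
# Illustrative sketch only: rough per-second payload comparison between
# (a) streaming codec-compressed audio and (b) sending quantised feature
# vectors computed locally on the hearing aid. All parameter values are
# hypothetical and not taken from the paper.
import numpy as np


def audio_bandwidth_bps(codec_bitrate_kbps: float = 32.0) -> float:
    """Bandwidth for streaming codec-compressed audio (e.g. LC3plus at an assumed 32 kbps)."""
    return codec_bitrate_kbps * 1000.0


def feature_bandwidth_bps(feat_dim: int = 64, frames_per_s: int = 50, bits: int = 8) -> float:
    """Bandwidth for transmitting quantised feature vectors instead of audio."""
    return float(feat_dim * frames_per_s * bits)


def quantise(features: np.ndarray, bits: int = 8) -> np.ndarray:
    """Uniform quantisation of a feature block to the given bit depth."""
    lo, hi = features.min(), features.max()
    levels = 2 ** bits - 1
    q = np.round((features - lo) / (hi - lo + 1e-12) * levels)
    return q.astype(np.uint16)


if __name__ == "__main__":
    feats = np.random.randn(50, 64)  # one second of hypothetical feature frames
    q = quantise(feats, bits=8)
    print(f"audio stream  : {audio_bandwidth_bps():.0f} bit/s")
    print(f"feature stream: {feature_bandwidth_bps():.0f} bit/s")
```

Lowering the feature dimension, frame rate, or bit depth in this toy model shrinks the feature payload further, which mirrors the quantisation, time compression, and dimensionality-reduction strategies the paper investigates.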
Date of Conference: 09-12 September 2024
Date Added to IEEE Xplore: 04 October 2024