Impact Statement:
Counting individuals in crowds serves numerous purposes, including managing public events, ensuring safety, analyzing crowd behavior, and managing disasters. However, many of the current deep learning methods for crowd counting depend on pretrained networks for feature extraction. These methods often lead to the design of large models that require significant memory storage, posing deployment challenges on edge devices. This article presents a new deep learning model that employs shunting inhibition to establish a compact network architecture. The results from our experiments show that the proposed model provides competitive performance in crowd counting while utilizing fewer network parameters.
Abstract:
Image-based crowd counting has gained significant attention due to its widespread applications in security and surveillance. Recent advancements in deep learning have led to the development of numerous methods that have achieved remarkable success in accurately counting crowds. However, many of the existing deep learning methods, which have large model sizes, are unsuitable for deployment on edge devices. This article introduces a novel network architecture and processing element designed to create an efficient and compact deep learning model for crowd counting. The processing element, referred to as the shunting inhibitory neuron, generates complex decision boundaries, making it more powerful than the traditional perceptron. It is employed in both the encoder and decoder modules of the proposed model for feature extraction. Furthermore, the decoder includes alternating convolutional and transformer layers, which provide local receptive fields and global self-attention, respectively. T...
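As a rough illustration of the processing element described above, the sketch below implements one common formulation of a shunting inhibitory neuron, in which an excitatory drive is divided by a shunting (divisive) inhibitory term; the division makes the response nonlinear in the inputs, so a single such neuron can carve out curved decision boundaries that a lone perceptron cannot. The exact activation functions, weight layout, and the bias term `a` here are illustrative assumptions, not the article's specification.

```python
import numpy as np

def shunting_inhibitory_neuron(x, w_exc, b_exc, w_inh, b_inh, a=1.0):
    """One common shunting-inhibitory formulation (illustrative):
    excitatory drive divided by a strictly positive inhibitory term."""
    excitatory = np.dot(w_exc, x) + b_exc
    # |tanh(.)| keeps the inhibitory contribution non-negative, and the
    # passive decay constant a > 0 prevents division by zero.
    inhibitory = a + np.abs(np.tanh(np.dot(w_inh, x) + b_inh))
    return excitatory / inhibitory

# Example: the divisive term modulates the linear excitatory response.
x = np.array([1.0, -0.5])
out = shunting_inhibitory_neuron(
    x,
    w_exc=np.array([0.3, 0.7]), b_exc=0.1,
    w_inh=np.array([0.2, -0.4]), b_inh=0.0,
)
print(out)
```

Because the inhibition enters multiplicatively rather than additively, the neuron's gain adapts to its input, which is the property the article exploits to build compact encoder and decoder modules.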
Published in: IEEE Transactions on Artificial Intelligence (Volume 5, Issue 11, November 2024)