Abstract:
Accurate prediction of crowd mobility dynamics is essential for effective crowd safety management. Traditional models, such as the Random Waypoint Model (RWM), have been widely employed for mobility simulation; however, they suffer from inherent limitations, including non-uniform node distribution and speed decay, which reduce their predictive accuracy. Although vision-based crowd mobility tracking methods show promise, movement prediction still faces significant challenges in accuracy, scalability, generalization, and computational efficiency. This paper introduces AIM-CP, a novel AI-based multi-visual-component analysis approach for crowd dynamics prediction. The approach combines the YOLOv8 object detection system with a rule-based expert system to enhance prediction accuracy. AIM-CP integrates multi-modal data from face, body, and pose movements across 12 distinct classes, improving real-time processing capabilities and leveraging more scalable and robust AI models. To support this approach, we developed high-resolution datasets, including mall datasets, the CrowdHuman dataset, selected video frames from YouTube, and images from Unsplash. Experimental results demonstrate that AIM-CP is highly effective at detecting mobility dynamics, significantly outperforming single-modal prediction models with an accuracy of 90.12% in real-world mobility scenarios. These findings suggest that AIM-CP offers a powerful tool for improving crowd safety and mobility management in dynamic environments.
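The abstract does not disclose implementation details, so the following is only a minimal sketch of how YOLOv8 detections could be fused with hand-written rules in the spirit described. It assumes the `ultralytics` Python package and its standard pretrained weights; the model names, thresholds, and mobility labels below are illustrative placeholders, not the paper's actual 12-class pipeline or expert-system rule base.

```python
# Illustrative sketch only: rule-based fusion of YOLOv8 body and pose detections.
# Assumes `pip install ultralytics` and standard pretrained weights.
from ultralytics import YOLO

# Hypothetical stand-ins for the paper's detectors (its 12-class model is not public).
body_model = YOLO("yolov8n.pt")        # generic COCO detector; person class id = 0
pose_model = YOLO("yolov8n-pose.pt")   # keypoint model used here as a pose proxy

def classify_frame(frame):
    """Return a coarse mobility label for one frame via simple rule-based fusion."""
    bodies = body_model(frame, verbose=False)[0]
    poses = pose_model(frame, verbose=False)[0]

    # Rule 1 (illustrative): count confident person detections.
    person_count = int(((bodies.boxes.cls == 0) & (bodies.boxes.conf > 0.5)).sum())

    # Rule 2 (illustrative): treat detected skeletons as evidence of active movement.
    pose_count = len(poses.keypoints) if poses.keypoints is not None else 0

    if person_count == 0:
        return "empty_scene"
    if person_count > 20:
        return "dense_crowd"
    if pose_count >= person_count // 2:
        return "active_movement"
    return "sparse_static"

# Example usage on a single image file or video frame (NumPy array):
# label = classify_frame("crowd_frame.jpg")
```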
Date of Conference: 04-06 December 2024
Date Added to IEEE Xplore: 19 December 2024