Abstract:
Large visual models have recently demonstrated promising performance in zero-shot transfer. However, so far, no existing method explicitly possesses the ability to perform zero-shot transfer on parking-slot detection; as a result, current deep-learning based methods rely on training datasets, while methods based on traditional computer vision exhibit poor robustness. In this paper, we propose a parking-slot detection method built on a large visual model (Segment Anything), which segments an around-view image and infers parking slots by analyzing the relationships among marking points in the resulting masks. In addition, we classify real-world parking slots into two categories: line-based and area-based. The proposed method employs a two-stage approach with a manually designed, training-free post-processing step. Multiple experiments carried out on public benchmarks demonstrate our method's capability for zero-shot transfer. The code will be released at https://github.com/Zhai0123/SAM-PS.
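The abstract's second stage, inferring parking slots from the geometric relationships among marking points in segmentation masks, can be sketched as a simple pairing rule. The snippet below is a hypothetical illustration, not the authors' released code: it assumes marking-point coordinates have already been extracted from the SAM masks, and the width thresholds (in pixels of the around-view image) are illustrative assumptions.

```python
import math

# Illustrative thresholds (assumptions, not from the paper): a pair of
# marking points forms a candidate slot entrance only if their distance
# lies within a plausible entrance-width range.
MIN_WIDTH, MAX_WIDTH = 120, 260

def pair_marking_points(points):
    """Return candidate entrance pairs (i, j), i < j, whose point
    separation falls within [MIN_WIDTH, MAX_WIDTH]."""
    pairs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if MIN_WIDTH <= math.dist(points[i], points[j]) <= MAX_WIDTH:
                pairs.append((i, j))
    return pairs

# Four marking points forming two vertical entrance lines 200 px long;
# the 300+ px cross-distances are rejected by the width gate.
points = [(100, 50), (100, 250), (400, 50), (400, 250)]
print(pair_marking_points(points))  # [(0, 1), (2, 3)]
```

In the actual method, such geometric reasoning is part of the manually designed post-processing step, so no training data is needed; the real pipeline would additionally distinguish line-based from area-based slots and validate entrance orientation.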
Published in: 2024 IEEE Intelligent Vehicles Symposium (IV)
Date of Conference: 02-05 June 2024
Date Added to IEEE Xplore: 15 July 2024