
Neurocomputing

Volume 116, 20 September 2013, Pages 144-149

Integrated real-time vision-based preceding vehicle detection in urban roads

https://doi.org/10.1016/j.neucom.2011.11.036

Abstract

This paper presents a solution algorithm for the real-time operation of vision-based preceding vehicle detection systems. The algorithm contains two main components: vehicle detection and vehicle tracking. Vehicle detection is achieved by using vehicle shadow features to define a region of interest (ROI). Methods such as histogram equalization, ROI entropy, and the mean of the edge image are then adopted to determine the exact vehicle rear box. In this way, vehicles can be detected in video images. In the vehicle tracking process, the predicted box is verified and updated, and important parameters, such as the relative distance and velocity and the number and type of tracked vehicles, are extracted. The proposed algorithm has been tested under different traffic conditions in Hong Kong urban areas. Test results demonstrate that it achieves good detection accuracy and satisfactory computational performance.

Introduction

Vision-based preceding vehicle detection systems have many applications. For example, they can be used to help drivers perceive potentially dangerous situations and avoid accidents by sensing and understanding the environment around the vehicle [1], [2], [3], [4], [5], [6], [7]. Monitoring traffic conditions using video images captured at fixed locations is now common practice [8], [9], [10], [11]. Analyzing video sequences of traffic flow in a dynamic setting (i.e., from a camera installed on a moving vehicle) offers considerable improvements over existing methods of traffic data collection and road traffic monitoring. By detecting vehicles in road networks, real-time traffic parameters, such as the presence and number of vehicles, speed distributions, turning flows at intersections, queue lengths, and space and time occupancy rates, can be acquired and analyzed. In autonomous vehicle guidance, knowledge of the road geometry allows a vehicle to follow its route, and detecting road obstacles becomes a necessary and important task for avoiding collisions with other vehicles [12].

Most visual vehicle detection systems follow two basic steps: Hypothesis Generation (HG), which hypothesizes the locations of vehicles in images, and Hypothesis Verification (HV), which verifies those hypotheses [13]. Vision-based vehicle detection algorithms can be classified into three groups: model-based, learning-based, and feature-based methods. The model-based method matches vehicle candidates in images against vehicle models stored in the computer. Its limitation is the reliance on detailed geometric object models: it is unrealistic to build such models for every vehicle that could be found on the roadway [14], [15], [16]. The learning-based method trains the system with a set of typical images, and the trained classifier is then used to identify test images [17], [18], [19], [20], [21], [22]. It is usually employed to confirm detection; that is, the trained classifier is used to confirm whether an extracted ROI is a vehicle or not. If ROIs are not first extracted by a detection algorithm, the whole image has to be scanned, which is very slow [23]. Feature-based methods attempt to identify certain sub-features of vehicles, such as distinguishable points or lines, symmetry, edges, and shadows [24], [25], [26], [27]. The advantage of this approach is that some features of a moving object remain visible even under partial occlusion. Furthermore, the same algorithms can be used for detection in daylight, twilight, or night-time conditions: the approach is self-regulating because it selects the most salient features under the given conditions. The main drawback is that if the features are not sufficiently present in the image, the vehicle will be missed.
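The two-stage HG/HV structure described above can be sketched as a generic pipeline; the stage functions here are hypothetical placeholders (a cheap cue such as shadow detection for HG, a trained classifier for HV), not the paper's implementation:

```python
def detect_vehicles(frame, generate_hypotheses, verify):
    """Two-stage detection sketch.

    HG: generate_hypotheses(frame) proposes candidate ROIs from cheap cues.
    HV: verify(frame, roi) confirms or rejects each candidate.
    Both callables are assumed interfaces, not functions from the paper.
    """
    candidates = generate_hypotheses(frame)           # HG: cheap feature cues
    return [roi for roi in candidates if verify(frame, roi)]  # HV: confirm
```

Restricting the (expensive) verifier to HG candidates is what avoids scanning the whole image, as noted above for learning-based confirmation.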

Developing a reliable preceding vehicle detection system using monocular vision is a difficult task, as depth information is unavailable. Moreover, buildings and trees surrounding the road do not appear stationary in the video, which disturbs vehicle detection. Assuming the road is flat and the lane markings are straight, the road area projects to a simple triangle in the image. The road boundaries and lane markings can then be detected by searching for collections of pixels that do not fit the statistical intensity of the road [28].
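Under the flat-road, straight-lane assumption, the road triangle can be taken as the region bounded by the vanishing point (apex) and the bottom image corners. A minimal geometric sketch, assuming a known vanishing point (the paper does not give this exact construction):

```python
def road_triangle(image_width, image_height, vanishing_point):
    """Vertices of the assumed road triangle: apex at the vanishing
    point, base along the bottom row of the image."""
    vx, vy = vanishing_point
    return [(vx, vy), (0, image_height - 1), (image_width - 1, image_height - 1)]

def inside_triangle(p, tri):
    """Point-in-triangle test via the signs of three cross products."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(tri[0], tri[1], p)
    d2 = cross(tri[1], tri[2], p)
    d3 = cross(tri[2], tri[0], p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # inside iff all signs agree
```

Restricting all later processing to this triangle discards most of the off-road background (buildings, trees) that would otherwise disturb detection.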

The vehicle detection process can then be carried out within the road triangle. To detect the preceding vehicle with a feature-based method, the use of 2D features such as shadow, rear lights, symmetry, texture, edges, and shape has been widely studied [24], [25], [26], [27], [29], [30], [31]. Betke et al. [29] developed a real-time vision system that analyzed color videos taken from a forward-looking camera in a car driving on a highway. The system used a combination of color, edge, and motion information to recognize and track the road boundaries, lane markings, and other vehicles on the road. However, at night on city expressways, with many city lights in the background, the system has problems finding vehicle outlines and distinguishing vehicles on the road from obstacles in the background. Huang et al. [31] used the shadow underneath a vehicle as a sign pattern for preceding vehicle detection, with a classic Sobel edge operator used to detect the horizontal shadow points. In our system, another Sobel edge operator is introduced to increase the contrast of the horizontal shadow points. A side effect of this feature enhancement is that certain interferences are also enhanced and can be mistaken for vehicles. These false positives are removed by exploiting the difference in texture between vehicle ROIs and non-vehicle ROIs. In this way, the vehicle detection rate is improved.
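The horizontal-edge step can be illustrated with the standard horizontal Sobel kernel, which responds strongly to dark-to-bright vertical transitions such as the underside of a vehicle shadow. This is a generic sketch of the classic operator, not the paper's exact enhancement:

```python
def sobel_horizontal(img):
    """Convolve a 2-D grayscale image (list of lists of ints) with the
    horizontal Sobel kernel [[-1,-2,-1],[0,0,0],[1,2,1]].

    Large positive responses mark rows where intensity jumps from dark
    (above) to bright (below), e.g. the lower edge of an underbody
    shadow. Border pixels are left at 0 for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (
                -img[y - 1][x - 1] - 2 * img[y - 1][x] - img[y - 1][x + 1]
                + img[y + 1][x - 1] + 2 * img[y + 1][x] + img[y + 1][x + 1]
            )
    return out
```

Rows of consistently strong responses inside the road triangle are then natural candidates for the bottom of a vehicle ROI.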

Verifying detections with tracking is a common vision-based approach, in which the vehicle's locations are predicted in the following frames [29], [32], [33], [34], [35]. Vehicle detection can be improved considerably, in both accuracy and efficiency, by taking advantage of the temporal continuity present in the data. Shen [35] proposed a position prediction mechanism to track the vehicle based on a constant-speed model. In this paper, we propose a more general vehicle position prediction formula, which covers not only the constant-velocity case but also vehicle acceleration and deceleration. In this way, the detected vehicle is located more precisely in the following frames.

The vehicle detection method based on shadow, luminance and vehicle edges will be introduced in Section 2. The improved vehicle prediction and tracking method for vehicle detection validation is given in Section 3. Some experimental results will be shown in Section 4 to demonstrate the advantages of our system. Finally, conclusions and discussions of this study will be given in Section 5.

Section snippets

Vehicle detection with shadow feature

We assume that the road triangle, which contains all our objects of interest (roads and vehicles), has been obtained. The subsequent vehicle detection process is carried out within this road triangle. Using shadow information as a sign pattern for vehicle detection has been discussed by many authors [31], [36], [37], but there is no systematic way to select appropriate threshold values. The intensity of the vehicle shadow depends on the illumination of the captured image, which in turn depends on the weather
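Because shadow intensity varies with illumination, a fixed threshold is fragile; one common remedy is to derive the threshold from the statistics of the visible road surface. A minimal sketch of this idea, in which the factor `k` is a hypothetical tuning parameter rather than a value from the paper:

```python
def shadow_threshold(road_pixels, k=1.5):
    """Adaptive shadow threshold from road-region statistics.

    Pixels darker than (mean - k * std) of the sampled road surface are
    treated as candidate shadow pixels, so the cutoff tracks the scene's
    current illumination instead of being fixed. `k` is an assumed
    tuning parameter, not taken from the paper."""
    n = len(road_pixels)
    mean = sum(road_pixels) / n
    var = sum((p - mean) ** 2 for p in road_pixels) / n
    return mean - k * var ** 0.5
```

On a dim evening frame the road mean drops, and the threshold drops with it, which is the behavior a fixed threshold cannot provide.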

Vehicle detection with prediction and verification

The accuracy and efficiency of vehicle detection can be further improved by taking advantage of the temporal continuity of a vehicle across consecutive images [29], [39]. This tracking process can be divided into a two-step approach: prediction and verification. The prediction step applies a kinematic model to predict the position of the ROI in the next image, while the verification step checks whether the same object as in the previous image can be found at the predicted location.

Shen [35] proposed a ROI

Experimental results and analyses

To evaluate the performance of the proposed vehicle detection system, tests were carried out under different driving conditions in Hong Kong. The system, consisting of a standard video camera and a standard PC (Core 2 Duo CPU, 2.5 GHz), processes approximately 20 frames per second, which is sufficient for real-time applications.

Fig. 6, Fig. 7, Fig. 8, Fig. 9, Fig. 10 show some representative detection results. The bounding box superimposed on the original images shows the final detection results. Fig. 6, Fig. 7

Discussions and conclusions

In this paper, a new algorithm is proposed to detect vehicles in video images obtained from a camera installed on a moving vehicle. The proposed method employs various vehicle features (such as vehicle shadows, rear lights, luminance entropy, and edges) to detect vehicles. The proposed vehicle detection algorithm can be used in the development of driver assistance systems and autonomous vehicle systems.

The contributions of this paper are summarized as follows: Firstly, we adopt

Acknowledgments

This paper was supported by an internal research grant (J-BB7Q) from the Research Committee of the Hong Kong Polytechnic University, the China Postdoctoral Science Foundation, LIESMARS Special Research Funding, and the National Natural Science Foundation of China (40721001, 40830530).

Yanwen Chong received the B.S. degree from Qufu Normal University, China, in 1995, and the M.S. and Ph.D. degrees from Wuhan University, China. He is an Associate Professor with the State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, China. His research interests include video processing, intelligent transportation systems, and pattern recognition.

References (39)

  • L. Vlacic et al., Intelligent Vehicle Technologies: Theory and Applications (2001)
  • P.G. Michalopoulos, Vehicle detection video through image processing: the autoscope system, IEEE Trans. Veh. Technol. (1991)
  • D. Dailey et al., An algorithm to estimate mean traffic speed using uncalibrated cameras, IEEE Trans. Intell. Transp. Syst. (2000)
  • S. Takaba, et al., A traffic flow measuring system using a solid state sensor, in: Proceedings of IEE Conference on...
  • A. Rourke, M.G.H. Bell, Applications of low cost image processing technology in transport, in: Proceedings of the World...
  • Z. Sun et al., On road vehicle detection: a review, IEEE Trans. Pattern Anal. Mach. Intell. (2006)
  • D. Koller et al., Model-based object tracking in monocular image sequences of road traffic scenes, Int. J. Comput. Vision (1993)
  • K. Baker, G. Sullivan, Performance assessment of model-based tracking, in: Proceedings of the IEEE Workshop on...
  • G. Sullivan, Visual interpretation of known objects in constrained scenes, Philos. Trans. R. Soc. (B) (1992)


Wu Chen received the B.Sc. degree from the Chinese University of Science and Technology in 1982 and the Ph.D. degree from the University of Newcastle upon Tyne in 1992. He is a Professor in the Department of Land Surveying and Geo-Informatics, The Hong Kong Polytechnic University. He has been actively working on positioning and ITS research for over 20 years and has published over 200 technical papers in journals and international conferences.

Zhilin Li holds a B.Eng. and a Ph.D. Since obtaining his Ph.D. from The University of Glasgow (UK) in 1990, he has worked as a research fellow at The University of Newcastle upon Tyne (UK), The University of Southampton (UK), and the Technical University of Berlin (Germany). He also worked at Curtin University of Technology (Australia) as a lecturer for two years. He joined The Hong Kong Polytechnic University in early 1996 and is a full professor in geo-informatics (cartography/GIS/remote sensing) at the Department of Land Surveying and Geo-Informatics. He is also a vice president of the International Cartographic Association. Prof. Li received the Schwidefsky Medal in 2004 and the Gino Cassinis Award in 2008, both from the International Society for Photogrammetry and Remote Sensing (ISPRS), and the State Natural Science Award from the Central Government of China in 2005. He was also awarded a D.Sc. (Doctor of Science) degree by the University of Glasgow in 2009.

William H. K. Lam received the B.Sc. and M.Sc. degrees from the University of Calgary, Canada, and the Ph.D. degree from the University of Newcastle upon Tyne, U.K. He is a Chair Professor in Civil and Transportation Engineering and Associate Head of the Department of Civil and Structural Engineering at The Hong Kong Polytechnic University. He is currently the Chairman of the Civil Discipline Advisory Panel of the Hong Kong Institution of Engineers and the President of the Hong Kong Society for Transportation Studies (www.hksts.org). His research interests include transport network modeling and infrastructure planning, travel demand forecasting and risk assessment, ITS technology and planning, public transport, and pedestrian studies. Prof. Lam is Co-Editor-in-Chief of the Journal of Advanced Transportation and Editor-in-Chief of Transportmetrica.

Chunhou Zheng received the B.Sc. degree in Physics Education in 1995 and the M.Sc. degree in Control Theory & Control Engineering in 2001 from Qufu Normal University, and the Ph.D. degree in Pattern Recognition & Intelligent Systems in 2006 from the University of Science and Technology of China. From February 2007 to June 2009 he worked as a Postdoctoral Fellow at the Hefei Institutes of Physical Science, Chinese Academy of Sciences. From July 2009 to July 2010 he worked as a Postdoctoral Fellow in the Department of Computing, The Hong Kong Polytechnic University. He is currently a Professor in the College of Information and Communication Technology, Qufu Normal University, China. His research interests include pattern recognition and bioinformatics.

Qingquan Li received the B.Sc., M.Sc., and Ph.D. degrees from Wuhan Technical University of Surveying and Mapping, China. He is a Professor in the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS) at Wuhan University. He is currently the Director of the Transportation Research Center and Executive Vice President of Wuhan University. His research interests include three-dimensional and dynamic data modeling in GIS, location-based services, surveying engineering, integration of GIS, GPS, and remote sensing, intelligent transportation systems, and road surface inspection.
