1 Introduction

Abnormal driving behaviors can be used as cues to identify Driving While Intoxicated (DWI) drivers and prevent DWI-related accidents. These cues have been commonly used by law enforcement officials to identify potentially drunk drivers, as summarized by the National Highway Traffic Safety Administration (NHTSA) in 2010 [1]. This approach, however, is subject to human error and is inherently limited to observing only a few vehicles at a time because of human mental processing limitations.

Recently, unmanned systems have begun to take center stage in traffic monitoring, management, and control [2, 3]. They can be launched and deployed within a very short period of time and offer a bird's-eye view over a large area at relatively low cost. Exploratory UAS-based systems are currently being developed and tested in the United States. For example, Ohio State University [2] has used UAS flights over a freeway to observe flows, speeds, densities, off-ramp weaving, and vehicle trajectories. At the University of Florida [4], UAS has been used for traffic monitoring applications, where image processing and speed calculation algorithms were applied to identify vehicles and their respective speeds. These projects suggest that UAS-based monitoring systems have strong potential for traffic and driving-behavior monitoring due to their mobility, large field of view, and capability of following vehicles [2]. However, the existing studies are limited to the overall focus of traffic monitoring: they concentrate on offline post-processing analysis of traffic systems and lack online image processing systems that detect and track vehicles for real-time analysis.

To overcome these limitations, this paper proposes a framework for utilizing Unmanned Aircraft System (UAS) technology for driving-behavior monitoring to prevent accidents and promote highway safety. The framework contains methodology for real-time vehicle tracking using image processing, vehicle risk modeling through statistical analysis, prototype development, and testing. The real-time tracking provides immediate, autonomous detection of vehicles, allowing a quicker response from law enforcement, while the risk modeling quantifies the severity of misbehaviors and provides a basis for appropriate action to reduce dangerous activity. A designed prototype and its testing are highlighted to show the practicality and effectiveness of the proposed system. The proposed system allows driving misbehaviors to be detected and analyzed simultaneously, thus accounting for more of the dangerous driving activity on highways and ultimately making roadways safer.

2 System Design

Our framework for driving-behavior monitoring and risk analysis using an Unmanned Aircraft System is shown in Fig. 1. First, users select roads for monitoring based on their experience. A flight plan is then generated and uploaded wirelessly to the UAS. The UAS then automatically takes off and maneuvers according to the flight plan to hover and observe traffic. While hovering, live video is transmitted wirelessly by the UAS to the ground station computer for near real-time processing. Based on the metrics calculated from this processing, the system identifies the observed behaviors and calculates driving-behavior related risks. In this study, we have developed a heavy-duty multi-rotor drone system based on Pixhawk [5] and a vision-based risk analysis system to monitor traffic, identify driving behaviors, and calculate the associated risks. The details of the process are discussed below.

Fig. 1. The proposed framework for driving behavior monitoring

2.1 Flight Path Planning

Our system allows the user to choose specific roads on which to conduct traffic observations for risk analysis. Once roads are selected, the road-path coordinates are uploaded to Mission Planner, an open-source ground control software package. Mission Planner then uploads the flight plan to the Pixhawk board, which controls the rotors on the UAV during flight for maneuvering about the pitch, yaw, and roll axes. After uploading the flight plan to the drone, the user can send the command to launch the UAV and execute the mission via the Mission Planner interface.
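For illustration, a hover-and-observe mission of this kind can be expressed in the plain-text waypoint format that Mission Planner reads (QGC WPL 110). The coordinates, altitudes, and commands below are placeholder values showing the format, not a mission from our tests; in the actual file the fields are tab-separated.

QGC WPL 110
0  1  0  16  0  0  0  0  40.001000  -83.015000   0.000000  1
1  0  3  22  0  0  0  0   0.000000    0.000000  30.000000  1
2  0  3  16  0  0  0  0  40.002500  -83.016500  30.000000  1
3  0  3  17  0  0  0  0  40.002500  -83.016500  30.000000  1

Each row gives the waypoint index, current-waypoint flag, coordinate frame, MAVLink command (22 = takeoff, 16 = waypoint, 17 = loiter indefinitely over the hover point), four command parameters, latitude, longitude, altitude, and an auto-continue flag; the first row records the home position.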

2.2 Driving Behavior Monitoring

Upon reaching the hover point, the UAS transmits video via an onboard video transmitter. The ground station computer receives the analog video through a video receiver and converts it into a digital format that interfaces with the software, so that the video processing algorithms can be applied and the key metrics subsequently calculated. The ultimate objective, calculating behavior risk, is based on observing and quantifying eight different possible driving behaviors.

To observe the eight potential misbehaviors, six key metrics must be identified and quantified. These key metrics consist of Vehicle ID, Speed, Forward Distance, Lane Change, Lane Change Time, and Acceleration. Calculating these key metrics is therefore the first task of the computer vision algorithm, which ultimately determines and quantifies the misbehaviors.
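As a minimal illustration of how these metrics can be held per tracked vehicle, the MATLAB struct-array sketch below uses field names of our own choosing; they are not the identifiers used in the implementation.

% Minimal sketch of a per-vehicle record for the six key metrics.
% Field names are illustrative, not those used in our implementation.
function tracks = initializeTracks()
    tracks = struct( ...
        'id',              {}, ...  % Vehicle ID assigned at detection
        'speed',           {}, ...  % Estimated speed (mph)
        'forwardDistance', {}, ...  % Gap to the vehicle ahead (m)
        'laneChanges',     {}, ...  % Number of lane changes observed
        'laneChangeTime',  {}, ...  % Duration of the last lane change (s)
        'acceleration',    {}, ...  % Estimated acceleration (m/s^2)
        'kalmanFilter',    {}, ...  % Tracker state for this vehicle
        'age',             {});     % Frames since first detection
end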

The calculation of the six key metrics requires both the position of a vehicle relative to the lane lines and the identification and tracking of each vehicle across video frames. The first priority is therefore to determine the locations of the lanes and lane lines in the video feed and to establish a vehicle detection and tracking approach, as shown in Fig. 2.

Fig. 2. Lane and lane line detection

In order to identify these key metrics, the computer vision algorithm, developed and executed in MATLAB, must first detect and track vehicles as well as detect the lanes via user selection. The Motion-Based Multiple Object Tracking example from MathWorks, Inc. assists with the detection of vehicles. Essentially, background subtraction is used, in which stationary pixels are identified as background and moving pixels as foreground. After separation of the foreground from the background, a blob analyzer identifies groups of connected pixels, which likely correspond to moving objects (moving vehicles, in this case). Additionally, a Kalman filter is used to track the vehicles so that each vehicle's key metrics can be stored effectively over the course of multiple frames. For lane detection, the user is asked to indicate how many lanes are being observed, and based on the answer, the algorithm requests the user to select two points on each lane line. The algorithm then checks whether the user made any errors in the selections by converting the initial frame to binary and assessing whether the selected points lie on white portions of the image (because highway lane lines are mostly white). The algorithm then draws lines on the initial frame and stores the equations of the lines for reference throughout processing. With vehicles detected and tracked and lanes identified, the algorithm proceeds to calculate the six key metrics.
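The MATLAB sketch below outlines these steps, loosely following the structure of the MathWorks Motion-Based Multiple Object Tracking example. The parameter values and the file name 'trafficClip.mp4' are placeholders, and the tracking is simplified to a single vehicle; it is a sketch of the approach, not our full implementation.

% Sketch of the detection/tracking pipeline (Computer Vision Toolbox).
% Parameter values and the video file name are illustrative placeholders.
reader   = VideoReader('trafficClip.mp4');
detector = vision.ForegroundDetector('NumGaussians', 3, ...
               'NumTrainingFrames', 40, 'MinimumBackgroundRatio', 0.7);
blobber  = vision.BlobAnalysis('AreaOutputPort', false, ...
               'CentroidOutputPort', true, 'BoundingBoxOutputPort', true, ...
               'MinimumBlobArea', 150);

% Lane-line selection on the first frame: the user clicks two points per line.
firstFrame = readFrame(reader);
numLanes   = input('Number of lanes in view: ');
figure, imshow(firstFrame);
laneFits = cell(numLanes + 1, 1);
for k = 1:numLanes + 1
    pts = ginput(2);                               % two points on lane line k
    laneFits{k} = polyfit(pts(:,1), pts(:,2), 1);  % slope/intercept, stored
end

% Per-frame detection and simplified single-vehicle tracking.
kalman = [];
while hasFrame(reader)
    frame = readFrame(reader);
    mask  = detector(frame);                  % foreground (moving) pixels
    [centroids, bboxes] = blobber(mask);      % connected moving blobs
    if isempty(kalman) && ~isempty(centroids)
        kalman = configureKalmanFilter('ConstantVelocity', ...
                     double(centroids(1,:)), [200, 50], [100, 25], 100);
    elseif ~isempty(kalman)
        predict(kalman);                              % propagate the track
        if ~isempty(centroids)
            correct(kalman, double(centroids(1,:))); % update with measurement
        end
    end
end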

2.3 Calculate Driving-Behavior Risk

Behavior decision making and risk quantification are performed by assessing the six key metrics identified for each vehicle in the software component of the system design. Each behavior has an associated risk value that depends on how severe the observed behavior is. Behavior identification is therefore broken down into two categories: decision making and risk analysis. Decision making identifies the exhibited behavior as normal or irregular, and risk analysis quantifies the severity of each behavior, as well as of combinations of behaviors that may be observed.

To classify driving behaviors as irregular, classification thresholds were established from prior research. Previous traffic and driving studies and governmental surveys were reviewed to find the normal and extreme parameters of each key metric other than vehicle ID. Each metric value is collected for the time and distance the observed vehicle is in the view of the UAS.

In this driving behavior monitoring system, risk is defined as the probability of collision given the metric values that the observed vehicle is exhibiting. Different vehicle speeds, lane change patterns, and forward-distance-keeping behaviors yield different risk probabilities. These observable behaviors have associated risk values corresponding to four of the six key metrics: vehicle speed, number of lane changes, lane change time, and forward distance. The overall risk is defined as a weighted probability function of the four individual risk values. The weight of each individual risk component is selected based on its frequency and relevance to collision risk. For example, tailgating and speeding behaviors are more commonly observed in traffic than weaving behavior and therefore carry more weight in the overall risk. Using the weighted probability equation, a detailed and comprehensive collision risk score can be assigned to each observed vehicle. The total risk function is described as follows.

$$ Total\;Risk = \sum\nolimits_{i} R_{i} w_{i} $$
$$ \sum\nolimits_{i} w_{i} = 1 $$
$$ R_{i} = \text{risk of component } i, \quad w_{i} = \text{weight of component } i $$
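As a minimal worked example, the weighted combination can be computed in MATLAB as shown below; the component risks and weights are illustrative placeholders, not the tuned values used in our implementation.

% Weighted total risk over the four individual risk components.
% All values are illustrative placeholders; the weights sum to 1.
risks   = [0.40, 0.80, 0.10, 0.05];   % speed, forward distance, lane changes, lane change time
weights = [0.35, 0.35, 0.15, 0.15];   % tailgating/speeding weighted more heavily
totalRisk = sum(weights .* risks);    % = 0.4425 for these placeholder values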

Two statistical methods are used to calculate the four components of the total risk function. In general, however, individual risks are calculated by comparing the observed metric values to the values exhibited during normal driving behavior. Vehicles displaying normal driving behavior will exhibit metric values that are similar to the normal values, with some small variation. For example, vehicles exhibiting normal speed behavior will have speed values that are comparable to the speed limit but not identical. The normal speed metric can therefore be assumed to be normally distributed, with the speed limit as its mean and some standard deviation. The same assumptions are made for the forward distance, lane change, and lane change time metrics.

The risk associated with speed, number of lane changes, and lane change time also utilizes the distributions of the metric values, but is calculated by measuring how different the observed metric distribution is from the normal one. If the observed metric distribution matches that of normal behavior, the associated risk is zero. If the observed distribution differs from the normal one, risk is calculated using the Bhattacharyya bound and Bhattacharyya coefficient. The Bhattacharyya bound, $D_B$, and coefficient, $B_c$, are defined below. The Bhattacharyya coefficient $B_c$ quantifies the probability of overlap between the two distributions; the risk is therefore calculated as $1 - B_c$.

$$ D_{B}(1,2) = \frac{1}{4}\ln\left(\frac{1}{4}\left(\frac{\sigma_{1}^{2}}{\sigma_{2}^{2}} + \frac{\sigma_{2}^{2}}{\sigma_{1}^{2}} + 2\right)\right) + \frac{1}{4}\left(\frac{(\mu_{1} - \mu_{2})^{2}}{\sigma_{1}^{2} + \sigma_{2}^{2}}\right) $$
$$ B_{c} = e^{-D_{B}} $$
$$ Risk = 1 - B_{c} $$
$$ \mu_{1}, \mu_{2} = \text{metric mean values}, \quad \sigma_{1}, \sigma_{2} = \text{metric standard deviations} $$
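A compact MATLAB sketch of this calculation (written as its own function file) is shown below; the function and variable names are our own.

% bhattacharyyaRisk.m -- risk of an observed metric distribution relative to
% the normal-behavior distribution, via the Bhattacharyya bound/coefficient.
function risk = bhattacharyyaRisk(muObs, sigmaObs, muNorm, sigmaNorm)
    dB = 0.25 * log(0.25 * (sigmaObs^2 / sigmaNorm^2 + ...
                            sigmaNorm^2 / sigmaObs^2 + 2)) + ...
         0.25 * ((muObs - muNorm)^2 / (sigmaObs^2 + sigmaNorm^2));
    bC   = exp(-dB);   % Bhattacharyya coefficient: probability of overlap
    risk = 1 - bC;     % identical distributions give zero risk
end

For instance, with purely illustrative values, an observed mean speed of 80 mph with an 8 mph spread compared against an assumed normal behavior of 65 mph with a 5 mph spread, i.e. bhattacharyyaRisk(80, 8, 65, 5), yields a risk of roughly 0.5.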

Both methods of risk calculation are used to compute the four risk values derived from the six key metrics each vehicle displays. Because certain vehicle behaviors can indicate high risk by themselves, the total risk function is applied conditionally: if any one of its components yields an extreme value, the overall risk is reported as high in the user interface. Immediate action can then be taken when high-risk behaviors are observed. This ultimately helps identify dangerous drivers that current monitoring methods do not account for.
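A minimal sketch of this conditional use, reusing the risks and weights vectors from the earlier sketch and assuming an illustrative threshold of 0.9 for a single extreme component, is as follows.

% Conditional aggregation: any single extreme component overrides the
% weighted sum. The 0.9 threshold is an assumed, illustrative value.
extremeThreshold = 0.9;
if any(risks >= extremeThreshold)
    totalRisk = 1;                        % reported as high risk in the UI
else
    totalRisk = sum(weights .* risks);    % normal weighted combination
end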

The mathematical quantification of collision risk and the identification of irregular behaviors are interfaced with the computer vision system in MATLAB. Each vehicle behavior, as well as the individual and overall risk scores, is implemented as a separate function within the overall system software. To implement the risk functions, standard deviations and mean values for normal behaviors as well as misbehaviors need to be approximated. These approximations are inferred from initial research and have been tuned to give a reasonable and reliable risk quantification. Future testing and research can be conducted to find more accurate approximations for these values.

3 Validation

To demonstrate the code and designs created, physical hardware for a UAS and ground station was designed to test the system. The UAS has many components, all of which need to work for the overall system to function. The basic hardware components include the frame, motors, batteries, RC controller, Pixhawk autopilot, power distribution board, gimbal, and camera. Figure 3 shows the basic structure of the overall hardware system.

Fig. 3. Our heavy-duty UAS prototype

The overall frame, along with the rotors and landing gear, is made of carbon fiber. This material was chosen for its highly durable, lightweight characteristics. With a 16,000 mAh LiPo battery, the UAS can sustain 20–40 min of flight, depending on payload weight and flight intensity. The eight motors can each produce a maximum thrust of 3.4 kg, which allows the UAS to carry an additional 20 kg of weight on top of the hardware it is currently equipped with. The UAS stands 2 ft tall and is 4.5 ft in diameter.

The vision system attached to the belly of the UAS consists of a Tarot gimbal with a GoPro Hero3 camera. The gimbal ensures the video is stable enough for processing, and the Hero3 provides good-quality video. Video can be recorded to the micro SD card in the GoPro camera or transmitted live through the FlySight FPV monitor system. Coupled with a Hauppauge 610 USB-Live 2 analog video digitizer and capture device at the ground station, the UAS is able to send live video to the ground control station for immediate processing.
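For reference, the sketch below shows one way frames from such a USB capture device might be pulled into MATLAB via the Image Acquisition Toolbox; the 'winvideo' adaptor name and device ID are assumptions that depend on the ground station's installed drivers.

% Sketch: pulling live frames from the USB capture device into MATLAB.
% The 'winvideo' adaptor name and device ID 1 are assumptions; the actual
% values should be checked with imaqhwinfo on the ground station machine.
vid = videoinput('winvideo', 1);     % Image Acquisition Toolbox object
preview(vid);                        % optional live view of the received feed
while true
    frame = getsnapshot(vid);        % most recent frame from the digitizer
    % ... pass 'frame' to the detection/tracking pipeline described above ...
end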

Testing the prototype by having it hover over highways presents many risks and is restricted by FAA and local law enforcement regulations. Therefore, two different testing methods were used: remote control (RC) car behavior simulations and controlled-environment road tests.

Using RC cars instead of real cars to simulate the eight driving misbehaviors proved safer and still effective. Each RC car was a 1/24-scale model of a real car. A test track with multiple lanes at a similar scale was also created to mimic the real environment as closely as possible. All scale factors were taken into account to ensure the calculations were correct. Figure 4 is a sample video capture of the RC track during a test run.

Fig. 4. Test track video capture

Speeds were scaled for realism. These tests were conducted indoors with a fixed overhead camera, independently of the UAV unit.
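As an illustrative example of how the scale factor enters the speed calculation (the numbers here are not from our tests), an RC car covering 0.5 m per second on the 1/24-scale track, with time unscaled, corresponds to a full-scale speed of

$$ v_{real} = 24 \times 0.5\,\text{m/s} = 12\,\text{m/s} \approx 27\,\text{mph} $$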

To mitigate flight risks, tests were conducted in controlled environments on university grounds at altitudes compliant with FAA regulations. The designed prototype was tested for flight routing, live video transmission, and real-time processing capabilities. The UAS footage was also recorded and tested in post-processing. Figure 5 shows two sets of test footage that were processed.

Fig. 5. Speed detection algorithms tested on real footage

4 Conclusion

This paper presents the first UAS-based system to autonomously monitor traffic, automatically identify multiple potentially dangerous driving behaviors, and simultaneously calculate risk scores in real time. The work proposes a data-driven, objective approach to traffic law enforcement and can be used to identify drivers with a high probability of DWI, or drivers exhibiting irregular and potentially dangerous behavior, both of which pose dangers to surrounding drivers. Unmanned Aircraft Systems are a frontier for breakthrough applications, and this system is a unique attempt to exploit the advantages of a UAS-based platform for traffic monitoring: namely, the ability to maneuver at any desired altitude to monitor multiple vehicles, the ability to follow a high-risk vehicle for further monitoring, and the ability to easily review and analyze data. The proposed system is a promising venture and a gateway to further research in the area, with the hope of reducing traffic accidents and ultimately saving lives.