Last updated on September 20, 2020. This conference program is tentative and subject to change
Technical Program for Wednesday October 21, 2020
|
WeAM1_T1 |
EGYPTIAN_1 |
Vision Sensing and Perception 1. A |
Regular Session |
|
09:25-09:30, Paper WeAM1_T1.1 | |
Runtime Optimization of a CNN Model for Environment Perception |
|
Weber, Michael | FZI Research Center for Information Technology |
Wendenius, Christof | FZI Research Center for Information Technology |
Zöllner, J. Marius | FZI Research Center for Information Technology; KIT Karlsruhe In |
Keywords: Convolutional Neural Networks, Vision Sensing and Perception, Autonomous / Intelligent Robotic Vehicles
Abstract: For self-driving cars, deep neural networks are one of the current key technologies, and in camera-based environment perception they are absolutely irreplaceable. The currently developed network models are usually executed on high-end consumer or server GPUs, and the verification of their real-time properties is mostly based on these GPUs. However, if these models are to be used in near-series applications, the question arises whether they can also run on significantly reduced hardware. To address this question, we conduct a case study with a camera-based traffic light detection system. Promising optimization techniques are adapted and applied to the model to investigate the potential performance gains achievable with these techniques in the context of self-driving-car environment perception. In particular, the tradeoff between quality and speed is examined in detail.
|
|
09:30-09:35, Paper WeAM1_T1.2 | |
Traffic Agent Trajectory Prediction Using Social Convolution and Attention Mechanism |
|
Yang, Tao | Institute of Artificial Intelligence and Robotics |
Nan, Zhixiong | Xi'an Jiaotong University |
Zhang, He | Institute of Artificial Intelligence and Robotics, Xi'an Jiaoton |
Chen, Shitao | Xi'an Jiaotong University, Xi'an, China |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Autonomous / Intelligent Robotic Vehicles, Convolutional Neural Networks, Recurrent Networks
Abstract: Trajectory prediction is a crucial step in autonomous driving decision-making. We propose a novel model, called ACAS, for predicting the trajectories of traffic agents around an autonomous vehicle correctly and efficiently. ACAS uses a variable-length LSTM to extract historical trajectory representations of the different types of agents in the scene, and a convolutional neural network, centered on the predicted agent and using intention as an attention mechanism, to extract the influence of surrounding agents on the central agent; a conditional LSTM decoder then predicts the central agent's trajectory. Experimental results on the BLVD dataset show that our proposed model improves prediction accuracy by 20%: the average displacement error is 0.65 m, the maximum displacement error is 1.04 m, and the final-point displacement error is 0.93 m, which can help autonomous vehicles drive more safely. In addition, our model runs at 32 fps, which satisfies the real-time requirement.
|
|
09:35-09:40, Paper WeAM1_T1.3 | |
A Driving Behavior Recognition Model with Bi-LSTM and Multi-Scale CNN |
|
Zhang, He | Institute of Artificial Intelligence and Robotics, Xi'an Jiaoton |
Nan, Zhixiong | Xi'an Jiaotong University |
Yang, Tao | Institute of Artificial Intelligence and Robotics |
Liu, Yifan | Xi'an Jiaotong University |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Autonomous / Intelligent Robotic Vehicles, Convolutional Neural Networks, Recurrent Networks
Abstract: In autonomous driving, perceiving the driving behaviors of surrounding agents is important for the ego-vehicle to make reasonable decisions. In this paper, we propose a neural network model based on trajectory information for driving behavior recognition. Unlike existing trajectory-based methods that recognize driving behavior using hand-crafted features or by directly encoding the trajectory, our model involves a Multi-Scale Convolutional Neural Network (MSCNN) module to automatically extract high-level features that encode rich spatial and temporal information. Given a trajectory sequence of an agent as input, a Bi-directional Long Short-Term Memory (Bi-LSTM) module and the MSCNN module first process the input in parallel, generating two features, which are then fused to classify the behavior of the agent. We evaluate the proposed model on the public BLVD dataset, achieving satisfying performance.
|
|
09:40-09:45, Paper WeAM1_T1.4 | |
PSDet: Efficient and Universal Parking Slot Detection |
|
Wu, Zizhang | ZongMuTech.com |
Sun, Weiwei | University of Victoria |
Man, Wang | Zongmutech |
Xiaoquan, Wang | Zongmutech |
Lizhu, Ding | Zongmutech |
Wang, Fan | Zongmu Technology |
Keywords: Vision Sensing and Perception, Deep Learning, Vehicle Environment Perception
Abstract: While real-time parking slot detection plays a critical role in valet parking systems, existing methods have had limited success in real-world applications. We argue there are two reasons for the unsatisfactory performance: (i) the available datasets have limited diversity, which causes low generalization ability, and (ii) expert knowledge for parking slot detection is underestimated. Thus, we annotate a large-scale benchmark for training the network and will release it for the benefit of the community. Driven by the observation of various parking lots in our benchmark, we propose a circular descriptor to regress the coordinates of parking slot vertices and accordingly localize slots accurately. To further boost performance, we develop a two-stage deep architecture to localize vertices in a coarse-to-fine manner. On our benchmark and other datasets, the method achieves state-of-the-art accuracy while running in real time.
|
|
09:45-09:50, Paper WeAM1_T1.5 | |
Towards Robust Direct Perception Networks for Automated Driving |
|
Cheng, Chih-Hong | DENSO AUTOMOTIVE Deutschland GmbH |
Keywords: Deep Learning, Vision Sensing and Perception
Abstract: We consider the problem of engineering robust direct perception neural networks whose output is a regression. Such networks take high-dimensional input image data and produce affordances such as the curvature of the upcoming road segment or the distance to the front vehicle. Our proposal starts by allowing a neural network prediction to deviate from the label with tolerance Δ. The source of tolerance can be either contractual or due to limiting factors, where two entities may label the same data with slightly different numerical values. The tolerance motivates the use of a non-standard loss function in which the loss is set to 0 as long as the prediction-to-label distance is less than Δ. We further extend the loss function and define a new provably robust criterion that is parametric in the allowed output tolerance Δ, the layer index l̃ where perturbation is considered, and the maximum perturbation amount κ. During training, the robust loss is computed by first propagating symbolic errors from the l̃-th layer (with magnitude bounded by κ) to the output layer, followed by computing the overflow between the error bounds and the allowed tolerance. The overall concept is demonstrated by engineering a direct perception neural network for estimating the central position of the ego-lane in pixel coordinates.
|
|
09:50-09:55, Paper WeAM1_T1.6 | |
Road Surface Recognition Based on DeepSense Neural Network Using Accelerometer Data |
|
Wu, Shan | University of Tartu |
Hadachi, Amnir | University of Tartu |
Keywords: Deep Learning, Convolutional Neural Networks
Abstract: Smartphones play an important role in our lives, which makes them a good sensor for perceiving our environment. Therefore, many applications have emerged that use mobile sensors to solve problems related to activity recognition, health monitoring, transportation, etc. One intriguing issue is mapping the quality and type of roads in our road network, which is very costly to maintain and examine. In this paper, we propose a methodology to recognize different road types using smartphone accelerometer data. The approach is based on the DeepSense neural network with customised preprocessing and feature engineering steps. In addition, we compared our method's performance against a Convolutional Neural Network, a Fully-Connected Neural Network, a Support Vector Machine, and a Random Forest classifier. Our approach outperformed all four methods and was capable of distinguishing three road types (asphalt roads, stone roads, and off-road).
|
|
09:55-10:00, Paper WeAM1_T1.7 | |
Attention R-CNN for Accident Detection |
|
Le, Trung-Nghia | National Institute of Informatics |
Sugimoto, Akihiro | National Institute of Informatics |
Ono, Shintaro | The University of Tokyo |
Kawasaki, Hiroshi | Kyushu University |
Keywords: Convolutional Neural Networks, Deep Learning, Image, Radar, Lidar Signal Processing
Abstract: This paper addresses accident detection, where we not only detect objects with classes but also recognize their characteristic properties. More specifically, we aim to simultaneously detect object-class bounding boxes on roads and recognize their status, such as safe, dangerous, or crashed. To achieve this goal, we construct a new dataset and propose a baseline method for benchmarking the task of accident detection. We design an accident detection network, called Attention R-CNN, which consists of two streams: one for object detection with classes and the other for characteristic property computation. As an attention mechanism capturing contextual information in the scene, we integrate global contexts exploited from the scene into the object detection stream. This attention mechanism enables us to recognize object characteristic properties. Extensive experiments on the newly constructed dataset demonstrate the effectiveness of our proposed network.
|
|
10:00-10:05, Paper WeAM1_T1.8 | |
Towards Accurate Vehicle Behaviour Classification with Multi-Relational Graph Convolutional Networks |
|
Mylavarapu, Sravan | IIIT HYDERABAD |
Sandhu, Mahtab | International Institute of Information Technology, Hyderabad |
Vijayan, Priyesh | McGill University |
Krishna, K Madhava | IIIT Hyderabad |
Ravindran, Balaraman | IIT Madras |
Namboodiri, Anoop | IIIT HYDERABAD |
Keywords: Vision Sensing and Perception, Deep Learning
Abstract: Understanding on-road vehicle behaviour from a temporal sequence of sensor data is gaining in popularity. In this paper, we propose a pipeline for understanding vehicle behaviour from a monocular image sequence or video. The monocular sequence, along with scene semantics, optical flow, and object labels, is used to get spatial information about the object (vehicle) of interest and other objects (semantically contiguous sets of locations) in the scene. This spatial information is encoded by a Multi-Relational Graph Convolutional Network (MR-GCN), and a temporal sequence of such encodings is fed to a recurrent network to label vehicle behaviours. The proposed framework classifies a variety of vehicle behaviours with high fidelity on diverse datasets that include European, Chinese, and Indian on-road scenes. The framework also allows seamless transfer of models across datasets without requiring re-annotation, retraining, or even fine-tuning. We show comparative performance gains over baseline spatio-temporal classifiers and detail a variety of ablations to showcase the efficacy of the framework.
|
|
WeAM1_T2 |
EGYPTIAN_2 |
Cooperative Systems (V2X). A |
Regular Session |
|
09:25-09:30, Paper WeAM1_T2.1 | |
Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles |
|
Aoki, Shunsuke | Carnegie Mellon University |
Higuchi, Takamasa | Toyota Motor North America R&D |
Altintas, Onur | Toyota R&D |
Keywords: Cooperative Systems (V2X), V2X Communication, Reinforcement Learning
Abstract: Sensor-based perception on vehicles is becoming prevalent and important for enhancing road safety. Autonomous driving systems use cameras, LiDAR, and radar to detect surrounding objects, while human-driven vehicles use them to assist the driver. However, environmental perception by individual vehicles has limitations in coverage and/or detection accuracy. For example, a vehicle cannot detect objects occluded by other moving/static obstacles. In this paper, we present a cooperative perception scheme with deep reinforcement learning to enhance detection accuracy for surrounding objects. By using deep reinforcement learning to select the data to transmit, our scheme mitigates the network load in vehicular networks and enhances communication reliability. To design, test, and verify this practical and resource-efficient cooperative perception framework, we develop a Cooperative & Intelligent Vehicle Simulation (CIVS) Platform that integrates three software components: a traffic simulator, a vehicle simulator, and an object classifier. The simulation platform constitutes a unified framework to evaluate a traffic model, vehicle model, communication model, and object classification model. Simulation results show that our scheme decreases packet loss and thereby increases detection accuracy by up to 12%, compared to the baseline protocol.
|
|
09:30-09:35, Paper WeAM1_T2.2 | |
TELECARLA: An Open Source Extension of the CARLA Simulator for Teleoperated Driving Research Using Off-The-Shelf Components |
|
Hofbauer, Markus | Technical University of Munich |
Kuhn, Christopher Benjamin | Technical University of Munich |
Petrovic, Goran | BMW Group |
Steinbach, Eckehard | Technische Universitaet Muenchen |
Keywords: Cooperative Systems (V2X), Intelligent Vehicle Software Infrastructure, Hand-off/Take-Over
Abstract: Teledriving is a possible fallback mode to cope with failures of fully autonomous vehicles. One important requirement for teleoperated vehicles is a reliable, low-delay data transmission solution that adapts to the current network conditions in order to provide the operator with the best possible situation awareness. Currently, there is no easily accessible solution for evaluating such systems and algorithms in a fully controllable environment. To this end, we propose an open-source framework for teleoperated driving research using low-cost off-the-shelf components. The proposed system is an extension of the open-source simulator CARLA, which is responsible for rendering the driving environment and providing reproducible scenario evaluation. As a proof of concept, we evaluated our teledriving solution against CARLA in remote and local driving scenarios. The proposed teledriving system leads to almost identical performance measurements for local and remote driving. In contrast, remote driving using CARLA's client-server communication results in drastically reduced operator performance. Further, the framework provides an interface for adapting the temporal resolution and target bitrate of the compressed video streams. The proposed framework reduces the required setup effort for teleoperated driving research in academia and industry.
|
|
09:35-09:40, Paper WeAM1_T2.3 | |
TruPercept: Trust Modelling for Autonomous Vehicle Cooperative Perception from Synthetic Data |
|
Hurl, Braden | University of Waterloo |
Cohen, Robin | University of Waterloo
Czarnecki, Krzysztof | University of Waterloo |
Waslander, Steven L | University of Waterloo |
Keywords: Cooperative Systems (V2X), Lidar Sensing and Perception, Self-Driving Vehicles
Abstract: Inter-vehicle communication for autonomous vehicles (AVs) stands to provide significant benefits in terms of perception robustness. We propose a novel approach for AVs to communicate perceptual observations, tempered by trust modelling of the peers providing reports. Based on the accuracy of reported object detections as verified locally, communicated messages can be fused to augment perception performance beyond line of sight and at great distance from the ego vehicle. Also presented is a new synthetic dataset that can be used to test cooperative perception. The TruPercept dataset includes unreliable and malicious behaviour scenarios for experimenting with some of the challenges cooperative perception introduces. The TruPercept runtime and evaluation framework allows modular component replacement to facilitate ablation studies as well as the creation of new trust scenarios.
|
|
09:40-09:45, Paper WeAM1_T2.4 | |
Lane Change Like a Snake: Cooperative Adaptive Cruise Control with Platoon Lane Change Capability |
|
Haoran, Wang | Tongji University |
Li, Xin | Dalian Maritime University |
Hu, Jia | Tongji University, Federal Highway Administration |
Keywords: Cooperative Systems (V2X), Automated Vehicles, Vehicle Control
Abstract: This research proposes a distributed Successive Platoon-Lane-Change (SuPLC) controller based on optimal control. It is designed to let a whole Cooperative Adaptive Cruise Control (CACC) platoon change lanes at a fixed or moving position, vehicle by vehicle, like a snake. The proposed controller fills the gap in CACC platoon-lane-change control and makes the following contributions: i) it introduces a distributed trajectory planner while maintaining inter-vehicle cooperation; ii) it changes lanes successively, like a snake, to improve the success rate of lane changes; iii) it is formulated in the space domain, with coupled lateral and longitudinal vehicle dynamics as constraints; iv) it provides local stability, string stability, and lateral stability; v) it helps reduce traffic fluctuations. The proposed SuPLC controller is evaluated in an integrated simulation platform with PreScan and Matlab/Simulink. Results confirm that the proposed controller fulfills these expectations. The average computation time of the proposed SuPLC controller is 0.015 seconds on a laptop equipped with an Intel i7-8750H CPU, which indicates the potential for real-time implementation.
|
|
09:45-09:50, Paper WeAM1_T2.5 | |
A Dynamic Programming Approach to Optimal Lane Merging of Connected and Autonomous Vehicles |
|
Lin, Shang-Chien | National Taiwan University |
Hsu, Hsiang | National Taiwan University |
Lin, Yi-Ting | National Taiwan University |
Lin, Chung-Wei | National Taiwan University |
Jiang, Iris Hui-Ru | National Taiwan University |
Liu, Changliu | Carnegie Mellon University |
Keywords: Cooperative ITS, Cooperative Systems (V2X)
Abstract: Lane merging is one of the major sources of traffic congestion and delay. With the help of vehicle-to-vehicle or vehicle-to-infrastructure communication and autonomous driving technology, there are opportunities to alleviate the congestion and delay resulting from lane merging. In this paper, we first summarize modern features and requirements for lane merging, in line with advances in vehicular technology. We then formulate the problem and propose a dynamic programming algorithm to find the optimal solution for a two-lane merging scenario. It schedules the passing order of the vehicles while minimizing the time needed for all vehicles to go through the merging point (equivalent to the time at which the last vehicle goes through the merging point). We further extend the problem to a consecutive lane-merging scenario. We show the difficulty of applying the original dynamic programming to the consecutive lane-merging scenario and propose an improved version to solve it. Experimental results show that our dynamic programming algorithm can efficiently minimize the time needed for all vehicles to go through the merging point and reduce the average delay of all vehicles, compared with greedy methods.
|
|
09:50-09:55, Paper WeAM1_T2.6 | |
Hardware in the Loop Test Using Infrastructure Based Emergency Trajectories for Connected Automated Driving |
|
Pechinger, Mathias | University of Applied Sciences Augsburg |
Schroeer, Guido | Siemens Mobility GmbH |
Bogenberger, Klaus | Bundeswehr University Munich |
Markgraf, Carsten | University of Applied Sciences Augsburg |
Keywords: Smart Infrastructure, Cooperative Systems (V2X), Automated Vehicles
Abstract: The path towards safe and highly automated driving is still under development. Especially when it comes to safety, industry and research struggle to reach the required standards. In this paper, we suggest using vehicle-to-infrastructure communication to provide a new kind of safety fallback in case conventional automated driving systems fail. This proposal is verified in a hardware-in-the-loop setup. A fault is injected into the motion planner of a connected automated vehicle, so that the vehicle's own computing platform fails to generate a trajectory. An emergency message is sent to the infrastructure, which seamlessly takes over the motion planning task and provides a backup trajectory to the vehicle. The infrastructure's motion plan is processed by the vehicle's motion controller and guides the vehicle to a safe stopping position. The scenario is executed with our research vehicle driving in a parking lot on a virtual highway and stopping on the hard shoulder. Additionally, a virtual vehicle is placed in the setup, acting as an obstacle on the shoulder lane of the highway. The whole scenario is thoroughly tested and shows one way in which infrastructure can contribute to a safe path towards highly automated driving.
|
|
09:55-10:00, Paper WeAM1_T2.7 | |
ITS-G5 Antenna Position on Trucks |
|
Mertens, Jan Cedric | Technical University of Munich |
Erb, Dominik | MAN Truck and Bus |
Kraus, Sven | Technische Universität München |
Diermeyer, Frank | Technische Universität München |
Keywords: V2X Communication, Cooperative ITS, Cooperative Systems (V2X)
Abstract: The vision of connected driving is a driving force in current automotive engineering research. Trucks in particular, with their limitations in mass and size, benefit from cooperation with other road users. In order to agree on maneuvers, however, communication between the involved vehicles must be established. This paper investigates, in real-world tests, where antennas are best placed on a truck for vehicle-to-vehicle communication with ITS-G5 and what ranges are to be expected. Furthermore, the influence of the truck trailer, with different materials and bending angles, on signal propagation is analyzed.
|
|
10:00-10:05, Paper WeAM1_T2.8 | |
Gap Closing for Cooperative Driving in Automated Vehicles Using B-Splines for Trajectory Planning |
|
van Hoek, Robbin | Eindhoven University of Technology |
Ploeg, Jeroen | Eindhoven University of Technology |
Nijmeijer, Henk | Eindhoven University of Technology |
Keywords: Autonomous / Intelligent Robotic Vehicles, Cooperative Systems (V2X), Automated Vehicles
Abstract: Recently, increasing interest has been shown in cooperative driving and platooning, as they show great potential for increasing road throughput, both by increasing road capacity and by preventing so-called 'ghost' traffic jams. These cooperative vehicles make use of controllers or trajectory planners that realize a certain spacing policy. However, these systems may produce large control inputs when the host vehicle is trying to close a gap to a preceding vehicle, leading to uncomfortable responses. In this work, we present a trajectory planning method that smoothly closes a large gap by modifying the desired spacing policy as a function of time. The resulting trajectory planner is capable of both maintaining a desired spacing and comfortably closing a gap to a preceding vehicle.
|
|
WeAM1_T3 |
EGYPTIAN_3 |
Vehicle Control. A |
Regular Session |
|
09:25-09:30, Paper WeAM1_T3.1 | |
Reference Aware Model Predictive Control for Autonomous Vehicles |
|
Collares Pereira, Gonçalo | KTH Royal Institute of Technology |
Lima, Pedro F. | KTH Royal Institute of Technology |
Wahlberg, Bo | KTH Royal Institute of Technology |
Pettersson, Henrik | Scania CV |
Mårtensson, Jonas | KTH Royal Institute of Technology |
Keywords: Vehicle Control, Self-Driving Vehicles, Automated Vehicles
Abstract: This paper presents a path following controller for autonomous vehicles, making use of the linear time-varying model predictive control (LTV-MPC) framework. The controller takes into consideration control input rates and accelerations, not only to account for limitations in the steering dynamics, but also to provide a safe and comfortable ride while minimizing wear and tear of the vehicle components. Furthermore, it introduces a method to handle model references generated by motion planning algorithms that can consider different vehicle models from the controller. The proposed controller is verified by simulations and through experiments in a Scania construction truck, and is shown to have better performance than the state-of-the-art smooth and accurate MPC.
|
|
09:30-09:35, Paper WeAM1_T3.2 | |
Application Specific System Identification for Model-Based Control in Self-Driving Cars |
|
Salt Ducaju, Julian M. | LTH, Lund University |
Tang, Chen | University of California, Berkeley |
Chan, Ching-Yao | ITS, University of California at Berkeley |
Tomizuka, Masayoshi | University of California at Berkeley |
Keywords: Self-Driving Vehicles, Automated Vehicles, Vehicle Control
Abstract: Linear Parameter-Varying (LPV) models can be used to describe the lateral dynamic behavior of self-driving cars. They are particularly suitable for model-based control schemes such as model predictive control (MPC) applied to real-time trajectory tracking control, since they provide a proper trade-off between accuracy in different scenarios and reduced computational cost compared to nonlinear models. MPC schemes use the model over a long prediction horizon of the states; therefore, prediction errors over a long time horizon should be minimized in order to increase tracking accuracy. To this end, this work presents a system identification procedure for the lateral dynamics of a vehicle that combines an LPV model with a learning algorithm that has been successfully applied to other dynamic systems in the past. Simulation results show the benefits of the identified model in comparison to other well-known vehicular lateral dynamics models.
|
|
09:35-09:40, Paper WeAM1_T3.3 | |
Autonomous Driving Vehicle Control Auto-Calibration System: An Industry-Level, Data-Driven and Learning-Based Vehicle Longitudinal Dynamic Calibrating Algorithm |
|
Zhu, Fan | Baidu USA LLC |
Xu, Xin | Baidu, Inc |
Ma, Lin | Baidu, Inc |
Guo, Dingfeng | Baidu, Inc |
Cui, Xiao | Baidu, Inc |
Kong, Qi | Baidu USA LLC |
Keywords: Self-Driving Vehicles, Unsupervised Learning, Vehicle Control
Abstract: The control module is a crucial part of autonomous driving systems. A typical control algorithm often requires vehicle dynamics (such as longitudinal dynamics) as inputs, which, unfortunately, are difficult to calibrate in real time. Further, it is also a challenge to reflect instantaneous changes in longitudinal dynamics (e.g. load changes) using a calibration table. As a result, control performance may deteriorate when the load changes considerably (especially for small cargoes). In this paper, we show how we built a data-driven longitudinal calibration procedure using machine learning techniques to adapt to load changes in real time. We first generated offline calibration tables from human driving data. The offline table serves as an initial guess for later use, and it only requires twenty minutes of data collection and processing. We then used an online learning algorithm to appropriately update the initial (offline) table based on real-time performance analysis. Experiments indicated that (a) offline auto-calibration leads to better control accuracy compared with manual calibration, and (b) online auto-calibration is capable of handling load changes and significantly reduces real-time control error. This system has been deployed to more than one hundred Baidu self-driving vehicles (both hybrid and electric) since April 2018. By January 2019, the system had been tested for more than 2,000 hours and over 10,000 kilometers (6,213 miles) and proven to be effective.
|
|
09:40-09:45, Paper WeAM1_T3.4 | |
Longitudinal Dynamics Model Identification of an Electric Car Based on Real Response Approximation |
|
Dominguez Quijada, Salvador | Ecole Centrale of Nantes |
Garcia, Gaetan | Ecole Centrale De Nantes |
Hamon, Arnaud | Ecole Centrale De Nantes |
Fremont, Vincent | Ecole Centrale De Nantes, CNRS, LS2N, UMR 6004 |
Keywords: Self-Driving Vehicles, Automated Vehicles, Vehicle Control
Abstract: Obtaining a realistic and accurate model of the longitudinal dynamics is key to good speed control of a self-driving car. It is also useful for simulating the longitudinal behavior of the vehicle with high fidelity. In this paper, a straightforward and generic method for obtaining the friction, braking, and propulsion forces as functions of speed, throttle input, and brake input is proposed. Experimental data is recorded during tests over the full speed range to estimate the forces, to which the corresponding curves are adjusted. A simple and direct balance of forces in the direction tangent to the ground is used to obtain an estimate of the real forces involved. Then a model composed of approximate spline curves that fit the results is proposed. Using splines to model the dynamic response has the advantage of being quick and accurate, avoiding the complexity of parameter identification and the tuning of non-linear responses embedding the internal functionalities of the car, like ABS or regenerative braking. This methodology has been applied to LS2N's electric Renault Zoe but can be applied to any other electric car. As shown in the experimental section, a comparison between the estimated acceleration of the car using the model and the real one, for normal driving over a wide range of speeds along a trip of about km/h, reveals only m/s² of error standard deviation in a range of m/s², which is very encouraging.
|
|
09:45-09:50, Paper WeAM1_T3.5 | |
Deep Transfer Learning Enable End-To-End Steering Angles Prediction for Self-Driving Car |
|
Jiang, Huatao | Chinese Academy of Science |
Keywords: Deep Learning, Self-Driving Vehicles, Situation Analysis and Planning
Abstract: Autonomous driving has developed rapidly over the last few years. Predicting the steering angle of a self-driving car under different road conditions is very important. There have been endeavors on this topic, including lane detection, object detection on roads, 3-D reconstruction, etc., but in our work we focus on a vision-based model that directly maps raw input images to steering angles using deep networks, and this model does not depend on specifying the features to learn. In this paper, we propose an end-to-end steering angle prediction model based on deep transfer learning that accurately predicts steering angles from input image sequences captured by an on-board camera. The prediction model combines two deep learning models: a convolutional neural network (CNN) and a long short-term memory (LSTM) network. The CNN model we use is VGG16, applied with transfer learning techniques and pre-trained on ImageNet with good performance. This network is used to extract spatial features from the input image sequences, and the LSTM network is used to capture the temporal information of the provided images. The proposed model fully considers spatio-temporal information and fits the nonlinear relationship between the input images and the steering angles well. To validate the proposed model, an experimental study is conducted using a real-world dataset provided by Udacity. Experimental results show that the proposed model can efficiently predict steering angles and clone humans' driving behaviors, with better performance, higher accuracy, and less training time.
|
|
09:50-09:55, Paper WeAM1_T3.6 | |
Vehicle Following Over a Closed Ring Road under Safety Constraint |
|
Pooladsanj, Milad | University of Southern California |
Savla, Ketan | University of Southern California |
Ioannou, Petros | University of Southern California |
Keywords: Automated Vehicles, Vehicle Control, Collision Avoidance
Abstract: Increasing traffic volume with respect to physical space motivates explicit consideration of space constraints in traffic system analysis. We study the dynamics of a system of homogeneous vehicles executing safe vehicle following on a closed single-lane ring road. The dynamics of each vehicle are governed by a standard second order model and a two mode vehicle following controller. One mode is cruise control and the other is a constant time headway control for safety; switching between modes is determined by a linear combination of relative distance and speed. We show that there exists a threshold value for the number of vehicles at which the equilibria for inter-vehicle configurations transition from being infinite to being unique. We explicitly characterize the unique equilibrium in the latter case as well as the threshold value for transition in terms of system parameters (road length, constant time headway and free flow speed). We also show that, starting from any initial condition, the inter-vehicle configuration converges to an equilibrium. The threshold value for the number of vehicles is also shown to define the boundary at which the transfer function from external disturbance to error in relative spacing changes.
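The two-mode following law described above can be illustrated with a toy simulation. All gains, the exact switching rule and the parameter values below are our own assumptions for illustration, not taken from the paper; the only structure retained is a ring road, second order dynamics, and switching between a cruise mode and a constant-time-headway mode:

```python
# Toy simulation of two-mode vehicle following on a ring road.
L = 400.0            # ring road length [m]
N = 10               # number of vehicles
V_F = 20.0           # free-flow (cruise) speed [m/s]
H, D0 = 1.5, 5.0     # constant time headway [s], standstill gap [m]
KP, KD, KV = 0.3, 0.5, 0.5   # assumed controller gains
DT, STEPS = 0.05, 6000

gaps0 = [30.0, 50.0] * 5     # uneven initial spacing, sums to L
x = [0.0]
for g in gaps0[:-1]:
    x.append(x[-1] + g)
v = [10.0] * N

for _ in range(STEPS):
    acc = []
    for i in range(N):
        lead = (i + 1) % N
        s = (x[lead] - x[i]) % L           # gap to predecessor
        dv = v[lead] - v[i]
        # Switching: linear combination of relative distance and speed
        if s - D0 - H * v[i] + 2.0 * dv > 0.0:
            a = KV * (V_F - v[i])                    # cruise mode
        else:
            a = KP * (s - D0 - H * v[i]) + KD * dv   # time-headway mode
        acc.append(max(-3.0, min(2.0, a)))
    for i in range(N):
        v[i] = max(0.0, v[i] + acc[i] * DT)
        x[i] = (x[i] + v[i] * DT) % L

gaps_end = [(x[(i + 1) % N] - x[i]) % L for i in range(N)]
```

With ten vehicles on a 400 m ring the free-flow spacing is feasible, so the configuration settles into one of many admissible equilibria, consistent with the abstract's sub-threshold regime.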
|
|
09:55-10:00, Paper WeAM1_T3.7 | |
Model Predictive Control Based Stability Control of Autonomous Vehicles on Low Friction Road |
|
Joa, Eunhyek | Seoul National University |
Hyun, Youngjin | Seoul National University |
Park, Kwanwoo | Seoul National University |
Kim, Jayu | Seoul National University |
Yi, Kyongsu | Seoul National University |
Keywords: Vehicle Control, Autonomous / Intelligent Robotic Vehicles, Self-Driving Vehicles
Abstract: A key challenge in developing fully autonomous vehicles is driving safely in inclement weather. Driving in inclement weather is often risky because reacting proactively and stabilizing the vehicle on a low friction road is challenging, unlike driving on a high friction road. To tackle this issue, this paper presents a predictive motion framework for operating safely on low friction roads without prior knowledge of the tire-road friction coefficient. The proposed control algorithm consists of an instability detection algorithm and longitudinal/lateral motion control algorithms. The instability detection algorithm (1) determines whether the vehicle is stable without knowing the friction coefficient and (2) estimates the friction coefficient. The longitudinal and lateral motion control algorithms are designed separately but interdependently, based on the Model Predictive Control method, to proactively control the vehicle with a forecast of vehicle motions. The potential of the proposed approach is shown through computer simulations with a high fidelity vehicle model. The results show that the proposed algorithm (1) successfully reduces speed to avoid path deviation and (2) detects vehicle instability due to sudden friction changes and stabilizes the vehicle.
|
|
10:00-10:05, Paper WeAM1_T3.8 | |
Collision Preventive Velocity Planning Based on Static Environment Representation for Autonomous Driving in Occluded Region |
|
Jeong, Yonghwan | Seoul National University |
Yoo, Jinsoo | Seoul National University |
Yoon, Youngmin | Seoul National University |
Yi, Kyongsu | Seoul National University |
Keywords: Automated Vehicles, Self-Driving Vehicles, Vehicle Control
Abstract: This paper presents a collision preventive velocity planning algorithm to enhance safety when driving through occluded regions in an urban environment. Accidents caused by pedestrians appearing from occluded regions occur frequently on complex urban roads. The collision preventive velocity planner is proposed to reduce the potential risk caused by regions occluded from the sensors. The point cloud of a 2D Lidar is processed to construct a static obstacle map. A static obstacle boundary is defined to estimate the possible positions of appearing pedestrians based on the static obstacle map. The longitudinal motion planner determines the target states using the static obstacle boundary to prevent otherwise inevitable collisions with pedestrians emerging from the occluded region. The target states are tracked by an MPC-based motion tracker to determine the desired acceleration. The collision prevention performance of the proposed algorithm has been validated by Monte-Carlo simulation. The simulation results demonstrate that the proposed algorithm prevents collisions with pedestrians from the occluded region and improves the safety of urban autonomous driving.
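The core idea of slowing down for occlusions admits a simple closed-form illustration: cap speed so that reaction plus braking distance fits inside the distance to the occlusion boundary. This is a generic sketch, not the paper's planner; the deceleration and reaction-time values are assumed:

```python
import math

def safe_speed(d_occ, a_brake=6.0, t_react=0.3):
    """Largest speed v [m/s] such that the stopping distance
    v*t_react + v**2 / (2*a_brake) fits inside the occlusion
    distance d_occ [m], so the car can stop for a pedestrian
    emerging at the occlusion boundary."""
    if d_occ <= 0.0:
        return 0.0
    # Solve v*t_react + v^2/(2*a_brake) = d_occ for v >= 0
    return a_brake * (-t_react + math.sqrt(t_react ** 2 + 2.0 * d_occ / a_brake))
```

A velocity planner could evaluate this cap along the path and take the minimum with the nominal speed profile.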
|
|
WeAM2_T1 |
EGYPTIAN_1 |
Vision Sensing and Perception 1. B |
Regular Session |
|
10:15-10:20, Paper WeAM2_T1.1 | |
RSS-Net: Weakly-Supervised Multi-Class Semantic Segmentation with FMCW Radar |
|
Kaul, Prannay | University of Oxford |
De Martini, Daniele | University of Oxford |
Gadd, Matthew | Oxford Robotics Institute, University of Oxford |
Newman, Paul | University of Oxford |
Keywords: Radar Sensing and Perception, Vehicle Environment Perception, Deep Learning
Abstract: This paper presents an efficient annotation procedure and an application thereof to end-to-end, rich semantic segmentation of the sensed environment using Frequency-Modulated Continuous-Wave scanning radar. We advocate radar over the traditional sensors used for this task as it operates at longer ranges and is substantially more robust to adverse weather and illumination conditions. We avoid laborious manual labelling by exploiting the largest radar-focused urban autonomy dataset collected to date, correlating radar scans with RGB cameras and LiDAR sensors, for which semantic segmentation is an already consolidated procedure. The training procedure leverages a state-of-the-art natural image segmentation system which is publicly available and as such, in contrast to previous approaches, allows for the production of copious labels for the radar stream by incorporating four cameras and two LiDAR streams. Additionally, the losses are computed taking into account labels to the radar sensor horizon by accumulating LiDAR returns along a pose-chain ahead of and behind the current vehicle position. Finally, we present the network with multi-channel radar scan inputs in order to deal with ephemeral and dynamic scene objects.
|
|
10:20-10:25, Paper WeAM2_T1.2 | |
Single-Shot 3D Detection of Vehicles from Monocular RGB Images Via Geometrically Constrained Keypoints in Real-Time |
|
Gählert, Nils | Mercedes-Benz AG, University of Jena |
Wan, Jun-Jun | Karlsruhe Institute of Technology, Daimler AG R & D |
Jourdan, Nicolas | Mercedes-Benz AG |
Finkbeiner, Jan Robert | Mercedes-Benz AG |
Franke, Uwe | Daimler AG |
Denzler, Joachim | Friedrich-Schiller-University Jena |
Keywords: Automated Vehicles, Deep Learning, Vision Sensing and Perception
Abstract: In this paper we propose a novel 3D single-shot object detection method for detecting vehicles in monocular RGB images. Our approach lifts 2D detections to 3D space by predicting additional regression and classification parameters and hence keeping the runtime close to pure 2D object detection. The additional parameters are transformed to 3D bounding box keypoints within the network under geometric constraints. Our proposed method features a full 3D description including all three angles of rotation without supervision by any labeled ground truth data for the object’s orientation, as it focuses on certain keypoints within the image plane. While our approach can be combined with any modern object detection framework with little computational overhead, we exemplify the extension of SSD for the prediction of 3D bounding boxes. We test our approach on different datasets for autonomous driving and evaluate it using the challenging KITTI 3D Object Detection as well as the novel nuScenes Object Detection benchmarks. While we achieve competitive results on both benchmarks we outperform current state-of-the-art methods in terms of speed with more than 20 FPS for all tested datasets and image resolutions.
|
|
10:25-10:30, Paper WeAM2_T1.3 | |
SPFCN: Select and Prune the Fully Convolutional Networks for Real-Time Parking Slot Detection |
|
Yu, Zhuoping | Tongji University |
Gao, Zhong | Tongji University |
Chen, Hansheng | Tongji University |
Huang, Yuyao | Tongji University |
Keywords: Vision Sensing and Perception, Deep Learning, Advanced Driver Assistance Systems
Abstract: For passenger cars equipped with the automatic parking function, convolutional neural networks are employed to detect parking slots on the panoramic surround view, which is an overhead image synthesized from four calibrated fish-eye camera shots. High accuracy is obtained at the price of low speed or expensive computing equipment, both of which are sensitive concerns for many car manufacturers. In this paper, the proposed parking slot detector leverages deep convolutional networks for faster speed and a smaller model size while maintaining accuracy. To achieve the optimal balance, we leave redundancies in the weights and develop a strategy to automatically select the best receptive fields and prune the redundant channels after each training epoch. The proposed model is capable of jointly detecting corners and line features of parking slots while running efficiently in real time on average processors. Even without any dedicated computing devices, the model outperforms existing counterparts at a frame rate of about 30 FPS on a 2.3 GHz CPU core, yielding a parking slot corner localization error of 1.51±2.14 cm (std. err.) and a slot detection accuracy of 98%, generally satisfying the requirements in both speed and accuracy for on-board mobile terminals.
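Channel pruning of the kind mentioned above is commonly driven by a per-channel importance score. The sketch below uses L1 norms as a simplified stand-in for the paper's automatic selection after each epoch; the function name, data layout and ranking criterion are assumptions:

```python
def select_channels(weights, keep_ratio=0.5):
    """Return indices of channels to keep, ranked by L1 norm.
    weights: list of per-channel weight lists (one inner list per channel).
    A simplified stand-in for automatic channel selection and pruning
    after each training epoch."""
    norms = [sum(abs(w) for w in ch) for ch in weights]
    k = max(1, int(len(weights) * keep_ratio))
    # Keep the k channels with the largest L1 norms
    order = sorted(range(len(weights)), key=lambda i: norms[i], reverse=True)
    return sorted(order[:k])
```

In a training loop, a new (smaller) layer would then be rebuilt from only the kept channel indices.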
|
|
10:30-10:35, Paper WeAM2_T1.4 | |
Deep Learning Based Traffic Signs Boundary Estimation |
|
Hrustic, Emir | ISAE-SUPAERO |
Xu, Zhujun | ISAE-SUPAERO |
Vivet, Damien | ISAE-SUPAERO |
Keywords: Convolutional Neural Networks, Deep Learning, Vision Sensing and Perception
Abstract: In the context of autonomous navigation, the localization of the vehicle relies on the accurate detection and tracking of artificial landmarks. These landmarks are based on handcrafted features. However, because of their low-level nature, they are neither informative nor robust under various conditions (lighting, weather, point of view). Moreover, in Advanced Driver-Assistance Systems (ADAS) and road safety, intense efforts have been made to implement automatic visual data processing, with special emphasis on road object recognition. The main idea of this work is to detect accurate higher-level landmarks such as static semantic objects using deep learning frameworks. We mainly focus on the accurate detection, segmentation and classification of vertical traffic signs according to their function (danger, give way, prohibition/obligation, and indication). This paper presents the boundary estimation of European traffic signs from an embedded monocular camera in a vehicle. We propose a framework using two different deep neural networks in order to: (1) detect and recognize traffic signs in the video flow and (2) regress the coordinates of each vertex of the detected traffic sign to estimate its shape boundary. We also provide a comparison of our method with Mask R-CNN, which is the state-of-the-art segmentation method.
|
|
10:35-10:40, Paper WeAM2_T1.5 | |
DS-PASS: Detail-Sensitive Panoramic Annular Semantic Segmentation through SwaftNet for Surrounding Sensing |
|
Yang, Kailun | Karlsruhe Institute of Technology |
Hu, Xinxin | Zhejiang University |
Chen, Hao | Zhejiang University |
Xiang, Kaite | State Key Laboratory of Modern Optical Instrumentation, Zhejiang |
Wang, Kaiwei | Zhejiang University
Stiefelhagen, Rainer | Karlsruhe Institute of Technology |
Keywords: Vision Sensing and Perception, Convolutional Neural Networks, Vehicle Environment Perception
Abstract: Semantically interpreting the traffic scene is crucial for autonomous transportation and robotics systems. However, state-of-the-art semantic segmentation pipelines are dominantly designed to work with pinhole cameras and train with narrow Field-of-View (FoV) images. In this sense, the perception capacity is severely limited to offer higher-level confidence for upstream navigation tasks. In this paper, we propose a network adaptation framework to achieve Panoramic Annular Semantic Segmentation (PASS), which allows re-using conventional pinhole-view image datasets, enabling modern segmentation networks to comfortably adapt to panoramic images. Specifically, we adapt our proposed SwaftNet to enhance the sensitivity to details by implementing attention-based lateral connections between the detail-critical encoder layers and the context-critical decoder layers. We benchmark the performance of efficient segmenters on panoramic segmentation with our extended PASS dataset, demonstrating that the proposed real-time SwaftNet outperforms state-of-the-art efficient networks. Furthermore, we assess real-world performance when deploying the Detail-Sensitive PASS (DS-PASS) system on a mobile robot and an instrumented vehicle, as well as the benefit of panoramic semantics for visual odometry, showing the robustness and potential to support diverse navigational applications.
|
|
10:40-10:45, Paper WeAM2_T1.6 | |
Fully Automated Traffic Sign Substitution in Real-World Images for Large-Scale Data Augmentation |
|
Horn, Daniela | Ruhr University Bochum |
Houben, Sebastian | Ruhr-Universität Bochum |
Keywords: Deep Learning, Vision Sensing and Perception, Vehicle Environment Perception
Abstract: Video-based traffic sign recognition is a key ability of autonomous vehicles but a demanding challenge due to the enormous number of classes and natural conditions in the wild. We address this problem with a fully automatic close-to-life image-to-image translation technique for traffic sign substitution in natural images. The work is intended as a data augmentation technique and allows for training rare or unavailable traffic sign classes, or otherwise uncommon cases in visual traffic sign detection and classification. To this end, we extend our previous data generation model and propose a rendering pipeline to create convincing traffic sign images with realistic background and camera recording artifacts. Experiments are conducted by exchanging traffic sign classes on different parts of the German Traffic Sign Recognition Benchmark (GTSRB). We demonstrate that the pipeline is well-suited for generating representative images of unseen traffic sign classes. A baseline image classification setup trained on real data shows an overall performance similar to being trained with a comparable number of artificial data samples. Our code is made publicly available under an open source license.
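A core building block of any sign-substitution pipeline is compositing a rendered sign patch into a background image. The minimal sketch below alpha-blends a patch into a grayscale image; the function name and nested-list representation are assumptions, and it omits the paper's geometric and photometric rendering steps:

```python
def paste_sign(image, sign, alpha, top, left):
    """Alpha-blend a rendered sign patch into a background image in place.
    image and sign are nested lists of grayscale floats; alpha is a
    per-pixel mask in [0, 1] (e.g. soft edges from the renderer)."""
    for i, row in enumerate(sign):
        for j, s_val in enumerate(row):
            a = alpha[i][j]
            image[top + i][left + j] = a * s_val + (1.0 - a) * image[top + i][left + j]
    return image
```

In a real pipeline the patch would first be warped to the target sign's pose and matched to the scene's lighting before blending.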
|
|
10:45-10:50, Paper WeAM2_T1.7 | |
Provident Detection of Vehicles at Night |
|
Oldenziel, Emilio | University of Groningen |
Ohnemus, Lars | Dr. Ing. H.c. F. Porsche AG |
Saralajew, Sascha | Dr. Ing. H.c. F. Porsche AG |
Keywords: Vision Sensing and Perception, Self-Driving Vehicles, Deep Learning
Abstract: Visual perception is one of the most important information sources during driving. However, current camera perception systems are limited to object detection and, hence, to directly visible objects. Because it is mandatory for vehicles to run headlights at night, their emitted light can be detected before a vehicle is directly visible. Humans use this phenomenon to providently detect vehicles at night. In this paper, we analyze the discrepancy between ordinary vehicle detection and provident detection by quantifying the time gap between the two. This is achieved by conducting a test group study where participants are recorded while driving at night. Additionally, the dataset recorded during the study is used to provide a training and test dataset for machine learning approaches. To make use of the dataset for training machine learning methods, we analyze and discuss several annotation techniques. In a proof-of-concept, we used this dataset with its annotations to train a neural network on the direct and provident vehicle detection task. The resulting model shows that neural networks can successfully learn how to detect light-features. With further research and improvements, we are confident that a model for provident vehicle detection can be industrialized for use in production vehicles so that this information can be used in various safety and planning functions, including automatically adapting the high beam before it blinds other road users.
|
|
WeAM2_T2 |
EGYPTIAN_2 |
Cooperative Systems (V2X). B |
Regular Session |
|
10:15-10:20, Paper WeAM2_T2.1 | |
Edge Computing for Interconnected Intersections in Internet of Vehicles |
|
Lee, Gilsoo | Virginia Tech |
Guo, Jianlin | Mitsubishi Electric Research Laboratories |
Kim, Keyong Jin | Mitsubishi Electric Research Laboratories |
Orlik, Philip | Mitsubishi Electric Research Laboratories |
Ahn, Heejin | Mitsubishi Electric Research Laboratories |
Di Cairano, Stefano | Mitsubishi Electric Research Laboratories |
Saad, Walid | Virginia Tech |
Keywords: Vehicle Control, Cooperative ITS, Cooperative Systems (V2X)
Abstract: To improve the traffic flow at interconnected intersections, the vehicles and infrastructure such as road side units (RSUs) need to collaboratively determine vehicle scheduling while exchanging information via vehicle-to-everything (V2X) communications. However, due to the large number of vehicles and their mobility, such scheduling at interconnected intersections is a challenging problem. Moreover, since low-latency information exchange and a real-time decision-making process are required, it becomes even more challenging to design a holistic framework incorporating traffic control and V2X communications. In this paper, a distributed edge computing framework is proposed to solve a travel time minimization problem at interconnected intersections. In particular, the proposed framework enables each RSU to decide intersection scheduling while the vehicles individually determine their travel trajectories by controlling their dynamics. To this end, a V2X communications protocol is designed to exchange information among vehicles and RSUs. Then, the road segments around each intersection are partitioned into sequence, control, and crossing zones. In the sequence zone, an optimal time is scheduled for vehicles to pass the intersection with the minimum delay. In the control zone, the location and velocity of each vehicle are controlled to arrive at the crossing zone at the scheduled time by using a control algorithm designed to effectively increase driving comfort and reduce fuel consumption. Thus, the proposed framework enables the vehicles to safely pass the crossing zone without collision. Simulation results show that the proposed edge computing framework can successfully reduce the total travel time by up to 14.3% based on optimal scheduling for the interconnected intersections.
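The sequence-zone scheduling step can be illustrated with a first-come-first-served sketch that an RSU might run: each vehicle gets the earliest crossing time that respects a minimum separation from conflicting (and same-movement) vehicles already scheduled. The data shapes, the FCFS policy and the separation value are all assumptions for illustration, not the paper's optimal scheduler:

```python
def schedule_crossings(arrivals, conflicts, sep=2.0):
    """arrivals: list of (vehicle_id, movement, earliest_time).
    conflicts: set of frozenset movement pairs that may not overlap.
    Returns {vehicle_id: scheduled crossing time}, processed FCFS."""
    last = {}    # movement -> last scheduled crossing time
    sched = {}
    for vid, mv, t in sorted(arrivals, key=lambda a: a[2]):
        t_ok = t
        for other, t_last in last.items():
            # Separate from the same movement (car following) and
            # from any conflicting movement.
            if other == mv or frozenset((mv, other)) in conflicts:
                t_ok = max(t_ok, t_last + sep)
        sched[vid] = t_ok
        last[mv] = t_ok
    return sched
```

The control zone would then plan each vehicle's speed profile so that it arrives at its scheduled time.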
|
|
10:20-10:25, Paper WeAM2_T2.2 | |
Decentralized Cooperative Collision Avoidance for Automated Vehicles: A Real-World Implementation |
|
Bellan, Daniele | Applus IDIADA |
Wartnaby, Charles Ernest | Applus IDIADA |
Keywords: Cooperative Systems (V2X), Collision Avoidance, Automated Vehicles
Abstract: Connected and automated vehicles provide a new opportunity for collision avoidance, in which several cars cooperate to reach an optimal overall outcome. However, this requires solving the challenging real-time problem of globally designing the joint trajectories that a group of cooperating vehicles should follow to avoid an obstacle and mutually avoid each other. The leaderless method demonstrated here uses the notion of ``desired'' versus ``planned'' trajectories, allowing vehicles to influence each other for mutual benefit: each vehicle independently attempts to avoid the desired trajectories of the other vehicles, and thus cooperative behavior emerges. Each vehicle is equipped with a model predictive controller that optimizes both of its trajectories. This flexible, decentralized method has been executed in a novel demonstration, using two full-sized vehicles on a test track, and results of the fully automated cooperative behavior achieved are presented.
|
|
10:25-10:30, Paper WeAM2_T2.3 | |
Cooperative Automated Lane Merge with Role-Based Negotiation |
|
Eiermann, Lucas | Mercedes-Benz AG |
Sawade, Oliver | IAV GmbH |
Bunk, Sebastian | Daimler Center for Automotive IT Innovations |
Breuel, Gabi | Mercedes-Benz AG |
Radusch, Ilja | Fraunhofer FOKUS |
Keywords: Cooperative Systems (V2X), V2X Communication, Cooperative ITS
Abstract: Recent years have seen steady advances in automated vehicle research and development. Similarly, communication between vehicles is no longer seen merely as a telematics function, but rather as a way to enable better perception, higher coordination capabilities in groups of automated vehicles, or even directly planning cooperative driving maneuvers. Negotiation and planning for safety-critical functions demands a high level of safety, security and robustness to external disturbances. The current state of the art in autonomous vehicles is impressive, but even after the huge amount of work invested in previous years, these vehicles still struggle with basic driving maneuvers such as negotiating intersections and lane merges. Cooperation between vehicles using wireless communication can partially solve these issues, as it improves perception, prediction and most of all planning. In this paper we present a cooperative automated lane merge function, specifically looking at challenging on-ramp merge scenarios. We detail the planning, cooperation and execution levels and show an implementation and testing approach using a 3D simulation environment. We present an evaluation of function performance and network load and discuss tradeoffs in function and network protocol design.
|
|
10:30-10:35, Paper WeAM2_T2.4 | |
Space Time Reservation Procedure (STRP) for V2X-Based Maneuver Coordination of Cooperative Automated Vehicles in Diverse Conflict Scenarios |
|
Nichting, Matthias | German Aerospace Center (DLR) |
Hess, Daniel | Deutsches Zentrum Für Luft Und Raumfahrt E.V |
Schindler, Julian | German Aerospace Center (DLR) |
Hesse, Tobias Wilhelm | Heinz Nixdorf Institute, University of Paderborn |
Köster, Frank | German Aerospace Center (DLR) Institute of Transportation System |
Keywords: Cooperative Systems (V2X), V2X Communication, Automated Vehicles
Abstract: In order to use the road network as efficiently as possible and to fully exploit the potential of automated vehicles, vehicle cooperation is essential. Conflicts between road users over available space on the road can only be resolved efficiently through cooperation. To this end, we investigate the space time reservation procedure (STRP), which is a generic approach to cooperation between vehicles. The method is based on a decentralized two-step negotiation procedure during which parts of the road can be reserved in order to carry out maneuvers that were originally conflicting. After the rollout of the first cooperative automated vehicles, a long transition period with mixed traffic is expected. Since the approach in this paper builds on existing road traffic regulations and does not abolish them, the method can also be applied in mixed traffic. Compared to the preliminary work on this approach, the method is simplified and further generalized for already supported traffic situations. This increases flexibility during the execution of cooperative maneuvers. Additionally, new reservation geometries covering almost all possible conflicts in traffic are investigated, e.g., lane changes, intersections, overtaking, roundabouts. Finally, the approach is evaluated by means of test drives with two research vehicles and in simulations.
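The essence of a space-time reservation is that two reservations conflict only if they overlap in both space and time. A minimal sketch of that admission check follows; the flat (segment, time-interval) representation and function names are our assumptions, not the STRP message format:

```python
def overlaps(r1, r2):
    """Each reservation is (s_start, s_end, t_start, t_end): an interval
    along a shared road segment [m] and a time window [s]. Two
    reservations conflict iff they intersect in both space and time."""
    s0, s1, t0, t1 = r1
    u0, u1, v0, v1 = r2
    return (s0 < u1 and u0 < s1) and (t0 < v1 and v0 < t1)

def can_grant(request, granted):
    """A new reservation may be granted iff it conflicts with none
    of the already granted reservations."""
    return all(not overlaps(request, g) for g in granted)
```

Real reservation geometries (lane changes, intersections, roundabouts) would replace the 1-D segment with 2-D shapes, but the overlap test plays the same role.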
|
|
10:35-10:40, Paper WeAM2_T2.5 | |
CVIP: A Protocol for Complex Interactions among Connected Vehicles |
|
Häfner, Bernhard | Technical University of Munich |
Jiru, Josef | Fraunhofer IKS |
Roscher, Karsten | Fraunhofer IKS |
Ott, Jörg | Technische Universität München |
Schmitt, Georg Albrecht | BMW |
Sevilmis, Yagmur | Fraunhofer IKS |
Keywords: Cooperative Systems (V2X), Situation Analysis and Planning, V2X Communication
Abstract: Automated vehicles need to interact: to create mutual awareness and to coordinate maneuvers. How this interaction shall be achieved is still an open issue. Several new protocols are discussed for cooperative services such as changing lanes or overtaking, e.g., within the European Telecommunications Standards Institute (ETSI) and Society of Automotive Engineers (SAE). These communication protocols are, however, usually specific to individual maneuvers or based on implicit assumptions on other vehicles’ intentions. To enable reuse and support extensibility towards future maneuvers, we propose CVIP, a protocol framework for complex vehicular interactions. CVIP supports explicitly negotiating maneuvers between the involved vehicles and allows monitoring maneuver progress via status updates. We present our design in detail and demonstrate via simulations that it enables complex inter-vehicle interactions in a flexible, efficient and robust manner. We also discuss open questions to be answered before complex interactions among automated vehicles can become a reality.
|
|
10:40-10:45, Paper WeAM2_T2.6 | |
Provably-Safe Cooperative Driving Via Invariably Safe Sets |
|
Irani Liu, Edmond | Technical University of Munich |
Pek, Christian | Technical University of Munich |
Althoff, Matthias | Technische Universität München |
Keywords: Automated Vehicles, Cooperative Systems (V2X), Cooperative ITS
Abstract: We address the problem of provably-safe cooperative driving for a group of vehicles that operate in mixed traffic scenarios, where both autonomous and human-driven vehicles are present. Our method is based on Invariably Safe Sets (ISSs), which are sets of states that let each of the cooperative vehicles remain safe for an infinite time horizon. The potential conflicts between the ISSs of a group of cooperative vehicles are resolved by examining and negotiating their Safe Maneuver Corridors. As a result, each vehicle obtains its negotiated ISS, which is used as a target set for motion planning. We demonstrate the applicability and benefits of our method on various traffic scenarios from the CommonRoad benchmark suite.
|
|
10:45-10:50, Paper WeAM2_T2.7 | |
Optimal Control of Urban Intersection Scheduling for Connected Automated Vehicles |
|
Jiang, Shenghao | Harvard University |
Keywords: Cooperative Systems (V2X), Automated Vehicles, Reinforcement Learning
Abstract: We propose a novel urban congestion-aware intersection scheduling model based on vehicle-to-infrastructure (V2I) communication for automated and connected vehicles. In this model, a combined optimization model that couples passing order and vehicular motion control is proposed. In order to resolve intersection conflicts and improve traffic capacity, driving tubes and a potential conflict matrix are applied in the schedule optimization model. Taking the global average waiting time as the optimization objective, we propose a state encoding approach to collect all vehicles' information at the intersection. A Deep Q Network (DQN) method is then applied to solve the scheduling problem, outputting the driving-tube enable vector; subsequently, 7th-order polynomial motion planning is exploited to generate the most comfortable and most efficient trajectory for active vehicles. The optimal time cost profile is fed back to the intersection manager via the V2I channel for the next scheduling decision. The performance of this framework is evaluated on a typical complex Chinese urban scenario with extensive simulation; our framework achieves encouraging results in terms of average waiting time and peak traffic throughput.
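A 7th-order polynomial trajectory is fully determined by eight boundary conditions: position, velocity, acceleration and jerk at both ends. The sketch below solves for the coefficients with a small Gaussian elimination; all function names are assumptions, and this is a generic construction rather than the paper's planner:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def septic_coeffs(p0, v0, a0, j0, pT, vT, aT, jT, T):
    """Coefficients c0..c7 of p(t) = sum c_k t^k matching position,
    velocity, acceleration and jerk at t=0 and t=T (8 conditions)."""
    c = [p0, v0, a0 / 2.0, j0 / 6.0]   # fixed directly by the t=0 conditions

    def dcoef(k, d):                    # d-th derivative factor of t^k
        f = 1.0
        for i in range(d):
            f *= (k - i)
        return f

    # 4x4 system for c4..c7 from the t=T conditions
    A = [[dcoef(k, d) * T ** (k - d) for k in range(4, 8)] for d in range(4)]
    rhs = [pT, vT, aT, jT]
    b = [rhs[d] - sum(c[k] * dcoef(k, d) * T ** (k - d) for k in range(d, 4))
         for d in range(4)]
    return c + solve_linear(A, b)
```

For a rest-to-rest maneuver the resulting profile is the smooth S-curve commonly used for comfortable stop-and-go trajectories.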
|
|
WeAM2_T3 |
EGYPTIAN_3 |
Vehicle Control. B |
Regular Session |
|
10:15-10:20, Paper WeAM2_T3.1 | |
Reinforcement Learning-Based Path Following Control for a Vehicle with Variable Delay in the Drivetrain |
|
Ultsch, Johannes | Deutsches Zentrum Für Luft Und Raumfahrt (DLR) |
Mirwald, Jonas | German Aerospace Center (DLR) |
Brembeck, Jonathan | German Aerospace Center (DLR) |
de Castro, Ricardo | German Aerospace Center (DLR) |
Keywords: Vehicle Control, Reinforcement Learning
Abstract: In this contribution we propose a reinforcement learning-based controller able to solve the path following problem for vehicles with significant delay in the drivetrain. To efficiently train the controller, a control-oriented simulation model for a vehicle with combustion engine, automatic gear box and hydraulic brake system has been developed. In addition, to enhance the reinforcement learning-based controller, we have incorporated preview information in the feedback state to better deal with the delays. We present our approach of designing a reward function which enables the reinforcement learning-based controller to solve the problem. The controller is trained using the Soft Actor-Critic algorithm by incorporating the developed simulation model. Finally, the performance and robustness are evaluated in simulation. Our controller is able to follow an unseen path and is robust against variations in the vehicle parameters, in our case an additional payload.
|
|
10:20-10:25, Paper WeAM2_T3.2 | |
Optimal Control-Based Eco-Ramp Merging System for Connected and Automated Vehicles |
|
Zhao, Zhouqiao | University of California, Riverside |
Wu, Guoyuan | University of California-Riverside |
Wang, Ziran | Toyota Motor North America |
Barth, Matthew | University of California-Riverside |
Keywords: Eco-driving and Energy-efficient Vehicles, Vehicle Control, Cooperative Systems (V2X)
Abstract: Our current transportation system faces a variety of issues in terms of safety, mobility, and environmental sustainability. The emergence of innovative intelligent transportation systems (ITS) technologies, such as connected and automated vehicles (CAVs), unfolds unprecedented opportunities to address the aforementioned issues. In this paper, we propose a hierarchical ramp merging system that not only allows microscopic cooperative maneuvers for CAVs on the ramp to merge into mainline traffic flow but also has controllability of the ramp inflow rate, which enables macroscopic traffic flow control. A centralized optimal control-based approach is proposed to both smooth the merging flow and improve the system-wide mobility as well as fuel consumption of the network. Linear quadratic trackers in both finite horizon and receding horizon forms are developed to solve the optimization problem in terms of path planning and sequence determination, where a microscopic vehicle fuel consumption model is applied. Finally, traffic simulation is conducted through PTV VISSIM to evaluate the impact of the proposed system on a segment of SR-91 E in Corona, CA. The results confirm that under the regulated inflow rate, the proposed system can avoid potential traffic congestion, improve mobility by up to 147.5%, and save up to 47.5% in fuel compared to conventional ramp metering and a ramp without any control.
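A finite-horizon linear quadratic tracker of the kind mentioned above can be illustrated on a scalar toy system, computing the gain sequence by backward Riccati recursion and then regulating the deviation from a target merging speed. The system, weights and target value are our own assumptions for illustration:

```python
def lq_gains(a, b, q, r, qf, n):
    """Finite-horizon discrete-time LQ gains for the scalar system
    x[k+1] = a*x[k] + b*u[k] with stage cost q*x^2 + r*u^2 and
    terminal cost qf*x^2, via backward Riccati recursion."""
    P, gains = qf, []
    for _ in range(n):
        k = (a * b * P) / (r + b * b * P)
        P = q + a * a * P - k * a * b * P
        gains.append(k)
    return gains[::-1]          # time-ordered gains k_0 .. k_{n-1}

# Track an assumed target merging speed by regulating the deviation.
A_SYS, B_SYS = 1.0, 1.0
V_REF = 20.0                    # target speed [m/s], assumed
K = lq_gains(A_SYS, B_SYS, q=1.0, r=1.0, qf=1.0, n=50)
v = 5.0                         # initial speed of the merging vehicle
for k in K:
    u = -k * (v - V_REF)        # tracking control input
    v = A_SYS * v + B_SYS * u
```

A receding-horizon variant would recompute the gains at every step over a shifted horizon instead of applying the whole sequence open-loop.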
|
|
10:25-10:30, Paper WeAM2_T3.3 | |
Integrated Path-Tracking and Control Allocation Controller for Autonomous Electric Vehicle under Limit Handling Condition |
|
Li, Boyuan | Cranfield University |
Ahmadi, Javad | Cranfield University
Lin, Chenhui | Cranfield University |
Siampis, Efstathios | Cranfield University
Longo, Stefano | Cranfield University |
Velenis, Efstathios | Cranfield University |
Keywords: Vehicle Control, Self-Driving Vehicles, Electric and Hybrid Technologies
Abstract: In the current literature, a number of studies have separately considered path-tracking (PT) control and control allocation (CA) methods, but few studies have integrated them. This study proposes an integrated PT and CA method for an autonomous electric vehicle with independent steering and driving actuators in limit handling scenarios. The high-level feedback PT controller determines the desired total tire forces and yaw moment, and is designed to guarantee that the yaw angle error and lateral deviation converge to zero simultaneously. The low-level CA method is formulated as a compact quadratic programming (QP) optimization problem to optimally allocate the individual control actuators. This CA method is designed for a prototype experimental electric vehicle with a particular steering and driving actuator arrangement. The proposed integrated PT controller is validated through numerical simulation based on a high-fidelity CarMaker model in a high-speed limit handling scenario.
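The unconstrained core of such a QP control allocation has a closed form; the sketch below (illustrative only, with a hypothetical four-wheel actuator matrix, not the authors' formulation) allocates a desired total force and yaw moment as a weighted least-squares problem:

```python
def allocate(B, v, w):
    """Unconstrained weighted least-squares control allocation:
    minimize sum_i w[i]*u[i]^2  subject to  B @ u = v,
    solved in closed form as u = W^-1 B^T (B W^-1 B^T)^-1 v.
    B is m x n (m generalized forces, n actuators); here m == 2.
    """
    m, n = len(B), len(B[0])
    # M = B W^-1 B^T (2x2 for two generalized forces)
    M = [[sum(B[i][k] * B[j][k] / w[k] for k in range(n)) for j in range(m)]
         for i in range(m)]
    # solve the 2x2 system M y = v by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    y = [(v[0] * M[1][1] - v[1] * M[0][1]) / det,
         (M[0][0] * v[1] - M[1][0] * v[0]) / det]
    # u = W^-1 B^T y
    return [sum(B[i][k] * y[i] for i in range(m)) / w[k] for k in range(n)]
```

With four wheel forces, a total-force row of ones, and a yaw-moment row of signed half-track widths, the allocated forces satisfy both constraints exactly while spreading effort according to the weights. In the full QP form, actuator limits are added as inequality constraints.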
|
|
10:30-10:35, Paper WeAM2_T3.4 | |
Anomaly Management: Reducing the Impact of Anomalous Drivers with Connected Vehicles |
|
Yang, Hao | McMaster University |
Oguchi, Kentaro | InfoTech Labs, Toyota Motor North America R&D |
Keywords: Collision Avoidance, Vehicle Control, Advanced Driver Assistance Systems
Abstract: Anomalous drivers with errorable behaviors create dangerous driving environments on roads, and they significantly increase the risk of vehicle collisions for themselves and their surrounding vehicles. Eliminating the impact of anomalous drivers on surrounding vehicles is critical to improving driving safety. In this paper, an anomaly management system is developed with the help of connected vehicles to solve this problem. An errorable car-following model is introduced to model the dynamics of anomalous vehicles and to analyze their impact on other vehicles. The system utilizes connected vehicles to monitor the errorable behaviors of anomalous drivers and generates acceleration and lane-changing advice for connected vehicles to avoid dangerous behaviors. The anomaly management system is evaluated with both synthetic experiments and microscopic traffic simulations to understand its benefits in mitigating the risk of vehicle collisions. In the synthetic experiments, the proposed system shows its capability of removing collision and near-collision events completely. The microscopic simulation indicates that the system can reduce the probability of collisions by up to 10% and the ratio of time to collision by 22%.
|
|
10:35-10:40, Paper WeAM2_T3.5 | |
Opening New Dimensions: Vehicle Motion Planning and Control Using Brakes While Drifting |
|
Goel, Tushar | Stanford University |
Goh, Jonathan Y. | Stanford University |
Gerdes, J Christian | Stanford University |
Keywords: Self-Driving Vehicles, Vehicle Control, Automated Vehicles
Abstract: Autonomous vehicles should be able to maintain control in scenarios that push them beyond the limits of handling. In the case of unintended rear tire force saturation while driving, the vehicle should be able to decelerate while still navigating an obstacle-free path. With that objective, this paper presents a novel architecture capable of controlling a rear-wheel-drive vehicle in a drift using brakes in addition to steering and throttle. We demonstrate the existence of another dimension of drift equilibria, which allows motion planning algorithms to prescribe vehicle states independently even while drifting. A tangent-space analysis illustrates the transformation from an under-actuated to a fully-actuated system with the use of front-wheel braking. Minimal modifications to the existing state of the art in drifting can exploit the additional actuation to significantly increase the set of feasible actions for the vehicle. The framework is then experimentally validated for two different trajectories on MARTY, an electric DeLorean drift research platform.
|
|
10:40-10:45, Paper WeAM2_T3.6 | |
Accelerated Convergence of Time-Splitting Algorithm for MPC Using Cross-Node Consensus |
|
Maihemuti, Maierdanjiang | Tsinghua University |
Li, Shengbo Eben | Tsinghua University |
Li, Jie | Tsinghua University |
Gao, Jiaxin | University of Science & Technology Beijing |
Li, Wenyu | Tsinghua University |
Sun, Hao | Beijing Union University |
Cheng, Bo | State Key Laboratory of Automotive Safety and Energy, Tsinghua U |
Keywords: Automated Vehicles, Vehicle Control, Self-Driving Vehicles
Abstract: The strategy of splitting the prediction horizon of model predictive control (MPC) has the potential to compute the optimal action in a parallel way. However, such time-splitting algorithms often converge very slowly because state consensus only happens between pairs of adjacent nodes, i.e., over a point-to-point topology. This paper proposes a generic cross-node consensus method that overcomes this limitation of the point-to-point topology in order to accelerate the convergence of time-splitting MPC. The cross-node consensus is realized by predicting the state transition from one node to another using the plant prediction model, which increases the efficiency of information exchange across the prediction horizon. The time-splitting optimization algorithm is implemented by combining it with the alternating direction method of multipliers (ADMM). Simulations with autonomous driving show that the new algorithm significantly reduces the number of iterations in time-splitting MPC, by about 81% on average compared with the classic time-splitting technique.
|
|
10:45-10:50, Paper WeAM2_T3.7 | |
Representation of an Integrated Non-Linear Model-Based Predictive Vehicle Dynamics Control System by a Co-Active Neuro-Fuzzy Inference System |
|
Sieberg, Philipp Maximilian | University of Duisburg-Essen, Germany |
Hürten, Christian | Universität Duisburg-Essen |
Schramm, Dieter | Universität Duisburg-Essen |
Keywords: Vehicle Control, Advanced Driver Assistance Systems, Active and Passive Vehicle Safety
Abstract: In the context of automated driving, the control of vehicle dynamics is one of the important issues. In addition to conventional control strategies, algorithms with predictive working principles are particularly relevant here. Using mathematical models, the future system behavior can be predicted and thus set optimally. The present paper deals with an integrated non-linear model-based predictive vehicle dynamics control that takes into account the roll and pitch behavior of a vehicle. Due to the underlying optimization, such model-based predictive control algorithms usually entail a high computation effort. To address this issue, the non-linear model-based predictive control algorithm for integrated vehicle dynamics control is represented by a co-active neuro-fuzzy inference system. The two vehicle dynamics control algorithms are validated with respect to control quality and computation effort.
|
|
WeLL |
EGYPTIAN_BALLROOM |
Lunch |
Conference Event |
|
WePM1_T1 |
EGYPTIAN_1 |
Automated Vehicles 1. A |
Regular Session |
|
14:35-14:40, Paper WePM1_T1.1 | |
Clustering of the Scenario Space for the Assessment of Automated Driving |
|
Kerber, Jonas | Technische Universität München |
Wagner, Sebastian | Technical University Munich |
Groh, Korbinian | BMW of North America |
Notz, Dominik | BMW of North America |
Kuehbeck, Thomas | BMW Group Technology Office USA |
Watzenig, Daniel | Virtual Vehicle Research Center |
Knoll, Alois | Technische Universität München |
Keywords: Automated Vehicles, Impact on Traffic Flows, Active and Passive Vehicle Safety
Abstract: Assessment and testing are among the biggest challenges for the release of automated driving. To date, the exact procedure to achieve homologation is not settled. Current research focuses on scenario-based approaches that represent driving scenarios as test cases within a scenario space. This avoids redundancies in testing, enables the inclusion of virtual testing in the process, and makes a statement about test coverage possible. However, it is unclear how to define such a scenario space and its coverage criterion. This work presents a novel approach to the definition of the scenario space. Spatiotemporal filtering of naturalistic highway driving data provides a large number of driving scenarios as a foundation. A custom distance measure between scenarios enables hierarchical agglomerative clustering, categorizing the scenarios into subspaces. The members of each resulting cluster reveal a common structure that is visually observable. We discuss a data-driven solution to define the necessary test coverage for the assessment of automated driving. Finally, the contribution of the findings toward achieving homologation is elaborated.
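A minimal sketch of hierarchical agglomerative clustering over a precomputed scenario-distance matrix (single linkage shown for brevity; the paper's custom distance measure is abstracted into the matrix, and all values below are hypothetical):

```python
def agglomerative(dist, k):
    """Single-linkage agglomerative clustering on a precomputed
    scenario-distance matrix `dist`, merging until k clusters remain."""
    clusters = [{i} for i in range(len(dist))]
    while len(clusters) > k:
        best = None
        # find the pair of clusters with the smallest inter-cluster distance
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dist[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]   # merge b into a
        del clusters[b]
    return clusters
```

Cutting the merge tree at a chosen number of clusters (or at a distance threshold) yields the scenario subspaces.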
|
|
14:40-14:45, Paper WePM1_T1.2 | |
Experimental Evaluation of Minimum Swept-Path Control for Autonomous Reversing of Articulated Vehicles |
|
Liu, Xuanzuo | University of Cambridge |
Madhusudhanan, Anil K. | University of Cambridge |
Cebon, David | University of Cambridge |
Keywords: Advanced Driver Assistance Systems, Vehicle Control, Automated Vehicles
Abstract: This paper validates a newly devised control method for autonomous reversing of articulated vehicles called Minimum Swept Path Control (MSPC) [1, 2]. The theory in [1] can be applied to multiple trailers. The main linear optimal controller was implemented on full-sized tractor-semitrailer and B-double (twin trailer) combinations owned by the Cambridge Vehicle Dynamic Consortium (CVDC). An inner-loop PID compensator was developed and tuned to track the desired steer angle generated by the main controller. The experimental results are in agreement with the simulation results in [1], demonstrating that this approach can significantly reduce the overall swept path of articulated vehicles during autonomous reversing and guarantee accurate convergence to the terminal position of the manoeuvre.
|
|
14:45-14:50, Paper WePM1_T1.3 | |
Pattern Recognition for Driving Scenario Detection in Real Driving Data |
|
Montanari, Francesco | AUDI AG, FAU Erlangen-Nürnberg |
German, Reinhard | University of Erlangen-Nuremberg |
Djanatliev, Anatoli | Friedrich-Alexander University Erlangen, Department for Computer |
Keywords: Automated Vehicles
Abstract: For the scenario-based development and testing of automated and connected driving, a huge, a priori unknown number of different driving scenarios is needed. In this paper we propose an approach that extracts driving scenarios from real driving data without requiring any predefinitions or rules. Instead of searching for specific scenarios in the data, we cluster recurring patterns and interpret the resulting clusters as potential scenario groups. The method shows promising results. In the exemplary clustering we are able to detect four main scenario groups as well as corner cases within the clusters. With a huge amount of data, this method could be used in the future to set up a scenario database in an automatic manner.
|
|
14:50-14:55, Paper WePM1_T1.4 | |
Personalized Ground Vehicle Collision Avoidance System: From a Computational Resource Re-Allocation Perspective |
|
Wang, Zejiang | The University of Texas at Austin |
Wang, Junmin | The University of Texas at Austin |
Keywords: Advanced Driver Assistance Systems, Collision Avoidance
Abstract: Personalized driving assistance systems for vehicle collision avoidance have recently received a considerable amount of attention. Consensus has been reached that both the overall driver-vehicle control performance and driver acceptance can be increased by embedding individual driver preferences and characteristics into the assistance system design. However, the majority of existing personalized controllers have not yet taken the available computational resources into account. Indeed, as stricter requirements on emissions, safety, and vehicle connectivity drastically complicate automotive electronics, it has become common to aggregate several functions inside a single computing unit. Function consolidation simplifies the electronic architecture and saves costs. However, it aggravates the competition for computational resources among different applications. Therefore, this paper proposes a novel perspective on personalized driving assistance system design through computational resource re-allocation. For a driver inherently adept at longitudinal (or lateral) control and less capable of lateral (or longitudinal) control, stronger support from the collision avoidance system and the underlying computational resources can be allocated towards steering (or braking) assistance by this design. CarSim-Simulink conjoint simulations demonstrate that the overall driver-vehicle control performance can be substantially improved with the same computational resource consumption.
|
|
14:55-15:00, Paper WePM1_T1.5 | |
Smooth Reference Line Generation for a Race Track with Gates Based on Defined Borders |
|
Zubača, Jasmina | Graz University of Technology |
Stolz, Michael | Graz University of Technology |
Watzenig, Daniel | Virtual Vehicle Research Center |
Keywords: Autonomous / Intelligent Robotic Vehicles, Automated Vehicles, Self-Driving Vehicles
Abstract: As racing sports have pushed the technological limits of vehicles in the past, automated racing has the potential to directly evaluate new technologies from research in a competitive environment. The required speed of implementation and the fair evaluation criteria enable very fast progress in selecting potential strategies, architectures and algorithms. In this way, automated racing strongly contributes to science as well as to future everyday implementations of driving automation. In this publication, the approaches used by the team Autonomous-Racing-Graz at the ROBORACE final races of season alpha are revealed. The contribution describes how to generate a smooth reference line in the case of non-smooth borders and additional precision gates on a race track. The generation of an initial smooth reference line is key to assuring convergence in later optimization. The described data processing is based on track geometry defined by borders only. The specific challenges addressed by the approach were the incorporation of tight, short gates and the low computational effort of the algorithms, enabling fast tuning and adaptation during racing events on-site.
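A toy sketch of generating an initial smooth reference line from border points: take border midpoints as a first centerline, then apply a small smoothing kernel to interior points while keeping the endpoints fixed (an assumption-laden illustration, not the Autonomous-Racing-Graz implementation):

```python
def reference_line(left, right, passes=3):
    """Initial reference line from track borders: midpoints of corresponding
    left/right border points, relaxed by a [0.25, 0.5, 0.25] smoothing kernel
    applied `passes` times to interior points, endpoints held fixed."""
    line = [((lx + rx) / 2.0, (ly + ry) / 2.0)
            for (lx, ly), (rx, ry) in zip(left, right)]
    for _ in range(passes):
        line = [line[0]] + [
            (0.25 * line[i - 1][0] + 0.5 * line[i][0] + 0.25 * line[i + 1][0],
             0.25 * line[i - 1][1] + 0.5 * line[i][1] + 0.25 * line[i + 1][1])
            for i in range(1, len(line) - 1)
        ] + [line[-1]]
    return line
```

A zig-zag raw centerline comes out noticeably smoother after a few passes, which is the property needed for later optimization to converge. Gate constraints would be added by pinning further points, analogous to the fixed endpoints.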
|
|
15:00-15:05, Paper WePM1_T1.6 | |
Robust Function and Sensor Design Considering Sensor Measurement Errors Applied to Automatic Emergency Steering |
|
Lin, Kuan-Fu | Technical University of Munich |
Stöckle, Christoph | Technische Universität München |
Herrmann, Stephan | Audi AG |
Dirndorfer, Tobias | Audi AG |
Utschick, Wolfgang | Technische Universität München |
Keywords: Active and Passive Vehicle Safety, Automated Vehicles, Collision Avoidance
Abstract: Vehicular safety functions that take over control during dangerous driving situations increase automotive safety. As such functions use sensor measurements to determine the driving situation, unavoidable sensor measurement errors can have a negative impact on both safety and customer satisfaction. In this paper, it is shown how a new methodology for the robust design of sensors and functions in vehicular safety under sensor measurement errors, which has already been applied to the design of an automatic emergency braking (AEB) system, can also be applied to the design of an automatic emergency steering (AES) system. Based on a stochastic model, we formulate the robust design as optimization problems whose solutions yield the optimal parameters of the AES system with respect to a probabilistic quality measure. The probabilistic quality measure is defined similarly to that for the robust design of the AEB system, and a closed-form expression is derived for it in the case of circular vehicle shapes and an emergency steer intervention with constant lateral acceleration. For more complex vehicle shapes and emergency steer interventions, an approximation of the probabilistic quality measure by a Monte Carlo simulation is proposed, which leads to a robust design that is based on simulations of the vehicular safety system under design and is applicable to other vehicular safety systems as well.
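The Monte Carlo approximation of a probabilistic quality measure can be sketched on a toy emergency-intervention model (all parameters, thresholds and the success criterion below are hypothetical illustrations, not the paper's stochastic model):

```python
import random

def quality_measure(sigma, trials=2000, seed=0):
    """Monte Carlo estimate of a toy probabilistic quality measure:
    approach an obstacle at constant speed, trigger braking when the
    *noisy* range measurement drops below a threshold, and count a run
    as good if the car stops neither too close (safety) nor needlessly
    early (customer satisfaction)."""
    rng = random.Random(seed)
    v, dt, decel = 15.0, 0.1, 7.0          # speed [m/s], step [s], braking [m/s^2]
    threshold = 40.0                        # trigger on measured range [m]
    brake_dist = v * v / (2.0 * decel)      # kinematic stopping distance (~16 m)
    good = 0
    for _ in range(trials):
        true_range = 80.0
        while true_range > 0.0:
            measured = true_range + rng.gauss(0.0, sigma)  # sensor error model
            if measured <= threshold:       # intervention fires
                gap = true_range - brake_dist
                good += 10.0 <= gap <= 35.0  # neither collision nor nuisance stop
                break
            true_range -= v * dt
    return good / trials
```

The estimated quality degrades as the sensor noise grows, which is exactly the trade-off the robust design optimizes over.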
|
|
15:05-15:10, Paper WePM1_T1.7 | |
An Efficient Approach to Simulation-Based Robust Function and Sensor Design Applied to an Automatic Emergency Braking System |
|
Leyrer, Michael Ludwig | Technische Universität München |
Stöckle, Christoph | Technische Universität München |
Herrmann, Stephan | Audi AG |
Dirndorfer, Tobias | Audi AG |
Utschick, Wolfgang | Technische Universität München |
Keywords: Active and Passive Vehicle Safety, Automated Vehicles, Collision Avoidance
Abstract: Vehicular safety functions can increase automotive safety by intervening in dangerous situations. However, as such functions rely on sensor measurements to decide on actions, they are subject to sensor measurement errors that influence their performance. Therefore, a manufacturer has to design both the sensors and the functions in a robust manner considering these errors. A methodology for such a robust design has already been proposed for an automatic emergency braking (AEB) system and is based on a probabilistic quality measure. It is often only possible to evaluate such a probabilistic quality measure through simulations of the system under design. Therefore, a novel approach for efficiently evaluating the probabilistic quality measure through simulations of the AEB system is proposed. The structure of the stochastic problem is analyzed and the new approach is derived accordingly. Numerical examples illustrate the savings in computational effort compared to a Monte Carlo simulation, as well as the limits of accuracy. Moreover, the proposed approach generalizes to other vehicular safety systems as well.
|
|
15:10-15:15, Paper WePM1_T1.8 | |
An Optimal Lateral Trajectory Stabilization of Vehicle Using Differential Dynamic Programming |
|
Kumar, Mohit | Forschungszentrum Informatik, Karlsruhe |
Hildebrandt, Arne-Christoph | MAN Truck & Bus SE |
Strauss, Peter | MAN Truck & Bus SE |
Kraus, Sven | Technische Universität München |
Stiller, Christoph | Karlsruhe Institute of Technology |
Zimmermann, Andreas | MAN Truck & Bus SE |
Keywords: Autonomous / Intelligent Robotic Vehicles
Abstract: Vehicles nowadays are equipped with several assistance functions, e.g., cruise control and lane keeping. Classical lateral control approaches used for lane keeping provide good path tracking precision. However, the operational domain of these approaches is limited, e.g., to highway driving. Adapting the control parameters can widen the operational domain, but it is difficult to tune classical approaches for the whole range of automated driving maneuvers. Apart from these classical approaches, optimization-based approaches are also used for lateral trajectory stabilization. The optimization-based approaches have a wider operational domain, but feasibility and real-time execution remain open issues. In this paper, we combine a classical approach and an optimization method for lateral trajectory stabilization. We present a method to optimize the control input calculated by a classical control approach, namely pure pursuit, based on a performance criterion. The performance criterion weighs precision against comfort requirements. A nonlinear optimization based on Differential Dynamic Programming (DDP) is used to solve the optimization problem. The calculated optimal trajectory is finally evaluated using a line search method to ensure convergence and to verify the optimization policy. The approach is demonstrated on a full-scale automated truck prototype and the experimental results are discussed.
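For reference, the classical pure-pursuit control law used as the starting point can be sketched as follows (kinematic bicycle model; the wheelbase and target points in the test are hypothetical):

```python
import math

def pure_pursuit_steer(dx, dy, wheelbase):
    """Pure-pursuit steering angle for a kinematic bicycle model.
    (dx, dy): look-ahead point in the vehicle frame (x forward, y left)."""
    ld = math.hypot(dx, dy)                 # look-ahead distance
    alpha = math.atan2(dy, dx)              # heading error to the target point
    curvature = 2.0 * math.sin(alpha) / ld  # arc through the target point
    return math.atan(wheelbase * curvature) # Ackermann steering angle
```

A target straight ahead yields zero steering, and left/right targets yield symmetric angles; in the paper this output is then refined by the DDP-based optimization.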
|
|
WePM1_T2 |
EGYPTIAN_2 |
Mapping and Localization 1.A |
Regular Session |
|
14:35-14:40, Paper WePM1_T2.1 | |
TTF: Time-To-Failure Estimation for ScanMatching-Based Localization |
|
Tsuchiya, Chikao | Nissan Motor Co., Ltd |
Takei, Shoichi | Nissan Motor Co., Ltd |
Takeda, Yuichi | Nissan Motor Co., Ltd |
Khiat, Abdelaziz | Nissan Motor Co., Ltd |
Keywords: Mapping and Localization, Deep Learning, Self-Driving Vehicles
Abstract: Self-localization is of paramount importance for autonomous vehicles, since the system interprets traffic scene context by combining a high definition map with a precise ego-pose. Therefore, alerting the driver of a potential failure ahead of the actual localization failure is an essential function for any autonomous driving system. This paper introduces the Time-to-Failure (TTF) concept in the localization domain. We propose a TTF predictor with a ResNet34-based feature extractor followed by an LSTM-based regressor. To train the predictor, an efficient training data generation scheme using simulation with intentionally injected noise is also presented. The proposed method is evaluated in the context of both regression and classification. The preliminary experimental results show that the proposed method can predict localization failure up to 10 seconds ahead of the actual event.
|
|
14:40-14:45, Paper WePM1_T2.2 | |
Self-Supervised Map-Segmentation by Mining Minimal-Map-Segments |
|
Tanaka, Kanji | University of Fukui |
Keywords: Mapping and Localization, Vehicle Environment Perception, Unsupervised Learning
Abstract: In visual place recognition (VPR), map segmentation (MS) is a preprocessing technique used to partition a given view-sequence map into place classes (i.e., map segments) so that each class has good place-specific training images for a visual place classifier (VPC). Existing approaches to MS implicitly or explicitly suppose that map segments have a certain size, or that individual map segments are balanced in size. However, recent VPR systems have shown that very small important map segments (minimal map segments) often suffice for VPC, and the remaining large unimportant portion of the map should be discarded to minimize map maintenance cost. Here, a new MS algorithm that can mine minimal map segments from a large view-sequence map is presented. To solve this inherently NP-hard problem, MS is formulated as a video-segmentation problem, and the recently developed efficient point-trajectory based paradigm of video segmentation is used. The proposed map representation was implemented with three types of VPC: a deep convolutional neural network, bag-of-words, and an object class detector, and each was integrated into a Monte Carlo localization (MCL) algorithm within a topometric VPR framework. Experiments using the publicly available NCLT dataset thoroughly investigate the efficacy of MS in terms of VPR performance.
|
|
14:45-14:50, Paper WePM1_T2.3 | |
Time-Of-Flight Camera Based Indoor Parking Localization Leveraging Manhattan World Regulation |
|
Zhao, Hengwang | Shanghai Jiao Tong University |
Yang, Ming | Shanghai Jiao Tong University |
He, Yuesheng | Shanghai Jiao Tong University, the Key Laboratory of System Contr |
Wang, Chunxiang | Shanghai Jiao Tong University |
Keywords: Mapping and Localization, Autonomous / Intelligent Robotic Vehicles
Abstract: Localization is a key problem for autonomous driving in indoor parking lots. Some previously proposed methods are based on UWB, LiDAR, fisheye cameras, etc. However, most of these methods have drawbacks such as high cost or dependency on lighting conditions. To address these challenges, this paper proposes a novel Time-of-Flight (ToF) camera based mapping and localization system for indoor parking lots that leverages the Manhattan World Regulation. ToF cameras are low-cost and can actively generate dense point clouds of the environment without external light sources. To overcome the ToF camera's small field of view, the proposed system utilizes the structural information of the ceiling of indoor parking lots and the Manhattan World Regulation. We track the surface normals on the unit sphere for drift-free rotation estimation. Based on this drift-free rotation, we can effectively calculate the 6-DOF pose with decoupled rotation and translation estimation during mapping or global localization. The system runs in real time on limited computation resources and is demonstrated in two different challenging indoor parking lots, achieving real-time performance at 10 Hz and a localization error of less than 0.1 m.
|
|
14:50-14:55, Paper WePM1_T2.4 | |
Multi-Object Monocular SLAM for Dynamic Environments |
|
B Nair, Gokul | Robotics Research Center, KCIS, IIIT Hyderabad |
Daga, Swapnil | Robotics Research Center, KCIS, IIIT Hyderabad |
Sajnani, Rahul | Robotics Research Center, KCIS, IIIT Hyderabad |
Ramesh, Anirudha | Robotics Research Center, KCIS, IIIT Hyderabad |
Ansari, Junaid Ahmed | IIIT Hyderabad |
Jatavallabhula, Krishna Murthy | Mila, Universite De Montreal |
Krishna, K Madhava | IIIT Hyderabad |
Keywords: Mapping and Localization, Vehicle Environment Perception
Abstract: In this paper, we tackle the problem of multibody SLAM from a monocular camera. The term multibody implies that we track the motion of the camera as well as that of other dynamic participants in the scene. The quintessential challenge in dynamic scenes is unobservability: it is not possible to unambiguously triangulate a moving object from a moving monocular camera. Existing approaches solve restricted variants of the problem, but the solutions suffer from relative scale ambiguity (i.e., a family of infinitely many solutions exists for each pair of motions in the scene). We solve this rather intractable problem by leveraging single-view metrology, advances in deep learning, and category-level shape estimation. We propose a multi pose-graph optimization formulation to resolve the relative and absolute scale factor ambiguities involved. This optimization helps us reduce the average error in the trajectories of multiple bodies over real-world datasets, such as KITTI. To the best of our knowledge, our method is the first practical monocular multi-body SLAM system to perform dynamic multi-object and ego localization in a unified framework in metric scale.
|
|
14:55-15:00, Paper WePM1_T2.5 | |
Improvement of RTK-GNSS with Low-Cost Sensors Based on Accurate Vehicle Motion Estimation Using GNSS Doppler |
|
Takanose, Aoki | Meijo University |
Takikawa, Kanamu | Meijo University |
Arakawa, Takuya | Meijo University |
Meguro, Junichi | Meijo University |
Keywords: Automated Vehicles, Autonomous / Intelligent Robotic Vehicles, Mapping and Localization
Abstract: This study proposes a method for estimating the positions of vehicles in urban environments with high accuracy. We employ satellite positioning by GNSS for position estimation. Real-time kinematic global navigation satellite systems (RTK-GNSS), offering high-precision satellite positioning, can estimate positions with centimeter-scale accuracy. However, in urban areas, the position estimation performance deteriorates owing to multipath errors. Therefore, we propose a method that improves the positioning results by increasing robustness against multipath using the vehicle trajectory. The vehicle trajectory is estimated from the attitude angles and speed; the attitude angles are the heading, pitch, and slip angle. Trajectories can be generated with an error of 0.5 m per 100 m traveled. In the proposed method, the trajectory is used as a constraint to resolve the multipath errors of RTK-GNSS. In the evaluation test, the ratio of high-accuracy position estimates improved by up to 25% compared to the conventional method. This method can contribute to autonomous vehicles, AGV control, and SLAM technology.
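The trajectory generation from attitude angles and speed amounts to dead reckoning along the course angle (heading plus slip angle); a minimal sketch, with hypothetical sample data in the test:

```python
import math

def dead_reckon(speed, heading, slip, dt):
    """Integrate a planar trajectory from per-epoch speed [m/s],
    heading [rad] and slip angle [rad]; the course over ground is
    heading + slip."""
    x = y = 0.0
    traj = [(x, y)]
    for v, psi, beta in zip(speed, heading, slip):
        course = psi + beta
        x += v * math.cos(course) * dt
        y += v * math.sin(course) * dt
        traj.append((x, y))
    return traj
```

The resulting relative trajectory is what the proposed method uses as a constraint when resolving RTK-GNSS multipath ambiguities.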
|
|
15:00-15:05, Paper WePM1_T2.6 | |
Using Detection, Tracking and Prediction in Visual SLAM to Achieve Real-Time Semantic Mapping of Dynamic Scenarios |
|
Chen, Xingyu | Xi'an Jiaotong University |
Xue, Jianru | Xi'an Jiaotong University |
Fang, Jianwu | Chang'an University |
Pan, Yuxin | Xi'an Jiaotong University |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Mapping and Localization, Vision Sensing and Perception
Abstract: In this paper, we propose a lightweight system, RDS-SLAM, based on ORB-SLAM2, which can accurately estimate poses and build semantic maps at the object level for dynamic scenarios in real time using only one commonly used Intel Core i7 CPU. In RDS-SLAM, three major improvements, as well as major architectural modifications, are proposed to overcome the limitations of ORB-SLAM2. Firstly, it applies a lightweight object detection neural network to key frames. Secondly, an efficient tracking and prediction mechanism is embedded into the system to remove the feature points belonging to movable objects in all incoming frames. Thirdly, a semantic octree map is built by probabilistic fusion of detection and tracking results, which enables a robot to maintain a semantic description at the object level for potential interactions in dynamic scenarios. We evaluate RDS-SLAM on the TUM RGB-D dataset, and experimental results show that RDS-SLAM runs at 30.3 ms per frame in dynamic scenarios using only an Intel Core i7 CPU, and achieves accuracy comparable to state-of-the-art SLAM systems that rely heavily on both Intel Core i7 CPUs and powerful GPUs.
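The removal of feature points on movable objects can be sketched as masking points that fall inside detected bounding boxes (a simplified illustration of the idea, not the RDS-SLAM code; the boxes and points in the test are hypothetical):

```python
def filter_dynamic(points, boxes):
    """Drop image feature points that fall inside any detected
    movable-object bounding box (xmin, ymin, xmax, ymax)."""
    def inside(p, b):
        return b[0] <= p[0] <= b[2] and b[1] <= p[1] <= b[3]
    return [p for p in points if not any(inside(p, b) for b in boxes)]
```

In the full system the boxes come from the key-frame detector and are propagated to intermediate frames by the tracking and prediction mechanism, so every incoming frame can be masked without running the detector on it.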
|
|
15:05-15:10, Paper WePM1_T2.7 | |
High Precision Vehicle Localization Based on Tightly-Coupled Visual Odometry and Vector HD Map |
|
Wen, Tuopu | Tsinghua |
Xiao, Zhongyang | Tsinghua University |
Wijaya, Benny | Tsinghua University |
Jiang, Kun | Tsinghua University |
Yang, Mengmeng | Tsinghua University |
Yang, Diange | State Key Laboratory of Automotive Safety and Energy, Collaborat |
Keywords: Mapping and Localization, Automated Vehicles, Vision Sensing and Perception
Abstract: Matching a low-cost camera against a vector HD map has proven to be a practical and effective way of estimating the location and orientation of intelligent vehicles. However, the map-based approach is only viable when the landmark observations are adequate and precise. In areas with sparse and noisy observations, or even non-existent map matching features, the localization results may be unstable and thus unusable. In this paper, we introduce a novel algorithm that fuses visual odometry and a vector HD map in a tightly-coupled optimization framework to tackle these problems. Our algorithm exploits the observations of visual feature points and vector HD map landmarks in a sliding-window manner and optimizes their residuals in a tightly-coupled fashion. In this way, the system is more robust against noisy HD map landmark observations. In addition, our method is able to accurately estimate the vehicle pose even when landmarks are sparse. When tested on two challenging scenarios with noisy and sparse landmark observations, our method achieves an MAE of 0.1473 m and 0.2496 m, respectively.
|
|
15:10-15:15, Paper WePM1_T2.8 | |
HD Map Verification without Accurate Localization Prior Using Spatio-Semantic 1D Signals |
|
Pauls, Jan-Hendrik | Karlsruhe Institute of Technology (KIT) |
Strauss, Tobias | Robert Bosch GmbH |
Hasberg, Carsten | Robert Bosch GmbH |
Lauer, Martin | Karlsruher Institut Für Technologie |
Stiller, Christoph | Karlsruhe Institute of Technology |
Keywords: Mapping and Localization, Vehicle Environment Perception, Active and Passive Vehicle Safety
Abstract: High definition (HD) maps have proven to be a necessary component for safe and comfortable automated driving (AD). Naïvely verifying HD maps requires an accurate localization prior in order to correctly associate measurements with map data. In periodic environments, such as highways, localization results are often ambiguous – in particular in the longitudinal direction. To still be able to verify an HD map, we propose the use of quasi-continuous 1D signals that can be computed without pointwise association. These signals can be chosen to change significantly when the map has changed, while they change only rarely or slowly along the road, making them robust against localization errors. A spatio-semantic clustering yields intuitive groups of map features. These groups are then ordered using a robust projection approach, yielding quasi-continuous 1D signals. Such signals can be computed for map and measurement data, and their comparison allows road changes to be detected. The purposeful design of the signals and their computation only requires lane-level lateral localization and a coarse longitudinal prior, vastly relaxing the requirements on prior localization compared to the current state of the art. With four example signals, we demonstrate the effectiveness of our approach on a map verification dataset, detecting between 49 % and 98 % of all changed features at false alarm rates usually below 15 %. Detecting changes per feature allows unchanged features to still be used for AD functions. When omitting this ability and aggregating all features, 98 % of all changed road sections can be detected successfully.
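One conceivable quasi-continuous 1D signal is a windowed feature count along the longitudinal station; a toy sketch of computing and comparing map and measurement signals (hypothetical stations and features, not one of the paper's four signals):

```python
def density_signal(positions, s_grid, window):
    """1D spatio-semantic signal: number of features of one semantic class
    within +/- `window` metres of each longitudinal station s."""
    return [sum(abs(p - s) <= window for p in positions) for s in s_grid]

def changed_stations(sig_map, sig_meas, tol=0):
    """Stations where the map and measurement signals disagree by more
    than `tol`, indicating a potential map change at that station."""
    return [i for i, (a, b) in enumerate(zip(sig_map, sig_meas))
            if abs(a - b) > tol]
```

Because the signal varies slowly along the road, a modest longitudinal localization error shifts it only slightly, while a removed or added feature produces a clear local discrepancy.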
|
|
WePM1_T3 |
EGYPTIAN_3 |
Environment Perception. A |
Regular Session |
|
14:35-14:40, Paper WePM1_T3.1 | |
SemanticPOSS: A Point Cloud Dataset with Large Quantity of Dynamic Instances |
|
Pan, Yancheng | Peking University |
Gao, Biao | Peking University |
Mei, Jilin | Peking University |
Geng, Sibo | Peking University |
Li, Chengkun | Beijing Institute of Technology |
Zhao, Huijing | Peking University |
Keywords: Vehicle Environment Perception, Deep Learning, Automated Vehicles
Abstract: 3D semantic segmentation is one of the key tasks for autonomous driving systems. Recently, deep learning models for the 3D semantic segmentation task have been widely researched, but they usually require large amounts of training data. However, present datasets for 3D semantic segmentation lack point-wise annotations, diverse scenes, and dynamic objects. In this paper, we propose the SemanticPOSS dataset, which contains 2988 varied and complicated LiDAR scans with a large quantity of dynamic instances. The data was collected at Peking University and uses the same data format as SemanticKITTI. In addition, we evaluate several typical 3D semantic segmentation models on our SemanticPOSS dataset. Experimental results show that SemanticPOSS can help to improve the prediction accuracy of dynamic objects such as people and cars to some degree. SemanticPOSS will be published at www.poss.pku.edu.cn.
|
|
14:40-14:45, Paper WePM1_T3.2 | |
Automatic Generation of Road Geometries to Create Challenging Scenarios for Automated Vehicles Based on the Sensor Setup |
|
Ponn, Thomas | Technical University of Munich |
Lanz, Thomas | Technical University of Munich |
Diermeyer, Frank | Technische Universität München |
Keywords: Automated Vehicles, Active and Passive Vehicle Safety, Vehicle Environment Perception
Abstract: For the offline safety assessment of automated vehicles, the most challenging and critical scenarios must be identified efficiently. Therefore, we present a new approach to define challenging scenarios based on a sensor setup model of the ego vehicle. First, a static optimal approaching path of a road user towards the ego vehicle is calculated using an A* algorithm. We consider a path optimal when the automated vehicle perceives the road user as poorly as possible, because we want to define scenarios that are as critical as possible. The path is then transferred to a dynamic scenario, where the trajectory of the road user and the road layout are determined. The result is an optimal road geometry such that the ego vehicle perceives an approaching object as poorly as possible. The focus of our work is on highways as the Operational Design Domain (ODD).
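The first step, searching a path that minimizes how well the ego sensors perceive the road user, can be sketched with a generic A* on a cost grid; the grid, cell costs, and 4-connectivity below are illustrative assumptions, not the paper's actual sensor setup model:

```python
import heapq

def a_star(cost, start, goal):
    """A* on a 4-connected grid. Cell values are step costs -- here read as
    'how well the ego sensors perceive that cell', so the cheapest path is
    the one along which the road user stays most poorly perceived."""
    rows, cols = len(cost), len(cost[0])
    # Manhattan distance: admissible as long as every cell cost is >= 1.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    best = {start: 0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + cost[nxt[0]][nxt[1]]
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable

# Cells with cost 9 are well perceived; the path detours around them.
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path = a_star(grid, (0, 0), (0, 2))
```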
|
|
14:45-14:50, Paper WePM1_T3.3 | |
TalkyCars: A Distributed Software Platform for Cooperative Perception |
|
Sommer, Martin | Karlsruhe Institute of Technology |
Stang, Marco | Karlsruher Institut für Technologie (KIT) |
Mütsch, Ferdinand | Karlsruhe Institute of Technology |
Sax, Eric | Karlsruhe Institute of Technology |
|
|
14:50-14:55, Paper WePM1_T3.4 | |
Using Drones As Reference Sensors for Neural-Networks-Based Modeling of Automotive Perception Errors |
|
Krajewski, Robert | Institut Für Kraftfahrzeuge, RWTH Aachen University |
Hoss, Michael | RWTH Aachen University |
Meister, Adrian | ETH Zürich |
Thomsen, Fabian | RWTH Aachen |
Bock, Julian | Fka GmbH |
Eckstein, Lutz | RWTH Aachen University |
Keywords: Automated Vehicles, Vehicle Environment Perception, Deep Learning
Abstract: Modeling perception errors of automated vehicles requires reference data, but common reference measurement methods either cannot capture uninstructed road users or suffer from vehicle-vehicle-occlusions. Therefore, we propose a method based on a camera-equipped drone hovering over the field of view of the perception system that is to be modeled. From recordings of this advantageous perspective, computer vision algorithms extract object tracks suited as reference. As a proof of concept of our approach, we create and analyze a phenomenological error model of a lidar-based sensor system. From eight hours of simultaneous traffic recordings at an intersection, we extract synchronized state vectors of associated true-positive vehicle tracks. We model the deviations of the full lidar state vectors from the reference as multivariate Gaussians. The dependency of their covariance matrices and mean vectors on the reference state vector is modeled by a fully-connected neural network. By customizing the network training procedure and losses, we are able to achieve consistent results even in sparsely populated areas of the state space. Finally, we show that time dependencies of errors can be considered separately during sampling by an autoregressive model.
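The phenomenological core — modeling the deviations of lidar state vectors from the drone reference as a multivariate Gaussian — can be sketched without the neural network (which the paper uses to make the mean and covariance state-dependent); the deviation data below are synthetic:

```python
import numpy as np

# Synthetic [dx, dy] deviations between lidar tracks and the drone
# reference (stand-ins for the paper's real recordings).
rng = np.random.default_rng(0)
deviations = rng.normal([0.2, -0.1], [0.5, 0.3], size=(1000, 2))

# Fit a multivariate Gaussian error model to the observed deviations.
mean = deviations.mean(axis=0)
cov = np.cov(deviations, rowvar=False)

# Draw new error samples, e.g. to perturb ground-truth tracks in simulation.
synthetic_errors = rng.multivariate_normal(mean, cov, size=5)
```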
|
|
14:55-15:00, Paper WePM1_T3.5 | |
Model-Less Location-Based Vehicle Behavior Prediction for Intelligent Vehicle |
|
Imanishi, Yuto | Hitachi America, Ltd |
Iihoshi, Yoichi | Hitachi, Ltd |
Okuda, Yuki | Hitachi Automotive Systems, Ltd |
Okada, Takashi | Hitachi Automotive Systems, Ltd |
Keywords: Vehicle Environment Perception, Automated Vehicles, Eco-driving and Energy-efficient Vehicles
Abstract: Predicting surrounding vehicle behavior plays an important role in an intelligent vehicle. Optimization of the control strategy considering predicted future events could provide significant benefits by improving efficiency, comfort, and safety. However, realizing such prediction in an arbitrary environment is a challenging task, as the real environment is highly diverse. In this paper, we propose a model-less location-based prediction method for a connected vehicle, which shares driving data through a cloud server. The shared data are stored in a relational database management system after being associated with location information. Surrounding vehicle behavior is then predicted with kernel density estimation by referring to nearby data, which implicitly reflect all location-dependent factors, such as road design, traffic rules, and region. Since this method does not rely on any pre-trained models, prediction performance is not affected by overfitting. The performance of the proposed method has been evaluated by applying it to optimization-based adaptive cruise control, which minimizes energy loss and the following error based on the predicted future position of a preceding vehicle. The experimental results with urban driving data show that the proposed method is more accurate and fuel-efficient than several baseline models, including a kinematic model and neural networks.
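A minimal sketch of location-based prediction with kernel density estimation, assuming the historical speeds observed near the query location have already been retrieved from the database (the values, bandwidth, and use of the density mode as the prediction are illustrative choices, not the paper's exact formulation):

```python
import numpy as np

def kde_predict(nearby_speeds, bandwidth, grid):
    """Gaussian kernel density estimate over historical speeds observed near
    a location; the mode of the density serves as the predicted speed."""
    nearby_speeds = np.asarray(nearby_speeds, dtype=float)
    # Sum of Gaussian kernels centered on each historical observation.
    diffs = (grid[:, None] - nearby_speeds[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs**2).sum(axis=1)
    return grid[np.argmax(density)]

# Three past vehicles slowed to ~12.5 m/s here (e.g. a tight curve);
# one outlier passed at 25 m/s. The mode ignores the outlier.
grid = np.linspace(0.0, 30.0, 301)
predicted = kde_predict([12.0, 12.5, 13.0, 25.0], bandwidth=1.0, grid=grid)
```

Because the prediction reads the local data directly, any location-dependent factor (road design, traffic rules, region) is reflected implicitly without a trained model.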
|
|
15:00-15:05, Paper WePM1_T3.6 | |
Context-Aware Multi-Task Learning for Traffic Scene Recognition in Autonomous Vehicles |
|
Lee, Younkwan | GIST |
Jeon, Jihyo | Gwangju Institute of Science and Technology |
Yu, Jongmin | 1990 |
Jeon, Moongu | GIST |
Keywords: Advanced Driver Assistance Systems, Convolutional Neural Networks, Vehicle Environment Perception
Abstract: Traffic scene recognition, which requires various visual classification tasks, is a critical ingredient in autonomous vehicles. However, most existing approaches treat each relevant task independently of one another, never considering the tasks as a whole. Because of this, they are limited to utilizing a task-specific set of features for all possible tasks at inference time, which ignores the ability to leverage common task-invariant contextual knowledge for the task at hand. To address this problem, we propose an algorithm to jointly learn task-specific and shared representations by adopting a multi-task learning network. Specifically, we present a lower-bound mutual information constraint between the shared feature embedding and the input that extracts common contextual information across tasks while jointly preserving the essential information of each task. The learned representations capture richer contextual information without an additional task-specific network. Extensive experiments on the large-scale HSD dataset demonstrate the effectiveness and superiority of our network over state-of-the-art methods.
|
|
15:05-15:10, Paper WePM1_T3.7 | |
Video Object Detection and Tracking Based on Angle Consistency between Motion and Flow |
|
Seo, Toshiki | Chubu University |
Hirakawa, Tsubasa | Chubu University |
Yamashita, Takayoshi | Chubu University |
Fujiyoshi, Hironobu | Chubu University |
Keywords: Vehicle Environment Perception, Deep Learning, Self-Driving Vehicles
Abstract: Detect to Track and Track to Detect (D&T) extracts a foreground region by using a feature map and region proposal network (RPN) and estimates an object class by using fully connected layers. A correlation layer, which is a hidden layer that obtains displacement between adjacent frames, estimates the movement and size of an object between the adjacent frames. Then, object class and regression are estimated by the feature maps obtained from the correlation layer and RPN. Finally, D&T estimates the moving direction and movement of a bounding box from the detection results obtained from the correlation layer and adjacent frames. Although D&T can achieve accurate object detection and tracking, the object detection and movement estimation of the correlation layer relies on the detection results of the RPN. Therefore, the correlation layer does not acquire local and global pixel changes in video frames and has to estimate the moving direction only from the similarity of detected regions. As a result, the estimation of the moving direction tends to fail. In this work, we propose a method to improve the moving direction estimation by performing the estimation in such a way as to maintain the consistency between the estimated direction and optical flow. Experimental results show that the proposed method can successfully estimate the moving direction and thereby improves both the detection and the tracking accuracy.
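The consistency check between an estimated moving direction and the optical flow can be sketched as a cosine-similarity test; the vectors and the accept/reject reading of the score are illustrative assumptions, not the paper's exact formulation:

```python
import math

def angle_consistency(motion_vec, flow_vec):
    """Cosine of the angle between an estimated box displacement and the
    mean optical flow inside the box; values near 1 mean the estimated
    moving direction is consistent with the observed pixel motion."""
    dot = motion_vec[0] * flow_vec[0] + motion_vec[1] * flow_vec[1]
    norm = math.hypot(*motion_vec) * math.hypot(*flow_vec)
    return dot / norm if norm > 0 else 0.0

# A predicted rightward motion agrees with rightward flow...
ok = angle_consistency((1.0, 0.0), (0.9, 0.1))
# ...but contradicts leftward flow, so that direction estimate would be
# penalized or rejected.
bad = angle_consistency((1.0, 0.0), (-1.0, 0.0))
```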
|
|
15:10-15:15, Paper WePM1_T3.8 | |
Probabilistic Collision Risk Estimation for Autonomous Driving: Validation Via Statistical Model Checking |
|
Paigwar, Anshul | INRIA (Institut National De Recherche En Informatique Et En Auto |
Baranov, Eduard | Université Catholique De Louvain |
Renzaglia, Alessandro | INRIA |
Laugier, Christian | INRIA |
Legay, Axel | Université Catholique De Louvain |
Keywords: Vehicle Environment Perception, Lidar Sensing and Perception, Automated Vehicles
Abstract: A crucial aspect that automotive systems need to face before being used in everyday life is the validation of their components. To this end, standard exhaustive methods are inappropriate for validating the probabilistic algorithms widely used in this field, and new solutions need to be adopted. In this paper, we present an approach based on Statistical Model Checking (SMC) to validate the collision risk assessment generated by a probabilistic perception system. SMC is an intermediate approach between testing and exhaustive verification: relying on statistics, it evaluates the probability of meeting appropriate Key Performance Indicators (KPIs) based on a large number of simulations. As a case study, a state-of-the-art algorithm is adopted to obtain the collision risk estimations. This algorithm provides an environment representation through Bayesian probabilistic occupancy grids and estimates near-future positions of every static and dynamic part of the grid. Based on these estimations, time-to-collision probabilities are then associated with the corresponding cells. Using the CARLA simulator, a large number of execution traces are then generated, considering both collisions and near-collisions in realistic urban scenarios. Real experiments complete the analysis and show the reliability of the simulation results.
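The statistical core of SMC — estimating the probability that a KPI holds from many simulation traces rather than from exhaustive verification — can be sketched as follows (the toy trace generator stands in for a CARLA run and is not from the paper):

```python
import random

def estimate_kpi_probability(run_trace, n_runs, seed=0):
    """Monte Carlo estimate of P(KPI holds): run many simulations and count
    how often the KPI is satisfied. With enough runs, the estimate is close
    to the true probability with high statistical confidence."""
    random.seed(seed)
    successes = sum(run_trace() for _ in range(n_runs))
    return successes / n_runs

# Toy stand-in for a simulated trace: the KPI (e.g. 'time-to-collision
# stayed above a safety threshold') holds in 90 % of runs.
trace = lambda: random.random() < 0.9
p_hat = estimate_kpi_probability(trace, n_runs=10000)
```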
|
|
WePM2_T1 |
EGYPTIAN_1 |
Automated Vehicles 1. B |
Regular Session |
|
15:25-15:30, Paper WePM2_T1.1 | |
Lateral Trajectory Stabilization of an Articulated Truck During Reverse Driving Maneuvers |
|
Kumar, Mohit | Forschungszentrum Informatik, Karlsruhe |
Hildebrandt, Arne-Christoph | MAN Truck & Bus SE |
Strauss, Peter | MAN Truck & Bus SE |
Kraus, Sven | Technische Universität München |
Stiller, Christoph | Karlsruhe Institute of Technology |
Zimmermann, Andreas | MAN Truck & Bus SE |
Keywords: Autonomous / Intelligent Robotic Vehicles, Automated Vehicles, Vehicle Control
Abstract: The stabilization of an articulated truck, i.e., a truck-semitrailer combination, is a complex problem due to its unstable dynamics. Control approaches developed for passenger vehicles and solo trucks cannot be applied to a truck-semitrailer combination, especially during reverse driving. In this paper, a lateral trajectory stabilization algorithm for reversing a truck-semitrailer is proposed. A cascade control scheme is presented for lateral trajectory stabilization of the truck-semitrailer. A high-level path tracking algorithm handles the path tracking of a virtual vehicle, which is an equivalent model for the semitrailer. A low-level Linear Quadratic Regulator (LQR) stabilizes the hitch angle. The path tracking problem is formulated as a linear time-varying differential dynamic programming problem subject to the virtual vehicle dynamics in a receding horizon fashion. The virtual vehicle dynamics are defined by the single-track kinematic vehicle model, and the hitch angle dynamics are used for formulating the Linear Quadratic Regulator problem. The approach is demonstrated with replays of real-world path tracking scenarios on a full-scale truck-semitrailer prototype.
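A minimal sketch of the low-level LQR ingredient — computing a discrete-time LQR gain by iterating the Riccati equation; the scalar integrator model below is illustrative, not the paper's hitch-angle dynamics:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati
    equation: P <- Q + A'P(A - BK), K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Toy 1D 'hitch angle' integrator x+ = x + u (a hypothetical stand-in).
A = np.array([[1.0]])
B = np.array([[1.0]])
K = dlqr(A, B, Q=np.array([[1.0]]), R=np.array([[1.0]]))
# Closed loop x+ = (1 - K) x; stable since |1 - K| < 1.
```

For this scalar case the Riccati fixed point is the golden ratio, giving K ≈ 0.618.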
|
|
15:30-15:35, Paper WePM2_T1.2 | |
Formalization of Interstate Traffic Rules in Temporal Logic |
|
Maierhofer, Sebastian | Technical University of Munich |
Rettinger, Anna-Katharina | Technical University of Munich |
Mayer, Eva Charlotte | Technical University of Munich |
Althoff, Matthias | Technische Universität München |
Keywords: Automated Vehicles, Self-Driving Vehicles, Legal Impacts
Abstract: To allow autonomous vehicles to safely participate in traffic and to avoid liability claims for car manufacturers, autonomous vehicles must obey traffic rules. However, current traffic rules are not formulated in a precise and mathematical way, so that they cannot be directly applied to autonomous vehicles. Additionally, several legal sources other than national traffic laws must be considered to infer detailed traffic rules. Thus, we formalize traffic rules for interstates based on the German Road Traffic Regulation, the Vienna Convention on Road Traffic, and legal decisions from courts. This makes it possible to automatically and unambiguously check whether traffic rules are being met by autonomous vehicles. Temporal logic is used to express the obtained rules mathematically. Our formalized traffic rules are evaluated for recorded data on more than 2,500 vehicles.
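Checking a formalized rule against recorded data can be sketched with simple evaluators for temporal operators over finite traces; the "safe gap" predicate and the trace values are invented for illustration, not the paper's formalized interstate rules:

```python
def globally(trace):
    """G phi: phi holds at every step of the (finite) trace."""
    return all(trace)

def eventually_within(trace, k):
    """F_[0,k] phi: phi holds at some step within the first k+1 steps."""
    return any(trace[: k + 1])

# Toy formalization of a 'keep a safe distance' rule on recorded data:
# each entry states whether the vehicle kept a safe gap at that timestep.
safe_gap = [True, True, False, True]
rule_satisfied = globally(safe_gap)  # violated at step 2
```

Evaluating such monitors over recorded trajectories is what makes an automatic, unambiguous rule-compliance check possible.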
|
|
15:35-15:40, Paper WePM2_T1.3 | |
Sensor and Actuator Latency During Teleoperation of Automated Vehicles |
|
Georg, Jean-Michael | Technical University of Munich |
Feiler, Johannes | Technical University of Munich |
Hoffmann, Simon | Technical University of Munich |
Diermeyer, Frank | Technische Universität München |
Keywords: Automated Vehicles, Image, Radar, Lidar Signal Processing, V2X Communication
Abstract: Due to the challenges of autonomous driving, backup options like teleoperation become a relevant solution for critical scenarios an automated vehicle might face. To enable teleoperated systems, two main problems have to be solved: safely controlling the vehicle under latency, and presenting the sensor data from the vehicle to the operator in such a way that the operator can easily understand the vehicle's environment and current state. While most teleoperation systems face similar challenges, the teleoperation of automated vehicles is unique in its scale, safety requirements, and system constraints. Two major constraints are the round-trip latency and the maximum upload bandwidth. While the latency mainly influences the controllability and safety of the vehicle, the upload bandwidth affects the amount of transmittable sensor data and therefore the operator's situation awareness, as well as the running costs of the whole system. The focus of this paper is measuring and reducing the end-to-end latency of a teleoperation setup. To this end, the latency is separated into actuator and sensor latency. For each part, the different components and settings are analyzed in order to find a realistic minimal end-to-end latency for the teleoperation of automated vehicles. For this purpose, new measurement methods are developed and existing methods adapted.
|
|
15:40-15:45, Paper WePM2_T1.4 | |
Decision-Making for Automated Vehicles Using a Hierarchical Behavior-Based Arbitration Scheme |
|
Orzechowski, Piotr Franciszek | FZI Research Center for Information Technology |
Burger, Christoph | Karlsruhe Institute of Technology |
Lauer, Martin | Karlsruher Institut Für Technologie |
Keywords: Automated Vehicles, Autonomous / Intelligent Robotic Vehicles, Vehicle Control
Abstract: Behavior planning and decision-making are some of the biggest challenges for highly automated systems. A fully automated vehicle (AV) is faced with numerous tactical and strategical choices. Most state-of-the-art AV platforms implement tactical and strategical behavior generation using finite state machines. However, these usually result in poor explainability, maintainability and scalability. Research in robotics has produced many architectures to mitigate these problems, most interestingly behavior-based systems and hybrid derivatives. Inspired by these approaches, we propose a hierarchical behavior-based architecture for tactical and strategical behavior generation in automated driving. It is a generalizing and scalable decision-making framework, utilizing modular behavior blocks to compose more complex behaviors in a bottom-up approach. The system is capable of combining a variety of scenario- and methodology-specific solutions, like POMDPs, RRT* or learning-based behavior, into one understandable and traceable architecture. We extend the hierarchical behavior-based arbitration concept to address scenarios where multiple behavior options are applicable but have no clear priority relative to one another. Then, we formulate the behavior generation stack for automated driving in urban and highway environments, incorporating parking and emergency behaviors as well. Finally, we illustrate our design in an explanatory evaluation.
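A minimal sketch of priority-based behavior arbitration, the simplest arbitration scheme such an architecture generalizes; the behavior names and invocation conditions are invented for illustration:

```python
class Behavior:
    """A modular behavior block with an invocation condition."""
    def __init__(self, name, applicable):
        self.name = name
        self.applicable = applicable  # callable: environment -> bool

def priority_arbitrate(behaviors, env):
    """Priority-based arbitration: commit to the first (highest-priority)
    behavior whose invocation condition holds in the current environment."""
    for b in behaviors:
        if b.applicable(env):
            return b.name
    return None

# A tiny behavior stack, ordered from highest to lowest priority.
stack = [
    Behavior("EmergencyStop", lambda e: e["obstacle_ahead"]),
    Behavior("Parking", lambda e: e["at_destination"]),
    Behavior("FollowLane", lambda e: True),  # always-applicable fallback
]
chosen = priority_arbitrate(stack, {"obstacle_ahead": False, "at_destination": False})
```

Because every block exposes an explicit applicability condition, the selected behavior is traceable, unlike transitions buried inside a monolithic state machine.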
|
|
15:45-15:50, Paper WePM2_T1.5 | |
Experimental Validation of a Real-Time Optimal Controller for Coordination of CAVs in a Multi-Lane Roundabout |
|
Chalaki, Behdad | University of Delaware |
Beaver, Logan | University of Delaware |
Malikopoulos, Andreas | University of Delaware |
Keywords: Automated Vehicles, Assistive Mobility Systems, Autonomous / Intelligent Robotic Vehicles
Abstract: Roundabouts in conjunction with other traffic scenarios, e.g., intersections, merging roadways, speed reduction zones, can induce congestion in a transportation network due to driver responses to various disturbances. Research efforts have shown that smoothing traffic flow and eliminating stop-and-go driving can both improve fuel efficiency of the vehicles and the throughput of a roundabout. In this paper, we validate an optimal control framework developed earlier in a multi-lane roundabout scenario using the University of Delaware's scaled smart city (UDSSC). We first provide conditions where the solution is optimal. Then, we demonstrate the feasibility of the solution using experiments at UDSSC, and show that the optimal solution completely eliminates stop-and-go driving while preserving safety.
|
|
15:50-15:55, Paper WePM2_T1.6 | |
Trailer Hitch Assist: Lightweight Solutions for Automatic Reversing to a Trailer |
|
Ramirez Llanos, Eduardo Jose | Continental Automotive Systems |
Yu, Xin | Continental Automotive Systems |
Berkemeier, Matthew | Continental Automotive Systems |
Keywords: Advanced Driver Assistance Systems, Automated Vehicles, Self-Driving Vehicles
Abstract: This paper proposes a driver assistance function for positioning a vehicle’s hitch underneath the center of a trailer's coupler. Due to a rapid development-to-production cycle, the problem was constrained to use basic ECUs, which have limited performance for real-time implementation. Based on this, we developed two practical solutions to automate the hitching process by using efficient methods that do not require any additional high-end sensors, but rely only on the vehicle’s existing rear-view camera and other existing sensors.
|
|
15:55-16:00, Paper WePM2_T1.7 | |
Instantaneous Velocity Estimation for 360° Perception with Multiple High-Quality Radars: An Experimental Validation Study |
|
Shakibajahromi, Bahareh | ZF TRW Automotive, Drexel University |
Jabalameli, Amirhossein | ZF TRW |
Krishnan, Anirudh S | ZF TRW |
Kanzler, Steven | ZF TRW |
Shayestehmanesh, Saeed | ZF TRW |
Keywords: Radar Sensing and Perception, Automated Vehicles, Advanced Driver Assistance Systems
Abstract: This paper describes two deterministic algorithms that instantaneously estimate the velocity of an observed vehicle (OV) from radar data. The low-complexity approaches designed here use the algebraic properties of the Doppler-azimuth profile to choose a pair of reflection points from the whole set of associated measurements. The selected points are used in a linear regression system to estimate the velocity instantaneously. This velocity is then fed into the state estimator. The novelty of this approach is its algebraically driven choice of a non-singular pair of reflection points. This gives it a lower computational cost than other approaches in the field, making it feasible to implement on micro-controllers with limited computational power. The performance of the proposed approaches is evaluated using real-world data collected by the ZF Automated Driving Prototype vehicle. The results successfully validate the ability of this approach to estimate the velocity of the OV in a single frame.
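The underlying algebra can be sketched for the minimal case: with two detections at distinct azimuths, the range-rate equations v_r = v_x cos(a) + v_y sin(a) form a 2×2 linear system. This is the textbook Doppler-azimuth relation, not the paper's exact pair-selection logic:

```python
import math

def velocity_from_two_detections(d1, d2):
    """Solve for (vx, vy) of the observed vehicle from two radar detections
    (azimuth [rad], range-rate [m/s]). The pair must be non-singular, i.e.
    the two azimuths must differ, otherwise the determinant is zero."""
    (a1, vr1), (a2, vr2) = d1, d2
    det = math.cos(a1) * math.sin(a2) - math.sin(a1) * math.cos(a2)  # = sin(a2 - a1)
    vx = (vr1 * math.sin(a2) - vr2 * math.sin(a1)) / det
    vy = (vr2 * math.cos(a1) - vr1 * math.cos(a2)) / det
    return vx, vy

# Two detections consistent with a target moving at (10, 0) m/s:
vx, vy = velocity_from_two_detections((0.0, 10.0), (math.pi / 4, 10 / math.sqrt(2)))
```

Selecting a pair with a large azimuth separation keeps the determinant well away from zero, which is what makes the estimate cheap and numerically stable on a micro-controller.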
|
|
16:00-16:05, Paper WePM2_T1.8 | |
Vehicle-To-Vehicle Communication for Safe and Fuel-Efficient Platooning |
|
Sidorenko, Galina | Halmstad University |
Thunberg, Johan | Halmstad University |
Sjöberg, Katrin | Scania CV AB |
Vinel, Alexey | Halmstad University |
Keywords: Collision Avoidance, V2X Communication, Automated Vehicles
Abstract: A platoon consists of a string of vehicles traveling close together. Such tight formation allows for increased road throughput and reduced fuel consumption due to decreased air resistance. Furthermore, sensors and control algorithms can be used to provide a high level of automation. In this context, safety -- in terms of no rear-end collisions -- is a key property that needs to be assured. We investigate how vehicle-to-vehicle communication can be used to reduce inter-vehicle distances while guaranteeing safety in emergency braking scenarios. An optimization-based modeling scheme is presented that, under certain restrictions, provides an analytical calculation of inter-vehicle distances for safe braking. In contrast to earlier simulation-based approaches, the framework allows for computationally efficient solutions with explicit guarantees. Two approaches for computing braking strategies in emergency scenarios are proposed. The first assumes centralized coordination by the leading vehicle and exploits necessary optimal conditions of a constrained optimization problem, whereas the second -- the more conservative solution -- assumes only local information and is distributed in nature. We illustrate the usefulness of the approaches through several computational simulations.
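A common closed-form sketch of a safe inter-vehicle gap under emergency braking, assuming constant decelerations and that the follower reacts after a delay tau; this is a simplification of the paper's optimization-based scheme and ignores collisions that could occur while both vehicles are still braking:

```python
def safe_gap(v, tau, b_follower, b_leader):
    """Minimum gap so a follower, reacting after delay tau, stops behind a
    braking leader. Both start at speed v [m/s]; decelerations in m/s^2.
    Follower travels v*tau + v^2/(2 b_f); leader travels v^2/(2 b_l)."""
    gap = v * tau + v * v / (2 * b_follower) - v * v / (2 * b_leader)
    return max(gap, 0.0)

# A 25 m/s platoon: the leader can brake harder (8 m/s^2) than the
# follower (6 m/s^2); V2V signaling cuts the reaction delay to 0.1 s.
gap = safe_gap(25.0, 0.1, 6.0, 8.0)
```

The formula makes the paper's point visible: shrinking tau through vehicle-to-vehicle communication directly shrinks the gap required for safety.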
|
|
WePM2_T2 |
EGYPTIAN_2 |
Mapping and Localization 1.B |
Regular Session |
|
15:25-15:30, Paper WePM2_T2.1 | |
A Novel Method for Ground-Truth Determination of Lane Information through a Single Web Camera |
|
Ruan, Keyu | IUPUI |
Li, Lingxi | Indiana University-Purdue University Indianapolis |
Song, Guobiao | Aptiv |
Xia, Jing | Aptiv PLC |
Pang, Hongyu | Aptiv |
Keywords: Mapping and Localization, Automated Vehicles
Abstract: The high-definition (HD) map is critical for the localization and motion planning of connected and automated vehicles (CAVs). With all the road and lane information pre-scanned in a certain area, vehicles can know their position with respect to the lane marks and roadside, and hence make better decisions when planning future trajectories. A common issue, however, is the accuracy of the scanned outputs from different data sources. Because of the limitations of online maps (e.g., zooming and stretching in their image layers), visualizing the data in the bird's eye view on maps cannot satisfy the accuracy requirement of a ground-truth system. To this end, a feasible method that can combine sensing data from different sources and obtain reliable ground-truth information is necessary. In this paper, we develop a novel method to transform data points from the bird's eye view to the view angle of a web camera installed on the windshield of the ego vehicle. In this way, the positions of landmarks in the captured camera frames can be used as ground truth. In particular, we take the lane marking detection outputs from the Mobileye system as the reference for better accuracy. We evaluate the proposed method using field data from highway I-75 in Michigan, USA. The results show that this method achieves a very good accuracy of over 90% for the location determination of lane information. The main contribution of this paper is that the proposed method is more intuitive and reliable than using traditional maps in the bird's eye view.
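The view transformation can be sketched as a planar homography mapping ground-plane (bird's eye view) coordinates to image pixels; the matrix H below is purely illustrative, whereas a real one would come from the camera's calibration:

```python
import numpy as np

def project_points(H, pts):
    """Map ground-plane points into camera image coordinates with a 3x3
    homography H, using homogeneous coordinates and projective division."""
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # (x, y) -> (x, y, 1)
    img = homog @ H.T
    return img[:, :2] / img[:, 2:3]  # divide by the projective scale w

# Illustrative homography (NOT a calibrated one): points farther ahead
# (larger y) are scaled down, mimicking perspective foreshortening.
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.1, 1.0]])
pixels = project_points(H, [[2.0, 10.0]])
```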
|
|
15:30-15:35, Paper WePM2_T2.2 | |
ASD-SLAM: A Novel Adaptive-Scale Descriptor Learning for Visual SLAM |
|
Ma, Taiyuan | Shanghai Jiao Tong University |
Wang, Yafei | Shanghai Jiao Tong University |
Wang, Zili | The Company of Xiao Peng |
Liu, Xulei | Shanghai JiaoTong University |
Zhang, Huimin | Shanghai Jiao Tong University |
Keywords: Autonomous / Intelligent Robotic Vehicles, Vision Sensing and Perception, Deep Learning
Abstract: Visual Odometry and Simultaneous Localization and Mapping (SLAM) are widely used in autonomous driving. In traditional keypoint-based visual SLAM systems, the feature matching accuracy of the front end plays a decisive role and becomes the bottleneck restricting positioning accuracy, especially in challenging scenarios like viewpoint variation and highly repetitive scenes. Thus, increasing the discriminability and matchability of the feature descriptor is important for improving the positioning accuracy of visual SLAM. In this paper, we propose a novel adaptive-scale triplet loss function and apply it to a triplet network to generate an adaptive-scale descriptor (ASD). Based on ASD, we design our monocular SLAM system (ASD-SLAM), a deep-learning-enhanced extension of the state-of-the-art ORB-SLAM system. The experimental results show that ASD achieves better performance on the UBC benchmark datasets; at the same time, the ASD-SLAM system also outperforms current popular visual SLAM frameworks on the KITTI Odometry Dataset.
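The standard triplet margin loss that the paper's adaptive-scale variant builds on can be sketched as follows (the descriptors and margin are toy values; the paper's contribution is adapting the scale of this loss, which is not reproduced here):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin):
    """Standard triplet margin loss on descriptor vectors: push the matching
    (positive) descriptor closer to the anchor than the non-matching
    (negative) one by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor descriptor
p = np.array([0.9, 0.1])   # descriptor of the same 3D point (match)
n = np.array([-1.0, 0.0])  # descriptor of a different point (non-match)
loss = triplet_loss(a, p, n, margin=0.2)  # already satisfied -> zero loss
```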
|
|
15:35-15:40, Paper WePM2_T2.3 | |
ATV Navigation in Complex and Unstructured Environment Containing Stairs |
|
Zhu, Kongtao | Xi'an Jiaotong University |
Zhan, Junxiang | Xi’an Jiaotong University |
Chen, Shitao | Xi'an Jiaotong University, Xi'an, China |
Nan, Zhixiong | Xi'an Jiaotong University |
Zhang, Tangyike | Xi'an Jiaotong University |
Zhu, Dantong | Xi'an Jiaotong University |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Autonomous / Intelligent Robotic Vehicles
Abstract: Self-driving and robotic technologies have been widely used in recent years. However, until L5 autonomy matures, it is more important and meaningful to apply these technologies to wheeled robots in specific scenes. All-terrain vehicles (ATVs) are widely used in urban search and rescue. The unstructured, complex scenes in which an ATV works usually contain many unusual obstacles, such as stairs. As a result, it is important for autonomous ATVs to detect, localize, and traverse stairs. In this paper, a real-time outdoor stair detection and localization method is proposed. A VLP-16 LIDAR is used to collect environment data, and stairs are detected and localized from a single frame of LIDAR data by their geometric features, such as slope and parallel edges. A stair navigation strategy is also proposed in this paper. 3-axis attitudes measured by an IMU are used during stair climbing, based on the coupling between roll and yaw on a slope. Experiments are carried out and the results prove the robustness and accuracy of the detection and localization algorithm. The navigation strategy is shown to be safe and feasible.
|
|
15:40-15:45, Paper WePM2_T2.4 | |
VINS-PL-Vehicle: Points and Lines-Based Monocular VINS Combined with Vehicle Kinematics for Indoor Garage |
|
Zhang, Peizhi | Tongji University |
Lu, Xiong | Tongji University |
Yu, Zhuoping | Tongji University |
Kang, Rong | Tongji University |
Xu, Mingyu | Tongji University |
Zeng, Dequan | Tongji University |
Keywords: Autonomous / Intelligent Robotic Vehicles, Sensor and Data Fusion, Mapping and Localization
Abstract: In this paper, we propose VINS-PL-Vehicle, a points-and-lines-based monocular visual-inertial navigation system (VINS) combined with vehicle kinematics for indoor garages. An indoor garage contains texture-less regions, so it is difficult to ensure the robustness of a VINS using point features alone. Therefore, in addition to point features, we also add line features on the pillars, parking slots, and top structures in the garage. Besides, we utilize the accurately known velocity and steering wheel angle from the vehicle chassis to form kinematic constraints between image frames, which not only solve the problem that the scale is not observable due to insufficient excitation of the accelerometer when the vehicle is running at a constant speed, but also exploit their drift-free characteristics to improve the accuracy of system initialization and optimization. Vehicle tests show that our algorithm achieves significantly higher positioning accuracy than VINS-Mono in an indoor garage.
|
|
15:45-15:50, Paper WePM2_T2.5 | |
Deriving Spatial Occupancy Evidence from Radar Detection Data |
|
Berthold, Philipp | Bundeswehr University Munich |
Michaelis, Martin | University of the Bundeswehr Munich |
Luettel, Thorsten | Bundeswehr University Munich |
Meissner, Daniel | University of Ulm |
Wuensche, Hans Joachim Joe | Bundeswehr University Munich |
Keywords: Radar Sensing and Perception, Sensor and Data Fusion, Vehicle Environment Perception
Abstract: Central low-level sensor data fusion approaches are getting more popular in advanced driver assistant systems. They allow for the resolution of ambiguities in the retrieval of environmental information on the basis of a large, raw data pool. Hereby, one emerging challenge is the unification of sensor data of different formats and sensor types. A popular intermediate layer of data is given by spatial occupancy grids. The conversion of a discrete list of radar detections, which is a commonly utilized measurement format, is problematic due to the sparse spatial resolution. This work addresses this conversion by interpolating the data spatially using generic sensor model knowledge. Traditional approaches derive occupancy evidence in the vicinity of a detection. In addition, we analyze spatial and kinematic properties derived from Doppler measurements, compute likelihoods that multiple detections are caused by the same object and deduce the space between them accordingly. The incorporation of sensor parameters allows full- and short-range radars to be used generically. In addition, we outline the deduction of freespace evidence. The elaborated models and algorithms are evaluated on real-world datasets and discussed w.r.t. their applicability in a subsequent Dempster-Shafer-based sensor data fusion approach.
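Deriving occupancy evidence in the vicinity of a single detection can be sketched with a Gaussian sensor-model kernel over grid cells; the grid resolution and sigma are illustrative choices, and the paper additionally interpolates between detections using Doppler-derived properties, which is not reproduced here:

```python
import numpy as np

def occupancy_evidence(grid_x, grid_y, det_x, det_y, sigma):
    """Spread the occupancy evidence of one radar detection over grid cells
    with a Gaussian kernel (a simple stand-in for a radar sensor model)."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    d2 = (gx - det_x) ** 2 + (gy - det_y) ** 2
    return np.exp(-0.5 * d2 / sigma**2)

# 5 m x 5 m grid at 1 m resolution; a detection at (2, 3).
xs = np.arange(0.0, 5.0, 1.0)
ys = np.arange(0.0, 5.0, 1.0)
evidence = occupancy_evidence(xs, ys, det_x=2.0, det_y=3.0, sigma=1.0)
```

Evidence peaks at the detected cell and decays with distance, which is the basic vicinity model that the paper's Doppler-based interpolation then refines.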
|
|
15:50-15:55, Paper WePM2_T2.6 | |
Reducing Uncertainty by Fusing Dynamic Occupancy Grid Maps in a Cloud-Based Collective Environment Model |
|
Lampe, Bastian | RWTH Aachen University |
van Kempen, Raphael | RWTH Aachen University |
Kampmann, Alexandru | RWTH Aachen University |
Alrifaee, Bassam | RWTH Aachen University |
Woopen, Timo | RWTH Aachen University |
Eckstein, Lutz | RWTH Aachen University |
Keywords: Automated Vehicles, Cooperative ITS, Sensor and Data Fusion
Abstract: Accurate environment perception is essential for automated vehicles. Since occlusions and inaccuracies regularly occur, the exchange and combination of perception data from multiple vehicles is promising. This paper describes a method to combine perception data of automated and connected vehicles in the form of evidential Dynamic Occupancy Grid Maps (DOGMas) in a cloud-based system. This system is called the Collective Environment Model and is part of the cloud system developed in the project UNICARagil. The presented concept extends existing approaches that fuse evidential grid maps representing static environments of a single vehicle to evidential grid maps computed by multiple vehicles in dynamic environments. The developed fusion process additionally incorporates self-reported data provided by connected vehicles instead of relying only on perception data. We show that both the uncertainty in a DOGMa described by Shannon entropy and the uncertainty described by a non-specificity measure can be reduced. This enables automated and connected vehicles to behave in ways not possible before, owing to otherwise unknown but relevant information about the environment.
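The two uncertainty measures named in the abstract — Shannon entropy and a non-specificity measure — can be sketched for a single grid cell with belief masses over {occupied, free, unknown}; the definitions below are standard evidential-theory ones and only illustrative, the paper's exact formulations may differ:

```python
import math

def pignistic(m):
    """Split the unknown mass equally between occupied and free."""
    return {'O': m['O'] + m['U'] / 2, 'F': m['F'] + m['U'] / 2}

def shannon_entropy(m):
    """Shannon entropy (bits) of the cell's pignistic probabilities."""
    p = pignistic(m)
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def non_specificity(m):
    """Hartley-based non-specificity: sum of m(A) * log2(|A|).
    Only the unknown set has cardinality 2 here."""
    return m['U'] * math.log2(2)
```

Fusing observations from several vehicles concentrates mass on one hypothesis, so both measures drop for the fused cell.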
|
|
15:55-16:00, Paper WePM2_T2.7 | |
Open Experimental AGV Platform for Dynamic Obstacle Avoidance in Narrow Corridors |
|
Weckx, Sam | Flanders Make |
Vandewal, Bastiaan | KU Leuven
Rademakers, Erwin | Flanders Make |
Janssen, Karel | Flanders Make |
Geebelen, Kurt | Flanders Make |
Wan, Jia | Flanders Make |
De Geest, Roeland | Flanders Make |
Perik, Harold | Flanders Make |
Gillis, Joris | KU Leuven |
Swevers, Jan | KU Leuven |
Van Nunen, Ellen | Flanders Make |
Keywords: Situation Analysis and Planning, Autonomous / Intelligent Robotic Vehicles, Mapping and Localization
Abstract: Automated Guided Vehicles (AGVs) are a promising solution to automation in view of Industry 4.0. The amount of goods that can be automatically transported can be further increased by efficient path planning and tracking methods. Efficiency is always a trade-off between cost, accuracy and flexibility, but should never compromise safety. This paper proposes a flexible path planning and tracking solution, aiming to be applicable to several application domains, which is able to dynamically avoid an (unforeseen) obstacle by an overtake manoeuvre. The approach is based on Model Predictive Control (MPC), consists of multi-domain objectives, is applicable to multiple vehicle models, and is computationally fast due to an adjusted multiple shooting approach, which guarantees constraint satisfaction over the entire time domain. Further, a dynamic maximum velocity approach is proposed, which adapts the maximum velocity constraint to the environmental circumstances, such that an emergency brake can be applied if a human appears from behind a corner or obstacle. These algorithms are implemented on an autonomous forklift. The overall system performance is measured by the time-of-arrival of an obstacle avoidance manoeuvre. To evaluate the influence of a low-cost ultra-wideband (UWB) localization technology, the same algorithms and platform are also used in combination with standard off-the-shelf laser-based localization technology. The UWB technology does lead to a slightly larger spread in time-of-arrival, but is on average very comparable to the laser-based setup.
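The dynamic maximum-velocity idea — capping the speed so that an emergency brake can stop the AGV within its currently visible free distance — follows the standard stopping-distance relation v² = 2·a·d. A minimal sketch under that assumption (parameter names are illustrative, not from the paper):

```python
def dynamic_max_velocity(visible_distance, a_brake, v_limit):
    """Velocity cap so the AGV can come to a full stop within the
    visible free distance (from v^2 = 2*a*d), never exceeding the
    nominal speed limit of the vehicle."""
    v_stop = (2.0 * a_brake * visible_distance) ** 0.5
    return min(v_stop, v_limit)
```

Near a blind corner the visible distance shrinks, so the cap drops automatically; in open corridor segments the nominal limit applies.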
|
|
16:00-16:05, Paper WePM2_T2.8 | |
Learning to Compensate for the Drift and Error of Gyroscope in Vehicle Localization |
|
Zhao, Xiangrui | Zhejiang University |
Deng, Chunfang | Zhejiang University |
Kong, Xin | Zhejiang University |
Xu, Jinhong | Zhejiang University |
Liu, Yong | Zhejiang University
Keywords: Recurrent Networks, Autonomous / Intelligent Robotic Vehicles, Sensor and Data Fusion
Abstract: Self-localization is an essential technology for autonomous vehicles. Building robust odometry in a GPS-denied environment is still challenging, especially when LiDAR and camera are uninformative. In this paper, we propose a learning-based approach to correct the drift and error of a gyroscope for vehicle localization. For a consumer-level MEMS gyroscope (stability ~10°/h), our GyroNet can estimate the error of each measurement. For a high-precision fiber-optic gyroscope (stability ~0.05°/h), we build FoGNet, which obtains the drift by observing data over a long time window. We perform comparative experiments on publicly available datasets. The results demonstrate that GyroNet estimates angular velocity with higher precision than traditional digital filters and static initialization methods. In vehicle localization, FoGNet can effectively correct the small drift of the fiber-optic gyroscope (FoG) and achieves better results than the state-of-the-art method.
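The correction scheme — subtract a per-sample error estimate from each gyroscope measurement before integrating the heading — can be sketched as follows; in the paper the error values would come from the trained GyroNet, here they are simply given as a list:

```python
def integrate_heading(omega_meas, errors, dt):
    """Integrate yaw rate after subtracting the per-sample error
    predicted by a learned model (here: a precomputed list).

    omega_meas, errors: sequences of raw yaw-rate measurements and
    their predicted errors (rad/s); dt: sample period (s)."""
    heading = 0.0
    for w, e in zip(omega_meas, errors):
        heading += (w - e) * dt     # corrected rate, rectangular integration
    return heading
```

Without the correction, a constant bias in the raw measurements grows linearly into heading drift; with it, the integrated heading stays close to the true value.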
|
|
WePM2_T3 |
EGYPTIAN_3 |
Environment Perception. B |
Regular Session |
|
15:25-15:30, Paper WePM2_T3.1 | |
Simulation-Based Evaluation of Automotive Sensor Setups for Environmental Perception in Early Development Stages |
|
Hartstern, Maike | BMW Group // Karlsruhe Institute of Technology (KIT) |
Rack, Viktor | BMW AG |
Kaboli, Mohsen | BMW Group Research |
Stork, Wilhelm | Karlsruhe Institute of Technology |
Keywords: Vehicle Environment Perception, Automated Vehicles
Abstract: Car manufacturers are facing the challenge of defining suitable sensor setups that cover all requirements for the particular SAE level of automated driving. Besides the sensors’ performance and surround-view coverage, other factors like vehicle integration, costs and design aspects need to be taken into account. Additionally, a redundant sensor arrangement and the sensors’ sensitivity to environmental influences are of crucial importance for safety. As the degree of automation increases, vehicles require more external sensors to sufficiently observe their surrounding environment, which increases the variety of setup configurations and the difficulty of identifying the optimal one. Concerning the vehicle development process, concepts for sensor setups need to be defined at a very early stage. In this concept stage, it is not feasible to explore every possible sensor arrangement with test drives or to simulate the setup performance with the tools used for vehicle validation. Thus, we propose a new simulation-based evaluation method, which allows the configuration of arbitrary sensor setups and enables virtual test drives within specific scenarios to evaluate the setup performance in this early development phase with metrics and key performance indicators. Two different setups are analyzed to demonstrate the results of this evaluation method.
|
|
15:30-15:35, Paper WePM2_T3.2 | |
Predictive Control of an Autonomous Vehicle to Reduce Traffic Instability |
|
Sainct, Remi | Gustave Eiffel University |
Keywords: Automated Vehicles, Impact on Traffic Flows, Vehicle Environment Perception
Abstract: Traffic flow is known to be unstable at high densities, creating undesirable stop-and-go waves. Using real-time data and long-range perception, an automated vehicle (AV) could detect perturbations early and adapt its trajectory to keep a nearly constant speed, making the overall flow smoother. This paper presents the following algorithm. Using a short-time prediction of its leader's trajectory, the AV computes a future trajectory that minimizes speed variations while keeping both acceleration and jerk bounded. This controlled trajectory is updated in real time until the perturbation has passed and the vehicle returns to its normal behavior. Simulation results using human-driver data for the downstream traffic show that speed variations are always reduced by more than 20%.
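The bounded-acceleration, bounded-jerk smoothing described above can be sketched with a simple clipping scheme that tracks the predicted leader speed; this is only an illustration of the constraint structure, the paper's controller is an optimization-based one:

```python
def smooth_speed(v0, a0, target_speeds, dt, a_max, j_max):
    """Follow a predicted leader speed profile while keeping
    acceleration and jerk within symmetric bounds.

    Returns the list of smoothed ego speeds, one per time step."""
    v, a, out = v0, a0, []
    for vt in target_speeds:
        a_des = (vt - v) / dt                            # accel toward target
        a_des = max(-a_max, min(a_max, a_des))           # acceleration bound
        j = max(-j_max, min(j_max, (a_des - a) / dt))    # jerk bound
        a += j * dt
        v += a * dt
        out.append(v)
    return out
```

Because acceleration can only change by at most j_max·dt per step, the ego speed profile is far smoother than the leader's, which is exactly the wave-damping effect the abstract targets.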
|
|
15:35-15:40, Paper WePM2_T3.3 | |
Advanced Active Learning Strategies for Object Detection |
|
Schmidt, Sebastian | BMW AG |
Rao, Qing | BMW AG |
Tatsch, Julian | BMW Group AG |
Knoll, Alois | Technische Universität München |
Keywords: Deep Learning, Vehicle Environment Perception, Autonomous / Intelligent Robotic Vehicles
Abstract: Future self-driving cars must be able to perceive and understand their surroundings. Deep learning based approaches promise to solve the perception problem but require a large amount of manually labeled training data. Active learning is a training procedure during which the model itself selects interesting samples for labeling based on their uncertainty, so that substantially less data is required for training. Recent research in active learning is mostly focused on the simple image classification task. In this paper, we propose novel methods to estimate sample uncertainties for 2D and 3D object detection using ensembles. We moreover evaluate different training strategies, including continuous training, to alleviate the increasing training times caused by the active learning cycle. Finally, we investigate the effects of active learning on imbalanced datasets and possible interactions with class weighting. Experimental results show a time saving of around 55% and a data saving of around 30%. For the 3D object detection task, we show that our proposed uncertainty estimation method is valid, saving 35% of the data, and is thus ready for application to automotive object detection use cases.
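One common way to turn an ensemble into an uncertainty score for active learning is the predictive entropy of the mean class distribution across members; a minimal sketch of that general idea (the paper's acquisition functions for detection are richer than this):

```python
import math

def ensemble_uncertainty(member_probs):
    """Predictive entropy of the mean class distribution over
    ensemble members: high when the members disagree or are unsure.

    member_probs: list of per-member class probability lists."""
    n = len(member_probs)
    k = len(member_probs[0])
    mean = [sum(p[c] for p in member_probs) / n for c in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0)
```

Samples with the highest score are handed to the annotators first, which is how active learning concentrates the labeling budget on informative examples.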
|
|
15:40-15:45, Paper WePM2_T3.4 | |
Leveraging Uncertainties for Deep Multi-Modal Object Detection in Autonomous Driving |
|
Feng, Di | Robert Bosch GmbH |
Cao, Yifan | Karlsruhe Institute of Technology
Rosenbaum, Lars | Robert Bosch GmbH |
Timm, Fabian | Robert Bosch GmbH |
Dietmayer, Klaus | University of Ulm |
Keywords: Deep Learning, Vehicle Environment Perception, Automated Vehicles
Abstract: This work presents a probabilistic deep neural network that combines LiDAR point clouds and RGB camera images for robust, accurate 3D object detection. We explicitly model uncertainties in the classification and regression tasks, and leverage uncertainties to train the fusion network via a sampling mechanism. We validate our method on three datasets with challenging real-world driving scenarios. Experimental results show that the predicted uncertainties reflect complex environmental uncertainty like difficulties of a human expert to label objects. The results also show that our method consistently improves the Average Precision by up to 7% compared to the baseline method. When sensors are temporally misaligned, the sampling method improves the Average Precision by up to 20%, showing its high robustness against noisy sensor inputs.
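A standard way to "leverage uncertainties" in a regression loss is heteroscedastic attenuation in the style of Kendall and Gal, where the network predicts a log-variance alongside each output; the sketch below illustrates that general technique, not necessarily the paper's exact loss:

```python
import math

def heteroscedastic_loss(y_true, y_pred, log_var):
    """Uncertainty-attenuated regression loss: the squared residual is
    down-weighted by the predicted variance, and a log-variance term
    penalizes the network for claiming high uncertainty everywhere."""
    return 0.5 * math.exp(-log_var) * (y_true - y_pred) ** 2 + 0.5 * log_var
```

For a large residual, predicting a larger variance lowers the loss, so hard or noisy samples stop dominating training; for small residuals, low variance is optimal.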
|
|
15:45-15:50, Paper WePM2_T3.5 | |
Optical Flow Based Visual Potential Field for Autonomous Driving |
|
Capito, Linda | Ohio State University |
Ozguner, Umit | Ohio State University |
Redmill, Keith | Ohio State University |
Keywords: Automated Vehicles, Vehicle Environment Perception, Vehicle Control
Abstract: Monocular vision based navigation for automated driving is a challenging task due to the lack of sufficient information to compute temporal relationships among objects on the road. Optical flow is one option to obtain temporal information from monocular camera images, and has been widely used to identify objects and their relative motion. This work proposes to generate an artificial potential field, i.e. a visual potential field, from a sequence of images using sparse optical flow, which is used together with a gradient tracking sliding mode controller to navigate the vehicle to its destination without colliding with obstacles. The angular reference for the vehicle is computed online. In this work, the vehicle does not require a priori information about the map or obstacles to navigate successfully. The proposed technique is tested both in simulation and on a small dataset of real images.
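A classic balance-of-flow heuristic illustrates how sparse optical flow can yield a steering cue: nearer obstacles produce larger flow, so the vehicle steers away from the image half with more flow. A minimal sketch of that heuristic only (the paper's potential-field construction is more involved):

```python
def steering_from_flow(flow_points, image_width):
    """Normalized steering cue from sparse optical flow vectors.

    flow_points: list of (x, y, u, v) tuples, where (x, y) is the
    feature position and (u, v) its flow vector.
    Returns a value in [-1, 1]; positive means steer right
    (more flow magnitude on the left half of the image)."""
    left = sum((u * u + v * v) ** 0.5
               for x, y, u, v in flow_points if x < image_width / 2)
    right = sum((u * u + v * v) ** 0.5
                for x, y, u, v in flow_points if x >= image_width / 2)
    total = left + right
    if total == 0:
        return 0.0              # no flow observed: keep heading
    return (left - right) / total
```

In practice the sparse flow itself would come from a tracker such as pyramidal Lucas-Kanade; here the vectors are taken as given.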
|
|
15:50-15:55, Paper WePM2_T3.6 | |
3D-DEEP: 3-Dimensional Deep-Learning Based on Elevation Patterns for Road Scene Interpretation |
|
Hernández Saz, Álvaro | University of Alcalá |
Woo, Suhan | Yonsei University |
Corrales Sánchez, Héctor | Universidad De Alcalá |
Parra Alonso, Ignacio | Universidad De Alcala |
Kim, Euntai | Yonsei University
Fernandez Llorca, David | University of Alcala |
Sotelo, Miguel A. | University of Alcala |
Keywords: Vehicle Environment Perception, Convolutional Neural Networks
Abstract: This paper describes a new network architecture and its end-to-end training methodology for CNN-based semantic segmentation. The method relies on disparity-filtered images, LiDAR-projected images for three-dimensional information, and BiSeNet as the backbone architecture. The developed models were trained and validated on the Cityscapes dataset, using only the finely annotated examples and 19 training classes, and on the KITTI road dataset. A 72.32% mIoU has been obtained for the 19 Cityscapes training classes on the validation images. On the KITTI dataset, the model achieved an F1 score of 97.85% in validation and 96.02% on the test images.
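The mIoU figure reported above is the mean of per-class intersection-over-union values computed from a confusion matrix; a minimal sketch of the metric itself:

```python
def mean_iou(confusion):
    """Mean intersection-over-union from a square confusion matrix
    (rows = ground truth, columns = prediction); classes absent from
    both prediction and ground truth are skipped."""
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp
        fn = sum(confusion[c]) - tp
        denom = tp + fp + fn
        if denom > 0:
            ious.append(tp / denom)
    return sum(ious) / len(ious)
```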
|
|
15:55-16:00, Paper WePM2_T3.7 | |
Phenomenological Modelling of Lane Detection Sensors for Validating Performance of Lane Keeping Assist Systems |
|
Höber, Michael | Graz University of Technology |
Nalic, Demin | Technical University Graz
Eichberger, Arno | TU Graz |
Samiee, Sajjad | Technical University Graz
Magosi, Zoltan | Graz University of Technology |
Payerl, Christian | MAGNA Steyr |
Keywords: Vehicle Environment Perception, Image, Radar, Lidar Signal Processing, Advanced Driver Assistance Systems
Abstract: A well-established Lane Keeping Assist System (LKAS) plays an important role in the field of Automated Driving (AD). An essential issue in LKAS, and generally in Advanced Driver Assistance Systems (ADAS), is lane detection. Because camera systems are inexpensive, most lane detection methods are vision based. To cope with the infinite number of test cases, virtual testing of ADAS has become state of the art. Realistic behavior and analytical models of ADAS components are crucial for reliable simulation results. The focus of this study is performance validation of LKAS through simulation. High complexity as well as sensitivity to illumination variation, shadows and different weather conditions make it difficult to develop camera or environment models that map the realistic behavior of LKAS. To avoid these complexities and minimize the modelling effort, a phenomenological lane detection model (PLDM) is introduced. For that purpose, comprehensive measurements are carried out within the Austrian Light Vehicle Proving Region for Automated Driving (ALP.Lab) using a test vehicle equipped with LKAS. Applying the proposed phenomenological model makes it possible to test any LKAS regardless of its controller. The PLDM is implemented and validated with the recorded data in the simulation environment of IPG CarMaker. The results show realistic system performance of the developed and implemented LKAS system.
|
|
16:00-16:05, Paper WePM2_T3.8 | |
SLHP: Short-/Long-Term Hybrid Prediction for Road Users |
|
Takei, Shoichi | Nissan Motor Co., Ltd |
Tanaka, Shinya | Nissan Motor Co., Ltd
Yamaguchi, Shotaro | Nissan Motor Co., Ltd
Khiat, Abdelaziz | Nissan Motor Co., Ltd |
Keywords: Self-Driving Vehicles, Vehicle Environment Perception
Abstract: We propose a novel method called Short-/Long-term Hybrid Prediction (SLHP) that predicts short-term and long-term trajectories of surrounding objects while estimating their future influence on an autonomous ego-vehicle for both types of trajectories. Recently, long-term prediction methods based on trajectory sample generation and verification against road environments and/or interactions have been proposed; however, they entail high computational costs because they need to generate and verify multiple trajectory samples for multiple objects. Therefore, they are not appropriate in scenes where short-term prediction is required, such as sudden motions by surrounding objects. In contrast, our SLHP combines short-term and long-term trajectory predictors into a hybrid prediction. SLHP provides flexible predictions that are appropriate to scenes shaped by objects' motions, road environments, interactions, and so on. In this paper, we apply our method to cut-in prediction, a typical prediction task, and evaluate it using a public road dataset that includes various cut-in events. Experimental results show that SLHP achieves a correctness rate of F-measure = 0.86 for cut-in prediction. Additionally, we confirmed the effectiveness of our hybrid prediction method, which provides predictions as early as 3.57 s and 4.82 s before the cut-in event for short-term and long-term trajectory predictions, respectively.
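The reported correctness rate is an F-measure, the harmonic mean of precision and recall over predicted cut-in events; a minimal sketch of the metric:

```python
def f_measure(tp, fp, fn):
    """F-measure (F1) from counts of true positives, false positives
    and false negatives: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```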
|
| |