Last updated on September 20, 2020. This conference program is tentative and subject to change.
Technical Program for Thursday October 22, 2020

ThAM1_T1
EGYPTIAN_1
Image, Radar, Lidar Signal Processing. A
Regular Session

09:25-09:30, Paper ThAM1_T1.1
Automatic Interaction Detection between Vehicles and Vulnerable Road Users During Turning at an Intersection

Cheng, Hao | Leibniz Universität Hannover
Liu, HaiLong | Nagoya University
Hirayama, Takatsugu | Nagoya University
Shinmura, Fumito | Nagoya University
Akai, Naoki | Nagoya University
Murase, Hiroshi | Nagoya University
Keywords: Vulnerable Road-User Safety, Image, Radar, Lidar Signal Processing, Deep Learning
Abstract: Interaction detection between vehicles and vulnerable road users (e.g., pedestrians and cyclists) is important for applications such as safety control and autonomous driving. However, automatic interaction detection poses many challenges, such as the ambiguity of defining when interaction is required in dynamic traffic activities among different road users and the lack of labeled data for training a machine learning detector. To overcome these challenges, we introduce a way to define whether or not interaction is required in various traffic scenes and create a large real-world dataset from a very challenging intersection. A sequence-to-sequence method is proposed for automatic interaction detection; it uses the object information and motion information of the traffic scenes, extracted by a state-of-the-art object detector and from optical flow, respectively. The proposed method generates a probability of interaction at each short interval (<0.1 s) that represents the change of interaction along a sequence. We obtain a baseline model that differentiates no interaction from interaction on the basis of the location and road user type from the detected object information. Compared with the baseline model, the empirical results of the proposed method demonstrate very accurate predictions for vehicle turning sequences of varying length.

09:30-09:35, Paper ThAM1_T1.2
Multi-Depth Sensing for Applications with Indirect Solid-State LiDAR

Schoenlieb, Armin | Infineon
Lugitsch, David | Infineon Technologies
Steger, Christian | Graz University of Technology
Holweg, Gerald | Infineon
Druml, Norbert | Infineon Technologies
Keywords: Lidar Sensing and Perception, Image, Radar, Lidar Signal Processing, Vision Sensing and Perception
Abstract: In recent years, topics like autonomous driving have increased the demand for robust environmental sensors, with depth sensors being the most commonly used. Solid-state Light Detection And Ranging (LiDAR) sensors are well suited for these applications. The measurement principle is based on measuring the phase, and consequently the delay, of emitted and reflected light. Problems arise if strong reflectors, like street signs, impair the measurement. In this paper, we present a novel algorithm for depth calculation based on indirect Time-of-Flight (ToF) data. With this approach it is possible to separate multiple reflectors in the scenery, which allows the generation of multiple depth images. In our approach, an arbitrary number of different code sequences are applied as the modulation signal. With these code sequences we generate a so-called ToF matrix, with which the measured environmental response can be mapped to a distance. As our evaluation shows, our method recovers more information than conventional ToF imaging. We demonstrate the separation of the reflection of a street sign from a target. This algorithm enables the usage of indirect ToF in automotive areas. We believe that this versatile calculation approach can increase the benefit of indirect LiDAR applications for autonomous driving.
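For readers unfamiliar with indirect ToF, the sketch below shows the conventional single-return 4-phase depth computation that the multi-depth approach above generalizes. The correlation-sample names (a0..a3) and the 20 MHz modulation frequency are illustrative assumptions, not details from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def itof_depth(a0, a1, a2, a3, f_mod=20e6):
    """Conventional 4-phase indirect ToF depth from correlation samples.

    a0..a3 are per-pixel correlation values sampled at 0/90/180/270 degree
    phase offsets; f_mod is the modulation frequency (20 MHz assumed here).
    A single reflector per pixel is assumed, which is exactly the limitation
    the paper's multi-depth ToF-matrix approach is designed to lift.
    """
    phase = np.arctan2(a3 - a1, a0 - a2)      # phase delay in [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)          # wrap to [0, 2*pi)
    return (C * phase) / (4 * np.pi * f_mod)  # unambiguous range: c / (2 * f_mod)
```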

09:35-09:40, Paper ThAM1_T1.3
SalsaNet: Fast Road and Vehicle Segmentation in LiDAR Point Clouds for Autonomous Driving

Aksoy, Eren Erdal | Halmstad University
Baci, Saimir | Volvo
Cavdar, Selcuk | Volvo
Keywords: Lidar Sensing and Perception, Deep Learning, Self-Driving Vehicles
Abstract: In this paper, we introduce a deep encoder-decoder network, named SalsaNet, for efficient semantic segmentation of 3D LiDAR point clouds. SalsaNet segments the road, i.e. drivable free-space, and vehicles in the scene by employing the Bird-Eye-View (BEV) image projection of the point cloud. To overcome the lack of annotated point cloud data, in particular for the road segments, we introduce an auto-labeling process which transfers automatically generated labels from the camera to LiDAR. We also explore the role of image-like projection of LiDAR data in semantic segmentation by comparing BEV with spherical-front-view projection and show that SalsaNet is projection-agnostic. We perform quantitative and qualitative evaluations on the KITTI dataset, which demonstrate that the proposed SalsaNet outperforms other state-of-the-art semantic segmentation networks in terms of accuracy and computation time. Our code and data are publicly available at https://gitlab.com/aksoyeren/salsanet.git.
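As context for the BEV projection step, here is a minimal sketch of rasterizing a LiDAR cloud into a bird's-eye-view tensor. The grid extents, resolution, and channel choices (max height, mean intensity, density) are assumptions for illustration and are not SalsaNet's exact encoding.

```python
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), res=0.25):
    """Rasterize an (N, 4) LiDAR cloud (x, y, z, intensity) into a BEV tensor."""
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((3, h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)

    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    ix = ((pts[:, 0] - x_range[0]) / res).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(int)

    np.maximum.at(bev[0], (ix, iy), pts[:, 2])  # max height per cell
    np.add.at(bev[1], (ix, iy), pts[:, 3])      # summed intensity
    np.add.at(counts, (ix, iy), 1.0)
    bev[1] = np.divide(bev[1], counts, out=np.zeros_like(counts), where=counts > 0)
    bev[2] = np.minimum(counts / 16.0, 1.0)     # normalized point density
    return bev
```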

09:40-09:45, Paper ThAM1_T1.4
SiaNMS: Non-Maximum Suppression with Siamese Networks for Multi-Camera 3D Object Detection

Cortés, Irene | Universidad Carlos III De Madrid
Beltrán, Jorge | Universidad Carlos III De Madrid
de la Escalera, Arturo | Universidad Carlos III De Madrid
Garcia, Fernando | Universidad Carlos III De Madrid
Keywords: Sensor and Data Fusion, Vehicle Environment Perception, Deep Learning
Abstract: The rapid development of embedded hardware in autonomous vehicles broadens their computational capabilities, thus bringing the possibility to mount more complete sensor setups able to handle driving scenarios of higher complexity. As a result, new challenges such as multiple detections of the same object have to be addressed. In this work, a Siamese network is integrated into the pipeline of a well-known 3D object detector to suppress duplicate proposals coming from different cameras via re-identification. Additionally, associations are exploited to enhance the 3D box regression of the object by aggregating the corresponding LiDAR frustums. The experimental evaluation on the nuScenes dataset shows that the proposed method outperforms traditional NMS approaches.
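The abstract does not spell out the suppression rule, but a minimal sketch of re-identification-based duplicate removal with Siamese embeddings could look as follows. The greedy score ordering and the distance threshold are illustrative assumptions.

```python
import numpy as np

def sia_nms(boxes, scores, embeddings, dist_thresh=0.7):
    """Greedy duplicate suppression using Siamese appearance embeddings.

    Instead of a pure IoU test, two detections (possibly from different
    cameras, whose boxes may barely overlap) are treated as the same object
    when their L2-normalized embeddings are close.
    """
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    order = np.argsort(-scores)                      # highest score first
    keep, suppressed = [], np.zeros(len(scores), dtype=bool)
    for i in order:
        if suppressed[i]:
            continue
        keep.append(i)
        dists = np.linalg.norm(emb - emb[i], axis=1)
        suppressed |= dists < dist_thresh            # re-identified duplicates
        suppressed[i] = True
    return [boxes[k] for k in keep]
```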

09:45-09:50, Paper ThAM1_T1.5
Single-Stage Object Detection from Top-View Grid Maps on Custom Sensor Setups

Wirges, Sascha | Karlsruhe Institute of Technology
Ding, Shuxiao | Karlsruhe Institute of Technology
Stiller, Christoph | Karlsruhe Institute of Technology
Keywords: Lidar Sensing and Perception, Unsupervised Learning, Vehicle Environment Perception
Abstract: We present our approach to unsupervised domain adaptation for single-stage object detectors on top-view grid maps in automated driving scenarios. Our goal is to train a robust object detector on grid maps generated from custom sensor data and setups. We first introduce a single-stage object detector for grid maps based on RetinaNet. We then extend our model by image- and instance-level domain classifiers at different feature pyramid levels, which are trained in an adversarial manner. This allows us to train robust object detectors for unlabeled domains. We evaluate our approach quantitatively on the nuScenes and KITTI benchmarks and present qualitative domain adaptation results for unlabeled measurements recorded by our experimental vehicle. Our results demonstrate that object detection accuracy for unlabeled domains can be improved by applying our domain adaptation strategy.
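The abstract states that the domain classifiers are trained adversarially; one common realization of this is a gradient reversal layer (Ganin and Lempitsky), which is assumed in the PyTorch sketch below. The head sizes are chosen for illustration only.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DomainClassifier(nn.Module):
    """Image-level domain head attached to one feature pyramid level."""
    def __init__(self, channels, lam=1.0):
        super().__init__()
        self.lam = lam
        self.head = nn.Sequential(
            nn.Conv2d(channels, 64, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))

    def forward(self, feat):
        # Reversed gradients push the backbone toward domain-invariant features.
        return self.head(GradReverse.apply(feat, self.lam))
```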

09:50-09:55, Paper ThAM1_T1.6
Estimation of 2D Bounding Box Orientation with Convex-Hull Points – A Quantitative Evaluation on Accuracy and Efficiency

Liu, Yang | Huawei Technologies Canada
Liu, Bingbing | Huawei
Zhang, Hongbo | Huawei Technologies Co., Ltd
Keywords: Lidar Sensing and Perception, Self-Driving Vehicles, Vehicle Environment Perception
Abstract: Estimating the bounding box from an object point cloud is an essential task in autonomous driving with LiDAR/laser sensors. We present an efficient bounding box estimation method that can be applied to 2D bird's-eye view (BEV) LiDAR points to generate the bounding box geometry, including length, width, and orientation. Given a set of 2D points, the method utilizes their convex-hull points to calculate a small set of candidate directions for the box yaw orientation, and therefore reduces the search space – usually a fine partition of an angle range (e.g. [0, π/2)) in previous solutions – needed to find the optimal angle. To further improve the efficiency, we investigate techniques for controlling the number of convex-hull points, by both applying an approximate collinearity condition and downsampling the raw point cloud to a smaller size. We provide a comprehensive analysis of both the accuracy and efficiency of the proposed method on the KITTI 3D object dataset. The results show that without noticeably sacrificing accuracy, the method, especially when using approximate convex-hull points, can reduce the time for estimating the bounding box orientation by almost an order of magnitude.
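A minimal sketch of the core idea follows: only the convex-hull edge directions are tried as yaw candidates instead of scanning a fine partition of [0, π/2). The minimum-area selection criterion used here is one common choice and may differ from the paper's fitting score.

```python
import numpy as np
from scipy.spatial import ConvexHull

def bbox_from_convex_hull(points_bev):
    """Estimate an oriented 2D bounding box for (N, 2) BEV LiDAR points."""
    hull_pts = points_bev[ConvexHull(points_bev).vertices]
    edges = np.diff(np.vstack([hull_pts, hull_pts[:1]]), axis=0)
    # Each hull edge direction, folded into [0, pi/2), is a yaw candidate.
    angles = np.mod(np.arctan2(edges[:, 1], edges[:, 0]), np.pi / 2)

    best = None
    for yaw in np.unique(angles):
        c, s = np.cos(-yaw), np.sin(-yaw)
        rot = points_bev @ np.array([[c, -s], [s, c]]).T  # axis-align by -yaw
        extent = rot.max(axis=0) - rot.min(axis=0)
        area = extent[0] * extent[1]
        if best is None or area < best[0]:
            best = (area, yaw, extent)
    _, yaw, (length, width) = best  # the larger extent can be taken as length
    return yaw, length, width
```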

09:55-10:00, Paper ThAM1_T1.7
Improving 3D Object Detection Via Joint Attribute-Oriented 3D Loss

Ye, Zhen | Xi'an Jiaotong University, Xi'an, P.R.China
Xue, Jianru | Xi'an Jiaotong University
Dou, Jian | Laboratory of Visual Cognitive Computing and Intelligent Vehicle
Pan, Yuxin | Xi'an Jiaotong University
Fang, Jianwu | Chang'an University
Wang, Di | Xi'an Jiaotong University
Zheng, Nanning | Xi'an Jiaotong University
Keywords: Deep Learning, Lidar Sensing and Perception
Abstract: 3D object detection has become a hot topic in intelligent vehicle applications in recent years. Generally, deep learning has been the primary framework used in 3D object detection, and regression of the object location and classification of the objectness are its two indispensable components. During training, the L-n loss and the focal loss are the frequent choices for minimizing the regression and classification loss, respectively. However, there are two problems to be solved in existing methods. For the regression component, there is a gap between the evaluation metrics, e.g., 3D Intersection over Union (IoU), and the traditional regression loss. As for the classification component, the confidence score is ambiguous due to the binary label assignment of the target. To solve these problems, we propose a loss that joins 3D IoU with other geometric attributes (named the joint attribute-oriented 3D loss), which can be directly used to optimize the regression component. In addition, the joint attribute-oriented 3D loss can assign a soft label for supervising the training of the classification. By incorporating the proposed loss function into several state-of-the-art 3D object detection methods, significant performance improvements are achieved on the KITTI benchmark.
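As a simplified illustration of an IoU-driven regression objective, the sketch below implements a differentiable axis-aligned 3D IoU loss in PyTorch. The paper's joint loss additionally couples orientation and other geometric attributes, which are omitted here.

```python
import torch

def iou3d_loss_axis_aligned(pred, target, eps=1e-7):
    """1 - IoU for axis-aligned 3D boxes given as (cx, cy, cz, l, w, h)."""
    p_min, p_max = pred[:, :3] - pred[:, 3:] / 2, pred[:, :3] + pred[:, 3:] / 2
    t_min, t_max = target[:, :3] - target[:, 3:] / 2, target[:, :3] + target[:, 3:] / 2
    # Overlap per axis, clamped at zero when the boxes do not intersect.
    inter = (torch.min(p_max, t_max) - torch.max(p_min, t_min)).clamp(min=0)
    inter_vol = inter.prod(dim=1)
    vol_p = pred[:, 3:].prod(dim=1)
    vol_t = target[:, 3:].prod(dim=1)
    iou = inter_vol / (vol_p + vol_t - inter_vol + eps)
    return (1.0 - iou).mean()
```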

10:00-10:05, Paper ThAM1_T1.8
Cluster Analysis of Vehicle Measurement Data for Improved Understanding of Behavior in Urban and Highway Environments

Sonka, Adrian | Institute of Automotive Engineering, Technical University Braunschweig
Thal, Silvia | Technische Universität Braunschweig, Institute of Automotive Engineering
Henze, Roman | Technical University of Braunschweig
Beernaert, Kelly | Ibeo Automotive Systems GmbH
Lages, Ulrich | Ibeo Automotive Systems GmbH
Keywords: Unsupervised Learning, Automated Vehicles, Lidar Sensing and Perception
Abstract: Comprehending the behavior of objects in the environment of an automated vehicle is essential information to be extracted from traffic measurement data. It can be made accessible by applying a data description process consisting of a cluster analysis and a subsequent classification step. In this paper, two different algorithms, TRACLUS and Self-Organizing Maps, are utilized for clustering laser scanner measurement data recorded by experimental vehicles in real traffic. The results are evaluated and discussed. A remaining open issue of a cluster analysis is the necessity for a classification of clusters, which is missing or solved through manual solutions in related work. We address the matter with a novel approach for an automated classification of clusters using reference trajectories, which is successfully validated for a first exemplary maneuver on highway measurement data.

ThAM1_T2
EGYPTIAN_2
Motion Planning. A
Regular Session

09:25-09:30, Paper ThAM1_T2.1
Machine Learning Based Motion Planning Approach for Intelligent Vehicles

Artunedo, Antonio | Centre for Automation and Robotics (CSIC-UPM)
Corrales, Gabriel | Centre for Automation and Robotics
Villagra, Jorge | Centre for Automation and Robotics (CSIC-UPM)
Godoy, Jorge | Centre for Automation and Robotics (UPM-CSIC)
Keywords: Autonomous / Intelligent Robotic Vehicles, Self-Driving Vehicles, Situation Analysis and Planning
Abstract: Handling complex situations in automated driving requires increasing computational resources. In this work, we propose a machine learning approach for motion planning that aims to optimize the set of path candidates to be evaluated in accordance with the driving context. Thus, the computational cost of the whole motion planning strategy can be reduced while still generating safe and comfortable trajectories when required. The proposed strategy has been implemented in a real experimental platform and validated in different operating environments, successfully providing high-quality trajectories within a short time frame.

09:30-09:35, Paper ThAM1_T2.2
Off-Road Autonomous Vehicles Traversability Analysis and Trajectory Planning Based on Deep Inverse Reinforcement Learning

Zhu, Zeyu | Key Laboratory of Machine Perception, Peking University
Li, Nan | Peking University
Sun, Ruoyu | University of Manchester
Xu, Donghao | Peking University
Zhao, Huijing | Peking University
Keywords: Situation Analysis and Planning, Autonomous / Intelligent Robotic Vehicles, Reinforcement Learning
Abstract: Terrain traversability analysis is a fundamental issue in achieving the autonomy of a robot in off-road environments. Geometry-based and appearance-based methods have been studied for decades, while behavior-based methods exploiting learning from demonstration (LfD) are a newer trend. Behavior-based methods learn cost functions that guide trajectory planning in compliance with experts' demonstrations, which can be more scalable to various scenes and driving behaviors. This research proposes a method for off-road traversability analysis and trajectory planning using Deep Maximum Entropy Inverse Reinforcement Learning. To incorporate the vehicle's kinematics while solving the problem of the exponential increase of state-space complexity, two convolutional neural networks, i.e., RL ConvNet and Svf ConvNet, are developed to encode kinematics into convolution kernels and achieve efficient forward reinforcement learning. We conduct experiments in off-road environments. Scene maps are generated using 3D LiDAR data, and expert demonstrations are either the vehicle's real driving trajectories at the scene or synthesized ones that represent specific behaviors such as crossing negative obstacles. Different cost functions for traversability analysis are learned and tested at various scenes for their capability to guide the trajectory planning of different behaviors. We also demonstrate the performance and computational efficiency of the proposed method.

09:35-09:40, Paper ThAM1_T2.3
Optimal Vehicle Path Planning Using Quadratic Optimization for Baidu Apollo Open Platform

Zhang, Yajia | Baidu USA
Sun, Hongyi | Baidu USA LLC
Zhou, Jinyun | Baidu USA LLC
Pan, Jiacheng | Baidu USA
Hu, Jiangtao | Baidu USA
Miao, Jinghao | Baidu USA
Keywords: Self-Driving Vehicles, Automated Vehicles, Autonomous / Intelligent Robotic Vehicles
Abstract: Path planning is a key component of motion planning for autonomous vehicles. A path specifies the geometric shape along which the vehicle will travel; thus, it is critical to safe and comfortable vehicle motion. For urban driving scenarios, autonomous vehicles need the ability to navigate cluttered environments, e.g., roads partially blocked by a number of vehicles/obstacles on the sides. Generating a kinematically feasible, smooth path that avoids collisions in complex environments makes path planning a challenging problem. In this paper, we present a novel quadratic programming approach that generates optimal paths with resolution-complete collision avoidance capability.
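To make the flavor of such a formulation concrete, here is a hedged sketch of a corridor-constrained quadratic program in cvxpy. The variable layout, weights, and constraints are illustrative simplifications, not the actual Apollo formulation.

```python
import cvxpy as cp
import numpy as np

def plan_path(l_min, l_max, w_smooth=10.0, w_ref=1.0):
    """Smooth a lateral-offset profile l(s) inside a driving corridor.

    Stations are assumed uniformly spaced along the reference line; l_min and
    l_max encode obstacle-induced corridor bounds per station.
    """
    n = len(l_min)
    l = cp.Variable(n)
    smooth = cp.sum_squares(l[2:] - 2 * l[1:-1] + l[:-2])  # second differences
    ref = cp.sum_squares(l)                                # stay near centerline
    prob = cp.Problem(cp.Minimize(w_smooth * smooth + w_ref * ref),
                      [l >= l_min, l <= l_max])
    prob.solve()
    return l.value

# e.g. a corridor that forces a swerve around a blockage on the right
l_lo = np.full(30, -1.5)
l_lo[10:18] = 0.5
path = plan_path(l_lo, np.full(30, 1.5))
```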

09:40-09:45, Paper ThAM1_T2.4
Collision-Free Path Planning for Automated Vehicles Risk Assessment Using Predictive Occupancy Map

Shen, Dan | Indiana University – Purdue University Indianapolis
Chen, Yaobin | Purdue School of Engineering and Technology, IUPUI
Li, Lingxi | Indiana University-Purdue University Indianapolis
Chien, Stanley | Indiana University-Purdue University Indianapolis
Keywords: Automated Vehicles, Collision Avoidance, Vehicle Control
Abstract: A vehicle collision avoidance system (CAS) is a control system that can guide the vehicle into a collision-free safe region in the presence of other objects on the road. Common CAS functions, such as forward-collision warning and automatic emergency braking, have recently been developed and equipped on production vehicles. However, these CASs focus on mitigating or avoiding potential crashes with preceding cars and objects; they are not effective for crash scenarios involving vehicles approaching from the rear or from lateral directions. This paper proposes a novel collision avoidance system that provides the vehicle with all-around (360-degree) collision avoidance capability. A risk evaluation model is developed to calculate potential risk levels by considering surrounding vehicles (according to their relative positions, velocities, and accelerations) and using a predictive occupancy map (POM). By using the POM, the safest path with the minimum risk values is chosen from 12 acceleration-based trajectory directions. The global optimal trajectory is then planned using the optimal rapidly exploring random tree (RRT*) algorithm. The planned vehicle motion profile is generated as the reference for future control. Simulation results show that the developed POM-based CAS demonstrates effective operation in mitigating potential crashes in both lateral and rear-end crash scenarios.

09:45-09:50, Paper ThAM1_T2.5
Probabilistic Long-Term Vehicle Trajectory Prediction Via Driver Awareness Model

Liu, Jinxin | Tsinghua University
Xiong, Hui | Tsinghua University
Huang, Heye | Tsinghua University
Luo, Yugong | Tsinghua University, Beijing
Zhong, Zhihua | Tsinghua University
Li, Keqiang | Tsinghua University
Keywords: Automated Vehicles, Advanced Driver Assistance Systems, Situation Analysis and Planning
Abstract: Accurate long-term trajectory prediction for surrounding vehicles is a crucial prerequisite for intelligent vehicles to accomplish high-quality decision making and motion planning. In this paper, to achieve high prediction accuracy in both the short and the long term, we propose an integrated probabilistic framework that combines a driver awareness model and a Gaussian process model. The former obtains high-level semantic information using low-level two-dimensional motion elements, while the latter incorporates the vehicle physical model to reach good prediction performance with a strengthened historical input sequence. Furthermore, experiments on a public naturalistic driving dataset in lane-changing scenarios are conducted to verify our approach. Compared with another advanced method, our approach demonstrates higher estimation and prediction accuracy, as well as a more reasonable uncertainty description over the whole prediction process.
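As a sketch of the Gaussian process component, the following fits a GP to one coordinate of an observed track and extrapolates with uncertainty using scikit-learn. The kernel and hyperparameters are assumptions, and the paper's framework additionally fuses the driver awareness model and a vehicle physical model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_predict(times, positions, horizon, dt=0.1):
    """Extrapolate one coordinate of an observed track with a GP.

    times: (N,) observation timestamps; positions: (N,) e.g. lateral position.
    Returns predictive mean and per-step standard deviation for the horizon.
    """
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(times.reshape(-1, 1), positions)
    t_future = (times[-1] + dt * np.arange(1, horizon + 1)).reshape(-1, 1)
    mean, std = gp.predict(t_future, return_std=True)
    return mean, std  # the growing std expresses long-term uncertainty
```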

09:50-09:55, Paper ThAM1_T2.6
A Geometric Approach to On-Road Motion Planning for Long and Multi-Body Heavy-Duty Vehicles

Oliveira, Rui | KTH Royal Institute of Technology
Ljungqvist, Oskar | Linköping University
Lima, Pedro F. | KTH Royal Institute of Technology
Wahlberg, Bo | KTH Royal Institute of Technology
Keywords: Situation Analysis and Planning, Self-Driving Vehicles, Automated Vehicles
Abstract: Driving heavy-duty vehicles, such as buses and tractor-trailer vehicles, is a difficult task in comparison to driving passenger cars. Most research on motion planning for autonomous vehicles has focused on passenger vehicles, and many unique challenges associated with heavy-duty vehicles remain open. However, recent works have started to tackle the particular difficulties related to on-road motion planning for buses and tractor-trailer vehicles using numerical optimization approaches. In this work, we propose a framework to design an optimization objective to be used in motion planners. Based on geometric derivations, the method finds the optimal trade-off between the conflicting objectives of centering different axles of the vehicle in the lane. For buses, we consider the trade-off between the front and rear axles, whereas for articulated vehicles, we consider the trade-off between the tractor and trailer rear axles. Our results show that the proposed design strategy produces planned paths that considerably improve the behavior of heavy-duty vehicles by keeping the whole vehicle body in the center of the lane.

09:55-10:00, Paper ThAM1_T2.7
Clustering Traffic Scenarios Using Mental Models As Little As Possible

Hauer, Florian | Technical University of Munich
Gerostathopoulos, Ilias | Vrije Universiteit Amsterdam
Schmidt, Tabea | Technical University of Munich
Pretschner, Alexander | Technical University of Munich
Keywords: Automated Vehicles, Unsupervised Learning, Self-Driving Vehicles
Abstract: Test scenario generation for testing automated and autonomous driving systems requires knowledge about the recurring traffic cases, known as scenario types. The most common approach in industry is to have experts create lists of scenario types. This poses the risk both that certain types are overlooked and that the mental model underlying the manual process is inadequate. We propose to extract scenario types from real driving data by clustering recorded scenario instances, which are composed of time series. Existing works in the domain of traffic data either cannot cope with multivariate time series, are limited to one or two vehicles per scenario instance, or use handcrafted features that are based on the mental model of the data scientist. The latter suffers from similar shortcomings as manual scenario type derivation. Our approach clusters scenario instances while relying as little as possible on a mental model. As such, we consider the approach an important complement to manual scenario type derivation. It may yield scenario types overlooked by the experts, and it may provide a different segmentation of a whole set of scenario instances into scenario types, thus overall increasing confidence in the handcrafted scenario types. We present the application of the approach to a real driving dataset.

10:00-10:05, Paper ThAM1_T2.8
CommonRoad Drivability Checker: Simplifying the Development and Validation of Motion Planning Algorithms

Pek, Christian | Technical University of Munich
Rusinov, Vitaliy | Technical University of Munich
Manzinger, Stefanie | Technische Universität München
Üste, Murat Can | Technical University of Munich
Althoff, Matthias | Technische Universität München
Keywords: Autonomous / Intelligent Robotic Vehicles, Collision Avoidance, Vehicle Control
Abstract: Collision avoidance, kinematic feasibility, and road compliance must be validated to ensure the drivability of planned motions for autonomous vehicles. Although these tasks are highly repetitive, computationally efficient toolboxes are still unavailable. The CommonRoad Drivability Checker, an open-source toolbox, unifies these checks. It is compatible with the CommonRoad benchmark suite, which additionally facilitates the development of motion planners. Our toolbox drastically reduces the effort of developing and validating motion planning algorithms. Numerical experiments show that our toolbox is real-time capable and can be used in real test vehicles.

ThAM1_T3
EGYPTIAN_3
Driver State Recognition. A
Regular Session

09:25-09:30, Paper ThAM1_T3.1
Identifying High-Risk Older Drivers by Head-Movement Monitoring Using a Commercial Driver Monitoring Camera

Yoshihara, Yuki | Nagoya University
Tanaka, Takahiro | Nagoya University
Osuga, Shin | Aisin Seiki Co., Ltd
Fujikake, Kazuhiro | Nagoya University
Karatas, Nihan | Nagoya University
Kanamori, Hitoshi | Nagoya University
Keywords: Driver State and Intent Recognition, Advanced Driver Assistance Systems, Human-Machine Interface
Abstract: Older drivers experience a high rate of crashes due to road intersections, unseen objects, and failure to notice traffic signals. These characteristics could be recognized if a system could watch and monitor the behavior of drivers inside their cars. Therefore, in this study, current advances in driver monitoring cameras are used to measure the head movements of older drivers in order to evaluate the visual intent of the driver. Several quantitative metrics that compute the temporal and spatial aspects of head movements allow for assessing the behaviors of older and middle-aged drivers. A driving simulator study using urban road scenarios shows that at high vehicle speed, on average, older drivers move their heads more slowly within a narrower range and glance too quickly to recognize surrounding traffic correctly. Correlation analysis validated the high predictive capability of head-movement measures for the future evaluation of driving risks.

09:30-09:35, Paper ThAM1_T3.2
Analysis of Distraction and Driving Behavior Improvement Using a Driving Support Agent for Elderly and Non-Elderly Drivers on Public Roads

Tanaka, Takahiro | Nagoya University
Fujikake, Kazuhiro | Nagoya University
Yoshihara, Yuki | Nagoya University
Karatas, Nihan | Nagoya University
Shimazaki, Kan | Nagoya University
Aoki, Hirofumi | Nagoya University
Kanamori, Hitoshi | Nagoya University
Keywords: Driver Recognition, Human-Machine Interface, Advanced Driver Assistance Systems
Abstract: Japan has become an aging society, and there are more and more drivers 65 years of age and above. Cars represent an important mode of transportation for the elderly; however, in recent years, the number of traffic accidents caused by elderly drivers has been on the rise, and this has become a social issue. Thus, to ensure driving safety, we study a driver agent system that provides driving and feedback support to elderly drivers to encourage them to improve their driving. In this paper, we present a summary of the proposed agent and report on a set of experiments using our agent with elderly and non-elderly drivers in an actual car on public roads. From the analysis of driving operations and fixation points during driving, the results revealed that the acceptability of the agent was high, the agent in the actual car environment did not distract the driver, and the agent could improve driving behavior.

09:35-09:40, Paper ThAM1_T3.3
Toward Real-Time Estimation of Driver Situation Awareness: An Eye Tracking Approach Based on Moving Objects of Interest

Kim, Hyungil | Virginia Tech Transportation Institute
Martin, Sujitha | Honda Research Institute USA, Inc
Tawari, Ashish | Acubed by Airbus
Misu, Teruhisa | Honda Research Institute
Gabbard, Joseph | Virginia Tech
Keywords: Driver Recognition, Human-Machine Interface, Driver State and Intent Recognition
Abstract: Eye-tracking techniques have the potential for estimating driver awareness of road hazards. However, traditional eye-movement measures based on static areas of interest may not capture the unique characteristics of driver eye-glance behavior and challenge the real-time application of the technology on the road. This article proposes a novel method to operationalize driver eye-movement data analysis based on moving objects of interest. A human-subject experiment conducted in a driving simulator demonstrated the potential of the proposed method. Correlation and regression analyses between indirect (i.e., eye-tracking) and direct measures (i.e., SAGAT) of driver awareness identified some promising variables that feature both spatial and temporal aspects of driver eye-glance behavior relative to objects of interest. Results also suggest that eye-glance behavior might be a promising but insufficient predictor of driver awareness. This work is a preliminary step toward real-time, on-road estimation of driver awareness of road hazards. The proposed method could be further combined with computer-vision techniques such as object recognition to fully automate eye-movement data processing, as well as machine learning approaches to improve the accuracy of driver awareness estimation.

09:40-09:45, Paper ThAM1_T3.4
Deep Classification-Driven Domain Adaptation for Cross-Modal Driver Behavior Recognition

Reiß, Simon | Karlsruhe Institute of Technology
Roitberg, Alina | Karlsruhe Institute of Technology (KIT)
Haurilet, Monica | Karlsruhe Institute of Technology
Stiefelhagen, Rainer | Karlsruhe Institute of Technology
Keywords: Vision Sensing and Perception, Driver Recognition, Deep Learning
Abstract: We encounter a wide range of obstacles when integrating computer vision algorithms into applications inside the vehicle cabin, e.g., variations in illumination, sensor type, and sensor placement. Thus, designing domain-invariant representations is crucial for employing such models in practice. Still, the vast majority of driver activity recognition algorithms are developed under the assumption of a static domain, i.e., an identical distribution of training and test data. In this work, we aim to bring driver monitoring to a setting where domain shifts can occur at any time and explore generative models which learn a shared representation space of the source and target domain. First, we formulate the problem of unsupervised domain adaptation for driver activity recognition, where a model trained on labeled examples from the source domain (i.e., color images) is intended to adjust to a different target domain (i.e., infrared images) where only unlabeled data is available during training. To address this problem, we leverage current progress in image-to-image translation and adopt multiple strategies for learning a joint latent space of the source and target distribution and a mapping function to the domain of interest. As our long-term goal is robust cross-domain classification, we enhance a Variational Auto-Encoder (VAE) for image translation with a classification-driven optimization strategy. Our model for classification-driven domain transfer leads to the best cross-domain recognition results and outperforms a conventional classification approach in color-to-infrared recognition by 13.75%.

09:45-09:50, Paper ThAM1_T3.5
Open Set Driver Activity Recognition

Roitberg, Alina | Karlsruhe Institute of Technology (KIT)
Ma, Chaoxiang | Karlsruhe University of Technology (KIT)
Haurilet, Monica | Karlsruhe Institute of Technology
Stiefelhagen, Rainer | Karlsruhe Institute of Technology
Keywords: Driver State and Intent Recognition, Vision Sensing and Perception, Driver Recognition
Abstract: A common obstacle for applying computer vision models inside the vehicle cabin is the dynamic nature of the surrounding environment, as unforeseen situations may occur at any time. Driver monitoring has been widely researched in the context of closed set recognition, i.e., under the premise that all categories are known a priori. Such restrictions represent a significant bottleneck in real life, as driver observation models are intended to handle the uncertainty of an open world. In this work, we aim to introduce the concept of open sets to the area of driver observation, where methods have previously been evaluated only on a static set of classes. First, we formulate the problem of open set recognition for driver monitoring, where a model is intended to identify behaviors previously unseen by the classifier, and present a novel Open-Drive&Act benchmark. We combine current closed set models with multiple strategies for novelty detection adopted from general action classification in a generic open set driver behavior recognition framework. In addition to conventional approaches, we employ the prominent I3D architecture extended with modules for assessing its uncertainty via Monte-Carlo dropout. Our experiments demonstrate clear benefits of uncertainty-sensitive models, while leveraging the uncertainty of all the output neurons in a voting-like fashion leads to the best recognition results. To create an avenue for future work, we make Open-Drive&Act public at www.github.com/aroitberg/open-set-driver-activity-recognition.
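A minimal sketch of the Monte-Carlo dropout voting idea mentioned above follows; the number of stochastic passes and the unknown threshold are illustrative assumptions, not the paper's settings.

```python
import torch

def mc_dropout_open_set(model, clip, n_samples=20, unknown_thresh=0.5):
    """Flag previously unseen driver behaviors via Monte-Carlo dropout.

    Dropout stays active at inference; the mean softmax over stochastic
    forward passes acts as a per-class vote over all output neurons, and a
    low top-1 mean confidence is read as "unknown".
    """
    model.train()  # keep dropout stochastic (note: also affects batch norm)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(clip), dim=-1)
                             for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)       # vote over all output neurons
    conf, pred = mean_probs.max(dim=-1)
    is_unknown = conf < unknown_thresh
    return pred, is_unknown
```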

09:50-09:55, Paper ThAM1_T3.6
Driver Gaze Estimation in the Real World: Overcoming the Eyeglass Challenge

Rangesh, Akshay | University of California, San Diego
Zhang, Bowen | University of California, San Diego
Trivedi, Mohan M. | University of California at San Diego
Keywords: Driver State and Intent Recognition, Advanced Driver Assistance Systems, Deep Learning
Abstract: A driver's gaze is critical for determining the driver's attention level, state, situational awareness, and readiness to take over control from partially and fully automated vehicles. Tracking both the head and eyes (pupils) can provide a reliable estimation of a driver's gaze using face images under ideal conditions. However, the vehicular environment introduces a variety of challenges that are usually unaccounted for: harsh illumination, nighttime conditions, and reflective/dark eyeglasses. Unfortunately, relying on head pose alone under such conditions can prove to be unreliable owing to significant eye movements. In this study, we offer solutions to address these problems encountered in the real world. To solve issues with lighting, we demonstrate that using an infrared camera with suitable equalization and normalization usually suffices. To handle eyeglasses and their corresponding artifacts, we adopt the idea of image-to-image translation using generative adversarial networks (GANs) to pre-process images prior to gaze estimation. To this end, we propose the Gaze Preserving CycleGAN (GPCycleGAN). This network preserves the driver's gaze while removing potential eyeglasses from infrared face images. Our approach exhibits improved performance and robustness on challenging real-world data spanning 13 subjects and a variety of driving conditions.

09:55-10:00, Paper ThAM1_T3.7
The More, the Merrier? A Study on In-Car IR-Based Head Pose Estimation

Firintepe, Ahmet | BMW Group
Selim, Mohamed | German Research Center for Artificial Intelligence (DFKI)
Pagani, Alain | German Research Center for Artificial Intelligence (DFKI)
Stricker, Didier | DFKI GmbH, University of Kaiserslautern
Keywords: Driver State and Intent Recognition, Deep Learning, Convolutional Neural Networks
Abstract: Deep learning methods have proven useful for head pose estimation, but the effect of their depth, type, and input resolution on infrared (IR) images still needs to be explored. In this paper, we present a study on in-car head pose estimation on the IR images of the AutoPOSE dataset, from which we extract 64×64 and 128×128 pixel cropped head images. We propose the novel networks Head Orientation Network (HON) and ResNetHG and compare them with state-of-the-art methods like the HPN model from DriveAHead on different input resolutions. In addition, we evaluate multiple depths within our HON and ResNetHG networks and their effect on accuracy. Our experiments show that higher resolution images lead to lower estimation errors. Furthermore, we show that deep learning methods with fewer layers perform better on head orientation regression based on IR images. Our HON and ResNetHG18 architectures outperform the state of the art on IR images on four different metrics, where we achieve a reduction of the residual error of up to 74%.

10:00-10:05, Paper ThAM1_T3.8
Lightweight Deep Neural Network-Based Real-Time Pose Estimation on Embedded Systems

Heo, Junho | Sogang University
Kim, Ginam | Sogang University
Park, Jaeseo | Sogang University
Kim, Yeonsu | Hyundai Mobis
Cho, Sung-Sik | Hyundai Mobis
Lee, Chang Won | Hyundai Mobis
Kang, Suk-Ju | Sogang University
Keywords: Driver State and Intent Recognition, Convolutional Neural Networks, Deep Learning
Abstract: This paper proposes a novel real-time pose estimation system on embedded devices for a driver and a front passenger. The main goal of the proposed system is to operate in real time with limited hardware resources while preserving high accuracy. The proposed system is divided into an object detection stage and a pose estimation stage. In the object detection stage, we eliminate redundant and inaccurate bounding boxes by considering the characteristics of the target image domain. In the pose estimation stage, a single-person pose estimation with a lightweight deep learning model is proposed, and knowledge distillation is adopted to maximize performance while maintaining high speed. In the experimental results, the proposed pose estimation achieves up to 92% of the accuracy of previous methods with 9 times less computation. The operation speed is 195 frames per second on an NVIDIA Jetson TX2.
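The knowledge distillation step can be illustrated with the standard Hinton-style objective below; the temperature and mixing weight are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    """Standard knowledge-distillation objective.

    Blends a soft KL term against the larger teacher's temperature-scaled
    outputs with the usual hard-label cross-entropy.
    """
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)  # T^2 rescales gradients
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```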

ThAM2_T1
EGYPTIAN_1
Image, Radar, Lidar Signal Processing. B
Regular Session

10:15-10:20, Paper ThAM2_T1.1
MuRF-Net: Multi-Receptive Field Pillars for 3D Object Detection from Point Cloud

Yang, Zhuo | Xi'an Jiaotong University
Huang, Yuhao | Institute of Artificial Intelligence and Robotics in Xi'an Jiaotong University
Yan, Xinrui | Xi'an Jiaotong University
Chen, Shitao | Xi'an Jiaotong University, Xi'an, China
Dong, Jinpeng | Xi'an Jiaotong University
Sun, Jun | Sunny Central Research Institute
Liu, Ziyi | Xi'an Jiaotong University
Nan, Zhixiong | Xi'an Jiaotong University
Zheng, Nanning | Xi'an Jiaotong University
Keywords: Self-Driving Vehicles, Deep Learning, Lidar Sensing and Perception
Abstract: In this paper, we propose a point cloud based 3D object detection framework, named MuRF-Net, that accounts for both contextual and local information by leveraging multi-receptive field pillars. Common pipelines can be divided into a voxel-based feature encoder and an object detector. During the feature encoding steps, contextual information is neglected, although it is critical for the 3D object detection task; thus the encoded features are not well suited as input to the subsequent object detector. To address this challenge, we propose MuRF-Net with a multi-receptive field voxelization mechanism to capture both contextual and local information. After the voxelization, the voxelized points are processed by a feature encoder, and a channel-wise feature reconfiguration module is proposed to combine the features with different receptive fields using a lateral enhanced fusion network. In addition, to handle the increase in memory and computational cost brought by multi-receptive field voxelization, a dynamic voxel encoder is applied that takes advantage of the sparseness of the point cloud. Experiments on the KITTI benchmark for both 3D object and Bird's Eye View (BEV) detection tasks on the car class are conducted, and MuRF-Net outperforms other voxel-based methods on these detection tasks by a large margin. Moreover, MuRF-Net can achieve nearly real-time speed at 20 Hz.

10:20-10:25, Paper ThAM2_T1.2
Towards Synchronisation of Multiple Independent MEMS-Based Micro-Scanning LiDAR Systems

Stelzer, Philipp | Graz University of Technology
Strasser, Andreas | Graz University of Technology
Steger, Christian | Graz University of Technology
Plank, Hannes | Infineon
Druml, Norbert | Infineon Technologies
Keywords: Automated Vehicles, Lidar Sensing and Perception, Advanced Driver Assistance Systems
Abstract: In intelligent vehicles it is indispensable to have reliable Advanced Driver-Assistance Systems (ADAS) on board. These ADAS require various types of sensors, like Light Detection and Ranging (LiDAR). Nowadays, drivers delegate some responsibilities to their highly automated vehicles; however, this is not yet legally secured. Nevertheless, legislators will deal with automated vehicles in the future, and the fundamentals will be laid to ensure that the transfer of responsibilities is permitted under certain conditions. Car manufacturers, on the other hand, must ensure that components are safe and reliable. For LiDAR, this could be achieved with Micro-Electro-Mechanical System (MEMS) technology. As with human drivers, it is advantageous for intelligent systems if obstacles in the environment are detected promptly. Especially when the obstacles are moving, this helps to initiate appropriate measures, such as braking. Therefore, attempts are made to extend the Field-of-View (FoV) of the various sensors. By synchronising multiple MEMS mirrors, it is possible to extend the FoV of the LiDAR part of an environmental perception system. In this publication, an architecture is proposed for MEMS-based Micro-Scanning LiDAR systems to achieve synchronisation of multiple independently controlled MEMS mirrors. The architecture was implemented on an FPGA prototyping platform to show its feasibility and evaluate its performance.

10:25-10:30, Paper ThAM2_T1.3
SCSSnet: Learning Spatially-Conditioned Scene Segmentation on LiDAR Point Clouds

Rist, Christoph Bernd | Daimler AG
Schmidt, David Josef | Mercedes-Benz AG
Enzweiler, Markus | Daimler AG
Gavrila, Dariu M. | TU Delft
Keywords: Lidar Sensing and Perception, Deep Learning, Vehicle Environment Perception
Abstract: This work proposes a spatially-conditioned neural network to perform semantic segmentation and geometric scene completion in 3D on real-world LiDAR data. Spatially-conditioned scene segmentation (SCSSnet) is a representation suitable to encode properties of large 3D scenes at high resolution. A novel sampling strategy encodes free space information from LiDAR scans explicitly and is both simple and effective. We avoid the need for synthetically generated or volumetric ground truth data and are able to train and evaluate our method on semantically annotated LiDAR scans from the Semantic KITTI dataset. Ultimately, our method is able to predict scene geometry as well as a diverse set of semantic classes over a large spatial extent at arbitrary output resolution instead of a fixed discretization of space. Our experiments confirm that the learned scene representation is versatile and powerful and can be used for multiple downstream tasks. We perform point-wise semantic segmentation, point-of-view depth completion and ground plane segmentation. The semantic segmentation performance of our method surpasses the state of the art by a significant margin of 7% mIoU.

10:30-10:35, Paper ThAM2_T1.4
LIBRE: The Multiple 3D LiDAR Dataset

Carballo, Alexander | Nagoya University
Lambert, Jacob | Nagoya University
Monrroy Cano, Abraham Israel | Nagoya University
Wong, David Robert | Nagoya University
Narksri, Patiphon | Nagoya University
Kitsukawa, Yuki | TierIV Inc
Takeuchi, Eijiro | Nagoya University
Kato, Shinpei | The University of Tokyo
Takeda, Kazuya | Nagoya University
Keywords: Lidar Sensing and Perception, Autonomous / Intelligent Robotic Vehicles
Abstract: In this work, we present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors, covering a range of manufacturers, models, and laser configurations. Data captured independently from each sensor includes three different environments and configurations: static targets, where objects were placed at known distances and measured from a fixed position within a controlled environment; adverse weather, where static obstacles were measured from a moving vehicle, captured in a weather chamber where LiDARs were exposed to different conditions (fog, rain, strong light); and finally, dynamic traffic, where dynamic objects were captured from a vehicle driven on public urban roads, multiple times at different times of the day, and including supporting sensors such as cameras, infrared imaging, and odometry devices. LIBRE will contribute to the research community to (1) provide a means for a fair comparison of currently available LiDARs, and (2) facilitate the improvement of existing self-driving vehicles and robotics-related software, in terms of development and tuning of LiDAR-based perception algorithms.

10:35-10:40, Paper ThAM2_T1.5
EpBRM: Improving a Quality of 3D Object Detection Using End Point Box Regression Module

Shin, Kiwoo | University of California, Berkeley
Tomizuka, Masayoshi | University of California at Berkeley
Keywords: Lidar Sensing and Perception, Image, Radar, Lidar Signal Processing, Vision Sensing and Perception
Abstract: We present an endpoint box regression module (epBRM) designed for predicting precise 3D bounding boxes from raw LiDAR 3D point clouds. The proposed epBRM is built with a sequence of small networks and is computationally lightweight. Our approach improves 3D object detection performance by predicting more precise 3D bounding box coordinates, and requires only 40 minutes of training to improve the detection performance. Moreover, epBRM adds less than 12 ms to network inference time for up to 20 objects. The proposed approach utilizes a spatial transformation mechanism to simplify the box regression task. Adopting the spatial transformation mechanism in epBRM makes it possible to improve the quality of detection with a small network. We conduct an in-depth analysis of the effect of various spatial transformation mechanisms applied to raw LiDAR 3D point clouds. We also evaluate the proposed epBRM by applying it to several state-of-the-art 3D object detection systems, using the KITTI dataset, a standard 3D object detection benchmark for autonomous vehicles. The proposed epBRM enhances the overlap between ground truth bounding boxes and detected bounding boxes, thereby improving 3D object detection. Evaluated on the KITTI test server, our proposed method outperforms current state-of-the-art approaches.

10:40-10:45, Paper ThAM2_T1.6
SUSTech POINTS: A Portable 3D Point Cloud Interactive Annotation Platform System

Li, E | Southern University of Science and Technology
Wang, Shuaijun | 1088 Xueyuan Avenue, Shenzhen 518055, P.R. China
Li, Chengyang | Southern University of Science and Technology
Li, Dachuan | Southern University of Science and Technology
Wu, Xiangbin | Intel
Hao, Qi | Southern University of Science and Technology
Keywords: Vehicle Environment Perception, Lidar Sensing and Perception, Image, Radar, Lidar Signal Processing
Abstract: The major challenges of developing 3D point cloud annotation systems for autonomous driving datasets include convenient user-data interfaces, efficient operations on geometric data units, and scalable annotation tools. This paper presents a Portable pOint-cloud Interactive aNnotation plaTform System (SUSTech POINTS), which contains a set of user-friendly interfaces and efficient annotation tools that help achieve high-quality data annotations with high efficiency. The novelty of this work is threefold: (1) developing a set of visualization modules for fast annotation error localization and convenient annotator-data interactions; (2) developing a set of interactive tools that allow annotators to label 3D point clouds and 2D images at high speed; (3) developing an annotation transfer method to label the same objects in different data frames. The developed POINTS system is tested with public datasets such as KITTI and a private dataset (SUSTech SCAPES). The experimental results show that the developed platform can help improve annotation accuracy and efficiency compared with other open-source annotation platforms.

10:45-10:50, Paper ThAM2_T1.7
Scan-Based Semantic Segmentation of LiDAR Point Clouds: An Experimental Study

Triess, Larissa Tamina | Mercedes-Benz AG
Peter, David | Daimler AG
Rist, Christoph Bernd | Daimler AG
Zöllner, J. Marius | FZI Research Center for Information Technology; KIT Karlsruhe Institute of Technology
Keywords: Lidar Sensing and Perception, Convolutional Neural Networks, Vehicle Environment Perception
Abstract: Autonomous vehicles need to have a semantic understanding of the three-dimensional world around them in order to reason about their environment. State-of-the-art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan. A powerful and efficient way to process LiDAR measurements is to use two-dimensional, image-like projections. In this work, we perform a comprehensive experimental study of image-based semantic segmentation architectures for LiDAR point clouds. We demonstrate various techniques to boost the performance and to improve runtime as well as memory constraints. First, we examine the effect of network size and suggest that much faster inference times can be achieved at a very low cost to accuracy. Next, we introduce an improved point cloud projection technique that does not suffer from systematic occlusions. We use a cyclic padding mechanism that provides context at the horizontal field-of-view boundaries. In a third part, we perform experiments with a soft Dice loss function that directly optimizes for the intersection-over-union metric. Finally, we propose a new kind of convolution layer with a reduced amount of weight-sharing along one of the two spatial dimensions, addressing the large difference in appearance along the vertical axis of a LiDAR scan. We propose a final set of the above methods with which the model achieves an increase of 3.2% in mIoU segmentation performance over the baseline while requiring only 42% of the original inference time.
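The cyclic padding mechanism can be illustrated in a few lines of PyTorch. The helper name `cyclic_pad_conv` and the pad width are hypothetical choices for illustration; the convolution is assumed to be built with padding=0.

```python
import torch.nn.functional as F

def cyclic_pad_conv(x, conv, pad=1):
    """Convolution with cyclic padding along the horizontal (azimuth) axis.

    A 360-degree LiDAR range image wraps around horizontally, so columns
    from the opposite border provide valid context at the field-of-view
    boundary. Vertical borders are zero-padded as usual.
    """
    x = F.pad(x, (pad, pad, 0, 0), mode="circular")  # wrap left/right columns
    x = F.pad(x, (0, 0, pad, pad), mode="constant")  # zero-pad top/bottom rows
    return conv(x)

# usage sketch: conv = torch.nn.Conv2d(c_in, c_out, kernel_size=3, padding=0)
```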

ThAM2_T2
EGYPTIAN_2
Motion Planning. B
Regular Session

10:15-10:20, Paper ThAM2_T2.1
Interaction-Aware Trajectory Prediction Based on a 3D Spatio-Temporal Tensor Representation Using Convolutional–Recurrent Neural Networks

Krueger, Martin | ZF Group
Stockem Novo, Anne | ZF Group
Nattermann, Till | ZF TRW
Bertram, Torsten | Technische Universität Dortmund
Keywords: Situation Analysis and Planning, Convolutional Neural Networks, Recurrent Networks
Abstract: Predicting the future trajectories of all vehicles relevant to the ego vehicle is a crucial, yet unsolved, challenge in mastering automated driving. This paper proposes a combination of two lines of research for predicting the trajectories of a group of vehicles of arbitrary size while considering possible mutual interactions. Treating the prediction of other vehicles as a planning task for those vehicles themselves enables the application of the artificial potential field approach. Modeling the driving situation as a potential field turns the trajectory prediction problem back to its original domain – utility space. Humans generate the trajectories that should be predicted during driving by balancing costs and rewards, which leads to a total utility. The main difficulty inherent to the potential field approach is the hard problem of parameter tuning. Therefore, it is not used directly for prediction. Instead, the potential field representation is used as input for a neural network, which predicts a distribution over trajectories based on distinct maneuvers. This allows a multi-modal prediction for each vehicle and reflects the pattern-recognition character of the task.

10:20-10:25, Paper ThAM2_T2.2
Sensitivity Analysis of a Planning Algorithm Considering Uncertainties

Henze, Franziska | Karlsruhe Institute of Technology
Fassbender, Dennis | AUDI AG
Stiller, Christoph | Karlsruhe Institute of Technology
Keywords: Situation Analysis and Planning, Automated Vehicles, Autonomous / Intelligent Robotic Vehicles
Abstract: Recent trajectory or maneuver planning approaches in automated driving show the tendency to become more complex or even turn into a black box. Thus, an algorithm's decisions become less transparent, especially when uncertain input parameters have a large influence. To identify those input parameters whose uncertainty is more relevant than others', Morris' method of elementary effects is used here. It is a quantitative sensitivity analysis that classifies the inputs into relevant and irrelevant, depending on how sensitively the algorithm reacts to changes in the input. The method is adapted to analyze the behavior of a car-following and lane-changing model during an overtaking maneuver with two vehicles. The results show that Morris' method is capable of determining the important parameters for each situation. It is even possible to identify bounds on the necessary accuracy of each input. With this, we are able to determine input parameter ranges for which the planning algorithm is able to produce reliable output.
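For readers unfamiliar with Morris screening, here is a simplified sketch of elementary effects. It uses plain random one-at-a-time perturbations rather than the full Morris trajectory design the paper applies, and the sample counts are illustrative.

```python
import numpy as np

def morris_elementary_effects(model, bounds, n_base=20, delta=0.1):
    """Screen input relevance with elementary effects (Morris-style).

    model: f(x) -> scalar; bounds: (k, 2) array of input ranges. For each
    random base point, one input at a time is perturbed by `delta` in
    normalized coordinates and the output change is recorded. mu_star ranks
    how relevant each input's uncertainty is; sigma flags nonlinearity or
    interactions.
    """
    k = len(bounds)
    lo, span = bounds[:, 0], bounds[:, 1] - bounds[:, 0]
    effects = np.empty((n_base, k))
    rng = np.random.default_rng(0)
    for t in range(n_base):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # normalized base point
        y0 = model(lo + span * x)
        for i in range(k):
            xi = x.copy()
            xi[i] += delta
            effects[t, i] = (model(lo + span * xi) - y0) / delta
    mu_star = np.abs(effects).mean(axis=0)  # overall influence per input
    sigma = effects.std(axis=0)             # interaction / nonlinearity proxy
    return mu_star, sigma
```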

10:25-10:30, Paper ThAM2_T2.3
Motion Planning for Minimisation of Motion Sickness with Fixed Journey Time in Autonomous Vehicles

Htike, Zaw | Cranfield University
Papaioannou, Georgios | Cranfield University
Siampis, Efstathios | Cranfield University
Velenis, Efstathios | Cranfield University
Longo, Stefano | Cranfield University
Keywords: Autonomous / Intelligent Robotic Vehicles
Abstract: This paper presents an optimal control problem formulation for motion planning that minimises motion sickness in autonomous vehicles. The optimal velocity profile is sought for a predefined road path from a specific starting point to a final one, within given boundaries and constraints, in order to minimise motion sickness without compromising journey time. To represent motion sickness as a cost function in our optimal control problem, the illness rating is used. The influence of road width flexibility on the illness rating is investigated for a set of fixed journey times. According to the results, a road with flexible lateral manoeuvrability is found to yield lower sickness compared to a fixed road path. Moreover, the correlation between sickness and journey time can be represented as a Pareto front.

10:30-10:35, Paper ThAM2_T2.4
Learning a Directional Soft Lane Affordance Model for Road Scenes Using Self-Supervision

Karlsson, Robin | Tier IV
Sjoberg, Erik | Ascent Robotics
Keywords: Situation Analysis and Planning, Deep Learning, Self-Driving Vehicles
Abstract: Humans navigate complex environments in an organized yet flexible manner, adapting to the context and implicit social rules. Understanding these naturally learned patterns of behavior is essential for applications such as autonomous vehicles. However, algorithmically defining these implicit rules of human behavior remains difficult. This work proposes a novel self-supervised method for training a probabilistic network model to estimate the regions humans are most likely to drive in as well as a multimodal representation of the inferred direction of travel at each point. The model is trained on individual human trajectories conditioned on a representation of the driving environment. The model is shown to successfully generalize to new road scenes, demonstrating the model's potential for real-world application as a prior over socially acceptable driving behavior in challenging or ambiguous scenarios which are poorly handled by explicit traffic rules.

10:35-10:40, Paper ThAM2_T2.5
Destination Prediction Based on Partial Trajectory Data

Ebel, Patrick | Technische Universität Berlin
Göl, Ibrahim Emre | Technische Universität Berlin
Lingenfelder, Christoph | MBition GmbH
Vogelsang, Andreas | Technical University of Berlin
Keywords: Assistive Mobility Systems, Situation Analysis and Planning, Recurrent Networks
Abstract: Two-thirds of the people who buy a new car prefer to use a substitute instead of the built-in navigation system. However, for many applications, knowledge about a user's intended destination and route is crucial. For example, suggestions for available parking spots close to the destination can be made, or ride-sharing opportunities along the route can be facilitated. Our approach predicts probable destinations and routes of a vehicle based on the most recent partial trajectory and additional contextual data. The approach follows a three-step procedure: First, a k-d tree-based space discretization is performed, mapping GPS locations to discrete regions. Second, a recurrent neural network is trained to predict the destination based on partial sequences of trajectories. The neural network produces destination scores, signifying the probability of each region being the destination. Finally, the routes to the most probable destinations are calculated. To evaluate the method, we compare multiple neural architectures and present the experimental results of the destination prediction. The experiments are based on two public datasets of non-personalized, timestamped GPS locations of taxi trips. The best performing models were able to predict the destination of a vehicle with a mean error of 1.3 km and 1.43 km, respectively.
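The k-d tree-based discretization step can be sketched as recursive median splits over the bounding box of the GPS data. The depth limit and the wider-axis splitting rule are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def kd_regions(points, depth_limit=6):
    """Discretize (N, 2) GPS points into regions by recursive median splits.

    Returns one axis-aligned box (lo, hi) per leaf. Splitting on the median
    of the wider axis keeps roughly equally many samples per region, so dense
    city centers get finer cells than sparse outskirts.
    """
    def split(pts, lo, hi, depth):
        if depth == depth_limit or len(pts) < 2:
            return [(lo.copy(), hi.copy())]
        axis = int(np.argmax(hi - lo))          # split the wider extent
        med = np.median(pts[:, axis])
        left, right = pts[pts[:, axis] <= med], pts[pts[:, axis] > med]
        lo_r, hi_l = lo.copy(), hi.copy()
        hi_l[axis] = med
        lo_r[axis] = med
        return (split(left, lo, hi_l, depth + 1) +
                split(right, lo_r, hi, depth + 1))

    lo, hi = points.min(axis=0), points.max(axis=0)
    return split(points, lo, hi, 0)
```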
|
|
10:40-10:45, Paper ThAM2_T2.6 | |
A New Approach for Tactical Decision Making in Lane Changing: Sample Efficient Deep Q Learning with a Safety Feedback Reward |
|
Yavas, M. Ugur | Eatron Technologies |
Kumbasar, Tufan | Istanbul Technical University |
Ure, Nazim | Istanbul Technical University |
Keywords: Self-Driving Vehicles, Autonomous / Intelligent Robotic Vehicles, Automated Vehicles
Abstract: Automated lane change is one of the most challenging tasks for highly automated vehicles due to its safety-critical, uncertain and multi-agent nature. This paper presents a novel deployment of the state-of-the-art Q-learning method Rainbow DQN, which uses a new safety-driven rewarding scheme to tackle these issues in a dynamic and uncertain simulation environment. We present various comparative results to show that our novel approach of taking reward feedback from the safety layer dramatically increases both the agent's performance and sample efficiency. Furthermore, through this deployment of Rainbow DQN, more intuition about the agent's actions is extracted by examining the distributions of the generated Q values. The proposed algorithm shows superior performance to the baseline algorithm in challenging scenarios with only 200,000 training steps (i.e., equivalent to 55 hours of driving).
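A minimal sketch of what such a safety-feedback reward could look like; the speed-tracking term and all weights are invented for illustration and are not the paper's actual scheme:

```python
def lane_change_reward(crashed, safety_intervened, speed, desired_speed,
                       w_speed=0.1, penalty_crash=-10.0, penalty_unsafe=-2.0):
    """Hypothetical reward with safety-layer feedback (all weights made up).

    Besides the usual sparse crash penalty, the agent is also penalized
    whenever the rule-based safety layer overrides its chosen action,
    giving a much denser learning signal than collisions alone.
    """
    if crashed:
        return penalty_crash
    reward = -w_speed * abs(desired_speed - speed)   # track desired speed
    if safety_intervened:
        reward += penalty_unsafe                     # feedback from safety layer
    return reward

print(lane_change_reward(False, True, speed=24.0, desired_speed=25.0))
```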
|
|
10:45-10:50, Paper ThAM2_T2.7 | |
Sensitivity Analysis for Vehicle Dynamics Models -- an Approach to Model Quality Assessment for Automated Vehicles |
|
Nolte, Marcus | Technische Universität Braunschweig |
Schubert, Richard | Technische Universität Braunschweig |
Reisch, Cordula | Technische Universität Braunschweig |
Maurer, Markus | TU Braunschweig |
Keywords: Autonomous / Intelligent Robotic Vehicles, Vehicle Control, Self-Driving Vehicles
Abstract: Model-based approaches have become increasingly popular in the domain of automated driving. This includes runtime algorithms, such as Model Predictive Control, as well as formal and simulative approaches for the verification of automated vehicle functions. With this trend, the quality of models becomes a key factor for automated vehicle safety. Established tools from model theory which can be applied to assure the quality of models are uncertainty and sensitivity analysis. In this paper, we conduct sensitivity analyses for single-track and double-track vehicle dynamics models to gain insights into the models' behavior under different operating conditions. We compare the models, point out the most important findings regarding the obtained parameter sensitivities, and provide examples of possible applications of the gained insights.
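For intuition, an elementary one-at-a-time sensitivity sweep around a nominal parameter set of a linear single-track (bicycle) model is shown below; the paper may well use variance-based techniques instead, and the output quantity and nominal values are assumptions:

```python
import numpy as np

def oat_sensitivity(model, nominal, rel_step=0.05):
    """One-at-a-time sensitivities: relative output change per relative
    parameter change around a nominal operating point (a simple sketch;
    variance-based methods, e.g. Sobol indices, are a common alternative)."""
    y0 = model(nominal)
    sens = {}
    for name, value in nominal.items():
        p = dict(nominal)
        p[name] = value * (1.0 + rel_step)
        sens[name] = ((model(p) - y0) / y0) / rel_step
    return sens

def yaw_gain(p):
    """Toy stand-in output: steady-state yaw-rate gain of a bicycle model."""
    v, L = p["v"], p["lf"] + p["lr"]
    k_us = p["m"] / L * (p["lr"] / p["cf"] - p["lf"] / p["cr"])  # understeer gradient
    return v / (L + k_us * v**2)

nominal = dict(m=1500.0, lf=1.2, lr=1.4, cf=8e4, cr=9e4, v=20.0)
print(oat_sensitivity(yaw_gain, nominal))
```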
|
|
ThAM2_T3 |
EGYPTIAN_3 |
Driver State Recognition. B |
Regular Session |
|
10:15-10:20, Paper ThAM2_T3.1 | |
ParkPredict: Motion and Intent Prediction of Vehicles in Parking Lots |
|
Shen, Xu | University of California, Berkeley |
Batkovic, Ivo | Zenuity, Chalmers University of Technology |
Govindarajan, Vijay | UC Berkeley |
Falcone, Paolo | Chalmers University of Technology |
Darrell, Trevor | UC Berkeley |
Borrelli, Francesco | University of California, Berkeley |
Keywords: Driver State and Intent Recognition, Automated Vehicles, Deep Learning
Abstract: We investigate the problem of predicting driver behavior in parking lots, an environment which is less structured than typical road networks and features complex, interactive maneuvers in a compact space. Using the CARLA simulator, we develop a parking lot environment and collect a dataset of human parking maneuvers. We then study the impact of model complexity and feature information by comparing a multi-modal Long Short-Term Memory (LSTM) prediction model and a Convolutional Neural Network LSTM (CNN-LSTM) to a physics-based Extended Kalman Filter (EKF) baseline. Our results show that 1) intent can be estimated well (roughly 85% top-1 accuracy and nearly 100% top-3 accuracy with the LSTM and CNN-LSTM model); 2) knowledge of the human driver’s intended parking spot has a major impact on predicting parking trajectory; and 3) the semantic representation of the environment improves long term predictions.
|
|
10:20-10:25, Paper ThAM2_T3.2 | |
Robust Driver Head Pose Estimation in Naturalistic Conditions from Point-Cloud Data |
|
Hu, Tiancheng | The University of Texas at Dallas |
Jha, Sumit | University of Texas at Dallas |
Busso, Carlos | University of Texas at Dallas |
Keywords: Driver State and Intent Recognition, Advanced Driver Assistance Systems, Deep Learning
Abstract: Head pose estimation has been a key task in computer vision since a broad range of applications often require accurate information about the orientation of the head. Achieving this goal with regular RGB cameras faces challenges for automotive applications due to occlusions, extreme head poses and sudden changes in illumination. Most of these challenges can be attenuated with algorithms relying on depth cameras. This paper proposes a novel point-cloud based deep learning approach to estimate the driver head pose from depth camera data, addressing these challenges. The proposed algorithm is inspired by the PointNet++ framework, where points are sampled and grouped before extracting discriminative features. We demonstrate the effectiveness of our algorithm by evaluating our approach on a naturalistic driving database from 22 drivers, where the benchmark for the orientation of the driver's head is obtained with the Fi-Cap device. The experimental evaluation demonstrates that our proposed approach relying on point-cloud data achieves predictions that are almost always more reliable than state-of-the-art head pose estimation methods based on regular cameras. Furthermore, our approach provides predictions even for extreme rotations, which is not the case for the baseline methods. To the best of our knowledge, this is the first study to propose head pose estimation using deep learning on point cloud data.
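The sampling-and-grouping idea borrowed from PointNet++ can be sketched in plain numpy; this illustrates the generic framework the paper builds on, not the authors' implementation, and all sizes are placeholders:

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Pick n_samples points that cover the cloud well (the PointNet++-style
    sampling step)."""
    n = len(points)
    chosen = [np.random.randint(n)]
    dist = np.full(n, np.inf)
    for _ in range(n_samples - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))
    return np.array(chosen)

def ball_query(points, centers, radius, k):
    """Group up to k neighbors within `radius` of each sampled center."""
    groups = []
    for c in centers:
        idx = np.where(np.linalg.norm(points - points[c], axis=1) < radius)[0]
        groups.append(idx[:k])
    return groups

cloud = np.random.randn(2048, 3) * 0.3          # stand-in for a depth-camera crop
centers = farthest_point_sampling(cloud, 128)
groups = ball_query(cloud, centers, radius=0.1, k=32)
print(len(groups), groups[0].shape)
```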
|
|
10:25-10:30, Paper ThAM2_T3.3 | |
Deep Learning with Attention Mechanism for Predicting Driver Intention at Intersection |
|
Girma, Abenezer | NCAT |
Amsalu, Seifemichael Bekele | North Carolina A&T State University |
Workineh, Abrham | Usaa |
Khan, Mubbashar Altaf | North Carolina Agricultural and Technical State University |
Homaifar, Abdollah | North Carolina a & T State University |
Keywords: Driver State and Intent Recognition, Driver Recognition, Advanced Driver Assistance Systems
Abstract: In this paper, a method for predicting a driver's intention near a road intersection is proposed. Our approach uses a deep bidirectional Long Short-Term Memory (LSTM) model with an attention mechanism based on a hybrid-state system (HSS) framework. As intersections are considered one of the major sources of road accidents, predicting a driver's intention at an intersection is crucial. Our method uses sequence-to-sequence modeling with an attention mechanism to effectively exploit temporal information from time-series vehicular data, including velocity and yaw rate. The model then predicts ahead of time whether the target vehicle/driver will go straight, stop, or turn right or left. The performance of the proposed approach is evaluated on a naturalistic driving dataset, and the results show that our method achieves high accuracy and outperforms other methods. The proposed solution is promising for application in advanced driver assistance systems (ADAS) and as part of the active safety system of autonomous vehicles.
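A compact sketch of a bidirectional LSTM with additive attention over (velocity, yaw-rate) sequences, classifying into the four intentions above; the layer sizes and attention form are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class IntentBiLSTMAttn(nn.Module):
    """Bi-LSTM + attention over time; inputs are (velocity, yaw-rate)
    sequences, outputs logits for straight / stop / left / right."""
    def __init__(self, hidden=32, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(2, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, 2)
        h, _ = self.lstm(x)                      # (batch, time, 2*hidden)
        a = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (a * h).sum(dim=1)             # weighted sum of hidden states
        return self.head(context)

logits = IntentBiLSTMAttn()(torch.randn(8, 50, 2))
print(logits.shape)                              # (8, 4)
```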
|
|
10:30-10:35, Paper ThAM2_T3.4 | |
Back to School: Impact of Training on Driver Behavior and State in Autonomous Vehicles |
|
Sibi, Srinath | Stanford University |
Balters, Stephanie | Norwegian University of Science and Technology |
Fu, Ernestine | Stanford University |
Strack, Gamze | Robert Bosch LLC |
Steinert, Martin | Norwegian University of Science and Technology |
Ju, Wendy | Cornell Tech |
Keywords: Driver State and Intent Recognition, Human-Machine Interface, Automated Vehicles
Abstract: Many producers of automated vehicle systems have begun testing autonomous vehicles on the road. In order to ensure safety and prevent crashes, human drivers are enlisted to monitor autonomous vehicles. However, operators of autonomous systems exhibit negative behavior adaptations in response to prolonged supervision of automation. To prevent the onset of undesirable behaviors in safety drivers, we must investigate driver state and behavior changes during the operation of highly automated vehicles. In the study presented here, we examine the effects of theoretical and practical training on the drivers’ responses to potentially critical situations in a longitudinal driving simulator study. We also present the effects of encountering a failure of the automated vehicle on driver state and behavior. We conducted a two-part panel driving simulator study (N=28), with an interval of 25 days between the training and testing sessions. We found that while participants with training are better prepared for a potential failure of the automation, participants in both conditions show a rise in sleepy or drowsy behavior before a potential failure of automation.
|
|
10:35-10:40, Paper ThAM2_T3.5 | |
Multi-Vehicle Interaction Scenarios Generation with Interpretable Traffic Primitives and Gaussian Process Regression |
|
Zhang, Weiyang | University of Michigan, Ann Arbor |
Wang, Wenshuo | University of Michigan |
Zhu, Jiacheng | Carnegie Mellon University |
Zhao, Ding | Carnegie Mellon University |
Keywords: Automated Vehicles, Driver State and Intent Recognition, Unsupervised Learning
Abstract: Generating multi-vehicle interaction scenarios can benefit the motion planning and decision making of autonomous vehicles when on-road data is insufficient. This paper presents an efficient approach to generate varied multi-vehicle interaction scenarios that can both adapt to different road geometries and inherit the key interaction patterns of real-world driving. Towards this end, the available multi-vehicle interaction scenarios are temporally segmented into several interpretable fundamental building blocks, called traffic primitives, via Bayesian nonparametric learning. Then, the changepoints of traffic primitives are transformed onto the desired road to generate collision-free interaction trajectories through a sampling-based path planning algorithm. Gaussian process regression is finally introduced to control the variance and smoothness of the generated multi-vehicle interaction trajectories. Experiments are carried out with simulations of three multi-vehicle trajectories under different road conditions. The experimental results demonstrate that our proposed method can generate a variety of human-like multi-vehicle interaction trajectories that fit different road conditions while retaining the key interaction patterns of the agents in the provided scenarios, which is important for the development of autonomous vehicles.
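The final smoothing step is standard enough to illustrate directly: GP regression over time on a noisy (x, y) trajectory, which also exposes the variance handle mentioned above. The kernel choice and all numbers are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Smooth a generated trajectory with GP regression over time, which also
# yields a per-point uncertainty estimate.
t = np.linspace(0, 10, 40)[:, None]
xy = np.c_[t.ravel() * 1.2, np.sin(t.ravel())] + np.random.normal(0, 0.1, (40, 2))

gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.01))
gp.fit(t, xy)
t_fine = np.linspace(0, 10, 200)[:, None]
xy_smooth, xy_std = gp.predict(t_fine, return_std=True)
print(xy_smooth.shape, xy_std.shape)
```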
|
|
10:40-10:45, Paper ThAM2_T3.6 | |
Risk-Aware High-Level Decisions for Automated Driving at Occluded Intersections with Reinforcement Learning |
|
Kamran, Danial | Karlsruhe Institute of Technology |
Fernandez Lopez, Carlos | Karlsruhe Institute of Technology (KIT) |
Lauer, Martin | Karlsruher Institut Für Technologie |
Stiller, Christoph | Karlsruhe Institute of Technology |
Keywords: Reinforcement Learning, Automated Vehicles, Driver State and Intent Recognition
Abstract: Reinforcement learning is nowadays a popular framework for solving different decision making problems in automated driving. However, some crucial challenges still need to be addressed in order to provide more reliable policies. In this paper, we propose a generic risk-aware DQN approach for learning high-level actions for driving through unsignalized occluded intersections. The proposed state representation provides lane-based information, which allows it to be used in multi-lane scenarios. Moreover, we propose a risk-based reward function which punishes risky situations instead of only collision failures. Such a rewarding approach helps to incorporate risk prediction into our deep Q network and to learn more reliable policies, which are safer in challenging situations. The efficiency of the proposed approach is compared with a DQN trained with a conventional collision-based rewarding scheme and also with a rule-based intersection navigation policy. Evaluation results show that the proposed approach outperforms both of these methods: it provides safer actions than the collision-aware DQN approach and is less overcautious than the rule-based policy.
|
|
10:45-10:50, Paper ThAM2_T3.7 | |
Recognition of Driver Braking Intensity of EHB System Using a Hybrid Learning Approach |
|
Yang, Haohan | Shanghai Jiao Tong University |
Kaku, Chuyo | Chaoli Electric Co., Ltd |
Yu, Fan | Shanghai Jiao Tong University |
Keywords: Driver Recognition, Electric and Hybrid Technologies, Advanced Driver Assistance Systems
Abstract: Accurate recognition of driver braking intensity is of great importance for advanced control of intelligent braking systems. In this paper, the braking intensity is classified into four clusters based on an unsupervised Gaussian mixture model (GMM). Then, the architecture of an adaptive-network-based fuzzy inference system (ANFIS) is proposed for braking intensity prediction. A batch learning rule that combines recursive least squares and the gradient descent method for training the ANFIS is adopted to improve the generalization capability. The training data are collected from a hybrid vehicle under real driving conditions. In addition, co-simulation in MATLAB/Simulink and Hardware-in-the-Loop (HiL) tests for an Electronic-Hydraulic Brake (EHB) system are carried out. In comparison to other typical learning methods such as the feedback neural network and the recurrent neural network, the simulation and experimental results demonstrate the effectiveness and accuracy of the proposed hybrid learning approach for braking intensity recognition in different braking scenarios.
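The first, unsupervised step maps directly onto scikit-learn; the feature choice (e.g., pedal travel and deceleration) and the synthetic data below are assumptions for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Unsupervised clustering of braking events into four intensity clusters,
# as described in the abstract; features and data are stand-ins.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=[0.1 * k, 1.5 * k], scale=0.2, size=(200, 2))
    for k in range(1, 5)
])                                            # (pedal travel, deceleration)

gmm = GaussianMixture(n_components=4, random_state=0).fit(features)
labels = gmm.predict(features)                # cluster index per braking event
print(np.bincount(labels))
```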
|
|
ThPM1_T1 |
EGYPTIAN_1 |
Vision Sensing and Perception 2. A |
Regular Session |
|
14:35-14:40, Paper ThPM1_T1.1 | |
Visually Assisted Anti-Lock Braking System |
|
Bahník, Michal | CTU in Prague |
Filyo, Dominik | Czech Technical University in Prague, Faculty of Electrical Engi |
Pekárek, David | Czech Technical University in Prague |
Vlasimsky, Martin | Czech Technical University in Prague |
Cech, Jan | Czech Technical University in Prague, Faculty of Electrical Engi |
Hanis, Tomas | Czech Technical University in Prague, Faculty of Electrical Engi |
Hromcik, Martin | Czech Technical University in Prague |
Keywords: Advanced Driver Assistance Systems, Vision Sensing and Perception, Vehicle Environment Perception
Abstract: The concept of a visually-assisted anti-lock braking system (ABS) is presented. The road conditions in front of the vehicle are assessed in real time based on camera data. The surface type classification (tarmac, gravel, ice, etc.) and related road friction properties are provided to the braking control algorithm in order to adjust the vehicle response accordingly. The system was implemented and tested in simulations as well as on an instrumented sub-scale vehicle. Simulations and experimental results quantitatively demonstrate the benefits of the proposed system in critical maneuvers, such as emergency braking and collision avoidance.
|
|
14:40-14:45, Paper ThPM1_T1.2 | |
Sensitive Detection of Target-Vehicle-Motion Using Vision Only |
|
Mehdi, Syed Bilal | General Motors |
Hu, Yasen | General Motors |
Keywords: Vision Sensing and Perception, Automated Vehicles, Advanced Driver Assistance Systems
Abstract: For safe driving in parking lots, differentiating the few moving vehicles from the many parked is essential but also challenging. Vehicles tend to move at slow speeds in parking areas and, therefore, low-cost sensors including cameras and radars cannot detect their motion using conventional object-localization and speed-measurement methods. This paper presents a novel method of detecting motion of a target vehicle by equating it to rotation of the vehicle’s wheels. Using a monocular 2MP camera, the algorithm is demonstrated to detect motion as slow as 0.1 mph at distances of up to 30 meters away while the host vehicle itself moves at speeds up to 15 mph. Test results also promise an easy and cost-effective path to further increasing the range of the algorithm.
|
|
14:45-14:50, Paper ThPM1_T1.3 | |
A Unified Method for Improving Long-Range Accuracy of Stereo and Monocular Depth Estimation Algorithms |
|
Miclea, Vlad | Technical University of Cluj-Napoca |
Nedevschi, Sergiu | Technical University of Cluj-Napoca |
Keywords: Vision Sensing and Perception, Image, Radar, Lidar Signal Processing, Vehicle Environment Perception
Abstract: Environment perception for driving applications requires very accurate sensors, especially when dealing with depth measurements. LiDAR is the most trustworthy sensor in this domain, but it suffers from disadvantages in terms of the number of scene points and their temporal alignment. These issues are especially relevant when dealing with long-range measurements, where each 3D point is crucial. As an alternative, in this work we focus on camera-based depth perception for objects at large distance by using stereo reconstruction and monocular depth estimation. Towards improving the capabilities of camera-based perception, we initially introduce a taxonomy to categorize all types of camera-based depth perception methods with respect to their long-range capabilities. We then present a correction method that works for both stereo and monocular depth perception algorithms that output depth in a discrete setting (most suitable for real-time applications). We show that our method improves the precision of such algorithms for objects at large distances without affecting the near-range accuracy. The method requires only a few additional operations, preserving the real-time capabilities of the underlying algorithms.
|
|
14:50-14:55, Paper ThPM1_T1.4 | |
Robust and Accurate Object Velocity Detection by Stereo Camera for Autonomous Driving |
|
Saito, Toru | Subaru Corporation |
Okubo, Toshimi | SUBARU |
Takahashi, Naoki | Subaru Corporation |
Keywords: Vision Sensing and Perception, Automated Vehicles, Advanced Driver Assistance Systems
Abstract: Although the number of camera-based sensors mounted on vehicles has recently increased dramatically, robust and accurate object velocity detection remains difficult, and it is still common to use radar in a fusion system. We have developed a method to accurately detect the velocity of objects using a camera, based on a large-scale dataset collected over 20 years by the automotive manufacturer SUBARU. The proposed method consists of three parts: a High Dynamic Range (HDR) detection method that fuses multiple stereo disparity images, a fusion method that combines the results of monocular and stereo recognition, and a new velocity calculation method. The evaluation was carried out with measurement devices and a test course that can quantitatively reproduce severe environments, by mounting the developed stereo camera on an actual vehicle.
|
|
14:55-15:00, Paper ThPM1_T1.5 | |
Test Method for Measuring the Simulation-To-Reality Gap of Camera-Based Object Detection Algorithms for Autonomous Driving |
|
Reway, Fabio | Technische Hochschule Ingolstadt |
Mohamad Kadri Hoffmann, Abdul | Federal University of Parana |
Wachtel, Diogo | Technische Hochschule Ingolstadt |
Huber, Werner | BMW Group Research and Technology |
Knoll, Alois | Technische Universität München |
Parente Ribeiro, Eduardo | Federal University of Parana |
Keywords: Vision Sensing and Perception, Vehicle Environment Perception, Advanced Driver Assistance Systems
Abstract: The validation of automated driving requires billions of kilometers of test drives to be performed so that safety-in-use is assured. It is difficult to validate using only data acquired during field tests on public roads due to the lack of controllability, e.g., over environmental conditions. Therefore, the automotive industry relies on test drives executed on a proving ground under controlled conditions or in environment simulation software. The first is realistic but costly in terms of time and effort. The latter provides a high level of reproducibility, but it is still uncertain how valid the delivered test results are. In this paper, a test method for measuring the simulation-to-reality gap is proposed. For this purpose, a test scenario is defined, built on a proving ground, and reproduced in two environment simulation software packages. Four different environment conditions are considered: day, night, fog and rain. The video data of the real and simulated test drives are recorded and fed into a series-produced multi-class object detection algorithm for automated driving. Performance metrics are calculated across the real and virtual domains. Finally, the test results are compared so that the simulation-to-reality gap concerning object detection is measured.
|
|
15:00-15:05, Paper ThPM1_T1.6 | |
Systematization of Corner Cases for Visual Perception in Automated Driving |
|
Breitenstein, Jasmin | Technische Universität Braunschweig |
Termöhlen, Jan-Aike | Technische Universität Braunschweig |
Lipinski, Daniel | Volkswagen Group Research |
Fingscheidt, Tim | Technische Universität Braunschweig |
Keywords: Vision Sensing and Perception, Vehicle Environment Perception, Automated Vehicles
Abstract: One major task in automated driving is the development of robust and safe visual perception modules. It is of utmost importance that visual perception reacts adequately to so-called corner cases, which range from overexposure of the image sensor to unexpected and potentially dangerous traffic situations. Their detection thus has high significance, both as an online system in the intelligent vehicle and in the extraction of relevant training and test data for perception modules. In this paper, we provide a systematization of corner cases for visual perception in automated driving, with the categories being structured by detection complexity. Furthermore, we discuss existing metrics and datasets which can be used for the evaluation of corner case detection methods depending on their suitability to provide beneficial information for the various categories.
|
|
15:05-15:10, Paper ThPM1_T1.7 | |
NLOS Obstacle Position Estimation from Reflected Image |
|
Takatori, Yusuke | Kanagawa Institute of Technology |
Keywords: Vision Sensing and Perception
Abstract: In this paper, a method for estimating the position of obstacles outside the line of sight of the vehicle is proposed. Using a stereo vision camera, it estimates the position of obstacles reflected on the side of another vehicle near the subject vehicle or on the glass surface of a building next to the road. First, to clarify the feasibility of such an application, the frequency with which the virtual image of a vehicle traveling in front of the ego-vehicle's preceding vehicle is reflected on the side of a vehicle traveling in the adjacent lane is measured. Next, since those reflecting surfaces are oriented in various directions, the proposed method takes the surface orientation into account when estimating the position of obstacles. In experiments at 1/5 scale, we could estimate the position with an error within 20 cm for obstacles at longitudinal distances of 1.4 m to 4 m from the camera. This result indicates that the position of an obstacle outside the line of sight at a distance of about 20 m in an actual road situation could be estimated with a precision of about 1 to 2 m.
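The core geometric step, recovering the real obstacle position from its virtual image given an estimated reflecting surface, is plain mirror geometry; the sketch below shows only that step, with invented numbers, not the paper's full pipeline:

```python
import numpy as np

def reflect_across_plane(p_virtual, plane_point, plane_normal):
    """Mirror a virtual-image point across the reflecting surface to get
    the real obstacle position (basic mirror geometry)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(p_virtual - plane_point, n)     # signed distance to the plane
    return p_virtual - 2.0 * d * n

# Virtual image seen "behind" a vehicle side panel whose orientation was
# estimated from stereo; all coordinates are illustrative.
p_real = reflect_across_plane(np.array([3.0, -1.0, 0.5]),
                              plane_point=np.array([1.5, 0.0, 0.0]),
                              plane_normal=np.array([1.0, 0.2, 0.0]))
print(p_real)
```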
|
|
15:10-15:15, Paper ThPM1_T1.8 | |
Caption Generation from Road Images for Traffic Scene Construction |
|
Wu, Chuan | School of Software Engineering, Xi'an Jiaotong University |
Li, Yaochen | Xi'an Jiaotong University |
Li, Ling | Xi'an Jiaotong University |
Wang, Le | Xi'an Jiaotong University |
Liu, Yuehu | Institute of Artificial Intelligence and Robotics, Xi'an Jiaoton |
Keywords: Vision Sensing and Perception, Image, Radar, Lidar Signal Processing, Vehicle Environment Perception
Abstract: In this paper, an image captioning network is proposed for traffic scene modeling, which incorporates element attention into the encoder-decoder mechanism to generate more reasonable scene captions. Firstly, the traffic scene elements are detected and segmented according to their clustered locations. Then, the image captioning network is applied to generate the corresponding caption of each subregion. The static and dynamic traffic elements are appropriately organized to construct a 3D corridor scene model. The semantic relationships between the traffic elements are specified according to the captions. The constructed 3D scene model can be utilized for the offline test of unmanned vehicles. The evaluations and comparisons based on the TSD-max and MSCOCO datasets prove the effectiveness of the proposed framework.
|
|
ThPM1_T2 |
EGYPTIAN_2 |
Automated Vehicles 2. A |
Regular Session |
|
14:35-14:40, Paper ThPM1_T2.1 | |
A Unified Evaluation Framework for Autonomous Driving Vehicles |
|
Roshdi, Myada | Huawei |
Nayeer, Nasif | Huawei Technologies Canada |
Elmahgiubi, Mohammed | Huawei Technologies Canada |
Agrawal, Ankur | Mercedes Benz Research and Development North America Inc |
Garcia, Danson Evan | University of Toronto |
Keywords: Automated Vehicles, Self-Driving Vehicles
Abstract: Automated Driving System (ADS) safety assessment is a crucial step before deployment on public roads. Despite the importance of ADS safety assurance for testing ADS reliability, most of the existing work is strongly attached to a single testing data source (i.e., on-road testing data, simulation, or test track). Each source has different fidelity levels and capabilities; therefore, there is a lack of a solution that allows all data sources to complement each other, enabling agnostic end-to-end evaluation and contributing towards different testing goals. Evaluation of ADSs is considered a mandatory step in the autonomous vehicle development life cycle, demanding a reliable and comprehensive method. Here, we propose a source-agnostic framework which can perform ADS evaluation compatible with different testing sources. Our findings show that this comprehensive solution can save the time, effort and money consumed in ADS evaluation.
|
|
14:40-14:45, Paper ThPM1_T2.2 | |
Mixed Test Environment-Based Vehicle-In-The-Loop Validation - a New Testing Approach for Autonomous Vehicles |
|
Chen, Yu | Xi'an Jiaotong University |
Chen, Shitao | Xi'an Jiaotong University, Xi'an, China |
Xiao, Tong | Xi'an Jiaotong University |
Zhang, Songyi | Xi'an Jiaotong University |
Hou, Qian | Xi'an Jiaotong University, Xi'an, China |
Zheng, Nanning | Xi'an Jiaotong University |
Keywords: Self-Driving Vehicles, Intelligent Vehicle Software Infrastructure, Collision Avoidance
Abstract: The current testing of autonomous driving technology requires extensive experimental verification, whether in simulation or on real roads, but how to test autonomous vehicles thoroughly in a safe and comprehensive manner remains a major challenge. The purpose of this paper is to propose a novel mixed test environment-based validation method with vehicle-in-the-loop (ViL) for safer and more effective autonomous driving testing: (1) our method supports more realistic driving safety tests in mixed scenarios which integrate synthetic and real-world scenarios; synthetic scenarios offer complex traffic simulation with diverse road conditions, while the real-world scenarios introduce the real autonomous driving vehicle, the real sensor suite, and the test field into the test loop, bridging the gap between Hardware-in-the-Loop (HiL) testing and real road tests further than conventional ViL; (2) virtual perceptual results are simulated directly and delivered to the real vehicle in a Unified Fusion Data Format (UFDF) without rendering virtual detection data, for reduced resource consumption; (3) diverse test scenarios are configurable and reproducible with an OSM-based High Definition (HD) map, enabling the simulation to be decoupled from a specific test field or traffic facilities. A series of experiments demonstrates the application of our method, and the approach proves to be a promising driving safety testing technique prior to actual road testing.
|
|
14:45-14:50, Paper ThPM1_T2.3 | |
A Curvilinear Decision Method for Two-Lane Roundabout Crossing and Its Validation under Realistic Traffic Flow |
|
Masi, Stefano | Université De Technologie De Compiègne |
Xu, Philippe | University of Technology of Compiegne |
Bonnifait, Philippe | University of Technology of Compiegne |
Keywords: Self-Driving Vehicles, Collision Avoidance, Situation Analysis and Planning
Abstract: Autonomous vehicle navigation in complex scenarios is still an open issue. One of the major challenges is the safe navigation of autonomous vehicles among regular vehicles: the behavior and intentions of human-driven vehicles are hard to predict and understand. In this mixed traffic environment, decision making for an autonomous vehicle is considered a hard task. In this work, we propose a strategy that enables an autonomous vehicle to cross a roundabout safely. Our approach relies on High-Definition maps with a lane-level description, which allows predicting the future situation through the concept of virtual vehicles. In particular, this method safely handles collision avoidance and guarantees that no priority constraint is violated during the insertion maneuver, without being overly cautious. The performance is evaluated with the SUMO simulator framework. A highly interactive vehicle flow has been generated according to real data of roundabout scenarios from the INTERACTION dataset. We also propose strategies to extend our algorithm to multi-lane roundabouts and report how these extensions behave in terms of safety and traffic flow.
|
|
14:50-14:55, Paper ThPM1_T2.4 | |
Towards Efficient Hazard Identification in the Concept Phase of Driverless Vehicle Development |
|
Graubohm, Robert | Technische Universität Braunschweig |
Stolte, Torben | Technische Universität Braunschweig |
Bagschik, Gerrit | Technische Universität Braunschweig |
Maurer, Markus | TU Braunschweig |
Keywords: Self-Driving Vehicles, Active and Passive Vehicle Safety
Abstract: The complex functional structure of driverless vehicles induces a multitude of potential malfunctions. Established approaches for a systematic hazard identification generate individual potentially hazardous scenarios for each identified malfunction. This leads to inefficiencies in a purely expert-based hazard analysis process, as each of the many scenarios has to be examined individually. In this contribution, we propose an adaptation of the strategy for hazard identification for the development of automated vehicles. Instead of focusing on malfunctions, we base our process on deviations from desired vehicle behavior in selected operational scenarios analyzed in the concept phase. By evaluating externally observable deviations from a desired behavior, we encapsulate individual malfunctions and reduce the number of generated potentially hazardous scenarios. After introducing our hazard identification strategy, we illustrate its application on one of the operational scenarios used in the research project UNICARagil.
|
|
14:55-15:00, Paper ThPM1_T2.5 | |
Re-Using Concrete Test Scenarios Generally Is a Bad Idea |
|
Hauer, Florian | Technical University of Munich |
Pretschner, Alexander | Technical University of Munich |
Holzmüller, Bernd | ITK Engineering GmbH |
Keywords: Automated Vehicles, Self-Driving Vehicles, Advanced Driver Assistance Systems
Abstract: Many approaches for testing automated and autonomous driving systems in dynamic traffic scenarios rely on the reuse of test cases, e.g., recording test scenarios during real test drives or creating “test catalogs”. Both are widely used in industry and in the literature. By counterexample, we show that the quality of test cases is system-dependent and that faulty system behavior may stay unrevealed during testing if test cases are naively re-used. We argue that, in general, system-specific “good” test cases need to be generated. Thus, recorded scenarios in general cannot simply be used for testing, and regression testing strategies need to be rethought for automated and autonomous driving systems. The counterexample involves a system built according to state-of-the-art literature, which is tested in a traffic scenario using a high-fidelity physical simulation tool. Test scenarios are generated using standard techniques from the literature and state-of-the-art methodologies. By comparing the quality of test cases, we argue against a naive re-use of test cases.
|
|
15:00-15:05, Paper ThPM1_T2.6 | |
Integrating Deep Reinforcement Learning with Model-Based Path Planners for Automated Driving |
|
Yurtsever, Ekim | The Ohio State University |
Capito, Linda | Ohio State University |
Redmill, Keith | Ohio State University |
Ozguner, Umit | Ohio State University |
Keywords: Reinforcement Learning, Automated Vehicles, Vehicle Control
Abstract: Automated driving in urban settings is challenging. Human participant behavior is difficult to model, and conventional, rule-based Automated Driving Systems (ADSs) tend to fail when they face unmodeled dynamics. On the other hand, the more recent, end-to-end Deep Reinforcement Learning (DRL) based model-free ADSs have shown promising results. However, pure learning-based approaches lack the hard-coded safety measures of model-based controllers. Here we propose a hybrid approach for integrating a path planning pipeline into a vision-based DRL framework to alleviate the shortcomings of both worlds. In summary, the DRL agent is trained to follow the path planner's waypoints as closely as possible. The agent learns this policy by interacting with the environment. The reward function contains two major terms: the penalty of straying away from the path planner and the penalty of having a collision. The latter has precedence in the form of having a significantly greater numerical value. Experimental results show that the proposed method can plan its path and navigate between randomly chosen origin-destination points in CARLA, a dynamic urban simulation environment. Our code is open-source and available online.
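The two-term reward described above can be sketched in a few lines; the weights below are made-up placeholders, not the paper's values:

```python
import numpy as np

def drl_reward(ego_xy, waypoint_xy, collided,
               w_path=0.05, penalty_collision=-100.0):
    """Two-term reward as described in the abstract: a penalty for straying
    from the planner's waypoints plus a much larger collision penalty."""
    if collided:
        return penalty_collision                  # dominates everything else
    deviation = np.linalg.norm(np.asarray(ego_xy) - np.asarray(waypoint_xy))
    return -w_path * deviation                    # stay close to the planned path

print(drl_reward((2.0, 1.0), (2.5, 1.2), collided=False))
```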
|
|
15:05-15:10, Paper ThPM1_T2.7 | |
Identifying the Operational Design Domain for an Automated Driving System through Assessed Risk |
|
Lee, Chung Won | Huawei Technologies Canada |
Nayeer, Nasif | Huawei Technologies Canada |
Garcia, Danson Evan | University of Toronto |
Agrawal, Ankur | Mercedes Benz Research and Development North America Inc |
Liu, Bingbing | Huawei |
Keywords: Self-Driving Vehicles, Advanced Driver Assistance Systems, Intelligent Vehicle Software Infrastructure
Abstract: Assuring the safety of autonomous vehicles is one of the most significant challenges in the automotive industry. Tech companies and automotive manufacturers use the idea of the Operational Design Domain (ODD) to indicate where their Automated Driving Systems (ADS) can operate safely. By the definition in SAE J3016, an ODD defines where the ADS is designed to operate. However, it is loosely defined in no particular format, and it is unclear how exactly to formulate the ODD, which leaves it up to the ADS developer to determine. This paper proposes a methodology to identify an ODD for an ADS with statistical data and risk tolerance, where the identified ODD consists of a geographical map in which the risk of ADS operation is lower than a pre-determined risk threshold for a given set of environmental conditions. Two different ADSs are run through this method as an example to showcase the methodology and link the identified ODD directly to the calculated performance of the ADSs. This systematically generated ODD can mitigate potential safety issues by informing safety drivers of the limitations of the ADS through geographic and environmental boundaries.
|
|
15:10-15:15, Paper ThPM1_T2.8 | |
Behavioral Competence Tests for Highly Automated Vehicles |
|
Wang, Xinpeng | University of Michigan |
Dong, Yiqun | Fudan University |
Xu, Shaobing | University of Michigan, Ann Arbor |
Peng, Huei | University of Michigan |
Wang, Fucong | University of Michigan |
Liu, Zhenghao | University of Michigan |
Keywords: Automated Vehicles, Collision Avoidance, Vehicle Control
Abstract: It is necessary to evaluate the safety of highly automated vehicles (HAVs) rigorously before their deployment on public roads. This paper describes a procedure to conduct behavioral competence tests for HAVs using the unprotected left-turn as the illustrative scenario. We first describe a model-based method for test case generation. Subsequently, we propose two methods for synchronizing the motions of the primary other vehicle (POV) and the vehicle under test (VUT), one using a fixed speed profile, the other using model predictive control (MPC). Finally, we implement the POV algorithm for the left-turn scenario on an experimental vehicle, and conduct field tests using both a virtual and a real VUT in the Mcity test facility.
|
|
ThPM1_T3 |
EGYPTIAN_3 |
Smart Infrastructure and Traffic Management. A |
Regular Session |
|
14:35-14:40, Paper ThPM1_T3.1 | |
CSG: Critical Scenario Generation from Real Traffic Accidents |
|
Zhang, Xinxin | Intel |
Li, Fei | Intel |
Wu, Xiangbin | Intel |
Keywords: Automated Vehicles, Intelligent Vehicle Software Infrastructure
Abstract: Autonomous driving (AD) is getting closer to our daily life, but the severe traffic accidents involving autonomous vehicles (AVs) in the past several years warn us that the safety of AVs is still a big challenge for the AD industry. Before volume production, the automotive industry and regulators must ensure that an AV can deal with dangerous scenarios. Although road testing is the most common method to test the performance and safety of an AV, it has some manifest disadvantages, e.g., high risk, poor repeatability, low efficiency, and a lack of useful critical scenarios. Critical-scenario-based simulation can effectively address these problems and has become an important complement to road testing. In this paper, we present a novel approach to extract critical scenarios from real traffic accident videos and regenerate them in a simulator. We also introduce our integrated toolkit for scenario extraction and scenario testing. With the toolkit, we can quickly build a critical scenario library and use it as a benchmark for AV safety assessment, among other purposes. On top of this, we further introduce our safety assessment criteria and scoring method.
|
|
14:40-14:45, Paper ThPM1_T3.2 | |
Robust Tracking of Reference Trajectories for Autonomous Driving in Intelligent Roadside Infrastructure |
|
Fleck, Tobias | FZI Research Center for Information Technology |
Ochs, Sven | FZI Research Center for Information Technology |
Zofka, Marc René | FZI Research Center for Information Technology |
Zöllner, J. Marius | FZI Research Center for Information Technology; KIT Karlsruhe In |
Keywords: Smart Infrastructure, Vulnerable Road-User Safety
Abstract: High quality reference data is crucial for the development of autonomous driving applications. Unfortunately, datasets including fixed, reproducible static environments that contain manifold interactions between traffic participants are not widely available. In this paper we propose a camera-based trajectory estimation framework that enables the generation of reference trajectory data in stationary roadside infrastructure. We develop a Simple Online Realtime Tracking (SORT) algorithm that tracks objects in image space utilizing the tracking-by-detection paradigm with a deep neural network detector. By projecting tracks onto a ground model, we are able to gather cartesian and georeferenced trajectories for manually driven and autonomous vehicles in the field. We evaluate the framework in stationary roadside infrastructure in the Test Area Autonomous Driving Baden-Württemberg, Germany. A vehicle equipped with an inertial measurement unit and differential GPS is used to generate ground truth positions that are compared with our framework.
|
|
14:45-14:50, Paper ThPM1_T3.3 | |
Development of a Stochastic Traffic Environment with Generative Time-Series Models for Improving Generalization Capabilities of Autonomous Driving Agents |
|
Ozturk, Anil | Istanbul Technical University |
Gunel, Mustafa Burak | Istanbul Technical University |
Dal, Melih | Bogazici University |
Yavas, M. Ugur | Eatron Technologies |
Ure, Nazim | Istanbul Technical University |
Keywords: Reinforcement Learning, Self-Driving Vehicles, Collision Avoidance
Abstract: Automated lane changing is a critical feature for advanced autonomous driving systems. In recent years, reinforcement learning (RL) algorithms trained on traffic simulators have yielded successful results in computing lane changing policies that strike a balance between safety, agility, and compensation for traffic uncertainty. However, many RL algorithms exhibit simulator bias, and policies trained on simple simulators do not generalize well to realistic traffic scenarios. In this work, we develop a data-driven traffic simulator by training a generative adversarial network (GAN) on real-life trajectory data. The simulator generates randomized trajectories that resemble real-life traffic interactions between vehicles, which enables training the RL agent on much richer and more realistic scenarios. We demonstrate through simulations that RL agents trained on the GAN-based traffic simulator have stronger generalization capabilities than RL agents trained on simple rule-driven simulators.
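A minimal trajectory-GAN skeleton of the kind described; the architecture sizes are illustrative and only the generator update is shown, so this is not the authors' model:

```python
import torch
import torch.nn as nn

T = 30                                           # trajectory length

# Generator: noise -> flattened (x, y) trajectory; Discriminator: real/fake.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, T * 2))
D = nn.Sequential(nn.Linear(T * 2, 64), nn.ReLU(), nn.Linear(64, 1))

def g_step(opt_g, batch=32):
    """One generator update: make generated trajectories look real to D."""
    z = torch.randn(batch, 16)
    fake = G(z)
    loss = nn.functional.binary_cross_entropy_with_logits(
        D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
print(g_step(opt_g))
traj = G(torch.randn(1, 16)).view(T, 2)          # one synthetic trajectory
```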
|
|
14:50-14:55, Paper ThPM1_T3.4 | |
Playground for Early Automotive Service Architecture Design and Evaluation |
|
Cebotari, Vadim | Technical University Munich |
Kugele, Stefan | Technical University of Munich |
Keywords: Advanced Driver Assistance Systems, Intelligent Vehicle Software Infrastructure
Abstract: Context: We consider the structure of service-oriented architectures in vehicular software. Aim: We aim at evaluating the structure and grouping of service architectures. Method: We propose and discuss architectural metrics tailored towards automotive service-oriented architectures. We apply the metrics on an adaptive cruise control case example extracted from the AUTOSAR standard. Results: The application of the proposed metrics to two different service groupings for ACC points clearly to the same service grouping that we consider, after a thorough analysis, to be better with respect to coupling and cohesion attributes. Conclusion: We demonstrate the usefulness of proposed service group metrics in early design phases of the development process and validate the metrics on the case example of an adaptive cruise control function.
|
|
14:55-15:00, Paper ThPM1_T3.5 | |
Cooperative Wireless Congestion Control for Multi-Service V2X Communication |
|
Khan, Mohammad Irfan | Eurecom |
Sepulcre, Miguel | Miguel Hernández University of Elche |
Haerri, Jerome | EURECOM |
Keywords: V2X Communication, Cooperative Systems (V2X)
Abstract: Wireless congestion control and resource allocation for 802.11p-based V2X safety communication have been widely investigated for a single Cooperative Awareness service, considering a homogeneous resource requirement per vehicle. Future cooperative connected vehicles will have heterogeneous capabilities and communication needs, which existing congestion control mechanisms have not fully addressed. In this paper, we analyze issues with the channel congestion control protocol standardized in Europe by ETSI regarding distributed resource allocation for heterogeneous numbers of services and message types per vehicle. We present a cooperative congestion control mechanism to orchestrate channel resources among a mixed distribution of vehicles with diverse resource requirements under channel congestion. A simulation-based evaluation using standardized safety messages shows the application performance improvement rendered by our proposed mechanism compared to the standardized protocol.
|
|
15:00-15:05, Paper ThPM1_T3.6 | |
A Stochastic Particle Filter Energy Optimization Approach for Power-Split Trajectory Planning for Hybrid Electric Autonomous Vehicles |
|
Aubeck, Franz | RWTH Aachen University |
Mertes, Simon | Institute for Combustion Engines, RWTH Aachen |
Lenz, Martin | RWTH Aachen University |
Keywords: Automated Vehicles, Electric and Hybrid Technologies, Eco-driving and Energy-efficient Vehicles
Abstract: In recent years, all major car manufacturers have started to introduce predictive functionalities based on an electronic horizon for the autonomous on-highway operation of their vehicles. Using Advanced Driver Assistance Systems (ADAS) for anticipatory driving is a fundamental approach to significantly reduce the fuel consumption and pollutant emissions of internal combustion engines. Today’s Adaptive Cruise Control (ACC) systems try to maintain a constant speed selected by the driver without regard to the energy consumption of the vehicle. There is, however, a degree of freedom to apply cruise speed limits without direct driver involvement in order to save propulsion energy in an Autonomous Vehicle (AV). This work presents a novel velocity and energy optimization method for autonomous hybrid electric vehicles (HEVs) using a stochastic optimization technique: Particle Filters (PFs) are applied within a routine of Stochastic Dynamic Programming (SDP) to solve the power split efficiently. The sweet-point operation of the powertrain is calculated via probability hypothesis densities along the distance-based prediction horizon. The optimization approach shows a cost trade-off between horizon resolution, length, iterative combinatorial optimality, and computational efficiency. Finally, the approach is applied to a PHEV vehicle model on a real-time ECU in the Worldwide harmonized Light Duty Test Cycle (WLTC) and in a hilly RDE Aachen-Eifel driving cycle.
|
|
15:05-15:10, Paper ThPM1_T3.7 | |
Formal Methods Approach to the Charging Facility Location Problem for Battery Electric Vehicles |
|
Eagon, Matthew | University of Minnesota, Twin Cities |
Northrop, Will | University of Minnesota |
Keywords: Situation Analysis and Planning, Telematics, Smart Infrastructure
Abstract: Battery electric vehicles (BEVs) are becoming more prevalent as improvements in battery technology and energy management continue to be made. As the number of electric vehicles grows, the demand for fast-charging stations is expected to increase dramatically. Thus, building new charging station infrastructure efficiently will be key to reducing upfront costs while meeting consumer demands. In this work, we propose a method for choosing a set of charging station locations that are optimized based on a set of given common vehicle demand points. As part of this solution, we also offer a novel abstraction of the road network on which energy-efficient paths that account for charge-time delays may be found. The current algorithm chooses the optimal charging locations for a single agent which has a route objective specified using temporal logic. To demonstrate the proposed method, the running example shows how a charging station could be chosen for an electric delivery vehicle. Simulations were run on sample road networks with a given set of demand points to service and potential charging station locations to compare. The method is shown to successfully rank potential charging stations in terms of their expected average charging time cost.
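One way to picture the proposed road-network abstraction: a shortest-path search whose edge costs are travel times plus a charge-time delay at nodes hosting chargers. The graph, delays, and node names below are invented for illustration:

```python
import heapq

def cheapest_route(graph, charge_delay, start, goal):
    """Dijkstra over a road graph whose edge weights are travel times, with
    an extra delay added when a node hosts a charger (a toy abstraction of
    the charge-time-aware routing described in the abstract)."""
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node == goal:
            return cost
        for nxt, travel in graph.get(node, []):
            c = cost + travel + charge_delay.get(nxt, 0.0)
            if c < best.get(nxt, float("inf")):
                best[nxt] = c
                heapq.heappush(frontier, (c, nxt))
    return float("inf")

road = {"A": [("B", 10), ("C", 4)], "C": [("B", 3)], "B": [("D", 5)]}
print(cheapest_route(road, charge_delay={"C": 8.0}, start="A", goal="D"))
```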
|
|
15:10-15:15, Paper ThPM1_T3.8 | |
Analysis of Recreational and Last Mile E-Scooter Utilization in Different Land Use Regions |
|
Liu, Mingmin | Purdue University |
Mathew, Jijo | Purdue University |
Horton, Deborah | Purdue University |
Bullock, Darcy | Purdue University |
Keywords: Societal Impacts, Electric and Hybrid Technologies, Legal Impacts
Abstract: With the rise in shared e-scooter deployment, there is growing interest in evaluating the proportion of recreational vs. non-recreational use so that public agencies can develop appropriate policies and infrastructure support. In the Indianapolis metropolitan area from September 2018 to May 2019, approximately 728,000 e-scooter trips covered 1,250,457 km (777,000 miles) during 162,000 hours of use. This paper focused on three regions with distinct land-use patterns that had 509,241 trips covering 879,209 km (546,317 miles) during 118,440 hours of use. A technique was proposed that compares the ratio of travelled distance to the linear distance between trip start and end points. Trips with a ratio larger than 2.0 were assumed to be recreational. As agencies begin to adopt policies and infrastructure support, we believe the methodology described in this paper will be an important consideration for better understanding the proportion of non-recreational “last-mile” users versus recreational users. The analysis will be helpful for agencies and decision makers in shaping better policies for the safe integration of e-scooters into urban ecosystems.
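The proposed circuity-ratio test is simple to state in code; the sketch below uses the haversine distance for the linear start-end distance, which is an assumption about the paper's exact distance computation:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def is_recreational(travelled_km, start, end, threshold=2.0):
    """Flag a trip as recreational when travelled distance exceeds
    `threshold` times the straight-line start-end distance."""
    linear = haversine_km(*start, *end)
    return linear == 0 or travelled_km / linear > threshold

print(is_recreational(3.2, (39.77, -86.16), (39.78, -86.15)))
```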
|
|
ThPM2_T1 |
EGYPTIAN_1 |
Vision Sensing and Perception 2. B |
Regular Session |
|
15:25-15:30, Paper ThPM2_T1.1 | |
In Defense of Multi-Source Omni-Supervised Efficient ConvNet for Robust Semantic Segmentation in Heterogeneous Unseen Domains |
|
Yang, Kailun | Karlsruhe Institute of Technology |
Hu, Xinxin | Zhejiang University |
Wang, Kaiwei | Zhejiang Univeristy |
Stiefelhagen, Rainer | Karlsruhe Institute of Technology |
Keywords: Vehicle Environment Perception, Deep Learning, Vision Sensing and Perception
Abstract: Semantic segmentation offers a unified way of surrounding perception, where most driving scene detection tasks can be covered by running a single efficient ConvNet through a forward pass. However, current frameworks posit the closed-world paradigm expressed as a single source of distribution over a predetermined set of visual classes, forgetting that a deep model must be deployed in the wild facing unseen domains and unforeseen hazards. In spite of being accurate in its comfort zone, the segmentation model may not generalize well to a new domain. In addition, a model trained with a single dataset is heavily limited in terms of recognizable classes. In this paper, we propose an omni-supervised learning framework for semantic segmentation which is able to leverage heterogeneous data sources. Our omni-supervised training framework incorporates all available labeled and unlabeled data, meanwhile bridging multiple training sets to be capable of recognizing the additional classes needed for the autonomous navigation application at hand in the new domain. A comprehensive variety of experiments shows that with the proposed multi-source omni-supervised learning solution, an efficient ConvNet like our ERF-PSPNet attains significant robustness gains in open domains that are of critical relevance to real deployment of vision algorithms. Our approach surpasses the state of the art on the highly unconstrained PASS and IDD20K datasets.
|
|
15:30-15:35, Paper ThPM2_T1.2 | |
Lane Detection in Low-Light Conditions Using an Efficient Data Enhancement: Light Conditions Style Transfer |
|
Liu, Tong | Beijing Institute of Technology |
Chen, Zhaowei | Beijing Institute of Technology |
Yang, Yi | Beijing Institute of Technology |
Wu, ZeHao | Beijing Institute of Technology |
Li, Haowei | Beijing Institute of Technology |
Keywords: Vision Sensing and Perception, Automated Vehicles, Deep Learning
Abstract: Nowadays, deep learning techniques are widely used for lane detection, but application in low-light conditions remains a challenge to this day. Although multi-task learning and contextual-information-based methods have been proposed to solve the problem, they either require additional manual annotations or introduce extra inference overhead. In this paper, we propose a style-transfer-based data enhancement method, which uses Generative Adversarial Networks (GANs) to generate images in low-light conditions, increasing the environmental adaptability of the lane detector. Our solution consists of three parts: the proposed SIM-CycleGAN, light conditions style transfer, and the lane detection network. It requires neither additional manual annotations nor extra inference overhead. We validated our method on the lane detection benchmark CULane using ERFNet. Empirically, the lane detection model trained using our method demonstrates adaptability in low-light conditions and robustness in complex scenarios. Our code for this paper will be publicly available.
|
|
15:35-15:40, Paper ThPM2_T1.3 | |
Real-Time Panoptic Segmentation with Prototype Masks for Automated Driving |
|
Petrovai, Andra | Technical University of Cluj-Napoca |
Nedevschi, Sergiu | Technical University of Cluj-Napoca |
Keywords: Vision Sensing and Perception, Vehicle Environment Perception, Automated Vehicles
Abstract: In this paper we propose a fast fully convolutional neural network for panoptic segmentation that can provide an accurate semantic and instance-level representation of the environment in the 2D space. We tackle panoptic segmentation as a dense classification problem and generate masks for stuff classes as well as for each instance of the thing classes. Our network employs a shared backbone and Feature Pyramid Network for multi-scale feature extraction, which we extend with dual decoders that learn background- and foreground-specific masks. Guided by object proposals, the panoptic head assembles location-sensitive prototype masks using a learned weighting scheme. Our solution runs in real time, in 82 ms on high resolution images, making it suitable for robotic applications and automated driving. Extensive experiments on the Cityscapes dataset demonstrate that our panoptic segmentation network is robust and accurate, with 57.3% PQ and 76.9% mIoU.
|
|
15:40-15:45, Paper ThPM2_T1.4 | |
Towards Anomaly Detection in Dashcam Videos |
|
Haresh, Sanjay | Retrocausal, Inc |
Kumar, Sateesh | Retrocausal, Inc |
Zia, Zeeshan | Retrocausal, Inc |
Tran, Quoc-Huy | Retrocausal, Inc |
Keywords: Deep Learning, Vision Sensing and Perception, Advanced Driver Assistance Systems
Abstract: Inexpensive sensing and computation, as well as insurance innovations, have made smart dashboard cameras ubiquitous. Increasingly, simple model-driven computer vision algorithms focused on lane departures or safe following distances are finding their way into these devices. Unfortunately, the long-tailed distribution of road hazards means that these hand-crafted pipelines are inadequate for driver safety systems. We propose to apply data-driven anomaly detection ideas from deep learning to dashcam videos, which hold the promise of bridging this gap. However, there exists almost no literature applying anomaly understanding to moving cameras, and correspondingly there is also a lack of relevant datasets. To counter this issue, we present a large and diverse dataset of truck dashcam videos, namely RetroTrucks, that includes normal and anomalous driving scenes. We apply: (i) one-class classification loss, and (ii) reconstruction-based loss for anomaly detection on RetroTrucks as well as on existing static-camera datasets. We introduce formulations for modeling object interactions in this context as priors. Our experiments indicate that our dataset is indeed more challenging than standard anomaly datasets, and previous anomaly detection methods do not perform well here out-of-the-box. In addition, we share insights into the behavior of these two important families of anomaly detection approaches on dashcam data.
|
|
15:45-15:50, Paper ThPM2_T1.5 | |
Advances in Centerline Estimation for Autonomous Lateral Control |
|
Cudrano, Paolo | Politecnico Di Milano |
Mentasti, Simone | Politecnico Di Milano |
Matteucci, Matteo | Politecnico Di Milano - DEIB |
Bersani, Mattia | Politecnico Di Milano |
Arrigoni, Stefano | Politecnico Di Milano |
Cheli, Federico | Politecnico Di Milano |
Keywords: Vehicle Control, Vision Sensing and Perception, Convolutional Neural Networks
Abstract: The ability of autonomous vehicles to maintain an accurate trajectory within their road lane is crucial for safe operation. This requires detecting the road lines and estimating the car's relative pose within its lane. Lateral lines are usually retrieved from camera images. Still, most of the works on line detection are limited to image mask retrieval and do not provide a usable representation in world coordinates. What we propose in this paper is a complete perception pipeline based on monocular vision and able to retrieve all the information required by a vehicle lateral control system: the road line equations, centerline, vehicle heading and lateral displacement. We evaluate our system by acquiring data with accurate geometric ground truth. To act as a benchmark for further research, we make this new dataset publicly available at http://airlab.deib.polimi.it/datasets/ .
|
|
15:50-15:55, Paper ThPM2_T1.6 | |
Automated Focal Loss for Image Based Object Detection |
|
Weber, Michael | FZI Research Center for Information Technology |
Fürst, Michael | DFKI, German Research Center for Artificial Intelligence |
Zöllner, J. Marius | FZI Research Center for Information Technology; KIT Karlsruhe In |
Keywords: Convolutional Neural Networks, Vision Sensing and Perception, Deep Learning
Abstract: Current state-of-the-art object detection algorithms still suffer from an imbalanced distribution of training data over object classes and background. Recent work introduced a new loss function called focal loss to mitigate this problem, but at the cost of an additional hyperparameter. Manually tuning this hyperparameter for each training task is highly time-consuming. With automated focal loss we introduce a new loss function which substitutes this hyperparameter with a parameter that is automatically adapted during training and controls the amount of focusing on hard training examples. We show on the COCO benchmark that this leads to up to 30% faster training convergence. We further introduce a focal regression loss which, on the more challenging task of 3D vehicle detection, outperforms other loss functions by up to 1.8 AOS and can be used as a value-range-independent metric for regression.
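For reference, the standard focal loss is FL(p_t) = -(1 - p_t)^gamma log(p_t); the sketch below implements it and pairs it with a purely hypothetical gamma schedule to convey the idea of automatic adaptation. The paper defines its own update rule, which is not reproduced here.

import numpy as np

def focal_loss(p, y, gamma):
    """Standard focal loss FL = -(1 - p_t)^gamma * log(p_t)."""
    p_t = np.where(y == 1, p, 1.0 - p)
    return -((1.0 - p_t) ** gamma) * np.log(np.clip(p_t, 1e-7, 1.0))

# Hypothetical adaptation: tie gamma to the mean confidence on true
# classes, so focusing strengthens as easy examples start to dominate.
# (Illustrative placeholder only, not the paper's rule.)
p = np.array([0.9, 0.95, 0.3, 0.8])   # predicted probabilities
y = np.array([1, 1, 1, 1])            # ground-truth labels
p_t_mean = np.mean(np.where(y == 1, p, 1.0 - p))
gamma = 2.0 * p_t_mean
print(focal_loss(p, y, gamma))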
|
|
15:55-16:00, Paper ThPM2_T1.7 | |
Scalable Active Learning for Object Detection |
|
Haussmann, Elmar | NVIDIA |
Fenzi, Michele | NVIDIA |
Chitta, Kashyap | MPI |
Roy, Donna | NVIDIA |
Ivanecky, Jan | NVIDIA Corp |
Xu, Hanson | NVIDIA Corp |
Mittel, Akshita | NVIDIA Corp |
Koumchatzky, Nicolas | NVIDIA |
Farabet, Clement | NVIDIA |
Alvarez, José M. | NVIDIA |
Keywords: Deep Learning, Convolutional Neural Networks
Abstract: Deep Neural Networks trained in a fully supervised fashion are the dominant technology in perception-based autonomous driving systems. While collecting large amounts of unlabeled data is already a major undertaking, only a subset of it can be labeled by humans due to the effort needed for high-quality annotation. Therefore, finding the right data to label has become a key challenge. Active Learning is a powerful technique to improve data efficiency for supervised learning methods, as it aims at selecting the smallest possible training set to reach a required performance. We have built a scalable production system for Active Learning in the domain of autonomous driving. In this paper, we describe the resulting high-level design, sketch some of the challenges and their solutions, present our current results at scale, and briefly describe the open problems and future directions.
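A common acquisition strategy in active learning pipelines of this kind is to rank unlabeled samples by predictive uncertainty; the Python sketch below shows entropy-based top-k selection on synthetic softmax outputs as a generic illustration, not the production system's scoring function.

import numpy as np

def entropy(probs):
    """Predictive entropy per sample; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=1)

# Toy pool of unlabeled samples with softmax outputs from the current
# model (3 classes for brevity).
rng = np.random.default_rng(1)
pool = rng.dirichlet(alpha=[0.5, 0.5, 0.5], size=1000)

# Select the k most uncertain samples for human labeling.
k = 10
chosen = np.argsort(-entropy(pool))[:k]
print(chosen)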
|
|
16:00-16:05, Paper ThPM2_T1.8 | |
An Entropy Based Outlier Score and Its Application to Novelty Detection for Road Infrastructure Images |
|
Wurst, Jonas | Technische Hochschule Ingolstadt |
Flores Fernandez, Alberto | Technische Hochschule Ingolstadt |
Botsch, Michael | Technische Hochschule Ingolstadt |
Utschick, Wolfgang | Technische Universität München |
Keywords: Unsupervised Learning, Automated Vehicles, Self-Driving Vehicles
Abstract: A novel unsupervised outlier score, which can be embedded into graph-based dimensionality reduction techniques, is presented in this work. The score uses the directed nearest-neighbor graphs of those techniques, so the same measure of similarity that is used to project the data into lower dimensions is also utilized to determine the outlier score. The outlier score is realized through a weighted normalized entropy of the similarities. This score is applied to road infrastructure images, with the aim of identifying newly observed infrastructures given a pre-collected base dataset. Detecting unknown scenarios is key to accelerated validation of autonomous vehicles. The results show the high potential of the proposed technique. To validate the generalization capabilities of the outlier score, it is additionally applied to various real-world datasets, where its overall average performance in identifying outliers is higher than that of state-of-the-art methods. To generate the infrastructure images, an OpenDRIVE parsing and plotting tool for MATLAB is developed as part of this work. This tool and the implementation of the entropy-based outlier score in combination with Uniform Manifold Approximation and Projection are made publicly available.
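The following Python sketch captures the spirit of the proposed score: similarities to the k nearest neighbors are normalized into a distribution whose entropy is combined with a similarity weight. The Gaussian similarity and the exact weighting are assumptions here; the paper's definitions may differ.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def entropy_outlier_score(X, k=10):
    """Outlier score from the normalized entropy of kNN similarities."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)
    dist = dist[:, 1:]                       # drop the self-distance
    sim = np.exp(-dist ** 2)                 # Gaussian similarity (assumed)
    p = sim / np.maximum(sim.sum(axis=1, keepdims=True), 1e-12)
    h = -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=1) / np.log(k)
    # Weight by total similarity so isolated points score high even when
    # their (tiny) similarities happen to be nearly uniform.
    return 1.0 - h * (sim.sum(axis=1) / k)

rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((200, 5)),
               8.0 + rng.standard_normal((5, 5))])   # 5 injected outliers
print(np.argsort(-entropy_outlier_score(X))[:5])     # highest-scoring points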
|
|
ThPM2_T2 |
EGYPTIAN_2 |
Automated Vehicles 2. B |
Regular Session |
|
15:25-15:30, Paper ThPM2_T2.1 | |
Optimization-Based Incentivization and Control Scheme for Autonomous Traffic |
|
Kalabic, Uros V. | Mitsubishi Electric Research Laboratories (MERL) |
Grover, Piyush | University of Nebraska-Lincoln |
Aeron, Shuchin | Tufts University |
Keywords: Cooperative ITS, Vehicle Control, Self-Driving Vehicles
Abstract: We consider the problem of incentivization and optimal control of autonomous vehicles for improving traffic congestion. In our scenario, autonomous vehicles must be incentivized in order to participate in traffic improvement. Using the theory and methods of optimal transport, we propose a constrained optimization framework over dynamics governed by partial differential equations, so that we can optimally select a portion of vehicles to be incentivized and controlled. The goal of the optimization is to obtain a uniform distribution of vehicles over the spatial domain. To achieve this, we consider two types of penalties on vehicle density, one is the L2 cost and the other is a multiscale-norm cost, commonly used in fluid-mixing problems. To solve this non-convex optimization problem, we introduce a novel algorithm, which iterates between solving a convex optimization problem and propagating the flow of uncontrolled vehicles according to the Lighthill-Whitham-Richards model. We perform numerical simulations, which suggest that the optimization of the L2 cost is ineffective while optimization of the multiscale norm is effective. The results also suggest the use of a dedicated lane for this type of control in practice.
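Propagation of the uncontrolled flow under the Lighthill-Whitham-Richards model, which the proposed algorithm alternates with a convex optimization step, can be sketched with the standard Godunov scheme and a Greenshields flux; both are classical choices assumed here for illustration, and the optimization layer is omitted.

import numpy as np

def godunov_step(rho, dx, dt, v_max=1.0, rho_max=1.0):
    """One forward step of the LWR model with a Greenshields flux."""
    f = lambda r: v_max * r * (1.0 - r / rho_max)   # concave flux
    rho_c = rho_max / 2.0                           # critical density
    rl, rr = rho[:-1], rho[1:]
    # Godunov numerical flux at each cell interface.
    flux = np.where(
        rl <= rr,
        np.minimum(f(rl), f(rr)),
        np.where((rl > rho_c) & (rr < rho_c), f(rho_c),
                 np.maximum(f(rl), f(rr))),
    )
    # Conservative update; boundary cells kept fixed for simplicity.
    rho_new = rho.copy()
    rho_new[1:-1] -= (dt / dx) * (flux[1:] - flux[:-1])
    return rho_new

rho = np.where(np.arange(100) < 50, 0.8, 0.2)  # toy jam upstream
for _ in range(50):
    rho = godunov_step(rho, dx=1.0, dt=0.5)    # CFL: dt * v_max / dx <= 1
print(np.round(rho[45:55], 2))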
|
|
15:30-15:35, Paper ThPM2_T2.2 | |
CLAP: Cloud-And-Learning-Compatible Autonomous Driving Platform |
|
Zhong, Yuanxin | University of Michigan |
Cao, Zhong | Tsinghua University |
Zhu, Minghan | University of Michigan |
Wang, Xinpeng | University of Michigan |
Yang, Diange | State Key Laboratory of Automotive Safety and Energy, Collaborat |
Peng, Huei | University of Michigan |
|
|
15:35-15:40, Paper ThPM2_T2.3 | |
How Safe Is Safe Enough? Automatic Safety Constraints Boundary Estimation for Decision-Making in Automated Vehicles |
|
Rodionova, Alena | University of Pennsylvania |
Alvarez, Ignacio | INTEL CORPORATION |
Elli, Maria Soledad | Intel Corporation |
Oboril, Fabian | Intel |
Quast, Johannes | Intel Corporation |
Mangharam, Rahul | University of Pennsylvania |
Keywords: Automated Vehicles, Active and Passive Vehicle Safety
Abstract: The determination of safety assurances for automated driving vehicles is one of the most critical challenges in the industry today. Several behavioral safety models for automated driving have been proposed recently, and standards discussions are under way. In this paper we present a method to automatically explore the performance of automated vehicle (AV) safety models, utilizing the robustness of Metric Temporal Logic (MTL) specifications as a continuous metric of safety. We present a case study of the Responsibility-Sensitive Safety model (RSS), introducing a safety evaluation pipeline based on the CARLA driving simulator, RSS and a set of safety-critical driving scenarios. Our method automatically extracts safety-relevant profiles for these scenarios, providing practical parametric boundaries for implementation. Furthermore, we evaluate the trade-offs between safety and utility within the safe RSS parameter space through a proposed naturalistic benchmark challenge that we have open-sourced. We analyze different RSS parameter configurations, including assertive and more conservative settings, extracted by our specification-driven framework. Our results show that, while maintaining the safety boundaries, the extracted RSS configuration for assertive driving behavior achieves the highest utility.
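To make the robustness notion concrete: for a specification of the form "always gap >= d_min", MTL robustness over a finite trace is simply the worst-case signed margin, positive when the specification held. A minimal sketch follows, with a toy gap trace and a hypothetical d_min.

import numpy as np

def robustness_always_ge(signal, threshold):
    """Robustness of G(signal >= threshold) over a finite trace."""
    return np.min(signal - threshold)

t = np.linspace(0.0, 10.0, 101)
gap = 30.0 - 1.5 * t          # toy longitudinal gap trace [m]
d_min = 10.0                  # hypothetical RSS-style safe distance [m]
print(robustness_always_ge(gap, d_min))  # 5.0: the gap never got closer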
|
|
15:40-15:45, Paper ThPM2_T2.4 | |
A Vehicle-In-The-Loop Methodology for Evaluating Automated Driving Functions in Virtual Traffic |
|
Solmaz, Selim | Virtual Vehicle Research Center |
Rudigier, Martin | Virtual Vehicle Research GmbH |
Mischinger, Marlies | Virtual Vehicle Research GmbH |
Keywords: Automated Vehicles, Advanced Driver Assistance Systems, Impact on Traffic Flows
Abstract: This paper introduces a novel vehicle-in-the-loop testing methodology called “Hybrid Testing”, which enables the evaluation of a real vehicle in a virtual traffic scenario on an enclosed proving ground with simulated traffic components and sensor signals. Among its other benefits, the introduced methodology is particularly suited to testing and verifying ADAS functions in virtual scenarios that would otherwise be very difficult to create in the real world. The development of ADAS functions requires extensive testing and validation prior to deployment, and exhaustive real-life scenario evaluation of such systems is generally infeasible, if not impossible. With Hybrid Testing we can combine the benefits of simulation and real-life testing. We show how this methodology, developed in the EU-funded project INFRAMIX, can be used to evaluate ADAS functions on the example of a trajectory planning algorithm, demonstrating its working principles and benefits.
|
|
15:45-15:50, Paper ThPM2_T2.5 | |
Implementation and Experimental Evaluation of a MIMO Drifting Controller on a Test Vehicle |
|
Bárdos, Ádám | Budapest University of Technology and Economics , Department Of |
Domina, Ádám | Budapest University of Technology and Economics |
Tihanyi, Viktor | Budapest University of Technology and Economics |
Szalay, Zsolt | Budapest Technology and Economics University, Department of Auto |
Palkovics, László | Budapest University of Technology and Economics , Department Of |
Keywords: Self-Driving Vehicles, Vehicle Control, Automated Vehicles
Abstract: In the future, the presence of highly automated vehicles is expected to become more and more widespread. In such systems, the whole driving task is performed by the vehicle autonomously; thus, vehicles must be able to control their motion in various circumstances, even at the stability limits. In this paper, the authors consider the control of a steady-state drifting maneuver, which involves saturated rear tire forces. In a previous article, a MIMO linear quadratic regulator (LQR) controller was designed and showed good performance in a simulation environment. The test results of a real-vehicle implementation, the logical next step of that work, are presented here. For the vehicle platform, a series-production sports car was chosen and modified to enable by-wire control. After identifying the vehicle model parameters through measurements, the control algorithm was implemented on a rapid prototyping unit. Vehicle states were measured with a high-precision dual-antenna GNSS module with RTK correction; additional dynamic parameters were taken from the vehicle CAN bus signals. The main goal was to stabilize different drifting equilibria, and the proposed controller showed satisfactory performance in the real-vehicle setup as well.
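The underlying controller synthesis is standard discrete-time LQR; a minimal sketch using SciPy follows, with a toy double-integrator model standing in for the paper's linearized drift dynamics.

import numpy as np
from scipy.linalg import solve_discrete_are

def dlqr(A, B, Q, R):
    """Discrete-time LQR gain K for x' = Ax + Bu, u = -K(x - x_eq)."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Toy double integrator as a stand-in for the linearized vehicle model.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
K = dlqr(A, B, Q=np.eye(2), R=np.eye(1))
print(K)   # regulates the state toward the chosen (drift) equilibrium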
|
|
15:50-15:55, Paper ThPM2_T2.6 | |
Safety Score: A Quantitative Approach to Guiding Safety-Aware Autonomous Vehicle Computing System Design |
|
Zhao, Hengyu | University of California, San Diego |
Zhang, Yubo | Pony.ai |
Meng, Pingfan | Pony.ai |
Shi, Hui | University of California, San Diego |
Li, Erran | Pony.ai |
Lou, Tiancheng | Pony.ai |
Zhao, Jishen | University of California, San Diego |
Keywords: Automated Vehicles, Collision Avoidance
Abstract: Highly automated vehicles rely on the in-vehicle computing system to understand the environment and make driving decisions; computing system design is therefore essential for ensuring driving safety. However, to our knowledge, no clear guideline exists so far for safety-aware autonomous vehicle (AV) computing system design. To understand the safety requirements of AV computing systems, we performed a field study by operating industrial Level-4 AV fleets in multiple locations for three months. The field study indicates that traditional computing system performance metrics, such as tail latency, average latency, maximum latency, and timeout, cannot fully capture the safety requirements of AV computing system design. To address this issue, we propose the ``safety score'' as a primary metric for measuring the level of safety in AV computing system design.
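The traditional metrics that the field study found insufficient are straightforward to compute from a latency trace; a short sketch on synthetic per-frame latencies follows, with all numbers illustrative.

import numpy as np

# Synthetic per-frame end-to-end latencies in milliseconds.
lat = np.random.default_rng(2).gamma(shape=5.0, scale=20.0, size=10000)

avg = lat.mean()                    # average latency
p99 = np.percentile(lat, 99)        # tail latency
worst = lat.max()                   # maximum latency
timeouts = np.mean(lat > 300.0)     # fraction over an assumed 300 ms budget

print(f"avg={avg:.1f} ms  p99={p99:.1f} ms  max={worst:.1f} ms  "
      f"timeout-rate={timeouts:.3%}")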
|
|
15:55-16:00, Paper ThPM2_T2.7 | |
Systems Integration, Simulation, and Control for Autonomous Trucking |
|
Darwesh, Amir | Texas A&M University |
Woods, Grayson | Texas A&M University |
Saripalli, Srikanth | Texas A&M University |
Keywords: Automated Vehicles, Autonomous / Intelligent Robotic Vehicles
Abstract: This paper discusses a platform, both in simulation and experimentation, for testing autonomous heavy trucking. In simulation, we present a novel use of the video game American Truck Simulator (ATS) as the simulation platform, costing only a fraction of commercial simulator software. In experimentation, we present a modified ProStar 122+ using the PACMod system from AutonomouStuff, a popular by-wire kit. A discussion and review of the by-wire kit and sensors are provided. A proof of concept of the platform is shown by performing lane keeping at 65 mph using the Stanley lateral controller and the MobilEye detection system. Further, we introduce a rapidly developed longitudinal control algorithm using a pedal actuation map and 3D lookup tables created from braking and acceleration data. Introductory results are presented to aid the research community in single-vehicle autonomous trucking.
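The Stanley lateral controller used for lane keeping has a well-known closed form: steering equals the heading error plus arctan(k * cross-track error / speed). A minimal sketch, with an assumed gain and steering limit:

import numpy as np

def stanley_steering(heading_err, cross_track_err, speed, k=0.5,
                     max_steer=np.radians(30.0)):
    """Classic Stanley lateral control law.

    heading_err:     path heading minus vehicle heading [rad]
    cross_track_err: signed lateral offset to the path [m]
    speed:           longitudinal speed [m/s]
    """
    delta = heading_err + np.arctan2(k * cross_track_err, speed)
    return float(np.clip(delta, -max_steer, max_steer))

# Example at 65 mph (about 29 m/s), 0.4 m offset, small heading error.
print(np.degrees(stanley_steering(0.02, 0.4, 29.0)))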
|
|
16:00-16:05, Paper ThPM2_T2.8 | |
A Fault Tolerant Lateral Control Strategy for an Autonomous Four Wheel Driven Electric Vehicle |
|
Ramanathan Venkita, Seshan | Flanders Make |
Boulkroune, Boulaid | Flanders Make |
Mishra, Anurodh | Flanders Make |
Van Nunen, Ellen | Flanders Make |
Keywords: Vehicle Control, Automated Vehicles, Active and Passive Vehicle Safety
Abstract: Fail-operational lateral control is a necessary aspect of autonomous driving systems to reach SAE Level 4 of automated driving. This paper focuses on fault-tolerant control reconfiguration techniques for two fault types: a yaw-rate sensor fault and an on-board electric motor fault. The yaw-rate fault reconfiguration combines an estimation of the yaw-rate signal with a smooth switch to a yaw-rate-independent controller. The actuator fault reconfiguration is based on control allocation techniques. These algorithms, including Fault Detection and Isolation, are implemented in a four-wheel-driven electric demonstration vehicle. Both types of reconfiguration strategies are experimentally evaluated in an evasive manoeuvre, and the results show that the fault reconfiguration techniques succeed in providing fail-operational behavior.
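A simple substitute for a failed yaw-rate sensor is the kinematic single-track relation r = v * tan(delta) / L; the sketch below shows this baseline estimate. The paper's reconfiguration is more elaborate, combining the estimate with a smooth switch to a yaw-rate-independent controller.

import numpy as np

def yaw_rate_estimate(speed, steer_angle, wheelbase=2.7):
    """Kinematic single-track yaw-rate estimate [rad/s].

    speed:       longitudinal speed [m/s]
    steer_angle: front steering angle [rad]
    wheelbase:   assumed vehicle wheelbase [m]
    """
    return speed * np.tan(steer_angle) / wheelbase

print(yaw_rate_estimate(speed=15.0, steer_angle=np.radians(3.0)))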
|
|
ThPM2_T3 |
EGYPTIAN_3 |
Smart Infrastructure and Traffic Management. B |
Regular Session |
|
15:25-15:30, Paper ThPM2_T3.1 | |
An Improved Moving Observer Method for Traffic Flow Estimation at Signalized Intersections |
|
Langer, Marcel | AUDI AG |
Schien, Thomas | Technical University of Applied Sciences Regensburg |
Harth, Michael | AUDI AG |
Kates, Ronald | REK Consulting |
Bogenberger, Klaus | Bundeswehr University Munich |
Keywords: Traffic Flow and Management, Impact on Traffic Flows, Smart Infrastructure
Abstract: With the deployment of partially and highly automated vehicles, the automotive industry is greatly increasing its influence on road traffic. In order to ensure a positive influence of automated vehicles on traffic efficiency as well as traffic safety, simulations are broadly used for the development and testing of the required functions. Since these simulations are applied to evaluate the behavior of an automated driving function in the real world, an exact representation of the real world in the simulation is essential for the validity of the generated results. Therefore, there is a need for methods with which certain parameters of real-world situations may be quantified and applied to a simulation. In this work, we propose an approach to measure traffic flow and estimate the traffic state in a network based on extended floating car data. For this purpose, the data concerning the movement of the tracked vehicles is combined with the data regarding surrounding traffic gathered by the vehicles’ sensors. The aim of this combination is to achieve an accurate traffic observation on urban as well as rural roads with a minimal number of test vehicles gathering the data. The application of the method to simulated traffic results in an accurate estimation of the traffic volume. The functionality is also demonstrated based on a limited sample of real-world test data.
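For reference, the classical moving observer estimate that this work extends with floating car data combines counts from runs with and against the traffic stream; a minimal sketch with illustrative numbers:

# Wardrop-Charlesworth moving observer method (toy numbers).
x = 40          # vehicles met while driving against the stream
y = 3           # vehicles overtaking minus overtaken, with the stream
t_a = 120.0     # travel time against the stream [s]
t_w = 150.0     # travel time with the stream [s]

q = (x + y) / (t_a + t_w)      # flow [veh/s]
t_mean = t_w - y / q           # mean stream travel time [s]
print(q * 3600.0, t_mean)      # flow in veh/h, travel time in s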
|
|
15:30-15:35, Paper ThPM2_T3.2 | |
A Monocular Forward Leading Vehicle Distance Estimation Using Mobile Devices |
|
Wen, Wen | University of Ottawa |
Aghdam, Hamed | University of Ottawa |
Wang, Yong | University of Ottawa |
Laganière, Robert | University of Ottawa |
Petriu, Emil M. | University of Ottawa |
Keywords: Advanced Driver Assistance Systems, Deep Learning, Collision Avoidance
Abstract: Keeping a safe distance from the leading vehicle is crucial for transportation companies with fleets of older cars. While modern Advanced Driver Assistance Systems (ADAS) may be able to estimate the distance to the front-leading vehicle, traditional ADAS do not usually offer this feature. An alternative solution is to monitor the distance using a smartphone attached, for example, to the sun visor: the front-leading vehicle is detected with the smartphone camera and its distance from the car is estimated. Although the SSD object detector can achieve real-time performance on powerful GPUs, running this model in real time on mobile devices remains challenging. In this paper, we propose a monocular distance estimator for forward-leading vehicles using a smartphone which is faster and more accurate than the state-of-the-art SSD detector. Specifically, we propose a layer-wise method to generate more efficient default boxes for the SSD and develop a lightweight method for estimating the distance accurately. Our experiments show that the proposed method reduces the number of default boxes by an average of 38.4% while improving the detection rate and processing speed compared to the original SSD. Moreover, our monocular distance estimator provides a proper safety buffer zone when the distance is greater than 20 meters.
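A common baseline for monocular distance estimation is the pinhole relation distance = focal_length * real_width / pixel_width; the sketch below shows this baseline with assumed values. The paper's lightweight estimator may differ.

def monocular_distance(f_px, real_width_m, box_width_px):
    """Pinhole-model distance estimate from a monocular detection.

    f_px:         camera focal length in pixels (assumed calibrated)
    real_width_m: assumed physical width of the leading vehicle [m]
    box_width_px: detected bounding-box width in pixels
    """
    return f_px * real_width_m / box_width_px

# Example: 1000 px focal length, 1.8 m wide car, 60 px wide detection.
print(monocular_distance(1000.0, 1.8, 60.0))  # -> 30.0 m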
|
|
15:35-15:40, Paper ThPM2_T3.3 | |
Split Covariance Intersection Filter Based Front-Vehicle Track Estimation for Vehicle Platooning without Communication |
|
Chen, Xiaofeng | Shanghai Jiao Tong University |
Yang, Ming | Shanghai Jiao Tong University |
Yuan, Wei | Shanghai Jiao Tong University |
Li, Hao | Shanghai Jiao Tong University |
Wang, Chunxiang | Shanghai Jiao Tong University |
Keywords: Automated Vehicles, Cooperative ITS
Abstract: Vehicle platooning is an innovative technology for intelligent transportation systems, where each vehicle in the platoon is required to autonomously follow its front vehicle's path unconditionally and accurately. For platoons without communication, accurate path and velocity estimation of the front vehicle is challenging and crucial. Instead of memorizing the original detection results of the front vehicle to generate a path, in this paper the path is estimated by fusing motion prediction and observation. Since there is a correlation between the two estimates, the Split Covariance Intersection Filter is used to guarantee fusion consistency. In addition, a motion model considering velocity is merged into the filter to simultaneously achieve a precise estimate of the predecessor's velocity. Moreover, a path generation approach is designed in a high-frequency loop, independent of front-vehicle detection, to improve the continuity of the path. Experimental validations in a real-world environment highlight a remarkable improvement in the accuracy of both path and velocity estimation. Meanwhile, the path generated by the proposed approach is more continuous than that of the commonly used method. Furthermore, complete vehicle platooning demonstrations in diverse environments prove the practicality and robustness of the proposed approach.
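Standard covariance intersection (CI), the building block behind the split variant used here, fuses two estimates with unknown cross-correlation via a convex combination of information matrices; a minimal sketch follows. The split filter additionally separates correlated and independent covariance parts, which is omitted here.

import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """CI fusion of (x1, P1) and (x2, P2) with weight omega in [0, 1]."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    x = P @ (omega * I1 @ x1 + (1.0 - omega) * I2 @ x2)
    return x, P

x1, P1 = np.array([1.0, 0.0]), np.diag([0.5, 1.0])   # motion prediction
x2, P2 = np.array([1.2, 0.1]), np.diag([1.0, 0.4])   # observation
# omega is normally chosen to minimize, e.g., trace(P); fixed here.
print(covariance_intersection(x1, P1, x2, P2, omega=0.6)[0])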
|
|
15:40-15:45, Paper ThPM2_T3.4 | |
Realtime Estimation of IEEE 802.11p for Mobile Working Machines Communication Respecting Delay and Packet Loss |
|
Xiang, Yusheng | Karlsruhe Institute of Technology |
Tianqing, Su | Technical University of Braunschweig |
Brach, Christine | Robert Bosch GmbH |
Xiaole, Liu | Technical University of Munich |
Marcus, Geimer | Karlsruhe Institute of Technology |
Keywords: V2X Communication, Intelligent Ground, Air and Space Vehicles
Abstract: Fleet management of mobile working machines with the help of connectivity can increase not only safety but also productivity. However, few mobile working machines have yet taken advantage of V2X. Moreover, no published simulation results are suitable for evaluating ad-hoc network performance at a highway work site, which is congested, exhibits low mobility, and lacks buildings. In this paper, we suggest that IEEE 802.11p should be implemented for fleet management, at least for a first version. Furthermore, we propose an analytical model with which machines can estimate ad-hoc network performance, i.e., the delay and the packet loss probability, in real time, based on simulation results we obtained in ns-3. The model can further be used to determine when an ad-hoc or a cellular network should be used in the corresponding scenarios.
|
|
15:45-15:50, Paper ThPM2_T3.5 | |
Co-Simulation Platform for Developing InfoRich Energy-Efficient Connected and Automated Vehicles |
|
Aoki, Shunsuke | Carnegie Mellon University |
Jan, Lung En | Carnegie Mellon University |
Zhao, Junfeng | General Motors Company |
Bhat, Anand | Carnegie Mellon University |
Rajkumar, R. | Carnegie Mellon University |
Chang, Chen-Fang | General Motors Company |
Keywords: Intelligent Vehicle Software Infrastructure, Self-Driving Vehicles, Eco-driving and Energy-efficient Vehicles
Abstract: With advances in sensing, computing and communication technologies, Connected and Automated Vehicles (CAVs) are becoming feasible. The advent of CAVs presents new opportunities to improve the energy efficiency of individual vehicles. However, testing and verifying energy-efficient autonomous driving systems is difficult due to safety considerations and repeatability. In this paper, we present a co-simulation platform to develop and test a novel eco-autonomous driving technology, named InfoRich, which incorporates information from on-board sensors, V2X communications, and a map database. The co-simulation platform includes the eco-autonomous driving software, a vehicle dynamics and powertrain (VD&PT) model, and a traffic environment simulator. We also utilize synthetic drive cycles derived from real-world driving data to test the strategies under realistic driving scenarios. To build road networks from the real-world driving data, we develop an Automated Parser and Calculator for Map/Scenario named AutoPASCAL. Overall, the simulation platform provides realistic vehicle, powertrain, sensor, traffic, and road-network models to enable the evaluation of the energy efficiency of eco-autonomous driving.
|
|
15:50-15:55, Paper ThPM2_T3.6 | |
On the Manipulation of Wheel Speed Sensors and Their Impact in Autonomous Vehicles |
|
Pöllny, Oliver | Mercedes-Benz AG |
Held, Albert | Daimler AG |
Kargl, Frank | Ulm University |
Keywords: Security, Autonomous / Intelligent Robotic Vehicles, Active and Passive Vehicle Safety
Abstract: Modern vehicles, and in particular automated or autonomous vehicles, utilize data gained from wheel speed sensors for various functionalities such as determining the current speed or electronic stability control. The wheel speed is derived from a multitude of Hall sensors. By generating an electromagnetic field with a suitable frequency, it is possible to change the wheel speed sensor's interpretation of wheel speed. The braking system and electronic stability control would then base their actions on incorrect data, possibly leading to a dangerous situation. In this research, we investigate the practical feasibility and impact of such attacks. For this, an audio amplifier and a Helmholtz coil were used to emit electromagnetic fields with varying frequencies. We then analyse the effect of a single manipulated wheel speed sensor on the electronic stability control used in modern vehicles (using a hardware-in-the-loop setup). We finally describe the consequences and possible countermeasures for upcoming autonomous and highly automated vehicles.
|
|
15:55-16:00, Paper ThPM2_T3.7 | |
Conflict Analysis for Cooperative Merging Using V2X Communication |
|
Wang, Hao | University of Michigan, Ann Arbor |
Molnar, Tamas Gabor | University of Michigan |
Avedisov, Sergei | Toyota Motor North America R&D - InfoTech Labs |
Sakr, Ahmed Hamdi | Toyota InfoTechnology Center, U.S.A |
Altintas, Onur | Toyota R&D |
Orosz, Gabor | University of Michigan |
Keywords: V2X Communication, Automated Vehicles, Situation Analysis and Planning
Abstract: In this paper we investigate the problem of a vehicle merging onto a main road while another vehicle is approaching on that road. We utilize conflict analysis to aid decision making and control for vehicles of different automation levels. We demonstrate that using vehicle-to-everything (V2X) communication, e.g., the basic safety message (BSM), we are able to prevent conflict between the two vehicles. We design a longitudinal controller for the merging vehicle and show that V2X communication is also beneficial in improving the time efficiency of the merge. The results are demonstrated by simulations based on real highway data.
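A minimal notion of merge conflict can be sketched as overlapping predicted arrival intervals at the merge point, with positions and speeds as a BSM would supply them; the constant-speed prediction and the headway threshold below are simplifying assumptions, not the paper's analysis.

def merge_conflict(s_merge_1, v1, s_merge_2, v2, headway=2.0):
    """Flag a conflict if predicted arrival times at the merge point
    are closer than an assumed safe headway.

    s_merge_*: distance to the merge point [m], v*: speed [m/s].
    """
    t1 = s_merge_1 / v1          # arrival time of merging vehicle [s]
    t2 = s_merge_2 / v2          # arrival time of mainline vehicle [s]
    return abs(t1 - t2) < headway

print(merge_conflict(s_merge_1=80.0, v1=20.0, s_merge_2=90.0, v2=25.0))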
|
|
16:00-16:05, Paper ThPM2_T3.8 | |
An Intelligent Vehicle Oriented EMC Reverse Diagnostic Model Based on SVM (I) |
|
Lei, Jianmei | State Key Laboratory of Vehicle NVH and Safety Technology & Chon |
Mu, Jie | Chongqing University |
Zeng, Lingqiu | Chongqing University |
Han, Qingwen | Chongqing University |
Hu, Longbiao | Chongqing University |
Chen, Xu | Chongqing University of Technology |
Chen, Lidong | State Key Laboratory of Vehicle NVH and Safety Technology, Chong |
Keywords: Intelligent Ground, Air and Space Vehicles, Reinforcement Learning, Information Fusion
Abstract: The rapid development of intelligent vehicles brings new challenges to vehicle EMC design, which benefits from test-data-oriented troubleshooting. With the increase in electronic complexity, vehicle on-board system designers are confronted with more and more EMC failure possibilities and need an effective EMC fault diagnosis approach. However, EMC fault diagnosis is difficult due to the distinguishing features of EMC test datasets, such as small sample size, nonlinearity, and high dimensionality. Hence, in this paper, an EMC reverse diagnostic model is proposed. First, an EMC feature extraction method is designed; then an SVM model is designed to classify EMC faults, and the corresponding application results are presented. Experimental results show that the proposed method meets the demands of EMC fault diagnosis for intelligent vehicles.
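The classification stage can be illustrated with a standard RBF-kernel SVM on a small, high-dimensional dataset; the features and labels below are synthetic stand-ins for extracted EMC test features, not the paper's data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 64))                    # 40 tests, 64-dim features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy pass/fail-style labels

# Scaling matters for SVMs on high-dimensional, small-sample data.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:30], y[:30])
print(clf.score(X[30:], y[30:]))                 # held-out accuracy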
|
| |