Last updated on June 12, 2022. This conference program is tentative and subject to change.
Technical Program for Wednesday June 8, 2022

We-A-OR Regular Session, Europa Hall
Automated Driving
Chair: Ozaki, Nobuyuki | Nagoya University

08:30-08:50, Paper We-A-OR.1
Tackling Real-World Autonomous Driving Using Deep Reinforcement Learning
Maramotti, Paolo | Università Degli Studi Di Parma
Capasso, Alessandro Paolo | VisLab, an Ambarella Inc. Company - University of Parma
Bacchiani, Giulio | VisLab, an Ambarella Inc. Company
Broggi, Alberto | University of Parma
Keywords: Self-Driving Vehicles, Reinforcement Learning, Vehicle Control
Abstract: In the typical autonomous driving stack, planning and control systems represent two of the most crucial components in which data retrieved by sensors and processed by perception algorithms are used to implement a safe and comfortable self-driving behavior. In particular, the planning module predicts the path the autonomous car should follow by taking the correct high-level maneuver, while control systems perform a sequence of low-level actions, controlling steering angle, throttle and brake. In this work, we propose a model-free Deep Reinforcement Learning Planner that trains a neural network to predict both acceleration and steering angle, thus obtaining a single module able to drive the vehicle using the data processed by localization and perception algorithms on board the self-driving car. In particular, the system, which was fully trained in simulation, is able to drive smoothly and safely in obstacle-free environments both in simulation and in a real-world urban area of the city of Parma, proving that the system features good generalization capabilities even when driving in areas outside the training scenarios. Moreover, in order to deploy the system on board the real self-driving car and to reduce the gap between simulated and real-world performance, we also develop a module, represented by a tiny neural network, able to reproduce the real vehicle's dynamic behavior during training in simulation.
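
A minimal sketch (not the authors' implementation) of the kind of single learned module the abstract describes: a network mapping a localization/perception feature vector to a steering and an acceleration command. All layer sizes, names and input features below are assumptions.

```python
# Illustrative sketch only; layer sizes, names and input features are assumptions,
# not the architecture used in the paper.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Maps a perception/localization feature vector to (steering, acceleration)."""
    def __init__(self, obs_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 2)  # [steering, acceleration]

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # tanh bounds both commands to [-1, 1]; they would be rescaled to
        # physical limits (max steering angle, max acceleration) downstream.
        return torch.tanh(self.head(self.backbone(obs)))

policy = DrivingPolicy()
action = policy(torch.randn(1, 64))  # e.g. tensor([[steer, accel]])
```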

08:50-09:10, Paper We-A-OR.2
How to Build and Validate a Safe and Reliable Autonomous Driving Stack? A ROS-Based Software Modular Architecture Baseline
Gómez-Huélamo, Carlos | University of Alcalá
Diaz-Diaz, Alejandro | University of Alcala
Araluce, Javier | University of Alcala
Ortiz Huamaní, Miguel Eduardo | University of Alcala
Gutiérrez-Moreno, Rodrigo | University of Alcalá
Arango, Felipe | University of Alcala
Llamazares, Angel | University of Alcalá
Bergasa, Luis M. | University of Alcala
Keywords: Self-Driving Vehicles, Vehicle Environment Perception, Vision Sensing and Perception
Abstract: The implementation of Autonomous Driving stacks (ADS) is one of the most challenging engineering tasks of our era. Autonomous Vehicles (AVs) are expected to be driven in highly dynamic environments with a reliability greater than human beings and full autonomy. Furthermore, one of the most important topics is the way to democratize and accelerate the development and research of holistic validation to ensure the robustness of the vehicle. In this paper, we present a powerful ROS (Robot Operating System) based modular ADS that achieves state-of-the-art results in challenging scenarios based on the CARLA (Car Learning to Act) simulator, outperforming several strong baselines in a novel evaluation setting which involves non-trivial traffic scenarios and adverse environmental conditions. Our proposal ranks in second position in the CARLA Autonomous Driving Leaderboard (Map Track) and gets the best score considering modular pipelines, as a preliminary stage before implementing it in our real-world autonomous electric car. Our ADS is built towards meeting the requirements to commit the least number of traffic infractions, which can be summarized as: global planning based on the A* algorithm, a control layer that uses waypoints and a Linear-Quadratic Regulator (LQR) algorithm, Hierarchical Interpreted Binary Petri Nets (HIBPNs) to model the behavioural processes, GNSS and IMU to conduct the localization step, and a combination of perception pipelines for obstacle detection, traffic sign detection and risk assessment based on LiDAR, camera and High-Definition (HD) map information. To encourage research in holistic development and testing, our code is publicly available at https://github.com/RobeSafe-UAH/CARLA_Leaderboard.
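
For readers unfamiliar with the LQR control layer mentioned above, the following is a hedged sketch of one common way to compute an LQR gain for waypoint tracking with a discrete Riccati solver; the error-state model and all matrices are illustrative assumptions, not the stack's actual implementation.

```python
# Illustrative LQR gain computation; the state/model matrices are assumptions,
# not those of the RobeSafe-UAH stack.
import numpy as np
from scipy.linalg import solve_discrete_are

dt, v = 0.05, 10.0               # sample time [s], longitudinal speed [m/s]
# Simple kinematic error model: state = [lateral error, heading error]
A = np.array([[1.0, v * dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [v * dt / 2.7]])   # 2.7 m wheelbase (assumed)
Q = np.diag([1.0, 0.5])          # state weights
R = np.array([[0.1]])            # steering effort weight

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u = -K x

x = np.array([0.4, 0.05])        # 0.4 m lateral error, 0.05 rad heading error
steer = -(K @ x).item()          # steering command toward the next waypoint
```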

09:10-09:30, Paper We-A-OR.3
Segmented Encoding for Sim2Real of RL-Based End-To-End Autonomous Driving
Chung, Seung-Hwan | Naver Labs
Kong, Seung-Hyun | Korea Advanced Institute for Science and Technology
Cho, Sangjae | KAIST
Nahrendra, I Made Aswin | KAIST
Keywords: Autonomous / Intelligent Robotic Vehicles, Reinforcement Learning, Deep Learning
Abstract: Among the challenges in the recent research of end-to-end (E2E) driving, interpretability and the distribution shift in simulation-to-real (Sim2Real) transfer have drawn considerable attention. Because of low interpretability, we cannot clearly explain the causal relationship between the input image and the control actions produced by the network. Moreover, the distribution shift problem in Sim2Real degrades the driving performance of the policy in real-world deployment. In this paper, we propose a segmentation-based class-wise disentangled latent encoding algorithm to cope with the two challenges. In the proposed algorithm, multi-class segmentation transfers RGB images in both simulation and real environments to the same domain, while preserving the necessary information about objects of primary classes, such as pedestrians, roads, and cars, for driving decisions. Moreover, in the class-wise disentangled latent encoding, segmented images are encoded to a latent vector, which improves the interpretability significantly, since the state input has a structured format. The interpretability improvement is demonstrated by t-distributed stochastic neighbor embedding, image reconstruction, and the causal relationship between the real images and the control actions. We deploy the driving policy trained in simulation directly to an autonomous vehicle platform and show, to the best of our knowledge, the first demonstration of RL-based E2E autonomous driving in various real environments.

We-Po1S Poster Session, Foyer Eurogress
Interactive Session We1

09:30-10:50, Paper We-Po1S.1
How to Not Drive: Learning Driving Constraints from Demonstration
Rezaee, Kasra | University of Toronto
Yadmellat, Peyman | Huawei Technologies Canada
Keywords: Automated Vehicles, Unsupervised Learning, Deep Learning
Abstract: We propose a new scheme to learn motion planning constraints from human driving trajectories. Behavioral and motion planning are the key components in an autonomous driving system. The behavioral planning is responsible for the high-level decision making required to follow traffic rules and interact with other road participants. The motion planner's role is to generate feasible, safe trajectories for a self-driving vehicle to follow. The trajectories are generated through an optimization scheme to optimize a cost function based on metrics related to smoothness, movability, and comfort, and subject to a set of constraints derived from the planned behavior, safety considerations, and feasibility. A common practice is to manually design the cost function and constraints. Recent work has investigated learning the cost function from human driving demonstrations. While effective, the practical application of such approaches is still questionable in autonomous driving. In contrast, this paper focuses on learning driving constraints, which can be used as an add-on module to existing autonomous driving solutions. To learn the constraints, the planning problem is formulated as a constrained Markov Decision Process, whose elements are assumed to be known except the constraints. The constraints are then learned by learning the distribution of expert trajectories and estimating the probability of optimal trajectories belonging to the learned distribution. The proposed scheme is evaluated using the NGSIM dataset, yielding less than 1% collision rate and out-of-road maneuvers when the learned constraints are used in an optimization-based motion planner.
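
A hedged sketch of the density-based idea in this abstract (not the paper's code): fit a distribution to expert trajectory features and flag candidate trajectories whose likelihood falls below a threshold as violating the learned constraint. The feature choice, bandwidth and threshold are assumptions.

```python
# Sketch of a density-based constraint check; features, bandwidth and the
# threshold are illustrative assumptions, not the paper's formulation.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
# Expert trajectory features, e.g. [mean speed, max |lateral accel|] per trajectory.
expert_features = rng.normal(loc=[12.0, 1.0], scale=[2.0, 0.3], size=(500, 2))

kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(expert_features)
threshold = np.quantile(kde.score_samples(expert_features), 0.05)  # 5th percentile

def violates_learned_constraint(candidate: np.ndarray) -> bool:
    """True if the candidate trajectory looks unlike anything the experts did."""
    return kde.score_samples(candidate.reshape(1, -1))[0] < threshold

print(violates_learned_constraint(np.array([12.5, 1.1])))   # likely False
print(violates_learned_constraint(np.array([30.0, 4.0])))   # likely True
```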

09:30-10:50, Paper We-Po1S.2
Emerging of V2X Paradigm in the Development of a ROS-Based Cooperative Architecture for Transportation System Agents
Elias, Catherine | German University in Cairo
Shehata, Omar | German University in Cairo
Morgan, Elsayed Imam | German University in Cairo
Stiller, Christoph | Karlsruhe Institute of Technology
Keywords: Cooperative Systems (V2X), Vehicle Control, Smart Infrastructure
Abstract: The Connected and Automated Vehicles (CAVs) technology is in continuous growth, especially during the last decade. Accordingly, the development of V2X protocols has attracted the attention of many researchers. However, there is a gap in structuring a holistic architecture that considers the individual agent modules in the transportation system along with their cooperation, especially in the decision-making layer. Thus, the main contribution of this article is to build a Robot Operating System (ROS)-based architecture simulating the interaction between the infrastructure and the car agents. The architecture is designed with some essential characteristics: configurable, modular, comprehensive, usable, and generic. In the architecture, the car agent is built to include four main modules to accomplish a high level of autonomy: Localization, Planning, Control, and Communication. Meanwhile, the road agent is composed of two modules: mapping and communication. Both agents are implemented as ROS nodes that communicate through custom ROS messages using a Service/Client architecture with developed V2I protocols. These protocols are responsible for registering the car while joining the road and assigning a unique identifier to each joined car. Moreover, the road assigns the desired speed suiting the road profile and a lane to be kept by the car. The architecture is verified on a designed track map in the Webots simulator. A case study of 10 heterogeneous cars is demonstrated to observe the architecture performance. The architecture showed promising results, successfully controlling the cars along the designed map with acceptable error via the V2I protocols, allowing a human-like driving experience.

09:30-10:50, Paper We-Po1S.3
Object-Based Velocity Feedback for Dynamic Occupancy Grids
Jiménez Bermejo, Víctor | Consejo Superior De Investigaciones Científicas
Godoy, Jorge | Centre for Automation and Robotics (UPM-CSIC)
Artunedo, Antonio | Centre for Automation and Robotics (CSIC-UPM)
Villagra, Jorge | Centre for Automation and Robotics (CSIC-UPM)
Keywords: Vehicle Environment Perception, Lidar Sensing and Perception, Automated Vehicles
Abstract: Dynamic occupancy grids (DOGs) have attracted interest in recent years due to their ability to fuse information without explicit data association, to represent free space and arbitrary-shape objects, and to estimate obstacles' dynamics. Different works have presented strategies with demonstrated good performance. Most of them rely on LiDAR sensors, and some have shown that including additional velocity measurements enhances the estimation. This work aims at showing that velocity information can be directly inferred from objects' displacement. Thus, a strategy using velocity feedback and its inclusion in the DOG is presented. The qualitative and quantitative analysis of results obtained from real-data experimentation shows very good performance, especially in dynamically changing situations.

09:30-10:50, Paper We-Po1S.4
Cross-Layer Authentication Based on Physical-Layer Signatures for Secure Vehicular Communication
Shawky, Mahmoud | PhD Student at James Watt School of Engineering, University of G
Abbasi, Qammer H. | Dr Abbasi Is a Reader with the James Watt School of Engineering,
Imran, Muhammad Ali | Dean University of Glasgow UESTC, Head of the Communications Sen
Ansari, Shuja | Member of the Communications Sensing and Imaging Research Group
Taha, Ahmad | Member of the Communications Sensing and Imaging Research Group
Keywords: V2X Communication, Security, Privacy
Abstract: In recent years, research has focused on exploiting the inherent physical (PHY) characteristics of wireless channels to discriminate between different spatially separated network terminals, mitigating the significant costs of signature-based techniques. In this paper, the legitimacy of the corresponding terminal is firstly verified at the protocol stack’s upper layers, and then the re-authentication process is performed at the PHY-layer. In the latter, a unique PHY-layer signature is created for each transmission based on the spatially and temporally correlated channel attributes within the coherence time interval. As part of the verification process, the PHY-layer signature can be used as a message authentication code to prove the packet's authenticity. Extensive simulation has shown the capability of the proposed scheme to support high detection probability at small signal-to-noise ratios. In addition, security evaluation is conducted against passive and active attacks. Computation and communication comparisons are performed to demonstrate that the proposed scheme provides superior performance compared to conventional cryptographic approaches.

09:30-10:50, Paper We-Po1S.5
Multi-Vehicle Conflict Management with Status and Intent Sharing
Wang, Hao | University of Michigan, Ann Arbor
Avedisov, Sergei | Toyota Motor North America R&D - InfoTech Labs
Altintas, Onur | Toyota North America R&D
Orosz, Gabor | University of Michigan
Keywords: Cooperative Systems (V2X), Situation Analysis and Planning, V2X Communication
Abstract: In this paper, we extend the conflict analysis framework to resolve conflicts between multiple vehicles with different levels of automation, while utilizing status-sharing and intent-sharing enabled by vehicle-to-everything (V2X) communication. In status-sharing a connected vehicle shares its current state (e.g., position, velocity) with other connected vehicles, whereas in intent-sharing a vehicle shares information about its future trajectory (e.g., velocity bounds). Our conflict analysis framework uses reachability theory to interpret the information contained in status-sharing and intent-sharing messages through conflict charts. These charts enable real-time decision making and control of a connected automated vehicle interacting with multiple remote connected vehicles. Using numerical simulations and real highway traffic data, we demonstrate the effectiveness of the proposed conflict resolution strategies, and reveal the benefits of intent sharing in mixed-autonomy environments.

09:30-10:50, Paper We-Po1S.6
Predicting Real Life Electric Vehicle Fast Charging Session Duration Using Neural Networks
Deschênes, Anthony | Université Laval
Gaudreault, Jonathan | Université Laval
Quimper, Claude-Guy | Université Laval
Keywords: Deep Learning, Intelligent Vehicle Software Infrastructure
Abstract: Predicting the time needed to charge an electric vehicle from X% to Y% is a difficult task due to the nonlinearity of the charging process and other external factors such as temperature and battery degradation. Using 28,000 real-life level 3 fast charging sessions from 15 different types of electric vehicles, we train models for this task. We compare learning models such as random forest, linear and second-degree regressions, support vector regressions, and neural networks. The models take into consideration the external temperature, battery capacity, nominal capacity of the electric vehicle, number of charges made during the same day, maximum charging time allowed by the electric vehicle, target voltage, maximum voltage, and maximum current asked by the electric vehicle. The models also take into consideration the vehicle type and the charging station type. We use a data augmentation technique (SMOTE) and hyperparameter optimization to enhance our models' performance. The structure of the neural networks is optimized using Bayesian Optimization. All models are trained and statistically compared in order to find the overall best model for all vehicle types. The overall best model is a neural network with a sub neural network pre-trained to predict the electric vehicle type.
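
A hedged sketch of the kind of model comparison described above, run on synthetic placeholder data; the feature names, model settings and the data itself are assumptions and not the paper's dataset or pipeline.

```python
# Sketch of a charging-duration model comparison on synthetic placeholder data;
# feature names, model settings and the dataset itself are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
# Placeholder features: [external temp, battery capacity, start SoC, target SoC, max current]
X = rng.uniform([-20, 40, 5, 60, 50], [35, 100, 60, 100, 500], size=(2000, 5))
y = (X[:, 3] - X[:, 2]) * X[:, 1] / X[:, 4] * 60 + rng.normal(0, 2, 2000)  # minutes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural_net": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {mae:.1f} min")
```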

09:30-10:50, Paper We-Po1S.7
AVMaestro: A Centralized Policy Enforcement Framework for Safe Autonomous-Driving Environments
Zhang, Ze | University of Michigan
Singapuram, Sanjay Sri Vallabh | University of Michigan
Zhang, Qingzhao | University of Michigan
Hong, David Ke | University of Michigan
Nguyen, Brandon | University of Michigan
Mao, Z. Morley | University of Michigan, Ann Arbor
Mahlke, Scott | University of Michigan
Chen, Qi Alfred | UC Irvine
Keywords: Intelligent Vehicle Software Infrastructure, Self-Driving Vehicles, Security
Abstract: Autonomous vehicles (AVs) are on the verge of changing the transportation industry. Despite the fast development of autonomous driving systems (ADSs), they still face safety and security challenges. Current defensive approaches usually focus on a narrow objective and are bound to specific platforms, making them difficult to generalize. To address these limitations, we propose AVMaestro, an efficient and effective policy enforcement framework for full-stack ADSs. AVMaestro includes a code instrumentation module to systematically collect required information across the entire ADS, which is then fed into a centralized data examination module, where users can utilize the global information to deploy defensive methods to protect AVs from various threats. AVMaestro is evaluated on top of Apollo-6.0, and experimental results confirm that it can be easily incorporated into the original ADS with almost negligible run-time delay. We further demonstrate that utilizing the global information can not only improve the accuracy of existing intrusion detection methods, but also potentially inspire new security applications.

09:30-10:50, Paper We-Po1S.8
Socially-Optimal Auction-Theoretic Intersection Management System
Morrissett, Adam | Virginia Commonwealth University
Martin, Patrick | Virginia Commonwealth University
Abdelwahed, Sherif | Virginia Commonwealth University
Keywords: Cooperative ITS, Cooperative Systems (V2X), Self-Driving Vehicles
Abstract: Unsignalized intersections are often sources of congestion and collisions. When human-driven vehicles arrive simultaneously, the drivers typically creep out into the intersection or wave each other through to break stalemates. While intuitive for human drivers, this approach would be challenging for autonomous vehicles (AVs). Current AVs typically operate in isolation without explicitly communicating their intentions to others. In this paper, we propose an auction-based intersection management system (IMS) to determine a crossing schedule. Vehicles bid for crossing time using a cost function over different possible crossing times, and the IMS assigns crossing times that maximize social utility. We evaluate our system with an ambiguous crossing scenario and demonstrate its usefulness in determining socially-optimal crossing schedules.
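
A brute-force sketch of the scheduling idea described above: each vehicle bids a utility for every possible crossing slot, and the manager picks the order that maximizes total (social) utility. The bid values and timing model below are illustrative assumptions, not the paper's cost functions.

```python
# Brute-force sketch of picking the crossing order that maximizes social utility;
# the bid values and slot model are illustrative assumptions.
from itertools import permutations

# bids[vehicle][slot] = utility of crossing in that slot (earlier is usually better).
bids = {
    "A": [10.0, 6.0, 2.0],   # e.g. vehicle A values an early slot highly
    "B": [5.0, 4.0, 3.0],
    "C": [4.0, 3.5, 3.0],
}

def best_schedule(bids):
    """Return the crossing order with maximal summed utility over all vehicles."""
    vehicles = list(bids)
    best, best_utility = None, float("-inf")
    for order in permutations(vehicles):
        utility = sum(bids[v][slot] for slot, v in enumerate(order))
        if utility > best_utility:
            best, best_utility = order, utility
    return best, best_utility

order, utility = best_schedule(bids)
print(order, utility)   # socially optimal crossing schedule and its total utility
```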

09:30-10:50, Paper We-Po1S.9
MCS Analysis for 5G-NR V2X Sidelink Broadcast Communication
Yan, Jin | Eurecom
Haerri, Jerome | EURECOM
Keywords: V2X Communication, Cooperative Systems (V2X), Cooperative ITS
Abstract: Leveraging Modulation and Coding Schemes (MCS) in 5G New Radio (NR) Sidelink represents one key strategy to provide the capacity required by future 5G Vehicle-to-Everything (V2X) services for intelligent vehicles. Early studies either directly adopt the QPSK 1/2 scheme previously optimised for 802.11p/C-V2X or suggest an optimal MCS value under a particular context. In this paper, we identify an MCS value that is optimal under any context by evaluating the impact of MCS on V2X broadcast communication considering multiple varying parameters (e.g. variable packet size, transmit rate or density) representative of different 5G V2X services.

09:30-10:50, Paper We-Po1S.10
RRT-Based Maximum Entropy Inverse Reinforcement Learning for Robust and Efficient Driving Behavior Prediction
Hosoma, Shinpei | Tokyo Institute of Technology
Sugasaki, Masato | Tokyo Institute of Technology
Arie, Hiroaki | DENSO CORPORATION
Shimosaka, Masamichi | Tokyo Institute of Technology
Keywords: Advanced Driver Assistance Systems, Driver State and Intent Recognition, Reinforcement Learning
Abstract: Advanced driver assistance systems have gained popularity as a safe technology that helps people avoid traffic accidents. To improve system reliability, driving behavior prediction has been extensively researched. In particular, inverse reinforcement learning (IRL) is known as a prominent approach because it can directly learn complicated behaviors from expert demonstrations. Driving data tend to have several optimal behaviors because of their dependency on drivers' preferences. To capture these features, maximum entropy IRL has been getting attention because its probabilistic model can consider suboptimality. While accurate modeling and prediction can be expected, maximum entropy IRL needs to calculate the partition function, which requires large computational costs. Thus, we cannot apply this model to a high-dimensional space for detailed car modeling. In addition, existing research attempts to reduce these costs by approximating maximum entropy IRL; however, a combination of efficient path planning and proper parameter updating is required for an accurate approximation, and existing methods have not achieved both. In this study, we leverage a rapidly-exploring random tree (RRT) motion planner and efficiently sample multiple informative paths from the generated trees. We also propose a novel RRT-based importance sampling scheme for an accurate approximation. These two processes ensure a stable and fast IRL model in a large high-dimensional space. Experimental results on artificial environments show that our approach improves stability and is faster than existing IRL methods.
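
To illustrate the partition-function issue the abstract refers to, here is a hedged numpy sketch of an importance-sampled estimate of Z in maximum entropy IRL, where P(path) is proportional to exp(theta·f(path)). The features, proposal density and randomly generated paths are placeholders; in the paper the sampled paths come from an RRT planner.

```python
# Sketch of an importance-sampled partition-function estimate for maximum
# entropy IRL; features, proposal density and sampled paths are placeholders
# (in the paper they come from an RRT motion planner).
import numpy as np

rng = np.random.default_rng(2)
theta = np.array([0.8, -1.2])                 # reward weights (assumed)

def features(path: np.ndarray) -> np.ndarray:
    """Toy path features: [mean progress, mean |lateral offset|]."""
    return np.array([path[:, 0].mean(), np.abs(path[:, 1]).mean()])

# Paths sampled from a proposal q (stand-in for RRT-generated paths),
# together with their proposal densities q(path).
paths = [rng.normal(size=(20, 2)) for _ in range(1000)]
q_density = np.full(len(paths), 1.0)          # uniform proposal (assumed)

# Z ~= (1/N) * sum_i exp(theta . f(path_i)) / q(path_i)
weights = np.array([np.exp(theta @ features(p)) for p in paths]) / q_density
Z_hat = weights.mean()
log_likelihood = theta @ features(paths[0]) - np.log(Z_hat)
print(Z_hat, log_likelihood)
```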

09:30-10:50, Paper We-Po1S.11
Recognising Place under Distinct Weather Variability, a Comparison between End-To-End and Metric Learning Approaches
Role', Stephane | University of Warwick
Marnerides, Demetris | Independent
Debattista, Kurt | University of Warwick
Cavazzi, Stefano | Ordnance Survey
Dianati, Mehrdad | University of Warwick
Keywords: Mapping and Localization, Vision Sensing and Perception, Vehicle Environment Perception
Abstract: Autonomous driving requires robust and accurate real-time localisation information to navigate and perform trajectory planning. Although Global Navigation Satellite Systems (GNSS) are most frequently used in this application, they are unreliable within urban environments because of multipath and non-line-of-sight errors. Alternative solutions exist that exploit rich visual content from images that can be corresponded with a stored representation, such as a map, to determine the vehicle's location. However, one major cause of reduced location accuracy is variations in environmental conditions between the images captured and those stored in the representation. We tackle this issue directly by collecting a simulated and real-world dataset captured over a single route under multiple environmental conditions. We demonstrate the effectiveness of an end-to-end approach in recognising place and, by extension, determining vehicle location.

09:30-10:50, Paper We-Po1S.12
Infrastructure-Based Object Detection and Tracking for Cooperative Driving Automation: A Survey
Bai, Zhengwei | University of California, Riverside
Wu, Guoyuan | University of California-Riverside
Qi, Xuewei | University of California, Riverside
Liu, Yongkang | University of Texas at Dallas
Oguchi, Kentaro | Toyota Motor North America R&D
Barth, Matthew | University of California-Riverside
Keywords: Image, Radar, Lidar Signal Processing, Vehicle Environment Perception, Cooperative Systems (V2X)
Abstract: Object detection and tracking play a fundamental role in enabling Cooperative Driving Automation (CDA), which is regarded as the revolutionary solution to addressing safety, mobility, and sustainability issues of contemporary transportation systems. Although current computer vision technologies can provide satisfactory object detection results in occlusion-free scenarios, the perception performance of onboard sensors is inevitably limited by the range and occlusion. Owing to the flexible location and pose for sensor installation, infrastructure-based detection, and tracking systems can enhance the perception capability of connected vehicles; as such, they have quickly become a popular research topic. In this survey paper, we review the research progress for infrastructure-based object detection and tracking systems. Architectures of roadside perception systems based on different types of sensors are reviewed to show a high-level description of the workflows for infrastructure-based perception systems. Roadside sensors and different perception methodologies are reviewed and analyzed with detailed literature to provide a low-level explanation for specific methods followed by Datasets and Simulators to draw an overall landscape of infrastructure-based object detection and tracking methods. We highlight current opportunities, open problems, and anticipated future trends.

09:30-10:50, Paper We-Po1S.13
Cloud Assisted Connected and Automated Mobility System Architecture Design and Experimental Verification: The 5G-MOBIX Autonomous Truck Routing Use Case
Sari, Tahir | Ford Otosan
Sever, Mert | Istanbul Technical University
Candan, Arda Taha | TÜBİTAK BİLGEM
Girgin, Emre | TÜBİTAK BİLGEM
Çibuk Girgin, Gülsüm Tuba | TÜBİTAK BİLGEM
Haklidir, Mehmet | TUBITAK BILGEM
Keywords: V2X Communication, Vehicle Control, Vehicle Environment Perception
Abstract: In this study, the potential usage of 5G networks for a cloud-assisted connected and autonomous mobility solution is demonstrated with preliminary field tests of the Autonomous Truck Routing use case in the H2020-ICT18 5G-MOBIX project. The preliminary tests are performed at the Ford Otosan test track in Eskişehir, Turkey. The SAFIR cloud server developed by TUBITAK BILGEM is used over a 5G network. The 5G-MOBIX Autonomous Truck Routing system architecture is composed of an autonomous truck, a cloud, and smart infrastructure with three roadside units equipped with LIDAR sensors. The navigation software stack is operated on the cloud according to data received from the autonomous truck and the roadside units. The on-board unit performs the required data transmission between the cloud and the truck. Motion control algorithms are executed to follow reference waypoints with a rapid prototyping controller in the autonomous truck. The feasibility of the proposed connected and automated mobility system architecture is verified.

09:30-10:50, Paper We-Po1S.14
Spatiotemporal Transformer Attention Network for 3D Voxel Level Joint Segmentation and Motion Prediction in Point Cloud
Wei, Zhensong | University of California, Riverside
Qi, Xuewei | University of California, Riverside
Bai, Zhengwei | University of California, Riverside
Wu, Guoyuan | University of California-Riverside
Nayak, Saswat Priyadarshi | CE-CERT, University of California Riverside
Hao, Peng | University of California, Riverside
Barth, Matthew | University of California-Riverside
Liu, Yongkang | University of Texas at Dallas
Oguchi, Kentaro | Toyota ITC
Keywords: Lidar Sensing and Perception, Deep Learning, Situation Analysis and Planning
Abstract: Environment perception tasks, including detection, classification, tracking, and motion prediction, are key enablers for automated driving systems and intelligent transportation applications. Fueled by the advances in sensing technologies and machine learning techniques, LiDAR-based sensing systems have become a promising solution. The current challenges of this solution are how to effectively combine different perception tasks into a single backbone and how to efficiently learn the spatiotemporal features directly from point cloud sequences. In this research, we propose a novel spatiotemporal attention network based on a transformer self-attention mechanism for joint semantic segmentation and motion prediction within a point cloud at the voxel level. The network is trained to simultaneously output the voxel-level class and predicted motion by learning directly from a sequence of point cloud datasets. The proposed backbone includes both a temporal attention module (TAM) and a spatial attention module (SAM) to learn and extract the complex spatiotemporal features. This approach has been evaluated with the nuScenes dataset, and promising performance has been achieved.
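
A minimal PyTorch sketch of temporal self-attention over per-voxel feature sequences, included only to illustrate the general mechanism named above; the dimensions, residual wiring and naming are assumptions, not the paper's TAM/SAM design.

```python
# Minimal sketch of temporal self-attention over a sequence of voxel features;
# dimensions and wiring are assumptions, not the paper's TAM/SAM design.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention across the time axis of per-voxel feature vectors."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_voxels, num_frames, dim) -- each voxel attends over its own history
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)          # residual connection

tam = TemporalAttention()
voxel_seq = torch.randn(1024, 5, 64)       # 1024 voxels, 5 LiDAR sweeps, 64-d features
fused = tam(voxel_seq)                     # same shape, temporally fused features
```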

09:30-10:50, Paper We-Po1S.15
Seeing Nearby 3D Scenes Using Ultrasonic Sensors
Shimoyama, Daina | Nagoya Institute of Technology
Sakaue, Fumihiko | Nagoya Institute of Technology
Kumano, Shunya | SOKEN, INC
Koyama, Yu | SOKEN, INC
Matsuura, Mitsuyasu | SOKEN, INC
Sato, Jun | Nagoya Institute of Technology
Keywords: Vehicle Environment Perception, Image, Radar, Lidar Signal Processing, Deep Learning
Abstract: The ultrasonic sensors are widely used for vehicles to extract obstacles in the scene. They can obtain the distance to the object directly at a low cost even in harsh environments. However, since the information from a single ultrasonic sensor is very limited, it has not been used for recovering the detailed 3D structure and semantic labels of the scene, unlike in-vehicle cameras or LiDARs. Therefore, we in this paper propose a method for recovering the dense 3D structure and semantic labels of the scene from a moving ultrasonic sensor mounted on a vehicle. Our method uses the raw profiles of the ultrasonic sensor signals and learns the relationship between the raw ultrasonic signals and the 3D scene using multi-task learning. As a result, our method can recover the dense 3D structure and semantic labels of the scene similar to what we would recover with cameras and LiDARs just from a single moving ultrasonic sensor. The efficiency of the proposed method is tested using real sensor data as well as synthetic sensor data.

09:30-10:50, Paper We-Po1S.16
Infra Sim-To-Real: An Efficient Baseline and Dataset for Infrastructure Based Online Object Detection and Tracking Using Domain Adaptation
Shyam, Pranjay | Korea Advanced Institute of Science and Technology, KAIST
Mishra, Sumit | Korea Advanced Institute of Science and Technology
Yoon, Kuk-Jin | KAIST
Kim, Kyung-Soo | Korea Advanced Institute of Science and Technology
Keywords: Smart Infrastructure, Vision Sensing and Perception, Deep Learning
Abstract: Increasing usage of traffic cameras provides an opportunity to utilize them for smart city applications. However, the efficacy of such systems is determined by their ability to accurately detect and track objects of interest from diverse viewpoints. This is challenging due to the diverse viewpoints, elevations, and distinct properties of camera sensors. Thus, to ensure robust performance, the training dataset should cover many variations, including viewpoints, illumination changes, and diverse weather conditions. However, constructing such a dataset is expensive in terms of data collection and annotation. This paper proposes an unsupervised domain adaptation approach wherein a synthetic dataset is generated using a simulator and subsequently used to ensure performance consistency of multi-object-tracking (MOT) algorithms across a diverse range of manually annotated natural scenes. Towards this end, we emphasize achieving domain-invariant object detection by combining image stylization and class-balancing augmentation. Furthermore, we extend the robust detection algorithm to track detected objects across a large time scale using feature embeddings generated by the detector. Based on qualitative and quantitative results, we demonstrate the viability of such a system that is invariant to illumination, weather, viewpoint, and scene changes while providing a baseline for future research. The codebase and datasets will be made available at https://github.com/pranjay-dev/IS2R.

09:30-10:50, Paper We-Po1S.17
Dynamic Adjustment of Reward Function for Proximal Policy Optimization with Imitation Learning: Application to Automated Parking Systems
Albilani, Mohamad | Telecom SudParis
Bouzeghoub, Amel | Telecom SudParis
Keywords: Reinforcement Learning, Automated Vehicles, Advanced Driver Assistance Systems
Abstract: An automated parking system (APS) is responsible for performing a parking maneuver securely and time-efficiently with full autonomy. These systems mainly include three methods: parking spot exploration, path planning, and path tracking. In the literature, there are several path planning and tracking methods where the application of reinforcement learning is widespread. However, performance tuning and ensuring efficiency remain a significant open problem. Moreover, these methods suffer from a non-linearity issue of vehicle dynamics, which causes a deviation from the original route, and do not respect the BS ISO 16787-2017 standard that outlines the minimum requirements needed in APS. In order to overcome these limitations, our contribution in this paper, named DPPO-IL, is fourfold: (i) a new framework using the Proximal Policy Optimization algorithm which allows an agent to explore a parking spot, plan, and learn the acceleration, braking, and steering angle to park a vehicle in a random spot while avoiding static and dynamic obstacles; (ii) a dynamic adjustment of the reward function using intrinsic reward signals to induce the agent to explore more; (iii) an approach to learn policies from expert demonstrations using imitation learning combined with deep reinforcement learning to speed up the learning phase and reduce the training time; (iv) a task-specific curriculum learning scheme to train the agent in a very complex environment. Experiments show promising results; in particular, our approach managed to achieve a 90% success rate where 97% of the successful parkings were aligned with the parking spot, with an inclination angle greater than ±0.2° and a deviation less than 0.1 meter. These results exceeded the state of the art while respecting the ISO 16787-2017 standard.

09:30-10:50, Paper We-Po1S.18
User Experience Evaluation of SAE Level 3 Driving on a Test Track
Wintersberger, Philipp | TU Wien
Sadeghian, Shadan | University of Siegen
Schartmüller, Clemens | Technische Hochschule Ingolstadt
Frison, Anna Katharina | Technische Hochschule Ingolstadt (THI)
Riener, Andreas | Technische Hochschule Ingolstadt
Keywords: Hand-off/Take-Over, Automated Vehicles, Novel Interfaces and Displays
Abstract: Studies on imminent Take-Over Requests (TORs) in automated driving have mostly addressed safety aspects rather than user experience (UX). In this study, we investigated the fulfillment of user needs during SAE L3 driving on a test track. Participants engaged in non-driving related tasks (NDRTs), using either a smartphone or the auditory modality, and had to respond to critical TORs to prevent an accident. Our results, based on qualitative methods, show that participants expect L3 vehicles to be safe and confirmed this assessment after the test track experience. Further, participants preferred NDRTs using the auditory modality over the smartphone to maintain situation awareness. Our study indicates that drivers may behave responsibly in L3 vehicles, provided they are supported with user interfaces that allow them to engage in NDRTs while fulfilling their needs for security, autonomy, competence, and stimulation.

09:30-10:50, Paper We-Po1S.19
Motion Sickness Modeling with Visual Vertical Estimation and Its Application to Autonomous Personal Mobility Vehicles
Liu, HaiLong | Nara Institute of Science and Technology
Inoue, Shota | Nara Institute of Science and Technology
Wada, Takahiro | Nara Institute of Science and Technology
Keywords: Autonomous / Intelligent Robotic Vehicles, Human-Machine Interface, Driver State and Intent Recognition
Abstract: Passengers (drivers) of level 3-5 autonomous personal mobility vehicles (APMVs) and cars can perform non-driving tasks, such as reading books and smartphones, while driving. It has been pointed out that such activities may increase motion sickness. Many studies have been conducted to build countermeasures, and various computational motion sickness models have been developed. Many of these are based on subjective vertical conflict (SVC) theory, which describes the conflict between the vertical direction sensed by human sensory organs and that expected by the central nervous system. Such models are expected to be applied to autonomous driving scenarios. However, no current computational model can integrate visual vertical information with vestibular sensations. We propose a 6 DoF SVC-VV model which adds a visually perceived vertical (VV) block to a conventional six-degrees-of-freedom SVC model to predict VV directions from image data simulating the visual input of a human; hence, a simple image-based VV estimation method is proposed. To validate the proposed model, this paper focuses on the fact that motion sickness increases as a passenger reads a book while riding an APMV, assuming that the VV plays an important role. In a static experiment, it is demonstrated that the VV estimated by the proposed method accurately describes the gravitational acceleration direction with a low mean absolute deviation. In addition, the results of the driving experiment using an APMV demonstrated that the proposed 6 DoF SVC-VV model could describe the increased motion sickness experienced when the VV and gravitational acceleration directions were different.

09:30-10:50, Paper We-Po1S.20
MCS-SLAM: Multi-Cues Multi-Sensors Fusion SLAM
Frosi, Matteo | Politecnico Di Milano (Milan, Italy)
Matteucci, Matteo | Politecnico Di Milano - DEIB
Keywords: Mapping and Localization, Sensor and Data Fusion, Information Fusion
Abstract: Simultaneous localization and mapping (SLAM) is one fundamental topic in robotics due to its applications in autonomous driving. Over the last decades, many systems have been proposed, working on data coming from different sensors, such as cameras or LiDARs. Although excellent results were reached, the majority of these methods exploit the data as is, without extracting additional information or considering multiple sensors simultaneously. In this paper, we present MCS-SLAM, a Graph SLAM system that performs sensor fusion by exploiting multi-cues extracted from sensor data: color/intensity, depth/range and normal information. For each sensor, motion estimation is achieved through minimization of the pixel-wise difference between two multi-cue images. All estimates are then collectively optimized to achieve a coherent transformation. Point clouds received as input are also used to perform loop detection and closure. We compare the performance of the proposed system with state-of-the-art point cloud-based methods, LeGO-LOAM-BOR, LIO-SAM, HDL and ART-SLAM, and show that the proposed algorithm achieves less accuracy than the state-of-the-art, while needing much less computational time. The comparison is made by evaluating the estimated trajectory displacement, using the KITTI dataset.

09:30-10:50, Paper We-Po1S.21
A Unified Description of Proving Grounds and Test Areas for Automated and Connected Vehicles
Zofka, Marc René | FZI Research Center for Information Technology
Fleck, Tobias | FZI Research Center for Information Technology
Zöllner, J. Marius | FZI Research Center for Information Technology; KIT Karlsruhe In
Keywords: Smart Infrastructure, Automated Vehicles, Cooperative Systems (V2X)
Abstract: Highly automated and connected vehicles are being tested more and more on proving grounds as well as in test areas in public traffic. Between simulation based testing in laboratories and real-world testing, such testing environments offer the opportunity to evaluate scenarios while providing partially controllable and partially observable environments. Although the number of public test areas is increasing, their capabilities and opportunities have not yet been analyzed and certainly have not been brought to a formal description. To overcome this shortage, this paper presents a unified taxonomy that describes proving grounds and test areas for connected autonomous driving in a uniform manner. We introduce a machine-readable and processable representation, that makes it possible to analyze test areas regarding their abilities and benefits to facilitate testing of assisted, automated and connected vehicles. So, necessary technological bricks are classified, a corresponding ontology is presented, common algorithms are discussed and evaluated at the example of smart and connected infrastructure in the Test Area Autonomous Driving Baden-Württemberg. We conclude by giving an overview of future research questions to motivate researchers to use the proposed model as a description baseline for further V&V approaches.

09:30-10:50, Paper We-Po1S.22
A Comparative Analysis of Decision-Level Fusion for Multimodal Driver Behaviour Understanding
Roitberg, Alina | Karlsruhe Institute of Technology (KIT)
Peng, Kunyu | Karlsruhe Institute of Technology
Marinov, Zdravko | Karlsruhe Institute of Technology
Seibold, Constantin | Karlsruhe Institute of Technology, Institute of Anthropomatics A
Schneider, David | Karlsruhe Institute of Technology
Stiefelhagen, Rainer | Karlsruhe Institute of Technology
Keywords: Driver Recognition, Information Fusion, Sensor and Data Fusion
Abstract: Visual recognition inside the vehicle cabin leads to safer driving and more intuitive human-vehicle interaction but such systems face substantial obstacles as they need to capture different granularities of driver behaviour while dealing with highly limited body visibility and changing illumination. Multimodal recognition mitigates a number of such issues: prediction outcomes of different sensors complement each other due to different modality-specific strengths and weaknesses. While several late fusion methods have been considered in previously published frameworks, they constantly feature different architecture backbones and building blocks making it very hard to isolate the role of the chosen late fusion strategy itself. This paper presents an empirical evaluation of different paradigms for decision-level late fusion in video-based driver observation. We compare seven different mechanisms for joining the results of single-modal classifiers which have been both popular, (e.g., score averaging) and not yet considered, (e.g., rank-level fusion) in the context of driver observation evaluating them based on different criteria and benchmark settings. This is the first systematic study of strategies for fusing outcomes of multimodal predictors inside the vehicles, conducted with the goal to provide guidance for fusion scheme selection.
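
As a hedged illustration of two of the decision-level fusion rules named in this abstract, the sketch below contrasts score averaging with rank-level fusion on made-up per-modality class scores; the numbers and class/modality choices are placeholders, not the paper's benchmark.

```python
# Sketch of two decision-level fusion rules (score averaging vs. rank-level fusion);
# the scores below are made-up per-modality softmax outputs, not real data.
import numpy as np
from scipy.stats import rankdata

# Class scores from three modalities (e.g. RGB, IR, depth) for 4 activity classes.
scores = np.array([
    [0.10, 0.60, 0.20, 0.10],   # modality 1
    [0.30, 0.30, 0.30, 0.10],   # modality 2
    [0.05, 0.40, 0.50, 0.05],   # modality 3
])

# Score averaging: mean of the per-modality probabilities.
avg_fused = scores.mean(axis=0)
pred_avg = int(np.argmax(avg_fused))

# Rank-level fusion: convert each modality's scores to ranks, then sum the ranks
# (a higher summed rank means the class is more consistently preferred).
ranks = np.vstack([rankdata(s) for s in scores])
pred_rank = int(np.argmax(ranks.sum(axis=0)))

print(pred_avg, pred_rank)   # the two rules may disagree on borderline samples
```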

09:30-10:50, Paper We-Po1S.23
Dynamic Conflict Mitigation for Cooperative Driving Control of Intelligent Vehicles
Oudainia, Mohamed Radjeb | LAMIH UMR CNRS 8201 (Laboratoire d'Automatique, De Mécanique Et
Sentouh, Chouki | LAMIH/CNRS University of Valenciennes
Nguyen, AnhTu | Université Polytechnique Hauts-De-France
Popieul, Jean-Christophe | Universite De Valenciennes
Keywords: Autonomous / Intelligent Robotic Vehicles, Vehicle Control, Advanced Driver Assistance Systems
Abstract: The work described in this paper proposes a new dynamic conflict attenuation strategy in driving shared control for intelligent vehicles lane keeping systems. This strategy takes into account the activity and availability of the driver as well as the external risk and conflict between the driver and the control system in order to manage and adapt the level of assistance in real time. The design of an adaptive shared controller is based on a dynamic multi-objective cost function that changes according to the level of assistance. Based on Lyapunov stability arguments, the global asymptotical stability of the closed-loop control system is proven and an LMI optimization is used to formulate the control design. The simulation results, conducted with the SHERPA dynamic car simulator under real-world driving situations, for different scenarios show the importance of adapting the controller in real time in order to decrease the conflict between the driver and the lane keeping system and to ensure the safety of the vehicle as well as to increase the confidence and acceptability of the driver.

09:30-10:50, Paper We-Po1S.24
Non-Local Evasive Overtaking of Downstream Incidents in Distributed Behavior Planning of Connected Vehicles
Kreidieh, Abdul Rahman | UC Berkeley
Farid, Yashar | InfoTech Labs, Toyota Motor North America R&D
Oguchi, Kentaro | Toyota Motor North America R&D
Keywords: Vehicle Control, Advanced Driver Assistance Systems, Traffic Flow and Management
Abstract: The prevalence of high-speed vehicle-to-everything (V2X) communication will likely significantly influence the future of vehicle autonomy. In several autonomous driving applications, however, the role such systems will play is seldom understood. In this paper, we explore the role of communication signals in enhancing the performance of lane change assistance systems in situations where downstream bottlenecks restrict the mobility of a few lanes. Building off of prior work on modeling lane change incentives, we design a model that 1) encourages automated vehicles to subvert lanes in which distant downstream delays are likely to occur, while also 2) ignoring greedy local incentives when such delays are needed to maintain a specific route. Numerical results on different traffic conditions and penetration rates suggest that the model successfully subverts a significant portion of delays brought about by downstream bottlenecks, both globally and from the perspective of the controlled vehicles.

09:30-10:50, Paper We-Po1S.25
Wheel Speed Is All You Need: How to Efficiently Detect Automotive Damper Defects Using Frequency Analysis
Huber, Sebastian | Technical University of Munich
Betz, Johannes | University of Pennsylvania
Lienkamp, Markus | Technische Universität München
Keywords: Active and Passive Vehicle Safety, Automated Vehicles, Sensor and Data Fusion
Abstract: Dampers are crucial components of the vehicle's suspension to enable safe and comfortable driving. Therefore, defects like an oil leakage or a gas loss need to be detected expeditiously and with high accuracy. In this paper, we present a novel approach that relies solely on wheel speed signals to detect continuous levels of damper degradation. A dedicated 100,000km real-world driving data set with multiple relevant damper defects and diverse environmental conditions is used for development and validation. Different vehicle types, routes, vehicle loads, tires, and driving styles are taken into account. Our approach comprises a frequency analysis of the wheel speed signals using the Fast Fourier Transform (FFT). A physical connection between defective dampers and oscillations in the wheel speeds enables a regression model to detect defective dampers. By using the residual sum of a polynomial fit of the FFT data points as a regressor variable, the current level of oil loss is determined. Subsequently, the remaining useful life (RUL) of the damper can be extrapolated. The resulting method is a threefold cascaded regression. In our results, we show a high sensitivity of the damper defect detection to vehicle loads as well as low sensitivity to ambient temperatures and rim sizes. The proposed method achieves a mean absolute error (MAE) of 5.4% oil loss. Future research will focus on efficiently implementing the algorithms onboard the vehicle and sending aggregated data to a remote back end for further analysis.
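
A hedged sketch of the frequency-analysis idea described above, on a synthetic wheel-speed signal: take the FFT, fit a low-order polynomial to the spectrum, and use the residual sum as the regressor variable. The signal, frequency band, fit order and interpretation are assumptions, not the paper's calibrated model.

```python
# Sketch of the frequency-analysis idea on a synthetic wheel-speed signal;
# the signal, frequency band and fit order are assumptions, not the paper's model.
import numpy as np

fs, duration = 100.0, 60.0                       # 100 Hz wheel-speed samples, 60 s window
t = np.arange(0, duration, 1.0 / fs)
# Synthetic wheel speed: constant speed + wheel-hop oscillation (grows with damper wear) + noise
wheel_speed = 15.0 + 0.05 * np.sin(2 * np.pi * 13.0 * t) + np.random.normal(0, 0.02, t.size)

spectrum = np.abs(np.fft.rfft(wheel_speed - wheel_speed.mean()))
freqs = np.fft.rfftfreq(wheel_speed.size, d=1.0 / fs)

# Fit a low-order polynomial to the spectrum and use the residual sum as the
# regressor variable: a worn damper leaves more unexplained oscillation energy.
band = (freqs > 5.0) & (freqs < 25.0)            # assumed wheel-hop band
coeffs = np.polyfit(freqs[band], spectrum[band], deg=3)
residual_sum = float(np.sum(np.abs(spectrum[band] - np.polyval(coeffs, freqs[band]))))
print(residual_sum)   # would feed a regression model mapping residual -> % oil loss
```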

09:30-10:50, Paper We-Po1S.26
Model-Based Reinforcement Learning for Advanced Adaptive Cruise Control: A Hybrid Car Following Policy
Yavas, M. Ugur | Eatron Technologies
Kumbasar, Tufan | Istanbul Technical University
Ure, Nazim | Istanbul Technical University
Keywords: Impact on Traffic Flows, Advanced Driver Assistance Systems, Reinforcement Learning
Abstract: Adaptive cruise control (ACC) is one of the frontier functionalities toward highly automated vehicles and has been widely studied by both academia and industry. However, previous ACC approaches are reactive and rely on precise information about the current state of a single lead vehicle. With the advancement in the field of artificial intelligence, particularly in reinforcement learning, there is a big opportunity to enhance the current functionality. This paper presents an advanced ACC concept with a unique environment representation and a model-based reinforcement learning (MBRL) technique which enables predictive driving. By being predictive, we refer to the capability to handle multiple lead vehicles and to have internal predictions about the traffic environment, which avoids reactive short-term policies. Moreover, we propose a hybrid policy that combines classical car following policies with the MBRL policy to avoid accidents by monitoring the internal model of the MBRL policy. Our extensive evaluation in a realistic simulation environment shows that the proposed approach is superior to the reference model-based and model-free algorithms. The MBRL agent requires only 150k samples (approximately 50 hours of driving) to converge, which is 4x more sample-efficient than model-free methods.
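
To make the hybrid-policy idea concrete, here is a hedged sketch that trusts a learned acceleration only while a world-model prediction error stays small and otherwise falls back to a classical car-following law; the Intelligent Driver Model (IDM) is used here as a stand-in, and all parameters and the switching rule are assumptions, not the paper's monitored policy.

```python
# Sketch of a hybrid car-following policy: use the learned (MBRL) action only while
# the internal model's prediction error is small, otherwise fall back to a classical
# law (IDM here as a stand-in). All parameters are assumptions.
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, s0=2.0, a_max=1.5, b=2.0):
    """Intelligent Driver Model acceleration [m/s^2]."""
    s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** 4 - (s_star / max(gap, 0.1)) ** 2)

def hybrid_accel(mbrl_accel, model_pred_error, v, v_lead, gap, error_threshold=0.5):
    """Use the learned action unless the world-model error signals distribution shift."""
    if model_pred_error > error_threshold:
        return idm_accel(v, v_lead, gap)     # safe classical fallback
    return mbrl_accel

# Example: ego at 20 m/s, lead at 18 m/s, 25 m gap, learned action +0.3 m/s^2
print(hybrid_accel(0.3, model_pred_error=0.1, v=20.0, v_lead=18.0, gap=25.0))
print(hybrid_accel(0.3, model_pred_error=1.2, v=20.0, v_lead=18.0, gap=25.0))
```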

09:30-10:50, Paper We-Po1S.27
Sharpness Continuous Path Optimization and Sparsification for Automated Vehicles
Kumar, Mohit | Forschungszentrum Informatik, Karlsruhe
Strauss, Peter | MAN Truck & Bus SE
Kraus, Sven | Technische Universität München
Tas, Omer Sahin | FZI Research Center for Information Technology
Stiller, Christoph | Karlsruhe Institute of Technology
Keywords: Vehicle Control, Automated Vehicles, Self-Driving Vehicles
Abstract: We present a path optimization approach that ensures driveability while considering a vehicle's lateral dynamics. The lateral dynamics are non-holonomic; therefore, a vehicle cannot follow a path with abrupt changes even with infinitely fast steering. The curvature and sharpness, i.e., the rate change of curvature with respect to the traveled distance, must be continuous to track a defined reference path efficiently. Existing path optimization techniques typically include sharpness limitations but not sharpness continuity. The sharpness discontinuity is especially problematic for heavy-duty vehicles because their actuator dynamics are even slower than cars. We propose an algorithm that constructs a sparsified sharpness continuous path for a given reference path considering the limits on sharpness and its derivative, which subsequently addresses the torque restrictions of the actuator. The sharpness continuous path needs less steering effort and reduces mechanical stress and fatigue in the steering unit. We compare and present the outcomes for each of the three different types of optimized paths. Simulation results demonstrate that computed sharpness continuous path profiles reduce lateral jerks, enhancing comfort and driveability.

09:30-10:50, Paper We-Po1S.28
G-VOM: A GPU Accelerated Voxel Off-Road Mapping System
Overbye, Tim | Texas A&M University
Saripalli, Srikanth | Texas A&M University
Keywords: Mapping and Localization, Vehicle Environment Perception, Self-Driving Vehicles
Abstract: We present a local 3D voxel mapping framework for off-road path planning and navigation. Our method provides both hard and soft positive obstacle detection, negative obstacle detection, slope estimation, and roughness estimation. By using a 3D array lookup table data structure and by leveraging the GPU it can provide online performance. We then demonstrate the system working on three vehicles, a Clearpath Robotics Warthog, Moose, and a Polaris Ranger, and compare against a set of pre-recorded waypoints. This was done at 4.5 m/s in autonomous operation and 12 m/s in manual operation with a map update rate of 10 Hz. Finally, an open-source ROS implementation is provided. https://github.com/unmannedlab/G-VOM
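
A CPU-only numpy sketch of the core data structure the abstract mentions: binning a point cloud into a fixed-size 3D voxel array usable as a lookup table. Grid size, resolution and the occupancy rule are assumptions; G-VOM itself runs on the GPU and adds obstacle, slope and roughness layers.

```python
# CPU/numpy sketch of binning a point cloud into a fixed-size 3D voxel array;
# grid size, resolution and the occupancy rule are assumptions (G-VOM itself
# runs on the GPU with additional obstacle/slope/roughness estimation).
import numpy as np

resolution = 0.4                                  # voxel edge length [m]
grid_shape = (128, 128, 32)                       # x, y, z voxels around the vehicle
origin = np.array([-25.6, -25.6, -3.0])           # grid origin in the vehicle frame [m]

def voxelize(points: np.ndarray) -> np.ndarray:
    """Return a per-voxel hit-count array from an (N, 3) point cloud."""
    counts = np.zeros(grid_shape, dtype=np.int32)
    idx = np.floor((points - origin) / resolution).astype(int)
    valid = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    np.add.at(counts, tuple(idx[valid].T), 1)     # scatter-add hits into the 3D array
    return counts

points = np.random.uniform(low=[-25, -25, -2], high=[25, 25, 3], size=(50000, 3))
occupancy = voxelize(points) > 0                  # boolean occupancy lookup table
print(occupancy.sum(), "occupied voxels")
```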

09:30-10:50, Paper We-Po1S.29
Systematic Evaluation of a Centralized Non-Recurrent Queue Management System
Yang, Hao | McMaster University
Farid, Yashar | InfoTech Labs, Toyota Motor North America R&D
Oguchi, Kentaro | Toyota Motor North America R&D
Keywords: Advanced Driver Assistance Systems, Impact on Traffic Flows, Cooperative Systems (V2X)
Abstract: Vehicle incidents or anomalously slow/stopping vehicles generate non-recurrent queues and partially block roads. The queues result in unbalanced lane-level traffic, and the large speed differences among lanes increase the difficulty for the queued vehicles to make lane changes to avoid downstream congestion. In this paper, a centralized non-recurrent queue management (C-NRQM) system is implemented to assist connected vehicles around non-recurrent queues with advisory speed and lane changing instructions to mitigate road congestion as well as to minimize the travel time delay and collision risk of all vehicles. A systematic evaluation of the system is conducted with microscopic traffic simulations to assess its mobility and safety benefits under different market penetration rates (MPRs) of connected vehicles. The social responsibility of the system with respect to the fairness of all road users, and its performance in a competitive environment with different connected vehicle applications, are also evaluated to illustrate its real-world implementation in future transportation systems. The system can reduce travel time delay by more than 80% for roads with medium congestion, and by more than 50% for more congested roads. Also, the system evaluation demonstrates that centralized management has a distinct advantage in improving network performance at high MPRs of connected vehicles and in eliminating the negative impact of the competition between different mobility services.

09:30-10:50, Paper We-Po1S.30
Real-Time Cooperative Motion Planning Using Efficient Model Predictive Contouring Control
Pauls, Jan-Hendrik | Karlsruhe Institute of Technology (KIT)
Boxheimer, Mario | Karlsruhe Institute of Technology
Stiller, Christoph | Karlsruhe Institute of Technology
Keywords: Situation Analysis and Planning, Cooperative Systems (V2X), Collision Avoidance
Abstract: Currently, there is a gap in motion planning approaches. On the one hand, there are optimization-based motion planning techniques which can guarantee safety and feasibility, but are either slow and cooperative or fast and uncooperative. On the other hand, there are learned approaches that are fast and cooperative, but cannot give these desirable guarantees. We propose to combine model predictive contouring control (MPCC) with sophisticated collision avoidance formulations to bridge this gap. By optimizing the total utility of all traffic participants, a cooperative, safe, and feasible trajectory can be planned in real time. Examination of various collision avoidance constraints allows to obtain considerate trajectories while preserving real-time capabilities. A novel inter-stage constraint formulation allows to introduce time-based distance measures in time-discretized MPC formulations. We evaluate the resulting motion planner in various scenarios, comparing two state-of-the-art solvers.

09:30-10:50, Paper We-Po1S.31
ROOAD: RELLIS Off-Road Odometry Analysis Dataset
Chustz, George | Texas A&M University
Saripalli, Srikanth | Texas A&M University
Keywords: Vision Sensing and Perception, Mapping and Localization, Sensor and Data Fusion
Abstract: The development and implementation of visual-inertial odometry (VIO) has focused on structured environments, but interest in localization in off-road environments is growing. In this paper, we present the RELLIS Off-road Odometry Analysis Dataset (ROOAD), which provides high-quality, time-synchronized off-road monocular visual-inertial data sequences to further the development of related research. We evaluated the dataset on two state-of-the-art VIO algorithms, (1) OpenVINS and (2) VINS-Fusion. Our findings indicate that both algorithms perform 2 to 30 times worse on the ROOAD dataset compared to their performance in structured environments. Furthermore, OpenVINS has better tracking stability and real-time performance than VINS-Fusion in the off-road environment, while VINS-Fusion outperformed OpenVINS in tracking accuracy on several data sequences. Since the camera-IMU calibration tool from the Kalibr toolkit is used extensively in this work, we have included several calibration data sequences. Our hand measurements show that Kalibr's tool achieved +/-1 degree orientation error and, for position error in the camera frame between the camera and IMU, +/-1 mm at best (x- and y-axis) and +/-10 mm at worst (z-axis). This novel dataset provides a new set of scenarios for researchers to design and test their localization algorithms on, as well as critical insights into the current performance of VIO in off-road environments. ROOAD Dataset: github.com/unmannedlab/ROOAD
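One common way to quantify such accuracy comparisons is the root-mean-square absolute trajectory error over time-aligned positions; a minimal sketch (the metric and alignment choices here are assumptions, not necessarily those used by the authors):

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root-mean-square absolute trajectory error of time-aligned position estimates.
    est_xyz, gt_xyz: (N, 3) arrays sampled at the same timestamps."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# e.g. ratio = ate_rmse(est_offroad, gt_offroad) / ate_rmse(est_structured, gt_structured)
```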
|
|
09:30-10:50, Paper We-Po1S.32 | Add to My Program |
Advances in Real-Time Online Vehicle Camera Calibration Via Road Line Markings Parallelism Enforcement |
|
Bellusci, Matteo | Politecnico Di Milano - DEIB |
Matteucci, Matteo | Politecnico Di Milano - DEIB |
Keywords: Vision Sensing and Perception, Image, Radar, Lidar Signal Processing, Vehicle Environment Perception
Abstract: Cameras are among the most used sensors in Advanced Driver Assistance Systems (ADAS) and autonomous vehicles for their low cost and rich stream of information. Nevertheless, they require accurate extrinsic calibration to refer external features, e.g., obstacles and road line markings, to the vehicle reference frame. In this paper, we present a real-time online calibration procedure designed to adjust the camera's pitch and height estimates by enforcing road line markings parallelism. Unlike most approaches in the literature, ours is not limited to straight line markings: under the assumption of locally constant width, parallelism is also enforced for high-curvature line markings. Furthermore, to take into account the vehicle dynamics, e.g., accelerations and braking, our estimation procedure is framed in the context of an inverted pendulum dynamical system for which a robust filter is proposed. Finally, we experimentally assess the performance of the overall approach in both simulated and real scenarios.
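As a hedged illustration of why marking parallelism constrains the extrinsics: under a zero-roll pinhole model, locally parallel markings project to lines meeting at a vanishing point whose image row fixes the camera pitch. The sign convention below is an assumption, and this is not the authors' estimator, which additionally filters pitch through an inverted-pendulum model:

```python
import numpy as np

def pitch_from_vanishing_row(v_vp, cy, fy):
    """Camera pitch (rad) from the image row of the lane-markings vanishing point,
    assuming a pinhole camera with zero roll; positive pitch means the camera looks
    below the horizon (the convention is an assumption)."""
    return float(np.arctan2(cy - v_vp, fy))

# Example: principal point row cy = 540, focal length fy = 1000 px,
# vanishing point at row 500 -> pitch of roughly +2.3 degrees.
print(np.degrees(pitch_from_vanishing_row(500.0, 540.0, 1000.0)))
```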
|
|
09:30-10:50, Paper We-Po1S.33 | Add to My Program |
Taxonomies of Connected, Cooperative and Automated Mobility |
|
Geissler, Torsten | Federal Highway Research Institute (BASt) |
Shi, Elisabeth | Federal Highway Research Institute (BASt) |
Keywords: Smart Infrastructure, Cooperative ITS, Automated Vehicles
Abstract: Support from the physical and digital road infrastructure can extend the conditions under which connected and automated vehicles can operate safely. While there are separate concepts for the Operational Design Domain (ODD) and Infrastructure Support for Automated Driving (ISAD), there is no clear picture of their interplay yet. This paper suggests an integrated perspective on the challenge of cross-sector collaboration for the benefit of Connected, Cooperative and Automated Mobility (CCAM). Taxonomies are analyzed from three perspectives: the user, the vehicles and the road infrastructure. It is found that besides well-established concepts (SAE J 3016, Principles of Operations Framework) there are a number of emerging taxonomies that fit consistently into the overall collaboration landscape. These taxonomies include the user communication on automated driving, the cooperation classes (SAE J 3216), Infrastructure Support for Automated Driving (ISAD) and Levels of Service for Automated Driving (LOSAD), the latter two recently proposed as elements of a Smart Roads Classification. It is concluded that the taxonomies should be used and applied as a shared understanding, which calls for close collaboration between the actors in order to prepare, pilot, test and deploy CCAM services in the coming decade(s).
|
|
09:30-10:50, Paper We-Po1S.34 | Add to My Program |
Fusion Attention Network for Autonomous Cars Semantic Segmentation |
|
Wang, Chuyao | City University of London |
Aouf, Nabil | City University of London |
Keywords: Vehicle Environment Perception, Image, Radar, Lidar Signal Processing, Self-Driving Vehicles
Abstract: Semantic segmentation is vital for autonomous car scene understanding. It provides more precise subject information than raw RGB images and this, in turn, boosts the performance of autonomous driving. Recently, self-attention methods have shown great improvement in image semantic segmentation. Attention maps help scene parsing by capturing rich relationships between every pixel in an image; however, computing them is demanding. Besides, existing works focus either on channel attention, ignoring pixel position factors, or on spatial attention, disregarding the impact of the channels on each other. To address these problems, we present a Fusion Attention Network based on the self-attention mechanism to harvest rich contextual dependencies. This model consists of two chains: pyramid fusion spatial attention and fusion channel attention. We apply pyramid sampling in the spatial attention module to reduce the computation of spatial attention maps. Channel attention has a similar structure to the spatial attention module. We also introduce a fusion technique to calculate contextual dependencies using features from both attention chains. We concatenate the results from the spatial and channel attention modules as the enhanced attention map, leading to better semantic segmentation results. We conduct extensive experiments on popular datasets with different settings, in addition to an ablation study, to prove the efficiency of our approach. Our model achieves better results on Cityscapes compared to state-of-the-art methods and also shows good generalization capability on PASCAL VOC 2012.
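To make the pyramid-sampled spatial attention idea concrete, here is a minimal PyTorch sketch; the module name, pooling sizes, and channel-reduction factor are assumptions for illustration, and this is not the authors' Fusion Attention Network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidSpatialAttention(nn.Module):
    """Spatial self-attention where keys/values are pooled at several scales,
    shrinking the attention map from (HW x HW) to (HW x S) with S << HW."""
    def __init__(self, channels, pool_sizes=(1, 3, 6, 8)):
        super().__init__()
        assert channels >= 8, "channel reduction factor of 8 assumed"
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.pool_sizes = pool_sizes

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)            # (B, HW, C/8)
        keys, values = [], []
        for s in self.pool_sizes:                               # pyramid sampling
            p = F.adaptive_avg_pool2d(x, s)
            keys.append(self.key(p).flatten(2))                 # (B, C/8, s*s)
            values.append(self.value(p).flatten(2))             # (B, C, s*s)
        k = torch.cat(keys, dim=2)                              # (B, C/8, S)
        v = torch.cat(values, dim=2)                            # (B, C, S)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)   # (B, HW, S)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, c, h, w)
        return out + x                                          # residual connection
```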
|
|
09:30-10:50, Paper We-Po1S.36 | Add to My Program |
A Sequential Decision-Theoretic Method for Detecting Mobile Robots Localization Failures |
|
Liang, Sun | Sichuan Changhong Electronics Holding Groups Co |
Menghong, Liu | Sichuan Changhong Electric Co., Ltd |
Huayi, Zhan | Sichuan Changhong Electric Co., Ltd |
Ying, Wu | Electrical Engineering and Computer Science, Northwestern University |
Keywords: Autonomous / Intelligent Robotic Vehicles, Mapping and Localization
Abstract: Many methods in mobile robotics evaluate the localization performance of a robot using only the current sensor measurement, for example scan matching and particle filter methods. This immediate-detection methodology causes a problem: when a well-localized robot obtains a poor sensor measurement, it may mistake momentary observation noise for a localization failure. In this paper, we propose a new robot localization fault detection method to resolve this problem. We model localization fault detection as a sequential decision-making problem, where the decision to declare a localization failure is based on long-term sensor measurements. We employ two parameters, the false-positive and false-negative observation error probabilities, which eliminate the influence of noisy observations. Further, the proposed method derives Bayesian update equations for the integration of long-term observations and presents an analytic formula representing the belief function of the reliability of localization results. Experimental studies validate the effectiveness of the proposed method.
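A minimal sketch of the kind of sequential Bayesian belief update described above, parameterized by false-positive and false-negative observation error probabilities (the parameter names and values are illustrative, not the paper's):

```python
def update_belief(belief_ok, observation_ok, p_fp=0.1, p_fn=0.2):
    """One Bayesian update of the belief that localization is healthy.
    p_fp: probability a healthy localization yields a 'bad' observation (false positive).
    p_fn: probability a failed localization yields a 'good' observation (false negative)."""
    if observation_ok:
        like_ok, like_fail = 1.0 - p_fp, p_fn
    else:
        like_ok, like_fail = p_fp, 1.0 - p_fn
    num = like_ok * belief_ok
    den = num + like_fail * (1.0 - belief_ok)
    return num / den

# A single noisy observation barely moves the belief; a long run of bad ones does.
belief = 0.9
for ok in [True, False, False, True, False]:
    belief = update_belief(belief, ok)
print(belief)
```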
|
|
09:30-10:50, Paper We-Po1S.37 | Add to My Program |
Deep-Learning-Based Anomaly Detection for Lane-Changing Decisions |
|
Wang, Sheng-Li | National Taiwan University |
Lin, Chien | National Taiwan University |
Boddupalli, Srivalli | University of Florida |
Lin, Chung-Wei | National Taiwan University |
Ray, Sandip | University of Florida |
Keywords: Vehicle Environment Perception, Security, Deep Learning
Abstract: Vehicles can utilize their sensors or receive messages from other vehicles to acquire information about the surrounding environment. However, the information may be inaccurate, faulty, or maliciously compromised due to sensor failures, communication faults, or security attacks. The goal of this work is to detect whether a lane-changing decision and the sensed or received information are anomalous. We develop three anomaly detection approaches based on deep learning: a classifier approach, a predictor approach, and a hybrid approach combining the classifier and the predictor. None of them requires anomalous data or lateral features, so they can evaluate lane-changing decisions before the vehicles start moving along the lateral axis. They achieve F1 scores against anomalies of at least 82% and up to 93% on data from Simulation of Urban MObility (SUMO) and HighD. We also examine system properties and verify that the detected anomalies include the more dangerous scenarios.
|
|
09:30-10:50, Paper We-Po1S.38 | Add to My Program |
Axial Attention Inside a U-Net for Semantic Segmentation of 3D Sparse LiDAR Point Clouds |
|
Yin, Tang-Kai | National University of Kaohsiung |
Wu, Liang-Yue | NUK |
Hong, Tzung-Pei | National University of Kaohsiung |
Keywords: Image, Radar, Lidar Signal Processing, Deep Learning
Abstract: Semantic segmentation of LiDAR point clouds is essential for driving guidance in autonomous driving. Current approaches may use complex data representations for sparse LiDAR point clouds or sophisticated neural networks to process these data; however, such methods may not be suitable for real-time prediction on mobile devices due to their high computational requirements. In this research, we propose to enhance U-Nets with axial attention modules to process the point clouds on 2D planes after spherical projection. U-Nets are simple, baseline encoder-decoder networks for semantic segmentation with convolution operations of local context. We improve baseline U-Nets by adding axial attention modules inside the network, increasing feature extraction effectiveness through the long-range dependency modeling that axial attention provides. Experiments were performed on the benchmark KITTI dataset and our lab's LiDAR dataset. In terms of mIoU (mean intersection over union), performance increased from 51.3% for U-Nets to 54.8% for the mixed axial-attention U-Nets on KITTI, and from 81.7% to 92.3% on our LiDAR dataset. In testing, the latency of the mixed axial-attention U-Nets on our LiDAR dataset was 22.3 ms per scan, which is fast enough to be real-time. We propose embedding axial attention in the decoder of U-Nets for semantic segmentation of 3D LiDAR point clouds to obtain better IoU performance and real-time prediction capability on mobile devices.
|
|
We-B-OR Regular Session, Europa Hall |
Add to My Program |
Planning and Control |
|
|
Chair: Mester, Rudolf | NTNU Trondheim |
|
10:50-11:10, Paper We-B-OR.1 | Add to My Program |
Conception and Experimental Validation of a Model Predictive Control (MPC) for Lateral Control of a Truck-Trailer |
|
Kumar, Mohit | Forschungszentrum Informatik, Karlsruhe |
Haas, Andreas | MAN Truck & Bus SE |
Strauss, Peter | MAN Truck & Bus SE |
Kraus, Sven | Technische Universität München |
Tas, Omer Sahin | FZI Research Center for Information Technology |
Stiller, Christoph | Karlsruhe Institute of Technology |
Keywords: Vehicle Control, Self-Driving Vehicles, Automated Vehicles
Abstract: The automation of a truck-trailer offers enormous potential for safe and efficient transportation. Optimal control approaches, e.g., MPC, have significantly improved the tracking accuracy and smoothness of lateral vehicle control. Applying MPC to a truck-trailer is complex compared to a car, as the system behavior differs between forward and reverse driving. In this paper, we propose a lateral MPC algorithm for a truck-trailer in which we linearize the system dynamics around a nominal trajectory computed using a control law. The control law, formulated as a cascade controller, computes the nominal trajectory, which serves as an initial guess for the optimization and lies in the vicinity of the optimal trajectory. We linearize the system dynamics around this nominal trajectory, reducing the linearization errors. The region of validity of the linearized system dynamics is narrow due to the system's instability during reverse driving. A quadratic optimization problem subject to the linear dynamics of the truck-trailer and to state and input constraints defines the optimal control problem. The nominal trajectory is stable over the prediction horizon while reversing, as is the linear prediction model, which improves optimization feasibility. Further, the discretization errors are reduced by using a small discretization step and integrating the model multiple times between two prediction steps. We tested the developed MPC approach on a prototypical full-scale truck-trailer system and discuss the results.
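In generic form, the quadratic program described above looks like the following standard linear time-varying MPC (a sketch, not the authors' exact cost, constraints, or truck-trailer model), where A_k, B_k, c_k come from linearizing the dynamics around the nominal trajectory:

```latex
\min_{u_{0:N-1}} \;\sum_{k=0}^{N-1}
  \left\| x_{k} - x^{\mathrm{ref}}_{k} \right\|^{2}_{Q}
  + \left\| u_{k} \right\|^{2}_{R}
\quad \text{s.t.}\quad
x_{k+1} = A_{k}\, x_{k} + B_{k}\, u_{k} + c_{k}, \;\;
x_{k} \in \mathcal{X}, \;\; u_{k} \in \mathcal{U}.
```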
|
|
11:10-11:30, Paper We-B-OR.2 | Add to My Program |
Vision Transformer for Learning Driving Policies in Complex and Dynamic Environments |
|
Kargar, Eshagh | Aalto University |
Kyrki, Ville | Aalto University |
Keywords: Self-Driving Vehicles, Reinforcement Learning, Deep Learning
Abstract: Driving in a complex and dynamic urban environment is a difficult task that requires a complex decision policy. In order to make informed decisions, one needs to gain an understanding of the long-range context and the importance of other vehicles. In this work, we propose to use a Vision Transformer (ViT) to learn a driving policy in urban settings with bird's-eye-view (BEV) input images. The ViT network learns the global context of the scene more effectively than earlier proposed Convolutional Neural Networks (ConvNets). Furthermore, ViT's attention mechanism helps to learn an attention map for the scene which allows the ego car to determine which surrounding cars are important to its next decision. We demonstrate that a DQN agent with a ViT backbone outperforms baseline algorithms with ConvNet backbones pre-trained in various ways. In particular, the proposed method helps reinforcement learning algorithms to learn faster, with increased performance and less data than the baselines.
|
|
11:30-11:50, Paper We-B-OR.3 | Add to My Program |
Delay-Aware Robust Control for Safe Autonomous Driving |
|
Kalaria, Dvij | Indian Institute of Technology Kharagpur |
Lin, Qin | Cleveland State University |
Dolan, John | Carnegie Mellon University |
Keywords: Self-Driving Vehicles, Vehicle Control, Collision Avoidance
Abstract: With the advancement of affordable self-driving vehicles using complicated nonlinear optimization but limited computation resources, computation time becomes a matter of concern. Other factors such as actuator dynamics and actuator command processing cost also unavoidably cause delays. In high-speed scenarios, these delays are critical to the safety of a vehicle. Recent works consider these delays individually, but none unifies them all in the context of autonomous driving. Moreover, recent works inappropriately consider computation time as a constant or a large upper bound, which makes the control either less responsive or over-conservative. To deal with all these delays, we present a unified framework by 1) modeling actuation dynamics, 2) using robust tube model predictive control, 3) using a novel adaptive Kalman filter without assuming a known process model and noise covariance, which makes the controller safe while minimizing conservativeness. On one hand, our approach can serve as a standalone controller; on the other hand, our approach provides a safety guard for a high-level controller, which assumes no delay. This can be used for compensating the sim-to-real gap when deploying a black-box learning-enabled controller trained in a simplistic environment without considering delays for practical vehicle systems.
|
|
We-C-OR Regular Session, Europa Hall |
Add to My Program |
ADAS Functions |
|
|
Chair: Olaverri-Monreal, Cristina | Chair Sustainable Transport Logistics 4.0, Johannes Kepler University Linz |
|
14:05-14:25, Paper We-C-OR.1 | Add to My Program |
Auditory and Visual Warning Information Generation of the Risk Object in Driving Scenes Based on Weakly Supervised Learning |
|
Niu, Yingjie | Nagoya University, Graduate School of Informatics, |
Ding, Ming | Nagoya University |
Zhang, Yuxiao | Nagoya University |
Ohtani, Kento | Nagoya University |
Takeda, Kazuya | Nagoya University |
Keywords: Advanced Driver Assistance Systems, Vehicle Environment Perception, Security
Abstract: In this research, a two-stage risk object warning method is proposed to generate auditory and visual warning information simultaneously from the driving scene. The auditory warning module (AWM) is designed as a classification task by combining the rough location and type information into warning sentences and treating each sentence as one class. The visual warning module (VWM) is designed as a weakly supervised method to save the labor-intensive bounding-box annotation of risk objects. To confirm the effectiveness of the proposed method, we also create a linguistic risk notification (LRN) dataset by describing each driving scenario with several different sentences. The average accuracy of the auditory warning is 96.4% for generating the warning sentences. The average accuracy of the weakly supervised visual warning algorithm is 81.3% for localizing the risk vehicle without any supervisory information.
|
|
14:25-14:45, Paper We-C-OR.2 | Add to My Program |
Cooperative Platooning with Mixed Traffic on Urban Arterial Roads |
|
Mu, Zeyu | University of Virginia |
Chen, Zheng | University of Virginia |
Ryu, Seunghan | Piedmont Authority for Regional Transportation |
Avedisov, Sergei | Toyota Motor North America R&D - InfoTech Labs |
Guo, Rui | Toyota Motor North America, R&D InfoTech Labs |
Park, B. Brian | University of Virginia |
Keywords: Automated Vehicles, Vehicle Control, Cooperative ITS
Abstract: In this paper, we showcase a framework for cooperative mixed-traffic platooning that allows the platooning vehicles to realize multiple benefits from using vehicle-to-everything (V2X) communications and advanced controls on urban arterial roads. A mixed-traffic platoon, in general, can be formed by a lead and an ego connected automated vehicle (CAV) with one or more unconnected human-driven vehicles (UHVs) in between. As this platoon approaches an intersection, the lead vehicle uses signal phase and timing (SPaT) messages from the connected intersection to optimize its trajectory for travel time and energy efficiency as it passes through the intersection. These benefits carry over to the UHVs and the ego vehicle as they follow the lead vehicle. The ego vehicle then uses information from the lead vehicle received through basic safety messages (BSMs) to further optimize its safety, driving comfort, and energy consumption. This is accomplished by the recently designed cooperative adaptive cruise control with unconnected vehicles (CACCu). The performance benefits of our framework are proven and demonstrated by simulations using real-world platooning data from the CACC Field Operation Test (FOT) dataset from the Netherlands.
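A toy illustration of the SPaT-based advisory idea for the lead vehicle (the clamping rule and speed bounds are assumptions, not the authors' trajectory optimizer):

```python
def advisory_speed(dist_to_stopbar_m, time_to_green_s, v_min=3.0, v_max=15.0):
    """Pick a cruise speed that arrives at the stop bar roughly as the signal turns green,
    clamped to a feasible band; if the light is already green (time_to_green_s <= 0),
    simply keep the maximum allowed speed."""
    if time_to_green_s <= 0.0:
        return v_max
    v = dist_to_stopbar_m / time_to_green_s
    return max(v_min, min(v_max, v))

# 150 m from the stop bar, green in 12 s -> advise 12.5 m/s instead of stopping.
print(advisory_speed(150.0, 12.0))
```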
|
|
14:45-15:05, Paper We-C-OR.3 | Add to My Program |
Energy-Optimal Control for Eco-Driving on Curved Roads |
|
Bentaleb, Ahmed | UPJV |
El Hajjaji, Ahmed | MIS UPJV |
Abdelhamid, Rabhi | MIS UPJV |
Karama, Asma | University Cadi Ayyad |
Benzaouia, Abdellah | University of Cadi Ayad |
Keywords: Eco-driving and Energy-efficient Vehicles, Automated Vehicles
Abstract: This paper studies the eco-driving problem on curved roads using optimal control. The problem is formulated as an optimization problem aimed at maximizing fuel economy. Based on road map information and a dynamic programming algorithm, the vehicle's optimal speed profile for the entire curve is calculated. The impact of road, vehicle, and algorithm parameters that strongly affect the fuel-use calculation is considered and analyzed in depth in comparison with previous studies. In addition, the cruise control functionality is also analyzed in this work. Furthermore, a co-simulation of Matlab/Simulink and CarSim is conducted for a given scenario, and the results show that approximately 3.93% fuel savings can be achieved compared with a typical driver model.
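A bare-bones sketch of the dynamic-programming step over a discretized speed grid (the cost models, discretization, and rollout are placeholders; the paper's formulation additionally handles curve geometry and comfort limits):

```python
import numpy as np

def dp_speed_profile(n_segments, speeds, fuel_cost, trans_cost):
    """Backward dynamic programming over a discretized speed grid.
    fuel_cost(i, v): cost of traversing segment i at speed v (e.g., a fuel-rate model).
    trans_cost(v, v_next): penalty for changing speed between consecutive segments,
    with trans_cost(v, v) == 0. Returns the optimal speed for each segment."""
    n_v = len(speeds)
    cost_to_go = np.zeros(n_v)
    policy = np.zeros((n_segments, n_v), dtype=int)
    for i in range(n_segments - 1, -1, -1):
        new_cost = np.empty(n_v)
        for a, v in enumerate(speeds):
            candidates = [fuel_cost(i, v) + trans_cost(v, speeds[b]) + cost_to_go[b]
                          for b in range(n_v)]
            b_best = int(np.argmin(candidates))
            new_cost[a] = candidates[b_best]
            policy[i, a] = b_best                 # best speed index for segment i + 1
        cost_to_go = new_cost
    a = int(np.argmin(cost_to_go))                # best entry speed
    profile = [speeds[a]]
    for i in range(n_segments - 1):
        a = policy[i, a]
        profile.append(speeds[a])
    return profile
```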
|
|
We-Po2S Poster Session, Foyer Eurogress |
Add to My Program |
Interactive Session We2 |
|
|
|
15:05-16:25, Paper We-Po2S.1 | Add to My Program |
MTBF Model for AVs - from Perception Errors to Vehicle-Level Failures |
|
Oboril, Fabian | Intel |
Buerkle, Cornelius | Intel |
Biton Shack, Simcha | Mobileye |
Sussmann, Alon | Mobileye |
Fabris, Simone | Mobileye |
Keywords: Vehicle Environment Perception, Active and Passive Vehicle Safety, Automated Vehicles
Abstract: The development of Automated Vehicles (AVs) is progressing quickly and the first robotaxi services are being deployed worldwide. However, to receive authority certification for mass deployment, manufacturers need to justify that their AVs operate more safely than human drivers. This in turn creates the need to estimate and model the collision rate (failure rate) of an AV, taking all possible errors and driving situations into account. In other words, there is a strong demand for comprehensive Mean Time Between Failure (MTBF) models for AVs. In this paper, we introduce such a generic and scalable model that creates a link between errors in the perception system and vehicle-level failures (collisions). Using this model, we are able to derive requirements for the perception quality based on the desired vehicle-level MTBF, or vice versa, to obtain an MTBF value given a certain mission profile and perception quality.
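As a back-of-the-envelope illustration of such a link (a generic failure-rate argument assuming independent errors, not the paper's actual model): if perception errors occur at a rate lambda_p per driven hour and only a fraction p of them propagates to a collision for a given mission profile, then

```latex
\lambda_{\mathrm{veh}} = p \,\lambda_{p}, \qquad
\mathrm{MTBF}_{\mathrm{veh}} = \frac{1}{\lambda_{\mathrm{veh}}} = \frac{1}{p\,\lambda_{p}},
```

so a target vehicle-level MTBF directly bounds the tolerable perception error rate, and vice versa.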
|
|
15:05-16:25, Paper We-Po2S.2 | Add to My Program |
Scenario and Model-Based Systems Engineering Procedure for the SOTIF-Compliant Design of Automated Driving Functions |
|
Meyer, Max-Arno | RWTH Aachen University |
Silberg, Sebastian | FEV Europe GmbH |
Granrath, Christian | FEV Europe GmbH |
Kugler, Christopher | FEV Europe GmbH |
Wachtmeister, Louis | RWTH Aachen University |
Rumpe, Bernhard | RWTH Aachen |
Christiaens, Sébastien | FEV Europe GmbH |
Andert, Jakob Lukas | RWTH Aachen University |
Keywords: Automated Vehicles, Active and Passive Vehicle Safety
Abstract: Advances in automated driving are creating new challenges for product development in the automotive industry and continuously driving up the cost of product verification and validation. Modern automated driving systems (ADS) must safely handle a considerable number of driving scenarios in compliance with the Safety of the Intended Functionality (SOTIF) standard. While model-based systems engineering (MBSE) has successfully proven itself in the automotive industry as an enabler for complex system and test design, common procedures are neither scenario-based nor do they consider SOTIF. It has yet to be shown how MBSE approaches can meet these specific requirements of ADS development and what advantages they can offer over non-model-based methods. In this paper, an extended variant of the established feature-driven MBSE procedure CUBE is presented that includes the analysis of use cases and scenarios. Use-case-specific logical scenarios and the corresponding expected behavior and system architecture are specified using SysML profile extensions. It is demonstrated how specification model artifacts are used to identify potentially hazardous scenarios and functional deficiencies, and how SOTIF analysis results flow back into the specification process, by means of the function “Multi-Story Car Park Chauffeur”. The SysML model is linked to a safety argumentation created using the Goal Structuring Notation to integrate the system specification and the evidence from the SOTIF analysis in a single procedure and toolchain, ensuring full traceability.
|
|
15:05-16:25, Paper We-Po2S.3 | Add to My Program |
Analysis of Real-Time LiDAR Sensor Simulation for Testing Automated Driving Functions on a Vehicle-In-The-Loop Testbench |
|
Chen, Haopeng | Technical University of Berlin |
Müller, Steffen | Technical University of Berlin |
Keywords: Active and Passive Vehicle Safety, Lidar Sensing and Perception, Vehicle Environment Perception
Abstract: A vehicle-in-the-loop (ViL) testbench offers the possibility to test complex scenarios with ready-to-drive vehicles. For this purpose, the environmental sensors are simulated or stimulated. As essential a component as LiDAR is for automated driving (AD) systems, its realistic behavior is hard to stimulate on a testbench. We propose a physics-based LiDAR model which is real-time capable and shows many realistic features. This model simulates the important effects of laser propagation and reflection, mirror reflection, motion distortion, reflection detectability, and beam divergence. Besides that, we measured the reflectance of materials of interest to determine the reflection model parameters. Experiments confirmed that the simulation is real-time capable, and the results show a good match with measured data.
|
|
15:05-16:25, Paper We-Po2S.4 | Add to My Program |
Real-To-Synthetic: Generating Simulator Friendly Traffic Scenes from Graph Representation |
|
Tian, Yafu | Nagoya University |
Carballo, Alexander | Nagoya University |
Li, Ruifeng | State Key Laboratory of Robotic and Intelligent System, Harbin Institute of Technology |
Takeda, Kazuya | Nagoya University |
Keywords: Vehicle Environment Perception, Intelligent Vehicle Software Infrastructure, Automated Vehicles
Abstract: Reproducing real-world traffic scenes in a simulator is fundamental to training self-driving systems. Creating a simulation scenario is a complex task, generally done manually: the ego-vehicle and other entities are placed and their trajectories defined, trying to recreate some situation found in real traffic. To reduce the manual burden, we propose the Real-to-Synthetic toolset. This toolset provides synthetic traffic scenes in the OpenDRIVE format, which can be directly simulated in many simulators such as SUMO or CARLA. We also provide a scene generator which generates near-realistic scenes with minimal user effort. To maintain the similarity between the real-world scene and the generated one, we introduce the concept of a ``Road Scene Graph'' (RSG). In this graph, nodes represent entities while edges stand for pairwise relationships. These relationships are maintained in the scene generation process, while actors are generated according to distributions sampled from real-world data. Experiments show that, by using the Road Scene Graph, our scene generator provides a much more convenient way to configure traffic scenes than manually defining every actor's initial state and trajectories.
|
|
15:05-16:25, Paper We-Po2S.5 | Add to My Program |
Driver-Automation Shared Steering Control for Intelligent Vehicles under Unexpected Emergency Conditions |
|
Yang, Lu | Tsinghua University |
Wang, Jianqiang | Tsinghua University |
Keywords: Vehicle Control, Human-Machine Interface, Advanced Driver Assistance Systems
Abstract: Most fatal traffic accidents occur in unexpected emergency conditions, such as post-impact situations or tire blowouts, in which vehicle attitudes change immediately due to external disturbances and internal perturbations. It is an extremely challenging task for a human driver, especially an inexperienced one, to effectively and promptly stop or control such a vehicle. To this end, this paper proposes a driver-automation collaborative control scheme for vehicles subjected to unexpected emergency conditions that assists the human driver's steering manipulation. First, a model predictive lateral controller is constructed to enhance dynamic stability and collision avoidance capability, considering model uncertainty and external disturbances. Next, a collaborative steering control authority allocator is designed to adaptively allocate the control weighting of the respective steering angles, in which a parameterized human driver activation is formulated considering the driving action and state. In addition, an optimal preview acceleration driver model combined with neuromuscular dynamics is developed to imitate the human driver's steering manipulation while harmonizing with the controller. Lastly, simulation examples with human drivers of different experience levels validate the effectiveness and superiority of the proposed control scheme and approaches in enhancing lateral stability and collision avoidance capability for vehicles subject to unexpected emergency conditions.
|
|
15:05-16:25, Paper We-Po2S.6 | Add to My Program |
Clothoidal Mapping of Road Line Markings for Autonomous Driving High-Definition Maps |
|
Gallazzi, Barbara | Politecnico Di Milano |
Cudrano, Paolo | Politecnico Di Milano |
Frosi, Matteo | Politecnico Di Milano (Milan, Italy) |
Mentasti, Simone | Politecnico Di Milano |
Matteucci, Matteo | Politecnico Di Milano - DEIB |
Keywords: Mapping and Localization, Vision Sensing and Perception, Self-Driving Vehicles
Abstract: Lane-level HD maps are crucial for trajectory planning and control in current autonomous vehicles. For this reason, appropriate line models should be adopted to define them. Whereas mapping algorithms often rely on inaccurate representations, clothoid curves possess peculiar smoothness properties that make them desirable representations of road lines in control algorithms. We propose a multi-stage pipeline for the generation of lane-level HD maps from monocular vision relying on clothoidal spline models. We obtain measurements of the line positions using a line detection algorithm, and we exploit a graph-based optimization framework to reach an optimal fitting. An iterative greedy procedure reduces the model complexity removing unnecessary clothoids. We validate our system on a real-world dataset, which we make publicly available for further research at https://airlab.deib.polimi.it/datasets-and-tools/.
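For reference, a clothoid (Euler spiral) segment is the curve whose curvature varies linearly with arc length, which is the property behind the smoothness argument above:

```latex
\kappa(s) = \kappa_0 + c\,s, \qquad
\theta(s) = \theta_0 + \kappa_0 s + \tfrac{1}{2} c\, s^2, \qquad
x(s) = x_0 + \int_0^{s} \cos\theta(\tau)\, d\tau, \quad
y(s) = y_0 + \int_0^{s} \sin\theta(\tau)\, d\tau .
```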
|
|
15:05-16:25, Paper We-Po2S.7 | Add to My Program |
Model-Based Framework to Optimize Charger Station Deployment for Battery Electric Vehicles |
|
Eagon, Matthew | University of Minnesota, Twin Cities |
Fakhimi, Setayesh | University of Minnesota, Twin Cities |
Lyu, George | Rice University |
Yang, Audrey | New York University |
Lin, Brian | Stanford University |
Northrop, Will | University of Minnesota |
Keywords: Smart Infrastructure, V2X Communication, Electric and Hybrid Technologies
Abstract: The development of battery electric vehicles (BEVs) is accelerating due to their environmental advantages over gasoline and diesel-powered vehicles, including a decrease in air pollution and an increase in energy efficiency. The deployment of charging infrastructure will need to increase to keep pace with demand, especially for large commercial vehicles for which few public chargers currently exist. In this paper, a new flexible framework is proposed for optimizing the placement of charging stations for BEVs, within which different physical models and optimization techniques may be used. Furthermore, a set of metrics is suggested to help enforce complex constraints and facilitate direct comparison between different optimization techniques. Unlike many existing charger placement techniques, the proposed method directly considers the historical driving patterns on a vehicle-by-vehicle basis, using transparent models to assess impacts of candidate charger placements, thus improving the explainability of the results. In the developed framework, modeled BEVs are first generated along the road network to mimic historical traffic data and are simulated traveling along a given route according to a simplified vehicle model. During the simulation, the charger placement problem is initially relaxed to allow vehicles to charge at any node along the road network, and vehicle states are tracked to assess areas of high charging demand. Charging stations are then placed based on the results of the relaxed simulation, and suggested placements are evaluated via road network simulation with fixed charger locations. This proposed framework is applied to a sample problem of placing charging stations along five major highway corridors for Class 8 over-the-road electric trucks. A novel mixed
|
|
15:05-16:25, Paper We-Po2S.8 | Add to My Program |
Detecting and Identifying Global Visual Novelties in Driving Scenarios |
|
Palacios-Alonso, Miguel A. | Instituto Nacional De Astrofísica, Óptica Y Electrónica |
Escalante, Hugo Jair | INAOE |
Sucar, Luis Enrique | Instituto Nacional De Astrofísica, Óptica Y Electrónica |
Keywords: Image, Radar, Lidar Signal Processing, Vehicle Environment Perception, Automated Vehicles
Abstract: As a safety-critical application, automated driving aims to provide autonomy and safe navigation. Although recent advances in perception algorithms based on deep learning have boosted progress in this field, the state of the art depends on the availability of large datasets. Thus, most visual analysis in this context is related to the detection and classification of previously known classes. However, dynamic environments with complex interactions among traffic elements can lead to unpredictable anomalous situations that have not been registered previously. On the other hand, most prior work on visual novelty detection points out novelty instances but does not provide additional useful information that can be used at the next levels for decision-making. In this paper we propose an approach to detect global visual novelties and their corresponding types in real driving scenarios. Based on pixel-wise and perceptual information, we use a generative adversarial network to detect novelties and a support vector machine to identify their categories. Its performance is experimentally evaluated using the Ford self-driving dataset. Experimental results show that the average area under the curve (AUC) surpasses 0.97 for novelty detection, and novelty type identification reaches an average accuracy of 92.7%, requiring only a small number of samples for training.
|
|
15:05-16:25, Paper We-Po2S.9 | Add to My Program |
Mass Detection for Heavy-Duty Vehicles Using Gaussian Belief Propagation |
|
Eagon, Matthew | University of Minnesota, Twin Cities |
Fakhimi, Setayesh | University of Minnesota, Twin Cities |
Pernsteiner, Adam | University of Minnesota, Twin Cities |
Northrop, Will | University of Minnesota |
Keywords: Telematics, Information Fusion, Vehicle Control
Abstract: Predicting vehicle mass is critical to accurately estimating the energy use and emissions of commercial trucks. However, data from vehicle telematics is often not of sufficient temporal resolution or accuracy for use in model-based detection methods. In this work, a new statistical mass prediction technique is described for heavy-duty vehicles that incorporates the use of Gaussian Belief Propagation (GBP) for probabilistic inference. Similar to Bayesian inference models, the GBP model typically requires less labeled training data than other contemporary machine learning techniques. First, a factor graph is constructed, and a set of Gaussian belief nodes with associated means and variances is fitted to the training data. To better handle noisy input data, the GBP mass prediction model utilizes a k-nearest factors (kNF) algorithm for probabilistic inference on unseen testing data. The proposed method is compared with a classical weighted k-nearest neighbors (kNN) regressor. The statistical kNF-GBP model works even with low-quantity, low-quality initial training data, while being capable of real-time mass estimation. Unlike the kNN regressor, the GBP model produces a measure of uncertainty with its predictions. The proposed method is validated using curve-sampled driving data collected from multiple cloud-connected Class 8 regional-haul diesel trucks. Both the kNN regressor and the kNF-GBP mass prediction model were able to predict payload mass with coefficients of determination above 0.97 with minimal data preprocessing.
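For context, the classical distance-weighted kNN baseline mentioned above can be sketched with scikit-learn; the features and values below are placeholders, and this does not reproduce the paper's kNF-GBP model:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical telematics features per trip segment (e.g., mean speed, grade, fuel rate, ...)
X_train = np.random.rand(200, 4)
y_train = 10000 + 20000 * np.random.rand(200)   # placeholder payload masses in kg

knn = KNeighborsRegressor(n_neighbors=5, weights="distance")
knn.fit(X_train, y_train)
mass_pred = knn.predict(np.random.rand(3, 4))   # predicted masses for three unseen segments
print(mass_pred)
```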
|
|
15:05-16:25, Paper We-Po2S.10 | Add to My Program |
Adaptive Safe Control for Driving in Uncertain Environments |
|
Gangadhar, Siddharth | Carnegie Mellon University |
Wang, Zhuoyuan | Carnegie Mellon University |
Jing, Haoming | Department of Electrical and Computer Engineering, Carnegie Mellon University |
Nakahira, Yorie | CMU |
Keywords: Automated Vehicles, Vehicle Control
Abstract: This paper presents an adaptive safe control method that can adapt to changing environments, tolerate large uncertainties, and exploit predictions in autonomous driving. We first derive a sufficient condition to ensure long-term safe probability when there are uncertainties in system parameters. Then, we use the safety condition to formulate a stochastic adaptive safe control method. Finally, we test the proposed technique numerically in a few driving scenarios. The use of long-term safe probability provides a sufficient outlook time horizon to capture future predictions of the environment and planned vehicle maneuvers and to avoid unsafe regions of attraction. The resulting control action systematically mediates behaviors based on uncertainties and can find safer actions even with large uncertainties. This feature allows the system to quickly respond to changes and risks, even before an accurate estimate of the changed parameters can be constructed. The safe probability can be continuously learned and refined. Using a more precise probability avoids over-conservatism, which is a common drawback of deterministic worst-case approaches. The proposed techniques can also be efficiently computed in real time using onboard hardware and modularly integrated into existing processes such as model predictive controllers.
|
|
15:05-16:25, Paper We-Po2S.11 | Add to My Program |
Driver Behavior Model for the Safety Assessment of Automated Driving (I) |
|
Fries, Alexandra | BMW AG |
Fahrenkrog, Felix | BMW AG |
Donauer, Katharina | BMW AG |
Mai, Marcus | Technische Universität Dresden |
Raisch, Florian | BMW AG |
Keywords: Automated Vehicles, Collision Avoidance, Impact on Traffic Flows
Abstract: Assessing the safety performance of automated driving is essential for the market introduction of this technology. Different regulatory bodies have explicitly or implicitly asked for proof of the safety effects of the technology, or for a demonstration that the technology is at least as good as human drivers. Due to the complexity of automated driving, the answer can hardly be found through test-track testing. Instead, virtual assessment tools, such as simulation, are required to assess the safety performance of automated driving. Within the simulation, the baseline, which is defined by human drivers, needs to be represented as well. This is typically done by a driver behavior model. For this purpose, BMW and its partners have developed the stochastic cognitive model (SCM), which is presented in this paper. To show the performance of SCM, it is applied to the critical situation of a passive cut-in maneuver. For this scenario, a Monte-Carlo simulation experiment is conducted. In conclusion, the results of this experiment are compared to real-world passive cut-in maneuvers from the HighD and GIDAS PCM datasets.
|
|
15:05-16:25, Paper We-Po2S.12 | Add to My Program |
Quantifying Realistic Behaviour of Traffic Agents in Urban Driving Simulation Based on Questionnaires (I) |
|
Rock, Teresa | TU Berlin |
Bahram, Mohammad | BMW Group Research and Technology |
Himmels, Chantal | BMW Group |
Marker, Stefanie | TU Berlin |
Keywords: Autonomous / Intelligent Robotic Vehicles, Human-Machine Interface
Abstract: Driving simulation is becoming an increasingly important component of research and development in the automotive industry. When performing simulator studies in urban scenarios, the challenge is to create a realistic driving context including natural interactions between the subject and artificial traffic participants, which are simulated by agent models. These traffic agents should behave as similar as possible to real humans. This raises the question of how to define realistic or human-like behaviour of traffic agents and how to measure this. Furthermore, it is necessary to investigate the influence of the surrounding traffic on the driver's behaviour and perception of reality in the simulator. Accordingly, we present a method for quantifying the degree of realism of virtual traffic agents' behaviour and their impact on subjects' experience in a simulator experiment. By means of questionnaires, participants rated their perception of reality and the behaviour of present agent models. The experiment shows that surrounding traffic has a positive effect on subjects' perception and behaviour, indicating that more realistic traffic agents have the potential to improve the validity of simulator studies. Moreover, our results provide new insights regarding required characteristics for the development of human-like traffic agents and give an overview of current strengths and weaknesses.
|
|
15:05-16:25, Paper We-Po2S.13 | Add to My Program |
Modeling Driver Behavior Using Adversarial Inverse Reinforcement Learning (I) |
|
Sackmann, Moritz | Friedrich-Alexander-Universität Erlangen-Nürnberg |
Bey, Henrik | Friedrich-Alexander-Universität Erlangen-Nürnberg |
Hofmann, Ulrich | AUDI AG Ingolstadt |
Thielecke, Jörn | Friedrich-Alexander-Universität Erlangen-Nürnberg |
Keywords: Reinforcement Learning, Deep Learning, Automated Vehicles
Abstract: Driver behavior modeling is an important task for predicting or simulating the evolution of traffic situations. We investigate the use of Adversarial Inverse Reinforcement Learning (AIRL), an IRL-based method, to learn a driving policy from a dataset of real-world trajectories. Compared to the commonly used direct Behavioral Cloning (BC), IRL aims to reconstruct the rewards of drivers, e.g., driving fast but with minimal accelerations. Simultaneously, a policy that maximizes these rewards is learned using standard Reinforcement Learning (RL) methods. This indirection enables us to train AIRL in fictional situations for which no training trajectories exist. In our experiments, we find that this advantage enables AIRL to produce policies that are significantly more robust than the two competing approaches, Generative Adversarial Imitation Learning (GAIL) and BC.
|
|
15:05-16:25, Paper We-Po2S.14 | Add to My Program |
CogMod: Simulating Human Information Processing Limitation While Driving (I) |
|
Jawad, Abdul | University of California Santa Cruz |
Whitehead, Jim | UC Santa Cruz |
Keywords: Automated Vehicles, Autonomous / Intelligent Robotic Vehicles, Self-Driving Vehicles
Abstract: We develop a human driver behavior model (CogMod) based on two complementary cognitive architectures, the Queueing Network-Model Human Processor (QN-MHP) and Adaptive Control of Thought-Rational (ACT-R), to represent human cognition while driving. The proposed model can integrate different task-specific analytical driver models under a similar cognitive procedure. The model can simulate variable cognitive processing ability, resulting in different stopping distances in a scenario where the front vehicle brakes sharply when it enters a trigger distance. We evaluate the model based on the distribution of stopping distances with varying cognitive processing times. This approach is useful for modeling non-ego vehicles in scenario-based testing of automated vehicles (AVs).
|
|
15:05-16:25, Paper We-Po2S.15 | Add to My Program |
Adversarial Jaywalker Modeling for Simulation-Based Testing of Autonomous Vehicle Systems (I) |
|
Muktadir, Golam Md | University of California, Santa Cruz |
Whitehead, Jim | UC Santa Cruz |
Keywords: Self-Driving Vehicles, Vehicle Environment Perception, Autonomous / Intelligent Robotic Vehicles
Abstract: We present an approach for creating adversarial jaywalkers, autonomous pedestrian models which intentionally act to create unsafe situations involving other vehicles. An adversarial jaywalker employs a hybrid state-model with social forces and state transition rules. The parameters (for social forces and state transitions) of this model are tuned via reinforcement learning to create risky situations faster with synthetic yet plausible behavior. The resulting jaywalkers are capable of realistic behavior while still engaging in sufficiently risky actions to be useful for testing. These adversarial pedestrian models are useful in a wide range of scenario-based tests for autonomous vehicles.
|
|
15:05-16:25, Paper We-Po2S.16 | Add to My Program |
Early Assessment of System-Level Safety Mechanisms through Co-Simulation-Based Fault Injection (I) |
|
Munaro, Tiziano | Fortiss |
Muntean, Irina | Fortiss |
Keywords: Active and Passive Vehicle Safety, Automated Vehicles, Advanced Driver Assistance Systems
Abstract: Depending on the autonomy level, safety assessment leads to different functional safety requirements for advanced driver-assistance systems and autonomous driving functions. To provide the necessary guarantees, technical safety requirements are derived that support the safety case by means of appropriate system architectures. These build on safety mechanisms: Technical solutions responsible for maintaining the intended functionality (fail-operational) or transition to a safe state in the presence of hardware and software faults (fail-safe). As the choice and implementation of such safety mechanisms are critical decisions with a high impact on the overall architecture, their early validation is crucial for an efficient engineering process. However, analytical safety analysis techniques applied to date support only coarse time models and do not provide explicit guidance for considering systemic real-time properties of closed-loop systems. Therefore, we propose a simulation-based fault injection framework to identify problematic emerging temporal behaviors such as instability. In contrast to existing solutions, we leverage the Functional Mock-up Interface (FMI) standard for black-box co-simulation to overcome intellectual property concerns in distributed automotive supply chains and to account for heterogeneous tool landscapes. By considering the allocation of software units to processing elements as well as the communication infrastructure, our contribution allows for the injection and propagation of faults affecting a vehicle's software and its electrical/electronic (E/E) architecture, which is crucial for the assessment of safety mechanisms. Experimental results obtained by applying the approach to an industry-oriented use case indicate its validity and low overhead.
|
|
15:05-16:25, Paper We-Po2S.17 | Add to My Program |
Combining Virtual Reality and Steer-By-Wire Systems to Validate Driver Assistance Concepts (I) |
|
Weiss, Elliot | Stanford University |
Talbot, John | Stanford University |
Gerdes, J Christian | Stanford University |
Keywords: Human-Machine Interface, Advanced Driver Assistance Systems
Abstract: Emerging driver assistance system architectures require new methods for testing and validation. For advanced driver assistance systems (ADASs) that closely blend control with the driver, it is particularly important that tests elicit natural driving behavior. We present a flexible Human&Vehicle-in-the-Loop (Hu&ViL) platform that provides multisensory feedback to the driver during ADAS testing to address this challenge. This platform, which graphically renders scenarios to the driver through a virtual reality (VR) head-mounted display (HMD) while operating a four-wheel steer-by-wire (SBW) vehicle, enables testing in nominal dynamics, low friction, and high speed configurations. We demonstrate the feasibility of our approach by running experiments with a novel ADAS in low friction and highway settings on a limited proving ground. We further connect this work to a formal method for categorizing test bench configurations and demonstrate a possible progression of tests on different configurations of our platform.
|
|
15:05-16:25, Paper We-Po2S.18 | Add to My Program |
Point-Voxel Fusion for Multimodal 3D Detection (I) |
|
Wang, Ke | Chongqing University |
Zhang, Zhichuang | Chongqing University |
Keywords: Automated Vehicles, Autonomous / Intelligent Robotic Vehicles, Deep Learning
Abstract: Many LiDAR-based methods have achieved encouraging results on 3D detection tasks, but the detection of small objects such as pedestrians remains challenging. In contrast, it is easy to detect small objects in camera images. Existing point cloud and image feature fusion methods are dominated by the point cloud and, due to its sparseness, some image information is lost. We propose a new fusion method named PVFusion to fuse more image features. We first divide each point into a separate perspective voxel and project the voxel onto the image feature maps. Then the semantic feature of the perspective voxel is fused with the geometric feature of the point. A 3D object detection model is designed using PVFusion. During training we employ ground-truth paste (GT-Paste) data augmentation and solve the occlusion problem caused by newly added objects. The KITTI validation set is used to validate the PVFusion-based model, which shows a 3.6% AP improvement over other feature fusion methods in pedestrian detection. On the KITTI test set, the PVFusion-based model outperforms the other multimodal SOTA methods by 2.2% AP in pedestrian detection.
|
|
15:05-16:25, Paper We-Po2S.19 | Add to My Program |
Residual MBConv Submanifold Module for 3D LiDAR-Based Object Detection (I) |
|
Guo, Lie | Dalian University of Technology |
Huang, Liang | Dalian University of Technology |
Yibing, Zhao | Dalian University of Technology |
Keywords: Automated Vehicles, Convolutional Neural Networks, Image, Radar, Lidar Signal Processing
Abstract: In LiDAR point clouds, objects are represented as 3D bounding boxes with direction. The LiDAR-based object detection task is similar to the image-based task but comes with additional challenges. In LiDAR-based detection for autonomous vehicles, the size of a 3D object is significantly smaller than the size of the input scene represented by the point cloud, so conventional 3D backbones cannot effectively preserve the detailed geometric information of objects with only a few points. To resolve this problem, this paper presents an MBConv Submanifold module, which is simple and effective for voxel-based detectors on point clouds. The novel convolution architecture introduces an inverted bottleneck and a residual connection into the 3D sparse backbone, which enables the detector to learn high-dimensional features from the point cloud. Experiments show that the MBConv Submanifold module brings consistent improvements over the baseline method: it achieves an AP of 68.03% and 54.74% in the moderate cyclist and pedestrian categories on the KITTI validation benchmark, outperforming the baseline detector by a large margin.
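The inverted-bottleneck-with-residual idea can be sketched with dense 3D convolutions as follows (the paper applies it with submanifold sparse convolutions; the channel counts and expansion factor are assumptions):

```python
import torch.nn as nn

class MBConvResidual3d(nn.Module):
    """Inverted bottleneck (expand -> depthwise conv -> project) with a residual
    connection, written with dense 3D convolutions purely for illustration."""
    def __init__(self, channels, expansion=4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv3d(channels, hidden, kernel_size=1, bias=False),   # expand
            nn.BatchNorm3d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden, bias=False),                     # depthwise 3x3x3
            nn.BatchNorm3d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, channels, kernel_size=1, bias=False),   # project back
            nn.BatchNorm3d(channels),
        )

    def forward(self, x):
        return x + self.block(x)   # residual connection
```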
|
|
15:05-16:25, Paper We-Po2S.20 | Add to My Program |
An Authentication Mechanism for Remote Keyless Entry Systems in Cars to Prevent Replay and RollJam Attacks (I) |
|
Poolat Parameswarath, Rohini | National University of Singapore |
Sikdar, Biplab | National University of Singapore |
Keywords: Security, Intelligent Ground, Air and Space Vehicles
Abstract: Modern cars come with keyless entry systems that can be either Remote Keyless Entry (RKE) systems or Passive Keyless Entry and Start (PKES) systems. In early RKE implementations, a fixed code was used by the key fob to unlock the car door. However, this method is vulnerable to replay attacks, as an adversary may capture the code and replay it later to unlock the car. A rolling code system was introduced to protect RKE systems from such replay attacks. Studies have shown that even the rolling code system is vulnerable to certain attacks. In this work, we investigate the attacks possible on RKE systems and propose an efficient and effective authentication mechanism to defend RKE systems against such attacks with minimal changes to the existing RKE system. The proposed mechanism makes use of hashing and asymmetric cryptographic techniques for the secure transmission of signals from the key fob to the car that cannot be replayed. The security of the proposed mechanism is shown using an informal security proof, and a simulation of the proposed solution is also provided.
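A minimal sketch of the kind of hash-based challenge-response exchange such schemes build on (illustrative only; the paper's protocol additionally uses asymmetric cryptography, and key provisioning is simplified here):

```python
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)   # provisioned in both car and key fob at pairing time

def car_issue_challenge():
    return secrets.token_bytes(16)     # fresh nonce, so captured responses cannot be replayed

def fob_respond(challenge, command=b"UNLOCK"):
    return hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()

def car_verify(challenge, command, response):
    expected = hmac.new(SHARED_KEY, challenge + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = car_issue_challenge()
tag = fob_respond(nonce)
print(car_verify(nonce, b"UNLOCK", tag))   # True; replaying this tag under a new nonce fails
```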
|
|
15:05-16:25, Paper We-Po2S.21 | Add to My Program |
Unsupervised Network Intrusion Detection System for AVTP in Automotive Ethernet Networks (I) |
|
Alkhatib, Natasha | University |
Mushtaq, Maria | Institut Polytechnique De Paris |
Ghauch, Hadi | Telecom Paris |
Danger, Jean-Luc | Télécom Paris |
Keywords: Deep Learning, Unsupervised Learning, Security
Abstract: Network Intrusion Detection Systems (NIDSs) are widely regarded as efficient tools for securing in-vehicle networks against diverse cyberattacks. However, since cyberattacks are always evolving, signature-based intrusion detection systems are no longer adequate. An alternative solution is the deployment of deep-learning-based intrusion detection systems, which play an important role in detecting unknown attack patterns in network traffic. Hence, in this paper, we compare the performance of different unsupervised deep learning and machine learning based anomaly detection algorithms for the real-time detection of anomalies on the Audio Video Transport Protocol (AVTP), an application-layer protocol implemented in recent Automotive Ethernet based in-vehicle networks for transmitting media streams. The numerical results, obtained on the recently published "Automotive Ethernet Intrusion Dataset", show that deep learning models significantly outperform other state-of-the-art traditional anomaly detection models in machine learning under different experimental settings.
|
|
15:05-16:25, Paper We-Po2S.22 | Add to My Program |
Security Analysis of Merging Control for Connected and Automated Vehicles (I) |
|
Jarouf, Abdulah | Hamad Bin Khalifa University |
Meskin, Nader | Qatar University |
Al-Kuwari, Saif | College of Science and Engineering, Hamad Bin Khalifa University |
Shakerpour, Mohammad Hussein | Qatar University |
Cassandras, Christos | Boston University |
Keywords: Security, Self-Driving Vehicles, Automated Vehicles
Abstract: Securing traffic flows in Internet of Vehicles (IoV) environments for connected and automated vehicles (CAVs) is a critical task, as it must be done in real time to allow vehicles' controllers to engage on time. In this paper, the security of CAV communication at merging points is studied, insecure vehicle communication is analysed in terms of possible security threats and consequences, and security goals are then identified to protect the environment. We present a network topology that improves the availability of the system and propose a high-level design of a vehicle authentication protocol based on public-key cryptography to authenticate vehicles. Simulation and analysis of the cryptographic functions are performed to choose the best fit for vehicle communication, where Rivest-Shamir-Adleman (RSA)-2048 algorithms provide faster and more efficient computations.
|
|
15:05-16:25, Paper We-Po2S.24 | Add to My Program |
An Hybrid Approach to Improve the Performance of Encoder-Decoder Architectures for Traversability Analysis in Urban Environments (I) |
|
Fusaro, Daniel | University of Padua |
Olivastri, Emilio | University of Padua |
Evangelista, Daniele | University of Padua |
Iob, Pietro | University of Padua |
Pretto, Alberto | University of Padua |
Keywords: Vehicle Environment Perception, Autonomous / Intelligent Robotic Vehicles, Deep Learning
Abstract: Self-driving vehicles and autonomous ground robots require a reliable and accurate method to analyze the traversability of the surrounding environment for safe navigation. This paper proposes a hybrid approach that combines geometric and appearance features for training Deep Encoder-Decoder architectures to detect the traversability score in real urban contexts. The proposed approach has been tested with two Deep Learning architectures on a public dataset of outdoor driving scenarios. Thanks to our approach, we are able to reach high levels of accuracy in detecting the correct traversability score in environments of highly variable complexity. This demonstrates the effectiveness and robustness of the proposed method.
|
|
15:05-16:25, Paper We-Po2S.25 | Add to My Program |
SAN: Scene Anchor Networks for Joint Action-Space Prediction (I) |
|
Janjoš, Faris | Robert Bosch GmbH |
Dolgov, Maxim | Robert Bosch GmbH |
Kuric, Muhamed | Virtual Vehicle Research |
Shen, Yinzhe | University of Stuttgart |
Zöllner, J. Marius | FZI Research Center for Information Technology; KIT Karlsruhe Institute of Technology |
Keywords: Autonomous / Intelligent Robotic Vehicles, Automated Vehicles, Situation Analysis and Planning
Abstract: In this work, we present a novel multi-modal trajectory prediction architecture. We decompose the uncertainty of future trajectories along higher-level scene characteristics and lower-level motion characteristics, and model multi-modality along both dimensions separately. The scene uncertainty is captured in a joint manner, where diversity of scene modes is ensured by training multiple separate anchor networks which specialize to different scene realizations. At the same time, each network outputs multiple trajectories that cover smaller deviations given a scene mode, thus capturing motion modes. In addition, we train our architectures with an outlier-robust regression loss function, which offers a trade-off between the outlier-sensitive L2 and outlier-insensitive L1 losses. Our scene anchor model achieves improvements over the state of the art on the INTERACTION dataset, outperforming the StarNet architecture from our previous work.
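The abstract describes a loss that trades off the outlier-sensitive L2 against the outlier-insensitive L1. The Huber loss is one standard function with exactly this behavior and is shown below as an illustration; it is an assumption, not necessarily the loss used in the paper.

# Huber loss: quadratic (L2-like) near zero, linear (L1-like) in the tails.
import torch

def huber(residual, delta=1.0):
    abs_r = residual.abs()
    quadratic = 0.5 * residual ** 2                  # L2 branch for small errors
    linear = delta * (abs_r - 0.5 * delta)           # L1 branch for large errors
    return torch.where(abs_r <= delta, quadratic, linear).mean()

pred = torch.randn(8, 30, 2)      # e.g., 8 predicted trajectories, 30 steps, (x, y)
target = torch.randn(8, 30, 2)
loss = huber(pred - target, delta=1.0)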
|
|
15:05-16:25, Paper We-Po2S.26 | Add to My Program |
Winning the 3rd Japan Automotive AI Challenge - Autonomous Racing with the Autoware.Auto Open Source Software Stack (I) |
|
Zang, Zirui | University of Pennsylvania |
Tumu, Renukanandan | University of Pennsylvania |
Betz, Johannes | University of Pennsylvania |
Zheng, Hongrui | University of Pennsylvania |
Mangharam, Rahul | University of Pennsylvania |
Keywords: Automated Vehicles, Situation Analysis and Planning, Vehicle Control
Abstract: The 3rd Japan Automotive AI Challenge was an international online autonomous racing challenge where 164 teams competed in December 2021. This paper outlines the winning strategy to this competition, and the advantages and challenges of using the Autoware.Auto open source autonomous driving platform for multi-agent racing. Our winning approach includes a lane-switching opponent overtaking strategy, a global raceline optimization, and the integration of various tools from Autoware.Auto including a Model-Predictive Controller. We describe the use of perception, planning and control modules for high-speed racing applications and provide experience-based insights on working with Autoware.Auto. While our approach is a rule-based strategy that is suitable for non-interactive opponents, it provides a good reference and benchmark for learning-enabled approaches.
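As a toy illustration of a rule-based lane-switching decision for overtaking (an assumption, not the team's actual Autoware.Auto integration), a minimal Python sketch could be:

# Toy rule-based lane-switch decision for overtaking (illustrative only).
def choose_lane(current_lane, gaps_ahead, min_gap=25.0):
    """gaps_ahead: dict lane -> free distance [m] to the nearest opponent ahead."""
    if gaps_ahead.get(current_lane, float("inf")) >= min_gap:
        return current_lane                      # keep lane if there is room
    # Otherwise switch to the lane with the largest free gap.
    return max(gaps_ahead, key=gaps_ahead.get)

print(choose_lane("center", {"left": 60.0, "center": 12.0, "right": 35.0}))  # -> "left"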
|
|
15:05-16:25, Paper We-Po2S.27 | Add to My Program |
Reliable Evaluation of Navigation States Estimation for Automated Driving Systems (I) |
|
Srinara, Surachet | National Cheng Kung University |
Tsai, Syun | National Cheng Kung University |
Lin, Cheng-Xian | National Cheng Kung University |
Tsai, Meng-Lun | National Cheng Kung University |
Chiang, Kai-Wei | National Cheng Kung University |
Keywords: Sensor and Data Fusion, Intelligent Vehicle Software Infrastructure, Automated Vehicles
Abstract: To achieve higher levels of automation in modern automated driving systems (ADS), a reliable evaluation of navigation state estimation is a crucial demand. Although several evaluation approaches have been presented, no study has examined the problems involved in establishing a trustworthy reference system for fully evaluating ADS performance. This paper proposes new strategies for handling the ground-truth system for full navigation evaluation in automated driving applications. The first strategy uses the integrated solution of an inertial measurement unit (IMU) and a global navigation satellite system (GNSS) as the initial pose for the Normal Distributions Transform (NDT) with a high-definition (HD) point cloud map, so that an accurate LiDAR-based navigation estimate can be achieved. In the second strategy, the LiDAR-based position is used as the measurement to update a loosely coupled (LC) INS/GNSS/LiDAR integration system. The preliminary results indicate that the proposed LC-INS/GNSS/LiDAR strategy not only estimates full navigation solutions, but also appears to provide more accurate and reliable results for evaluating positioning, navigation and timing (PNT) services compared to conventional methods.
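The second strategy is a loosely coupled update in which a LiDAR/NDT position fix corrects the INS/GNSS-predicted state. A minimal Kalman measurement update of that kind is sketched below; the state layout and noise values are placeholders, not the paper's filter design.

# Minimal loosely coupled measurement update: a LiDAR/NDT position fix corrects
# a predicted state (position + velocity). Sketch only; a real filter carries
# attitude, sensor biases and proper tuning.
import numpy as np

x = np.array([10.0, 5.0, 1.0, 0.2])        # [px, py, vx, vy] predicted state
P = np.diag([2.0, 2.0, 0.5, 0.5])          # predicted covariance
z = np.array([10.8, 4.6])                  # LiDAR-based position measurement
R = np.diag([0.09, 0.09])                  # measurement noise (assumed 0.3 m sigma)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # position-only observation model

S = H @ P @ H.T + R                        # innovation covariance
K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
x = x + K @ (z - H @ x)                    # corrected state
P = (np.eye(4) - K @ H) @ P                # corrected covariance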
|
|
15:05-16:25, Paper We-Po2S.28 | Add to My Program |
Towards Integrity for GNSS-Based Urban Navigation -- Challenges and Lessons Learned (I) |
|
Schön, Steffen | Leibniz University Hannover |
Baasch, Kai-Niklas | Leibniz University Hannover |
Icking, Lucy | Leibniz Universität Hannover |
Karimidoona, Ali | Leibniz University Hannover |
Lin, Qianwen | Leibniz Universität Hannover |
Ruwisch, Fabian | Leibniz Universität Hannover |
Schaper, Anat | Leibniz Universität Hannover |
Su, Jingyao | Leibniz University Hannover |
Keywords: Mapping and Localization, Sensor and Data Fusion, Information Fusion
Abstract: For safety critical applications like autonomous driving, high trust in the reported navigation solution is mandatory. This trust can be expressed by the navigation performance parameters, especially integrity. Multipath errors are the most challenging error source in GNSS since only partial correction is possible. In order to ensure high integrity of GNSS-based urban navigation, signal propagation mechanisms and the potential error sources induced by the complex measurement environment should be sufficiently understood. In this contribution, we report on recent progress on this topic in our group. We conducted various experiments in urban areas and investigated the behavior and magnitude of GNSS signal propagation errors. To this end, ray tracing algorithms combined with 3D city models are implemented to identify propagation obstructions and quantify propagation errors. A Fresnel zone-based criterion is exploited to determine the occurrence and magnitude of diffraction. GNSS Feature Maps are proposed to visualize the analyses and to predict situations with potential loss of integrity. To measure the integrity of urban navigation, we developed alternative set-based approaches in addition to the classical stochastic approach. Based on interval mathematics and geometrical constraints, they are sufficient to bound remaining systematic uncertainty and feasible for integrity applications.
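The Fresnel-zone-based criterion relies on the standard first Fresnel zone radius, r_n = sqrt(n * wavelength * d1 * d2 / (d1 + d2)). The small Python helper below evaluates that formula for the GPS L1 frequency; the paper's thresholding logic is not reproduced.

# First Fresnel zone radius, as used in Fresnel-zone-based diffraction criteria.
import math

def fresnel_radius(d1_m, d2_m, freq_hz=1575.42e6, n=1):
    """d1_m: obstacle-to-receiver distance; d2_m: obstacle-to-transmitter distance."""
    c = 299_792_458.0                     # speed of light [m/s]
    wavelength = c / freq_hz              # ~0.19 m for GPS L1
    return math.sqrt(n * wavelength * d1_m * d2_m / (d1_m + d2_m))

print(round(fresnel_radius(20.0, 20_000.0), 2))  # radius in metres near the obstacle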
|
|
15:05-16:25, Paper We-Po2S.29 | Add to My Program |
A Monte Carlo Particle Filter Formulation for Mapless-Based Localization (I) |
|
Braile Przewodowski Filho, Carlos André | University of Sao Paulo - USP |
Osorio, Fernando | USP - University of Sao Paulo |
Keywords: Mapping and Localization, Automated Vehicles, Sensor and Data Fusion
Abstract: In this paper, we extend the Monte Carlo Localization formulation for more efficient global localization using coarse digital maps (for instance, OpenStreetMap maps). The proposed formulation uses the map constraints to reduce the state dimension, which is ideal for a Monte Carlo-based particle filter. We also propose adding to the data association process the matching of traffic signal information to road properties, so that the signals' exact positions do not need to be mapped beforehand in order to update the filter. The proposed approach requires neither low-level point cloud mapping nor LiDAR data. The experiments were conducted using a dataset collected by the CARINA II intelligent vehicle, and the results suggest that the method is adequate for a localization pipeline. The dataset is available online and the code is available on GitHub.
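To illustrate the reduced state dimension (particles constrained to the road rather than the full pose space), the toy Python particle filter below tracks arc length along a single route and up-weights particles near mapped traffic signals when one is observed; positions, noise levels and the resampling scheme are assumptions, not the paper's implementation.

# Toy particle filter constrained to the road: particles live in 1D arc length
# along a route. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
route_len = 1000.0
signal_positions = np.array([120.0, 430.0, 760.0])   # hypothetical signal locations [m]

particles = rng.uniform(0.0, route_len, size=2000)   # arc length along the route
weights = np.full_like(particles, 1.0 / particles.size)

def step(particles, weights, odom_delta, saw_signal):
    # Predict: move along the route with odometry noise.
    particles = particles + odom_delta + rng.normal(0.0, 0.5, particles.size)
    # Update: when a signal is observed, up-weight particles near a mapped signal.
    if saw_signal:
        d = np.min(np.abs(particles[:, None] - signal_positions[None, :]), axis=1)
        weights = weights * np.exp(-0.5 * (d / 10.0) ** 2)
        weights /= weights.sum()
    # Resample (plain multinomial resampling for brevity).
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full_like(weights, 1.0 / weights.size)

particles, weights = step(particles, weights, odom_delta=5.0, saw_signal=True)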
|
|
15:05-16:25, Paper We-Po2S.30 | Add to My Program |
On Uncertainty Quantification for Convolutional Neural Network LiDAR Localization (I) |
|
Joerger, Mathieu | Virginia Polytechnic Institute and State University |
Wang, Julian | Virginia Tech |
Hassani, Ali | Virginia Polytechnic Institute and State University |
Keywords: Mapping and Localization, Lidar Sensing and Perception
Abstract: In this paper, we develop and evaluate a Convolutional Neural Network (CNN)-based Light Detection and Ranging (LiDAR) localization algorithm that includes uncertainty quantification for ground vehicle navigation. This paper builds upon prior research where we used a CNN to estimate a rover’s position and orientation (pose) using LiDAR point clouds (PCs). This paper presents a simplification of the LiDAR PC processing and describes our attempts at outputting a covariance matrix in addition to the rover pose estimates. Performance assessment is carried out in a structured, static lab environment using a LiDAR-equipped rover moving along a fixed, repeated trajectory.
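One common way to have a network report uncertainty alongside a pose is to predict per-component log-variances and train with a Gaussian negative log-likelihood; the sketch below shows that generic pattern and is an assumption, not necessarily the covariance formulation attempted in the paper.

# Generic pose-plus-uncertainty head: predict per-component log-variances and
# train with a Gaussian negative log-likelihood.
import torch
import torch.nn as nn

class PoseWithUncertainty(nn.Module):
    def __init__(self, feat_dim=256, pose_dim=3):   # e.g., (x, y, yaw)
        super().__init__()
        self.pose_head = nn.Linear(feat_dim, pose_dim)
        self.logvar_head = nn.Linear(feat_dim, pose_dim)  # diagonal covariance

    def forward(self, feat):
        return self.pose_head(feat), self.logvar_head(feat)

def gaussian_nll(pred, logvar, target):
    # 0.5 * [ (err^2) / sigma^2 + log sigma^2 ], summed over pose components.
    return (0.5 * ((pred - target) ** 2 * torch.exp(-logvar) + logvar)).sum(dim=1).mean()

feat = torch.rand(4, 256)                 # placeholder CNN features from LiDAR input
target = torch.rand(4, 3)
pred, logvar = PoseWithUncertainty()(feat)
loss = gaussian_nll(pred, logvar, target)
cov_diag = torch.exp(logvar)              # reported (diagonal) covariance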
|
|
15:05-16:25, Paper We-Po2S.31 | Add to My Program |
Capsule Networks for Hierarchical Novelty Detection in Object Classification (I) |
|
de Graaff, Thies | German Aerospace Center |
Ribeiro de Menezes, Arthur | German Aerospace Center |
Keywords: Vision Sensing and Perception, Deep Learning, Situation Analysis and Planning
Abstract: Hierarchical Novelty Detection (HND) refers to assigning labels to objects in a hierarchical category space, where a non-leaf label represents a novelty detection within that category. By labeling a novel instance with at least one abstract category, an automated driving (AD) function can make more informed decisions, resulting in safer behavior in novel situations. Current approaches are mainly composed of different architectures based on Convolutional Neural Networks (CNNs). Capsule Networks (CNs) were introduced as an alternative to CNNs that expands their capacity on tasks that were previously challenging. We explore the hierarchical nature of CNs and propose a novel approach for hierarchical novelty detection using a unified CN architecture. As a proof of concept, we evaluate it on a novelty detection task based on the Fashion-MNIST dataset. We define a misclassification matrix for evaluating performance based on a semantically sensible scenario for this dataset. The results show that our method outperforms the main CNN-based methods in the current literature on this task, while also giving more flexibility for task-specific tuning, and has the potential to reach state-of-the-art performance in more complex HND use cases within the AD domain.
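The HND decision itself can be illustrated independently of the capsule architecture: if no leaf class is confident enough, the label backs off to the most probable parent category, which signals a novelty at that level. The toy Python sketch below (taxonomy and threshold are assumptions) shows this back-off on Fashion-MNIST-like classes.

# Generic hierarchical back-off: report the best parent category when no leaf
# class is confident enough, flagging a novelty at that level.
TAXONOMY = {"sneaker": "footwear", "sandal": "footwear",
            "shirt": "upper-body clothing", "coat": "upper-body clothing"}

def hierarchical_label(leaf_probs, threshold=0.6):
    """leaf_probs: dict leaf class -> probability."""
    best_leaf = max(leaf_probs, key=leaf_probs.get)
    if leaf_probs[best_leaf] >= threshold:
        return best_leaf                              # confident leaf prediction
    # Aggregate probability mass per parent and back off to the best parent.
    parent_mass = {}
    for leaf, p in leaf_probs.items():
        parent_mass[TAXONOMY[leaf]] = parent_mass.get(TAXONOMY[leaf], 0.0) + p
    return max(parent_mass, key=parent_mass.get)      # non-leaf label = novelty detected

print(hierarchical_label({"sneaker": 0.35, "sandal": 0.40, "shirt": 0.15, "coat": 0.10}))
# -> "footwear" (a novel kind of footwear)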
|
|
15:05-16:25, Paper We-Po2S.32 | Add to My Program |
BackboneAnalysis: Structured Insights into Compute Platforms from CNN Inference Latency (I) |
|
Hafner, Frank M. | ZF Friedrichshafen AG |
Zeller, Matthias | University of Stuttgart |
Schutera, Mark | Karlsruhe Institute of Technology |
Abhau, Jochen | ZF Friedrichshafen AG |
Kooij, Julian Francisco Pieter | Delft University of Technology |
Keywords: Convolutional Neural Networks, Security, Deep Learning
Abstract: Customizing a convolutional neural network (CNN) to a specific compute platform involves finding a Pareto-optimal trade-off between the computational complexity of the CNN and the resulting throughput in operations per second on the compute platform. However, existing inference performance benchmarks compare complete backbones that entail many differences between their CNN configurations, which does not provide insight into how fine-grained layer design choices affect this balance. BackboneAnalysis is a methodology for extracting structured insights into this trade-off for a chosen target compute platform. Within a one-factor-at-a-time analysis setup, CNN architectures are systematically varied and evaluated based on throughput and latency measurements, irrespective of model accuracy. We investigate the configuration factors input shape, batch size, kernel size and convolutional layer type. In our experiments, we deploy BackboneAnalysis on a Xavier iGPU and a Coral Edge TPU accelerator. The analysis reveals that the common assumption derived from optimal Roofline performance, that higher operation density in CNNs leads to higher throughput, often has to be rejected. These results highlight how important it is for a neural network architect to be aware of platform-specific latency and throughput behavior in order to derive sensible configuration decisions for a custom CNN.
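A minimal one-factor-at-a-time measurement, here varying only the kernel size of a single convolution while keeping input shape and batch size fixed, might look like the Python sketch below; it is a CPU-timed illustration only, not the paper's Xavier/Edge-TPU benchmarking harness.

# One-factor-at-a-time latency sketch: vary only the kernel size of a single
# conv layer and time inference (warm-up runs excluded). CPU timing shown;
# GPU/TPU measurement needs device synchronization and the vendor's tooling.
import time
import torch
import torch.nn as nn

x = torch.rand(1, 64, 112, 112)                  # fixed input shape and batch size

for k in (1, 3, 5, 7):                           # the single varied factor
    layer = nn.Conv2d(64, 64, kernel_size=k, padding=k // 2).eval()
    with torch.no_grad():
        for _ in range(10):                      # warm-up iterations
            layer(x)
        t0 = time.perf_counter()
        for _ in range(50):
            layer(x)
        ms = (time.perf_counter() - t0) / 50 * 1e3
    print(f"kernel {k}x{k}: {ms:.2f} ms/inference")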
|
|
15:05-16:25, Paper We-Po2S.33 | Add to My Program |
MEAT: Maneuver Extraction from Agent Trajectories (I) |
|
Schmidt, Julian | Mercedes-Benz AG, Ulm University |
Jordan, Julian | Mercedes-Benz AG |
Raba, David | Mercedes-Benz AG |
Welz, Tobias | Mercedes-Benz AG |
Dietmayer, Klaus | University of Ulm |
Keywords: Automated Vehicles, Situation Analysis and Planning, Driver State and Intent Recognition
Abstract: Advances in learning-based trajectory prediction are enabled by large-scale datasets. However, in-depth analysis of such datasets is limited. Moreover, the evaluation of prediction models is limited to metrics averaged over all samples in the dataset. We propose an automated methodology that extracts maneuvers (e.g., left turn, lane change) from agent trajectories in such datasets. The methodology considers information about the agent dynamics and about the lane segments the agent traveled along. Although the resulting maneuvers could be used to train classification networks, we use them here for extensive trajectory dataset analysis and maneuver-specific evaluation of multiple state-of-the-art trajectory prediction models. Additionally, an analysis of the datasets and an evaluation of the prediction models based on the agent dynamics are provided.
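As a toy illustration of maneuver extraction from agent dynamics alone (the paper additionally uses the traveled lane segments), the Python sketch below classifies a trajectory by its accumulated heading change; the threshold is an assumption.

# Toy maneuver heuristic from agent dynamics: classify by accumulated heading change.
import numpy as np

def classify_maneuver(xy, turn_thresh_deg=45.0):
    """xy: (N, 2) array of positions sampled along a trajectory."""
    headings = np.unwrap(np.arctan2(np.diff(xy[:, 1]), np.diff(xy[:, 0])))
    delta = np.degrees(headings[-1] - headings[0])
    if delta > turn_thresh_deg:
        return "left turn"
    if delta < -turn_thresh_deg:
        return "right turn"
    return "straight or lane keeping"

t = np.linspace(0.0, np.pi / 2, 30)
left_turn = np.stack([np.sin(t) * 20.0, 20.0 - np.cos(t) * 20.0], axis=1)  # quarter-circle
print(classify_maneuver(left_turn))   # -> "left turn"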
|
|
15:05-16:25, Paper We-Po2S.34 | Add to My Program |
Automated Driving Systems: Impact of Haptic Guidance on Driving Performance after a Take Over Request (I) |
|
Morales-Alvarez, Walter | Chair Sustainable Transport Logistics 4.0, Johannes Kepler University |
Certad, Novel | Chair Sustainable Transport Logistics 4.0, Johannes Kepler University |
Tadjine, Hadj Hamma | IAV GmbH |
Olaverri-Monreal, Cristina | Chair Sustainable Transport Logistics 4.0, Johannes Kepler University |
Keywords: Automated Vehicles, Hand-off/Take-Over, Human-Machine Interface
Abstract: In conditional automation, a response from the driver is expected when a take-over request is issued due to unexpected events, emergencies, or reaching the boundaries of the operational design domain. Cooperation between the automated driving system and the driver can help to guarantee a safe and pleasant transfer if the driver is guided by a haptic guidance system that applies a slight counter-steering force to the steering wheel. In this work we examine the impact of haptic guidance systems on driving performance after a take-over request is triggered to avoid sudden obstacles on the road. We studied different driver conditions that involved Non-Driving Related Tasks (NDRT). Results showed that haptic guidance systems increased road safety by reducing the lateral error, the distance and reaction time to a sudden obstacle, and the number of collisions.
|
|
15:05-16:25, Paper We-Po2S.35 | Add to My Program |
Deep Federated Learning for Autonomous Driving |
|
Nguyen, Anh | University of Liverpool |
Do, Tuong | AIOZ Singapore |
Tran, Minh | AIOZ |
Nguyen, Binh | AIOZ |
Duong, Chien | AIOZ |
Phan, Tu | AIOZ |
Tjiputra, Erman | AIOZ |
Tran, Quang | AIOZ |
Keywords: Vision Sensing and Perception, Convolutional Neural Networks
Abstract: Autonomous driving is an active research topic in both academia and industry. However, most existing solutions focus on improving accuracy by training learnable models with centralized large-scale data and therefore do not take the user's privacy into account. In this paper, we present a new approach to learning an autonomous driving policy while respecting privacy concerns. We propose a peer-to-peer Deep Federated Learning (DFL) approach to train deep architectures in a fully decentralized manner and remove the need for central orchestration. We design a new Federated Autonomous Driving network (FADNet) that can improve model stability, ensure convergence, and handle imbalanced data distributions while being trained with federated learning methods. Extensive experimental results on three datasets show that our approach with FADNet and DFL achieves superior accuracy compared with other recent methods. Furthermore, our approach maintains privacy by not collecting user data on a central server. Our source code can be found at https://github.com/aioz-ai/FADNet.
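A gossip-style, fully decentralized update can be illustrated by having each peer average its model parameters with those of its topology neighbors, with no central server; the Python sketch below shows that generic step and is not the FADNet/DFL implementation linked above.

# Minimal peer-to-peer parameter averaging step: each peer averages its weights
# with its topology neighbors; no central orchestration. Illustrative only.
import copy
import torch
import torch.nn as nn

def average_with_neighbors(own_model, neighbor_models):
    new_model = copy.deepcopy(own_model)
    own_state = own_model.state_dict()
    neighbor_states = [m.state_dict() for m in neighbor_models]
    averaged = {}
    for name, param in own_state.items():
        stacked = torch.stack([param] + [s[name] for s in neighbor_states], dim=0)
        averaged[name] = stacked.mean(dim=0)
    new_model.load_state_dict(averaged)
    return new_model

peers = [nn.Linear(10, 2) for _ in range(3)]        # placeholder local models
peers[0] = average_with_neighbors(peers[0], peers[1:])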
|
|
15:05-16:25, Paper We-Po2S.36 | Add to My Program |
Ordered-Logit Pedestrian Stress Model for Traffic Flow with Automated Vehicles (I) |
|
Kamal, Kimia | Ryerson University |
Farooq, Bilal | Ryerson University |
Mudassar, Mahwish | Ryerson University |
Kalatian, Arash | Ryerson University |
Keywords: Automated Vehicles, Vulnerable Road-User Safety, Human-Machine Interface
Abstract: An ordered-logit model is developed to study the effects of Automated Vehicles (AVs) in the traffic mix on the average stress level of a pedestrian when crossing an urban street at mid-block. Information collected from a galvanic skin resistance sensor and virtual reality experiments are transformed into a dataset with interpretable average stress levels (low, medium, and high) and geometric, traffic, and environmental conditions. Modelling results indicate a decrease in average stress level with the increase in the percentage of AVs in the traffic mix.
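The ordered-logit formulation maps a linear predictor and ordered cut-points to probabilities of the stress levels via P(y <= k) = sigmoid(tau_k - x·beta). The Python sketch below evaluates these probabilities with placeholder coefficients, not the estimates reported in the paper.

# Standard ordered-logit probabilities for K ordered levels (low/medium/high).
# Coefficients and cut-points below are placeholders.
import numpy as np

def ordered_logit_probs(x, beta, cutpoints):
    """x: feature vector; cutpoints: increasing thresholds, len = K - 1."""
    eta = x @ beta
    cdf = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints) - eta)))   # P(y <= k)
    cdf = np.concatenate([cdf, [1.0]])
    return np.diff(np.concatenate([[0.0], cdf]))                 # P(y == k)

x = np.array([0.6, 1.0])            # e.g., [share of AVs in traffic, night-time flag]
beta = np.array([-1.2, 0.8])        # placeholder coefficients
print(ordered_logit_probs(x, beta, cutpoints=[-0.5, 0.9]))  # P(low), P(medium), P(high)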
|
|
We-D-OR Regular Session, Europa Hall |
Add to My Program |
Perception for Intelligent Vehicles |
|
|
Chair: Fernandez Lopez, Carlos | Karlsruhe Institute of Technology (KIT) |
|
16:25-16:45, Paper We-D-OR.1 | Add to My Program |
A Conditional Confidence Calibration Method for 3D Point Cloud Object Detection |
|
Kato, Yoshio | The University of Tokyo |
Kato, Shinpei | The University of Tokyo |
Keywords: Self-Driving Vehicles, Deep Learning, Lidar Sensing and Perception
Abstract: When we apply neural networks to safety-critical systems such as self-driving cars, the reliability of their predictions must be considered. However, recent deep neural networks have tended to output biased confidence. Additionally, the extent of confidence bias estimated by object detectors varies depending on factors such as the detected object's position and size. To address this problem, many researchers have proposed methods for calibrating confidences estimated by object detectors. In this study, we investigate the factors that may cause bias in the confidence of LiDAR-based 3D object detectors and show that our calibration method compensates for the effect of these factors to provide reliable confidence estimations, regardless of the neural network model used or the situations in which objects are detected.
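Temperature scaling is a standard baseline for confidence calibration: a single scalar T is fitted on a validation set to rescale logits. The Python sketch below shows this plain version as context; the paper's method is conditional (e.g., on object position and size) and differs from it.

# Temperature scaling: fit one scalar T on held-out data to rescale logits.
import torch

def fit_temperature(logits, labels, iters=200, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)            # T = exp(log_t) > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

val_logits = torch.randn(500, 2) * 3.0       # placeholder detector logits (validation set)
val_labels = torch.randint(0, 2, (500,))     # placeholder true/false-positive labels
T = fit_temperature(val_logits, val_labels)
calibrated = torch.softmax(val_logits / T, dim=1)   # calibrated confidences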
|
|
16:45-17:05, Paper We-D-OR.2 | Add to My Program |
3D-FlowNet: Event-Based Optical Flow Estimation with 3D Representation |
|
Sun, Haixin | Ecole Centrale De Nantes, Nantes, France |
Dao, Minh Quan | École Centrale De Nantes |
Fremont, Vincent | Ecole Centrale De Nantes, CNRS, LS2N, UMR 6004 |
Keywords: Convolutional Neural Networks, Deep Learning, Image, Radar, Lidar Signal Processing
Abstract: Event-based cameras can overcome the limitations of frame-based cameras in important tasks such as high-speed motion detection for self-driving car navigation in low-illumination conditions. Their high temporal resolution and high dynamic range allow them to work in fast-motion and extreme-lighting scenarios. However, conventional computer vision methods, such as deep neural networks, are not well adapted to event data, which are asynchronous and discrete. Moreover, traditional 2D-encoding representations of event data sacrifice temporal resolution. In this paper, we first improve the 2D-encoding representation by expanding it into three dimensions to better preserve the temporal distribution of the events. We then propose 3D-FlowNet, a novel network architecture that can process this 3D input representation and output optical flow estimates according to the new encoding method. A self-supervised training strategy is adopted to compensate for the lack of labeled datasets for event-based cameras. Finally, the proposed network is trained and evaluated with the Multi-Vehicle Stereo Event Camera (MVSEC) dataset. The results show that our 3D-FlowNet outperforms state-of-the-art approaches with fewer training epochs (30 compared to 100 for Spike-FlowNet). The code is released at https://github.com/adosum/3D-FlowNet.
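The 3D representation can be illustrated by binning events (x, y, t, polarity) into a voxel grid with several temporal slices instead of collapsing them into a single 2D frame; the Python sketch below uses assumed bin counts and normalization, not the exact encoding of 3D-FlowNet.

# Sketch of a 3D event representation: events are binned into a (time, height,
# width) voxel grid so that temporal structure is preserved.
import numpy as np

def events_to_voxel_grid(events, height, width, time_bins=5):
    """events: (N, 4) array with columns x, y, t, polarity in {-1, +1}."""
    grid = np.zeros((time_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)        # scale t to [0, 1]
    ti = np.minimum((t_norm * time_bins).astype(int), time_bins - 1)
    xi = events[:, 0].astype(int)
    yi = events[:, 1].astype(int)
    np.add.at(grid, (ti, yi, xi), events[:, 3])                  # signed accumulation
    return grid

rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 64, 1000), rng.integers(0, 48, 1000),
                      np.sort(rng.random(1000)), rng.choice([-1.0, 1.0], 1000)])
voxels = events_to_voxel_grid(ev, height=48, width=64)   # shape (5, 48, 64)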
|
|
17:05-17:25, Paper We-D-OR.3 | Add to My Program |
CSFlow: Learning Optical Flow Via Cross Strip Correlation for Autonomous Driving |
|
Shi, Hao | Zhejiang University |
Zhou, Yifan | Shanghai AI Laboratory |
Yang, Kailun | Karlsruhe Institute of Technology |
Yin, Xiaoting | Zhejiang University |
Wang, Kaiwei | Zhejiang University |
Keywords: Vision Sensing and Perception, Vehicle Environment Perception, Deep Learning
Abstract: Optical flow estimation is an essential task in self-driving systems, which helps autonomous vehicles perceive the temporal continuity of surrounding scenes. The calculation of all-pair correlation plays an important role in many existing state-of-the-art optical flow estimation methods. However, the reliance on local knowledge often limits a model's accuracy in complex street scenes. In this paper, we propose CSFlow, a new deep network architecture for optical flow estimation in autonomous driving, which consists of two novel modules: a Cross Strip Correlation module (CSC) and a Correlation Regression Initialization module (CRI). CSC utilizes a striping operation across the target image and the attended image to encode global context into correlation volumes, while maintaining high efficiency. CRI is used to maximally exploit the global context for optical flow initialization. Our method has achieved state-of-the-art accuracy on the public autonomous driving dataset KITTI-2015. Code is publicly available at https://github.com/MasterHow
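A simplified reading of strip correlation is that each feature of the first image is correlated only with the features along the same row (horizontal strip) and the same column (vertical strip) of the second image, rather than with all pixel pairs. The Python sketch below computes such strip correlation volumes; it is an illustration, not the released CSC module.

# Simplified strip correlation between two feature maps.
import torch

f1 = torch.rand(2, 64, 32, 96)   # (batch, channels, H, W) features of frame 1
f2 = torch.rand(2, 64, 32, 96)   # features of frame 2

# Horizontal strips: for every (h, w) in f1, correlation with all w' in row h of f2.
corr_h = torch.einsum("bchw,bchv->bhwv", f1, f2) / f1.shape[1] ** 0.5
# Vertical strips: for every (h, w) in f1, correlation with all h' in column w of f2.
corr_v = torch.einsum("bchw,bcgw->bhwg", f1, f2) / f1.shape[1] ** 0.5

print(corr_h.shape, corr_v.shape)   # (2, 32, 96, 96) and (2, 32, 96, 32)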
|
| |