ITSC 2024 Paper Abstract


Paper WeBT2.3

Abdul Sahid, Mohammad Asif Bin Abdul Sahid (Agency for Science, Technology and Research (A*STAR)), Lim, Hon Wek (National University of Singapore), Thadimari, Yeshas (Institute for Infocomm Research (I2R), A*STAR), Imanberdiyev, Nursultan (Institute for Infocomm Research (I2R), A*STAR), Camci, Efe (Institute for Infocomm Research (I2R), A*STAR)

Localization of a Tethered Drone & Ground Robot Team by Deep Neural Networks

Scheduled for presentation during the Regular Session "Sensing, Vision, and Perception II" (WeBT2), Wednesday, September 25, 2024, 15:10−15:30, Salon 5

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada


Keywords: Sensing, Vision, and Perception; Automated Vehicle Operation, Motion Planning, Navigation; Multi-autonomous Vehicle Studies, Models, Techniques and Simulations

Abstract

Accurate localization of vehicles is vital for proper coordination in multi-vehicle missions. Drawing inspiration from the recent success of deep neural networks (DNNs) in pose estimation of household objects, we propose a new DNN-based approach for the localization of a tethered drone & ground robot team. Unlike most of the literature, we focus on localizing such robots without any external infrastructure, such as the Global Navigation Satellite System (GNSS). This enables our robots to operate anywhere, for example indoors where GNSS signals are unavailable, or in urban environments where the signals are degraded by high-rise buildings. Equipped with a downward-facing camera, our drone detects the ground robot in aerial images and estimates its relative pose on the fly using DNNs. This pose is then fused with the ground robot's simultaneous localization and mapping (SLAM) estimate for tandem navigation of both vehicles.
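For illustration only (this is not the authors' code), the following Python sketch shows how a DNN-estimated relative pose of the ground robot in the drone's camera frame could be composed with the ground robot's SLAM pose to recover the drone's pose in the map frame; all frame names, helper functions, and numeric values are assumptions.

    # Minimal sketch (assumption, not the paper's implementation): fuse a
    # DNN-estimated relative pose with the ground robot's SLAM pose using
    # homogeneous transforms.
    import numpy as np

    def to_homogeneous(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Ground robot's pose in the drone camera frame (hypothetical DNN output).
    T_drone_robot = to_homogeneous(np.eye(3), np.array([0.4, -0.2, 10.0]))

    # Ground robot's pose in the map frame (hypothetical SLAM output).
    T_map_robot = to_homogeneous(np.eye(3), np.array([3.0, 1.5, 0.0]))

    # Drone pose in the map frame: T_map_drone = T_map_robot * inv(T_drone_robot).
    T_map_drone = T_map_robot @ np.linalg.inv(T_drone_robot)
    print(T_map_drone[:3, 3])  # estimated drone position in the map frame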

We create a digital twin of our robots in high-fidelity Gazebo simulations and collect custom datasets for DNN training. We then validate the trained DNNs through flight tests at heights ranging from 5 m to 15 m. We extensively benchmark our approach in batched tests against two other infrastructure-agnostic localization methods, AprilTag and OV2SLAM. While the purely AprilTag-based approach fails beyond 5 m height and OV2SLAM requires rich visual features in the surroundings, our approach achieves flights at more than 10 m height without relying on external visual features. Finally, we conduct real robot tests demonstrating the real-time feasibility of our approach on an edge device (Nvidia Jetson AGX Orin).
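As a rough illustration of how real-time feasibility might be checked on an edge device such as the Jetson AGX Orin, a generic per-frame latency harness could look like the sketch below; the detector stub, frame count, and 33 ms budget are placeholders, not values from the paper.

    # Minimal sketch (assumption): measure per-frame inference latency of a pose
    # estimator and compare it against an illustrative real-time budget.
    import time
    import statistics
    import numpy as np

    def estimate_pose(image):
        """Placeholder for the DNN pose estimator; returns a dummy 6-DoF pose."""
        time.sleep(0.01)  # stand-in for actual inference work
        return np.zeros(6)

    frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(50)]
    latencies_ms = []
    for frame in frames:
        start = time.perf_counter()
        estimate_pose(frame)
        latencies_ms.append((time.perf_counter() - start) * 1000.0)

    budget_ms = 33.3  # ~30 FPS target, illustrative only
    p95 = sorted(latencies_ms)[int(0.95 * len(latencies_ms))]
    print(f"mean {statistics.mean(latencies_ms):.1f} ms, p95 {p95:.1f} ms, budget {budget_ms} ms")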
