ITSC 2025 Paper Abstract


Paper TH-EA-T28.4

Lu, HaoAng (Xi'an Jiaotong University), Su, Yuanqi (Xi'an Jiaotong University), Zhang, Xiaoning (Xi'an Jiaotong University), Hu, Hao (China Academy of Railway Sciences Corporation Limited)

One Step Closer: Creating the Future to Boost Monocular Semantic Scene Completion

Scheduled for presentation during the Regular Session "S28b-Multi-Sensor Fusion and Perception for Robust Autonomous Driving" (TH-EA-T28), Thursday, November 20, 2025, 14:30−14:50, Stradbroke

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles, Advanced Sensor Fusion for Robust Autonomous Vehicle Perception, Real-time Object Detection and Tracking for Dynamic Traffic Environments

Abstract

In recent years, visual 3D Semantic Scene Completion (SSC) has emerged as a critical perception task for autonomous driving due to its ability to infer complete 3D scene layouts and semantics from single 2D images. However, in real-world traffic scenarios, a significant portion of the scene remains occluded or outside the camera's field of view: a fundamental challenge that existing monocular SSC methods fail to address adequately.

To overcome these limitations, we propose Creating the Future SSC (CF-SSC), a novel temporal SSC framework that leverages pseudo-future frame prediction to expand the model's effective perceptual range. Our approach combines camera poses and depth maps to establish accurate 3D correspondences, enabling geometrically consistent fusion of past, present, and predicted future frames in 3D space. Unlike conventional methods that rely on simple feature stacking, our 3D-aware architecture achieves more robust scene completion by explicitly modeling spatial-temporal relationships.
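The pose-and-depth correspondence step described above can be sketched with standard pinhole-camera geometry: each pixel is back-projected to a 3D point using its depth and the camera intrinsics, then transformed into another frame's coordinates with the relative pose. This is a minimal illustrative sketch of that general mechanism, not the paper's actual implementation; all function and variable names (`warp_points`, `T_rel`) are assumptions.

```python
import numpy as np

def warp_points(depth: np.ndarray, K: np.ndarray, T_rel: np.ndarray) -> np.ndarray:
    """Back-project a depth map to 3D and move the points into another
    frame via a 4x4 relative pose. Illustrative only, not from the paper.

    depth: (H, W) per-pixel depth, K: (3, 3) intrinsics,
    T_rel: (4, 4) rigid transform source-frame -> target-frame.
    Returns (3, H*W) points in the target frame.
    """
    h, w = depth.shape
    # Pixel grid in homogeneous image coordinates [u, v, 1]^T
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T

    # Back-project: X_cam = depth * K^{-1} [u, v, 1]^T
    rays = np.linalg.inv(K) @ pix.astype(np.float64)
    pts = rays * depth.reshape(1, -1)

    # Apply the relative pose in homogeneous coordinates
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    return (T_rel @ pts_h)[:3]
```

With such correspondences, features from past, present, and pseudo-future frames can be aggregated at the same 3D locations rather than naively stacked in image space, which is what makes the fusion geometrically consistent.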

Extensive experiments on the SemanticKITTI and SSCBench-KITTI-360 benchmarks demonstrate state-of-the-art performance, validating the effectiveness of our approach and highlighting its ability to improve occlusion reasoning and 3D scene completion accuracy.

All Content © PaperCept, Inc.
