Multimodal Panoptic Segmentation of 3D Point Clouds
Main Author:
Format: Electronic Book Chapter
Language: English
Published: KIT Scientific Publishing, 2023
Series: Karlsruher Schriften zur Anthropomatik
Subjects:
Online Access: DOAB: download the publication; DOAB: description of the publication
Summary: The understanding and interpretation of complex 3D environments is a key challenge of autonomous driving. Lidar sensors and the point clouds they record are particularly interesting for this challenge since they provide accurate 3D information about the environment. This work presents a multimodal approach based on deep learning for panoptic segmentation of 3D point clouds. It builds upon and combines three key aspects: a multi-view architecture, temporal feature fusion, and deep sensor fusion.
Physical Description: 1 electronic resource (248 p.)
ISBN: KSP/1000161158
Access: Open Access
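
The summary names three aspects (a multi-view architecture, temporal feature fusion, and deep sensor fusion) without detailing how they interact. The PyTorch sketch below is a rough, illustrative composition of these three ideas only; all module names, tensor shapes, channel counts, and the assumption that every feature map shares the range-view resolution are hypothetical choices made here for clarity and do not reproduce the architecture described in the book.

```python
# Minimal, illustrative sketch: multi-view encoders (range view + bird's-eye view),
# temporal fusion of the current and a previous scan, and deep sensor fusion with
# camera features, followed by simple panoptic heads. Assumptions only.
import torch
import torch.nn as nn


class ViewEncoder(nn.Module):
    """Small 2D CNN applied to one projected view (range, BEV, or camera image)."""

    def __init__(self, in_channels: int, out_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)


class MultimodalPanopticSketch(nn.Module):
    """Combines multi-view, temporal, and sensor fusion into per-pixel predictions."""

    def __init__(self, num_classes: int = 20, feat: int = 32):
        super().__init__()
        self.range_enc = ViewEncoder(in_channels=5, out_channels=feat)  # x, y, z, intensity, range (assumed)
        self.bev_enc = ViewEncoder(in_channels=4, out_channels=feat)    # hypothetical BEV channels
        self.cam_enc = ViewEncoder(in_channels=3, out_channels=feat)    # RGB image
        # Temporal fusion: concatenate features of the current and previous scan.
        self.temporal_fuse = nn.Conv2d(2 * feat, feat, 1)
        # Deep sensor fusion: concatenate lidar and camera features, assumed to be
        # aligned to the same range-view grid by a prior projection step.
        self.sensor_fuse = nn.Conv2d(3 * feat, feat, 1)
        self.semantic_head = nn.Conv2d(feat, num_classes, 1)  # per-pixel class logits
        self.offset_head = nn.Conv2d(feat, 2, 1)              # per-pixel instance-center offsets

    def forward(self, range_now, range_prev, bev, cam):
        f_now = self.range_enc(range_now)
        f_prev = self.range_enc(range_prev)
        f_temporal = self.temporal_fuse(torch.cat([f_now, f_prev], dim=1))
        f_bev = self.bev_enc(bev)
        f_cam = self.cam_enc(cam)
        # All feature maps are assumed to share the range-view resolution here.
        fused = self.sensor_fuse(torch.cat([f_temporal, f_bev, f_cam], dim=1))
        return self.semantic_head(fused), self.offset_head(fused)


if __name__ == "__main__":
    model = MultimodalPanopticSketch()
    h, w = 64, 512  # hypothetical range-image resolution
    sem, off = model(
        torch.randn(1, 5, h, w), torch.randn(1, 5, h, w),
        torch.randn(1, 4, h, w), torch.randn(1, 3, h, w),
    )
    print(sem.shape, off.shape)  # (1, 20, 64, 512) and (1, 2, 64, 512)
```

In this sketch, temporal fusion is plain feature concatenation across two scans and panoptic output is reduced to semantic logits plus instance-center offsets; a real system would additionally handle ego-motion compensation, view-to-view feature projection, and instance grouping.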