Camera-Lidar Fusion Based Object Segmentation in Adverse Weather Conditions for Autonomous Driving

Abstract

This paper presents the results of a series of machine learning experiments on domain adaptation between different training datasets for autonomous driving. The experiments use a neural network model obtained through transfer learning from the Waymo Open Dataset, which is then tested on our custom dataset recorded at the TalTech campus under different weather conditions. In this work, we developed a set of tools to extract and process the sensory data from the iseAuto shuttle. The camera and LiDAR sensors of the iseAuto shuttle were calibrated, and the LiDAR point clouds were projected onto the camera plane. We present our publicly available iseAuto dataset, which was used for a classification task under adverse weather and low-illumination conditions. The dataset contains 8000 frames of camera and LiDAR data for traffic object detection and segmentation, of which 2400 frames carry ground-truth annotations with manually labeled object contours. These manually labeled segmentations of cars and humans can be used by the community to evaluate the segmentation accuracy of their own models. An additional focus of this paper is to demonstrate that, with a small amount of custom annotated data, transfer learning and semi-supervised learning make it possible to obtain reasonable accuracy on noisy real-world data. The current vehicle segmentation performance ranges from 65% to 85% intersection over union (IoU), and pedestrian segmentation ranges from 43% to 60% IoU, in challenging scenarios such as nighttime and rainy weather.
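The projection of LiDAR point clouds onto the camera plane mentioned above follows the standard pinhole camera model. The sketch below is a minimal illustration in plain NumPy, assuming the calibrated LiDAR-to-camera extrinsic matrix and the camera intrinsic matrix are already available; the names (e.g. `project_lidar_to_camera`, `T_cam_lidar`) are hypothetical and not taken from the iseAuto toolchain.

```python
import numpy as np

def project_lidar_to_camera(points_lidar, T_cam_lidar, K, image_size):
    """Project LiDAR points onto the image plane.

    points_lidar : (N, 3) XYZ points in the LiDAR frame.
    T_cam_lidar  : (4, 4) extrinsic transform from LiDAR to camera frame.
    K            : (3, 3) camera intrinsic matrix.
    image_size   : (width, height) of the camera image.
    Returns pixel coordinates (2, M) and the corresponding depths (M,).
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T)[:3, :]          # (3, N)

    # Keep only points in front of the camera.
    in_front = pts_cam[2, :] > 0.0
    pts_cam = pts_cam[:, in_front]

    # Perspective projection with the intrinsic matrix.
    pix = K @ pts_cam
    pix = pix[:2, :] / pix[2, :]                      # (2, N) pixel coordinates

    # Discard projections that fall outside the image bounds.
    w, h = image_size
    in_image = ((pix[0, :] >= 0) & (pix[0, :] < w) &
                (pix[1, :] >= 0) & (pix[1, :] < h))
    return pix[:, in_image], pts_cam[2, in_image]
```

The returned depths can be used to color the projected points when overlaying them on the camera image, which is a common way to visually verify the camera-LiDAR calibration.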

Publication
Baltic Electronics Conference 2024