Using Autoware to Calibrate LiDAR and Camera for ISEAUTO shuttle
Detailed instructions for installing and compiling Autoware.ai can be found here. This post provides all the materials recorded with the iseAuto shuttle that are needed to replicate the camera-LiDAR extrinsic calibration.
1. Recording the bag file for camera-LiDAR extrinsic calibration
The first step is to record camera and LiDAR data with the checkerboard. The checkerboard should be placed at different locations that cover the full camera view: left to right, bottom to top, and close to far away. The exact locations depend on the size of the checkerboard and the resolution of the sensors.
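As a rough sanity check on placement distances, the pinhole model gives the on-image size of one checkerboard square as focal_length_px * square_side / distance. The sketch below uses illustrative values (the focal length and square size are assumptions, not our camera's actual parameters):

```python
# Rough pinhole-model estimate of how many pixels one checkerboard
# square covers at a given distance. All values are illustrative.

def square_size_px(focal_px: float, square_m: float, distance_m: float) -> float:
    """Apparent side length of a square, in pixels, at distance_m."""
    return focal_px * square_m / distance_m

# Assumed example: 2000 px focal length, 10 cm squares.
for d in (2.0, 5.0, 10.0):
    print(f"{d:4.1f} m -> {square_size_px(2000, 0.10, d):.0f} px per square")
```

If the square shrinks to only a few pixels at the farthest placement, corner detection becomes unreliable, so the board size and maximum distance should be chosen together.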
Here is the checkerboard bag file for the iseAuto front camera and LiDAR. The details of the bag file are as follows:
Please note that to reduce the size of the bag file, the 4K images were stored as 'sensor_msgs/CompressedImage' messages, which can be visualized in Rviz but are not accepted by the Autoware calibration toolkit. Therefore, the image stream must be decompressed on the fly.
2. Intrinsic calibration of the camera
Another important point is that the Autoware camera-LiDAR calibration toolkit requires the camera's intrinsic parameters as input, and they must be listed in the Autoware format. Some camera manufacturers pre-calibrate their cameras at the factory and provide default intrinsic matrices. In our case, we performed the intrinsic calibration of our FLIR Grasshopper3 camera before the camera-LiDAR extrinsic calibration. There are many ways to calibrate a camera's intrinsics; we chose the ROS camera_calibration package for its simplicity and user-friendly interface.
It is recommended to detach the camera from the vehicle and do the intrinsic calibration in a low-interference environment (lab or office). This way a live video stream is available for calibration, and the checkerboard can be repositioned until enough data has been collected, which promises a more precise result than a pre-recorded bag file. The process of configuring the FLIR camera ROS driver will be covered in another post. Here is a screen recording of how we calibrated our FLIR Grasshopper3 camera with the ROS camera_calibration package.
Please read this page carefully for a detailed tutorial on calibrating a mono camera. For the command that starts the calibration node in that tutorial, three points require extra attention. First, the '--size' flag takes the number of interior vertex points, not the number of squares. Second, the '--square' flag takes the side length of a square, not its area. Third, use the '--no-service-check' flag if the camera does not provide a corresponding service.
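The '--size' convention is the one most often gotten wrong: a board with M x N squares has (M-1) x (N-1) interior corners. A tiny helper makes the rule explicit (the 9x7 board and 10.8 cm squares below are made-up example values):

```python
def calib_flags(squares_cols: int, squares_rows: int, square_side_m: float) -> str:
    """Build the --size/--square arguments from the board's square counts."""
    # camera_calibration wants interior corners, i.e. squares minus one per axis
    corners = f"{squares_cols - 1}x{squares_rows - 1}"
    return f"--size {corners} --square {square_side_m}"

# A 9x7-square board with 10.8 cm squares:
print(calib_flags(9, 7, 0.108))  # --size 8x6 --square 0.108
```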
Here is the bag file for the intrinsic calibration of our camera. Note that this bag file is almost 50 GB because it contains raw 4K images.

The ROS camera_calibration package archives all captured frames used for the calculation along with the intrinsic matrices. The final results are saved in text and yaml formats following the ROS standard; the format needs to be converted when using these matrices in other applications. Here is the archived file of our camera's intrinsic calibration, which was also used in the extrinsic calibration with the LiDAR sensor. This is the intrinsic calibration result of our FLIR Grasshopper3 camera.

3. Extrinsic calibration of the camera and LiDAR
First, start roscore in one terminal. Although the Autoware camera-LiDAR calibration node will also start a roscore, it is recommended to start it separately to make sure the bag-playing and image-decompressing commands can be executed.
Open a second terminal to play the bag file provided in Section 1.
rosbag play flir_velodyne_calib.bag -r 0.5 -l --pause
Here the playback speed is set to 0.5 for convenience in picking image frames. The '--pause' flag is compulsory because the image stream must be paused to select a pixel and find the corresponding LiDAR point.
The next step is to decompress the images on the fly. Open a new terminal and run:
rosrun image_transport republish compressed in:=/front_camera/image_color_rect raw out:=/front_camera/image_color_rect
Please note that the input topic name in the command must be '/front_camera/image_color_rect', not the actual topic name in the bag file ('/front_camera/image_color_rect/compressed'), because image_transport appends the '/compressed' suffix itself. The 'raw' keyword before the output topic name is also compulsory; otherwise the image stream may lag. It is recommended to visualize and check the decompressed images in Rviz before starting the Autoware camera-LiDAR calibration node.
In a fourth terminal, start the Rviz visualizer. It is recommended to pre-configure the window size and other parameters such as the topic name, the global fixed frame, the view angle, etc. These settings can be saved as an Rviz configuration file and reloaded later. Here is the Rviz configuration file for our bag file. Use the command:
rviz -d $PATH_TO_FILE/cam_lidar_cali.rviz
In the last terminal, start the Autoware camera-LiDAR calibration node. Note that autoware.ai must be sourced in this shell, and make sure the camera-LiDAR calibration package is available, as shown in this figure.

The camera's intrinsic yaml file and the topic name of the image stream must be specified. The intrinsics have to be in the Autoware format, and ours can be downloaded here. The command is:
roslaunch autoware_camera_lidar_calibrator camera_lidar_calibration.launch intrinsics_file:=/home/claude/Dev/iseauto_sensor_ws/src/iseauto/config/fromt_camera_lidar_calib/front_flir_camera_intrinsic_autoware.yaml image_src:=/front_camera/image_color_rect
Now the point cloud can be visualized in Rviz and the image stream in the newly-opened image viewer. The resolution of our images is 4240x2824. It is not recommended to scale down the image viewer during calibration, so a high-resolution monitor may be needed; an alternative is stacking two monitors vertically in the computer's display settings so that the checkerboard is always visible.
The calibration procedure is as follows: first select a pixel on the checkerboard in the image viewer, then use the 'Publish Point' tool in Rviz to select the corresponding LiDAR point. The coordinates of the selected pixel and point are shown in the terminal of the Autoware calibrator node. Repeat this procedure until there are 9 pixel-point pairs; the calibrator then computes the extrinsic matrices automatically, prints the result in the terminal, and saves it as a yaml file in the user's home folder. It is recommended to pick the pixel-point pairs with the checkerboard at different locations, covering the view from left to right and from close to far away. This video demonstrates the calibration procedure; only three pixel-point pairs are made in the video to keep it short.
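Under the hood, the calibrator solves a perspective-n-point problem from the collected pairs, and nine 2D-3D correspondences comfortably over-determine the pose. A self-contained Direct Linear Transform sketch with synthetic data (a simplified stand-in for the solver the toolkit actually uses, with made-up camera numbers) illustrates why a handful of pairs pins down the projection:

```python
import numpy as np

# Direct Linear Transform: recover the 3x4 projection matrix P from
# 2D-3D correspondences. A simplified stand-in for a PnP solver.
def dlt(points_3d, pixels):
    rows = []
    for (X, Y, Z), (u, v) in zip(points_3d, pixels):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    return vt[-1].reshape(3, 4)   # null-space vector -> P (up to scale)

# Synthetic ground truth: a simple camera and 9 non-coplanar 3D points.
K = np.array([[1000, 0, 640], [0, 1000, 360], [0, 0, 1]], float)
P_true = K @ np.hstack([np.eye(3), [[0.1], [-0.2], [0.0]]])
pts = np.array([[x, y, 4 + 0.3*x*y] for x in (-1, 0, 1) for y in (-1, 0, 1)], float)
proj = (P_true @ np.hstack([pts, np.ones((9, 1))]).T).T
pix = proj[:, :2] / proj[:, 2:]

P_est = dlt(pts, pix)
reproj = (P_est @ np.hstack([pts, np.ones((9, 1))]).T).T
pix_est = reproj[:, :2] / reproj[:, 2:]
print(np.allclose(pix, pix_est, atol=1e-4))  # reprojection of P_est matches
```

With noiseless synthetic data the recovered matrix reprojects the points exactly; with hand-picked pixel-point pairs the residual error is what repeating the procedure (next paragraph) reduces.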
In practice, the whole calibration procedure may need to be repeated several times to get a more precise result. The camera-LiDAR extrinsic matrices can be verified by projecting LiDAR points onto the images.
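This verification step amounts to a few lines of numpy: apply the 4x4 extrinsic transform to each LiDAR point, apply the intrinsic matrix, and keep the points that land inside the image. The extrinsic and intrinsic values below are placeholders, not our calibration result:

```python
import numpy as np

# Project LiDAR points into the camera image: p_img ~ K [R|t] p_lidar.
# All matrices here are illustrative placeholders, not our calibration.
K = np.array([[2000, 0, 2120], [0, 2000, 1412], [0, 0, 1]], float)
T = np.eye(4)                      # LiDAR->camera extrinsic (placeholder)
T[:3, 3] = [0.0, -0.3, -0.1]       # example translation only

def project(points_lidar, K, T, width=4240, height=2824):
    """Return pixel coordinates of LiDAR points that fall inside the image."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    cam = (T @ pts_h.T).T[:, :3]          # into the camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    pix = (K @ cam.T).T
    pix = pix[:, :2] / pix[:, 2:]         # perspective divide
    inside = (pix[:, 0] >= 0) & (pix[:, 0] < width) & \
             (pix[:, 1] >= 0) & (pix[:, 1] < height)
    return pix[inside]

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 10.0], [0.0, 0.0, -2.0]])
print(project(pts, K, T))   # the point behind the camera is dropped
```

Drawing the returned pixels over the camera frame, colored by depth, makes misalignment between the point cloud and the image obvious at a glance.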
Here are the camera-LiDAR extrinsic matrices saved by the Autoware calibrator. These values were used in our work for projecting LiDAR points onto the camera plane.
