Implementing the Autoware Basic Demo on Boxbot

Anyone working on autonomous driving probably knows Autoware. Autoware is a set of modules covering mapping, localization, planning, detection, and control. It is completely open source, and comprehensive support is available from its open community, so it is usually the first choice for beginners who want to give their vehicles basic ‘self-driving’ ability. This post describes the procedure (including detailed parameter settings and configuration) to run Autoware on the boxbot robot so that it drives automatically along predefined waypoints in a map.

1. System configuration

Autoware needs to be compiled on the local host, and the Robot Operating System (ROS) is required to run it. The Autoware version we used is autoware.ai 1.14; detailed installation instructions can be found here. The ROS version is Melodic (installation instructions). The OS of the host is Ubuntu 18.04.

1.1 Compiling autoware.ai with CUDA enabled

autoware.ai can be compiled with GPU support enabled. Some of its modules can be accelerated by the GPU, for example the essential LiDAR-based NDT (Normal Distributions Transform) mapping and localization modules. However, even after solving many library incompatibility issues and successfully compiling autoware.ai with CUDA enabled on a desktop with a graphics card, we still hit bugs in the NDT mapping and localization modules. Besides, the computer (an Intel NUC) used in boxbot has no GPU, and the environment where boxbot operates does not involve heavy point cloud data, so it is not necessary to run autoware.ai on a GPU. As a reference for anyone who wants to try running autoware.ai on a GPU, we briefly list the issues we encountered during compilation. Please be aware that we caught these problems on our testing computer, which may have different hardware and software specifications from yours.

The OS of our testing computer is Ubuntu 18.04 and the graphics card is an Nvidia RTX 2070 Super. The driver and CUDA versions are as follows:

The main issue during compilation is CUDA version incompatibility. According to the installation guide, the CUDA version for the latest autoware.ai 1.14 has to be 10.0. Please note that this CUDA is not the same thing as the one shown in the picture above. The CUDA version reported by the graphics driver can differ from the one used by other applications; the CUDA version bound to the driver is determined by the driver itself. For example, the Nvidia driver on our computer is 440.33.01, and the CUDA version it reports is 10.2.
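The version pair reported by the driver can be checked from a terminal; the ‘CUDA Version’ printed here is the one bundled with the driver, not necessarily the toolkit used for compilation:

nvidia-smi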

System-wide, multiple CUDA toolkit versions can exist at the same time, as on our testing computer.
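Each toolkit lives under its own directory in /usr/local, so a quick way to list the installed versions is:

ls /usr/local | grep cuda
# e.g. cuda, cuda-10.0, cuda-10.2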

A corresponding symlink can be created to choose which CUDA version is used, with the command:

ln -sfT /usr/local/cuda-VERSION_YOU_WANT /usr/local/cuda

The ’nvidia-cuda-toolkit’ package is also needed for compilation. It can be installed with:

apt install nvidia-cuda-toolkit

Another issue concerns the versions of gcc (GNU Compiler Collection) and g++. Autoware.ai’s NDT GPU module requires gcc 6. As with CUDA, multiple gcc versions can coexist on the system, and they can be registered and switched with these commands:

update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-6 60
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-6 60
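If several versions are registered this way, the active one can also be chosen interactively:

update-alternatives --config gcc
update-alternatives --config g++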

Finally, please note that autoware.ai is no longer maintained by the original developers. The successor project is Autoware.Auto, which is based on ROS 2 and might be more worthwhile to try.

1.2 Connecting the CAN bus, LiDAR sensor, and joystick

A LiDAR sensor is required for deploying Autoware, because its essential mapping and localization modules are based on point cloud data. There is no specific requirement on the LiDAR model; in principle, any LiDAR sensor that can be started in ROS and publish point cloud data as a topic can be used. The LiDAR sensor we used in boxbot is the Velodyne VLP-16, whose driver is actively supported by the ROS community. Most of the Velodyne VLP-16 driver configuration can be kept at the defaults, except for the IP address and port of the LiDAR. According to the Velodyne VLP-16 manual, the IP address and port can be modified through the sensor’s web interface. First find the current IP address of the LiDAR sensor, and make sure the sensor and the host are connected by an Ethernet cable. Open a web browser and enter the sensor’s IP address, and an interface like this will appear:

This is our LiDAR sensor’s configuration. Please note that the IP address and port of the host also need to be defined in the LiDAR sensor’s configuration. Therefore the network interface of the host has to be set to a static address, and the corresponding port must be reserved for LiDAR data transfer. In our case, the static IP address of the host is 192.168.200.1, the reserved port is 2365, and the IP address of the sensor was set to 192.168.200.20. All of this information must be consistent with the settings in the LiDAR sensor’s ROS launch file.
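As a quick, non-persistent sketch of the host-side setting (the interface name eth0 is an assumption; check yours with ‘ip link’, and use netplan or the network settings GUI on Ubuntu 18.04 for a permanent configuration):

# assign the static address used in our setup to the LiDAR-facing interface (run as root; interface name is an assumption)
ip addr add 192.168.200.1/24 dev eth0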

The joystick we used to manually control boxbot is the Logitech Wireless Gamepad F710. Information on the corresponding ROS driver can be found here.

We designed a specific CAN bus for boxbot; please contact us for more details.

2. Creating the point cloud map

2.1 Recording the point cloud data as a bag file

It is recommended to create the point cloud map offline, i.e. drive the vehicle manually while recording the LiDAR data, and build the map from the recording afterwards. Therefore, the LiDAR sensor’s ROS driver and its communication with the host, the vehicle’s CAN bus, and joystick manual control all have to be ready before this step.

Here is the bag file we recorded for the boxbot demo. The point cloud topic has to be named ‘/points_raw’ and its frame_id has to be ‘/velodyne’ (required by the ndt_mapping module in Autoware).
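As a minimal sketch (the bag file name is arbitrary), the recording itself only needs the point cloud topic:

# record the raw point cloud topic expected by ndt_mapping
rosbag record -O boxbot_corridor.bag /points_raw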

As for the configuration of the LiDAR’s ROS driver, the important change we made compared with the default settings is ‘max_range’. Because the environment is a building corridor, the original ‘max_range’ value of 130 would make the map contain points that are too far away, while smaller values such as 5 or 10 leave the mapping module unable to orient the points properly. In the end, we chose 50 for ‘max_range’. Here is our launch file for the Velodyne LiDAR’s ROS driver.
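For reference, a command-line sketch that passes the same values to the stock velodyne_pointcloud driver might look like this (argument names are taken from the standard VLP16_points.launch and may differ between driver versions):

# start the VLP-16 driver with our network settings and the reduced max_range
roslaunch velodyne_pointcloud VLP16_points.launch device_ip:=192.168.200.20 port:=2365 max_range:=50.0
# note: the driver publishes /velodyne_points, so a remap or relay to /points_raw may still be needed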

2.2 Replaying the bag file

To build a point cloud map with Autoware’s NDT mapping module, first start the runtime manager, the graphical interface in Autoware used to load parameters and launch modules. Then, in the [Simulation] tab, play the recorded bag.

Please note that you have to start playing the bag briefly and then pause it. This is critical because when the runtime manager starts, the ROS parameter ‘/use_sim_time’ is set to true, and the mapping and localization modules then subscribe to a topic called ‘/clock’ to use simulation time. This topic is only published while the bag file is playing. If the mapping module is started without the ‘/clock’ topic being published, tf-related issues will occur.
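For reference, the equivalent behavior outside the runtime manager is a paused playback that publishes the clock; a minimal sketch (bag name is the hypothetical one from above):

rosparam set use_sim_time true
# --clock publishes /clock from the bag, --pause starts paused (press space to resume)
rosbag play --clock --pause boxbot_corridor.bag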

Point clouds can be verified and visualized in Rviz, which can be launched from the runtime manager. Add the corresponding topic and change [Fixed Frame] to ‘velodyne’ (the frame_id of the point cloud topic). Here is an example of the point cloud visualization.

2.3 Setting the tf

Establishing the transformations between the different frames is critical in Autoware. There are four basic frames in Autoware: ‘world’ for the world coordinate system, ‘map’ for the map coordinate system, ‘base_link’ for the vehicle, and ‘velodyne’ for the sensor (there can be other frames for other sensors). The transformations from ‘world’ to ‘map’ and from ‘base_link’ to ‘velodyne’ are constant and can be pre-defined with the tf package in ROS. The most important transformation is from ‘map’ to ‘base_link’, which is the core localization function provided by Autoware.

1) tf from ‘world’ to ‘map’

Technically, the transformation from ‘world’ to ‘map’ can rely on a GPS signal, which makes it possible to localize the vehicle globally. However, for robots designed to operate indoors, such as boxbot, the ‘world’ and ‘map’ frames can simply be set to be identical. Here we use the ROS tf package. Go to the [Map] tab in the runtime manager and click the ‘TF’ button.

The ’tf.launch’ file is like this:

<launch>

<node pkg="tf"  type="static_transform_publisher" name="world_to_map" args="0 0 0 0 0 0 /world /map 10" />

</launch>

The arguments ‘0 0 0 0 0 0 /world /map 10’ specify the transform from frame ‘world’ to frame ‘map’: the first six values are the x, y, z translation followed by the yaw, pitch, roll rotation, and the last value is the publishing period in milliseconds.

The tf between ‘base_link’ and ‘velodyne’ could also be defined in a launch file, like the tf from ‘world’ to ‘map’. However, there is a tab in the runtime manager that does the same job.

Select [Velodyne] in [localizer] and adjust the parameters under [Baselink to Localizer]. ‘x’, ‘y’, ‘z’, ‘roll’, ‘pitch’, and ‘yaw’ represent the relative displacement between the center of the LiDAR sensor and the midpoint of the vehicle’s rear axle. Click the [TF] button and the tf from frame ‘base_link’ to ‘velodyne’ will be published. Please note that if the frame_id of the point cloud topic is something other than ‘velodyne’, the tf has to be published separately in a launch file, as sketched below.
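A minimal command-line sketch of such a transform (the numbers are placeholders to be replaced with values measured on your own vehicle; note that static_transform_publisher takes x y z yaw pitch roll and a period in milliseconds):

# publish base_link -> LiDAR frame (replace 'velodyne' with your actual frame_id and the offsets with measured values)
rosrun tf static_transform_publisher 0.3 0.0 0.6 0 0 0 base_link velodyne 10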

2.4 Mapping

The mapping module in Autoware is based on the NDT (Normal Distributions Transform) algorithm. We will not explain NDT mathematically but simply focus on the practical usage of the ndt_mapping module. Go to the [Computing] tab in the runtime manager, find [lidar_localizer] in the left column, then [ndt_mapping]; its parameters can be changed by clicking the [app] button. Default parameter values are shown when it is opened for the first time. Depending on the environment where the vehicle operates, some parameters have to be adjusted. In our case, the demo environment is a building corridor and the LiDAR’s resolution is relatively sparse (16 channels), so we changed [Resolution] to 3 and [Minimum Scan Range] to 0.3. Here is our configuration for the ndt_mapping module.

For [Method Type], choose [pcl_generic] if autoware.ai was compiled without CUDA enabled. There is no doubt that [pcl_anh_gpu] has a speed advantage when processing point cloud data. However, as mentioned in the previous section, even though we managed to compile autoware.ai with CUDA enabled, with [pcl_anh_gpu] selected the ndt_mapping module always got stuck somewhere while processing the point clouds. In the end, we decided to simply use [pcl_generic] and process the data on the CPU.

After all parameters are adjusted to the desired values, close the window and check the box in front of [ndt_mapping]. If no errors are shown in the terminal, go to the [Simulation] tab and resume playing the bag file. You will then see mapping information for every point cloud frame printed in the terminal. It is also possible to visualize the mapping process in Rviz. Here is an example video of the whole mapping procedure.

There are some points to bear in mind.

  • All operations in the [Simulation] tab must be done before the other processes; otherwise there will be errors related to the frame_id.

  • Rviz consumes a lot of CPU power while the point clouds are accumulating, so there is no need to keep it open all the time once you have confirmed that the mapping module is working. We left it open just for the demonstration, and our point cloud data is not very heavy.

  • When mapping heavy point clouds, the two numbers in ‘(Processed/Input)’ (as shown in the picture below) may drift too far apart. The first number is the frame currently being processed; the second is the total number of frames that have been loaded. If the difference between them becomes too big, it is recommended to pause the bag playback for a while until they get close to each other. It is also possible to play the bag at a slower rate if its point cloud data is heavy. An alternative solution for heavy point cloud data is [approximate_ndt_mapping], which is similar to [ndt_mapping] except that it saves the PCD as multiple sub-map files; the interval of each sub-map file can be defined as a parameter (in meters).

  • When the environment is relatively small and the point clouds are sparse, it is recommended to output the PCD file at a higher resolution than the default (0.2); we chose the original resolution for the boxbot demo. The final PCD file will be saved under ‘~/.ros’ if no output path was specified.

  • Remember to verify the PCD map file. The procedure for loading and visualizing the map file is shown in the video. In particular, check for distortions and misalignment where the path closes back on itself; a quick way to inspect the file from the command line is sketched below.
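A simple inspection tool is pcl_viewer from the pcl-tools package (the file name below is a hypothetical example; use whatever name the PCD was saved under):

apt install pcl-tools
pcl_viewer ~/.ros/boxbot_map.pcd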

3. Localization based on the point clouds

The point cloud map is used for localization. The localization function used in Autoware is ndt_matching, which is a ‘scan-to-map’ method. Testing the localization is also a way to verify the accuracy of the point cloud map created in the last section.

Sections 2.2 and 2.3 have to be repeated first. The next step is loading the created map, as shown in the video in Section 2.4. It might take some time to load the map if it is big; wait until ‘OK’ is shown.

One important step before starting the localization module is starting the point cloud filter. Go to the [Sensing] tab, click [app] for [voxel_grid_filter] in [Points Downsampler], and set up the parameters.

The voxel grid filter is a point cloud downsampling method which approximates all points in each voxel (3D box) by their centroid. Here we set [Voxel Leaf Size] to 1, which means all points inside each 1 m voxel are reduced to a single point.

As discussed in Section 2.3, the transformation from frame ‘map’ to ‘base_link’ is computed by Autoware’s ndt_matching module. In [ndt_matching] under the [Computing] tab, click [app] to adjust the parameters.

Check [Initial Pos] here, because the boxbot demo environment is indoors and there is no GNSS sensor on the vehicle. Please note that a GNSS sensor is useful in Autoware for publishing the vehicle’s initial pose; without one, you may need to use the [2D Pose Estimate] tool in Rviz to manually localize the vehicle at the beginning.

Check the box in front of [ndt_matching] after adjusting its parameters. Then check the box for [vel_pose_connect] in [autoware_connector], making sure that [Simulation Mode] is not selected in this module.

If no errors have been reported in the runtime manager’s terminal so far, first test the localization with the simulated data (the bag file recorded in the last section). Open Rviz from the runtime manager and load the configuration file (the path is usually ‘/autoware.ai/src/autoware/documentation/autoware_quickstart_examples/config/default.rviz’). Then start playing the bag file in the runtime manager. The recorded point cloud data will be shown together with the map in Rviz. Here is the example video.
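Rviz with that configuration can also be opened directly from a terminal; a minimal sketch, assuming the workspace is checked out under the home directory (adjust the path to your own setup):

rosrun rviz rviz -d ~/autoware.ai/src/autoware/documentation/autoware_quickstart_examples/config/default.rviz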

Sometimes, at the beginning, the point clouds are not shown in the map. This is because ndt_matching cannot determine the initial pose of the vehicle; in this case, use the [2D Pose Estimate] tool in Rviz to set the vehicle’s initial pose (shown in the video). This is where a GNSS sensor on the vehicle becomes useful, as it can replace this manual estimation of the initial pose.

Localizing the real vehicle is similar to using the pre-recorded data. Start the LiDAR sensor, enable manual control, move the vehicle to a place inside the map, and then repeat the same procedure as for the simulated data, except for playing the bag file. If the ndt_matching module cannot match the real-time point cloud to the map at the beginning, use the [2D Pose Estimate] tool to assist the initial matching. Move the vehicle around the map to check whether it can localize itself and whether the real-time point clouds align with the map.

4. Recording the waypoints

Waypoints are used to regulate and control the movement of the vehicle. Each waypoint contains x, y, z, yaw, and velocity information. When recording waypoints with the real vehicle, keep the vehicle steady and at a constant speed. It is also recommended to avoid sharp turns (try to turn in a smooth arc).

Repeat the procedures mentioned above until the real vehicle can be localized in the point cloud map. Go to the [Computing] tab, find [Motion Planning], then [waypoint_maker], then [waypoint_saver]. Click [app] to change the parameters.

Check the box to save ‘/current_velocity’. [interval] is the distance between two consecutive waypoints, in meters. We chose 0.3 because boxbot was not designed to drive quickly and the testing environment is relatively small.

Close the parameter-tuning window, then start the ‘waypoint_saver’ module. Go to Rviz and add the corresponding topic.

Start moving the vehicle along the desired path. When the vehicle arrives at the destination, uncheck the box in front of ‘waypoint_saver’. All waypoints will be saved as a CSV file at the pre-defined path. Here are the waypoints we recorded for the boxbot demo.

5. Pursuing the waypoints

Pursuing the waypoints means the vehicle automatically moves along the pre-defined route. This is the simplest form, and the starting point, of autonomous driving.

The first step of waypoint pursuit is finding the waypoint closest to the vehicle, then using the control algorithm to move the vehicle toward that waypoint. The essence of waypoint pursuit is therefore the vehicle’s self-localization. The details of mapping and localization were described in Sections 2 and 3; follow those instructions until the vehicle can be localized in the map. Then load the waypoints file recorded in Section 4 with the ‘waypoint_loader’ module.

5.1 Preprocessing of the point cloud data

Go to the [Sensing] tab. [voxel_grid_filter] needs to be started to publish the downsampled point cloud topic required by the ndt_matching module. In addition, [ring_filter] in [Points Downsampler] and [ring_ground_filter] in [Points Preprocessor] also have to be launched to publish a point cloud topic with the ground points filtered out. We kept the [ring_filter] parameters at their defaults. The [ring_ground_filter] parameters have to be set according to the LiDAR sensor model.

[Sensor Model] is the number of vertical laser beams. We used a Velodyne VLP-16 LiDAR sensor, so we chose [16]. [Sensor Height] is the distance from the LiDAR sensor to the ground. [Max Slope] is the maximum slope of the environment, in degrees; note that if this parameter is not adjusted to the real environment, the vehicle may recognize a slope as an obstacle. [Vertical Thres] is the threshold height above which an object is recognized as an obstacle. All values in the picture above were tuned for boxbot and our testing environment. The comparison between the ground-filtered point cloud and the original point cloud looks like this:

Please note that the ‘ring_filter’ module requires the point cloud topic to have a field named ‘ring’; otherwise errors will occur. For Velodyne LiDAR sensors, the ‘ring’ value identifies the laser beam that produced the corresponding point, and it is the main criterion the ‘ring_ground_filter’ module uses to filter out ground points. Be aware that some other LiDAR models may not provide this information, in which case other ground-filtering algorithms will be needed.
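Whether your driver provides the ‘ring’ field can be checked by printing the field list of a single point cloud message; a minimal sketch, assuming the topic is ‘/points_raw’:

# print the PointCloud2 field definitions of one message; 'ring' should appear alongside x, y, z, intensity
rostopic echo -n 1 /points_raw/fields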

5.2 Lanes and waypoints planning

In the [Computing] tab, go to [Mission Planning] in the right column, then [lane_planner]. Check the boxes in front of [lane_rule], [lane_stop], and [lane_select].

These three modules handle the vehicle’s behavior for traffic lights and lane changing. They are not a concern for boxbot at the moment, but they still need to be started for lane planning, so all their parameters can be kept at the defaults.

Next, go to [waypoint_planner] in [Motion Planning] and start [astar_avoid] and [velocity_set].

These modules are related to obstacle avoidance. We set their parameters like this:

Please note that the ‘velocity_set’ module requires the point cloud topic ‘/points_no_ground’ as input, which is why the ‘ring_ground_filter’ and ‘ring_filter’ modules are needed.

The last step is starting [pure_pursuit] and [twist_filter] in [waypoint_follower].

These two modules make the vehicle follow the pre-recorded waypoints and publish the topic ‘/twist_cmd’, which contains the linear and angular velocities that the vehicle’s CAN system listens to. The configuration of these two modules looks like this:

These are all the configurations for waypoint pursuit. Here is a video showing the whole procedure. For simplicity, we used simulated data in the video; the process is the same for the real vehicle except for playing the pre-recorded bag file.
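Before wiring the commands into the CAN node, it can be reassuring to confirm that velocity commands are actually being produced; a simple check from a terminal:

# /twist_cmd carries the commanded linear and angular velocities
rostopic hz /twist_cmd
rostopic echo /twist_cmd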

Conclusion

This post systematically described the procedure for implementing the basic Autoware functions on our boxbot robot to achieve simple autonomous driving. All the operations were executed through Autoware’s graphical interface in order to demonstrate the process step by step. In practice, all these operations can be integrated into a few launch files that are started with a single command. Please contact us for further information.