26/03/2018

# Setting up a software stack for autonomous mono robot exploration and mapping using ROS

## Prerequisites

This setup is performed and documented on an Ubuntu system with the following software stack:

• Ubuntu 16.04 LTS (it may work on other distributions though)
• ROS Kinetic: if you don't have ROS pre-installed, please refer to this tutorial: http://wiki.ros.org/kinetic/Installation/Ubuntu. I suggest using the full desktop installation; it will take a while to download (> 2 GB)
• Gazebo for simulation (installed by default if you choose the full desktop installation of ROS)
• Git (sudo apt-get install git)
• Catkin for package building (installed by default with ROS)

To follow this post, some basic knowledge of ROS is needed.

## Install the required packages

All the ROS packages used in this document are available on my GitHub at: https://github.com/lxsang/ROS-packages.

The first step is to create a workspace for our packages; this assumes that you have already installed ROS using the link provided in the previous section:

# initialize rosdep if you didn't do it
sudo rosdep init
rosdep update
# source the environment to support ROS; for convenience, we add the
# command to ~/.bashrc so that the environment is sourced automatically
# each time a new terminal session is opened
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
# create new workspace at the home folder
mkdir ~/rosws
cd ~/rosws
# create source directory
mkdir src
# create an empty workspace
catkin_make
# before cloning our packages, we need to install some dependencies:
# the default ROS navigation stack, which is used in
# our exploration packages,
# and the SuiteSparse library needed by Karto SLAM
sudo apt-get install ros-kinetic-navigation libsuitesparse-dev
# install the Turtlebot dependencies
sudo apt-get install ros-kinetic-kobuki ros-kinetic-kobuki-core
# Turtlebot support for Gazebo
sudo apt-get install ros-kinetic-kobuki-gazebo ros-kinetic-kobuki-gazebo-plugins
# now clone all the packages in my github repository
git clone https://github.com/lxsang/ROS-packages
# replace the empty src directory with the cloned repository
rm -r src
mv ROS-packages src
# ignore this package for now
touch src/pharos_probabilistic_merging/CATKIN_IGNORE
# generate dependent messages
catkin_make multi_master_bridge_generate_messages
# now build all of our packages
catkin_make
# source our workspace
echo "source ~/rosws/devel/setup.sh" >> ~/.bashrc
source ~/.bashrc


Note that only some of the packages are needed for this tutorial, but for simplicity we just build everything; it costs nothing except time :). If you follow these steps carefully, the build will finish successfully. Now it's time to move on to the simulation section to find out how our exploration stack works.

## Simulating single robot autonomous exploration with Gazebo

Gazebo is a 3D simulation suite integrated with ROS, which offers the ability to accurately and efficiently simulate populations of robots in complex indoor and outdoor environments. The simulator supports various kinds of robot and sensor models. For more information, refer to this link: http://gazebosim.org/tutorials

Basically, ROS-based autonomous exploration of an unknown environment by a single robot involves two main processes: autonomous navigation and mapping. Autonomous navigation relies on the map currently built by the mapping process - which uses a SLAM algorithm to create a map of the environment - to determine and identify the next goal, then plans an optimal path to that goal. The robot follows that path and extends the known area until the complete environment is explored. Autonomous navigation consists of two main phases: (1) goal allocation, and (2) path planning and navigation. Frontier-based allocation is the most referenced method for goal allocation: a set of frontiers is extracted from the current map, and the allocator selects one frontier as the next goal based on an allocation strategy (e.g. random, nearest, etc.). In the path planning and navigation phase, the robot follows an optimal global path that is planned based on the selected goal and the current map. In ROS, this is handled by default by the move_base (http://wiki.ros.org/move_base) package with its local and global planners. However, in our experiments, we found that the default local planner is not suitable for navigating through small spaces such as doorways. We therefore implemented a dedicated local planner for move_base, called adaptive_local_planner, based on the Potential Field method, to resolve this problem. So in this tutorial, we use adaptive_local_planner as the local planner.
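To make the frontier idea concrete, here is a minimal Python sketch of frontier detection on a ROS-style occupancy grid (0 = free, 100 = occupied, -1 = unknown). It is only an illustration of the concept, not the actual code of our frontier detection node:

```python
# Frontier detection sketch: a frontier cell is a free cell adjacent to
# an unknown cell. Illustrative only, not the frontier_detector node.
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def find_frontier_cells(grid):
    """Return the (row, col) of every free cell adjacent to an unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols
                   and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

# Tiny map: left side explored and free, right column still unknown
grid = [
    [0,   0, -1],
    [0,   0, -1],
    [0, 100, -1],
]
print(find_frontier_cells(grid))  # → [(0, 1), (1, 1)]
```

In practice, the detected cells are then grouped into connected frontier regions and small regions are filtered out (compare the min_frontier_size_px parameter used later in the launch file).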

For simplicity, our exploration stack will look something like this:

As shown in this figure, we use two nodes, frontier_detector and frontier_allocator, for goal allocation, while path planning and navigation are handled by the move_base node with our adaptive local planner integrated. The mapping process is taken care of by a SLAM node; here we use Karto as our SLAM algorithm, since we found that it is really stable both in simulation and in real-world experiments.

The simulation stack can be set up using a single launch file found in the gazebo_sim package; the launch file we use in this documentation is located at src/gazebo_sim/launch/simulation_karto.launch. For this simulation, we will use a Turtlebot model with the Hokuyo URG-04LX-UG01 laser model to map the environment. Here is the explanation:

First we initialize the simulation environment with some global arguments, then spawn a Turtlebot model in the Gazebo environment (line 4), and publish the robot state (line 5) and joints (line 8) to the ROS network:

<arg name="world" />
<param name="/use_sim_time" value="true"/>
<param name="robot_description" command="cat $(find gazebo_sim)/launch/includes/turtlebot.urdf"/>
<node name="spawn_urdf" pkg="gazebo_ros" type="spawn_model" args="-file $(find gazebo_sim)/launch/includes/turtlebot.urdf -urdf -z 1 -model turtlebot" />
<node pkg="robot_state_publisher" type="robot_state_publisher" name="robot_state_publisher">
<param name="publish_frequency" type="double" value="30.0" />
</node>
<node name="joint_state_publisher" pkg="joint_state_publisher" type="joint_state_publisher" ></node>


Next we load a 3D model of the environment specified by the world argument (provided on the command line) into Gazebo:

<include file="$(find gazebo_ros)/launch/empty_world.launch">
  <arg name="world_name" value="$(find gazebo_sim)/launch/includes/worlds/$(arg world).world"/>
  <!-- The world_name is with respect to GAZEBO_RESOURCE_PATH environmental variable -->
  <arg name="paused" value="false"/>
  <arg name="use_sim_time" value="true"/>
  <arg name="gui" value="true"/>
  <arg name="headless" value="false"/>
  <arg name="recording" value="false"/>
  <arg name="debug" value="false"/>
</include>


Then we run the Karto SLAM algorithm to create a map from the laser sensor data:

<node pkg="slam_karto" type="slam_karto" name="slam_karto" output="screen">
  <param name="odom_frame" value="odom"/>
  <param name="map_update_interval" value="2"/>
  <param name="resolution" value="0.05"/>
  <rosparam command="load" file="$(find slam_karto)/config/mapper_params.yaml" />
</node>


Then we launch the navigation stack using move_base:

<include file="$(find gazebo_sim)/launch/includes/move_base.launch.xml" />
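For intuition about what our Potential Field based local planner does, here is a toy Python sketch of the classic potential-field computation it builds on: an attractive force pulls the robot toward the goal, while every nearby obstacle pushes it away. The gains and the exact formulation below are illustrative assumptions, not the real parameters of adaptive_local_planner:

```python
import math

def potential_field_force(robot, goal, obstacles,
                          k_att=1.0, k_rep=0.5, d0=1.0):
    """Resultant 2D force: attraction toward the goal plus repulsion from
    every obstacle closer than the influence distance d0.
    (Toy gains, for illustration only.)"""
    # attractive component, proportional to the distance to the goal
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            # repulsion grows quickly as the robot approaches the obstacle
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

# Robot at the origin, goal ahead on the x axis, one obstacle just above:
fx, fy = potential_field_force((0.0, 0.0), (2.0, 0.0), [(0.0, 0.5)])
print(fx, fy)  # pushed toward the goal (+x) and away from the obstacle (-y)
```

The robot then steers along the resultant force at each control step; obstacles outside the influence distance d0 contribute nothing.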


And finally we run the nodes necessary for the goal allocation:

<node pkg="costmap_2d" type="costmap_2d_node" name="costmap_for_exploration_global" respawn="true" >
  <rosparam ns="costmap" subst_value="true">
    global_frame: /map
    update_frequency: 1.0
    publish_frequency: 1.0
    rolling_window: false
    always_send_full_costmap: true
    # Map management parameters
    unknown_cost_value: 255

    plugins:
      - {name: static, type: "costmap_2d::StaticLayer"}
      - {name: inflation, type: "costmap_2d::InflationLayer"}
    static:
      # Can pull data from gmapping, map_server or a non-rolling costmap
      map_topic: /map
    inflation:
  </rosparam>
</node>

<node pkg="frontier_allocation" type="frontier_detection" name="frontier_detection" output="screen">
  <param name="map" value="/costmap_for_exploration_global/costmap/costmap"/>
  <param name="min_frontier_size_px" value="10"/>
</node>

<node pkg="frontier_allocation" type="frontier_allocation" name="frontier_allocation" output="screen">
  <param name="goal_tolerance" value="0.5"/>
  <param name="frontier_tolerance" value="0.05"/>
  <param name="random_frontier" value="false"/>
  <param name="cmd_vel_topic" value="/mobile_base/commands/velocity"/>
</node>
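To illustrate what the allocator does when random_frontier is set to false, here is a Python sketch of a nearest-frontier strategy using straight-line distance to frontier centroids. This is an assumption for illustration; the real frontier_allocation node may rank frontiers differently (e.g. by planned path cost):

```python
import math
import random

def allocate_frontier(robot_pos, frontiers, random_frontier=False, seed=None):
    """Pick the next exploration goal among the frontier centroids:
    the nearest one by Euclidean distance, or a random one when
    random_frontier is True. Illustrative sketch only."""
    if not frontiers:
        return None  # no frontiers left: exploration is complete
    if random_frontier:
        return random.Random(seed).choice(frontiers)
    return min(frontiers,
               key=lambda f: math.hypot(f[0] - robot_pos[0],
                                        f[1] - robot_pos[1]))

robot = (0.0, 0.0)
frontiers = [(4.0, 1.0), (1.5, 2.0), (-3.0, 0.5)]
print(allocate_frontier(robot, frontiers))  # → (1.5, 2.0), the closest centroid
```

When all frontiers have been visited or filtered out, no goal is returned and the exploration terminates.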


### Run the simulation

To run the simulation, the first thing we need to do is to copy the 3D models of the environments to the GAZEBO_RESOURCE_PATH so that Gazebo can recognize them when we launch the previous launch file. Execute the following commands:

# create the model folder if it does not exist
mkdir -p ~/.gazebo/models
cp src/gazebo_sim/launch/includes/worlds/models/* ~/.gazebo/models/


Now run the exploration stack using roslaunch:

# here we use the model of IMT-Lille Douai DIA department as simulation environment
roslaunch gazebo_sim simulation_karto.launch world:=dia


If everything is correct, you should see the robot begin to scout the environment in the Gazebo window, while the built map is shown in the Rviz window:

## Real-world experiment with a real Turtlebot

Théo Mugnier, an engineering student at IMT Lille - Douai, has packaged a Docker container containing all the necessary ROS components for using a real Turtlebot to map an indoor environment. The tutorial can be found here: https://github.com/CARMinesDouai/PhaROS2/tree/master/ROS-Mapping