Robot navigation with TI 3D-TOF sensor


This blog post describes autonomous navigation of a robotic lawn mower using a 3D time-of-flight (3D-TOF) sensor: specifically, a Husqvarna Automower 430x (HRP) equipped with a Texas Instruments OPT8241-CDK-EVM. A Raspberry Pi 3 board running Ubuntu 16.04 and ROS Kinetic served as the embedded computing platform.

HRP – the robot

Husqvarna provided an Automower 430x for this project. The robotic mower normally follows paths bounded by ground wires buried in the soil. To let an external computer operate the HRP, the mower must be placed in a special override mode. The mower’s public specifications are shown below; the most important items are the mechanical data needed for the sensor transform discussed later.

Specifications for the Husqvarna Automower 430x.

Preparing the robot

The robot has two actively driven hind wheels, each powered by its own variable-speed DC motor. Differential motion turns the robot clockwise or counterclockwise; synchronized motion drives it forward or backward. The robot has an outer shell that rests on four joystick-like switches; on any physical contact, the shell tilts these joysticks, causing the robot to stop and back away. The robot body is well sealed and covered by the outer shell; the interior cavity is accessed by removing a number of hex screws, which exposes the internal components.

The robot is powered by up to two 4000 mAh Li-ion battery packs that sit between the hind wheels. The main electronics board sits vertically against the back of the chassis, with power connectors running out to the motors and the battery pack. The board also has a USB serial port that enables control by an external computer, plus four open UART connectors.

To support autonomous operation and the 3D time-of-flight sensor, I found suitable open spaces within the body to add a Raspberry Pi 3 and two 3 A @ 5 V DC/DC converters, each drawing power from the battery pack through a flip switch; one powers the Raspberry Pi, the other powers the TI 3D-TOF sensor. The Raspberry Pi 3 was selected because it has adequate CPU horsepower, supports Ubuntu 16.04, and offers numerous connectivity options, including Wi-Fi. The robot can be controlled by an external computer through the built-in USB serial port or any of the four UART ports on its mainboard. To let the Raspberry Pi control the robot, an FTDI USB serial cable connects the Raspberry Pi to the mainboard. The TI 3D-TOF sensor is also connected to the Raspberry Pi through USB for machine vision. With the hardware prepared and the Raspberry Pi 3 serving as the main computer on the robot, the next step was to install ROS.


Figure 1 – Instrumenting HRP for autonomous operation.

ROS overview

ROS stands for Robot Operating System. It contains a large set of packages used in robotics, covering machine vision, navigation, grasping, and manipulation. It supports distributed processing under a publisher-subscriber model. Each ROS system has a master node (roscore) that functions as a system-wide coordinator, plus multiple nodes, each a separate running program that publishes messages or subscribes to messages.
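
As a tiny illustration of the publish side of this model (standard rospy boilerplate, not project code), the sketch below publishes a String message once per second; any other node can subscribe to the chatter topic:

     #!/usr/bin/env python
     # Minimal publisher: one node publishing String messages on a "chatter" topic.
     import rospy
     from std_msgs.msg import String

     rospy.init_node('talker')
     pub = rospy.Publisher('chatter', String, queue_size=10)
     rate = rospy.Rate(1)                      # 1 Hz
     while not rospy.is_shutdown():
         pub.publish(String(data='hello'))     # delivered to every subscriber of "chatter"
         rate.sleep()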

To set up ROS networking on Ubuntu, the following environment variables should be defined:

     export ROS_IP=10.0.0.40
     export ROS_MASTER_URI=http://10.0.0.116:11311

In the above, 10.0.0.40 is the IP address of the local machine, and 10.0.0.116 is the IP address of the machine where roscore runs, on port 11311.
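
On the machine that runs roscore itself, both variables point at that machine's own IP address. These lines are typically made persistent by adding them to ~/.bashrc together with sourcing the ROS setup script; a minimal sketch, assuming a standard ROS Kinetic installation, is:

     source /opt/ros/kinetic/setup.bash
     export ROS_IP=10.0.0.116
     export ROS_MASTER_URI=http://10.0.0.116:11311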

ROS navigation stack

For this project, the most pertinent package is the ROS navigation stack. The navigation stack, move_base, and the map generation package, gmapping, are heavily leveraged here. The navigation stack plans the required wheel movements when given a map, a goal location, odometry data, and 3D line-scanner or point-cloud sensor data. The stack, move_base, shown in Figure 2, requires the presence of user-supplied nodes for it to work; the ones highlighted in blue are to be implemented by users of the navigation stack.


Figure 2: ROS navigation stack

The sensor sources node is responsible for publishing sensor_msgs/PointCloud or sensor_msgs/LaserScan messages. But since I planned to use gmapping for SLAM, and gmapping only subscribes to sensor_msgs/LaserScan, I had to masquerade the 3D-TOF sensor’s point cloud data as line scan data conforming to sensor_msgs/LaserScan. I did this by sampling the TOF sensor’s depth map, taking a horizontal “slice” consisting of a number of rows, finding the closest distance in each column of the slice, and reporting that distance as the value for that column. The result is a horizontal line scan. For example, if a slice is taken from row 100 to row 140, with row 120 being the horizontal center line of a 320×240 depth image, then there are 320 columns, each 40 pixels tall, and the shortest distance among a column’s 40 pixels represents that column. This yields a line scan of 320 values, which is then published as a sensor_msgs/LaserScan message.
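
Below is a minimal sketch of this slicing approach, assuming the depth map arrives as a 240×320 NumPy array in meters and the result is published on a scan topic in the sensor frame; the field of view, range limits, and topic names are illustrative assumptions, not the sensor’s actual specification or the project’s actual code.

     #!/usr/bin/env python
     # Sketch: collapse a horizontal slice of a 320x240 depth image into a LaserScan.
     import numpy as np
     import rospy
     from sensor_msgs.msg import LaserScan

     HFOV = 1.30                  # assumed horizontal field of view [rad]
     ROW_LO, ROW_HI = 100, 140    # the 40-row "slice" around the image center line

     def depth_slice_to_scan(depth_m, stamp):
         """depth_m: 240x320 array of range values in meters (0 = invalid pixel)."""
         band = depth_m[ROW_LO:ROW_HI, :].astype(np.float32)
         band[band <= 0.0] = np.inf          # ignore invalid pixels
         ranges = band.min(axis=0)           # closest return in each of the 320 columns
         # Depending on the camera convention, the column order may need to be reversed
         # so that angles increase counterclockwise.
         scan = LaserScan()
         scan.header.stamp = stamp
         scan.header.frame_id = 'sensor'
         scan.angle_min = -HFOV / 2.0
         scan.angle_max = HFOV / 2.0
         scan.angle_increment = HFOV / (ranges.size - 1)
         scan.range_min = 0.1
         scan.range_max = 6.0
         scan.ranges = ranges.tolist()
         return scan

     if __name__ == '__main__':
         rospy.init_node('tof_to_scan')
         pub = rospy.Publisher('scan', LaserScan, queue_size=1)
         # For each depth frame from the TOF driver:
         #     pub.publish(depth_slice_to_scan(depth, rospy.Time.now()))
         rospy.spin()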

Sensor sources and sensor transformation

Now, the 3D-TOF data points are Cartesian coordinates relative to the sensor frame (sensor), whose origin is at the image center. For navigation, these data points may need to be expressed relative to the robot base frame (base_link), so the base_link→sensor transform needs to be specified. This is the purpose of the sensor transforms node in Figure 2. Instead of creating a custom node, one can simply use static_transform_publisher. The sensor’s pose relative to the base usually does not change, which means the transform only needs to be published once. Such one-shot topics or messages are called latched topics, or latched messages.
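
For example, the transform can be published from the command line or a launch file; the offsets below are placeholders, not the HRP’s actual mounting dimensions:

     rosrun tf static_transform_publisher 0.3 0.0 0.2 0 0 0 base_link sensor 100

Here 0.3 0.0 0.2 are the x/y/z offsets in meters, the three zeros are yaw/pitch/roll in radians, and 100 is the republish period in milliseconds.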

Odometry source and base controller

The odometry source node is responsible for publishing nav_msgs/Odometry messages, as well as for publishing the robot’s new pose relative to the map’s origin through geometry_msgs/TransformStamped. ROS provides a C++ example for an odometry source. The base controller drives the robot based on the incoming cmd_vel topic, which carries a geometry_msgs/Twist message. It turned out that the functionality of both the odometry source and the base controller is realized by the am_driver_safe node in the HRP ROS package provided by Husqvarna Research, and only minor modification is required. Let’s take a closer look at the HRP ROS package.

HRP ROS package

The HRP ROS package comes with am_driver_safe_node, which is pretty much ready to integrate with the ROS navigation stack. Among the ROS topics the node supports, it subscribes to cmd_vel and publishes odom and pose.

Notice from Figure 2 that the cmd_vel topic fulfills the base controller functionality, while the odom and pose topics fulfill the odometry source functionality. However, pose is a geometry_msgs/PoseStamped message, while the navigation stack requires a geometry_msgs/TransformStamped. These messages carry similar data but have different formats, so a minor modification is required, as sketched below.
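
A minimal sketch of that modification, assuming the pose arrives on a pose topic and should be rebroadcast as the odom→base_link transform (the topic and frame names are assumptions for illustration):

     #!/usr/bin/env python
     # Sketch: republish a geometry_msgs/PoseStamped as a tf transform.
     import rospy
     import tf
     from geometry_msgs.msg import PoseStamped

     def pose_cb(msg):
         p = msg.pose.position
         q = msg.pose.orientation
         # Same data, different container: PoseStamped -> tf transform (odom -> base_link).
         br.sendTransform((p.x, p.y, p.z),
                          (q.x, q.y, q.z, q.w),
                          msg.header.stamp,
                          'base_link',    # child frame
                          'odom')         # parent frame

     if __name__ == '__main__':
         rospy.init_node('pose_to_tf')
         br = tf.TransformBroadcaster()
         rospy.Subscriber('pose', PoseStamped, pose_cb)
         rospy.spin()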

Transform chain

ROS uses tf to specify the kinematic chain that ties together the different reference frames in the system. The chain need not be specified by one ROS node; rather, each link in the chain can be published by a different node. For HRP, the full tf chain looks like this: map → odom → base_link → sensor, where map is an inertial reference frame tied to the map, and map → odom defines the initial pose of the robot within the map. The partial transform odom → base_link represents the robot’s travel relative to that initial pose, and base_link → sensor defines where the sensor is mounted on the robot; it is generally a fixed transform, so it only needs to be published once. The full chain, map → base_link, represents the robot’s pose within the map. RViz is a nice tool that helps visualize tf. The base_link frame on the HRP Automower 430x follows the ROS convention (REP-103) and is attached midway between the hind wheels on the ground plane, with the x-axis pointing forward, the y-axis pointing to the left, and the z-axis pointing up. For the map frame, the z-axis points upward according to ROS convention (REP-105).
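
Besides RViz, the tf command-line tools are handy for checking that the chain is complete, for example:

     rosrun tf view_frames            # generates a PDF of the current tf tree
     rosrun tf tf_echo map base_link  # prints the robot pose within the map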

ROS system architecture

Below is a visual representation of the entire ROS system, with the HRP robot on the left and the ground control station on the right. Besides acting as the ROS master, the Raspberry Pi acts as a wireless access point, broadcasting its own SSID and allowing a remote computer to connect directly to it over a dedicated, private local area network (LAN), a critical step for operating the system outdoors. The VNC client-server setup lets the ground station log into the Raspberry Pi, run a remote desktop, and launch the ROS runtime environment. Once that is started, the visualization tool RViz and the keyboard-driven teleoperation console, hrp_teleop.py, are launched on the ground control side. Now the HRP robot can be teleoperated.


Figure 3 – ROS system architecture.

Setup VNC

VNC is required to access the Raspberry Pi from the ground control station. To start the VNC server on display number 1, create the script below and run it:

     #!/bin/sh
     vncserver :1 -geometry 1280x768 -depth 24 -dpi 96

On the remote Ubuntu ground control station, launch the VNC client this way:

     vncviewer 10.0.0.40:1

Replace 10.0.0.40 with your actual VNC server address, and :1 with the actual display number. Once launched, the ground control station has a remote desktop session on the Raspberry Pi from which the ROS runtime environment can be started.

Teleoperation with 3D sensor active

As an interim step, I wanted to see how well the TI 3D-TOF sensor works in sunlight, so I took it outdoors, running it in my front yard, backyard, and garage. Below is a side-by-side video showing the actual scene and a real-time 3D rendering of the robot and the sensor data in RViz.

From the video, one can see that the TI 3D-TOF sensor can generally see obstacles in outdoor sunlight, but the range is shorter than indoors. For TOF sensors, pixel saturation is probably the greatest challenge for outdoor operation. To overcome this challenge, exposure time must be reduced; at the same time, illumination power must increase to improve the signal-to-noise ratio. Another enhancement is to operate the TI 3D-TOF sensor at 940 nm, where sunlight is weaker. As pixels operate near saturation, the amplitude will actually decrease, because sunlight biases the DC level of the AC modulation, such that the AC amplitude has to shrink to fit under the signal rails and avoid saturation. A detailed explanation of TI 3D-TOF pixel operation is beyond the scope of this blog post; interested readers should watch this video tutorial to gain a deeper understanding. For indoor operation, as demonstrated by the video, the TI 3D-TOF sensor can see the inside of a 12 ft × 12 ft garage in 3D pretty well. That means that, as is, the TI 3D-TOF sensor can be used for simultaneous localization and mapping (SLAM). Note that in the Figure 3 ROS system architecture, the navigation stack is drawn with dotted lines; as of today, the navigation stack has not been integrated into the system, but all the necessary user-provided nodes are now available. The next logical step is to integrate gmapping, the de facto standard for SLAM, into the system.

SLAM with gmapping

gmapping is perhaps the most popular SLAM algorithm; it combines odometry and laser scan inputs to build and update a map while localizing the robot within it. After ROS is installed, gmapping can be installed on Ubuntu 16.04 with:

     sudo apt-get install ros-kinetic-slam-gmapping
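
Once installed, the slam_gmapping node can be run against the masqueraded laser scan; a minimal sketch, assuming the line scan from the TOF sensor is published on the scan topic and the frames are named as in the transform chain above:

     rosrun gmapping slam_gmapping scan:=scan _odom_frame:=odom _base_frame:=base_link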

The ROS API is as follows:

Subscribe:

  • tf (tf/tfMessage) : Transforms necessary to relate frames for laser, base, and odometry
  • scan (sensor_msgs/LaserScan) : Laser scans from which the map is built

Publish:

  • map (nav_msgs/OccupancyGrid) : The map being built
  • map_metadata (nav_msgs/MapMetaData) : Resolution, size, and origin of the map
  • ~entropy (std_msgs/Float64) : Estimate of the entropy of the distribution over the robot’s pose (a higher value indicates greater uncertainty).

Unfortunately, gmapping is good at building a new map but not so good at building on a previously saved map. If gmapping is somehow interrupted and needs to restart, it starts a whole new map from scratch. That is why the ROS navigation stack tutorial recommends using gmapping to create a static map, then using AMCL to localize the robot within that static map. Another possibility is to record partial maps created by gmapping in a database and spatially piece them together into the complete map, much like a jigsaw puzzle; the mapstitch package seems to do the job. Leveraging stitched maps requires ROS nodes that can periodically save the current map and serve it up when required. This is a job for map_saver and map_server.

Map persistence

map_server is a ROS package for handling map persistence. The map_server node publishes a map read from a file; note that both map_server and gmapping serve up maps. map_server publishes the map (nav_msgs/OccupancyGrid) and its metadata (nav_msgs/MapMetaData).

map_saver subscribes to the map (nav_msgs/OccupancyGrid) published by SLAM nodes such as gmapping and saves it to disk.
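
Typical usage from the command line (the map file name here is just an example):

     rosrun map_server map_saver -f mymap        # save the current map as mymap.yaml / mymap.pgm
     rosrun map_server map_server mymap.yaml     # publish the saved map on the map topic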

Miscellaneous notes for work in progress

The text after this point consists of terse notes for work in progress.

Global and local maps setup

The navigation stack maintains a global costmap and a local costmap, each a 2D map. Here is how to set them up. Common configuration:

     obstacle_range: 2.5
     raytrace_range: 3.0
     footprint: [[x0, y0], [x1, y1], ... [xn, yn]]
     #robot_radius: ir_of_robot
     inflation_radius: 0.55
     observation_sources: laser_scan_sensor point_cloud_sensor
     laser_scan_sensor: {sensor_frame: frame_name, data_type: LaserScan, topic: topic_name, marking: true, clearing: true}
     point_cloud_sensor: {sensor_frame: frame_name, data_type: PointCloud, topic: topic_name, marking: true, clearing: true}

Global map configuration:

     global_costmap:
       global_frame: /map
       robot_base_frame: base_link
       update_frequency: 5.0
       static_map: true

Local map configuration:

     local_costmap:
       global_frame: odom
       robot_base_frame: base_link
       update_frequency: 5.0
       publish_frequency: 2.0
       static_map: false
       rolling_window: true
       width: 6.0
       height: 6.0
       resolution: 0.05

global_planner settings should start with cost_factor = 0.55, neutral_cost = 66, and lethal_cost = 253.
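
As a sketch, assuming the GlobalPlanner plugin is used with move_base (the namespace depends on how the planner is loaded), these values might appear in the parameter file as:

     GlobalPlanner:
       cost_factor: 0.55
       neutral_cost: 66
       lethal_cost: 253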

SLAM development plan

Succinctly, the steps to realize HRP navigation are: