Unmanned Ground Systems Challenge

Overview

Along with three other teammates from the State Key Laboratory of Robotics, I participated in the Unmanned Ground Systems Challenge in October 2016, working specifically on environment map building and on localization in GPS-denied situations.


Fig. 1. Our UGV platform

Environment Map

We built an environment map for unstructured fields with the following steps:


Fig. 2. Environment Map Building Procedure

Laser data was collected from three different lidars: a single-line, a 32-line, and a 64-line sensor. After extrinsic calibration, and after manually filling in certain missing points, the data from all sensors was fused.

Fig. 3. Fuse Laser Data
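The fusion step above amounts to transforming each sensor's cloud into a common vehicle frame using its calibrated extrinsics and then concatenating. A minimal sketch (the function names and numpy representation are my own illustration, not the code we used on the vehicle):

```python
import numpy as np

def transform_cloud(points, R, t):
    """Apply a rigid extrinsic transform (rotation R, translation t)
    to an (N, 3) point cloud, mapping sensor frame -> vehicle frame."""
    return points @ R.T + t

def fuse_clouds(clouds, extrinsics):
    """Fuse point clouds from several lidars (e.g. single-line, 32-line,
    64-line) into one cloud expressed in the common vehicle frame."""
    fused = [transform_cloud(pts, R, t)
             for pts, (R, t) in zip(clouds, extrinsics)]
    return np.vstack(fused)
```

In practice the extrinsics come from the calibration step; here they would simply be plugged in as `(R, t)` pairs per sensor.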

The road plane was then fitted with RANSAC, segmenting the cloud into ground points and obstacle points.


Fig. 4. Point Clouds Segmentation
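The RANSAC plane fit works by repeatedly sampling three points, forming a candidate plane, and keeping the candidate that explains the most points. A self-contained sketch of that loop (parameters like `n_iters` and `threshold` are illustrative defaults, not our tuned values):

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, threshold=0.05, seed=None):
    """Fit a plane to an (N, 3) cloud with RANSAC.
    Returns (normal, d) with normal . p + d = 0, plus an inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal = normal / norm
        d = -normal @ sample[0]
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model, best_inliers
```

Points flagged as inliers form the road plane; the rest are treated as obstacle candidates.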

The grid map was built with the help of the grid_map ROS package. Finally, the road target was extracted by applying road skeleton extraction to the image generated from the environment map.


Fig. 5. Road Target Extraction
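The grid-map stage is essentially a rasterization of the obstacle points onto a fixed-resolution 2D grid, which then serves as the image for skeleton extraction. A toy numpy version of that rasterization (the real system used the C++ grid_map package; resolution and size here are arbitrary):

```python
import numpy as np

def build_grid_map(points, resolution=0.2, size=(100, 100)):
    """Rasterize obstacle points into a binary 2D occupancy grid.
    `points` is (N, 3); only x, y are used. Cells outside the grid
    bounds are silently dropped."""
    grid = np.zeros(size, dtype=np.uint8)
    ij = np.floor(points[:, :2] / resolution).astype(int)
    in_bounds = ((ij >= 0).all(axis=1)
                 & (ij[:, 0] < size[0]) & (ij[:, 1] < size[1]))
    grid[ij[in_bounds, 0], ij[in_bounds, 1]] = 1
    return grid
```

Skeletonizing the free-space (road) region of such a grid yields the centerline that was used as the road target.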

Thus, we obtained all the perception information needed to drive a car intelligently in a field environment.
Obstacle Avoidance Demo

Localization without GPS

Two methods were implemented to meet this requirement.

1. Visual Inertial Odometry

I did this by combining ORB-SLAM with an IMU.

Fig. 6. Visual Inertial Odometry
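In a loosely coupled scheme like this, the IMU propagates the pose between camera frames and each ORB-SLAM pose fix corrects the drift. The sketch below shows only the position part with a simple blending weight `alpha` (a hypothetical tuning constant; a real system would use a filter such as an EKF):

```python
import numpy as np

def fuse_vio(visual_positions, imu_deltas, alpha=0.8):
    """Loosely coupled VIO sketch: predict position with IMU-derived
    displacement deltas, then blend each prediction with the next
    ORB-SLAM position fix. Returns the fused (T, 3) trajectory."""
    est = np.asarray(visual_positions[0], dtype=float).copy()
    track = [est.copy()]
    for fix, delta in zip(visual_positions[1:], imu_deltas):
        pred = est + delta                        # IMU dead-reckoning step
        est = alpha * fix + (1 - alpha) * pred    # correct with visual fix
        track.append(est.copy())
    return np.array(track)
```

The IMU keeps the estimate alive through brief visual dropouts, while the visual fixes bound the IMU's integration drift.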


2. Laser Odometry

Yu Zhou
Associate Scientist @Temasek Laboratories

My research interests lie in 3D visual perception, navigation, and applied machine learning.
