By Scott Harvey and David Bulnes, Engineering

We are living in a world exploding with sensor data from a growing array of devices in our workplaces, homes, streets, and skies. Mastering how to process this information could help us solve many challenges in robotics, and it is central to Civil Maps’ vision of the future: autonomous driving.

Autonomous cars and other robotic devices need to know exactly where they are located and how to interact with their surroundings, which they “see” and process through a variety of sensors. These sensors collect massive amounts of three-dimensional data, which must then be analyzed and interpreted to enable the complex processes of automated mobility and localization.

How do we make sense of this wealth of sensor data to generate, and localize to, multidimensional semantic maps? Mobilize land robots? Enable autonomous drone flight? At Civil Maps, we are building software designed to convert raw sensor data streams into the kind of meaningful information that’s useful for Level 4 autonomous driving. Before that, our team spent several years solving similar problems in heavy industry. With those experiences under our belt, we’re hoping to share some knowledge and get more conversations started about ways to work with a variety of sensors, both for self-driving cars and other objectives.

If you’re interested in learning and discussion, please join us for a free, online, four-part series that covers everything from getting started with sensor data to more advanced topics, including sensor fusion.

REGISTER HERE

Sessions (50 minutes):

Session One: Thursday, March 30th, 1:30 PM PDT (4:30 PM EDT):

What LiDAR is and how it works

How Civil Maps uses LiDAR

Reading and assessing LiDAR from packet captures

LiDAR relationships to other existing vision sensors

After this session you will be able to understand how LiDAR works, read LiDAR data, and visualize a sample data set.
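To give a flavor of what reading LiDAR data can look like, here is a minimal sketch. It assumes a decoded, CSV-style point cloud with x, y, z, and intensity columns (a stand-in we made up for illustration); real sensors typically emit binary UDP packets that require a vendor-specific parser, which is the kind of detail the session will cover.

```python
import math

# Hypothetical sample: each line is "x,y,z,intensity" with coordinates
# in meters. Real LiDAR packet captures are binary; this stands in for
# an already-decoded point cloud.
SAMPLE = """\
1.0,0.0,0.2,14
0.0,2.0,0.1,80
3.0,4.0,0.0,42
"""

def parse_points(text):
    """Parse comma-separated point records into (x, y, z, intensity) tuples."""
    points = []
    for line in text.strip().splitlines():
        x, y, z, intensity = line.split(",")
        points.append((float(x), float(y), float(z), int(intensity)))
    return points

def point_range(point):
    """Euclidean distance from the sensor origin to a point."""
    x, y, z, _ = point
    return math.sqrt(x * x + y * y + z * z)

points = parse_points(SAMPLE)
ranges = [point_range(p) for p in points]
print(f"{len(points)} points, max range {max(ranges):.2f} m")  # 3 points, max range 5.00 m
```

From here, a plotting library can render the parsed points as a 3D scatter for visual inspection.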

Session Two: Tuesday, May 9th, 1:30 PM PDT (4:30 PM EDT):

GPS and IMU: how they work separately and together

How Civil Maps captures IMU data

IMU coordinate systems

Creating a vehicle trajectory from IMU data

After this session you will understand how GPS and IMU work in the context of capturing vehicle motion and a simple technique for creating a trajectory from a sample set of IMU data.
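One simple (if drift-prone) technique along these lines is dead reckoning: integrate acceleration once to get velocity and again to get position. The sketch below is our own illustration, not the session material; it assumes a 2D trajectory, a fixed sample rate, and accelerations already rotated into the world frame with gravity removed. Real pipelines fuse GPS fixes to bound the drift that pure integration accumulates.

```python
# Hypothetical IMU samples: (ax, ay) world-frame accelerations in m/s^2.
# A real pipeline would rotate body-frame readings into the world frame
# using the IMU's orientation estimate and subtract gravity first.
DT = 0.1  # assumed seconds between samples
ACCEL = [(1.0, 0.0)] * 10 + [(0.0, 0.0)] * 10  # accelerate, then coast

def integrate_trajectory(accels, dt):
    """Dead-reckon a 2D trajectory by integrating acceleration twice."""
    vx = vy = x = y = 0.0
    trajectory = [(x, y)]
    for ax, ay in accels:
        vx += ax * dt   # velocity: integral of acceleration
        vy += ay * dt
        x += vx * dt    # position: integral of velocity
        y += vy * dt
        trajectory.append((x, y))
    return trajectory

path = integrate_trajectory(ACCEL, DT)
print(f"final position: ({path[-1][0]:.2f}, {path[-1][1]:.2f}) m")  # final position: (1.55, 0.00) m
```

Because every sample's noise and bias get integrated twice, position error grows quickly; that is exactly why the later sessions pair IMU data with GPS and maps.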

Session Three (Sensor Fusion Part 1): Thursday, May 25th, 1:30 PM PDT (4:30 PM EDT):

General concepts of sensor fusion

Using 3D semantic maps in sensor fusion

Hardware prerequisites of sensor fusion (spatial and temporal)

Code challenge involving Civil Maps cognition

Session Four (Sensor Fusion Part 2): Thursday, June 29th, 1:30 PM PDT (4:30 PM EDT):

Demonstrate how our HD Semantic Map can be fused in a self-driving car

Illustrate Atlas DevKit installation

Show how the Atlas DevKit works alone and/or with existing sensor suites

We will be sharing open source code from our Atlas DevKit library with all webinar attendees, as well as sample LiDAR and IMU/GPS data captures from our development vehicles.

P.S. We’re hiring at https://www.civilmaps.com/jobs. Don’t see your role there yet? Email jobs@civilmaps.com