See the difference? As we see here, fewer lines and less information can actually be a good thing, and masking will definitely help Vector “focus” on the parts of the image that matter.

Now that we have a practical understanding of why we need Canny edge detection, we can go ahead and add it to our original code,

Centering Vector With The Hough Transform

Now we can move into the last part of our pipeline before we start driving: the Hough Transform. Here’s the source code,

At the core of the Hough Transform is the idea that we can convert our image into Hough space, which makes it much easier to identify straight lines. I plan on doing a post on the Hough Transform in the future where I implement it from scratch, so stay tuned for that. For now, I’ll leave you with the OpenCV explanation, which is phenomenal.

Finally, we need to use our HoughLines to center ourselves on the line,

The above code is basic, and if I’d had more time I’m certain I could have come up with something better. All it does is pick the first edge, which is the left-most edge, and check whether that edge is centered. If it isn’t, Vector moves to center that edge, and vice versa for the second edge. Note, however, that the code above works under the assumption that the two lines detected are the edges of the line we’re following. Of course, this isn’t always the case, and again this is something I’m going to come back and fix soon, likely with a post on tuning and perfecting what we’ve done in this post.
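The logic just described might look something like this sketch. Here `robot` is assumed to be an `anki_vector` robot connection (its `motors.set_wheel_motors` call takes left and right wheel speeds), while `FRAME_WIDTH`, `TOLERANCE`, and the wheel speeds are made-up placeholder values:

```python
# Sketch of the centering logic described above; speeds and tolerance
# are made-up placeholders, not tuned values.
FRAME_WIDTH = 640   # width of the camera image in pixels (assumed)
TOLERANCE = 20      # horizontal drift (pixels) we still treat as "centered"

def steer_toward_line(robot, lines):
    if lines is None:
        robot.motors.set_wheel_motors(0, 0)  # no edges found: stop
        return
    # Pick the left-most detected edge, as described above
    x1, y1, x2, y2 = min((seg[0] for seg in lines), key=lambda s: s[0])
    offset = x1 - FRAME_WIDTH // 2
    if offset < -TOLERANCE:
        robot.motors.set_wheel_motors(50, 100)   # edge left of center: veer left
    elif offset > TOLERANCE:
        robot.motors.set_wheel_motors(100, 50)   # edge right of center: veer right
    else:
        robot.motors.set_wheel_motors(100, 100)  # roughly centered: go straight
```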

Testing

Let’s test out our code. Here’s a video of how my Vector did,

I realize the video’s sideways, but it was late at night, my bad 😋. Besides that, there’s plenty of room for improvement, and the time constraint shows. However, the next step is to do some tuning on our Hough parameters along with the masking and Canny parameters. Unfortunately, I ran out of time but this is a chance for you! How well can you optimize our pipeline? Feel free to let me know below 😊

Recap And Full Source Code

Let’s recap what we’ve done,

1. Take a picture with Vector’s camera
2. Turn the image into grayscale
3. Mask that image so that we only see the “important parts”
4. Run Canny edge detection on the image
5. Run the Hough Line Transform to give us the edges of our line
6. Steer using our list of possible edges

And that’s it! Whew! That wasn’t too bad for a first rough pass. Soon, I’ll be expanding on this post by tuning our parameters and adding corner detection to our pipeline, which will let us make sharp turns and traverse any closed path imaginable. I’ll also add lane detection so that Vector can drive around the same way autonomous cars do. See you then!

The Full Source Code

Github Repo

Questions? You can find me at www.cggonzalez.com