Today’s post is a follow-up to a previous (extremely popular) article on detecting barcodes in images using Python and OpenCV.

In the previous post we explored how to detect and find barcodes in images. But today we are going to refactor the code to detect barcodes in video.

As an example, check out the screenshot below (captured using a video stream from my webcam) of me holding up the back cover of Modern Warfare 3 and the barcode being detected successfully.

Note: Big thanks to Jason who commented on the original post and mentioned that it would be really cool to see barcode detection applied to video. Thanks for the suggestion! And you’re 100% right, it is really cool to see barcode detection applied to video.

For example, let’s pretend that we are working at GameStop on the 26th of December. There is a line of kids ten blocks long outside our store — all of them wanting to return or exchange a game (obviously, their parents or relatives didn’t make the correct purchase).

To speed up the exchange process, we are hired to wade out into the sea of teenagers and start the return/exchange process by scanning barcodes. But we have a problem: the barcode scanners at the register are wired to the register computers, and the cords won’t reach far enough into the 10-block-long line!

However, we have a plan. We’ll just use our smartphones instead!

Using our trusty iPhones (or Androids), we open up our camera app, set it to video mode, and head into the abyss.

Whenever we hold a video game case with a barcode in front of our camera, our app will detect it, and then relay it back to the register.

Sound too good to be true?

Well, maybe it is. After all, you can accomplish this exact same task using a laser barcode reader and a wireless connection. And as we’ll see later in this post, our approach only works under certain conditions.

But I still think this is a good tutorial on how to utilize OpenCV and Python to detect barcodes in video. More importantly, it shows you how you can glue OpenCV functions together to build a real-world application.

Anyway, continue reading to learn how to detect barcodes in video using OpenCV and Python!


Real-time barcode detection in video with Python and OpenCV

So here’s the game plan. Our system for detecting barcodes in video can be broken into two components:

Component #1: A module that handles detecting barcodes in images (or in this case, frames of a video). Luckily, we already have this from the previous post. We’ll just clean the code up a bit and reformat it for our purposes.

Component #2: A driver program that obtains access to a video feed and runs the barcode detection module.

We’ll go ahead and start with the first component, a module to detect barcodes in single frames of a video.

Component 1: Barcode detection in frames of a video

I’m not going to do a complete and exhaustive code review of this component; that was handled in my previous post on barcode detection in images.

However, I will provide a quick review for the sake of completeness (and cover a few minor updates). Open up a new file, name it simple_barcode_detection.py, and let’s get coding:

```python
# import the necessary packages
import numpy as np
import cv2
import imutils

def detect(image):
    # convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # compute the Scharr gradient magnitude representation of the image
    # in both the x and y direction
    ddepth = cv2.cv.CV_32F if imutils.is_cv2() else cv2.CV_32F
    gradX = cv2.Sobel(gray, ddepth=ddepth, dx=1, dy=0, ksize=-1)
    gradY = cv2.Sobel(gray, ddepth=ddepth, dx=0, dy=1, ksize=-1)

    # subtract the y-gradient from the x-gradient
    gradient = cv2.subtract(gradX, gradY)
    gradient = cv2.convertScaleAbs(gradient)

    # blur and threshold the image
    blurred = cv2.blur(gradient, (9, 9))
    (_, thresh) = cv2.threshold(blurred, 225, 255, cv2.THRESH_BINARY)

    # construct a closing kernel and apply it to the thresholded image
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

    # perform a series of erosions and dilations
    closed = cv2.erode(closed, None, iterations=4)
    closed = cv2.dilate(closed, None, iterations=4)

    # find the contours in the thresholded image
    cnts = cv2.findContours(closed.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)

    # if no contours were found, return None
    if len(cnts) == 0:
        return None

    # otherwise, sort the contours by area and compute the rotated
    # bounding box of the largest contour
    c = sorted(cnts, key=cv2.contourArea, reverse=True)[0]
    rect = cv2.minAreaRect(c)
    box = cv2.cv.BoxPoints(rect) if imutils.is_cv2() else cv2.boxPoints(rect)
    box = np.int0(box)

    # return the bounding box of the barcode
    return box
```

If you read the previous post on barcode detection in images then this code should look extremely familiar.

The first thing we’ll do is import the packages we’ll need — NumPy for numeric processing, cv2 for our OpenCV bindings, and imutils for convenience functions that smooth over OpenCV version differences.

From there we define our detect function on Line 6. This function takes a single argument, the image (or frame of a video) that we want to detect a barcode in.

Line 8 converts our image to grayscale, while Lines 12-18 find regions of the image that have high horizontal gradients and low vertical gradients (again, if you would like more detail on this part of the code, refer to the previous post on barcode detection).

We then blur and threshold the image on Lines 21 and 22 so we can apply morphological operations to the image on Lines 25-30. These morphological operations are used to reveal the rectangular region of the barcode and ignore the rest of the contents of the image.

Now that we know the rectangular region of the barcode, we find its contour (or simply, its “outline”) on Lines 33-35.

If no outline can be found, then we make the assumption that there is no barcode in the image (Lines 38 and 39).

However, if we do find contours in the image, then we sort the contours by their area on Line 43 (where the contours with the largest area appear at the front of the list). Again, we are making the assumption that the contour with the largest area is the barcoded region of the frame.

Finally, we take the contour and compute its bounding box (Lines 44-46). This will give us the (x, y) coordinates of the barcoded region, which is returned to the calling function on Line 49.

Now that our simple barcode detector is finished, let’s move on to Component #2, the driver that glues everything together.

Component #2: Accessing our camera to detect barcodes in video

Let’s move on to building the driver to detect barcodes in video. Open up a new file, name it detect_barcode.py, and let’s create the second component:

```python
# import the necessary packages
from pyimagesearch import simple_barcode_detection
from imutils.video import VideoStream
import argparse
import time
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the (optional) video file")
args = vars(ap.parse_args())

# if the video path was not supplied, grab the reference to the
# camera
if not args.get("video", False):
    vs = VideoStream(src=0).start()
    time.sleep(2.0)

# otherwise, load the video
else:
    vs = cv2.VideoCapture(args["video"])
```

Again, we’ll start by importing the packages we need. I’ve placed our simple_barcode_detection module inside the pyimagesearch package for organizational purposes. Then we import argparse for parsing command line arguments and cv2 for our OpenCV bindings.

Lines 9-12 handle parsing our command line arguments. We’ll need a single (optional) switch, --video, which is the path to the video file on disk that contains the barcodes we want to detect.

Note: This switch is useful for running the example videos provided in the source code for this blog post. By omitting this switch you will be able to utilize the webcam of your laptop or desktop.
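You can see how the optional switch behaves by feeding parse_args an explicit argument list (the video path below is just a placeholder):

```python
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the (optional) video file")

# with the switch supplied, args["video"] holds the path
args = vars(ap.parse_args(["--video", "video/example.mov"]))
print(args["video"])  # video/example.mov

# without it, args["video"] is None, which is falsy -- so the driver's
# "if not args.get('video', False)" check falls through to the webcam branch
args = vars(ap.parse_args([]))
print(not args.get("video", False))  # True
```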

Lines 16-22 handle grabbing a reference to our video stream, vs, whether it comes from a webcam (Lines 16-18) or a video file (Lines 21 and 22).

Now that the setup is done, we can move on to applying our actual barcode detection module:

```python
# keep looping over the frames
while True:
    # grab the current frame and then handle if the frame is returned
    # from either the 'VideoCapture' or 'VideoStream' object,
    # respectively
    frame = vs.read()
    frame = frame[1] if args.get("video", False) else frame

    # check to see if we have reached the end of the
    # video
    if frame is None:
        break

    # detect the barcode in the frame
    box = simple_barcode_detection.detect(frame)

    # if a barcode was found, draw a bounding box on the frame
    if box is not None:
        cv2.drawContours(frame, [box], -1, (0, 255, 0), 2)

    # show the frame and record if the user presses a key
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the 'q' key is pressed, stop the loop
    if key == ord("q"):
        break

# if we are not using a video file, stop the camera video stream
if not args.get("video", False):
    vs.stop()

# otherwise, release the video file pointer
else:
    vs.release()

# close all windows
cv2.destroyAllWindows()
```

On Line 25 we start looping over the frames of our video — this loop will continue to run until (1) the video runs out of frames or (2) we press the q key on our keyboard and break from the loop.

We query our video stream on Line 29. A cv2.VideoCapture returns a 2-tuple of (grabbed, frame), while a VideoStream returns the frame directly; Line 30 handles both cases.

If the frame was not successfully grabbed (such as when we reach the end of the video file), we break from the loop on Lines 34 and 35.

Now that we have our frame, we can utilize our barcode detection module to detect a barcode in it — this is handled on Line 38, and the bounding box is returned to us.

We draw our resulting bounding box around the barcoded region on Line 42 and display our frame to our screen on Line 45.

Finally, Lines 46-50 handle breaking from our loop if the q key is pressed on our keyboard while Lines 53-61 cleanup pointers to our video stream object.

So as you can see, there isn’t much to our driver script!

Let’s put this code into action and look at some results.

Successful barcode detections in video

Let’s try some examples. Open up a terminal and issue the following command:

```shell
$ python detect_barcode.py --video video/video_games.mov
```

The video at the top of this post demonstrates the output of our script. And below is a screenshot for each of the three successful barcode detections on the video games:

Let’s see if we can detect barcodes on a clothing coupon:

```shell
$ python detect_barcode.py --video video/coupon.mov
```

Here’s an example screenshot from the video stream:

And the full video of the output:

Of course, as I said, this approach only works under optimal conditions (see the following section for a detailed description of the limitations and drawbacks).

Here is an example of where the barcode detection did not work:

In this case, the barcode is too far away from the camera and there are too many “distractions” and “noise” in the image, such as large blocks of text on the video game case.

This example is also clearly a failure; I just thought it was funny to include:

Again, this simple implementation of barcode detection will not work in all cases. It is not a robust solution, but rather an example of how simple image processing techniques can give surprisingly good results, provided that assumptions in the following section are met.

Limitations and Drawbacks

So as we’ve seen in this blog post, our approach to detecting barcodes in images works well — provided we make some assumptions regarding the videos we are detecting barcodes in.

The first assumption is that we have a static camera that is “looking down” on the barcode at a 90-degree angle. This will ensure that the gradient region of the barcoded image will be found by our simple barcode detector.

The second assumption is that our video has a “close up” of the barcode, meaning that we are holding our smartphones directly overtop of the barcode, and not holding the barcode far away from the lens. The farther we move the barcode away from the camera, the less successful our simple barcode detector will be.

So how do we improve our simple barcode detector?

Great question.

Christoph Oberhofer has provided a great review on how robust barcode detection is done in QuaggaJS. And my friend Dr. Tomasz Malisiewicz has written a fantastic post on how his VMX software can be utilized to train barcode detectors using machine learning. If you’re looking for the next steps, be sure to check out those posts!

Recognizing and decoding barcodes

Today we detected the presence of barcodes. If you’re hoping to actually recognize and decode barcodes, then look no further than the following blog post: An OpenCV barcode and QR code scanner with ZBar. Using a library called ZBar, you’ll be able to decode barcodes into human-readable text very easily.
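As a quick taste, here is a rough sketch of what decoding looks like with the pyzbar bindings to ZBar (the helper name is my own; see the linked post for the full walkthrough):

```python
import numpy as np

try:
    # pyzbar provides the Python bindings to the ZBar library
    from pyzbar import pyzbar
    HAS_ZBAR = True
except ImportError:
    HAS_ZBAR = False

def decode_barcodes(frame):
    # return a (data, type) tuple for every barcode ZBar finds in the frame
    if not HAS_ZBAR:
        return []
    return [(b.data.decode("utf-8"), b.type) for b in pyzbar.decode(frame)]

# a blank grayscale frame contains no barcodes, so nothing is decoded
print(decode_barcodes(np.zeros((100, 100), dtype=np.uint8)))  # []
```

In practice you would call a helper like this on each frame of the video loop above, replacing (or supplementing) our gradient-based detector.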

Summary

In this blog post we built on our previous codebase for detecting barcodes in images and extended it into two components:

A component to detect barcodes in individual frames of a video.

And a “driver” component that accesses the video feed of our camera or video file.

We then applied our simple barcode detector to detect barcodes in video.

However, our approach does make some assumptions:

The first assumption is that we have a static camera view that is “looking down” on the barcode at a 90-degree angle.

And the second assumption is that we have a “close up” view of the barcode without other interfering objects or noise in the view of the frame.

In practice, these assumptions may or may not hold. It all depends on the application you are developing!

At the very least I hope that this article was able to demonstrate to you some of the basics of image processing, and how these image processing techniques can be leveraged to build a simple barcode detector in video using Python and OpenCV.