The 3 trendiest AI kits in 2017 — A quick guide to the Google Vision Kit, DeepLens & BerryNet

DeepThought · Dec 19, 2017 · 4 min read

The last few weeks have been very exciting for us. Amazon introduced the first deep-learning-enabled video camera — DeepLens. Google announced its latest AIY Project — the Vision Kit. At DT42, we have always believed that bringing deep learning to edge devices is the key to the future. We also believe AI technology should not be dominated by big tech giants, but should be readily available to everyone. That is why we released the BerryNet[1] project half a year ago. BerryNet is the first AI gateway FLOSS project to bring the power of AI to edge devices.

With these three latest AI edge vision power-ups (or should I say toys?), you can build your own project that uses AI to solve a problem in your life. Say you want to build a monkey alert camera to keep monkeys from messing up your backyard and eating all your fruit. What steps do you need to take with the Google AIY Vision Kit, DeepLens, or BerryNet?

Here is a short guide.

Figure 1. Monkey alarm system

Figure 2 gives a brief illustration of the equipment and software you will work with for each tool.

Figure 2. Major components of the Monkey alarm system

The whole monkey alarm system includes five major components:

(a) Data receiver: a camera

(b) Computation hardware: key hardware component for tensor computation

(c) Software system: including the deep learning libraries and the operating system running on local hardware.

(d) AI model: the deep learning model used for analyzing input data

(e) Alarm trigger system: delivers the detection results to users
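Regardless of which kit you choose, the data flows through these five components in the same order. The wiring can be sketched as follows; note that every function name here is a hypothetical stand-in for illustration, not any kit's real API:

```python
# Minimal sketch of the monkey alarm pipeline. All components are stubbed;
# in a real system (a) reads frames from a camera, (b)-(d) run a deep
# learning model on them, and (e) notifies the user.

def capture_frame():
    # (a) Data receiver: pretend we grabbed a frame from the camera.
    return {"pixels": "..."}

def detect_monkey(frame):
    # (b)-(d) Computation hardware + software system + AI model:
    # a real detector returns class labels with confidence scores.
    return [{"label": "monkey", "confidence": 0.91}]

def trigger_alarm(detections, threshold=0.5):
    # (e) Alarm trigger: fire when a monkey is detected confidently enough.
    return any(d["label"] == "monkey" and d["confidence"] >= threshold
               for d in detections)

frame = capture_frame()
detections = detect_monkey(frame)
print("ALARM!" if trigger_alarm(detections) else "all clear")  # prints: ALARM!
```

Each kit below fills in these stubs differently: the Vision Kit runs (d) on the VisionBonnet, DeepLens packages (a)–(c) into one device, and BerryNet lets you mix and match your own hardware.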

Next, we will walk through the steps for each of the three tools.

Google AIY Vision Kit

Figure 3. Components of the system using vision kit

Hardware you need to prepare: Pi Camera v2 (a), Vision Kit (b), and a Raspberry Pi Zero W.

Steps:

1 — Assemble the kit following the instructions on the AIY Project website[2], and load the image (c) to the SD card.

2 — Train a deep learning model as a monkey detector (d) and compile it.

3 — Load the trained model onto the VisionBonnet to build a monkey detector.

4 — Use the SDK to build the alarm trigger (e) and control it via an Android app.

If the object you want to detect is already covered by a model bundled with the image, you can simply skip step 2.
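The alarm trigger in step 4 boils down to filtering each frame's inference results for the monkey class, ideally with a cooldown so one monkey does not cause a burst of notifications. Here is a sketch of that logic; the detection format (a list of label/confidence pairs) is an assumption for illustration, not the Vision Bonnet SDK's actual output:

```python
import time

class MonkeyAlarm:
    """Debounced alarm trigger for step 4.

    Assumes each inference result is a list of (label, confidence)
    pairs; the real VisionBonnet output format may differ.
    """

    def __init__(self, threshold=0.6, cooldown_s=30, clock=time.monotonic):
        self.threshold = threshold    # minimum confidence to count a monkey
        self.cooldown_s = cooldown_s  # seconds between alarms
        self.clock = clock            # injectable clock, handy for testing
        self.last_fired = None

    def update(self, detections):
        """Call once per frame; returns True when the alarm should fire."""
        seen = any(label == "monkey" and conf >= self.threshold
                   for label, conf in detections)
        now = self.clock()
        if seen and (self.last_fired is None
                     or now - self.last_fired >= self.cooldown_s):
            self.last_fired = now
            return True
        return False
```

In a real deployment you would call `update()` inside the kit's camera inference loop and, whenever it returns True, push a notification to the Android app.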

AWS DeepLens

Figure 4. Components of the system using DeepLens

Hardware you need to prepare: AWS DeepLens, which includes components (a), (b), and (c).

Steps:

1 — Register, connect and set up DeepLens online.

2 — Use AWS SageMaker to train a monkey detection model (d).

2.1 Create a “monkey detection project” on the DeepLens console

2.2 Import the model trained in step 2, and deploy the project to DeepLens

3 — Use AWS Management Console to build alarm trigger (e).

Unlike with the other two kits, with AWS DeepLens you don't need to prepare all the hardware yourself. However, this also limits your flexibility.
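The alarm trigger in step 3 typically means turning the model's output into a notification message that a cloud service can deliver. As a sketch, here is only the message-building half, with the actual publish call stubbed out; the payload shape, field names, and device name are assumptions for illustration, not AWS's API:

```python
import json
import time

def build_alert(detections, device="deeplens-backyard", threshold=0.5):
    """Build a JSON alert from the model output (d).

    Returns None when no monkey is detected. The detection format here
    is an assumption, not DeepLens's real inference schema.
    """
    monkeys = [d for d in detections
               if d["label"] == "monkey" and d["confidence"] >= threshold]
    if not monkeys:
        return None
    return json.dumps({
        "device": device,
        "event": "monkey_detected",
        "count": len(monkeys),
        "max_confidence": max(d["confidence"] for d in monkeys),
        "timestamp": int(time.time()),
    })

def publish(message):
    # Stub for the real notification call (e.g. an SNS topic or IoT message).
    print("would publish:", message)

msg = build_alert([{"label": "monkey", "confidence": 0.8},
                   {"label": "tree", "confidence": 0.9}])
if msg:
    publish(msg)
```

On the real device this would run inside the deployed project's inference function, with `publish()` replaced by the AWS notification service you configure in the Management Console.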

BerryNet

Figure 5. Components of the system using BerryNet

Hardware you need to prepare: Raspberry Pi 3 (b), an IP/Nest/Pi camera (a). You can also add a Movidius Neural Compute Stick for better inference performance.

Steps:

1 — Train a deep learning model as a monkey detector (d)

2 — Install and configure BerryNet (c) with the trained model on Raspberry Pi

3 — Set up an input client as the data receiver (a Pi camera, an IP camera, or even a Nest camera), and an output client as the alarm trigger.
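The output client in step 3 receives inference results and decides whether to raise the alarm. Here is a sketch of such a handler; the JSON payload shape (an "annotations" list with labels, confidences, and boxes) is an assumption for illustration, not taken from BerryNet's documented message format:

```python
import json

def on_result(payload, threshold=0.5):
    """Handle one inference result message in the output client.

    The payload shape is an assumed example, not BerryNet's exact schema.
    Returns True when the alarm should be raised.
    """
    annotations = json.loads(payload).get("annotations", [])
    monkeys = [a for a in annotations
               if a.get("label") == "monkey"
               and a.get("confidence", 0) >= threshold]
    if monkeys:
        print("monkey alert: %d detected" % len(monkeys))
        return True
    return False

# Example payload a detector might emit for one frame:
sample = json.dumps({"annotations": [
    {"label": "monkey", "confidence": 0.87, "box": [10, 20, 110, 220]},
    {"label": "bird", "confidence": 0.40, "box": [5, 5, 30, 30]},
]})
on_result(sample)  # prints: monkey alert: 1 detected
```

A real output client would register a handler like this as the message callback of a subscriber listening to BerryNet's inference result channel, and forward the alert by email, push notification, or a GPIO-driven siren.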

Currently, model training requires users to set up the environment manually, for example by following the YOLO website[3] to train the monkey detector. A new easy-to-use service, Epeuva[4], is coming soon to help customers train models. Click to register for an early invitation.

On Epeuva, you can bring your own data and customize AI models without any coding effort. By repeating step 1, users can easily build whatever detection system they want.

We envision a world where deep learning and AI are democratized for everyone and every device. The BerryNet project is licensed under the GPL because we want to take AI out of the ivory tower and make it accessible to all.

Computing has moved in massive cycles, shifting from centralized to distributed and back again. We believe edge AI is the key to building more and more useful applications in the near future.

[1] https://github.com/DT42/BerryNet

[2] https://aiyprojects.withgoogle.com/vision#assembly-guide-7-now-what

[3] https://pjreddie.com/darknet/yolo/

[4] http://www.dt42.io/epeuva/index.html#contact-section