I’ve been keeping an eye on Google Cloud Vision and wanted a quick way to test it out in the real world, so I slapped together a simple Ionic 3 app. Once I realized how easy it is, I decided to share step-by-step how to build your own. If you just want to see the code, you can grab it here.

We will build an Ionic 3 mobile app that will allow the user to take a photo of something, label it using Google Cloud Vision, and save the image and labels to a Firebase realtime database. Here is a short demo of the finished app:

Let’s get to it! First we will install the Ionic CLI, get our Google Cloud project setup, and enable the features we need:

First, install the Ionic CLI following the instructions here. Stop when you reach the section on starting your app — we will get to that in a few minutes! Then:

1. Create a new Google Cloud project and enable the Vision API by following the instructions here.
2. Create an API key. Be sure to review the API key best practices here.
3. Create a Firebase project by heading over to Firebase, clicking Get Started, then the Import Google Project button. Choose the project you created in Step 1.
4. Set the database rules by clicking Database, then the Rules tab. Use the configuration below. Important: This will open up your database so that anyone can both read and write data. In a real world application you will want to restrict this access.

{
  "rules": {
    ".read": true,
    ".write": true
  }
}
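If you do want to lock things down later, one common pattern is to require an authenticated user. This is only a sketch; see the Firebase security rules documentation for the full range of options:

```json
{
  "rules": {
    ".read": "auth != null",
    ".write": "auth != null"
  }
}
```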

Now onto the fun part! Let’s create a new Ionic 3 app using the “blank” template:

$ ionic start my-stuff blank

When done, hop into your newly created folder and install some dependencies we will need for this project:

$ cd ./my-stuff
$ npm install angularfire2 firebase --save
$ ionic cordova plugin add cordova-plugin-camera
$ npm install --save @ionic-native/camera

The above adds firebase and angularfire2 (both needed for communication with our realtime Firebase database), the Cordova camera plugin, and the Ionic Native camera wrapper.

Fire up your favorite code editor and edit src/app/app.module.ts. Import the Camera and add it to the array of providers:

import { Camera } from '@ionic-native/camera';

...
providers: [
  StatusBar,
  SplashScreen,
  Camera,
  {provide: ErrorHandler, useClass: IonicErrorHandler},
]
...

Next let’s create a file to store all of our Firebase configuration settings:

$ touch ./src/environment.ts

For the next step you will need your Firebase configuration settings. The quickest way is to go to your Firebase project and click the Add Firebase to your web app button from the Overview. Use those strings to complete your newly created environment.ts file:

export const environment = {
  firebaseConfig: {
    apiKey: "",
    authDomain: "",
    databaseURL: "",
    projectId: "",
    storageBucket: "",
    messagingSenderId: ""
  }
};

Now, back to src/app/app.module.ts. Add the following imports and add the Http and Firebase modules:

import { HttpModule } from '@angular/http';

import { AngularFireModule } from 'angularfire2';

import { AngularFireDatabaseModule } from 'angularfire2/database';

import { AngularFireAuthModule } from 'angularfire2/auth';

import { environment } from '../environment';

...
imports: [
  BrowserModule,
  HttpModule,
  IonicModule.forRoot(MyApp),
  AngularFireModule.initializeApp(environment.firebaseConfig),
  AngularFireDatabaseModule,
  AngularFireAuthModule
],
...

Now that we have our app wired up to Firebase, we can move on to Google Cloud Vision. Create a service that will wrap our calls to the Vision API using the Ionic CLI:

$ ionic g provider GoogleCloudVisionService

Open the newly created file (/src/providers/google-cloud-vision-service/google-cloud-vision-service.ts) and add the following:

import { Injectable } from '@angular/core';

import { Http } from '@angular/http';

import 'rxjs/add/operator/map';

import { environment } from '../../environment';

@Injectable()
export class GoogleCloudVisionServiceProvider {

  constructor(public http: Http) { }

  getLabels(base64Image) {
    const body = {
      "requests": [
        {
          "image": {
            "content": base64Image
          },
          "features": [
            {
              "type": "LABEL_DETECTION"
            }
          ]
        }
      ]
    }
    return this.http.post('https://vision.googleapis.com/v1/images:annotate?key=' + environment.googleCloudVisionAPIKey, body);
  }

}

For this app we will only be using the Label Detection feature, but there are many others. Be sure to check out the full list of available features.
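For illustration, here is a sketch of what a request body asking for more than one feature might look like. TEXT_DETECTION and the optional maxResults field come from the Vision API docs, but the buildRequest helper is a name invented for this example:

```typescript
// Sketch only: a request body asking for labels and text in one call.
// TEXT_DETECTION and maxResults are real Vision API options; the
// buildRequest helper is hypothetical.
function buildRequest(base64Image: string) {
  return {
    requests: [
      {
        image: { content: base64Image },
        features: [
          { type: "LABEL_DETECTION", maxResults: 10 },
          { type: "TEXT_DETECTION" }
        ]
      }
    ]
  };
}
```

Each entry in the features array is scored independently, so adding a feature does not change the shape of the label results.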

Your code editor may be complaining about our googleCloudVisionAPIKey reference. Let’s add it to our environment.ts file as a sibling of firebaseConfig, filling in the API key you generated way back in Step 2:

googleCloudVisionAPIKey: ""

Believe it or not, we now have a fully configured app, ready for features! We need a way to take a new photo, analyze it, and display the results. Open up /src/pages/home/home.ts and add our imports:

import { Component } from '@angular/core';
import { AlertController } from 'ionic-angular';
import { Camera, CameraOptions } from '@ionic-native/camera';
import { GoogleCloudVisionServiceProvider } from '../../providers/google-cloud-vision-service/google-cloud-vision-service';
import { AngularFireDatabase, FirebaseListObservable } from 'angularfire2/database';

Next, create a FirebaseListObservable for our items and wire it up in our constructor:

items: FirebaseListObservable<any[]>;

constructor(
  private camera: Camera,
  private vision: GoogleCloudVisionServiceProvider,
  private db: AngularFireDatabase,
  private alert: AlertController) {
    this.items = db.list('items');
}

FirebaseListObservable synchronizes data from our realtime database as lists that our application can use. You’ll see new items we take photos of added to the list almost instantly with no refresh needed!
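As a rough mental model (this is a toy, not angularfire2’s actual implementation), a synchronized list behaves like a container that notifies every subscriber whenever an item is pushed:

```typescript
// Toy analogy for FirebaseListObservable: subscribers get the current
// list immediately, then again on every push, with no manual refresh.
type Listener<T> = (items: T[]) => void;

class LiveList<T> {
  private items: T[] = [];
  private listeners: Listener<T>[] = [];

  subscribe(fn: Listener<T>) {
    this.listeners.push(fn);
    fn(this.items); // emit the current state right away
  }

  push(item: T) {
    this.items.push(item);
    this.listeners.forEach(fn => fn(this.items)); // notify everyone
  }
}
```

The real thing does this across the network: a push from any device triggers the subscription callbacks on every connected client.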

Speaking of, we need a way to add items to our realtime database:

saveResults(imageData, results) {
  this.items.push({ imageData: imageData, results: results})
    .then(_ => { })
    .catch(err => {
      this.showAlert(err)
    });
}

saveResults takes base64 image data and results and pushes them to our FirebaseListObservable which handles the data sync.

We also need a utility function to show us any errors that might rear their ugly heads:

showAlert(message) {
  let alert = this.alert.create({
    title: 'Error',
    subTitle: message,
    buttons: ['OK']
  });
  alert.present();
}

Of course we need a function to take the photo and call our service wrapper:

takePhoto() {
  const options: CameraOptions = {
    quality: 100,
    targetHeight: 500,
    targetWidth: 500,
    destinationType: this.camera.DestinationType.DATA_URL,
    encodingType: this.camera.EncodingType.PNG,
    mediaType: this.camera.MediaType.PICTURE
  }
  this.camera.getPicture(options).then((imageData) => {
    this.vision.getLabels(imageData).subscribe((result) => {
      this.saveResults(imageData, result.json().responses);
    }, err => {
      this.showAlert(err);
    });
  }, err => {
    this.showAlert(err);
  });
}

There is a lot going on here:

- Camera options: the important part is that we want a picture (not video) and we want the image data via DATA_URL, which will give us the base64 image data. The other options including quality, size, and format are up to you, but the Vision API does have limits outlined here. While testing without ratcheting down the size I hit the content limit quickly.
- We use our camera options when calling the Camera API’s getPicture method, which gives us the base64 image data.
- We take the base64 image data and pass it to the Google Cloud Vision service wrapper’s getLabels method.
- Finally, we save the base64 image data and the raw results from our API call to our realtime database.
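To make the shape of those raw results concrete, here is a small sketch using an abbreviated, made-up response (real labelAnnotations entries carry additional fields such as mid and topicality):

```typescript
// Abbreviated, hypothetical Vision API response; the real payload
// nests the same way: responses[0].labelAnnotations[*].description.
const responses = [
  {
    labelAnnotations: [
      { description: "dog", score: 0.97 },
      { description: "carnivore", score: 0.92 }
    ]
  }
];

// Pull out just the human-readable label names; this is the same
// path our template will read via item.results[0].labelAnnotations.
const labels = responses[0].labelAnnotations.map(a => a.description);
```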

Now we need to switch over to our HTML in /src/pages/home.html:

<ion-header>
  <ion-navbar>
    <ion-title>My Stuff</ion-title>
  </ion-navbar>
</ion-header>

<ion-content padding>
  <ion-card *ngFor="let item of items | async">
    <img [src]="'data:image/png;base64,' + item.imageData"/>
    <ion-card-content>
      <ion-list no-lines>
        <ion-list-header>Labels</ion-list-header>
        <ion-item *ngFor="let label of item.results[0].labelAnnotations">{{label.description}}</ion-item>
      </ion-list>
    </ion-card-content>
  </ion-card>
  <ion-fab bottom right>
    <button ion-fab (click)="takePhoto()">
      <ion-icon name="camera"></ion-icon>
    </button>
  </ion-fab>
</ion-content>

This will create a card for each item in our database with the image and a list of labels returned from Google Cloud Vision. It also adds a FAB that will launch the camera when tapped.

Time to test it out! You will need to connect a real device and run:

$ ionic cordova run android --device

or

$ ionic cordova run ios --device

If you run into an error like this:

Error: ./node_modules/firebase/app/shared_promise.js

Module not found: Error: Can't resolve 'promise-polyfill' in ...

Try running the following, taken from here:

$ npm install promise-polyfill --save-exact

We’ve really just scratched the surface of what Google Cloud Vision can do. Remember, the code is available here. Issues and PRs are welcome!

Thanks for reading!