This tutorial demonstrates how to use the camera plugin in combination with Firebase’s vision library to read any type of barcode. The example below uses the Android emulator with the virtual scene option selected for the camera.

Using the Android Emulator Virtual Scene

If you do not wish to use the virtual scene, skip this section. Otherwise, start by creating an Android emulator and select the virtual scene option for the camera of your choice.

Download any barcode image you can find on a Google image search, run the emulator, and click the ellipsis (...) menu to open the extended controls.

Finally, under the Camera option on the left nav, set the Wall image to point to this barcode file.

Project Setup

I’m not going to detail the steps to create a Flutter project. Instead, I will assume you already have your project ready and running. However, you will need to set up a Firebase Project and add it to your Flutter application project.

You may wonder why Firebase is used. Firebase provides a service called ML Kit, to which we can pass an image and retrieve the values of any barcodes it reads. We can also rest assured that ML Kit has been trained to read all common types of barcodes!

Setup Camera Preview

Luckily, there is a Flutter plugin, conveniently called camera, that gives us a camera preview along with the ability to capture an image and pass it to Firebase ML Vision for barcode results.

Simply add the camera plugin (with the current version) to your pubspec.yaml:

dependencies:
  flutter:
    sdk: flutter
  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^0.1.2
  camera: 0.3.0+3

pubspec.yaml

We’ll take the camera plugin’s example code as a basis to work with.

import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:camera/camera.dart';

List<CameraDescription> cameras;

Future<void> main() async {
  cameras = await availableCameras();
  runApp(App());
}

class App extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: CameraApp(),
    );
  }
}

class CameraApp extends StatefulWidget {
  @override
  _CameraAppState createState() => _CameraAppState();
}

class _CameraAppState extends State<CameraApp> {
  CameraController controller;

  @override
  void initState() {
    super.initState();
    controller = CameraController(cameras[0], ResolutionPreset.medium);
    controller.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
    });
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    if (!controller.value.isInitialized) {
      return Container();
    }
    return Stack(
      alignment: Alignment.center,
      children: <Widget>[
        AspectRatio(
          aspectRatio: controller.value.aspectRatio,
          child: CameraPreview(controller),
        ),
      ],
    );
  }
}

Full screen camera preview example

** For those using the Android virtual scene for the camera preview, you can hold Alt and use the WASD keys to move around (the wall is in the room behind you) **

Read a Barcode

Now that we have a camera preview to work with, we can start taking an image and passing it to Firebase’s vision detection API. As the camera plugin is still in preview, there is currently no way to stream the camera’s preview into ML Kit. Although there is now functionality to acquire the byte buffer of the preview, the pixel data is not in the format that the FirebaseVisionImage class expects, and converting it is out of scope for this tutorial.

Instead, we will create a timer that fires every 3 seconds, takes an image, saves it, and has ML Kit load and read it.

First, let us set up the timer code.

class _CameraAppState extends State<CameraApp> {
  CameraController controller;
  Timer _timer;

  @override
  void initState() {
    super.initState();
    controller = CameraController(cameras[0], ResolutionPreset.medium);
    controller.initialize().then((_) {
      if (!mounted) {
        return;
      }
      setState(() {});
      _startTimer();
    });
  }

  void _startTimer() {
    _timer = Timer(Duration(seconds: 3), _timerElapsed);
  }

  void _stopTimer() {
    if (_timer != null) {
      _timer.cancel();
      _timer = null;
    }
  }

  Future<void> _timerElapsed() async {
    _stopTimer();
    // Code to capture image and read barcode here...
    _startTimer();
  }
}

Adding a callback timer

Now that we have the callback function ticking every 3 seconds (stopping the timer for the duration of the callback safeguards against the barcode detection overrunning), let’s take an image!

Future<void> _timerElapsed() async {
  _stopTimer();
  File file = await _takePicture();
  _startTimer();
}

Future<File> _takePicture() async {
  final Directory extDir = await getApplicationDocumentsDirectory();
  final String dirPath = '${extDir.path}/Pictures/barcode';
  await Directory(dirPath).create(recursive: true);
  final File file = File('$dirPath/barcode.jpg');
  if (await file.exists()) {
    await file.delete();
  }
  await controller.takePicture(file.path);
  return file;
}

Take and save photo example

Every 3 seconds the image will be overwritten and passed to the ML Kit API as described below:

class _CameraAppState extends State<CameraApp> {
  CameraController controller;
  Timer _timer;
  String _barcodeRead = ""; // Add this

  // ... Rest of _CameraAppState's methods ...

  Future<void> _timerElapsed() async {
    _stopTimer();
    File file = await _takePicture();
    await _readBarcode(file);
    _startTimer();
  }

  Future _readBarcode(File file) async {
    FirebaseVisionImage firebaseImage = FirebaseVisionImage.fromFile(file);
    final BarcodeDetector barcodeDetector =
        FirebaseVision.instance.barcodeDetector();
    final List<Barcode> barcodes =
        await barcodeDetector.detectInImage(firebaseImage);
    _barcodeRead = "";
    for (Barcode barcode in barcodes) {
      _barcodeRead += barcode.rawValue + ", ";
    }
  }
}

Read the barcode example

For the above code to compile, you will need to add the Firebase ML Vision plugin to pubspec.yaml (along with path_provider, to get folder locations on the system).

dependencies:
  flutter:
    sdk: flutter
  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^0.1.2
  camera: 0.3.0+3
  firebase_ml_vision: 0.5.0+1
  path_provider: 0.5.0+1

ML Kit added to pubspec.yaml

And add the necessary imports at the top of the main.dart file:

import 'dart:async';
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:camera/camera.dart';
import 'package:path_provider/path_provider.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

So… now the app can take the photo, read all the barcodes detected in the image, and store them in the _barcodeRead member variable. All that is left is to display it!

Display the Barcodes

Add the Text element to the Stack inside the build method; we can wrap it in a Container so that it can be anchored to the bottom of the screen.

@override
Widget build(BuildContext context) {
  if (!controller.value.isInitialized) {
    return Container();
  }
  return Stack(
    alignment: Alignment.center,
    children: <Widget>[
      AspectRatio(
        aspectRatio: controller.value.aspectRatio,
        child: CameraPreview(controller),
      ),
      Container(
        alignment: Alignment.bottomCenter,
        child: Text(
          _barcodeRead.length > 0 ? _barcodeRead : "No Barcode",
          textAlign: TextAlign.center,
        ),
      ),
    ],
  );
}

Display barcode string

Finally, we need to redraw the widget whenever we update the barcode variable. To do this in Flutter, all we need to do is wrap the update in a call to setState.
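One way to do this is a small change to the _readBarcode method from earlier: move the _barcodeRead update inside a setState callback, so the framework knows to rebuild the widget with the new text.

```dart
Future _readBarcode(File file) async {
  FirebaseVisionImage firebaseImage = FirebaseVisionImage.fromFile(file);
  final BarcodeDetector barcodeDetector =
      FirebaseVision.instance.barcodeDetector();
  final List<Barcode> barcodes =
      await barcodeDetector.detectInImage(firebaseImage);

  // Wrapping the member update in setState tells Flutter to
  // rebuild the widget tree, so the Text element refreshes.
  setState(() {
    _barcodeRead = "";
    for (Barcode barcode in barcodes) {
      _barcodeRead += barcode.rawValue + ", ";
    }
  });
}
```

With that change, every 3-second tick that finds a barcode will update the text at the bottom of the preview automatically.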