Hi everyone! In this article, you will learn how to implement ML Kit text recognition in your app.

What is ML-Kit?

ML Kit is Google's mobile SDK that brings machine learning features such as text recognition, face detection, and barcode scanning to Android and iOS apps. Here we will use its text recognition through Firebase.

Implementation

Step 1: Add Firebase to Flutter

First, connect your app to Firebase: create a project in the Firebase console, register your Android/iOS app, and add the generated google-services.json (Android) and GoogleService-Info.plist (iOS) configuration files to your project.

Step 2: Add the dependencies

Add the following dependencies to your pubspec.yaml file.

```yaml
dependencies:
  flutter:
    sdk: flutter
  firebase_ml_vision: "<newest version>"
  camera: "<newest version>"
```

The firebase_ml_vision plugin, built by the Flutter team, exposes the ML Kit Vision for Firebase API to Flutter.

We also need the camera plugin to capture the frames we will scan for text.

Step 3: Initialize the camera

```dart
CameraController _camera;

@override
void initState() {
  super.initState();
  _initializeCamera();
}

void _initializeCamera() async {
  final CameraDescription description =
      await ScannerUtils.getCamera(_direction);

  _camera = CameraController(
    description,
    ResolutionPreset.high,
  );

  await _camera.initialize();

  _camera.startImageStream((CameraImage image) {
    // Here we will scan the text from the image
    // which we are getting from the camera.
  });
}
```

We will be using a prewritten class (ScannerUtils) from the Flutter team's demo, which has utility methods to pick a camera and run detection with Firebase ML Kit.

Step 4: Scan the image

When we scan the image we get from the camera, we receive a VisionText result.

```dart
VisionText _textScanResults;
TextRecognizer _textRecognizer = FirebaseVision.instance.textRecognizer();
```
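Inside the image-stream callback, each CameraImage can be converted to a FirebaseVisionImage and passed to the recognizer. The conversion helpers used here (_concatenatePlanes, _buildMetaData) mirror the ones in the Flutter team's ScannerUtils demo class; this is a simplified sketch of the idea, not the demo's exact code:

```dart
// Sketch: scan one camera frame with the text recognizer.
// _concatenatePlanes and _buildMetaData are helpers from the demo
// that turn a CameraImage into bytes plus size/rotation metadata.
void _scanImage(CameraImage image) async {
  final FirebaseVisionImage visionImage = FirebaseVisionImage.fromBytes(
    _concatenatePlanes(image.planes), // raw bytes of the camera frame
    _buildMetaData(image),            // size, rotation and format metadata
  );

  final VisionText result = await _textRecognizer.processImage(visionImage);

  setState(() {
    _textScanResults = result; // rebuild the UI with the new result
  });
}
```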

Step 5: Get the result

Now _textScanResults holds the result.

If you look at VisionText, it gives access to the whole text as well as its blocks, lines, and words, so we can pick whichever granularity of result we need.

To get the text blocks:

```dart
List<TextBlock> blocks = _textScanResults.blocks;
```

To get the lines of a block:

```dart
List<TextLine> lines = block.lines;
```

To get the words of a line:

```dart
List<TextElement> words = line.elements;
```
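Putting these together, a getWords helper like the one called in _buildResults later can walk the whole hierarchy; a minimal sketch:

```dart
// Walks the VisionText hierarchy (blocks -> lines -> words)
// and collects every recognized word into a single list.
List<String> getWords(VisionText scanResults) {
  final List<String> words = [];
  for (TextBlock block in scanResults.blocks) {
    for (TextLine line in block.lines) {
      for (TextElement word in line.elements) {
        words.add(word.text);
      }
    }
  }
  return words;
}
```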

Step 6: Build the UI

To show the camera feed we just need a CameraPreview widget and pass it the CameraController object.

```dart
@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Stack(
      fit: StackFit.expand,
      children: <Widget>[
        _camera == null
            ? Container(
                color: Colors.black,
              )
            : Container(
                height: MediaQuery.of(context).size.height - 150,
                child: CameraPreview(_camera),
              ),
      ],
    ),
  );
}
```
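To draw the scanned-text outlines from the next section on top of the preview, the result widget can simply be stacked above the CameraPreview. A sketch of the Stack's children, assuming the camera is initialized and _buildResults/_textScanResults are wired up as described in this article:

```dart
// Overlay the recognition results on top of the live camera feed.
body: Stack(
  fit: StackFit.expand,
  children: <Widget>[
    CameraPreview(_camera),
    _buildResults(_textScanResults), // outlines drawn over the preview
  ],
),
```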

Show scanned text outlines

To show the outlines we can draw them with a CustomPainter: the VisionText elements extend TextContainer, which exposes bounding-box coordinates, so we can find those coordinates and paint over the preview.

```dart
Widget _buildResults(VisionText scanResults) {
  CustomPainter painter;

  if (scanResults != null) {
    final Size imageSize = Size(
      _camera.value.previewSize.height - 100,
      _camera.value.previewSize.width,
    );
    painter = TextDetectorPainter(imageSize, scanResults);
    getWords(scanResults);

    return CustomPaint(
      painter: painter,
    );
  } else {
    return Container();
  }
}
```
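TextDetectorPainter comes from the Flutter team's demo. A minimal sketch of such a painter, which scales each TextBlock's boundingBox from image coordinates to widget coordinates and strokes it (the real demo class is more complete):

```dart
// Sketch of a painter that outlines each recognized text block.
class TextDetectorPainter extends CustomPainter {
  TextDetectorPainter(this.imageSize, this.visionText);

  final Size imageSize;
  final VisionText visionText;

  @override
  void paint(Canvas canvas, Size size) {
    final Paint paint = Paint()
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2.0
      ..color = Colors.red;

    // Scale a bounding box from image coordinates to widget coordinates.
    Rect scaleRect(Rect box) => Rect.fromLTRB(
          box.left * size.width / imageSize.width,
          box.top * size.height / imageSize.height,
          box.right * size.width / imageSize.width,
          box.bottom * size.height / imageSize.height,
        );

    for (TextBlock block in visionText.blocks) {
      canvas.drawRect(scaleRect(block.boundingBox), paint);
    }
  }

  @override
  bool shouldRepaint(TextDetectorPainter oldDelegate) =>
      oldDelegate.visionText != visionText;
}
```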

Now let's run the app and see the result.

Thanks for reading this article ❤

If I got something wrong 🙈, let me know in the comments. I would love to improve.

Clap 👏 If this article helps you.

Check my GitHub repositories.