(Translated from French: Flutter, API natives et plugins (2/3))

“We need to talk…”

In the first part, we covered the basics of MethodChannels.

We’ll now take a closer look at a basic implementation of a speech recognition channel.

Speech recognition

Sytody in action

The Sytody UI works as follows:

when speech recognition is permitted, a button lets you start recording

When the recording starts:

The ‘Record’ button changes to a ‘Stop’ button, letting you end the recording and finalize the transcription.

the transcription field appears, displaying the intermediate transcripts, and an [x] button lets you end the recording.

Native APIs

Android (4.1+) and iOS (10+) both offer a speech recognition API:

iOS : Speech API

Android : SpeechRecognizer

To use them from the Flutter application, we will define a channel dedicated to the recognition process: activating, starting, and stopping the recording, and displaying the transcription.

The diagram shows the necessary steps:

Our application requires permission to use the microphone and speech recognition. At first launch, on iOS and Android 7.1+, the user must accept the request. Once the request is accepted, the host invokes a Dart method to confirm that recognition is available. From there, Flutter can start the recognition by invoking the “start/listen” method on the dedicated channel. After the start call, the recording begins, then the host invokes onRecognitionStarted. The Flutter application will then receive:

the intermediate transcripts (on iOS)

and once the user stops the recognition (stop()), the final transcription via onRecognitionComplete(String completeTranscription)
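The calling side of this flow can be sketched in Dart. The channel and method names below are assumptions for illustration; they just have to match whatever the host side registers:

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name: it must match the one registered by the host.
const MethodChannel _channel = MethodChannel('speech_recognizer');

/// Asks the host to request the microphone / speech recognition permissions.
Future<void> activate() => _channel.invokeMethod('speech.activate');

/// Starts the native recognizer. The returned future only confirms the
/// call itself; the actual events (onRecognitionStarted, transcripts, ...)
/// arrive later as method calls *from* the host.
Future<void> listen({String locale = 'en_US'}) =>
    _channel.invokeMethod('speech.listen', locale);

/// Stops the recording and triggers the final transcription.
Future<void> stop() => _channel.invokeMethod('speech.stop');
```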

1st implementation

First, we need to create a project with Swift on the iOS side (Objective-C is used as the default language). The Flutter CLI lets you define this with:

flutter create -i swift --org com.company speech_project

`-i swift` to choose Swift for iOS

`--org` to choose the project namespace

We could also choose to write the Android side in Kotlin with `-a kotlin`.

Flutter / Dart

A SpeechRecognizer class handles the Flutter <-> Host communication.

Here is the “global” messageHandler, used to handle the host’s method calls.

cf. transcriptor.dart
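A minimal sketch of such a handler, assuming hypothetical method names modeled on the flow described above (the real names live in transcriptor.dart):

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name, shared with the host side.
const MethodChannel _channel = MethodChannel('speech_recognizer');

/// Registers the single handler that receives every host -> Flutter call.
void initRecognizer() {
  _channel.setMethodCallHandler(_onMethodCall);
}

Future<dynamic> _onMethodCall(MethodCall call) async {
  switch (call.method) {
    case 'speech.onSpeechAvailability':
      final bool available = call.arguments; // permission granted or not
      print('recognition available: $available');
      break;
    case 'speech.onRecognitionStarted':
      // recording has started on the host side
      break;
    case 'speech.onSpeech':
      final String transcript = call.arguments; // intermediate transcript (iOS)
      print('intermediate: $transcript');
      break;
    case 'speech.onRecognitionComplete':
      final String transcription = call.arguments; // final transcription
      print('final: $transcription');
      break;
  }
  return null;
}
```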

iOS / Swift

For iOS, we can instantiate our FlutterMethodChannel in the AppDelegate’s application(_:didFinishLaunchingWithOptions:) method.

We bind each of the method channel’s methods to a “real” Swift method, and a FlutterResult is used to send the result back to the Flutter caller.

To handle the speech recognition events, our AppDelegate implements SFSpeechRecognizerDelegate. These events will be sent to Flutter via invokeMethod(name, args). In this case, the results are just “call confirmations”, signals that there were no errors during the call; the actual effects of these method calls will be notified via the SFSpeechRecognizerDelegate methods.
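As a rough sketch of that wiring (channel and method names are assumptions, and the recognition plumbing is elided; the complete AppDelegate.swift has the real code):

```swift
import UIKit
import Flutter
import Speech

@UIApplicationMain
class AppDelegate: FlutterAppDelegate, SFSpeechRecognizerDelegate {
  var channel: FlutterMethodChannel?

  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    let controller = window?.rootViewController as! FlutterViewController
    channel = FlutterMethodChannel(name: "speech_recognizer",
                                   binaryMessenger: controller.binaryMessenger)
    // Bind each channel method to a "real" Swift method.
    channel?.setMethodCallHandler { [weak self] call, result in
      switch call.method {
      case "speech.activate": self?.activateRecognition(result: result)
      case "speech.listen":   self?.startRecognition(result: result)
      case "speech.stop":     self?.stopRecognition(result: result)
      default:                result(FlutterMethodNotImplemented)
      }
    }
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }

  func activateRecognition(result: @escaping FlutterResult) {
    SFSpeechRecognizer.requestAuthorization { status in
      DispatchQueue.main.async {
        // The "effect" is notified to Flutter via invokeMethod...
        self.channel?.invokeMethod("speech.onSpeechAvailability",
                                   arguments: status == .authorized)
      }
    }
    result(true) // ...while the FlutterResult is just a call confirmation.
  }

  // SFSpeechRecognizerDelegate events are forwarded the same way.
  func speechRecognizer(_ speechRecognizer: SFSpeechRecognizer,
                        availabilityDidChange available: Bool) {
    channel?.invokeMethod("speech.onSpeechAvailability", arguments: available)
  }

  // startRecognition / stopRecognition wrap the SFSpeechRecognizer API;
  // their implementation does not concern Flutter (cf. AppDelegate.swift).
  func startRecognition(result: @escaping FlutterResult) { result(true) }
  func stopRecognition(result: @escaping FlutterResult) { result(true) }
}
```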

cf. complete AppDelegate.swift

The implementation of the recognition methods does not really concern Flutter, so I won’t go into more detail; cf. AppDelegate.swift

Android / Java

The API is almost the same on Android; we define the method handler in the onCreate method of the MainActivity:

Here, the MainActivity implements RecognitionListener. cf. the complete MainActivity.java
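A sketch of that setup, using the Android embedding of the time (FlutterActivity + getFlutterView()); the channel and method names are assumptions, and only the interesting RecognitionListener callbacks are filled in:

```java
package com.company.speech_project;

import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;
import io.flutter.app.FlutterActivity;
import io.flutter.plugin.common.MethodChannel;

public class MainActivity extends FlutterActivity implements RecognitionListener {
  private static final String CHANNEL = "speech_recognizer"; // hypothetical name
  private SpeechRecognizer recognizer;
  private MethodChannel channel;

  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    recognizer = SpeechRecognizer.createSpeechRecognizer(this);
    recognizer.setRecognitionListener(this);

    channel = new MethodChannel(getFlutterView(), CHANNEL);
    channel.setMethodCallHandler((call, result) -> {
      switch (call.method) {
        case "speech.listen": {
          Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
          intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
          recognizer.startListening(intent);
          result.success(true); // just a call confirmation
          break;
        }
        case "speech.stop": {
          recognizer.stopListening();
          result.success(true);
          break;
        }
        default:
          result.notImplemented();
      }
    });
  }

  // RecognitionListener: forward the interesting events to Flutter.
  @Override public void onReadyForSpeech(Bundle params) {
    channel.invokeMethod("speech.onRecognitionStarted", null);
  }
  @Override public void onResults(Bundle results) {
    ArrayList<String> matches =
        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    channel.invokeMethod("speech.onRecognitionComplete",
        matches == null || matches.isEmpty() ? "" : matches.get(0));
  }
  // The remaining callbacks are required by the interface.
  @Override public void onBeginningOfSpeech() {}
  @Override public void onRmsChanged(float rmsdB) {}
  @Override public void onBufferReceived(byte[] buffer) {}
  @Override public void onEndOfSpeech() {}
  @Override public void onError(int error) {}
  @Override public void onPartialResults(Bundle partialResults) {}
  @Override public void onEvent(int eventType, Bundle params) {}
}
```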

That’s it for a first implementation, and for this second part.

In the third and final part, we will see how to modularize cross-platform features by creating dedicated, easily reusable plugins.

Resources