Scope

On-device speech recognition improves user privacy by keeping audio data off the cloud. With this enhanced speech recognition, Apple aims to give voice-based AI a major boost.

The newly upgraded speech recognition API lets you do a variety of things, such as tracking voice quality and speech patterns through voice analytics metrics.
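As a minimal sketch of those metrics: in iOS 13, each transcription segment of a final result exposes an optional `voiceAnalytics` property with jitter, shimmer, pitch, and voicing, each reported as per-frame values. The function name below is a placeholder.

```swift
import Speech

// Inspect voice analytics on a final recognition result.
// Each metric is an SFAcousticFeature holding one value per audio frame.
func logVoiceAnalytics(from result: SFSpeechRecognitionResult) {
    for segment in result.bestTranscription.segments {
        // voiceAnalytics is nil on partial (non-final) results.
        guard let analytics = segment.voiceAnalytics else { continue }

        let jitter  = analytics.jitter.acousticFeatureValuePerFrame
        let shimmer = analytics.shimmer.acousticFeatureValuePerFrame
        let pitch   = analytics.pitch.acousticFeatureValuePerFrame
        let voicing = analytics.voicing.acousticFeatureValuePerFrame

        print("\(segment.substring): \(jitter.count) jitter frames, " +
              "\(shimmer.count) shimmer, \(pitch.count) pitch, \(voicing.count) voicing")
    }
}
```

Comparing these per-frame arrays across recordings is one way to build the kind of automated feedback described below.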

From providing automated feedback based on recordings to comparing the speech patterns of individuals, there’s so much you can do in the field of AI using on-device speech recognition.

Of course, there are trade-offs to consider with on-device speech recognition. Unlike the cloud, the device does not benefit from continuous learning, which can lead to lower accuracy. Moreover, language support is currently limited to about ten languages.

Nonetheless, on-device support lets you run speech recognition for an unlimited duration, a big win over the server's previous limit of one minute per recording.
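One way to opt into on-device recognition, sketched below: check the recognizer's `supportsOnDeviceRecognition` flag (iOS 13 and later) and set `requiresOnDeviceRecognition` on the request. The function name is a placeholder.

```swift
import Speech

// Build a recognition request that prefers on-device processing
// when the device and locale support it, falling back to the server otherwise.
func makeRequest(for url: URL,
                 recognizer: SFSpeechRecognizer) -> SFSpeechURLRecognitionRequest {
    let request = SFSpeechURLRecognitionRequest(url: url)
    if recognizer.supportsOnDeviceRecognition {
        // Keeps audio off the network and lifts the one-minute limit.
        request.requiresOnDeviceRecognition = true
    }
    return request
}
```

If `requiresOnDeviceRecognition` is true but the device cannot satisfy it, the recognition task fails rather than silently falling back, so the capability check matters.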

SFSpeechRecognizer is the engine that drives speech recognition.
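As a minimal sketch of the engine in use (assuming the app's Info.plist already declares the speech-recognition usage description, and `audioURL` points to a recording your app provides), transcribing a pre-recorded file might look like this:

```swift
import Speech

// Transcribe a pre-recorded audio file and print the final transcription.
func transcribe(audioURL: URL) {
    SFSpeechRecognizer.requestAuthorization { status in
        guard status == .authorized,
              let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.isAvailable else { return }

        let request = SFSpeechURLRecognitionRequest(url: audioURL)
        recognizer.recognitionTask(with: request) { result, error in
            guard let result = result else {
                print("Recognition error: \(error?.localizedDescription ?? "unknown")")
                return
            }
            // Partial results arrive first; isFinal marks the completed transcription.
            if result.isFinal {
                print(result.bestTranscription.formattedString)
            }
        }
    }
}
```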

In iOS 13, SFSpeechRecognizer is smart enough to recognize spoken punctuation.

Saying “dot” adds a full stop. Similarly, saying “comma,” “dash,” or “question mark” yields the corresponding punctuation marks in the transcription: (, — ?).