NativeScript - Speech Recognition


Here's how we can add speech recognition to our NativeScript application.

Creating a New Project

I'll start off by generating a new NativeScript project named SpeechRecognition with the following command:

# Ensure you've installed the NativeScript CLI
tns create SpeechRecognition --template nativescript-template-ng-tutorial
Plugin Installation

We can install the SpeechRecognition plugin within our project:

tns plugin add nativescript-speech-recognition

Before adding this to our component(s) or service, we need to add SpeechRecognition to the list of providers inside of our root app.module.ts.

import { NgModule, NO_ERRORS_SCHEMA } from "@angular/core";
import { NativeScriptModule } from "nativescript-angular/nativescript.module";
import { SpeechRecognition } from "nativescript-speech-recognition";

import { AppComponent } from "./app.component";

@NgModule({
  declarations: [AppComponent],
  bootstrap: [AppComponent],
  imports: [NativeScriptModule],
  providers: [SpeechRecognition],
  schemas: [NO_ERRORS_SCHEMA]
})
export class AppModule {}

Now we can check to see if the device is capable of speech recognition by injecting SpeechRecognition into our component and calling available().

import { Component } from "@angular/core";
import { SpeechRecognition } from 'nativescript-speech-recognition';

@Component({
  selector: "my-app",
  template: `
    <ActionBar title="Speech Recognition"></ActionBar>
  `
})
export class AppComponent {

  constructor(private speechRecognition: SpeechRecognition) { }

  triggerListening() {
    this.speechRecognition.available()
      .then(available => console.log(available ? "Speech recognition is available." : "Speech recognition is not available."))
      .catch(error => console.error(error));
  }
}

Now that we can check whether it's supported, we can start listening for speech. We can update triggerListening() to either start listening or display an alert dialog if speech recognition isn't available:

  triggerListening() {
    this.speechRecognition.available().then(available => {
      available ? this.listen() : alert('Speech recognition is not available!');
    })
    .catch(error => console.error(error));
  }

Then, we can create a listen() function that uses startListening() with SpeechRecognitionOptions as a mandatory parameter.

These options include an onResult() callback, which gives us access to the transcription as a SpeechRecognitionTranscription. Let's make sure we've imported both types before going further:

import { SpeechRecognition, SpeechRecognitionTranscription, SpeechRecognitionOptions } from 'nativescript-speech-recognition';
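For reference, the two types can be sketched roughly like this. This is a simplified, illustrative approximation of the plugin's typings (check the plugin's own definitions for the full set of options), with a small mock showing how the onResult() callback receives partial and then final results:

```typescript
// Simplified sketch (assumption: mirrors the shapes used by
// nativescript-speech-recognition, trimmed for illustration).
interface SpeechRecognitionTranscription {
  text: string;      // recognized text so far
  finished: boolean; // true once the final result arrives
}

interface SpeechRecognitionOptions {
  locale?: string; // e.g. 'en-US'; device default when omitted
  onResult: (transcription: SpeechRecognitionTranscription) => void;
}

// Mock: invoke the callback with a partial result, then a final one
const seen: string[] = [];
const options: SpeechRecognitionOptions = {
  locale: "en-US",
  onResult: t => seen.push(`${t.text}|${t.finished}`)
};
options.onResult({ text: "hello", finished: false });
options.onResult({ text: "hello world", finished: true });
console.log(seen.join(","));
// → hello|false,hello world|true
```

Note that onResult() typically fires multiple times with partial transcriptions before the finished flag becomes true, so the UI should be written to tolerate intermediate values.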

As this happens outside of Angular's view change detection, we'll either have to run this in a Zone or manually perform change detection ourselves. I've opted to import and inject ChangeDetectorRef into our constructor:

import { ChangeDetectorRef } from "@angular/core";


constructor(
  private speechRecognition: SpeechRecognition,
  private change: ChangeDetectorRef
) { }

Our listen() function therefore looks like this:

  transcription: SpeechRecognitionTranscription;

  listen() {
    const options: SpeechRecognitionOptions = {
      locale: 'en-US',
      onResult: (transcription: SpeechRecognitionTranscription) => {
        console.log(`Text: ${transcription.text}, Finished: ${transcription.finished}`);
        this.transcription = transcription;
        // onResult fires outside Angular's zone, so run change detection manually
        this.change.detectChanges();
      }
    };

    this.speechRecognition.startListening(options)
      .then(() => console.log("Started listening."))
      .catch(error => console.error(error));
  }
Stop Listening

In order to stop listening, there are two options. The first happens automatically: listening stops once we've stopped speaking for a couple of seconds. The second is to call the stopListening() function ourselves, like so:

  stopListening() {
    this.speechRecognition.stopListening()
      .then(() => console.log("Stopped listening."))
      .catch(error => console.error(error));
  }

To make this pretty, I've simply added a Label inside of a StackLayout to show the result, along with buttons to start and stop listening:

    <StackLayout>
      <Label [text]="transcription?.text"></Label>
      <Button (tap)="triggerListening()" text="Start Listening"></Button>
      <Button (tap)="stopListening()" text="Stop Listening"></Button>
    </StackLayout>