AI Notebooks - Tutorial - Use Speech-to-Text powers on audio and video

How to convert Speech to Text using AI Notebooks

Last updated 1st September, 2022.


The purpose of this tutorial is to show you how to convert speech into text and generate transcripts using AI Notebooks.


In Natural Language Processing (NLP), speech-to-text is a deep learning task that enables machines to transcribe spoken language into written text. It has many applications: transcription, summarization, diarization, subtitle generation, and more.

This documentation walks you through launching 3 AI Notebooks that let you explore and use various speech-to-text features.

  1. The first one teaches you the basics of audio transcription. You will learn how to transcribe long local or YouTube audio files, measure the quality of a transcription, add punctuation, and summarize the resulting text.
  2. The second tutorial covers more advanced steps, such as detecting speaker changes (diarization) and generating video subtitles.
  3. The last tutorial compares the available speech-to-text models to find the best one.
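The first tutorial measures transcription quality. The standard metric for this is the word error rate (WER): the word-level edit distance between the reference transcript and the hypothesis, divided by the number of reference words. A minimal pure-Python sketch of the idea (not the notebook's actual code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance (substitutions, insertions, deletions).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# 1 substitution ("sat" -> "sit") + 1 deletion ("the") = 2 edits / 6 words
print(wer("the cat sat on the mat", "the cat sit on mat"))
```

A lower WER means a better transcription; 0.0 is a perfect match with the reference.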

The following instructions correspond to each of these 3 tutorials.



You can launch your notebook from the OVHcloud Control Panel or via the ovhai CLI.

Direct link to the full code can be found here.

Launching a Jupyter notebook with "Miniconda" via UI

To launch your notebook from the OVHcloud Control Panel, refer to the following steps.

Code editor

Choose the Jupyterlab code editor.


In this tutorial, the Miniconda framework is used.

With Miniconda, you will be able to set up your environment by installing the Python libraries you need.
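For example, from a notebook cell or terminal you could install your dependencies with pip. The package list below is purely illustrative (the notebooks define their own requirements):

```shell
# Hypothetical example: install common Python speech-to-text libraries.
# Check each notebook for its actual dependency list.
pip install transformers librosa
```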

You can choose the conda version.

The default version of conda, conda-py39-cuda11.2-v22-4, works for this tutorial.


A GPU is recommended, since audio transcription is resource-intensive.

Here, 1 GPU is sufficient.

Launching a Jupyter notebook with "Miniconda" via CLI

If you want to launch it with the CLI, choose the jupyterlab editor and the conda framework.

To access the different versions of conda available, run the following command.

ovhai capabilities framework list -o yaml

This tutorial has been launched with the conda-py39-cuda11.2-v22-4 version.

If you do not specify a version, your notebook starts with the default version of conda.

Choose the number of CPUs/GPUs (<nb-cpus> or <nb-gpus>) to use in your notebook and use the following command.

Here we recommend using 1 GPU.

ovhai notebook run conda jupyterlab \
        --name <notebook-name> \
        --framework-version <conda-version> \
        --gpu <nb-gpus>
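For instance, with the conda version used in this tutorial, 1 GPU, and a placeholder notebook name (choose your own), the command could look like this:

```shell
# Example values: "speech-to-text-demo" is a hypothetical name;
# the framework version is the one this tutorial was launched with.
ovhai notebook run conda jupyterlab \
        --name speech-to-text-demo \
        --framework-version conda-py39-cuda11.2-v22-4 \
        --gpu 1
```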

You can then reach your notebook’s URL once the notebook is running.

Accessing the notebooks

Once the repository has been cloned, find your notebook by following this path: ai-training-examples > notebooks > natural-language-processing > speech-to-text.

  1. You can find the first tutorial in the basics folder. A preview of this notebook can be found on GitHub here.
  2. The second tutorial corresponds to the advanced folder. A preview of this notebook can be found on GitHub here.
  3. The last folder, named compare-models, contains the third tutorial. A preview of this notebook can be found on GitHub here.

Go further

  • With NLP, you can do sentiment analysis. For more information, click here.

