AI Notebooks - Tutorial - Audio analysis and classification with AI
How to classify sounds with AI
Last updated 1st September, 2022.
The purpose of this tutorial is to show how you can train a model in order to classify sounds. To do this, we take a dataset of marine mammal sounds as an example.
If you want to upload your dataset from the OVHcloud Control Panel, go to the Object Storage section and create a new object container by clicking Object Storage > Create an object container.
In the OVHcloud Control Panel, you can upload files but not folders. For instance, you can upload your dataset as a .zip file to optimize the bandwidth, then unzip it later from your JupyterLab environment. You can also use the OVHcloud AI CLI to upload files and folders (it is also more stable than uploading through your browser).
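As an illustration, a minimal sequence could look like the following; the archive and folder names are only examples:

# On your local machine, before uploading: compress the dataset into a single archive
zip -r marine_mammal_sounds.zip marine_mammal_sounds/
# Later, from a terminal in JupyterLab: extract the uploaded archive inside the mounted volume
unzip /workspace/data/marine_mammal_sounds.zip -d /workspace/data/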
If you want to do it with the CLI, just follow this guide. You have to choose the region, the name of your container and the path where your data is located, then use the following command:
ovhai data upload <region> <container> <paths>
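For example, assuming your container is named audio-classification-data in the GRA region and your dataset sits in a local folder (both names are illustrative):

ovhai data upload GRA audio-classification-data ./marine_mammal_sounds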
This tutorial is based on the Best of Watkins Marine Mammal Sound Database. If you don't have your own dataset, you can download this one from Kaggle.
Although this tutorial could be run with the TensorFlow image, we advise you to use the One image to rule them all image instead. This will help you avoid errors when installing Python audio libraries such as Librosa or SoundFile.
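If you nevertheless work from another image, you can usually install these libraries yourself from a notebook cell (assuming pip is available in the environment):

!pip install librosa soundfile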
You need to attach a volume if your data is in your OVHcloud Object Storage and you want to use it during your experiment. For more information on data, volumes and permissions, see our guide on data.
To be able to use the source code in this article, you have to create 2 Object Storage containers mounted as follows:
- /workspace/data, permissions: read & write
- /workspace/saved_model, permissions: read & write
If you want to launch your notebook from the OVHcloud Control Panel, select:
- the same region as your object container
- the "One image to rule them all" framework
- the Object Storage containers to attach (including the one that contains your dataset)
If you want to launch it with the CLI, choose the volume you want to attach and the number of GPUs (<nb-gpus>) to use on your notebook, then use the following command:
ovhai notebook run one-for-all jupyterlab \
--name <notebook-name> \
--gpu <nb-gpus> \
--volume <container@region/prefix:mount_path:permission>
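For instance, assuming the container created earlier, a single GPU and the read & write permission (all values are illustrative):

ovhai notebook run one-for-all jupyterlab \
--name marine-sound-classification \
--gpu 1 \
--volume audio-classification-data@GRA/:/workspace/data:RW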
You can then reach your notebook’s URL once the notebook is running.
Find the notebook by following this path: ai-training-examples > notebooks > audio > audio-classification > notebook-marine-sound-classification.ipynb.
Once your dataset is ready and uploaded, you are able to train the model!
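To give an idea of what the notebook does, here is a minimal sketch of the approach: load the audio files with Librosa, convert them to log-mel spectrograms and train a small TensorFlow CNN on them. The folder layout (one sub-folder per species under /workspace/data), the clip duration and the model architecture are assumptions for illustration, not the exact code of the notebook.

import numpy as np
import librosa
import tensorflow as tf
from pathlib import Path

DATA_DIR = Path("/workspace/data")   # mounted Object Storage container (assumed: one sub-folder per species)
SAMPLE_RATE = 22050                  # resampling rate used by Librosa
DURATION = 3                         # seconds kept per clip
N_MELS = 128                         # number of mel bands

def to_log_mel(path):
    """Load an audio file and return a fixed-size log-mel spectrogram."""
    signal, _ = librosa.load(path, sr=SAMPLE_RATE, duration=DURATION)
    signal = np.pad(signal, (0, SAMPLE_RATE * DURATION - len(signal)))  # pad short clips
    mel = librosa.feature.melspectrogram(y=signal, sr=SAMPLE_RATE, n_mels=N_MELS)
    return librosa.power_to_db(mel, ref=np.max)

# Build the dataset: the label of a clip is the name of its parent folder
classes = sorted(p.name for p in DATA_DIR.iterdir() if p.is_dir())
X, y = [], []
for label, species in enumerate(classes):
    for wav in (DATA_DIR / species).glob("*.wav"):
        X.append(to_log_mel(wav))
        y.append(label)
X = np.array(X)[..., np.newaxis]   # add a channel dimension for the CNN
y = np.array(y)

# A small CNN classifier trained on the spectrograms
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=X.shape[1:]),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(len(classes), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=10)

# Save the trained model to the second mounted container
model.save("/workspace/saved_model/marine_mammal_classifier.h5")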
A preview of this notebook can be found on GitHub here.