Use TensorBoard inside a job

How to use TensorBoard inside an AI Training job

Last updated 10th of September, 2021.

Objective

The purpose of this tutorial is to show how to launch TensorBoard with AI Training.

TensorBoard is a tool made by the TensorFlow team that provides the measurements and visualizations needed during the machine learning workflow. It enables tracking experiment metrics like loss and accuracy, visualizing the model graph, projecting embeddings to a lower-dimensional space, and much more.

TensorBoard provides a visual interface.

The tutorial presents a simple example of launching TensorBoard in a job.

Requirements

- access to the ovhai CLI
- an Object Storage container containing your TensorBoard metric logs, or a running job writing them

Instructions

Have an object store container where your metric logs are saved

First of all, you must have trained your model and saved your results in an object store container (example: my_tf_metrics located in Gravelines, GRA).

Alternatively, you can have a job already RUNNING that is plugged into that object store container and is writing metric logs inside it (example: my_tf_metrics@GRA:/runs:RW:cache). In that case, don't forget the cache parameter, which indicates that the volume is cached and shareable among jobs. More information about volume configuration in jobs can be found here; information about volume caching can be found here.
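As an illustration, a training job writing its logs into that cached container might be launched as follows. This is a sketch: the Docker image tag, GPU count, and train.py script are placeholders, not part of this guide.

```shell
# Hypothetical example: a training job mounting the container read-write
# and cached, so a TensorBoard job can read the same volume concurrently.
# The image and train.py script are placeholders.
ovhai job run tensorflow/tensorflow:latest-gpu \
    --gpu 1 \
    --volume my_tf_metrics@GRA:/runs:RW:cache \
    -- python train.py --logdir /runs
```

The RW permission lets the training job write logs, while the TensorBoard job later mounts the same container RO.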

If you want to see an example of how to use TensorBoard to train a model, please refer to this notebook on GitHub.

Launch TensorBoard in a job

To launch TensorBoard in a job, you need to access the ovhai CLI and run this command:

ovhai job run tensorflow/tensorflow \
    --cpu 1 \
    --default-http-port 6006 \
    --volume my_tf_metrics@GRA:/runs:RO:cache \
    -- tensorboard --logdir=/runs --bind_all

First, set the number of CPUs. For this type of job you don't necessarily need a lot of resources.

--cpu 1 indicates that you request 1 CPU for that job.

The default port for TensorBoard is 6006.

--default-http-port 6006 indicates that the job URL should expose port 6006.

Connect the volume containing your tensorboard metric logs.

--volume my_tf_metrics@GRA:/runs:RO:cache indicates that you are connecting the container my_tf_metrics from the Gravelines (GRA) Object Store into the /runs directory of your job. The read-only RO permission is enough because TensorBoard does not need write access. The container my_tf_metrics@GRA should contain your TensorFlow metric logs.
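If your metric logs currently sit on your local machine, you may be able to push them into the container first with the CLI's data commands. This is a sketch assuming the `ovhai data upload` subcommand is available in your CLI version; ./logs is a placeholder for your local log directory.

```shell
# Hypothetical sketch: upload local TensorBoard logs into the
# my_tf_metrics container in the GRA region before launching the job.
ovhai data upload GRA my_tf_metrics ./logs
```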

Specify the tensorboard launch command.

tensorboard --logdir=/runs --bind_all indicates that we want TensorBoard to watch the /runs directory. Don't forget the --bind_all parameter, or you won't be able to access your TensorBoard from the public network.

Consider adding the --unsecure-http flag if you want your application to be reachable without any authentication.
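With that flag added, the full command from above could look like this:

```shell
# Same TensorBoard job as before, but reachable without authentication
# thanks to the --unsecure-http flag.
ovhai job run tensorflow/tensorflow \
    --cpu 1 \
    --default-http-port 6006 \
    --unsecure-http \
    --volume my_tf_metrics@GRA:/runs:RO:cache \
    -- tensorboard --logdir=/runs --bind_all
```

Only use this for non-sensitive dashboards, since anyone with the URL can then view your metrics.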

Once the job is running, you can access your TensorBoard directly from the job's URL.
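The URL can be read back from the CLI, for instance as follows. The job ID below is a placeholder; the exact output layout depends on your CLI version.

```shell
# List your jobs to find the TensorBoard job's ID, then inspect it;
# the job description includes its HTTP access URL.
ovhai job list

# JOB_ID is a placeholder for the ID printed by `ovhai job run` or `ovhai job list`.
JOB_ID="00000000-0000-0000-0000-000000000000"
ovhai job get "$JOB_ID"
```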
