Models
Learn the concept behind serving engine models
Models in an ML Serving project are machine learning models exposed as HTTP API endpoints.
There are two kinds of models:
A preset model is a model that has already been built and published by the OVHcloud administrators of the ML Serving platform, and is available for deployment on the fly.
A serialized model is a model loaded from a file in one of the supported formats.
Currently supported formats are:
Instructions about how to export models can be found here:
Each model deployed inside an ML Serving namespace is actually a Docker container, built and pushed into the linked Docker registry and then started inside the Kubernetes namespace.
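Once deployed, a model is reached like any other HTTP API. As an illustrative sketch only, querying such an endpoint from Python might look like the following; the URL and the `"instances"` payload schema are hypothetical placeholders, not the platform's actual contract (the real endpoint URL and expected input format are shown for each model in your ML Serving namespace):

```python
import json
from urllib import request

# Hypothetical endpoint URL: substitute the URL displayed for your
# deployed model in your ML Serving namespace.
ENDPOINT = "https://my-namespace.example.com/my-model/eval"

def build_payload(features):
    """Serialize a feature vector into a JSON request body.

    The {"instances": [...]} shape is an assumption for this sketch;
    check your model's documented input schema.
    """
    return json.dumps({"instances": [features]}).encode("utf-8")

def predict(features):
    """POST the features to the deployed model and return the parsed reply."""
    req = request.Request(
        ENDPOINT,
        data=build_payload(features),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Example call (requires a live endpoint):
# predict([5.1, 3.5, 1.4, 0.2])
```

Because the model runs as a plain HTTP service, any HTTP client (curl, a browser, another service) can query it the same way.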