Manage and Deploy Model
Tutorial
Last updated
In this tutorial, we will use the MNIST model in TensorFlow 2 as an example to show how to train, manage, and deploy a model.
Remember to follow the configuration to enable model management and model deployment in your group; contact your admin if they are not enabled yet.
Before we start, please have the following ready on PrimeHub, or request assistance from an administrator:
- The TensorFlow 2.4 image infuseai/docker-stacks:tensorflow-notebook-v2-4-1-dbdcead1.
- An instance type meeting the minimal requirement (CPU: 1, GPU: 0, Memory: 2 G).
- The prepared notebook file of the example: download model_management_tutorial.ipynb. It is based on TensorFlow 2 quickstart for beginners, with an added cell that enables the MLflow autologging API.
- A group with Shared Volume (a.k.a. Group Volume) enabled.
Enter Notebooks from the User Portal, select the image and the instance type, and start a notebook.
Inside the group volume, copy or drag the downloaded model_management_tutorial.ipynb into the File Browser. Then open it and Run All Cells.
Enter Models, then click the MLflow UI button.
In the MLflow UI, we will see a newly completed run under the Default experiment. Click on this run.
On the run information page, scroll down to the Artifacts section. Click on the exported model, then click the Register Model button.
We can register a new model or add a version to an existing model. Here we choose Create New Model and fill in the Model Name field with tensorflow-mnist-model. Click the Register button to complete the model registration.
Enter Models; we will see the registered model in the model list. Click on our model, tensorflow-mnist-model.
On the model detail page, we can find all registered versions of the model. Click the Deploy button of Version 1.
We can deploy the selected model version as a new deployment or update an existing deployment. Here we choose Create new deployment and click the OK button.
We will be directed to the create deployment page. Fill in the Deployment Name field with tensorflow-mnist. In the Model Image field, select TensorFlow2 server; this is a pre-packaged model server image that can serve an MLflow-autologged TensorFlow model.
The Model URI field is auto-filled with the registered-model URI scheme.
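For reference, MLflow's registered-model URI scheme has the form models:/&lt;model name&gt;/&lt;version&gt;; given the registration above, the auto-filled value would look like the following (assuming Version 1 of our model):

```
models:/tensorflow-mnist-model/1
```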
In the Resources section:
- choose the instance type; here we use the one with the configuration (CPU: 0.5 / Memory: 1 G / GPU: 0)
- leave Replicas as the default (1)
Click the Deploy button; we will be redirected to the model deployment list page. Wait a while, then click the Refresh button to check whether our model has been deployed.
When the deployment has succeeded, we can click on its entry to check its details.
We can view detailed information on the detail page; now let's test our deployed model! Copy the endpoint URL and substitute it for ${YOUR_ENDPOINT_URL} when sending a request; we send a tensor as the request data.
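As an illustrative sketch, the request can also be sent from Python instead of the terminal. This assumes a Seldon-style data.tensor JSON payload and a blank 28x28 image as sample input; the exact endpoint URL, input shape, and response format come from your own deployment.

```python
# Sketch: send a tensor payload to the deployed model endpoint.
# ENDPOINT_URL, the payload layout, and the input shape are assumptions
# for illustration; check your deployment detail page for the real values.
import json
import urllib.request

ENDPOINT_URL = "${YOUR_ENDPOINT_URL}"  # replace with the copied endpoint URL


def build_request(pixels, shape):
    """Wrap flattened image pixels in a Seldon-style tensor payload."""
    return {"data": {"tensor": {"shape": shape, "values": pixels}}}


def predict(url, payload):
    """POST the JSON payload to the endpoint and return the parsed response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example request data: one blank 28x28 image (all zeros).
payload = build_request([0.0] * 784, [1, 28, 28])
# print(predict(ENDPOINT_URL, payload))  # uncomment after replacing the URL
```

The response is a JSON document containing the model's predicted class scores for the submitted tensor.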
Congratulations! We have versioned our trained model and deployed it as an endpoint service that can respond to requests anytime, from anywhere.
For the complete introduction to the model management feature, see Model Management.
For the reference and limitations of the MLflow autologging API in TensorFlow, see MLflow autologging.