3 - Compare, Register and Deploy the Model

Introduction

In the previous part, we used PrimeHub Notebook to train and manage the base model. In this part, we will:

  1. Compare the results of the two experiments.

  2. Register the best-performing experiment in PrimeHub Models.

  3. Deploy the registered model as an online API service via PrimeHub's Model Deployment feature.

  4. Use the command line to test the API service.

Prerequisites

1. Enable Model Deployment

Before we get started, enable the Model Deployment toggle in your group settings.

2. Model pre-packaged docker image

There are two methods to get your pre-packaged docker image:

  1. You can use our model pre-packaged docker image:

    Image Name: infuseaidev/tensorflow2-prepackaged:screw-classification

  2. You can build your own pre-packaged docker image by following the advanced guide.

Step 1: Compare and register the model

1. Compare the runs in the MLflow server

To compare the results, click the checkbox next to each run to select them, then click the Compare button.

→ MLflow server UI → Choose two runs → Click Compare.
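
If you prefer to compare runs programmatically rather than in the UI, here is a minimal sketch using the MLflow Python client; the tracking URI, experiment name, and metric name are placeholders and assumptions, so substitute the values used in the previous part:

import mlflow

# Point the client at the group's MLflow tracking server (placeholder URI).
mlflow.set_tracking_uri("http://<your-mlflow-server>")

# Fetch the runs of the experiment as a pandas DataFrame,
# ordered by a validation metric (metric name is an assumption).
runs = mlflow.search_runs(
    experiment_names=["<your-experiment-name>"],
    order_by=["metrics.val_accuracy DESC"],
)
print(runs[["run_id", "metrics.val_accuracy"]])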

2. Choose the best run and register the model

Check the Metrics section to find the run with the best score.

At the top of the page, click the Run ID link of that run.

In the Artifacts section, register the model folder to the model registry.

Enter the model information to register the model.
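
Registration can also be done with the MLflow API instead of the UI. A minimal sketch, assuming the chosen run's ID is at hand and its artifact folder is named model (the tracking URI is a placeholder):

import mlflow

# Placeholder URI of the group's MLflow tracking server.
mlflow.set_tracking_uri("http://<your-mlflow-server>")

# Register the model folder of the chosen run under the name used in this tutorial.
result = mlflow.register_model(
    model_uri="runs:/<best-run-id>/model",
    name="tf-screw-model",
)
print(result.name, result.version)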

Step 2: Deploy and test the model service

1. Model Deployment

On the Models page of the PrimeHub user portal, click the managed model named tf-screw-model.

The following page shows all versions of the tf-screw-model model. Click the Deploy button for Version 1.

In the Deploy Model dialog, select Create new deployment from the Deployment dropdown and then click OK.

Fill in the Deployment information:

Variable        | Value
Deployment Name | tf-screw-deployment
Model image     | infuseai/tensorflow2-prepackaged:screw-classification
InstanceTypes   | CPU 1

Click the Deploy button, and you will be redirected to the Deployments page.

To view the details of the deployment, click the tf-screw-deployment card.
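
You can also inspect deployments from code with the PrimeHub Python SDK (see the Python SDK section of this guide). A minimal sketch, assuming the SDK is installed and configured with a valid API token:

from primehub import PrimeHub, PrimeHubConfig

# Connect using the locally configured PrimeHub credentials.
ph = PrimeHub(PrimeHubConfig())

# List the deployments of the current group;
# tf-screw-deployment should appear here once created.
for deployment in ph.deployments.list():
    print(deployment)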

2. Test that the model service is available

Run the following command:

curl -F 'binData=@data/arrange/val/good/000.png' <primehub-deploy-url>

You should get output similar to the following:

{"data":{"names":["t:0"],"tensor":{"shape":[1,1],"values":[2.065972089767456]}},"meta":{"requestPath":{"model":"infuseai/tensorflow2-prepackaged:screw-classification"}}}

Conclusion

In this tutorial, we compared and selected a suitable model using MLflow, deployed a model, and tested the model with sample data.

Next Section

In the next article, we will add a web application interface to our model using Streamlit, another app available through PrimeHub Apps.

If you want to build your own custom deployment logic and environment, you can find the information in the advanced tutorial section.