Models

Model Management Overview

Data scientists need to train models repeatedly with various combinations of datasets, features, parameters, and so on, and to run experiments on those models; furthermore, they need to register and version the models that perform well according to the results. Nowadays, this is one part of MLOps.

To manage versioned models, PrimeHub integrates the well-known MLflow and provides a model management feature, Models, where data scientists can examine the performance of versioned/registered models and deploy a selected model directly as a service via Deployments on PrimeHub.

MLflow is required

A running MLflow instance is required and has to be configured with the relevant information in Group Settings; see Tutorial: Use MLflow Tracking. If the setting is missing or the instance cannot be reached, the page shows one of the following messages instead of the model list:

  • MLflow setting is not configured yet

  • MLflow instance is not reachable/running
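
Once the MLflow setting is configured, a notebook or job in the same group can point its own MLflow client at the same server. The following is a minimal sketch, assuming a hypothetical tracking URI; use the value configured as MLflow Tracking URI in the group's MLflow setting:

```python
import os
import mlflow

# Hypothetical tracking URI; replace with the group's MLflow Tracking URI.
tracking_uri = os.environ.get("MLFLOW_TRACKING_URI", "http://app-mlflow-xyz:5000")

mlflow.set_tracking_uri(tracking_uri)
print(mlflow.get_tracking_uri())
```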

Models

The page displays the registered models from the bound MLflow server.

If the page shows only a loading state, please double-check the MLflow Tracking URI configured in the MLflow setting of Group Settings.

  • MLflow UI button: navigates to the bound MLflow server in a new tab.

Once an experimental model is registered on MLflow, it is listed in Models on PrimeHub as well.
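
For illustration, here is a hedged sketch of registering a model from a notebook run; the model name iris-classifier is an assumption, not part of PrimeHub. After the run completes, the registered model appears under Models:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    # registered_model_name creates (or adds a new version to) an entry
    # in the MLflow Model Registry, which is what Models lists.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="iris-classifier")
```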

Versioned Model List

Clicking a model name navigates into the list of its versions; a sketch of reading the same list with the MLflow client follows the column descriptions below.

  • Version: Version number

  • Registered At: The registration date/time

  • Deployed By: The deployment name if the model is used for a deployment; click to navigate into the deployment detail page.

  • Parameters: Selected parameters of the model

  • Metrics: Selected metrics of the model

  • Deploy button: Deploys the selected versioned model.
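
As mentioned above, the same version list can also be read programmatically with the MLflow client. This is a minimal sketch, assuming a hypothetical registered model named iris-classifier:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()  # uses the configured tracking URI

# "iris-classifier" is a placeholder; use a model name listed in Models.
for mv in client.search_model_versions("name='iris-classifier'"):
    print(mv.version, mv.run_id, mv.current_stage)
```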

Parameters and Metrics

Click Columns and select which parameters and metrics to display as columns in the table.
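
Parameters and metrics only appear here if they were logged with the run. A short sketch of logging both (the parameter and metric names are illustrative):

```python
import mlflow

with mlflow.start_run():
    # Anything logged here can later be selected via the Columns control.
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 20)
    mlflow.log_metric("accuracy", 0.93)
    mlflow.log_metric("loss", 0.21)
```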

Versioned Model Detail

The page displays the following information regarding this version; a sketch of reading the same fields with the MLflow client follows the list.

  • Registered At

  • Last Modified

  • Source Run: linking to the run on MLflow

  • Parameters: if any

  • Metrics: if any

  • Tags: if any
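
The same details are available from the MLflow client. This sketch assumes a hypothetical model name and version; use values from the versioned model list:

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Placeholder name/version; replace with a model shown in Models.
mv = client.get_model_version(name="iris-classifier", version="2")
print(mv.creation_timestamp, mv.last_updated_timestamp)  # Registered At / Last Modified
print(mv.run_id)                                         # Source Run
print(mv.tags)                                           # Tags, if any
```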

Deploy Versioned Model

To deploy a specific versioned model, click the Deploy button of that version and select + Create new deployment or choose an existing deployment to update. This navigates to the Deployments page, where you can complete and submit the deployment with the mandatory information.

Deployed

On the deployment information page, the Model URI is presented as models:/<model_name>/<model_version>, e.g., models:/tensorflow-model/2; a sketch of resolving such a URI with MLflow follows the list below.

  • models:/: indicates the model is tracked by MLflow and deployed from Model Management

  • <model_name>: the name of the model

  • <model_version>: the version number of the model
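
A models:/ URI can also be resolved directly with MLflow, which is handy for checking a version locally before deploying it. This sketch reuses the example URI above and a toy input:

```python
import pandas as pd
import mlflow.pyfunc

# The example URI from above; the deployment resolves the same reference.
model = mlflow.pyfunc.load_model("models:/tensorflow-model/2")

# Toy input; the expected schema depends on the registered model.
sample = pd.DataFrame([[5.1, 3.5, 1.4, 0.2]])
print(model.predict(sample))
```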

The model that is used for a deployment shows the deployment name under the Deployed By column; clicking the deployment name navigates into the deployment detail page.

See Tutorial: Manage and Deploy a Model.
