Custom build the Seldon server
How to custom-build the Seldon server
Introduction
To deploy our registered model, we need to set up the container environment. In this step, we'll customize a pre-packaged model image to suit our needs, demonstrating how to modify, build, and deploy a custom image using PrimeHub Deployments.
Requirements
To follow the instructions in this section, you should have:
A Docker account
Familiarity with the command line
Python version 3 or above
An x86/64 CPU (Apple M1 currently not supported)
We will be using the screw-model pre-packaged server as a template.
Step-by-step Method
On your local computer, run the following commands to clone the model server repository:
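A minimal sketch of the clone step is shown below; the repository URL is a placeholder, so substitute the actual model-server repository URL from your PrimeHub documentation.

```bash
# Clone the model server repository (the URL below is a placeholder).
git clone https://github.com/<your-org>/deployment.git
```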
Check the contents of the deployment/ project:
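For example, assuming the cloned project directory is named deployment/ as above, list its contents to confirm the tensorflow2/ template is present:

```bash
cd deployment
ls
# Expect to see a tensorflow2/ directory containing the template server code.
```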
In a text editor, open the file ./tensorflow2/Model.py and modify the prediction logic.
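The snippet below is only a sketch of what a Seldon Python-wrapper model class typically looks like; the model path, preprocessing, and return value are illustrative assumptions, not the repository's actual code.

```python
# Model.py -- minimal sketch of a Seldon Python-wrapper model class.
import numpy as np
import tensorflow as tf

class Model:
    def __init__(self):
        # Load the trained model; the path is an illustrative assumption.
        self.model = tf.keras.models.load_model("model")

    def predict(self, X, features_names=None):
        # Custom prediction logic: e.g. scale pixel values before inference.
        X = np.asarray(X, dtype=np.float32) / 255.0
        return self.model.predict(X)
```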
After editing and saving Model.py, build the pre-packaged model image with the following command.
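A sketch of the build step, assuming the Dockerfile lives in the tensorflow2/ directory; the image name and tag (screw-classification:v0.1) are placeholders you can change:

```bash
cd tensorflow2
docker build -t screw-classification:v0.1 .
```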
Check that the image is listed by running:
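For example, filtering on the image name assumed in the build step above:

```bash
docker images | grep screw-classification
```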
The output should look similar to:
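The values below are purely illustrative; your image ID, creation time, and size will differ:

```
REPOSITORY              TAG     IMAGE ID       CREATED         SIZE
screw-classification    v0.1    1a2b3c4d5e6f   2 minutes ago   2.1GB
```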
Tag and Push to Docker Hub
Tag the image for your Docker registry with the screw-classification name, replacing <username> with your Docker username.
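A sketch of the tag command, assuming the local image was built as screw-classification:v0.1 in the earlier step:

```bash
docker tag screw-classification:v0.1 <username>/screw-classification:v0.1
```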
If you're not logged in to Docker yet, log in now:
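After logging in, push the tagged image; the image name follows the assumptions above:

```bash
docker login
docker push <username>/screw-classification:v0.1
```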
You can now see your image in the Docker Hub web UI.