This doc shows how to package a model into a Docker image in the format required by the PrimeHub model deployment feature.
The PrimeHub model deployment feature is based on Seldon. This doc draws on the official Seldon documentation and other resources, which are listed at the end.
Create a MyModel.py file with the following example template.
```python
class MyModel(object):
    """
    Model template. You can load your model parameters in __init__ from a
    location accessible at runtime.
    """

    def __init__(self):
        """
        Add any initialization parameters. These will be passed at runtime
        from the graph definition parameters defined in your seldondeployment
        kubernetes resource manifest.
        """
        print("Initializing")

    def predict(self, X, features_names=None):
        """
        Return a prediction.

        Parameters
        ----------
        X : array-like
        feature_names : array of feature names (optional)
        """
        print("Predict called - will run identity function")
        return X
```
- The file and class name MyModel should be the same as MODEL_NAME in the Dockerfile.
- Load or initialize your model in the __init__ method.
- The predict method takes a numpy array X and an optional list of string feature_names, then returns an array of predictions (the returned array should be at least 2-dimensional).
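As a quick local sanity check (a sketch; the IdentityModel class below is a hypothetical stand-in for your own MyModel, and assumes numpy is installed), you can instantiate the wrapper class and call predict directly before building any image:

```python
import numpy as np

# A minimal identity model following the same wrapper template
# (a hypothetical stand-in for the MyModel class above).
class IdentityModel(object):
    def __init__(self):
        print("Initializing")

    def predict(self, X, features_names=None):
        # Ensure the returned array is at least 2-dimensional,
        # as the wrapper format expects.
        return np.atleast_2d(X)

model = IdentityModel()
prediction = model.predict(np.array([1.0, 2.0, 3.0]))
print(prediction.shape)  # (1, 3)
```

Note that np.atleast_2d turns a 1-D input of shape (3,) into shape (1, 3), satisfying the 2-dimensional return requirement.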
For more detailed information on how to write the Python file for model deployment in different frameworks, please refer to the section Example Codes for Different Frameworks.
Build the Image
Make sure you are in the folder that contains requirements.txt, the Dockerfile, the Python file for model deployment, and the model file.
Execute the following command to install the environment and package the model file into the target image my-model-image.
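For reference, a typical build context looks like the following. The Dockerfile is a sketch based on the common Seldon Python wrapper convention; the base image tag and exposed port are assumptions, so adjust them to your environment:

```dockerfile
FROM python:3.8-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 9000

# MODEL_NAME must match the file and class name (MyModel)
ENV MODEL_NAME MyModel
ENV SERVICE_TYPE MODEL

CMD exec seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE
```

With the Dockerfile in place, build the image from that folder:

```shell
docker build -t my-model-image .
```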
You have successfully built the Docker image for PrimeHub model deployment.
Push the Image
Next, push the image to Docker Hub (or another Docker registry), then follow the PrimeHub tutorial to serve the model in PrimeHub.
Tag your docker image.
```shell
docker tag my-model-image test-repo/my-model-image
```
Then push to docker registry.
```shell
docker push test-repo/my-model-image
```
(Optional) Example Codes for Different Frameworks
Here are some Python snippets showing how to export a model file, then load it and run predictions in another file. By following the Python wrapper format, PrimeHub can serve models from various popular ML frameworks.
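For instance, the general pattern looks like this (a sketch using only pickle and numpy rather than any specific framework; the file name model.pkl and the coefficient "model" are illustrative): one script exports the model to a file, and the wrapper class loads that file in __init__ and uses it in predict:

```python
import pickle
import numpy as np

# --- Export side (e.g. a training script): save the fitted model ---
# The "model" here is just a coefficient vector, standing in for a real
# framework object such as a scikit-learn estimator.
coefficients = np.array([0.5, -1.0, 2.0])
with open("model.pkl", "wb") as f:
    pickle.dump(coefficients, f)

# --- Serving side (e.g. MyModel.py): load the file at initialization ---
class MyModel(object):
    def __init__(self):
        with open("model.pkl", "rb") as f:
            self.coef = pickle.load(f)

    def predict(self, X, features_names=None):
        # Return an at-least-2-dimensional array, as the wrapper expects.
        return np.atleast_2d(np.dot(X, self.coef))

model = MyModel()
print(model.predict(np.array([[1.0, 1.0, 1.0]])))  # [[1.5]]
```

Framework-specific versions follow the same shape: only the export call (e.g. a framework's own save function) and the loading code in __init__ change.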