This document shows the best practice of reusing a base image and building the model image on top of it. Using a base image has two benefits:
- If models are loaded and used in the same way, they can share the same base image.
- If you only need to update the model file, a base image speeds up the build process.
The idea is that you write general model-serving code that assumes the model file is placed under a certain path. Once a model is ready, use docker build to generate the model image from the base image together with the model files.
There are two methods to prepare the base image:
- Build the base image with a Language Wrapper
- Use a pre-packaged server as the base image
Prerequisites
- Docker
Build the Base Image with a Language Wrapper
Here, we use TensorFlow 2 as a simple showcase. The code is in the repo.
Build the Base Image
Write the general model-serving code in Model.py.
import tensorflow as tf


class Model:
    def __init__(self):
        self.loaded = False

    def load(self):
        # Load the SavedModel from the pre-defined path "model"
        self._model = tf.keras.models.load_model('model')
        self.loaded = True

    def predict(self, X, feature_names=None, meta=None):
        # Load the model lazily on the first prediction request
        if not self.loaded:
            self.load()
        output = self._model.predict(X)
        return output
Create a requirements.txt file listing all required packages.
seldon-core==1.6.0
tensorflow==2.3.1
Create a Dockerfile with the following content.
FROM python:3.7-slim
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 9000
# Define environment variables used by the Seldon Core microservice
ENV MODEL_NAME Model
ENV SERVICE_TYPE MODEL
ENV PERSISTENCE 0
CMD exec seldon-core-microservice $MODEL_NAME --service-type $SERVICE_TYPE --persistence $PERSISTENCE --access-log
Build the base image.
docker build . -t tensorflow2-prepackage
Build the Model Image
Based on the previous base image, whenever you have a model saved by
model.save(export_path)
you can use this base image to build your model deployment image.
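For reference, here is a minimal sketch of how such a SavedModel might be produced with Keras; the layer sizes, training data, and export_path below are illustrative placeholders, not part of this guide's pipeline:

import tensorflow as tf

# A toy model purely for illustration; substitute your real architecture and data
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Train on your own data here, e.g.:
# model.fit(X_train, y_train, epochs=10)

# Save in TensorFlow's SavedModel format
export_path = 'my_model'
model.save(export_path)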
First, create a Dockerfile.
FROM tensorflow2-prepackage
COPY export_path model
(Replace export_path with your actual path.)
This copies your model files into the model path that was pre-defined in the base image code.
Then, you can build the model deployment image.
docker build . -t tensorflow2-prepackage-model
Verify the Model Image
To verify the image, you can run it, mapping the REST port (9000) that the base image exposes.
docker run -p 9000:9000 --rm tensorflow2-prepackage-model
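Once the container is up, you can send a test request to the Seldon Core REST endpoint. Below is a minimal sketch using the requests package; the payload shape is a placeholder and must match your model's expected input:

import requests

# The Seldon Core python wrapper serves predictions at this endpoint
url = 'http://localhost:9000/api/v1.0/predictions'

# Placeholder payload: one row with four features; adjust to your model's input shape
payload = {'data': {'ndarray': [[0.1, 0.2, 0.3, 0.4]]}}

response = requests.post(url, json=payload)
print(response.json())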