CRDs
The PrimeHub data model mentions that when an instance type, image, or volume is created via the Admin UI, under the hood a CRD object is created in Kubernetes and a Realm Role is created in Keycloak. This document describes what these CRDs are created for and what they contain.
CRD stands for CustomResourceDefinition. PrimeHub uses the custom resource mechanism to manage structured data (custom objects) stored in Kubernetes. There are three of them: Instance Type, Image, and Volume.
For more details on CRDs, please refer to Extend the Kubernetes API with CustomResourceDefinitions.
Instance Type
Basic
An Instance Type object contains the following settings.
You can use the following commands to view the stored data.
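A minimal sketch of such commands, assuming PrimeHub is installed in the hub namespace and that instance types are stored as instancetypes.primehub.io custom resources (cpu-small is a placeholder name):

```bash
# List all instance types managed by PrimeHub (the "hub" namespace is an assumption)
kubectl -n hub get instancetypes.primehub.io

# Dump the stored data of one instance type in YAML
kubectl -n hub get instancetypes.primehub.io cpu-small -o yaml
```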
The structured data (including Node Selector and Tolerations, if any) of an instance type object is displayed in YAML format as below.
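A minimal sketch of such an object, assuming the API group/version primehub.io/v1alpha1 and the hub namespace; the spec field names follow the list below, while the name and values are placeholders:

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: InstanceType
metadata:
  name: cpu-small                     # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
spec:
  displayName: CPU Small
  description: 1 vCPU / 2G memory
  limits.cpu: 1
  limits.memory: 2G
  limits.nvidia.com/gpu: 0
  requests.cpu: 1
  requests.memory: 2G
```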
Spec:
displayName: Display Name
description: Description
limits.cpu: CPU Limit
limits.memory: Memory Limit
limits.nvidia.com/gpu: GPU Limit
requests.cpu: CPU Request
requests.memory: Memory Request
Toleration
When a node is marked with a taint, it does not accept any pods that do not tolerate the taint. Tolerations are applied to pods so that they are allowed to schedule onto nodes with matching taints. Please refer to Taints and Tolerations for more detail.
On the Admin UI, a toleration specifies a tolerable taint (a key-value pair) together with an effect to take. When we add a toleration to tolerate a specific taint with an effect to take, the data is stored as below.
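A minimal sketch of the stored tolerations, assuming they sit under spec of the instance type object shown above (the taint key, value, and effect are placeholders):

```yaml
spec:
  tolerations:
    - key: nvidia.com/gpu      # placeholder taint key
      operator: Equal
      value: "true"            # required because operator is Equal
      effect: NoSchedule
```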
Toleration settings:
effect: NoSchedule, NoExecute, PreferNoSchedule and None.
key: The key of a taint.
operator: Exists or Equal (maps to tolerations[].operator).
value: The value of a taint; required when operator is Equal.
NodeSelector
Pods can be constrained to run only on (or to prefer) particular nodes that are labeled with matching key-value pairs. When we add a nodeSelector with memory/low via the Admin UI, the data is stored as below.
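A minimal sketch of the stored node selector, assuming it sits under spec of the instance type object (memory/low is the key-value pair from the example above):

```yaml
spec:
  nodeSelector:
    memory: low    # pods run only on nodes labeled memory=low
```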
Node Selector settings:
key: The key of a label.
value: The value of a label.
Image
Basic
An Image object contains the following settings.
You can use the following commands to view the stored data.
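A minimal sketch of such commands, assuming images are stored as images.primehub.io custom resources in the hub namespace (tf-notebook is a placeholder name):

```bash
# List all images managed by PrimeHub
kubectl -n hub get images.primehub.io

# Dump the stored data of one image in YAML
kubectl -n hub get images.primehub.io tf-notebook -o yaml
```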
The structured data (including Pull Secret, if any) of an image object is displayed in YAML format as below.
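A minimal sketch of such an object under the same apiVersion/namespace assumptions as above; the registry URL and secret name are placeholders:

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: Image
metadata:
  name: tf-notebook                   # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
spec:
  displayName: TensorFlow Notebook
  description: TensorFlow notebook image
  url: registry.example.com/tf-notebook:latest
  pullSecret: my-registry-secret      # Secret added via Admin UI, used to pull the image
```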
Spec:
displayName: The display name of an image on UI.
description: Description.
url: The registry URL where an image is located.
pullSecret: The name of a Secret added via the Admin UI. If required, the secret is used to pull the image.
Volume
Basic
A Volume object contains the following settings.
You can use the following commands to view the stored data.
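A minimal sketch of such commands. Judging from the dataset.primehub.io annotations below, volumes are assumed to be stored as datasets.primehub.io custom resources in the hub namespace (test is a placeholder name):

```bash
# List all volumes (assumed to be stored as Dataset custom resources) managed by PrimeHub
kubectl -n hub get datasets.primehub.io

# Dump the stored data of one volume in YAML
kubectl -n hub get datasets.primehub.io test -o yaml
```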
The structured data (including launchGroupOnly, if true) of a pv-type volume object is displayed in YAML format as below.
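A minimal sketch of such an object, assuming volumes are backed by a Dataset custom resource with API group/version primehub.io/v1alpha1 in the hub namespace; the annotation and spec field names follow the lists below, while the values are placeholders:

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: Dataset                         # assumed backing kind for volumes
metadata:
  name: test                          # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
  annotations:
    dataset.primehub.io/launchGroupOnly: "true"
spec:
  displayName: test
  description: a pv-type volume
  type: pv
  volumeName: test                    # see Type pv below
```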
Currently, there are five types of volumes: pv, git, nfs, hostPath, and env. All types have the following data fields in common; in addition, each type has its own data fields. They are described respectively in the following sections.
Annotations:
dataset.primehub.io/mountRoot: A path of the mount root.
dataset.primehub.io/launchGroupOnly: The volume can only be selected in a launch group if true.
dataset.primehub.io/homeSymlink (hidden from UI): A flag for making a symlink in users' home directories if true.
Spec:
displayName: The display name on UI.
description: The description.
type: pv, git, nfs, hostPath or env.
Type pv
A pv (persistent volume) volume has a data field, volumeName. The container mount point of the volume varies with the combination of volumeName and the annotations mountRoot and homeSymlink. PV provisioning is automatic by default; there is an option that lets administrators configure the underlying settings manually.
Pv with auto provisioning
Container Mount Point: /datasets/test

Pv with manual provisioning
Container Mount Point: /datasets/test

Pv with homeSymlink
Container Mount Point: /datasets/test
Symlinks: ln -s /dataset/test ~/test

Pv with mountRoot
Container Mount Point: /foo/bar/test

Pv with mountRoot and homeSymlink specified
Container Mount Point: /foo/bar/test
Symlinks: ln -s /foo/bar/test ~/test
Type git
A git data volume has a data field, Url, which points to a git repo, and a data field, Secret, which is added via the Admin UI if a secret is required to pull the data volume from the repo.
The container mount point of the data volume varies with the combination of the annotations mountRoot and homeSymlink.
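A minimal sketch of a git-type volume object under the same assumptions as above (the repo URL is a placeholder; the annotation defaults follow the list below):

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: Dataset                         # assumed backing kind for volumes
metadata:
  name: myrepo                        # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
  annotations:
    dataset.primehub.io/primehub-gitsync: "true"
spec:
  displayName: myrepo
  type: git
  url: https://github.com/example/myrepo.git   # placeholder repo URL
```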
Annotations:
dataset.primehub.io/primehub-gitsync: true by default.
dataset.primehub.io/gitSyncHostRoot (hidden from UI): The host path to put the gitsync result; /home/dataset by default.
dataset.primehub.io/gitSyncRoot (hidden from UI): The path to mount the gitsync data volume; /gitsync by default.
Spec:
url: The URL of a repo.
Gitsync data volume with secret
Spec:
gitsync.secret: A secret is used for pulling a data volume from the repo.
Container Mount Point: /gitsync/myrepo
Symlinks: ln -s /gitsync/myrepo/myrepo /dataset/myrepo
Gitsync data volume with homeSymlink
Container Mount Point: /gitsync/myrepo
Symlinks:
ln -s /gitsync/myrepo/myrepo /dataset/myrepo
ln -s /dataset/myrepo ~/myrepo
Gitsync data volume with mountRoot
Container Mount Point: /gitsync/myrepo
Symlinks: ln -s /gitsync/myrepo/myrepo /foo/bar/myrepo
Gitsync data volume with homeSymlink and mountRoot
Container Mount Point: /gitsync/myrepo
Symlinks:
ln -s /gitsync/myrepo/myrepo /foo/bar/myrepo
ln -s /foo/bar/myrepo ~/myrepo
Type nfs
An nfs data volume has the additional data fields server and path, which set the NFS IP/domain and the NFS path. The mount point logic is the same as for a pv data volume.
Nfs data volume example
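A minimal sketch under the same assumptions as above; whether server and path sit under a nested nfs block is an assumption, and the server/path values are placeholders:

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: Dataset                         # assumed backing kind for volumes
metadata:
  name: nfs-data                      # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
spec:
  displayName: nfs-data
  type: nfs
  nfs:                                # nesting under "nfs" is an assumption
    server: 10.0.0.10                 # NFS IP/domain (placeholder)
    path: /exports/data               # NFS path (placeholder)
```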
Type hostPath
A hostPath data volume has an additional data field, path, which sets the path on the host. The mount point logic is the same as for a pv data volume.
HostPath data volume example
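A minimal sketch under the same assumptions as above; whether path sits under a nested hostPath block is an assumption, and the path value is a placeholder:

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: Dataset                         # assumed backing kind for volumes
metadata:
  name: host-data                     # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
spec:
  displayName: host-data
  type: hostPath
  hostPath:                           # nesting under "hostPath" is an assumption
    path: /data/shared                # path on the host (placeholder)
```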
Type env
An env data volume provides environment variables as a data volume.
Spec:
variables: Variables in key/value pairs.
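A minimal sketch under the same assumptions as above; the variable names and values are placeholders:

```yaml
apiVersion: primehub.io/v1alpha1      # assumed API group/version
kind: Dataset                         # assumed backing kind for volumes
metadata:
  name: env-data                      # placeholder name
  namespace: hub                      # assumed PrimeHub namespace
spec:
  displayName: env-data
  type: env
  variables:                          # key/value pairs exposed as environment variables
    MY_API_URL: https://example.com
    MY_TOKEN: changeme
```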