The pure::variants Deployment for Kubernetes is the recommended way
to install and manage the pure::variants Services in Kubernetes. This
deployment reduces the complexity of installing the various components
required for a successful deployment and maintenance of pure::variants.
The Kubernetes deployment is based on the Helm Package Manager and
includes the same services and capabilities as the pure::variants
Deployment Templates for Docker. The Helm Chart can also be used for a
deployment into OpenShift. The pure::variants Helm Chart for Kubernetes is
included in the pure::variants Deployment Templates for Docker under the
name pv-chart-<version>.tgz.
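Before adapting any configuration you can unpack the chart and inspect its default parameters. A minimal sketch, keeping the <version> placeholder from above:
# Unpack the Helm chart into a local directory named after the chart
tar -xzf pv-chart-<version>.tgz
# Print the default configuration (values.yaml) shipped with the chart
helm show values ./pv-chart-<version>.tgz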
The Kubernetes deployment includes the pure::variants Web Client,
Transform Service and Model Server. All pure::variants related services
are distributed as a Helm chart which can be installed with the Helm
package manager. In contrast to the pure::variants Deployment Templates
for Docker there are no prepared template files; instead, there is a single
configuration file for the pure::variants Helm chart called values.yaml.
Internally, a number of additional containers orchestrated by Kubernetes communicate with each other over an internal, preconfigured network structure which is not exposed externally via ingress routes. Connections from internal services to external services not provided by the respective template are encrypted using TLS. All externally available services are encrypted with TLS by default.
In general, all prerequisites mentioned for the pure::variants Deployment Templates for Docker are still valid.
The deployment is verified and tested with Kubernetes v1.29.4 and OpenShift 4.13.41.
Helm Package Manager version 3.14.1 or later.
The pure::variants Docker Setup and the pure::variants Deployment for Kubernetes package from the pure::variants Updatesite.
A running pure::variants License Server and its URL, which must be accessible to containers running in the Kubernetes Cluster (see ???).
Decide how many concurrent transformations shall be possible. The default in the parameters file is to have one Transformation Runner enabled. It is recommended to start with this and add more Runners later if needed.
The email address and registration number from your pure::variants license. This can be retrieved from the pure::variants License Server's status page.
An X.509 certificate and key for the hostname and port under which the pure::variants services are exposed. The key file must not be password protected. Alternatively, you can directly provide a TLS secret in the desired namespace (see the sketch after this list).
All certificates of a non-public certification authority (CA) in PEM format with .crt suffix which will be presented to the pure::variants services (e.g.: License Server, Model Server, OpenID Connect Provider, 3rd party tools...).
Define the designated exposed hostname for the pure::variants services. The exposed port is set to 443. With this hostname users will later be able to access the pure::variants services.
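If you decide to provide the TLS secret yourself, it can be created with standard kubectl means before the installation. A minimal sketch, assuming the certificate and key files from above and a hypothetical secret name pv-tls (the name is later referenced via the PV_SECRET_NAME parameter):
# Create a TLS secret from the certificate/key pair in the target namespace
kubectl create secret tls pv-tls --cert=server.crt --key=server.key -n <namespace>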
The Web Client registration in the Single Sign-On provider must conform to the following specification.
Grant types: authorization code and refresh token
Redirect URL: https://pv.example.com/pvgw/auth/oidc/callback
Logout URL: https://pv.example.com/pvgw/ui/
Scopes: The default scopes are openid profile email.
Note: When using the Web Client in combination with the Jazz platform, the additional scope general is needed.
For global configuration management, introspection must be enabled and the introspecting relying parties must be added, with their Client ID and the needed permissions, to the user management of the pure::variants Model Server.
The Model Server registration in the Single Sign-On provider must conform to the following specification.
Grant types: authorization code
Redirect URL: https://localhost/pv/openid
Scopes: openid
Further instructions for the Single Sign-On Setup of the Model Server are described in ???
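Independent of the provider product, a quick way to check that the Single Sign-On provider is reachable and correctly set up is to fetch its OpenID Connect discovery document; the /.well-known/openid-configuration suffix is part of the OIDC standard. A sketch with a hypothetical provider URL:
# Fetch the OIDC discovery document of the provider (should return JSON)
curl https://sso.example.com/realms/pv/.well-known/openid-configuration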
The images referenced in the pure::variants Kubernetes Deployment need to be built first. For building the images the pure::variants Deployment Templates for Docker are used. The detailed build steps are mentioned in ???.
Please provide at least the following prerequisites before running the build process:
Set a version tag in the .env file for the images with the PV_VERSION variable.
Define the target container registry in the .env file with the variable DOCKER_REGISTRY.
Provide all needed root certificates within the folder <docker-deployment-templates>/workspace/rootcertificates.
Add the parameter PV_USERID=1000 anywhere in your .env file provided in the pure::variants Deployment Templates for Docker. This enables the creation of a user in the transformation runner image. (A combined .env sketch follows this list.)
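A minimal .env sketch covering the build prerequisites above; all values are examples and need to be adapted to your environment:
# Version tag for the built images
PV_VERSION=<version>
# Target container registry, with trailing slash
DOCKER_REGISTRY=registry.example.com/
# Enables the creation of a user in the transformation runner image
PV_USERID=1000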
Once the build is done, please run docker compose --profile build push to push the images to your container registry.
The values.yaml file contained in the extracted pure::variants Helm Chart provides all parameters to configure the pure::variants Deployment.
To keep the complexity manageable, not all available parameters are directly visible in the parameters file; the remaining ones default to meaningful values which match common customer needs.
The parameters file is divided into sections for each pure::variants service and one global section which applies to all services. These sections are represented in the following subchapters.
The global parameters are applied to all services installed with the pure::variants Deployment for Kubernetes.
PV_VERSION
Enter the pure::variants version label that you want to deploy. Technically this is used as the tag for the images which are stored in your container registry and deployed to the Kubernetes cluster.
PV_NAMESPACE
Enter the name of the namespace to which you want to deploy the pure::variants services.
Note: This namespace needs to be created manually by running
kubectl create ns NameOfNamespace
PV_SERVICE_ACCOUNT
Name of the service account used to access Kubernetes API to create/delete and inspect containers, persistent volume claims and network policies.
DOCKER_REGISTRY
Enter the address of the container registry. This address must have a trailing slash, e.g. registry.example.com/
IMAGE_PULL_SECRETS
If your underlying container runtime cannot directly pull the images from the container registry, you can provide an array of pull secrets to access the container registry.
Note: The mentioned pull secrets need to be created by you and are not part of the pure::variants Kubernetes deployment.
PV_REG_NUMBER
Enter your pure::variants license resp. registration number, which you can find, for instance, in your pure::variants license file. This number is used to access the floating licenses on the pure::variants License Server.
PV_LICENSE_SERVER
Enter the address of the pure::variants License Server, e.g. https://pvlicenses.example.com.
PV_EXPOSED_HOST
This option defines the base network address used to access the pure::variants services. Enter the full name of the host, e.g. pv.example.com.
RBAC_ENABLED
If set to true, a role and role binding for the service account are created automatically.
TZ
Replace the default time zone Europe/Berlin with your time zone. This information is used to let the containers have the correct time according to your location. Please see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones for the available time zones.
PV_INGRESS_CLASSNAME
Optional: You can set the name of the ingress class which shall be used for the ingress resources. If no name is provided, it defaults to the default ingress class.
PV_SECRET_NAME
Optional: If you prefer to provide a TLS secret yourself, you can specify the name of the secret in the namespace used to encrypt the communication to the pure::variants services.
PV_SSL_CERTIFICATE
Path to the server certificate used to encrypt the communication to the pure::variants services. Defaults to workspace/certificates/server.crt; therefore the server certificate must be placed at this location in the Helm chart.
PV_SSL_KEY
Path to the server certificate key used to encrypt the communication to the pure::variants services. Defaults to workspace/certificates/server.key; therefore the key file must be placed at this location in the Helm chart.
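Taken together, the global section of the values.yaml file could look like the following sketch. All values are examples and the exact nesting may differ in your chart version, so treat this as an orientation rather than a verbatim template:
global:
  PV_VERSION: "<version>"
  PV_NAMESPACE: purevariants
  PV_SERVICE_ACCOUNT: pv-service-account
  DOCKER_REGISTRY: registry.example.com/
  PV_REG_NUMBER: "<registration_number>"
  PV_LICENSE_SERVER: https://pvlicenses.example.com
  PV_EXPOSED_HOST: pv.example.com
  RBAC_ENABLED: true
  TZ: Europe/Berlin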
enabled
Parameter to enable or turn off the Web Client in this pure::variants Deployment for Kubernetes. Defaults to true and therefore the Web Client is part of this deployment.
PV_MODEL_SERVER
Enter the address of the pure::variants Model Server to use, e.g. https://pv.example.com/pv/.
Note: Can be kept empty if the Model Server is also deployed with this pure::variants Deployment for Kubernetes.
PV_WEB_CLIENT_SSO_GC_URI
Leave this option empty if you use neither Global Configurations nor OpenID Connect to authenticate. Otherwise enter the address of the Global Configuration service, e.g. https://jazz.example.com:9443/gc.
PV_WEB_CLIENT_HIDE_INACCESSIBLE_PROJECTS
If set to true, all projects which are not accessible to the user are completely hidden instead of greyed out in the projects overview. Defaults to false.
PV_WEB_CLIENT_LOGLEVEL
Change the log level of the Web Client to increase or lower the amount of logging information. Defaults to INFO. Available options are: INFO, DEBUG, TRACE.
PV_TRANSFORM_STORAGE_SIZE
Define the size of the requested persistent volume claim which stores all transformation job information. Defaults to 25Gi.
PV_RAM_MAX
Define the maximum heap memory which the Java process can acquire inside the Web Client container. The value is given in megabytes and defaults to 16389.
PV_WEB_CLIENT_RESOURCES
Please define how much CPU and Memory the POD should request and be limited to. The default values are commented out and therefore no resource definitions are set on POD level.
Note: If you define resource specifications, please make sure to align them with our hardware requirements, see ???
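Uncommented, a resource definition could look like the following sketch. The requests/limits structure is the standard Kubernetes form; the concrete values and the exact layout below PV_WEB_CLIENT_RESOURCES are assumptions that must be aligned with the hardware requirements referenced above:
PV_WEB_CLIENT_RESOURCES:
  requests:
    cpu: "2"
    memory: 8Gi
  limits:
    cpu: "4"
    memory: 16Gi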
PV_GATEWAY_SSO
Enable OIDC as an authentication method for the pure::variants Gateway. Enabling OIDC automatically disables the form-based authentication method.
PV_GATEWAY_SSO_NAME
Provide a label for the configured Single Sign-On provider. This name is used as the label of the OIDC button on the login page.
Note: Only needed if OpenID Connect login is enabled.
PV_GATEWAY_SSO_URL
Enter the well-known endpoint of the OpenID Connect provider without the trailing /.well-known/openid-configuration. Example: https://jazz.example.com:9643/oidc/endpoint/jazzop
Note: Only needed if OpenID Connect login is enabled.
PV_GATEWAY_SSO_CLIENT_ID
Enter the OpenID Connect client identifier for the pure::variants Web Client as defined at the OpenID Connect provider.
Note: Only needed if OpenID Connect login is enabled.
PV_GATEWAY_SSO_CLIENT_SECRET
Enter the OpenID Connect client secret for the pure::variants Web Client as defined at the OpenID Connect provider.
Note: Only needed if OpenID Connect login is enabled.
PV_GATEWAY_SSO_USERID_CLAIM
Optionally enter the claim of the OpenID Connect token which is used to identify the user ID. Options are idtoken:<claim> or userinfo:<claim>, where <claim> needs to be replaced with the name of the claim.
Note: Only needed if OpenID Connect login is enabled.
PV_GATEWAY_SSO_SCOPES
Change this option only if a different set of token scopes is needed. The default scopes requested are: openid profile email.
Note: Only needed if OpenID Connect login is enabled.
PV_GATEWAY_ADDITIONAL_ANNOTATIONS
The pure::variants Gateway is exposed via an ingress route. If your environment or ingress controller needs a specific annotation, please add it below this parameter as an array entry.
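Put together, an OIDC-enabled Gateway configuration in values.yaml could look like this sketch. All values are examples; the annotation shown is a hypothetical one for an NGINX ingress controller:
PV_GATEWAY_SSO: "true"
PV_GATEWAY_SSO_NAME: "Company SSO"
PV_GATEWAY_SSO_URL: https://jazz.example.com:9643/oidc/endpoint/jazzop
PV_GATEWAY_SSO_CLIENT_ID: pv-web-client
PV_GATEWAY_SSO_CLIENT_SECRET: changeit
PV_GATEWAY_ADDITIONAL_ANNOTATIONS:
  - nginx.ingress.kubernetes.io/proxy-body-size: "0"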
enabled
If set to true, the transformation runners, which handle transformations triggered in the Web Client, are also deployed with this pure::variants Deployment for Kubernetes.
PV_RUNNER_LOGLEVEL
Define the log level of the transformation runner to increase the logging output. Available options are 1 to 7; the default is 1.
PV_RUNNER_RESOURCES
Please define how much CPU and Memory the POD should request and be limited to. The default values are commented out and therefore no resource definitions are set on POD level.
The configured transformation runner authenticates against the Web Client to ensure no information is passed to unauthorized transformation runners.
To add more runners, please add more entries to the values.yaml parameter file.
runnercredentials:
  # to create multiple runners copy the following line like the example below:
  PV_RUNNER_CREDENTIALS_01: '{"id":"runner-01","apikey":"changeit"}'
  PV_RUNNER_CREDENTIALS_02: '{"id":"runner-02","apikey":"changeit"}'
The transformation executor is a container with a temporary lifespan. It is created on demand to execute the triggered transformation and is deleted once the transformation is finished.
PV_CPU_MIN
Define the requested amount of CPUs for the transformation POD.
PV_CPU_MAX
Define the limit of CPUs the transformation POD can obtain.
PV_RAM_MIN
Define the requested amount of Memory for the transformation POD.
PV_RAM_MAX
Define the limit of Memory the transformation POD can obtain.
Note: If none of the resource variables are set, the transformation executor will not set any resource definitions in its POD definition. If you choose to define resource constraints, please align them with our requirements, see Section 2.1, “pure::variants Desktop Client”
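A sketch of the four executor resource variables in the values.yaml file; the values shown are examples only and must be aligned with the requirements referenced in the note above:
PV_CPU_MIN: "1"
PV_CPU_MAX: "2"
PV_RAM_MIN: 4Gi
PV_RAM_MAX: 8Gi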
enabled
If set to true, the pure::variants Model Server is deployed with this pure::variants Deployment for Kubernetes.
PV_MODEL_SERVER_WEB_PASSWORD
Enter the password with which you want to protect the status web page of the pure::variants Model Server. Defaults to changeit.
PV_MODEL_SERVER_APIKEY
Enter the API key with which you want to protect the API endpoints of the pure::variants Model Server.
PV_MODEL_SERVER_SYSTEMUSER_PASSWORD
Enter the password with which you want to protect the pure::variants Model Server superuser system.
Note: This setting has no effect if an external database is connected to the Model Server.
PV_MODEL_SERVER_DB_PASSWORD
Enter the password with which you want to protect the local pure::variants Model Server PostgreSQL database or to connect to your external database.
PV_MODEL_SERVER_LOGLEVEL
Log level of the pure::variants Model Server, from 0 (just errors) to 9 (extensive logging). Defaults to level 1 for minimal logging.
PV_MODEL_SERVER_OPENID_CLIENT_ID
Client identifier issued to the pure::variants Model Server by the OpenID Connect provider.
Note: Only relevant if the Model Server shall be connected with the Single-Sign-On provider.
PV_MODEL_SERVER_OPENID_CLIENT_SECRET
Client secret assigned to the pure::variants Model Server by the OpenID Connect provider.
Note: Only relevant if the Model Server shall be connected with the Single-Sign-On provider.
PV_MODELSERVER_RESOURCES
Please define how much CPU and Memory the POD should request and be limited to. The default values are commented out and therefore no resource definitions are set on POD level.
Note: If you define resource specifications, please make sure to align them with our hardware requirements, see ???
If you want to connect the Model Server to an external database, please remove the # in front of the following parameters and provide a reasonable value. The Model Server provided with the pure::variants Deployment for Kubernetes shares the same database requirements as our standalone Model Server; please see ???. Please note that when using an external database, the internal PostgreSQL database needs to be disabled by changing the value of database.enabled to "false".
PV_MODEL_SERVER_DB_TYPE
Provide the type of database the Model Server shall connect to. Options are: PostgreSQL, MSSQL and Oracle.
PV_MODEL_SERVER_DB_HOST
Enter the hostname which provides access to the external database.
PV_MODEL_SERVER_DB_PORT
Enter the port with which the external database can be accessed.
PV_MODEL_SERVER_DB_NAME
Enter the name of the external database which has been initialized with the init SQL script of the Model Server.
PV_MODEL_SERVER_DB_USER
Enter the name of the technical user having access to the external database.
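Uncommented, an external database configuration could look like the following sketch; hostname, port, database name and user are examples (5432 is the standard PostgreSQL port):
PV_MODEL_SERVER_DB_TYPE: PostgreSQL
PV_MODEL_SERVER_DB_HOST: db.example.com
PV_MODEL_SERVER_DB_PORT: 5432
PV_MODEL_SERVER_DB_NAME: purevariants
PV_MODEL_SERVER_DB_USER: pvuser
Remember to also set database.enabled to "false" as described above.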
This section provides parameters to configure the pure::variants Deployment for Kubernetes internal PostgreSQL database for the Model Server.
Note: For productive usage we highly recommend using an external/managed database.
enabled
If set to true, the internal PostgreSQL database is deployed with this pure::variants Deployment for Kubernetes.
POSTGRES_SIZE
Provide the size which is requested by the persistent volume claim associated with the database StatefulSet. Defaults to 25Gi.
PV_DATABASE_RESOURCES
Please define how much CPU and Memory the POD should request and be limited to. The default values are commented out and therefore no resource definitions are set on POD level.
Once the configuration is done and your images are available on the container registry, you can proceed with installing your pure::variants Deployment for Kubernetes.
For an installation from scratch, please navigate into your Helm chart and run the following command, which will also create the desired namespace if it does not exist yet.
Note: Please make sure to replace <name_of_deployment> with the name under which helm should list your pure::variants Deployment for Kubernetes. Also replace <namespace> with the desired namespace in Kubernetes, which is defined in the parameter PV_NAMESPACE.
helm install <name_of_deployment> -n <namespace> --create-namespace .
If the command has been successful, your console output should show the following message:
NAME: <name_of_deployment>
LAST DEPLOYED: <Timestamp>
NAMESPACE: <namespace>
STATUS: deployed
REVISION: 1
TEST SUITE: None
Once the installation via helm is done you can verify the deployment by running:
kubectl get all -n <namespace>
You should see your configured Pods pulling their images and starting up. Once the Pod initialization is done, all Pods should have the status Running.
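To follow the startup live, the watch flag of kubectl can be used. A quick sketch:
# Watch the pods in the deployment namespace until all reach status Running
kubectl get pods -n <namespace> -w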
Validate Model Server Availability: Use a browser to navigate to the Model Server URL (make sure to have the slash at the end of the URL), e.g. https://pv.example.com:443/pv/ . You should now see the basic status page of the Model Server. Try to log in with the password defined in PV_MODEL_SERVER_WEB_PASSWORD.
Validate Model Server Access via Desktop Client: Follow the instructions given in the pure::variants Model Server Administration Manual using the Model Server URL (make sure to have the slash at the end of the URL), e.g. https://pv.example.com:443/pv/ . Create at least one pure::variants user with a name matching a valid Single Sign-On user as described in the manual.
Validate Web Client Availability: Use a browser to navigate to the Web Client URL (make sure to have the slash at the end of the URL), e.g. https://pv.example.com/pvweb/ . You should now see the login page of the Single Sign-On service. Try to log in with a valid user name and credentials from the previous step. Validate the number of available Transformation Runners by switching in the Web Client to the Transformation Dashboard and checking that the Idle count equals the number of configured Transformation Runners (default is 1).