Installing the Core Components

To install the Aizen core components, follow these steps:

  1. Create the namespace aizen for the Aizen core components by running this command:

    kubectl create ns aizen
  2. Using the Docker Hub credentials that you obtained for the Aizen microservice images (see Getting Credentials for the Aizen Microservice Images), create a Kubernetes Secret for accessing the Aizen images in the core namespace:

    kubectl create secret docker-registry aizenrepo-creds \
    --docker-username=aizencorp \
    --docker-password=<YOUR DOCKER CREDENTIALS> \
    -n aizen
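To confirm that the Secret was created correctly, you can inspect its type (a hedged check; `aizen` and `aizenrepo-creds` are the names used in the steps above, and the `kubectl` guard simply skips the check where the CLI is unavailable):

```shell
# Print the Secret's type; docker-registry Secrets report kubernetes.io/dockerconfigjson
SECRET_NAME="aizenrepo-creds"
NAMESPACE="aizen"
if command -v kubectl >/dev/null; then
  kubectl -n "$NAMESPACE" get secret "$SECRET_NAME" -o jsonpath='{.type}'
fi
```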
  3. Customize the settings (see the Default Core Configuration Settings below) in the Aizen core Helm chart based on your environment and preferred configuration:

    • Update the STORAGE_CLASS, INGRESS_HOST, BUCKET_NAME, CLOUD_ENDPOINT_URL, CLOUD_ACCESSKEY_ID, CLOUD_SECRET_KEY, IMAGE_REPO_SECRET, MLFLOW_ACCESSKEY_ID, MLFLOW_SECRET_KEY, and MLFLOW_ENDPOINT_URL values.

    • For the MLFLOW_ENDPOINT_URL, use either an S3 bucket or a local MinIO bucket; in both cases, the bucket must already exist. If you are using local MinIO as an object store, set up MinIO first and then create the buckets. See MinIO.
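If you take the local MinIO route, bucket creation might look like this with the MinIO client `mc` (a sketch only; the alias, endpoint, credentials, and bucket name are placeholders for your own values, and the guard skips the commands where `mc` is not installed):

```shell
# Placeholder connection details -- substitute your MinIO endpoint and credentials
MINIO_ENDPOINT="http://minio.example.local:9000"
MINIO_ACCESS_KEY="minioadmin"
MINIO_SECRET_KEY="minioadmin"
BUCKET="aizen-bucket"
if command -v mc >/dev/null; then
  # Register the endpoint under an alias, then create the bucket
  mc alias set localminio "$MINIO_ENDPOINT" "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY"
  mc mb "localminio/$BUCKET"
fi
```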

    • Change the default persistence size, which is hardcoded in the deployments of the various components, to suit your environment.

    • For an Azure AKS cluster, add these properties to both the infrastructure and core deployments:

      global.s3.azure.enabled=true
      global.s3.azure.values.storage_account_name
      global.s3.azure.values.storage_access_key
      global.s3.azure.values.storage_connection_string
      global.mlflow.artifact.secrets.values.mlflow_endpoint_url=https://<your storage account name>.blob.core.windows.net
      global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=wasbs://<storage container name>@<storage account name>.blob.core.windows.net/<destination folder>
    • For a cloud-based deployment, add these properties to both infrastructure and core deployments:

      infra.hashicorp-vault.vault.server.hostAliases[0].ip="$CLOUD_ENDPOINT_IP",\
      infra.hashicorp-vault.vault.server.hostAliases[0].hostnames[0]="<specify the cloud endpoint url without http>"
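For the Azure AKS case above, the extra properties might be assembled into a `--set` string like this (a sketch; the storage account, container, and destination folder are hypothetical placeholders, not values from your environment):

```shell
# Hypothetical Azure values -- substitute your own storage account details
STORAGE_ACCOUNT="mystorageacct"
CONTAINER="mlflow-artifacts"
AZURE_SET="global.s3.azure.enabled=true,\
global.mlflow.artifact.secrets.values.mlflow_endpoint_url=https://${STORAGE_ACCOUNT}.blob.core.windows.net,\
global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=wasbs://${CONTAINER}@${STORAGE_ACCOUNT}.blob.core.windows.net/models"
# This string would be appended to the --set list of both the
# infrastructure and core helm install commands
echo "$AZURE_SET"
```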

  4. For AUTH_TYPE, choose the authentication type that you would like to use when accessing the Aizen Jupyter console. You can specify one of these authentication types:

    • dummy (no authentication): This setting uses only predefined users (admin, aizenai, aizendev, and <deployed namespace name>), all with the password aizen@321.

    • native: This setting allows users to sign up. Create the admin user first, then add additional users.

    • oauth: This setting uses OAuth-based authentication, such as GitHub, Google, GitLab, and so on.

    • ldap (default): This setting uses the Lightweight Directory Access Protocol (LDAP) and Active Directory.

  5. Add these properties to the Aizen core Helm chart based on your chosen authentication type:

    Use | to separate multiple Bind DNs, groups, and hosts. Use , to separate multiple allowed users and admin users.

    • For LDAP authentication:

      core.console.auth_type=ldap
      core.console.ldap_server_host
      core.console.ldap_bind_dn
      core.console.ldap_allowed_groups
      core.console.admin_user
      core.console.allowed_users
    • For OAuth authentication:

      core.console.auth_type=oauth
      core.console.oauth_authenticator
      core.console.oauth_client_id
      core.console.oauth_client_secret
      core.console.oauth_callback_url
      core.console.allowed_users
      core.console.admin_user

      If you are using github as the oauth_authenticator, set these properties:

      core.console.oauth_allowed_organizations
      core.console.oauth_allowed_scopes

      Consider adding these optional properties for OAuth authentication:

      core.console.oauth_token_url
      core.console.oauth_userdata_url
      core.console.oauth_allowed_groups
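Putting the separator rules together, the LDAP-related properties might be assembled like this (a sketch with hypothetical DNs and user names; the server host is the in-cluster OpenLDAP service from the default configuration below, and the `\,` escapes keep Helm from splitting a single DN on its internal commas):

```shell
# '|' separates alternative bind DNs; '\,' escapes commas inside one DN
LDAP_HOST='ldap://aizen-openldap-service.aizen-infra.svc.cluster.local:1389'
LDAP_BIND_DN='uid={username}\,ou=users\,dc=example\,dc=com|uid={username}\,ou=people\,dc=example\,dc=com'
LDAP_GROUPS='cn=dbgrp\,ou=groups\,dc=example\,dc=com'
ALLOWED_USERS='alice,bob'   # ',' separates multiple allowed users
ADMIN_USER='aizenadmin'
AUTH_SET="core.console.auth_type=ldap,\
core.console.ldap_server_host=${LDAP_HOST},\
core.console.ldap_bind_dn=${LDAP_BIND_DN},\
core.console.ldap_allowed_groups=${LDAP_GROUPS},\
core.console.allowed_users=${ALLOWED_USERS},\
core.console.admin_user=${ADMIN_USER}"
echo "$AUTH_SET"
```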
  6. Install the Aizen core components by running this command:

    helm -n $NAMESPACE install aizencore $HELMCHART_LOCATION/aizen
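Once the install returns, you can check the release status (a hedged check; assumes the release name `aizencore` and namespace `aizen` from the command above, and skips silently where `helm` is unavailable):

```shell
# Show the deployed release and its notes
RELEASE="aizencore"
NAMESPACE="aizen"
if command -v helm >/dev/null; then
  helm -n "$NAMESPACE" status "$RELEASE"
fi
```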
  7. If you installed the Aizen core components on a GCP cluster, create the gateways and virtual services for the console, MLflow, dataexplorer, prediction, and explorer by using the Virtual Services and Gateways Command Script (GCP).

Default Core Configuration Settings

NAMESPACE=aizen
INFRA_NAMESPACE=aizen-infra
HELMCHART_LOCATION=aizen-helmcharts-1.0.0

STORAGE_CLASS=
INGRESS_HOST=
BUCKET_NAME=
CLUSTER_NAME=

CLOUD_ENDPOINT_URL=
CLOUD_ACCESSKEY_ID=
CLOUD_SECRET_KEY=
CLOUD_PROVIDER_REGION=
CLOUD_PROVIDER_TYPE=

STORAGE_TYPE=cloud
AUTH_TYPE=ldap

#Needed only for Cloudian
CLOUD_ENDPOINT_IP=

#IMAGE
IMAGE_REPO=aizencorp
IMAGE_REPO_SECRET=
IMAGE_TAG=1.0.0

#MLFLOW
MLFLOW_ACCESSKEY_ID=
MLFLOW_SECRET_KEY=
MLFLOW_ENDPOINT_URL=
MLFLOW_ARTIFACT_DESTINATION=
MLFLOW_ARTIFACT_REGION=

#PVC
STORAGE_PERSISTENCE_SIZE=200Gi
CONSOLE_PERSISTENCE_SIZE=100Gi

#You don't need to change anything below this line
helm -n $NAMESPACE install aizencore $HELMCHART_LOCATION/aizen \
--set core.enabled=true,\
global.clustername=$CLUSTER_NAME,\
global.s3.secrets.enabled=true,\
global.s3.endpoint_url=$CLOUD_ENDPOINT_URL,\
global.s3.endpoint_ip=$CLOUD_ENDPOINT_IP,\
global.s3.secrets.values.s3_access_key=$CLOUD_ACCESSKEY_ID,\
global.s3.secrets.values.s3_secret_key=$CLOUD_SECRET_KEY,\
global.customer_bucket_name=$BUCKET_NAME,\
global.storage_class=$STORAGE_CLASS,\
global.mlflow.artifact.region=$MLFLOW_ARTIFACT_REGION,\
global.mlflow.artifact.secrets.values.mlflow_access_key_id=$MLFLOW_ACCESSKEY_ID,\
global.mlflow.artifact.secrets.values.mlflow_access_secret_key=$MLFLOW_SECRET_KEY,\
global.mlflow.artifact.secrets.values.mlflow_endpoint_url=$MLFLOW_ENDPOINT_URL,\
global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=$MLFLOW_ARTIFACT_DESTINATION,\
global.cloud_provider_type=$CLOUD_PROVIDER_TYPE,\
global.cloud_provider_region=$CLOUD_PROVIDER_REGION,\
global.storage_type=$STORAGE_TYPE,\
global.image_registry=$IMAGE_REPO,\
global.image_secret=$IMAGE_REPO_SECRET,\
global.image_tag=$IMAGE_TAG,\
global.ingress.host=$INGRESS_HOST,\
core.console.auth_type=$AUTH_TYPE,\
core.storage.volume_size=$STORAGE_PERSISTENCE_SIZE,\
core.console.volume_size=$CONSOLE_PERSISTENCE_SIZE,\
core.console.ldap_server_host="ldap://aizen-openldap-service.aizen-infra.svc.cluster.local:1389",\
core.console.ldap_bind_dn="uid={username}\,ou=users\,dc=aizencorp\,dc=local\,dc=com|uid={username}\,ou=people\,dc=aizencorp\,dc=local\,dc=com",\
core.console.ldap_allowed_groups="cn=dbgrp\,ou=groups\,dc=aizencorp\,dc=local\,dc=com",\
core.console.admin_user="aizenadmin"

Checking the Deployment Status of the Core Components

Check the status of all the core components by running this command:

kubectl -n aizen get pods

If any of the components are not in a Running state, see Installation Issues.
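Rather than polling `get pods` manually, you can block until every pod reports Ready (a sketch; the 300-second timeout is an arbitrary choice, and the guard skips the check where `kubectl` is unavailable):

```shell
# Wait up to 5 minutes for all pods in the aizen namespace to become Ready
TIMEOUT="300s"
if command -v kubectl >/dev/null; then
  kubectl -n aizen wait --for=condition=Ready pods --all --timeout="$TIMEOUT"
fi
```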
