Installing the Core Components


Before proceeding with these steps, you must first install the Aizen infrastructure components. See Installing the Infrastructure Components.
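
Before you continue, you can confirm that the infrastructure components are healthy. A minimal check, assuming the infrastructure was installed in the default aizen-infra namespace:

kubectl -n aizen-infra get pods

The listed pods should generally report a Running status.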

To install the Aizen core components, follow these steps:

  1. Create the namespace aizen for the Aizen core components by running this command:

    kubectl create ns aizen
  2. Using the Docker Hub credentials that you obtained for the Aizen microservice images (see Getting Credentials for the Aizen Microservice Images), create a Kubernetes Secret for accessing the Aizen images in the core namespace:

    kubectl create secret docker-registry aizenrepo-creds \
    --docker-username=aizencorp \
    --docker-password=<YOUR DOCKER CREDENTIALS> \
    -n aizen
  3. Customize the settings in the Aizen core Helm chart (see Default Core Configuration Settings below) to match your environment and preferred configuration:

    • Update the STORAGE_CLASS, INGRESS_HOST, BUCKET_NAME, CLOUD_ENDPOINT_URL, CLOUD_ACCESSKEY_ID, CLOUD_SECRET_KEY, IMAGE_REPO_SECRET, MLFLOW_ACCESSKEY_ID, MLFLOW_SECRET_KEY, and MLFLOW_ENDPOINT_URL values.

    • For the MLFLOW_ENDPOINT_URL, use either an S3 bucket or a local MinIO bucket, and remember to create the bucket before installing the chart (a sketch of creating it follows these steps). If you are using local MinIO as the object store, set up MinIO first and then create the buckets. See MinIO.

    • Change the default persistence sizes, which are hardcoded in the deployments of the various components, to suit your environment.

    • For an Azure AKS cluster, add these properties to both the infrastructure and core deployments:

      global.s3.azure.enabled=true
      global.s3.azure.values.storage_account_name
      global.s3.azure.values.storage_access_key
      global.s3.azure.values.storage_connection_string
      global.mlflow.artifact.secrets.values.mlflow_endpoint_url=https://<your storage account name>.blob.core.windows.net
      global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=wasbs://<storage container name>@<storage account name>.blob.core.windows.net/<destination folder>
    • For a cloud-based deployment, add these properties to both infrastructure and core deployments:

      infra.hashicorp-vault.vault.server.hostAliases[0].ip="$CLOUD_ENDPOINT_IP",\
      infra.hashicorp-vault.vault.server.hostAliases[0].hostnames[0]="<specify the cloud endpoint url without http>"

  4. For AUTH_TYPE, choose the authentication type to use when accessing the Aizen Jupyter console. You can specify one of the following:

    • dummy (no authentication): This setting uses only predefined users, such as admin, aizenai, aizendev, and <deployed namespace name>, with the password aizen@321.

    • native: This setting allows users to sign up; create the admin user first, and then additional users.

    • oauth: This setting uses OAuth-based authentication, such as GitHub, Google, or GitLab.

    • ldap (default): This setting uses the Lightweight Directory Access Protocol (LDAP) and Active Directory.

  5. Add these properties to the Aizen core Helm chart based on your chosen authentication type:

    Use | to separate multiple Bind DNs, groups, and hosts. Use , to separate multiple allowed users and admin users.

    • For LDAP authentication:

      core.console.auth_type
      core.console.ldap_server_host
      core.console.ldap_bind_dn
      core.console.ldap_allowed_groups
      core.console.admin_user
      core.console.allowed_users
    • For OAuth authentication (an example of passing these properties as Helm --set values follows the Default Core Configuration Settings script below):

      core.console.auth_type=oauth
      core.console.oauth_authenticator
      core.console.oauth_client_id
      core.console.oauth_client_secret
      core.console.oauth_callback_url
      core.console.allowed_users
      core.console.admin_user

      If you are using github as the oauth_authenticator, set these properties:

      core.console.oauth_allowed_organizations
      core.console.oauth_allowed_scopes

      Consider adding these optional properties for OAuth authentication:

      core.console.oauth_token_url
      core.console.oauth_userdata_url
      core.console.oauth_allowed_groups
  6. Install the Aizen core components by running this command:

    helm -n $NAMESPACE install aizencore $HELMCHART_LOCATION/aizen
  7. If you installed the Aizen core components on a GCP cluster, create the gateways and virtual services for the console, MLflow, dataexplorer, prediction, and explorer by using the Virtual Services and Gateways Command Script (GCP).
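
As noted in step 3, the bucket referenced by MLFLOW_ENDPOINT_URL must exist before you install the chart. A minimal sketch of creating it, assuming an S3-compatible endpoint (for example, a local MinIO service), the AWS CLI, and a placeholder bucket name:

# Assumption: the AWS CLI is installed and the MLflow credentials from your configuration are exported
export AWS_ACCESS_KEY_ID=$MLFLOW_ACCESSKEY_ID
export AWS_SECRET_ACCESS_KEY=$MLFLOW_SECRET_KEY

# Create the artifact bucket; replace the placeholder with your bucket name
aws s3 mb s3://<YOUR MLFLOW BUCKET> --endpoint-url $MLFLOW_ENDPOINT_URL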

Default Core Configuration Settings

NAMESPACE=aizen
INFRA_NAMESPACE=aizen-infra
HELMCHART_LOCATION=aizen-helmcharts-1.0.0

STORAGE_CLASS=
INGRESS_HOST=
BUCKET_NAME=
CLUSTER_NAME=

CLOUD_ENDPOINT_URL=
CLOUD_ACCESSKEY_ID=
CLOUD_SECRET_KEY=
CLOUD_PROVIDER_REGION=
CLOUD_PROVIDER_TYPE=

STORAGE_TYPE=cloud
AUTH_TYPE=ldap

#Needed only for cloudian
CLOUD_ENDPOINT_IP=

#IMAGE
IMAGE_REPO=aizencorp
IMAGE_REPO_SECRET=
IMAGE_TAG=1.0.0

#MLFLOW
MLFLOW_ACCESSKEY_ID=
MLFLOW_SECRET_KEY=
MLFLOW_ENDPOINT_URL=
MLFLOW_ARTIFACT_DESTINATION=
MLFLOW_ARTIFACT_REGION=

#PVC
STORAGE_PERSISTENCE_SIZE=200Gi
CONSOLE_PERSISTENCE_SIZE=100Gi

#You don't need to change anything below this line
helm -n $NAMESPACE install aizencore $HELMCHART_LOCATION/aizen \
--set core.enabled=true,\
global.clustername=$CLUSTER_NAME,\
global.s3.secrets.enabled=true,\
global.s3.endpoint_url=$CLOUD_ENDPOINT_URL,\
global.s3.endpoint_ip=$CLOUD_ENDPOINT_IP,\
global.s3.secrets.values.s3_access_key=$CLOUD_ACCESSKEY_ID,\
global.s3.secrets.values.s3_secret_key=$CLOUD_SECRET_KEY,\
global.customer_bucket_name=$BUCKET_NAME,\
global.storage_class=$STORAGE_CLASS,\
global.mlflow.artifact.region=$MLFLOW_ARTIFACT_REGION,\
global.mlflow.artifact.secrets.values.mlflow_access_key_id=$MLFLOW_ACCESSKEY_ID,\
global.mlflow.artifact.secrets.values.mlflow_access_secret_key=$MLFLOW_SECRET_KEY,\
global.mlflow.artifact.secrets.values.mlflow_endpoint_url=$MLFLOW_ENDPOINT_URL,\
global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=$MLFLOW_ARTIFACT_DESTINATION,\
global.cloud_provider_type=$CLOUD_PROVIDER_TYPE,\
global.cloud_provider_region=$CLOUD_PROVIDER_REGION,\
global.storage_type=$STORAGE_TYPE,\
global.image_registry=$IMAGE_REPO,\
global.image_secret=$IMAGE_REPO_SECRET,\
global.image_tag=$IMAGE_TAG,\
global.ingress.host=$INGRESS_HOST,\
core.console.auth_type=$AUTH_TYPE,\
core.storage.volume_size=$STORAGE_PERSISTENCE_SIZE,\
core.console.volume_size=$CONSOLE_PERSISTENCE_SIZE,\
core.console.ldap_server_host="ldap://aizen-openldap-service.aizen-infra.svc.cluster.local:1389",\
core.console.ldap_bind_dn="uid={username}\,ou=users\,dc=aizencorp\,dc=local\,dc=com|uid={username}\,ou=people\,dc=aizencorp\,dc=local\,dc=com",\
core.console.ldap_allowed_groups="cn=dbgrp\,ou=groups\,dc=aizencorp\,dc=local\,dc=com",\
core.console.admin_user="aizenadmin"
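
The script above passes the LDAP properties from step 5. If you chose oauth as the AUTH_TYPE, set AUTH_TYPE=oauth at the top of the script and replace the core.console.ldap_* lines with the OAuth properties instead. A sketch with placeholder values, assuming GitHub as the OAuth provider and using the property names listed in step 5:

core.console.oauth_authenticator=github,\
core.console.oauth_client_id=<YOUR OAUTH CLIENT ID>,\
core.console.oauth_client_secret=<YOUR OAUTH CLIENT SECRET>,\
core.console.oauth_callback_url=<YOUR OAUTH CALLBACK URL>,\
core.console.oauth_allowed_organizations=<YOUR GITHUB ORGANIZATION>,\
core.console.oauth_allowed_scopes=<YOUR ALLOWED SCOPES>,\
core.console.allowed_users="<user1>\,<user2>",\
core.console.admin_user="<YOUR ADMIN USER>"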

Checking the Deployment Status of the Core Components

Check the status of all the core components by running this command:

kubectl -n aizen get pods

If any of the components are not in a Running state, see Installation Issues.
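
To inspect a pod that is stuck or failing, the standard kubectl commands are usually sufficient; the pod name below is a placeholder taken from the output of the previous command:

# Show events and scheduling details for the problem pod
kubectl -n aizen describe pod <POD NAME>

# Show the container logs for the same pod
kubectl -n aizen logs <POD NAME>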
