# Installing the Core Components

{% hint style="warning" %}
Before proceeding with these steps, you must first install the Aizen infrastructure components. See [installing-the-infrastructure-components](https://aizen-corp.gitbook.io/docs/installation/installing-aizen/installing-the-infrastructure-components "mention").
{% endhint %}

To install the Aizen core components, follow these steps:

1. Create the namespace `aizen` for the Aizen core components by running this command:

   <pre data-overflow="wrap"><code>kubectl create ns aizen
   </code></pre>
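
   Optionally, confirm that the namespace was created:

   <pre data-overflow="wrap"><code>kubectl get ns aizen
   </code></pre>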

2. Using the Docker Hub credentials that you obtained for the Aizen microservice images (see [#getting-credentials-for-the-aizen-microservice-images](https://aizen-corp.gitbook.io/docs/installation/software-requirements#getting-credentials-for-the-aizen-microservice-images "mention")), create a Kubernetes Secret for accessing the Aizen images in the core namespace:

   <pre data-overflow="wrap"><code>kubectl create secret docker-registry aizenrepo-creds \
   --docker-username=aizencorp \
   --docker-password=&#x3C;YOUR DOCKER CREDENTIALS> \
   -n aizen
   </code></pre>
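
   Optionally, verify that the Secret exists in the `aizen` namespace:

   <pre data-overflow="wrap"><code>kubectl -n aizen get secret aizenrepo-creds
   </code></pre>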

3. Customize the settings (see the [#default-core-configuration-settings](#default-core-configuration-settings "mention") below) in the Aizen core Helm chart based on your environment and preferred configuration:

   * Update the STORAGE\_CLASS, INGRESS\_HOST, BUCKET\_NAME, CLOUD\_ENDPOINT\_URL, CLOUD\_ACCESSKEY\_ID, CLOUD\_SECRET\_KEY, IMAGE\_REPO\_SECRET, MLFLOW\_ACCESSKEY\_ID, MLFLOW\_SECRET\_KEY, and MLFLOW\_ENDPOINT\_URL values.
   * For the MLFLOW\_ENDPOINT\_URL, use either an S3 bucket or a local MinIO bucket; in either case, create the bucket before you install. If you are using local MinIO as the object store, set up MinIO first and then create the buckets (see [minio](https://aizen-corp.gitbook.io/docs/installation/installing-optional-components/minio "mention")); a bucket-creation sketch follows this list.
   * Change the default persistence sizes, which are hardcoded in the deployments of the various components, to suit your environment.
   * For an Azure AKS cluster, add these properties to both the infrastructure and core deployments:

     <pre data-overflow="wrap"><code>global.s3.azure.enabled=true
     global.s3.azure.values.storage_account_name
     global.s3.azure.values.storage_access_key
     global.s3.azure.values.storage_connection_string
     global.mlflow.artifact.secrets.values.mlflow_endpoint_url=https://&#x3C;your storage account name>.blob.core.windows.net
     global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=wasbs://&#x3C;storage container name>@&#x3C;storage account name>.blob.core.windows.net/&#x3C;destination folder>
     </code></pre>
   * For a cloud-based deployment, add these properties to both the infrastructure and core deployments:

     <pre data-overflow="wrap"><code>infra.hashicorp-vault.vault.server.hostAliases[0].ip="$CLOUD_ENDPOINT_IP",\
     infra.hashicorp-vault.vault.server.hostAliases[0].hostnames[0]="&#x3C;specify the cloud endpoint url without http>"
     </code></pre>
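
   If you need to create a local MinIO bucket for the MLflow artifacts, a minimal sketch using the MinIO client (`mc`) looks like this; the alias `aizenminio` is a placeholder, and the endpoint and credentials are the same values that you set for CLOUD\_ENDPOINT\_URL, CLOUD\_ACCESSKEY\_ID, and CLOUD\_SECRET\_KEY:

   <pre data-overflow="wrap"><code>mc alias set aizenminio &#x3C;CLOUD_ENDPOINT_URL> &#x3C;CLOUD_ACCESSKEY_ID> &#x3C;CLOUD_SECRET_KEY>
   mc mb aizenminio/&#x3C;BUCKET_NAME>
   </code></pre>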

4. For AUTH\_TYPE, choose the authentication type that you would like to use when accessing the Aizen Jupyter console. You can specify one of these authentication types:
   * `dummy` (no authentication): This setting uses only predefined users, such as `admin`, `aizenai`, `aizendev`, and `<deployed namespace name>`, all with the password `aizen@321`.
   * `native`: This setting allows users to sign up: create the admin user first, and then create additional users.
   * `oauth`: This setting uses OAuth-based authentication, such as GitHub, Google, GitLab, and so on.
   * `ldap` (default): This setting uses the Lightweight Directory Access Protocol (LDAP) and Active Directory.

5. Add these properties to the Aizen core Helm chart based on your chosen authentication type (an illustrative example follows the lists below):

   <div data-gb-custom-block data-tag="hint" data-style="info" class="hint hint-info"><p>Use <code>|</code> to separate multiple Bind DNs, groups, and hosts. Use <code>,</code> to separate multiple allowed users and admin users.</p></div>

   * **For LDAP authentication:**

     <pre data-overflow="wrap"><code>core.console.auth_type=ldap
     core.console.ldap_server_host
     core.console.ldap_bind_dn
     core.console.ldap_allowed_groups
     core.console.admin_user
     core.console.allowed_users
     </code></pre>
   * **For OAuth authentication:**

     <pre data-overflow="wrap"><code>core.console.auth_type=oauth
     core.console.oauth_authenticator
     core.console.oauth_client_id
     core.console.oauth_client_secret
     core.console.oauth_callback_url
     core.console.allowed_users
     core.console.admin_user
     </code></pre>

     If you are using `github` as the `oauth_authenticator`, set these properties:

     <pre data-overflow="wrap"><code>core.console.oauth_allowed_organizations
     core.console.oauth_allowed_scopes
     </code></pre>

     Consider adding these optional properties for OAuth authentication:

     <pre data-overflow="wrap"><code>core.console.oauth_token_url
     core.console.oauth_userdata_url
     core.console.oauth_allowed_groups
     </code></pre>
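
   As an illustration only, a GitHub-based OAuth configuration might look like the following; the client ID, client secret, callback URL, and user names are placeholders that you must replace with values from your own GitHub OAuth application:

   <pre data-overflow="wrap"><code>core.console.auth_type=oauth
   core.console.oauth_authenticator=github
   core.console.oauth_client_id=&#x3C;YOUR CLIENT ID>
   core.console.oauth_client_secret=&#x3C;YOUR CLIENT SECRET>
   core.console.oauth_callback_url=&#x3C;YOUR CALLBACK URL>
   core.console.admin_user=&#x3C;ADMIN USER>
   core.console.allowed_users=&#x3C;USER1>,&#x3C;USER2>
   </code></pre>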

6. Install the Aizen core components by running this command:

   <pre data-overflow="wrap"><code>helm -n $NAMESPACE install aizencore $HELMCHART_LOCATION/aizen
   </code></pre>
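
   After the installation completes, you can review the release status with a standard Helm command:

   <pre data-overflow="wrap"><code>helm -n $NAMESPACE status aizencore
   </code></pre>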

7. If you installed the Aizen core components on a GCP cluster, create the gateways and virtual services for the console, MLflow, dataexplorer, prediction, and explorer by using the [virtual-services-and-gateways-command-script-gcp](https://aizen-corp.gitbook.io/docs/installation/installing-aizen/virtual-services-and-gateways-command-script-gcp "mention").

### **Default Core Configuration Settings**

{% code overflow="wrap" %}

```
NAMESPACE=aizen
INFRA_NAMESPACE=aizen-infra
HELMCHART_LOCATION=aizen-helmcharts-1.0.0

STORAGE_CLASS=
INGRESS_HOST=
BUCKET_NAME=
CLUSTER_NAME=

CLOUD_ENDPOINT_URL=
CLOUD_ACCESSKEY_ID=
CLOUD_SECRET_KEY=
CLOUD_PROVIDER_REGION=
CLOUD_PROVIDER_TYPE=

STORAGE_TYPE=cloud
AUTH_TYPE=ldap

#Needed only for Cloudian
CLOUD_ENDPOINT_IP=

#IMAGE
IMAGE_REPO=aizencorp
IMAGE_REPO_SECRET=
IMAGE_TAG=1.0.0

#MLFLOW
MLFLOW_ACCESSKEY_ID=
MLFLOW_SECRET_KEY=
MLFLOW_ENDPOINT_URL=
MLFLOW_ARTIFACT_DESTINATION=
MLFLOW_ARTIFACT_REGION=

#PVC
STORAGE_PERSISTENCE_SIZE=200Gi
CONSOLE_PERSISTENCE_SIZE=100Gi

#You don't need to change anything below this line
helm -n $NAMESPACE install aizencore $HELMCHART_LOCATION/aizen \
--set core.enabled=true,\
global.clustername=$CLUSTER_NAME,\
global.s3.secrets.enabled=true,\
global.s3.endpoint_url=$CLOUD_ENDPOINT_URL,\
global.s3.endpoint_ip=$CLOUD_ENDPOINT_IP,\
global.s3.secrets.values.s3_access_key=$CLOUD_ACCESSKEY_ID,\
global.s3.secrets.values.s3_secret_key=$CLOUD_SECRET_KEY,\
global.customer_bucket_name=$BUCKET_NAME,\
global.storage_class=$STORAGE_CLASS,\
global.mlflow.artifact.region=$MLFLOW_ARTIFACT_REGION,\
global.mlflow.artifact.secrets.values.mlflow_access_key_id=$MLFLOW_ACCESSKEY_ID,\
global.mlflow.artifact.secrets.values.mlflow_access_secret_key=$MLFLOW_SECRET_KEY,\
global.mlflow.artifact.secrets.values.mlflow_endpoint_url=$MLFLOW_ENDPOINT_URL,\
global.mlflow.artifact.secrets.values.mlflow_artifacts_destination=$MLFLOW_ARTIFACT_DESTINATION,\
global.cloud_provider_type=$CLOUD_PROVIDER_TYPE,\
global.cloud_provider_region=$CLOUD_PROVIDER_REGION,\
global.storage_type=$STORAGE_TYPE,\
global.image_registry=$IMAGE_REPO,\
global.image_secret=$IMAGE_REPO_SECRET,\
global.image_tag=$IMAGE_TAG,\
global.ingress.host=$INGRESS_HOST,\
core.console.auth_type=$AUTH_TYPE,\
core.storage.volume_size=$STORAGE_PERSISTENCE_SIZE,\
core.console.volume_size=$CONSOLE_PERSISTENCE_SIZE,\
core.console.ldap_server_host="ldap://aizen-openldap-service.aizen-infra.svc.cluster.local:1389",\
core.console.ldap_bind_dn="uid={username}\,ou=users\,dc=aizencorp\,dc=local\,dc=com|uid={username}\,ou=people\,dc=aizencorp\,dc=local\,dc=com",\
core.console.ldap_allowed_groups="cn=dbgrp\,ou=groups\,dc=aizencorp\,dc=local\,dc=com",\
core.console.admin_user="aizenadmin"
```

{% endcode %}
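
Because these settings end in the `helm install` command itself, one convenient approach is to save the listing above as a shell script (for example, a hypothetical `install-aizen-core.sh`), fill in the empty variables, and run it:

{% code overflow="wrap" %}

```
bash install-aizen-core.sh
```

{% endcode %}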

### Checking the Deployment Status of the Core Components

Check the status of all the core components by running this command:

{% code overflow="wrap" %}

```
kubectl -n aizen get pods
```

{% endcode %}
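
If you want to block until all of the pods are ready, `kubectl wait` offers one way to do it; the 10-minute timeout here is an arbitrary choice:

{% code overflow="wrap" %}

```
kubectl -n aizen wait --for=condition=Ready pods --all --timeout=600s
```

{% endcode %}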

If any of the components are not in a **Running** state, see [installation-issues](https://aizen-corp.gitbook.io/docs/troubleshooting/installation-issues "mention").
