
What Is Aizen?

Aizen Overview

Aizen is a full-stack artificial intelligence (AI) platform for building, deploying, and scaling AI solutions on real-time streaming data. It integrates all stages of the AI pipeline, from scalable data processing to predictive analytics, in a single platform where you can manage your entire workflow.

Transform Any Data from Any Source

Aizen simplifies data operations (DataOps) with pipeline orchestration, quality monitoring, and governance for efficient, secure data management. Using Aizen, you can connect to any batch or streaming data source, then ingest, analyze, transform, and store features from it. You can pull real-time and batch data from sources such as Kafka topics, sensors, and JDBC databases. This broad source support makes it easier to collaborate on, integrate, and automate your data management processes.
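As a sketch of what this looks like in practice, the commands below register and inspect a streaming source from the Aizen notebook. Only the command names (configure datasource, describe datasource, listconfig datasources) come from this guide's command reference; the source name, the # comments, and all other details are hypothetical and will vary by deployment.

    # Hypothetical sketch: command names are real; everything else is illustrative.
    configure datasource                    # define a source, e.g., a Kafka topic or a JDBC database
    describe datasource clickstream_source  # review the saved configuration
    listconfig datasources                  # list all sources configured in the current project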

Automate Your AI Pipeline: Train, Serve, and Monitor

Aizen's advanced machine learning operations (MLOps) framework enables you to train, deploy, serve, and monitor AI solutions with high accuracy and performance. Using Aizen, you can automate your AI pipeline with dynamic distributed model training, continuous integration and delivery (CI/CD) deployment, customizable real-time serving endpoints, and continuous monitoring to detect drift.
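The same command vocabulary drives the rest of the pipeline. As a rough sketch, the sequence below strings together training, serving, and drift monitoring; the command names appear in this guide's command reference sections, while the model name, the ordering details, and the # comments are assumptions for illustration.

    # Hypothetical end-to-end flow; only the command names come from this guide.
    configure training        # define the model, dataset, and training parameters
    start training            # launch distributed training
    status training           # poll training progress
    register model my_model   # promote the trained model for serving

    configure prediction      # define a real-time serving endpoint
    start prediction          # deploy the endpoint
    test prediction           # send a sample request to verify the endpoint

    configure datareport      # set up continuous monitoring
    start datareport          # begin generating reports
    list data-drift           # review detected data drift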

Power AI Workloads with Scalable Infrastructure Management

Aizen provides Kubernetes-powered microservices that optimize your AI infrastructure operations (InfraOps), offering automated, scalable, and efficient management of computing resources across both cloud and on-premises (on-prem) environments.

Using Aizen, you can scale AI solutions to accommodate growing data volumes and evolving business needs. You can:

  • Build AI solutions for enterprises of any size, from supply chain and manufacturing to retail and fintech.

  • Create models easily, from small machine learning models to large deep learning applications, customized for your needs.

  • Accelerate time-to-value with ready-to-use models that can be quickly fine-tuned for your specific use cases.

Aizen Platform Features

The Aizen platform combines the following features into a single, easy-to-use, scalable system.

Application Gateway

Aizen's application gateway provides an intuitive, front-end user interface (UI) that you can use to set up AI workflows without complex programming. The UI is a JupyterLab notebook through which you and other data scientists, data engineers, or prompt engineers can securely connect to backend microservices to build, deploy, and scale AI-enabled solutions. Through the Aizen UI, you can:

  • Connect to various data sources and ingest raw data.

  • Transform real-time data streams into features without writing any code, simplifying the data preparation process.

  • Generate machine learning (ML) or deep learning (DL) models using the UI's algorithm selection and model-building features (see the example session after this list).
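For example, a notebook session typically begins by creating and selecting a project before any pipeline commands are issued. The create project and set project commands are documented under Project Commands; the project name and the # comments are hypothetical.

    # Hypothetical session start; only the command names come from this guide.
    create project demand_forecast   # create a workspace for the workflow
    set project demand_forecast      # make it the active project for subsequent commands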

Storage Layer

Aizen's storage layer is designed to manage AI data at scale, power real-time feature serving, and optimize training with compute-aware data partitioning and streaming to CPUs/GPUs. It handles massive AI data volumes and provides:

  • Low-latency row access and high-throughput batch access for fast predictions and efficient training.

  • Object store persistence and high availability, ensuring data integrity and minimizing downtime and data loss.

  • A metadata store for feature discovery and lineage, supporting transparency and compliance.

  • A feature store for real-time, collaborative AI feature serving.

  • Vector database integration for managing and searching vector embeddings.

  • A time-travel capability for gaining insight from historical data and making decisions based on past trends.

Microservices

Aizen's Kubernetes-powered microservices enable on-demand scaling of computing resources for dynamic, remote graphics processing unit (GPU) training, which optimizes performance and reduces costs. For more information, see Resources and GPUs.
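As a rough sketch, provisioning compute for remote GPU training follows the same command pattern. The list instancetypes, configure resource, and status instance commands are documented in the Cloud Provider Commands and Resource Commands sections; their exact arguments are not shown here, and the # comments are assumptions.

    # Hypothetical sketch; only the command names come from this guide.
    list instancetypes    # discover instance types, including GPU types, from a configured cloud provider
    configure resource    # define a compute resource for training or serving
    status instance       # check the state of a provisioned instance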


[Figure: Aizen AI System Architecture]