configure llmapp

The configure llmapp command configures an LLM application, such as an AI agent, for deployment. In the notebook, a template form guides you through the required inputs: the LLM application name, the application type, the LLM model to use, and any tools that the AI agent will use.

Syntax

configure llmapp

Parameters

To create an LLM application:

  1. Select New from the LLM App drop-down list. Specify a name for the LLM application in the LLM App Name field.

  2. Select AI agent as the App Type.

  3. Enter a brief description of the application in the App Description field. The description will be appended to the LLM system prompt.

  4. Select an LLM provider name from the Provider Name drop-down list, and enter a model name for the application in the LLM Model field.

  5. If applicable, add tools for the AI agent:

    1. Select New from the Tool drop-down list. Specify a name for the tool in the Tool Name field.

    2. Select the Tool Type:

      • Custom Functions: a generic Python module with custom Pydantic functions. Functions and their parameters must be adequately annotated (a sketch of such a module appears after these steps). Textual queries are used to generate function calls with appropriate parameter values, which result in direct calls to the Python module.

      • RAG Query: a tool that interfaces with a document Store ID in a vector store. Textual queries may be sent to the vector store to retrieve relevant chunks from the documents in that Store ID.

      • REST Query: a tool that interfaces with a REST endpoint URL. The REST endpoint must provide an adequately annotated OpenAPI specification (see the sketch after the Tool Parameters section). Textual queries are used to generate function calls with appropriate parameter values, which are then sent to the REST endpoint via HTTP GET or POST methods.

      • SQL Query: a tool that interfaces with a SQL table. The SQL table must be a configured data source or dataset. Textual queries are used to generate SQL query statements, which are then sent to a SQL engine.

      • Web Search: a tool that searches the web for relevant information. Textual queries may be sent to a web search engine.

    3. Enter a brief description of the tool in the Tool Description field. The description will be appended to the LLM system prompt.

    4. Keep the Use App LLM checkbox selected to use the application's LLM. If you clear the checkbox, you will need to select a Provider Name and enter an LLM Model.

    5. Select or enter various tool parameters, depending on the tool type. See Tool Parameters below.

    6. Click the Add Tool button.

  6. Select the Advanced Settings checkbox, and then specify settings for the LLM application's Temperature, Top-P, and Frequency Penalty.

  7. Click the Save Configuration button to save the LLM application.
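
For reference, the following is a minimal sketch of a module that a Custom Functions tool could call. The file name, class, function, and annotations are hypothetical; the only stated requirement is that functions and parameters are adequately annotated.

# weather_tools.py -- hypothetical Custom Functions module
from typing import Annotated

from pydantic import Field

class WeatherTools:
    """Answers weather-related questions for the AI agent."""

    def get_forecast(
        self,
        city: Annotated[str, Field(description="City to forecast, for example 'Paris'")],
        days: Annotated[int, Field(description="Number of days ahead, 1 to 7")] = 3,
    ) -> str:
        """Return a short text forecast for the given city."""
        # A real implementation would call a weather service; stubbed here.
        return f"Forecast for {city} over the next {days} days: mild and clear."

With a module like this, the corresponding Tool Parameters would be the Module File (weather_tools.py), the Class Name (WeatherTools), and the Function Names (get_forecast).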

Tool Parameters

The remaining parameters depend on the selected Tool Type:

  • Custom Functions: Select the Module File and the Class Name. Enter the Function Names.

  • RAG Query: Select the Provider Name, the Vector Store Name, and the Store ID.

  • REST Query: Enter the Open AI Spec URL, the Credential File, and the Credential Section. Select the Embedding Provider, and enter the Embedding Model.

  • SQL Query: Select the Source Type (datasource or dataset), and then select the Source Name.

  • Web Search: There are no additional parameters.
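
For the REST Query tool type, the endpoint must publish an adequately annotated OpenAPI specification. As one illustration (FastAPI is not an Aizen requirement, and all names here are hypothetical), a FastAPI service generates such a specification automatically and serves it at /openapi.json:

# orders_api.py -- hypothetical REST endpoint for a REST Query tool
from fastapi import FastAPI

app = FastAPI(
    title="Order Lookup API",
    description="Looks up customer orders by order ID.",
)

@app.get(
    "/orders/{order_id}",
    summary="Fetch one order",
    description="Returns the status of the order with the given ID.",
)
def get_order(order_id: int) -> dict:
    # A real implementation would query an order database; stubbed here.
    return {"order_id": order_id, "status": "shipped"}

The URL of the generated specification (for example, https://host/openapi.json) would then be entered as the Open AI Spec URL.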

Example
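
Run the command in a notebook cell:

configure llmapp

The notebook renders the template form; complete the fields described under Parameters, and click the Save Configuration button.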
