LLM Workflow

Fine-Tuning and Serving LLMs

This diagram shows the workflow for fine-tuning and serving large language models (LLMs).

Workflow for LLMs

Some of the steps in the workflow are optional. For example, you can directly serve a pretrained LLM without fine-tuning.
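To make the optional step concrete, the branch in the workflow can be sketched in plain Python. This is an illustration only; the step names and the `build_llm_workflow` function are hypothetical and are not part of any Aizen API.

```python
def build_llm_workflow(fine_tune: bool) -> list:
    """Assemble the ordered steps for preparing an LLM for serving.

    Fine-tuning is optional: a pretrained LLM can be served directly,
    in which case the fine-tuning steps are skipped entirely.
    """
    steps = ["load_pretrained_model"]  # hypothetical step names
    if fine_tune:
        steps += ["prepare_training_data", "fine_tune_model", "evaluate_model"]
    steps.append("serve_model")
    return steps

# Serving a pretrained model directly skips the fine-tuning steps.
print(build_llm_workflow(fine_tune=False))
# → ['load_pretrained_model', 'serve_model']
```

Calling `build_llm_workflow(fine_tune=True)` yields the full five-step sequence, with the three fine-tuning steps inserted before serving.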

Deploying Vector Stores and AI Agents

This diagram shows the workflow for serving LLMs or embedding models, deploying vector stores for retrieval-augmented generation (RAG) applications, and deploying AI agents.

Workflow for Vector Stores and AI Agents

Some of the steps in the workflow are optional. For example, you do not need to serve LLMs if the LLMs that you plan to use come from a non-Aizen vendor, such as OpenAI. Also, you need to configure a vector store only if you plan to add a RAG Query tool to the AI agent.
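The relationship between the vector store and the RAG Query tool can be sketched with a minimal, self-contained example. Everything here is a toy illustration under assumed names (`InMemoryVectorStore`, `RAGQueryTool`); it is not the Aizen implementation, but it shows why the tool cannot work without a configured store: retrieval ranks stored documents by cosine similarity against the query embedding.

```python
import math


class InMemoryVectorStore:
    """Toy vector store: holds (embedding, text) pairs and returns
    the texts whose embeddings are closest to a query embedding."""

    def __init__(self):
        self.items = []  # list of (embedding, text) pairs

    def add(self, embedding, text):
        self.items.append((embedding, text))

    def query(self, embedding, top_k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
            return dot / norm if norm else 0.0

        # Rank stored items by similarity to the query embedding.
        ranked = sorted(self.items, key=lambda it: cosine(it[0], embedding), reverse=True)
        return [text for _, text in ranked[:top_k]]


class RAGQueryTool:
    """Agent tool that depends on a configured vector store:
    without one, there is nothing to retrieve from."""

    def __init__(self, store):
        self.store = store

    def run(self, query_embedding):
        return self.store.query(query_embedding, top_k=1)


store = InMemoryVectorStore()
store.add([1.0, 0.0], "Doc about fine-tuning")
store.add([0.0, 1.0], "Doc about serving")

tool = RAGQueryTool(store)
print(tool.run([0.9, 0.1]))  # → ['Doc about fine-tuning']
```

An agent without a RAG Query tool never touches the store, which is why configuring one is optional in that case.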
