LLM Workflow
Last updated
This diagram shows the workflow for fine-tuning and serving large language models (LLMs).
Some of the steps in the workflow are optional. For example, you can directly serve a pretrained LLM without fine-tuning.
This diagram shows the workflow for serving LLMs or embedding models, deploying vector stores for RAG applications, or deploying AI agents.
Some of the steps in the workflow are optional. For example, you do not need to serve LLMs if the LLMs that you plan to use come from a non-Aizen vendor, such as OpenAI. Also, you need to configure a vector store only if you plan to add a RAG Query tool to the AI agent.