Building Multi-Agent Systems with LangFlow
An implementation with Azure OpenAI
AI Agents are not new in the landscape of Generative AI, yet it's only been over the last 2 months that they have really started taking off from an enterprise perspective.
But what is an AI Agent?
We can define an AI Agent as a hyper-specialized, goal-oriented and autonomous AI entity characterized by the following components:
- An LLM which acts as the brain of the agent
- A system message which sets the agent’s mission
- An orchestration layer which makes it easier to manage the agent's components (LangChain, Semantic Kernel, AutoGen and many others are examples of orchestrators)
- Memory which provides the agent with the context of the conversation
- Knowledge Base which provides the agent with relevant information to be retrieved
- Tools which allow the agent to perform actions in the surrounding ecosystem.
For example, let’s say we want to build an AI tutor assistant for students.
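Sticking with that tutor example, the anatomy above can be sketched in plain Python. Note this is purely illustrative: the `Agent` class and its fields are made-up names for explanation, not a real LangChain or LangFlow API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative sketch only: this Agent class is NOT a real framework API.
@dataclass
class Agent:
    llm: str                 # the "brain", e.g. an Azure OpenAI deployment name
    system_message: str      # sets the agent's mission
    memory: List[str] = field(default_factory=list)           # conversation context
    knowledge_base: List[str] = field(default_factory=list)   # retrievable information
    tools: Dict[str, Callable] = field(default_factory=dict)  # actions in the ecosystem

tutor = Agent(
    llm="gpt-4o",
    system_message="You are a patient tutor helping students learn math.",
    tools={"fetch_exercise": lambda topic: f"Exercise on {topic}"},
)

print(tutor.tools["fetch_exercise"]("fractions"))  # Exercise on fractions
```

Each field maps one-to-one onto the bullet list above; in practice the orchestration layer is what binds these pieces together at runtime.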
One of the most prominent features of AI Agents — which marks a paradigm shift in the application development landscape — is that they can be seen as a set of modular, sharable and repeatable components.
But that's not all. Following the same reasoning, an agent itself can be seen as a repeatable and sharable component that can be used in different ways:
- As a stand-alone agent
- As a tool for another agent
- As part of a multi-agent workflow
When it comes to multi-agent systems, one of the key elements we need to define is the workflow design, in other words the way our agents are going to work together. There are three main types of workflows:
- Collaborative → agents are free to communicate with each other in a peer-to-peer fashion.
- Hierarchical → agents are invoked and orchestrated by a Manager Agent, which is always the first point of contact for the user's query.
- Sequential → agents are "hard-coded" in a sequence, so that the output of one agent becomes the input of the next agent in the chain.
Note that, while designing the workflow, we can add any kind of additional logic we need, including loops, conditional branches, fallbacks etc.
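As a toy illustration of how these designs differ in wiring, here is a sketch where plain functions stand in for LLM-backed agents; the keyword routing in the hierarchical variant is a deliberate oversimplification of what a Manager Agent's LLM actually does.

```python
def gear_agent(query: str) -> str:
    # Specialist stand-in: climbing gear recommendations
    return f"[gear] suggestions for: {query}"

def hiking_agent(query: str) -> str:
    # Specialist stand-in: hiking trip ideas
    return f"[hiking] ideas for: {query}"

def sequential(query: str) -> str:
    # Sequential: the output of one agent becomes the input of the next
    return hiking_agent(gear_agent(query))

def hierarchical(query: str) -> str:
    # Hierarchical: a manager routes the query to the right specialist
    if "gear" in query.lower():
        return gear_agent(query)
    return hiking_agent(query)

print(hierarchical("What gear do I need for via ferrata?"))
print(sequential("a weekend in the Dolomites"))
```

A collaborative design would instead let the specialists call each other directly as peers; that wiring is omitted here for brevity.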
In the following example, we are going to keep things easy and stick to a hierarchical workflow leveraging LangFlow as GUI.
LangFlow
LangFlow is an open-source graphical interface designed to make working with LangChain easy and intuitive. You can visually design AI workflows, test ideas quickly, and bring your projects to life in minutes.
Moreover, when it comes to AI Agents, LangFlow comes in extremely handy: as we mentioned above, AI Agents are characterized by modularity and repeatability, which is perfectly in line with LangFlow's design. LangFlow indeed comes with a set of components (which can include tools, LLMs, memory, vector DBs, as well as custom components you can write from scratch) which can eventually be published in a store (where you can also find other contributors' components).
To get started with LangFlow, install it via pip:
pip install langflow
and then run the following command to launch it on your local host:
python -m langflow run
To get more familiar with the tool, I recommend having a look at the official documentation here.
Building your e-commerce multi-agent experience
Business Scenario: We are the owners of a climbing e-store, and we want to enhance our customer experience by creating a multi-agent AI assistant. This assistant leverages the Azure OpenAI service to provide real-time, intelligent support to our customers, helping them:
- Identify the perfect climbing gear
- Get inspired with beautiful hiking trips
- Add items to the cart
We have the following assets:
- An e-commerce website:
- A backend database for the user's cart:
{
"cart": []
}
- A product database:
From the UI, the user can add items to the cart by clicking the "Add to cart" button. However, we want to provide a conversational user interface which, eventually, will be able to do that on the user's behalf.
When you initialize LangFlow, you have the chance to either start from a pre-built flow you can pick in the store, or draft your flow from a blank canvas, which is what we are going to do:
As you can see, on the left-hand side there is a full list of components you can leverage. In this scenario, we are going to use a mix of pre-built and custom components. Let’s start!
Step 1: Climbing Gear Agent
The first agent we are going to initialize is our Sales representative. This agent will be able to:
- Retrieve product information from a SQL DB
- Add items to the cart of our e-commerce website
The first component we need is an Agent, which we can configure with our LLM of choice. Then we need a SQL Tool, for which I’m going to use a LangChain pre-built component available in the left-hand side menu:
Note: since we are going to use this SQL Agent as part of a multi-agent system, we need to configure it as a "tool", so that it can eventually be used by our Climbing Gear Agent (we will do the same for the latter, so that it can be invoked by the Manager Agent). You can convert an agent (and many other components) to a tool by clicking on the tool mode toggle:
Since the SQL component requires it, I also added an Azure OpenAI LLM to enable the tool.
The second tool we need will be the Add to Cart one, which will be a custom one. You can add a custom tool by clicking on “New custom component” in the bottom-left corner. This is how I configured my component:
from langflow.custom import Component
from langflow.io import MessageTextInput, Output
from langflow.schema import Data
import requests


class AddToCart(Component):
    display_name = "AddToCart"
    description = "Use as a template to add an item to the cart."
    documentation: str = "http://docs.langflow.org/components/custom"
    icon = "code"
    name = "AddToCart"

    inputs = [
        MessageTextInput(
            name="item_name",
            display_name="Item Name",
            info="Name of the item to add",
            value="Example Item",
            tool_mode=True,
        ),
        MessageTextInput(
            name="item_price",
            display_name="Item Price",
            info="Price of the item to add",
            value="10.00",
            tool_mode=True,
        ),
    ]

    outputs = [
        Output(display_name="Output", name="output", method="build_output"),
    ]

    def build_output(self) -> Data:
        """Add an item to the cart via the e-commerce backend."""
        url = "http://localhost:3000/cart"
        cart_item = {
            "name": self.item_name,
            "price": float(self.item_price),
        }
        response = requests.post(url, json=cart_item, timeout=10)
        if response.status_code == 201:
            result = f"Item '{self.item_name}' added to cart successfully."
        else:
            result = f"Failed to add item to cart: {response.status_code} {response.text}"
        data = Data(value=result)
        self.status = data
        return data
This is the overall configuration of our first Agent:
Step 2: Hiking Expert Agent
For this agent, we will keep things simple and use it only as a hyper-specialized advisor on hiking trips. You might wonder: why do we need an agent if it doesn't even have tools? Sometimes it might be an extra layer that can be avoided; however, the hyper-specialization and focus on a specific topic (even without tools) still benefits the overall multi-agent system, making the end-user experience more accurate.
So in this case, we only need another Agent component, configured with the proper system message.
Step 3: Manager Agent
The third agent will be the manager. Also in this case, we only need an Agent component, which I configured with the following system message:
You are a skilled customer service manager and information router. Your primary responsibility is to use the available tools to accurately address user inquiries and provide detailed, helpful responses. You can:
Retrieve gear names from your climbing catalog
Provide clear guidance on hiking tours
Add items to the cart
Use multiple tools if required to answer the question.
And that’s it! We now have our crew ready to be tested.
Note that all the agents are provided with two main features: a name and a description. This is key in order for the manager to invoke the proper agent (the same reasoning applies to tools). You can provide a description by clicking on “Edit Tool” in your agent’s component:
This is the final configuration (I added the Chat Input and Output components to finalize the flow):
Note that you can also add the chat history to the flow. To do that, you can leverage either external memory stores or LangFlow's tables. What you can do is create a prompt which keeps the memory as a variable, and connect it to the Manager Agent's system message:
You are a helpful assistant that answers questions.
Whenever you are asked specific questions about climbing gear or hiking tours, leverage your tools.
History:
{memory}
To test our assistant, we can use the playground provided by LangFlow in the top-right corner:
As you can see, the Manager Agent was able to orchestrate the specialist agents seamlessly and, eventually, the Climbing Gear Agent successfully added my gear to the cart:
You can also see the Agent’s thinking in the backend in your command line:
Finally, you can consume your flow via API and embed it into any application (in our case, our e-commerce website).
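As a rough sketch of what that API call could look like from Python (the flow ID and port below are placeholders; check the "API" pane of your flow for the exact endpoint and payload, which may vary across LangFlow versions):

```python
import requests

# Placeholders: replace with the values shown in your flow's API pane.
LANGFLOW_URL = "http://localhost:7860"
FLOW_ID = "your-flow-id"

def ask_assistant(question: str) -> dict:
    """Send a chat message to the deployed flow and return the raw JSON reply."""
    payload = {
        "input_value": question,
        "input_type": "chat",
        "output_type": "chat",
    }
    response = requests.post(
        f"{LANGFLOW_URL}/api/v1/run/{FLOW_ID}",
        json=payload,
        timeout=60,
    )
    response.raise_for_status()
    return response.json()

# Example (requires a running LangFlow instance):
# print(ask_assistant("Which climbing shoes do you sell?"))
```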
You can also share your flow as a repeatable asset in the LangFlow store. You can now find mine in the store:
Conclusions
Multi-agent systems are definitely a leap forward in the field of generative AI, and they are starting to find concrete applications in enterprises. Deciding on your workflow strategy, as well as the "specialization" of your agents (one agent with many tools vs. many agents with few or no tools), is a key component of your planning strategy.
Despite the lack of a golden recipe, it is important to start experimenting and understanding the business impact of different workflow configurations to fully embrace this paradigm shift.