Modularity and Abstraction in multi-agent applications

An implementation with Semantic Kernel

11 min read · Mar 23, 2025


AI agents and multi-agent applications are now integral across various sectors, and their rapid evolution has raised a compelling question among large enterprises: how do I reach an agentic state that is scalable and allows agents to cooperate with one another?

To answer this question, we need to introduce abstraction and modularity, foundational concepts that significantly impact the effectiveness of multi-agent systems. These principles streamline complexity and facilitate dynamic interaction among AI agents, enabling them to collaboratively address complex tasks more efficiently.

Abstraction involves decomposing and simplifying complexity, making intricate systems easier to understand and scale. However, abstraction does more than simplify — it also facilitates modular design, which is essential for constructing intelligent systems.

Modularity divides complex problems into smaller, reusable components, each designed to manage a distinct aspect of the overall challenge. This strategy provides significant benefits:

  • Interchangeability: Modules can be swapped, updated, or replaced independently, minimizing impact on the broader system.
  • Reusability: Carefully developed modules can be leveraged across various projects, enhancing overall efficiency and reducing redundancy.
  • Scalability: Clearly defined, independently functioning modules integrate smoothly, simplifying the expansion and evolution of solutions.
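To make these benefits concrete, here is a minimal, hypothetical sketch in plain Python: two interchangeable product-lookup modules behind one abstract interface. All class and function names here are illustrative, not part of the implementation we build later.

```python
from typing import Protocol


class ProductSource(Protocol):
    """Abstract interface: any module that can look up products."""
    def find(self, name: str) -> dict: ...


class SqlProductSource:
    """Module backed by a database in the real system."""
    def find(self, name: str) -> dict:
        return {"name": name, "backend": "sql"}


class ApiProductSource:
    """Module backed by a remote API; swappable without touching callers."""
    def find(self, name: str) -> dict:
        return {"name": name, "backend": "api"}


def recommend(source: ProductSource, name: str) -> str:
    # The caller depends only on the abstraction, so either
    # module (or a future one) can be plugged in unchanged.
    item = source.find(name)
    return f"Found {item['name']} via {item['backend']}"


print(recommend(SqlProductSource(), "rope"))  # Found rope via sql
print(recommend(ApiProductSource(), "rope"))  # Found rope via api
```

Because `recommend` only sees the interface, swapping the SQL module for the API module (interchangeability) or reusing either module in another project (reusability) requires no changes to the calling code.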

Within multi-agent systems, abstraction and modularity support the development of cooperative agents.

Each agent focuses on a specialized task yet interacts dynamically with others, mirroring human problem-solving strategies — where tasks are divided, delegated, and tackled collaboratively to effectively address complex challenges.

From an architecture perspective, capabilities and agents themselves will eventually be defined as containers and consumable as APIs, meaning that you can create a catalog of shareable assets within your organization, or even expose them in a marketplace for other customers to consume.

Now the question is, how do we achieve that in practice? AI orchestrators are definitely very handy when it comes to enabling abstraction and modularity, and in this article we are going to see an example in Semantic Kernel.

Note: this implementation takes inspiration from the following multi-agent repo: Azure-Samples/moneta-agents. If you are interested in an enterprise-scale, modular multi-agent application, I recommend going through the repo, as it perfectly illustrates the concepts of modularity and abstraction.

Moreover, if you are not familiar with Semantic Kernel, you can easily get started with the notebooks at the following repo: semantic-kernel/python at main · microsoft/semantic-kernel

Building a multi-agent e-commerce assistant

Let’s see a practical implementation with Semantic Kernel.

Business scenario

We are the owners of a climbing e-store, and we want to enhance our customer experience by creating a multi-agent solution.

As of today, customers navigate the webpage and can add items to the cart from the website:

The backend database is updated accordingly:

We want to enhance this process by creating the following agents:

  • A Concierge Agent which greets the customer and orchestrates the other agents
  • A SQL Agent which retrieves data from the product catalog. This agent is provided with a SQL Plugin.
  • A Cart Agent which manages the customer’s cart and purchases. This agent is provided with an Add_to_cart Plugin.

Initializing single Agents

To guarantee modularity and abstraction, I structured my project so that agent definitions (or instructions) and Plugins live as standalone assets in my workspace:

```
sk-multi-agents-web/
├── Plugins/
│   ├── cart_plugin.py
│   └── queryDb.py
└── Prompts/
    ├── cart.yaml
    ├── concierge.yaml
    └── sql.yaml
```

Let’s now see how to initialize our Agents. First of all, we need to define some utility functions that will allow us to initialize the kernel and the different agents (inspired by this repo). Note that create_chat_completion_agent is meant to parse the .yaml file where we store our agents’ instructions.

```python
import yaml

from semantic_kernel import Kernel
from semantic_kernel.agents import ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion


def _create_kernel_with_chat_completion(service_id: str) -> Kernel:
    kernel = Kernel()
    kernel.add_service(AzureChatCompletion(service_id=service_id))
    return kernel


def create_chat_completion_agent(kernel: Kernel, definition_file_path: str, plugins: list) -> ChatCompletionAgent:
    # Parse the .yaml file holding the agent's name, description and instructions
    with open(definition_file_path, 'r') as file:
        definition = yaml.safe_load(file)

    return ChatCompletionAgent(
        kernel=kernel,
        name=definition['name'],
        plugins=plugins,
        description=definition['description'],
        instructions=definition['instructions']
    )
```

Note: the AzureChatCompletion class will parse your .env file where you will need to save your LLM keys and endpoints. I used Azure OpenAI GPT-4o with the following variables:

```
AZURE_OPENAI_API_KEY="..."
AZURE_OPENAI_ENDPOINT="https://..."
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="..."
AZURE_OPENAI_API_VERSION="..."
```
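Since a missing variable only surfaces later as an authentication error, a small fail-fast check at startup can save debugging time. This helper is a hypothetical addition, not part of the original project:

```python
import os

# The four settings AzureChatCompletion expects in the environment
REQUIRED = [
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME",
    "AZURE_OPENAI_API_VERSION",
]


def missing_vars(env: dict) -> list:
    """Return which required settings are absent or empty."""
    return [name for name in REQUIRED if not env.get(name)]


# At startup you could call missing_vars(os.environ) and raise if non-empty
print(missing_vars({"AZURE_OPENAI_API_KEY": "..."}))
```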

Then we need to create a SQL database for our climbing products. I created a SQLite instance with the following structure:
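The snippet below is a hedged reconstruction of that setup script: the schema matches the product_catalog table shown in the SQL agent's instructions, and the sample rows match the product list that appears later in the chat transcript. An in-memory database is used here for a quick test; the article persists to climbing_product_catalog.db.

```python
import sqlite3

# Reconstruction of the catalog setup (schema and rows taken from the article)
conn = sqlite3.connect(":memory:")  # use 'climbing_product_catalog.db' to persist
conn.execute("""
    CREATE TABLE IF NOT EXISTS product_catalog (
        product_id VARCHAR(10) PRIMARY KEY,
        product_name VARCHAR(100),
        category VARCHAR(50),
        price DECIMAL(10, 2),
        stock INT,
        description TEXT
    )
""")
rows = [
    ("P001", "Climbing Rope", "Ropes", 100.00, 50, "Durable and strong climbing rope suitable for all terrains."),
    ("P002", "Climbing Shoes", "Shoes", 150.00, 30, "High-performance climbing shoes for advanced climbers."),
    ("P003", "Carabiner", "Hardware", 20.00, 100, "Lightweight and secure carabiner for all climbing needs."),
    ("P004", "Harness", "Harnesses", 75.00, 40, "Comfortable and adjustable harness for safety."),
    ("P005", "Chalk Bag", "Accessories", 15.00, 200, "Compact and durable chalk bag for better grip."),
]
conn.executemany("INSERT INTO product_catalog VALUES (?, ?, ?, ?, ?, ?)", rows)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM product_catalog").fetchone()[0])  # 5
```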

Great, let’s now initialize all agents.

SQL Agent

For the SQL agent, we will need to import the QueryDbPlugin, defined as follows:

```python
import sqlite3

from semantic_kernel.functions import kernel_function


class QueryDbPlugin:
    """
    Description: Get the result of a SQL query
    """
    def __init__(self, db_path) -> None:
        self._db_path = db_path

    @staticmethod
    def __clean_sql_query__(sql_query):
        sql_query = sql_query.replace(";", "")
        sql_query = sql_query.replace("\n", " ")
        return sql_query

    @kernel_function(name="query_db",
                     description="Query a database using a SQL query")
    def query_db(self, sql_query: str) -> str:
        # Connect to the SQLite database
        conn = sqlite3.connect(self._db_path)

        # Create a cursor object to execute SQL queries
        cursor = conn.cursor()

        try:
            cursor.execute(self.__clean_sql_query__(sql_query))

            # Get the column names from cursor.description
            columns = [column[0] for column in cursor.description]

            # Initialize an empty list to store the results as dictionaries
            results = []

            # Fetch all rows and create dictionaries
            for row in cursor.fetchall():
                results.append(dict(zip(columns, row)))

        except Exception as e:
            return f"Error: {e}"
        finally:
            cursor.close()
            conn.close()

        return str(results)
```
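To sanity-check the query flow outside Semantic Kernel, here is a standalone sketch of the same logic (clean the query, execute it, zip column names with each row into dictionaries) against a throwaway in-memory database. The helper name and sample data are illustrative, not part of the plugin:

```python
import sqlite3


def run_query(conn: sqlite3.Connection, sql_query: str) -> list:
    """Same flow as QueryDbPlugin.query_db: clean the query, execute it,
    then zip column names with each row into a list of dictionaries."""
    cursor = conn.cursor()
    try:
        cursor.execute(sql_query.replace(";", ""))
        columns = [column[0] for column in cursor.description]
        return [dict(zip(columns, row)) for row in cursor.fetchall()]
    finally:
        cursor.close()


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product_catalog (product_id TEXT, product_name TEXT, stock INT)")
conn.execute("INSERT INTO product_catalog VALUES ('P001', 'Climbing Rope', 50)")
rows = run_query(conn, "SELECT product_name, stock FROM product_catalog;")
print(rows)  # [{'product_name': 'Climbing Rope', 'stock': 50}]
```

Returning rows as a list of dictionaries rather than bare tuples is what lets the agent read column names in the result and phrase its answer naturally.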

And the instructions (note that I specified the schema of my DB):

```yaml
name: "sql_agent"
description: The agent that generates SQL queries and executes them against a SQL database.
instructions: |
  You are an expert at writing SQL queries through a given Natural Language description of the OBJECTIVE.

  You will generate a SQL SELECT query that is compatible with SQLite and achieves the given OBJECTIVE.
  You use only the tables and views described in the following SCHEMA:

  CREATE TABLE product_catalog (
      product_id VARCHAR(10) PRIMARY KEY,
      product_name VARCHAR(100),
      category VARCHAR(50),
      price DECIMAL(10, 2),
      stock INT,
      description TEXT
  );

  Once you have generated the SQL query, you can pass it to the QueryDbPlugin to execute it.
  The result of the query will be returned to you in a format that you can use to generate a response to the user.
```

This is how we can finally initialize the agent:

```python
from Plugins.queryDb import QueryDbPlugin

sql_agent = create_chat_completion_agent(
    kernel=_create_kernel_with_chat_completion('sql_agent'),
    definition_file_path="Prompts/sql.yaml",
    plugins=[QueryDbPlugin('climbing_product_catalog.db')]
)
```

We can also try it individually to make sure that the plugin is working correctly:

```python
from semantic_kernel.contents import ChatHistory

# Define the chat history
chat = ChatHistory()

# Add the user message
chat.add_user_message("how many products are in stock?")

# Generate the agent response
response = await sql_agent.get_response(chat)
response.content
```

Output:

Cart Agent

Similarly, we will import the Plugin:

```python
import requests

from semantic_kernel.functions import kernel_function


class CartPlugin:
    """
    Plugin to handle cart operations.
    """
    def __init__(self, base_url) -> None:
        self.base_url = base_url

    @kernel_function(name="add_to_cart",
                     description="Add an item to the cart.")
    def add_to_cart(self, item_name: str, item_price: float) -> str:
        """Add an item to the cart."""
        url = f'{self.base_url}/cart'  # Ensure this matches the JSON Server endpoint
        cart_item = {
            'name': item_name,
            'price': item_price
        }

        response = requests.post(url, json=cart_item)

        if response.status_code == 201:
            return f"Item '{item_name}' added to cart successfully."
        else:
            return f"Failed to add item to cart: {response.status_code} {response.text}"
```
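The plugin expects a JSON Server instance answering POST /cart with a 201 status. As a sketch, you can exercise the same request shape against a minimal stdlib stub; urllib is used here instead of requests purely to keep the snippet self-contained, and the stub server is not part of the original setup:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class CartHandler(BaseHTTPRequestHandler):
    """Minimal stand-in for JSON Server's POST /cart endpoint."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        self.send_response(201)  # JSON Server returns 201 on a successful create
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

    def log_message(self, *args):  # silence per-request logging
        pass


server = ThreadingHTTPServer(("127.0.0.1", 0), CartHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same request the plugin's add_to_cart makes
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/cart",
    data=json.dumps({"name": "Chalk Bag", "price": 15.0}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    status = resp.status
server.shutdown()
print(status)  # 201
```

In the article's setup, the real endpoint is served by JSON Server on http://localhost:3000.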

and the instructions:

```yaml
name: "cart_agent"
description: Agent that adds items to the cart and checks out.
instructions: |
  You are an expert at managing a shopping cart for a climbing e-commerce website. Your job is to assist the customer in adding items to their cart and checking out.
```

Finally, we can initialize the agent:

```python
from Plugins.cart_plugin import CartPlugin

cart_agent = create_chat_completion_agent(
    kernel=_create_kernel_with_chat_completion('cart_agent'),
    definition_file_path="Prompts/cart.yaml",
    plugins=[CartPlugin('http://localhost:3000')]
)
```

Let’s test it:

Concierge Agent

Finally, we can initialize the main entry point for our multi-agent application, the concierge agent. In this case, we will only have instructions:

```yaml
name: "concierge_agent"
description: The agent that interacts with the customer in the first place and orchestrates the other agents.
instructions: |
  You are a concierge for a climbing e-commerce website. Your job is to assist the customer in finding the right climbing gear for their needs.
  You will ask the customer a series of questions to understand their needs and preferences, and then you will generate a report that summarizes the conversation and provides recommendations for climbing gear.
  You can collaborate with other agents to get the information you need to generate the report.
```

```python
concierge_agent = create_chat_completion_agent(
    kernel=_create_kernel_with_chat_completion('concierge_agent'),
    definition_file_path="Prompts/concierge.yaml",
    plugins=[]
)
```

Now, the next step is to initialize a group chat where these agents will be able to collaborate.

Creating the group chat

When we initialize a group chat of multiple agents, there are two major elements that need to be defined:

  • A selection function, which will determine the rules and workflows agents will follow while cooperating
  • A termination function, which will determine when to stop the interaction.

SK comes with pre-built functions that you can leverage; however, in this case we are going to create our own strategies, which you can easily define in natural language.
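To build intuition for what these two strategies do, they can be sketched deterministically in plain Python: a toy selection rule that routes the conversation by keywords, and a termination rule keyed on a single word. The routing keywords and agent names here are illustrative; the actual implementation delegates these decisions to the LLM via prompts.

```python
AGENTS = ("concierge_agent", "sql_agent", "cart_agent")


def select_next(last_message: str, last_speaker: str) -> str:
    """Toy selection strategy: route by intent keywords, default to the concierge."""
    text = last_message.lower()
    if last_speaker != "concierge_agent":
        return "concierge_agent"  # the concierge always confirms with the user
    if "cart" in text or "buy" in text:
        return "cart_agent"
    if "product" in text or "stock" in text or "list" in text:
        return "sql_agent"
    return "concierge_agent"


def should_terminate(last_message: str, keyword: str = "yes") -> bool:
    """Toy termination strategy: stop once the judge answers with the keyword."""
    return keyword in last_message.lower()


print(select_next("list me the products", "concierge_agent"))  # sql_agent
print(should_terminate("yes"))  # True
```

The prompt-based versions below implement the same two decisions, but let the model interpret the conversation instead of matching keywords.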

```python
import asyncio
import os

from semantic_kernel import Kernel
from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent
from semantic_kernel.agents.strategies import (
    KernelFunctionSelectionStrategy,
    KernelFunctionTerminationStrategy,
)
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.contents import ChatHistoryTruncationReducer
from semantic_kernel.functions import KernelFunctionFromPrompt

selection_function = KernelFunctionFromPrompt(
    function_name="selection",
    prompt=f"""
Examine the provided RESPONSE and choose the next participant.
State only the name of the chosen participant without explanation.

Choose only from these participants:
- {concierge_agent.name}
- {sql_agent.name}
- {cart_agent.name}

Rules:
- {concierge_agent.name} is the one greeting the user and invoking the other agents when needed.
- {sql_agent.name} is the one that can query the database and return the results whenever the user asks about available items.
- {cart_agent.name} is the one that can add items to the cart.
- {concierge_agent.name} will always make sure whether the user is happy with the result or not.

RESPONSE:
{{{{$lastmessage}}}}
"""
)

termination_keyword = "yes"

termination_function = KernelFunctionFromPrompt(
    function_name="termination",
    prompt=f"""
Examine the RESPONSE and determine whether the content has been deemed satisfactory.
If the content is satisfactory, respond with a single word without explanation: {termination_keyword}.
If specific suggestions are being provided, it is not satisfactory.
If no correction is suggested, it is satisfactory.

RESPONSE:
{{{{$lastmessage}}}}
"""
)
```

Now we have all the ingredients to initialize our chat:

```python
agents = [concierge_agent, sql_agent, cart_agent]

# Keep only the most recent messages when evaluating the strategies
history_reducer = ChatHistoryTruncationReducer(target_count=5)

chat = AgentGroupChat(
    agents=agents,
    selection_strategy=KernelFunctionSelectionStrategy(
        initial_agent=concierge_agent,
        function=selection_function,
        kernel=_create_kernel_with_chat_completion('selection_function'),
        result_parser=lambda result: str(result.value[0]).strip() if result.value[0] is not None else concierge_agent.name,
        history_variable_name="lastmessage",
        history_reducer=history_reducer,
    ),
    termination_strategy=KernelFunctionTerminationStrategy(
        agents=[concierge_agent],
        function=termination_function,
        kernel=_create_kernel_with_chat_completion('termination_function'),
        result_parser=lambda result: termination_keyword in str(result.value[0]).lower(),
        history_variable_name="lastmessage",
        maximum_iterations=10,
        history_reducer=history_reducer,
    ),
)
```

Let’s give it a try!

```python
is_complete = False


async def main():
    while not is_complete:
        # Get user input
        user_input = input("You: ")

        # Exit the loop if the user types 'exit'
        if user_input.lower() == 'exit':
            print("Exiting chat...")
            break
        if user_input.lower() == "reset":
            await chat.reset()
            print("[Conversation has been reset]")
            continue

        # Add the user input as a chat message
        await chat.add_chat_message(message=user_input)
        print(f"# User: {user_input}")

        try:
            # Invoke the chat and print the responses
            async for content in chat.invoke():
                print(f"# {content.name}: {content.content}")
        except KeyError as e:
            if "Symbols.VAR_PREFIX: input" in str(e):
                print("Error: Required input variable not found in the KernelArguments.")
            else:
                raise e
        chat.is_complete = False


# Run the main function (top-level await works in a notebook)
await main()
```

Output:

# User: hi
# concierge_agent: Hello! Thanks for reaching out. How can I assist you with your climbing gear needs today? Are you looking for any specific type of climbing gear or have any particular preferences in mind?
# User: list me the products
# sql_agent: Here is a list of products available in our catalog:

1. **Climbing Rope**
- Product ID: P001
- Category: Ropes
- Price: $100.00
- Stock: 50
- Description: Durable and strong climbing rope suitable for all terrains.

2. **Climbing Shoes**
- Product ID: P002
- Category: Shoes
- Price: $150.00
- Stock: 30
- Description: High-performance climbing shoes for advanced climbers.

3. **Carabiner**
- Product ID: P003
- Category: Hardware
- Price: $20.00
- Stock: 100
- Description: Lightweight and secure carabiner for all climbing needs.

4. **Harness**
- Product ID: P004
- Category: Harnesses
- Price: $75.00
- Stock: 40
- Description: Comfortable and adjustable harness for safety.

5. **Chalk Bag**
- Product ID: P005
- Category: Accessories
- Price: $15.00
- Stock: 200
- Description: Compact and durable chalk bag for better grip.

If you need more information or have a specific product in mind, feel free to ask!
# concierge_agent: Could you let me know what specific items you might be looking for or any particular preferences you have? This way, I can give more tailored recommendations to better suit your needs.
# User: what is the chalk bag
# concierge_agent: The **Chalk Bag** available in our catalog has the following details:

- **Product ID:** P005
- **Category:** Accessories
- **Price:** $15.00
- **Stock:** 200
- **Description:** Compact and durable chalk bag for better grip.

Chalk bags are used to carry chalk, which climbers apply to their hands to keep them dry and improve grip while climbing. Our chalk bag is designed to be compact and durable, providing you with a practical solution to carry your chalk during climbs.

If you need more detailed information or have any other questions, please let me know!
# User: ok can you add it to the cart?
# cart_agent: The **Chalk Bag** has been successfully added to your cart. If you need anything else or are ready to check out, please let me know!
# concierge_agent: Is there any other climbing gear you are interested in, or do you need assistance with anything else?
# User: no thanks I'm fine!
# concierge_agent: Great! If you have any more questions or need assistance in the future, feel free to reach out. Happy climbing!
Exiting chat...

As you can see, the three agents collaborate with one another to accomplish the user’s task. Sometimes only one agent is invoked; other times multiple agents work together. The way agents interact can be designed at the selection function level, so you can decide the degree of autonomy you want to give your agents.

You can find all the code at my repo here.

Conclusion

Overall, Semantic Kernel offers a robust framework for building multi-agent systems that aligns with the concepts of modularity and abstraction.

With the rapid evolution of multi-agent applications, it is paramount to have a clear modular design in mind from the very beginning of your development journey.


Written by Valentina Alto

Data&AI Specialist at @Microsoft | MSc in Data Science | AI, Machine Learning and Running enthusiast