About AgentSquare
AgentSquare is a novel LLM agent search framework that utilizes module evolution and recombination to efficiently optimize agent designs across
a modular design space consisting of Planning, Reasoning, Tool Use, and Memory modules.
The main performance of different methods across various tasks, together with the modules each agent contains, is shown below; please check our New Modules page for more details about the newly discovered modules.
Overview of AgentSquare
AgentSquare is a modular framework for designing and optimizing LLM agents.
We first propose a modular design space of LLM agents and extract four types of standardized modules:
planning, reasoning, tool use, and memory. Based on this, we design a novel LLM agent
search framework to automatically discover well-performing agents.
You may check
our paper for more details.
Modular Design Space of LLM Agents
Here is an illustration of the modular agent design space and agentic workflow (left) and the standardized IO interface of the four module types (right).
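To make the standardized IO interface more concrete, below is a minimal Python sketch of what shared interfaces for the four module types could look like. The class names and method signatures here are illustrative assumptions, not the exact interfaces in the AgentSquare codebase.

# Minimal sketch of a standardized module interface (illustrative only;
# class names and signatures are assumptions, not AgentSquare's exact API).
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class PlanningModule(ABC):
    """Decomposes a task into an ordered list of sub-tasks."""
    @abstractmethod
    def __call__(self, task: str) -> List[str]:
        ...

class ReasoningModule(ABC):
    """Produces the next action or answer for the current sub-task."""
    @abstractmethod
    def __call__(self, sub_task: str, context: Dict[str, Any]) -> str:
        ...

class ToolUseModule(ABC):
    """Selects and invokes an external tool, returning its observation."""
    @abstractmethod
    def __call__(self, instruction: str, tools: Dict[str, Any]) -> str:
        ...

class MemoryModule(ABC):
    """Stores and retrieves past experience to inform future steps."""
    @abstractmethod
    def add(self, record: Dict[str, Any]) -> None:
        ...
    @abstractmethod
    def retrieve(self, query: str, k: int = 3) -> List[Dict[str, Any]]:
        ...

Because every module of a given type shares the same signature, any combination of the four modules can be assembled into a working agent, which is what makes module recombination possible.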
AgentSquare Search Framework
The following figure gives an overview of the AgentSquare search framework. AgentSquare optimizes LLM agents through
the mechanisms of module evolution and recombination. We further introduce a performance predictor that implements
an in-context surrogate model for efficient evaluation of novel agents.
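As a rough illustration of how module evolution, recombination, and the performance predictor could interact, here is a simplified Python sketch of the search loop. The helper functions are hypothetical stubs standing in for the mechanisms described in the paper; they are not the actual AgentSquare API.

# Simplified sketch of an AgentSquare-style search loop. evolve_module,
# recombine, predict_performance, and evaluate are hypothetical placeholders.
import random

MODULE_TYPES = ("planning", "reasoning", "tooluse", "memory")

def evolve_module(agent, module_type, pool):
    # Placeholder: in AgentSquare this step prompts an LLM to write new
    # module variants; here we simply re-sample from the existing pool.
    return [dict(agent, **{module_type: random.choice(pool[module_type])})]

def recombine(agent, pool):
    # Placeholder: swap one module at a time with alternatives from the pool.
    return [dict(agent, **{t: m}) for t in MODULE_TYPES for m in pool[t]]

def predict_performance(agent, task):
    # Placeholder for the in-context surrogate model that scores
    # candidate agents cheaply before full evaluation.
    return random.random()

def evaluate(agent, task):
    # Placeholder for the full (expensive) task evaluation.
    return random.random()

def search(module_pool, init_agent, task, iterations=10, top_k=3):
    best_agent, best_score = init_agent, evaluate(init_agent, task)
    for _ in range(iterations):
        proposals = []
        for module_type in MODULE_TYPES:                 # module evolution
            proposals += evolve_module(best_agent, module_type, module_pool)
        proposals += recombine(best_agent, module_pool)  # module recombination

        # Rank proposals with the surrogate and fully evaluate only the top-k.
        ranked = sorted(proposals, key=lambda a: predict_performance(a, task),
                        reverse=True)
        for agent in ranked[:top_k]:
            score = evaluate(agent, task)
            if score > best_score:
                best_agent, best_score = agent, score
    return best_agent, best_score

The key design point this sketch tries to capture is that the surrogate predictor filters the many proposed agents so that only the most promising candidates incur the cost of full task evaluation.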
Contribute to AgentSquare
We invite you to contribute to AgentSquare by helping us standardize interfaces for additional agent modules.
By expanding our module pool, we can enhance the search process and discover even more optimized agents.
If you have ideas for new modules or interface improvements, we welcome your contributions to build a more robust framework.
Here is the guidance document for standardizing human-designed agents, and you can submit your standardized modules through this link.
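As a hypothetical example of what standardizing a human-designed technique might look like, the sketch below wraps chain-of-thought prompting as a reasoning module with the shared call signature; the llm_call helper is a stand-in for whichever LLM client you use, not part of AgentSquare.

# Hypothetical example of standardizing a human-designed reasoning technique
# (chain-of-thought prompting) behind the shared module interface.
def llm_call(prompt: str) -> str:
    # Placeholder: replace with an actual call to your LLM provider.
    return "stubbed model response"

class ChainOfThoughtReasoning:
    """Reasoning module that appends a step-by-step instruction to the prompt."""

    def __call__(self, sub_task: str, context: dict) -> str:
        prompt = (
            f"Task: {sub_task}\n"
            f"Relevant context: {context}\n"
            "Let's think step by step before giving the final answer."
        )
        return llm_call(prompt)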
Citation
@article{shang2024agentsquare,
title={AgentSquare: Automatic LLM Agent Search in Modular Design Space},
author={Shang, Yu and Li, Yu and Zhao, Keyu and Ma, Likai and Liu, Jiahe and Xu, Fengli and Li, Yong},
journal={arXiv preprint arXiv:2410.06153},
year={2024}
}