[TOC]
Agents-SDK is a portable, high-performance C++ framework for building on-device, agentic AI systems — think LangChain for the edge. This SDK is purpose-built for developers who want to create local-first AI agents that can reason, plan, and act without relying on the cloud.
- Modular Architecture — Compose agents from interchangeable components.
- Multi-Provider Support — Connect to multiple LLM providers seamlessly:
  - OpenAI (GPT-5, GPT-4o, GPT-4)
  - Anthropic (Claude 3 family models: Opus, Sonnet, Haiku)
  - Google (Gemini family models: Pro, Flash)
  - Ollama/llama-cpp (local models like Llama, Mistral, etc.)
- Optimized for Speed and Memory — Built in C++ with a focus on performance.
- Built-In Workflow Patterns
  - Prompt Chaining
  - Routing
  - Parallelization
  - Orchestrator-Workers
  - Evaluator-Optimizer
- Autonomous Agents — Supports modern reasoning strategies:
  - ReAct (Reason + Act)
  - CoT (Chain-of-Thought) [In Development]
  - Plan and Execute
  - Zero-Shot
  - Reflexion [In Development]
- Extensible Tooling System — Plug in your own tools or use built-in ones (Web Search, Wikipedia, Python Executor, etc.).
To build the SDK, you will need:

- A C++20-compatible compiler (GCC 14+, Clang 17+, MSVC 2022+)
- Bazel 8.3.1+ (https://bazel.build/install)
- Dependencies (already provided for convenience):
  - nlohmann/json
  - spdlog
- Optional: `python3` in PATH to use the Python execution tool
- Clone the repository:

  ```sh
  git clone https://github.com/RunEdgeAI/agents-sdk.git
  ```

- Navigate to the SDK:

  ```sh
  cd agents-sdk
  ```

- Obtain API keys:
  - For OpenAI models: Get an API key from OpenAI's platform
  - For Anthropic models: Get an API key from Anthropic's console
  - For Google models: Get an API key from Google AI Studio
  - For the web search tool: Get an API key from Brave Search
- Build everything in the workspace:

  ```sh
  bazel build ...
  ```

You can configure API keys and other settings in three ways:
- Using a `.env` file (a filled-in example follows this list):

  ```sh
  # Copy the template
  cp .env.template .env
  # Edit the file with your API keys
  vi .env  # or use any editor
  ```
- Using environment variables:

  ```sh
  export OPENAI_API_KEY=your_api_key_here
  export ANTHROPIC_API_KEY=your_api_key_here
  export GEMINI_API_KEY=your_api_key_here
  export WEBSEARCH_API_KEY=your_api_key_here
  ```
- Passing API keys as command-line arguments (not recommended for production):

  ```sh
  bazel run examples:simple_agent -- your_api_key_here
  ```
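For reference, a filled-in `.env` typically contains one key per line. The variable names below mirror the environment variables listed above; confirm them against `.env.template`:

```sh
OPENAI_API_KEY=your_api_key_here
ANTHROPIC_API_KEY=your_api_key_here
GEMINI_API_KEY=your_api_key_here
WEBSEARCH_API_KEY=your_api_key_here
```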
The framework will check for API keys in the following order:

1. `.env` file
2. Environment variables
3. Command-line arguments
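If you prefer to resolve the key in your own code instead, a minimal sketch could read it from the environment and pass it to the `createLLM` factory explicitly. The header, provider string, and model name below are taken from the quick-start example that follows; treat them as assumptions to verify against your setup.

```cpp
#include <agents-cpp/llm_interface.h>

#include <cstdlib>
#include <iostream>

using namespace agents;

int main() {
    // Read the key from the environment rather than hard-coding it.
    const char* key = std::getenv("ANTHROPIC_API_KEY");
    if (key == nullptr) {
        std::cerr << "ANTHROPIC_API_KEY is not set" << std::endl;
        return 1;
    }

    // createLLM(provider, api_key, model), as used in the quick-start example below.
    auto llm = createLLM("anthropic", key, "claude-3-5-sonnet-20240620");
    return 0;
}
```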
Here's a simple example of creating and running an autonomous agent:
```cpp
#include <agents-cpp/context.h>
#include <agents-cpp/agents/autonomous_agent.h>
#include <agents-cpp/llm_interface.h>
#include <agents-cpp/tools/tool_registry.h>

#include <iostream>
#include <memory>

using namespace agents;

int main() {
    // Create LLM
    auto llm = createLLM("anthropic", "<your_api_key_here>", "claude-3-5-sonnet-20240620");

    // Create agent context
    auto context = std::make_shared<Context>();
    context->setLLM(llm);

    // Register tools
    context->registerTool(tools::createWebSearchTool(llm));

    // Create the agent
    AutonomousAgent agent(context);
    agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::REACT);

    // Run the agent
    JsonObject result = agent.run("Research the latest developments in quantum computing");

    // Access the result
    std::cout << result["answer"].get<std::string>() << std::endl;

    return 0;
}
```

The simplest way to start is with the `simple_agent` example, which creates a basic autonomous agent that can use tools to answer questions:
- Navigate to the release directory:

  ```sh
  cd agents-sdk
  ```

- From the release directory, run the example:

  ```sh
  bazel run examples:simple_agent -- your_api_key_here
  ```

  Alternatively, you can set your API key as an environment variable:

  ```sh
  export OPENAI_API_KEY=your_api_key_here
  bazel run examples:simple_agent
  ```
- Once running, you'll be prompted to enter a question or task. For example:

  ```
  Enter a question or task for the agent (or 'exit' to quit):
  > What's the current status of quantum computing research?
  ```
- The agent will:
  - Break down the task into steps
  - Use tools (like web search) to gather information
  - Ask for your approval before proceeding with certain steps (if human-in-the-loop is enabled)
  - Provide a comprehensive answer

- Example output:
  ```
  Step: Planning how to approach the question
  Status: Completed
  Result: { "plan": "1. Search for recent quantum computing research developments..." }
  --------------------------------------
  Step: Searching for information on quantum computing research
  Status: Waiting for approval
  Context: {"search_query": "current status quantum computing research 2024"}
  Approve this step? (y/n): y
  ...
  ```
You can modify `examples/simple_agent.cpp` to explore different configurations (a combined sketch follows this list):
- Change the LLM provider:

  ```cpp
  // For Anthropic Claude
  auto llm = createLLM("anthropic", api_key, "claude-3-5-sonnet-20240620");

  // For Google Gemini
  auto llm = createLLM("google", api_key, "gemini-pro");
  ```
- Add different tools:

  ```cpp
  // Add more built-in tools
  context->registerTool(tools::createCalculatorTool());
  context->registerTool(tools::createPythonCodeExecutionTool());
  ```
- Change the planning strategy:

  ```cpp
  // Use ReAct planning (reasoning + acting)
  agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::REACT);

  // Or use CoT planning (chain-of-thought)
  agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::COT);
  ```
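Putting those options together, a modified `main()` might look like the sketch below. It only reuses calls already shown in this README; treat the provider string, model name, and tool factories as assumptions to verify against the shipped examples.

```cpp
#include <agents-cpp/context.h>
#include <agents-cpp/agents/autonomous_agent.h>
#include <agents-cpp/llm_interface.h>
#include <agents-cpp/tools/tool_registry.h>

#include <iostream>
#include <memory>
#include <string>

using namespace agents;

int main(int argc, char** argv) {
    // API key passed on the command line, as in the examples.
    std::string api_key = (argc > 1) ? argv[1] : "";

    // Swap the provider: Google Gemini instead of Anthropic.
    auto llm = createLLM("google", api_key, "gemini-pro");

    auto context = std::make_shared<Context>();
    context->setLLM(llm);

    // Register additional built-in tools.
    context->registerTool(tools::createCalculatorTool());
    context->registerTool(tools::createPythonCodeExecutionTool());

    // Use chain-of-thought planning instead of ReAct.
    AutonomousAgent agent(context);
    agent.setPlanningStrategy(AutonomousAgent::PlanningStrategy::COT);

    JsonObject result = agent.run("Estimate the compound interest on $1,000 at 5% over 10 years");
    std::cout << result["answer"].get<std::string>() << std::endl;
    return 0;
}
```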
The repository includes several examples demonstrating different workflow patterns:
| Example | Description |
|---|---|
| `simple_agent` | Basic autonomous agent |
| `prompt_chain_example` | Prompt chaining workflow |
| `routing_example` | Multi-agent routing |
| `parallel_example` | Parallel task execution |
| `orchestrator_example` | Orchestrator–worker pattern |
| `evaluator_optimizer_example` | Evaluator–optimizer feedback loop |
| `multimodal_example` | Support for voice, audio, image, docs |
| `autonomous_agent_example` | Full-featured autonomous agent |
Run any of the available examples:

```sh
bazel run examples:<example_name> -- your_api_key_here
```

The project is organized as follows:

- `lib/`: Public library for SDK
  - `include/agents-cpp/`: Public headers
    - `types.h`: Common type definitions
    - `context.h`: Context for agent execution
    - `llm_interface.h`: Interface for LLM providers
    - `tool.h`: Tool interface
    - `memory.h`: Agent memory interface
    - `workflow.h`: Base workflow interface
    - `agent.h`: Base agent interface
    - `workflows/`: Workflow pattern implementations
    - `agents/`: Agent implementations
    - `tools/`: Tool implementations
    - `llms/`: LLM provider implementations
- `bin/examples/`: Example applications
You can define your own tools with `createTool` and register them with the agent context:

```cpp
auto custom_tool = createTool(
"calculator",
"Evaluates mathematical expressions",
{
{"expression", "The expression to evaluate", "string", true}
},
[](const JsonObject& params) -> ToolResult {
std::string expr = params["expression"];
// Implement calculation logic here
double result = evaluate(expr);
return ToolResult{
true,
"Result: " + std::to_string(result),
{{"result", result}}
};
}
);

context->registerTool(custom_tool);
```
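The `evaluate` call above is just a placeholder for your own logic. Purely for illustration, a minimal stand-in could parse a `<number> <operator> <number>` expression as sketched below; a real calculator tool would use a proper expression parser.

```cpp
#include <sstream>
#include <string>

// Illustrative stand-in for the evaluate() placeholder above:
// handles only "<number> <op> <number>" with op in {+, -, *, /}.
static bool evaluate_simple(const std::string& expr, double& out) {
    std::istringstream in(expr);
    double lhs = 0.0, rhs = 0.0;
    char op = '+';
    if (!(in >> lhs >> op >> rhs)) {
        return false;
    }
    switch (op) {
        case '+': out = lhs + rhs; return true;
        case '-': out = lhs - rhs; return true;
        case '*': out = lhs * rhs; return true;
        case '/': out = lhs / rhs; return true;
        default:  return false;
    }
}
```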
You can create custom workflows by extending the `Workflow` base class or combining existing workflows:

```cpp
class CustomWorkflow : public Workflow {
public:
    CustomWorkflow(std::shared_ptr<Context> context)
        : Workflow(context) {}

    JsonObject run(const std::string& input) override {
        // Implement your custom workflow logic here,
        // then return the workflow's output.
        JsonObject result;
        return result;
    }
};
```
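As a rough usage sketch, a custom workflow can be driven the same way the quick-start drives an agent. This relies only on the constructor and `run()` signature shown above, plus the quick-start headers; `dump()` assumes `JsonObject` is backed by nlohmann/json.

```cpp
// Drive the CustomWorkflow defined above.
int main(int argc, char** argv) {
    std::string api_key = (argc > 1) ? argv[1] : "";

    auto context = std::make_shared<Context>();
    context->setLLM(createLLM("anthropic", api_key, "claude-3-5-sonnet-20240620"));

    CustomWorkflow workflow(context);
    JsonObject output = workflow.run("Summarize recent developments in edge AI");
    std::cout << output.dump(2) << std::endl;
    return 0;
}
```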
Don't let infrastructure slow you down. Our Pro version helps accelerate your roadmap with:

- MCP Support: Enable your agent to utilize local and remote MCPs.
- Premium Tools: Access the complete set of natively supported tools, including weather, research, Wolfram Alpha, and more.
- Voice SDK: Access to Edge AI's Speech-to-Text, Text-to-Speech, and Voice-Activity-Detection libraries and models.
Questions or feedback? Reach out:

- Email: support@runedge.ai
- Discord: https://discord.gg/D5unWmt8
This implementation is inspired by Anthropic's article "Building effective agents" and re-engineered in C++ for real-time, low-overhead usage on edge devices.
This project is licensed under an evaluation license; see the LICENSE file for details.
The future of AI is on-device
Start with our samples and discover how we can empower the next generation of AI applications.