The following article originally appeared in Hugo Bowne-Anderson's newsletter, Vanishing Gradients, and is republished here with the author's permission.
In this post, we'll build two AI agents from scratch in Python. One will be a coding agent, the other a search agent.
Why, then, have I called this post "Build a General-Purpose AI Agent in 131 Lines of Python"? Well, as it turns out, coding agents are actually general-purpose agents in some pretty surprising ways.
What I mean by this is that once you have an agent that can write code, it can:
- Do a huge number of things you don't usually think of as involving code, and
- Extend itself to do even more things.
It's more accurate to think of coding agents as "computer-using agents" that happen to be great at writing code. That doesn't mean you should always build a general-purpose agent, but it's worth understanding what you're actually building when you give an LLM shell access. That's also why we'll build a search agent in this post: to show that the pattern works regardless of what you're building.
For example, the coding agent we'll build below has four tools: read, write, edit, and bash.
It can do:
- File/life organization: Clean your desktop, sort downloads by type, rename vacation photos with dates, find and delete duplicates, organize receipts into folders…
- Personal productivity: Search all your notes for something you half-remember, compile a packing list from past trips, find all PDFs containing "tax" from last year…
- Media management: Rename a season of TV episodes properly, convert images to different formats, extract audio from videos, resize photos for social media…
- Writing and content: Combine multiple docs into one, convert between formats, find-and-replace across many files…
- Data wrangling: Turn a messy CSV into a clean address book, extract emails from a pile of files, merge spreadsheets from different sources…
This is a small subset of what's possible. It's also the reason Claude Cowork looked promising and why OpenClaw has taken off the way it did.
So how can you build this? In this post, I'll show you how to build a minimal version.
Agents are just LLMs with tools in a loop
Agents are just LLMs with tools in a conversation loop, and once you know the pattern, you'll be able to build all kinds of agents with it:
As Ivan Leo wrote,
The barrier to entry is remarkably low: 30 minutes and you have an AI that can understand your codebase and make edits just by talking to it.
The goal here is to show that the pattern is the same regardless of what you're building an agent for. Coding agent, search agent, browser agent, email agent, database agent: they all follow the same structure. The only difference is the tools you give them.
Part 1: The coding agent
We'll start with a coding agent that can read, write, and execute code. As noted, the ability to write and execute code with bash also turns a "coding agent" into a "general-purpose agent." With shell access, it can do anything you can do from a terminal:
- Sort and organize your local filesystem
- Clean up your desktop
- Batch rename photos
- Convert file formats
- Manage Git repos across multiple projects
- Install and configure software
You can find the code here.
Check out Ivan Leo's post for how to do this in JavaScript and Thorsten Ball's post for how to do it in Go.
Setup
Start by creating our project:

We'll be using Anthropic here. Feel free to use your LLM of choice. For bonus points, use Pydantic AI (or a similar library) to get a consistent interface across the various LLM providers. That way you can use the same agentic framework for both Claude and Gemini!
Make sure you've got an Anthropic API key set as the ANTHROPIC_API_KEY environment variable.
We'll build our agent in four steps:
- Hook up our LLM
- Add a tool that reads files, then more tools: write, edit, and bash
- Build the agentic loop
- Build the conversational loop
1. Hook up our LLM


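The original listing for this step isn't reproduced here, so below is a minimal sketch of what it could look like, assuming the official anthropic Python SDK. The model ID and the `build_messages` helper are illustrative choices, not from the article:

```python
import sys

MODEL = "claude-sonnet-4-5"  # assumed model ID; substitute your own


def build_messages(prompt: str) -> list:
    """Build the messages payload for a single user turn."""
    return [{"role": "user", "content": prompt}]


if __name__ == "__main__":
    # Requires `pip install anthropic` and ANTHROPIC_API_KEY in the environment
    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(
        model=MODEL,
        max_tokens=1024,
        messages=build_messages(" ".join(sys.argv[1:])),
    )
    print(response.content[0].text)
```

Run it as `python agent.py "your question here"` and the reply prints to stdout.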
Text in, text out. Nice! Now let's give it a tool.
2. Add a tool (read)
We'll start by implementing a tool called read, which will allow the agent to read files from the filesystem. In Python, we can use Pydantic for schema validation, which also generates JSON schemas we can provide to the API:

The Pydantic model gives us two things: validation and a JSON schema. We can see what the schema looks like:


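A sketch of what that model might look like, assuming Pydantic v2 (the class and field names are illustrative):

```python
from pydantic import BaseModel, Field


class ReadArgs(BaseModel):
    """Arguments for the read tool."""

    file_path: str = Field(description="Path of the file to read")


# Validation: this parses into a typed object
args = ReadArgs(file_path="notes.txt")

# And the JSON schema we can hand to the API
schema = ReadArgs.model_json_schema()
```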
We wrap this into a tool definition that Claude understands:

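A minimal sketch of such a definition, with the input schema written out by hand. Anthropic's Messages API expects a name, a description (which the model matches on), and a JSON `input_schema`; the description wording here is my own:

```python
READ_TOOL = {
    "name": "read",
    "description": "Read the contents of a file at the given path.",
    "input_schema": {
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "Path of the file to read",
            },
        },
        "required": ["file_path"],
    },
}
```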
Then we add tools to the API call, handle the tool request, execute it, and send the result back:

Let's see what happens when we run it:

This script calls the Claude API with a user query passed via the command line. It sends the query, gets a response, and prints it.
Note that the LLM matched on the tool description: Accurate, specific descriptions are key! It's also worth mentioning that we've made two LLM calls here:
- One in which the tool is called
- A second in which we send the result of the tool call back to the LLM to get the final answer
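The shape of the conversation across those two calls looks roughly like this (the ID and file contents are made up for illustration):

```python
# First call: the model replies with a tool_use block instead of text.
assistant_turn = {
    "role": "assistant",
    "content": [
        {"type": "tool_use", "id": "toolu_01", "name": "read",
         "input": {"file_path": "notes.txt"}},
    ],
}

# Second call: we append the tool result as a user turn and ask again.
tool_result_turn = {
    "role": "user",
    "content": [
        {"type": "tool_result", "tool_use_id": "toolu_01",
         "content": "Buy milk. Call Sam."},
    ],
}

conversation = [
    {"role": "user", "content": "What's in notes.txt?"},
    assistant_turn,
    tool_result_turn,
    # ...the model's final text answer comes back from the second call
]
```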
This often trips up people building agents for the first time, and Google has made a nice visualization of what we're actually doing:

2a. Add more tools (write, edit, bash)
We have a read tool, but a coding agent needs to do more than read. It needs to:
- Write new files
- Edit existing ones
- Execute code to test it
That's three more tools: write, edit, and bash.
Same pattern as read. First the schemas:

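A sketch of the three schemas, again assuming Pydantic v2; the field names (in particular `old_str`/`new_str` for edit) are my own choices:

```python
from pydantic import BaseModel, Field


class WriteArgs(BaseModel):
    file_path: str = Field(description="Path of the file to create or overwrite")
    content: str = Field(description="Text to write to the file")


class EditArgs(BaseModel):
    file_path: str = Field(description="Path of the file to edit")
    old_str: str = Field(description="Exact text to replace")
    new_str: str = Field(description="Replacement text")


class BashArgs(BaseModel):
    command: str = Field(description="Shell command to run")
```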
Then the executors:

And the tool definitions, along with the code that runs whichever one Claude picks:

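One way to sketch this: a small helper that builds the Messages API tool shape, plus a dispatcher that takes the executors (from the previous step) as an injected dict, so the routing logic stands on its own. The descriptions and the all-string-parameters simplification are illustrative:

```python
def tool_def(name: str, description: str, properties: dict) -> dict:
    """Build a tool definition in the Messages API shape.

    Simplification: every parameter is a string, which holds for all
    four tools here.
    """
    return {
        "name": name,
        "description": description,
        "input_schema": {
            "type": "object",
            "properties": {
                k: {"type": "string", "description": v}
                for k, v in properties.items()
            },
            "required": list(properties),
        },
    }


TOOLS = [
    tool_def("read", "Read a file from the filesystem.",
             {"file_path": "Path of the file to read"}),
    tool_def("write", "Create or overwrite a file.",
             {"file_path": "Path to write", "content": "Text to write"}),
    tool_def("edit", "Replace text in an existing file.",
             {"file_path": "Path to edit", "old_str": "Text to replace",
              "new_str": "Replacement text"}),
    tool_def("bash", "Run a shell command and return its output.",
             {"command": "Shell command to run"}),
]


def execute_tool(name: str, tool_input: dict, executors: dict) -> str:
    """Run whichever tool Claude picked; executors maps name -> callable."""
    if name not in executors:
        return f"Unknown tool: {name}"
    return executors[name](**tool_input)
```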
The bash tool is what makes this actually useful: Claude can now write code, run it, see errors, and fix them. But it's also dangerous. This tool could delete your entire filesystem! Proceed with caution: Run it in a sandbox, a container, or a VM.
Again, bash is what turns a "coding agent" into a "general-purpose agent." With shell access, it can do anything you can do from a terminal:
- Sort and organize your local filesystem
- Clean up your desktop
- Batch rename photos
- Convert file formats
- Manage Git repos across multiple projects
- Install and configure software
It was actually "Pi: The Minimal Agent Inside OpenClaw" that inspired this example.
Try asking Claude to edit a file: It often wants to read it first to see what's there. But our current code only handles one tool call. That's where the agentic loop comes in.
3. Build the agentic loop
Right now Claude can only call one tool per request. But real tasks need multiple steps: read a file, edit it, run it, see the error, fix it. We need a loop that lets Claude keep calling tools until it's done.
We wrap the tool handling in a while True loop:

Note that we're sending the entire accumulated message history as we progress through loop iterations. When building this out further, you'll want to engineer and manage your context more effectively. (See below for more on this.)
Let's try a multistep task:

4. Build the conversational loop
Right now the agent handles one query and exits. But we want a back-and-forth conversation: Ask a question, get an answer, ask a follow-up. We need an outer loop that keeps asking for input.
We wrap everything in a while True:

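A sketch of the outer loop. `run_turn(messages) -> str` stands in for the agentic loop from the previous step; input and output are injected so the REPL can be exercised in tests (the `"exit"`/`"quit"` convention is my own):

```python
def chat(run_turn, get_input=input, reply=print):
    """Conversational loop: the messages list persists across turns."""
    messages = []
    while True:
        try:
            user = get_input("> ")
        except EOFError:
            break  # Ctrl-D exits cleanly
        if user.strip().lower() in {"exit", "quit"}:
            break
        # The shared messages list is what gives the agent its memory
        messages.append({"role": "user", "content": user})
        reply(run_turn(messages))
```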
The messages list persists across turns, so Claude remembers context. That's the complete coding agent.
Once again we're simply appending all previous messages, which means the context will grow quite quickly!
A note on agent harnesses
An agent harness is the scaffolding and infrastructure that wraps around an LLM to turn it into an agent. It handles:
- The loop: prompting the model, parsing its output, executing tools, feeding results back
- Tool execution: actually running the code/commands the model asks for
- Context management: what goes in the prompt, token limits, history
- Safety/guardrails: confirmation prompts, sandboxing, disallowed actions
- State: keeping track of the conversation, files touched, etc.
And more.
Think of it like this: The LLM is the brain; the harness is everything else that lets it actually do things.
What we've built above is the hello world of agent harnesses. It covers the loop, tool execution, and basic context management. What it doesn't have: safety guardrails, token limits, persistence, or even a system prompt!
When building out from this foundation, I encourage you to follow the paths of:
- The Pi coding agent, which adds context loading (AGENTS.md from multiple directories), persistent sessions you can resume and branch, and an extensibility system (skills, extensions, prompts)
- OpenClaw, which goes further: a persistent daemon (always-on, not invoked), chat as the interface (Telegram, WhatsApp, etc.), file-based continuity (SOUL.md, MEMORY.md, daily logs), proactive behavior (heartbeats, cron), preintegrated tools (browser, subagents, device control), and the ability to message you without being prompted
Part 2: The search agent
To really show you that the agentic loop is what powers any agent, we'll now build a search agent (inspired by a podcast I did with search legends John Berryman and Doug Turnbull). We'll use Gemini for the LLM and Exa for web search. You can find the code here.
But first, the astute reader may have an interesting question: If a coding agent really is a general-purpose agent, why would anyone want to build a search agent when we could just get a coding agent to extend itself into a search agent? Well, because if you want to build a search agent for a business, you're not going to do it by building a coding agent first… So let's build it!
Setup
As before, we'll build this step-by-step. Start by creating our project:

Set GEMINI_API_KEY (from Google AI Studio) and EXA_API_KEY (from exa.ai) as environment variables.
We'll build our agent in four steps (the same four steps as always):
- Hook up our LLM
- Add a tool (web_search)
- Build the agentic loop
- Build the conversational loop
1. Hook up our LLM


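A minimal sketch for this step, assuming the google-genai SDK (`pip install google-genai`, authenticated via GEMINI_API_KEY). The model ID and the argv helper are illustrative:

```python
def query_from_argv(argv: list) -> str:
    """Join command-line words into one query, with a fallback default."""
    return " ".join(argv[1:]) or "Hello!"


if __name__ == "__main__":
    import sys
    from google import genai

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # assumed model ID; substitute your own
        contents=query_from_argv(sys.argv),
    )
    print(response.text)
```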
2. Add a tool (web_search)
Gemini can answer from its training data, but we don't want that here! For current information, it needs to search the web. We'll give it a web_search tool that calls Exa.

The system instruction grounds the model, (ideally) forcing it to search instead of guessing. Note that you can configure Gemini to always use web_search, which is 100% reliable, but I wanted to show the pattern you can use with any LLM API.
We then send the tool call result back to Gemini:

3. Build the agentic loop
Some questions need multiple searches. "Compare X and Y" requires searching for X, then searching for Y. We need a loop that lets Gemini keep searching until it has enough information.


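The loop has the same shape as the coding agent's; only the tool changes. Here is a deliberately SDK-agnostic sketch (the real version would use the google-genai function-calling types, which are omitted here): `call_model(messages)` is assumed to return the answer text plus any pending search queries, with an empty query list meaning the model is done:

```python
def run_search_agent(call_model, web_search, messages: list) -> str:
    """Keep searching until the model answers in plain text.

    call_model(messages) -> (answer_text, pending_queries); an empty
    query list means the model has enough information to answer.
    """
    while True:
        answer, queries = call_model(messages)
        if not queries:
            return answer
        for query in queries:
            # Feed each search result back to the model as a tool message
            messages.append({"role": "tool", "name": "web_search",
                             "content": web_search(query)})
```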
4. Build the conversational loop
Same as before: We want back-and-forth conversation, not one query and exit. Wrap everything in an outer loop:

Messages persist across turns, so follow-up questions have context.
Extend it
The pattern is the same for both agents. Add any tool:
- web_search to the coding agent: Look things up while coding
- bash to the search agent: Act on what it finds
- browser: Navigate websites
- send_email: Communicate
- database_query: Run SQL
One thing we've been doing is showing how general-purpose a coding agent really can be. As Armin Ronacher wrote in "Pi: The Minimal Agent Inside OpenClaw":
Pi's entire idea is that if you want the agent to do something that it doesn't do yet, you don't go and download an extension or a skill or something like this. You ask the agent to extend itself. It celebrates the idea of writing and running code.
Conclusion
Building agents is simple. The magic isn't complex algorithms; it's the conversation loop and well-designed tools.
Both agents follow the same pattern:
- Hook up the LLM
- Add a tool (or multiple tools)
- Build the agentic loop
- Build the conversational loop
The only difference is the tools.
Thanks to Ivan Leo, Eleanor Berger, Mike Powers, Thomas Wiecki, and Mike Loukides for providing feedback on drafts of this post.
