I have been using the wonderful Pi coding agent for the last few weeks for my personal software projects. Even though it is missing some “features” that other coding agents have (like language server integration) and it is not integrated deeply into an editor (like GitHub Copilot), it has been working wonderfully for me. And maybe that is precisely why it’s been so great.
By the way, it is also the coding engine that powers OpenClaw, which has been the hottest of hot AI topics over the last few days.
Agents for development
While I was very skeptical about coding agents at first, I have warmed up to them and now regularly work with them both at my job and in my personal time.
Even for dynamically typed languages like R, they work great now. GPT 5.2 / 5.3 Codex feels much more like serious help with complex, long-running tasks than earlier models did. I feel confident that when I give the agent a task, it will produce something usable.
I even find myself starting multiple agents at the same time - which seemed excessive and futuristic to me just a few weeks ago. My job has become much more about reviewing and guiding than about actually writing the code. There is a lot to say about the importance of reviews, but Armin Ronacher has summarised it much better than I could, and I highly recommend reading his article.
A lot of great options
There are a lot of great options to pick from when you want to do agentic coding.
As far as I can tell, the first tool of this kind was GitHub Copilot in VS Code. Copilot itself can be used in multiple editors (like IntelliJ, Zed, and RStudio), but the original integration was in VS Code. Additionally, there is software like Cursor, Windsurf, and Antigravity: editors that deeply integrate with a coding agent.
Then there is, of course, the famous Claude Code, its rival Codex from OpenAI, and finally the late-to-the-party Gemini CLI. These are coding agents from the AI labs themselves, and each supports only its creator’s models. In contrast to the editors above, they are command line tools and are mostly used in combination with whatever editor you prefer.
There are also coding agents that can be used with any model like OpenCode, Amp, and Pi.
Some definitions
If you have not been following these developments closely, all these definitions can be confusing. So let’s take a step back:
- Model: The model is the LLM itself, which produces some output given some input. The easiest way to interact with a model is a chat interface like the one we know from ChatGPT. Examples: GPT 5.3 Codex, Claude Sonnet 4.6, Claude Opus 4.6, Gemini 3.1 Pro, GPT 5.3 Mini, Gemini 3 Flash.
- Agent: An agent is more than just a model: it is a model running in a loop that can use tools to achieve goals and decides itself what to do next. An agent works iteratively, evaluating after each step whether it has achieved its goal. (Read more about the distinction.)
- Harness: The harness is the packaging that makes it all work. It provides the agent with the tools it can use, manages authentication, and may also let the user interact with multiple agents at the same time. (Read more about harnesses.) Examples: Claude Code, Codex, OpenCode, Amp, Pi.
The terms “agent” and “harness” are sometimes used interchangeably (even on the official Pi homepage), and Pi is by design a minimal harness. It provides just four basic tools: read, write, edit, bash.
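To make the "model in a loop with tools" idea concrete, here is a minimal sketch of such a loop with the four basic tools. To be clear: this is not Pi's actual code; the function names, the message format, and the shape of the model's replies are all made up for illustration.

```python
import subprocess

# Illustrative four-tool set, loosely modeled on read/write/edit/bash.
def tool_read(path):
    with open(path) as f:
        return f.read()

def tool_write(path, content):
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {len(content)} bytes to {path}"

def tool_edit(path, old, new):
    # Replace the first occurrence of `old` with `new` in the file.
    tool_write(path, tool_read(path).replace(old, new, 1))
    return f"edited {path}"

def tool_bash(command):
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

TOOLS = {"read": tool_read, "write": tool_write, "edit": tool_edit, "bash": tool_bash}

def run_agent(model, user_message, max_steps=20):
    """Model in a loop: call the model, execute any tool call it makes,
    feed the result back, and stop when it answers in plain text."""
    messages = [{"role": "user", "content": user_message}]
    for _ in range(max_steps):
        reply = model(messages)  # `model` is a hypothetical callable
        messages.append({"role": "assistant", "content": reply})
        if reply.get("tool") is None:  # plain text: the agent is done
            return reply["text"]
        result = TOOLS[reply["tool"]](*reply["args"])
        messages.append({"role": "tool", "content": result})
    return "step limit reached"
```

The important part is the shape of the loop, not the details: everything the agent does is either a model call or a tool call, and every tool result flows back into the conversation.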
How a harness works
One nice thing about Pi is that it knows quite well how it works itself. You can ask it questions, for example how it would explain itself:
At a high level, pi is a terminal harness that wraps a chat model and a small tool API. When you send a message, pi builds a prompt (system prompt + context files + skills + your message), then calls the selected provider/model (Anthropic, OpenAI, etc.) via its API/transport (SSE/WebSocket/HTTP). The model can respond with normal text or with tool calls; pi executes those tool calls (read/write/edit/bash or extension tools), captures the results, and feeds them back to the model in the same session. Pi stores the conversation as a JSONL session tree, handles compaction when context gets large, and renders everything in the TUI. Extensions/skills can hook into this loop to add tools, commands, UI, or custom behavior.
Here, the model is what Pi itself calls the “chat model”. Sending the prompt to the model, calling the tools, and feeding the output back into the model is what we call “the agent”. Handling sessions, providing additional tools, and rendering the TUI are responsibilities of the harness.
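One detail from Pi's self-description above is that the conversation is stored as a JSONL session. The general idea behind JSONL session storage is simple; the sketch below shows the generic pattern (one JSON object per line, append-only), not Pi's actual session format or its tree structure.

```python
import json

def append_message(path, message):
    """Append one message as a single JSON line, so the session
    file grows without ever rewriting earlier entries."""
    with open(path, "a") as f:
        f.write(json.dumps(message) + "\n")

def load_session(path):
    """Replay a session by reading the file back line by line."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]
```

Because each line is a complete JSON object, a crashed or interrupted session loses at most its last partial line, and resuming is just reading the file back in.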
If you find that Pi lacks some features that you really want, you can even ask Pi to extend itself and to write an extension for you.
BYOM
For myself as a developer, it is important to be able to customize the tools that I use. When you use a harness like Claude Code or Codex, you have to rely on the decisions made for you. But we can do better: Mario Zechner has created a minimal harness that gives you transparency and customizability.
This has the big advantage that you can just bring your own model (BYOM) and also switch between models easily. You can control how the model uses tools and also which tools are available.
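The mechanics behind BYOM can be sketched as a small provider registry: the harness only ever talks to one abstract completion function, and switching models means picking a different registry entry. The provider names and stand-in functions below are invented for illustration, not Pi's actual configuration.

```python
# Stand-ins for real provider API calls; a real harness would make
# an HTTP request to the corresponding provider here.
def anthropic_complete(messages):
    return "anthropic: " + messages[-1]["content"]

def openai_complete(messages):
    return "openai: " + messages[-1]["content"]

# Hypothetical model names mapped to the function that serves them.
PROVIDERS = {
    "claude-sonnet": anthropic_complete,
    "gpt-codex": openai_complete,
}

def complete(model_name, messages):
    """Route a request to whichever provider serves the chosen model."""
    return PROVIDERS[model_name](messages)
```

Because nothing outside the registry depends on a particular provider, swapping models mid-project is a one-line change rather than a migration.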
While Pi does not have all the amenities that other harnesses might have, it’s a fascinating project and works great in my experience.
It also allows you to transparently understand what is going on and intercept a lot of the core logic of the agent and harness.
Building Personal Software
I have been using Pi a little bit at work for some Python or R projects but mostly for a personal project using Go and HTMX.
Originally, I started this project to get accustomed to Go more. I also wanted to see how web development with Go works. I use Gin, templ, and HTMX. As a database, I use SQLite and I deploy to fly.io.
I did not prepare my repo in any special way, and there is not much documentation available. Still, basically from the start, working with Pi on the project has been great. I was able to achieve a lot more than I would have on my own.
Before, if I wanted to implement a broader new feature, I would have to research how to do it best and look for examples or projects. Now, I can ask the agent to present a plan to me and discuss the solution with it. I can let it create multiple prototypes and try them out before I let the agent implement a final solution.
The future is exciting
Whether you believe the grand claims that AI labs make, whether you believe the future of software development is agentic coding, or whether software developers even have a future: tools like these are great to play with, and you can only really know what is possible if you do play with them.
If you want to read more about Pi, I can recommend the following report about a large refactoring with Pi and also this in-depth exploration of what Pi is.
I have been able to be very productive for a medium sized personal project and it has been quite exciting to find out what is possible with these tools.