This One Is Mine
2026-04-20
My local agents finish jobs with a small chime, but I'm usually playing music and I miss it entirely.
What I wanted was for the agent to tell me. Not a wall of terminal output, but something conversational. "I finished the refactor. I moved the config parsing into its own package and updated the tests." Or: "I need your help making a decision — should I use a library or build my own?" Something where I could decide whether to pull my focus back, or let it wait.
So I built sayit
sayit is a CLI tool that converts text to speech from the command line. Pipe text into it or pass it a string, and it speaks. It wraps the OpenAI TTS API and supports named personas — voice, speed, model, and an optional LLM rewrite step that converts the input into something more conversational before it hits the TTS API, because written text and spoken word are different things. Any agent that can run a shell command can use it.
# An agent wrapping up a job
echo "Refactor complete. Moved config parsing to its own package. All tests pass." | sayit
# Or with a persona
echo "Three failing tests. You should take a look." | sayit --persona urgent
That's it. With an end-of-session hook, any agent can now translate its summary into spoken word and say it aloud. The job finishes, I hear what happened, and I know what to look at next.
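The hook itself can be tiny. Here's a minimal sketch of the wrapper such a hook might call — the persona name is illustrative, not part of sayit's documented interface, and it falls back to plain output when sayit isn't on the PATH:

```shell
# speak_summary: reads the agent's end-of-session summary on stdin.
# Speaks it via sayit when available; otherwise just prints it.
speak_summary() {
  if command -v sayit >/dev/null 2>&1; then
    sayit --persona calm   # "calm" is a hypothetical persona name
  else
    cat                    # fallback: no sayit installed, print instead
  fi
}

echo "Refactor complete. All tests pass." | speak_summary
```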
Yes, these probably already exist
I didn't look. I assume tools like this exist — I didn't go searching because building my own was faster than evaluating someone else's. That's a shift in behaviour I've noticed in myself and others. The instinct used to be: find the best existing tool, learn it, configure it, make it work. Now the instinct is: just build the thing you want.
The bespoke argument is simple: tools built for the median user have the median user's needs baked in. The persona system in sayit is exactly what I wanted — not a feature set designed around someone else's workflow. The voices I actually use, the flag interface that makes sense to me, the rewrite step tuned to how I want agents to speak to me. Small things. But they add up, and fitting them into an existing tool would have taken longer than building my own.
The cost of building it used to make that trade-off unreasonable. It no longer does.
This is my tool. There are many like it, but this one is mine.
The build instinct at scale
This shift isn't limited to side projects on a Sunday afternoon. It's a change in default behaviour — and I think it's going to spread far beyond individual engineers.
We all have day jobs. And in those day jobs, we have been making buy-versus-build decisions for decades — and the smart choice has nearly always been buy. The cost of building bespoke tools was too high, the maintenance burden too real, the risk of reinventing wheels too obvious. So you bought the platform, configured it to mostly fit, and lived with the gaps.
That calculation has changed. When the cost of building drops by an order of magnitude, the threshold for "worth building ourselves" drops with it. Tools that were never worth the engineering time suddenly are. Internal workflows that were too niche to justify a product purchase can now be built quickly, shaped exactly to how the team works.
A word of caution, though. The pendulum will swing too far. There will be engineers — and teams — who look at the reduced cost of building and conclude that the answer is now always build. It isn't. The cost of ownership didn't go to zero. Maintenance, documentation, onboarding, edge cases, security, technical debt — those costs are still real. They're just lower than they used to be. The calculation changed; it didn't disappear. The answer is still somewhere in the middle. It's just that the middle has shifted.
A side experiment: the language question
While building sayit, I tested something else: whether some languages suit agentic development better than others. I chose Go deliberately.
I'll say upfront — this is my view, shaped by my history with these languages. Others will disagree.
I love Python because it gets out of my way. Intent flows into code naturally — the language feels like a thin membrane between what I want and what the machine does. I reached for Go or C++ when I needed performance, and that always felt like a cost: verbosity and rigidity in exchange for speed.
But I no longer express intent at the language layer. I express it to the agent — usually through OpenSpec, a spec-driven tool I use on most projects. Requirements, scenarios, acceptance criteria. The agent implements against the spec in whatever language I've chosen. The language is invisible to me. Python's selling point — that it reads like intent — was a property of the human-to-code interface. That interface is gone.
What matters now is how fast the agent gets feedback. Static typing and compile-time validation give it the tightest loop. An unused import in Go is a compile error — instant, before anything runs. In Python, it's nothing. The kind of thing a human skips past because they're mid-thought. But an agent has no focus to protect. It uses that error as signal and loops immediately. What felt like friction to me is information to the agent.
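That feedback loop is easy to see from a shell. A minimal sketch, assuming a local Go toolchain on the PATH: write a file with an unused import, and the compiler refuses it before anything runs.

```shell
# Create a Go file with an import that is never used.
cat > unused.go <<'EOF'
package main

import (
	"fmt"
	"os" // imported but never used
)

func main() { fmt.Println("hi") }
EOF

# Go rejects this at compile time with an error along the lines of:
#   "os" imported and not used
# The equivalent Python file would run without a word.
go build unused.go 2>&1 || true
```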
Where does that leave Python?
Python's reach has exploded far beyond traditional engineering. Data scientists, analysts, researchers, automation specialists — it's become the default tool for anyone who needs to make a computer do something without being a full-time developer.
But those are exactly the use cases that agentic development is swallowing first. When the agent writes the code, the language's readability to a human stops being the deciding factor. Right now, most agent tooling wraps Python because that's where the ecosystem is. Longer term, I'm not sure that holds.
Will we see established languages like Go and Rust gain ground in agent-built software? Will someone build a language designed from the ground up for agents to write — optimised for compiler feedback, static analysis, and machine-readable error messages rather than human ergonomics?
I don't know. But the question is worth asking, because the answer reshapes which languages matter and why.