Recording-based automation: how showing your work replaces describing it to a developer

Instead of documenting your process and handing it to a developer to build, you record yourself doing the work — and the platform turns that recording into a live automation.

You're drowning in tickets. Your team manually logs into Jira, grabs a ticket, checks Confluence for context, updates a spreadsheet somewhere, and sends a Slack message. Repeat five hundred times a day. You know this isn't scalable. You've heard "automation" thrown around, but every time you dig into what that actually means, you hit the same wall: it requires engineers, it takes months, and it breaks every time something changes.

Then someone mentions agent builders. And you wonder: is this the thing that finally makes this solvable without hiring more engineers?

What's actually different about agent builders?

For the last decade, automation meant one of two paths. Path one: hire developers to write bots in Python, manage dependencies, deploy them to servers. Path two: use an RPA studio—Automation Anywhere, UiPath, Blue Prism—and record your clicks and keyboard commands into a playbook.

Both paths assume the same thing: someone has to sit down and codify the process from memory. That person is usually not the person doing the work. They ask questions, take notes, and then spend weeks translating that into bot logic. It's slow. It's lossy. It breaks the moment someone changes how they do the work.

Agent builders flip that on its head. Instead of translating a process from memory, you point a tool at the actual work—the live systems, the real screens, the actual flow. The tool observes what you do. It watches you move between Jira and Slack and that one legacy portal nobody has documentation for. Then it generates an agent that can replicate that work.

This isn't science fiction. It's not even new in theory. The shift from "describe what you do" to "show me what you do" is called Observation to Agent—O2A. Computer-use AI—models that can actually navigate interfaces the way humans do—made this practical.
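To make the observe-then-generate idea concrete, here is a minimal sketch of an O2A-style pipeline. Everything in it is a simplifying assumption: the `Action` shape, the app names, and the idea that generalization is just parameterizing a recorded trace. Real platforms capture far richer context (screenshots, DOM state, timing) and use computer-use models rather than string substitution.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One recorded step in a work session (hypothetical shape)."""
    app: str     # e.g. "jira", "slack"
    verb: str    # e.g. "open", "copy", "send"
    target: str  # field or element the user touched

def record_demo() -> list[Action]:
    """Stand-in for the observation phase: a captured session."""
    return [
        Action("jira", "open", "ticket:PROJ-123"),
        Action("confluence", "copy", "page:runbook"),
        Action("sheets", "paste", "cell:B7"),
        Action("slack", "send", "channel:#ops"),
    ]

def build_agent(trace: list[Action]):
    """Stand-in for generation: turn one concrete trace into a
    routine that can be replayed for any ticket id."""
    def agent(ticket_id: str) -> list[str]:
        steps = []
        for a in trace:
            target = a.target.replace("PROJ-123", ticket_id)
            steps.append(f"{a.app}:{a.verb}:{target}")
        return steps
    return agent

agent = build_agent(record_demo())
print(agent("PROJ-456")[0])  # jira:open:ticket:PROJ-456
```

The point of the sketch is the division of labor: the human produces the trace just by doing the job once; the platform's job is the `build_agent` step, generalizing from one demonstration to a reusable routine.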

Who is actually using these tools?

Today, agent builders attract two very different audiences.

First, the citizen developers. These are operations people, finance analysts, process owners—people who understand their own workflows deeply but have never written code. Agent builders feel accessible because you don't start from a blank screen writing logic. You start by doing your job. That's a fundamentally different mental model.

Second, the skeptics. Enterprise IT teams worry about "shadow automation." If anyone can build a bot, won't compliance blow up? Won't bots start doing things without governance? There's real risk there. It's why you're already seeing companies split into two camps: permissive ("go build," with guardrails after), and restrictive ("file a ticket, wait for the CoE").

The third group—traditional RPA shops—is still figuring out where it fits. Some are adding observation-based capabilities to their studios. Others are treating agent builders as a threat.

What agent builders can actually do (and what they can't)

Agent builders work well when three things align: the workflow involves moving data between systems, the systems have stable interfaces, and the end goal is clear. A finance team reconciling vendor invoices across three systems? Yes. A customer service team responding to support tickets? Yes. A process that requires deep judgment calls in unstructured data? Today, that's much harder.

The honest limitation: agent builders still struggle with ambiguity. If the rule is "if the invoice is more than 10% off from the PO, escalate," an agent can handle that. If the rule is "if something feels off, investigate," you've got a problem. Computer-use AI is getting better at judgment, but we're not at the point where you can replace experienced humans with agents on judgment-heavy work.
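The 10% rule from the text is exactly the kind of check an agent can encode today. A minimal sketch, assuming hypothetical field names and a symmetric threshold:

```python
def needs_escalation(invoice_total: float, po_total: float,
                     threshold: float = 0.10) -> bool:
    """Escalate when the invoice deviates from the PO amount by more
    than the threshold (10% by default), in either direction."""
    if po_total == 0:
        # No PO amount to compare against: treat as ambiguous, escalate.
        return True
    deviation = abs(invoice_total - po_total) / po_total
    return deviation > threshold

print(needs_escalation(1150.0, 1000.0))  # 15% over -> True
print(needs_escalation(1050.0, 1000.0))  # 5% over -> False
```

Notice that even this trivial rule hides a judgment call: what to do when there's no PO total to compare against. The "if something feels off" version of the rule has no such reduction to a threshold, which is exactly why it stays hard.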

There's also the maturity problem. The category is real, and growing—search volume for "agent builder" sits at roughly 800 queries per month and climbing—but it's still being defined. What one vendor calls an agent builder, another calls a "low-code automation platform." Standards don't exist yet.

The category matters because it's still being invented

We're in the phase where "agent builder" is still fighting for a name. Some people call it "AI-native automation." Others say "observation-based RPA." The fact that we don't have stable language yet means the category is young enough that vendors are still sorting out what actually works.

But the underlying shift is real: automation is moving from "memorialize the process and code it" to "show me the process and I'll code it." If your team is stuck because process automation feels impossible without engineering resources, agent builders are probably worth testing. But don't go in expecting them to replace judgment, maturity, or governance. They're a tool that works best when you know what problem you're solving and you have a clear process to automate.

The companies winning here aren't the ones who see agent builders as a magic button. They're the ones who are honest about what their people actually do, who pick something small to start with, and who treat the agent as part of their system, not a replacement for thinking.