"We tried this before. It did not work." This is the most useful thing a prospective client can say. It means they have real-world data about what does not work in their business, and it raises the right question before starting again: what was different about that attempt, and what would need to be different this time?

After reviewing the failed automation attempts of dozens of small firms, we find the same three patterns appearing with enough consistency to be predictive. None of them are about the technology. All of them are about the order of operations.

Pattern one: the wrong process was chosen

The most painful process and the most expensive process are almost never the same thing. Firms choose what to automate based on what causes the most visible friction: the task they dislike most, the one that generates the most complaints, the one that is most obviously manual. This is a reasonable heuristic, but it is not an accurate guide to where the money is.

Consider a law firm where the most complained-about task is the manual entry of client data into the billing system after each engagement. Everyone hates it. It takes two hours a week. The firm automates it and recovers two hours a week — a useful result, but modest. Meanwhile, the intake documentation process takes nine hours of partner time per month, generates a high error rate that requires correction in the second week of every engagement, and has never been flagged as a problem because it has always worked that way. It is annoying but familiar.

The diagnostic approach finds both the billing entry and the intake documentation, and puts a monthly cost on each. The automation budget, applied to the intake process, returns four times as much as the same budget applied to the billing entry. Without the diagnostic, the firm would have spent the same money on the wrong priority.
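
To make that comparison concrete, the sketch below reproduces the arithmetic. The hourly rates are illustrative assumptions, not figures from the scenario above; only the time estimates come from the example.

    # Illustrative costing of the two candidate workflows. The hourly
    # rates are assumptions for this sketch, not client figures; only
    # the time estimates come from the scenario above.
    STAFF_RATE_CHF = 80     # assumed hourly rate for billing-entry work
    PARTNER_RATE_CHF = 350  # assumed hourly rate for partner time

    billing_monthly = 2 * 4.33 * STAFF_RATE_CHF   # 2 h/week, ~CHF 693/month
    intake_monthly = 9 * PARTNER_RATE_CHF         # 9 h/month, CHF 3,150/month

    print(f"billing entry:        CHF {billing_monthly:,.0f}/month")
    print(f"intake documentation: CHF {intake_monthly:,.0f}/month")
    # Even before counting the error corrections, the intake process
    # costs four to five times more, which is why it ranks first.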

This is why the first automation choice matters more than any subsequent one. The first attempt sets the organisation's expectation of what automation can deliver. A modest result from a poorly-chosen first target often ends the conversation permanently.

Pattern two: the tool was chosen before the workflow was designed

The second pattern is the most common: the firm purchased a platform — Zapier, Make, n8n, a CRM with built-in automation, a scheduling tool with workflows — set it up, got something partially running, and found that the result was fragile, incomplete, or required more maintenance than the manual process it replaced.

The cause is almost always the same: the tool was chosen because of its features, not because of a documented understanding of the workflow it was meant to support. Without a workflow design that specifies the inputs, the outputs, the edge cases, and the error conditions, the implementation becomes a series of improvisations. Each improvisation introduces a dependency. The dependencies accumulate. The automation becomes brittle in ways that are not visible until something breaks at a bad moment.
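
What such a design looks like varies by firm, but even a minimal structured record forces the questions that improvisation skips. The sketch below is a hypothetical illustration; the field names and the example workflow are invented, not a prescribed format.

    # Hypothetical sketch: a workflow specification captured as data
    # before any tool is chosen. Field names and example values are
    # invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class WorkflowSpec:
        name: str
        inputs: list[str]            # everything the workflow consumes
        outputs: list[str]           # everything it must produce
        edge_cases: list[str]        # known deviations and how to treat them
        error_conditions: list[str]  # states that must stop the run and alert

    intake_spec = WorkflowSpec(
        name="client-intake-documentation",
        inputs=["signed engagement letter (PDF)", "client contact record"],
        outputs=["intake file in the DMS", "client entry in the billing system"],
        edge_cases=["existing client, new matter", "letter missing a signature"],
        error_conditions=["duplicate client record", "unreadable PDF"],
    )

Once a record like this exists, tool selection becomes a comparison against requirements rather than a bet on features.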

The maintenance trap

The most durable automation is boring. It handles exactly the defined inputs, produces exactly the defined outputs, and does nothing else. When something outside that definition happens — an edge case, a new tool version, an API change — it stops and alerts the operator rather than failing silently. Fragile automation fails silently and is discovered weeks later, when the downstream effects are already visible. Designing for maintainability is not exciting. It is the difference between an automation that runs for three years and one that is quietly abandoned after three months.
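
As a minimal sketch of that stop-and-alert behaviour, assuming a placeholder alert channel and invented validation rules:

    # Minimal sketch of the stop-and-alert principle. alert_operator()
    # is a placeholder for whatever channel the firm already watches
    # (email, chat, a ticket queue); the required fields are invented.
    def alert_operator(message: str) -> None:
        # Placeholder: route the message somewhere a human will see it.
        print(f"[AUTOMATION HALTED] {message}")

    def process_record(record: dict) -> None:
        required = ("client_name", "engagement_id", "hours")
        missing = [key for key in required if key not in record]
        if missing:
            # Input falls outside the defined spec: stop loudly rather
            # than guessing, so the failure is visible today, not in weeks.
            alert_operator(f"Record rejected, missing fields: {missing}")
            return
        # The boring, well-defined work happens here.
        print(f"Processed {record['engagement_id']}")

The pattern costs a few lines. What it buys is that failures surface the day they happen.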

Pattern three: no handover, no ownership

The third pattern is less about how the automation was built and more about what happened to it afterward. The firm hired a freelancer, a VA, or an IT consultant who built something. It worked. Then the person who built it left, and nobody in the firm understood how it worked, how to change it when the business changed, or what to do when it stopped working.

This pattern produces what firms describe as "a black box": an automation that is running, that they depend on, that they do not understand, and that they are afraid to touch. When the process it supports changes — a new client category, a change in the follow-up sequence, a new tool replacing an old one — the black box cannot be updated without hiring someone who understands it again. The firm is now more dependent on external help than they were before.

Documented ownership is not a nice-to-have. It is the condition under which an automation remains valuable over time. An automation that requires its builder to maintain it indefinitely is not an asset. It is a liability with good months.

What a diagnostic-first approach prevents

  • Pattern one is prevented by the Opportunity Matrix. The Clarity Scan maps every candidate workflow, costs each one in time and money, and ranks them by the ratio of return to implementation complexity (a short sketch of this ranking follows the list below). The first Sprint targets the highest-ranked item, not the loudest complaint. The firm knows before committing what the expected return is, and why that target was chosen over the alternatives.
  • Pattern two is prevented by workflow design before tool selection. The Sprint begins with a week of mapping and specification before any build work starts. The workflow is designed on paper: every input, every output, every edge case, every error condition. The tool selection follows from the design. Not the other way around. What gets built is a translation of a documented specification, not an improvisation around a tool's native capabilities.
  • Pattern three is prevented by the handover documentation requirement. Every Sprint concludes with a documentation package that is written for the firm, not for a technical reader. It explains what the automation does in plain language, how to change the most common settings, and what each component depends on. The handover call walks through the documentation together. The test of a good handover is whether someone at the firm who was not involved in the build can maintain the automation six months later. That is the standard we hold ourselves to.
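
The ranking mentioned in the first point is simple arithmetic. As a hypothetical sketch with invented figures:

    # Hypothetical illustration of the Opportunity Matrix ranking:
    # score each candidate by monthly return divided by implementation
    # complexity. All figures are invented for this sketch.
    candidates = [
        {"workflow": "billing entry",        "return_chf": 700,  "complexity": 2},
        {"workflow": "intake documentation", "return_chf": 3150, "complexity": 3},
        {"workflow": "follow-up emails",     "return_chf": 400,  "complexity": 1},
    ]

    ranked = sorted(candidates,
                    key=lambda c: c["return_chf"] / c["complexity"],
                    reverse=True)
    for c in ranked:
        score = c["return_chf"] / c["complexity"]
        print(f"{c['workflow']:24s} score {score:,.0f}")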

What to do with a previous attempt

If you have an existing automation that is partially working or sitting abandoned, the Clarity Scan can include an audit of what was built: what is functioning, what is not, what would need to change for it to work as intended. This is sometimes faster than starting from scratch, and sometimes not — it depends on the state of the original build. Either way, the audit makes the decision explicit rather than leaving it as a guess.

"We had tried Zapier twice. Both times we got something running that broke within two months. The Clarity Scan told us what we had actually been trying to solve, which turned out to be a different problem than the one we thought we were solving. The second Sprint has been running without intervention for eleven months."

Managing director · Consulting firm · Basel

A previous failed attempt is not evidence that automation does not work for your business. It is evidence that the previous approach had a specific flaw. Identifying that flaw is a diagnostic question, not a technology question.


The next step

Start with what failed and why.

If you have tried automation before, tell us what was attempted and what happened. The Clarity Scan maps your current workflows from scratch — including any existing automations worth keeping — and builds the Opportunity Matrix from observed reality, not assumptions.

Get the diagnostic

Why the first choice matters →
What the audit delivers →