Most accounts of workflow automation stop at the go-live. Numbers are announced, percentages are reported, and the story ends with a result. This one doesn't end there, because what happens after go-live is the part that actually determines whether the investment was worth making.

This is a month-by-month account of a three-therapist physiotherapy practice in Geneva, from their Clarity Scan in July through to the six-month mark, including what went wrong.

Month 1: The Clarity Scan

The practice had the kind of administrative weight that accumulates quietly in health-sector businesses: not through any single failure, but through years of workarounds layered on top of workarounds. Paper intake forms that patients completed in the waiting room, which were then transcribed manually into the patient record system. A scheduling system that could not send its own reminders, so the receptionist made phone calls each morning to confirm the day's appointments. A billing reconciliation process at month end that took the better part of a day.

Eleven hours and twenty minutes of administrative work per week, mapped across three workflows, were going to tasks that did not require a human being. The practice director was sceptical that the number was that high. She was right to be. It is easy to produce inflated estimates by counting time that is already happening in parallel with something else. We had counted it carefully, and we had erred on the side of caution.

Five opportunities were identified. Three were selected for the first Sprint:

  • Patient intake digitisation. Paper forms replaced with a digital intake sent at booking confirmation, auto-populated to the patient record on submission. Estimated recovery: 3.5 hours per week.
  • Automated appointment reminders. SMS reminders sent automatically at 48 hours and 24 hours before each appointment, replacing manual morning calls. Estimated recovery: 5.5 hours per week across the three therapists who assisted with the calls on busy mornings.
  • Billing reconciliation. A connection between the scheduling system and the billing software, eliminating the manual month-end transfer of appointment data. Estimated recovery: 1.75 hours per fortnight.

The two findings not addressed in Sprint 1 were a GP referral tracking sequence (deferred as more complex, with higher integration risk) and a document storage reorganisation (lower ROI, not time-critical). Both were noted in the Opportunity Matrix for Sprint 2.

Months 2–3: The Sprint

The build took five weeks. The practice was involved for a total of four sessions: a kick-off, two mid-build reviews to test the intake form and the reminder sequence, and a go-live walkthrough. The receptionist attended all four. The therapists attended the go-live.

No one had to learn a new system. The booking platform was one the practice already used. The automation layer ran on Make, invisible to anyone using the front-end tools. The SMS service integrated directly with the booking platform's webhook. From the practice's side, what changed was what they stopped having to do, not what they started having to learn.
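
For readers who want the mechanics, here is a minimal sketch of what that webhook step does. The real implementation is a Make scenario rather than custom code, and the payload field names and the SMS call below are illustrative assumptions, not the platform's actual schema:

    from datetime import datetime, timedelta

    # Hypothetical booking-confirmed payload; actual field names depend
    # on the booking platform's webhook schema.
    booking = {
        "patient_mobile": "+41 79 000 00 00",
        "appointment_at": "2025-03-12T14:30:00+01:00",
    }

    def reminder_times(appointment_iso: str) -> list[datetime]:
        """Send times for the two reminders: 48 h and 24 h before the slot."""
        start = datetime.fromisoformat(appointment_iso)
        return [start - timedelta(hours=h) for h in (48, 24)]

    for send_at in reminder_times(booking["appointment_at"]):
        # Stand-in for the SMS provider's scheduling call.
        print(f"schedule SMS to {booking['patient_mobile']} at {send_at.isoformat()}")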

Go-live: end of week five. The receptionist's assessment at the end of the first day: "I don't trust it yet."

This is a reasonable response, and it is worth saying clearly: the first weeks after go-live are not the time to stop watching. Automated systems behave differently on live data than they do in testing. The go-live is not the end of implementation. It is the beginning of the stabilisation period.

Month 4: The first incident

The appointment reminder system ran cleanly for three weeks. In the fourth week, a pattern emerged in the monitoring logs: a small number of patients were receiving duplicate reminder messages. The duplication was inconsistent: some patients received two messages for one appointment, others received none.

The cause: the patient record parser was splitting names at hyphens and apostrophes, both common in French-speaking Switzerland, and in some cases generating two partial entries. When the reminder sequence queried the patient list, it found two records for the same appointment and dispatched accordingly.

The error was caught during a weekly monitoring review, two days before any patient raised it with the practice. It was corrected within four hours: a character-handling rule in the parser, tested against the existing patient database, redeployed.
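
A sketch of the kind of rule involved. The actual parser is internal to the record system; the regex and the name below are illustrative only:

    import re

    # Illustrative reproduction of the defect: splitting on any non-letter
    # character breaks compound names into partial entries.
    def split_name_buggy(raw: str) -> list[str]:
        return [part for part in re.split(r"[^A-Za-zÀ-ÿ]+", raw) if part]

    # The fix: treat hyphens and apostrophes as name-internal characters,
    # so they no longer act as delimiters.
    def split_name_fixed(raw: str) -> list[str]:
        return [part for part in re.split(r"[^A-Za-zÀ-ÿ'’-]+", raw) if part]

    print(split_name_buggy("Anne-Claire d'Aubonne"))  # ['Anne', 'Claire', 'd', 'Aubonne']
    print(split_name_fixed("Anne-Claire d'Aubonne"))  # ['Anne-Claire', "d'Aubonne"]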

On the value of monitoring

This is the function of the Continuity plan: not to prevent errors (automated systems will always surface edge cases that testing missed), but to find them before they become complaints and fix them before they cost anything. In this case, a duplicate SMS is a minor inconvenience. Uncaught, it would have required a manual review of the entire patient list, an apology communication, and an afternoon of the receptionist's time. Caught early, it was a four-hour fix.

Measured results at the end of month four, the first full clean month:

Metric                                        Before       Month 4
Scheduling and reminder calls to reception    ~20/day      ~11/day
Missed appointments due to reminder failure   2.8/month    0
Billing reconciliation time                   6 h/month    28 min
Digital intake form completion rate           0%           89%

Month 5: Steady state

No system changes in month five. The SMS parser was stable. The intake form completion rate held at 89%: the remaining 11% of patients were those who preferred to call or who lacked the digital confidence to complete the form before arriving. The practice accepted this as reasonable and added a note to the front-desk script for when those patients arrived.

The receptionist, who had been openly sceptical of the project in month one, said something during the month-five check-in that was not in any spreadsheet:

"I used to start every morning by calling twenty people to tell them something they already knew. Now when the phone rings, it's someone who needs something. It's different work. I didn't expect to care about that, but I do."

Reception coordinator · Geneva physiotherapy clinic

This is not a metric. It is also not nothing. The people who use automated systems every day have a direct experience of them that numbers do not fully capture. When the receptionist's job changed from performing a ritual to responding to actual need, the work became different in a way that had nothing to do with hours saved.

Month 6: The second incident, and a small addition

The monthly review call in month six identified an issue that no one at the practice had noticed but that was visible in the logs: a change in the booking platform's API had introduced a 3–4 hour lag in synchronisation with the therapists' Google Calendars. Appointment updates were reaching the booking system promptly but arriving in the therapists' calendars several hours late.

No appointment had been missed. No patient had noticed. But the lag was creating low-grade uncertainty: therapists checking the booking platform directly rather than trusting their calendar, a workaround that would become a habit, then an assumption, and eventually a source of errors when the calendar was trusted in a situation where it shouldn't have been.

The fix required updating the webhook configuration in Make. It took forty minutes.
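
For illustration, this is the shape of the check that surfaces a lag like this in a log review, assuming each sync event records when the update reached the booking platform and when it appeared in the calendar. Timestamps, threshold, and log format are all assumptions:

    from datetime import datetime

    # Illustrative log entries: (update received by booking platform,
    # update visible in the therapist's Google Calendar).
    sync_events = [
        ("2025-05-06T09:02:00", "2025-05-06T12:41:00"),
        ("2025-05-06T10:15:00", "2025-05-06T13:58:00"),
    ]

    LAG_ALERT_MINUTES = 15  # assumed threshold for flagging a sync delay

    for booked_at, synced_at in sync_events:
        lag = datetime.fromisoformat(synced_at) - datetime.fromisoformat(booked_at)
        minutes = lag.total_seconds() / 60
        if minutes > LAG_ALERT_MINUTES:
            print(f"sync lag of {minutes:.0f} min for update at {booked_at}")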

At the same call, the receptionist raised an idea she had been thinking about since month four: could the system send a short satisfaction survey three days after a patient's final session? Nothing elaborate: two questions, a text field, a send button. The practice director had mentioned something similar during the Clarity Scan but it had not made it into the Sprint.

It took ninety minutes to build. It required no Sprint, no new contract, no additional cost beyond the Continuity plan. It went live the following week.
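
A sketch of why the addition was so small: the entire trigger is one date calculation hanging off an existing event. The event and helper names here are illustrative, not the practice's actual configuration:

    from datetime import datetime, timedelta

    # Hypothetical trigger: runs when an appointment flagged as a final
    # session is marked complete; field names are illustrative.
    def survey_send_time(final_session_iso: str) -> datetime:
        """Survey goes out three days after the final session."""
        return datetime.fromisoformat(final_session_iso) + timedelta(days=3)

    send_at = survey_send_time("2025-06-10T16:00:00")
    # Stand-in for the actual send step (two questions plus a text field).
    print(f"send satisfaction survey at {send_at.isoformat()}")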

Where the practice stands at month six

  • 44% reduction in scheduling and reminder calls to reception
  • 2 incidents identified and resolved before client impact
  • CHF 500 Continuity plan cost, covering monitoring and maintenance

At month six, the practice director was asked whether she would recommend the engagement to a colleague running a similar practice.

Her answer ("Yes, but not because of the numbers. The numbers are fine) they do what they said they would do. But the reason I would recommend it is because nothing breaks anymore and I don't think about it."

That is the actual goal. Not that the system is impressive. Not that the ROI calculation validates the investment. That the system is invisible, because invisible means working reliably, without anyone having to carry it.

What the next Sprint covers

The GP referral tracking sequence, the fourth finding from the Clarity Scan and deferred from Sprint 1, is scheduled for Sprint 2 in the coming months. The Opportunity Matrix analysis is still current, the workflow has not changed, and the estimated recovery is still valid.

The Clarity Scan report has a shelf life that extends well beyond the first Sprint. Most clients use it as a roadmap across twelve to eighteen months, returning to each finding when the timing is right and the capacity exists to absorb the implementation. Nothing in the analysis expires unless the business itself changes significantly.

Wondering if this applies to your business? Ask Kai. It knows the details.

The next step

The diagnostic is the beginning, not the commitment.

The Clarity Scan produces a report that belongs to you. It maps your workflows, costs each one, and tells you honestly where the value is and where it isn't. What you do with that information, and when, is entirely your decision.

Get the diagnostic
See what the scan covers →
Read 15 real engagements →