Workflow Audit
I spend two weeks inside your team’s actual processes, score every task for automation potential, and hand you a ranked list of what to build first, what to skip, and what’s going to waste your money if you try it too early.
Book a free call to discuss
The problem this solves
Most teams I talk to have bought ChatGPT Enterprise or Microsoft Copilot, and nobody’s really using it for anything beyond rewriting emails and summarising meeting notes. Someone on the leadership team saw a demo, got excited, signed the contract, and six months later the adoption numbers are embarrassing but nobody wants to say it out loud, a situation more common than anyone likes to admit. So everyone’s doing their own thing in isolation, nothing connects to anything else, and the whole “AI initiative” is essentially theatre.
The audit gives you something concrete instead. I come in, I look at how your team actually spends their time (not how they think they spend it, which is always different), and I hand you a ranked list of opportunities with honest assessments of what’s genuinely easy, what’s harder than it looks, and what you shouldn’t bother with at all. I wrote about the approach I use to route work between cloud and local LLMs on this site, and that same cost-conscious thinking runs through every recommendation I make.
How the two weeks work
Week 1: Discovery
- Kickoff call (90 minutes), where I need to understand your business, your team structure, and what’s actually driving the interest in AI. Sometimes the answer is “our competitors are doing it” and sometimes it’s “we just lost two people and can’t hire replacements and the work still needs doing.” The reason matters because it completely changes which opportunities are most valuable to pursue first.
- Process mapping workshop, where I work through your team’s actual daily workflows, and by actual I mean what people genuinely do, not the documented processes (those are usually wrong, or at best aspirational). Every email forward, every copy-paste between systems, every “I check this spreadsheet every Monday morning” habit gets mapped. This is where the gold is, because the biggest time savings are almost always hiding in the mundane repetitive work that nobody thinks to mention.
- Data and systems audit, where I look at what tools you use, what format your data is in, what’s structured versus messy, and what has an API versus what’s trapped in spreadsheets or someone’s email inbox. This determines what’s technically possible before we start talking about what’s desirable, because there’s no point recommending an automation that requires clean data if your data is a mess.
Week 2: Analysis & recommendations
- Opportunity scoring, where every task gets scored on four dimensions: hours saved per week, implementation complexity, risk (what happens if the automation gets it wrong?), and dependency (does this need to work before other things can happen?). The scoring isn’t a formula I apply blindly, it’s informed by having built dozens of these systems and knowing where the hidden complexity lives, which edge cases will bite you, and which vendor promises are realistic.
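To make the four dimensions concrete, here’s an illustrative sketch of how that scoring might combine. The weighting, the `Opportunity` structure, and the example tasks are all hypothetical, the real scores come from judgement during the process mapping, not a formula:

```python
# Hypothetical sketch of four-dimension opportunity scoring.
# Weights and example tasks are made up for illustration.
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    hours_saved_per_week: float  # observed during process mapping, not surveyed
    complexity: int              # 1 (easy) .. 5 (hard)
    risk: int                    # 1 (low) .. 5 (costly if it gets it wrong)
    blocks_other_work: bool      # dependency: must this exist before other wins?

def priority_score(o: Opportunity) -> float:
    """Higher is better: big savings, low complexity, low risk."""
    score = o.hours_saved_per_week / (o.complexity * o.risk)
    if o.blocks_other_work:
        score *= 1.5  # dependencies get pulled forward in the roadmap
    return score

tasks = [
    Opportunity("Weekly report compilation", 6.0, 2, 1, False),
    Opportunity("Invoice data entry", 4.0, 3, 4, False),
    Opportunity("CRM data cleanup", 2.0, 2, 2, True),
]
roadmap = sorted(tasks, key=priority_score, reverse=True)
print([t.name for t in roadmap])
# → ['Weekly report compilation', 'CRM data cleanup', 'Invoice data entry']
```

Note how the dependency multiplier pulls the CRM cleanup ahead of a task that saves more raw hours: that’s the “does this need to work before other things can happen?” dimension in action.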
- Prioritised roadmap that tells you “do this first, this second, skip this entirely.” I’m blunt about what isn’t worth doing. Some tasks look automatable on paper but the edge cases make them more trouble than they’re worth, and you’d rather I tell you that now than after you’ve spent three months building something that needs constant babysitting from the same person who was doing the manual work in the first place.
- Tool and cost recommendations covering what to buy, what to build, and what to skip. I don’t take referral fees from tool vendors, so the recommendations are honest. If the best tool for your use case is free and open-source, or if you’d be better off with a Python script than a platform subscription, that’s what I’ll tell you.
What you get
- A ranked list of automation opportunities, each scored for hours saved per week, implementation complexity, risk, and dependencies
- A prioritised roadmap: what to build first, what to do next, and what to skip entirely
- Tool and cost recommendations covering what to buy, what to build, and what to avoid, with no vendor referral fees behind any of it
Who this is for
- Teams of 10 to 100 where too much time goes on repetitive, manual work that follows a pattern
- Companies that bought ChatGPT Enterprise or Copilot but it didn’t stick, and aren’t sure whether that’s a tool problem or a strategy problem
- Ops leads or founders who need to build a business case for AI investment with actual numbers, not a slide deck full of hand-waving about “productivity gains”
- Anyone who’s been told “we should use AI more” by someone who’s never actually set it up, configured it, or measured whether it saved any time
Most clients find one automation opportunity in the audit that pays for the entire engagement within a month. Not because the audit is cheap, but because the wasted time is usually much more expensive than anyone realises once you actually sit down and measure it properly.
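The arithmetic behind that claim is simple to sanity-check yourself. Every figure below is a placeholder (the audit replaces these guesses with observed numbers, and fees vary by engagement):

```python
# Back-of-envelope break-even check with hypothetical numbers.
hours_saved_per_week = 10   # one automated task, one team (placeholder)
loaded_hourly_cost = 75     # salary plus overhead, in your currency (placeholder)
audit_fee = 3_000           # placeholder engagement cost

weekly_saving = hours_saved_per_week * loaded_hourly_cost  # 750 per week
weeks_to_break_even = audit_fee / weekly_saving
print(f"Breaks even in {weeks_to_break_even:.1f} weeks")
# → Breaks even in 4.0 weeks
```

Run it with your own team’s numbers; the point is that a modest, measured time saving usually covers the fee faster than people expect.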
Common questions
Will our data end up training someone else’s model?
No. During the audit I’m looking at your processes and systems, not feeding your data into AI models. And when I recommend tools, I specifically flag which ones keep your data private and which ones don’t, because that distinction matters a lot more than most vendors want you to think about.
How do we know this will actually save time?
Every opportunity in the roadmap comes with an hours-per-week estimate based on what I observed during the process mapping, not what your team guessed on a survey. I also score implementation complexity, so you can see the ratio of effort to payoff before you commit to building anything.
Will my team see this as a threat?
I’m upfront about this during the process mapping sessions. The goal is to automate the boring repetitive work so people can spend their time on the parts of their job that actually require human judgment. In practice, the people doing the manual work are usually the most enthusiastic about automating it, because they’re the ones who know how tedious it is.
Want to know if this would help?
Book a free 30-minute call. Tell me what’s eating your team’s time, and I’ll tell you honestly whether an audit is the right next step or if something else makes more sense for where you are right now.
Book a call