AI Automation Build
I build multi-step workflows that connect your tools, process your data, and keep running after I’ve left. They’re deployed on your infrastructure with full documentation and training, so your team can maintain them without me.
Book a free call to discuss
What this actually means
You’ve got a process that eats 10, 20, maybe 40 hours a week of someone’s time, and it’s the same pattern every time: copying data between systems, reformatting documents, chasing information across three different tools, running the same report every Monday morning. It works, but it’s slow and boring, and the person doing it is too expensive to be spending their day on tasks that follow a predictable pattern. So I come in and build the thing that replaces that work.
Not a proof of concept. Not a demo that looks great in a meeting and then sits in a folder. A working system that runs on your infrastructure, processes your actual data, handles errors gracefully, and keeps going after I’ve gone.
What kinds of things get automated
The honest answer is “anything with a repeatable pattern,” but that’s not very helpful, so here are the categories I build most often:
- Document processing, where invoices, contracts, receipts, or applications come in as PDFs or emails and need to be turned into structured data your systems can actually use. The AI reads the document, extracts what matters, validates it against your rules, and puts it where it needs to go, and the whole thing takes seconds instead of the 15 minutes someone currently spends per document.
- Research and monitoring pipelines, where you need competitive intelligence, pricing changes, regulatory updates, or market data collected and summarised on a schedule. Instead of someone spending Friday afternoon checking twelve websites and updating a spreadsheet, the pipeline runs overnight and the summary lands in your inbox by morning with the changes highlighted and the noise filtered out.
- Content production, covering research, drafting, review, formatting, and publishing. I built the pipeline that produces houtini.com (the site you’re reading right now), and it handles everything from topic research through to WordPress upload via the REST API. That same architecture works for any content team that needs to produce quality material at volume.
- Data transformation, where messy inputs from multiple sources need to become clean, consistent outputs. CSV files that don’t match, API responses that need reshaping, reports from three different systems that need combining into one view. The kind of work that takes a sharp person two hours but could be done by a machine in thirty seconds.
- Custom integrations that connect tools which don’t natively talk to each other. Your CRM doesn’t feed your reporting dashboard? Your accounting system can’t push data to your project management tool? That’s the gap I fill, and I’ve done it enough times with enough different APIs to know where the pitfalls are.
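To make the document-processing pattern concrete, here’s a rough Python sketch of the validate-and-route step that follows extraction. The field names and business rules are illustrative placeholders, not a real client schema; in a live build they come from your own systems.

```python
# Sketch of the validate -> route step in a document pipeline.
# Field names and rules below are illustrative, not a real schema.

def validate_invoice(fields: dict) -> list[str]:
    """Return a list of validation problems; an empty list means clean."""
    problems = []
    if not fields.get("invoice_number"):
        problems.append("missing invoice number")
    total = fields.get("total")
    if not isinstance(total, (int, float)) or total <= 0:
        problems.append("total must be a positive number")
    if fields.get("currency") not in {"GBP", "USD", "EUR"}:
        problems.append(f"unexpected currency: {fields.get('currency')}")
    return problems

def route(fields: dict) -> str:
    """Clean records go straight through; anything suspect goes to a person."""
    return "accounts_queue" if not validate_invoice(fields) else "human_review"

print(route({"invoice_number": "INV-1001", "total": 450.0, "currency": "GBP"}))  # accounts_queue
print(route({"invoice_number": "", "total": -5, "currency": "XYZ"}))             # human_review
```

The point of the validation layer is that the AI’s extraction never flows straight into your accounts system unchecked: it either passes your rules or lands in front of a human.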
How the build works
Scope call (free, 30 minutes)
You tell me what you’re trying to automate, I tell you honestly whether it’s worth building and roughly what it would cost. Sometimes the answer is “you don’t need me for this, just use Zapier” or “this is a two-hour Python script, not a project.” I’d rather turn down work than build something that doesn’t justify the investment.
Technical design (1 week)
I map out the architecture: what connects to what, where the data flows, what happens when something fails (because something always fails, and the system needs to handle it gracefully rather than silently breaking and producing wrong outputs for three weeks before someone notices). You get a technical design document that explains the whole thing in plain English, not just a diagram that only an engineer could read.
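“Handles failure gracefully” has a concrete shape in the design. A minimal sketch of the retry-then-alert pattern I mean (the `step` and `on_failure` hooks are placeholders for a real pipeline step and a real alert channel):

```python
import time

def run_with_retries(step, retries=3, base_delay=1.0, on_failure=print):
    """Run a pipeline step, retrying with exponential backoff.
    If every attempt fails, raise an alert instead of failing silently."""
    for attempt in range(retries):
        try:
            return step()
        except Exception as exc:
            if attempt == retries - 1:
                on_failure(f"step failed after {retries} attempts: {exc}")
                raise
            time.sleep(base_delay * 2 ** attempt)  # back off: 1s, 2s, 4s, ...

# Example: a flaky step that succeeds on its third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("temporary outage")
    return "ok"

print(run_with_retries(flaky, base_delay=0.01))  # ok
```

Transient failures (an API timeout, a rate limit) get absorbed; persistent ones get surfaced loudly, which is exactly the opposite of the silently-wrong-for-three-weeks failure mode.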
Build (2 to 4 weeks)
Working prototype first, then testing with your real data, then hardening for production. I don’t disappear for a month and come back with a finished product. You see progress weekly, and if something needs to change mid-build (it often does, because seeing a working prototype always sparks new ideas about what else it could do), we adjust the scope and keep moving.
Handover
This is the part most agencies skip, and it’s the part that matters most. You get full documentation covering what the system does, how it works, and what to check if something breaks. You get a training session with whoever’s going to be responsible for it. And you get 30 days of support where I fix anything that goes wrong and answer questions as they come up. The goal is that after those 30 days, you don’t need me anymore.
The tech stack
I build with tools I use every day and have trusted in production, not whatever’s trending on Hacker News this week:
- Claude API and Gemini API for AI reasoning, covering document understanding, classification, extraction, and summarisation
- n8n for workflow orchestration, the visual workflow builder that connects everything together and gives your team a way to see what’s happening without reading code
- Cloudflare Workers and D1 for edge deployment, so your automations run close to your users with SQLite-backed storage and no server management
- MCP servers for structured data connections, and I’ve built 16 of these, all published as open-source npm packages
- Local LLMs via Ollama and LM Studio for anything where data can’t leave your network, running on hardware I’ve specced and tested extensively (including a multi-GPU Threadripper workstation with six NVIDIA cards)
- TypeScript and Python for custom logic when the off-the-shelf tools don’t quite fit
Everything runs on your infrastructure. No ongoing licence fees to me. No vendor lock-in. If you want to modify or extend what I built after the engagement ends, you can, because it’s yours and the code is documented well enough that another developer could pick it up.
Who this is for
- Teams spending 10 or more hours a week on repetitive work that follows a predictable pattern
- Operations leads who’ve tried to automate with Zapier or Make but hit the limits of what those tools can do when the logic gets complex
- Companies that had a workflow audit (from me or someone else) and now need the automations actually built by someone who’s done it before
- Anyone who’s been quoted eye-watering numbers by an agency for something that should cost a fraction of that
A typical build replaces 10 to 40 hours of manual work per week. At even a modest hourly rate, that’s the build cost paid back within 2 to 3 months, and then it keeps running every week without getting tired, making mistakes on a Friday afternoon, or going on holiday.
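The payback arithmetic, with illustrative numbers (20 hours saved per week, a £40 hourly rate, and a £10,000 build are assumptions for the worked example, not a quote):

```python
hours_saved_per_week = 20   # illustrative, mid-range of the 10 to 40 above
hourly_rate = 40            # £/hour, assumed
build_cost = 10_000         # £, assumed

weekly_saving = hours_saved_per_week * hourly_rate   # £800 per week
payback_weeks = build_cost / weekly_saving           # 12.5 weeks
print(f"payback in {payback_weeks:.1f} weeks (~{payback_weeks / 4.33:.1f} months)")
```

Run your own numbers through the same three lines; the 2-to-3-month figure holds across most of that 10-to-40-hour range.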
Common questions
Who maintains this when an API breaks?
During the 30-day support window, I do. After that, your team does, and that’s why the handover documentation and training session exist. I also build monitoring into every automation so it alerts you when something breaks rather than failing silently, which is the failure mode that actually costs you money.
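The monitoring I mean is structural, not an afterthought. A minimal sketch, where `ALERTS` stands in for a real channel like Slack or email:

```python
import functools

ALERTS = []  # stand-in for a real alert channel (Slack, email, pager)

def monitored(name):
    """Decorator: never let an automation fail silently; record and alert."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                ALERTS.append(f"{name} failed: {exc}")
                raise  # still fail loudly so the run shows as broken
        return inner
    return wrap

@monitored("weekly-report")
def build_report(rows):
    if not rows:
        raise ValueError("no input rows")
    return len(rows)

print(build_report([1, 2, 3]))  # 3
try:
    build_report([])
except ValueError:
    pass
print(ALERTS)  # ['weekly-report failed: no input rows']
```

Every automation gets wrapped this way, so a broken upstream API produces an alert in minutes rather than a quiet stream of wrong outputs.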
What if the AI confidently gives wrong answers?
Every automation I build includes validation checks and confidence scoring. For high-stakes outputs (financial data, customer communications, anything with legal implications), the system flags uncertain results for human review rather than pushing them through automatically. The goal is to automate the 80% that’s straightforward and route the tricky 20% to a person.
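The routing logic behind that 80/20 split is simple. A sketch, where the thresholds are assumptions that get tuned per workflow rather than fixed values:

```python
def route_output(confidence: float, high_stakes: bool, threshold: float = 0.9) -> str:
    """Auto-approve confident results; send the rest to a person.
    High-stakes outputs (financial, legal, customer-facing) get a
    stricter bar, so almost everything in that category is reviewed."""
    bar = 0.99 if high_stakes else threshold  # both thresholds are assumed
    return "auto" if confidence >= bar else "human_review"

print(route_output(0.95, high_stakes=False))  # auto
print(route_output(0.95, high_stakes=True))   # human_review
```

The same result can be safe to automate in one workflow and review-only in another; the threshold is where that judgment lives.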
Will our data end up training someone else’s model?
Not if we design it properly. For sensitive data, I use local LLMs running on your hardware via Ollama or LM Studio, so nothing leaves your network. For less sensitive workflows where cloud APIs are faster and cheaper, I use the enterprise API tiers from Anthropic and Google which have contractual guarantees that your data isn’t used for training.
Got a process that’s eating your team’s time?
Book a free 30-minute call. Describe what you’re trying to automate, and I’ll tell you whether it’s worth building, roughly what it would cost, and whether you actually need me or could solve it with something simpler.
Book a call