Custom MCP Servers

I build the software that gives Claude, or any AI assistant, structured access to your internal systems: your CRM, your databases, your proprietary tools. Your data never leaves your infrastructure. Published as an npm package you own. No ongoing licence fees.

Book a free call to discuss

What is an MCP server (briefly)

An MCP server is a small piece of software that sits between an AI assistant and your data, giving the AI structured access to specific information without exposing the underlying system. So instead of your team copying data out of your CRM and pasting it into ChatGPT (which is what most people do, and which means your customer data is now sitting on someone else’s servers being used for who knows what), the AI queries your system directly through the MCP server. It gets exactly the data it needs, in the right format, and nothing leaves your infrastructure.
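To make "structured access" concrete, here is a minimal sketch of the shape of an MCP tool: a name, a typed input, and a handler that runs on your infrastructure and returns structured data. This deliberately doesn't use the real MCP SDK; all names and the in-memory "CRM" are illustrative.

```typescript
// A tool is a named, typed capability the AI assistant can call.
// The handler runs on your infrastructure; the assistant only ever
// sees the structured result, never the underlying database.
interface Tool<In, Out> {
  name: string;
  description: string;
  handler: (input: In) => Promise<Out>;
}

interface DealStatus {
  stage: string;
  lastContact: string; // ISO date
  owner: string;
}

const getDealStatus: Tool<{ dealName: string }, DealStatus> = {
  name: "get_deal_status",
  description: "Look up a deal's current stage in the CRM",
  handler: async ({ dealName }) => {
    // A real server would query your CRM's API or database here.
    const fakeCrm: Record<string, DealStatus> = {
      Acme: { stage: "Negotiation", lastContact: "2025-01-10", owner: "Sam" },
    };
    const deal = fakeCrm[dealName];
    if (!deal) throw new Error(`No deal named ${dealName}`);
    return deal;
  },
};

getDealStatus.handler({ dealName: "Acme" }).then((d) => console.log(d.stage));
```

The key property is that the handler's output is typed and specific: the assistant can ask a question, but it can only ever receive the fields you chose to expose.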

I wrote a detailed guide that covers the full technical picture: What is an MCP server? But for this page, think of it as a secure, structured bridge between AI and your business data that you control completely.

Why custom

There are off-the-shelf MCP servers for the obvious things: Google Drive, Slack, GitHub, the tools everyone uses. But your CRM isn’t generic, or if it is, the way you’ve customised it means the standard connector doesn’t surface the data your team actually needs in the way they need it. Your internal reporting tool, your proprietary database, the spreadsheet-that’s-really-a-database that runs half your operations (every company has at least one of these): none of those have pre-built connectors. That’s where custom comes in.

I’ve built 16 production MCP servers, all published as open-source npm packages under @houtini. They cover SEO analysis and search console data, financial market data via Financial Modeling Prep, voice analysis for content pipelines, job board automation for recruitment platforms like Greenhouse and Ashby, web scraping via Crawlee, Google Knowledge Graph lookups, email marketing through Brevo, AI image and video generation via Gemini, carousel generation, and geo-analysis. So when I say I build these, I’m not talking about a theoretical capability. I’m talking about something I do regularly enough to have a published track record you can go and inspect on npm right now.

What this looks like in practice

CRM MCP

Your sales team asks Claude “what’s the status of the Acme deal?” and gets a real answer drawn from your actual CRM data, covering deal stage, last contact, notes from the account manager, and next steps. No copying, no switching tabs, no one having to remember which field that information lives in. And the data stays on your infrastructure the entire time, which matters more than most people realise when they first start thinking about connecting AI to customer records.

Finance MCP

AI-powered reporting on your actual financial data, so someone can ask “show me revenue by product line for the last quarter compared to the same period last year” and get an answer in seconds from your accounting system. The AI does the comparison and spots trends that would take someone an hour to pull together manually, and the underlying numbers come from your real data, not from a model that’s guessing based on its training set.
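The quarter-over-quarter comparison above can be sketched as a single function. The data shape, product line, and figures are made up for illustration; a real finance MCP tool would pull these rows from your accounting system.

```typescript
// Compare revenue by product line for one quarter against the same
// quarter last year, returning the percentage change per line.
interface QuarterRevenue {
  productLine: string;
  revenue: number;
}

interface Comparison {
  productLine: string;
  current: number;
  previous: number;
  changePct: number;
}

function compareQuarters(
  current: QuarterRevenue[],
  previous: QuarterRevenue[],
): Comparison[] {
  const prevByLine = new Map(previous.map((r) => [r.productLine, r.revenue]));
  return current.map((r) => {
    const prev = prevByLine.get(r.productLine) ?? 0;
    const changePct = prev === 0 ? 0 : ((r.revenue - prev) / prev) * 100;
    return { productLine: r.productLine, current: r.revenue, previous: prev, changePct };
  });
}

const q3ThisYear = [{ productLine: "Widgets", revenue: 120_000 }];
const q3LastYear = [{ productLine: "Widgets", revenue: 100_000 }];
console.log(compareQuarters(q3ThisYear, q3LastYear));
// Widgets: up 20% on the same quarter last year
```

The AI's job is then interpretation (spotting which lines moved and why it might matter), while the arithmetic runs against real numbers.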

Knowledge Base MCP

Connect your Confluence, Notion, or internal wiki to Claude so your team can ask questions and get answers grounded in your actual documentation. Not hallucinated answers from the AI’s training data (which is the failure mode that makes people distrust AI, and rightly so), but real answers from your real docs, with source references so anyone can verify what it said.
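The "with source references" part is the important design decision, and it can be sketched simply: every chunk of documentation the server returns carries its source alongside the text. The documents and the keyword search here are toy stand-ins; a real server would search Confluence or Notion via their APIs.

```typescript
// Every result pairs the text with where it came from, so any answer
// the assistant gives can be traced back to a real page.
interface DocChunk {
  source: string; // e.g. a wiki path or Confluence page URL
  text: string;
}

const docs: DocChunk[] = [
  { source: "wiki/onboarding", text: "New starters get laptop access on day one." },
  { source: "wiki/expenses", text: "Expenses are submitted monthly via the portal." },
];

// A real server would use full-text or vector search; a keyword match
// is enough to show the shape of the response.
function searchDocs(query: string): DocChunk[] {
  const q = query.toLowerCase();
  return docs.filter((d) => d.text.toLowerCase().includes(q));
}

const hits = searchDocs("expenses");
for (const h of hits) console.log(`${h.text} [source: ${h.source}]`);
```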

Monitoring MCP

An AI that watches your dashboards and alerts on anomalies, and not just “this number crossed a threshold” (any alerting tool can do that) but “this pattern looks unusual compared to the last six months and here’s why it might matter.” The kind of monitoring that would require a person staring at a screen for hours, except it runs continuously and never gets distracted.
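The difference between "crossed a threshold" and "looks unusual" can be sketched with a simple statistical check: flag a value when it sits far outside the mean and spread of its own recent history, rather than against a fixed number. The 3-standard-deviation cutoff and the sample figures are illustrative; a production monitor would be more sophisticated.

```typescript
// Flag a value as anomalous when it is more than `zThreshold` standard
// deviations away from the mean of its trailing history.
function isAnomalous(history: number[], latest: number, zThreshold = 3): boolean {
  const mean = history.reduce((a, b) => a + b, 0) / history.length;
  const variance =
    history.reduce((a, b) => a + (b - mean) ** 2, 0) / history.length;
  const std = Math.sqrt(variance);
  if (std === 0) return latest !== mean;
  return Math.abs(latest - mean) / std > zThreshold;
}

const recentDays = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101];
console.log(isAnomalous(recentDays, 101)); // within normal variation: false
console.log(isAnomalous(recentDays, 160)); // far outside the pattern: true
```

Because the baseline comes from the data's own history, a metric that naturally drifts upward won't fire false alerts the way a fixed threshold would.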

What you get

📦 Published npm package you own and control
🔒 Runs entirely on your infrastructure
📋 Full documentation and setup guide
🔧 30-day support window post-delivery
💰 No ongoing licence fees, ever
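As an example of what "a package you own" looks like in use: once published, the server becomes a few lines in an MCP client's configuration, such as Claude Desktop's `claude_desktop_config.json`. The scope and package name below are placeholders.

```json
{
  "mcpServers": {
    "crm": {
      "command": "npx",
      "args": ["-y", "@yourcompany/crm-mcp-server"]
    }
  }
}
```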

The privacy point

I keep coming back to this because it’s the risk most people underestimate when they first bring AI into the business. Every time someone on your team pastes customer data, financial figures, or proprietary information into ChatGPT or Claude’s web interface, that data has left your control. An MCP server solves this at the architecture level: the AI can query your data, reason about it, and give answers, but the data itself never leaves your network. And for situations where even the AI queries need to stay internal, I can pair the MCP server with a local LLM running on your own hardware via Ollama or LM Studio, so the entire stack is on-premises.

I’m not aware of another consultancy that builds custom MCP servers as a standalone service. Most AI consultancies don’t have the technical depth to build them. I do, because I use them every day and have published 16 of them as open-source packages that anyone can go and inspect on npm.

Common questions

Will our data end up training someone else’s model?

No. That’s the entire point of the architecture. The MCP server runs on your infrastructure, and your data stays there. The AI assistant sends queries to the MCP server and gets structured responses back, but the raw data never leaves your network. For maximum isolation, I can pair the MCP server with a local LLM so even the AI reasoning happens on your hardware.

Who maintains this when an API breaks?

You get 30 days of support after delivery, and the code is fully documented TypeScript that any competent developer can maintain. I also build in monitoring and error handling so the MCP server tells you when something breaks rather than returning bad data silently. If you need ongoing maintenance after the support window, we can arrange that, but most clients handle it internally.
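The "tells you when something breaks" point can be sketched as a response validator: if an upstream API returns a malformed record, the server raises a loud, specific error instead of passing bad data on to the AI. The field names here are illustrative.

```typescript
// Validate upstream data before it reaches the assistant: a malformed
// record produces a clear error, never a silently wrong answer.
interface DealRecord {
  name: string;
  stage: string;
  lastContact: string;
}

function validateDeal(raw: unknown): DealRecord {
  const r = raw as Partial<DealRecord>;
  const missing = (["name", "stage", "lastContact"] as const).filter(
    (k) => typeof r[k] !== "string",
  );
  if (missing.length > 0) {
    throw new Error(`CRM returned a malformed deal: missing ${missing.join(", ")}`);
  }
  return r as DealRecord;
}

// A well-formed record passes through unchanged...
validateDeal({ name: "Acme", stage: "Negotiation", lastContact: "2025-01-10" });

// ...while a broken upstream response fails with an explicit message.
try {
  validateDeal({ name: "Acme" });
} catch (e) {
  console.log((e as Error).message);
}
```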

What if the AI confidently gives wrong answers from our data?

The MCP server returns structured, typed data, not free-text summaries. So when Claude answers “what’s the status of the Acme deal?” it’s reading actual fields from your CRM, not generating a plausible-sounding guess. The hallucination risk drops dramatically when the AI is working with real structured data rather than trying to recall something from its training.

Want to connect AI to your systems?

Book a free call. Tell me what systems you need to connect, what questions your team needs AI to answer, and I’ll tell you whether a custom MCP server is the right approach or if something simpler would work.

Book a call
