Explainer · for data, marketing and e-commerce teams · 5-minute read
What is agentic AI?
Agentic AI is software that takes a goal and pursues it through actions, instead of waiting for you to prompt it for the next step.
The same large language model you already meet in ChatGPT or Claude sits inside it. What changes is the loop around the model: it perceives a situation, reasons about it, plans the steps, takes action, and learns from the result, with minimal human involvement once it has been pointed at the goal.
That difference, between answering you and acting for you, is the whole point of agentic AI.
Generative AI is reactive. You prompt it. It generates something: copy, an image, a draft email, a snippet of code. The work ends at the generation. Without you, it does nothing more.
Agentic AI is proactive. You give it a goal. It pursues that goal through a series of actions, calling on tools, reading data, deciding what to do next, until either it has finished the job or it needs you to step back in. The prompt at the start is still there. What follows it is different.
Same brain, different loop
Generative vs agentic AI
Both share the same foundation. Large language models, the things that make ChatGPT and Claude tick, sit at the centre of both. The LLM is the brain in both cases. What surrounds it changes. In generative AI the brain is on its own and the only output channel is text in your window. In agentic AI the brain is wired into a loop with tools, memory, and the goal you set, and the output channel is action in real systems.
Most of the agentic AI you will see in the next year is exactly this: a generative AI you already use, with a different loop around it.
02 · How it actually works
Five steps an agent runs through. Every time.
This is the lifecycle Google's AI team and most of the agent frameworks (LangGraph, CrewAI, the Agent Development Kit) describe. Plain English version below, with the kind of working example our clients see.
The agent loop
Continuous, not linear
01
Perception
The agent reads the situation it has been put in. The goal, the data it has access to, the tools available, the constraints. For a marketing-research agent that might mean: "draft a competitor briefing on these five companies, here is the SERP API and here is our internal CMS."
02
Reasoning
The agent thinks about the goal in stages. Practitioners call this chain-of-thought. The model writes its working out before it acts: "I need to fetch the five competitors, then pull their organic visibility, then summarise the differences, then draft." The reasoning is not always visible, but it is always there.
03
Planning
The agent breaks the goal into steps and decides which tool to use for which step. A small step might be "call the SERP API for competitor one." A larger step might be "loop the same call across all five competitors and stash the results."
04
Action
The agent calls the tool, reads the response, and decides what to do next. This is where most of the daylight between agentic and generative AI shows up. The agent is doing things in real systems, not generating words about doing them.
05
Reflection
When something does not work, the agent notices. The API returns nothing, the data is malformed, a step fails. A well-built agent reflects on the result, adjusts the plan, and tries again. A badly-built one just stops. The reflection step is what separates an agent that survives in production from a demo that only works on the happy path.
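The five steps above can be sketched in a few lines of Python. This is a toy, not a framework: the tool, the competitor names, and the visibility numbers are all invented for illustration, and a real agent would be calling an LLM and live APIs rather than a stub.

```python
def fetch_visibility(competitor):
    # Stub for a SERP-style tool call; a real agent would hit an API here.
    # A None return stands in for a failed or empty response.
    fake_data = {"acme": 72, "globex": 55, "initech": None}
    return fake_data.get(competitor)

def run_agent(goal, competitors):
    results = {}
    plan = list(competitors)          # Planning: one step per competitor
    while plan:
        step = plan.pop(0)            # Action: execute the next step
        value = fetch_visibility(step)
        if value is None:             # Reflection: the call failed, adjust
            results[step] = "manual follow-up needed"
        else:
            results[step] = value
    # A drafting step would go here; we just return the gathered data.
    return results

briefing = run_agent("competitor briefing", ["acme", "globex", "initech"])
# initech's failed call is flagged rather than crashing the run
```

The point of the sketch is the reflection branch: a badly built agent stops at the first `None`, a well-built one records the failure and keeps going.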
03 · Agents vs agentic AI
Agents are the tools. Agentic AI is the house you build with them.
An AI agent is a single, focused thing. It might do one job, like find a flight at the right price, or summarise yesterday's customer messages, or run an SEO audit on a competitor. Useful, but on its own, narrow.
Agentic AI is what you have when several of those agents work together under a coordinator that knows the wider goal. The coordinator hands the right work to the right agent, gathers the results, decides what to do next. Agents in a toolbox. Agentic AI is the house.
This is the framing most production work is moving toward in 2026. Roles like "Forward Deployed Engineer" and "Agent Reliability Engineer", both titles climbing in our jobs index, exist to design and run those coordinators. Sound exotic? It is mostly Python, plus careful prompts, plus the connector layer below.
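The coordinator idea fits in a few lines. This is illustrative only: the agent names, their tasks, and the string results are invented, and in production each entry would be a full agent with its own model, tools and prompts rather than a lambda.

```python
# Each "agent" is a narrow worker; the coordinator knows the wider goal
# and routes the right task to the right agent.
AGENTS = {
    "seo_audit": lambda task: f"audit report for {task}",
    "summarise": lambda task: f"summary of {task}",
}

def coordinate(goal, tasks):
    # tasks is a list of (agent_name, task) pairs chosen by the coordinator.
    results = []
    for agent_name, task in tasks:
        agent = AGENTS[agent_name]      # hand the work to the right agent
        results.append(agent(task))     # gather the result
    return {"goal": goal, "results": results}

out = coordinate("competitor review", [("seo_audit", "acme"),
                                       ("summarise", "yesterday's tickets")])
```

The agents stay narrow and testable; the judgement about sequencing and delegation lives in one place, the coordinator.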
04 · Where it shows up in the work
Agentic AI in data, marketing and e-commerce, this quarter.
Five examples from our own working week. Not future-state. Currently shipping.
Content marketing research
Multi-tool research that reads, synthesises, and drafts in one pass.
The exact work that built the page you are reading. Goal: "build a synthesis on agentic AI from primary sources." The agent fetches, reads, extracts, cross-checks against our own data, drafts. A senior person reviews, edits, ships. The work that used to take a long Tuesday now takes a focused hour, with the human doing the part the human is good at.
E-commerce store management
Product, template and order work through the Shopify CLI.
Managed via the Shopify CLI inside Claude Code. Goal: "add this product feature, update the cart template, query the order database for last month's repeats." The agent calls the CLI, edits the right files, runs the queries, surfaces the results. End-to-end shop work without leaving the editor.
Charts and diagrams
Data visualisation in the same conversation, not in a separate app.
The Gemini MCP gives Claude image-generation tools. Goal: "make this category breakdown into a chart fit for the deck." The agent picks the chart type, generates it, and drops it into the conversation. No tab-switching, no manual export, no "and now the design team needs this in PNG".
Analytics
SQL extraction, joining, and explanation across stacks.
Pulling and joining data across Shopify, GA4 and GSC, then explaining what changed and why, used to mean a half-day for a senior analyst. With the right MCPs in place, the agent does the extraction and the join, the human does the interpretation. Time saved is exactly the time the analyst should have been spending on the interpretation.
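The extraction-and-join half of that work, reduced to a toy: two stubbed data sources keyed by month, joined into one readout. The source names, months and figures are invented; in practice the agent pulls these through the relevant MCPs.

```python
# Stubbed extracts: the kind of per-month figures the agent would pull
# from an orders source and a search source respectively.
orders = {"2025-01": 120, "2025-02": 95}       # e.g. from a shop backend
clicks = {"2025-01": 4000, "2025-02": 5200}    # e.g. from search data

# The join: one record per month, both metrics side by side.
joined = {
    month: {"orders": orders[month], "clicks": clicks[month]}
    for month in orders
}
# The agent stops here. The interpretation, e.g. "clicks up, orders
# down: likely a conversion problem", stays with the human analyst.
```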
Landing-page and small-build work
From a Figma file or a written brief to a working page, in a session.
The same agentic loop, with the right skills library plus a design-to-code path, takes a Figma frame or a paragraph of intent and produces a deployable page. The kind of work that used to need a brief, an agency, and three weeks. The senior judgement, on what the page needs to say and what action it needs to drive, stays human. Everything else is the loop.
If you want the longer reading on what AI is doing across the labour market, with primary-source data, our research page is the next stop.
05 · The connector layer
Agents are useless without tools. MCP is how you give them tools.
An agent that can only call its own brain is a glorified chatbot. The interesting bit, the part that turns "AI assistant" into "agentic AI", is the connector layer. The set of tools the agent can reach for, and the protocol it uses to reach them.
The Model Context Protocol, MCP for short, is the standard for that connector layer. Anthropic published it. Most of the agent ecosystem has adopted it. An MCP server exposes a set of tools, like "search SERPs", "read this email account", "query this database", and any agent that speaks MCP can call them.
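The shape of the idea, as a toy tool registry. To be clear, this is not the real MCP SDK; it is a self-contained sketch of the pattern, a server exposing named tools that any client speaking the protocol can list and call, with both tools stubbed.

```python
class ToolServer:
    """Toy stand-in for an MCP-style server: registers and serves tools."""

    def __init__(self):
        self._tools = {}

    def tool(self, name):
        # Decorator that registers a function under a tool name.
        def register(fn):
            self._tools[name] = fn
            return fn
        return register

    def list_tools(self):
        # Clients discover what is available before calling anything.
        return sorted(self._tools)

    def call(self, name, **kwargs):
        return self._tools[name](**kwargs)

server = ToolServer()

@server.tool("search_serps")
def search_serps(query):
    return [f"result for {query}"]      # stub

@server.tool("query_database")
def query_database(sql):
    return {"rows": 0}                  # stub
```

The agent never hardcodes the tools; it asks the server what exists, then calls by name. That discoverability is what makes the connector layer a standard rather than a pile of bespoke integrations.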
How the agent reaches your tools
MCP · Model Context Protocol
We publish sixteen open-source MCP servers under @houtini, each one solving a specific problem we hit on real engagements. The longer reading on what an MCP server is and why it matters is in our explainer. The full kit is on the tools page.
06 · When not to reach for it
Agentic AI is not always the right tool.
If the work is fully predictable, repeats the same way every time, and the cost of a wrong answer is high, classic automation or RPA is cheaper, faster and more reliable than an agent. Agentic AI earns its place in different conditions: where the work needs reasoning, where edge cases are normal, or where the goal requires coordination across several tools and the right path is not knowable in advance.
The honest read on the agentic AI hype cycle is that a lot of work being pitched as agentic could be done better by a script. The signal that you actually need an agent is when you find yourself writing endless if-statements to handle exceptions. That is the moment a model with reasoning starts to earn its keep.
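The "endless if-statements" smell, in miniature. The categories and keyword rules below are invented for illustration; the point is that every new edge case means another branch, which is exactly when a model with reasoning starts to look cheaper than the next hundred branches.

```python
def classify_ticket_rules(text):
    # Rule-based routing: fine for three known cases, brittle beyond them.
    text = text.lower()
    if "refund" in text:
        return "billing"
    if "password" in text or "login" in text:
        return "account"
    if "broken" in text or "error" in text:
        return "technical"
    return "unknown"    # ...and the branch count only grows from here
```

If the branches above stay stable and the cost of a miss is high, keep the script. If you find yourself adding a branch a week, that is the signal from the paragraph above.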
The other honest read: agentic AI works best on top of clean inputs. If the data is bad, the integrations are flaky, and the goal is fuzzy, an agent will not save you. It will just produce a more eloquent failure than a script would have.
07 · What to do
Three things you can do this quarter, without us.
No amount of explainer reads as well as an afternoon of actually doing it. Pick one.
For the build-side reader
Get a Claude Code setup running.
An afternoon. Our getting-started guide covers the install, the first useful workflows, and the gotchas that catch people on day one.
Our /research page is the longer read on what AI is doing inside named jobs right now, with primary-source data and our own job-market index for triangulation.
If you work through one of those and end up wanting help building the agent layer around how your team actually works, that is what we are useful for. Not before.