What MCP Servers Can Do
Let’s start with what actually happens when an agent uses a tool. Then we’ll break down how it works.
Watch a Tool Call Happen
You ask an agent about the weather. Here’s exactly what happens behind the scenes.
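The exchange can be sketched as four JSON-RPC messages. The method names (`tools/list`, `tools/call`) and result shapes follow the MCP specification; the `get_weather` tool itself, its schema, and the city are hypothetical examples for illustration.

```python
# A sketch of the MCP message exchange for a weather question.
# The tool name "get_weather" and its arguments are hypothetical --
# real servers define their own tools.

# 1. On connect, the agent asks the server what tools it offers.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. The server replies with tool definitions: name, description, parameters.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "inputSchema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }]
    },
}

# 3. The AI reads the definitions, decides get_weather fits the question,
#    and the agent sends a call with the arguments the AI chose.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Tokyo"}},
}

# 4. The server runs the tool and returns a result, which goes back into
#    the AI's context so it can answer you in plain language.
call_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {"content": [{"type": "text", "text": "18°C and partly cloudy"}]},
}
```

Note that the AI never executes anything itself: it only chooses the tool and its arguments, and the server does the actual work.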
That’s the whole pattern. The server exposes tools. The AI reads them, picks the right one, and uses the result to answer you. Every MCP interaction — checking email, querying a database, creating a task — follows this exact flow.
The Three Building Blocks
The weather server above exposed tools — but that’s just one of three things an MCP server can offer. The key question for each: who decides when it gets used?
1. Tools — the AI decides
Actions the agent can take: send an email, look up a contact, create a task. The AI reads the available tools and picks the right one based on what you ask. This is the most common building block by far.
2. Resources — the app decides
Data the application pulls in on your behalf — like documents or database records. The AI doesn’t choose to read a resource; the app does, based on your actions.
3. Prompts — you decide
Reusable templates you can trigger — like a “review this PR” button that gives the AI structured instructions. Think of them as saved workflows your team can standardize.
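The three building blocks can be summarized side by side. This is a minimal sketch of what a server might list under each capability; every name and URI here is hypothetical.

```python
# The three capability lists an MCP server can expose, and who
# decides when each gets used. All names/URIs are made up for
# illustration -- real servers define their own.

capabilities = {
    # Tools: actions. The AI reads these and picks one to match your request.
    "tools": [
        {"name": "send_email", "description": "Send an email to a contact."},
        {"name": "create_task", "description": "Create a task in the tracker."},
    ],
    # Resources: data. The app attaches these to the conversation on your behalf.
    "resources": [
        {"uri": "crm://contacts/42", "name": "Contact record"},
    ],
    # Prompts: templates. You trigger these explicitly, like a saved workflow.
    "prompts": [
        {"name": "review_pr", "description": "Structured pull-request review."},
    ],
}

# Who pulls the trigger for each building block:
decider = {"tools": "the AI", "resources": "the app", "prompts": "you"}
```

The asymmetry is the point: tools hand a decision to the model, resources and prompts keep it with the app and the user.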
How Tools Show Up to the AI
When an agent connects to an MCP server, the server hands over a list of tool definitions — name, description, and what parameters it expects. These go into the AI’s context window (its working memory), alongside your conversation.
The description is everything. It tells the AI when to reach for a tool. A vague description means the AI guesses; a precise one means it picks the right tool reliably.
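To make that concrete, here are two versions of the same hypothetical tool definition. The AI sees only this metadata when deciding what to call, so the description alone determines whether it reaches for the tool at the right moment. The names, descriptions, and schema fields are illustrative, not from any real server.

```python
# Two definitions of the same hypothetical tool. The AI chooses
# tools based on this metadata alone -- it cannot see the code behind them.

vague = {
    "name": "lookup",
    "description": "Looks things up.",  # Looks *what* up? When should the AI use it?
    "inputSchema": {
        "type": "object",
        "properties": {"q": {"type": "string"}},
    },
}

precise = {
    "name": "lookup_contact",
    "description": (
        "Look up a contact by name or email in the company CRM. "
        "Use when the user asks who someone is or how to reach them."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Name or email address to search for.",
            }
        },
        "required": ["query"],
    },
}
```

The second version names the data source, states when to use the tool, and documents each parameter, which is what lets the model route requests to it correctly.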
When someone asks “can Gumloop connect to X?” — the answer is almost always yes. If there’s an API, someone can build an MCP server for it. That server becomes a set of tools any Gumloop agent can use instantly.
