This guide walks you through creating a smart agent that interacts with an MCP (Model Context Protocol) server using the mcp-use library. Leveraging LangChain’s ReAct-style architecture, you’ll be able to design agents that can reason, use tools, and complete complex tasks step-by-step.
This is the third installment in the MCP series. If you haven’t yet, check out:
Build an MCP Server with Python
What is MCP in AI: The Building Blocks of AI
Overview
This example shows how to build an MCP client using:
- LangChain’s ChatOpenAI model
  - LangChain helps manage prompts, memory, and reasoning in a structured way.
  - It’s great for building intelligent, multi-step agents quickly.
- MCPClient and MCPAgent from the mcp-use library
  - These tools handle communication with an MCP (Model Context Protocol) server.
  - They help structure agent workflows and manage context.
How MCP Agents Behave
When you build an MCP client, it’s important to understand how MCP agents work. They follow a simple but powerful pattern called ReAct — short for Reasoning and Acting.
Here’s how it works:
- Think Before Acting: The agent breaks down each task into small steps. First, it reasons about the problem — asking, “What should I do next?”
- Use Tools via MCP: It uses tools that are available through the connected MCP server. These tools can be anything from a calculator to a document search function — whatever is exposed by your Model Context Protocol setup.
- Observe and Adjust: After using a tool, the agent looks at the output (called an “observation”) and uses that result to decide what to do next.
- Know When to Stop: The agent keeps repeating this think-act-observe cycle until it either finishes the task or hits a set limit of steps (max_steps).
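The four behaviors above can be sketched as a plain Python loop. This is an illustrative toy, not the real MCPAgent internals: the function names (react_loop, decide_next) and the scripted "add" tool are hypothetical stand-ins for the reasoning model and MCP tools.

```python
# A minimal, illustrative sketch of the ReAct pattern (NOT the real MCPAgent
# internals): the agent alternates thinking, acting via a tool, and observing
# the result, until it decides to finish or exhausts max_steps.

def react_loop(task, tools, decide_next, max_steps=10):
    """decide_next(task, history) returns either ("tool", name, args)
    or ("finish", answer)."""
    history = []  # accumulated (tool_name, observation) pairs
    for step in range(max_steps):
        action = decide_next(task, history)       # 1. Think before acting
        if action[0] == "finish":
            return action[1]                      # 4. Know when to stop
        _, name, args = action
        observation = tools[name](*args)          # 2. Use a tool
        history.append((name, observation))       # 3. Observe and adjust
    return None  # step budget exhausted without an answer

# Toy example: one "calculator" tool and a scripted decision function.
tools = {"add": lambda a, b: a + b}

def decide_next(task, history):
    if not history:
        return ("tool", "add", (2, 3))
    return ("finish", f"The answer is {history[-1][1]}")

print(react_loop("What is 2 + 3?", tools, decide_next))  # prints "The answer is 5"
```

In the real agent, decide_next is the LLM's reasoning step and the tools dict is whatever the connected MCP server exposes.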
Requirements
Before you start building your MCP (Model Context Protocol) client, make sure you have the following set up:
1. Install Required Packages
Use pip to install the core libraries you’ll need:
pip install mcp-use langchain-openai
These libraries help your MCP client communicate with the Model Context Protocol and OpenAI’s language models.
2. GitHub Repository
You can find the source code and examples on GitHub:
mcp-use/mcp-use
3. OpenAI API Key
Make sure you have a valid OpenAI API key.
Set it as an environment variable:
export OPENAI_API_KEY=your-api-key-here
This allows your MCP client to securely connect to OpenAI’s services.
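If you want your script to fail fast with a clear message when the key is missing, a small standard-library check at the top of your client can help. The helper name (require_api_key) is just a suggestion:

```python
import os
import sys

def require_api_key(var="OPENAI_API_KEY"):
    # ChatOpenAI reads OPENAI_API_KEY from the environment, so checking it
    # up front gives a clearer error than a failed API call later.
    key = os.environ.get(var)
    if not key:
        sys.exit(f"Error: {var} is not set. Run: export {var}=your-api-key-here")
    return key
```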
4. Access to an MCP Server
You’ll need access to a running MCP server, either:
- Locally (for testing or development), or
- Remotely (a hosted or cloud-based MCP instance)
Code Example
Let’s walk through a simple example of how to build an MCP client using langchain-openai and mcp-use.
Create a Python File
Create a new file named client.py and add the following code:
import asyncio
from langchain_openai import ChatOpenAI
from mcp_use import MCPClient, MCPAgent

async def main():
    # Connect to the MCP server
    client = MCPClient(config={
        "mcpServers": {
            "default": {
                "url": "https://your-mcp-server-url.com/mcp"
            }
        }
    })

    # Set up your language model (using OpenAI)
    llm = ChatOpenAI(model="gpt-4.1")

    # Initialize the MCP Agent with your LLM and client
    agent = MCPAgent(llm=llm, client=client, max_steps=10)

    # Run a task
    resp = await agent.run(
        "List all the books in the database and search for each on the internet, then gather their descriptions and metadata."
    )
    print(resp)

if __name__ == "__main__":
    asyncio.run(main())
Connecting to MCP with Different Configuration Options
You can connect to an MCP server in multiple ways, depending on your use case. Here’s how to configure it:
1. Remote MCP Server (Hosted)
config = {
"mcpServers": {
"remote": {
"url": "https://your-remote-mcp-url.com/mcp"
}
}
}
Use this if you’re connecting to a cloud-hosted MCP server.
2. Local MCP Server
config = {
"mcpServers": {
"local": {
"url": "http://localhost:8080/mcp/sse"
}
}
}
Use this when running an MCP server on your local machine, e.g., during development or testing.
Start MCP from a CLI Command
For more dynamic use cases (like running the MCP server with a custom command), you can configure it like this:
config = {
"mcpServers": {
"cli": {
"command": "uv",
"args": ["run", "--with", "mcp", "mcp", "run", "server.py"]
}
}
}
This allows you to run an MCP server directly from a CLI process on startup.
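Since "mcpServers" is a name-keyed mapping, the three styles above can in principle be combined in a single config, giving the agent access to tools from all of the servers at once. The server names ("remote", "local", "cli") are arbitrary labels:

```python
# A combined config sketch: one MCPClient config listing a hosted server,
# a local server, and a CLI-launched server side by side. The names are
# arbitrary; the URLs and command are placeholders from the examples above.
config = {
    "mcpServers": {
        "remote": {
            "url": "https://your-remote-mcp-url.com/mcp"
        },
        "local": {
            "url": "http://localhost:8080/mcp/sse"
        },
        "cli": {
            "command": "uv",
            "args": ["run", "--with", "mcp", "mcp", "run", "server.py"]
        }
    }
}
```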
Sample Output – Execution Trace
Here’s what a typical interaction looks like when using your MCP client to process a natural language query:
User Input:
Search books in my database and tell me the authors, topics, and some details about each of them.
Behind the Scenes (Agent Trace)
🚀 Initializing MCP agent and connecting to services...
🧠 Agent ready with tools: fetch_books, internet_search, summarize_web_content...
💬 Received query...
🏁 Starting agent execution with max_steps=10
👣 Step 1/10
🔧 Tool call: fetch_books → ✅ Found 4 books
👣 Step 2/10
🔧 Tool call: duckduckgo_detailed_search → Searched each book
👣 Step 3/10
🔧 Tool call: summarize_web_content → Combined descriptions + metadata
✅ Agent finished at step 3
🎉 Execution complete in 23.1 seconds
Sample Output – Final Response
Here’s the clean and user-friendly output that the MCP client generates:
1. **Chess Strategy, by Edward Lasker**
- Description: Classic chess guide from 1921
- Topics: Chess, Strategy
- Links: Archive.org, Project Gutenberg
2. **Deep Learning with Python**
- Author: François Chollet
- Topics: Deep Learning, Python, Keras
- Links: Manning, O'Reilly, IEEE Xplore
...
With just one prompt, the MCP client fetches structured, enriched data by coordinating multiple tools, showing how the Model Context Protocol makes AI interactions more powerful and traceable.
Key Concepts
| Concept | Description |
|---|---|
| MCPClient | Interface to connect with the MCP server |
| MCPAgent | LangChain-compatible agent for reasoning + tool usage |
| max_steps | Max number of reason-act cycles per task |
| Tool Chaining | The agent calls multiple tools in sequence |
| Async Execution | All MCP calls are fully asynchronous |
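Tool chaining and async execution are visible in the trace above: fetch_books, then a search per book, then a summary. A stdlib-only sketch of that shape, with local coroutines standing in for the real MCP tools (which are invoked over the protocol, not defined locally):

```python
import asyncio

# Stand-in async "tools" mirroring the trace (fetch_books -> search ->
# summarize). Purely illustrative; real MCP tools live on the server.
async def fetch_books():
    return ["Chess Strategy", "Deep Learning with Python"]

async def internet_search(title):
    return f"results for {title!r}"

async def summarize_web_content(results):
    return f"summary of {len(results)} result sets"

async def run_chain():
    books = await fetch_books()                          # Step 1: fetch
    results = [await internet_search(b) for b in books]  # Step 2: search each
    return await summarize_web_content(results)          # Step 3: summarize

print(asyncio.run(run_chain()))  # prints "summary of 2 result sets"
```

Because every call is awaited, an agent can interleave this work with other coroutines instead of blocking the event loop.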
Summary: You’ve Built Your First MCP Client
By now, you’ve successfully set up and run a working example of an MCP (Model Context Protocol) client. Here’s what you’ve accomplished:
- Connected to an MCP server (local or remote)
- Launched a LangChain-based agent using OpenAI’s models
- Executed complex, multi-step tasks with smart tool integration
This gives you a solid foundation to build powerful, intelligent AI systems that can use tools, make decisions, and scale in real-world applications.
From here, you’re ready to take the next steps toward production-ready, tool-augmented AI.
