Trae: Introducing MCP Server Integration With Practical Examples


ByteDance’s AI IDE Trae version 1.3.0 (April 2025) adds full support for MCP servers, embracing the new Model Context Protocol (MCP) standard for connecting LLMs to external data and tools.
MCP is an open, client-server protocol designed to break down data silos: it provides a universal interface so that AI models can query databases, code repos, APIs, and other systems via lightweight “servers”. In effect, MCP is like a USB-C port for AI: a standardized way for applications to plug in context and capabilities. This post explains what MCP servers do, what’s new in Trae’s MCP implementation, the technical underpinnings (performance, architecture, transports), and hands-on examples of configuring MCP servers in Trae.
What Are MCP Servers (Model Context Protocol)?
The Model Context Protocol (MCP) is a recently released open standard (open-sourced by Anthropic in late 2024) that standardizes how AI models access data and tools.
An MCP server is a lightweight program that exposes a specific capability – for example, fetching documents from a database, running code queries, or interfacing with GitHub – over a well-defined protocol. An MCP server implements a set of "commands" or tools that an LLM can invoke. From the LLM’s perspective, all MCP servers look the same: the protocol abstracts away the details of each data source.
MCP solves the problem of fragmented, one-off integrations. Traditionally, each AI assistant or agent needs a custom connector (for Slack, for Google Drive, for company APIs, etc.). MCP replaces that with a single client-server architecture: a host application (like Trae) connects to one or more MCP servers, and the servers connect to local or remote data sources.
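Concretely, every MCP exchange is a JSON-RPC 2.0 message, so a host can ask any server what it offers in exactly the same way. A tools/list request and a trimmed-down response might look like this (the query_database tool is an illustrative example, not a real server):

{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "query_database",
        "description": "Run a read-only SQL query",
        "inputSchema": {
          "type": "object",
          "properties": { "sql": { "type": "string" } }
        }
      }
    ]
  }
}

The host shows these tool descriptions to the LLM, which can then invoke any of them without knowing anything about the system behind the server.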
Key benefits of MCP include:
Unified integrations: Use pre-built MCP servers (GitHub, Supabase, Postgres, etc.) or write your own, all conforming to one spec.
Flexible deployment: MCP servers can run locally (typically launched as a subprocess and spoken to over stdio) or remotely over HTTP; a sketch of both styles follows this list.
Model-agnostic: The same MCP server can be used with different LLMs or AI assistants, so you’re not locked into one platform.
Secure and consistent: The protocol encourages security best practices (auth, origin checks) and consistent command schemas across tools.
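To make the deployment options concrete, here is a sketch of how the two transport styles commonly appear in an MCP client’s JSON configuration; the server names, package, and URL are placeholders, not real services:

{
  "mcpServers": {
    "local-tool": {
      "command": "npx",
      "args": ["-y", "example-mcp-server"]
    },
    "remote-tool": {
      "url": "https://mcp.example.com/sse"
    }
  }
}

The "command" form launches a local process and communicates over stdio, while the "url" form points at a remote server reachable over HTTP.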
In summary, MCP servers let an AI IDE or agent call external tools as if they were built-in functions. Trae’s new support for MCP means developers can seamlessly extend the IDE’s AI with whatever services they need.
Trae v1.3.0: New MCP Integration
In Trae 1.3.0, ByteDance has integrated MCP in several ways to streamline external tool access and automation. The highlights include:
Standardized MCP plugin system: Trae can now connect to any external tool via the MCP protocol. Developers can wire up services like Supabase, GitHub, custom REST APIs, or any local tool by adding it as an MCP server. This expands Trae’s “contextual capabilities”.
Project and global configuration: You define MCP servers in JSON config files, and Trae supports both global and project-level settings. Global MCP servers live in a user-wide file (e.g. ~/.trae/mcp.json), while a project can include its own .trae/mcp.json or mcp_settings.json in the root. Either way, you list servers under an "mcpServers" key and provide the command to run them, which makes integrations repeatable and shareable; a minimal example follows this list.
Built-in MCP Marketplace: Trae includes a searchable MCP marketplace in the IDE, where developers can browse and add pre-made MCP server definitions (for services like GitHub, Postgres, etc.) without manual config. This one-click deployment lowers the barrier: if you need GitHub integration, simply select the GitHub MCP server and authenticate, and Trae handles the rest.
Agent and builder integration: The new Builder Agent mode in Trae can dynamically call MCP servers as part of AI-driven tasks. For example, an automated agent can use an MCP server to fetch repo data, call external APIs, or even manipulate the filesystem. This means more complex workflows (multi-step builds, testing, deployment, etc.) can be handled by the AI.
Cross-platform support: Trae’s MCP features work on macOS, Windows, and Linux. It supports advanced LLMs like Claude 3.5 Sonnet and GPT-4o, either locally or via cloud APIs. According to the release notes, Trae’s AI engine is accelerated with Intel’s OpenVINO for on-device inference, so many LLM tasks and MCP calls run efficiently even without a GPU.
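As promised above, here is a minimal sketch of what a global ~/.trae/mcp.json could look like. The GitHub server package and its environment variable follow the reference implementation published in the modelcontextprotocol servers repository; treat the exact keys as an assumption if your Trae build differs:

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>"
      }
    }
  }
}

The same structure works in a project’s .trae/mcp.json, which lets you check the file into version control and share the setup with your team.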
Taken together, Trae 1.3.0 treats MCP as a first-class citizen: it not only adds a configuration mechanism, but also a marketplace and UI for MCP servers, and the underlying ability for its AI agents to use any connected service.
Example 1: Setting Up Puppeteer, a Built-in MCP Server in Trae
In this example, we will set up a built-in MCP server, Puppeteer, which lets the AI browse and extract content from web pages.
First, go to the MCP tab and click the ADD button:
Then search for: puppeteer
Click the (+) button.
Then click Confirm (this MCP server needs no configuration).
If all goes well, you will see this in your MCP servers list:
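For reference, the marketplace entry is roughly equivalent to this manual configuration; the package name assumes the reference Puppeteer server from the modelcontextprotocol servers repository:

{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}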
Now, to use it, select “@Builder with MCP” from your “@Agents” list and try this prompt:
“explore quran.us.kg and extract the list of quran surahs”
Alternatively, you can explicitly run a predefined command from the Puppeteer MCP tool list:
Run the prompt “/puppeteer_navigate explore quran.us.kg and extract the list of quran surahs”
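Under the hood, both phrasings end up as a standard MCP tools/call request to the server. A sketch of that message, assuming the puppeteer_navigate tool accepts a url argument as in the reference server:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "puppeteer_navigate",
    "arguments": { "url": "https://quran.us.kg" }
  }
}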
Example 2: Setting Up lighthouse-mcp, a Custom MCP Server in Trae
As a second example, we will add the lighthouse-mcp server to our Trae editor. First, go to the MCP tab and click the ADD button:
Then select the “Configure Manually” link.
Then click the “Raw Config (JSON)” button.
Then paste this entry under the "mcpServers" key of the newly opened file in your editor, adding a comma after the closing brace only if another server entry follows it:
"lighthouse": {
  "command": "npx",
  "args": ["lighthouse-mcp"],
  "disabled": false,
  "autoApprove": []
}
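For clarity, after the edit the full file would look something like this, assuming the Puppeteer server from Example 1 is still present:

{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    },
    "lighthouse": {
      "command": "npx",
      "args": ["lighthouse-mcp"],
      "disabled": false,
      "autoApprove": []
    }
  }
}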
If all goes well, you will see this in the MCP servers list:
Now let’s see how it works. First, select “@Builder with MCP” from your “@Agents” list, then enter this example prompt: “analyze my lighthouse scores for quran.us.kg”. Trae will use Chrome in the background to run a Lighthouse audit against the given website (https://quran.us.kg in my case) and then return this:
Conclusion
In conclusion, the integration of MCP servers in Trae version 1.3.0 marks a significant advancement in how AI models interact with external data and tools. By adopting the Model Context Protocol, Trae provides a unified, flexible, and secure way to extend its capabilities, allowing developers to seamlessly connect to a wide range of services. The introduction of features like the MCP marketplace, standardized plugin system, and cross-platform support enhances the IDE's functionality, making it easier for developers to automate complex workflows and leverage AI-driven tasks. With practical examples like setting up the puppeteer and lighthouse-mcp servers, Trae demonstrates the practical benefits of MCP integration, empowering developers to efficiently access and utilize external resources within their projects.
Written by adelpro, a web developer.