Understanding MCP Servers: The Bridge Between AI and Your Data

Apr 3, 2025

The Model Context Protocol (MCP) is an open standard designed and released by Anthropic to connect LLMs like Claude to the systems where your data lives - including content repositories, business tools, and GibsonAI’s cloud databases. Its primary goal is to help AI models produce better, more relevant responses by giving them direct access to your information so they are not guessing or fumbling around in the dark.

What Are MCP Servers and Why Do They Matter?

MCP servers function as bridges between AI models and data sources. Think of them as specialized connectors that follow a standardized protocol - similar to how USB-C provides a universal way to connect devices to various peripherals. With MCP, AI models can reach different data sources and tools through one consistent interface, with no one needing to manually feed information to the model.

Before MCP, every new data source required its own custom implementation, making truly connected AI systems difficult to scale. Every integration was fragmented and unique, causing significant development overhead or putting the onus on the user to provide context to the LLM.

The Architecture: How MCP Actually Works

MCP follows a client-server architecture with three main components:

  • MCP Hosts: Programs like Claude Desktop, IDEs like Cursor, or other AI tools that want to access data through MCP. The host typically manages a client that maintains a 1:1 connection with the server

  • MCP Servers: Lightweight programs that each expose specific capabilities and data through the standardized protocol. This allows the host to understand the capabilities of the server and call necessary tools when applicable

  • Data Sources: Databases, services, external systems available via APIs, processes running on your computer, or even local files. Each server will expose access to different data sources via "tools"

Basically, MCP gives you a standardized framework for creating the glue between a given system and an AI tool like the Claude app. It also establishes a communication conduit between those systems and AI tools, with structured and predictable message formatting. MCP servers help you pipe your data into an LLM's context, and they also allow the LLM to perform actions the server exposes. For example, it could automatically delete files older than 30 days from your Downloads folder if you ask the LLM to do that in natural language.
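
To make the "tools" idea concrete, here is a minimal, hypothetical sketch (not Anthropic's or GibsonAI's actual code) of what the Downloads-cleanup example might look like on the server side: a handler function plus the descriptor the server would advertise to clients. The tool name and schema details are invented for illustration; the `inputSchema` field name follows the MCP tool-listing format.

```python
import time
from pathlib import Path

# Descriptor a server might advertise for this tool (shape follows MCP's
# tool listing format; the name and schema here are made up for this example).
TOOL_DESCRIPTOR = {
    "name": "find_stale_downloads",
    "description": "List files older than a given number of days",
    "inputSchema": {
        "type": "object",
        "properties": {"max_age_days": {"type": "integer"}},
        "required": ["max_age_days"],
    },
}

def find_stale_files(directory: str, max_age_days: int = 30) -> list[str]:
    """Return paths of files whose modification time is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    return [
        str(p)
        for p in Path(directory).iterdir()
        if p.is_file() and p.stat().st_mtime < cutoff
    ]
```

A real server would wire this handler to the descriptor and let the LLM decide when to call it; deletion would then be a second, more carefully guarded tool.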

From a Programmer's Perspective: Why MCP Is Revolutionary

The true value of MCP becomes clear when viewed from a programmatic architecture perspective. Extending LLMs with tools in a standardized way isn't a new idea. Many providers like OpenAI and Anthropic have integrated tool use into their APIs, and GibsonAI uses multiple tools to build databases as well. However, developers are still responsible for managing the definition, handling, and execution of these tools. Furthermore, tool use implementation varies between providers, creating a fragmented ecosystem.
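
As an illustration of that fragmentation, the same web-search tool has to be described differently for different providers. The shapes below follow the OpenAI chat-completions and Anthropic Messages tool formats as commonly documented; treat the exact field names as approximations:

```python
# One JSON Schema for the tool's parameters...
params = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}

# ...but OpenAI's chat-completions format wraps it under a "function" key,
openai_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web",
        "parameters": params,
    },
}

# ...while Anthropic's Messages API uses a flat shape with "input_schema".
anthropic_tool = {
    "name": "web_search",
    "description": "Search the web",
    "input_schema": params,
}
```

Every provider you support means another variant of the same definition to maintain - exactly the kind of duplication a shared protocol removes.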

To address this inconsistency, packages like LangChain emerged to standardize tool definitions across providers. By offering Python classes that represent web search and other functionalities, developers can easily import and use them within their Python code. They can also write custom tools within this framework, combining different LLM providers and tools.

However, adding new tools still requires changes to the main application's code. While this may not be a significant issue for developers working on their own applications, it becomes a real problem when you want to add tools without touching the codebase, or when you don't have access to the application's code at all.

This is where MCP offers a solution: it provides a standard that goes beyond LangChain modules by defining the communication layer between disparate applications.
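
Concretely, MCP messages are JSON-RPC 2.0 exchanged over a transport such as stdio, so any host can talk to any server without sharing code. Below is a simplified sketch of a client-side `tools/call` request; the field layout follows the MCP specification, but treat the details as illustrative rather than a complete client:

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP-style tools/call request (MCP messages are JSON-RPC 2.0)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# A host would write this to the server's stdin and read the matching
# JSON-RPC response (correlated by "id") from the server's stdout.
request = make_tool_call(1, "fetch", {"url": "https://example.com"})
```

Because the wire format is fixed by the protocol, adding a new server never requires changes to the host application.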

Real-World Applications: What Can You Do With MCP?

Anthropic has already released sample MCP servers for numerous systems:

  • Filesystem - Secure file operations with configurable access controls

  • GitHub/GitLab - Repository management and API integration

  • Git - Tools to read, search, and manipulate Git repositories

  • Google Drive - File access and search capabilities

  • PostgreSQL/SQLite - Database access with schema inspection

  • Slack - Channel management and messaging capabilities

  • Memory - Knowledge graph-based persistent memory system

  • Puppeteer - Browser automation and web scraping

  • Brave Search - Web and local search using Brave's Search API

  • Google Maps - Location services, directions, and place details

  • Fetch - Web content fetching and conversion

Making Data Access Easier Everywhere

By feeding documentation and sample applications from the MCP GitHub repository, alongside an OpenAPI specification, into Claude, you can generate a fully functional MCP integration within moments - no extensive coding required.

This approach works with virtually any third-party service. For instance, providing Trello's API documentation allows Claude to build an MCP server that enables direct interaction with that service through the desktop application in a standardized manner.

GibsonAI is also creating an MCP client that will allow IDEs like Cursor to interact with your GibsonAI account to create and update projects, make schema modifications, and deploy and manage your cloud databases, all with natural language in the chat interface. This will also improve the context available to Cursor so it can write working code to interact with your hosted API and manage the data in your database. It has never been easier to integrate a complete backend with your frontend development.

Getting Started With MCP

According to Anthropic's documentation, developers can start building and testing MCP connectors today. All Claude.ai plans support connecting MCP servers to the Claude Desktop app. To begin:

  • Install pre-built MCP servers through the Claude Desktop app

  • Follow the quickstart guide to build your first MCP server

  • Contribute to open-source repositories of connectors and implementations

Developers can also build their own MCP servers using the Python SDK, or simply use one of the dozens already listed in the MCP GitHub repository. You can also find an extensive list of MCP servers published by Cursor.
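
The official SDK handles tool registration and dispatch for you; purely to illustrate the pattern, here is a stdlib-only sketch of the decorator-based registry such SDKs provide. The names `tool` and `call_tool` are invented for this example and are not the SDK's actual API:

```python
import inspect

# Registry mapping tool names to their handlers and metadata.
TOOLS: dict = {}

def tool(func):
    """Register a function as a callable tool, capturing its docstring and parameters."""
    TOOLS[func.__name__] = {
        "handler": func,
        "description": (func.__doc__ or "").strip(),
        "params": list(inspect.signature(func).parameters),
    }
    return func

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def call_tool(name: str, **arguments):
    """Dispatch a tool call by name, as a server would on a tools/call request."""
    return TOOLS[name]["handler"](**arguments)
```

An SDK builds on the same idea, additionally deriving a JSON Schema from the signature and wiring the registry to the protocol's `tools/list` and `tools/call` messages.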

Configuring MCP Servers in Cursor

Adding MCP servers to Cursor can drastically improve the quality of output Cursor provides, along with extending the capabilities of Cursor’s agents. To add an MCP server to Cursor, head over to `Cursor Settings` > `MCP` and click `Add new MCP server`.

Next, update the configuration (JSON) to include the new server’s start command. It should look something like this:

```json
{
  "mcpServers": {
    "gibson": {
      "command": "gibson",
      "args": ["mcp", "run"]
    }
  }
}
```
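
The same pattern works for other servers. For example, Anthropic's reference filesystem server is typically launched via `npx`; a config entry along these lines (the directory path is a placeholder you would replace) registers it the same way:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"]
    }
  }
}
```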

The Value of MCP Servers

All LLM systems run into context window limits at some point, effectively losing their medium-term memories. MCP represents a significant step toward AI systems that can file away information and retrieve it in a predictable, standardized way, extending their ability to remember by providing a storage and retrieval method.

In addition, MCP servers enable LLMs to take action on our behalf by accessing systems and functionality that were previously inaccessible. And who doesn’t want an AI that walks the walk instead of just talking the talk?

Get started free

Build your next database with the power of GibsonAI.