Engineering

Building AI-Native Applications with the Model Context Protocol

A developer-focused guide to the Model Context Protocol (MCP). Learn how to connect Claude Desktop and Cursor to an MCP server, build custom tools, and ship AI-native features with governed tool access.

By ThinkNEO Editorial · Published April 18, 2026
What Is MCP, and Why Should Developers Care?

The Model Context Protocol (MCP) is an open standard created by Anthropic that defines how AI models interact with external tools and data sources. If you have built AI features before, you have likely dealt with the pain of custom function calling, bespoke tool schemas, and fragile integrations that break when you switch models.

MCP solves this by providing a universal interface: any MCP-compatible client (Claude Desktop, Cursor, VS Code, or your own application) can discover and invoke tools on any MCP server. Write your tool once, and every client can use it. Change your AI model, and your tools still work.

This guide walks through practical MCP development: connecting to an existing MCP server, understanding the protocol, building custom tools, and shipping AI-native features in production.

Connecting Claude Desktop to an MCP Server

The fastest way to experience MCP is to connect Claude Desktop to an existing server. Here is how to do it in under five minutes.

Step 1: Locate Your Configuration File

Claude Desktop reads MCP server configurations from a JSON file. The location depends on your operating system:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

If the file does not exist, create it. If it exists, you will add to the mcpServers object.

Step 2: Add the MCP Server Configuration

Add a server entry to your configuration. Here is an example connecting to the ThinkNEO MCP server, which provides free security scanning tools:

{
  "mcpServers": {
    "thinkneo": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.thinkneo.ai/mcp"
      ]
    }
  }
}

This configuration uses the mcp-remote package to connect to a remote MCP server over HTTPS. The -y flag tells npx to download and run the package automatically if it is not already installed.

Step 3: Restart Claude Desktop

After saving the configuration file, restart Claude Desktop completely (quit and reopen, not just close the window). When it restarts, you should see a small tools icon indicating that MCP tools are available.

Step 4: Test the Connection

Ask Claude to use one of the available tools. For example:

“Can you scan this text for PII: My email is john.doe@example.com and my SSN is 123-45-6789”

Claude will invoke the check_pii_international tool and return a structured detection report showing the email address and SSN matches, their positions in the text, and the applicable jurisdiction (CCPA for SSN, general for email).

Connecting Cursor to an MCP Server

Cursor, the AI-powered code editor, also supports MCP natively. The configuration follows a similar pattern but uses Cursor’s own config location.

Step 1: Open Cursor Settings

Navigate to Cursor Settings > MCP (or press Cmd+Shift+P / Ctrl+Shift+P and search for “MCP”). You can also directly edit the configuration file:

  • Project-level: .cursor/mcp.json in your project root (recommended for team settings)
  • Global: ~/.cursor/mcp.json for personal tools

Step 2: Add the Server Configuration

The format is identical to Claude Desktop:

{
  "mcpServers": {
    "thinkneo": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote",
        "https://mcp.thinkneo.ai/mcp"
      ]
    }
  }
}

Step 3: Use MCP Tools in Your Workflow

With the server connected, you can ask Cursor’s AI to use MCP tools directly in your coding workflow. For example:

  • “Scan my .env file for leaked secrets” — invokes scan_secrets
  • “Check if this API response contains PII” — invokes check_pii_international
  • “Is this user input safe from prompt injection?” — invokes detect_injection

These tools run against the live MCP server, so results reflect the latest detection rules and patterns.

Understanding the MCP Protocol

Before building your own tools, it helps to understand what happens under the hood when an MCP client invokes a tool.

The Communication Flow

MCP uses JSON-RPC 2.0 over a transport layer (typically stdio for local servers, or HTTP/SSE for remote servers). The flow is:

  1. Discovery: The client sends a tools/list request. The server responds with an array of available tools, each with a name, description, and input schema.
  2. Invocation: When the AI model decides to use a tool, the client sends a tools/call request with the tool name and arguments.
  3. Execution: The server validates the arguments against the schema, executes the tool logic, and returns a structured response.
  4. Rendering: The client presents the tool response to the AI model, which incorporates it into its reasoning.
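On the wire, the discovery step is an ordinary JSON-RPC 2.0 request (the `id` values and tool name below are illustrative; exact result payloads vary by server):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

The server answers with its tool catalog, and a subsequent invocation names one of those tools with arguments matching its input schema:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "check_dmarc",
    "arguments": { "domain": "example.com" }
  }
}
```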

Tool Schema Anatomy

Every MCP tool is defined by three elements:

{
  "name": "check_pii_international",
  "description": "Scan text for personally identifiable information across international jurisdictions including GDPR, LGPD, CCPA, and PDPA.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "text": {
        "type": "string",
        "description": "The text content to scan for PII patterns"
      },
      "jurisdictions": {
        "type": "array",
        "items": { "type": "string" },
        "description": "Optional list of jurisdictions to check. Defaults to all supported."
      }
    },
    "required": ["text"]
  }
}

The description field is critical: it is what the AI model reads to decide when to use the tool. Write descriptions that are precise about what the tool does, what inputs it expects, and what outputs it returns. Vague descriptions lead to incorrect tool selection.

Building Your Own MCP Tools

The MCP ecosystem provides SDKs in TypeScript and Python for building custom tools. Here is a practical example: building a tool that checks if a given domain has valid DMARC email authentication.

TypeScript Example: DMARC Checker Tool

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import dns from "node:dns/promises";

const server = new McpServer({
  name: "email-security-tools",
  version: "1.0.0",
});

server.tool(
  "check_dmarc",
  "Check if a domain has a valid DMARC record configured for email authentication.",
  {
    domain: z.string().describe("The domain to check, e.g. example.com"),
  },
  async ({ domain }) => {
    try {
      const records = await dns.resolveTxt(`_dmarc.${domain}`);
      const dmarcRecord = records
        // resolveTxt returns each TXT record as an array of string chunks;
        // join the chunks so a DMARC record split across chunks still matches
        .map((chunks) => chunks.join(""))
        .find((r) => r.startsWith("v=DMARC1"));

      if (!dmarcRecord) {
        return {
          content: [
            {
              type: "text",
              text: JSON.stringify({
                domain,
                has_dmarc: false,
                recommendation:
                  "No DMARC record found. This domain is vulnerable to email spoofing.",
              }),
            },
          ],
        };
      }

      const policy = dmarcRecord.match(/p=(\w+)/)?.[1] || "none";
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({
              domain,
              has_dmarc: true,
              policy,
              record: dmarcRecord,
              is_enforcing: policy === "reject" || policy === "quarantine",
            }),
          },
        ],
      };
    } catch (error) {
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify({
              domain,
              has_dmarc: false,
              error: "DNS lookup failed",
            }),
          },
        ],
        isError: true,
      };
    }
  }
);

const transport = new StdioServerTransport();
await server.connect(transport);

Key Implementation Patterns

When building MCP tools for production, follow these patterns:

  1. Return structured JSON in text content. While MCP supports multiple content types, JSON in text fields is the most reliable across clients. Wrap your response data in JSON.stringify().
  2. Use Zod schemas for input validation. The TypeScript SDK integrates with Zod, giving you runtime validation that matches the schema advertised to clients.
  3. Handle errors gracefully. Set isError: true in your response when something goes wrong. Include enough context in the error message for the AI model to decide whether to retry, try a different approach, or inform the user.
  4. Keep tool descriptions precise. The AI model uses the description to decide when to call your tool. Include what the tool does, what inputs it needs, what it returns, and any limitations. Avoid marketing language.
  5. Minimize latency. MCP tool calls happen during the AI model’s reasoning loop. Every 100ms of tool latency is felt by the end user. Cache aggressively, use connection pooling, and prefer async I/O.
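Patterns 3 and 5 combine naturally: a slow external call should degrade into a structured error rather than stalling the model's reasoning loop. Here is a minimal sketch of a timeout wrapper around a tool handler; the response shape follows the MCP convention used in the DMARC example above, and the timeout budget is an illustrative default, not a protocol requirement.

```typescript
// A tool result in the shape used by the examples above.
type ToolResult = { content: { type: "text"; text: string }[]; isError?: boolean };

// Race the handler against a timer; on timeout, return a structured
// error the model can act on instead of hanging the conversation.
async function withTimeout(
  run: () => Promise<ToolResult>,
  ms: number
): Promise<ToolResult> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<ToolResult>((resolve) => {
    timer = setTimeout(() => {
      resolve({
        content: [
          { type: "text", text: JSON.stringify({ error: `timed out after ${ms}ms` }) },
        ],
        isError: true,
      });
    }, ms);
  });
  try {
    return await Promise.race([run(), timeout]);
  } finally {
    clearTimeout(timer!);
  }
}
```

Wrapping each handler this way keeps a single misbehaving dependency from freezing every session that touches it.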

From Prototype to Production: Governance Considerations

Building tools is the easy part. Running them in production requires governance infrastructure that most tutorials skip.

Authentication and Authorization

In a team environment, not every user should have access to every tool. Your MCP server should support:

  • API key authentication: Each team or project gets its own API key with usage tracking and rate limits.
  • Tool-level permissions: Administrative tools (like database writes or deployment triggers) should require elevated permissions.
  • Audit logging: Every tool invocation should be logged with the caller identity, arguments, response, and timestamp.
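A sketch of the tool-level permission check, run before any tool is dispatched. The in-memory registry, key names, and tool names here are illustrative; a real deployment backs this with a database and stores hashed keys.

```typescript
// Each API key maps to a workspace and an allow-list of tools.
interface KeyRecord {
  workspace: string;
  allowedTools: Set<string>;
}

// Illustrative registry; production systems load this from storage.
const registry = new Map<string, KeyRecord>([
  ["key-dev-team", { workspace: "dev", allowedTools: new Set(["check_dmarc"]) }],
]);

// Reject unknown keys and keys not authorized for the requested tool.
function authorize(apiKey: string, tool: string): KeyRecord {
  const record = registry.get(apiKey);
  if (!record) throw new Error("unknown API key");
  if (!record.allowedTools.has(tool)) {
    throw new Error(`key not authorized for tool: ${tool}`);
  }
  return record;
}
```

The returned record is also what you attach to the audit log entry, so every invocation carries its caller identity.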

Rate Limiting and Cost Control

MCP tools that call external APIs (OpenAI, cloud services, databases) incur costs. Without rate limiting, a single runaway agent can burn through your monthly budget in hours. Implement:

  • Per-key rate limits (requests per minute and per day)
  • Per-tool rate limits (some tools are more expensive than others)
  • Budget alerts that fire before limits are hit
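A per-key limit can be as simple as a fixed-window counter. The sketch below is single-process and in-memory; a production server would back this with a shared store such as Redis so limits hold across instances, and the limit and window values are illustrative.

```typescript
// Fixed-window rate limiter: at most `limit` calls per key per window.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the call is allowed; `now` is injectable for testing.
  allow(key: string, now = Date.now()): boolean {
    const entry = this.counts.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New key or expired window: start a fresh window.
      this.counts.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false;
    entry.count += 1;
    return true;
  }
}
```

Layer one limiter per key and another per tool to cover both budget axes described above.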

Security Scanning on Tool Inputs and Outputs

Even in a governed pipeline, tools can receive malicious inputs and return sensitive data. Run security scanning as middleware:

  • Input scanning: Check for prompt injection and embedded secrets before executing the tool
  • Output scanning: Check for PII and sensitive data before returning results to the model

This is where tools like ThinkNEO’s scan_secrets, detect_injection, and check_pii_international become infrastructure rather than standalone utilities. In a production MCP server, they run as middleware on every tool call.
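The middleware shape is a scan-before/scan-after wrapper around the tool handler. In this sketch, detectInjection and detectPii are crude regex stand-ins for real scanners such as the ThinkNEO tools named above, included only so the control flow is concrete:

```typescript
type Scanner = (text: string) => boolean;

// Stand-in scanners for illustration; real middleware calls dedicated
// detection services with far broader rule sets.
const detectInjection: Scanner = (text) =>
  /ignore (all )?previous instructions/i.test(text);
const detectPii: Scanner = (text) => /\b\d{3}-\d{2}-\d{4}\b/.test(text); // SSN-like

// Scan the input before execution and the output before it returns
// to the model; block on either hit.
async function guardedCall(
  input: string,
  tool: (input: string) => Promise<string>
): Promise<string> {
  if (detectInjection(input)) {
    throw new Error("blocked: possible prompt injection in tool input");
  }
  const output = await tool(input);
  if (detectPii(output)) {
    throw new Error("blocked: PII detected in tool output");
  }
  return output;
}
```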

Real-World Architecture: How ThinkNEO Runs MCP in Production

ThinkNEO’s MCP server runs as a Docker container behind an nginx reverse proxy with TLS termination. The architecture handles the full lifecycle:

  1. Client connects via HTTPS to mcp.thinkneo.ai. The connection is upgraded to Server-Sent Events (SSE) for streaming responses.
  2. Authentication is validated against the API key registry. Each key has associated rate limits, allowed tools, and workspace bindings.
  3. Tool discovery returns only the tools that the authenticated key is authorized to access.
  4. Tool invocation passes through the security middleware stack (injection detection, secret scanning, PII detection) before reaching the tool implementation.
  5. Response passes through output scanning before being returned to the client.
  6. Audit log records the full invocation with timing, caller, arguments hash, and response status.

This architecture supports 22 tools serving multiple AI clients simultaneously, with p99 latency under 200ms for the security scanning tools.

Common Pitfalls in MCP Development

  1. Overly broad tool descriptions. A tool described as “helps with security” will be invoked for everything remotely security-related. Be specific: “Scans text for PII patterns across GDPR, LGPD, CCPA, and PDPA jurisdictions.”
  2. Missing error handling. If your tool throws an unhandled exception, the AI model receives no useful information. Always return a structured error response with actionable context.
  3. Synchronous external calls. MCP tool calls block the AI model’s reasoning loop. If your tool makes a slow API call, the entire conversation pauses. Use timeouts and consider returning partial results for long-running operations.
  4. Ignoring schema evolution. As you update your tools, client caches may hold stale schemas. Version your tools and handle calls with outdated parameters gracefully.
  5. No rate limiting on day one. It is tempting to skip rate limiting during development. Do not. A single automated agent can send thousands of tool calls per minute, and discovering this on your cloud bill is unpleasant.

Frequently Asked Questions

Do I need a server to use MCP, or can I run tools locally?

MCP supports both local and remote servers. For development, you can run an MCP server locally using stdio transport (no network required). For team use, deploy a remote server with HTTP/SSE transport so everyone connects to the same tools and governance layer.

Which programming languages support MCP server development?

Official SDKs exist for TypeScript and Python. Community SDKs are available for Go, Rust, and Java. The protocol is JSON-RPC based, so you can implement a server in any language that can read and write JSON over stdio or HTTP.
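Because the protocol is plain JSON-RPC, a minimal server reduces to a message handler. The sketch below answers only tools/list for a single hypothetical echo tool; a real server also implements initialize, tools/call, and the error codes defined by the spec.

```typescript
// Shape of an incoming JSON-RPC 2.0 request.
interface RpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: unknown;
}

// Dispatch a single message; transports (stdio or HTTP) just feed
// parsed JSON into this and serialize the return value.
function handleMessage(req: RpcRequest) {
  if (req.method === "tools/list") {
    return {
      jsonrpc: "2.0",
      id: req.id,
      result: {
        tools: [
          {
            name: "echo",
            description: "Return the input text unchanged.",
            inputSchema: {
              type: "object",
              properties: { text: { type: "string" } },
              required: ["text"],
            },
          },
        ],
      },
    };
  }
  // Standard JSON-RPC "method not found" error.
  return {
    jsonrpc: "2.0",
    id: req.id,
    error: { code: -32601, message: `method not found: ${req.method}` },
  };
}
```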

Can I connect multiple MCP servers to one client?

Yes. Both Claude Desktop and Cursor support multiple server configurations simultaneously. Each server appears as a separate tool namespace. This lets you combine general-purpose tools (like security scanning) with domain-specific tools (like your internal database queries) in the same session.

Next Step

Start by connecting Claude Desktop or Cursor to the ThinkNEO MCP server using the configuration examples above. Explore the available tools by asking the AI to list them. Then build your first custom tool using the TypeScript or Python SDK. The MCP server documentation includes additional examples and the full tool catalog.