Large Language Models

MCP Tools

Published 2025-11-23.
Time to read: 3 minutes.

This page is part of the llm collection.

MCP tools can enhance the capabilities of LLMs. This article discusses a few MCP tools that are easy to set up for most LLMs.

mcp.json

This is the configuration file that I used with MiniMax Mini-Agent Review.

/home/mslinn/.mini-agent/config/mcp.json
{
  "mcpServers": {
    "google_search": {
      "description": "Google Custom Search - https://programmablesearchengine.google.com/controlpanel/all",
      "type": "stdio",
      "command": "python",
      "args": [
        "/mnt/f/work/llm/mcp/google_search/server.py"
      ],
      "env": {
        "GOOGLE_API_KEY":  "super-secret",
        "GOOGLE_SEARCH_ENGINE_ID": "1234567890"
      },
      "disabled": false
    },
    "memory": {
      "description": "Memory - Knowledge graph memory system (long-term memory based on graph database)",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ],
      "disabled": false
    }
  }
}
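
A malformed mcp.json can prevent servers from loading, so it is worth checking that the file parses before restarting the agent. The following sketch assumes the config path shown above; the helper name is my own:

```python
import json
from pathlib import Path

def check_mcp_config(path: Path) -> str:
    """Return a one-line status for an MCP configuration file."""
    try:
        config = json.loads(path.read_text())
    except FileNotFoundError:
        return f"Missing: {path}"
    except json.JSONDecodeError as e:
        return f"Invalid JSON in {path}: {e}"
    return f"OK: {len(config.get('mcpServers', {}))} server(s) configured"

# The path mini-agent reads, per the snippet above.
print(check_mcp_config(Path.home() / ".mini-agent" / "config" / "mcp.json"))
```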

Knowledge Graph Server

A knowledge graph MCP server transforms a general-purpose, stateless LLM into a knowledgeable, context-aware collaborator that can learn and evolve over time. This specialized service provides LLMs with a persistent, structured memory, effectively solving the problem of LLM amnesia across conversations and sessions.

This type of MCP server can act as a universal bridge between LLM clients (like Claude Desktop, Cursor IDE, and mini-agent) if they are configured to share the same knowledge graph database.

A shared knowledge graph MCP server allows an LLM that excels at a task to acquire knowledge that other LLMs can then utilize.

The primary function of a knowledge graph server is to manage information in a structured format, enabling the LLM to go beyond simple keyword matching and perform complex, contextual reasoning. It operates using three core components:

  1. Entities represent real-world objects, such as people, projects, and concepts.
  2. Relations define relationships between entities.
  3. Observations are specific facts or details (attributes) associated with an entity.

Agents can use the knowledge graph server to manage and retrieve this structured knowledge:

  • Information is stored in a database (such as SQLite, PostgreSQL, or a local JSON file), so the LLM remembers details across chat sessions and application restarts, as well as information acquired by other LLMs that share the knowledge graph server.
  • The structured nature of the graph allows the LLM to answer complex, multi-hop questions such as, "Who works on projects managed by Fred that use Python?"
  • It enables the LLM to perform a full range of data operations, including creating, reading, updating, and deleting entities and relationships.
  • LLM-powered code editors could use a knowledge graph server to remember project structure, user preferences, and code changes across sessions.
  • Grounding the LLM's responses in a structured knowledge base significantly reduces the likelihood of the model generating incorrect or fabricated information.
  • It allows for compartmentalizing knowledge into different projects, ensuring the LLM does not confuse details of different projects.
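
To make the entity/relation/observation model concrete, here is roughly the shape of payloads an agent might send to a memory server's create_entities and create_relations tools. The entity names and observations are invented for illustration; the exact schema should be confirmed against the server's own tool listing:

```python
import json

# Hypothetical payload for a create_entities tool call.
# Each entity has a name, a type, and a list of observations (facts).
payload = {
    "entities": [
        {
            "name": "Fred",
            "entityType": "person",
            "observations": ["Manages the billing project"],
        },
        {
            "name": "billing",
            "entityType": "project",
            "observations": ["Written in Python"],
        },
    ]
}

# A relation links two entities by name with a typed, directed edge,
# enabling multi-hop queries such as "projects managed by Fred".
relation = {"relations": [{"from": "Fred", "to": "billing", "relationType": "manages"}]}

print(json.dumps(payload, indent=2))
```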

Helpful For Most Agentic LLMs

A knowledge graph MCP server would not help every LLM. It is primarily designed to enhance LLM agents that are built with the capability to use external tools and reason over structured data. General, consumer-facing models used in simple chat interfaces (like a basic ChatGPT conversation) would not automatically benefit without specific integration to call the server's API.

LLMs that would benefit are those designed to act as agents or operate within development environments, where managing complex, evolving data and maintaining long-term memory are crucial functions. Here are examples of models that would significantly benefit:

  1. OpenAI models such as GPT-4o, GPT-4, and GPT-3.5.
  2. Anthropic models such as the Claude 3 family and Claude 2 (for example, via Claude Desktop).
  3. Google Gemini models such as Gemini Pro and Gemini Ultra.
  4. Mistral models such as Mistral 7B and Mixtral 8x7B.
  5. Perplexity models.
  6. Llama models such as Llama 3 and Llama 2.
  7. Cursor IDE's integrated LLM.
  8. LLM models in agent frameworks such as LangChain and LlamaIndex.

These models can be instructed to call the server’s API functions to proactively retrieve and store specific, structured information as needed, rather than relying solely on their limited initial prompt context.

@modelcontextprotocol/server-memory

The @modelcontextprotocol/server-memory knowledge graph server is an open-source, self-contained server that works entirely locally and does not rely on external services, so no key is required.

Mini-agent is pre-configured with this MCP server. All I had to do was enable it in $HOME/.mini-agent/config/mcp.json as shown above.

After installation and configuration, the next time mini-agent started, it displayed:

Loading MCP tools...
2025-11-23 10:06:36,032 - google_search_mcp - INFO - Google Search MCP Server initialized
2025-11-23 10:06:36,034 - mcp.server.lowlevel.server - INFO - Processing request of type ListToolsRequest
✓ Connected to MCP server 'google_search' - loaded 1 tools
  - google_search: Search the web using Google Custom Search API. Returns searc...
Knowledge Graph MCP Server running on stdio
✓ Connected to MCP server 'memory' - loaded 9 tools
  - create_entities: Create multiple new entities in the knowledge graph...
  - create_relations: Create multiple new relations between entities in the knowle...
  - add_observations: Add new observations to existing entities in the knowledge g...
  - delete_entities: Delete multiple entities and their associated relations from...
  - delete_observations: Delete specific observations from entities in the knowledge ...
  - delete_relations: Delete multiple relations from the knowledge graph...
  - read_graph: Read the entire knowledge graph...
  - search_nodes: Search for nodes in the knowledge graph based on a query...
  - open_nodes: Open specific nodes in the knowledge graph by their names...

Web Search

This MCP server uses the Google Custom Search API, which has a free tier. Get a named key here. When you press the copy icon, the key is placed on the system clipboard. Note that the key is padded with an extra space before and after, so you will need to remove those spaces after pasting it into $HOME/.mini-agent/config/mcp.json.
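
Because that stray padding silently breaks authentication, a small check can catch it after pasting. This is a sketch; the function name is my own, and it operates on the mcpServers/env structure shown earlier:

```python
def find_padded_env_values(config: dict) -> list[str]:
    """Return the env vars in an mcp.json structure whose values
    carry leading or trailing whitespace."""
    problems = []
    for server_name, server in config.get("mcpServers", {}).items():
        for var, value in server.get("env", {}).items():
            if value != value.strip():
                problems.append(f"{server_name}: {var}")
    return problems

# Example config with a key pasted straight from the clipboard.
example = {
    "mcpServers": {
        "google_search": {"env": {"GOOGLE_API_KEY": " super-secret "}}
    }
}
print(find_padded_env_values(example))
```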

server.py
"""Google Custom Search MCP Server"""

import os
import sys
import json
import logging
from typing import List, Any
import requests

try:
    from mcp.server import NotificationOptions, Server
    from mcp.server.models import InitializationOptions
    from mcp.server.stdio import stdio_server
    from mcp.types import Tool, CallToolResult, TextContent
    import anyio
except ImportError as e:
    print(f"Missing dependencies: {e}")
    print("Install with: pip install mcp requests anyio")
    sys.exit(1)

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
    stream=sys.stderr,
)
logger = logging.getLogger("google_search_mcp")

class GoogleSearchMCPServer:
    """Google Custom Search MCP Server"""
    
    def __init__(self):
        self.app = Server("google_search_mcp")
        self._setup_tools()
        logger.info("Google Search MCP Server initialized")
    
    def _setup_tools(self):
        """Setup MCP tools"""
        
        @self.app.list_tools()
        async def handle_list_tools() -> List[Tool]:
            """List available tools"""
            return [
                Tool(
                    name="google_search",
                    description="Search the web using Google Custom Search API. Returns search results with titles, snippets, and URLs.",
                    inputSchema={
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "Search query string"
                            },
                            "num_results": {
                                "type": "integer", 
                                "description": "Number of results to return (1-10, default: 5)",
                                "default": 5
                            }
                        },
                        "required": ["query"]
                    }
                )
            ]
        
        @self.app.call_tool()
        async def handle_call_tool(name: str, arguments: dict) -> CallToolResult:
            """Handle tool calls"""
            if name == "google_search":
                return await self._google_search(arguments)
            else:
                return CallToolResult(
                    content=[TextContent(type="text", text=f"Unknown tool: {name}")],
                    isError=True
                )
    
    async def _google_search(self, arguments: dict) -> CallToolResult:
        """Perform Google search"""
        query = arguments.get("query", "")
        num_results = arguments.get("num_results", 5)
        
        if not query:
            return CallToolResult(
                content=[TextContent(type="text", text="Query is required")],
                isError=True
            )
        
        # Get API credentials from environment
        api_key = os.environ.get("GOOGLE_API_KEY")
        search_engine_id = os.environ.get("GOOGLE_SEARCH_ENGINE_ID")
        
        if not api_key or not search_engine_id:
            return CallToolResult(
                content=[TextContent(type="text", text="GOOGLE_API_KEY and GOOGLE_SEARCH_ENGINE_ID environment variables are required")],
                isError=True
            )
        
        try:
            # Perform Google Custom Search
            url = "https://www.googleapis.com/customsearch/v1"
            params = {
                "key": api_key,
                "cx": search_engine_id,
                "q": query,
                "num": min(num_results, 10)  # Max 10 results
            }
            
            response = requests.get(url, params=params, timeout=30)
            response.raise_for_status()
            
            data = response.json()
            
            # Format results
            results = []
            if "items" in data:
                for item in data["items"]:
                    results.append({
                        "title": item.get("title", ""),
                        "url": item.get("link", ""),
                        "snippet": item.get("snippet", ""),
                        "displayLink": item.get("displayLink", "")
                    })
            
            if not results:
                content = f"No results found for query: {query}"
            else:
                content = f"--- Google Search Results for [{query}] ---\n"
                for i, result in enumerate(results, 1):
                    content += f"\n{i}. {result['title']}\n"
                    content += f"   {result['snippet']}\n"
                    content += f"   URL: {result['url']}\n"
                content += "\n--- End of Results ---"
            
            return CallToolResult(
                content=[TextContent(type="text", text=content)]
            )
            
        except requests.exceptions.RequestException as e:
            logger.error(f"Search error: {e}")
            return CallToolResult(
                content=[TextContent(type="text", text=f"Search failed: {str(e)}")],
                isError=True
            )
        except Exception as e:
            logger.error(f"Unexpected error: {e}")
            return CallToolResult(
                content=[TextContent(type="text", text=f"Unexpected error: {str(e)}")],
                isError=True
            )
    
    async def run(self):
        """Run the MCP server"""
        async with stdio_server() as (read_stream, write_stream):
            await self.app.run(
                read_stream,
                write_stream,
                InitializationOptions(
                    server_name="google_search_mcp",
                    server_version="1.0.0",
                    capabilities=self.app.get_capabilities(
                        notification_options=NotificationOptions(),
                        experimental_capabilities={},
                    ),
                ),
            )

async def main():
    """Main entry point"""
    try:
        server = GoogleSearchMCPServer()
        await server.run()
    except Exception as e:
        logger.error(f"Failed to start server: {e}", exc_info=True)
        sys.exit(1)

if __name__ == "__main__":
    anyio.run(main)

Normally you would not call this MCP server directly, because it is designed to be a tool that the LLM uses. However, you could test it by typing a request in a chat as follows:

search for 'peace'
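
The result-formatting logic inside _google_search can also be exercised on canned data, with no credentials or network access. This sketch extracts that loop into a standalone function:

```python
def format_results(query: str, results: list[dict]) -> str:
    """Format search results the same way server.py's _google_search does."""
    if not results:
        return f"No results found for query: {query}"
    content = f"--- Google Search Results for [{query}] ---\n"
    for i, result in enumerate(results, 1):
        content += f"\n{i}. {result['title']}\n"
        content += f"   {result['snippet']}\n"
        content += f"   URL: {result['url']}\n"
    content += "\n--- End of Results ---"
    return content

# Canned data standing in for the Custom Search API response.
sample = [
    {"title": "Peace", "snippet": "A state of tranquility.", "url": "https://example.com"}
]
print(format_results("peace", sample))
```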
