MCP Cursor

Enhance your development workflow with AI-powered MCP tools and extensions for Cursor IDE.

© 2026 MCP Cursor. All rights reserved.

Inked (Writing Assistant) MCP

Model Context Protocol Integration

Overview

Provides tools for managing and generating long-form content through conversation-based drafting and revision workflows, enabling efficient development of novels, reports, and other extensive documents.


Installation Instructions


README: https://github.com/frgmt0/mcp-inked

Inked

A powerful MCP server for memory management with Claude apps. Fast, simple, and optionally enhanced with AI-powered search.

Features

  • Fast text search - Lightning-fast memory retrieval by default
  • AI-powered search - Optional embedding-based semantic search (not working as of 2025-06-25)
  • AI reranking - Experimental reranking for even better results
  • Simple storage - Plain text storage in SQLite (no encryption overhead)
  • Secure - All data stored locally in ~/.inked/

Installation

Option 1: Install from npm (Recommended)

npm install -g @frgmt/inked

Option 2: Local Development

git clone https://github.com/frgmt/inked.git
cd inked
npm install
npm run build
node dist/index.js
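
To point an MCP client at a local development build, the server configuration can reference the built entry point directly. This is a minimal sketch following the same mcpServers shape used below; the path is a placeholder for wherever you cloned the repository:

```json
{
  "mcpServers": {
    "inked": {
      "command": "node",
      "args": ["/absolute/path/to/inked/dist/index.js"]
    }
  }
}
```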

Basic Usage

Add to your MCP server configuration:

Standard (fast text search):

{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked"]
    }
  }
}

^ Use this configuration. The embedding-based configurations below do not work yet; some kinks still need to be worked out.

============IGNORE THIS===========

With AI embeddings (semantic search):

{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings"]
    }
  }
}

With embeddings + AI reranking (best results):

{
  "mcpServers": {
    "inked": {
      "command": "npx",
      "args": ["@frgmt/inked", "--use-embeddings", "--use-reranking"]
    }
  }
}

Experimental Features

AI-Powered Search (Optional)

Inked supports experimental embedding-based search for more nuanced memory retrieval.

Embedding Models

Flag                | Model      | Memory Usage | Best For
--use-embeddings    | Qwen3-0.6B | ~2GB RAM     | Short memories, quick responses
--use-embeddings=4b | Qwen3-4B   | ~8GB RAM     | Longer memories, better nuance
--use-embeddings=8b | Qwen3-8B   | ~16GB RAM    | Complex memories, documents

Reranking Models (Requires embeddings)

Flag               | Model               | Additional Memory | Best For
--use-reranking    | Qwen3-Reranker-0.6B | ~1GB RAM          | Improved relevance
--use-reranking=4b | Qwen3-Reranker-4B   | ~4GB RAM          | Best result quality

How to Choose Models

For most users: Start with no flags (fast text search)

For better semantic understanding: Add --use-embeddings

  • Good for finding memories by meaning rather than exact words
  • First run downloads ~2GB model (one-time)

For nuanced, longer memories: Use --use-embeddings=4b

  • Better at understanding context in longer text
  • Handles more complex relationships between ideas

For best results: Add --use-reranking with embeddings

  • AI re-scores top candidates for optimal ranking
  • Significantly improves search quality

For power users: --use-embeddings=8b --use-reranking=4b

  • Best possible search quality
  • Requires 20+ GB RAM
  • Good for research, documentation, complex projects

Memory Requirements

Configuration    | RAM Needed | Download Size | First Launch
Default (text)   | ~50MB      | 0MB           | Instant
Basic embeddings | ~2GB       | ~1.2GB        | 2-5 minutes
4B embeddings    | ~8GB       | ~4GB          | 5-10 minutes
8B embeddings    | ~16GB      | ~8GB          | 10-20 minutes
+ Reranking      | +1-4GB     | +0.5-2GB      | +1-3 minutes

Models are cached locally and only downloaded once.

=============END IGNORE===========

Usage Guide

Auto-Memory Setup

Add this to your Claude settings/preferences:

"At the start of new conversations, use the inked Read tool with 'ALL' to load my memories. Only mention memories when directly relevant to our conversation. Use the Write tool to save important preferences, facts, or insights that should be remembered for future conversations."

How It Works

  • Read once per conversation: Memories stay in context after initial load
  • Silent operation: Claude uses memories without mentioning them unless relevant
  • Smart writing: Automatically saves important information for future sessions

When to Write Memories

  • User preferences and communication style
  • Important project information and context
  • Recurring topics or themes
  • Facts that should persist across conversations
  • Insights or patterns worth remembering

Tools

read

Search and retrieve memories.

Parameters:

  • search (required): Query string or "ALL" for everything
  • topr (optional): Number of results (1-5, default: 3)
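
Assuming the standard MCP tools/call request shape from the Model Context Protocol specification, a client invocation of read might look like the sketch below. The query string is illustrative; the arguments mirror the parameters above:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read",
    "arguments": {
      "search": "user preferences",
      "topr": 3
    }
  }
}
```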

write

Add or delete memories.

Parameters:

  • content (required): Memory text (NEW) or search query (DELETE)
  • sTool (required): "NEW" or "DELETE"
  • id (optional): Specific ID to delete
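
A corresponding write call, again sketched against the standard MCP tools/call request shape with an illustrative memory string, would save a new memory like so:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "write",
    "arguments": {
      "sTool": "NEW",
      "content": "Prefers concise answers with code examples"
    }
  }
}
```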

License

AGPL v3 - Open source for personal use. Commercial use requires either open-sourcing your application or a commercial license.
