MCP Cursor

Enhance your development workflow with AI-powered MCP tools and extensions for Cursor IDE.

© 2026 MCP Cursor. All rights reserved.

Firecrawl MCP

Model Context Protocol Integration

Overview

Integrates with the Firecrawl API to enable web scraping and intelligent content search, extracting structured data and supporting customizable search queries.

Installation Instructions


README: https://github.com/Msparihar/mcp-server-firecrawl

Firecrawl MCP Server

A Model Context Protocol (MCP) server for web scraping, content searching, site crawling, and data extraction using the Firecrawl API.

Features

  • Web Scraping: Extract content from any webpage with customizable options

    • Mobile device emulation
    • Ad and popup blocking
    • Content filtering
    • Structured data extraction
    • Multiple output formats
  • Content Search: Intelligent search capabilities

    • Multi-language support
    • Location-based results
    • Customizable result limits
    • Structured output formats
  • Site Crawling: Advanced web crawling functionality

    • Depth control
    • Path filtering
    • Rate limiting
    • Progress tracking
    • Sitemap integration
  • Site Mapping: Generate site structure maps

    • Subdomain support
    • Search filtering
    • Link analysis
    • Visual hierarchy
  • Data Extraction: Extract structured data from multiple URLs

    • Schema validation
    • Batch processing
    • Web search enrichment
    • Custom extraction prompts

Installation

# Global installation
npm install -g @modelcontextprotocol/mcp-server-firecrawl

# Local project installation
npm install @modelcontextprotocol/mcp-server-firecrawl

Quick Start

  1. Get your Firecrawl API key from the developer portal

  2. Set your API key:

    Unix/Linux/macOS (bash/zsh):

    export FIRECRAWL_API_KEY=your-api-key
    

    Windows (Command Prompt):

    set FIRECRAWL_API_KEY=your-api-key
    

    Windows (PowerShell):

    $env:FIRECRAWL_API_KEY = "your-api-key"
    

    Alternative: Using .env file (recommended for development):

    # Install dotenv
    npm install dotenv
    
    # Create .env file
    echo "FIRECRAWL_API_KEY=your-api-key" > .env
    

    Then in your code:

    import dotenv from 'dotenv';
    dotenv.config();
    
  3. Run the server:

    mcp-server-firecrawl
    

Integration

Claude Desktop App

Add to your MCP settings:

{
  "firecrawl": {
    "command": "mcp-server-firecrawl",
    "env": {
      "FIRECRAWL_API_KEY": "your-api-key"
    }
  }
}

Claude VSCode Extension

Add to your MCP configuration:

{
  "mcpServers": {
    "firecrawl": {
      "command": "mcp-server-firecrawl",
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key"
      }
    }
  }
}
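
Cursor IDE

Cursor reads a similar configuration. Assuming a standard Cursor setup, the same server entry can go in `.cursor/mcp.json` in your project (or `~/.cursor/mcp.json` globally); the layout mirrors the VSCode example above:

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "mcp-server-firecrawl",
      "env": {
        "FIRECRAWL_API_KEY": "your-api-key"
      }
    }
  }
}
```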

Usage Examples

Web Scraping

// Basic scraping
{
  name: "scrape_url",
  arguments: {
    url: "https://example.com",
    formats: ["markdown"],
    onlyMainContent: true
  }
}

// Advanced extraction
{
  name: "scrape_url",
  arguments: {
    url: "https://example.com/blog",
    jsonOptions: {
      prompt: "Extract article content",
      schema: {
        title: "string",
        content: "string"
      }
    },
    mobile: true,
    blockAds: true
  }
}
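
    Before issuing a scrape_url call, it can help to validate the arguments client-side. A minimal sketch using only the argument names shown above; `buildScrapeArgs` is an illustrative helper, not part of the server API:

    ```javascript
    // Build a scrape_url argument object, rejecting obviously invalid input.
    // `buildScrapeArgs` is an illustrative helper, not part of the server API.
    function buildScrapeArgs(url, options = {}) {
      new URL(url); // throws TypeError on malformed URLs
      return {
        url,
        formats: options.formats ?? ["markdown"], // default to markdown output
        onlyMainContent: options.onlyMainContent ?? true,
        ...(options.mobile !== undefined && { mobile: options.mobile }),
        ...(options.blockAds !== undefined && { blockAds: options.blockAds }),
      };
    }
    ```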

Site Crawling

// Basic crawling
{
  name: "crawl",
  arguments: {
    url: "https://example.com",
    maxDepth: 2,
    limit: 100
  }
}

// Advanced crawling
{
  name: "crawl",
  arguments: {
    url: "https://example.com",
    maxDepth: 3,
    includePaths: ["/blog", "/products"],
    excludePaths: ["/admin"],
    ignoreQueryParameters: true
  }
}
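
    The includePaths/excludePaths options above amount to a prefix filter over URL paths. One way the filtering semantics can be sketched (assumed behavior: excludes win, and an empty include list allows everything; check the Firecrawl documentation for the exact rules):

    ```javascript
    // Decide whether a URL passes include/exclude path prefixes.
    // Sketches assumed semantics: excludes take precedence, and an
    // empty include list means "allow everything".
    function passesPathFilter(url, includePaths = [], excludePaths = []) {
      const { pathname } = new URL(url);
      if (excludePaths.some((p) => pathname.startsWith(p))) return false;
      if (includePaths.length === 0) return true;
      return includePaths.some((p) => pathname.startsWith(p));
    }
    ```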

Site Mapping

// Generate site map
{
  name: "map",
  arguments: {
    url: "https://example.com",
    includeSubdomains: true,
    limit: 1000
  }
}

Data Extraction

// Extract structured data
{
  name: "extract",
  arguments: {
    urls: ["https://example.com/product1", "https://example.com/product2"],
    prompt: "Extract product details",
    schema: {
      name: "string",
      price: "number",
      description: "string"
    }
  }
}
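
    The schema in the extract example maps field names to type names. A client-side check that extracted records match that shape can be sketched as follows; `matchesSchema` is illustrative, and the server's own schema validation is authoritative:

    ```javascript
    // Check that a record's fields match a { field: "typeName" } schema.
    // Illustrative only; the server's schema validation is authoritative.
    function matchesSchema(record, schema) {
      return Object.entries(schema).every(
        ([field, typeName]) => typeof record[field] === typeName
      );
    }
    ```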

Configuration

See configuration guide for detailed setup options.

API Documentation

See API documentation for detailed endpoint specifications.

Development

# Install dependencies
npm install

# Build
npm run build

# Run tests
npm test

# Start in development mode
npm run dev

Examples

Check the examples directory for more usage examples:

  • Basic scraping: scrape.ts
  • Crawling and mapping: crawl-and-map.ts

Error Handling

The server implements robust error handling:

  • Rate limiting with exponential backoff
  • Automatic retries
  • Detailed error messages
  • Debug logging
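
The rate limiting with exponential backoff described above can be pictured as a doubling delay schedule between retries, capped at a maximum. A minimal sketch; the constants are illustrative, not the server's actual settings:

```javascript
// Compute retry delays for exponential backoff, capped at a maximum.
// baseMs and maxMs are illustrative constants, not the server's defaults.
function backoffDelays(attempts, baseMs = 500, maxMs = 8000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * 2 ** i, maxMs)
  );
}
```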

Security

  • API key protection
  • Request validation
  • Domain allowlisting
  • Rate limiting
  • Safe error messages

Contributing

See CONTRIBUTING.md for contribution guidelines.

License

MIT License - see LICENSE for details.
