Update (Feb 2026): Since this post was written, the MCP server has been published to npm. Installation is now a single npx command — no cloning required. The server has also grown from 3 to 5 tools, adding PR coverage and branch comparison. See the updated installation section and new tools below.

The Problem: When AI Meets HTTP Limitations

Codecov is a code coverage platform that tracks how much of your codebase is exercised by tests. It integrates with CI/CD pipelines to report coverage percentages, highlight uncovered lines, and surface trends over time—so you know whether your test suite is actually keeping pace with your code.

I needed Claude Code to access our self-hosted Codecov instance. Simple enough, right? Just make some HTTP requests. But Claude Code was struggling with direct HTTP calls—the errors were messy, the context was getting polluted, and it wasn’t reliable.

I searched for an official Codecov MCP server. Nothing. Then I searched for any MCP server for Codecov and found exactly one third-party option: stedwick/codecov-mcp. It was a solid starting point, but it didn’t support self-hosted Codecov instances—a dealbreaker for my use case.

With nothing that would work for me, the choice was clear: build my own MCP server. And here’s the meta twist—I’d use Claude Code to build the tooling that extends Claude Code itself.

Time budget: 2 hours. The clock started.

The 30-Minute MVP: From Zero to Working Server

The beauty of MCP servers is their simplicity. The Model Context Protocol is essentially a JSON-RPC interface that lets Claude access external tools. With Claude Code writing the implementation and me providing guidance, we banged out a working server in 30 minutes flat.

The first local commit was a functioning MCP server with three tools—already more capable than the existing third-party option. The server has since grown to five tools as new use cases emerged.

Architecture at a Glance

Here’s the core structure (265 lines total):

#!/usr/bin/env node
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

// Configuration from environment
export interface CodecovConfig {
  baseUrl: string;  // Self-hosted or codecov.io
  token?: string;   // API token
}

export function getConfig(): CodecovConfig {
  return {
    baseUrl: process.env.CODECOV_BASE_URL || "https://codecov.io",
    token: process.env.CODECOV_TOKEN
  };
}

The CodecovClient class handles API communication:

export class CodecovClient {
  private baseUrl: string;
  private token?: string;

  constructor(config: CodecovConfig) {
    this.baseUrl = config.baseUrl;
    this.token = config.token;
  }

  private async fetch(path: string): Promise<any> {
    const url = `${this.baseUrl}${path}`;
    const headers: Record<string, string> = {
      "Accept": "application/json",
    };

    if (this.token) {
      headers["Authorization"] = `bearer ${this.token}`;
    }

    const response = await fetch(url, { headers });
    if (!response.ok) {
      throw new Error(`Codecov API error: ${response.status}`);
    }
    return response.json();
  }

  async getFileCoverage(owner: string, repo: string, filePath: string, ref?: string) {
    const refParam = ref ? `?ref=${encodeURIComponent(ref)}` : "";
    const encodedPath = encodeURIComponent(filePath);
    return this.fetch(`/api/v2/gh/${owner}/repos/${repo}/file_report/${encodedPath}${refParam}`);
  }

  async getCommitCoverage(owner: string, repo: string, commitSha: string) {
    return this.fetch(`/api/v2/gh/${owner}/repos/${repo}/commits/${commitSha}`);
  }

  async getRepoCoverage(owner: string, repo: string, branch?: string) {
    const branchParam = branch ? `?branch=${encodeURIComponent(branch)}` : "";
    return this.fetch(`/api/v2/gh/${owner}/repos/${repo}${branchParam}`);
  }
}

Three tools, clean API, configurable endpoints. Done.
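Tool calls reach that client through a thin dispatch layer. Here is a simplified, self-contained sketch of that layer; the argument names are illustrative, and the real server wires this up through the MCP SDK's CallTool request handler rather than a bare function:

```typescript
// Simplified sketch of the tool-dispatch layer. The CoverageClient interface
// mirrors the CodecovClient methods shown above; argument names are illustrative.
type ToolArgs = Record<string, string>;

interface CoverageClient {
  getFileCoverage(owner: string, repo: string, filePath: string, ref?: string): Promise<unknown>;
  getCommitCoverage(owner: string, repo: string, commitSha: string): Promise<unknown>;
  getRepoCoverage(owner: string, repo: string, branch?: string): Promise<unknown>;
}

export async function dispatchTool(
  client: CoverageClient,
  name: string,
  args: ToolArgs
): Promise<unknown> {
  switch (name) {
    case "get_file_coverage":
      return client.getFileCoverage(args.owner, args.repo, args.path, args.ref);
    case "get_commit_coverage":
      return client.getCommitCoverage(args.owner, args.repo, args.sha);
    case "get_repo_coverage":
      return client.getRepoCoverage(args.owner, args.repo, args.branch);
    default:
      // Unknown tool names become clean errors instead of silent failures.
      throw new Error(`Unknown tool: ${name}`);
  }
}
```

The switch is deliberately boring: one case per tool, all errors thrown, nothing clever. Boring dispatch code is easy for both humans and AI to extend.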

The Authentication Stumble: Upload Token vs. API Token

After the initial euphoria wore off, reality hit: 401 Unauthorized.

Here’s what I learned the hard way: Codecov has two completely different token types:

  1. Upload Token: For pushing coverage reports during CI/CD (found in repo Settings → General)
  2. API Token: For reading coverage data via the API (created in Settings → Access)

I was using an upload token. Classic mistake. Once I generated a proper API token from the Codecov Settings → Access tab, authentication worked perfectly.

This became a critical part of the README—no one else should waste time on this gotcha:

## Token Types

**Important**: Codecov has two different types of tokens:

- **Upload Token**: Used for pushing coverage reports TO Codecov during CI/CD
- **API Token**: Used for reading coverage data FROM Codecov via the API

This MCP server requires an **API token**, not an upload token.
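Since the mix-up is so easy to make, a server could also fail fast at startup with a pointed message. A sketch of such a check; note the UUID heuristic for spotting upload tokens is my assumption, not something Codecov guarantees:

```typescript
// Fail fast at startup with a message that points at the right token type.
// The UUID heuristic below is an assumption: upload tokens are often
// UUID-shaped, but Codecov does not document a guaranteed format.
export function checkToken(token: string | undefined): string | null {
  if (!token) {
    return "CODECOV_TOKEN is not set. Create an API token under Settings → Access.";
  }
  const uuidLike = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;
  if (uuidLike.test(token)) {
    return "This looks like an upload token. This server needs an API token (Settings → Access).";
  }
  return null; // token looks plausible; the API will be the final judge
}
```

A null return means "proceed"; anything else is a human-readable reason to abort before the first confusing 401.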

Self-Hosted Support: Why It Mattered

The existing server only supported codecov.io. I use a self-hosted Codecov instance, so I needed configurable endpoints.

The fix was trivial—make the base URL configurable:

{
  "mcpServers": {
    "codecov": {
      "command": "npx",
      "args": ["-y", "@egulatee/mcp-codecov"],
      "env": {
        "CODECOV_BASE_URL": "https://codecov.your-company.com",
        "CODECOV_TOKEN": "${CODECOV_TOKEN}"
      }
    }
  }
}

One environment variable unlocked both cloud and enterprise use cases. This kind of flexibility is why building your own tooling matters.
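The one subtlety: the client builds URLs by string concatenation, so a trailing slash in CODECOV_BASE_URL would produce double slashes in every request. A small normalizer covers that; this is a sketch, and the published server may handle it differently:

```typescript
// Normalize CODECOV_BASE_URL so `${baseUrl}${path}` never yields `...com//api/...`.
// Sketch only; defaults and trimming behavior are illustrative.
export function normalizeBaseUrl(raw: string | undefined): string {
  const url = (raw && raw.trim()) || "https://codecov.io";
  return url.replace(/\/+$/, ""); // strip any trailing slashes
}
```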

Testing: 100% Coverage on an MCP Server

Here’s where things got interesting. Most MCP servers I’ve seen have zero tests. I wanted production-grade quality.

Claude Code helped implement comprehensive Vitest tests:

import { describe, it, expect, beforeEach, vi } from 'vitest';
import { CodecovClient } from '../index.js';

describe('CodecovClient', () => {
  let client: CodecovClient;

  beforeEach(() => {
    client = new CodecovClient({
      baseUrl: 'https://codecov.io',
      token: 'test-token'
    });

    global.fetch = vi.fn();
  });

  it('includes authorization header when token is provided', async () => {
    const mockResponse = { ok: true, json: async () => ({ coverage: 95 }) };
    (global.fetch as any).mockResolvedValue(mockResponse);

    await client.getRepoCoverage('owner', 'repo');

    expect(global.fetch).toHaveBeenCalledWith(
      expect.any(String),
      expect.objectContaining({
        headers: expect.objectContaining({
          'Authorization': 'bearer test-token'
        })
      })
    );
  });
});
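The error path deserves the same treatment as the happy path. Here is a framework-free sketch of the 401 case; the fetchCoverage helper is illustrative, standing in for the client's private fetch method, with the fetch implementation injected so the check runs without a network:

```typescript
// Framework-free sketch of the error path: a 401 (the token mix-up from
// earlier) should surface as a clear error, not a confusing payload.
// fetchImpl stands in for global fetch so this runs without a network.
export async function fetchCoverage(
  fetchImpl: (url: string) => Promise<{ ok: boolean; status: number; json: () => Promise<unknown> }>,
  url: string
): Promise<unknown> {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`Codecov API error: ${response.status}`);
  }
  return response.json();
}
```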

Final coverage stats:

  • Statements: 100%
  • Branches: 100%
  • Functions: 100%
  • Lines: 100%

We even set up GitHub Actions to run tests on every push and upload coverage reports to… Codecov. Using the MCP server to check coverage on itself. The recursion is beautiful.

Making It Presentable: The Final Polish

The last hour was spent on developer experience:

Comprehensive Documentation

The README grew to 350 lines with:

  • Clear installation instructions for both Claude Desktop and Claude Code
  • Configuration examples for self-hosted instances
  • Troubleshooting guide (especially for the token confusion)
  • API compatibility notes

Badges (Because Why Not?)

[![codecov](https://codecov.io/gh/egulatee/mcp-server-codecov/branch/main/graph/badge.svg)](https://codecov.io/gh/egulatee/mcp-server-codecov)
![Test and Coverage](https://github.com/egulatee/mcp-server-codecov/workflows/Test%20and%20Coverage/badge.svg)
![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)

Installation Options

The server is published on npm as @egulatee/mcp-codecov — no cloning required.

claude mcp add --transport stdio codecov \
  --env CODECOV_BASE_URL=https://codecov.io \
  --env CODECOV_TOKEN=${CODECOV_TOKEN} \
  -- npx -y @egulatee/mcp-codecov

Verify the server connected:

claude mcp get codecov
# Expected: "codecov: @egulatee/mcp-codecov - ✓ Connected"

One command to install. No manual JSON editing required.

Claude Desktop (JSON config)

Add to your claude_desktop_config.json (see the self-hosted example above — same npx approach).

Global npm install

npm install -g @egulatee/mcp-codecov

Then use mcp-codecov as the command instead of npx -y @egulatee/mcp-codecov.
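With the global install, a Claude Desktop entry might look like this (adapting the earlier self-hosted example; values are illustrative):

```json
{
  "mcpServers": {
    "codecov": {
      "command": "mcp-codecov",
      "env": {
        "CODECOV_BASE_URL": "https://codecov.io",
        "CODECOV_TOKEN": "${CODECOV_TOKEN}"
      }
    }
  }
}
```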

Docker

docker run ghcr.io/egulatee/mcp-server-codecov

Real-World Usage: It Works Beautifully

Now comes the payoff. Here’s how I actually use this in my daily workflow:

Example 1: PR Review Coverage Check

Me: "Check coverage for commit a1b2c3d in egulatee/my-project"

Claude: *uses mcp__codecov__get_commit_coverage*

Claude: "This commit increased coverage from 92.3% to 93.1%. Here are the
        files with coverage changes:
        - src/auth.ts: 85% → 91% (+6%)
        - src/utils.ts: 78% → 78% (no change)
        - tests/auth.test.ts: 100% (new file)"

Example 2: Finding Coverage Gaps

Me: "What's the coverage for src/database/queries.ts on the main branch?"

Claude: *uses mcp__codecov__get_file_coverage*

Claude: "src/database/queries.ts has 76.5% coverage. Uncovered lines:
        - Lines 45-52: Error handling for connection timeout
        - Lines 89-93: Retry logic
        - Line 107: Edge case for null results

        Would you like me to suggest test cases for these gaps?"

Example 3: Repository Health Check

Me: "Show me overall coverage for the backend repo"

Claude: *uses mcp__codecov__get_repo_coverage*

Claude: "Repository: mycompany/backend-api
        Overall Coverage: 88.7%
        - Files: 247
        - Lines: 15,234
        - Hits: 13,512
        - Misses: 1,722

        Trend: +2.3% over last 30 days"

Example 4: PR Coverage Check

Me: "What's the coverage impact of PR #42 in egulatee/my-project?"

Claude: *uses mcp__codecov__get_pull_request_coverage*

Claude: "PR #42 adds 47 new lines. Coverage impact:
        - Overall: 88.2% → 89.1% (+0.9%)
        - New files: src/payments/stripe.ts at 94.1%
        - Changed files: src/payments/base.ts dropped 2.3% (new branch uncovered)

        The uncovered branch is the retry-on-timeout path at line 83."

Example 5: Branch Comparison

Me: "Compare coverage between main and the feature/payments branch"

Claude: *uses mcp__codecov__compare_coverage*

Claude: "feature/payments vs main:
        - Base (main): 88.2%
        - Head (feature/payments): 89.1%
        - Delta: +0.9%

        Files improved: 4  |  Files regressed: 1  |  Unchanged: 242"

No more fumbling with Codecov’s web UI. No more copying coverage numbers. Claude has direct access to the data and can reason about it.

Key Learnings: What This Project Taught Me

1. MCP Servers Are Simpler Than You Think

The entire working server is 265 lines. The protocol is straightforward JSON-RPC. If you can write a function that calls an API, you can write an MCP server.
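To make "straightforward JSON-RPC" concrete: every exchange is a JSON-RPC 2.0 message sent over stdio. A minimal request builder, purely illustrative since the MCP SDK handles the real framing:

```typescript
// Minimal JSON-RPC 2.0 request builder, the shape MCP messages take over stdio.
// Illustrative sketch only; in practice the MCP SDK does this framing for you.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

export function makeRequest(
  id: number,
  method: string,
  params?: Record<string, unknown>
): string {
  const msg: JsonRpcRequest = { jsonrpc: "2.0", id, method, ...(params ? { params } : {}) };
  return JSON.stringify(msg);
}
```

A tools/call request is just this frame with the tool name and arguments in params. That is the whole protocol surface you need to understand.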

2. Configuration Flexibility Matters

The difference between “works for me” and “works for everyone” is often just one environment variable. CODECOV_BASE_URL made this server useful for both cloud and enterprise users.

3. Tests on Infrastructure Code Are Worth It

Even “simple” integration code benefits from tests. Mocking the Codecov API helped catch edge cases (null refs, missing tokens, encoding issues) before they became production problems.

4. AI Pair Programming Accelerates Everything

Claude Code wrote 100% of the implementation code. My role was:

  • Architectural decisions (“three tools, not one”)
  • API exploration (“check the Codecov v2 docs”)
  • Review and refinement (“add better error messages”)
  • Testing strategy (“we need 100% coverage”)

This is what AI-augmented development looks like in practice. Human judgment, AI execution.

Important note: MCP servers extend what AI can do, but they don’t control what it will do. For critical workflows, you need validation guardrails—see Build LLM Guardrails, Not Better Prompts for why guardrails matter even with powerful tools like MCP.

5. Documentation Is Half the Product

A working server without docs is useless. The token types section, troubleshooting guide, and configuration examples probably saved future users hours of frustration.

What’s Next?

Now that the MCP server is working beautifully, I’m going back to my current development efforts with a significantly improved workflow. Claude Code now has direct access to coverage data, which means:

  • No more context switching to check coverage reports
  • Coverage-aware code reviews happen naturally in conversation
  • Test gap identification is instant
  • Coverage trends inform architectural decisions

The 2-hour investment in building this tool has already paid dividends. This is the compounding effect of AI-augmented development—tooling that makes the AI more capable, which in turn helps you build better tooling.

The Meta Conclusion

This project embodies what “AI Augmented Software Development” means to me:

  1. Human identifies the need: “Claude needs better Codecov access”
  2. AI implements the solution: Claude Code writes the MCP server
  3. Human provides oversight: Architecture, testing, polish
  4. AI uses the result: Claude Code uses the MCP server to analyze coverage
  5. Cycle repeats: Improvements feed back into the toolchain

We used Claude Code to build tooling that makes Claude Code more powerful. The snake eats its tail, and the result is better software, faster.

Total time: 2 hours. Total value: immeasurable.


About This Article

This post was written by Claude Code (Sonnet 4.5) with guidance and vision from Eric Gulatee. The MCP server project itself was also developed collaboratively—Claude Code wrote 100% of the implementation, while Eric provided architectural decisions, API exploration, code review, and testing strategy. This article is a real-world example of AI-augmented development in action.


GitHub Repository: egulatee/mcp-server-codecov

Package: @egulatee/mcp-codecov (v2.4.0, published to npm)

MCP Protocol: Model Context Protocol Documentation


Are you writing unit tests and integration tests for your projects? Are you tracking code coverage to make sure your test suite keeps pace with your codebase? If so, this MCP server might slot right into your workflow.