RepoGEO

REPOGEO REPORT · LITE

matt1398/claude-devtools

Default branch main · commit cf335c6d · scanned 5/11/2026, 9:41:16 PM

GitHub: 3,329 stars · 249 forks

AI VISIBILITY SCORE
33 / 100 · Critical
  • Category recall: 0 / 2 (not recommended in any query)
  • Rule findings: 2 pass · 0 warn · 0 fail (objective metadata checks)
  • AI knows your name: 2 / 3 (direct prompts that named your repo)
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface matt1398/claude-devtools, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • [high] readme #1
    Reposition the README's core value proposition to make clear it is a visual debugging UI.

    CURRENT
    <p align="center"><strong>Your Claude is coding blind. See everything it did.</strong></p><p align="center"><sub>The debugging tool for Claude Code. Read session transcripts, inspect tool calls, track token usage — directly from the Claude Code logs on your machine.</sub></p>
    COPY-PASTE FIX
    <p align="center"><strong>claude-devtools: The visual debugging and observability UI for Claude Code.</strong></p><p align="center"><sub>Inspect session logs, tool calls, token usage, subagents, and context window directly from your local Claude Code logs.</sub></p>
  • [medium] readme #2
    Add a comparison section to the README.

    COPY-PASTE FIX
    ## Comparison to other LLM Observability Tools
    
    Unlike general-purpose LLM observability platforms (e.g., LangSmith, Weights & Biases Prompts, Helicone) which often require API integrations or cloud deployments, claude-devtools is a **local, desktop application** specifically designed for **Claude Code**. It directly reads logs from your machine, providing a private, visual UI to debug and understand your Claude Code sessions without sending data to external services.
  • [low] about #3
    Refine the repository description for clarity.

    CURRENT
    The missing DevTools for Claude Code — inspect session logs, tool calls, token usage, subagents, and context window in a visual UI. Free, open source.
    COPY-PASTE FIX
    A free, open-source desktop debugging and observability UI for Claude Code. Inspect session logs, tool calls, token usage, subagents, and context window visually.
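If you manage the repo from the command line, the description fix above can be shipped without opening the GitHub UI. A minimal sketch using the GitHub CLI's `gh repo edit` command, assuming `gh` is installed and authenticated (the guard keeps the script harmless where it is not):

```shell
# New "About" description from the action item above.
# GitHub truncates descriptions beyond ~350 characters, so keep it short.
DESC="A free, open-source desktop debugging and observability UI for Claude Code. Inspect session logs, tool calls, token usage, subagents, and context window visually."

# Apply it only if the GitHub CLI is available; otherwise do nothing.
if command -v gh >/dev/null 2>&1; then
  gh repo edit matt1398/claude-devtools --description "$DESC"
fi
```

The same change can be made by hand under the repository's "About" gear icon; the CLI route just makes it scriptable alongside the other fixes.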

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

  • Recall: 0 / 2 (0% of queries surface matt1398/claude-devtools)
  • Avg rank: not applicable, never recommended (lower is better; #1 = top recommendation)
  • Share of voice: 0% (of all named tools, what % are you?)
  • Top rival: LangSmith (recommended in 2 of 2 queries)
COMPETITOR LEADERBOARD
  1. LangSmith · recommended 2×
  2. Weights & Biases Prompts · recommended 1×
  3. OpenAI Playground · recommended 1×
  4. Humanloop · recommended 1×
  5. Helicone · recommended 1×
  • CATEGORY QUERY
    Struggling to debug AI agent tool calls; need a visual way to inspect LLM session logs.
    you: not recommended
    AI recommended (in order):
    1. LangSmith
    2. Weights & Biases Prompts
    3. OpenAI Playground
    4. Humanloop
    5. Helicone
    6. Streamlit
    7. Dash

    AI recommended 7 alternatives but never named matt1398/claude-devtools. This is the gap to close.

  • CATEGORY QUERY
    Looking for a developer tool to monitor LLM token usage and context window visually.
    you: not recommended
    AI recommended (in order):
    1. LangSmith
    2. OpenAI Playground / API Dashboard
    3. Weights & Biases Prompts (wandb/wandb)
    4. Helicone (helicone/helicone)
    5. Phoenix (Arize-AI/phoenix)
    6. PromptLayer (PromptLayer/promptlayer)

    AI recommended 6 alternatives but never named matt1398/claude-devtools. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    pass

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of matt1398/claude-devtools?
    pass
    AI named matt1398/claude-devtools explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts matt1398/claude-devtools in production, what risks or prerequisites should they evaluate first?
    pass
    AI named matt1398/claude-devtools explicitly

  • In one sentence, what problem does the repo matt1398/claude-devtools solve, and who is the primary audience?
    fail
    AI did not name matt1398/claude-devtools — likely talking about a different project

Embed your GEO score

Drop this badge into the README of matt1398/claude-devtools. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/matt1398/claude-devtools.svg)](https://repogeo.com/en/r/matt1398/claude-devtools)
HTML
<a href="https://repogeo.com/en/r/matt1398/claude-devtools"><img src="https://repogeo.com/badge/matt1398/claude-devtools.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

matt1398/claude-devtools — Lite scans stay free; this card compares Pro deep-scan limits against Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)