REPOGEO REPORT · LITE
getagentseal/codeburn
Default branch main · commit 8208cf8f · scanned 5/9/2026, 6:16:22 AM
GitHub: 5,892 stars · 457 forks
This report has four parts:
- Action plan: what to do next, as copy-pasteable changes prioritized by impact.
- Category visibility: the real GEO test. When a user asks an AI a brand-free question that should surface getagentseal/codeburn, does the AI actually recommend you, or your competitors?
- Objective checks: verify the metadata signals AI engines weight first.
- Self-mention check: detects whether AI even knows you exist by name.
Action plan — copy-paste fixes
3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.
- #1 · high · readme: Reposition the core value proposition to the very top of the README
  Why:
  Current: The README currently starts with centered paragraphs and empty links before the main descriptive text.
  Copy-paste fix: Add this sentence as the absolute first line of the README, before any formatting or badges: 'CodeBurn: Interactive TUI dashboard for AI coding token usage, cost, and performance observability across 18 AI coding tools (Claude Code, Codex, Cursor, etc.).'
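A fix like the one above can be scripted. A minimal POSIX-shell sketch; `README_demo.md` is a hypothetical scratch stand-in for the real `README.md` so the snippet is safe to run anywhere:

```shell
# Prepend the positioning sentence as the absolute first README line.
TAGLINE='CodeBurn: Interactive TUI dashboard for AI coding token usage, cost, and performance observability across 18 AI coding tools (Claude Code, Codex, Cursor, etc.).'

# Stand-in for the current README, which opens with badges and centered paragraphs.
printf '<p align="center">badges and centered paragraphs...</p>\n' > README_demo.md

# Prepend the tagline, keeping the existing content below it.
printf '%s\n\n' "$TAGLINE" | cat - README_demo.md > README_demo.tmp
mv README_demo.tmp README_demo.md

# Sanity check: the first line is now the tagline.
head -n 1 README_demo.md
```

Swap `README_demo.md` for the repo's real `README.md` (and review the diff) before committing.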
- #2 · high · about: Add a homepage URL to the repository metadata
  Why:
  Copy-paste fix: Set the 'Homepage' field in the repository settings to: https://www.npmjs.com/package/codeburn
- #3 · medium · about: Refine the 'About' description for clarity and keyword density
  Why:
  Current: See where your AI coding tokens go. Interactive TUI dashboard for Claude Code, Codex, and Cursor cost observability.
  Copy-paste fix: Interactive TUI dashboard for AI coding token usage and cost observability. Track spending across Claude Code, Codex, Cursor, and 15+ other AI coding assistants.
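Both 'about' fixes can also be applied from the command line with the GitHub CLI (`gh repo edit` accepts `--homepage` and `--description`). A dry-run sketch that only prints the commands, so nothing changes until you run them yourself with an authenticated `gh`:

```shell
# Dry run: print the gh commands that would apply fixes #2 and #3.
# Assumes the GitHub CLI is installed and authenticated when you
# actually execute the printed commands; values come from this report.
REPO='getagentseal/codeburn'
HOMEPAGE='https://www.npmjs.com/package/codeburn'
DESC='Interactive TUI dashboard for AI coding token usage and cost observability. Track spending across Claude Code, Codex, Cursor, and 15+ other AI coding assistants.'

echo "gh repo edit $REPO --homepage '$HOMEPAGE'"
echo "gh repo edit $REPO --description '$DESC'"
```

Copy the printed lines into a terminal where `gh auth status` succeeds for this repo.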
Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash
Category visibility — the real GEO test
Brand-free queries asked of google/gemini-2.5-flash. Did the AI recommend you, or someone else?
Same questions for every model — switch tabs to compare answers and rankings.
- Helicone · recommended 2×
- OpenAI API Usage Dashboard · recommended 1×
- Azure AI Studio / Azure OpenAI Service Monitoring · recommended 1×
- LangChain Callbacks / Tracing · recommended 1×
- LangSmith · recommended 1×
- Category query: "How can I monitor token usage and cost for my AI coding assistant usage?" · you: not recommended · AI recommended (in order):
- OpenAI API Usage Dashboard
- Azure AI Studio / Azure OpenAI Service Monitoring
- LangChain Callbacks / Tracing
- LangSmith
- Helicone
- LiteLLM (BerriAI/litellm)
- Custom API Wrappers with Logging
AI recommended 7 alternatives but never named getagentseal/codeburn. This is the gap to close.
- Category query: "Looking for a local tool to observe AI coding costs broken down by project and model." · you: not recommended · AI recommended (in order):
- Langfuse
- OpenTelemetry
- Grafana
- Prometheus
- Datadog
- Helicone
- Weights & Biases
- Snowflake
- Google BigQuery
- Tableau
- Microsoft Power BI
- Metabase
AI recommended 12 alternatives but never named getagentseal/codeburn. This is the gap to close.
Objective checks
Rule-based audits of metadata signals AI engines weight most.
- Metadata completeness: warn
  Suggestion:
- README presence: pass
Self-mention check
Does AI even know your repo exists when asked about it directly?
- "Compared to common alternatives in this category, what is the core differentiator of getagentseal/codeburn?" · pass · AI did not name getagentseal/codeburn (likely talking about a different project)
- "If a team adopts getagentseal/codeburn in production, what risks or prerequisites should they evaluate first?" · pass · AI named getagentseal/codeburn explicitly
- "In one sentence, what problem does the repo getagentseal/codeburn solve, and who is the primary audience?" · pass · AI named getagentseal/codeburn explicitly

AI answers can be confidently wrong. Read each for accuracy: does it match your actual tech stack, audience, and differentiator?
Embed your GEO score
Drop this badge into the README of getagentseal/codeburn. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.
Markdown: [![RepoGEO](https://repogeo.com/badge/getagentseal/codeburn.svg)](https://repogeo.com/en/r/getagentseal/codeburn)
HTML: <a href="https://repogeo.com/en/r/getagentseal/codeburn"><img src="https://repogeo.com/badge/getagentseal/codeburn.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses
getagentseal/codeburn: Lite scans stay free; this card compares Pro's deep-scan limits with Lite's.
- Deep reports: 10 / month
- Brand-free category queries: 5 (vs 2 in Lite)
- Prioritized action items: 8 (vs 3 in Lite)