RepoGEO

REPOGEO REPORT · LITE

mckaywrigley/clarity-ai

Default branch main · commit 5a33db14 · scanned 5/10/2026, 6:52:38 PM

GitHub: 1,413 stars · 274 forks

AI VISIBILITY SCORE
28 / 100 · Critical
Category recall: 0 / 2 (not recommended in any query)
Rule findings: 1 pass · 1 warn · 0 fail (objective metadata checks)
AI knows your name: 2 / 3 (direct prompts that named your repo)
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface mckaywrigley/clarity-ai, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · high priority · topics
    Add relevant topics to the repository (see the API sketch after this action plan)

    Why:

    COPY-PASTE FIX
    ["ai-assistant", "perplexity-clone", "openai", "web-scraping", "llm", "fullstack", "nextjs"]
  • #2 · high priority · readme
    Clarify the core functionality in the README's opening

    Why:

    CURRENT
    # Clarity AI
    
    Clarity is simple perplexity.ai clone. Use the code for whatever you like! :)
    COPY-PASTE FIX
    # Clarity AI
    
    Clarity is a simple, open-source Perplexity.ai clone that uses web scraping and OpenAI's API to answer questions with real-time data. Use the code for whatever you like! :)
  • #3 · medium priority · faq
    Add a FAQ section to address common misconceptions

    Why:

    COPY-PASTE FIX
    ## FAQ
    
    ### Does Clarity AI run entirely locally?
    No, Clarity AI fetches information from the web and uses OpenAI's API to generate answers. It is not designed to run entirely locally or keep all data on your machine.
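
For item #1, the topics can also be applied without opening the GitHub UI. The TypeScript sketch below calls GitHub's repository-topics endpoint (PUT /repos/{owner}/{repo}/topics). It is an illustrative example only, not part of this report's tooling, and it assumes Node 18+ (global fetch) plus a token with push access exposed as the hypothetical environment variable GITHUB_TOKEN.

  // set-topics.ts: sketch that replaces the repository's topic list via the GitHub REST API.
  // GITHUB_TOKEN is a hypothetical env var holding a token with push access to the repo.
  const TOPICS = [
    "ai-assistant", "perplexity-clone", "openai", "web-scraping",
    "llm", "fullstack", "nextjs",
  ];

  async function setTopics(owner: string, repo: string, names: string[]): Promise<void> {
    const res = await fetch(`https://api.github.com/repos/${owner}/${repo}/topics`, {
      method: "PUT",
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ names }), // topics must be lowercase and hyphen-separated
    });
    if (!res.ok) {
      throw new Error(`GitHub API ${res.status}: ${await res.text()}`);
    }
  }

  setTopics("mckaywrigley", "clarity-ai", TOPICS)
    .then(() => console.log("Topics updated"))
    .catch(console.error);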

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

The same questions are asked to every model so you can compare answers and rankings.

Recall: 0 / 2 (0% of queries surface mckaywrigley/clarity-ai)
Avg rank: n/a (lower is better; #1 = top recommendation)
Share of voice: 0% (of all named tools, what % are you?)
Top rival: LangChain (recommended in 1 of 2 queries)
A sketch of how these metrics can be computed follows the query list below.
COMPETITOR LEADERBOARD
  1. LangChain · recommended 1×
  2. OpenAI GPT-4 · recommended 1×
  3. Serper API · recommended 1×
  4. Google Search API · recommended 1×
  5. LlamaIndex · recommended 1×
  • CATEGORY QUERY
    How can I build an AI assistant that answers questions using real-time web data?
    you: not recommended
    AI recommended (in order):
    1. LangChain
    2. OpenAI GPT-4
    3. Serper API
    4. Google Search API
    5. LlamaIndex
    6. GPT-3.5 Turbo
    7. Hugging Face Transformers
    8. T5
    9. BART
    10. Beautiful Soup
    11. Scrapy
    12. Microsoft Semantic Kernel
    13. Azure OpenAI Service
    14. Bing Search API
    15. OpenAI API
    16. Requests

    AI recommended 16 alternatives but never named mckaywrigley/clarity-ai. This is the gap to close.

  • CATEGORY QUERY
    What tools help summarize web search results into coherent AI-generated answers?
    you: not recommended
    AI recommended (in order):
    1. Perplexity AI
    2. ChatGPT
    3. Microsoft Copilot
    4. Google Gemini
    5. You.com
    6. NeevaAI
    7. Elicit

    AI recommended 7 alternatives but never named mckaywrigley/clarity-ai. This is the gap to close.

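The recall, share-of-voice, and average-rank figures above are derived from per-query recommendation lists like the two shown here. The TypeScript sketch below shows one plausible way to compute them; the arithmetic is an assumption for illustration, not RepoGEO's actual implementation.

  // Sketch: compute recall, share of voice, and average rank from per-query
  // recommendation lists. Mirrors the definitions shown in this report; the exact
  // formulas RepoGEO uses are an assumption.
  interface QueryResult {
    query: string;
    recommendations: string[]; // tools named by the AI, in ranked order
  }

  function scoreVisibility(results: QueryResult[], repo: string) {
    const hits = results.filter(r => r.recommendations.includes(repo));
    const recall = hits.length / results.length;

    // Average rank: mean 1-based position across the queries where the repo appears.
    const avgRank = hits.length
      ? hits.reduce((sum, r) => sum + r.recommendations.indexOf(repo) + 1, 0) / hits.length
      : null; // undefined when the repo is never recommended, as in this report

    // Share of voice: the repo's mentions as a fraction of all tool mentions.
    const totalMentions = results.reduce((sum, r) => sum + r.recommendations.length, 0);
    const shareOfVoice = totalMentions ? hits.length / totalMentions : 0;

    return { recall, avgRank, shareOfVoice };
  }

  // Example with abbreviated versions of the two queries above.
  const example = scoreVisibility(
    [
      { query: "real-time web data assistant", recommendations: ["LangChain", "OpenAI GPT-4", "Serper API"] },
      { query: "summarize web search results", recommendations: ["Perplexity AI", "ChatGPT"] },
    ],
    "mckaywrigley/clarity-ai",
  );
  console.log(example); // { recall: 0, avgRank: null, shareOfVoice: 0 }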

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

    Suggestion:

  • README presence
    pass
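
As an illustration of what a rule-based metadata audit can look like, the sketch below fetches the repository record from GitHub's public API (GET /repos/{owner}/{repo}) and flags a missing description, topics, or homepage. The checks and the warn/pass thresholds are assumptions; RepoGEO's actual rules may differ.

  // Sketch of a rule-based metadata check against GitHub's public REST API.
  // The pass/warn criteria are assumptions for illustration.
  interface RepoMetadata {
    description: string | null;
    topics?: string[];
    homepage: string | null;
  }

  async function checkMetadata(owner: string, repo: string): Promise<"pass" | "warn"> {
    const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`, {
      headers: { Accept: "application/vnd.github+json" },
    });
    if (!res.ok) throw new Error(`GitHub API ${res.status}`);
    const data = (await res.json()) as RepoMetadata;

    const problems: string[] = [];
    if (!data.description) problems.push("missing description");
    if (!data.topics || data.topics.length === 0) problems.push("no topics");
    if (!data.homepage) problems.push("no homepage URL");

    problems.forEach(p => console.warn(`metadata: ${p}`));
    return problems.length ? "warn" : "pass";
  }

  checkMetadata("mckaywrigley", "clarity-ai").then(result => console.log(result));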

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of mckaywrigley/clarity-ai?
    fail
    AI did not name mckaywrigley/clarity-ai — likely talking about a different project

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts mckaywrigley/clarity-ai in production, what risks or prerequisites should they evaluate first?
    pass
    AI named mckaywrigley/clarity-ai explicitly

  • In one sentence, what problem does the repo mckaywrigley/clarity-ai solve, and who is the primary audience?
    pass
    AI named mckaywrigley/clarity-ai explicitly
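
The pass/fail labels above hinge on whether the model's answer explicitly names the repository. A minimal sketch of that detection step is below; the accepted name variants are assumptions, and real answers still need the manual accuracy read suggested above.

  // Sketch: decide whether an AI answer explicitly names the repository.
  // The accepted variants (owner/repo slug, bare repo name, GitHub URL) are assumptions.
  function namesRepo(answer: string, owner: string, repo: string): boolean {
    const text = answer.toLowerCase();
    const variants = [
      `${owner}/${repo}`,              // "mckaywrigley/clarity-ai"
      repo,                            // "clarity-ai"
      `github.com/${owner}/${repo}`,   // full GitHub URL
    ];
    return variants.some(v => text.includes(v.toLowerCase()));
  }

  // Example: the first self-mention prompt above would be marked a miss.
  console.log(namesRepo("LangChain and LlamaIndex are common choices.", "mckaywrigley", "clarity-ai")); // false
  console.log(namesRepo("mckaywrigley/clarity-ai is a Perplexity-style clone.", "mckaywrigley", "clarity-ai")); // true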

Embed your GEO score

Drop this badge into the README of mckaywrigley/clarity-ai. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/mckaywrigley/clarity-ai.svg)](https://repogeo.com/en/r/mckaywrigley/clarity-ai)
HTML
<a href="https://repogeo.com/en/r/mckaywrigley/clarity-ai"><img src="https://repogeo.com/badge/mckaywrigley/clarity-ai.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

mckaywrigley/clarity-ai: Lite scans stay free; this section compares Pro's deep-scan limits with Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 vs 2 in Lite
  • Prioritized action items: 8 vs 3 in Lite