RepoGEO

REPOGEO REPORT · LITE

vectara/hallucination-leaderboard

Default branch main · commit f032369a · scanned 5/10/2026, 8:42:38 AM

GitHub: 3,239 stars · 104 forks

AI VISIBILITY SCORE
27 / 100 · Critical

  • Category recall: 0 / 2 (not recommended in any query)
  • Rule findings: 2 pass · 0 warn · 0 fail (objective metadata checks)
  • AI knows your name: 1 / 3 (direct prompts that named your repo)
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface vectara/hallucination-leaderboard, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · readme · priority: high
    Reposition the README's opening paragraph to clarify its role as a public benchmark

    CURRENT
    Public LLM leaderboard computed using Vectara's Hallucination Evaluation Model, also known as HHEM. This evaluates how often an LLM introduces hallucinations when summarizing a document.
    COPY-PASTE FIX
    This repository hosts the public, continuously updated LLM Hallucination Leaderboard, a critical benchmark for evaluating and comparing Large Language Models on their factual consistency when summarizing documents. Powered by Vectara's Hallucination Evaluation Model (HHEM), it serves researchers and developers focused on LLM reliability and trustworthiness.
  • #2 · topics · priority: high
    Add more specific topics to improve categorization (a scripted way to apply fixes #2 and #3 is sketched after this list)

    CURRENT
    generative-ai, hallucinations, llm
    COPY-PASTE FIX
    generative-ai, hallucinations, llm, llm-evaluation, llm-benchmarking, factual-consistency, summarization
  • #3 · about · priority: medium
    Refine the 'About' section (description) to emphasize its unique value

    CURRENT
    Leaderboard Comparing LLM Performance at Producing Hallucinations when Summarizing Short Documents
    COPY-PASTE FIX
    Public, continuously updated leaderboard for benchmarking Large Language Models on their factual consistency and hallucination rates when summarizing documents. Essential for LLM evaluation and reliability.
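
If you prefer to script fixes #2 and #3, the sketch below uses GitHub's REST API (PUT /repos/{owner}/{repo}/topics replaces the topic list; PATCH /repos/{owner}/{repo} updates the description). The token handling and structure are illustrative assumptions, not part of this report; verify against GitHub's current API docs before running.

PYTHON (SKETCH)
# Applies action items #2 and #3 via GitHub's REST API.
# Assumes a personal access token with repo scope in the GITHUB_TOKEN env var.
import os
import requests

API = "https://api.github.com/repos/vectara/hallucination-leaderboard"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

# Fix #2: PUT overwrites the full topic list, so keep the existing topics too.
topics = ["generative-ai", "hallucinations", "llm", "llm-evaluation",
          "llm-benchmarking", "factual-consistency", "summarization"]
requests.put(f"{API}/topics", headers=HEADERS, json={"names": topics}).raise_for_status()

# Fix #3: PATCH updates the About description shown on the repo page.
description = ("Public, continuously updated leaderboard for benchmarking Large "
               "Language Models on their factual consistency and hallucination "
               "rates when summarizing documents. Essential for LLM evaluation "
               "and reliability.")
requests.patch(API, headers=HEADERS, json={"description": description}).raise_for_status()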

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

The same questions are asked of every model, so answers and rankings can be compared across backends.

  • Recall: 0 / 2 (0% of queries surface vectara/hallucination-leaderboard)
  • Avg rank: n/a, since the repo was never recommended (lower is better; #1 = top recommendation)
  • Share of voice: 0% (of all named tools, what % are you?)
  • Top rival: Appen, recommended in 1 of 2 queries
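
These headline numbers reduce to simple arithmetic over the per-query rankings listed below. A minimal sketch of the computation, where the answers mapping is a hypothetical encoding of this report's two query results:

PYTHON (SKETCH)
# Recomputes recall, average rank, and share of voice from the two category
# queries below. The target repo appears in neither answer, matching the report.
TARGET = "vectara/hallucination-leaderboard"
answers = {
    "compare LLM accuracy and factual consistency": [
        "Appen", "Scale AI", "Surveymonkey", "Google Forms", "OpenAI's GPT-4",
        "Google's Gemini", "Ragas", "ROUGE", "BERTScore", "MoverScore",
        "SummaC", "QuestEval", "BLEURT"],
    "most reliable generative AI models": [
        "GPT-4", "Claude 3 Opus/Sonnet", "Google Gemini 1.5 Pro",
        "Llama 3", "Cohere Command R+"],
}

ranks = [names.index(TARGET) + 1 for names in answers.values() if TARGET in names]
recall = len(ranks) / len(answers)                     # 0 / 2 -> 0%
avg_rank = sum(ranks) / len(ranks) if ranks else None  # undefined when never named
mentions = sum(len(names) for names in answers.values())  # 13 + 5 = 18 named tools
share_of_voice = sum(n.count(TARGET) for n in answers.values()) / mentions  # 0 / 18

print(f"recall={recall:.0%}  avg_rank={avg_rank}  share_of_voice={share_of_voice:.0%}")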
COMPETITOR LEADERBOARD
  1. Appen · recommended 1×
  2. Scale AI · recommended 1×
  3. Surveymonkey · recommended 1×
  4. Google Forms · recommended 1×
  5. OpenAI's GPT-4 · recommended 1×
  • CATEGORY QUERY
    How to compare large language model accuracy and factual consistency for summarization tasks?
    you: not recommended
    AI recommended (in order):
    1. Appen
    2. Scale AI
    3. Surveymonkey
    4. Google Forms
    5. OpenAI's GPT-4
    6. Google's Gemini
    7. Ragas (explodinggradients/ragas)
    8. ROUGE
    9. BERTScore
    10. MoverScore
    11. SummaC (tingkai-zhang/SummaC)
    12. QuestEval (m-freitas/questeval)
    13. BLEURT (google-research/bleurt)

    AI recommended 13 alternatives but never named vectara/hallucination-leaderboard. This is the gap to close.

  • CATEGORY QUERY
    What are the most reliable generative AI models for minimizing factual errors in generated text?
    you: not recommended
    AI recommended (in order):
    1. GPT-4
    2. Claude 3 Opus/Sonnet
    3. Google Gemini 1.5 Pro
    4. Llama 3
    5. Cohere Command R+

    AI recommended 5 alternatives but never named vectara/hallucination-leaderboard. This is the gap to close.

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    pass

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly? AI answers can be confidently wrong; read each for accuracy: does it match your actual tech stack, audience, and differentiator?

  • Compared to common alternatives in this category, what is the core differentiator of vectara/hallucination-leaderboard?
fail
    AI did not name vectara/hallucination-leaderboard — likely talking about a different project

  • If a team adopts vectara/hallucination-leaderboard in production, what risks or prerequisites should they evaluate first?
    pass
    AI named vectara/hallucination-leaderboard explicitly

  • In one sentence, what problem does the repo vectara/hallucination-leaderboard solve, and who is the primary audience?
fail
    AI did not name vectara/hallucination-leaderboard — likely talking about a different project

Embed your GEO score

Drop this badge into the README of vectara/hallucination-leaderboard. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/vectara/hallucination-leaderboard.svg)](https://repogeo.com/en/r/vectara/hallucination-leaderboard)
HTML
<a href="https://repogeo.com/en/r/vectara/hallucination-leaderboard"><img src="https://repogeo.com/badge/vectara/hallucination-leaderboard.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

Lite scans of vectara/hallucination-leaderboard stay free; this card itemizes what Pro adds over Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)