RepoGEO

REPOGEO REPORT · LITE

Libr-AI/OpenFactVerification

Default branch main · commit 6e1ee9e5 · scanned 5/9/2026, 1:02:59 AM

GitHub: 1,143 stars · 63 forks

AI VISIBILITY SCORE
33 / 100
Critical
Category recall
0 / 2
Not recommended in any query
Rule findings
2 pass · 0 warn · 0 fail
Objective metadata checks
AI knows your name
2 / 3
Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface Libr-AI/OpenFactVerification, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • high · readme · #1
    Reposition the README H1 and overview to emphasize 'end-to-end pipeline'

    Why:

    CURRENT
    # Loki: An Open-source Tool for Fact Verification
    
    ## Overview
    Loki is our open-source solution designed to automate the process of verifying factuality. It provides a comprehensive pipeline for dissecting long texts into individual claims, assessing their worthiness for verification, generating queries for evidence search, crawling for evidence, and ultimately verifying the claims.
    COPY-PASTE FIX
    # Loki: An Open-source, End-to-End Pipeline for Automated Fact Verification
    
    ## Overview
    Loki is a comprehensive open-source solution designed to automate the entire process of verifying factuality, from dissecting long texts into individual claims to evidence search, crawling, and final verification. Unlike generic LLM frameworks, Loki offers a complete, integrated pipeline especially useful for journalists, researchers, and developers building dedicated fact-checking systems.
  • medium · topics · #2
    Add more specific topics to differentiate from generic AI tools (a scripted version of this fix follows the list)

    Why:

    CURRENT
    ai, factuality, hallucination
    COPY-PASTE FIX
    ai, factuality, hallucination, fact-checking, claim-verification, rag, llm-pipeline
  • medium · comparison · #3
    Add a 'Comparison with Alternatives' section to the README

    Why:

    COPY-PASTE FIX
    ## Comparison with Alternatives
    
    Loki stands out as an open-source, LLM-powered framework for end-to-end fact verification, emphasizing transparency and reproducibility through its use of Retrieval Augmented Generation (RAG). While tools like LangChain or LlamaIndex provide components for building LLM applications, Loki offers a complete, integrated pipeline specifically for automated fact-checking. This differentiates it from generic LLMs (e.g., GPT-4, PaLM 2) which require significant custom development to achieve similar verification capabilities, and from proprietary fact-checking services.
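
A scripted way to apply the topics fix (item #2) above, for maintainers who prefer the GitHub API to the repository settings UI. This is a minimal sketch, not part of the scan output: it assumes a personal access token with permission to administer the repo is available in the GITHUB_TOKEN environment variable, and it calls GitHub's documented topics endpoint (PUT /repos/{owner}/{repo}/topics); the topic list mirrors the copy-paste fix.

PYTHON (SET REPOSITORY TOPICS)
import os

import requests

# Topic list taken from the copy-paste fix in item #2.
TOPICS = [
    "ai", "factuality", "hallucination", "fact-checking",
    "claim-verification", "rag", "llm-pipeline",
]

resp = requests.put(
    "https://api.github.com/repos/Libr-AI/OpenFactVerification/topics",
    headers={
        # GITHUB_TOKEN is an assumption: any token allowed to edit the repo works.
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"names": TOPICS},  # this call replaces the repo's entire topic list
    timeout=30,
)
resp.raise_for_status()
print("Topics now:", resp.json()["names"])

The same change can also be made by hand in the repository's About settings; either way, the topics check is re-evaluated on the next scan.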

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Recall
0 / 2
0% of queries surface Libr-AI/OpenFactVerification
Avg rank
—
Not ranked in any query. Lower is better; #1 = top recommendation.
Share of voice
0%
Of all named tools, what % are you?
Top rival
OpenAI GPT-4 / GPT-3.5 Turbo
Recommended in 1 of 2 queries
COMPETITOR LEADERBOARD
  1. OpenAI GPT-4 / GPT-3.5 Turbo · recommended 1×
  2. langchain-ai/langchain · recommended 1×
  3. run-llama/llama_index · recommended 1×
  4. Google Cloud Vertex AI · recommended 1×
  5. PaLM 2 · recommended 1×
  • CATEGORY QUERY
    How to automate fact-checking and claim verification for large documents using AI?
    you: not recommended
    AI recommended (in order):
    1. OpenAI GPT-4 / GPT-3.5 Turbo
    2. LangChain (langchain-ai/langchain)
    3. LlamaIndex (run-llama/llama_index)
    4. Google Cloud Vertex AI
    5. PaLM 2
    6. Gemini
    7. Hugging Face Transformers (huggingface/transformers)
    8. BERT
    9. RoBERTa
    10. DeBERTa
    11. Elasticsearch (elastic/elasticsearch)
    12. OpenSearch (opensearch-project/OpenSearch)
    13. Weaviate (weaviate/weaviate)
    14. Pinecone
    15. ChromaDB (chroma-core/chroma)
    16. Full Fact API
    17. Snopes API

    AI recommended 17 alternatives but never named Libr-AI/OpenFactVerification. This is the gap to close.

  • CATEGORY QUERY
    What open-source solutions help detect AI hallucination and verify information credibility?
    you: not recommended
    AI recommended (in order):
    1. LlamaIndex
    2. Haystack
    3. OpenFacto
    4. Wikidata Query Service
    5. Sentence-Transformers
    6. spaCy
    7. Guardrails AI
    8. LMQL

    AI recommended 8 alternatives but never named Libr-AI/OpenFactVerification. This is the gap to close.

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    pass

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly? AI answers can be confidently wrong, so read each one for accuracy: does it match your actual tech stack, audience, and differentiator?

  • Compared to common alternatives in this category, what is the core differentiator of Libr-AI/OpenFactVerification?
    fail
    AI did not name Libr-AI/OpenFactVerification — likely talking about a different project

  • If a team adopts Libr-AI/OpenFactVerification in production, what risks or prerequisites should they evaluate first?
    pass
    AI named Libr-AI/OpenFactVerification explicitly

  • In one sentence, what problem does the repo Libr-AI/OpenFactVerification solve, and who is the primary audience?
    pass
    AI named Libr-AI/OpenFactVerification explicitly

Embed your GEO score

Drop this badge into the README of Libr-AI/OpenFactVerification. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/Libr-AI/OpenFactVerification.svg)](https://repogeo.com/en/r/Libr-AI/OpenFactVerification)
HTML
<a href="https://repogeo.com/en/r/Libr-AI/OpenFactVerification"><img src="https://repogeo.com/badge/Libr-AI/OpenFactVerification.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

Libr-AI/OpenFactVerification: Lite scans stay free; this card compares what Pro's deep scans include against the Lite limits.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 vs 2 in Lite
  • Prioritized action items: 8 vs 3 in Lite