RepoGEO

REPOGEO REPORT · LITE

DestinyLinker/MingLi-Bench

Default branch main · commit dd45b4d4 · scanned 5/7/2026, 8:02:58 PM

GitHub: 802 stars · 120 forks

AI VISIBILITY SCORE
35 / 100
Critical
Category recall
0 / 2
Not recommended in any query
Rule findings
1 pass · 1 warn · 0 fail
Objective metadata checks
AI knows your name
3 / 3
Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface DestinyLinker/MingLi-Bench, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

2 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · high · readme
    Update README H1 to explicitly include "LLM Benchmark"

    Why:
    The current H1 never uses the phrase "LLM benchmark", so the brand-free benchmark queries below have no category keyword to match against.
    CURRENT
    # Chinese Fortune Telling Bench
    COPY-PASTE FIX
    # MingLi-Bench: LLM Benchmark for Chinese Fortune Telling
  • #2 · medium · comparison
    Add a "Comparison to Alternatives" section in the README

    Why:
    Both category queries below surface rivals (LM Evaluation Harness, Ragas, Chinese Fortune Calendar) instead of this repo; an explicit comparison section gives AI a ready-made statement of how MingLi-Bench differs from the tools it currently recommends.
    COPY-PASTE FIX
    ## Comparison to Alternatives
    
    Unlike generic LLM evaluation frameworks (e.g., LM Evaluation Harness, Ragas), MingLi-Bench is specifically designed for the nuanced domain of Chinese traditional fortune telling. While resources like Chinese Fortune Calendar provide information on divination, MingLi-Bench offers a structured, multiple-choice benchmark dataset and evaluation framework for assessing LLM accuracy in Bazi and Ziwei Doushu.

Category GEO backends resolved for this scan: google/gemini-2.0-flash-001, deepseek/deepseek-chat

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.0-flash-001. Did AI recommend you, or someone else?

Every model receives the same questions, so answers and rankings are directly comparable.

Recall
0 / 2
0% of queries surface DestinyLinker/MingLi-Bench
Avg rank
n/a (never surfaced)
Lower is better. #1 = top recommendation.
Share of voice
0%
Of all named tools, what % are you?
Top rival
LM Evaluation Harness
Recommended in 1 of 2 queries
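
For clarity, here is a minimal sketch of how the three numbers above can be derived from per-query recommendation lists. It reconstructs this report's data in Python; the formulas are assumptions about how RepoGEO computes these metrics, not its actual code.

PYTHON (SKETCH)
REPO = "DestinyLinker/MingLi-Bench"

# Ordered recommendation lists returned for the two brand-free queries below.
queries = [
    ["LM Evaluation Harness", "GPTScore", "Ragas", "LangChain Evaluation",
     "ATE (Adversarial Testing Environment)", "Amazon Mechanical Turk",
     "Toloka", "spaCy", "NLTK"],
    ["Chinese Fortune Calendar", "Zi Wei Dou Shu", "Ming Li Xue",
     "Tian Yi Gui Ren", "JSTOR", "ProQuest", "Google Scholar",
     "Kaggle", "Data.gov"],
]

hits = [q for q in queries if REPO in q]
recall = f"{len(hits)} / {len(queries)}"                  # "0 / 2"

# Average 1-based rank, over only the queries where the repo appeared.
ranks = [q.index(REPO) + 1 for q in hits]
avg_rank = sum(ranks) / len(ranks) if ranks else None     # None: never ranked

# Share of voice: this repo's mentions over all tool mentions.
share_of_voice = sum(q.count(REPO) for q in queries) / sum(len(q) for q in queries)

print(recall, avg_rank, f"{share_of_voice:.0%}")          # 0 / 2 None 0%
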
COMPETITOR LEADERBOARD
  1. LM Evaluation Harness · recommended 1×
  2. GPTScore · recommended 1×
  3. Ragas · recommended 1×
  4. LangChain Evaluation · recommended 1×
  5. ATE (Adversarial Testing Environment) · recommended 1×
  • CATEGORY QUERY
    How to benchmark large language models on traditional Chinese divination practices like Bazi?
    you: not recommended
    AI recommended (in order):
    1. LM Evaluation Harness
    2. GPTScore
    3. Ragas
    4. LangChain Evaluation
    5. ATE (Adversarial Testing Environment)
    6. Amazon Mechanical Turk
    7. Toloka
    8. spaCy
    9. NLTK

    AI recommended 9 alternatives but never named DestinyLinker/MingLi-Bench. This is the gap to close.

  • CATEGORY QUERY
    Where can I find a dataset to test AI accuracy in Chinese astrological predictions?
    you: not recommended
    AI recommended (in order):
    1. Chinese Fortune Calendar
    2. Zi Wei Dou Shu
    3. Ming Li Xue
    4. Tian Yi Gui Ren
    5. JSTOR
    6. ProQuest
    7. Google Scholar
    8. Kaggle
    9. Data.gov

    AI recommended 9 alternatives but never named DestinyLinker/MingLi-Bench. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

    Suggestion:
    The usual gaps behind this check are the repository description, topics, and homepage URL; fill them in via the GitHub "About" panel. A sketch for re-checking these fields locally follows this list.
  • README presence
    pass
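
Below is a minimal sketch for re-checking metadata fields locally via the public GitHub REST API. The three audited fields (description, topics, homepage) are an assumption about what the Metadata completeness rule covers, not RepoGEO's actual rule set.

PYTHON (SKETCH)
# Hypothetical local re-check; only the GitHub /repos endpoint is real.
import json
import urllib.request

def check_metadata(owner: str, repo: str) -> dict:
    """Fetch public repo metadata and flag fields that are empty."""
    url = f"https://api.github.com/repos/{owner}/{repo}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return {
        "description": bool(data.get("description")),
        "topics": bool(data.get("topics")),
        "homepage": bool(data.get("homepage")),
    }

for field, ok in check_metadata("DestinyLinker", "MingLi-Bench").items():
    print(f"{field}: {'pass' if ok else 'warn'}")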

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of DestinyLinker/MingLi-Bench?
    pass
    AI named DestinyLinker/MingLi-Bench explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts DestinyLinker/MingLi-Bench in production, what risks or prerequisites should they evaluate first?
    pass
    AI named DestinyLinker/MingLi-Bench explicitly

  • In one sentence, what problem does the repo DestinyLinker/MingLi-Bench solve, and who is the primary audience?
    pass
    AI named DestinyLinker/MingLi-Bench explicitly

Embed your GEO score

Drop this badge into the README of DestinyLinker/MingLi-Bench. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/DestinyLinker/MingLi-Bench.svg)](https://repogeo.com/en/r/DestinyLinker/MingLi-Bench)
HTML
<a href="https://repogeo.com/en/r/DestinyLinker/MingLi-Bench"><img src="https://repogeo.com/badge/DestinyLinker/MingLi-Bench.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

DestinyLinker/MingLi-Bench — Lite scans stay free; this card compares Pro's deep-scan limits with Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)