
REPOGEO REPORT · LITE

microsoftarchive/promptbench

Default branch main · commit fcda538b · scanned 5/16/2026, 2:56:31 PM

GitHub: 2,803 stars · 220 forks

AI VISIBILITY SCORE: 33 / 100 (Critical)
Category recall: 0 / 2 (not recommended in any query)
Rule findings (objective metadata checks): 2 pass · 0 warn · 0 fail
AI knows your name: 2 / 3 direct prompts named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface microsoftarchive/promptbench, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

2 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · readme · priority: high
    Reposition the README's opening statement to highlight specific value

    CURRENT
    <p align="center">
        <strong>PromptBench</strong>: A Unified Library for Evaluating and Understanding Large Language Models.
        
        <br />
    COPY-PASTE FIX
    <p align="center">
        <strong>PromptBench</strong>: A unified, research-oriented framework for systematically evaluating Large Language Model (LLM) robustness, fairness, and safety, particularly through adversarial prompting and comprehensive benchmarking. It provides a single library to assess various LLM providers and local models.
        
        <br />
  • #2 · topics · priority: medium
    Refine repository topics for better specificity and coverage (see the API sketch after this list)

    CURRENT
    adversarial-attacks, benchmark, chatgpt, evaluation, large-language-models, prompt, prompt-engineering, robustness
    COPY-PASTE FIX
    llm-evaluation, llm-benchmarking, llm-robustness, adversarial-prompts, prompt-security, llm-safety, unified-framework, large-language-models, prompt-engineering, adversarial-attacks, benchmark, chatgpt, evaluation, prompt, robustness
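
If you prefer to apply the new topic set programmatically rather than through the repository settings UI, a minimal Python sketch against GitHub's repository-topics REST endpoint could look like the one below. The GITHUB_TOKEN environment variable, the requests dependency, and write access to the repo are assumptions; note that the endpoint replaces the full topic list in a single call.

PYTHON (ILLUSTRATIVE SKETCH)
import os
import requests

REPO = "microsoftarchive/promptbench"
# Suggested topic set from action item #2 above.
TOPICS = [
    "llm-evaluation", "llm-benchmarking", "llm-robustness", "adversarial-prompts",
    "prompt-security", "llm-safety", "unified-framework", "large-language-models",
    "prompt-engineering", "adversarial-attacks", "benchmark", "chatgpt",
    "evaluation", "prompt", "robustness",
]

# PUT /repos/{owner}/{repo}/topics replaces the repository's full topic list.
resp = requests.put(
    f"https://api.github.com/repos/{REPO}/topics",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # assumed token with write access
        "Accept": "application/vnd.github+json",
    },
    json={"names": TOPICS},
    timeout=30,
)
resp.raise_for_status()
print("Topics now:", ", ".join(sorted(resp.json()["names"])))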

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

The same questions are asked of every configured model so that answers and rankings can be compared across backends.

Recall: 0 / 2 (0% of queries surface microsoftarchive/promptbench)
Avg rank: n/a (lower is better; #1 = top recommendation)
Share of voice: 0% (of all named tools, what % are you?)
Top rival: LangChain (recommended in 2 of 2 queries)
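
For reference, these three headline metrics reduce to simple arithmetic over the per-query recommendation lists. A minimal Python sketch, using the two ranked lists from this report as input (variable names are illustrative, not part of RepoGEO):

PYTHON (ILLUSTRATIVE SKETCH)
REPO = "microsoftarchive/promptbench"

# One ranked recommendation list per brand-free category query (taken from this report).
query_results = [
    ["EleutherAI/lm-evaluation-harness", "Open LLM Leaderboard (Hugging Face)",
     "LangChain", "DeepEval", "LlamaIndex", "MLCommons MLPerf Inference"],
    ["LangChain", "DeepEval", "Ragas", "Phoenix", "Giskard", "PromptTools"],
]

ranks = [r.index(REPO) + 1 for r in query_results if REPO in r]   # 1-based rank per hit
recall = len(ranks) / len(query_results)                          # 0 / 2 -> 0.0
avg_rank = sum(ranks) / len(ranks) if ranks else None             # None: never recommended
mentions = [name for r in query_results for name in r]
share_of_voice = mentions.count(REPO) / len(mentions)             # 0 of 12 names -> 0.0

print(f"recall={recall:.0%} avg_rank={avg_rank} share_of_voice={share_of_voice:.0%}")
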
COMPETITOR LEADERBOARD
  1. LangChain · recommended 2×
  2. DeepEval · recommended 2×
  3. EleutherAI/lm-evaluation-harness · recommended 1×
  4. Open LLM Leaderboard (Hugging Face) · recommended 1×
  5. LlamaIndex · recommended 1×
  • CATEGORY QUERY
    Looking for a framework to benchmark and compare large language model performance.
    you: not recommended
    AI recommended (in order):
    1. EleutherAI/lm-evaluation-harness
    2. Open LLM Leaderboard (Hugging Face)
    3. LangChain
    4. DeepEval
    5. LlamaIndex
    6. MLCommons MLPerf Inference

    AI recommended 6 alternatives but never named microsoftarchive/promptbench. This is the gap to close.

  • CATEGORY QUERY
    Need a library to evaluate prompt engineering strategies and LLM robustness.
    you: not recommended
    AI recommended (in order):
    1. LangChain
    2. DeepEval
    3. Ragas
    4. Phoenix
    5. Giskard
    6. PromptTools

    AI recommended 6 alternatives but never named microsoftarchive/promptbench. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    pass

  • README presence
    pass
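
For context, audits like these boil down to verifying that basic repository metadata is populated. A rough Python sketch of comparable checks against GitHub's public REST API; unauthenticated requests and the requests dependency are assumptions, and the report's actual rules may differ.

PYTHON (ILLUSTRATIVE SKETCH)
import requests

REPO = "microsoftarchive/promptbench"
BASE = f"https://api.github.com/repos/{REPO}"
HEADERS = {"Accept": "application/vnd.github+json"}

meta = requests.get(BASE, headers=HEADERS, timeout=30).json()
readme = requests.get(f"{BASE}/readme", headers=HEADERS, timeout=30)

checks = {
    "description set": bool(meta.get("description")),
    "topics set": bool(meta.get("topics")),
    "homepage set": bool(meta.get("homepage")),
    "README present": readme.status_code == 200,
}
for name, ok in checks.items():
    print(f"{'pass' if ok else 'fail'}: {name}")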

Self-mention check

Does AI even know your repo exists when asked about it directly? AI answers can be confidently wrong; read each one for accuracy: does it match your actual tech stack, audience, and differentiator?

  • Compared to common alternatives in this category, what is the core differentiator of microsoftarchive/promptbench?
    fail
    AI did not name microsoftarchive/promptbench — likely talking about a different project


  • If a team adopts microsoftarchive/promptbench in production, what risks or prerequisites should they evaluate first?
    pass
    AI named microsoftarchive/promptbench explicitly


  • In one sentence, what problem does the repo microsoftarchive/promptbench solve, and who is the primary audience?
    pass
    AI named microsoftarchive/promptbench explicitly


Embed your GEO score

Drop this badge into the README of microsoftarchive/promptbench. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/microsoftarchive/promptbench.svg)](https://repogeo.com/en/r/microsoftarchive/promptbench)
HTML
<a href="https://repogeo.com/en/r/microsoftarchive/promptbench"><img src="https://repogeo.com/badge/microsoftarchive/promptbench.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

Lite scans stay free for microsoftarchive/promptbench; the items below compare Pro's deep-scan limits with Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)