RepoGEO

REPOGEO REPORT · LITE

openai/human-eval

Default branch master · commit 6d43fb98 · scanned 5/15/2026, 5:02:39 PM

GitHub: 3,225 stars · 445 forks

AI VISIBILITY SCORE
28 / 100 · Critical

Category recall: 0 / 2 · not recommended in any query
Rule findings: 1 pass · 1 warn · 0 fail (objective metadata checks)
AI knows your name: 2 / 3 · direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface openai/human-eval, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

2 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • HIGH · readme · #1
    Reposition the README's opening to clearly state its purpose as an LLM code-generation evaluation benchmark

    Why: the current opening describes a generic "evaluation harness" and never says this is a benchmark for LLM code generation, so the brand-free category queries below have nothing to match on.

    CURRENT
    # HumanEval: Hand-Written Evaluation Set 
    
    This is an evaluation harness for the HumanEval problem solving dataset described in the paper "Evaluating Large Language Models Trained on Code".
    COPY-PASTE FIX
    # HumanEval: A Benchmark for Evaluating Large Language Models on Code Generation
    
    This repository provides the HumanEval dataset and an evaluation harness specifically designed to benchmark the code generation capabilities of large language models (LLMs). It offers a standardized, hand-written set of programming problems to rigorously assess how well LLMs can synthesize correct and functional code from natural language prompts, distinguishing it from general code quality tools or competitive programming platforms.
  • MEDIUM · homepage · #2
    Add a homepage link to the associated research paper

    Why: a homepage pointing at the paper gives AI engines an authoritative source that ties the repo to its research context.

    COPY-PASTE FIX
    https://arxiv.org/abs/2107.03374
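
For reference, the harness the repositioned README describes follows a small documented workflow: read the problems, write model completions as JSONL, then score them with the bundled command. A minimal sketch of that loop, where generate_one_completion is a hypothetical stand-in for your own model call:

    from human_eval.data import read_problems, write_jsonl

    def generate_one_completion(prompt: str) -> str:
        # Hypothetical stand-in: return the code that should complete the
        # function signature contained in `prompt`.
        raise NotImplementedError

    problems = read_problems()  # the hand-written HumanEval tasks
    samples = [
        dict(task_id=tid, completion=generate_one_completion(problems[tid]["prompt"]))
        for tid in problems
    ]
    write_jsonl("samples.jsonl", samples)
    # Then score functional correctness from the shell:
    #   evaluate_functional_correctness samples.jsonl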

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

Recall: 0 / 2 · 0% of queries surface openai/human-eval
Avg rank: n/a (never recommended) · lower is better; #1 = top recommendation
Share of voice: 0% · of all named tools, what % are you?
Top rival: SonarQube · recommended in 1 of 2 queries
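
These numbers are simple counting over the per-query recommendation lists. A minimal sketch of the arithmetic, assuming (our reading of the captions) that share of voice is your mentions divided by all tool mentions:

    def visibility_stats(results: list[list[str]], repo: str):
        # results: one ordered list of recommended tool names per query.
        ranks = [r.index(repo) + 1 for r in results if repo in r]  # 1-based
        recall = len(ranks) / len(results)
        avg_rank = sum(ranks) / len(ranks) if ranks else None      # None: never named
        total = sum(len(r) for r in results)
        share = sum(r.count(repo) for r in results) / total
        return recall, avg_rank, share

    # This scan: 14 + 9 tools named across two queries, zero of them you,
    # hence recall 0/2, no average rank, and 0% share of voice.
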
COMPETITOR LEADERBOARD
  1. SonarQube · recommended 1×
  2. ESLint · recommended 1×
  3. Pylint · recommended 1×
  4. Checkstyle · recommended 1×
  5. CodeClimate · recommended 1×
  • CATEGORY QUERY
    How can I rigorously benchmark the code quality from large language models?
    you: not recommended
    AI recommended (in order):
    1. SonarQube
    2. ESLint
    3. Pylint
    4. Checkstyle
    5. CodeClimate
    6. Radon
    7. GMetrics
    8. Lizard
    9. JaCoCo
    10. Coverage.py
    11. Istanbul
    12. Snyk
    13. OWASP ZAP
    14. Bandit

    AI recommended 14 alternatives but never named openai/human-eval. This is the gap to close.

  • CATEGORY QUERY
    What tools are available for creating programming problem datasets to test AI code generation?
    you: not recommended
    AI recommended (in order):
    1. HackerRank for Work
    2. Codeforces
    3. LeetCode
    4. Google Code Jam/Kick Start Infrastructure
    5. GitHub
    6. GCC
    7. Clang
    8. Python interpreter
    9. Sphere Online Judge (SPOJ)

    AI recommended 9 alternatives but never named openai/human-eval. This is the gap to close.

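On the first query above, "rigorously benchmark" is precisely what this repo's pass@k metric formalizes: functional correctness over n sampled completions, not static code quality. The unbiased estimator from the paper "Evaluating Large Language Models Trained on Code", sketched with numpy:

    import numpy as np

    def pass_at_k(n: int, c: int, k: int) -> float:
        # 1 - C(n - c, k) / C(n, k), computed as a stable running product:
        # the chance that at least one of k draws from n samples is correct,
        # given c correct samples.
        if n - c < k:
            return 1.0
        return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

    # e.g. 200 samples with 20 correct: pass@1 = 0.10, pass@10 ≈ 0.66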

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

    Suggestion: complete the repository's GitHub metadata; at minimum set the homepage field (the action plan above proposes the paper URL for it) so engines have a structured signal to index.

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of openai/human-eval?
    fail
    AI did not name openai/human-eval — likely talking about a different project

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts openai/human-eval in production, what risks or prerequisites should they evaluate first?
    pass
    AI named openai/human-eval explicitly

  • In one sentence, what problem does the repo openai/human-eval solve, and who is the primary audience?
    pass
    AI named openai/human-eval explicitly
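
Under the hood, each check reduces to a name-match test over the model's answer. A minimal sketch, where ask_model is a hypothetical stand-in for the chat call the scan performs:

    import re

    REPO = "openai/human-eval"

    def names_repo(answer: str, repo: str = REPO) -> bool:
        # Pass when the answer names the repo: full slug, bare name,
        # or the squashed form "HumanEval".
        name = repo.split("/", 1)[1]  # "human-eval"
        variants = (repo, name, name.replace("-", ""))
        pattern = "|".join(map(re.escape, variants))
        return re.search(pattern, answer, re.IGNORECASE) is not None

    prompt = f"In one sentence, what problem does the repo {REPO} solve?"
    # result = names_repo(ask_model(prompt))   # ask_model: your own wrapper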

Embed your GEO score

Drop this badge into the README of openai/human-eval. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/openai/human-eval.svg)](https://repogeo.com/en/r/openai/human-eval)
HTML
<a href="https://repogeo.com/en/r/openai/human-eval"><img src="https://repogeo.com/badge/openai/human-eval.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

openai/human-eval · Lite scans stay free; this card compares Pro's deep-scan limits with Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)