RepoGEO

REPOGEO REPORT · LITE

stepjam/RLBench

Default branch master · commit 02720bba · scanned 5/10/2026, 3:27:03 PM

GitHub: 1,765 stars · 314 forks

AI VISIBILITY SCORE
35 / 100 · Critical

Category recall: 0 / 2 · Not recommended in any query
Rule findings: 1 pass · 1 warn · 0 fail · Objective metadata checks
AI knows your name: 3 / 3 · Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface stepjam/RLBench, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · high · readme
    Reposition README opening to clarify its role as a benchmark built on simulators

    CURRENT
    **RLBench** is an ambitious large-scale benchmark and learning environment designed to facilitate research in a number of vision-guided manipulation research areas, including: reinforcement learning, imitation learning, multi-task learning, geometric computer vision, and in particular, few-shot learning.
    COPY-PASTE FIX
    **RLBench** is an ambitious large-scale benchmark and learning environment for robot learning, providing a standardized suite of vision-guided manipulation tasks. Unlike general-purpose simulation platforms, RLBench focuses on offering a ready-to-use environment for reinforcement learning, imitation learning, multi-task learning, and few-shot learning research.
  • #2 · high · topics
    Add relevant topics to improve categorization (a scripted way to apply them follows this action plan)

    COPY-PASTE FIX
    robot-learning, reinforcement-learning, imitation-learning, multi-task-learning, few-shot-learning, robotics, benchmark, simulation-environment, computer-vision, manipulation
  • #3 · medium · readme
    Add a section to the README clarifying the existing license

    COPY-PASTE FIX
    Add a new section to the README, for example:
    
    ## License
    RLBench uses a custom license. Please refer to the [LICENSE](LICENSE) file for full details on its terms and conditions.
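
A minimal sketch for shipping fix #2 by script: it sets the suggested topics through the GitHub REST API (PUT /repos/{owner}/{repo}/topics). The requests library and a GITHUB_TOKEN environment variable holding a token with repo scope are assumptions here, not part of RepoGEO's tooling.

PYTHON (ILLUSTRATIVE)
# Apply the suggested topics via the GitHub REST API.
# Assumes a personal access token with repo scope in GITHUB_TOKEN.
import os
import requests

TOPICS = [
    "robot-learning", "reinforcement-learning", "imitation-learning",
    "multi-task-learning", "few-shot-learning", "robotics", "benchmark",
    "simulation-environment", "computer-vision", "manipulation",
]

resp = requests.put(
    "https://api.github.com/repos/stepjam/RLBench/topics",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"names": TOPICS},
    timeout=30,
)
resp.raise_for_status()
print("Topics now set to:", resp.json()["names"])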

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

The same questions are asked of every model, so you can compare answers and rankings across models.

Recall: 0 / 2 · 0% of queries surface stepjam/RLBench
Avg rank: none (not recommended in any query) · Lower is better; #1 = top recommendation.
Share of voice: 0% · Of all named tools, what % are you?
Top rival: Gazebo · Recommended in 2 of 2 queries
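
To make these metrics concrete, here is one way recall, share of voice, and average rank can be computed from the two recommendation lists below. The formulas are a plausible reconstruction, not RepoGEO's actual scoring code.

PYTHON (ILLUSTRATIVE)
# Recommendation lists copied from the two category queries in this report.
query_results = [
    ["NVIDIA Isaac Sim", "Unity", "Gazebo", "CoppeliaSim", "MuJoCo", "Webots"],
    ["Isaac Sim", "Gazebo", "PyBullet", "MuJoCo", "CoppeliaSim", "Webots"],
]
target = "RLBench"

# Recall: share of queries whose recommendations name the target at all.
hits = sum(target in recs for recs in query_results)
recall = hits / len(query_results)

# Share of voice: target mentions as a fraction of all tool mentions.
total_mentions = sum(len(recs) for recs in query_results)
share_of_voice = sum(recs.count(target) for recs in query_results) / total_mentions

# Avg rank: mean 1-based position across queries where the target appears.
ranks = [recs.index(target) + 1 for recs in query_results if target in recs]
avg_rank = sum(ranks) / len(ranks) if ranks else None

print(f"recall={recall:.0%}, share_of_voice={share_of_voice:.0%}, avg_rank={avg_rank}")
# Prints: recall=0%, share_of_voice=0%, avg_rank=None
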
COMPETITOR LEADERBOARD
  1. Gazebo · recommended 2×
  2. CoppeliaSim · recommended 2×
  3. MuJoCo · recommended 2×
  4. Webots · recommended 2×
  5. NVIDIA Isaac Sim · recommended 1×
  • CATEGORY QUERY
    What are good large-scale simulation environments for robotic manipulation research and benchmarking?
    you: not recommended
    AI recommended (in order):
    1. NVIDIA Isaac Sim
    2. Unity
    3. Gazebo
    4. CoppeliaSim
    5. MuJoCo
    6. Webots

    AI recommended 6 alternatives but never named stepjam/RLBench. This is the gap to close.

  • CATEGORY QUERY
    Seeking a flexible learning environment to develop and test vision-guided robot control policies.
    you: not recommended
    AI recommended (in order):
    1. Isaac Sim
    2. Gazebo
    3. PyBullet
    4. MuJoCo
    5. CoppeliaSim
    6. Webots

    AI recommended 6 alternatives but never named stepjam/RLBench. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

  • README presence
    pass
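
Audits like these reduce to checking a handful of repository fields. As an illustration only (RepoGEO's actual rules are not published in this report), a completeness check against the public GitHub API might look like:

PYTHON (ILLUSTRATIVE)
# Rough sketch of a rule-based metadata audit using the public GitHub API.
import requests

repo = requests.get(
    "https://api.github.com/repos/stepjam/RLBench", timeout=30
).json()

checks = {
    "description set": bool(repo.get("description")),
    "homepage set": bool(repo.get("homepage")),
    "topics present": bool(repo.get("topics")),
    "license detected": repo.get("license") is not None,
}
for name, ok in checks.items():
    print(f"{'pass' if ok else 'warn'} · {name}")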

Self-mention check

Does AI even know your repo exists when asked about it directly? AI answers can be confidently wrong, so read each one for accuracy: does it match your actual tech stack, audience, and differentiator?

  • Compared to common alternatives in this category, what is the core differentiator of stepjam/RLBench?
    pass
    AI named stepjam/RLBench explicitly


  • If a team adopts stepjam/RLBench in production, what risks or prerequisites should they evaluate first?
    pass
    AI named stepjam/RLBench explicitly


  • In one sentence, what problem does the repo stepjam/RLBench solve, and who is the primary audience?
    pass
    AI named stepjam/RLBench explicitly


Embed your GEO score

Drop this badge into the README of stepjam/RLBench. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/stepjam/RLBench.svg)](https://repogeo.com/en/r/stepjam/RLBench)
HTML
<a href="https://repogeo.com/en/r/stepjam/RLBench"><img src="https://repogeo.com/badge/stepjam/RLBench.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

stepjam/RLBench — Lite scans stay free; this card compares Pro's deeper scan limits with Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)