REPOGEO REPORT · LITE
OpenDriveLab/UniVLA
Default branch main · commit 0ab9e9dd · scanned 5/16/2026, 12:13:25 PM
GitHub: 1,078 stars · 64 forks
How to read this report:
- Action plan: what to do next, as copy-pasteable changes prioritized by impact.
- Category visibility: the real GEO test. When a user asks an AI a brand-free question that should surface OpenDriveLab/UniVLA, does the AI actually recommend you, or your competitors?
- Objective checks: verify the metadata signals AI engines weight first.
- Self-mention check: detects whether AI even knows you exist by name.
Action plan — copy-paste fixes
3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.
- #1 · high · readme: Reposition the core value proposition in the README's opening
Why:
CURRENT:
# :earth_asia: UniVLA
<div id="top" align="center"> <p align="center"> </p> </div>
> #### :page_facing_up: Paper | :rocket: Demo Page (Coming Soon)
> :black_nib: Qingwen Bu, Y. Yang, J. Cai, S. Gao, G. Ren, M. Yao, P. Luo, H. Li
> :e-mail: Primary Contact: Qingwen Bu (buqingwen@opendrivelab.com)
### :fire: Highlights
- A recipe towards generalist policy by planning in a unified, embodiment-agnostic action space.
COPY-PASTE FIX:
# UniVLA: A Generalist Robot Policy for Embodiment-Agnostic Action Learning

UniVLA introduces a novel approach to developing generalist robot policies by planning in a unified, embodiment-agnostic action space. It extracts task-centric latent actions from cross-embodiment videos, achieving state-of-the-art results on multiple benchmarks.
- #2 · medium · topics: Expand GitHub topics with more specific keywords
Why:
CURRENT: robot-learning, vision-language-actions-models, vla
COPY-PASTE FIX: robot-learning, vision-language-actions-models, vla, generalist-robotics, embodied-ai, foundation-models, cross-embodiment-learning, robot-manipulation
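If you prefer to apply the topic change from the command line instead of the GitHub settings page, the GitHub CLI can do it in one step. A minimal sketch, assuming `gh` is installed and authenticated with admin access to OpenDriveLab/UniVLA:

```shell
# Add the suggested topics to the repository.
# --add-topic appends without removing the existing topics.
gh repo edit OpenDriveLab/UniVLA \
  --add-topic generalist-robotics \
  --add-topic embodied-ai \
  --add-topic foundation-models \
  --add-topic cross-embodiment-learning \
  --add-topic robot-manipulation

# Verify the resulting topic list
gh repo view OpenDriveLab/UniVLA --json repositoryTopics
```

Note that `--add-topic` is additive, so the existing robot-learning, vision-language-actions-models, and vla topics are preserved.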
- #3 · medium · comparison: Add a comparison section to the README
Why:
COPY-PASTE FIX:
## :balance_scale: Comparison with State-of-the-Art

UniVLA differentiates itself from other embodied AI models such as RT-1, RT-2, and RT-X by focusing on a unified, embodiment-agnostic action space and extracting task-centric latent actions from diverse cross-embodiment videos. This approach enables more compute-efficient training and superior generalization across various robotic platforms and tasks.
Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash
Category visibility — the real GEO test
Brand-free queries asked to google/gemini-2.5-flash. Did the AI recommend you, or someone else?
The same questions are asked of every model; switch tabs to compare answers and rankings.
- RT-1 · recommended 2×
- RT-2 · recommended 2×
- RT-X · recommended 2×
- Diffusion Policy · recommended 2×
- Eureka · recommended 1×
- CATEGORY QUERY: "How can I develop a generalist robot policy that works across various physical embodiments?"
You: not recommended. AI recommended (in order):
- RT-1
- RT-2
- RT-X
- Eureka
- Voyager
- Diffusion Policy
- ACT
- Isaac Gym
- MuJoCo
- RoboMimic
- Behavior Cloning
AI recommended 11 alternatives but never named OpenDriveLab/UniVLA. This is the gap to close.
- CATEGORY QUERY: "What are the best approaches for learning robot actions from diverse vision-language data?"
You: not recommended. AI recommended (in order):
- Open-X Embodied Foundation Models
- RT-X
- RT-1
- RT-2
- CLIP
- OpenAI CLIP
- Diffusion Policy
- ACT-Diffusion
- BC-Z
- ALOHA
- Perceiver IO
- Gato
- R3M
- VIP
AI recommended 14 alternatives but never named OpenDriveLab/UniVLA. This is the gap to close.
Objective checks
Rule-based audits of metadata signals AI engines weight most.
- Metadata completeness: pass
- README presence: pass
Self-mention check
Does AI even know your repo exists when asked about it directly?
- "Compared to common alternatives in this category, what is the core differentiator of OpenDriveLab/UniVLA?" · pass (AI named OpenDriveLab/UniVLA explicitly)
- "If a team adopts OpenDriveLab/UniVLA in production, what risks or prerequisites should they evaluate first?" · pass (AI named OpenDriveLab/UniVLA explicitly)
- "In one sentence, what problem does the repo OpenDriveLab/UniVLA solve, and who is the primary audience?" · pass (AI named OpenDriveLab/UniVLA explicitly)
AI answers can be confidently wrong. Read each one for accuracy: does it match your actual tech stack, audience, and differentiator?
Embed your GEO score
Drop this badge into the README of OpenDriveLab/UniVLA. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.
Markdown:
[![RepoGEO](https://repogeo.com/badge/OpenDriveLab/UniVLA.svg)](https://repogeo.com/en/r/OpenDriveLab/UniVLA)

HTML:
<a href="https://repogeo.com/en/r/OpenDriveLab/UniVLA"><img src="https://repogeo.com/badge/OpenDriveLab/UniVLA.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses
Lite scans of OpenDriveLab/UniVLA stay free; this card itemizes Pro's deep-scan limits versus Lite.
- Deep reports: 10 / month
- Brand-free category queries: 5 (vs 2 in Lite)
- Prioritized action items: 8 (vs 3 in Lite)