RepoGEO

REPOGEO REPORT · LITE

google-research/tapas

Default branch master · commit 569a3c31 · scanned 5/15/2026, 2:48:19 AM

GitHub: 1,203 stars · 216 forks

AI VISIBILITY SCORE
35 / 100 · Critical

Category recall: 0 / 2 · not recommended in any query
Rule findings: 1 pass · 1 warn · 0 fail (objective metadata checks)
AI knows your name: 3 / 3 · direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface google-research/tapas, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · readme · high priority
    Clarify the core problem TAPAS solves in the README's opening (a usage sketch follows this list)

    CURRENT
    # TAble PArSing (TAPAS)
    
    Code and checkpoints for training the transformer-based Table QA models introduced in the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](#how-to-cite-tapas).
    COPY-PASTE FIX
    # TAble PArSing (TAPAS)
    
    TAPAS provides end-to-end neural models for directly answering natural language questions over structured tables, without needing to generate SQL queries. This repository contains code and checkpoints for training transformer-based Table QA models introduced in the paper [TAPAS: Weakly Supervised Table Parsing via Pre-training](#how-to-cite-tapas).
  • #2 · homepage · medium priority
    Add a homepage URL to the repository metadata (an API sketch for fixes #2 and #3 follows this list)

    COPY-PASTE FIX
    https://ai.googleblog.com/2020/04/tapas-question-answering-over-tables.html
  • #3 · topics · low priority
    Add more specific topics to improve categorization

    CURRENT
    nlp-machine-learning, question-answering, table-parsing, tensorflow
    COPY-PASTE FIX
    nlp-machine-learning, question-answering, table-parsing, tensorflow, table-qa, natural-language-understanding, structured-data-qa
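
To sanity-check the claim in fix #1 (answering questions over a table directly, with no SQL generation), here is a minimal usage sketch. It relies on the Hugging Face port of TAPAS rather than this repo's own TensorFlow pipeline; the checkpoint name and the toy table are illustrative assumptions, and depending on your transformers version the PyTorch TAPAS model may also require torch-scatter.

PYTHON (USAGE SKETCH FOR FIX #1)
import pandas as pd
from transformers import TapasForQuestionAnswering, TapasTokenizer

# Assumed checkpoint: TAPAS base fine-tuned on WikiTableQuestions.
model_name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

# TAPAS expects every table cell as a string.
table = pd.DataFrame({
    "City": ["Paris", "Tokyo", "Berlin"],
    "Population": ["2100000", "14000000", "3700000"],
})
queries = ["Which city has the largest population?"]

inputs = tokenizer(table=table, queries=queries, padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Turn token logits back into table-cell coordinates (plus an aggregation
# operator index for checkpoints trained with aggregation, like WTQ).
coords, agg = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
for answer_coordinates in coords:
    print([table.iat[row, col] for row, col in answer_coordinates])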
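For fixes #2 and #3, a sketch that applies the same metadata changes through the GitHub API with PyGithub instead of the repository settings page. The GITHUB_TOKEN environment variable is an assumption; any personal access token with repo scope on the repository works.

PYTHON (METADATA SKETCH FOR FIXES #2 AND #3)
import os
from github import Auth, Github

# Assumes a personal access token with repo scope in GITHUB_TOKEN.
gh = Github(auth=Auth.Token(os.environ["GITHUB_TOKEN"]))
repo = gh.get_repo("google-research/tapas")

# Fix #2: set the homepage URL shown in the repository metadata.
repo.edit(homepage="https://ai.googleblog.com/2020/04/tapas-question-answering-over-tables.html")

# Fix #3: replace the topic list with the expanded set.
repo.replace_topics([
    "nlp-machine-learning", "question-answering", "table-parsing", "tensorflow",
    "table-qa", "natural-language-understanding", "structured-data-qa",
])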

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

Recall: 0 / 2 · 0% of queries surface google-research/tapas
Avg rank: n/a (never surfaced) · lower is better; #1 = top recommendation
Share of voice: 0% · of all named tools, what % are you?
Top rival: huggingface/transformers · recommended in 1 of 2 queries
COMPETITOR LEADERBOARD
  1. huggingface/transformers · recommended 1×
  2. Salesforce/codet5p-220m-bpe-text-to-sql · recommended 1×
  3. microsoft/tapex-large-finetuned-wikisql · recommended 1×
  4. Salesforce/codet5p-770m-bpe-text-to-sql · recommended 1×
  5. RasaHQ/rasa · recommended 1×
  • CATEGORY QUERY
    How to build a system for answering natural language questions using structured table data?
    you: not recommended
    AI recommended (in order):
    1. Hugging Face Transformers (huggingface/transformers)
    2. Salesforce/codet5p-220m-bpe-text-to-sql
    3. microsoft/tapex-large-finetuned-wikisql
    4. Salesforce/codet5p-770m-bpe-text-to-sql
    5. Rasa (RasaHQ/rasa)
    6. LangChain (langchain-ai/langchain)
    7. GPT-3.5
    8. GPT-4
    9. Microsoft TAPEX
    10. PyTorch (pytorch/pytorch)
    11. TensorFlow (tensorflow/tensorflow)
    12. T5
    13. BART
    14. Apache Calcite (apache/calcite)

    AI recommended 14 alternatives but never named google-research/tapas. This is the gap to close.

  • CATEGORY QUERY
    What machine learning models are available for understanding and parsing text within tables?
    you: not recommended
    AI recommended (in order):
    1. LayoutLMv3
    2. LayoutLMv2
    3. LayoutXLM
    4. Donut
    5. Table Transformer
    6. Tesseract OCR
    7. Camelot
    8. Tabula

    AI recommended 8 alternatives but never named google-research/tapas. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of google-research/tapas?
    pass
    AI named google-research/tapas explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts google-research/tapas in production, what risks or prerequisites should they evaluate first?
    pass
    AI named google-research/tapas explicitly

  • In one sentence, what problem does the repo google-research/tapas solve, and who is the primary audience?
    pass
    AI named google-research/tapas explicitly

Embed your GEO score

Drop this badge into the README of google-research/tapas. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/google-research/tapas.svg)](https://repogeo.com/en/r/google-research/tapas)
HTML
<a href="https://repogeo.com/en/r/google-research/tapas"><img src="https://repogeo.com/badge/google-research/tapas.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

google-research/tapas · Lite scans stay free; this card compares the Pro deep-scan limits with Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)