RepoGEO

REPOGEO REPORT · LITE

poetiq-ai/poetiq-arc-agi-solver

Default branch main · commit a6947cff · scanned 5/15/2026, 10:38:21 PM

GitHub: 1,273 stars · 214 forks

AI VISIBILITY SCORE
28 / 100 · Critical

Category recall: 0 / 2 (not recommended in any query)
Rule findings: 1 pass · 1 warn · 0 fail (objective metadata checks)
AI knows your name: 2 / 3 direct prompts named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface poetiq-ai/poetiq-arc-agi-solver, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • #1 · HIGH · topics
    Add specific topics for ARC-AGI and AI benchmarks (a scripted way to apply this and fix #3 is sketched after this list)

    Why:

    COPY-PASTE FIX
    arc-agi, agi-solver, abstract-reasoning, ai-benchmark, state-of-the-art, llm-reasoning, reproduction-framework
  • #2 · HIGH · readme
    Reposition README opening to emphasize 'reproduction framework'

    Why:

    CURRENT
    # Poetiq: SOTA Reasoning on ARC-AGI
    
    [](https://opensource.org/licenses/MIT)
    [](https://www.python.org/downloads/)
    [](https://arcprize.org/)
    
    This repository allows reproduction of **Poetiq's** record-breaking submission to the ARC-AGI-1 and ARC-AGI-2 benchmarks.
    COPY-PASTE FIX
    # Poetiq ARC-AGI Solver: Reproduce SOTA Abstract Reasoning Results
    
    [](https://opensource.org/licenses/MIT)
    [](https://www.python.org/downloads/)
    [](https://arcprize.org/)
    
    This repository provides the official framework to reproduce Poetiq's record-breaking, state-of-the-art submissions to the ARC-AGI-1 and ARC-AGI-2 benchmarks.
  • #3 · MEDIUM · about
    Clarify the repository description to highlight its function as a solver framework

    Why:

    CURRENT
    This repository allows reproduction of Poetiq's record-breaking submission to the ARC-AGI-1 and ARC-AGI-2 benchmarks.
    COPY-PASTE FIX
    This repository provides the official solver framework to reproduce Poetiq's record-breaking, state-of-the-art submissions to the ARC-AGI-1 and ARC-AGI-2 benchmarks.
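
Fixes #1 and #3 can be applied by hand in the repository's About panel on GitHub, or scripted. The sketch below uses the public GitHub REST API (PUT /repos/{owner}/{repo}/topics and PATCH /repos/{owner}/{repo}); the GITHUB_TOKEN environment variable and the requests dependency are assumptions, not part of the generated action plan.

PYTHON (OPTIONAL SCRIPT)
import os

import requests  # third-party HTTP client, assumed to be installed

OWNER, REPO = "poetiq-ai", "poetiq-arc-agi-solver"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # token needs repo scope
    "Accept": "application/vnd.github+json",
}

# Fix #1: replace the repository topics with the suggested list.
topics = [
    "arc-agi", "agi-solver", "abstract-reasoning", "ai-benchmark",
    "state-of-the-art", "llm-reasoning", "reproduction-framework",
]
resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/topics",
    headers=HEADERS,
    json={"names": topics},
)
resp.raise_for_status()

# Fix #3: update the About description.
description = (
    "This repository provides the official solver framework to reproduce "
    "Poetiq's record-breaking, state-of-the-art submissions to the "
    "ARC-AGI-1 and ARC-AGI-2 benchmarks."
)
resp = requests.patch(
    f"https://api.github.com/repos/{OWNER}/{REPO}",
    headers=HEADERS,
    json={"description": description},
)
resp.raise_for_status()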

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

Recall: 0 / 2 · 0% of queries surface poetiq-ai/poetiq-arc-agi-solver
Avg rank: n/a (lower is better; #1 = top recommendation; no recommendations to rank)
Share of voice: 0% · of all named tools, what % are you?
Top rival: Perceiver IO / Perceiver · recommended in 1 of 2 queries
COMPETITOR LEADERBOARD
  1. Perceiver IO / Perceiver · recommended 1×
  2. PyTorch Geometric · recommended 1×
  3. DeepMind's Graph Nets library · recommended 1×
  4. Vision Transformers (ViT) · recommended 1×
  5. DeepMind's AlphaGeometry · recommended 1×
  • CATEGORY QUERY
    How can I reproduce state-of-the-art results for abstract reasoning challenges using AI models?
    you: not recommended
    AI recommended (in order):
    1. Perceiver IO / Perceiver
    2. PyTorch Geometric
    3. DeepMind's Graph Nets library
    4. Vision Transformers (ViT)
    5. DeepMind's AlphaGeometry
    6. DreamCoder
    7. AlphaZero

    AI recommended 7 alternatives but never named poetiq-ai/poetiq-arc-agi-solver. This is the gap to close.

  • CATEGORY QUERY
    What tools help evaluate and improve AI performance on general intelligence benchmarks?
    you: not recommended
    AI recommended (in order):
    1. EleutherAI's LM Evaluation Harness
    2. OpenAI Evals
    3. BigBench
    4. Hugging Face `evaluate` library
    5. HumanEval
    6. MMLU
    7. LangChain
    8. LlamaIndex

    AI recommended 8 alternatives but never named poetiq-ai/poetiq-arc-agi-solver. This is the gap to close.

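To make the category metrics above concrete, here is a minimal sketch that recomputes Recall and Share of voice from the two query results in this report. The aggregation formulas are assumptions about what RepoGEO measures, not its documented implementation.

PYTHON (METRIC SKETCH)
# Named tools returned for each brand-free category query above.
query_recommendations = {
    "reproduce SOTA abstract reasoning results": [
        "Perceiver IO / Perceiver", "PyTorch Geometric",
        "DeepMind's Graph Nets library", "Vision Transformers (ViT)",
        "DeepMind's AlphaGeometry", "DreamCoder", "AlphaZero",
    ],
    "evaluate AI on general intelligence benchmarks": [
        "EleutherAI's LM Evaluation Harness", "OpenAI Evals", "BigBench",
        "Hugging Face evaluate library", "HumanEval", "MMLU",
        "LangChain", "LlamaIndex",
    ],
}
repo = "poetiq-ai/poetiq-arc-agi-solver"

# Recall: fraction of queries whose recommendation list names the repo.
hits = sum(repo in names for names in query_recommendations.values())
print(f"Recall: {hits} / {len(query_recommendations)}")    # Recall: 0 / 2

# Share of voice: the repo's mentions as a share of all named tools.
total_named = sum(len(names) for names in query_recommendations.values())
print(f"Share of voice: {100 * hits / total_named:.0f}%")  # Share of voice: 0%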

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

    Suggestion:

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of poetiq-ai/poetiq-arc-agi-solver?
    pass
    AI named poetiq-ai/poetiq-arc-agi-solver explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts poetiq-ai/poetiq-arc-agi-solver in production, what risks or prerequisites should they evaluate first?
    pass
    AI named poetiq-ai/poetiq-arc-agi-solver explicitly

  • In one sentence, what problem does the repo poetiq-ai/poetiq-arc-agi-solver solve, and who is the primary audience?
fail
    AI did not name poetiq-ai/poetiq-arc-agi-solver — likely talking about a different project

Embed your GEO score

Drop this badge into the README of poetiq-ai/poetiq-arc-agi-solver. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/poetiq-ai/poetiq-arc-agi-solver.svg)](https://repogeo.com/en/r/poetiq-ai/poetiq-arc-agi-solver)
HTML
<a href="https://repogeo.com/en/r/poetiq-ai/poetiq-arc-agi-solver"><img src="https://repogeo.com/badge/poetiq-ai/poetiq-arc-agi-solver.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

poetiq-ai/poetiq-arc-agi-solver: Lite scans stay free; this card compares Pro's deeper scan limits with Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)