RepoGEO

REPOGEO REPORT · LITE

meridianlabs-ai/inspect_petri

Default branch main · commit 6d9b9e1d · scanned 5/10/2026, 3:41:34 AM

GitHub: 1,143 stars · 181 forks

AI VISIBILITY SCORE
35 / 100 (Critical)
Category recall: 0 / 2 (not recommended in any query)
Rule findings: 1 pass · 1 warn · 0 fail (objective metadata checks)
AI knows your name: 3 / 3 (direct prompts that named your repo)
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface meridianlabs-ai/inspect_petri, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash.

OVERALL DIRECTION
  • #1 · readme · priority: high
    Reposition README opening to clarify LLM focus and disambiguate 'Petri'

    CURRENT
    Welcome to Inspect Petri, an auditing agent that enables automated monitoring and interaction with language models to detect potential alignment issues, reward hacking, and other concerning behaviors.
    COPY-PASTE FIX
    Welcome to Inspect Petri, an auditing agent for Large Language Models (LLMs). This project is *not* related to traditional Petri nets for process modeling. Inspect Petri enables automated monitoring and interaction with LLMs to detect potential alignment issues, reward hacking, and other concerning behaviors.
  • #2 · topics · priority: high
    Add relevant topics to the repository (one way to apply them via the GitHub API is sketched after this list)

    COPY-PASTE FIX
    llm-alignment, llm-safety, ai-auditing, reward-hacking, adversarial-testing, language-models, machine-learning, python
  • #3 · readme · priority: medium
    Add a 'Comparison to Alternatives' section in the README (a minimal skeleton follows this list)

    COPY-PASTE FIX
    Add a new section, for example, '## Comparison to Alternatives' or '## Why Inspect Petri?' to highlight how Inspect Petri differentiates itself from tools like Giskard, Arize AI, or OpenAI Evals.
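
For action item #2, here is a minimal sketch of applying the topics programmatically via the GitHub REST API endpoint PUT /repos/{owner}/{repo}/topics, which replaces the repo's full topic set. It assumes the requests library and a personal access token with repo scope exported as GITHUB_TOKEN; the environment variable name is this sketch's convention, not a GitHub requirement.

PYTHON
import os

import requests

OWNER, REPO = "meridianlabs-ai", "inspect_petri"
# Assumed: a personal access token with repo scope, exported as GITHUB_TOKEN.
TOKEN = os.environ["GITHUB_TOKEN"]

# PUT /repos/{owner}/{repo}/topics replaces the entire topic set in one call,
# so include every topic you want to keep. Topic names must be lowercase.
resp = requests.put(
    f"https://api.github.com/repos/{OWNER}/{REPO}/topics",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"names": [
        "llm-alignment", "llm-safety", "ai-auditing", "reward-hacking",
        "adversarial-testing", "language-models", "machine-learning", "python",
    ]},
    timeout=30,
)
resp.raise_for_status()
print("Topics now:", resp.json()["names"])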
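
For action item #3, a minimal skeleton of the suggested section. Every cell is a placeholder: replace it with differentiators you have verified yourself, since nothing below is a claim about these tools.

MARKDOWN (README)
## Why Inspect Petri?

| Tool | Focus | How Inspect Petri differs |
| --- | --- | --- |
| GiskardAI/giskard | (their focus) | (your differentiator) |
| openai/evals | (their focus) | (your differentiator) |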

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?


Recall: 0 / 2 (0% of queries surface meridianlabs-ai/inspect_petri)
Avg rank: n/a (lower is better; #1 = top recommendation)
Share of voice: 0% (of all named tools, what % are you?)
Top rival: GiskardAI/giskard (recommended in 1 of 2 queries)
COMPETITOR LEADERBOARD
  1. GiskardAI/giskard · recommended 1×
  2. Arize AI · recommended 1×
  3. Fiddler AI · recommended 1×
  4. whylabs/whylogs · recommended 1×
  5. microsoft/responsible-ai-toolbox · recommended 1×
  • CATEGORY QUERY
    How can I automatically test and monitor large language models for ethical alignment issues?
    you: not recommended
    AI recommended (in order):
    1. Giskard (GiskardAI/giskard)
    2. Arize AI
    3. Fiddler AI
    4. whylogs (whylabs/whylogs)
    5. Microsoft Responsible AI Toolbox (microsoft/responsible-ai-toolbox)
    6. IBM AI Fairness 360 (AIF360) (IBM/AIF360)
    7. Hugging Face Evaluate (huggingface/evaluate)

    AI recommended 7 alternatives but never named meridianlabs-ai/inspect_petri. This is the gap to close.

  • CATEGORY QUERY
    Tools for simulating adversarial scenarios to evaluate LLM safety and detect reward hacking?
    you: not recommended
    AI recommended (in order):
    1. Garak
    2. Adversarial GLUE (AdvGLUE)
    3. OpenAI Evals
    4. Anthropic's Red Teaming efforts
    5. Hugging Face Evaluate library
    6. OpenAI API
    7. Anthropic API
    8. Google Gemini API

    AI recommended 8 alternatives but never named meridianlabs-ai/inspect_petri. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of meridianlabs-ai/inspect_petri?
    pass
    AI named meridianlabs-ai/inspect_petri explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts meridianlabs-ai/inspect_petri in production, what risks or prerequisites should they evaluate first?
    pass
    AI named meridianlabs-ai/inspect_petri explicitly

  • In one sentence, what problem does the repo meridianlabs-ai/inspect_petri solve, and who is the primary audience?
    pass
    AI named meridianlabs-ai/inspect_petri explicitly

Embed your GEO score

Drop this badge into the README of meridianlabs-ai/inspect_petri. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/meridianlabs-ai/inspect_petri.svg)](https://repogeo.com/en/r/meridianlabs-ai/inspect_petri)
HTML
<a href="https://repogeo.com/en/r/meridianlabs-ai/inspect_petri"><img src="https://repogeo.com/badge/meridianlabs-ai/inspect_petri.svg" alt="RepoGEO" /></a>
PRO

Subscribe to Pro for deep diagnoses

Lite scans of meridianlabs-ai/inspect_petri stay free; the limits below compare Pro against Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)