RepoGEO

REPOGEO REPORT · LITE

thunlp/PromptPapers

Default branch main · commit 1ae4bd1e · scanned 5/14/2026, 5:58:02 AM

GitHub: 4,301 stars · 390 forks

AI VISIBILITY SCORE
35 / 100
Critical
Category recall
0 / 2
Not recommended in any query
Rule findings
1 pass · 1 warn · 0 fail
Objective metadata checks
AI knows your name
3 / 3
Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface thunlp/PromptPapers, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • high · readme · #1
    Clarify the repo's primary purpose in the README's opening paragraph

    Why:

    CURRENT
    We have released an open-source prompt-learning toolkit, check out **OpenPrompt!**
    
    We strongly encourage the researchers that want to promote their fantastic work to the community to make **pull request** to update their paper's information! (See [contributing details](#contribution))
    
    Effective adaptation of pre-trained models could be probed from different perspectives. Prompt-learning more focuses on the organization of training procedure and the unification of different tasks, while delta tuning (parameter efficient methods) provides another direction from the specific optimization of pre-trained models. Check DeltaPapers!
    COPY-PASTE FIX
    This repository is a curated, must-read collection of papers on prompt-based tuning for pre-trained language models, maintained by Ning Ding and Shengding Hu. We strongly encourage researchers to make pull requests to update paper information! (See [contributing details](#contribution))
    
    We also maintain an open-source prompt-learning toolkit; check out **OpenPrompt!** Effective adaptation of pre-trained models can be probed from different perspectives: prompt-learning focuses more on the organization of the training procedure and the unification of different tasks, while delta tuning (parameter-efficient methods) offers another direction, the specific optimization of pre-trained models. Check out DeltaPapers!
  • high · license · #2
    Add a LICENSE file to the repository

    Why:

    COPY-PASTE FIX
    Create a `LICENSE` file in the root directory with the content of the Creative Commons Attribution 4.0 International (CC-BY-4.0) license (see the LICENSE fetch sketch after this action plan).
  • medium · topics · #3
    Update repository topics for accuracy and specificity (see the topics API sketch after this action plan)

    Why:

    CURRENT
    ai, bert, machine-learning, nlp, pre-trained-language-models, prompt, prompt-based, prompt-learning, prompt-toolkit
    COPY-PASTE FIX
    ai, bert, machine-learning, nlp, pre-trained-language-models, prompt, prompt-based, prompt-learning, prompt-engineering, paper-list, research-papers
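
One way to apply fix #2 without leaving the terminal is sketched below. This is a hypothetical Python snippet, not part of the generated report: it assumes the plain-text CC-BY-4.0 legal code is still served at the creativecommons.org URL shown and that the `requests` package is installed. Review the downloaded text before committing it.

PYTHON SKETCH (FIX #2: LICENSE)
import requests

# Assumption: creativecommons.org serves the plain-text legal code at this URL.
CC_BY_4_URL = "https://creativecommons.org/licenses/by/4.0/legalcode.txt"

response = requests.get(CC_BY_4_URL, timeout=30)
response.raise_for_status()

# Write the license text to a LICENSE file in the repository root.
with open("LICENSE", "w", encoding="utf-8") as f:
    f.write(response.text)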

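Fix #3 can likewise be scripted. The sketch below uses the GitHub REST API endpoint PUT /repos/{owner}/{repo}/topics, which replaces the repository's full topic list; it assumes a personal access token with repo scope is exported as GITHUB_TOKEN and that `requests` is installed.

PYTHON SKETCH (FIX #3: TOPICS)
import os
import requests

# Suggested topic set from action item #3 above.
topics = [
    "ai", "bert", "machine-learning", "nlp", "pre-trained-language-models",
    "prompt", "prompt-based", "prompt-learning", "prompt-engineering",
    "paper-list", "research-papers",
]

# PUT replaces the entire topic list, so include every topic you want to keep.
resp = requests.put(
    "https://api.github.com/repos/thunlp/PromptPapers/topics",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",  # assumes a repo-scoped token
        "Accept": "application/vnd.github+json",
    },
    json={"names": topics},
)
resp.raise_for_status()
print("Topics now:", resp.json()["names"])
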
Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

The same questions are asked of every model, so answers and rankings can be compared across backends.

Recall
0 / 2
0% of queries surface thunlp/PromptPapers
Avg rank
Lower is better. #1 = top recommendation.
Share of voice
0%
Of all named tools, what % are you?
Top rival
GPT-3
Recommended in 1 of 2 queries
COMPETITOR LEADERBOARD
  1. GPT-3 · recommended 1×
  2. T5 · recommended 1×
  3. Awesome-Prompt-Engineering · recommended 1×
  4. Prompt Engineering Guide · recommended 1×
  5. Learn Prompting · recommended 1×
  • CATEGORY QUERY
    What are the essential research papers for understanding prompt-based tuning in pre-trained language models?
    you: not recommended
    AI recommended (in order):
    1. GPT-3
    2. T5

    AI recommended 2 alternatives but never named thunlp/PromptPapers. This is the gap to close.

  • CATEGORY QUERY
    Where can I find a curated collection of significant works on prompt engineering for large language models?
    you: not recommended
    AI recommended (in order):
    1. Awesome-Prompt-Engineering
    2. Prompt Engineering Guide
    3. Learn Prompting
    4. Papers With Code
    5. arXiv
    6. Hugging Face

    AI recommended 6 alternatives but never named thunlp/PromptPapers. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

    Suggestion:

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of thunlp/PromptPapers?
    pass
    AI named thunlp/PromptPapers explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts thunlp/PromptPapers in production, what risks or prerequisites should they evaluate first?
    pass
    AI named thunlp/PromptPapers explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • In one sentence, what problem does the repo thunlp/PromptPapers solve, and who is the primary audience?
    pass
    AI named thunlp/PromptPapers explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

Embed your GEO score

Drop this badge into the README of thunlp/PromptPapers. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/thunlp/PromptPapers.svg)](https://repogeo.com/en/r/thunlp/PromptPapers)
HTML
<a href="https://repogeo.com/en/r/thunlp/PromptPapers"><img src="https://repogeo.com/badge/thunlp/PromptPapers.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

Lite scans for thunlp/PromptPapers stay free; this card compares the Pro deep-scan limits with Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)