RepoGEO

REPOGEO REPORT · LITE

Beomi/KoAlpaca

Default branch main · commit fb5c84e2 · scanned 5/13/2026, 10:42:44 AM

GitHub: 1,577 stars · 226 forks

AI VISIBILITY SCORE
59 / 100 (Needs work)
Category recall: 1 / 2 · Avg rank #2.0 when recommended
Rule findings (objective metadata checks): 1 pass · 1 warn · 0 fail
AI knows your name: 2 / 3 direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface Beomi/KoAlpaca, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • high · readme · #1
    Reposition README introduction to highlight fine-tuning

    CURRENT
    The current README starts with "Update Logs" after an image, without a clear introductory statement about its core capabilities.
    COPY-PASTE FIX
    Add a prominent introductory paragraph to the README, such as: 'KoAlpaca is an open-source, instruction-following large language model specifically designed for Korean. It provides a robust base for fine-tuning on various Korean NLP applications, enabling developers and researchers to adapt it for custom tasks.'
  • medium · homepage · #2
    Add a homepage URL to the repository's "About" section

    COPY-PASTE FIX
    Add the URL of the primary Hugging Face model page (e.g., `https://huggingface.co/beomi/KoAlpaca-Polyglot-5.8B-v1.1b`) or a dedicated project website to the repository's 'About' section.
  • low · topics · #3
    Expand repository topics to include broader NLP application terms

    CURRENT
    alpaca, chatkoalpaca, koalpaca, korean-nlp, llama, polyglot-ko
    COPY-PASTE FIX
    alpaca, chatkoalpaca, koalpaca, korean-nlp, llama, polyglot-ko, nlp, fine-tuning, language-model, instruction-tuning
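Both metadata fixes above (the homepage URL and the expanded topics) can be applied in one pass with the GitHub CLI. A minimal sketch, assuming `gh` is installed and authenticated with push access to the repository; it builds the command as a dry run so you can inspect it before applying:

```shell
# Dry-run sketch: apply the homepage and topic fixes via `gh repo edit`.
# Assumes `gh` is installed and authenticated with push access.
HOMEPAGE="https://huggingface.co/beomi/KoAlpaca-Polyglot-5.8B-v1.1b"
NEW_TOPICS="nlp fine-tuning language-model instruction-tuning"

CMD="gh repo edit Beomi/KoAlpaca --homepage $HOMEPAGE"
for t in $NEW_TOPICS; do
  CMD="$CMD --add-topic $t"   # --add-topic appends without clobbering existing topics
done

echo "$CMD"   # inspect first; run the printed command to apply
```

`--add-topic` is additive, so the six existing topics survive; only the four new ones are appended.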

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

Recall: 1 / 2 (50% of queries surface Beomi/KoAlpaca)
Avg rank: #2.0 (lower is better; #1 = top recommendation)
Share of voice: 8% (of all named tools, what % are you?)
Top rival: Polyglot-Ko (recommended in 1 of 2 queries)
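The headline numbers above can be reproduced from the two per-query rankings shown below. A short Python sketch, using the result lists from this report; the metric definitions are our reading of the report's labels, not RepoGEO's published formulas:

```python
# Reproduce recall, average rank, and share of voice from per-query rankings.
REPO = "KoAlpaca"

query_results = [
    # Query 1: KoAlpaca ranked #2 of 6 named tools
    ["Polyglot-Ko", "KoAlpaca", "KLUE-RoBERTa", "mBERT", "XLM-R", "BLOOM"],
    # Query 2: KoAlpaca not named among 6 recommendations
    ["Polyglot-ko", "KoGPT", "KLUE-RoBERTa", "KoBERT",
     "bert-base-multilingual-cased", "xlm-roberta-base/large"],
]

hits = [r.index(REPO) + 1 for r in query_results if REPO in r]  # 1-based ranks
recall = len(hits) / len(query_results)          # fraction of queries that surface you
avg_rank = sum(hits) / len(hits)                 # mean rank when recommended
mentions = sum(len(r) for r in query_results)    # every tool the AI named
share_of_voice = sum(r.count(REPO) for r in query_results) / mentions
```

With these inputs, recall is 0.5, average rank is 2.0, and share of voice is 1 of 12 mentions, which rounds to the 8% shown above.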
COMPETITOR LEADERBOARD
  1. Polyglot-Ko · recommended 1×
  2. KLUE-RoBERTa · recommended 1×
  3. mBERT · recommended 1×
  4. XLM-R · recommended 1×
  5. BLOOM · recommended 1×
  • CATEGORY QUERY
    Which open-source large language models are best for understanding Korean instructions and commands?
    you: #2
    AI recommended (in order):
    1. Polyglot-Ko
    2. KoAlpaca ← you
    3. KLUE-RoBERTa
    4. mBERT
    5. XLM-R
    6. BLOOM
  • CATEGORY QUERY
    I need an open-source language model suitable for fine-tuning on Korean NLP applications.
    you: not recommended
    AI recommended (in order):
    1. Polyglot-ko (EleutherAI/polyglot-ko-12.8b)
    2. KoGPT (SKT/KoGPT)
    3. KLUE-RoBERTa (KLUE/roberta-large)
    4. KoBERT (skt/kobert-base-v1)
    5. bert-base-multilingual-cased
    6. xlm-roberta-base/large

    AI recommended 6 alternatives but never named Beomi/KoAlpaca. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of Beomi/KoAlpaca?
    fail
    AI did not name Beomi/KoAlpaca — likely talking about a different project

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts Beomi/KoAlpaca in production, what risks or prerequisites should they evaluate first?
    pass
    AI named Beomi/KoAlpaca explicitly

  • In one sentence, what problem does the repo Beomi/KoAlpaca solve, and who is the primary audience?
    pass
    AI named Beomi/KoAlpaca explicitly

Embed your GEO score

Drop this badge into the README of Beomi/KoAlpaca. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview (live preview)
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/Beomi/KoAlpaca.svg)](https://repogeo.com/en/r/Beomi/KoAlpaca)
HTML
<a href="https://repogeo.com/en/r/Beomi/KoAlpaca"><img src="https://repogeo.com/badge/Beomi/KoAlpaca.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

Beomi/KoAlpaca — Lite scans stay free; this card compares Pro's deep-scan limits against Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)