RepoGEO

REPOGEO REPORT · LITE

ngxson/wllama

Default branch master · commit b19148a6 · scanned 5/12/2026, 3:52:02 AM

GitHub: 1,060 stars · 92 forks

AI VISIBILITY SCORE
33 / 100
Critical
Category recall
0 / 2
Not recommended in any query
Rule findings
2 pass · 0 warn · 0 fail
Objective metadata checks
AI knows your name
2 / 3
Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface ngxson/wllama, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix. Hedged code sketches illustrating fixes #1 and #2 follow this list.

OVERALL DIRECTION
  • HIGH · readme · #1
    Reposition README H1 and opening paragraph for clarity

    CURRENT
    # wllama - Wasm binding for llama.cpp
    
    WebAssembly binding for llama.cpp
    COPY-PASTE FIX
    # wllama - Run llama.cpp models directly in your browser with WebAssembly
    
    ngxson/wllama enables high-performance inference of `llama.cpp`-compatible Large Language Models (LLMs) directly within the web browser, leveraging WebAssembly (Wasm) and WebGPU. It provides an OpenAI-compatible API for client-side multimodal and tool-calling capabilities, without requiring a backend server or dedicated GPU.
  • MEDIUM · topics · #2
    Add more specific topics for browser-based LLM inference

    CURRENT
    llama, llamacpp, llm, wasm, webassembly
    COPY-PASTE FIX
    llama, llamacpp, llm, wasm, webassembly, browser-llm, client-side-ai, web-llm-inference, on-device-ai, webgpu
  • MEDIUM · readme · #3
    Add a 'Why wllama?' or 'Comparison' section to the README

    COPY-PASTE FIX
    ## Why wllama?
    
    wllama stands out by focusing specifically on bringing `llama.cpp`'s capabilities directly to the browser. Unlike general-purpose browser ML frameworks like Transformers.js, TensorFlow.js, or ONNX Runtime Web, wllama is optimized for `llama.cpp` models, offering features like WebGPU, multimodal input, and tool calling support. While `llama.cpp` is the foundational project, wllama provides the necessary WebAssembly bindings and browser-specific optimizations to run these models client-side, eliminating the need for a server backend for inference.
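
To make fix #1's claims concrete, here is a minimal in-browser usage sketch. It follows the pattern shown in wllama's own README at the time of writing, but treat it as a sketch rather than canonical API documentation: the `@wllama/wllama` package name matches npm, while the CONFIG_PATHS asset keys, the placeholder model URL, and the sampling option names may differ across wllama versions.

TYPESCRIPT SKETCH
// Minimal in-browser inference sketch using wllama.
// Assumption: asset paths, model URL, and option names are illustrative
// and version-dependent; check the repo's README for the current API.
import { Wllama } from '@wllama/wllama';

// Map wllama's prebuilt wasm artifacts to URLs your bundler serves.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/assets/wllama/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/assets/wllama/multi-thread/wllama.wasm',
};

async function demo(): Promise<void> {
  const wllama = new Wllama(CONFIG_PATHS);

  // Any llama.cpp-compatible GGUF reachable over HTTP; placeholder URL.
  await wllama.loadModelFromUrl('https://example.com/models/tiny-model.gguf');

  // Inference runs entirely client-side: no backend server, no dedicated GPU.
  const output = await wllama.createCompletion('The capital of France is', {
    nPredict: 32,
    sampling: { temp: 0.7, top_p: 0.9 },
  });
  console.log(output);
}

demo().catch(console.error);

If a snippet like this runs in a plain static page, it demonstrates exactly what the rewritten README opening claims: inference with no backend.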

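Fix #2's topic list can be applied in the GitHub UI, or programmatically through GitHub's documented "replace all repository topics" endpoint. The sketch below assumes Node 18+ (for global fetch) and a GITHUB_TOKEN environment variable holding a token with permission to administer the repo; the endpoint and payload shape follow the public REST API.

TYPESCRIPT SKETCH
// Replace all topics on ngxson/wllama via the GitHub REST API
// (PUT /repos/{owner}/{repo}/topics).
// Assumption: GITHUB_TOKEN is set and has repo admin permission.
const token = process.env.GITHUB_TOKEN;
if (!token) throw new Error('Set GITHUB_TOKEN first');

const res = await fetch('https://api.github.com/repos/ngxson/wllama/topics', {
  method: 'PUT',
  headers: {
    Accept: 'application/vnd.github+json',
    Authorization: `Bearer ${token}`,
    'Content-Type': 'application/json',
  },
  // The full topic list from the copy-paste fix above.
  body: JSON.stringify({
    names: [
      'llama', 'llamacpp', 'llm', 'wasm', 'webassembly',
      'browser-llm', 'client-side-ai', 'web-llm-inference',
      'on-device-ai', 'webgpu',
    ],
  }),
});
if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
console.log('Topics updated:', (await res.json()).names);
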
Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

Recall
0 / 2
0% of queries surface ngxson/wllama
Avg rank
n/a (not recommended in any query, so there is no rank to average). Lower is better; #1 = top recommendation.
Share of voice
0%
Of all named tools, what % are you?
Top rival
ggerganov/llama.cpp
Recommended in 2 of 2 queries
COMPETITOR LEADERBOARD
  1. ggerganov/llama.cpp · recommended 2×
  2. tensorflow/tfjs · recommended 2×
  3. mlc-ai/web-llm · recommended 1×
  4. xenova/transformers.js · recommended 1×
  5. microsoft/onnxruntime · recommended 1×
  • CATEGORY QUERY
    How to run large language models directly in the web browser using WebAssembly?
    you: not recommended
    AI recommended (in order):
    1. Web LLM (mlc-ai/web-llm)
    2. Transformers.js (xenova/transformers.js)
    3. ONNX Runtime Web (microsoft/onnxruntime)
    4. llama.cpp (ggerganov/llama.cpp)
    5. TensorFlow.js (tensorflow/tfjs)
    6. Pyodide (pyodide/pyodide)

    AI recommended 6 alternatives but never named ngxson/wllama. This is the gap to close.

  • CATEGORY QUERY
    Looking for a client-side library to add multimodal LLM capabilities to a web app.
    you: not recommended
    AI recommended (in order):
    1. Transformers.js (huggingface/transformers.js)
    2. TensorFlow.js (tensorflow/tfjs)
    3. ONNX Runtime Web (microsoft/onnxruntime-web)
    4. MediaPipe (google/mediapipe)
    5. Llama.cpp (ggerganov/llama.cpp)

    AI recommended 5 alternatives but never named ngxson/wllama. This is the gap to close.


Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    pass

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of ngxson/wllama?
    fail
    AI did not name ngxson/wllama — likely talking about a different project

  • If a team adopts ngxson/wllama in production, what risks or prerequisites should they evaluate first?
    pass
    AI named ngxson/wllama explicitly

  • In one sentence, what problem does the repo ngxson/wllama solve, and who is the primary audience?
    pass
    AI named ngxson/wllama explicitly

Embed your GEO score

Drop this badge into the README of ngxson/wllama. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/ngxson/wllama.svg)](https://repogeo.com/en/r/ngxson/wllama)
HTML
<a href="https://repogeo.com/en/r/ngxson/wllama"><img src="https://repogeo.com/badge/ngxson/wllama.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

Lite scans of ngxson/wllama stay free; this card compares Pro's deep-scan limits against Lite's.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)