RepoGEO

REPOGEO REPORT · LITE

qualcomm/nexa-sdk

Default branch main · commit 1915274c · scanned 5/15/2026, 12:07:26 PM

GitHub: 8,048 stars · 997 forks

AI VISIBILITY SCORE
33 / 100 · Critical
  • Category recall: 0 / 2 · not recommended in any query
  • Rule findings: 2 pass · 0 warn · 0 fail (objective metadata checks)
  • AI knows your name: 2 / 3 direct prompts named your repo
HOW TO READ THIS REPORT

The action plan is what to do next: copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface qualcomm/nexa-sdk, does the AI actually recommend you, or your competitors? Objective checks verify the metadata signals that AI engines weight first. The self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items as done after you ship each fix.

OVERALL DIRECTION
  • high · readme#1
    Reposition the README's opening paragraph to clarify broad scope and core value

    CURRENT
    NexaSDK lets you build the smartest and fastest on-device AI with minimum energy. It is a highly performant local inference framework that runs the latest multimodal AI models locally on NPU, GPU, and CPU - across Android, Windows, and Linux devices with a few lines of code.
    COPY-PASTE FIX
    NexaSDK is a highly performant local inference framework for **frontier LLMs and VLMs**, enabling **day-0 model support** across **GPU, NPU, and CPU** on **PC (Python/C++), mobile (Android & iOS), and Linux/IoT (Arm64 & x86 Docker)**. Build the smartest and fastest on-device AI with minimum energy, supporting models like OpenAI GPT-OSS, IBM Granite-4, Qwen-3-VL, Gemma-3n, Ministral-3, and more.
  • high · readme#2
    Add an explicit section clarifying broad platform and hardware support

    COPY-PASTE FIX
    Add a section titled 'Broad Platform & Hardware Support' or similar, stating: 'NexaSDK is designed for broad compatibility, running on GPU, NPU, and CPU across various devices including PC (Windows, Linux), mobile (Android, iOS), and IoT (Arm64 & x86 Docker). While optimized for Qualcomm hardware, it is not limited to specific Qualcomm platforms like the Robotics RB3, ensuring wide applicability for on-device AI development.'
  • medium · topics#3
    Expand and refine repository topics for better categorization

    CURRENT
    gemma3, go, gpt-oss, granite4, llama, llama3, llm, on-device-ai, phi3, qwen3, qwen3vl, sdk, stable-diffusion, vlm
    COPY-PASTE FIX
    llm, vlm, on-device-ai, local-inference, edge-ai, mobile-ai, iot-ai, day-0-models, frontier-llms, multimodal-ai, python, cplusplus, android, ios, linux, windows, gpu, npu, cpu, gemma3, gpt-oss, granite4, llama, llama3, phi3, qwen3, qwen3vl, stable-diffusion, sdk
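One caution before pasting that topic list in: per GitHub's documented limits, a repository can carry at most 20 topics, and each name must be 35 characters or fewer, using only lowercase letters, digits, and hyphens. The suggested set above contains 29 entries, so it needs trimming to the 20 most relevant before it will apply. A minimal sketch (the `validate_topics` helper is illustrative, not part of any GitHub API) to sanity-check a proposed set:

```python
import re

MAX_TOPICS = 20  # GitHub caps each repository at 20 topics
# Topic names: lowercase letters, digits, hyphens; must start
# with a letter or digit; 35 characters maximum.
TOPIC_RE = re.compile(r"^[a-z0-9][a-z0-9-]{0,34}$")

def validate_topics(topics):
    """Return a list of problems with a proposed GitHub topic set."""
    problems = []
    if len(topics) > MAX_TOPICS:
        problems.append(
            f"{len(topics)} topics exceeds GitHub's limit of {MAX_TOPICS}"
        )
    for t in topics:
        if not TOPIC_RE.match(t):
            problems.append(f"invalid topic name: {t!r}")
    return problems

# The full list suggested in the copy-paste fix above (29 entries)
proposed = [
    "llm", "vlm", "on-device-ai", "local-inference", "edge-ai", "mobile-ai",
    "iot-ai", "day-0-models", "frontier-llms", "multimodal-ai", "python",
    "cplusplus", "android", "ios", "linux", "windows", "gpu", "npu", "cpu",
    "gemma3", "gpt-oss", "granite4", "llama", "llama3", "phi3", "qwen3",
    "qwen3vl", "stable-diffusion", "sdk",
]
print(validate_topics(proposed))
```

Run against the suggested list, the only problem flagged is the count: every individual name is well-formed, so the fix is simply to drop the nine least relevant topics.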

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?


  • Recall: 0 / 2 · 0% of queries surface qualcomm/nexa-sdk
  • Avg rank: n/a (repo never recommended; lower is better, #1 = top recommendation)
  • Share of voice: 0% · of all named tools, what % are you?
  • Top rival: ggerganov/llama.cpp · recommended in 2 of 2 queries
COMPETITOR LEADERBOARD
  1. ggerganov/llama.cpp · recommended 2×
  2. ollama/ollama · recommended 1×
  3. LM Studio · recommended 1×
  4. huggingface/transformers · recommended 1×
  5. TimDettmers/bitsandbytes · recommended 1×
  • CATEGORY QUERY
    How can I run the newest multimodal AI models locally on various devices efficiently?
    you: not recommended
    AI recommended (in order):
    1. Ollama (ollama/ollama)
    2. LM Studio
    3. llama.cpp (ggerganov/llama.cpp)
    4. llava.cpp (ggerganov/llama.cpp)
    5. Hugging Face `transformers` (huggingface/transformers)
    6. bitsandbytes (TimDettmers/bitsandbytes)
    7. auto-gptq (PanQiWei/AutoGPTQ)
    8. PyTorch (pytorch/pytorch)
    9. ONNX Runtime (microsoft/onnxruntime)
    10. TensorRT
    11. MLC LLM (mlc-ai/mlc-llm)

    AI recommended 11 alternatives but never named qualcomm/nexa-sdk. This is the gap to close.

  • CATEGORY QUERY
    Looking for a framework to deploy LLMs and VLMs on mobile or IoT devices with C++.
    you: not recommended
    AI recommended (in order):
    1. ONNX Runtime
    2. TFLite
    3. MNN
    4. NCNN
    5. OpenVINO
    6. Pytorch Mobile

    AI recommended 6 alternatives but never named qualcomm/nexa-sdk. This is the gap to close.


Objective checks

Rule-based audits of the metadata signals that AI engines weight most heavily.

  • Metadata completeness
    pass

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of qualcomm/nexa-sdk?
    pass
    AI named qualcomm/nexa-sdk explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts qualcomm/nexa-sdk in production, what risks or prerequisites should they evaluate first?
    pass
    AI named qualcomm/nexa-sdk explicitly


  • In one sentence, what problem does the repo qualcomm/nexa-sdk solve, and who is the primary audience?
    fail
    AI did not name qualcomm/nexa-sdk — likely talking about a different project


Embed your GEO score

Drop this badge into the README of qualcomm/nexa-sdk. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/qualcomm/nexa-sdk.svg)](https://repogeo.com/en/r/qualcomm/nexa-sdk)
HTML
<a href="https://repogeo.com/en/r/qualcomm/nexa-sdk"><img src="https://repogeo.com/badge/qualcomm/nexa-sdk.svg" alt="RepoGEO" /></a>

Subscribe to Pro for deep diagnoses

qualcomm/nexa-sdk · Lite scans stay free; this card compares Pro's deep-scan limits with Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 vs 2 in Lite
  • Prioritized action items: 8 vs 3 in Lite