RepoGEO

REPOGEO REPORT · LITE

mit-han-lab/llm-awq

Default branch main · commit d6e797a4 · scanned 5/13/2026, 9:02:41 PM

GitHub: 3,534 stars · 315 forks

AI VISIBILITY SCORE
28 / 100
Critical
Category recall
0 / 2
Not recommended in any query
Rule findings
1 pass · 1 warn · 0 fail
Objective metadata checks
AI knows your name
2 / 3
Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface mit-han-lab/llm-awq, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash.

OVERALL DIRECTION
  • #1 · HIGH · topics
    Add relevant GitHub topics to the repository (a scripted version of this fix appears after this list)

    COPY-PASTE FIX
    llm-quantization, quantization, llm-compression, large-language-models, deep-learning, pytorch, cuda, mlsys, awq, inference-acceleration, multi-modal-llm
  • #2 · HIGH · readme
    Reposition the README H1 to explicitly state 'LLM Quantization Library'

    CURRENT
    # AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
    COPY-PASTE FIX
    # AWQ: An LLM Quantization Library for Activation-aware Weight Quantization (LLM Compression and Acceleration)
  • #3 · MEDIUM · comparison
    Add a 'Comparison to Alternatives' section in README

    COPY-PASTE FIX
    ## Comparison to Alternatives
    
    AWQ differentiates itself from general LLM inference frameworks (like ONNX Runtime, TensorRT, or Hugging Face Optimum) by focusing specifically on **activation-aware weight quantization** for LLMs. While these frameworks provide broad optimization capabilities, AWQ offers a specialized, highly accurate, and efficient method for compressing LLMs to low bitrates (INT3/4) with minimal performance degradation, including support for instruction-tuned and multi-modal models. Our approach protects salient weights based on activation magnitudes, leading to superior accuracy compared to other quantization-only methods.
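
Item #1 can also be shipped programmatically. Below is a minimal sketch, assuming a personal access token with repo scope in GITHUB_TOKEN; note that PUT /repos/{owner}/{repo}/topics replaces the repo's full topic list (gh repo edit --add-topic does the same from the CLI):

    # Apply the suggested topics via the GitHub REST API.
    # Assumes GITHUB_TOKEN holds a token with push access to the repo.
    import os
    import requests

    TOPICS = [
        "llm-quantization", "quantization", "llm-compression",
        "large-language-models", "deep-learning", "pytorch", "cuda",
        "mlsys", "awq", "inference-acceleration", "multi-modal-llm",
    ]

    resp = requests.put(
        "https://api.github.com/repos/mit-han-lab/llm-awq/topics",
        headers={
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
            "Accept": "application/vnd.github+json",
        },
        json={"names": TOPICS},  # replaces the topic list, does not append
        timeout=30,
    )
    resp.raise_for_status()
    print("Topics now:", resp.json()["names"])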

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries put to google/gemini-2.5-flash. Did the AI recommend you, or someone else?

The same questions go to every model, so answers and rankings can be compared across backends.

Recall
0 / 2
0% of queries surface mit-han-lab/llm-awq
Avg rank
n/a (never recommended)
Lower is better. #1 = top recommendation.
Share of voice
0%
Of all named tools, what % are you?
Top rival
ONNX Runtime
Recommended in 2 of 2 queries
COMPETITOR LEADERBOARD
  1. ONNX Runtime · recommended 2×
  2. Hugging Face Optimum · recommended 2×
  3. OpenVINO · recommended 1×
  4. NVIDIA TensorRT · recommended 1×
  5. DeepSpeed · recommended 1×
  • CATEGORY QUERY
    Need to optimize large language models for efficient deployment and faster inference.
    you: not recommended
    AI recommended (in order):
    1. OpenVINO
    2. NVIDIA TensorRT
    3. ONNX Runtime
    4. DeepSpeed
    5. Hugging Face Optimum
    6. PyTorch's `torch.quantization`
    7. TensorFlow Lite
    8. FlashAttention

    AI recommended 8 alternatives but never named mit-han-lab/llm-awq. This is the gap to close.

  • CATEGORY QUERY
    What tools enable significant speedup for multimodal LLM inference through quantization?
    you: not recommended
    AI recommended (in order):
    1. llama.cpp
    2. Hugging Face Optimum
    3. ONNX Runtime
    4. Intel OpenVINO
    5. TensorRT
    6. AutoGPTQ
    7. bitsandbytes
    8. MLC LLM

    AI recommended 8 alternatives but never named mit-han-lab/llm-awq. This is the gap to close.

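How the headline numbers fall out of these answer lists: recall is the share of queries that name you, share of voice is your mentions over all tool mentions, and avg rank is defined only over queries where you appear. A minimal sketch (the lists mirror the results above; the aggregation formulas are assumptions, not RepoGEO's published definitions):

    # Derive recall, share of voice, and avg rank from ranked answer lists.
    YOU = "mit-han-lab/llm-awq"

    query_results = [
        ["OpenVINO", "NVIDIA TensorRT", "ONNX Runtime", "DeepSpeed",
         "Hugging Face Optimum", "torch.quantization", "TensorFlow Lite",
         "FlashAttention"],
        ["llama.cpp", "Hugging Face Optimum", "ONNX Runtime", "Intel OpenVINO",
         "TensorRT", "AutoGPTQ", "bitsandbytes", "MLC LLM"],
    ]

    ranks = [r.index(YOU) + 1 for r in query_results if YOU in r]
    recall = len(ranks) / len(query_results)               # 0 / 2 -> 0%
    mentions = sum(len(r) for r in query_results)          # 16 named tools
    share = sum(r.count(YOU) for r in query_results) / mentions
    avg_rank = sum(ranks) / len(ranks) if ranks else None  # undefined at zero recall

    print(f"recall={recall:.0%}  share_of_voice={share:.0%}  avg_rank={avg_rank}")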

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    warn (an illustrative version of this check follows the list)

  • README presence
    pass
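
As an illustration only (not RepoGEO's actual rule set), a completeness check of this kind can be approximated against the public GitHub REST API; the three-topic threshold below is a made-up example:

    # Rule-style metadata audit (illustrative thresholds, public API only).
    import requests

    repo = requests.get(
        "https://api.github.com/repos/mit-han-lab/llm-awq",
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    ).json()

    checks = {
        "description": bool(repo.get("description")),
        "homepage": bool(repo.get("homepage")),
        "topics": len(repo.get("topics") or []) >= 3,  # hypothetical threshold
    }
    for signal, ok in checks.items():
        print(f"{signal}: {'pass' if ok else 'warn'}")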

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of mit-han-lab/llm-awq?
    fail
    AI did not name mit-han-lab/llm-awq — likely talking about a different project


  • If a team adopts mit-han-lab/llm-awq in production, what risks or prerequisites should they evaluate first?
    pass
    AI named mit-han-lab/llm-awq explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • In one sentence, what problem does the repo mit-han-lab/llm-awq solve, and who is the primary audience?
    pass
    AI named mit-han-lab/llm-awq explicitly


Embed your GEO score

Drop this badge into the README of mit-han-lab/llm-awq. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/mit-han-lab/llm-awq.svg)](https://repogeo.com/en/r/mit-han-lab/llm-awq)
HTML
<a href="https://repogeo.com/en/r/mit-han-lab/llm-awq"><img src="https://repogeo.com/badge/mit-han-lab/llm-awq.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

Lite scans of mit-han-lab/llm-awq stay free; this card compares Pro's deeper limits against Lite.

  • Deep reports: 10 / month
  • Brand-free category queries: 5 (vs 2 in Lite)
  • Prioritized action items: 8 (vs 3 in Lite)