RepoGEO

REPOGEO REPORT · LITE

intel/neural-compressor

Default branch master · commit 58e578e3 · scanned 5/12/2026, 6:02:19 AM

GitHub: 2,634 stars · 305 forks

AI VISIBILITY SCORE
33 / 100
Critical
Category recall
0 / 2
Not recommended in any query
Rule findings
2 pass · 0 warn · 0 fail
Objective metadata checks
AI knows your name
2 / 3
Direct prompts that named your repo
HOW TO READ THIS REPORT

Action plan is what to do next — copy-pasteable changes prioritized by impact. Category visibility is the real GEO test: when a user asks an AI a brand-free question that should surface intel/neural-compressor, does the AI actually recommend you — or your competitors? Objective checks verify the metadata signals AI engines weight first. Self-mention check detects whether AI even knows you exist by name.

Action plan — copy-paste fixes

3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.

OVERALL DIRECTION
  • high · readme · #1
    Reposition README's main heading to highlight SOTA LLM quantization and Intel optimization.

    Why:

    CURRENT
    <h3> An open-source Python library supporting popular model compression techniques on mainstream deep learning frameworks (PyTorch, TensorFlow, and JAX)</h3>
    COPY-PASTE FIX
    <h3> The leading open-source Python library for SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity, optimized for Intel hardware across PyTorch, TensorFlow, and ONNX Runtime.</h3>
  • medium · readme · #2
    Add a "Key Differentiators" section to the README.

    Why:

    COPY-PASTE FIX
    ## Key Differentiators
    
    Unlike generic quantization tools, Intel® Neural Compressor offers a unified, framework-agnostic approach to model optimization (especially quantization and pruning) with a strong emphasis on maximizing inference performance on Intel CPUs, GPUs, and other Intel hardware. We provide state-of-the-art low-bit LLM quantization techniques and comprehensive model compression for PyTorch, TensorFlow, and ONNX Runtime.
  • low · topics · #3
    Add specific LLM names and advanced quantization formats to topics.

    Why:

    CURRENT
    auto-tuning, awq, fp4, gptq, int4, int8, knowledge-distillation, large-language-models, low-precision, mxformat, post-training-quantization, pruning, quantization, quantization-aware-training, smoothquant, sparsegpt, sparsity
    COPY-PASTE FIX
    auto-tuning, awq, deepseek, fp4, flux, framepack, gptq, int4, int8, knowledge-distillation, llama, large-language-models, low-precision, mxformat, nvfp4, post-training-quantization, pruning, quantization, quantization-aware-training, qwen, smoothquant, sparsegpt, sparsity
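Before applying a topic list like the one above, it is worth checking it against GitHub's documented rules: a repository can carry at most 20 topics, and each topic must be lowercase letters, digits, and hyphens, at most 50 characters. A minimal sketch of such a check (`check_topics` is a hypothetical helper, not part of RepoGEO or GitHub's API); note the proposed list above contains 23 entries, so three would need to be dropped to fit the cap:

```python
import re

MAX_TOPICS = 20  # GitHub's documented per-repository cap
# Lowercase letters, digits, and hyphens; 50 characters max
TOPIC_RE = re.compile(r"^[a-z0-9][a-z0-9-]{0,49}$")

def check_topics(raw: str) -> dict:
    """Parse a comma-separated topic list and report constraint violations."""
    topics = [t.strip() for t in raw.split(",") if t.strip()]
    return {
        "count": len(topics),
        "over_cap": max(0, len(topics) - MAX_TOPICS),
        "invalid": [t for t in topics if not TOPIC_RE.match(t)],
        "duplicates": sorted({t for t in topics if topics.count(t) > 1}),
    }

proposed = ("auto-tuning, awq, deepseek, fp4, flux, framepack, gptq, int4, int8, "
            "knowledge-distillation, llama, large-language-models, low-precision, "
            "mxformat, nvfp4, post-training-quantization, pruning, quantization, "
            "quantization-aware-training, qwen, smoothquant, sparsegpt, sparsity")
report = check_topics(proposed)
```

Running this on the proposed fix flags 3 topics over the cap with no format violations, so the remaining work is editorial: choose which three existing or proposed topics matter least for discoverability.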

Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash

Category visibility — the real GEO test

Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?

Same questions for every model — switch tabs to compare answers and rankings.

Recall
0 / 2
0% of queries surface intel/neural-compressor
Avg rank
Lower is better. #1 = top recommendation.
Share of voice
0%
Of all named tools, what % are you?
Top rival
TimDettmers/bitsandbytes
Recommended in 1 of 2 queries
COMPETITOR LEADERBOARD
  1. TimDettmers/bitsandbytes · recommended 1×
  2. AWQ · recommended 1×
  3. GPTQ · recommended 1×
  4. huggingface/optimum · recommended 1×
  5. microsoft/onnxruntime · recommended 1×
  • CATEGORY QUERY
    How to apply state-of-the-art low-bit quantization for large language models?
    you: not recommended
    AI recommended (in order):
    1. bitsandbytes (TimDettmers/bitsandbytes)
    2. AWQ
    3. GPTQ
    4. Hugging Face Optimum (huggingface/optimum)
    5. ONNX Runtime (microsoft/onnxruntime)
    6. Intel OpenVINO (openvinotoolkit/openvino)
    7. NVIDIA TensorRT (NVIDIA/TensorRT)
    8. LLM.int8()
    9. SqueezeLLM
    10. PyTorch (pytorch/pytorch)

    AI recommended 10 alternatives but never named intel/neural-compressor. This is the gap to close.

    Show full AI answer
  • CATEGORY QUERY
    What are effective model compression techniques for PyTorch and TensorFlow deep learning models?
    you: not recommended
    AI recommended (in order):
    1. torch.quantization
    2. tf.lite.TFLiteConverter
    3. TensorFlow Model Optimization Toolkit (tensorflow/model-optimization)
    4. torch.nn.utils.prune
    5. pytorch_model_pruning (IntelLabs/pytorch_model_pruning)
    6. torchdistill (yoshitomo-matsubara/torchdistill)
    7. keras.losses.KLDivergence
    8. tensorly (tensorly/tensorly)
    9. AutoGluon (awslabs/autogluon)
    10. AutoKeras (keras-team/autokeras)
    11. tf.keras.applications
    12. MobileNetV2
    13. EfficientNet
    14. torchvision.models
    15. MobileNetV3
    16. EfficientNet_B0
    17. EfficientNet_B7

    AI recommended 17 alternatives but never named intel/neural-compressor. This is the gap to close.

    Show full AI answer
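The recall, average-rank, and share-of-voice figures reported above can be reproduced from the raw query answers using the report's own definitions (recall = fraction of queries that name the repo; share of voice = the repo's mentions over all named tools). A minimal sketch, with `geo_metrics` as an illustrative function name and placeholder entries standing in for the tools not listed here:

```python
def geo_metrics(target: str, query_results: list) -> dict:
    """Compute recall, average rank, and share of voice for one repo.

    query_results: for each brand-free query, the ordered list of
    tools the AI named in its answer.
    """
    # 1-based rank of the target in each answer that mentions it
    hits = [r.index(target) + 1 for r in query_results if target in r]
    total_named = sum(len(r) for r in query_results)
    return {
        "recall": len(hits) / len(query_results),
        "avg_rank": sum(hits) / len(hits) if hits else None,  # undefined if never named
        "share_of_voice": sum(r.count(target) for r in query_results) / total_named,
    }

# This scan's two queries named 10 and 17 tools, none of them the target:
results = [["bitsandbytes", "AWQ", "GPTQ"] + ["placeholder"] * 7,
           ["torch.quantization", "tf.lite.TFLiteConverter"] + ["placeholder"] * 15]
m = geo_metrics("intel/neural-compressor", results)
```

With this scan's data the sketch yields recall 0.0, an undefined average rank, and 0% share of voice, matching the summary cards above.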

Objective checks

Rule-based audits of metadata signals AI engines weight most.

  • Metadata completeness
    pass

  • README presence
    pass

Self-mention check

Does AI even know your repo exists when asked about it directly?

  • Compared to common alternatives in this category, what is the core differentiator of intel/neural-compressor?
    pass
    AI named intel/neural-compressor explicitly

    AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?

  • If a team adopts intel/neural-compressor in production, what risks or prerequisites should they evaluate first?
    pass
    AI named intel/neural-compressor explicitly


  • In one sentence, what problem does the repo intel/neural-compressor solve, and who is the primary audience?
    fail
    AI did not name intel/neural-compressor — likely talking about a different project


Embed your GEO score

Drop this badge into the README of intel/neural-compressor. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.

RepoGEO badge preview · Live preview
MARKDOWN (README)
[![RepoGEO](https://repogeo.com/badge/intel/neural-compressor.svg)](https://repogeo.com/en/r/intel/neural-compressor)
HTML
<a href="https://repogeo.com/en/r/intel/neural-compressor"><img src="https://repogeo.com/badge/intel/neural-compressor.svg" alt="RepoGEO" /></a>
Pro

Subscribe to Pro for deep diagnoses

intel/neural-compressor — Lite scans stay free; this card compares Pro's deeper limits against Lite.

  • Deep reports · 10 / month
  • Brand-free category queries · 5 (vs 2 in Lite)
  • Prioritized action items · 8 (vs 3 in Lite)