REPOGEO REPORT · LITE
intel/neural-compressor
Default branch master · commit 58e578e3 · scanned 5/12/2026, 6:02:19 AM
GitHub: 2,634 stars · 305 forks
How to read this report:
- Action plan: what to do next; copy-pasteable changes prioritized by impact.
- Category visibility: the real GEO test. When a user asks an AI a brand-free question that should surface intel/neural-compressor, does the AI actually recommend you, or your competitors?
- Objective checks: verify the metadata signals AI engines weight first.
- Self-mention check: detects whether AI even knows you exist by name.
Action plan — copy-paste fixes
3 prioritized changes generated by gemini-2.5-flash. Mark items done after you ship the fix.
- #1 · high · readme: Reposition the README's main heading to highlight SOTA LLM quantization and Intel optimization.
CURRENT: <h3> An open-source Python library supporting popular model compression techniques on mainstream deep learning frameworks (PyTorch, TensorFlow, and JAX)</h3>
COPY-PASTE FIX: <h3> The leading open-source Python library for SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity, optimized for Intel hardware across PyTorch, TensorFlow, and ONNX Runtime.</h3>
- #2 · medium · readme: Add a "Key Differentiators" section to the README.
COPY-PASTE FIX:
## Key Differentiators
Unlike generic quantization tools, Intel® Neural Compressor offers a unified, framework-agnostic approach to model optimization (especially quantization and pruning) with a strong emphasis on maximizing inference performance on Intel CPUs, GPUs, and other Intel hardware. We provide state-of-the-art low-bit LLM quantization techniques and comprehensive model compression for PyTorch, TensorFlow, and ONNX Runtime.
- #3 · low · topics: Add specific LLM names and advanced quantization formats to the repo topics (a scripted version follows this list).
CURRENT: auto-tuning, awq, fp4, gptq, int4, int8, knowledge-distillation, large-language-models, low-precision, mxformat, post-training-quantization, pruning, quantization, quantization-aware-training, smoothquant, sparsegpt, sparsity
COPY-PASTE FIX: auto-tuning, awq, deepseek, fp4, flux, framepack, gptq, int4, int8, knowledge-distillation, llama, large-language-models, low-precision, mxformat, nvfp4, post-training-quantization, pruning, quantization, quantization-aware-training, qwen, smoothquant, sparsegpt, sparsity
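If you prefer to script fix #3 instead of clicking through GitHub's UI, a minimal sketch using the REST endpoint `PUT /repos/{owner}/{repo}/topics` is below. The `requests` dependency and the `GITHUB_TOKEN` environment variable are our assumptions; editing the topic list under the repo's "About" settings works just as well.

```python
# Hypothetical helper: replace the repo's topic list with the suggested set.
# Assumes a personal access token with admin rights on the repo, exported
# as GITHUB_TOKEN. PUT replaces the full list, so include existing topics too.
import os

import requests

TOPICS = [
    "auto-tuning", "awq", "deepseek", "fp4", "flux", "framepack", "gptq",
    "int4", "int8", "knowledge-distillation", "llama",
    "large-language-models", "low-precision", "mxformat", "nvfp4",
    "post-training-quantization", "pruning", "quantization",
    "quantization-aware-training", "qwen", "smoothquant", "sparsegpt",
    "sparsity",
]

resp = requests.put(
    "https://api.github.com/repos/intel/neural-compressor/topics",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json={"names": TOPICS},
    timeout=30,
)
resp.raise_for_status()
print("topics updated:", ", ".join(resp.json()["names"]))
```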
Category GEO backends resolved for this scan: google/gemini-2.5-flash, deepseek/deepseek-v4-flash
Category visibility — the real GEO test
Brand-free queries asked to google/gemini-2.5-flash. Did AI recommend you, or someone else?
Every model is asked the same questions, so answers and rankings can be compared across backends.
- TimDettmers/bitsandbytes · recommended 1×
- AWQ · recommended 1×
- GPTQ · recommended 1×
- huggingface/optimum · recommended 1×
- microsoft/onnxruntime · recommended 1×
- CATEGORY QUERY: How to apply state-of-the-art low-bit quantization for large language models? · you: not recommended · AI recommended (in order):
- bitsandbytes (TimDettmers/bitsandbytes)
- AWQ
- GPTQ
- Hugging Face Optimum (huggingface/optimum)
- ONNX Runtime (microsoft/onnxruntime)
- Intel OpenVINO (openvinotoolkit/openvino)
- NVIDIA TensorRT (NVIDIA/TensorRT)
- LLM.int8()
- SqueezeLLM
- PyTorch (pytorch/pytorch)
AI recommended 10 alternatives but never named intel/neural-compressor. This is the gap to close.
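For context on what a winning answer would cite, here is a minimal post-training INT8 quantization sketch with intel/neural-compressor itself. It assumes the 2.x `PostTrainingQuantConfig`/`fit` API; the toy model and random calibration batches are placeholders, so check the repo's README for the API of the release you actually install.

```python
# A sketch, not a benchmark: one-shot INT8 post-training quantization.
import torch
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

# Placeholder FP32 model; substitute your own network.
float_model = torch.nn.Sequential(
    torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 10)
)

# A few representative (input, label) batches are enough to calibrate.
calib_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(
        torch.randn(32, 64), torch.zeros(32, dtype=torch.long)
    ),
    batch_size=8,
)

# Without an eval function, fit() quantizes once instead of running the
# accuracy-driven tuning loop.
q_model = fit(
    model=float_model,
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_loader,
)
q_model.save("./int8-model")
```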
- CATEGORY QUERY: What are effective model compression techniques for PyTorch and TensorFlow deep learning models? · you: not recommended · AI recommended (in order):
- torch.quantization
- tf.lite.TFLiteConverter
- TensorFlow Model Optimization Toolkit (tensorflow/model-optimization)
- torch.nn.utils.prune
- pytorch_model_pruning (IntelLabs/pytorch_model_pruning)
- torchdistill (yoshitomo-matsubara/torchdistill)
- keras.losses.KLDivergence
- tensorly (tensorly/tensorly)
- AutoGluon (awslabs/autogluon)
- AutoKeras (keras-team/autokeras)
- tf.keras.applications
- MobileNetV2
- EfficientNet
- torchvision.models
- MobileNetV3
- EfficientNet_B0
- EfficientNet_B7
AI recommended 17 alternatives but never named intel/neural-compressor. This is the gap to close.
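To ground the second query, here is a short sketch of one technique the AI did name: L1-magnitude pruning with PyTorch's stock `torch.nn.utils.prune` module (plain PyTorch, not an intel/neural-compressor API).

```python
import torch
import torch.nn.utils.prune as prune

layer = torch.nn.Linear(256, 256)

# Zero out the 30% of weights with the smallest L1 magnitude.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Pruning is applied via a reparametrization (weight_orig * weight_mask);
# remove() bakes the mask in so the zeros become permanent.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.1%}")  # ~30.0%
```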
Objective checks
Rule-based audits of metadata signals AI engines weight most.
- Metadata completeness: pass
- README presence: pass
Self-mention check
Does AI even know your repo exists when asked about it directly?
- Compared to common alternatives in this category, what is the core differentiator of intel/neural-compressor? · pass: AI named intel/neural-compressor explicitly
AI answers can be confidently wrong. Read for accuracy: does it match your actual tech stack, audience, and differentiator?
- If a team adopts intel/neural-compressor in production, what risks or prerequisites should they evaluate first? · pass: AI named intel/neural-compressor explicitly
- In one sentence, what problem does the repo intel/neural-compressor solve, and who is the primary audience? · fail: AI did not name intel/neural-compressor; it was likely talking about a different project
Embed your GEO score
Drop this badge into the README of intel/neural-compressor. It auto-updates whenever the report is rescanned and links back to the latest report — easy public proof that you care about AI discoverability.
Markdown:
[![RepoGEO](https://repogeo.com/badge/intel/neural-compressor.svg)](https://repogeo.com/en/r/intel/neural-compressor)
HTML:
<a href="https://repogeo.com/en/r/intel/neural-compressor"><img src="https://repogeo.com/badge/intel/neural-compressor.svg" alt="RepoGEO" /></a>
Subscribe to Pro for deep diagnoses
Lite scans of intel/neural-compressor stay free; this card compares Pro's deep-scan limits with Lite's.
- Deep reports: 10 / month
- Brand-free category queries: 5 (vs 2 in Lite)
- Prioritized action items: 8 (vs 3 in Lite)