Paper: SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (arXiv:2211.10438)
This repository contains a SmoothQuant-processed FP16 version of Meta LLaMA-3.1-8B-Instruct.
The model is not quantized. SmoothQuant is applied only as an outlier-suppression preprocessing step to improve downstream low-bit quantization.
This model applies SmoothQuant (activation-aware weight smoothing) to the FP16 weights of LLaMA-3.1-8B-Instruct.
No architectural changes or precision reduction are performed.
The purpose of this model is to serve as an intermediate checkpoint in a deployment pipeline targeting CPU and edge devices using llama.cpp and mixed-precision quantization.
To use this checkpoint, load it with Hugging Face Transformers (downstream low-bit quantization, e.g. via llama.cpp, is a separate follow-on step):
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the SmoothQuant-preprocessed FP16 checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "ArslanRobo/llama-3.1-8b-instruct-smoothquant-fp16",
    torch_dtype=torch.float16,
    device_map="auto",
)

tokenizer = AutoTokenizer.from_pretrained(
    "ArslanRobo/llama-3.1-8b-instruct-smoothquant-fp16"
)
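Continuing the snippet above (which defines model and tokenizer), a short generation example is shown below. The prompt, the chat-template call, and the generation settings are illustrative assumptions and are not taken from this card.

# Illustrative prompt; apply_chat_template formats it for the Llama-3.1 Instruct chat format
messages = [{"role": "user", "content": "Summarize SmoothQuant in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))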
Training Details
Training Data
No additional training data was used.
The model weights are derived directly from the base LLaMA-3.1-8B-Instruct model.
Training Procedure
Preprocessing
Activation statistics collected on WikiText-2
SmoothQuant applied with migration strength α = 0.5 (see the illustrative sketch after this list)
FP16 precision preserved
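As a rough illustration of the smoothing rule from the SmoothQuant paper (per-input-channel scales s_j = max|X_j|^α / max|W_j|^(1−α), with α = 0.5 here), the minimal sketch below shows how one linear layer could be smoothed. The function name smooth_linear and the toy tensors are illustrative and are not taken from this repository's code.

import torch

def smooth_linear(weight, act_max, alpha=0.5):
    # weight: [out_features, in_features]; act_max: per-input-channel max |activation|
    # collected on calibration data (e.g. WikiText-2).
    w_max = weight.abs().amax(dim=0).clamp(min=1e-5)              # per input channel
    s = (act_max.clamp(min=1e-5) ** alpha) / (w_max ** (1.0 - alpha))
    # Scaling weight columns by s and dividing the incoming activations by s
    # leaves the layer output unchanged: (X / s) @ (W * s).T == X @ W.T
    return weight * s, s

# Toy usage with random tensors (illustrative only)
W = torch.randn(4096, 4096)
act_max = torch.rand(4096) * 10 + 0.1
W_smooth, s = smooth_linear(W, act_max, alpha=0.5)

In practice the division by s is folded into the producer of the activations (e.g. the preceding LayerNorm or linear layer), so the smoothing adds no inference-time cost.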
Training Hyperparameters
Training regime: FP16 (no training, preprocessing only)
Speeds, Sizes, Times
Model size: ~15 GB (FP16)
SmoothQuant calibration: ~5–10 minutes on NVIDIA T4
Evaluation
Testing Data, Factors & Metrics
Testing Data
WikiText-2 (used for calibration and analysis)
Factors
Activation outliers
Weight distribution
Quantization sensitivity
Metrics
Perplexity (evaluated post-quantization)
Quantization MSE and SNR (simulated; see the sketch after this list)
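The MSE and SNR figures are simulated with fake quantization (quantize then dequantize). A minimal sketch of such a simulation is shown below; the helper names fake_quantize and quant_error_stats are illustrative, and symmetric per-tensor 4-bit quantization is assumed.

import torch

def fake_quantize(x, n_bits=4):
    # Symmetric per-tensor fake quantization: quantize to n_bits, then dequantize.
    qmax = 2 ** (n_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

def quant_error_stats(x, n_bits=4):
    # Return (MSE, SNR in dB) of the quantization error for tensor x.
    xq = fake_quantize(x.float(), n_bits)
    err = x.float() - xq
    mse = err.pow(2).mean()
    snr_db = 10.0 * torch.log10(x.float().pow(2).mean() / mse.clamp(min=1e-12))
    return mse.item(), snr_db.item()

# Toy comparison: passing the same weight tensor before and after smoothing
# shows whether SmoothQuant lowers MSE / raises SNR at 4 bits.
w = torch.randn(4096, 4096)
print(quant_error_stats(w, n_bits=4))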
Results
SmoothQuant reduces quantization error
Improves robustness for 4-bit and mixed-precision quantization
Minimal impact on FP16 perplexity (see the evaluation sketch below)
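The FP16 perplexity claim can be checked with a standard sliding-window evaluation on the WikiText-2 test split. The sketch below follows the common Hugging Face perplexity recipe; the window and stride sizes are illustrative assumptions.

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ArslanRobo/llama-3.1-8b-instruct-smoothquant-fp16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")
model.eval()

# Concatenate the WikiText-2 test split into one long token stream
text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
encodings = tokenizer(text, return_tensors="pt")

max_length, stride = 2048, 512          # illustrative window/stride
seq_len = encodings.input_ids.size(1)

nlls, n_tokens, prev_end = [], 0, 0
for begin in range(0, seq_len, stride):
    end = min(begin + max_length, seq_len)
    trg_len = end - prev_end             # only score tokens not seen before
    input_ids = encodings.input_ids[:, begin:end].to(model.device)
    target_ids = input_ids.clone()
    target_ids[:, :-trg_len] = -100      # mask the overlapping context
    with torch.no_grad():
        loss = model(input_ids, labels=target_ids).loss
    nlls.append(loss.float() * trg_len)
    n_tokens += trg_len
    prev_end = end
    if end == seq_len:
        break

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"WikiText-2 FP16 perplexity: {ppl.item():.2f}")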
Model Examination
Weight distribution smoothing
Reduced activation outliers
Improved numerical stability for low-bit quantization
Environmental Impact
Hardware Type: NVIDIA T4 GPU
Hours used: ~1 hour
Cloud Provider: Google Colab / Kaggle
Compute Region: Not specified
Carbon Emitted: Not measured
Technical Specifications
Model Architecture and Objective
Decoder-only Transformer
Causal language modeling objective
Compute Infrastructure
Hardware
NVIDIA T4 GPU (16 GB VRAM)
Software
PyTorch
Hugging Face Transformers
Accelerate
Datasets
Citation
BibTeX
@article{xiao2023smoothquant,
title={SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models},
author={Xiao, Guangxuan and others},
journal={arXiv preprint arXiv:2211.10438},
year={2023}
}
APA
Xiao, G., et al. (2023). SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models. arXiv:2211.10438.
Model Card Authors
Muhammad Arslan Rafiq