🔬 MXFP4 Quantization Visualizer
📊 Input BF16 Values
📏 Vec Size: 16
⚖️ Scale: 1.0
🎯 Output Comparison
BF16 Input (16-bit each)
0100000010100000 → 5.0
1011111110100000 → -1.25
0011111101100000 → 0.875
0100000010000000 → 4.0
0100000010100000 → 5.0
1011111110100000 → -1.25
0011111101100000 → 0.875
0100000010000000 → 4.0
0100000010100000 → 5.0
1011111110100000 → -1.25
0011111101100000 → 0.875
0100000010000000 → 4.0
0100000010100000 → 5.0
1011111110100000 → -1.25
0011111101100000 → 0.875
0100000010000000 → 4.0
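A BF16 value is just the top 16 bits of an IEEE-754 float32, so each bit string above can be decoded by appending sixteen zero bits and reinterpreting. A minimal Python sketch (the `decode_bf16` helper name is mine):

```python
import struct

def decode_bf16(bits: str) -> float:
    """Decode a 16-character BF16 bit string by widening it to FP32.

    BF16 is the high half of a float32, so shifting left by 16 and
    reinterpreting the 32-bit pattern recovers the exact value.
    """
    as_int = int(bits, 2) << 16
    return struct.unpack(">f", as_int.to_bytes(4, "big"))[0]

# The four distinct inputs above (the list repeats them four times):
for b in ["0100000010100000", "1011111110100000",
          "0011111101100000", "0100000010000000"]:
    print(b, "->", decode_bf16(b))  # 5.0, -1.25, 0.875, 4.0
```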
MXFP4 Output (4-bit each)
0100 → 2.0
1010 → -1.0
0010 → 1.0
0110 → 4.0
0011 → 1.5
1001 → -0.5
0101 → 3.0
1100 → -2.0
0010 → 1.0
0110 → 4.0
1011 → -1.5
0101 → 3.0
1001 → -0.5
0011 → 1.5
1110 → -4.0
0110 → 4.0
💾 Original: 256 bits
📦 Compressed: 96 bits (16 × 4-bit codes = 64 bits, plus a 32-bit shared scale)
🗜️ Ratio: 2.67:1
💰 Saved: 62.5%
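The figures above follow from simple arithmetic, assuming the 96-bit total counts one shared 32-bit scale alongside the 4-bit codes (note the OCP MX spec instead uses an 8-bit E8M0 scale per 32-element block, which would give 72 bits here):

```python
n = 16                               # vector size
orig_bits = n * 16                   # BF16: 16 bits per value -> 256
data_bits = n * 4                    # MXFP4 codes: 4 bits each -> 64
scale_bits = 32                      # assumption: one shared 32-bit scale
comp_bits = data_bits + scale_bits   # -> 96

print(f"Ratio: {orig_bits / comp_bits:.2f}:1")             # 2.67:1
print(f"Saved: {100 * (1 - comp_bits / orig_bits):.1f}%")  # 62.5%
```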
🎲 MXFP4 E2M1 Lookup Table
Each 4-bit code is sign (1 bit), exponent (2 bits), mantissa (1 bit). The full E2M1 code-to-value mapping:

0000 → 0.0   0100 → 2.0   1000 → -0.0   1100 → -2.0
0001 → 0.5   0101 → 3.0   1001 → -0.5   1101 → -3.0
0010 → 1.0   0110 → 4.0   1010 → -1.0   1110 → -4.0
0011 → 1.5   0111 → 6.0   1011 → -1.5   1111 → -6.0
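The lookup can be written directly as a table in Python; decoding a code is just a dictionary read followed by the block-scale multiply (the `decode_mxfp4` helper name is mine):

```python
# E2M1 lookup: 1 sign bit, 2 exponent bits, 1 mantissa bit.
E2M1 = {
    "0000": 0.0,  "0001": 0.5,  "0010": 1.0,  "0011": 1.5,
    "0100": 2.0,  "0101": 3.0,  "0110": 4.0,  "0111": 6.0,
    "1000": -0.0, "1001": -0.5, "1010": -1.0, "1011": -1.5,
    "1100": -2.0, "1101": -3.0, "1110": -4.0, "1111": -6.0,
}

def decode_mxfp4(code: str, scale: float = 1.0) -> float:
    """Map a 4-bit code through the table and apply the block scale."""
    return E2M1[code] * scale

print(decode_mxfp4("0110"))  # -> 4.0
```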
⚙️ Quantization Process
Scale Factor: 1.0 (max |input| = 5.0, within the MXFP4 E2M1 range of ±6.0)
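A minimal Python sketch of the scale-and-round step, under simplifying assumptions: this demo's scale is 1.0, ties are broken toward the smaller magnitude, and the `quantize_e2m1` name is mine. A real MX implementation picks a power-of-two scale per 32-element block from the block maximum and may round ties to even.

```python
# Representable E2M1 magnitudes (signs handled separately).
E2M1_VALUES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

def quantize_e2m1(x: float, scale: float = 1.0) -> float:
    """Quantize x/scale to the nearest E2M1 magnitude, then rescale.

    min() returns the first minimum, so ties go to the smaller
    magnitude; values beyond 6.0/scale clamp to the range edge.
    """
    mag = min(E2M1_VALUES, key=lambda v: abs(abs(x) / scale - v))
    return (mag if x >= 0 else -mag) * scale

print(quantize_e2m1(0.875))   # 0.875 rounds up to 1.0
print(quantize_e2m1(-1.25))   # tie between -1.0 and -1.5 -> -1.0
```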
📈 Quantization Error Analysis
MSE: 0.0938 | MAE: 0.250 | Max Error: 0.500
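All three metrics are elementwise reductions over the input/output pair. A self-contained sketch with hypothetical example vectors (not the exact panel values above):

```python
def error_metrics(original, quantized):
    """MSE, MAE, and max absolute error between equal-length vectors."""
    errs = [abs(a - b) for a, b in zip(original, quantized)]
    n = len(errs)
    return (sum(e * e for e in errs) / n,  # mean squared error
            sum(errs) / n,                 # mean absolute error
            max(errs))                     # worst single element

# Hypothetical two-element example:
mse, mae, mx = error_metrics([0.875, -1.25], [1.0, -1.0])
print(mse, mae, mx)
```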
📊 Memory Analysis
📦 Original Size: 256 bits
🗜️ MXFP4 Size: 96 bits
💰 Memory Saved: 160 bits