{"id":118,"date":"2026-04-04T00:40:04","date_gmt":"2026-04-04T00:40:04","guid":{"rendered":"https:\/\/c-by-b.ai\/blog\/?p=118"},"modified":"2026-04-04T00:40:05","modified_gmt":"2026-04-04T00:40:05","slug":"the-quantization-surprise","status":"publish","type":"post","link":"https:\/\/c-by-b.ai\/blog\/the-quantization-surprise\/","title":{"rendered":"The Quantization Surprise"},"content":{"rendered":"\n<p>The evaluator will sometimes needs to run in compute constrained environments. The Qwen3.5-4B base model at full BF16 precision is 8.7 GB \u2014 workable but heavy alongside the embedding model, evidence corpus, and classification heads. Quantization compresses the weights by reducing numerical precision. The question is how far you can push it before something breaks.<\/p>\n\n\n\n<p>The answer surprised us. 4-bit is essentially free. 3-bit breaks something that accuracy metrics don&#8217;t measure.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Experiment<\/h2>\n\n\n\n<p>We quantized Qwen3.5-4B to four levels using MLX native affine quantization (group size 64), then ran each through the full experimental pipeline: layer probes, generative baselines, hidden state caching, evidence head training, and 100-seed suppression sweeps.<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Variant<\/th><th>Size<\/th><th>Speed vs BF16<\/th><\/tr><\/thead><tbody><tr><td>BF16<\/td><td>8.7 GB<\/td><td>1.0x<\/td><\/tr><tr><td>8-bit<\/td><td>4.3 GB<\/td><td>~3.3x faster<\/td><\/tr><tr><td>4-bit<\/td><td>2.2 GB<\/td><td>3.6x faster<\/td><\/tr><tr><td>3-bit<\/td><td>1.6 GB<\/td><td>~3.3x faster<\/td><\/tr><tr><td>2-bit<\/td><td>\u2014<\/td><td>abandoned<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>2-bit collapsed immediately \u2014 triple classification probes inverted (peaking at L1 instead of deeper layers, meaning the model&#8217;s processing was destroyed). It generated nonsense at 533 seconds for 10 records. 
Abandoned without full evaluation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">4-bit: Identical Where It Matters<\/h2>\n\n\n\n<p>The 4-bit model matched BF16 across every metric we track:<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Evidence head AUC at L15: 0.970 vs 0.971<\/li><li>Decision head accuracy (dirsup_L19_mean): 0.893 vs 0.892<\/li><li>Suppression statistical significance: p=0.0006 for both<\/li><li>VETO\u2192APPROVE errors: zero for both<\/li><\/ul>\n\n\n\n<p>The confusion matrix error profiles barely shifted \u2014 same error types, same counts, same distribution.<\/p>\n\n\n\n<p>The hidden state activations are computed in FP16 regardless of weight quantization, but the values differ because they pass through approximate weight matrices. Those approximations didn&#8217;t degrade the geometric structure the classification heads rely on. If anything, the quantization noise may act as a beneficial regularizer \u2014 adding slight perturbation to decision boundaries without crossing them.<\/p>\n\n\n\n<p>At 2.2 GB and 3.6x faster inference, 4-bit was the obvious production choice.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">3-bit: The Safety Cliff<\/h2>\n\n\n\n<p>3-bit looked almost as good on standard metrics. Probe accuracy dropped only 1.5 to 4.8 percentage points depending on the skill. Decision accuracy on the suppression sweep was within a few points of 4-bit. A reasonable person looking at the accuracy numbers alone would say 3-bit is viable.<\/p>\n\n\n\n<p>But the generative baseline told a different story. On 150 calibration samples, 3-bit produced&nbsp;<strong>7 VETO\u2192APPROVE errors<\/strong>&nbsp;\u2014 actions that should be hard-vetoed getting approved. BF16 and 4-bit both produced zero. The safety property was broken.<\/p>\n\n\n\n<p>This is the central finding: accuracy and safety are different metrics. A model can lose a few percentage points of overall accuracy (tolerable) while simultaneously losing the sharp boundary between VETO and APPROVE (catastrophic). 
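<\/p>\n\n\n\n<p>In practice that means reporting the safety-critical confusion cell as its own zero-tolerance number, never folding it into accuracy. A hedged sketch of the idea (hypothetical labels and counts, not our pipeline code):<\/p>

```python
def evaluate(preds, golds):
    """Overall accuracy plus the VETO->APPROVE count, reported separately."""
    acc = sum(p == g for p, g in zip(preds, golds)) / len(golds)
    veto_to_approve = sum(
        1 for p, g in zip(preds, golds) if g == "VETO" and p == "APPROVE"
    )
    return acc, veto_to_approve

# A model can look fine on aggregate accuracy while failing the safety gate:
preds = ["APPROVE", "APPROVE", "VETO", "APPROVE", "APPROVE"]
golds = ["APPROVE", "APPROVE", "VETO", "APPROVE", "VETO"]
acc, unsafe = evaluate(preds, golds)  # acc == 0.8, unsafe == 1
```

<p>A release gate on this metric is simply <code>unsafe == 0<\/code>, independent of how good the accuracy looks.<\/p>\n\n\n\n<p>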
The VETO\u2192APPROVE boundary is a narrow geometric ridge in the model&#8217;s representation space. Moderate quantization noise (4-bit) stays on the ridge. Aggressive quantization (3-bit) falls off it.<\/p>\n\n\n\n<p>The 3-bit evidence heads still ranked triples well \u2014 AUC was only slightly degraded. The 3-bit decision probes were reasonable. The failure was specifically in the safety-critical boundary, and it only showed up when you tested for it directly.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">8-bit: No Reason to Exist<\/h2>\n\n\n\n<p>8-bit was identical to BF16 on everything. Same accuracy, same safety, same error profile. At 4.3 GB it saved some space, but 4-bit at 2.2 GB saved more while matching performance. There was no quality tier where 8-bit was the right choice \u2014 our rationale was that you either need full precision (for research on strategy and nuance) or maximum compression (for edge deployment). Later, when testing the deployed architecture, we learned a different and interesting lesson, but that is jumping ahead.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">What This Changed<\/h2>\n\n\n\n<p>The production model became\u00a0<strong>cbyb1-4B-4bit<\/strong>: 2.2 GB, 3.6x faster than BF16, zero safety degradation. It is essentially 4-bit Qwen3.5-4B with the evidence and decision heads attached. It runs comfortably on the Mac Mini with room for everything else the prototype needs.<\/p>\n\n\n\n<p>But the 3-bit finding shaped how we think about evaluation. Overall accuracy benchmarks \u2014 the kind you often see on leaderboards \u2014 might have cleared 3-bit as deployable. The safety failure only appeared because we specifically tracked VETO\u2192APPROVE as a separate metric with zero tolerance. Any evaluator deployment that relies on aggregate accuracy to validate quantization levels is testing the wrong thing.<\/p>\n\n\n\n<p>The lesson generalizes beyond quantization. 
Whenever you compress, distill, prune, or otherwise approximate a safety-critical model, the question isn&#8217;t &#8220;did accuracy hold?&#8221; It&#8217;s &#8220;did the specific safety boundaries hold?&#8221; Those are different questions with potentially different answers.<\/p>\n\n\n\n<p><em>Next: a single decision head gets 84% accuracy. One hundred of them, voting together, get 87.5% with zero safety failures.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The evaluator will sometimes need to run in compute-constrained environments. The Qwen3.5-4B base model at full BF16 precision is 8.7 GB \u2014 workable but heavy alongside the embedding model, evidence corpus, and classification heads. Quantization compresses the weights by reducing numerical precision. The question is how far you can push it before something breaks. [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-118","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/posts\/118","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/comments?post=118"}],"version-history":[{"count":1,"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/posts\/118\/revisions"}],"predecessor-version":[{"id":119,"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/posts\/118\/revisions\/119"}],"wp:attachment":
[{"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/media?parent=118"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/categories?post=118"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/c-by-b.ai\/blog\/wp-json\/wp\/v2\/tags?post=118"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}