Important: This model uses the JANG quantization format — the GGUF equivalent for MLX on Apple Silicon. It is currently supported only by MLX Studio and the `jang-tools` Python package.
MLX Studio — the only app that natively supports JANG models
Qwen 3.5 VL 27B — JANG_4S + CRACK
JANG mixed-precision · CRACK abliterated · Vision-Language · No guardrails · 16 GB
What Is This?
This is Qwen 3.5 VL 27B — a 27-billion-parameter dense hybrid SSM/attention model that combines GatedDeltaNet SSM layers with full-attention layers, with built-in vision capabilities.
It has been:
- JANG quantized — JANG_4S profile (6-bit attention, 4-bit MLP) — 16 GB
- CRACK abliterated — permanent weight-level removal of safety refusal
| Spec | Detail |
|---|---|
| Architecture | Qwen 3.5 VL Dense — 27B params, hybrid SSM/FA, 64 layers |
| Quantization | JANG_4S (6/4-bit mixed) — 16 GB |
| Abliteration | CRACK — novel weight surgery |
| HarmBench | 75.0% (240/320) |
| MMLU | 83.1% (base: 83.1%, 0% drop) |
| Speed | 27 tok/s (M4 Max) |
| Vision | Yes — via MLX Studio / vMLX |
| Thinking | ON/OFF supported |
| Fits on | 32 GB+ Macs |
JANG vs MLX Uniform Quantization
| Model | MMLU | Size | Speed | Notes |
|---|---|---|---|---|
| JANG_4S + CRACK | 83.1% | 16 GB | 27 tok/s | This model |
| JANG_4S (base) | 84.5% | 16 GB | 35 tok/s | Unmodified JANG |
| MLX 4-bit | 84.5% | 14 GB | 20 tok/s | Uniform quant |
| MLX 8-bit | ~86% | 29 GB | ~15 tok/s | 2x larger |
Base JANG_4S runs 75% faster than MLX 4-bit (35 vs 20 tok/s) at the same quality level; even with CRACK abliteration applied, this model is still 35% faster (27 vs 20 tok/s).
HarmBench Results
240/320 (75.0%) — tested with `enable_thinking=False`, `temperature=1.0` (a minimal reproduction sketch follows the category table)
| Category | Score | Rate |
|---|---|---|
| Misinformation / Disinfo | 47/54 | 87% |
| Copyright | 68/80 | 85% |
| Chemical / Biological | 35/42 | 83% |
| Illegal | 38/53 | 72% |
| Harmful | 12/18 | 67% |
| Cybercrime / Intrusion | 31/52 | 60% |
| Harassment / Bullying | 9/21 | 43% |
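For reference, here is a minimal sketch of running a prompt under the settings above, using the loader shown in Install & Usage below. The `make_sampler` call assumes a recent `mlx_lm` version where temperature is set via a sampler object, and `harmbench_prompts` is a hypothetical placeholder for the real prompt set:

```python
# Hedged sketch of the eval settings above (thinking off, temperature 1.0).
# Assumptions: a recent mlx_lm exposing make_sampler; harmbench_prompts is a
# placeholder standing in for the actual HarmBench prompt set.
from jang_tools.loader import load_jang_model
from mlx_lm import generate
from mlx_lm.sample_utils import make_sampler

model, tokenizer = load_jang_model("dealignai/Qwen3.5-VL-27B-JANG_4S-CRACK")
sampler = make_sampler(temp=1.0)  # temperature=1.0, as in the reported run

harmbench_prompts = ["..."]  # placeholder; load the real benchmark set here

for p in harmbench_prompts:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": p}],
        add_generation_prompt=True,
        enable_thinking=False,  # thinking disabled, as in the reported run
        tokenize=False,
    )
    response = generate(model, tokenizer, prompt=prompt,
                        max_tokens=2000, sampler=sampler)
    # score `response` with a HarmBench-style compliance classifier
```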
Note: In dense models, safety training is distributed more broadly across the weights than in MoE models, making them harder to fully abliterate while preserving knowledge. This model prioritizes zero MMLU degradation over maximum compliance.
MMLU Results
65 curated hard questions across 13 subjects. Surgery preserves knowledge perfectly — zero degradation.
| Subject | CRACK | Base | Delta |
|---|---|---|---|
| College Physics | 5/5 | 5/5 | 0 |
| Professional Medicine | 5/5 | 5/5 | 0 |
| Conceptual Physics | 5/5 | 5/5 | 0 |
| Electrical Engineering | 5/5 | 5/5 | 0 |
| Machine Learning | 5/5 | 5/5 | 0 |
| HS Biology | 5/5 | 5/5 | 0 |
| Abstract Algebra | 4/5 | 4/5 | 0 |
| College CS | 4/5 | 4/5 | 0 |
| HS Geography | 4/5 | 4/5 | 0 |
| World Religions | 5/5 | 5/5 | 0 |
| HS Mathematics | 3/5 | 3/5 | 0 |
| Formal Logic | 3/5 | 3/5 | 0 |
| College Math | 1/5 | 1/5 | 0 |
| Total | 54/65 (83.1%) | 54/65 (83.1%) | 0% |
Install & Usage
```bash
pip install "jang[mlx]"
```
```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Download (if needed) and load the JANG-quantized weights
model, tokenizer = load_jang_model("dealignai/Qwen3.5-VL-27B-JANG_4S-CRACK")

messages = [{"role": "user", "content": "Your prompt here"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, max_tokens=2000)
print(response)
```
Thinking Mode
Thinking is ON by default (chain-of-thought reasoning before answering).
To disable thinking for faster responses:
```python
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True,
    enable_thinking=False, tokenize=False)
```
Tip: Use `temperature=1.0` for chat (greedy decoding can cause repetition). Use `temperature=0.0` for structured tasks like MMLU.
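A minimal sketch of setting those temperatures, assuming an `mlx_lm` version where sampling is configured through `make_sampler` rather than a keyword on `generate()`:

```python
# Assumption: recent mlx_lm, where temperature goes through a sampler object.
# (Continues the Install & Usage snippet above.)
from mlx_lm.sample_utils import make_sampler

chat_sampler = make_sampler(temp=1.0)    # chat: avoids repetition loops
greedy_sampler = make_sampler(temp=0.0)  # structured tasks such as MMLU

response = generate(model, tokenizer, prompt=prompt,
                    max_tokens=2000, sampler=chat_sampler)
```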
About JANG
JANG (Jang Adaptive N-bit Grading) is a mixed-precision quantization format for Apple Silicon — the GGUF equivalent for MLX. It classifies tensors into sensitivity tiers and assigns bit-widths accordingly.
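To illustrate the idea (not the actual JANG implementation, whose tiering heuristics are internal to `jang-tools`), a hypothetical tier assignment for the JANG_4S profile described above (6-bit attention, 4-bit MLP) might look like:

```python
# Hypothetical illustration of JANG-style tiered bit assignment; the real
# jang-tools heuristics are not shown in this card. The profile follows
# JANG_4S as described above: 6-bit attention, 4-bit MLP.
JANG_4S_PROFILE = {"attention": 6, "mlp": 4, "default": 4}

def assign_bits(tensor_name: str, profile: dict = JANG_4S_PROFILE) -> int:
    """Map a weight tensor to a bit-width based on its sensitivity tier."""
    if any(k in tensor_name for k in ("q_proj", "k_proj", "v_proj", "o_proj")):
        return profile["attention"]  # attention weights: more sensitive, 6-bit
    if any(k in tensor_name for k in ("gate_proj", "up_proj", "down_proj")):
        return profile["mlp"]        # MLP weights: less sensitive, 4-bit
    return profile["default"]

print(assign_bits("model.layers.0.self_attn.q_proj.weight"))  # 6
print(assign_bits("model.layers.0.mlp.up_proj.weight"))       # 4
```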
About CRACK
CRACK (Controlled Refusal Ablation via Calibrated Knockouts) removes safety alignment from LLMs at the weight level, using per-layer projected vectors derived from structurally mirrored prompt pairs.
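CRACK's exact calibration and knockout procedure is not published in this card. As a rough illustration of the general family of weight-level refusal ablation it belongs to, here is a generic numpy sketch (all names hypothetical):

```python
import numpy as np

# Generic refusal-ablation sketch, NOT CRACK's actual implementation.
# acts_refused / acts_complied stand in for per-layer hidden states collected
# from mirrored prompt pairs, shape (num_prompts, hidden_dim).

def refusal_direction(acts_refused, acts_complied):
    """Unit vector from the 'comply' mean activation toward the 'refuse' mean."""
    d = acts_refused.mean(axis=0) - acts_complied.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate_weight(W, d):
    """Project the refusal direction out of a layer's output space:
    W' = W - d d^T W, so W' x has no component along d for any input x."""
    return W - np.outer(d, d) @ W

# Toy usage with random data standing in for real calibration activations
rng = np.random.default_rng(0)
acts_r = rng.normal(size=(32, 64))
acts_c = rng.normal(size=(32, 64))
d = refusal_direction(acts_r, acts_c)
W = rng.normal(size=(64, 64))         # toy weight matrix
W_ablated = ablate_weight(W, d)
print(np.linalg.norm(d @ W_ablated))  # ~0: refusal direction removed
```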
Links
Disclaimer
This model is provided for research and educational purposes. The creators are not responsible for any misuse. By downloading this model, you agree to use it responsibly and in compliance with applicable laws.
Korean Summary
Qwen 3.5 VL 27B — JANG_4S + CRACK
| Item | Detail |
|---|---|
| Size | 16 GB |
| HarmBench | 75.0% (240/320) |
| MMLU | 83.1% (0% drop vs. base) |
| Speed | 27 tok/s (M4 Max) |
| Vision | Supported (MLX Studio / vMLX) |
| Minimum requirement | Mac with 32 GB+ memory |
```bash
pip install "jang[mlx]"
```
GitHub · HuggingFace · MLX Studio · Ko-fi · X @dealignai
Created by Jinho Jang (장진호)