# EAGLE3 Draft Head — Qwen3-8B

A speculative decoding draft head for Qwen/Qwen3-8B, trained using the EAGLE3 method on Google Cloud TPU with the SpecJAX framework.

EAGLE3 draft heads accelerate autoregressive generation by proposing multiple tokens per step that the target model then verifies in parallel, typically yielding 2–3× throughput gains while leaving the output distribution unchanged.
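The draft/verify loop can be sketched as follows. This is a toy greedy variant with stand-in callables for the two models, not SGLang's actual implementation:

```python
def speculative_step(draft_next, target_next, prefix, k=5):
    """One draft/verify round (greedy toy sketch).

    draft_next / target_next are stand-ins for real model forward
    passes: each maps a token sequence to the next token.
    Returns the tokens emitted this round.
    """
    # 1. Draft head proposes k tokens autoregressively (cheap).
    proposal, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)

    # 2. Target verifies all positions (in practice: one parallel
    #    forward pass) and keeps the longest agreeing prefix, plus
    #    one corrected token of its own.
    emitted, ctx = [], list(prefix)
    for t in proposal:
        if target_next(ctx) != t:
            emitted.append(target_next(ctx))  # target's own token
            break
        emitted.append(t)
        ctx.append(t)
    else:
        emitted.append(target_next(ctx))  # all k accepted: bonus token
    return emitted
```

Every round therefore emits between 1 and k+1 tokens, and each emitted token is one the target model itself would have produced, which is why output quality is preserved.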

## Usage

### SGLang (GPU)

Qwen3 EAGLE3 is natively supported in SGLang.

```bash
python -m sglang.launch_server \
    --model Qwen/Qwen3-8B \
    --speculative-algorithm EAGLE3 \
    --speculative-draft-model-path thoughtworks/Qwen3-8B-Eagle3 \
    --speculative-num-steps 5 \
    --speculative-eagle-topk 4 \
    --dtype bfloat16
```

### Thinking mode

Qwen3 supports an optional thinking mode (/think and /no_think tokens). This draft head was trained on generic instruction-following data and is compatible with both modes:

```bash
# Disable thinking mode for pure instruction-following workloads
python -m sglang.launch_server \
    --model Qwen/Qwen3-8B \
    --speculative-algorithm EAGLE3 \
    --speculative-draft-model-path thoughtworks/Qwen3-8B-Eagle3 \
    --speculative-num-steps 5 \
    --speculative-eagle-topk 4 \
    --dtype bfloat16 \
    --chat-template qwen3-instruct-no-thinking
```

### sglang-jax (TPU)

Qwen3 EAGLE3 is natively supported in sglang-jax. Note: sglang-jax's EAGLE3 pipeline is functional but not yet performance-optimized.

```bash
python -m sgl_jax.launch_server \
    --model-path Qwen/Qwen3-8B \
    --speculative-algorithm EAGLE3 \
    --speculative-draft-model-path thoughtworks/Qwen3-8B-Eagle3 \
    --speculative-eagle-topk 1 \
    --speculative-num-steps 3 \
    --speculative-num-draft-tokens 4 \
    --tp-size 4 --dtype bfloat16
```

### Python (SGLang client)

```python
import sglang as sgl

llm = sgl.LLM(
    model="Qwen/Qwen3-8B",
    speculative_algorithm="EAGLE3",
    speculative_draft_model_path="thoughtworks/Qwen3-8B-Eagle3",
    speculative_num_steps=5,
    speculative_eagle_topk=4,
    dtype="bfloat16",
)
```

## Training Details

| Parameter | Value |
|---|---|
| Framework | SpecJAX (pure JAX, no Flax/PyTorch) |
| Hardware | Google Cloud TPU v4-32 (4 hosts × 4 chips, TP=4, DP=4) |
| Dataset | 54K mixed: ShareGPT (45%) + UltraChat-200K (35%) + Open-PerfectBlend (20%) |
| Epochs | 3 |
| Steps | 4,983 per epoch |
| Optimizer | AdamW, cosine LR decay, 3% warmup |
| Learning rate | 3e-4 |
| Batch size | B=4, sequence length T=2048, gradient accumulation 2 |
| TTT length | 7 (multi-step speculative rollout) |
| Training time | ~2.0 hours |
| Precision | bfloat16 |

## Training Method

This model uses EAGLE3's training-time test (TTT) objective with a rollout length of 7. At each training step, the draft head autoregressively proposes 7 tokens; the target model provides ground-truth hidden states and logits for all positions; and a geometrically weighted loss (0.8^k at rollout position k) trains the draft to match the target at each position.
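In outline, the geometric weighting looks like this. This is a NumPy sketch for clarity (the actual training runs in JAX), and the tensor layout is an assumption:

```python
import numpy as np

def ttt_loss(draft_logits, target_probs, decay=0.8):
    """Geometrically weighted multi-step distillation loss (sketch).

    draft_logits: [K, B, T, V] draft predictions at rollout steps 0..K-1
    target_probs: [K, B, T, V] target-model distributions at the same steps
    Later rollout positions are down-weighted by decay**k (0.8**k here).
    """
    # Numerically stable log-softmax over the vocabulary axis.
    z = draft_logits - draft_logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Soft cross-entropy between target distribution and draft prediction.
    ce = -(target_probs * log_p).sum(axis=-1)          # [K, B, T]
    weights = decay ** np.arange(draft_logits.shape[0])
    return float((weights * ce.mean(axis=(1, 2))).sum())
```

Down-weighting later positions reflects that they matter less in practice: a draft token at position k is only ever verified if all k earlier tokens were accepted first.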

Qwen3's architecture includes per-head QK RMSNorm and tied word embeddings. The draft head is trained to match Qwen3's output distribution at every speculative position.

## Performance

Token acceptance rates on generic instruction-following data (ShareGPT-style prompts):

| Position | Acceptance Rate |
|---|---|
| acc_0 (1st draft token) | 60.0% |
| acc_1 | 56.7% |
| acc_2 | 55.0% |
| acc_3 | 53.8% |
| acc_4 | 52.6% |
| acc_5 | 51.5% |
| acc_6 | 50.4% |

Measured on held-out evaluation data. Actual throughput gains depend on hardware, prompt distribution, and runtime version.
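Under the simplifying assumptions that each acc_k in the table is the probability position k is accepted given all earlier positions were, and that positions are otherwise independent, the expected number of tokens emitted per target forward pass can be estimated as:

```python
def expected_tokens_per_step(rates):
    """Expected tokens per target verification pass: every surviving
    draft prefix contributes a token, plus one token from the target
    itself (the correction / bonus token)."""
    total, survive = 1.0, 1.0
    for r in rates:
        survive *= r      # P(draft prefix survives through this position)
        total += survive
    return total

rates = [0.600, 0.567, 0.550, 0.538, 0.526, 0.515, 0.504]
print(round(expected_tokens_per_step(rates), 2))  # ~2.32 tokens/step
```

That would correspond to roughly a 2.3× reduction in target forward passes, before accounting for draft-head overhead, which is consistent with the 2–3× range quoted above.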

## Model Architecture

The draft head is a single-layer transformer that operates on the target model's hidden states:

| Parameter | Value |
|---|---|
| Architecture | LlamaForCausalLM (1 decoder layer) |
| Hidden size | 4096 |
| Attention heads | 32 (GQA: 8 KV heads) |
| Vocabulary size | 151,936 (full target vocab) |
| Draft vocab size | 32,000 (top tokens by training frequency) |
| Parameters | ~350M |
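Because the draft head predicts over a reduced 32,000-token vocabulary, its argmax ids must be translated back to full target-vocab ids before verification. A sketch, where the `d2t` index buffer is hypothetical (real checkpoints ship their own mapping, built from training-set token frequencies):

```python
import numpy as np

# Hypothetical mapping: d2t[i] = target-vocab id of draft-vocab token i.
# Here we fabricate one just to show the shapes involved.
rng = np.random.default_rng(0)
d2t = rng.choice(151_936, size=32_000, replace=False)

def draft_to_target_ids(draft_ids):
    """Translate draft-vocab ids (0..31999) into full target-vocab ids
    so the target model can verify the proposed tokens."""
    return d2t[np.asarray(draft_ids)]

proposed = draft_to_target_ids([0, 17, 31_999])  # ids the target verifies
```

Restricting the draft softmax to the most frequent tokens shrinks the draft head's output projection by roughly 5× at a small cost in coverage of rare tokens.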

## Limitations

- Trained on English-dominant instruction data; performance may degrade on non-English inputs or highly domain-specific content.
- Acceptance rates are measured on generic chat data (non-thinking mode) and may differ under extended thinking prompts.
- This is a v1 checkpoint trained on generic data. A v2 with target-model-regenerated training data is planned.

## License

This model is released under the Apache License 2.0, consistent with the base model's license.

## References

```bibtex
@article{li2025eagle3,
  title={EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test},
  author={Li, Yuhui and Wei, Fangyun and Zhang, Chao and Zhang, Hongyang},
  journal={arXiv preprint arXiv:2503.01840},
  year={2025}
}
```