# Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning
A distilled version of Qwen3.5-27B, enhanced with Claude Opus 4.6 reasoning patterns through knowledge distillation.
## Evaluation Results
- Evaluator: Qwen3-Coder-Next
- Test Samples: 260 (from a personal dataset; results are for reference only)
- Date: 2026-03-29
### Overall Performance
| Metric | Base Model | Distilled Model | Improvement |
|---|---|---|---|
| Win Rate | 25.77% | 73.85% | +48.08 pp |
| Avg Latency (s) | 71.70 | 68.47 | -4.5% |
### Score Breakdown (10-point scale)
| Dimension | Base Model | Distilled Model | Improvement |
|---|---|---|---|
| Accuracy | 6.35 | 8.59 | +35.3% |
| Logic | 6.47 | 8.69 | +34.3% |
| Completeness | 5.70 | 8.82 | +54.7% |
| Clarity | 6.51 | 8.43 | +29.5% |
| Actionability | 5.85 | 8.56 | +46.3% |
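The Improvement column reports the relative gain of the distilled score over the base score. The arithmetic behind the table can be checked with a few lines of Python:

```python
# Per-dimension scores from the table: (base, distilled)
scores = {
    "Accuracy": (6.35, 8.59),
    "Logic": (6.47, 8.69),
    "Completeness": (5.70, 8.82),
    "Clarity": (6.51, 8.43),
    "Actionability": (5.85, 8.56),
}

# Relative improvement in percent, rounded to one decimal place
gains = {dim: round((d - b) / b * 100, 1) for dim, (b, d) in scores.items()}

for dim, g in gains.items():
    print(f"{dim}: +{g}%")
```

Each computed value matches the Improvement column above.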
### Performance by Category
| Category | Base Win Rate | Distilled Win Rate | Samples |
|---|---|---|---|
| debug | 22.67% | 76.00% | 75 |
| design | 51.32% | 48.68% | 76 |
| prompt | 5.80% | 94.20% | 69 |
| reasoning | 17.50% | 82.50% | 40 |
## Quick Start with vLLM
### Installation

```bash
pip install vllm
```
### Online Inference (Simple)

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Initialize the model
llm = LLM(
    model="HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning",
    dtype="bfloat16",
    tensor_parallel_size=2,  # Adjust based on your GPU count
    max_model_len=8192,
)

# Define sampling parameters
sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=2048,
)

# Create messages
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain how to optimize a slow database query."},
]

# Render the prompt with the model's chat template
tokenizer = AutoTokenizer.from_pretrained("HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning")
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate
outputs = llm.generate(prompt, sampling_params)

# Print output
for output in outputs:
    print(output.outputs[0].text)
```
### Online Inference (Streaming)

The synchronous `LLM` API returns complete outputs only (`SamplingParams` has no `stream` argument). For token-by-token streaming in Python, use `AsyncLLMEngine` (or the OpenAI-compatible server shown below):

```python
import asyncio

from transformers import AutoTokenizer
from vllm import AsyncEngineArgs, AsyncLLMEngine, SamplingParams

engine = AsyncLLMEngine.from_engine_args(
    AsyncEngineArgs(
        model="HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning",
        dtype="bfloat16",
        tensor_parallel_size=2,
    )
)

tokenizer = AutoTokenizer.from_pretrained("HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning")
messages = [
    {"role": "user", "content": "Write a Python function to sort a list."}
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=1024,
)

async def stream() -> None:
    printed = 0
    # Each yielded RequestOutput holds the full text generated so far,
    # so print only the newly generated suffix.
    async for output in engine.generate(prompt, sampling_params, request_id="stream-0"):
        text = output.outputs[0].text
        print(text[printed:], end="", flush=True)
        printed = len(text)
    print()

asyncio.run(stream())
```
### Offline Inference (Batch)

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning",
    dtype="bfloat16",
    tensor_parallel_size=2,
)

# Raw completion-style prompts; no chat template is applied here
prompts = [
    "Explain quantum computing.",
    "Write a haiku about coding.",
    "Debug: Why is this loop infinite?",
]

sampling_params = SamplingParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=512,
)

outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt}")
    print(f"Response: {output.outputs[0].text}")
    print("-" * 50)
```
### Command Line Interface

```bash
# Start an OpenAI-compatible API server
python -m vllm.entrypoints.openai.api_server \
    --model HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning \
    --dtype bfloat16 \
    --tensor-parallel-size 2 \
    --host 0.0.0.0 \
    --port 8000
```

```bash
# In another terminal, query the API. The "model" field must match the served
# model name, which defaults to the --model path.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "HarleyWang/Qwen3.5-27B-Claude-Opus-4.6-High-Reasoning",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the capital of France?"}
    ],
    "temperature": 0.7,
    "max_tokens": 256
  }'
```
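The server responds in the OpenAI chat-completions schema, with the reply text at `choices[0].message.content`. A minimal sketch of extracting it in Python — the payload below is an illustrative example of the schema, not real model output:

```python
import json

# Illustrative response body in the OpenAI chat-completions schema
# (field values are made up for demonstration).
raw = """
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "The capital of France is Paris."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 25, "completion_tokens": 8, "total_tokens": 33}
}
"""

response = json.loads(raw)

# The assistant's reply lives at choices[0].message.content
reply = response["choices"][0]["message"]["content"]
print(reply)
```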
## License

This model is licensed under Apache-2.0.

This model was distilled to capture Claude Opus 4.6's reasoning patterns while retaining Qwen3.5-27B's efficiency and multilingual support. vLLM is recommended for inference for best performance.
## Model Tree

- Base model: Qwen/Qwen3.5-27B