|
--- |
|
library_name: transformers |
|
license: apache-2.0 |
|
license_link: https://huggingface.co/shuttleai/shuttle-3.5/blob/main/LICENSE |
|
pipeline_tag: text-generation |
|
language: |
|
- en |
|
tags: |
|
- chat |
|
--- |
|
|
|
|
|
|
<div style="border-radius: 15px;"> |
|
<img |
|
src="https://storage.shuttleai.com/shuttle-3.5.png" |
|
alt="ShuttleAI Thumbnail" |
|
style="width: auto; height: auto; margin-left: 0; object-fit: cover; border-radius: 15px;"> |
|
</div> |
|
|
|
## Shuttle-3.5 |
|
### ☁️ <a href="https://shuttleai.com/" target="_blank">Use via API</a> • 💬 <a href="https://shuttlechat.com/" target="_blank">ShuttleChat</a> |
|
|
|
We are excited to introduce Shuttle-3.5, a fine-tuned version of [Qwen3 32B](https://huggingface.co/Qwen/Qwen3-32B) that emulates the writing style of Claude 3 models and has been thoroughly trained on role-playing data. |
|
|
|
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios (see the usage sketch after this list). |
|
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning. |
|
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience. |
|
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks. |
|
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**. |
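
The following is a minimal usage sketch of the thinking/non-thinking switch. It assumes the chat template inherited from the Qwen3 base model, where the `enable_thinking` argument of `apply_chat_template` toggles the reasoning block; verify against the base model documentation if the template differs.

```python
# Hedged sketch: mode switching via the Qwen3-style chat template (assumption
# carried over from the base model, not specific to Shuttle-3.5 itself).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "shuttleai/shuttle-3.5"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene between two rivals."}]

# enable_thinking=True asks the model to produce a reasoning block before the
# reply; set it to False for direct, general-purpose dialogue.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```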
|
|
|
|
|
## Model Overview |
|
|
|
**Shuttle-3.5** has the following features: |

- Type: Causal Language Model |
|
- Training Stage: Pretraining & Post-training |
|
- Number of Parameters: 32.8B |
|
- Number of Parameters (Non-Embedding): 31.2B |
|
- Number of Layers: 64 |
|
- Number of Attention Heads (GQA): 64 for Q and 8 for KV |
|
- Context Length: 32,768 tokens natively and [131,072 tokens with YaRN](#processing-long-texts); a configuration sketch follows this list. |
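
Reaching the 131,072-token figure requires enabling YaRN rope scaling. The sketch below follows the convention documented for the Qwen3 base model; the `rope_scaling` field names and the scaling factor are assumptions to verify against the base model card and the long-text section linked above.

```python
# Hedged sketch: extending the context window with YaRN rope scaling,
# assuming the Qwen3-style rope_scaling convention.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("shuttleai/shuttle-3.5")
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,                               # 32,768 * 4 = 131,072 tokens
    "original_max_position_embeddings": 32768,
}

model = AutoModelForCausalLM.from_pretrained(
    "shuttleai/shuttle-3.5",
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```

Note that static YaRN scaling applies to all inputs, so it is generally best left disabled unless prompts actually approach or exceed the native 32,768-token window.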
|
|
|
|
|
## Fine-Tuning Details |
|
|
|
- **Training Setup**: The model was trained on 130 million tokens for 40 hours on an H100 GPU. |