---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/shuttleai/shuttle-3.5/blob/main/LICENSE
pipeline_tag: text-generation
language:
  - en
tags:
  - chat
---


# Shuttle-3.5

☁️ Use via API • 💬 ShuttleChat

We are excited to introduce Shuttle-3.5, a fine-tuned version of Qwen3-32B that emulates the writing style of Claude 3 models and is thoroughly trained on role-playing data.

- Unique support for seamlessly switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios (see the sketch after this list).
- Significantly enhanced reasoning capabilities, surpassing the previous QwQ (in thinking mode) and Qwen2.5-Instruct (in non-thinking mode) models on mathematics, code generation, and commonsense logical reasoning.
- Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogue, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes, and achieving leading performance among open-source models in complex agent-based tasks.
- Support for 100+ languages and dialects, with strong capabilities for multilingual instruction following and translation.
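
A minimal transformers sketch of the mode switch is below. It assumes Shuttle-3.5 retains Qwen3's `enable_thinking` flag in its chat template (not confirmed by this card); the prompt text is illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shuttleai/shuttle-3.5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short in-character greeting for a tavern keeper."}]

# enable_thinking=True emits a reasoning trace before the answer;
# False skips it for fast, general-purpose dialogue.
# (Assumed to match Qwen3's chat-template switch.)
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

If the Qwen3 behavior carries over, appending `/think` or `/no_think` inside a user turn should also toggle the mode on a per-message basis.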

## Model Overview

Shuttle-3.5 has the following features:

- Type: Causal Language Model
- Training Stage: Pretraining & Post-training
- Number of Parameters: 32.8B
- Number of Parameters (Non-Embedding): 31.2B
- Number of Layers: 64
- Number of Attention Heads (GQA): 64 for Q and 8 for KV
- Context Length: 32,768 tokens natively, extendable to 131,072 tokens with YaRN (see the sketch below)
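
Contexts beyond the native window presumably follow Qwen3's YaRN recipe, i.e. adding a `rope_scaling` entry to the model config. A hedged sketch passing the override through `from_pretrained`; the key names follow Qwen3's documentation and are not confirmed for this repository:

```python
from transformers import AutoModelForCausalLM

# YaRN override following Qwen3's documented recipe (assumed to apply here):
# factor 4.0 scales the native 32,768-token window to ~131,072 tokens.
model = AutoModelForCausalLM.from_pretrained(
    "shuttleai/shuttle-3.5",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
)
```

Static YaRN scaling applies regardless of input length and can slightly degrade performance on short texts, so enable it only when prompts actually exceed the native window.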

## Fine-Tuning Details

- Training Setup: The model was trained on 130 million tokens for 40 hours on an H100 GPU.