Xinyuan-LLM-14B-0428

🤗 Hugging Face | 🤖 ModelScope
Xinyuan-LLM-14B-0428 Highlights
Xinyuan-LLM-14B-0428, launched by Cylingo Group, is the first foundation model for the mental health domain. Built on the robust capabilities of Qwen3-14B, it has been fine-tuned on millions of data points covering diverse scenarios within the field.
- The First All-Scenario Mental Health Support Foundation Model with 24/7 Intelligent Capabilities
- Covering Diverse Mental Health Scenarios and Building Personalized Psychological Profiles
- Resolving Multiple Parenting Challenges with Customized Family Companion Solutions
Quickstart
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:

- SGLang:

```shell
python -m sglang.launch_server --model-path Cylingo/Xinyuan-LLM-14B-0428
```

- vLLM:

```shell
vllm serve Cylingo/Xinyuan-LLM-14B-0428
```
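Once a server is running, you can query it through the OpenAI-compatible chat completions API. The sketch below only builds and prints the request payload; the endpoint URL is an assumption (vLLM serves on port 8000 by default, SGLang on port 30000), so adjust it to your deployment before sending the request with your HTTP client of choice.

```python
import json

# Assumed local endpoint -- vLLM defaults to port 8000, SGLang to port 30000.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(user_message, model="Cylingo/Xinyuan-LLM-14B-0428"):
    """Build an OpenAI-compatible chat completion payload for the model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

payload = build_chat_request("I've been feeling anxious lately. Can we talk?")
print(json.dumps(payload, indent=2))
```

POST this payload as JSON to `API_URL` (e.g. with `requests.post(API_URL, json=payload)`) to receive a standard chat completion response.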
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3, and can therefore run this model.
For non-thinking mode, we suggest using `Temperature=0.8`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the Best Practices section.
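The recommended settings above can be passed directly in an OpenAI-compatible request. A minimal sketch, assuming a vLLM or SGLang server (both accept `top_k` and `min_p` as extensions to the core OpenAI API); the user message is illustrative:

```python
# Recommended non-thinking-mode sampling settings from this model card.
SAMPLING = {"temperature": 0.8, "top_p": 0.8, "top_k": 20, "min_p": 0}

# Hypothetical request payload for the OpenAI-compatible endpoint.
payload = {
    "model": "Cylingo/Xinyuan-LLM-14B-0428",
    "messages": [{"role": "user", "content": "How can I wind down before bed?"}],
    **SAMPLING,
}
print(payload["temperature"], payload["top_k"])
```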
All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the `rope_scaling` configuration only when processing long contexts is required. It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
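As a sketch, a static-YaRN setup for a 65,536-token context would add a block like the following to the model's `config.json` (the `original_max_position_embeddings` value of 32,768 is an assumption based on the Qwen3-14B base model's native context length):

```json
{
    ...,
    "rope_scaling": {
        "rope_type": "yarn",
        "factor": 2.0,
        "original_max_position_embeddings": 32768
    }
}
```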
Xinyuan-LLM-14B-0428 does not include a hybrid thinking mode like Qwen3's. For now, we recommend that users stick to the standard mode; we plan to gradually introduce related features to the community in the future.