芝麻 Zhima

Introduction

Zhima is an LLM focused on Chinese modern poetry: given a title, a summary, or keywords in a user instruction, it generates an original Chinese modern poem.
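
Only the title-style instruction is documented in this card (see Quick Start below). By analogy, summary- and keyword-style prompts might look like the following sketch; the last two templates are assumptions, not documented formats:

title_prompt   = "使用以下标题写一首现代诗:向山谷吹来的风"                  # the format used in Quick Start
summary_prompt = "使用以下摘要写一首现代诗:一个旅人在黄昏的山谷里怀念故乡"  # assumed template
keyword_prompt = "使用以下关键词写一首现代诗:山谷、风、黄昏"                # assumed template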

The name Zhima (芝麻, "sesame") is a near-homophone of Zhimo (志摩), as in Xu Zhimo (1897-1931), a famous modern Chinese poet.

We performed full-parameter fine-tuning of the Qwen2.5-0.5B-Instruct base model on the AI-Generated_Chinese_Modern_Poetry and chinese_modern_poetry datasets, training for 24 hours on 8 A800 GPUs.
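
The card does not include the training script. As a rough illustration of what a full-parameter SFT run on instruction/poem pairs could look like with the Hugging Face Trainer, here is a minimal sketch; the dataset file, its fields, and all hyperparameters are assumptions, not the authors' recipe:

# Minimal full-parameter SFT sketch (illustrative only; dataset format and
# hyperparameters are assumptions, not the authors' actual recipe).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")

# Hypothetical JSONL file with {"instruction": ..., "poem": ...} pairs.
raw = load_dataset("json", data_files="poetry_sft.jsonl", split="train")

def to_text(example):
    # Render each pair with the chat template so training matches inference.
    messages = [{"role": "user", "content": example["instruction"]},
                {"role": "assistant", "content": example["poem"]}]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=1024)

ds = raw.map(to_text).map(tokenize, remove_columns=raw.column_names + ["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="zhima-sft",
        per_device_train_batch_size=8,
        num_train_epochs=3,
        bf16=True,                 # matches the released BF16 checkpoint
        learning_rate=2e-5,
    ),
    train_dataset=ds,
    # Causal-LM collator: labels cover the whole sequence, prompt included
    # (a simplification; one could also mask the prompt tokens).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()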

Quick Start

Install the dependencies first (accelerate is required for device_map="auto"):

pip install transformers accelerate

Then run:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Hyaline/Zhima-0.5B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "使用以下标题写一首现代诗:向山谷吹来的风"
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
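
By default, model.generate follows the generation config bundled with the checkpoint. If the output feels repetitive or too conservative, you can opt into explicit sampling; the values below are illustrative, not tuned by the authors:

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,    # sample instead of the default decoding strategy
    temperature=0.8,   # illustrative values, not tuned for this model
    top_p=0.9,
)
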
Model Details

The released checkpoint has 494M parameters and is distributed as BF16 safetensors.