Files changed (1)
  1. README.md +61 -47
README.md CHANGED
@@ -1,48 +1,62 @@
- ---
- tags:
- - autotrain
- - text-generation-inference
- - text-generation
- - peft
- library_name: transformers
- base_model: Qwen/Qwen2.5-0.5B-Instruct
- widget:
- - messages:
-   - role: user
-     content: What is your favorite condiment?
- license: other
- datasets:
- - ttbui/html_alpaca
- ---
-
- # Model Trained Using AutoTrain
-
- This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
-
- # Usage
-
- ```python
-
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- model_path = "yasserrmd/qwen2.5-html-0.5b"
-
- tokenizer = AutoTokenizer.from_pretrained(model_path)
- model = AutoModelForCausalLM.from_pretrained(
-     model_path,
-     device_map="auto",
-     torch_dtype='auto'
- ).eval()
-
- # Prompt content: "hi"
- messages = [
-     {"role": "user", "content": "generate a sample html for dashboard"}
- ]
-
- input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
- output_ids = model.generate(input_ids.to('cuda'))
- response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
-
- # Model response: "Hello! How can I assist you today?"
- print(response)
- ```
 
+ ---
+ tags:
+ - autotrain
+ - text-generation-inference
+ - text-generation
+ - peft
+ library_name: transformers
+ base_model: Qwen/Qwen2.5-0.5B-Instruct
+ widget:
+ - messages:
+   - role: user
+     content: What is your favorite condiment?
+ license: other
+ datasets:
+ - ttbui/html_alpaca
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ # Model Trained Using AutoTrain
+
+ This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
+
+ # Usage
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_path = "yasserrmd/qwen2.5-html-0.5b"
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_path,
+     device_map="auto",
+     torch_dtype="auto",
+ ).eval()
+
+ # Ask the model to generate an HTML snippet.
+ messages = [
+     {"role": "user", "content": "generate a sample html for dashboard"}
+ ]
+
+ input_ids = tokenizer.apply_chat_template(
+     conversation=messages,
+     tokenize=True,
+     add_generation_prompt=True,
+     return_tensors="pt",
+ )
+ # Move inputs to the device chosen by device_map="auto" and leave room
+ # for a complete HTML response.
+ output_ids = model.generate(input_ids.to(model.device), max_new_tokens=512)
+ # Decode only the newly generated tokens, dropping the prompt.
+ response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
+ print(response)
+ ```
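
Since the card is tagged `text-generation` and uses `transformers`, the same call can also go through the high-level `pipeline` API. A minimal sketch, assuming the repo loads directly with `AutoModelForCausalLM` as in the snippet above; `max_new_tokens=512` is an illustrative choice, not a tuned value:

```python
from transformers import pipeline

# Build a text-generation pipeline; device_map/torch_dtype mirror the snippet above.
pipe = pipeline(
    "text-generation",
    model="yasserrmd/qwen2.5-html-0.5b",
    device_map="auto",
    torch_dtype="auto",
)

messages = [{"role": "user", "content": "generate a sample html for dashboard"}]
# Chat-style input: the pipeline applies the tokenizer's chat template itself.
out = pipe(messages, max_new_tokens=512)
# The result extends the conversation; the last message is the assistant reply.
print(out[0]["generated_text"][-1]["content"])
```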
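
The tag list also includes `peft` with `base_model: Qwen/Qwen2.5-0.5B-Instruct`, so the repository may hold LoRA adapter weights rather than a merged checkpoint; the card does not say which. If it is adapter-only, a minimal loading sketch via `peft` would look like:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model_path = "yasserrmd/qwen2.5-html-0.5b"

tokenizer = AutoTokenizer.from_pretrained(model_path)
# Reads the adapter config, fetches the base model it points at
# (Qwen/Qwen2.5-0.5B-Instruct), and attaches the adapter on top.
# Assumes the repo actually contains a PEFT adapter; see the note above.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Optional: fold the adapter into the base weights for adapter-free inference.
model = model.merge_and_unload()
```

If the uploaded weights are already merged, the plain `AutoModelForCausalLM` path shown in the card is all that is needed.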