prithivMLmods committed · Commit 4b086c6 (verified) · 1 Parent: 3e78783
Update README.md
Files changed (1): README.md (+41, -0)

  - flan
---
![xdfgzsxdfg.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/LUO0VbyTOGIp6pde17MJT.png)

# **t5-Flan-Prompt-Enhance**

T5-Flan-Prompt-Enhance is a fine-tuned model based on **Flan-T5-Small**, designed to **enhance prompts, captions, and annotations**: it has been further trained to improve the quality, clarity, and richness of textual inputs, making them more detailed and expressive.

### Key Features:
1. **Prompt Expansion** – Takes short or vague prompts and enriches them with more context, depth, and specificity.
2. **Caption Enhancement** – Improves captions by adding more descriptive details, making them more informative and engaging.
3. **Annotation Refinement** – Enhances annotations by making them clearer, more structured, and contextually relevant.

### Run with Transformers

```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
import torch

# Use the GPU when available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

# Model checkpoint
model_checkpoint = "prithivMLmods/t5-Flan-Prompt-Enhance"

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

# Model
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

# Text2text-generation pipeline with a mild repetition penalty
enhancer = pipeline(
    "text2text-generation",
    model=model,
    tokenizer=tokenizer,
    repetition_penalty=1.2,
    device=0 if device == "cuda" else -1,
)

max_target_length = 256
prefix = "enhance prompt: "

# Prepend the task prefix, then generate the enhanced prompt
short_prompt = "three chimneys on the roof, green trees and shrubs in front of the house"
answer = enhancer(prefix + short_prompt, max_length=max_target_length)
final_answer = answer[0]["generated_text"]
print(final_answer)
```
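
The same pipeline can be applied to captions and annotations as well as prompts. The sketch below shows batch usage against the same checkpoint; it assumes the single `enhance prompt: ` prefix also covers caption and annotation enhancement, and the example captions are purely illustrative.

```python
from transformers import pipeline

# Sketch: batch-enhance several short captions with the same checkpoint.
# Assumption: the "enhance prompt: " prefix also applies to captions/annotations.
enhancer = pipeline(
    "text2text-generation",
    model="prithivMLmods/t5-Flan-Prompt-Enhance",
    repetition_penalty=1.2,
)

prefix = "enhance prompt: "
captions = [
    "a dog running on the beach",  # illustrative caption
    "city skyline at night",       # illustrative caption
]

for caption in captions:
    result = enhancer(prefix + caption, max_length=256)
    print(f"{caption} -> {result[0]['generated_text']}")
```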

This fine-tuning process allows **T5-Flan-Prompt-Enhance** to generate **high-quality, well-structured, and contextually relevant outputs**, which can be particularly useful for tasks such as text generation, content creation, and AI-assisted writing.