---
library_name: peft
pipeline_tag: summarization
tags:
- transformers
- summarization
- dialogue-summarization
- LoRA
- PEFT
datasets:
- knkarthick/dialogsum
---
# ConvoBrief: LoRA-Enhanced BART Model for Dialogue Summarization
This model is a variant of the `facebook/bart-large-cnn` model, fine-tuned with LoRA (Low-Rank Adaptation) for dialogue summarization. LoRA injects small trainable low-rank matrices into selected attention projections while the base model's weights stay frozen, so only a small fraction of the parameters needs to be updated to adapt the model to conversational text.
## LoRA Configuration
- r: 8 (rank of the low-rank update matrices)
- lora_alpha: 8 (scaling factor applied to the LoRA updates)
- target_modules: ["q_proj", "v_proj"] (the attention query and value projections receive LoRA adapters)
- lora_dropout: 0.05 (dropout applied within the LoRA layers)
- bias: "lora_only" (only the bias terms of the LoRA layers are trained)
- task_type: SEQ_2_SEQ_LM (sequence-to-sequence language modeling, used here for dialogue summarization)
This model has been fine-tuned using the PEFT (Parameter-Efficient Fine-Tuning) library: only the LoRA adapter parameters are updated during training, while the base BART weights remain frozen, keeping memory and compute costs low.
## Usage
Use this LoRA-enhanced BART model to distill concise yet informative summaries from conversational text:
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

# Load the PEFT config and the frozen base model
config = PeftConfig.from_pretrained("Ketan3101/ConvoBrief")
base_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

# Attach the LoRA adapter weights to the base model
model = PeftModel.from_pretrained(base_model, "Ketan3101/ConvoBrief")

# Load the base model's tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

# Define a pipeline for dialogue summarization
summarization_pipeline = pipeline(
    "summarization",
    model=model,
    tokenizer=tokenizer,
)

# Example dialogue for summarization
dialogue = [
    "#Person1#: Happy Birthday, this is for you, Brian.",
    "#Person2#: I'm so happy you remember, please come in and enjoy the party. Everyone's here, I'm sure you have a good time.",
    "#Person1#: Brian, may I have a pleasure to have a dance with you?",
    "#Person2#: Ok.",
    "#Person1#: This is really wonderful party.",
    "#Person2#: Yes, you are always popular with everyone. and you look very pretty today.",
    "#Person1#: Thanks, that's very kind of you to say. I hope my necklace goes with my dress, and they both make me look good I feel.",
    "#Person2#: You look great, you are absolutely glowing.",
    "#Person1#: Thanks, this is a fine party. We should have a drink together to celebrate your birthday",
]

# Combine the dialogue turns into a single string
full_dialogue = " ".join(dialogue)

# Generate a summary
summary = summarization_pipeline(full_dialogue, max_length=150, min_length=40, do_sample=True)

print("Original Dialogue:\n", full_dialogue)
print("Generated Summary:\n", summary[0]["summary_text"])
```
## Framework versions
- PEFT 0.4.0