---
language: en
license: mit
tags:
- summarization
- t5
- text-to-text
- model
- fine-tuned
---
# Fine-tuned T5 Model for Text Summarization

This model is a fine-tuned version of the T5 model (`t5-small`) for text summarization tasks. It has been trained on a diverse set of text data to generate concise, coherent summaries from input text.
## Model Overview

- Model Type: T5 (Text-to-Text Transfer Transformer)
- Base Model: `t5-small`
- Task: Text summarization
- Language: English (other languages may be supported depending on the dataset used)
## Intended Use
This model is designed to summarize long documents, articles, or any form of textual content into shorter, coherent summaries. It can be used for tasks such as:
- Summarizing news articles
- Generating abstracts for academic papers
- Condensing lengthy documents
- Summarizing customer feedback or reviews
## Model Details
- Fine-Tuned On: A custom dataset containing text and corresponding summaries.
- Input: Text (e.g., news articles, papers, or long-form content)
- Output: A concise summary of the input text
## How to Use

To use this model for text summarization, follow the code example below (replace `username/fine_tuned_t5` with the actual model repository ID):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Load the fine-tuned model and tokenizer
model = T5ForConditionalGeneration.from_pretrained("username/fine_tuned_t5")
tokenizer = T5Tokenizer.from_pretrained("username/fine_tuned_t5")

# Input text for summarization
input_text = "Your long input text here."

# Tokenize (inputs longer than 512 tokens are truncated) and generate a summary
inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True)
summary_ids = model.generate(inputs["input_ids"], max_length=150, num_beams=4, early_stopping=True)

# Decode the summary
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

Note: depending on how the model was fine-tuned, you may need to prepend the `summarize: ` task prefix used by the base T5 checkpoints to the input text.
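Because inputs beyond 512 tokens are truncated, very long documents lose content in a single pass. One common workaround is to split the document into chunks, summarize each chunk, and then combine (or re-summarize) the partial summaries. The helper below is a minimal, hypothetical sketch of the chunking step; it splits on whitespace, using word count as a rough proxy for token count and leaving headroom below the 512-token limit:

```python
# Hypothetical helper: split a long document into word-based chunks so each
# chunk stays safely within the model's 512-token input limit. Word counts
# only approximate token counts, hence the conservative default.
def chunk_text(text: str, max_words: int = 350) -> list[str]:
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Each chunk can then be passed through the summarization code above
# independently, and the partial summaries concatenated for a final pass.
doc = "word " * 1000  # a stand-in for a long document (1000 words)
chunks = chunk_text(doc)
print(len(chunks))  # 3
```

Word-based splitting can cut a sentence mid-thought; splitting on sentence or paragraph boundaries instead usually yields more coherent chunk summaries.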