Update README.md
README.md (CHANGED)
````diff
@@ -32,11 +32,9 @@ Built with [PyTorch Lightning](https://www.pytorchlightning.ai/), this implement
 Each input sample is formatted as follows:
 
 ```
-truefalse: [
+truefalse: [answer] passage: [passage] </s>
 ```
 
-During training, the answer is occasionally replaced with the `[MASK]` token (controlled by a defined masking probability). This strategy encourages the model to learn both to predict the answer and to generate a corresponding question.
-
 ### Target Construction
 
 Each target sample is formatted as:
````
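The corrected `+` line above replaces the truncated input template with the full form `truefalse: [answer] passage: [passage] </s>`. As a minimal illustration (not code from this repository: the helper name, the 0.15 masking probability, and the example passage are all assumptions), building such an input in Python could look like this. The optional `[MASK]` substitution mirrors the training note that this edit removes from the README.

```python
import random

def build_input(answer: str, passage: str, mask_prob: float = 0.15) -> str:
    """Format one sample as: truefalse: [answer] passage: [passage] </s>

    Illustrative sketch only. Occasionally replacing the answer with [MASK]
    follows the masking note in the README; the 0.15 probability is an
    assumption, not a documented value.
    """
    if random.random() < mask_prob:
        answer = "[MASK]"
    return f"truefalse: {answer} passage: {passage} </s>"

# Example: a concrete "yes" answer with a short placeholder passage.
print(build_input("yes", "The Eiffel Tower is located in Paris.", mask_prob=0.0))
# truefalse: yes passage: The Eiffel Tower is located in Paris. </s>
```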
````diff
@@ -68,8 +66,6 @@ The model’s performance was evaluated using BLEU scores for both the generated
 | BLEU-3 | 0.3089 |
 | BLEU-4 | 0.2431 |
 
-Additionally, the answer generation performance is notably strong, achieving a **BLEU-1 score of 90**.
-
 *Note: These metrics offer a quantitative assessment of the model’s quality in generating coherent and relevant question-answer pairs.*
 
 ## How to Use
````
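The BLEU-3 and BLEU-4 figures in the hunk above come from the model card itself; the card does not say which BLEU implementation or tokenization was used. As one possible way to compute comparable numbers (an assumption, not the card's actual evaluation script), NLTK's `corpus_bleu` with whitespace tokenization would look like this:

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Placeholder data: these strings are illustrative, not BoolQ references.
references = [["is the eiffel tower located in paris".split()]]
hypotheses = ["is the eiffel tower in paris".split()]

smooth = SmoothingFunction().method1
bleu3 = corpus_bleu(references, hypotheses, weights=(1/3, 1/3, 1/3), smoothing_function=smooth)
bleu4 = corpus_bleu(references, hypotheses, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)
print(f"BLEU-3: {bleu3:.4f}  BLEU-4: {bleu4:.4f}")
```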
````diff
@@ -82,9 +78,7 @@ from transformers import pipeline
 generator = pipeline("text2text-generation", model="Fares7elsadek/boolq-t5-base-question-generation")
 
 # Example inference:
-input_text = "truefalse: [
-# Alternatively, specify an answer directly:
-# input_text = "truefalse: yes passage: [Your passage here] </s>"
+input_text = "truefalse: [answer] passage: [Your passage here] </s>"
 result = generator(input_text)
 print(result)
 ```
````
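End to end, the corrected inference snippet from the final hunk can be run as shown below. Only the pipeline call and the input template come from the card; the passage text and the "yes" answer are placeholders, and no particular generated output is claimed.

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="Fares7elsadek/boolq-t5-base-question-generation")

# Fill the [answer] and [Your passage here] slots with concrete values.
passage = ("The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars "
           "in Paris, France, completed in 1889.")
input_text = f"truefalse: yes passage: {passage} </s>"

result = generator(input_text)
print(result[0]["generated_text"])  # the generated true/false question
```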