asahi417 committed
Commit b2bffe0
1 Parent(s): 7f103f2

commit files to HF hub

README.md CHANGED
@@ -33,27 +33,27 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value: 0.0
+    value: 9.24
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value: 0.3
+    value: 24.28
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value: 0.21
+    value: 22.71
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value: 54.53
+    value: 83.74
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value: 45.77
+    value: 58.83
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-es-15000-esquad-qg`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-es-15000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-15000) for question generation task on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-es-15000](https://huggingface.co/ckpts/mt5-small-trimmed-es-15000) for question generation task on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-es-15000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-es-15000)
+- **Language model:** [ckpts/mt5-small-trimmed-es-15000](https://huggingface.co/ckpts/mt5-small-trimmed-es-15000)
 - **Language:** es
 - **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -89,14 +89,14 @@ output = pipe("del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la In
 
 |            |   Score | Type    | Dataset                                                          |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore  |   54.53 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| Bleu_1     |    0.33 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| Bleu_2     |    0    | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| Bleu_3     |    0    | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| Bleu_4     |    0    | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| METEOR     |    0.21 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| MoverScore |   45.77 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
-| ROUGE_L    |    0.3  | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| BERTScore  |   83.74 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| Bleu_1     |   25.39 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| Bleu_2     |   17.23 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| Bleu_3     |   12.44 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| Bleu_4     |    9.24 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| METEOR     |   22.71 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| MoverScore |   58.83 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
+| ROUGE_L    |   24.28 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) |
 
 
 
@@ -108,12 +108,12 @@ The following hyperparameters were used during fine-tuning:
 - input_types: paragraph_answer
 - output_types: question
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-es-15000
+- model: ckpts/mt5-small-trimmed-es-15000
 - max_length: 512
 - max_length_output: 32
-- epoch: 6
+- epoch: 15
 - batch: 16
-- lr: 0.0001
+- lr: 0.001
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 4
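The second hunk header above truncates the model card's own usage line (`output = pipe("del <hl> Ministerio de Desarrollo Urbano <hl> , ...`). For readers of this diff, here is a minimal usage sketch, assuming the standard `transformers` text2text-generation pipeline; the Spanish context below is a hypothetical placeholder, not the card's example. The answer span is wrapped in `<hl>` markers and no task prefix is added, since the card lists `prefix_types: None`.

```python
# Sketch of the usage pattern referenced in the hunk header above.
# Assumes the standard transformers text2text-generation pipeline;
# the input sentence is an illustrative placeholder.
from transformers import pipeline

model_id = "vocabtrimmer/mt5-small-trimmed-es-15000-esquad-qg"
pipe = pipeline("text2text-generation", model_id)

# Highlight the answer span with <hl> ... <hl> inside the paragraph.
context = "<hl> Madrid <hl> es la capital de España."
output = pipe(context)
print(output[0]["generated_text"])  # a generated question about the highlighted span
```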
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_esquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.003157515550463275, "Bleu_2": 0.0001752686606369473, "Bleu_3": 6.932714462902121e-10, "Bleu_4": 1.421284011338596e-12}, "test": {"Bleu_1": 0.0032772051494349926, "Bleu_2": 5.695177524987508e-12, "Bleu_3": 7.105851835792891e-15, "Bleu_4": 2.5894135464018913e-16}}
+{"validation": {"Bleu_1": 0.2497301303430144, "Bleu_2": 0.16826958181295876, "Bleu_3": 0.12138670463560199, "Bleu_4": 0.0902292391017187}, "test": {"Bleu_1": 0.253000590280053, "Bleu_2": 0.171669692368683, "Bleu_3": 0.1239449079890073, "Bleu_4": 0.09208676529861734}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.0033104822171441638, "Bleu_2": 0.000188840193588986, "Bleu_3": 7.537787115379621e-10, "Bleu_4": 1.5523808746468753e-12, "METEOR": 0.0021161052625144074, "ROUGE_L": 0.0029004889742446074, "BERTScore": 0.5445500628454133, "MoverScore": 0.4576946960574479}, "test": {"Bleu_1": 0.0032972054283237693, "Bleu_2": 5.745450701039421e-12, "Bleu_3": 7.174832424881181e-15, "Bleu_4": 2.6156143528256543e-16, "METEOR": 0.0020806262871325147, "ROUGE_L": 0.003000302073919095, "BERTScore": 0.5452606138221344, "MoverScore": 0.45772376161968675}}
+{"validation": {"Bleu_1": 0.2620904418588027, "Bleu_2": 0.1779458135621885, "Bleu_3": 0.12907229534812506, "Bleu_4": 0.09634916501545024, "METEOR": 0.22536721225409945, "ROUGE_L": 0.24398952418056397, "BERTScore": 0.8336677030780022, "MoverScore": 0.5838149106598906}, "test": {"Bleu_1": 0.2538653490627803, "Bleu_2": 0.17228039320996297, "Bleu_3": 0.12438517620698875, "Bleu_4": 0.09242972232941353, "METEOR": 0.2271249512493336, "ROUGE_L": 0.24282558125268286, "BERTScore": 0.8373773679731452, "MoverScore": 0.5882745608417501}}
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_esquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff