---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
datasets:
- poem_sentiment
metrics:
- accuracy
model-index:
- name: poem_sentiment
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: poem_sentiment
      type: poem_sentiment
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8857142857142857
---

# poem_sentiment

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the poem_sentiment dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4747
- Accuracy: 0.8857

| Class        | Precision | Recall | F1-score | Support |
|:------------:|:---------:|:------:|:--------:|:-------:|
| 0            | 0.8571    | 0.9474 | 0.9000   | 19      |
| 1            | 0.7222    | 0.7647 | 0.7429   | 17      |
| 2            | 0.9394    | 0.8986 | 0.9185   | 69      |
| Macro avg    | 0.8396    | 0.8702 | 0.8538   | 105     |
| Weighted avg | 0.8893    | 0.8857 | 0.8867   | 105     |
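
The per-class rows above follow the structure of scikit-learn's `classification_report`. A minimal sketch of a `compute_metrics` hook that produces this structure (an assumption about how the metrics were computed, not the verified training code):

```python
import numpy as np
from sklearn.metrics import classification_report

def compute_metrics(eval_pred):
    """Return per-class precision/recall/F1 plus accuracy and averages."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # output_dict=True yields nested dicts keyed by class index, plus
    # 'accuracy', 'macro avg', and 'weighted avg' entries, matching the
    # metric blocks reported in this card.
    return classification_report(labels, preds, output_dict=True)
```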

## Model description

The model is [roberta-base](https://huggingface.co/roberta-base) with a sequence-classification head, fine-tuned to label single lines of English verse with a sentiment class. Apart from the newly initialized classification head, the architecture is unchanged from the base checkpoint.
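
A minimal inference sketch using the `transformers` pipeline. The checkpoint path is a placeholder for wherever this model is saved; since no `id2label` mapping was configured during training, predictions come back as `LABEL_0`/`LABEL_1`/`LABEL_2`:

```python
from transformers import pipeline

# Placeholder path: point this at the saved checkpoint directory
# or its Hub repo id.
classifier = pipeline("text-classification", model="path/to/poem_sentiment")

print(classifier("And there is healing in her wings."))
# e.g. [{'label': 'LABEL_1', 'score': ...}]
```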

## Intended uses & limitations

The model is intended for sentiment classification of short English verse lines, matching its fine-tuning domain. The training set is small (roughly 900 examples, given 112 optimizer steps per epoch at batch size 8) and imbalanced (the largest class accounts for 69 of the 105 validation examples), so accuracy should be expected to drop on prose, long passages, or other out-of-domain text.

## Training and evaluation data

Training used the train split of the [poem_sentiment](https://huggingface.co/datasets/poem_sentiment) dataset (verse lines drawn from Project Gutenberg poems), with evaluation on its 105-example validation split. In the dataset's label scheme, 0 = negative, 1 = positive, 2 = no_impact, and 3 = mixed; only classes 0-2 appear in the validation metrics above.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
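
For reference, a sketch of a `Trainer` setup matching these hyperparameters. This is a reconstruction, not the original training script: the output path is illustrative, and the `verse_text` column name follows the Hub dataset schema.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("poem_sentiment")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=4  # poem_sentiment defines four classes
)

def tokenize(batch):
    return tokenizer(batch["verse_text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="poem_sentiment",   # illustrative output path
    learning_rate=5e-5,            # Adam betas/epsilon are the Trainer defaults
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    evaluation_strategy="epoch",   # the card reports metrics once per epoch
)

# A compute_metrics hook like the sketch earlier in this card would
# reproduce the per-class metric tables.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```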

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 (class 0) | F1 (class 1) | F1 (class 2) | Accuracy | Macro F1 | Weighted F1 |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:------------:|:------------:|:--------:|:--------:|:-----------:|
| 1.0922        | 1.0   | 112  | 0.8825          | 0.0000       | 0.0000       | 0.7931       | 0.6571   | 0.2644   | 0.5212      |
| 0.6877        | 2.0   | 224  | 0.4747          | 0.9000       | 0.7429       | 0.9185       | 0.8857   | 0.8538   | 0.8867      |
| 0.5299        | 3.0   | 336  | 0.6595          | 0.8205       | 0.5833       | 0.8980       | 0.8476   | 0.7673   | 0.8330      |
| 0.9027        | 4.0   | 448  | 0.5981          | 0.8485       | 0.6875       | 0.9103       | 0.8667   | 0.8154   | 0.8631      |
| 0.4588        | 5.0   | 560  | 0.7815          | 0.8293       | 0.6471       | 0.8741       | 0.8286   | 0.7835   | 0.8292      |

Epoch 2 gives both the lowest validation loss and the highest accuracy; the evaluation results reported at the top of this card come from that checkpoint.


### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0