Improve Model Card: Add Metadata, Usage Example, and Paper Information
#1
by nielsr (HF staff) - opened

README.md CHANGED
@@ -1,199 +1,91 @@
---
library_name: transformers
---

-<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

-This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
-- **Developed by:**
-- **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
-
-### Model Sources
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]

## Uses

-<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

-[More Information Needed]
-
-### Downstream Use
-
-[More Information Needed]

### Out-of-Scope Use

-[More Information Needed]

## Bias, Risks, and Limitations

-[More Information Needed]

### Recommendations

-Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

## Training Details

### Training Data

-[More Information Needed]
-
-### Training Procedure
-
-<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
-#### Preprocessing [optional]
-
-[More Information Needed]
-
-#### Training Hyperparameters
-
-- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
-#### Speeds, Sizes, Times [optional]
-
-<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
-[More Information Needed]
-
-## Evaluation
-
-<!-- This section describes the evaluation protocols and provides the results. -->
-
-### Testing Data, Factors & Metrics
-
-#### Testing Data
-
-<!-- This should link to a Dataset Card if possible. -->
-
-[More Information Needed]
-
-#### Factors
-
-<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
-[More Information Needed]
-
-#### Metrics
-
-<!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
-[More Information Needed]
-
-### Results
-
-[More Information Needed]
-
-#### Summary
-
-## Model Examination [optional]
-
-<!-- Relevant interpretability work for the model goes here -->
-
-[More Information Needed]
-
-## Environmental Impact
-
-<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
-Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
-- **Hardware Type:** [More Information Needed]
-- **Hours used:** [More Information Needed]
-- **Cloud Provider:** [More Information Needed]
-- **Compute Region:** [More Information Needed]
-- **Carbon Emitted:** [More Information Needed]
-
-## Technical Specifications [optional]
-
-### Model Architecture and Objective
-
-[More Information Needed]
-
-### Compute Infrastructure
-
-[More Information Needed]
-
-#### Hardware
-
-[More Information Needed]
-
-#### Software
-
-[More Information Needed]
-
-## Citation [optional]
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-**BibTeX:**
-
-[More Information Needed]
-
-**APA:**
-
-[More Information Needed]
-
-## Glossary [optional]
-
-<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
-[More Information Needed]
-
-## More Information [optional]
-
-[More Information Needed]
-
-## Model Card Authors [optional]
-
-[More Information Needed]

---
license: cc-by-4.0
library_name: transformers
pipeline_tag: text-generation
---

# Efficient Test-Time Scaling via Self-Calibration

This model uses self-calibration to improve the efficiency of test-time scaling methods for LLMs. It generates calibrated confidence scores to improve the accuracy of weighted aggregation and best-of-N methods. It is based on the paper [Efficient Test-Time Scaling via Self-Calibration](https://arxiv.org/abs/2503.00031).

## Model Details

### Model Description

This model is a fine-tuned version of `meta-llama/Llama-3.1-8B-Instruct` trained to generate calibrated confidence scores for its responses. These confidence scores can then be used to improve the efficiency of various test-time scaling methods, such as weighted aggregation and best-of-N, as described in the associated paper.

- **Developed by:** Chengsong Huang, Langlin Huang, Jixuan Leng, Jiacheng Liu, and Jiaxin Huang
- **License:** cc-by-4.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct

### Model Sources

- **Repository:** [HINT-lab/Self-Calibration](https://github.com/HINT-lab/Self-Calibration)
- **Paper:** [Efficient Test-Time Scaling via Self-Calibration (arXiv:2503.00031)](https://arxiv.org/abs/2503.00031)
- **Dataset:** [HINT-lab/Llama_3.1-8B-Instruct-Self-Calibration](https://huggingface.co/datasets/HINT-lab/Llama_3.1-8B-Instruct-Self-Calibration)

## Uses

### Direct Use

This model can be used directly to generate text with associated confidence scores. This is particularly useful for applications that require reliability estimates for generated text, such as question answering and decision-making.
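
A minimal sketch of direct use with the standard `transformers` API is shown below. The repository id and the confidence-eliciting instruction in the prompt are illustrative assumptions; the exact prompt format used during self-calibration training is defined in the GitHub repository linked above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id: replace with this model's actual Hub id.
model_id = "HINT-lab/Llama-3.1-8B-Instruct-Self-Calibration"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt that asks the model to append a confidence score.
messages = [{
    "role": "user",
    "content": "What is 17 * 24? End your reply with 'Confidence: <value between 0 and 1>'.",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```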

### Downstream Use

The primary downstream use case is to integrate this model with test-time scaling methods like self-consistency and best-of-N to improve their efficiency and accuracy. The generated confidence scores allow for weighted aggregation of samples or selection of the most confident responses, reducing computational cost without sacrificing performance.
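
The sketch below illustrates both strategies on already-sampled `(answer, confidence)` pairs. It is a simplified stand-in for the repository's implementation, and the example values are made up.

```python
from collections import defaultdict

# Hypothetical sampled responses: (extracted answer, self-reported confidence).
samples = [("408", 0.92), ("408", 0.85), ("406", 0.30)]

# Best-of-N: keep the single most confident response.
best_answer, best_conf = max(samples, key=lambda s: s[1])

# Confidence-weighted aggregation: instead of counting votes equally
# (plain self-consistency), sum confidence per distinct answer.
scores = defaultdict(float)
for answer, conf in samples:
    scores[answer] += conf
weighted_answer = max(scores, key=scores.get)

print(best_answer, weighted_answer)  # "408", "408"
```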

### Out-of-Scope Use

This model is not designed for tasks where confidence estimation is not relevant, such as general-purpose text generation without a need for reliability assessment.

## Bias, Risks, and Limitations

The model may inherit biases present in the original Llama-3.1-8B-Instruct model. Additionally, while the self-calibration framework aims to improve confidence estimation, it is not guaranteed to be perfectly calibrated in all scenarios. Over-reliance on the confidence scores could lead to suboptimal performance in cases where the model misjudges its own uncertainty.

### Recommendations

Users should be aware of potential biases and limitations in the confidence estimates. It is recommended to evaluate the model on the specific downstream task and, if necessary, recalibrate the confidence scores.

## How to Get Started with the Model

The snippet below uses the `SampleInference` helper from the GitHub repository linked above; `model_name`, `eos_token_str`, `I`, `prompt`, `method`, and `dataset_handler` are placeholders to fill in for your setup.

```python
import torch
# `SampleInference` is provided by the Self-Calibration repository
# (https://github.com/HINT-lab/Self-Calibration); import it from there.

inference = SampleInference(
    model_name=model_name,            # e.g. this checkpoint's Hub repo id
    eos_token_str=eos_token_str,      # EOS string of the underlying tokenizer
    I=I,                              # confidence-querying prompt; see the repository
    torch_dtype=torch.float16,
    device_map="auto",
)

result = inference.run_inference_interactive(
    query=prompt,                     # the question to answer
    method=method,                    # one of "earlyexit", "asc_conf", "asc", "sc", "sc_conf", "best_of_n"
    threshold=0.7,                    # confidence threshold for earlyexit, asc, and asc_conf
    max_samples=16,                   # maximum number of samples
    temperature=0.8,
    extract_handler=dataset_handler,  # answer-extraction handler for the target dataset
)
```

## Training Details

### Training Data

The model was trained using a dataset generated from existing benchmarks, as described in the paper. The process involves sampling multiple responses for each query and then using a "confidence querying prompt" to elicit confidence scores for each response. This labeled data is then used to train the model to generate both text and associated confidence scores. More information can be found in the Hugging Face dataset linked above.
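
As a rough sketch of that pipeline (the actual prompts, sampling settings, and label format live in the repository and paper; `sample` and `query_confidence` below are hypothetical helpers), the construction loop looks roughly like this:

```python
# Hypothetical illustration of the data-generation loop described above.
def build_calibration_data(model, queries, n_samples=8):
    examples = []
    for query in queries:
        for _ in range(n_samples):
            response = model.sample(query)                        # sample one response
            confidence = model.query_confidence(query, response)  # "confidence querying prompt"
            examples.append(
                {"query": query, "response": response, "confidence": confidence}
            )
    return examples
```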

## Citation

```bibtex
@misc{huang2025efficienttesttimescalingselfcalibration,
      title={Efficient Test-Time Scaling via Self-Calibration},
      author={Chengsong Huang and Langlin Huang and Jixuan Leng and Jiacheng Liu and Jiaxin Huang},
      year={2025},
      eprint={2503.00031},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2503.00031},
}
```