Upload README.md with huggingface_hub
README.md
CHANGED
````diff
@@ -43,7 +43,7 @@ cd data && unzip data.zip && cd ..
 
 ## 🏃Quick Start
 
-> First, we train the model to learn how to compress (step 1). Then, we perform inference on the test set to obtain output results (step 2). Finally, we evaluate the output results (step 3).
+> First, we train the model to learn when to compress and how to compress (step 1). Then, we perform inference on the test set to obtain output results (step 2). Finally, we evaluate the output results (step 3).
 
 ### Step 1. Training
 
@@ -57,6 +57,12 @@ Currently, the script's parameters are set to run on a machine with 4 A800 GPUs.
 
 ### Step 2. Inference
 
+<details>
+<summary><b>Inference with a downloaded model</b></summary>
+
+If you are downloading a trained model from Hugging Face, please set the `model_path` parameter in `inference.sh` to the absolute path of the model. The values of the other parameters, `ckpt` and `model_tag`, will be ignored.
+</details>
+
 To execute the inference, run the following command:
 
 ```bash
````
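The override behavior described in the added `<details>` note might look like the following inside `inference.sh`. This is a hypothetical sketch, not the project's actual script: only `model_path`, `ckpt`, and `model_tag` come from the README; the branching logic and messages are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of how inference.sh could honor model_path.
# Per the README note: when model_path points at a downloaded model,
# ckpt and model_tag are ignored.

model_path="/abs/path/to/downloaded-model"  # absolute path of the Hugging Face download
ckpt=""        # ignored when model_path is set
model_tag=""   # ignored when model_path is set

if [ -n "$model_path" ]; then
  echo "Loading model from: $model_path"
else
  echo "Loading checkpoint $ckpt with tag $model_tag"
fi
```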