philippds committed
Commit a2d7cf1 · verified · 1 Parent(s): b9e5d19

Update README.md

Files changed (1)
  1. README.md +44 -42
README.md CHANGED
@@ -1,42 +1,44 @@
- ---
- library_name: hivex
- original_train_name: AerialWildfireSuppression_difficulty_2_task_7_run_id_1_train
- tags:
- - hivex
- - hivex-aerial-wildfire-suppression
- - reinforcement-learning
- - multi-agent-reinforcement-learning
- model-index:
- - name: hivex-AWS-PPO-baseline-task-7-difficulty-2
-   results:
-   - task:
-       type: sub-task
-       name: find_fire
-       task-id: 7
-       difficulty-id: 2
-     dataset:
-       name: hivex-aerial-wildfire-suppression
-       type: hivex-aerial-wildfire-suppression
-     metrics:
-     - type: crash_count
-       value: 0.05341013381257653 +/- 0.05198102898479312
-       name: Crash Count
-       verified: true
-     - type: cumulative_reward
-       value: 86.7614631652832 +/- 15.132148131374196
-       name: Cumulative Reward
-       verified: true
- ---
-
- This model serves as the baseline for the **Aerial Wildfire Suppression** environment, trained and tested on task <code>7</code> with difficulty <code>2</code> using the Proximal Policy Optimization (PPO) algorithm.<br><br>
-
- Environment: **Aerial Wildfire Suppression**<br>
- Task: <code>7</code><br>
- Difficulty: <code>2</code><br>
- Algorithm: <code>PPO</code><br>
- Episode Length: <code>3000</code><br>
- Training <code>max_steps</code>: <code>1800000</code><br>
- Testing <code>max_steps</code>: <code>180000</code><br><br>
-
- Train & Test [Scripts](https://github.com/hivex-research/hivex)<br>
- Download the [Environment](https://github.com/hivex-research/hivex-environments)
+ ---
+ library_name: hivex
+ original_train_name: AerialWildfireSuppression_difficulty_2_task_7_run_id_1_train
+ tags:
+ - hivex
+ - hivex-aerial-wildfire-suppression
+ - reinforcement-learning
+ - multi-agent-reinforcement-learning
+ model-index:
+ - name: hivex-AWS-PPO-baseline-task-7-difficulty-2
+   results:
+   - task:
+       type: sub-task
+       name: find_fire
+       task-id: 7
+       difficulty-id: 2
+     dataset:
+       name: hivex-aerial-wildfire-suppression
+       type: hivex-aerial-wildfire-suppression
+     metrics:
+     - type: crash_count
+       value: 0.05341013381257653 +/- 0.05198102898479312
+       name: Crash Count
+       verified: true
+     - type: cumulative_reward
+       value: 86.7614631652832 +/- 15.132148131374196
+       name: Cumulative Reward
+       verified: true
+ ---
+
+ This model serves as the baseline for the **Aerial Wildfire Suppression** environment, trained and tested on task <code>7</code> with difficulty <code>2</code> using the Proximal Policy Optimization (PPO) algorithm.<br><br>
+
+ Environment: **Aerial Wildfire Suppression**<br>
+ Task: <code>7</code><br>
+ Difficulty: <code>2</code><br>
+ Algorithm: <code>PPO</code><br>
+ Episode Length: <code>3000</code><br>
+ Training <code>max_steps</code>: <code>1800000</code><br>
+ Testing <code>max_steps</code>: <code>180000</code><br><br>
+
+ Train & Test [Scripts](https://github.com/hivex-research/hivex)<br>
+ Download the [Environment](https://github.com/hivex-research/hivex-environments)
+
+ [hivex-paper]: https://arxiv.org/abs/2501.04180
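
The hivex environments are built on Unity ML-Agents, so the downloaded binary can be driven with the ML-Agents low-level Python API. The sketch below shows that loop end to end; the binary path and the `"task"`/`"difficulty"` parameter keys are illustrative assumptions rather than values taken from the hivex scripts, so consult the linked train & test scripts for the exact setup.

```python
# Minimal sketch: load the downloaded Aerial Wildfire Suppression binary and
# step it once with random actions. The path and parameter keys below are
# assumptions; see the hivex train/test scripts for the exact setup.
from mlagents_envs.environment import UnityEnvironment
from mlagents_envs.side_channel.environment_parameters_channel import (
    EnvironmentParametersChannel,
)

params = EnvironmentParametersChannel()
env = UnityEnvironment(
    file_name="path/to/AerialWildfireSuppression",  # hypothetical binary path
    side_channels=[params],
    no_graphics=True,
)
params.set_float_parameter("task", 7.0)        # assumed parameter key
params.set_float_parameter("difficulty", 2.0)  # assumed parameter key

env.reset()
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

# One step with random actions, just to confirm the environment is wired up.
decision_steps, terminal_steps = env.get_steps(behavior_name)
env.set_actions(behavior_name, spec.action_spec.random_action(len(decision_steps)))
env.step()

env.close()
```

To reproduce the reported results, including the <code>1800000</code>-step training and <code>180000</code>-step testing runs, use the train & test scripts linked above rather than this snippet.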