klamike committed
Commit 26026bb · verified · 1 parent: eddb2e9

Convert dataset to Parquet (part 00010-of-00011) (#11)


- Convert dataset to Parquet (part 00010-of-00011) (3b52b95ef0966f8aa7f96dcabab271b688d47201)
- Delete loading script (5579368808abd5378774ef73bbe7858a571fa4d4)
- Delete data file (30d2541f82b9cfca5804a6aa70a2c0c572204506)
- Delete data file (5c1a1bc169b38dcc698be0441756c217726b69e8)
- Delete data file (04d9596100c8e372f78ff71638b5e4177c8e7264)
- Delete data file (c8178a4175ad053333213c516614923d62b80dbb)
- Delete data file (8ed2a7906003e2afade52c18e3ccbd4583e771e4)
- Delete data file (86aaf92a7ee135996321c98f3412f392193a7e6c)
- Delete data file (c536e2852e8cd8053ff36581f0209c91c55334ef)
- Delete data file (a95581fc60fb99c57388b303b0f1ee89283f3b74)
- Delete data file (6406488a4a016d3624a1e97ab5f158f53af30ff6)
- Delete data file (261328175a0c6160c331d9eab856f10b0efac8e4)
- Delete data file (a75116b95aeed868eaff6578b0bed12c7a3eefbf)
- Delete data file (e5821f9ee90015b3fd077b40b22ce6eff3f42c79)
- Delete data file (5793a02bcadc3aee6b6a74baa39743bdbbd93768)
- Delete data file (12fb66c29cca7b78772b190190697ab6f3e57889)
- Delete data file (17d89f33b57d14d53aad22cc8d8683f90c333e50)
- Delete data file (ae75d94c3276e09361e541c04c93c45fefc4558b)
- Delete data file (465c9b0514d076a000e986f55bc4e66cfad1faef)
- Delete data file (3bf25cc99f5c70bf21f01c766188f986413abdda)
- Delete data file (fb6f12d922a10329b3547b08d806d6395ed651f1)
- Delete data file (acba017d9571041924a6816556dd8dcb9df5d9ec)
- Delete data file (21dfc1cff3f4f55269207c9f4049a77a23747b66)
- Delete data file (9f7638283a60ee0e05fbaef0320564a010ebfc75)
- Delete data file (433256cbf33a871b984e71b831fc447b41661922)
- Delete data file (225c3e8274dda71fa2d16077ddff92f75d84d183)
- Delete data file (cb5192a8f942cbd40b48e77a96e84c6417ccf745)
- Delete data file (20c240fa81b9dcc2bf028add29d6e575ee88b8c3)
- Delete data file (3110b78d4de0a7c8648240b96e117b3e46b72317)
- Delete data file (2bbf6873cb4609381ae361d50518b65fdb235407)
- Delete data file (28a72b411ba0f60ef06dee65278615ea16add332)
- Delete data file (6d877a25ef1b2485b129bf16ca3a56ee7846557e)
- Delete data file (a34f2277f7e2388c92b7bd8631e3d152d515640c)
- Delete data file (bda0f7032173a8f10323a050ba304c0880f56f62)
- Delete data file (2d6a6b6b3bbc0787a6f0f4f74078d6a354c0f720)
- Delete data file (f7fff2647f969c2afc4ad47bc59f47c6e129f709)
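With this conversion series, the Hub can serve the dataset directly from the Parquet shards, so the custom loading script (deleted below) and `trust_remote_code=True` are no longer needed. A minimal sketch of loading the converted data, assuming the `1888_rte` config added to README.md in this commit:

```python
from datasets import load_dataset

# "1888_rte" is the config registered in README.md by this commit; streaming
# avoids downloading all ~106 test shards (~484 MB each) up front.
ds = load_dataset("PGLearn/PGLearn-Medium-1888_rte", "1888_rte", split="test", streaming=True)
print(next(iter(ds)).keys())
```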

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. infeasible/ACOPF/meta.h5.gz → 1888_rte/test-00077-of-00106.parquet +2 -2
  2. case.json.gz → 1888_rte/test-00078-of-00106.parquet +2 -2
  3. infeasible/DCOPF/dual.h5.gz → 1888_rte/test-00079-of-00106.parquet +2 -2
  4. infeasible/DCOPF/meta.h5.gz → 1888_rte/test-00080-of-00106.parquet +2 -2
  5. 1888_rte/test-00081-of-00106.parquet +3 -0
  6. 1888_rte/test-00082-of-00106.parquet +3 -0
  7. 1888_rte/test-00083-of-00106.parquet +3 -0
  8. 1888_rte/test-00084-of-00106.parquet +3 -0
  9. 1888_rte/test-00085-of-00106.parquet +3 -0
  10. 1888_rte/test-00086-of-00106.parquet +3 -0
  11. 1888_rte/test-00087-of-00106.parquet +3 -0
  12. 1888_rte/test-00088-of-00106.parquet +3 -0
  13. 1888_rte/test-00089-of-00106.parquet +3 -0
  14. 1888_rte/test-00090-of-00106.parquet +3 -0
  15. 1888_rte/test-00091-of-00106.parquet +3 -0
  16. 1888_rte/test-00092-of-00106.parquet +3 -0
  17. 1888_rte/test-00093-of-00106.parquet +3 -0
  18. 1888_rte/test-00094-of-00106.parquet +3 -0
  19. 1888_rte/test-00095-of-00106.parquet +3 -0
  20. 1888_rte/test-00096-of-00106.parquet +3 -0
  21. 1888_rte/test-00097-of-00106.parquet +3 -0
  22. 1888_rte/test-00098-of-00106.parquet +3 -0
  23. 1888_rte/test-00099-of-00106.parquet +3 -0
  24. 1888_rte/test-00100-of-00106.parquet +3 -0
  25. 1888_rte/test-00101-of-00106.parquet +3 -0
  26. 1888_rte/test-00102-of-00106.parquet +3 -0
  27. 1888_rte/test-00103-of-00106.parquet +3 -0
  28. 1888_rte/test-00104-of-00106.parquet +3 -0
  29. 1888_rte/test-00105-of-00106.parquet +3 -0
  30. PGLearn-Medium-1888_rte.py +0 -429
  31. README.md +9 -1
  32. config.toml +0 -42
  33. infeasible/ACOPF/dual.h5.gz +0 -3
  34. infeasible/ACOPF/primal.h5.gz +0 -3
  35. infeasible/DCOPF/primal.h5.gz +0 -3
  36. infeasible/SOCOPF/dual.h5.gz +0 -3
  37. infeasible/SOCOPF/meta.h5.gz +0 -3
  38. infeasible/SOCOPF/primal.h5.gz +0 -3
  39. infeasible/input.h5.gz +0 -3
  40. test/ACOPF/dual.h5.gz +0 -3
  41. test/ACOPF/meta.h5.gz +0 -3
  42. test/ACOPF/primal.h5.gz +0 -3
  43. test/DCOPF/dual.h5.gz +0 -3
  44. test/DCOPF/meta.h5.gz +0 -3
  45. test/DCOPF/primal.h5.gz +0 -3
  46. test/SOCOPF/dual.h5.gz +0 -3
  47. test/SOCOPF/meta.h5.gz +0 -3
  48. test/SOCOPF/primal.h5.gz +0 -3
  49. test/input.h5.gz +0 -3
  50. train/ACOPF/dual.h5.gz +0 -3
infeasible/ACOPF/meta.h5.gz → 1888_rte/test-00077-of-00106.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b2c6cf37eff40ed8fc1cd9a2ac11f7fffb20d0a0cb2240b6f351812dd01b3a35
- size 1174026
+ oid sha256:5acf26c648a4823d0a8739b7034391b3f1e85ea632bd2cf52a968a164d9f2700
+ size 484194525
case.json.gz → 1888_rte/test-00078-of-00106.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:04bef6d8d495d8968d096fdb549b8e646430bfcbd07037199ba0c3d483634c39
- size 1448847
+ oid sha256:bc491c87023d2ac3daf06a109542f844a65ca8a38eb2d21c0930788f52c392ae
+ size 484182538
infeasible/DCOPF/dual.h5.gz → 1888_rte/test-00079-of-00106.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a14b5a5e883f7f9c885e69b7c63624434d4affa38a71dd9843c9a9f0439a3693
- size 133608111
+ oid sha256:f71f84edccf667c48e7bd61d9e884dbaa353555bd3436d1473f494c8a9fe5c0b
+ size 484231867
infeasible/DCOPF/meta.h5.gz → 1888_rte/test-00080-of-00106.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:73f147a445190cc6de1a6efdbcaf395e351720c61f4b71eda47624110b2a8172
- size 1188365
+ oid sha256:179379a9b1ddb2c2f58631657f441776663bd96c2851bed424b8bcee341f5434
+ size 484252131
1888_rte/test-00081-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d95c599e97957c09320192429c316ff366d72f5844d36d0e25aa615bc5bc01f
+ size 484270118
1888_rte/test-00082-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:443160958f16c2ffc9473b84b55fcfea1adabbd3d929449a1707457894a59cc3
+ size 484008809
1888_rte/test-00083-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:510f8c2258b689a43176462dd5d105a4a78c1af7a92049e81d78a5bbcb6033c3
+ size 484346179
1888_rte/test-00084-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6345c4d20cb7a8d8eeafe66d44cafa46861aa11cbecab5d37b0da19417293136
+ size 484263539
1888_rte/test-00085-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f1c7d72614779de8e4a26969b5230e94bb9f4ccd51de75a4cc81c93541f74391
+ size 484162680
1888_rte/test-00086-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87ab687417a26cec0c668aecfdba464453bc06acbd1c3d3d7059538a772038d0
+ size 484131364
1888_rte/test-00087-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74e1113323b4aa49fe786393970b55a4ddf0e4983929425fb0e68833de9cad54
+ size 484146897
1888_rte/test-00088-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ef7208238c3f0980d5940ec65a7e6b9b5e101ddbd07548cf93b5727f654142e
+ size 484014126
1888_rte/test-00089-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:738e7b16c3c682dbe628a4dfc0717b886014f82665ea22617d1f82363a7c0488
+ size 484035704
1888_rte/test-00090-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4c9676fe0fbd3aefb9993b0272a64e8423efd087cd9dcd724052c54b3e9e851
+ size 484126856
1888_rte/test-00091-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfddc95b58664a4d22378bf7337e18f136e5786b9ab31a88f53bb8e2206d2c6f
+ size 484132532
1888_rte/test-00092-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbef47cbffff7233dcb8ebc36fe1781087f8800800d80fbb910d8993d1328a3c
+ size 484229583
1888_rte/test-00093-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:13af02a56451f49f176b7e71d58f0ea3e9c681c0b2c45b62db28ddc6fbc5c9fa
+ size 484222804
1888_rte/test-00094-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8610c4204b61e9fe735f78591f82b406cc287aa590016838de80cd98f01770ee
+ size 484258237
1888_rte/test-00095-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d29ad618db82f3a61b77e80ae3810ecbcb731eddce8f7878103c3f2e4e2e8a00
+ size 484302220
1888_rte/test-00096-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7805430898242c20fa1404c61ccdab53400c35e314e5d04cc9b1fb2a63e7fefb
+ size 484165646
1888_rte/test-00097-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6cdd51bf40954ded4cc117570d2f42cb8069df321fda9b53f7d8d041bf14539
+ size 484112826
1888_rte/test-00098-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51596ae2ce5a4e206fed9d88d1b32b86c5a337a86bf96ff08e6eb931cfebe609
+ size 484237234
1888_rte/test-00099-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5aca615fdda4bfd3a3143b7de6b12295ab893f6f2e769638a6dfd541ff7bd445
+ size 484302042
1888_rte/test-00100-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15fe28f38c44679ecf544f638874c59531d7052bbb936a9558c13cc0a44f2e70
+ size 484129090
1888_rte/test-00101-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c76e5c3050579404b6d9ef3df384f3db7c7ee361f63e32286cc78bc0b8535de5
+ size 484367186
1888_rte/test-00102-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2be65e9e383240301242a8707ac927c8e2adefb6a1eb96539664f74fa8cb8e3
+ size 484177704
1888_rte/test-00103-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56d4b183119a844403f7733e376afd6716e1d3e1337bdf2be3fd7d6ecff75093
+ size 484192797
1888_rte/test-00104-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eda253e6a680961122edac643021fc700fafb0c657ff6ec6b944a4da45c5a512
+ size 484235175
1888_rte/test-00105-of-00106.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03ed565c053f43d1aaa7f8ca9adf4970b9d23a5f0d1097c3839605a94605034e
+ size 484175696
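Each shard above is a Git LFS pointer to a ~484 MB Parquet file. A sketch of inspecting a single shard without pulling the whole split (assumes `huggingface_hub` and `pyarrow` are installed; the shard path is taken from the file list above):

```python
from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Fetch one shard from the list above; repo_type="dataset" targets the dataset repo.
path = hf_hub_download(
    repo_id="PGLearn/PGLearn-Medium-1888_rte",
    filename="1888_rte/test-00081-of-00106.parquet",
    repo_type="dataset",
)
print(pq.read_metadata(path))  # schema and row counts, without decoding the columns
```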
PGLearn-Medium-1888_rte.py DELETED
@@ -1,429 +0,0 @@
- from __future__ import annotations
- from dataclasses import dataclass
- from pathlib import Path
- import json
- import shutil
-
- import datasets as hfd
- import h5py
- import pgzip as gzip
- import pyarrow as pa
-
- # ┌──────────────┐
- # │   Metadata   │
- # └──────────────┘
-
- @dataclass
- class CaseSizes:
-     n_bus: int
-     n_load: int
-     n_gen: int
-     n_branch: int
-
- CASENAME = "1888_rte"
- SIZES = CaseSizes(n_bus=1888, n_load=1000, n_gen=290, n_branch=2531)
- NUM_TRAIN = 371587
- NUM_TEST = 92897
- NUM_INFEASIBLE = 35516
- SPLITFILES = {
-     "train/SOCOPF/dual.h5.gz": ["train/SOCOPF/dual/xaa", "train/SOCOPF/dual/xab", "train/SOCOPF/dual/xac"],
- }
-
- URL = "https://huggingface.co/datasets/PGLearn/PGLearn-Medium-1888_rte"
- DESCRIPTION = """\
- The 1888_rte PGLearn optimal power flow dataset, part of the PGLearn-Medium collection. \
- """
- VERSION = hfd.Version("1.0.0")
- DEFAULT_CONFIG_DESCRIPTION="""\
- This configuration contains feasible input, primal solution, and dual solution data \
- for the ACOPF, DCOPF, and SOCOPF formulations on the {case} system. For case data, \
- download the case.json.gz file from the `script` branch of the repository. \
- https://huggingface.co/datasets/PGLearn/PGLearn-Medium-1888_rte/blob/script/case.json.gz
- """
- USE_ML4OPF_WARNING = """
- ================================================================================================
- Loading PGLearn-Medium-1888_rte through the `datasets.load_dataset` function may be slow.
-
- Consider using ML4OPF to directly convert to `torch.Tensor`; for more info see:
- https://github.com/AI4OPT/ML4OPF?tab=readme-ov-file#manually-loading-data
-
- Or, use `huggingface_hub.snapshot_download` and an HDF5 reader; for more info see:
- https://huggingface.co/datasets/PGLearn/PGLearn-Medium-1888_rte#downloading-individual-files
- ================================================================================================
- """
- CITATION = """\
- @article{klamkinpglearn,
-     title={{PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow}},
-     author={Klamkin, Michael and Tanneau, Mathieu and Van Hentenryck, Pascal},
-     year={2025},
- }\
- """
-
- IS_COMPRESSED = True
-
- # ┌──────────────────┐
- # │   Formulations   │
- # └──────────────────┘
-
- def acopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(acopf_primal_features(sizes))
-     if dual: features.update(acopf_dual_features(sizes))
-     if meta: features.update({f"ACOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def dcopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(dcopf_primal_features(sizes))
-     if dual: features.update(dcopf_dual_features(sizes))
-     if meta: features.update({f"DCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- def socopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-     features = {}
-     if primal: features.update(socopf_primal_features(sizes))
-     if dual: features.update(socopf_dual_features(sizes))
-     if meta: features.update({f"SOCOPF/{k}": v for k, v in META_FEATURES.items()})
-     return features
-
- FORMULATIONS_TO_FEATURES = {
-     "ACOPF": acopf_features,
-     "DCOPF": dcopf_features,
-     "SOCOPF": socopf_features,
- }
-
- # ┌───────────────────┐
- # │   BuilderConfig   │
- # └───────────────────┘
-
- class PGLearnMedium1888_rteConfig(hfd.BuilderConfig):
-     """BuilderConfig for PGLearn-Medium-1888_rte.
-     By default, primal solution data, metadata, input, casejson, are included for the train and test splits.
-
-     To modify the default configuration, pass attributes of this class to `datasets.load_dataset`:
-
-     Attributes:
-         formulations (list[str]): The formulation(s) to include, e.g. ["ACOPF", "DCOPF"]
-         primal (bool, optional): Include primal solution data. Defaults to True.
-         dual (bool, optional): Include dual solution data. Defaults to False.
-         meta (bool, optional): Include metadata. Defaults to True.
-         input (bool, optional): Include input data. Defaults to True.
-         casejson (bool, optional): Include case.json data. Defaults to True.
-         train (bool, optional): Include training samples. Defaults to True.
-         test (bool, optional): Include testing samples. Defaults to True.
-         infeasible (bool, optional): Include infeasible samples. Defaults to False.
-     """
-     def __init__(self,
-         formulations: list[str],
-         primal: bool=True, dual: bool=False, meta: bool=True, input: bool = True, casejson: bool=True,
-         train: bool=True, test: bool=True, infeasible: bool=False,
-         compressed: bool=IS_COMPRESSED, **kwargs
-     ):
-         super(PGLearnMedium1888_rteConfig, self).__init__(version=VERSION, **kwargs)
-
-         self.case = CASENAME
-         self.formulations = formulations
-
-         self.primal = primal
-         self.dual = dual
-         self.meta = meta
-         self.input = input
-         self.casejson = casejson
-
-         self.train = train
-         self.test = test
-         self.infeasible = infeasible
-
-         self.gz_ext = ".gz" if compressed else ""
-
-     @property
-     def size(self):
-         return SIZES
-
-     @property
-     def features(self):
-         features = {}
-         if self.casejson: features.update(case_features())
-         if self.input: features.update(input_features(SIZES))
-         for formulation in self.formulations:
-             features.update(FORMULATIONS_TO_FEATURES[formulation](SIZES, self.primal, self.dual, self.meta))
-         return hfd.Features(features)
-
-     @property
-     def splits(self):
-         splits: dict[hfd.Split, dict[str, str | int]] = {}
-         if self.train:
-             splits[hfd.Split.TRAIN] = {
-                 "name": "train",
-                 "num_examples": NUM_TRAIN
-             }
-         if self.test:
-             splits[hfd.Split.TEST] = {
-                 "name": "test",
-                 "num_examples": NUM_TEST
-             }
-         if self.infeasible:
-             splits[hfd.Split("infeasible")] = {
-                 "name": "infeasible",
-                 "num_examples": NUM_INFEASIBLE
-             }
-         return splits
-
-     @property
-     def urls(self):
-         urls: dict[str, None | str | list] = {
-             "case": None, "train": [], "test": [], "infeasible": [],
-         }
-
-         if self.casejson:
-             urls["case"] = f"case.json" + self.gz_ext
-         else:
-             urls.pop("case")
-
-         split_names = []
-         if self.train: split_names.append("train")
-         if self.test: split_names.append("test")
-         if self.infeasible: split_names.append("infeasible")
-
-         for split in split_names:
-             if self.input: urls[split].append(f"{split}/input.h5" + self.gz_ext)
-             for formulation in self.formulations:
-                 if self.primal:
-                     filename = f"{split}/{formulation}/primal.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-                 if self.dual:
-                     filename = f"{split}/{formulation}/dual.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-                 if self.meta:
-                     filename = f"{split}/{formulation}/meta.h5" + self.gz_ext
-                     if filename in SPLITFILES: urls[split].append(SPLITFILES[filename])
-                     else: urls[split].append(filename)
-         return urls
-
- # ┌────────────────────┐
- # │   DatasetBuilder   │
- # └────────────────────┘
-
- class PGLearnMedium1888_rte(hfd.ArrowBasedBuilder):
-     """DatasetBuilder for PGLearn-Medium-1888_rte.
-     The main interface is `datasets.load_dataset` with `trust_remote_code=True`, e.g.
-
-     ```python
-     from datasets import load_dataset
-     ds = load_dataset("PGLearn/PGLearn-Medium-1888_rte", trust_remote_code=True,
-         # modify the default configuration by passing kwargs
-         formulations=["DCOPF"],
-         dual=False,
-         meta=False,
-     )
-     ```
-     """
-
-     DEFAULT_WRITER_BATCH_SIZE = 10000
-     BUILDER_CONFIG_CLASS = PGLearnMedium1888_rteConfig
-     DEFAULT_CONFIG_NAME=CASENAME
-     BUILDER_CONFIGS = [
-         PGLearnMedium1888_rteConfig(
-             name=CASENAME, description=DEFAULT_CONFIG_DESCRIPTION.format(case=CASENAME),
-             formulations=list(FORMULATIONS_TO_FEATURES.keys()),
-             primal=True, dual=True, meta=True, input=True, casejson=False,
-             train=True, test=True, infeasible=False,
-         )
-     ]
-
-     def _info(self):
-         return hfd.DatasetInfo(
-             features=self.config.features, splits=self.config.splits,
-             description=DESCRIPTION + self.config.description,
-             homepage=URL, citation=CITATION,
-         )
-
-     def _split_generators(self, dl_manager: hfd.DownloadManager):
-         hfd.logging.get_logger().warning(USE_ML4OPF_WARNING)
-
-         filepaths = dl_manager.download_and_extract(self.config.urls)
-
-         splits: list[hfd.SplitGenerator] = []
-         if self.config.train:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TRAIN,
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["train"]), n_samples=NUM_TRAIN),
-             ))
-         if self.config.test:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split.TEST,
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["test"]), n_samples=NUM_TEST),
-             ))
-         if self.config.infeasible:
-             splits.append(hfd.SplitGenerator(
-                 name=hfd.Split("infeasible"),
-                 gen_kwargs=dict(case_file=filepaths.get("case", None), data_files=tuple(filepaths["infeasible"]), n_samples=NUM_INFEASIBLE),
-             ))
-         return splits
-
-     def _generate_tables(self, case_file: str | None, data_files: tuple[hfd.utils.track.tracked_str | list[hfd.utils.track.tracked_str]], n_samples: int):
-         case_data: str | None = json.dumps(json.load(open_maybe_gzip_cat(case_file))) if case_file is not None else None
-         data: dict[str, h5py.File] = {}
-         for file in data_files:
-             v = h5py.File(open_maybe_gzip_cat(file), "r")
-             if isinstance(file, list):
-                 k = "/".join(Path(file[0].get_origin()).parts[-3:-1]).split(".")[0]
-             else:
-                 k = "/".join(Path(file.get_origin()).parts[-2:]).split(".")[0]
-             data[k] = v
-         for k in list(data.keys()):
-             if "/input" in k: data[k.split("/", 1)[1]] = data.pop(k)
-
-         batch_size = self._writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
-         for i in range(0, n_samples, batch_size):
-             effective_batch_size = min(batch_size, n_samples - i)
-
-             sample_data = {
-                 f"{dk}/{k}":
-                     hfd.features.features.numpy_to_pyarrow_listarray(v[i:i + effective_batch_size, ...])
-                 for dk, d in data.items() for k, v in d.items() if f"{dk}/{k}" in self.config.features
-             }
-
-             if case_data is not None:
-                 sample_data["case/json"] = pa.array([case_data] * effective_batch_size)
-
-             yield i, pa.Table.from_pydict(sample_data)
-
-         for f in data.values():
-             f.close()
-
- # ┌──────────────┐
- # │   Features   │
- # └──────────────┘
-
- FLOAT_TYPE = "float32"
- INT_TYPE = "int64"
- BOOL_TYPE = "bool"
- STRING_TYPE = "string"
-
- def case_features():
-     # FIXME: better way to share schema of case data -- need to treat jagged arrays
-     return {
-         "case/json": hfd.Value(STRING_TYPE),
-     }
-
- META_FEATURES = {
-     "meta/seed": hfd.Value(dtype=INT_TYPE),
-     "meta/formulation": hfd.Value(dtype=STRING_TYPE),
-     "meta/primal_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/dual_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/primal_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/dual_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/termination_status": hfd.Value(dtype=STRING_TYPE),
-     "meta/build_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/extract_time": hfd.Value(dtype=FLOAT_TYPE),
-     "meta/solve_time": hfd.Value(dtype=FLOAT_TYPE),
- }
-
- def input_features(sizes: CaseSizes):
-     return {
-         "input/pd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/qd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "input/gen_status": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/branch_status": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=BOOL_TYPE)),
-         "input/seed": hfd.Value(dtype=INT_TYPE),
-     }
-
- def acopf_primal_features(sizes: CaseSizes):
-     return {
-         "ACOPF/primal/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def acopf_dual_features(sizes: CaseSizes):
-     return {
-         "ACOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_fr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/sm_to": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "ACOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def dcopf_primal_features(sizes: CaseSizes):
-     return {
-         "DCOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def dcopf_dual_features(sizes: CaseSizes):
-     return {
-         "DCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "DCOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-     }
- def socopf_primal_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/primal/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/primal/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
- def socopf_dual_features(sizes: CaseSizes):
-     return {
-         "SOCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/jabr": hfd.Array2D(shape=(sizes.n_branch, 4), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_fr": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/sm_to": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-         "SOCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-         "SOCOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-     }
-
- # ┌───────────────┐
- # │   Utilities   │
- # └───────────────┘
-
- def open_maybe_gzip_cat(path: str | list):
-     if isinstance(path, list):
-         dest = Path(path[0]).parent.with_suffix(".h5")
-         if not dest.exists():
-             with open(dest, "wb") as dest_f:
-                 for piece in path:
-                     with open(piece, "rb") as piece_f:
-                         shutil.copyfileobj(piece_f, dest_f)
-             shutil.rmtree(Path(piece).parent)
-         path = dest.as_posix()
-     return gzip.open(path, "rb") if path.endswith(".gz") else open(path, "rb")
README.md CHANGED
@@ -288,6 +288,14 @@ dataset_info:
    - name: test
      num_bytes: 52865882822
      num_examples: 92897
- download_size: 221548942567
+ download_size: 256704088692
  dataset_size: 264328845025
+ configs:
+ - config_name: 1888_rte
+   data_files:
+   - split: train
+     path: 1888_rte/train-*
+   - split: test
+     path: 1888_rte/test-*
+   default: true
  ---
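The new `configs` block is what lets the Hub's built-in Parquet loader replace the deleted script: each split maps to a glob of shards under `1888_rte/`. A sketch of the roughly equivalent explicit form (the `hf://` URI scheme assumes a recent `datasets`/`fsspec`; shown only to illustrate what the YAML `path` globs expand to):

```python
from datasets import load_dataset

# Hypothetical explicit spelling of the README `configs` mapping above.
ds = load_dataset(
    "parquet",
    data_files={
        "train": "hf://datasets/PGLearn/PGLearn-Medium-1888_rte/1888_rte/train-*.parquet",
        "test": "hf://datasets/PGLearn/PGLearn-Medium-1888_rte/1888_rte/test-*.parquet",
    },
    streaming=True,
)
```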
config.toml DELETED
@@ -1,42 +0,0 @@
- # Name of the reference PGLib case. Must be a valid PGLib case name.
- pglib_case = "pglib_opf_case1888_rte"
- floating_point_type = "Float32"
-
- [sampler]
- # data sampler options
- [sampler.load]
- noise_type = "ScaledUniform"
- l = 0.7 # Lower bound of base load factor
- u = 1.1 # Upper bound of base load factor
- sigma = 0.20 # Relative (multiplicative) noise level.
-
-
- [OPF]
-
- [OPF.ACOPF]
- type = "ACOPF"
- solver.name = "Ipopt"
- solver.attributes.tol = 1e-6
- solver.attributes.linear_solver = "ma27"
-
- [OPF.DCOPF]
- # Formulation/solver options
- type = "DCOPF"
- solver.name = "HiGHS"
-
- [OPF.SOCOPF]
- type = "SOCOPF"
- solver.name = "Clarabel"
- # Tight tolerances
- solver.attributes.tol_gap_abs = 1e-6
- solver.attributes.tol_gap_rel = 1e-6
- solver.attributes.tol_feas = 1e-6
- solver.attributes.tol_infeas_rel = 1e-6
- solver.attributes.tol_ktratio = 1e-6
- # Reduced accuracy settings
- solver.attributes.reduced_tol_gap_abs = 1e-6
- solver.attributes.reduced_tol_gap_rel = 1e-6
- solver.attributes.reduced_tol_feas = 1e-6
- solver.attributes.reduced_tol_infeas_abs = 1e-6
- solver.attributes.reduced_tol_infeas_rel = 1e-6
- solver.attributes.reduced_tol_ktratio = 1e-6
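The deleted config records how the 1888_rte samples were generated: loads come from a `ScaledUniform` sampler with a base load factor in [0.7, 1.1] and 20% multiplicative noise. A rough sketch of those assumed semantics (the function name and exact noise form are illustrative, not the toolkit's API):

```python
import numpy as np

rng = np.random.default_rng(42)
l, u, sigma = 0.7, 1.1, 0.20  # values from the deleted config.toml above

def sample_load(base_pd: np.ndarray) -> np.ndarray:
    """Hypothetical ScaledUniform sampler: one global base-load factor in
    [l, u], plus element-wise relative noise of magnitude sigma."""
    alpha = rng.uniform(l, u)                                   # base load factor
    eta = rng.uniform(1.0 - sigma, 1.0 + sigma, base_pd.shape)  # multiplicative noise
    return alpha * eta * base_pd
```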
infeasible/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:198c13db2590bdd260dbee140dfe9a938964f9c86c87a1604ef6bdfe24d4b539
- size 4374420083
infeasible/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:df254bdf46c8344aa95f2d8dcf87f3d4aaa749fde1e242c322bc5eeddc5f96e2
- size 1845622252
infeasible/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:95109b8bfcfb0d0814c0f4f8469fbb24bf8b4ff8f4fab96bbe2de35488d6d1d3
- size 521286630
infeasible/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a43f76dd4e89bf68e5f68a383ac66a158883b0b728d09a1e77e820801c5d2a87
- size 7345156989
infeasible/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:1acaf7a04ce943510515818d3bf59391ea61410c474f6ee71694ea6ead6bf62d
- size 1242153
infeasible/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0e7a6670e7ec7ea3c53c5f099b770b613c139fe634f12b15065bcfd3f7f1de9f
- size 2141884604
infeasible/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4468d12fb16dbc0edd81529a0ccdebefa75ebb22216e5230143fd2d1b4bc8615
- size 262430198
test/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:035345139ba10243192f6b59a2333643de031651787b4933ae815431e50ea33d
- size 10176519735
test/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:9c56d1db5ade25fc2db37b3ea1b41cb82aefc65a9bcfe4aa5672d70b3d177f6d
- size 3168804
test/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:2e2f39aa14286d080209312eccd91986e2c8058ac3ba980a95246e9fc82a2302
- size 4599883525
test/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ec2261a7a3d586c1761db089d5455e17031a84251579dc88b6ad71ae0d54a4c8
- size 366996811
test/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:4e4dc1361c5dcc4115d39c6b40ac955916863d7766132e7a34921eeb85030a82
- size 3080732
test/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:18e68bfc4ec46c8cd71d15d21fa48958a880ba7232a64a33e9e090bd0ef328f1
- size 1367542973
test/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:20bc17ca1d3f0c812401373324fc1ea41c4eb6bd39f814c572d11b3b866eabc8
- size 19218771667
test/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:52d38ffdacbabc4cbe184e54730eec544a3a8262d01ad9981e8f72f8929b8d21
- size 3164796
test/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:be5607be029b86da66c40c96523b4bae3352ff18ad7b9ad77c73f240972f0ac7
- size 5602374327
test/input.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:b980b40b75853cac97771fcda5fdcc0cb4bdcda4ba174cbfa899711397ca115a
- size 686700820
train/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:f57ce97db72c9c5a1004f54fafd4a2fa59011faa9f89d9dd5a4303a5bf4f3145
- size 40707507061