klamike committed
Commit b72971c · verified · 1 parent: 7dafe48

Convert dataset to Parquet (part 00013-of-00014) (#14)


- Convert dataset to Parquet (part 00013-of-00014) (4f2ff25e0960886d441a1b1e99100c25306c3848)
- Delete data file (e9401f83e0ddedbc2dbf99168e1a21c06f113a92)
- Delete loading script (bb32ebbf0fbea74d8a48999ec98bf7d704e28a39)
- Delete data file (c3e22df2f577f6a4f57e1b60824d1357b78746f0)
- Delete data file (4a64a3e17392de5e929a566c49179e0144947f1e)
- Delete data file (4ab99f080e333f450489f98772959ed417870bd7)
- Delete data file (865dbe0055d377e701004b3f5b9254730b11e8f3)
- Delete data file (3c30e1d27de8a1b500c11b91fbb93103a97650c4)
- Delete data file (fd266e48441503c9200eda3863fd44b37a46f1a1)
- Delete data file (7b1006df0fbe280a343cd7497d34604245bbbc57)
- Delete data file (204718a2d941f170a4935c0bcd29eef9b2057420)
- Delete data file (4b4b79cd950683db7fb3322b0dc7b114d94dece5)
- Delete data file (3ab9d73fe7717014870f754a31560eb37087ccc0)
- Delete data file (3cfd3f898b4674edd8c170a8e1a5f94b486cd18d)
- Delete data file (4ef91ccac5a15eca9a188e1b32073a70032f5c43)
- Delete data file (fcaa60bb492dc99dced3c5dbccf8ffc3653a22f2)
- Delete data file (1da8e72f9c9a7fb53be8a268d786520815eba5dc)
- Delete data file (402a0a51aa5c54e3ea1d48014cec83cd7c90c65a)
- Delete data file (84074485b6213950fd78b7c31c35433c1fc8eadc)
- Delete data file (138266635d958e098cdd54d018f835d8eb75261d)
- Delete data file (17d4917d5b90a06a1d7e0e5c5b2fc70cce036fb7)
- Delete data file (89511361110c93363802b15ae52e4e524047e2be)
- Delete data file (3b1ef0ac856b691d185bedfd32a24e4f3cdf68fc)
- Delete data file (add263dc09f339f328bb532454ce27fb028837be)
- Delete data file (82e8fd764d5dfb4ae0992386e3dfe23c7add1112)
- Delete data file (bf010e3a169045c7c3e68d4c0ec24eeda2cf79ef)
- Delete data file (8652a6402e4db0d1bc00928c9cf63e7f3532ce4f)
- Delete data file (8441411a62301887ba239265e3d731b90f325b83)
- Delete data file (8f3b8a8780041db35d56642d5be435887d790458)
- Delete data file (aa932e0b53590d943c0cacd17cabd49cfe0719b7)
- Delete data file (c44516df0a213050ef65f3f1f69ae939eaff9ea5)
- Delete data file (98cab517b19ae381e2967835791e08028fdf26ba)
- Delete data file (5255dd6fa00dd4eba7dbf386d7b46efbc0dce612)
- Delete data file (bb66243ac7eea47851f7a51a6b2ec1226292abf1)
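With this final part, the dataset is served from Parquet shards under `118_ieee/` instead of the loading script. A minimal sketch of the post-conversion loading path (assuming the standard Hub Parquet flow; `trust_remote_code` is no longer needed once the script is deleted):

```python
from datasets import load_dataset

# "118_ieee" is the default config declared in the README's `configs` block;
# splits resolve to the 118_ieee/train-* and 118_ieee/test-* Parquet shards.
ds = load_dataset("PGLearn/PGLearn-Small-118_ieee", "118_ieee")
print(ds["test"].num_rows)  # 199997 per the dataset_info metadata
```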

Files changed (44)
  1. infeasible/ACOPF/meta.h5.gz → 118_ieee/test-00119-of-00133.parquet +2 -2
  2. case.json.gz → 118_ieee/test-00120-of-00133.parquet +2 -2
  3. infeasible/ACOPF/primal.h5.gz → 118_ieee/test-00121-of-00133.parquet +2 -2
  4. infeasible/ACOPF/dual.h5.gz → 118_ieee/test-00122-of-00133.parquet +2 -2
  5. 118_ieee/test-00123-of-00133.parquet +3 -0
  6. 118_ieee/test-00124-of-00133.parquet +3 -0
  7. 118_ieee/test-00125-of-00133.parquet +3 -0
  8. 118_ieee/test-00126-of-00133.parquet +3 -0
  9. 118_ieee/test-00127-of-00133.parquet +3 -0
  10. 118_ieee/test-00128-of-00133.parquet +3 -0
  11. 118_ieee/test-00129-of-00133.parquet +3 -0
  12. 118_ieee/test-00130-of-00133.parquet +3 -0
  13. 118_ieee/test-00131-of-00133.parquet +3 -0
  14. 118_ieee/test-00132-of-00133.parquet +3 -0
  15. PGLearn-Small-118_ieee.py +0 -397
  16. README.md +9 -1
  17. config.toml +0 -53
  18. infeasible/DCOPF/dual.h5.gz +0 -3
  19. infeasible/DCOPF/meta.h5.gz +0 -3
  20. infeasible/DCOPF/primal.h5.gz +0 -3
  21. infeasible/SOCOPF/dual.h5.gz +0 -3
  22. infeasible/SOCOPF/meta.h5.gz +0 -3
  23. infeasible/SOCOPF/primal.h5.gz +0 -3
  24. infeasible/input.h5.gz +0 -3
  25. test/ACOPF/dual.h5.gz +0 -3
  26. test/ACOPF/meta.h5.gz +0 -3
  27. test/ACOPF/primal.h5.gz +0 -3
  28. test/DCOPF/dual.h5.gz +0 -3
  29. test/DCOPF/meta.h5.gz +0 -3
  30. test/DCOPF/primal.h5.gz +0 -3
  31. test/SOCOPF/dual.h5.gz +0 -3
  32. test/SOCOPF/meta.h5.gz +0 -3
  33. test/SOCOPF/primal.h5.gz +0 -3
  34. test/input.h5.gz +0 -3
  35. train/ACOPF/dual.h5.gz +0 -3
  36. train/ACOPF/meta.h5.gz +0 -3
  37. train/ACOPF/primal.h5.gz +0 -3
  38. train/DCOPF/dual.h5.gz +0 -3
  39. train/DCOPF/meta.h5.gz +0 -3
  40. train/DCOPF/primal.h5.gz +0 -3
  41. train/SOCOPF/dual.h5.gz +0 -3
  42. train/SOCOPF/meta.h5.gz +0 -3
  43. train/SOCOPF/primal.h5.gz +0 -3
  44. train/input.h5.gz +0 -3
infeasible/ACOPF/meta.h5.gz → 118_ieee/test-00119-of-00133.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:1fedd20ff86e7361ebd4d15903b06c3936b2f50d0c2a573ca80ed61ef3cb395e
-size 1631
+oid sha256:61a1cdfd0d19b9d9687551b75182531640d0b60f9438944508c33d772b6d6f6d
+size 85511127

case.json.gz → 118_ieee/test-00120-of-00133.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e7d03e5afe6f9a6632a1fafbda77d0266e654f151eefff54cb5df6a61a9dfdb4
-size 107188
+oid sha256:caf807bec1d02a26123ad1fc990e67e386bf5be5842e75195f23e0506eb9ca1b
+size 85549308

infeasible/ACOPF/primal.h5.gz → 118_ieee/test-00121-of-00133.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8f7f931e280ac5fa2808928ce6fe91d71072f1f718db8d2941f2d846eb30dcda
-size 57738
+oid sha256:d75fb22801b1646fdb9b49acdd4e4c130183d1598512f2b0b70a57053fc041d1
+size 85497406

infeasible/ACOPF/dual.h5.gz → 118_ieee/test-00122-of-00133.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e5e7080d514d5c8caef57d02b9d80ef8eebe9f5f95061f1205a16305a84e46ce
-size 133557
+oid sha256:7ee74f300582da68e67f6834106eb60a0c1b6b585292305f4b838fc34df075eb
+size 85535488

118_ieee/test-00123-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a31f4b50c0f29fa405e6e843612e0f44624d5b1df8ef5375ba2e51462021d9a
+size 85510108

118_ieee/test-00124-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:641cc86a1cd9e49d7553a5d0279036a5c269b5905b311ae60c09d4fa65c33981
+size 85521616

118_ieee/test-00125-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a5874ce5754a44e978d539ef6e0545d34e0455f22262d64525fba85c069d47ea
+size 85506406

118_ieee/test-00126-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8831120b27b76c82031d1ff10b66cf5195f4d532ab2479f5734e0c0a83064abe
+size 85534503

118_ieee/test-00127-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99a49e61cfb51df6598f1343dbafbe33d0354e3c0d6f9ddd00f7ce388080a8e9
+size 85526807

118_ieee/test-00128-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45a32575f15f295a291c340eabcf63472c8be93d3daae428b94b421ff50429c0
+size 85505576

118_ieee/test-00129-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a10ce7d8f707f0b8d97d3c1eae5353e64c2fbc292c52c48a881e35e305a2fb11
+size 85513778

118_ieee/test-00130-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4ada78b43142b34e85c3c4c24299e59cb61fd892b75df9b3c62060f49d0cda18
+size 85534553

118_ieee/test-00131-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a114d4ab64fc98b2109314b2c67543f7bf6afe7af39ad7f15e7f2b32a858246
+size 85537113

118_ieee/test-00132-of-00133.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:345ec9264334d31e9d09e61d37ffe717f904c8c49867e5ca6054a7f4e1843648
+size 85526434
PGLearn-Small-118_ieee.py DELETED
@@ -1,397 +0,0 @@
-from __future__ import annotations
-from dataclasses import dataclass
-from pathlib import Path
-import json
-import gzip
-
-import datasets as hfd
-import h5py
-import pyarrow as pa
-
-# ┌──────────────┐
-# │   Metadata   │
-# └──────────────┘
-
-@dataclass
-class CaseSizes:
-    n_bus: int
-    n_load: int
-    n_gen: int
-    n_branch: int
-
-CASENAME = "118_ieee"
-SIZES = CaseSizes(n_bus=118, n_load=99, n_gen=54, n_branch=186)
-NUM_TRAIN = 799988
-NUM_TEST = 199997
-NUM_INFEASIBLE = 15
-
-URL = "https://huggingface.co/datasets/PGLearn/PGLearn-Small-118_ieee"
-DESCRIPTION = """\
-The 118_ieee PGLearn optimal power flow dataset, part of the PGLearn-Small collection. \
-"""
-VERSION = hfd.Version("1.0.0")
-DEFAULT_CONFIG_DESCRIPTION="""\
-This configuration contains feasible input, metadata, primal solution, and dual solution data \
-for the ACOPF, DCOPF, and SOCOPF formulations on the {case} system.
-"""
-USE_ML4OPF_WARNING = """
-================================================================================================
-Loading PGLearn-Small-118_ieee through the `datasets.load_dataset` function may be slow.
-
-Consider using ML4OPF to directly convert to `torch.Tensor`; for more info see:
-https://github.com/AI4OPT/ML4OPF?tab=readme-ov-file#manually-loading-data
-
-Or, use `huggingface_hub.snapshot_download` and an HDF5 reader; for more info see:
-https://huggingface.co/datasets/PGLearn/PGLearn-Small-118_ieee#downloading-individual-files
-================================================================================================
-"""
-CITATION = """\
-@article{klamkinpglearn,
-    title={{PGLearn - An Open-Source Learning Toolkit for Optimal Power Flow}},
-    author={Klamkin, Michael and Tanneau, Mathieu and Van Hentenryck, Pascal},
-    year={2025},
-}\
-"""
-
-IS_COMPRESSED = True
-
-# ┌──────────────────┐
-# │   Formulations   │
-# └──────────────────┘
-
-def acopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-    features = {}
-    if primal: features.update(acopf_primal_features(sizes))
-    if dual: features.update(acopf_dual_features(sizes))
-    if meta: features.update({f"ACOPF/{k}": v for k, v in META_FEATURES.items()})
-    return features
-
-def dcopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-    features = {}
-    if primal: features.update(dcopf_primal_features(sizes))
-    if dual: features.update(dcopf_dual_features(sizes))
-    if meta: features.update({f"DCOPF/{k}": v for k, v in META_FEATURES.items()})
-    return features
-
-def socopf_features(sizes: CaseSizes, primal: bool, dual: bool, meta: bool):
-    features = {}
-    if primal: features.update(socopf_primal_features(sizes))
-    if dual: features.update(socopf_dual_features(sizes))
-    if meta: features.update({f"SOCOPF/{k}": v for k, v in META_FEATURES.items()})
-    return features
-
-FORMULATIONS_TO_FEATURES = {
-    "ACOPF": acopf_features,
-    "DCOPF": dcopf_features,
-    "SOCOPF": socopf_features,
-}
-
-# ┌───────────────────┐
-# │   BuilderConfig   │
-# └───────────────────┘
-
-class PGLearnSmall118_ieeeConfig(hfd.BuilderConfig):
-    """BuilderConfig for PGLearn-Small-118_ieee.
-    By default, primal solution data, metadata, input, casejson, are included for the train and test splits.
-
-    To modify the default configuration, pass attributes of this class to `datasets.load_dataset`:
-
-    Attributes:
-        formulations (list[str]): The formulation(s) to include, e.g. ["ACOPF", "DCOPF"]
-        primal (bool, optional): Include primal solution data. Defaults to True.
-        dual (bool, optional): Include dual solution data. Defaults to False.
-        meta (bool, optional): Include metadata. Defaults to True.
-        input (bool, optional): Include input data. Defaults to True.
-        casejson (bool, optional): Include case.json data. Defaults to True.
-        train (bool, optional): Include training samples. Defaults to True.
-        test (bool, optional): Include testing samples. Defaults to True.
-        infeasible (bool, optional): Include infeasible samples. Defaults to False.
-    """
-    def __init__(self,
-        formulations: list[str],
-        primal: bool=True, dual: bool=False, meta: bool=True, input: bool = True, casejson: bool=True,
-        train: bool=True, test: bool=True, infeasible: bool=False,
-        compressed: bool=IS_COMPRESSED, **kwargs
-    ):
-        super(PGLearnSmall118_ieeeConfig, self).__init__(version=VERSION, **kwargs)
-
-        self.case = CASENAME
-        self.formulations = formulations
-
-        self.primal = primal
-        self.dual = dual
-        self.meta = meta
-        self.input = input
-        self.casejson = casejson
-
-        self.train = train
-        self.test = test
-        self.infeasible = infeasible
-
-        self.gz_ext = ".gz" if compressed else ""
-
-    @property
-    def size(self):
-        return SIZES
-
-    @property
-    def features(self):
-        features = {}
-        if self.casejson: features.update(case_features())
-        if self.input: features.update(input_features(SIZES))
-        for formulation in self.formulations:
-            features.update(FORMULATIONS_TO_FEATURES[formulation](SIZES, self.primal, self.dual, self.meta))
-        return hfd.Features(features)
-
-    @property
-    def splits(self):
-        splits: dict[hfd.Split, dict[str, str | int]] = {}
-        if self.train:
-            splits[hfd.Split.TRAIN] = {
-                "name": "train",
-                "num_examples": NUM_TRAIN
-            }
-        if self.test:
-            splits[hfd.Split.TEST] = {
-                "name": "test",
-                "num_examples": NUM_TEST
-            }
-        if self.infeasible:
-            splits[hfd.Split("infeasible")] = {
-                "name": "infeasible",
-                "num_examples": NUM_INFEASIBLE
-            }
-        return splits
-
-    @property
-    def urls(self):
-        urls: dict[str, None | str | list] = {
-            "case": None, "train": [], "test": [], "infeasible": [],
-        }
-
-        if self.casejson: urls["case"] = f"case.json" + self.gz_ext
-
-        split_names = []
-        if self.train: split_names.append("train")
-        if self.test: split_names.append("test")
-        if self.infeasible: split_names.append("infeasible")
-
-        for split in split_names:
-            if self.input: urls[split].append(f"{split}/input.h5" + self.gz_ext)
-            for formulation in self.formulations:
-                if self.primal: urls[split].append(f"{split}/{formulation}/primal.h5" + self.gz_ext)
-                if self.dual: urls[split].append(f"{split}/{formulation}/dual.h5" + self.gz_ext)
-                if self.meta: urls[split].append(f"{split}/{formulation}/meta.h5" + self.gz_ext)
-        return urls
-
-# ┌────────────────────┐
-# │   DatasetBuilder   │
-# └────────────────────┘
-
-class PGLearnSmall118_ieee(hfd.ArrowBasedBuilder):
-    """DatasetBuilder for PGLearn-Small-118_ieee.
-    The main interface is `datasets.load_dataset` with `trust_remote_code=True`, e.g.
-
-    ```python
-    from datasets import load_dataset
-    ds = load_dataset("PGLearn/PGLearn-Small-118_ieee", trust_remote_code=True,
-        # modify the default configuration by passing kwargs
-        formulations=["DCOPF"],
-        dual=False,
-        meta=False,
-    )
-    ```
-    """
-
-    DEFAULT_WRITER_BATCH_SIZE = 10000
-    BUILDER_CONFIG_CLASS = PGLearnSmall118_ieeeConfig
-    DEFAULT_CONFIG_NAME=CASENAME
-    BUILDER_CONFIGS = [
-        PGLearnSmall118_ieeeConfig(
-            name=CASENAME, description=DEFAULT_CONFIG_DESCRIPTION.format(case=CASENAME),
-            formulations=list(FORMULATIONS_TO_FEATURES.keys()),
-            primal=True, dual=True, meta=True, input=True, casejson=True,
-            train=True, test=True, infeasible=False,
-        )
-    ]
-
-    def _info(self):
-        return hfd.DatasetInfo(
-            features=self.config.features, splits=self.config.splits,
-            description=DESCRIPTION + self.config.description,
-            homepage=URL, citation=CITATION,
-        )
-
-    def _split_generators(self, dl_manager: hfd.DownloadManager):
-        hfd.logging.get_logger().warning(USE_ML4OPF_WARNING)
-
-        filepaths = dl_manager.download_and_extract(self.config.urls)
-
-        splits: list[hfd.SplitGenerator] = []
-        if self.config.train:
-            splits.append(hfd.SplitGenerator(
-                name=hfd.Split.TRAIN,
-                gen_kwargs=dict(case_file=filepaths["case"], data_files=tuple(filepaths["train"]), n_samples=NUM_TRAIN),
-            ))
-        if self.config.test:
-            splits.append(hfd.SplitGenerator(
-                name=hfd.Split.TEST,
-                gen_kwargs=dict(case_file=filepaths["case"], data_files=tuple(filepaths["test"]), n_samples=NUM_TEST),
-            ))
-        if self.config.infeasible:
-            splits.append(hfd.SplitGenerator(
-                name=hfd.Split("infeasible"),
-                gen_kwargs=dict(case_file=filepaths["case"], data_files=tuple(filepaths["infeasible"]), n_samples=NUM_INFEASIBLE),
-            ))
-        return splits
-
-    def _generate_tables(self, case_file: str | None, data_files: tuple[hfd.utils.track.tracked_str], n_samples: int):
-        case_data: str | None = json.dumps(json.load(open_maybe_gzip(case_file))) if case_file is not None else None
-
-        opened_files = [open_maybe_gzip(file) for file in data_files]
-        data = {'/'.join(Path(df.get_origin()).parts[-2:]).split('.')[0]: h5py.File(of) for of, df in zip(opened_files, data_files)}
-        for k in list(data.keys()):
-            if "/input" in k: data[k.split("/", 1)[1]] = data.pop(k)
-
-        batch_size = self._writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
-        for i in range(0, n_samples, batch_size):
-            effective_batch_size = min(batch_size, n_samples - i)
-
-            sample_data = {
-                f"{dk}/{k}":
-                    hfd.features.features.numpy_to_pyarrow_listarray(v[i:i + effective_batch_size, ...])
-                for dk, d in data.items() for k, v in d.items() if f"{dk}/{k}" in self.config.features
-            }
-
-            if case_data is not None:
-                sample_data["case/json"] = pa.array([case_data] * effective_batch_size)
-
-            yield i, pa.Table.from_pydict(sample_data)
-
-        for f in opened_files:
-            f.close()
-
-# ┌──────────────┐
-# │   Features   │
-# └──────────────┘
-
-FLOAT_TYPE = "float32"
-INT_TYPE = "int64"
-BOOL_TYPE = "bool"
-STRING_TYPE = "string"
-
-def case_features():
-    # FIXME: better way to share schema of case data -- need to treat jagged arrays
-    return {
-        "case/json": hfd.Value(STRING_TYPE),
-    }
-
-META_FEATURES = {
-    "meta/seed": hfd.Value(dtype=INT_TYPE),
-    "meta/formulation": hfd.Value(dtype=STRING_TYPE),
-    "meta/primal_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/dual_objective_value": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/primal_status": hfd.Value(dtype=STRING_TYPE),
-    "meta/dual_status": hfd.Value(dtype=STRING_TYPE),
-    "meta/termination_status": hfd.Value(dtype=STRING_TYPE),
-    "meta/build_time": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/extract_time": hfd.Value(dtype=FLOAT_TYPE),
-    "meta/solve_time": hfd.Value(dtype=FLOAT_TYPE),
-}
-
-def input_features(sizes: CaseSizes):
-    return {
-        "input/pd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "input/qd": hfd.Sequence(length=sizes.n_load, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "input/gen_status": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=BOOL_TYPE)),
-        "input/branch_status": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=BOOL_TYPE)),
-        "input/seed": hfd.Value(dtype=INT_TYPE),
-    }
-
-def acopf_primal_features(sizes: CaseSizes):
-    return {
-        "ACOPF/primal/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-def acopf_dual_features(sizes: CaseSizes):
-    return {
-        "ACOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/vm": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/sm_fr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/sm_to": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "ACOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-    }
-def dcopf_primal_features(sizes: CaseSizes):
-    return {
-        "DCOPF/primal/va": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-def dcopf_dual_features(sizes: CaseSizes):
-    return {
-        "DCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "DCOPF/dual/slack_bus": hfd.Value(dtype=FLOAT_TYPE),
-    }
-def socopf_primal_features(sizes: CaseSizes):
-    return {
-        "SOCOPF/primal/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/primal/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-def socopf_dual_features(sizes: CaseSizes):
-    return {
-        "SOCOPF/dual/kcl_p": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/kcl_q": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/w": hfd.Sequence(length=sizes.n_bus, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/pg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/qg": hfd.Sequence(length=sizes.n_gen, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/ohm_qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/jabr": hfd.Array2D(shape=(sizes.n_branch, 4), dtype=FLOAT_TYPE),
-        "SOCOPF/dual/sm_fr": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-        "SOCOPF/dual/sm_to": hfd.Array2D(shape=(sizes.n_branch, 3), dtype=FLOAT_TYPE),
-        "SOCOPF/dual/va_diff": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/wr": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/wi": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/pf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/pt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/qf": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-        "SOCOPF/dual/qt": hfd.Sequence(length=sizes.n_branch, feature=hfd.Value(dtype=FLOAT_TYPE)),
-    }
-
-# ┌───────────────┐
-# │   Utilities   │
-# └───────────────┘
-
-def open_maybe_gzip(path):
-    return gzip.open(path, "rb") if path.endswith(".gz") else open(path, "rb")
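The deleted script's `USE_ML4OPF_WARNING` itself recommended bypassing `datasets.load_dataset` via `huggingface_hub.snapshot_download` plus an HDF5 reader. Since this commit removes the `.h5.gz` files, that route now only applies to earlier revisions; a hedged sketch pinned to the parent commit (file layout and the `pg` dataset key are inferred from the script's `urls` property and feature definitions):

```python
import gzip

import h5py
from huggingface_hub import snapshot_download

# Fetch only the test-split ACOPF primal file from the pre-conversion layout.
# "7dafe48" is the parent commit shown in the header; the HDF5 files no longer
# exist at this revision or later.
local_dir = snapshot_download(
    repo_id="PGLearn/PGLearn-Small-118_ieee",
    repo_type="dataset",
    revision="7dafe48",
    allow_patterns=["test/ACOPF/primal.h5.gz"],
)

with gzip.open(f"{local_dir}/test/ACOPF/primal.h5.gz", "rb") as f, h5py.File(f) as h5:
    pg = h5["pg"][:10]  # assumed key: generator active power, per ACOPF/primal/pg above
```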
README.md CHANGED
@@ -290,6 +290,14 @@ dataset_info:
   - name: test
     num_bytes: 66279605792
     num_examples: 199997
-  download_size: 34649738796
+  download_size: 56896542703
   dataset_size: 331398028956
+configs:
+- config_name: 118_ieee
+  data_files:
+  - split: train
+    path: 118_ieee/train-*
+  - split: test
+    path: 118_ieee/test-*
+  default: true
 ---
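The `configs` block added above is what lets `datasets` resolve the Parquet shards without any script. A quick sanity check of the resolved config and splits (a sketch using the standard `datasets` introspection helpers):

```python
from datasets import get_dataset_config_names, load_dataset_builder

print(get_dataset_config_names("PGLearn/PGLearn-Small-118_ieee"))  # expect ['118_ieee']

builder = load_dataset_builder("PGLearn/PGLearn-Small-118_ieee", "118_ieee")
print(builder.info.splits)  # train and test, per the data_files globs above
```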
config.toml DELETED
@@ -1,53 +0,0 @@
-# Name of the reference PGLib case. Must be a valid PGLib case name.
-pglib_case = "pglib_opf_case118_ieee"
-# Directory where instance/solution files are exported
-# must be a valid directory
-export_dir = "/storage/home/hcoda1/0/mtanneau3/Git/OPFGenerator/data/scratch/118_ieee"
-floating_point_type = "Float32"
-
-[slurm]
-n_samples = 1000000
-n_jobs = 42
-minibatch_size = 256
-queue = "embers"
-charge_account = "gts-mtanneau3"
-extract_memory = "256gb"
-
-[sampler]
-# data sampler options
-[sampler.load]
-noise_type = "ScaledUniform"
-l = 0.80     # Lower bound of base load factor
-u = 1.20     # Upper bound of base load factor
-sigma = 0.20 # Relative (multiplicative) noise level.
-
-
-[OPF]
-
-[OPF.ACOPF]
-type = "ACOPF"
-solver.name = "Ipopt"
-solver.attributes.tol = 1e-6
-solver.attributes.linear_solver = "ma27"
-
-[OPF.DCOPF]
-# Formulation/solver options
-type = "DCOPF"
-solver.name = "HiGHS"
-
-[OPF.SOCOPF]
-type = "SOCOPF"
-solver.name = "Clarabel"
-# Tight tolerances
-solver.attributes.tol_gap_abs = 1e-6
-solver.attributes.tol_gap_rel = 1e-6
-solver.attributes.tol_feas = 1e-6
-solver.attributes.tol_infeas_rel = 1e-6
-solver.attributes.tol_ktratio = 1e-6
-# Reduced accuracy settings
-solver.attributes.reduced_tol_gap_abs = 1e-6
-solver.attributes.reduced_tol_gap_rel = 1e-6
-solver.attributes.reduced_tol_feas = 1e-6
-solver.attributes.reduced_tol_infeas_abs = 1e-6
-solver.attributes.reduced_tol_infeas_rel = 1e-6
-solver.attributes.reduced_tol_ktratio = 1e-6
infeasible/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8c370903b2eeaf86d008195fa757c474ebcfd442fe7b9a5638ebdb6c1b5635b8
-size 13274

infeasible/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:534cb8cd6274861598ff4845dda5a70062d61643c54d477036948609453c4e77
-size 1519

infeasible/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:9c52f3439eded847051f0d9ee256fa22ca3959cce23b22259c5fb48c758f9a62
-size 17477

infeasible/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:019af0091ce05bf95f0b92e45c37defb3ae6d0f8af9cc1a88383b6281871f8fd
-size 236324

infeasible/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:28833269e8edd15858f8b84e74850031f766c273220d428638141b0cbfa15684
-size 1581

infeasible/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:536bddc7052217396445899176426766a89b576b08118997ba1e59bc654301b8
-size 73273

infeasible/input.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7cd539efdb7b80654617ecb30edc232f19f6b26728a55ab22689f3944842d98a
-size 11189

test/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:08e05c16177ef53f37d082e2381e886907882aed0093f117c7dd17a66fc7499e
-size 1706369347

test/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:710336482d1019724f1409e5dc6add4aedad8d6dc37963b6236e876405d7c2bf
-size 6707463

test/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:dc9edafd9444f0f5bc716c2f70b9a531da6da241084ebcd09118726881bd2f17
-size 751377364

test/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:72f8dd629c05d4c6a7a293f3364e9ad790d4593e25399f567ad38f8843f864fc
-size 20839502

test/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:4d0c9878ae9b90bbb96ef43dcf626a0641c0efd115cc881effe55498c8654574
-size 6472859

test/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8db0a762c5cfea43bb66ce100981aa764e7edac84d25216b7096ced97e7a18ee
-size 222988589

test/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:67f9288e35b239ea69eae115bd2eb8bc16a2250b2712e5694369ef9221b62327
-size 3121681265

test/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ded735554421506b5b0b275ed551398e5633adfa00540a3ace03f1da2a71604f
-size 6768166

test/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c84f2707838e94c5496c7e6c35d5e595c8a32bbee95ca2694afdeb8c572ed32e
-size 948039255

test/input.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:bdab9e4e8ed39bccb35fd35c5e56c3a8d0a3b18e79f4c2a4cff677e73b5f9928
-size 138779147

train/ACOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:40d09a8c6ed55baeb6bce555a51d67df766d13e231699122d4223815295ad49b
-size 6825443699

train/ACOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:b5ec58d2d2d96db7e6714fd5032ee1deb7b9940d89b59379a597fdf8ef048a87
-size 26752421

train/ACOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:a651bbe775bdadb5090257b222e45d8e0e0edfb5cfd311fe337f1ce1cbaa41e0
-size 3005486793

train/DCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:6b84c6dbd4d02295df9bbe9aae8fa4445e96fabc470080362baa3741e2e57684
-size 83215066

train/DCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:7fce4cc47d8ca85cbd88fb26f49f1997e2f80a57e7517a92c45dcee1b87d5cfa
-size 25810256

train/DCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:369817d0cbfd0881d448d30cf60699caff3646c952f58d915525bf92900f3613
-size 891923852

train/SOCOPF/dual.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:ffef29e20ecf682619f49ca73e72264333a91d3fd69d69a26f9075ab779f290d
-size 12486771368

train/SOCOPF/meta.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:c03b6abb978a051df887d43641cf011bfcc7967957754d7453407be309eb004d
-size 26991924

train/SOCOPF/primal.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:8dc1c9ae2a27b8b5acdf1bb6b69ed3b7a856bf9e24a64a0c652cd87310a37f85
-size 3792116672

train/input.h5.gz DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:189abaf463ac7c7aedf175e2d77f2c4b75e808a948757d36290d5f6cd1243283
-size 555096600