Blackbox Repository
This dataset contains hyperparameter optimization (HPO) evaluations from several papers:
- fcnet: Tabular benchmarks for joint architecture and hyperparameter optimization. Klein, A. and Hutter, F. 2019.
- icml-deepar, icml-xgboost: A quantile-based approach for hyperparameter transfer learning. Salinas, D., Shen, H., and Perrone, V. 2021.
- lcbench: Auto-PyTorch: Multi-Fidelity MetaLearning for Efficient and Robust AutoDL. Lucas Zimmer, Marius Lindauer, Frank Hutter. 2020.
- nasbench201: NAS-Bench-201: Extending the scope of reproducible neural architecture search. Dong, X. and Yang, Y. 2020.
- pd1: Pre-trained Gaussian processes for Bayesian optimization. Wang, Z. and Dahl G. and Swersky K. and Lee C. and Mariet Z. and Nado Z. and Gilmer J. and Snoek J. and Ghahramani Z. 2021.
- yahpo: YAHPO Gym - An Efficient Multi-Objective Multi-Fidelity Benchmark for Hyperparameter Optimization. Pfisterer F., Schneider S., Moosbauer J., Binder M., Bischl B. 2022.
The evaluations can be accessed through the Syne Tune HPO library as follows:
```python
from syne_tune.blackbox_repository import load_blackbox

blackbox = load_blackbox("nasbench201")["cifar10"]
blackbox_hyperparameter = next(iter(blackbox.hyperparameters.to_dict(orient="records")))
print(f"First hyperparameter: {blackbox_hyperparameter}")
print(
    f"Objectives for first hyperparameters: {blackbox(configuration=blackbox_hyperparameter, fidelity=100)}"
)
# > First hyperparameter: {'hp_x0': 'avg_pool_3x3', 'hp_x1': 'nor_conv_1x1', 'hp_x2': 'skip_connect', 'hp_x3': 'nor_conv_1x1', 'hp_x4': 'skip_connect', 'hp_x5': 'skip_connect'}
# > Objectives for first hyperparameters: {'metric_valid_error': 0.4177, 'metric_train_error': 0.2246, 'metric_runtime': 15.461778, 'metric_elapsed_time': 1546.179, 'metric_latency': 0.013935976, 'metric_flops': 15.64737, 'metric_params': 0.129306}
```
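Beyond evaluating a single configuration, a loaded blackbox also exposes its search space and the recorded objectives. The short sketch below illustrates this; attribute names such as `configuration_space` and `objectives_names` reflect our understanding of the Syne Tune `Blackbox` interface and may differ between versions.

```python
from syne_tune.blackbox_repository import load_blackbox

blackbox = load_blackbox("nasbench201")["cifar10"]

# Inspect the hyperparameter search space and the recorded metrics
# (attribute names assumed from the Syne Tune Blackbox interface).
print(blackbox.configuration_space)   # mapping: hyperparameter name -> Syne Tune domain
print(blackbox.objectives_names)      # e.g. ["metric_valid_error", "metric_train_error", ...]
print(len(blackbox.hyperparameters))  # number of tabulated configurations (pandas DataFrame)
```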
In addition, the blackboxes can be used to simulate HPO methods such as ASHA or Bayesian optimization very quickly while producing results identical to non-simulated tuning.
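A minimal sketch of such a simulated tuning run is shown below. It is adapted from memory of Syne Tune's simulated-backend examples; class and argument names such as `BlackboxRepositoryBackend`, `SimulatorCallback`, `elapsed_time_attr`, and the `ASHA` signature are assumptions and may vary across Syne Tune versions.

```python
from syne_tune import StoppingCriterion, Tuner
from syne_tune.backend.simulator_backend.simulator_callback import SimulatorCallback
from syne_tune.blackbox_repository import BlackboxRepositoryBackend
from syne_tune.optimizer.baselines import ASHA

# Simulated backend: trial evaluations are looked up in the tabulated blackbox
# instead of being trained, so a long tuning run completes in seconds.
backend = BlackboxRepositoryBackend(
    blackbox_name="nasbench201",
    dataset="cifar10",
    elapsed_time_attr="metric_elapsed_time",
)
blackbox = backend.blackbox

# Asynchronous successive halving over the blackbox's own search space.
scheduler = ASHA(
    config_space=blackbox.configuration_space,
    metric="metric_valid_error",
    mode="min",
    resource_attr=blackbox.fidelity_name(),
    max_t=int(max(blackbox.fidelity_values)),
)

tuner = Tuner(
    trial_backend=backend,
    scheduler=scheduler,
    stop_criterion=StoppingCriterion(max_wallclock_time=3600),  # simulated wallclock time
    n_workers=4,
    sleep_time=0,
    callbacks=[SimulatorCallback()],
)
tuner.run()
```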
The files can also be accessed directly from this dataset repository.
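For example, the raw files could be fetched with `huggingface_hub`; the `repo_id` below is a placeholder for this dataset's actual Hub identifier.

```python
from huggingface_hub import snapshot_download

# Download the raw blackbox files locally; replace the placeholder with this
# dataset's repository id on the Hugging Face Hub.
local_path = snapshot_download(repo_id="<this-dataset-repo-id>", repo_type="dataset")
print(local_path)
```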
If you are interested in other blackboxes, feel free to open an issue on the Syne Tune project; we aim to grow the set over time.