Dataset Viewer (auto-converted to Parquet)

| Column | Type | Values |
| --- | --- | --- |
| url | string | lengths 58–61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72–75 |
| comments_url | string | lengths 67–70 |
| events_url | string | lengths 65–68 |
| html_url | string | lengths 46–51 |
| id | int64 | 599M–2.98B |
| node_id | string | lengths 18–32 |
| number | int64 | 1–7.5k |
| title | string | lengths 1–290 |
| user | dict | |
| labels | list | lengths 0–4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0–4 |
| comments | sequence | lengths 0–30 |
| created_at | int64 | 1,587B–1,744B |
| updated_at | int64 | 1,588B–1,744B |
| closed_at | int64 | 1,587B–1,744B |
| author_association | string | 4 classes |
| sub_issues_summary | dict | |
| body | string | lengths 0–228k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | string | lengths 67–70 |
| state_reason | string | 3 classes |
| draft | float64 | 0–1 |
| pull_request | dict | |
https://api.github.com/repos/huggingface/datasets/issues/7503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7503/comments
https://api.github.com/repos/huggingface/datasets/issues/7503/events
https://github.com/huggingface/datasets/issues/7503
2,978,512,625
I_kwDODunzps6xiH7x
7,503
Inconsistency between load_dataset and load_from_disk functionality
{ "login": "zzzzzec", "id": 60975422, "node_id": "MDQ6VXNlcjYwOTc1NDIy", "avatar_url": "https://avatars.githubusercontent.com/u/60975422?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zzzzzec", "html_url": "https://github.com/zzzzzec", "followers_url": "https://api.github.com/users/zzzzzec/followers", "following_url": "https://api.github.com/users/zzzzzec/following{/other_user}", "gists_url": "https://api.github.com/users/zzzzzec/gists{/gist_id}", "starred_url": "https://api.github.com/users/zzzzzec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zzzzzec/subscriptions", "organizations_url": "https://api.github.com/users/zzzzzec/orgs", "repos_url": "https://api.github.com/users/zzzzzec/repos", "events_url": "https://api.github.com/users/zzzzzec/events{/privacy}", "received_events_url": "https://api.github.com/users/zzzzzec/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,744,083,982,000
1,744,083,982,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
## Issue Description

I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path:

```python
import datasets
ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
```

output:

```text
DatasetDict({
    train: Dataset({
        features: ['question', 'answer'],
        num_rows: 7473
    })
    test: Dataset({
        features: ['question', 'answer'],
        num_rows: 1319
    })
})
```

This works as expected. However, after processing the dataset (converting the answer format from `####` to `\boxed{}`):

```python
import datasets

ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
ds_train = ds['train']
ds_test = ds['test']

import re
def convert(sample):
    solution = sample['answer']
    solution = re.sub(r'####\s*(\S+)', r'\\boxed{\1}', solution)
    sample = {
        'problem': sample['question'],
        'solution': solution
    }
    return sample

ds_train = ds_train.map(convert, remove_columns=['question', 'answer'])
ds_test = ds_test.map(convert, remove_columns=['question', 'answer'])
```

I saved it using save_to_disk:

```python
from datasets.dataset_dict import DatasetDict

data_dict = DatasetDict({
    'train': ds_train,
    'test': ds_test
})
data_dict.save_to_disk('/root/xxx/datasets/gsm8k-new')
```

But now I can only load it using load_from_disk:

```python
new_ds = load_from_disk('/root/xxx/datasets/gsm8k-new')
```

output:

```text
DatasetDict({
    train: Dataset({
        features: ['problem', 'solution'],
        num_rows: 7473
    })
    test: Dataset({
        features: ['problem', 'solution'],
        num_rows: 1319
    })
})
```

Attempting to use load_dataset produces unexpected results:

```python
new_ds = load_dataset('/root/xxx/datasets/gsm8k-new')
```

output:

```text
DatasetDict({
    train: Dataset({
        features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
        num_rows: 1
    })
    test: Dataset({
        features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
        num_rows: 1
    })
})
```

Questions:

1. Why is it designed such that after using `save_to_disk`, the dataset cannot be loaded with `load_dataset`? For small projects with limited code, it might be relatively easy to change all instances of `load_dataset` to `load_from_disk`. However, for complex frameworks like TRL or lighteval, diving into the framework code to change `load_dataset` to `load_from_disk` is extremely tedious and error-prone. Additionally, `load_from_disk` cannot load datasets downloaded directly from the Hub, so if you need to modify a dataset you have to choose between `load_from_disk` and `load_dataset`. This creates an unnecessary dichotomy in the API and complicates the workflow when working with modified datasets.
2. What's the recommended approach for this use case? Should I manually process my gsm8k-new dataset to make it compatible with `load_dataset`? Is there a standard way to convert between these formats?

thanks~
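A workaround sketch (not the maintainers' official answer): export the processed splits to Parquet, which `load_dataset` can read back directly. `Dataset.to_parquet` and the `parquet` builder are part of the public `datasets` API; the paths reuse the placeholders from the report.

```python
# Sketch: save the processed splits as Parquet instead of save_to_disk,
# so load_dataset can read them again. Paths reuse the placeholders above.
from datasets import load_dataset

ds_train.to_parquet('/root/xxx/datasets/gsm8k-new/train.parquet')
ds_test.to_parquet('/root/xxx/datasets/gsm8k-new/test.parquet')

new_ds = load_dataset(
    'parquet',
    data_files={
        'train': '/root/xxx/datasets/gsm8k-new/train.parquet',
        'test': '/root/xxx/datasets/gsm8k-new/test.parquet',
    },
)
```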
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7503/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7503/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7502/comments
https://api.github.com/repos/huggingface/datasets/issues/7502/events
https://github.com/huggingface/datasets/issues/7502
2,977,453,814
I_kwDODunzps6xeFb2
7,502
`load_dataset` of size 40GB creates a cache of >720GB
{ "login": "pietrolesci", "id": 61748653, "node_id": "MDQ6VXNlcjYxNzQ4NjUz", "avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pietrolesci", "html_url": "https://github.com/pietrolesci", "followers_url": "https://api.github.com/users/pietrolesci/followers", "following_url": "https://api.github.com/users/pietrolesci/following{/other_user}", "gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}", "starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions", "organizations_url": "https://api.github.com/users/pietrolesci/orgs", "repos_url": "https://api.github.com/users/pietrolesci/repos", "events_url": "https://api.github.com/users/pietrolesci/events{/privacy}", "received_events_url": "https://api.github.com/users/pietrolesci/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,744,044,754,000
1,744,044,920,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
Hi there,

I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:

```python
ds = DatasetDict(
    {
        "train": load_dataset(
            "parquet",
            data_dir=f"{local_dir}/{tok}",
            cache_dir=cache_dir,
            num_proc=min(12, os.cpu_count()),  # type: ignore
            split=ReadInstruction("train", from_=0, to=NUM_TRAIN, unit="abs"),  # type: ignore
        ),
        "validation": load_dataset(
            "parquet",
            data_dir=f"{local_dir}/{tok}",
            cache_dir=cache_dir,
            num_proc=min(12, os.cpu_count()),  # type: ignore
            split=ReadInstruction("train", from_=NUM_TRAIN, unit="abs"),  # type: ignore
        ),
    }
)
```

which still, strangely, creates 720GB of cache. In addition, if I remove the raw parquet file folder (`f"{local_dir}/{tok}"` in this example), I am not able to load anything. So, I am left wondering what this cache is doing.

Am I missing something? Is there a solution to this problem?

Thanks a lot in advance for your help!

A related issue: https://github.com/huggingface/transformers/issues/10204#issue-809007443.

---

Python: 3.11.11
datasets: 3.5.0
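One way to avoid materializing an Arrow cache at all is streaming mode; a sketch under the assumption that splitting by example count is acceptable (`local_dir`, `tok`, and `NUM_TRAIN` are the placeholders from the report):

```python
# Sketch: streaming never writes Arrow cache files to disk.
from datasets import IterableDatasetDict, load_dataset

streamed = load_dataset("parquet", data_dir=f"{local_dir}/{tok}", streaming=True)
ds = IterableDatasetDict(
    {
        "train": streamed["train"].take(NUM_TRAIN),       # first NUM_TRAIN examples
        "validation": streamed["train"].skip(NUM_TRAIN),  # everything after
    }
)
```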
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7502/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7501/comments
https://api.github.com/repos/huggingface/datasets/issues/7501/events
https://github.com/huggingface/datasets/issues/7501
2,976,721,014
I_kwDODunzps6xbSh2
7,501
Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct
{ "login": "yaner-here", "id": 26623948, "node_id": "MDQ6VXNlcjI2NjIzOTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yaner-here", "html_url": "https://github.com/yaner-here", "followers_url": "https://api.github.com/users/yaner-here/followers", "following_url": "https://api.github.com/users/yaner-here/following{/other_user}", "gists_url": "https://api.github.com/users/yaner-here/gists{/gist_id}", "starred_url": "https://api.github.com/users/yaner-here/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaner-here/subscriptions", "organizations_url": "https://api.github.com/users/yaner-here/orgs", "repos_url": "https://api.github.com/users/yaner-here/repos", "events_url": "https://api.github.com/users/yaner-here/events{/privacy}", "received_events_url": "https://api.github.com/users/yaner-here/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "Solved by the default `load_dataset(features)` parameters. Do not use `Sequence` for the `list` in `list[any]` json schema, just simply use `[]`. For example, `\"b\": Sequence({...})` fails but `\"b\": [{...}]` works fine." ]
1,744,029,339,000
1,744,029,784,000
1,744,029,783,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug

`datasets.Features` seems to be unable to handle a json file that contains fields of `list[dict]`.

### Steps to reproduce the bug

```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```

```python
import json
from datasets import Dataset, Features, Value, Sequence, load_dataset

annotation_feature = Features({
    "a": Value("int32"),
    "b": Sequence({
        "c": Value("int32"),
        "d": Value("int32"),
    }),
})
annotation_dataset = load_dataset(
    "json",
    data_files="test.json",
    features=annotation_feature
)
```

```
ArrowNotImplementedError: Unsupported cast from list<item: struct<c: int32, d: int32>> to struct using function cast_struct

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
Cell In[46], line 11
      2 from datasets import Dataset, Features, Value, Sequence, load_dataset
      4 annotation_feature = Features({
      5     "a": Value("int32"),
      6     "b": Sequence({
   (...)
      9     }),
     10 })
---> 11 annotation_dataset = load_dataset(
     12     "json",
     13     data_files="test.json",
     14     features=annotation_feature
     15 )
```

### Expected behavior

A `datasets.Dataset` instance should be initialized.

### Environment info

- `datasets` version: 3.5.0
- Platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39
- Python version: 3.11.11
- `huggingface_hub` version: 0.30.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
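The resolution from the issue's comment, spelled out as a runnable variant: describe the list-of-struct column with plain `[]` syntax instead of `Sequence({...})`.

```python
# Working schema per the issue's resolution comment: use [] rather than
# Sequence(...) for a list of structs.
from datasets import Features, Value, load_dataset

annotation_feature = Features({
    "a": Value("int32"),
    "b": [{  # plain list-of-struct instead of Sequence({...})
        "c": Value("int32"),
        "d": Value("int32"),
    }],
})
annotation_dataset = load_dataset(
    "json",
    data_files="test.json",
    features=annotation_feature,
)
```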
{ "login": "yaner-here", "id": 26623948, "node_id": "MDQ6VXNlcjI2NjIzOTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yaner-here", "html_url": "https://github.com/yaner-here", "followers_url": "https://api.github.com/users/yaner-here/followers", "following_url": "https://api.github.com/users/yaner-here/following{/other_user}", "gists_url": "https://api.github.com/users/yaner-here/gists{/gist_id}", "starred_url": "https://api.github.com/users/yaner-here/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaner-here/subscriptions", "organizations_url": "https://api.github.com/users/yaner-here/orgs", "repos_url": "https://api.github.com/users/yaner-here/repos", "events_url": "https://api.github.com/users/yaner-here/events{/privacy}", "received_events_url": "https://api.github.com/users/yaner-here/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7501/timeline
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7500/comments
https://api.github.com/repos/huggingface/datasets/issues/7500/events
https://github.com/huggingface/datasets/issues/7500
2,974,841,921
I_kwDODunzps6xUHxB
7,500
Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class
{ "login": "benglewis", "id": 3817460, "node_id": "MDQ6VXNlcjM4MTc0NjA=", "avatar_url": "https://avatars.githubusercontent.com/u/3817460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benglewis", "html_url": "https://github.com/benglewis", "followers_url": "https://api.github.com/users/benglewis/followers", "following_url": "https://api.github.com/users/benglewis/following{/other_user}", "gists_url": "https://api.github.com/users/benglewis/gists{/gist_id}", "starred_url": "https://api.github.com/users/benglewis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benglewis/subscriptions", "organizations_url": "https://api.github.com/users/benglewis/orgs", "repos_url": "https://api.github.com/users/benglewis/repos", "events_url": "https://api.github.com/users/benglewis/events{/privacy}", "received_events_url": "https://api.github.com/users/benglewis/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,743,933,369,000
1,743,933,369,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Feature request

Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `DataLoader`, since it does not indicate that the Hugging Face `Dataset` is compatible with the PyTorch `Dataset` class. It would be great if we could get the typing to work nicely.

### Motivation

To avoid casting types in our Python code.

### Your contribution

I would be happy to contribute a PR if this is something that may be accepted and could work with the current approach. This doesn't have to be just for PyTorch; I imagine the same thing would be useful for `tensorflow` and such, but we only have a need for PyTorch at this stage.
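For illustration, a sketch of the cast-based status quo the request wants to eliminate (the dataset name is just an example):

```python
# Sketch: satisfying pyright/Pylance today by casting the formatted dataset
# to torch's Dataset type before handing it to a DataLoader.
from typing import cast

import torch.utils.data
from datasets import load_dataset

hf_ds = load_dataset("imdb", split="train").with_format("torch")
torch_ds = cast(torch.utils.data.Dataset, hf_ds)  # the cast this issue wants to avoid
loader = torch.utils.data.DataLoader(torch_ds, batch_size=8)
```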
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7500/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7499/comments
https://api.github.com/repos/huggingface/datasets/issues/7499/events
https://github.com/huggingface/datasets/pull/7499
2,973,489,126
PR_kwDODunzps6Rd4Zp
7,499
Added cache dirs to load and file_utils
{ "login": "gmongaras", "id": 43501738, "node_id": "MDQ6VXNlcjQzNTAxNzM4", "avatar_url": "https://avatars.githubusercontent.com/u/43501738?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmongaras", "html_url": "https://github.com/gmongaras", "followers_url": "https://api.github.com/users/gmongaras/followers", "following_url": "https://api.github.com/users/gmongaras/following{/other_user}", "gists_url": "https://api.github.com/users/gmongaras/gists{/gist_id}", "starred_url": "https://api.github.com/users/gmongaras/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gmongaras/subscriptions", "organizations_url": "https://api.github.com/users/gmongaras/orgs", "repos_url": "https://api.github.com/users/gmongaras/repos", "events_url": "https://api.github.com/users/gmongaras/events{/privacy}", "received_events_url": "https://api.github.com/users/gmongaras/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,806,164,000
1,743,806,164,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
When `cache_dir` is passed to `datasets.load_dataset`, the value gets lost in the downstream function calls, so the cache falls back to the default path. This PR fixes a few of these instances.
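For context, the call whose `cache_dir` is being dropped (a minimal example; the path is a placeholder):

```python
# Expectation: everything downloaded and prepared for this call lands under
# cache_dir, never under the default ~/.cache/huggingface/datasets.
from datasets import load_dataset

ds = load_dataset("imdb", split="train", cache_dir="/mnt/scratch/hf_cache")
```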
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7499/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7499", "html_url": "https://github.com/huggingface/datasets/pull/7499", "diff_url": "https://github.com/huggingface/datasets/pull/7499.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7499.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7498/comments
https://api.github.com/repos/huggingface/datasets/issues/7498/events
https://github.com/huggingface/datasets/issues/7498
2,969,218,273
I_kwDODunzps6w-qzh
7,498
Extreme memory bandwidth.
{ "login": "J0SZ", "id": 185079645, "node_id": "U_kgDOCwgXXQ", "avatar_url": "https://avatars.githubusercontent.com/u/185079645?v=4", "gravatar_id": "", "url": "https://api.github.com/users/J0SZ", "html_url": "https://github.com/J0SZ", "followers_url": "https://api.github.com/users/J0SZ/followers", "following_url": "https://api.github.com/users/J0SZ/following{/other_user}", "gists_url": "https://api.github.com/users/J0SZ/gists{/gist_id}", "starred_url": "https://api.github.com/users/J0SZ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/J0SZ/subscriptions", "organizations_url": "https://api.github.com/users/J0SZ/orgs", "repos_url": "https://api.github.com/users/J0SZ/repos", "events_url": "https://api.github.com/users/J0SZ/events{/privacy}", "received_events_url": "https://api.github.com/users/J0SZ/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,678,548,000
1,743,678,682,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug

When I use hf datasets on 4 GPUs with 40 workers I get some extreme memory bandwidth of a constant ~3GB/s. However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker). It seems like the workers don't share memory and basically duplicate the data 4x40.

### Steps to reproduce the bug

Trainer arguments:

```
dataloader_pin_memory=True,
dataloader_num_workers=40,
dataloader_prefetch_factor=2,
dataloader_persistent_workers=True,
```

Call trainer:

```
trainer = Trainer(
    model=model,
    args=train_args,
    train_dataset=load_from_disk('..').with_format('torch'),
)
```

The dataset is 600GB and consists of 1225 files.

### Expected behavior

The optimal bandwidth should be 100MB/s to keep up with the GPU.

### Environment info

Linux
Python 3.11
datasets==3.2.0
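A sketch of the wrap the reporter describes, using the documented `to_iterable_dataset` conversion (the path and shard count are placeholders):

```python
# Sketch: convert the memory-mapped Dataset to an IterableDataset so
# dataloader workers stream shards instead of each indexing the full table.
from datasets import load_from_disk

ds = load_from_disk("/path/to/dataset")              # placeholder path
iterable_ds = ds.to_iterable_dataset(num_shards=40)  # e.g. one shard per worker
iterable_ds = iterable_ds.with_format("torch")
```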
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7498/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7497/comments
https://api.github.com/repos/huggingface/datasets/issues/7497/events
https://github.com/huggingface/datasets/issues/7497
2,968,553,693
I_kwDODunzps6w8Ijd
7,497
How to convert videos to images?
{ "login": "tongvibe", "id": 171649931, "node_id": "U_kgDOCjsriw", "avatar_url": "https://avatars.githubusercontent.com/u/171649931?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tongvibe", "html_url": "https://github.com/tongvibe", "followers_url": "https://api.github.com/users/tongvibe/followers", "following_url": "https://api.github.com/users/tongvibe/following{/other_user}", "gists_url": "https://api.github.com/users/tongvibe/gists{/gist_id}", "starred_url": "https://api.github.com/users/tongvibe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tongvibe/subscriptions", "organizations_url": "https://api.github.com/users/tongvibe/orgs", "repos_url": "https://api.github.com/users/tongvibe/repos", "events_url": "https://api.github.com/users/tongvibe/events{/privacy}", "received_events_url": "https://api.github.com/users/tongvibe/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,743,664,119,000
1,743,664,164,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Feature request

Does someone know how to return the images from videos?

### Motivation

I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset (V2.0 and V2.1). I find that although the dataset is v2.0, they are different. It seems like Lerobot V2.0 has two versions: one where the data includes image infos, and another where the data and videos are kept separate. Does someone know how to return the images from videos?
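Not an answer specific to Lerobot, but a generic frame-extraction sketch with OpenCV ("episode.mp4" is a placeholder) showing the mp4-to-images direction:

```python
# Generic sketch: decode all frames of an .mp4 into RGB arrays with OpenCV.
import cv2

cap = cv2.VideoCapture("episode.mp4")  # placeholder file
frames = []
while True:
    ok, frame_bgr = cap.read()  # ok is False once the stream ends
    if not ok:
        break
    frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))  # BGR -> RGB
cap.release()
print(f"decoded {len(frames)} frames")
```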
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7497/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7496/comments
https://api.github.com/repos/huggingface/datasets/issues/7496/events
https://github.com/huggingface/datasets/issues/7496
2,967,345,522
I_kwDODunzps6w3hly
7,496
Json builder: Allow features to override problematic Arrow types
{ "login": "edmcman", "id": 1017189, "node_id": "MDQ6VXNlcjEwMTcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmcman", "html_url": "https://github.com/edmcman", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "organizations_url": "https://api.github.com/users/edmcman/orgs", "repos_url": "https://api.github.com/users/edmcman/repos", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "received_events_url": "https://api.github.com/users/edmcman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,743,622,036,000
1,743,622,036,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Feature request

In the JSON builder, use explicitly requested feature types before or while converting to Arrow.

### Motivation

Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work around these problems by explicitly setting problematic columns' types. But it seems like this is not possible, because the features are only used *after* converting to Arrow. Here's a simple example where the Arrow error could potentially be avoided by converting the column to a string: https://colab.research.google.com/drive/16QHRdbUwKSrpwVfGwu8V8AHr8v2dv0dt?usp=sharing

### Your contribution

Maybe with some guidance. I'm not very familiar with Arrow or pandas.
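A common detour in the meantime, sketched under the assumption the input is newline-delimited JSON ("data.jsonl" and "mixed_col" are placeholders): stringify the problematic column in pandas before Arrow ever sees it.

```python
# Sketch: coerce the troublesome column to string prior to Arrow conversion.
import pandas as pd
from datasets import Dataset

df = pd.read_json("data.jsonl", lines=True)    # placeholder file
df["mixed_col"] = df["mixed_col"].astype(str)  # placeholder column name
ds = Dataset.from_pandas(df)
```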
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7496/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7495/comments
https://api.github.com/repos/huggingface/datasets/issues/7495/events
https://github.com/huggingface/datasets/issues/7495
2,967,034,060
I_kwDODunzps6w2VjM
7,495
Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0
{ "login": "bruno-hays", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bruno-hays", "html_url": "https://github.com/bruno-hays", "followers_url": "https://api.github.com/users/bruno-hays/followers", "following_url": "https://api.github.com/users/bruno-hays/following{/other_user}", "gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}", "starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions", "organizations_url": "https://api.github.com/users/bruno-hays/orgs", "repos_url": "https://api.github.com/users/bruno-hays/repos", "events_url": "https://api.github.com/users/bruno-hays/events{/privacy}", "received_events_url": "https://api.github.com/users/bruno-hays/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,613,271,000
1,743,674,062,000
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug

I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since version 3.4.0, every column except audio is missing when I load the dataset. Interestingly, the dataset viewer still shows the correct columns.

### Steps to reproduce the bug

```python
from datasets import load_dataset

ds = load_dataset("BrunoHays/Accueil_UBS", streaming=True)
print(next(iter(ds["test"])).keys())
```

With datasets >= 3.4.0:
-> dict_keys(['audio'])

With datasets == 3.3.2:
-> dict_keys(['audio', 'id', 'speaker', 'sentence', 'raw_sentence', 'start_timestamp', 'end_timestamp', 'overlap'])

### Expected behavior

All the columns should be present.

### Environment info

- `datasets` version: 3.3.2
- Platform: macOS-14.6.1-x86_64-i386-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.30.1
- PyArrow version: 16.1.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7495/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7494/comments
https://api.github.com/repos/huggingface/datasets/issues/7494/events
https://github.com/huggingface/datasets/issues/7494
2,965,347,685
I_kwDODunzps6wv51l
7,494
Broken links in pdf loading documentation
{ "login": "VyoJ", "id": 75789232, "node_id": "MDQ6VXNlcjc1Nzg5MjMy", "avatar_url": "https://avatars.githubusercontent.com/u/75789232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VyoJ", "html_url": "https://github.com/VyoJ", "followers_url": "https://api.github.com/users/VyoJ/followers", "following_url": "https://api.github.com/users/VyoJ/following{/other_user}", "gists_url": "https://api.github.com/users/VyoJ/gists{/gist_id}", "starred_url": "https://api.github.com/users/VyoJ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VyoJ/subscriptions", "organizations_url": "https://api.github.com/users/VyoJ/orgs", "repos_url": "https://api.github.com/users/VyoJ/repos", "events_url": "https://api.github.com/users/VyoJ/events{/privacy}", "received_events_url": "https://api.github.com/users/VyoJ/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,576,322,000
1,743,576,322,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug

Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):

1. The link for [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.co/docs/datasets/main/en/pdf_dataset instead of https://huggingface.co/docs/datasets/main/en/document_dataset and hence gives a 404 error.
2. At the top of the page, it's mentioned that to work with pdf datasets we need to have the `pdfplumber` package installed, but the link to its installation guide points to the `pytorch/vision` [installation instructions](https://github.com/pytorch/vision#installation) instead of `pdfplumber`'s [guide](https://github.com/jsvine/pdfplumber#installation).

I love the work on enabling pdf dataset support, and these small tweaks would help everyone navigate the docs better. Thanks!

### Steps to reproduce the bug

The issue is on the [Load Document Data](https://huggingface.co/docs/datasets/main/en/document_load) page of the datasets docs.

### Expected behavior

1. For the first issue, I went through the [source .mdx code](https://github.com/huggingface/datasets/blob/main/docs/source/document_load.mdx?plain=1#L188) of the datasets docs and found that the link points to `./pdf_dataset` instead of `./document_dataset`.
2. For the second issue, I went through the [source .mdx code](https://github.com/huggingface/datasets/blob/main/docs/source/document_load.mdx?plain=1#L13) of the datasets docs and found that the link is the `pytorch/vision` [installation instructions](https://github.com/pytorch/vision#installation) instead of `pdfplumber`'s [guide](https://github.com/jsvine/pdfplumber#installation).

Just replacing these two links should fix the bugs.

### Environment info

datasets v3.5.0 (main at the time of writing)
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7494/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7494/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7493/comments
https://api.github.com/repos/huggingface/datasets/issues/7493/events
https://github.com/huggingface/datasets/issues/7493
2,964,025,179
I_kwDODunzps6wq29b
7,493
push_to_hub does not upload videos
{ "login": "DominikVincent", "id": 9339403, "node_id": "MDQ6VXNlcjkzMzk0MDM=", "avatar_url": "https://avatars.githubusercontent.com/u/9339403?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DominikVincent", "html_url": "https://github.com/DominikVincent", "followers_url": "https://api.github.com/users/DominikVincent/followers", "following_url": "https://api.github.com/users/DominikVincent/following{/other_user}", "gists_url": "https://api.github.com/users/DominikVincent/gists{/gist_id}", "starred_url": "https://api.github.com/users/DominikVincent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DominikVincent/subscriptions", "organizations_url": "https://api.github.com/users/DominikVincent/orgs", "repos_url": "https://api.github.com/users/DominikVincent/repos", "events_url": "https://api.github.com/users/DominikVincent/events{/privacy}", "received_events_url": "https://api.github.com/users/DominikVincent/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,526,820,000
1,743,526,820,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug

Hello,

I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows.

I created a dataset locally, and it references the videos; the video readers can read them correctly. I use push_to_hub() to upload the dataset to the hub.

Expectation: a user uses `load_dataset` and can load the videos.

However, the videos seem to be just referenced via paths on the computer and not uploaded to the hub. Therefore a target user cannot load the videos in the dataset.

### Steps to reproduce the bug

1. Create a video dataset with paths, e.g. `{"videos": [path1, path2, ...]}`.
2. `dataset.push_to_hub`
3. On a different computer (or the same PC, in a different folder, if relative paths are used):

```
dataset = load_dataset("siplab/egosim", split="train")
video = dataset[0]["video_head"]
```

4. This will fail.

### Expected behavior

Expectation: a user uses `load_dataset` and can load the videos.

### Environment info

datasets 3.1.0
Python 3.8.18
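A sketch of the usual fix, under the assumption that the `Video` feature behaves like `Image` on upload (i.e. casting makes `push_to_hub` embed the files rather than keep local paths); the column and repo names are placeholders:

```python
# Sketch, assuming Video embeds files on push_to_hub the way Image does.
from datasets import Dataset, Video

dataset = Dataset.from_dict({"videos": ["clip1.mp4", "clip2.mp4"]})  # placeholder paths
dataset = dataset.cast_column("videos", Video())
dataset.push_to_hub("username/my-video-dataset")  # placeholder repo id
```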
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7493/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7492/comments
https://api.github.com/repos/huggingface/datasets/issues/7492/events
https://github.com/huggingface/datasets/pull/7492
2,959,088,568
PR_kwDODunzps6QtCQM
7,492
Closes #7457
{ "login": "Harry-Yang0518", "id": 129883215, "node_id": "U_kgDOB73cTw", "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Harry-Yang0518", "html_url": "https://github.com/Harry-Yang0518", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "This PR fixes issue #7457" ]
1,743,367,280,000
1,743,539,172,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets—similar to HF_HUB_CACHE for models.
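The variable being documented, shown in use (the path is a placeholder; it must be set before `datasets` is imported):

```python
# Sketch: redirect the datasets cache away from ~/.cache/huggingface/datasets.
import os

os.environ["HF_DATASETS_CACHE"] = "/mnt/bigdisk/hf_datasets_cache"  # placeholder path

from datasets import load_dataset  # import after setting the variable

ds = load_dataset("imdb", split="train")  # cached under the directory above
```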
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7492/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7492", "html_url": "https://github.com/huggingface/datasets/pull/7492", "diff_url": "https://github.com/huggingface/datasets/pull/7492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7492.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7491/comments
https://api.github.com/repos/huggingface/datasets/issues/7491/events
https://github.com/huggingface/datasets/pull/7491
2,959,085,647
PR_kwDODunzps6QtBsD
7,491
docs: update cache.mdx to include HF_DATASETS_CACHE documentation
{ "login": "Harry-Yang0518", "id": 129883215, "node_id": "U_kgDOB73cTw", "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Harry-Yang0518", "html_url": "https://github.com/Harry-Yang0518", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "Already included HF_DATASETS_CACHE" ]
1,743,366,903,000
1,743,367,000,000
1,743,367,000,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
{ "login": "Harry-Yang0518", "id": 129883215, "node_id": "U_kgDOB73cTw", "avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Harry-Yang0518", "html_url": "https://github.com/Harry-Yang0518", "followers_url": "https://api.github.com/users/Harry-Yang0518/followers", "following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}", "gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}", "starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions", "organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs", "repos_url": "https://api.github.com/users/Harry-Yang0518/repos", "events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}", "received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7491/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7491", "html_url": "https://github.com/huggingface/datasets/pull/7491", "diff_url": "https://github.com/huggingface/datasets/pull/7491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7491.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7490/comments
https://api.github.com/repos/huggingface/datasets/issues/7490/events
https://github.com/huggingface/datasets/pull/7490
2,958,826,222
PR_kwDODunzps6QsPUI
7,490
(refactor) remove redundant logic in _check_valid_index_key
{ "login": "suzyahyah", "id": 2980993, "node_id": "MDQ6VXNlcjI5ODA5OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/2980993?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suzyahyah", "html_url": "https://github.com/suzyahyah", "followers_url": "https://api.github.com/users/suzyahyah/followers", "following_url": "https://api.github.com/users/suzyahyah/following{/other_user}", "gists_url": "https://api.github.com/users/suzyahyah/gists{/gist_id}", "starred_url": "https://api.github.com/users/suzyahyah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suzyahyah/subscriptions", "organizations_url": "https://api.github.com/users/suzyahyah/orgs", "repos_url": "https://api.github.com/users/suzyahyah/repos", "events_url": "https://api.github.com/users/suzyahyah/events{/privacy}", "received_events_url": "https://api.github.com/users/suzyahyah/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,335,142,000
1,743,335,422,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
This PR contributes a minor refactor in a small function in `src/datasets/formatting/formatting.py`. No change in logic. In the original code, there are separate if-conditionals for `isinstance(key, range)` and `isinstance(key, Iterable)`, with essentially the same logic. This PR combines these two using a single if statement.

**Considerations**

1. Although a range in Python is guaranteed to contain integers, internally calling `int()` on an object that is already an int is negligible. (In Python it returns the original object; it doesn't create a new integer object or perform any actual conversion.)
2. Technically a range is already an Iterable, and we could just do `isinstance(key, Iterable)`, but I explicitly did `isinstance(key, (range, Iterable))` just to be super obvious and consistent that both cases are handled, because I see `slice, range, Iterable` everywhere in this `formatting.py`.
3. This PR removes the `if len(key) > 0` conditional. I think it is cleaner to have it this way, for three reasons:
   - There was originally no else statement, and the code would have failed silently anyway.
   - The `len(key) == 0` case should be caught much earlier, rather than in `formatting.py`.
   - There are actually multiple cases where this would fail: if `len(key) == 0`, if key is non-numeric or float, or if key is a list of lists. It's clunky to state all this, and the error would be thrown during max or indexing anyway.

**Previous PR and Issues Checks**

1. No known PRs or Issues (either closed or open) in the hf datasets repository.

**Tests**

1. Tested using Dataset (`load_dataset("wikitext", "wikitext-103-raw-v1")`), PyTorch DataLoader, with a PyTorch BatchSampler (a list of indexes returned instead of a single index).
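A reconstructed sketch of what the combined branch plausibly looks like after this refactor (inferred from the description above, not copied from the diff):

```python
# Reconstruction (not the actual diff): range and other iterables of ints
# share one branch, validating their extreme values recursively.
from collections.abc import Iterable


def _check_valid_index_key(key, size: int) -> None:
    if isinstance(key, int):
        if (key < 0 and key + size < 0) or key >= size:
            raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
    elif isinstance(key, (range, Iterable)):
        # int() on an object that is already an int returns it unchanged
        _check_valid_index_key(int(max(key)), size=size)
        _check_valid_index_key(int(min(key)), size=size)
```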
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7490/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7490", "html_url": "https://github.com/huggingface/datasets/pull/7490", "diff_url": "https://github.com/huggingface/datasets/pull/7490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7490.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7489/comments
https://api.github.com/repos/huggingface/datasets/issues/7489/events
https://github.com/huggingface/datasets/pull/7489
2,958,204,763
PR_kwDODunzps6QqSRD
7,489
fix: loading of datasets from Disk(#7373)
{ "login": "sam-hey", "id": 40773225, "node_id": "MDQ6VXNlcjQwNzczMjI1", "avatar_url": "https://avatars.githubusercontent.com/u/40773225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sam-hey", "html_url": "https://github.com/sam-hey", "followers_url": "https://api.github.com/users/sam-hey/followers", "following_url": "https://api.github.com/users/sam-hey/following{/other_user}", "gists_url": "https://api.github.com/users/sam-hey/gists{/gist_id}", "starred_url": "https://api.github.com/users/sam-hey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sam-hey/subscriptions", "organizations_url": "https://api.github.com/users/sam-hey/orgs", "repos_url": "https://api.github.com/users/sam-hey/repos", "events_url": "https://api.github.com/users/sam-hey/events{/privacy}", "received_events_url": "https://api.github.com/users/sam-hey/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "@nepfaff Could you confirm if this fixes the issue for you? I checked Memray, and everything looked good on my end.\r\n\r\nInstall: `pip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets`\r\n", "Will aim to get to this soon. I don't have a rapid testing pipeline setup but need to wait for some AWS nodes to become free", "I now set up a small experiment:\r\n\r\n```python\r\n# Log initial RAM usage\r\n process = psutil.Process(os.getpid())\r\n initial_ram = process.memory_info().rss / (1024 * 1024) # Convert to MB\r\n logging.info(f\"Initial RAM usage: {initial_ram:.2f} MB\")\r\n\r\n chunk_datasets = [\r\n Dataset.load_from_disk(dataset_path, keep_in_memory=False) for _ in range(N)\r\n ]\r\n combined_dataset = concatenate_datasets(chunk_datasets)\r\n\r\n # Log final RAM usage\r\n final_ram = process.memory_info().rss / (1024 * 1024) # Convert to MB\r\n ram_diff = final_ram - initial_ram\r\n logging.info(f\"Final RAM usage: {final_ram:.2f} MB\")\r\n logging.info(f\"RAM usage increase: {ram_diff:.2f} MB\")\r\n```\r\n\r\nThe RAM usage is linearly correlated with `N` on datasets master!\r\n\r\nFor my test dataset:\r\n- N=5 => RAM usage increase: 26302.91 MB\r\n- N=10 => RAM usage increase: 52315.18 MB\r\n- N=20 => RAM usage increase: 104510.65 MB\r\n- N=40 => RAM usage increase: 209166.30 MB\r\n\r\nUnfortunately, your patch doesn't seem to change this:\r\n```bash\r\npip install git+https://github.com/sam-hey/datasets.git@fix/concatenate_datasets\r\npip list | grep datasets\r\ndatasets 3.5.1.dev0\r\n```\r\nGives exactly the same RAM statistics.\r\n\r\n**Edit:** The results are a bit flawed as the memory increase all seems to come from `Dataset.load_from_disk(dataset_path, keep_in_memory=False)` here (which I don't think should happen either?) and not from `concatenate_datasets`. This seems different from my large-scale setup that runs out of memory during `concatenate_datasets` but I don't seem to be able to replicate this here...", "Thanks a lot, @nepfaff, for taking a look at this! It seems that `concatenate_datasets()` is fixed with this PR. I can also confirm that loading a large number of files requires significant memory. However, as I understand it, this is expected/a bug since the memory consumption stems from `pa.memory_map()`, which returns a memory-mapped file.\r\n\r\nThis behavior might be related to this bug: https://github.com/apache/arrow/issues/34423 \r\n\r\n<img width=\"1728\" alt=\"Screenshot 2025-04-03 at 16 01 11\" src=\"https://github.com/user-attachments/assets/475691d8-3aba-4d7e-b8ef-5e7552c70b14\" />\r\n" ]
1,743,265,378,000
1,743,688,939,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
Fixes dataset loading from disk by ensuring that memory maps and streams are properly closed. For more details, see https://github.com/huggingface/datasets/issues/7373.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7489/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7489", "html_url": "https://github.com/huggingface/datasets/pull/7489", "diff_url": "https://github.com/huggingface/datasets/pull/7489.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7489.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7488/comments
https://api.github.com/repos/huggingface/datasets/issues/7488/events
https://github.com/huggingface/datasets/pull/7488
2,956,559,358
PR_kwDODunzps6QlLmn
7,488
Support underscore int read instruction
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7488). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "you rock, Quentin - thank you!" ]
1,743,177,675,000
1,743,178,844,000
1,743,178,843,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
close https://github.com/huggingface/datasets/issues/7481
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7488/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7488", "html_url": "https://github.com/huggingface/datasets/pull/7488", "diff_url": "https://github.com/huggingface/datasets/pull/7488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7488.patch", "merged_at": "2025-03-28T16:20:43" }
https://api.github.com/repos/huggingface/datasets/issues/7487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7487/comments
https://api.github.com/repos/huggingface/datasets/issues/7487/events
https://github.com/huggingface/datasets/pull/7487
2,956,533,448
PR_kwDODunzps6QlF8N
7,487
Write pdf in map
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,743,176,965,000
1,743,181,793,000
1,743,181,791,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
Fix this error when mapping a PDF dataset:

```
pyarrow.lib.ArrowInvalid: Could not convert <pdfplumber.pdf.PDF object at 0x13498ee40> with type PDF: did not recognize Python value type when inferring an Arrow data type
```

and also let map() outputs be lists of images or pdfs.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7487/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7487", "html_url": "https://github.com/huggingface/datasets/pull/7487", "diff_url": "https://github.com/huggingface/datasets/pull/7487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7487.patch", "merged_at": "2025-03-28T17:09:51" }
https://api.github.com/repos/huggingface/datasets/issues/7486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7486/comments
https://api.github.com/repos/huggingface/datasets/issues/7486/events
https://github.com/huggingface/datasets/issues/7486
2,954,042,179
I_kwDODunzps6wExtD
7,486
`shared_datadir` fixture is missing
{ "login": "lahwaacz", "id": 1289205, "node_id": "MDQ6VXNlcjEyODkyMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1289205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lahwaacz", "html_url": "https://github.com/lahwaacz", "followers_url": "https://api.github.com/users/lahwaacz/followers", "following_url": "https://api.github.com/users/lahwaacz/following{/other_user}", "gists_url": "https://api.github.com/users/lahwaacz/gists{/gist_id}", "starred_url": "https://api.github.com/users/lahwaacz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lahwaacz/subscriptions", "organizations_url": "https://api.github.com/users/lahwaacz/orgs", "repos_url": "https://api.github.com/users/lahwaacz/repos", "events_url": "https://api.github.com/users/lahwaacz/events{/privacy}", "received_events_url": "https://api.github.com/users/lahwaacz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "OK I was missing the `pytest-datadir` package. Sorry for the noise!" ]
1,743,099,432,000
1,743,104,951,000
1,743,104,950,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug Running the tests for the latest release fails due to the missing `shared_datadir` fixture. ### Steps to reproduce the bug Running `pytest` while building a package for Arch Linux leads to errors like the one below. The identical `fixture 'shared_datadir' not found` error is raised at setup for every parametrized case of `test_pdf_feature_encode_example` (tests/features/test_pdf.py:8) and for `test_dataset_with_pdf_feature` (tests/features/test_pdf.py:34); the verbatim repeats are omitted here: ``` ==================================== ERRORS ==================================== _________ ERROR at setup of test_pdf_feature_encode_example[<lambda>1] _________ [gw44] linux -- Python 3.13.2 /build/python-datasets/src/datasets-3.5.0/test-env/bin/python file /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py, line 8 @require_pdfplumber @pytest.mark.parametrize( "build_example", [ lambda pdf_path: pdf_path, lambda pdf_path: open(pdf_path, "rb").read(), lambda pdf_path: {"path": pdf_path}, lambda pdf_path: {"path": pdf_path, "bytes": None}, lambda pdf_path: {"path": pdf_path, "bytes": open(pdf_path, "rb").read()}, lambda pdf_path: {"path": None, "bytes": open(pdf_path, "rb").read()}, lambda pdf_path: {"bytes": open(pdf_path, "rb").read()}, ], ) def test_pdf_feature_encode_example(shared_datadir, build_example): E fixture 'shared_datadir' not found > available fixtures: _hf_gated_dataset_repo_txt_data, arrow_file, arrow_path, audio_file, bz2_csv_path, bz2_file, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, ci_hfh_hf_hub_url, ci_hub_config, cleanup_repo, csv2_path, csv_path, data_dir_with_hidden_files, dataset, dataset_dict, disable_implicit_token, disable_tqdm_output, doctest_namespace, geoparquet_path, gz_file, hf_api, hf_gated_dataset_repo_txt_data, hf_private_dataset_repo_txt_data, hf_private_dataset_repo_txt_data_, hf_private_dataset_repo_zipped_img_data, hf_private_dataset_repo_zipped_img_data_, hf_private_dataset_repo_zipped_txt_data, hf_private_dataset_repo_zipped_txt_data_, hf_token, image_file, json_dict_of_lists_path, json_list_of_dicts_path, jsonl2_path, jsonl_312_path, jsonl_gz_path, jsonl_path, jsonl_str_path, lz4_file, mock_fsspec, mockfs, monkeypatch, parquet_path, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, set_ci_hub_access_token, set_sqlalchemy_silence_uber_warning, set_test_cache_config, set_update_download_counts_to_false, seven_zip_file, sqlite_path, tar_file, tar_jsonl_path, tar_nested_jsonl_path, temporary_repo, tensor_file, testrun_uid, text2_path, text_dir, text_dir_with_unsupported_extension, text_file, text_file_content, text_gz_path, text_path, text_path_with_unicode_new_lines, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpfs, worker_id, xml_file, xz_file, zero_time_out_for_remote_code, zip_csv_path, zip_csv_with_dir_path, zip_file, zip_image_path, zip_jsonl_path, zip_jsonl_with_dir_path, zip_nested_jsonl_path, zip_text_path, zip_text_with_dir_path, zip_unsupported_ext_path, zip_uppercase_csv_path, zstd_file > use 'pytest --fixtures [testpath]' for help on them. /build/python-datasets/src/datasets-3.5.0/tests/features/test_pdf.py:8 ``` ### Expected behavior All fixtures used in tests should be available. ### Environment info Arch Linux build system, building the [python-datasets](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets) package. There are actually [many deselected tests](https://gitlab.archlinux.org/archlinux/packaging/packages/python-datasets/-/blob/6f97957f0c326cc7b3da6b7f12326305bcaef374/PKGBUILD#L66-148) which were failing on previous releases, but these errors popped up in 3.5.0.
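As the resolution comment above notes, `shared_datadir` comes from the third-party `pytest-datadir` plugin rather than from `datasets` itself, so installing it (`pip install pytest-datadir`) makes the fixture available. A minimal sketch of how the fixture behaves once the plugin is installed; the file name is an assumption:

```python
# pytest-datadir exposes `shared_datadir`: a pathlib.Path pointing to a
# temporary copy of the `data/` directory next to the test module.
def test_sample_pdf_exists(shared_datadir):
    pdf_path = shared_datadir / "test_pdf.pdf"  # hypothetical data file
    assert pdf_path.exists()
```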
{ "login": "lahwaacz", "id": 1289205, "node_id": "MDQ6VXNlcjEyODkyMDU=", "avatar_url": "https://avatars.githubusercontent.com/u/1289205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lahwaacz", "html_url": "https://github.com/lahwaacz", "followers_url": "https://api.github.com/users/lahwaacz/followers", "following_url": "https://api.github.com/users/lahwaacz/following{/other_user}", "gists_url": "https://api.github.com/users/lahwaacz/gists{/gist_id}", "starred_url": "https://api.github.com/users/lahwaacz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lahwaacz/subscriptions", "organizations_url": "https://api.github.com/users/lahwaacz/orgs", "repos_url": "https://api.github.com/users/lahwaacz/repos", "events_url": "https://api.github.com/users/lahwaacz/events{/privacy}", "received_events_url": "https://api.github.com/users/lahwaacz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7486/timeline
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7485/comments
https://api.github.com/repos/huggingface/datasets/issues/7485/events
https://github.com/huggingface/datasets/pull/7485
2,953,696,519
PR_kwDODunzps6QbjFJ
7,485
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,743,093,574,000
1,743,093,719,000
1,743,093,582,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7485/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7485", "html_url": "https://github.com/huggingface/datasets/pull/7485", "diff_url": "https://github.com/huggingface/datasets/pull/7485.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7485.patch", "merged_at": "2025-03-27T16:39:42" }
https://api.github.com/repos/huggingface/datasets/issues/7484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7484/comments
https://api.github.com/repos/huggingface/datasets/issues/7484/events
https://github.com/huggingface/datasets/pull/7484
2,953,677,168
PR_kwDODunzps6Qbevn
7,484
release: 3.5.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,743,093,207,000
1,743,093,344,000
1,743,093,262,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7484/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7484", "html_url": "https://github.com/huggingface/datasets/pull/7484", "diff_url": "https://github.com/huggingface/datasets/pull/7484.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7484.patch", "merged_at": "2025-03-27T16:34:22" }
https://api.github.com/repos/huggingface/datasets/issues/7483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7483/comments
https://api.github.com/repos/huggingface/datasets/issues/7483/events
https://github.com/huggingface/datasets/pull/7483
2,951,856,468
PR_kwDODunzps6QVInB
7,483
Support skip_trying_type
{ "login": "yoshitomo-matsubara", "id": 11156001, "node_id": "MDQ6VXNlcjExMTU2MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoshitomo-matsubara", "html_url": "https://github.com/yoshitomo-matsubara", "followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers", "following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}", "gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}", "starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions", "organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs", "repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos", "events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}", "received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Cool ! Can you run `make style` to fix code formatting ?\r\n\r\nI was also thinking of naming the argument `try_original_type` and have it `True` by default", "@lhoestq \r\n\r\nThank you for the suggestion! I renamed the argument with `True` by default and ran `make style`\r\nDoes it look good?", "Thanks @lhoestq !\r\n\r\nLet me know if there are anything that I can do for this PR. Otherwise, looking forward to seeing this update in the package soon!" ]
1,743,059,240,000
1,744,090,669,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
This PR addresses Issue #7472 cc: @lhoestq
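Based on the review comments above, a hedged usage sketch of the proposed argument after its rename to `try_original_type` (default `True`); since the PR is still open, the exact signature is an assumption drawn from the discussion, not the merged API:

```python
from datasets import Dataset

ds = Dataset.from_dict({"n": [1, 2, 3]})

def widen(example):
    example["n"] = example["n"] + 0.5  # the column's type changes to float
    return example

# With try_original_type=False, map() would skip trying to cast "n"
# back to its original int64 type (per the review discussion above).
ds2 = ds.map(widen, try_original_type=False)
print(ds2.features)
```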
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7483/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7483", "html_url": "https://github.com/huggingface/datasets/pull/7483", "diff_url": "https://github.com/huggingface/datasets/pull/7483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7483.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7482/comments
https://api.github.com/repos/huggingface/datasets/issues/7482/events
https://github.com/huggingface/datasets/pull/7482
2,950,890,368
PR_kwDODunzps6QRyY6
7,482
Implement capability to restore non-nullability in Features
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "Interestingly, this does not close #7479. The Features are not correctly maintained when calling `from_dict` with the custom Features.", "Unfortunately this PR does not fix the reported issue. After more digging:\r\n\r\n- when the dataset is created, nullability information is lost in Features;\r\n- even with this PR, it will get lost eventually because of internal copying/recreation of the Features object without accounting for the nullable fields;\r\n- even if that is also fixed, and Features.arrow_schema correctly holds the nullability info, [casting the arrow Table](https://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L677) with a less strict schema to a more strict one (with nullability) will fail (only on deeper structs, not on flat fields). \r\n\r\nInterestingly, passing custom Features does not immediately load the underlying data with the right arrow_schema. Instead, the workflow is like this:\r\n\r\n- load pyarrow table with any of the methods (from_dict, from_pandas, etc.), which will always AUTO INFER rather than use a provided schema\r\n- the loaded table with auto-schema will be used to initialize the `Dataset` class, and only during construction will [CAST](https://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L677) the table to the user-provided schema if needed, if it differs from the auto-inferred one.\r\n\r\nSo I figured, since many/all of the pyarrow [`Table.from_*`](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html) methods have a `schema=` argument, we should already load the Table with the correct schema to begin with. As an example, I tried changing this line:\r\n\r\nhttps://github.com/huggingface/datasets/blob/5f8d2ad9a1b0bccfd962d998987228addfd5be9f/src/datasets/arrow_dataset.py#L940\r\n\r\nto include the arrow_schema, if provided:\r\n\r\n```python\r\npa_table = InMemoryTable.from_pydict(mapping=mapping, schema=features.arrow_schema if features is not None else None)\r\n```\r\n\r\nBut that leads to:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ampere/vanroy/datasets/scratch.py\", line 33, in <module>\r\n ds = Dataset.from_dict(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/local/vanroy/datasets/src/datasets/arrow_dataset.py\", line 957, in from_dict\r\n pa_table = InMemoryTable.from_pydict(mapping=mapping, schema=features.arrow_schema if features is not None else None)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/local/vanroy/datasets/src/datasets/table.py\", line 758, in from_pydict\r\n return cls(pa.Table.from_pydict(*args, **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"pyarrow/table.pxi\", line 1968, in pyarrow.lib._Tabular.from_pydict\r\n File \"pyarrow/table.pxi\", line 6354, in pyarrow.lib._from_pydict\r\n File \"pyarrow/array.pxi\", line 402, in pyarrow.lib.asarray\r\n File \"pyarrow/array.pxi\", line 252, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 114, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/local/vanroy/datasets/src/datasets/arrow_writer.py\", line 201, in __arrow_array__\r\n raise ValueError(\"TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\")\r\nValueError: TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\r\n```\r\n\r\nand I am not too familiar with pyarrow to solve this.\r\n\r\nSo ultimately I'm a bit at 
a loss here. I *think*, if we want to do this right, the automatic casting in init should be removed in favor of handling the logic inside `Dataset.from_*`, by passing the schema explicitly to `pa.Table.from_*(..., schema=schema)`. But I lack the knowledge of pyarrow to go further than what I've written above.\r\n", "It's indeed a bit more work to support nullable since, in addition to your comments, there is unclear behavior when it comes to concatenating nullable with non-nullable, and maybe how to handle non-nullable lists and nested data.\r\n\r\nBut yup I agree having the `Dataset.from_*` function pass the `schema` to the `pa.Table.from_*` would be the way.\r\n\r\nJust one comment about this error: \r\n\r\n```\r\nValueError: TypedSequence is supposed to be used with pa.array(typed_sequence, type=None)\r\n```\r\n\r\nThis happens because `Dataset.from_dict` uses `OptimizedTypedSequence` by default, which should only be used if the user doesn't specify a schema" ]
1,743,027,369,000
1,743,080,870,000
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
This PR attempts to keep track of non-nullable pyarrow fields when converting a `pa.Schema` to `Features`. At the same time, when outputting the `arrow_schema`, the original non-nullable fields are restored. This allows for more consistent behavior and avoids the breaking behavior illustrated in #7479. I am by no means a pyarrow expert, so some logic in `find_non_nullable_fields` may not be perfect. Not sure if more logic (type checks) is needed for deep-checking a given schema. Maybe there are other pyarrow structures that need to be covered? Tests are added, but again, these may not have sufficient coverage in terms of pyarrow structure types. Closes #7479
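For orientation, a hypothetical sketch of what a `find_non_nullable_fields` helper might do; the name matches the PR description above, but the body (dotted paths, struct-only recursion) is an assumption, not the PR's actual code:

```python
# Walk a pyarrow schema and record dotted paths of fields declared
# nullable=False, recursing into struct fields.
import pyarrow as pa

def find_non_nullable_fields(schema: pa.Schema) -> list[str]:
    paths = []

    def walk(field: pa.Field, prefix: str) -> None:
        path = f"{prefix}.{field.name}" if prefix else field.name
        if not field.nullable:
            paths.append(path)
        if pa.types.is_struct(field.type):
            for child in field.type:  # StructType iterates over its fields
                walk(child, path)

    for field in schema:
        walk(field, "")
    return paths

schema = pa.schema([pa.field("text", pa.string(), nullable=False)])
assert find_non_nullable_fields(schema) == ["text"]
```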
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7482/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7482", "html_url": "https://github.com/huggingface/datasets/pull/7482", "diff_url": "https://github.com/huggingface/datasets/pull/7482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7482.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7481/comments
https://api.github.com/repos/huggingface/datasets/issues/7481/events
https://github.com/huggingface/datasets/issues/7481
2,950,692,971
I_kwDODunzps6v4ABr
7,481
deal with python `10_000` legal number in slice syntax
{ "login": "sfc-gh-sbekman", "id": 196988264, "node_id": "U_kgDOC73NaA", "avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sfc-gh-sbekman", "html_url": "https://github.com/sfc-gh-sbekman", "followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers", "following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}", "gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}", "starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions", "organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs", "repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos", "events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}", "received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
[ "should be an easy fix, I opened a PR" ]
1,743,019,854,000
1,743,178,844,000
1,743,178,844,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Feature request ``` In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]") In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]") [dozens of frames skipped] File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _str_to_read_instruction(spec) 442 res = _SUB_SPEC_RE.match(spec) 443 if not res: --> 444 raise ValueError(f"Unrecognized instruction format: {spec}") ValueError: Unrecognized instruction format: train_sft[:1_000] ``` It took me a while to understand what the problem was. But apparently the split-spec parser (`datasets/arrow_reader.py` in the traceback above) doesn't allow Python numbers that include `_`, as in `1_000`. The `_` aids readability, since `10_000_000` vs `10000000` makes the actual number obviously easier to grasp. Feature request: ideally `datasets`, being a Python module, would do the right thing and convert Python numbers into whatever the parser supports - in this case stripping the `_`s. Second best, it would error and tell the user that numbers with `_` in split slices are not acceptable, so that the user won't have to deal with a huge traceback they know nothing about. Thank you!
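A hedged sketch of the "do the right thing" direction: normalize Python-style digit separators before the slice spec is parsed. The helper name and regex are illustrative assumptions, not the actual fix that landed in `arrow_reader.py`:

```python
import re

# Strip "_" only when it sits between digits, so split names like
# "train_sft" are left untouched while "1_000" becomes "1000".
_UNDERSCORE_BETWEEN_DIGITS = re.compile(r"(?<=\d)_(?=\d)")

def normalize_split_spec(spec: str) -> str:
    return _UNDERSCORE_BETWEEN_DIGITS.sub("", spec)

assert normalize_split_spec("train_sft[:1_000]") == "train_sft[:1000]"
assert normalize_split_spec("train_sft[10_000:20_000]") == "train_sft[10000:20000]"
```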
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7481/timeline
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7480/comments
https://api.github.com/repos/huggingface/datasets/issues/7480/events
https://github.com/huggingface/datasets/issues/7480
2,950,315,214
I_kwDODunzps6v2jzO
7,480
HF_DATASETS_CACHE ignored?
{ "login": "stephenroller", "id": 31896, "node_id": "MDQ6VXNlcjMxODk2", "avatar_url": "https://avatars.githubusercontent.com/u/31896?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stephenroller", "html_url": "https://github.com/stephenroller", "followers_url": "https://api.github.com/users/stephenroller/followers", "following_url": "https://api.github.com/users/stephenroller/following{/other_user}", "gists_url": "https://api.github.com/users/stephenroller/gists{/gist_id}", "starred_url": "https://api.github.com/users/stephenroller/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stephenroller/subscriptions", "organizations_url": "https://api.github.com/users/stephenroller/orgs", "repos_url": "https://api.github.com/users/stephenroller/repos", "events_url": "https://api.github.com/users/stephenroller/events{/privacy}", "received_events_url": "https://api.github.com/users/stephenroller/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "FWIW, it does eventually write to /tmp/roller/datasets when generating the final version.", "Hey, I’d love to work on this issue but I am a beginner, can I work it with you?", "Hi @lhoestq,\nI'd like to look into this issue but I'm still learning. Could you share any quick pointers on the HF_DATASETS_CACHE behavior here? Thanks!", "Hi ! `HF_DATASETS_CACHE` is only for the cache files of the `datasets` library, not for the `huggingface_hub` cache for files downloaded from the Hugging Face Hub.\n\nYou should either specify `HF_HOME` (parent cache path for everything HF) or both `HF_DATASETS_CACHE` and `HF_HUB_CACHE`" ]
1,743,009,574,000
1,744,046,640,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug I'm struggling to get things to respect HF_DATASETS_CACHE. Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE. Current version: 3.2.1dev. In the process of testing 3.4.0 ### Steps to reproduce the bug [Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results] dump.py: ```python from datasets import load_dataset dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train") ``` Repro steps ```bash # ensure no cache $ mv ~/.cache/huggingface ~/.cache/huggingface.bak $ export HF_DATASETS_CACHE=/tmp/roller/datasets $ rm -rf ${HF_DATASETS_CACHE} $ env | grep HF | grep -v TOKEN HF_DATASETS_CACHE=/tmp/roller/datasets $ python dump.py # (omitted for brevity) # (while downloading) $ du -hcs ~/.cache/huggingface/hub 18G hub 18G total # (after downloading) $ du -hcs ~/.cache/huggingface/hub ``` It's a shame because datasets supports s3 (which I could really use right now) but hub does not. ### Expected behavior * ~/.cache/huggingface/hub stays empty * /tmp/roller/datasets becomes full of stuff ### Environment info [Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]
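Following the maintainer comment above, a hedged sketch of the environment setup that keeps both caches off NFS; the variables must be set before `datasets` is imported, and the path reuses the reporter's example:

```python
import os

# HF_HOME is the parent directory for both the hub download cache and the
# datasets cache; alternatively, set HF_DATASETS_CACHE and HF_HUB_CACHE
# individually.
os.environ["HF_HOME"] = "/tmp/roller/hf"

from datasets import load_dataset  # import only after the env var is set

dataset = load_dataset("HuggingFaceFW/fineweb", name="sample-100BT", split="train")
```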
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7480/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7479/comments
https://api.github.com/repos/huggingface/datasets/issues/7479/events
https://github.com/huggingface/datasets/issues/7479
2,950,235,396
I_kwDODunzps6v2QUE
7,479
Features.from_arrow_schema is destructive
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,743,007,603,000
1,743,007,618,000
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug I came across this, perhaps niche, bug where `Features` does not/cannot account for pyarrow's `nullable=False` option in Fields. Interestingly, I found that in regular "flat" fields this does not necessarily lead to conflicts, but when a non-nullable field is in a struct, an incompatibility arises. It's not easy to explain in words, so the minimal example below should help I hope. Note that I suggest a solution in the comments in the code, simply allowing `Dataset.to_parquet` to allow for a `schema` argument which, when provided, will override the default ds.features.arrow_schema. ### Steps to reproduce the bug ```python import os from datasets import Dataset, Features import pyarrow as pa import pyarrow.parquet as pq # HF datasets is destructive when you call Features.from_arrow_schema(schema) on a schema # because it will not account for nullable and non-nullable fields in structs (it will always allow nullable) # Reloading the same dataset with the original schema will raise an error because the schema is not the same anymore non_nullable_schema = pa.schema( [ pa.field("text", pa.string(), nullable=False), pa.field("meta", pa.struct( [ pa.field("date", pa.list_(pa.string()), nullable=False), ], ), ), ] ) print("ORIGINAL SCHEMA") print(non_nullable_schema) print() feats = Features.from_arrow_schema(non_nullable_schema) print("FEATUR-IZED SCHEMA (nullable-restrictions are gone)") print(feats.arrow_schema) print() ds = Dataset.from_dict( { "text": ["a", "b", "c"], "meta": [{"date": ["2021-01-01"]}, {"date": ["2021-01-02"]}, {"date": ["2021-01-03"]}], }, features=feats, ) fname = "tmp.parquet" # This is not possible: TypeError: pyarrow.parquet.core.ParquetWriter() got multiple values for keyword argument 'schema' # Though I believe this would be the easiest fix: allow schema to be passed to to_parquet and overwrite the schema in the dataset # ds.to_parquet(fname, schema=non_nullable_schema) ds.to_parquet(fname) try: _ = pq.read_table(fname, schema=non_nullable_schema) finally: os.unlink(fname) ``` ### Expected behavior - Non-destructive behavior when converting an arrow schema to Features; or - the ability to override the default arrow schema with a custom one ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-5.14.0-427.20.1.el9_4.x86_64-x86_64-with-glibc2.34 - Python version: 3.11.10 - `huggingface_hub` version: 0.27.1 - PyArrow version: 18.1.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
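A smaller check distilled from the repro above, showing just the destructive round-trip on a flat field:

```python
import pyarrow as pa
from datasets import Features

schema = pa.schema([pa.field("text", pa.string(), nullable=False)])
roundtripped = Features.from_arrow_schema(schema).arrow_schema

print(schema.field("text").nullable)        # False
print(roundtripped.field("text").nullable)  # True: the restriction is gone
```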
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7479/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7479/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7478/comments
https://api.github.com/repos/huggingface/datasets/issues/7478/events
https://github.com/huggingface/datasets/pull/7478
2,948,993,461
PR_kwDODunzps6QLPe3
7,478
update fsspec 2025.3.0
{ "login": "peteski22", "id": 487783, "node_id": "MDQ6VXNlcjQ4Nzc4Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/487783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/peteski22", "html_url": "https://github.com/peteski22", "followers_url": "https://api.github.com/users/peteski22/followers", "following_url": "https://api.github.com/users/peteski22/following{/other_user}", "gists_url": "https://api.github.com/users/peteski22/gists{/gist_id}", "starred_url": "https://api.github.com/users/peteski22/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peteski22/subscriptions", "organizations_url": "https://api.github.com/users/peteski22/orgs", "repos_url": "https://api.github.com/users/peteski22/repos", "events_url": "https://api.github.com/users/peteski22/events{/privacy}", "received_events_url": "https://api.github.com/users/peteski22/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "Sorry for tagging you @lhoestq but since you merged the linked PR, I wondered if you might be able to help me get this triaged so it can be reviewed/rejected etc. 🙏🏼 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7478). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,742,982,785,000
1,743,189,354,000
1,743,177,115,000
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
It appears there have been two releases of fsspec since this dependency was last updated; it would be great if Datasets could be updated so that it doesn't hold back the usage of newer fsspec versions in consuming projects. PR based on https://github.com/huggingface/datasets/pull/7352
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7478/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7478/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7478", "html_url": "https://github.com/huggingface/datasets/pull/7478", "diff_url": "https://github.com/huggingface/datasets/pull/7478.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7478.patch", "merged_at": "2025-03-28T15:51:54" }
https://api.github.com/repos/huggingface/datasets/issues/7477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7477/comments
https://api.github.com/repos/huggingface/datasets/issues/7477/events
https://github.com/huggingface/datasets/issues/7477
2,947,169,460
I_kwDODunzps6vqjy0
7,477
What is the canonical way to compress a Dataset?
{ "login": "eric-czech", "id": 6130352, "node_id": "MDQ6VXNlcjYxMzAzNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/6130352?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eric-czech", "html_url": "https://github.com/eric-czech", "followers_url": "https://api.github.com/users/eric-czech/followers", "following_url": "https://api.github.com/users/eric-czech/following{/other_user}", "gists_url": "https://api.github.com/users/eric-czech/gists{/gist_id}", "starred_url": "https://api.github.com/users/eric-czech/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eric-czech/subscriptions", "organizations_url": "https://api.github.com/users/eric-czech/orgs", "repos_url": "https://api.github.com/users/eric-czech/repos", "events_url": "https://api.github.com/users/eric-czech/events{/privacy}", "received_events_url": "https://api.github.com/users/eric-czech/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "I saw this post by @lhoestq: https://discuss.huggingface.co/t/increased-arrow-table-size-by-factor-of-2/26561/4 suggesting that there is at least some internal code for writing sharded parquet datasets non-concurrently. This appears to be that code: https://github.com/huggingface/datasets/blob/94ccd1b4fada8a92cea96dc8df4e915041d695b6/src/datasets/arrow_dataset.py#L5380-L5397\n\nIs there any fundamental reason (e.g. race conditions) that this kind of operation couldn't exist as a utility or method on a `Dataset` with a `num_proc` argument? I am not seeing any other issues explicitly for that ask. \n", "We simply haven't implemented a method to save as sharded parquet locally yet ^^'\n\nRight now the only sharded parquet export method is `push_to_hub()` which writes to HF. But we can have a local one as well. \n\nIn the meantime the easiest way to export as sharded parquet locally is to `.shard()` and `.to_parquet()` (see code from my comment [here](https://github.com/huggingface/datasets/issues/7047#issuecomment-2233163406))", "> In the meantime the easiest way to export as sharded parquet locally is to .shard() and .to_parquet()\n\nMakes sense, BUT how can it be done concurrently? I could of course use multiprocessing myself or a dozen other libraries for parallelizing single-node/local operations like that.\n\nWhat I'm asking though is, what is the way to do this that is most canonical for `datasets` specifically? I.e. what is least likely to causing pickling or other issues because it is used frequently internally by `datasets` and already likely tests for a lot of library-native edge-cases?", "Everything in `datasets` is picklable :) and even better: since the data are memory mapped from disk, pickling in one process and unpickling in another doesn't do any copy - it instantaneously reloads the memory map.\n\nSo feel free to use the library you prefer to parallelize your operations.\n\n(it's another story in distributed setups though, because in that case you either need to copy and send the data or setup a distributed filesystem)" ]
1,742,921,271,000
1,743,671,591,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset? Parquet would be the obvious answer, except that there is no native support for writing sharded Parquet datasets concurrently [[1](https://github.com/huggingface/datasets/issues/7047)]. Am I missing something? And if so, why is this not the standard/default way that `Dataset`s work, as they do in Xarray, Ray Data, Composer, etc.?
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7477/timeline
null
null
null
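Editor's note: following the maintainer's suggestion in the comments above, here is a minimal sketch of a local sharded-Parquet export built from `.shard()` and `.to_parquet()`, parallelized with `multiprocessing`. The dataset name, shard count, and output paths are placeholders.

```python
# Sketch: export a Dataset as sharded Parquet files, one shard per process.
from multiprocessing import Pool

from datasets import load_dataset

ds = load_dataset("some/dataset", split="train")  # placeholder dataset
NUM_SHARDS = 8

def export_shard(index: int) -> None:
    # contiguous=True keeps row order; each shard is an independent slice
    shard = ds.shard(num_shards=NUM_SHARDS, index=index, contiguous=True)
    shard.to_parquet(f"out/data-{index:05d}-of-{NUM_SHARDS:05d}.parquet")

if __name__ == "__main__":
    # memory-mapped datasets pickle cheaply, per the maintainer's comment above
    with Pool(NUM_SHARDS) as pool:
        pool.map(export_shard, range(NUM_SHARDS))
```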
https://api.github.com/repos/huggingface/datasets/issues/7476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7476/comments
https://api.github.com/repos/huggingface/datasets/issues/7476/events
https://github.com/huggingface/datasets/pull/7476
2,946,997,924
PR_kwDODunzps6QEbmO
7,476
Priotitize json
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,742,917,471,000
1,742,917,620,000
1,742,917,500,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
`datasets` should load the JSON data in https://huggingface.co/datasets/facebook/natural_reasoning, not the PDF
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7476/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7476/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7476", "html_url": "https://github.com/huggingface/datasets/pull/7476", "diff_url": "https://github.com/huggingface/datasets/pull/7476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7476.patch", "merged_at": "2025-03-25T15:45:00" }
https://api.github.com/repos/huggingface/datasets/issues/7475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7475/comments
https://api.github.com/repos/huggingface/datasets/issues/7475/events
https://github.com/huggingface/datasets/issues/7475
2,946,640,570
I_kwDODunzps6voiq6
7,475
IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard
{ "login": "bruno-hays", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bruno-hays", "html_url": "https://github.com/bruno-hays", "followers_url": "https://api.github.com/users/bruno-hays/followers", "following_url": "https://api.github.com/users/bruno-hays/following{/other_user}", "gists_url": "https://api.github.com/users/bruno-hays/gists{/gist_id}", "starred_url": "https://api.github.com/users/bruno-hays/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bruno-hays/subscriptions", "organizations_url": "https://api.github.com/users/bruno-hays/orgs", "repos_url": "https://api.github.com/users/bruno-hays/repos", "events_url": "https://api.github.com/users/bruno-hays/events{/privacy}", "received_events_url": "https://api.github.com/users/bruno-hays/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "Hey, I’d love to work on this issue but I am a beginner, can I work it with you?", "Hello. I'm sorry but I don't have much time to get in the details for now.\nHave you managed to reproduce the issue with the code provided ?\nIf you want to work on it, you can self-assign and ask @lhoestq for directions", "Hi Bruno, I am trying to reproduce it this later in this week and let you know what I found.", "#self-assign" ]
1,742,911,087,000
1,744,046,942,000
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug I've noticed a strange behaviour with Iterable state_dict: the value of shard_example_idx is always equal to the amount of samples in a shard. ### Steps to reproduce the bug I am reusing the example from the doc ```python from datasets import Dataset ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1) state_dict = None # Iterate through the dataset and print examples for idx, example in enumerate(ds): print(example) if idx == 2: state_dict = ds.state_dict() print("checkpoint") break print(state_dict) ``` Returns: ``` {'a': 0} {'a': 1} checkpoint {'examples_iterable': {'shard_idx': 0, 'shard_example_idx': 6, 'type': 'ArrowExamplesIterable'}, 'epoch': 0} ``` ### Expected behavior shard_example_idx should be 2 instead of 6 If we run with num_shards=2, then shard_example_idx is 3 instead of 2 and so on. ### Environment info - `datasets` version: 3.4.1 - Platform: macOS-14.6.1-arm64-arm-64bit - Python version: 3.12.9 - `huggingface_hub` version: 0.29.3 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7475/timeline
null
null
null
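Editor's note: for context on the record above, a minimal sketch of the checkpoint/resume API the issue exercises; it illustrates `state_dict()`/`load_state_dict()` usage, not a fix for the `shard_example_idx` counter being reported.

```python
# Sketch: checkpoint an IterableDataset mid-iteration and resume from it.
from datasets import Dataset

ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1)
state_dict = None
for idx, example in enumerate(ds):
    if idx == 2:
        state_dict = ds.state_dict()  # capture the checkpoint
        break

ds.load_state_dict(state_dict)  # resume iteration from the checkpoint
for example in ds:
    print(example)
```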
https://api.github.com/repos/huggingface/datasets/issues/7474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7474/comments
https://api.github.com/repos/huggingface/datasets/issues/7474/events
https://github.com/huggingface/datasets/pull/7474
2,945,066,258
PR_kwDODunzps6P91lM
7,474
Remove conditions for Python < 3.9
{ "login": "cyyever", "id": 17618148, "node_id": "MDQ6VXNlcjE3NjE4MTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyyever", "html_url": "https://github.com/cyyever", "followers_url": "https://api.github.com/users/cyyever/followers", "following_url": "https://api.github.com/users/cyyever/following{/other_user}", "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions", "organizations_url": "https://api.github.com/users/cyyever/orgs", "repos_url": "https://api.github.com/users/cyyever/repos", "events_url": "https://api.github.com/users/cyyever/events{/privacy}", "received_events_url": "https://api.github.com/users/cyyever/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[]
1,742,872,084,000
1,742,872,351,000
null
CONTRIBUTOR
{ "total": 0, "completed": 0, "percent_completed": 0 }
This PR removes conditions for Python < 3.9.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7474/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7474", "html_url": "https://github.com/huggingface/datasets/pull/7474", "diff_url": "https://github.com/huggingface/datasets/pull/7474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7474.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7473/comments
https://api.github.com/repos/huggingface/datasets/issues/7473/events
https://github.com/huggingface/datasets/issues/7473
2,939,034,643
I_kwDODunzps6vLhwT
7,473
Webdataset data format problem
{ "login": "edmcman", "id": 1017189, "node_id": "MDQ6VXNlcjEwMTcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmcman", "html_url": "https://github.com/edmcman", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "organizations_url": "https://api.github.com/users/edmcman/orgs", "repos_url": "https://api.github.com/users/edmcman/repos", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "received_events_url": "https://api.github.com/users/edmcman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "I was able to work around it" ]
1,742,577,832,000
1,742,584,798,000
1,742,584,798,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1 Error code: FileFormatMismatchBetweenSplitsError All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.) ### Steps to reproduce the bug ``` import datasets datasets.load_dataset("ejschwartz/idioms") ``` ### Expected behavior The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format. ### Environment info - `datasets` version: 3.2.0 - Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.28.1 - PyArrow version: 19.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.9.0
{ "login": "edmcman", "id": 1017189, "node_id": "MDQ6VXNlcjEwMTcxODk=", "avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edmcman", "html_url": "https://github.com/edmcman", "followers_url": "https://api.github.com/users/edmcman/followers", "following_url": "https://api.github.com/users/edmcman/following{/other_user}", "gists_url": "https://api.github.com/users/edmcman/gists{/gist_id}", "starred_url": "https://api.github.com/users/edmcman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edmcman/subscriptions", "organizations_url": "https://api.github.com/users/edmcman/orgs", "repos_url": "https://api.github.com/users/edmcman/repos", "events_url": "https://api.github.com/users/edmcman/events{/privacy}", "received_events_url": "https://api.github.com/users/edmcman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7473/timeline
completed
null
null
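Editor's note: the issue above asks how to force the webdataset format for every split. A minimal sketch of one way to do it, by naming the builder and listing `data_files` explicitly; the `hf://` paths and shard names are assumptions about the repo layout, not taken from the thread.

```python
# Sketch: bypass format inference by selecting the webdataset builder.
from datasets import load_dataset

ds = load_dataset(
    "webdataset",
    data_files={
        "train": "hf://datasets/ejschwartz/idioms/train/*.tar",
        "test": "hf://datasets/ejschwartz/idioms/test/*.tar",
        "validation": "hf://datasets/ejschwartz/idioms/validation/*.tar",
    },
)
```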
https://api.github.com/repos/huggingface/datasets/issues/7472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7472/comments
https://api.github.com/repos/huggingface/datasets/issues/7472/events
https://github.com/huggingface/datasets/issues/7472
2,937,607,272
I_kwDODunzps6vGFRo
7,472
Label casting during `map` process is canceled after the `map` process
{ "login": "yoshitomo-matsubara", "id": 11156001, "node_id": "MDQ6VXNlcjExMTU2MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yoshitomo-matsubara", "html_url": "https://github.com/yoshitomo-matsubara", "followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers", "following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}", "gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}", "starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions", "organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs", "repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos", "events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}", "received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "Hi ! By default `map()` tries to keep the types of each column of the dataset, so here it reuses the int type since all your float values can be converted to integers. But I agree it would be nice to store float values as float values and don't try to reuse the same type in this case.\n\nIn the meantime, you can either store the float values in a new column, or pass the output `features=` manually to `map()`", "Hi @lhoestq \n\nThank you for the answer & suggestion!\n\nCan we add some flag to `map()` function like `reuses_original_type=True` and skip reusing the original type when it's False?\n\nLet me know if it sounds like a reasonable solution. I am happy to submit a PR for this.", "In general we try to avoid adding new parameters when it's already possible to achieve the same results with existing parameters (here `features=`). But since it's not always convenient to know in advance the `features=` I'm open to contributions to adding this parameter yes", "Thank you for sharing the context. Good to know that. \n\nI submitted a PR #7483. Could you review the PR?", "Hi @lhoestq \n\nLet me know if there is something that I should add to [the PR](https://github.com/huggingface/datasets/pull/7483)!" ]
1,742,543,782,000
1,743,998,812,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and forward function of models in transformers package internally use `BCEWithLogitsLoss` However, the casting was canceled after `.map` process and the label values still use int values, which leads to an error ``` File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1711, in forward loss = loss_fct(logits, labels) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl return forward_call(*args, **kwargs) File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/modules/loss.py", line 819, in forward return F.binary_cross_entropy_with_logits( File "/home/yoshitomo/anaconda3/envs/torchdistill/lib/python3.10/site-packages/torch/nn/functional.py", line 3628, in binary_cross_entropy_with_logits return torch.binary_cross_entropy_with_logits( RuntimeError: result type Float can't be cast to the desired output type Long ``` This seems like happening only when the original labels are int values (see examples below) ### Steps to reproduce the bug If the original dataset uses a list of int labels, it will cancel the int->float casting ```python from datasets import Dataset data = { 'text': ['text1', 'text2', 'text3', 'text4'], 'labels': [[0, 1, 2], [3], [3, 4], [3]] } dataset = Dataset.from_dict(data) label_set = set([label for labels in data['labels'] for label in labels]) label2idx = {label: idx for idx, label in enumerate(sorted(label_set))} def multi_labels_to_ids(labels): ids = [0.0] * len(label2idx) for label in labels: ids[label2idx[label]] = 1.0 return ids def preprocess(examples): result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]} print('"labels" are int', examples['labels']) result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']] print('"labels" were converted to multi-label format with float values', result['labels']) return result preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text']) print(preprocessed_dataset[0]['labels']) # Output: "[1, 1, 1, 0, 0]" # Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]" ``` If the original dataset uses non-int labels, it works as expected. 
```python from datasets import Dataset data = { 'text': ['text1', 'text2', 'text3', 'text4'], 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']] } dataset = Dataset.from_dict(data) label_set = set([label for labels in data['labels'] for label in labels]) label2idx = {label: idx for idx, label in enumerate(sorted(label_set))} def multi_labels_to_ids(labels): ids = [0.0] * len(label2idx) for label in labels: ids[label2idx[label]] = 1.0 return ids def preprocess(examples): result = {'sentence': [[0, 3, 4] for _ in range(len(examples['labels']))]} print('"labels" are int', examples['labels']) result['labels'] = [multi_labels_to_ids(l) for l in examples['labels']] print('"labels" were converted to multi-label format with float values', result['labels']) return result preprocessed_dataset = dataset.map(preprocess, batched=True, remove_columns=['labels', 'text']) print(preprocessed_dataset[0]['labels']) # Output: "[1.0, 1.0, 1.0, 0.0, 0.0]" # Expected: "[1.0, 1.0, 1.0, 0.0, 0.0]" ``` Note that the only difference between these two examples is > 'labels': [[0, 1, 2], [3], [3, 4], [3]] v.s > 'labels': [['label1', 'label2', 'label3'], ['label4'], ['label4', 'label5'], ['label4']] ### Expected behavior Even if the original dataset uses a list of int labels, the int->float casting during `.map` process should not be canceled as shown in the above example ### Environment info OS Ubuntu 22.04 LTS Python 3.10.11 datasets v3.4.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7472/timeline
null
null
null
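Editor's note: the comments above suggest passing `features=` to `map()` so the int-to-float cast is kept. A minimal sketch of that workaround, reusing `dataset` and `preprocess` from the first snippet in the issue; the exact feature types are assumptions.

```python
# Sketch: pin the output schema so map() does not cast labels back to int.
from datasets import Features, Sequence, Value

features = Features({
    "sentence": Sequence(Value("int64")),
    "labels": Sequence(Value("float32")),
})
preprocessed_dataset = dataset.map(
    preprocess,
    batched=True,
    remove_columns=["labels", "text"],
    features=features,
)
print(preprocessed_dataset[0]["labels"])  # floats are preserved
```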
https://api.github.com/repos/huggingface/datasets/issues/7471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7471/comments
https://api.github.com/repos/huggingface/datasets/issues/7471/events
https://github.com/huggingface/datasets/issues/7471
2,937,530,069
I_kwDODunzps6vFybV
7,471
Adding argument to `_get_data_files_patterns`
{ "login": "SangbumChoi", "id": 34004152, "node_id": "MDQ6VXNlcjM0MDA0MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangbumChoi", "html_url": "https://github.com/SangbumChoi", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
[ "Hi ! The pattern can be specified in advance in YAML in the README.md of the dataset :)\n\nFor example\n\n```\n---\nconfigs:\n- config_name: default\n data_files:\n - split: train\n path: \"train/*\"\n - split: test\n path: \"test/*\"\n---\n```\n\nSee the docs at https://huggingface.co/docs/hub/en/datasets-manual-configuration", "@lhoestq How can we choose in this case ? https://huggingface.co/datasets/datasets-examples/doc-image-5\n", "choose what ? sorry I didn't get it ^^'" ]
1,742,541,473,000
1,743,078,652,000
1,742,973,987,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Feature request How about adding an argument for the case where the user already knows the pattern? https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252 ### Motivation While using load_dataset, people might have 10M local image files. However, because fsspec searches for every matching file pattern, the pattern search alone takes more than 10 hours (real use case). ### Your contribution Yeah, I can make this happen if this seems valid. @lhoestq WDYT? Something like ``` def _get_data_files_patterns(pattern_resolver: Callable[[str], list[str]], patterns: PATTERNS) -> dict[str, list[str]]: ```
{ "login": "SangbumChoi", "id": 34004152, "node_id": "MDQ6VXNlcjM0MDA0MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/34004152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SangbumChoi", "html_url": "https://github.com/SangbumChoi", "followers_url": "https://api.github.com/users/SangbumChoi/followers", "following_url": "https://api.github.com/users/SangbumChoi/following{/other_user}", "gists_url": "https://api.github.com/users/SangbumChoi/gists{/gist_id}", "starred_url": "https://api.github.com/users/SangbumChoi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SangbumChoi/subscriptions", "organizations_url": "https://api.github.com/users/SangbumChoi/orgs", "repos_url": "https://api.github.com/users/SangbumChoi/repos", "events_url": "https://api.github.com/users/SangbumChoi/events{/privacy}", "received_events_url": "https://api.github.com/users/SangbumChoi/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7471/timeline
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7470/comments
https://api.github.com/repos/huggingface/datasets/issues/7470/events
https://github.com/huggingface/datasets/issues/7470
2,937,236,323
I_kwDODunzps6vEqtj
7,470
Is it possible to shard a single-sharded IterableDataset?
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! Maybe you can look for an option in your dataset to partition your data based on a deterministic filter ? For example each worker could stream the data based on `row.id % num_shards` or something like that ?", "So the recommendation is to start out with multiple shards initially and re-sharding after is not expected to work? :(\n\nWould something like the following work? Some DiskCachingIterableDataset, where worker 0 streams from the datasource, but also writes to disk, and all of the other workers read from what worker 0 wrote? Then that would produce a stream with a deterministic order and we can subsample.", "To be honest it would be cool to support native multiprocessing in `IterableDataset.map` so you can parallelize any specific processing step without having to rely on a torch Dataloader. What do you think ?\n\nrelated: https://github.com/huggingface/datasets/issues/7193 https://github.com/huggingface/datasets/issues/3444 \noriginal issue: https://github.com/huggingface/datasets/issues/2642\n\nAlternatively the DiskCachingIterableDataset idea works, just note that to make it work with a torch Dataloader with num_workers>0 you'll need:\n1. to make your own `torch.utils.data.IterableDataset` and have rank=0 stream the data and share them with the other workers (either via disk as suggested or IPC)\n2. take into account that`datasets.IterableDataset` will yield 0 examples for ranks with id>0 if there is only one shard, but in your case it's ok since you'd only stream from rank=0", "Ohh that would be pretty cool!\n\nThanks for the suggestions, as there's no actionable items for this repo I'm going to close this issue now." ]
1,742,531,617,000
1,742,981,686,000
1,742,971,768,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not. Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs more cost too). But the results are also big enough that we don't want to materialize it entirely and instead stream it with an IterableDataset. But after we have the results we want to split it up across workers to parallelize processing. Is something like this possible to do? Here's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates... ``` import random import datasets def gen(): print('RUNNING GENERATOR!') items = list(range(10)) random.shuffle(items) yield from items ds = datasets.IterableDataset.from_generator(gen) print('dataset contents:') for item in ds: print(item) print() print('dataset contents (2):') for item in ds: print(item) print() num_shards = 3 def sharded(shard_id): for i, example in enumerate(ds): if i % num_shards in shard_id: yield example ds1 = datasets.IterableDataset.from_generator( sharded, gen_kwargs={'shard_id': list(range(num_shards))} ) for shard in range(num_shards): print('shard', shard) for item in ds1.shard(num_shards, shard): print(item) ```
{ "login": "jonathanasdf", "id": 511073, "node_id": "MDQ6VXNlcjUxMTA3Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonathanasdf", "html_url": "https://github.com/jonathanasdf", "followers_url": "https://api.github.com/users/jonathanasdf/followers", "following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}", "gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions", "organizations_url": "https://api.github.com/users/jonathanasdf/orgs", "repos_url": "https://api.github.com/users/jonathanasdf/repos", "events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}", "received_events_url": "https://api.github.com/users/jonathanasdf/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7470/timeline
completed
null
null
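Editor's note: a minimal sketch of the deterministic-filter idea from the comments above: partition rows on a stable id so each shard streams disjoint data. `query_rows` is a hypothetical deterministic data source, not a real API.

```python
# Sketch: disjoint shards from a deterministic source, filtered by row id.
import datasets

NUM_SHARDS = 3

def gen(shard_ids):
    for shard_id in shard_ids:  # gen_kwargs lists are split across shards
        for row in query_rows():  # hypothetical helper yielding dicts with "id"
            if row["id"] % NUM_SHARDS == shard_id:
                yield row

ds = datasets.IterableDataset.from_generator(
    gen, gen_kwargs={"shard_ids": list(range(NUM_SHARDS))}
)
```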
https://api.github.com/repos/huggingface/datasets/issues/7469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7469/comments
https://api.github.com/repos/huggingface/datasets/issues/7469/events
https://github.com/huggingface/datasets/issues/7469
2,936,606,080
I_kwDODunzps6vCQ2A
7,469
Custom split name with the web interface
{ "login": "vince62s", "id": 15141326, "node_id": "MDQ6VXNlcjE1MTQxMzI2", "avatar_url": "https://avatars.githubusercontent.com/u/15141326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vince62s", "html_url": "https://github.com/vince62s", "followers_url": "https://api.github.com/users/vince62s/followers", "following_url": "https://api.github.com/users/vince62s/following{/other_user}", "gists_url": "https://api.github.com/users/vince62s/gists{/gist_id}", "starred_url": "https://api.github.com/users/vince62s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vince62s/subscriptions", "organizations_url": "https://api.github.com/users/vince62s/orgs", "repos_url": "https://api.github.com/users/vince62s/repos", "events_url": "https://api.github.com/users/vince62s/events{/privacy}", "received_events_url": "https://api.github.com/users/vince62s/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[]
1,742,503,559,000
1,742,541,637,000
1,742,541,637,000
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name it should infer the split names from the subdirectories of data/ or from the beginning of the file names in data/. When doing this manually through web upload, it does not work: it uses "train" as the only split. Example: https://huggingface.co/datasets/eole-nlp/estimator_chatml ### Steps to reproduce the bug Follow the link above. ### Expected behavior There should be two splits, "mlqe" and "1720_da". ### Environment info website
{ "login": "vince62s", "id": 15141326, "node_id": "MDQ6VXNlcjE1MTQxMzI2", "avatar_url": "https://avatars.githubusercontent.com/u/15141326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vince62s", "html_url": "https://github.com/vince62s", "followers_url": "https://api.github.com/users/vince62s/followers", "following_url": "https://api.github.com/users/vince62s/following{/other_user}", "gists_url": "https://api.github.com/users/vince62s/gists{/gist_id}", "starred_url": "https://api.github.com/users/vince62s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vince62s/subscriptions", "organizations_url": "https://api.github.com/users/vince62s/orgs", "repos_url": "https://api.github.com/users/vince62s/repos", "events_url": "https://api.github.com/users/vince62s/events{/privacy}", "received_events_url": "https://api.github.com/users/vince62s/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7469/timeline
completed
null
null
https://api.github.com/repos/huggingface/datasets/issues/7468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7468/comments
https://api.github.com/repos/huggingface/datasets/issues/7468/events
https://github.com/huggingface/datasets/issues/7468
2,934,094,103
I_kwDODunzps6u4rkX
7,468
function `load_dataset` can't solve folder path with regex characters like "[]"
{ "login": "Hpeox", "id": 89294013, "node_id": "MDQ6VXNlcjg5Mjk0MDEz", "avatar_url": "https://avatars.githubusercontent.com/u/89294013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hpeox", "html_url": "https://github.com/Hpeox", "followers_url": "https://api.github.com/users/Hpeox/followers", "following_url": "https://api.github.com/users/Hpeox/following{/other_user}", "gists_url": "https://api.github.com/users/Hpeox/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hpeox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hpeox/subscriptions", "organizations_url": "https://api.github.com/users/Hpeox/orgs", "repos_url": "https://api.github.com/users/Hpeox/repos", "events_url": "https://api.github.com/users/Hpeox/events{/privacy}", "received_events_url": "https://api.github.com/users/Hpeox/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "Hi ! Have you tried escaping the glob special characters `[` and `]` ?\n\nbtw note that`AbstractFileSystem.glob` doesn't support regex, instead it supports glob patterns as in the python library [glob](https://docs.python.org/3/library/glob.html)\n" ]
1,742,448,119,000
1,742,897,892,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular expressions. As a result, the globbing mechanism interprets these characters as regex patterns, leading to a traversal of the entire disk partition instead of confining the search to the intended directory. ### Steps to reproduce the bug Just create a folder like `E:\[D_DATA]\koch_test`, then call `load_dataset("parquet", data_dir="E:\[D_DATA]\\test", split="train")`; it will keep searching the whole disk. I added two `print` calls in `glob` and `resolve_pattern` to see the paths. ### Expected behavior It should load the dataset as in normal folders. ### Environment info - `datasets` version: 3.3.2 - Platform: Windows-10-10.0.22631-SP0 - Python version: 3.10.16 - `huggingface_hub` version: 0.29.1 - PyArrow version: 19.0.1 - Pandas version: 2.2.3 - `fsspec` version: 2024.12.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7468/timeline
null
null
null
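Editor's note: the comment above suggests escaping the glob special characters. A minimal sketch of that workaround; whether escaping is honored end to end depends on the fsspec version, so treat it as an assumption.

```python
# Sketch: escape "[" before handing the directory to load_dataset.
import glob

from datasets import load_dataset

data_dir = glob.escape(r"E:\[D_DATA]\koch_test")  # "[" becomes "[[]"
ds = load_dataset("parquet", data_dir=data_dir, split="train")
```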
https://api.github.com/repos/huggingface/datasets/issues/7467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7467/comments
https://api.github.com/repos/huggingface/datasets/issues/7467/events
https://github.com/huggingface/datasets/issues/7467
2,930,067,107
I_kwDODunzps6upUaj
7,467
load_dataset with streaming hangs on parquet datasets
{ "login": "The0nix", "id": 10550252, "node_id": "MDQ6VXNlcjEwNTUwMjUy", "avatar_url": "https://avatars.githubusercontent.com/u/10550252?v=4", "gravatar_id": "", "url": "https://api.github.com/users/The0nix", "html_url": "https://github.com/The0nix", "followers_url": "https://api.github.com/users/The0nix/followers", "following_url": "https://api.github.com/users/The0nix/following{/other_user}", "gists_url": "https://api.github.com/users/The0nix/gists{/gist_id}", "starred_url": "https://api.github.com/users/The0nix/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/The0nix/subscriptions", "organizations_url": "https://api.github.com/users/The0nix/orgs", "repos_url": "https://api.github.com/users/The0nix/repos", "events_url": "https://api.github.com/users/The0nix/events{/privacy}", "received_events_url": "https://api.github.com/users/The0nix/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "Hi ! The issue comes from `pyarrow`, I reported it here: https://github.com/apache/arrow/issues/45214 (feel free to comment / thumb up).\n\nAlternatively we can try to find something else than `ParquetFileFragment.to_batches()` to iterate on Parquet data and keep the option the pass `filters=`..." ]
1,742,340,834,000
1,742,898,484,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
### Describe the bug When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs ### Steps to reproduce the bug ```python3 import datasets print('Start') dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming=True, split="train") it = iter(dataset) next(it) print('Finish') ``` The program prints 'Finish' but doesn't exit and hangs indefinitely. I tried this on two different machines and several datasets. ### Expected behavior The program exits successfully. ### Environment info datasets==3.4.1 Python 3.12.9. MacOS and Ubuntu Linux
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7467/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7467/timeline
null
null
null
https://api.github.com/repos/huggingface/datasets/issues/7466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7466/comments
https://api.github.com/repos/huggingface/datasets/issues/7466/events
https://github.com/huggingface/datasets/pull/7466
2,928,661,327
PR_kwDODunzps6PHQyp
7,466
Fix local pdf loading
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7466). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,742,306,946,000
1,742,307,112,000
1,742,306,961,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
Fix this error when accessing a local PDF: ``` File ~/.pyenv/versions/3.12.2/envs/hf-datasets/lib/python3.12/site-packages/pdfminer/psparser.py:220, in PSBaseParser.seek(self, pos) 218 """Seeks the parser to the given position.""" 219 log.debug("seek: %r", pos) --> 220 self.fp.seek(pos) 221 # reset the status for nextline() 222 self.bufpos = pos ValueError: seek of closed file ```
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7466/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7466", "html_url": "https://github.com/huggingface/datasets/pull/7466", "diff_url": "https://github.com/huggingface/datasets/pull/7466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7466.patch", "merged_at": "2025-03-18T14:09:21" }
https://api.github.com/repos/huggingface/datasets/issues/7464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7464/comments
https://api.github.com/repos/huggingface/datasets/issues/7464/events
https://github.com/huggingface/datasets/pull/7464
2,926,478,838
PR_kwDODunzps6PABJa
7,464
Minor fix for metadata files in extension counter
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7464). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,742,248,631,000
1,742,311,303,000
1,742,311,301,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7464/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7464", "html_url": "https://github.com/huggingface/datasets/pull/7464", "diff_url": "https://github.com/huggingface/datasets/pull/7464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7464.patch", "merged_at": "2025-03-18T15:21:41" }
https://api.github.com/repos/huggingface/datasets/issues/7463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7463/comments
https://api.github.com/repos/huggingface/datasets/issues/7463/events
https://github.com/huggingface/datasets/pull/7463
2,925,924,452
PR_kwDODunzps6O-I6K
7,463
Adds EXR format to store depth images in float32
{ "login": "ducha-aiki", "id": 4803565, "node_id": "MDQ6VXNlcjQ4MDM1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ducha-aiki", "html_url": "https://github.com/ducha-aiki", "followers_url": "https://api.github.com/users/ducha-aiki/followers", "following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}", "gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions", "organizations_url": "https://api.github.com/users/ducha-aiki/orgs", "repos_url": "https://api.github.com/users/ducha-aiki/repos", "events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}", "received_events_url": "https://api.github.com/users/ducha-aiki/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
[ "Hi ! I'mn wondering if this shouldn't this be an `Image()` type and decoded as a `PIL.Image` ?\r\n\r\nThis would make it easier to integrate with the rest of the HF ecosystem, and you could still get a numpy array using `ds = ds.with_format(\"numpy\")` which sets all the images to be formatted as numpy arrays", "@lhoestq do you mean to add the decoder, and exr extension to the image format? Yes, that probably would be better ", "yes exactly" ]
1,742,233,360,000
1,743,597,219,000
null
NONE
{ "total": 0, "completed": 0, "percent_completed": 0 }
This PR adds the EXR feature to store depth images (or normals, etc.) in float32. It relies on [openexr_numpy](https://github.com/martinResearch/openexr_numpy/tree/main) to manipulate EXR images.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7463/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7463", "html_url": "https://github.com/huggingface/datasets/pull/7463", "diff_url": "https://github.com/huggingface/datasets/pull/7463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7463.patch", "merged_at": null }
https://api.github.com/repos/huggingface/datasets/issues/7462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7462/comments
https://api.github.com/repos/huggingface/datasets/issues/7462/events
https://github.com/huggingface/datasets/pull/7462
2,925,612,945
PR_kwDODunzps6O9EA1
7,462
set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
1,742,227,253,000
1,742,227,411,000
1,742,227,268,000
MEMBER
{ "total": 0, "completed": 0, "percent_completed": 0 }
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7462/timeline
null
0
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7462", "html_url": "https://github.com/huggingface/datasets/pull/7462", "diff_url": "https://github.com/huggingface/datasets/pull/7462.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7462.patch", "merged_at": "2025-03-17T16:01:08" }
End of preview.

Dataset Card for GitHub issues and comments dataset

Dataset Summary

This dataset was created by collecting GitHub issues and their associated comments from the 🤗 Datasets repository using the GitHub API. It serves as the foundation for building a semantic search engine that helps users find answers to common questions about the library.

The dataset includes metadata fields such as issue titles, descriptions (`body`), timestamps, labels, user associations, and the full list of comments per issue.
It can be used to experiment with tasks such as text similarity, semantic search, and information retrieval.

For a tutorial on how this dataset was created, see the Hugging Face course:
https://huggingface.co/learn/llm-course/en/chapter5/5?fw=pt#creating-a-dataset-card
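
As a quick sanity check, the dataset can be loaded like any other Hub dataset. Below is a minimal sketch; the repo id your-username/github-issues is a placeholder, not the dataset's actual location:

from datasets import load_dataset

# "your-username/github-issues" is a hypothetical repo id -- substitute the real one.
issues_dataset = load_dataset("your-username/github-issues", split="train")

print(issues_dataset)              # features and number of rows
print(issues_dataset[0]["title"])  # title of the first issue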

Supported Tasks and Leaderboards

[More Information Needed]
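
While this section is still to be filled in, the summary above names semantic search as the intended task. The sketch below shows one way to set that up; it assumes the sentence-transformers and faiss packages are installed, the embedding model name is an arbitrary choice rather than something fixed by this card, and the repo id is again a placeholder:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, not fixed by this card
ds = load_dataset("your-username/github-issues", split="train")  # hypothetical repo id

# Embed the issue body; body can be empty or None for some records, so fall back to "".
ds = ds.map(lambda x: {"embeddings": model.encode(x["body"] or "")})

# Build a FAISS index over the embeddings and retrieve the closest issues to a query.
ds.add_faiss_index(column="embeddings")
query = model.encode("How do I load a dataset offline?")
scores, samples = ds.get_nearest_examples("embeddings", query, k=5)
print(samples["title"])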

Languages

[More Information Needed]

Dataset Structure

Data Instances

{'url': 'https://api.github.com/repos/huggingface/datasets/issues/7501', 'repository_url': 'https://api.github.com/repos/huggingface/datasets', 'labels_url': 'https://api.github.com/repos/huggingface/datasets/issues/7501/labels{/name}', 'comments_url': 'https://api.github.com/repos/huggingface/datasets/issues/7501/comments', 'events_url': 'https://api.github.com/repos/huggingface/datasets/issues/7501/events', 'html_url': 'https://github.com/huggingface/datasets/issues/7501', 'id': 2976721014, 'node_id': 'I_kwDODunzps6xbSh2', 'number': 7501, 'title': 'Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct', 'user': {'login': 'yaner-here', 'id': 26623948, 'node_id': 'MDQ6VXNlcjI2NjIzOTQ4', 'avatar_url': 'https://avatars.githubusercontent.com/u/26623948?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yaner-here', 'html_url': 'https://github.com/yaner-here', 'followers_url': 'https://api.github.com/users/yaner-here/followers', 'following_url': 'https://api.github.com/users/yaner-here/following{/other_user}', 'gists_url': 'https://api.github.com/users/yaner-here/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/yaner-here/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/yaner-here/subscriptions', 'organizations_url': 'https://api.github.com/users/yaner-here/orgs', 'repos_url': 'https://api.github.com/users/yaner-here/repos', 'events_url': 'https://api.github.com/users/yaner-here/events{/privacy}', 'received_events_url': 'https://api.github.com/users/yaner-here/received_events', 'type': 'User', 'user_view_type': 'public', 'site_admin': False}, 'labels': [], 'state': 'closed', 'locked': False, 'assignee': None, 'assignees': [], 'comments': ['Solved by the default load_dataset(features) parameters. Do not use Sequence for the list in list[any] json schema, just simply use []. For example, "b": Sequence({...}) fails but "b": [{...}] works fine.'], 'created_at': 1744029339000, 'updated_at': 1744029784000, 'closed_at': 1744029783000, 'author_association': 'NONE', 'sub_issues_summary': {'total': 0, 'completed': 0, 'percent_completed': 0}, 'body': '### Describe the bug\n\ndatasets.Features seems to be unable to handle json file that contains fields of list[dict].\n\n### Steps to reproduce the bug\n\njson\n// test.json\n{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}\n{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}\n\n\npython\nimport json\nfrom datasets import Dataset, Features, Value, Sequence, load_dataset\n\nannotation_feature = Features({\n "a": Value("int32"),\n "b": Sequence({\n "c": Value("int32"),\n "d": Value("int32"),\n }),\n})\nannotation_dataset = load_dataset(\n "json", \n data_files="test.json",\n features=annotation_feature\n)\n\n\n\nArrowNotImplementedError: Unsupported cast from list<item: struct<c: int32, d: int32>> to struct using function cast_struct\n\nThe above exception was the direct cause of the following exception:\n\nDatasetGenerationError Traceback (most recent call last)\nCell In[46], line 11\n 2 from datasets import Dataset, Features, Value, Sequence, load_dataset\n 4 annotation_feature = Features({\n 5 "a": Value("int32"),\n 6 "b": Sequence({\n (...) \n 9 }),\n 10 })\n---> 11 annotation_dataset = load_dataset(\n 12 "json", \n 13 data_files="test.json",\n 14 features=annotation_feature\n 15 )\n\n\n### Expected behavior\n\nA datasets.Datasets instance should be initialized.\n\n### Environment info\n\n- datasets version: 3.5.0\n- Platform: Linux-6.11.0-21-generic-x86_64-with-glibc2.39\n- Python version: 3.11.11\n- huggingface_hub version: 0.30.1\n- PyArrow version: 19.0.1\n- Pandas version: 2.2.3\n- fsspec version: 2024.12.0', 'closed_by': {'login': 'yaner-here', 'id': 26623948, 'node_id': 'MDQ6VXNlcjI2NjIzOTQ4', 'avatar_url': 'https://avatars.githubusercontent.com/u/26623948?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/yaner-here', 'html_url': 'https://github.com/yaner-here', 'followers_url': 'https://api.github.com/users/yaner-here/followers', 'following_url': 'https://api.github.com/users/yaner-here/following{/other_user}', 'gists_url': 'https://api.github.com/users/yaner-here/gists{/gist_id}', 'starred_url': 'https://api.github.com/users/yaner-here/starred{/owner}{/repo}', 'subscriptions_url': 'https://api.github.com/users/yaner-here/subscriptions', 'organizations_url': 'https://api.github.com/users/yaner-here/orgs', 'repos_url': 'https://api.github.com/users/yaner-here/repos', 'events_url': 'https://api.github.com/users/yaner-here/events{/privacy}', 'received_events_url': 'https://api.github.com/users/yaner-here/received_events', 'type': 'User', 'user_view_type': 'public', 'site_admin': False}, 'reactions': {'url': 'https://api.github.com/repos/huggingface/datasets/issues/7501/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}, 'timeline_url': 'https://api.github.com/repos/huggingface/datasets/issues/7501/timeline', 'state_reason': 'completed', 'draft': None, 'pull_request': None}
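
Note that the GitHub issues endpoint returns pull requests alongside plain issues: in the records above, pull_request is None for plain issues and a dict of PR URLs for pull requests. A small sketch for separating the two (repo id again hypothetical):

from datasets import load_dataset

ds = load_dataset("your-username/github-issues", split="train")  # hypothetical repo id

# Pull requests carry a non-null pull_request field; plain issues do not.
issues_only = ds.filter(lambda x: x["pull_request"] is None)
pulls_only = ds.filter(lambda x: x["pull_request"] is not None)
print(len(issues_only), len(pulls_only))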

Data Fields

Each record contains the fields below; apart from `comments` (a list of comment texts collected for the issue), they mirror the GitHub REST API issue payload:

- `url`, `repository_url`, `labels_url`, `comments_url`, `events_url`, `timeline_url`, `html_url`: API and web URLs for the issue and its sub-resources.
- `id`, `node_id`, `number`: GitHub identifiers for the issue.
- `title`, `body`: the issue title and its full description text.
- `user`, `assignee`, `assignees`, `closed_by`: GitHub user objects (login, id, avatar and profile URLs, and so on).
- `labels`: the list of label objects attached to the issue.
- `state`, `state_reason`, `locked`, `author_association`: issue status metadata.
- `comments`: the list of comment texts fetched for the issue.
- `created_at`, `updated_at`, `closed_at`: timestamps stored as Unix epoch milliseconds.
- `sub_issues_summary`, `reactions`: aggregate counters.
- `draft`, `pull_request`: pull-request metadata; `pull_request` is `None` for plain issues.
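
The timestamp fields are stored as Unix epoch milliseconds (for example, created_at is 1744029339000 in the instance above). A minimal sketch for converting them to timezone-aware datetimes:

from datetime import datetime, timezone

def ms_to_datetime(ms: int) -> datetime:
    # Timestamps in this dataset are milliseconds since the Unix epoch, UTC.
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(ms_to_datetime(1744029339000))  # 2025-04-07 ... +00:00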

Data Splits

[More Information Needed]

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

[More Information Needed]

Contributions

Thanks to @github-username for adding this dataset.
