Datasets:
Dataset Viewer
url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | type (null) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | sub_issues_summary (dict) | is_pull_request (bool)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7563/comments | https://api.github.com/repos/huggingface/datasets/issues/7563/events | https://github.com/huggingface/datasets/pull/7563 | 3,046,351,253 | PR_kwDODunzps6VS0QL | 7,563 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7563). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T15:18:29 | 2025-05-07T15:21:05 | 2025-05-07T15:18:36 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7563",
"html_url": "https://github.com/huggingface/datasets/pull/7563",
"diff_url": "https://github.com/huggingface/datasets/pull/7563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7563.patch",
"merged_at": "2025-05-07T15:18:36"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7563/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7562/comments | https://api.github.com/repos/huggingface/datasets/issues/7562/events | https://github.com/huggingface/datasets/pull/7562 | 3,046,339,430 | PR_kwDODunzps6VSxmx | 7,562 | release: 3.6.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7562). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T15:15:13 | 2025-05-07T15:17:46 | 2025-05-07T15:15:21 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7562",
"html_url": "https://github.com/huggingface/datasets/pull/7562",
"diff_url": "https://github.com/huggingface/datasets/pull/7562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7562.patch",
"merged_at": "2025-05-07T15:15:20"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7562/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7562/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7561/comments | https://api.github.com/repos/huggingface/datasets/issues/7561/events | https://github.com/huggingface/datasets/issues/7561 | 3,046,302,653 | I_kwDODunzps61kuO9 | 7,561 | NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet | {
"login": "cyanic-selkie",
"id": 32219669,
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyanic-selkie",
"html_url": "https://github.com/cyanic-selkie",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-05-07T15:05:42 | 2025-05-07T15:05:42 | null | NONE | null | null | null | null | ### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than attempting to open a PR.
### Steps to reproduce the bug
1. Create an `IterableDataset`.
2. Call `.repeat(None)` on it.
3. Wrap it in a pytorch `DataLoader`.
4. Iterate over it (see the minimal sketch below).
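A minimal sketch of those steps, using a toy in-memory dataset (the column name, sizes, and `with_format` call are placeholder choices, not from the report):
```python
from datasets import Dataset
from torch.utils.data import DataLoader

# 1-2. Build an IterableDataset and repeat it indefinitely
ds = Dataset.from_dict({"x": list(range(8))}).to_iterable_dataset()
ds = ds.repeat(None)  # None = repeat forever

# 3. Wrap it in a pytorch DataLoader
loader = DataLoader(ds.with_format("torch"), batch_size=4)

# 4. Iterating is where the reported NotImplementedError about num_shards surfaces
for batch in loader:
    print(batch)
    break
```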
### Expected behavior
This should work normally.
### Environment info
datasets: 3.5.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7561/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7560/comments | https://api.github.com/repos/huggingface/datasets/issues/7560/events | https://github.com/huggingface/datasets/pull/7560 | 3,046,265,500 | PR_kwDODunzps6VShIc | 7,560 | fix decoding tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7560). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T14:56:14 | 2025-05-07T14:59:02 | 2025-05-07T14:56:20 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7560",
"html_url": "https://github.com/huggingface/datasets/pull/7560",
"diff_url": "https://github.com/huggingface/datasets/pull/7560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7560.patch",
"merged_at": "2025-05-07T14:56:20"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7560/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7559/comments | https://api.github.com/repos/huggingface/datasets/issues/7559/events | https://github.com/huggingface/datasets/pull/7559 | 3,046,177,078 | PR_kwDODunzps6VSNiX | 7,559 | fix aiohttp import | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7559). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T14:31:32 | 2025-05-07T14:34:34 | 2025-05-07T14:31:38 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7559",
"html_url": "https://github.com/huggingface/datasets/pull/7559",
"diff_url": "https://github.com/huggingface/datasets/pull/7559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7559.patch",
"merged_at": "2025-05-07T14:31:38"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7559/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7558/comments | https://api.github.com/repos/huggingface/datasets/issues/7558/events | https://github.com/huggingface/datasets/pull/7558 | 3,046,066,628 | PR_kwDODunzps6VR1gN | 7,558 | fix regression | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7558). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-07T13:56:03 | 2025-05-07T13:58:52 | 2025-05-07T13:56:18 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7558",
"html_url": "https://github.com/huggingface/datasets/pull/7558",
"diff_url": "https://github.com/huggingface/datasets/pull/7558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7558.patch",
"merged_at": "2025-05-07T13:56:18"
} | reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition)
I wanted to apply this change to the original PR but GitHub didn't let me apply it directly, so I'm merging this one instead | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7558/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7557/comments | https://api.github.com/repos/huggingface/datasets/issues/7557/events | https://github.com/huggingface/datasets/pull/7557 | 3,045,962,076 | PR_kwDODunzps6VRenr | 7,557 | check for empty _formatting | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting and for the fix ! I tried to reorganize the condition in your PR but didn't get the right permission so. I ended up merging https://github.com/huggingface/datasets/pull/7558 directly so I can make a release today - I hope you don't mind"
] | 2025-05-07T13:22:37 | 2025-05-07T13:57:12 | 2025-05-07T13:57:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7557",
"html_url": "https://github.com/huggingface/datasets/pull/7557",
"diff_url": "https://github.com/huggingface/datasets/pull/7557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7557.patch",
"merged_at": null
} | Fixes a regression from #7553 breaking shuffling of iterable datasets
<img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
| {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7557/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7556/comments | https://api.github.com/repos/huggingface/datasets/issues/7556/events | https://github.com/huggingface/datasets/pull/7556 | 3,043,615,210 | PR_kwDODunzps6VJlTR | 7,556 | Add `--merge-pull-request` option for `convert_to_parquet` | {
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This is ready for a review, happy to make any changes. The main question for maintainers is how this should interact with #7555. If my suggestion there is accepted, this PR can be kept as is. If not, more changes are required to merge all the PR parts."
] | 2025-05-06T18:05:05 | 2025-05-07T17:41:16 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7556",
"html_url": "https://github.com/huggingface/datasets/pull/7556",
"diff_url": "https://github.com/huggingface/datasets/pull/7556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7556.patch",
"merged_at": null
} | Closes #7527
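If accepted, an invocation might look like the sketch below (the flag name comes from this PR's title; the dataset id is a placeholder):
```
datasets-cli convert_to_parquet USERNAME/dataset_name --merge-pull-request
```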
Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7556/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7554/comments | https://api.github.com/repos/huggingface/datasets/issues/7554/events | https://github.com/huggingface/datasets/issues/7554 | 3,043,089,844 | I_kwDODunzps61Yd20 | 7,554 | datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script) | {
"login": "sei-eschwartz",
"id": 50171988,
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sei-eschwartz",
"html_url": "https://github.com/sei-eschwartz",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! there has been some effort on allowing to download only a subset of splits in https://github.com/huggingface/datasets/pull/6832 but no one has been continuing this work so far. This would be a welcomed contribution though\n\nAlso note that loading script are often unoptimized, and we recommend using datasets in standard formats like Parquet instead.\n\nBtw there is a CLI tool to convert a loading script to parquet:\n\n```\ndatasets-cli convert_to_parquet <dataset-name> --trust_remote_code\n```",
"Closing in favor of #6832 "
] | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 | NONE | null | null | null | null | ### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actually process all the splits? But I thought loading scripts were designed to avoid this.
### Steps to reproduce the bug
See [this notebook](https://colab.research.google.com/drive/14kcXp_hgcdj-kIzK0bCG6taE-CLZPVvq?usp=sharing)
Or:
```python
from datasets import load_dataset
dataset = load_dataset('jordiae/exebench', split='test_synth', trust_remote_code=True)
```
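A possible workaround in the meantime (not from the issue thread): streaming iterates the requested split lazily instead of running `download_and_prepare` for every split, assuming the loading script supports streaming:
```python
from datasets import load_dataset

# Only examples of the requested split are read, and nothing is pre-generated
ds = load_dataset("jordiae/exebench", split="test_synth", streaming=True, trust_remote_code=True)
print(next(iter(ds)))
```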
### Expected behavior
I expected only the `test_synth` split to be downloaded and processed.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python version: 3.11.12
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2025.3.0 | {
"login": "sei-eschwartz",
"id": 50171988,
"node_id": "MDQ6VXNlcjUwMTcxOTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/50171988?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sei-eschwartz",
"html_url": "https://github.com/sei-eschwartz",
"followers_url": "https://api.github.com/users/sei-eschwartz/followers",
"following_url": "https://api.github.com/users/sei-eschwartz/following{/other_user}",
"gists_url": "https://api.github.com/users/sei-eschwartz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sei-eschwartz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sei-eschwartz/subscriptions",
"organizations_url": "https://api.github.com/users/sei-eschwartz/orgs",
"repos_url": "https://api.github.com/users/sei-eschwartz/repos",
"events_url": "https://api.github.com/users/sei-eschwartz/events{/privacy}",
"received_events_url": "https://api.github.com/users/sei-eschwartz/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7554/timeline | null | duplicate | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7553/comments | https://api.github.com/repos/huggingface/datasets/issues/7553/events | https://github.com/huggingface/datasets/pull/7553 | 3,042,953,907 | PR_kwDODunzps6VHUNW | 7,553 | Rebatch arrow iterables before formatted iterable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7553). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq Our CI found an issue with this changeset causing a regression with shuffling iterable datasets \r\n<img width=\"884\" alt=\"Screenshot 2025-05-07 at 9 16 52 AM\" src=\"https://github.com/user-attachments/assets/bf7d9c7e-cc14-47da-8da6-d1a345992d7c\" />\r\n"
] | 2025-05-06T13:59:58 | 2025-05-07T13:17:41 | 2025-05-06T14:03:42 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7553",
"html_url": "https://github.com/huggingface/datasets/pull/7553",
"diff_url": "https://github.com/huggingface/datasets/pull/7553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7553.patch",
"merged_at": "2025-05-06T14:03:41"
} | close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7553/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7553/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7552/comments | https://api.github.com/repos/huggingface/datasets/issues/7552/events | https://github.com/huggingface/datasets/pull/7552 | 3,040,258,084 | PR_kwDODunzps6U-BUv | 7,552 | Enable xet in push to hub | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7552). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-05T17:02:09 | 2025-05-06T12:42:51 | 2025-05-06T12:42:48 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7552",
"html_url": "https://github.com/huggingface/datasets/pull/7552",
"diff_url": "https://github.com/huggingface/datasets/pull/7552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7552.patch",
"merged_at": "2025-05-06T12:42:48"
} | follows https://github.com/huggingface/huggingface_hub/pull/3035
related to https://github.com/huggingface/datasets/issues/7526 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7552/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7551/comments | https://api.github.com/repos/huggingface/datasets/issues/7551/events | https://github.com/huggingface/datasets/issues/7551 | 3,038,114,928 | I_kwDODunzps61FfRw | 7,551 | Issue with offline mode and partial dataset cached | {
"login": "nrv",
"id": 353245,
"node_id": "MDQ6VXNlcjM1MzI0NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/353245?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nrv",
"html_url": "https://github.com/nrv",
"followers_url": "https://api.github.com/users/nrv/followers",
"following_url": "https://api.github.com/users/nrv/following{/other_user}",
"gists_url": "https://api.github.com/users/nrv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nrv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nrv/subscriptions",
"organizations_url": "https://api.github.com/users/nrv/orgs",
"repos_url": "https://api.github.com/users/nrv/repos",
"events_url": "https://api.github.com/users/nrv/events{/privacy}",
"received_events_url": "https://api.github.com/users/nrv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"It seems the problem comes from builder.py / create_config_id()\n\nOn the first call, when the cache is empty we have\n```\nconfig_kwargs = {'data_files': {'train': ['hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet']}}\n```\nleading to config_id beeing 'default-2935e8cdcc21c613'\n\nthen, on the second call, \n```\nconfig_kwargs = {'data_files': 'fr/fr_part_00038.parquet'}\n```\nthus explaining why the hash is not the same, despite having the same parameter when calling load_dataset : data_files=\"fr/fr_part_00038.parquet\"",
"Same behavior with version 3.5.1"
] | 2025-05-04T16:49:37 | 2025-05-04T17:20:57 | null | NONE | null | null | null | null | ### Describe the bug
Hi,
an issue related to #4760 here: when loading a single file from a dataset, I am unable to access it in offline mode afterwards.
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/CulturaX"
data_files = "fr/fr_part_00038.parquet"
ds = datasets.load_dataset(dataset_name, split='train', data_files=data_files)
print(f"Dataset loaded : {ds}")
```
Once the file has been cached, I rerun with `HF_HUB_OFFLINE` activated and get this error:
```
ValueError: Couldn't find cache for uonlp/CulturaX for config 'default-1e725f978350254e'
Available configs in the cache: ['default-2935e8cdcc21c613']
```
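This matches the `create_config_id` analysis in the comments above: the two runs hash different representations of the same `data_files` argument. A toy illustration of the mismatch, assuming `Hasher` from `datasets.fingerprint` (the fingerprinting helper `datasets` uses):
```python
from datasets.fingerprint import Hasher

# What the first (online) run hashes: data_files resolved to full hf:// URLs
resolved = {"train": ["hf://datasets/uonlp/CulturaX@6a8734bc69fefcbb7735f4f9250f43e4cd7a442e/fr/fr_part_00038.parquet"]}
# What the offline run hashes: the raw data_files argument
raw = "fr/fr_part_00038.parquet"

print(Hasher.hash(resolved))
print(Hasher.hash(raw))  # different hash, so the cached config id is never matched
```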
### Expected behavior
Should be able to access the previously cached files
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.4.0-215-generic-x86_64-with-glibc2.31
- Python version: 3.12.0
- `huggingface_hub` version: 0.27.0
- PyArrow version: 19.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7551/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7550/comments | https://api.github.com/repos/huggingface/datasets/issues/7550/events | https://github.com/huggingface/datasets/pull/7550 | 3,037,017,367 | PR_kwDODunzps6UzksN | 7,550 | disable aiohttp depend for python 3.13t free-threading compat | {
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-05-03T00:28:18 | 2025-05-03T00:28:24 | 2025-05-03T00:28:24 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7550",
"html_url": "https://github.com/huggingface/datasets/pull/7550",
"diff_url": "https://github.com/huggingface/datasets/pull/7550.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7550.patch",
"merged_at": null
} | null | {
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7550/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7549/comments | https://api.github.com/repos/huggingface/datasets/issues/7549/events | https://github.com/huggingface/datasets/issues/7549 | 3,036,272,015 | I_kwDODunzps60-dWP | 7,549 | TypeError: Couldn't cast array of type string to null on webdataset format dataset | {
"login": "narugo1992",
"id": 117186571,
"node_id": "U_kgDOBvwgCw",
"avatar_url": "https://avatars.githubusercontent.com/u/117186571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/narugo1992",
"html_url": "https://github.com/narugo1992",
"followers_url": "https://api.github.com/users/narugo1992/followers",
"following_url": "https://api.github.com/users/narugo1992/following{/other_user}",
"gists_url": "https://api.github.com/users/narugo1992/gists{/gist_id}",
"starred_url": "https://api.github.com/users/narugo1992/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/narugo1992/subscriptions",
"organizations_url": "https://api.github.com/users/narugo1992/orgs",
"repos_url": "https://api.github.com/users/narugo1992/repos",
"events_url": "https://api.github.com/users/narugo1992/events{/privacy}",
"received_events_url": "https://api.github.com/users/narugo1992/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"seems to get fixed by explicitly adding `dataset_infos.json` like this\n\n```json\n{\n \"default\": {\n \"description\": \"Image dataset with tags and ratings\",\n \"citation\": \"\",\n \"homepage\": \"\",\n \"license\": \"\",\n \"features\": {\n \"image\": {\n \"dtype\": \"image\",\n \"_type\": \"Image\"\n },\n \"json\": {\n \"id\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"width\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"height\": {\n \"dtype\": \"int32\",\n \"_type\": \"Value\"\n },\n \"rating\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"general_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n },\n \"character_tags\": {\n \"feature\": {\n \"dtype\": \"string\",\n \"_type\": \"Value\"\n },\n \"_type\": \"Sequence\"\n }\n }\n },\n \"builder_name\": \"webdataset\",\n \"config_name\": \"default\",\n \"version\": {\n \"version_str\": \"1.0.0\",\n \"description\": null,\n \"major\": 1,\n \"minor\": 0,\n \"patch\": 0\n }\n }\n}\n\n```\n\nwill close this issue if no further issues found"
] | 2025-05-02T15:18:07 | 2025-05-02T15:37:05 | null | NONE | null | null | null | null | ### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 255, in pyarrow.lib.array
File "pyarrow/array.pxi", line 117, in pyarrow.lib._handle_arrow_array_protocol
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 258, in __arrow_array__
out = cast_array_to_feature(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2006, in cast_array_to_feature
arrays = [
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2007, in <listcomp>
_c(array.field(name) if name in array_fields else null_array, subfeature)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2066, in cast_array_to_feature
casted_array_values = _c(array.values, feature.feature)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 2103, in cast_array_to_feature
return array_cast(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1798, in wrapper
return func(array, *args, **kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/table.py", line 1949, in array_cast
raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
TypeError: Couldn't cast array of type string to null
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/load.py", line 2084, in load_dataset
builder_instance.download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 925, in download_and_prepare
self._download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1649, in _download_and_prepare
super()._download_and_prepare(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1001, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1487, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/builder.py", line 1644, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
`datasets==3.5.1`. What's wrong?
Its inner JSON structure is like:
```yaml
features:
- name: "image"
dtype: "image"
- name: "json.id"
dtype: "string"
- name: "json.width"
dtype: "int32"
- name: "json.height"
dtype: "int32"
- name: "json.rating"
sequence:
dtype: "string"
- name: "json.general_tags"
sequence:
dtype: "string"
- name: "json.character_tags"
sequence:
dtype: "string"
```
I'm 100% sure all the JSON files satisfy the abovementioned format.
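For what it's worth, an alternative to shipping a `dataset_infos.json` (the fix in the comments above) is pinning the schema at load time; a hedged sketch, assuming `load_dataset`'s `features` argument is honored by the webdataset builder, with types mirroring that `dataset_infos.json`:
```python
from datasets import Features, Image, Sequence, Value, load_dataset

# Pin the schema so fields that are empty in some shards can't be inferred as `null`
features = Features({
    "image": Image(),
    "json": {
        "id": Value("int32"),
        "width": Value("int32"),
        "height": Value("int32"),
        "rating": Sequence(Value("string")),
        "general_tags": Sequence(Value("string")),
        "character_tags": Sequence(Value("string")),
    },
})
ds = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k", features=features)
```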
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
### Expected behavior
Load the dataset successfully, with the abovementioned JSON format and WebP images.
### Environment info
- `datasets` version: 3.5.1
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- `huggingface_hub` version: 0.30.2
- PyArrow version: 20.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7549/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7548/comments | https://api.github.com/repos/huggingface/datasets/issues/7548/events | https://github.com/huggingface/datasets/issues/7548 | 3,035,568,851 | I_kwDODunzps607xrT | 7,548 | Python 3.13t (free threads) Compat | {
"login": "Qubitium",
"id": 417764,
"node_id": "MDQ6VXNlcjQxNzc2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Qubitium",
"html_url": "https://github.com/Qubitium",
"followers_url": "https://api.github.com/users/Qubitium/followers",
"following_url": "https://api.github.com/users/Qubitium/following{/other_user}",
"gists_url": "https://api.github.com/users/Qubitium/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Qubitium/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Qubitium/subscriptions",
"organizations_url": "https://api.github.com/users/Qubitium/orgs",
"repos_url": "https://api.github.com/users/Qubitium/repos",
"events_url": "https://api.github.com/users/Qubitium/events{/privacy}",
"received_events_url": "https://api.github.com/users/Qubitium/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Update: `datasets` use `aiohttp` for data streaming and from what I understand data streaming is useful for large datasets that do not fit in memory and/or multi-modal datasets like image/audio where you only what the actual binary bits to fed in as needed. \n\nHowever, there are also many cases where aiohttp will never be used. Text datasets that are not huge, relative to machine spec, and non-multi-modal datasets. \n\nGetting `aiohttp` fixed for `free threading` appeals to be a large task that is not going to be get done in a quick manner. It may be faster to make `aiohttp` optional and not forced build. Otherwise, testing python 3.13t is going to be a painful install. \n\nI have created a fork/branch that temp disables aiohttp import so non-streaming usage of datasets can be tested under python 3.13.t:\n\nhttps://github.com/Qubitium/datasets/tree/disable-aiohttp-depend",
"We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?",
"> We are mostly relying on `huggingface_hub` which uses `requests` to stream files from Hugging Face, so maybe we can move aiohttp to optional dependencies now. Would it solve your issue ? Btw what do you think of `datasets` in the free-threading setting ?\n\nI am testing transformers + dataset (simple text dataset usage) + GPTQModel for quantization and there were no issues encountered with python 3.13t but my test-case is the base-bare minimal test-case since dataset is not sharded, fully in-memory, text-only, small, not used for training. \n\nOn the technical side, dataset is almost always 100% read-only so there should be zero locking issues but I have not checked the dataset internals so there may be cases where streaming, sharding, and/or cases where datset memory/states are updated needs a per dataset `threading.lock`. \n\nSo yes, making `aiohttp` optional will definitely solve my issue. There is also a companion (datasets and tokenizers usually go hand-in-hand) issue with `Tokenizers` as well but that's simple enough with package version update: https://github.com/huggingface/tokenizers/pull/1774\n",
"Ok I see ! Anyway feel free to edit the setup.py to move aiohttp to optional (tests) dependencies and open a PR, we can run the CI to see if it's ok as a change",
"actually there is https://github.com/huggingface/datasets/pull/7294/ already, let's see if we can merge it",
"wouldn't it be the good reason to switch to `httpx`? 😄 (would require slightly more work, short term agree with https://github.com/huggingface/datasets/issues/7548#issuecomment-2854405923)"
] | 2025-05-02T09:20:09 | 2025-05-07T14:47:09 | null | NONE | null | null | null | null | ### Describe the bug
Cannot install `datasets` under `python 3.13t` because of the dependency on `aiohttp`, which cannot be built for free-threaded python.
The `free threading` support issue in `aiohttp` has been open since August 2024! Ouch.
https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784
`pip install datasets`
```bash
(vm313t) root@gpu-base:~/GPTQModel# pip install datasets
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='pypi.org', port=443): Read timed out. (read timeout=15)")': /simple/datasets/
Collecting datasets
Using cached datasets-3.5.1-py3-none-any.whl.metadata (19 kB)
Requirement already satisfied: filelock in /root/vm313t/lib/python3.13t/site-packages (from datasets) (3.18.0)
Requirement already satisfied: numpy>=1.17 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.2.5)
Collecting pyarrow>=15.0.0 (from datasets)
Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Collecting dill<0.3.9,>=0.3.0 (from datasets)
Using cached dill-0.3.8-py3-none-any.whl.metadata (10 kB)
Collecting pandas (from datasets)
Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (89 kB)
Requirement already satisfied: requests>=2.32.2 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (2.32.3)
Requirement already satisfied: tqdm>=4.66.3 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (4.67.1)
Collecting xxhash (from datasets)
Using cached xxhash-3.5.0-cp313-cp313t-linux_x86_64.whl
Collecting multiprocess<0.70.17 (from datasets)
Using cached multiprocess-0.70.16-py312-none-any.whl.metadata (7.2 kB)
Collecting fsspec<=2025.3.0,>=2023.1.0 (from fsspec[http]<=2025.3.0,>=2023.1.0->datasets)
Using cached fsspec-2025.3.0-py3-none-any.whl.metadata (11 kB)
Collecting aiohttp (from datasets)
Using cached aiohttp-3.11.18.tar.gz (7.7 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: huggingface-hub>=0.24.0 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (0.30.2)
Requirement already satisfied: packaging in /root/vm313t/lib/python3.13t/site-packages (from datasets) (25.0)
Requirement already satisfied: pyyaml>=5.1 in /root/vm313t/lib/python3.13t/site-packages (from datasets) (6.0.2)
Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->datasets)
Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl.metadata (5.9 kB)
Collecting aiosignal>=1.1.2 (from aiohttp->datasets)
Using cached aiosignal-1.3.2-py2.py3-none-any.whl.metadata (3.8 kB)
Collecting attrs>=17.3.0 (from aiohttp->datasets)
Using cached attrs-25.3.0-py3-none-any.whl.metadata (10 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->datasets)
Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (16 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->datasets)
Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.3 kB)
Collecting propcache>=0.2.0 (from aiohttp->datasets)
Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)
Collecting yarl<2.0,>=1.17.0 (from aiohttp->datasets)
Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (72 kB)
Requirement already satisfied: idna>=2.0 in /root/vm313t/lib/python3.13t/site-packages (from yarl<2.0,>=1.17.0->aiohttp->datasets) (3.10)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /root/vm313t/lib/python3.13t/site-packages (from huggingface-hub>=0.24.0->datasets) (4.13.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (3.4.1)
Requirement already satisfied: urllib3<3,>=1.21.1 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2.4.0)
Requirement already satisfied: certifi>=2017.4.17 in /root/vm313t/lib/python3.13t/site-packages (from requests>=2.32.2->datasets) (2025.4.26)
Collecting python-dateutil>=2.8.2 (from pandas->datasets)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas->datasets)
Using cached pytz-2025.2-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas->datasets)
Using cached tzdata-2025.2-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting six>=1.5 (from python-dateutil>=2.8.2->pandas->datasets)
Using cached six-1.17.0-py2.py3-none-any.whl.metadata (1.7 kB)
Using cached datasets-3.5.1-py3-none-any.whl (491 kB)
Using cached dill-0.3.8-py3-none-any.whl (116 kB)
Using cached fsspec-2025.3.0-py3-none-any.whl (193 kB)
Using cached multiprocess-0.70.16-py312-none-any.whl (146 kB)
Using cached multidict-6.4.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (220 kB)
Using cached yarl-1.20.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (404 kB)
Using cached aiohappyeyeballs-2.6.1-py3-none-any.whl (15 kB)
Using cached aiosignal-1.3.2-py2.py3-none-any.whl (7.6 kB)
Using cached attrs-25.3.0-py3-none-any.whl (63 kB)
Using cached frozenlist-1.6.0-cp313-cp313t-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (385 kB)
Using cached propcache-0.3.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (282 kB)
Using cached pyarrow-20.0.0-cp313-cp313t-manylinux_2_28_x86_64.whl (42.2 MB)
Using cached pandas-2.2.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.9 MB)
Using cached python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
Using cached pytz-2025.2-py2.py3-none-any.whl (509 kB)
Using cached six-1.17.0-py2.py3-none-any.whl (11 kB)
Using cached tzdata-2025.2-py2.py3-none-any.whl (347 kB)
Building wheels for collected packages: aiohttp
Building wheel for aiohttp (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for aiohttp (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [156 lines of output]
*********************
* Accelerated build *
*********************
/tmp/pip-build-env-wjqi8_7w/overlay/lib/python3.13t/site-packages/setuptools/dist.py:759: SetuptoolsDeprecationWarning: License classifiers are deprecated.
!!
********************************************************************************
Please consider removing the following classifiers in favor of a SPDX license expression:
License :: OSI Approved :: Apache Software License
See https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#license for details.
********************************************************************************
!!
self._finalize_license_expression()
running bdist_wheel
running build
running build_py
creating build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/typedefs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_parser.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_app.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_websocket.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/resolver.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/tracing.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_runner.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/worker.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/connector.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_response.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_server.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_request.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/formdata.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/streams.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/multipart.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_routedef.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_ws.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/payload.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client_proto.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_log.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/base_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/http.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/test_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/client.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/cookiejar.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/compression_utils.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/hdrs.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/web_protocol.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/abc.py -> build/lib.linux-x86_64-cpython-313t/aiohttp
creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/__init__.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/writer.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/models.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader_c.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/helpers.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader_py.py -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no files found matching '*.pyi' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_headers.pxi -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp
copying aiohttp/py.typed -> build/lib.linux-x86_64-cpython-313t/aiohttp
creating build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/.hash
copying aiohttp/_websocket/mask.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/mask.pyx -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
copying aiohttp/_websocket/reader_c.pxd -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket
creating build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
copying aiohttp/_websocket/.hash/mask.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
copying aiohttp/_websocket/.hash/mask.pyx.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
copying aiohttp/_websocket/.hash/reader_c.pxd.hash -> build/lib.linux-x86_64-cpython-313t/aiohttp/_websocket/.hash
running build_ext
building 'aiohttp._websocket.mask' extension
creating build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket
x86_64-linux-gnu-gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection -fPIC -I/root/vm313t/include -I/usr/include/python3.13t -c aiohttp/_websocket/mask.c -o build/temp.linux-x86_64-cpython-313t/aiohttp/_websocket/mask.o
aiohttp/_websocket/mask.c:1864:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
1864 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw);
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c: In function ‘__pyx_f_7aiohttp_10_websocket_4mask__websocket_mask_cython’:
aiohttp/_websocket/mask.c:2905:3: warning: ‘Py_OptimizeFlag’ is deprecated [-Wdeprecated-declarations]
2905 | if (unlikely(__pyx_assertions_enabled())) {
| ^~
In file included from /usr/include/python3.13t/Python.h:76,
from aiohttp/_websocket/mask.c:16:
/usr/include/python3.13t/cpython/pydebug.h:13:37: note: declared here
13 | Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag;
| ^~~~~~~~~~~~~~~
aiohttp/_websocket/mask.c: At top level:
aiohttp/_websocket/mask.c:4846:69: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
4846 | static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw)
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c:4891:80: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
4891 | static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw)
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c: In function ‘__Pyx_CyFunction_CallAsMethod’:
aiohttp/_websocket/mask.c:5580:6: error: unknown type name ‘__pyx_vectorcallfunc’; did you mean ‘vectorcallfunc’?
5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc);
| ^~~~~~~~~~~~~~~~~~~~
| vectorcallfunc
aiohttp/_websocket/mask.c:1954:45: warning: initialization of ‘int’ from ‘vectorcallfunc’ {aka ‘struct _object * (*)(struct _object *, struct _object * const*, long unsigned int, struct _object *)’} makes integer from pointer without a cast [-Wint-conversion]
1954 | #define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall)
| ^
aiohttp/_websocket/mask.c:5580:32: note: in expansion of macro ‘__Pyx_CyFunction_func_vectorcall’
5580 | __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
aiohttp/_websocket/mask.c:5583:16: warning: implicit declaration of function ‘__Pyx_PyVectorcall_FastCallDict’ [-Wimplicit-function-declaration]
5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
aiohttp/_websocket/mask.c:5583:16: warning: returning ‘int’ from a function with return type ‘PyObject *’ {aka ‘struct _object *’} makes pointer from integer without a cast [-Wint-conversion]
5583 | return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for aiohttp
Failed to build aiohttp
ERROR: Failed to build installable wheels for some pyproject.toml based projects (aiohttp)
```
### Steps to reproduce the bug
See above
### Expected behavior
Install
### Environment info
Ubuntu 24.04 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7548/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7547/comments | https://api.github.com/repos/huggingface/datasets/issues/7547/events | https://github.com/huggingface/datasets/pull/7547 | 3,034,830,291 | PR_kwDODunzps6UsTuF | 7,547 | Avoid global umask for setting file mode. | {
"login": "ryan-clancy",
"id": 1282383,
"node_id": "MDQ6VXNlcjEyODIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-clancy",
"html_url": "https://github.com/ryan-clancy",
"followers_url": "https://api.github.com/users/ryan-clancy/followers",
"following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-clancy/orgs",
"repos_url": "https://api.github.com/users/ryan-clancy/repos",
"events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-clancy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7547). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-05-01T22:24:24 | 2025-05-06T13:05:00 | 2025-05-06T13:05:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7547",
"html_url": "https://github.com/huggingface/datasets/pull/7547",
"diff_url": "https://github.com/huggingface/datasets/pull/7547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7547.patch",
"merged_at": "2025-05-06T13:05:00"
} | This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the `temp_file` instead.
This fixes https://github.com/huggingface/datasets/issues/7536. | {
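A simplified sketch of the approach (function name hypothetical): record the mode the temp file actually received and reapply it after the move, instead of consulting the process umask.

```python
import os
import shutil
import stat

def move_preserving_mode(temp_file: str, cache_path: str) -> None:
    # Record the permissions the temp file actually received.
    mode = stat.S_IMODE(os.stat(temp_file).st_mode)
    # shutil.move may not preserve permissions across filesystems.
    shutil.move(temp_file, cache_path)
    # Reapply the recorded permissions without touching the global umask.
    os.chmod(cache_path, mode)
```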
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7547/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7546/comments | https://api.github.com/repos/huggingface/datasets/issues/7546/events | https://github.com/huggingface/datasets/issues/7546 | 3,034,018,298 | I_kwDODunzps6013H6 | 7,546 | Large memory use when loading large datasets from hub | {
"login": "FredHaa",
"id": 6875946,
"node_id": "MDQ6VXNlcjY4NzU5NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6875946?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FredHaa",
"html_url": "https://github.com/FredHaa",
"followers_url": "https://api.github.com/users/FredHaa/followers",
"following_url": "https://api.github.com/users/FredHaa/following{/other_user}",
"gists_url": "https://api.github.com/users/FredHaa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FredHaa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FredHaa/subscriptions",
"organizations_url": "https://api.github.com/users/FredHaa/orgs",
"repos_url": "https://api.github.com/users/FredHaa/repos",
"events_url": "https://api.github.com/users/FredHaa/events{/privacy}",
"received_events_url": "https://api.github.com/users/FredHaa/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! datasets are memory mapped from disk, so they don't fill out your RAM. Not sure what's the source of your memory issue.\n\nWhat kind of system are you using ? and what kind of disk ?"
] | 2025-05-01T14:43:47 | 2025-05-07T14:41:15 | null | NONE | null | null | null | null | ### Describe the bug
When I load large parquet-based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train models using multiple large datasets.
### Steps to reproduce the bug
`uv run --with datasets==3.5.1 python`
```python
from datasets import load_dataset
load_dataset('MLCommons/peoples_speech', 'clean')
load_dataset('mozilla-foundation/common_voice_17_0', 'en')
```
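To see where the memory actually goes, a small diagnostic sketch (assuming `psutil` is installed; it is not a `datasets` dependency) that prints the resident set size before and after loading:

```python
import os
import psutil
from datasets import load_dataset

proc = psutil.Process(os.getpid())
print(f"RSS before load: {proc.memory_info().rss / 1e9:.2f} GB")
ds = load_dataset("MLCommons/peoples_speech", "clean")
print(f"RSS after load:  {proc.memory_info().rss / 1e9:.2f} GB")
```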
### Expected behavior
I would expect that a lot less than 500GB of RAM would be required to load the dataset, or at least that the RAM usage would be cleared as soon as the dataset is loaded (and thus reside as a memory mapped file) such that other datasets can be loaded.
### Environment info
I am currently using the latest datasets==3.5.1 but I have had the same problem with multiple other versions. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7546/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7545/comments | https://api.github.com/repos/huggingface/datasets/issues/7545/events | https://github.com/huggingface/datasets/issues/7545 | 3,031,617,547 | I_kwDODunzps60stAL | 7,545 | Networked Pull Through Cache | {
"login": "wrmedford",
"id": 8764173,
"node_id": "MDQ6VXNlcjg3NjQxNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8764173?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wrmedford",
"html_url": "https://github.com/wrmedford",
"followers_url": "https://api.github.com/users/wrmedford/followers",
"following_url": "https://api.github.com/users/wrmedford/following{/other_user}",
"gists_url": "https://api.github.com/users/wrmedford/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wrmedford/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wrmedford/subscriptions",
"organizations_url": "https://api.github.com/users/wrmedford/orgs",
"repos_url": "https://api.github.com/users/wrmedford/repos",
"events_url": "https://api.github.com/users/wrmedford/events{/privacy}",
"received_events_url": "https://api.github.com/users/wrmedford/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2025-04-30T15:16:33 | 2025-04-30T15:16:33 | null | NONE | null | null | null | null | ### Feature request
Introduce an HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
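A sketch of the lookup order above; every helper below is a hypothetical stub, and only `HF_DATASET_CACHE_NETWORK_LOCATION` comes from this proposal.

```python
import os

def lookup_local_cache(name):
    return None  # stub: tier 1 would check the local on-disk cache

def fetch_from_network_cache(endpoint, name):
    return None  # stub: tier 2 would query {endpoint} and cache locally on a hit

def download_from_hub(name):
    return f"hub://{name}"  # stub: tier 3 falls through to huggingface.co

def resolve_dataset(name):
    local = lookup_local_cache(name)
    if local is not None:
        return local
    endpoint = os.environ.get("HF_DATASET_CACHE_NETWORK_LOCATION")
    if endpoint:
        hit = fetch_from_network_cache(endpoint, name)
        if hit is not None:
            return hit
    return download_from_hub(name)

print(resolve_dataset("MLCommons/peoples_speech"))  # hub://MLCommons/peoples_speech
```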
### Motivation
- Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets.
- Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs.
- Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency.
- Proven pattern: Similar proxy-cache solutions (e.g. Harbor’s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/
### Your contribution
I’m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype.
I have limited bandwidth so I would be looking for collaborators if anyone else is interested. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7545/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7545/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7544/comments | https://api.github.com/repos/huggingface/datasets/issues/7544/events | https://github.com/huggingface/datasets/pull/7544 | 3,027,024,285 | PR_kwDODunzps6UR4Nn | 7,544 | Add try_original_type to DatasetDict.map | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"node_id": "MDQ6VXNlcjExMTU2MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/11156001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoshitomo-matsubara",
"html_url": "https://github.com/yoshitomo-matsubara",
"followers_url": "https://api.github.com/users/yoshitomo-matsubara/followers",
"following_url": "https://api.github.com/users/yoshitomo-matsubara/following{/other_user}",
"gists_url": "https://api.github.com/users/yoshitomo-matsubara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yoshitomo-matsubara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yoshitomo-matsubara/subscriptions",
"organizations_url": "https://api.github.com/users/yoshitomo-matsubara/orgs",
"repos_url": "https://api.github.com/users/yoshitomo-matsubara/repos",
"events_url": "https://api.github.com/users/yoshitomo-matsubara/events{/privacy}",
"received_events_url": "https://api.github.com/users/yoshitomo-matsubara/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7544). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Sure! I just committed the changes",
"@lhoestq \r\nLet me know if there are other things to do before merge or other places to add `try_original_type` argument "
] | 2025-04-29T04:39:44 | 2025-05-05T14:42:49 | 2025-05-05T14:42:49 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7544",
"html_url": "https://github.com/huggingface/datasets/pull/7544",
"diff_url": "https://github.com/huggingface/datasets/pull/7544.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7544.patch",
"merged_at": "2025-05-05T14:42:49"
} | This PR resolves #7472 for DatasetDict
The previously merged PR #7483 added `try_original_type` to ArrowDataset, but `DatasetDict.map` was missing the `try_original_type` argument.
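With this change, the argument behaves the same on a `DatasetDict` as on a single `Dataset`; a quick sanity check (a sketch, not taken from the PR's test suite):

```python
from datasets import Dataset, DatasetDict

dsd = DatasetDict({"train": Dataset.from_dict({"a": [1, 2, 3]})})
# try_original_type=False keeps the newly inferred feature type instead of
# trying to cast the mapped column back to its original feature type
dsd = dsd.map(lambda ex: {"a": ex["a"] + 1}, try_original_type=False)
print(dsd["train"].features)
```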
Cc: @lhoestq | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7544/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7543/comments | https://api.github.com/repos/huggingface/datasets/issues/7543/events | https://github.com/huggingface/datasets/issues/7543 | 3,026,867,706 | I_kwDODunzps60alX6 | 7,543 | The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.) | {
"login": "jxma20",
"id": 76415358,
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxma20",
"html_url": "https://github.com/jxma20",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"repos_url": "https://api.github.com/users/jxma20/repos",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 | NONE | null | null | null | null | ### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on disk. Once the data is stored, the memory it occupied is released. Therefore, when using the map function on a large-scale dataset, only about `writer_batch_size` rows of the dataset should be held in memory at a time.
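For illustration, the buffer size is controlled by the `writer_batch_size` argument of `map` (1000 by default):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})
# Only ~1000 processed rows should be buffered in memory at once; the rest
# are flushed to the on-disk cache file as the map progresses.
ds = ds.map(lambda ex: {"y": ex["x"] * 2}, writer_batch_size=1000)
```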
However, I found that the map function did not actually reduce memory usage when I used it. At first, I thought there was a bug in the program causing a memory leak, i.e. the memory was not released after the data was stored in the cache. But later, I used a Linux command to check for recently modified files during program execution and found that no new files were created or modified. This indicates that the program never stored the dataset in the disk cache.
## bug solved
After modifying the parameters of the map function several times, I discovered the `cache_file_name` parameter, which lets the cache file be stored in a directory of your choice. After making this change, the cache file appeared. Initially I found this quite surprising, but then I wondered whether the cache file had simply failed to be written to the default folder, which could be related to the fact that I don't have root privileges.
So I delved into the source code of the map function to find out where the cache file is stored by default. Eventually, I found the function `def _get_cache_file_path(self, fingerprint):`, which automatically generates the storage path for the cache file. The output was as follows: `/tmp/hf_datasets-j5qco9ug/cache-f2830487643b9cc2.arrow`. My hypothesis was confirmed: the lack of root privileges indeed prevented the cache file from being written, which in turn prevented the memory from being released. Changing the storage location to a folder where I have write access resolved the issue.
### Steps to reproduce the bug
my code
`train_data = train_data.map(process_fun, remove_columns=['image_name', 'question_type', 'concern', 'question', 'candidate_answers', 'answer'])`
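The fix, sketched as a continuation of the call above (the path is a hypothetical writable directory):

```python
train_data = train_data.map(
    process_fun,
    remove_columns=['image_name', 'question_type', 'concern', 'question',
                    'candidate_answers', 'answer'],
    # store the cache somewhere I can write to, instead of the default /tmp path
    cache_file_name='/home/me/.cache/my_dataset/cache.arrow',  # hypothetical path
)
```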
### Expected behavior
Although my bug has been resolved, it still took me nearly a week of searching for relevant information and debugging the program. If a warning or error message about insufficient cache-file write permissions were emitted during execution, I might have identified the cause much more quickly, so I hope this aspect can be improved. I am documenting this bug here so that others who encounter similar issues can solve them promptly.
### Environment info
python: 3.10.15
datasets: 3.5.0 | {
"login": "jxma20",
"id": 76415358,
"node_id": "MDQ6VXNlcjc2NDE1MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/76415358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxma20",
"html_url": "https://github.com/jxma20",
"followers_url": "https://api.github.com/users/jxma20/followers",
"following_url": "https://api.github.com/users/jxma20/following{/other_user}",
"gists_url": "https://api.github.com/users/jxma20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxma20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxma20/subscriptions",
"organizations_url": "https://api.github.com/users/jxma20/orgs",
"repos_url": "https://api.github.com/users/jxma20/repos",
"events_url": "https://api.github.com/users/jxma20/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxma20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7543/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7543/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7542/comments | https://api.github.com/repos/huggingface/datasets/issues/7542/events | https://github.com/huggingface/datasets/pull/7542 | 3,025,054,630 | PR_kwDODunzps6ULHxo | 7,542 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7542). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-04-28T14:03:48 | 2025-04-28T14:08:37 | 2025-04-28T14:04:00 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7542",
"html_url": "https://github.com/huggingface/datasets/pull/7542",
"diff_url": "https://github.com/huggingface/datasets/pull/7542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7542.patch",
"merged_at": "2025-04-28T14:04:00"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7542/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7542/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7541/comments | https://api.github.com/repos/huggingface/datasets/issues/7541/events | https://github.com/huggingface/datasets/pull/7541 | 3,025,045,919 | PR_kwDODunzps6ULF7F | 7,541 | release: 3.5.1 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7541). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-04-28T14:00:59 | 2025-04-28T14:03:38 | 2025-04-28T14:01:54 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7541",
"html_url": "https://github.com/huggingface/datasets/pull/7541",
"diff_url": "https://github.com/huggingface/datasets/pull/7541.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7541.patch",
"merged_at": "2025-04-28T14:01:54"
} | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7541/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7540/comments | https://api.github.com/repos/huggingface/datasets/issues/7540/events | https://github.com/huggingface/datasets/pull/7540 | 3,024,862,966 | PR_kwDODunzps6UKe6T | 7,540 | support pyarrow 20 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-04-28T13:01:11 | 2025-04-28T13:23:53 | 2025-04-28T13:23:52 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7540",
"html_url": "https://github.com/huggingface/datasets/pull/7540",
"diff_url": "https://github.com/huggingface/datasets/pull/7540.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7540.patch",
"merged_at": "2025-04-28T13:23:52"
} | fix
```
TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts'
``` | {
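Not the merged fix, but a sketch of the compatibility idea: forward the new keyword only when the installed pyarrow supports it.

```python
import pyarrow as pa
from packaging.version import Version

def to_pylist_compat(arr: pa.Array, maps_as_pydicts=None):
    # pyarrow >= 20 forwards `maps_as_pydicts` to `to_pylist`; older
    # versions reject the keyword, so only pass it when supported.
    if Version(pa.__version__) >= Version("20.0.0"):
        return arr.to_pylist(maps_as_pydicts=maps_as_pydicts)
    return arr.to_pylist()

print(to_pylist_compat(pa.array([1, 2, 3])))  # [1, 2, 3]
```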
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7540/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7540/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7539/comments | https://api.github.com/repos/huggingface/datasets/issues/7539/events | https://github.com/huggingface/datasets/pull/7539 | 3,023,311,163 | PR_kwDODunzps6UFQ0W | 7,539 | Fix IterableDataset state_dict shard_example_idx counting | {
"login": "Harry-Yang0518",
"id": 129883215,
"node_id": "U_kgDOB73cTw",
"avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harry-Yang0518",
"html_url": "https://github.com/Harry-Yang0518",
"followers_url": "https://api.github.com/users/Harry-Yang0518/followers",
"following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}",
"gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions",
"organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs",
"repos_url": "https://api.github.com/users/Harry-Yang0518/repos",
"events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}",
"received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7539). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Hi ! FYI I made a PR to fix https://github.com/huggingface/datasets/issues/7538 and it also fixed https://github.com/huggingface/datasets/issues/7475, so if I'm not mistaken this PR is not needed anymore"
] | 2025-04-27T20:41:18 | 2025-05-06T14:24:25 | 2025-05-06T14:24:24 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7539",
"html_url": "https://github.com/huggingface/datasets/pull/7539",
"diff_url": "https://github.com/huggingface/datasets/pull/7539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7539.patch",
"merged_at": null
} | # Fix IterableDataset's state_dict shard_example_idx reporting
## Description
This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed.
The issue is in the `_iter_arrow` method of the `ArrowExamplesIterable` class where it updates the `shard_example_idx` state by the full length of the batch (`len(pa_table)`) even when we're only partway through processing the examples.
## Changes
Modified the `_iter_arrow` method of `ArrowExamplesIterable` to:
1. Track the actual number of examples processed
2. Only increment the `shard_example_idx` by the number of examples actually yielded
3. Handle partial batches correctly
## How to Test
I've included a simple test case that demonstrates the fix:
```python
from datasets import Dataset
# Create a test dataset
ds = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=1)
# Iterate through part of the dataset
for idx, example in enumerate(ds):
print(example)
if idx == 2: # Stop after 3 examples (0, 1, 2)
state_dict = ds.state_dict()
print("Checkpoint state_dict:", state_dict)
break
# Before the fix, the output would show shard_example_idx: 6
# After the fix, it shows shard_example_idx: 3, correctly reflecting the 3 processed examples
```
## Implementation Details
1. Added logic to track the number of examples actually seen in the current shard
2. Modified the state update to only count examples actually yielded
3. Improved handling of partial batches and skipped examples
This fix ensures that checkpointing and resuming work correctly with exactly the expected number of examples, rather than skipping ahead to the end of the batch. | {
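A stripped-down illustration of the counting idea (hypothetical names, not the actual iterable classes): the counter advances per example yielded, not per batch fetched.

```python
def iter_with_state(batches, state):
    for batch in batches:
        for example in batch:
            yield example
            # runs only after the consumer has received `example`
            state["shard_example_idx"] += 1

state = {"shard_example_idx": 0}
it = iter_with_state([[0, 1, 2], [3, 4, 5]], state)
next(it); next(it); next(it)  # consume examples 0, 1, 2
print(state["shard_example_idx"])  # 2: the increment for example 2 has not run yet
```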
"login": "Harry-Yang0518",
"id": 129883215,
"node_id": "U_kgDOB73cTw",
"avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harry-Yang0518",
"html_url": "https://github.com/Harry-Yang0518",
"followers_url": "https://api.github.com/users/Harry-Yang0518/followers",
"following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}",
"gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions",
"organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs",
"repos_url": "https://api.github.com/users/Harry-Yang0518/repos",
"events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}",
"received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7539/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7538/comments | https://api.github.com/repos/huggingface/datasets/issues/7538/events | https://github.com/huggingface/datasets/issues/7538 | 3,023,280,056 | I_kwDODunzps60M5e4 | 7,538 | `IterableDataset` drops samples when resuming from a checkpoint | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting ! I fixed the issue using RebatchedArrowExamplesIterable before the formatted iterable"
] | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 | COLLABORATOR | null | null | null | null | When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one (after formatting). However, the child increments the `shard_example_idx` counter (in its `iter_arrow`) before returning the batch for the whole batch size, which leads to a portion of samples being skipped if the iteration (of the parent iterable) is stopped mid-batch.
Perhaps one way to avoid this would be by signalling the child iterable which samples (within the chunk) are processed by the parent and which are not, so that it can adjust the `shard_example_idx` counter accordingly. This would also mean the chunk needs to be sliced when resuming, but this is straightforward to implement.
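To make the bookkeeping concrete, here is a minimal, self-contained sketch of the slicing idea in plain Python (not the actual `datasets` internals; all names are made up): the iterator checkpoints mid-batch progress in its state, so resuming slices the current batch instead of skipping the rest of it.
```python
from typing import Iterator

def iter_batches(data: list, batch_size: int, state: dict) -> Iterator:
    # `state` records both the batch offset and how far into the current
    # batch we got, so a resume can slice the batch instead of dropping it.
    while state["batch_start"] < len(data):
        batch = data[state["batch_start"] : state["batch_start"] + batch_size]
        for i in range(state["in_batch"], len(batch)):
            state["in_batch"] = i + 1  # checkpoint *before* handing out the sample
            yield batch[i]
        state["batch_start"] += len(batch)
        state["in_batch"] = 0

state = {"batch_start": 0, "in_batch": 0}
it = iter_batches(list(range(10)), batch_size=4, state=state)
seen = [next(it) for _ in range(3)]                       # stop mid-batch
resumed = list(iter_batches(list(range(10)), 4, dict(state)))
assert seen + resumed == list(range(10))                  # nothing dropped
```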
The following is a minimal reproducer of the bug:
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
ds = Dataset.from_dict({"n": list(range(24))})
ds = ds.to_iterable_dataset(num_shards=4)
world_size = 4
rank = 0
ds_rank = split_dataset_by_node(ds, rank, world_size)
it = iter(ds_rank)
examples = []
for idx, example in enumerate(it):
examples.append(example)
if idx == 2:
state_dict = ds_rank.state_dict()
break
ds_rank.load_state_dict(state_dict)
it_resumed = iter(ds_rank)
examples_resumed = examples[:]
for example in it:
examples.append(example)
for example in it_resumed:
examples_resumed.append(example)
print("ORIGINAL ITER EXAMPLES:", examples)
print("RESUMED ITER EXAMPLES:", examples_resumed)
``` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7538/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7538/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7537/comments | https://api.github.com/repos/huggingface/datasets/issues/7537/events | https://github.com/huggingface/datasets/issues/7537 | 3,018,792,966 | I_kwDODunzps6z7yAG | 7,537 | `datasets.map(..., num_proc=4)` multi-processing fails | {
"login": "faaany",
"id": 24477841,
"node_id": "MDQ6VXNlcjI0NDc3ODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/24477841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faaany",
"html_url": "https://github.com/faaany",
"followers_url": "https://api.github.com/users/faaany/followers",
"following_url": "https://api.github.com/users/faaany/following{/other_user}",
"gists_url": "https://api.github.com/users/faaany/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faaany/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faaany/subscriptions",
"organizations_url": "https://api.github.com/users/faaany/orgs",
"repos_url": "https://api.github.com/users/faaany/repos",
"events_url": "https://api.github.com/users/faaany/events{/privacy}",
"received_events_url": "https://api.github.com/users/faaany/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"related: https://github.com/huggingface/datasets/issues/7510\n\nwe need to do more tests to see if latest `dill` is deterministic"
] | 2025-04-25T01:53:47 | 2025-05-06T13:12:08 | null | NONE | null | null | null | null | The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.12/dist-packages/multiprocess/pool.py", line 114, in worker
task = get()
^^^^^
File "/usr/local/lib/python3.12/dist-packages/multiprocess/queues.py", line 371, in get
return _ForkingPickler.loads(res)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 327, in loads
return load(file, ignore, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 313, in load
return Unpickler(file, ignore=ignore, **kwds).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 525, in load
obj = StockUnpickler.load(self)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/dill/_dill.py", line 659, in _create_code
if len(args) == 16: return CodeType(*args)
^^^^^^^^^^^^^^^
TypeError: code() argument 13 must be str, not int
```
After upgrading dill to the latest 0.4.0 with `pip install --upgrade dill`, it passes. So there seems to be a compatibility issue between dill 0.3.4 and Python 3.11+, since Python 3.10 works fine.
Is the dill determinism issue mentioned in https://github.com/huggingface/datasets/blob/main/setup.py#L117 still valid? Any plan to unpin?
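For anyone hitting this, a quick sanity check along these lines (illustrative, not part of `datasets`) surfaces the incompatibility before any workers are spawned:
```python
import dill

def f(x):
    return x + 1

# On dill 0.3.4 with Python 3.11+, round-tripping a code object like this is
# the kind of operation that raises `TypeError: code() argument 13 must be
# str, not int`; on a compatible dill the assertion passes.
restored = dill.loads(dill.dumps(f))
assert restored(1) == 2
```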
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7537/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7536/comments | https://api.github.com/repos/huggingface/datasets/issues/7536/events | https://github.com/huggingface/datasets/issues/7536 | 3,018,425,549 | I_kwDODunzps6z6YTN | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | {
"login": "ryan-clancy",
"id": 1282383,
"node_id": "MDQ6VXNlcjEyODIzODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1282383?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryan-clancy",
"html_url": "https://github.com/ryan-clancy",
"followers_url": "https://api.github.com/users/ryan-clancy/followers",
"following_url": "https://api.github.com/users/ryan-clancy/following{/other_user}",
"gists_url": "https://api.github.com/users/ryan-clancy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryan-clancy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryan-clancy/subscriptions",
"organizations_url": "https://api.github.com/users/ryan-clancy/orgs",
"repos_url": "https://api.github.com/users/ryan-clancy/repos",
"events_url": "https://api.github.com/users/ryan-clancy/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryan-clancy/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)",
"> It must be an issue with umask being used by multiple threads indeed. Maybe we can try to make a thread safe function to apply the umask (using filelock for example)\n\n@lhoestq is this something which can go in a 3.5.1 release?",
"Yes for sure",
"@lhoestq - can you take a look at https://github.com/huggingface/datasets/pull/7547/?"
] | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions, leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with no changes will usually succeed.
Is there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)?
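A thread-safe variant along the lines suggested in the comments could look like this rough sketch (names are hypothetical, not the actual `datasets` code):
```python
import os
import threading

_umask_lock = threading.Lock()

def create_file_respecting_umask(path: str) -> int:
    # Serialize the read-and-restore of the process-global umask so that a
    # concurrent download thread can never observe the temporary mask and
    # end up creating its file with 000 permissions.
    with _umask_lock:
        saved = os.umask(0o666)   # read the current umask...
        os.umask(saved)           # ...and immediately restore it
        return os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o666 & ~saved)
```
For reference, the full traceback: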
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset
builder_instance.download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare
self._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare
super()._download_and_prepare(
.venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
.venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators
downloaded_files = dl_manager.download(files)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download
downloaded_path_or_paths = map_nested(
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested
_single_map_nested((function, obj, batched, batch_size, types, None, True, None))
.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested
return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched
return thread_map(
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
.venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__
for obj in iterable:
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator
yield _result_or_cancel(fs.pop())
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel
return fut.result(timeout)
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result
return self.__get_result()
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result
raise self._exception
../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run
result = self.fn(*self.args, **self.kwargs)
.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single
out = cached_path(url_or_filename, download_config=download_config)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path
output_path = get_from_cache(
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache
fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper
return sync(self.loop, func, *args, **kwargs)
.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync
raise return_result
.venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner
result[0] = await coro
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <s3fs.core.S3FileSystem object at 0x7f27c18b2e70>
rpath = '<my-bucket>/<my-prefix>/img_1.jpg'
lpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
callback = <datasets.utils.file_utils.TqdmCallback object at 0x7f27c00cdbe0>
version_id = None, kwargs = {}
_open_file = <function S3FileSystem._get_file.<locals>._open_file at 0x7f27628d1120>
body = <StreamingBody at 0x7f276344fa80 for ClientResponse at 0x7f27c015fce0>
content_length = 521923, failed_reads = 0, bytes_read = 0
async def _get_file(
self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs
):
if os.path.isdir(lpath):
return
bucket, key, vers = self.split_path(rpath)
async def _open_file(range: int):
kw = self.req_kw.copy()
if range:
kw["Range"] = f"bytes={range}-"
resp = await self._call_s3(
"get_object",
Bucket=bucket,
Key=key,
**version_id_kw(version_id or vers),
**kw,
)
return resp["Body"], resp.get("ContentLength", None)
body, content_length = await _open_file(range=0)
callback.set_size(content_length)
failed_reads = 0
bytes_read = 0
try:
> with open(lpath, "wb") as f0:
E PermissionError: [Errno 13] Permission denied: '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'
.venv/lib/python3.12/site-packages/s3fs/core.py:1355: PermissionError
```
### Steps to reproduce the bug
I believe this is a race condition and cannot reliably reproduce it, but it happens fairly frequently in our GitHub Actions tests and can also be reproduced (with lower frequency) on cloud VMs.
### Expected behavior
The dataset loads properly with no permission denied error.
### Environment info
- `datasets` version: 3.5.0
- Platform: Linux-5.10.0-34-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.12.10
- `huggingface_hub` version: 0.30.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7536/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7535/comments | https://api.github.com/repos/huggingface/datasets/issues/7535/events | https://github.com/huggingface/datasets/pull/7535 | 3,018,289,872 | PR_kwDODunzps6T0lm3 | 7,535 | Change dill version in requirements | {
"login": "JGrel",
"id": 98061329,
"node_id": "U_kgDOBdhMEQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98061329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JGrel",
"html_url": "https://github.com/JGrel",
"followers_url": "https://api.github.com/users/JGrel/followers",
"following_url": "https://api.github.com/users/JGrel/following{/other_user}",
"gists_url": "https://api.github.com/users/JGrel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JGrel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JGrel/subscriptions",
"organizations_url": "https://api.github.com/users/JGrel/orgs",
"repos_url": "https://api.github.com/users/JGrel/repos",
"events_url": "https://api.github.com/users/JGrel/events{/privacy}",
"received_events_url": "https://api.github.com/users/JGrel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7535). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-04-24T19:44:28 | 2025-04-25T09:31:44 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7535",
"html_url": "https://github.com/huggingface/datasets/pull/7535",
"diff_url": "https://github.com/huggingface/datasets/pull/7535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7535.patch",
"merged_at": null
} | Change dill version to >=0.3.9,<0.4.5 and check for errors | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7535/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7534/comments | https://api.github.com/repos/huggingface/datasets/issues/7534/events | https://github.com/huggingface/datasets/issues/7534 | 3,017,259,407 | I_kwDODunzps6z17mP | 7,534 | TensorFlow RaggedTensor Support (batch-level) | {
"login": "Lundez",
"id": 7490199,
"node_id": "MDQ6VXNlcjc0OTAxOTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7490199?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lundez",
"html_url": "https://github.com/Lundez",
"followers_url": "https://api.github.com/users/Lundez/followers",
"following_url": "https://api.github.com/users/Lundez/following{/other_user}",
"gists_url": "https://api.github.com/users/Lundez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lundez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lundez/subscriptions",
"organizations_url": "https://api.github.com/users/Lundez/orgs",
"repos_url": "https://api.github.com/users/Lundez/repos",
"events_url": "https://api.github.com/users/Lundez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lundez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Keras doesn't support other inputs other than tf.data.Dataset objects ? it's a bit painful to have to support and maintain this kind of integration\n\nIs there a way to use a `datasets.Dataset` with outputs formatted as tensors / ragged tensors instead ? like in https://huggingface.co/docs/datasets/use_with_tensorflow#dataset-format"
] | 2025-04-24T13:14:52 | 2025-05-06T13:25:10 | null | NONE | null | null | null | null | ### Feature request
Hi,
Currently `datasets` does not support RaggedTensor output at the batch level.
When building an Object Detection dataset (with TensorFlow), I need to enable RaggedTensors, as that's how bboxes & classes are expected from the Keras model's point of view.
Currently there's an error thrown saying that "Nested Data is not supported".
It'd be very helpful if this was fixed! :)
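For illustration, keeping a batch ragged at collate time could look roughly like this (field names are made up):
```python
import tensorflow as tf

def collate_ragged(batch):
    # Keep per-image box lists ragged instead of forcing a rectangular np.array.
    bboxes = tf.ragged.constant([ex["bboxes"] for ex in batch], ragged_rank=1)
    classes = tf.ragged.constant([ex["classes"] for ex in batch])
    return bboxes, classes

batch = [
    {"bboxes": [[0, 0, 10, 10]], "classes": [3]},
    {"bboxes": [[1, 1, 5, 5], [2, 2, 8, 8]], "classes": [1, 7]},
]
boxes, labels = collate_ragged(batch)
print(boxes.shape)  # (2, None, 4)
```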
### Motivation
Enabling Object Detection pipelines for TensorFlow.
### Your contribution
With guidance I'd happily help make the PR.
The current implementation with DataCollator and later enforcing `np.array` is the problematic part (at the end of `np_get_batch` in `tf_utils.py`), as `numpy` doesn't support raggedness. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7534/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7533/comments | https://api.github.com/repos/huggingface/datasets/issues/7533/events | https://github.com/huggingface/datasets/pull/7533 | 3,015,075,086 | PR_kwDODunzps6TpraJ | 7,533 | Add custom fingerprint support to `from_generator` | {
"login": "simonreise",
"id": 43753582,
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonreise",
"html_url": "https://github.com/simonreise",
"followers_url": "https://api.github.com/users/simonreise/followers",
"following_url": "https://api.github.com/users/simonreise/following{/other_user}",
"gists_url": "https://api.github.com/users/simonreise/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonreise/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonreise/subscriptions",
"organizations_url": "https://api.github.com/users/simonreise/orgs",
"repos_url": "https://api.github.com/users/simonreise/repos",
"events_url": "https://api.github.com/users/simonreise/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonreise/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"This is great !\r\n\r\nWhat do you think of passing `config_id=` directly to the builder instead of just the suffix ? This would be a power user argument though, or for internal use. And in from_generator the new argument can be `fingerprint=` as in `Dataset.__init__()`\r\n\r\nThe `config_id` can be defined using something like `config_id = \"default-fingerprint=\" + fingerprint`\r\n\r\nI feel ike this could make the Dataset API more coherent if we avoid introducing a new argument while we can juste use `fingerprint=`"
] | 2025-04-23T19:31:35 | 2025-05-07T14:17:05 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7533",
"html_url": "https://github.com/huggingface/datasets/pull/7533",
"diff_url": "https://github.com/huggingface/datasets/pull/7533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7533.patch",
"merged_at": null
This PR adds a `dataset_id_suffix` parameter to the `Dataset.from_generator` function.
`Dataset.from_generator` passes all of its arguments to `BuilderConfig.create_config_id`, including the generator function itself. `BuilderConfig.create_config_id` tries to hash all the args, which can take a large amount of time or even cause a MemoryError if the dataset processed in the generator function is large enough.
This PR allows the user to pass a custom fingerprint (`dataset_id_suffix`) to be used as the suffix in the dataset name instead of the one generated by hashing the args.
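A sketch of the intended call, using the `fingerprint=` spelling suggested in the review comment (this argument is not part of any released API yet):
```python
from datasets import Dataset

def gen():
    for i in range(3):
        yield {"x": i}

# Hypothetical: skip hashing the (potentially huge) generator closure by
# supplying a precomputed fingerprint for the config id.
ds = Dataset.from_generator(gen, fingerprint="my-precomputed-hash-v1")
```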
This PR is a possible solution of #7513 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7533/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7532/comments | https://api.github.com/repos/huggingface/datasets/issues/7532/events | https://github.com/huggingface/datasets/pull/7532 | 3,009,546,204 | PR_kwDODunzps6TW8Ss | 7,532 | Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation | {
"login": "Harry-Yang0518",
"id": 129883215,
"node_id": "U_kgDOB73cTw",
"avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harry-Yang0518",
"html_url": "https://github.com/Harry-Yang0518",
"followers_url": "https://api.github.com/users/Harry-Yang0518/followers",
"following_url": "https://api.github.com/users/Harry-Yang0518/following{/other_user}",
"gists_url": "https://api.github.com/users/Harry-Yang0518/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Harry-Yang0518/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Harry-Yang0518/subscriptions",
"organizations_url": "https://api.github.com/users/Harry-Yang0518/orgs",
"repos_url": "https://api.github.com/users/Harry-Yang0518/repos",
"events_url": "https://api.github.com/users/Harry-Yang0518/events{/privacy}",
"received_events_url": "https://api.github.com/users/Harry-Yang0518/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7532). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Your clarification in your comment at https://github.com/huggingface/datasets/issues/7480#issuecomment-2833640084 sounds great, would you like to update this PR to include it ?",
"Hi @lhoestq, I’ve updated the documentation to reflect the clarifications discussed in #7480. Let me know if anything else is needed!\r\n"
] | 2025-04-22T00:23:13 | 2025-05-06T15:54:38 | 2025-05-06T15:54:38 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7532",
"html_url": "https://github.com/huggingface/datasets/pull/7532",
"diff_url": "https://github.com/huggingface/datasets/pull/7532.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7532.patch",
"merged_at": "2025-05-06T15:54:38"
} |
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for datasets stored in Arrow format.
This addition is based on the discussion in https://github.com/huggingface/datasets/issues/7457, where users noted the absence of this variable in the documentation despite its functionality. The update adds a new section to `cache.mdx` that explains how to use `HF_DATASETS_CACHE` with an example.
This change aims to improve clarity and help users better manage their cache directories when working in shared environments or with limited local storage.
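For example (path illustrative; the variable must be set before `datasets` is imported, since the cache location is read at import time):
```python
import os

os.environ["HF_DATASETS_CACHE"] = "/mnt/shared/hf_datasets_cache"  # illustrative path

from datasets import load_dataset  # reads HF_DATASETS_CACHE at import time

ds = load_dataset("rotten_tomatoes")  # Arrow files now land under the custom cache
```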
Closes #7457. | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7532/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7532/timeline | null | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7531/comments | https://api.github.com/repos/huggingface/datasets/issues/7531/events | https://github.com/huggingface/datasets/issues/7531 | 3,008,914,887 | I_kwDODunzps6zWGXH | 7,531 | Deepspeed reward training hangs at end of training with Dataset.from_list | {
"login": "Matt00n",
"id": 60710414,
"node_id": "MDQ6VXNlcjYwNzEwNDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/60710414?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Matt00n",
"html_url": "https://github.com/Matt00n",
"followers_url": "https://api.github.com/users/Matt00n/followers",
"following_url": "https://api.github.com/users/Matt00n/following{/other_user}",
"gists_url": "https://api.github.com/users/Matt00n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Matt00n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Matt00n/subscriptions",
"organizations_url": "https://api.github.com/users/Matt00n/orgs",
"repos_url": "https://api.github.com/users/Matt00n/repos",
"events_url": "https://api.github.com/users/Matt00n/events{/privacy}",
"received_events_url": "https://api.github.com/users/Matt00n/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! How big is the dataset ? if you load it using `from_list`, the dataset lives in memory and has to be copied to every gpu process, which can be slow.\n\nIt's fasted if you load it from JSON files from disk, because in that case the dataset in converted to Arrow and loaded from disk using memory mapping. Memory mapping allows to quickly reload the dataset in other processes.\n\nMaybe we can change `from_list` and other methods to always use the disk though, instead of loading in memory, WDYT ?"
] | 2025-04-21T17:29:20 | 2025-05-06T13:30:41 | null | NONE | null | null | null | null | There seems to be a weird interaction between Deepspeed, the Dataset.from_list method, and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end, and running the same script with Deepspeed on a single GPU works without hanging. The issue persisted across a wide range of Deepspeed configs and training arguments. The issue went away when storing the exact same dataset as a JSON and using `dataset = load_dataset("json", ...)`. Here is my training script:
```python
import pickle
import os
import random
import warnings
import torch
from datasets import load_dataset, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer, ModelConfig, setup_chat_format
####################################### Reward model #################################################
# Explicitly set arguments
model_name_or_path = "Qwen/Qwen2.5-1.5B"
output_dir = "Qwen2-0.5B-Reward-LoRA"
per_device_train_batch_size = 2
num_train_epochs = 5
gradient_checkpointing = True
learning_rate = 1.0e-4
logging_steps = 25
eval_strategy = "steps"
eval_steps = 50
max_length = 2048
torch_dtype = "auto"
trust_remote_code = False
model_args = ModelConfig(
model_name_or_path=model_name_or_path,
model_revision=None,
trust_remote_code=trust_remote_code,
torch_dtype=torch_dtype,
lora_task_type="SEQ_CLS", # Make sure task type is seq_cls
)
training_args = RewardConfig(
output_dir=output_dir,
per_device_train_batch_size=per_device_train_batch_size,
num_train_epochs=num_train_epochs,
gradient_checkpointing=gradient_checkpointing,
learning_rate=learning_rate,
logging_steps=logging_steps,
eval_strategy=eval_strategy,
eval_steps=eval_steps,
max_length=max_length,
gradient_checkpointing_kwargs=dict(use_reentrant=False),
center_rewards_coefficient = 0.01,
fp16=False,
bf16=True,
save_strategy="no",
dataloader_num_workers=0,
# deepspeed="./configs/deepspeed_config.json",
)
################
# Model & Tokenizer
################
model_kwargs = dict(
revision=model_args.model_revision,
use_cache=False if training_args.gradient_checkpointing else True,
torch_dtype=model_args.torch_dtype,
)
tokenizer = AutoTokenizer.from_pretrained(
model_args.model_name_or_path, use_fast=True
)
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path, num_labels=1, trust_remote_code=model_args.trust_remote_code, **model_kwargs
)
# Align padding tokens between tokenizer and model
model.config.pad_token_id = tokenizer.pad_token_id
# If post-training a base model, use ChatML as the default template
if tokenizer.chat_template is None:
model, tokenizer = setup_chat_format(model, tokenizer)
if model_args.use_peft and model_args.lora_task_type != "SEQ_CLS":
warnings.warn(
"You are using a `task_type` that is different than `SEQ_CLS` for PEFT. This will lead to silent bugs"
" Make sure to pass --lora_task_type SEQ_CLS when using this script with PEFT.",
UserWarning,
)
##############
# Load dataset
##############
with open('./prefs.pkl', 'rb') as fh:
loaded_data = pickle.load(fh)
random.shuffle(loaded_data)
dataset = []
for a_wins, a, b in loaded_data:
if a_wins == 0:
a, b = b, a
dataset.append({'chosen': a, 'rejected': b})
dataset = Dataset.from_list(dataset)
# Split the dataset into training and evaluation sets
train_eval_split = dataset.train_test_split(test_size=0.15, shuffle=True, seed=42)
# Access the training and evaluation datasets
train_dataset = train_eval_split['train']
eval_dataset = train_eval_split['test']
##########
# Training
##########
trainer = RewardTrainer(
model=model,
processing_class=tokenizer,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
)
trainer.train()
```
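As the comment above suggests, the difference comes down to memory mapping. A sketch (paths illustrative) that keeps `from_list` but materializes the data on disk as Arrow before training:
```python
from datasets import Dataset, load_from_disk

records = [{"chosen": "a", "rejected": "b"}]   # stand-in for the list built above
Dataset.from_list(records).save_to_disk("./prefs_arrow")
dataset = load_from_disk("./prefs_arrow")      # memory-mapped; cheap to reopen per rank
```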
Replacing `dataset = Dataset.from_list(dataset)` with
```python
with open('./prefs.json', 'w') as fh:
json.dump(dataset, fh)
dataset = load_dataset("json", data_files="./prefs.json", split='train')
```
resolves the issue. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7531/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7530/comments | https://api.github.com/repos/huggingface/datasets/issues/7530/events | https://github.com/huggingface/datasets/issues/7530 | 3,007,452,499 | I_kwDODunzps6zQhVT | 7,530 | How to solve "Spaces stuck in Building" problems | {
"login": "kakamond",
"id": 185799756,
"node_id": "U_kgDOCxMUTA",
"avatar_url": "https://avatars.githubusercontent.com/u/185799756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kakamond",
"html_url": "https://github.com/kakamond",
"followers_url": "https://api.github.com/users/kakamond/followers",
"following_url": "https://api.github.com/users/kakamond/following{/other_user}",
"gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
"organizations_url": "https://api.github.com/users/kakamond/orgs",
"repos_url": "https://api.github.com/users/kakamond/repos",
"events_url": "https://api.github.com/users/kakamond/events{/privacy}",
"received_events_url": "https://api.github.com/users/kakamond/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n",
"> I'm facing the same issue—Space stuck in \"Building\" even after restart and Factory rebuild. Any fix?\n\nAlso see https://github.com/huggingface/huggingface_hub/issues/3019",
"I'm facing the same issue. The build fails with the same error, and restarting won't help. Is there a fix or ETA? "
] | 2025-04-21T03:08:38 | 2025-04-22T07:49:52 | 2025-04-22T07:49:52 | NONE | null | null | null | null | ### Describe the bug
Public spaces may get stuck in Building after restarting; error log as follows:
```
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized
```
### Steps to reproduce the bug
Restarting the Space / doing a Factory rebuild does not avoid it
### Expected behavior
Fix this problem
### Environment info
It can still happen with no `requirements.txt`.
Python Gradio Spaces | {
"login": "kakamond",
"id": 185799756,
"node_id": "U_kgDOCxMUTA",
"avatar_url": "https://avatars.githubusercontent.com/u/185799756?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kakamond",
"html_url": "https://github.com/kakamond",
"followers_url": "https://api.github.com/users/kakamond/followers",
"following_url": "https://api.github.com/users/kakamond/following{/other_user}",
"gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
"organizations_url": "https://api.github.com/users/kakamond/orgs",
"repos_url": "https://api.github.com/users/kakamond/repos",
"events_url": "https://api.github.com/users/kakamond/events{/privacy}",
"received_events_url": "https://api.github.com/users/kakamond/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7530/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7529/comments | https://api.github.com/repos/huggingface/datasets/issues/7529/events | https://github.com/huggingface/datasets/issues/7529 | 3,007,118,969 | I_kwDODunzps6zPP55 | 7,529 | audio folder builder cannot detect custom split name | {
"login": "phineas-pta",
"id": 37548991,
"node_id": "MDQ6VXNlcjM3NTQ4OTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37548991?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phineas-pta",
"html_url": "https://github.com/phineas-pta",
"followers_url": "https://api.github.com/users/phineas-pta/followers",
"following_url": "https://api.github.com/users/phineas-pta/following{/other_user}",
"gists_url": "https://api.github.com/users/phineas-pta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phineas-pta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phineas-pta/subscriptions",
"organizations_url": "https://api.github.com/users/phineas-pta/orgs",
"repos_url": "https://api.github.com/users/phineas-pta/repos",
"events_url": "https://api.github.com/users/phineas-pta/events{/privacy}",
"received_events_url": "https://api.github.com/users/phineas-pta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-20T16:53:21 | 2025-04-20T16:53:21 | null | NONE | null | null | null | null | ### Describe the bug
when using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test
### Steps to reproduce the bug
I have the following folder structure
```
my_dataset/
├── train/
│ ├── lorem.wav
│ ├── …
│ └── metadata.csv
├── test/
│ ├── ipsum.wav
│ ├── …
│ └── metadata.csv
├── validation/
│ ├── dolor.wav
│ ├── …
│ └── metadata.csv
└── custom/
├── sit.wav
├── …
└── metadata.csv
```
using `ds = load_dataset("audiofolder", data_dir="/path/to/my_dataset")`
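A possible workaround sketch (glob patterns assumed) is to name every split explicitly via `data_files`:
```python
from datasets import load_dataset

ds = load_dataset(
    "audiofolder",
    data_files={
        "train": "/path/to/my_dataset/train/**",
        "validation": "/path/to/my_dataset/validation/**",
        "test": "/path/to/my_dataset/test/**",
        "custom": "/path/to/my_dataset/custom/**",
    },
)
```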
### Expected behavior
I got `ds` with only the 3 splits train/validation/test; the `custom` split is missing, and whenever I rename one of the train/validation/test folders, that split also disappears when I re-create `ds`
### Environment info
- `datasets` version: 3.5.0
- Platform: Windows-11-10.0.26100-SP0
- Python version: 3.12.8
- `huggingface_hub` version: 0.30.2
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7529/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7528/comments | https://api.github.com/repos/huggingface/datasets/issues/7528/events | https://github.com/huggingface/datasets/issues/7528 | 3,006,433,485 | I_kwDODunzps6zMojN | 7,528 | Data Studio Error: Convert JSONL incorrectly | {
"login": "zxccade",
"id": 144962041,
"node_id": "U_kgDOCKPx-Q",
"avatar_url": "https://avatars.githubusercontent.com/u/144962041?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zxccade",
"html_url": "https://github.com/zxccade",
"followers_url": "https://api.github.com/users/zxccade/followers",
"following_url": "https://api.github.com/users/zxccade/following{/other_user}",
"gists_url": "https://api.github.com/users/zxccade/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zxccade/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zxccade/subscriptions",
"organizations_url": "https://api.github.com/users/zxccade/orgs",
"repos_url": "https://api.github.com/users/zxccade/repos",
"events_url": "https://api.github.com/users/zxccade/events{/privacy}",
"received_events_url": "https://api.github.com/users/zxccade/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Your JSONL file is incompatible with Arrow / Parquet. Indeed in Arrow / Parquet every dict should have the same keys, while in your dataset the bboxes have varying keys.\n\nThis causes the Data Studio to treat the bboxes as if each row was missing the keys from other rows.\n\nFeel free to take a look at the docs on object segmentation to see how to format a dataset with bboxes: https://huggingface.co/docs/datasets/object_detection"
] | 2025-04-19T13:21:44 | 2025-05-06T13:18:38 | null | NONE | null | null | null | null | ### Describe the bug
Hi there,
I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" values for the whole dataset. Therefore, anyone who downloaded the dataset via the API would get the wrong "bboxes" values in the data file.
Could you help me address the issue?
Many thanks,
### Steps to reproduce the bug
The JSONL file [V_STaR_test_release.jsonl](https://huggingface.co/datasets/V-STaR-Bench/V-STaR/blob/main/V_STaR_test_release.jsonl) has the correct "bboxes" values for each sample.
But in the Data Studio, we can see that the values of "bboxes" have changed, and loading the dataset via the API also returns the wrong values.
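For what it's worth, the maintainer comment above matches pyarrow's struct inference; a tiny sketch (keys made up) shows how dicts with varying keys get merged into one struct with nulls:
```python
import pyarrow as pa

rows = [
    {"bboxes": {"person_1": [0, 0, 10, 10]}},
    {"bboxes": {"car_3": [5, 5, 20, 20]}},
]
table = pa.Table.from_pylist(rows)
print(table.schema)       # one struct with both keys; missing ones become null
print(table.to_pylist())  # row 0 gains "car_3": None, row 1 gains "person_1": None
```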
### Expected behavior
Fix the bug to correctly download my dataset.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-427.22.1.el9_4.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.16
- `huggingface_hub` version: 0.29.3
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2023.10.0 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7528/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7527/comments | https://api.github.com/repos/huggingface/datasets/issues/7527/events | https://github.com/huggingface/datasets/issues/7527 | 3,005,242,422 | I_kwDODunzps6zIFw2 | 7,527 | Auto-merge option for `convert-to-parquet` | {
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
}
] | null | [
"Alternatively, there could be an option to switch from submitting PRs to just committing changes directly to `main`.",
"Why not, I'd be in favor of `--merge-pull-request` to call `HfApi().merge_pull_request()` at the end of the conversion :) feel free to open a PR if you'd like",
"#self-assign"
] | 2025-04-18T16:03:22 | 2025-05-07T12:47:02 | null | NONE | null | null | null | null | ### Feature request
Add a command-line option, e.g. `--auto-merge-pull-request`, that enables automatic merging of the commits created by the `convert-to-parquet` tool.
### Motivation
Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.
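Until such an option exists, a rough sketch of the same effect with `huggingface_hub` (repo id illustrative; `merge_pull_request` is the call mentioned in the comments):
```python
from huggingface_hub import HfApi

api = HfApi()
repo_id = "user/my-dataset"  # illustrative
for d in api.get_repo_discussions(repo_id=repo_id, repo_type="dataset"):
    if d.is_pull_request and d.status == "open":
        api.merge_pull_request(repo_id, d.num, repo_type="dataset")
```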
### Your contribution
Happy to look into submitting a PR if this is of interest to maintainers. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7527/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7526/comments | https://api.github.com/repos/huggingface/datasets/issues/7526/events | https://github.com/huggingface/datasets/issues/7526 | 3,005,107,536 | I_kwDODunzps6zHk1Q | 7,526 | Faster downloads/uploads with Xet storage | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-04-18T14:46:42 | 2025-05-07T17:03:08 | null | MEMBER | null | null | null | null | 
## Xet is out !
Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface.co/posts/jsulz/911431940353906).
See more information on the HF blog: https://huggingface.co/blog/xet-on-the-hub
You can already enable Xet on Hugging Face account to benefit from faster downloads and uploads :)
We finalized an official integration with the `huggingface_hub` library that means you get the benefits of Xet without any significant changes to your current workflow.
## Previous versions of `datasets`
For older versions of `datasets` you might see this warning in `push_to_hub()`:
```
Uploading files as bytes or binary IO objects is not supported by Xet Storage.
```
This means the `huggingface_hub` + Xet integration isn't enabled for your version of `datasets`.
You can fix this by updating to `datasets>=3.6.0` and `huggingface_hub>=0.31.0`
```
pip install -U datasets huggingface_hub
```
## Known issues
The Dataset Viewer may show errors like
```
Error code: CreateCommitError
```
We are actively working on a fix.
In the meantime you can use an older version of `datasets` to upload datasets
## The future
Stay tuned for more Xet optimizations, especially on [Xet-optimized Parquet](https://huggingface.co/blog/improve_parquet_dedupe)
| null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7526/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 5,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7526/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | false |
https://api.github.com/repos/huggingface/datasets/issues/7525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7525/comments | https://api.github.com/repos/huggingface/datasets/issues/7525/events | https://github.com/huggingface/datasets/pull/7525 | 3,003,032,248 | PR_kwDODunzps6TBOH1 | 7,525 | Fix indexing in split commit messages | {
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! this is expected and is coherent with other naming conventions in `datasets` such as parquet shards naming"
] | 2025-04-17T17:06:26 | 2025-04-28T14:26:27 | 2025-04-28T14:26:27 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7525",
"html_url": "https://github.com/huggingface/datasets/pull/7525",
"diff_url": "https://github.com/huggingface/datasets/pull/7525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7525.patch",
"merged_at": null
} | When a large commit is split up, it seems the commit index in the message is zero-based while the total number is one-based. I came across this running `convert-to-parquet` and was wondering why there was no `6-of-6` commit. This PR fixes that by adding one to the commit index, so both are one-based.
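In other words (the real message format is simplified here to the `N-of-M` part):
```python
num_commits = 6
messages = [f"part {i + 1}-of-{num_commits}" for i in range(num_commits)]
assert messages[-1] == "part 6-of-6"  # the final label that was previously missing
```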
Current behavior:
<img width="463" alt="Screenshot 2025-04-17 at 1 00 17 PM" src="https://github.com/user-attachments/assets/7f3d389e-cb92-405d-a3c2-f2b1cdf0cb79" /> | {
"login": "klamike",
"id": 17013474,
"node_id": "MDQ6VXNlcjE3MDEzNDc0",
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/klamike",
"html_url": "https://github.com/klamike",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https://api.github.com/users/klamike/gists{/gist_id}",
"starred_url": "https://api.github.com/users/klamike/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/klamike/subscriptions",
"organizations_url": "https://api.github.com/users/klamike/orgs",
"repos_url": "https://api.github.com/users/klamike/repos",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"received_events_url": "https://api.github.com/users/klamike/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7525/timeline | null | null | null | true |