# GitHub Issues Dataset

## Dataset Description

This dataset contains 40 public GitHub issues collected from a repository of choice. Although 100 issues were requested from the GitHub API, pull requests were excluded, reducing the dataset to 40 true issues. It includes metadata about each issue, such as the author, creation date, labels, and content, as well as derived fields for analysis.

The dataset was created as part of a Task 1 Assessment for a data course, to demonstrate:

- API data collection
- Data augmentation
- Publishing datasets on the Hugging Face Hub
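The collection step described above can be sketched as follows. This is a hedged reconstruction, not the authors' actual code: the repository name and token handling are placeholders (the card does not name the source repository), and the pull-request filter relies on the fact that the GitHub Issues API returns pull requests alongside issues, marked with an extra `pull_request` key.

```python
import requests


def fetch_issue_page(repo, per_page=100, token=None):
    """Fetch one page of issues from the GitHub Issues API.

    Note: this endpoint mixes pull requests into the results.
    """
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/issues",
        params={"state": "all", "per_page": per_page},
        headers=headers,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


def keep_true_issues(items):
    """Drop pull requests: the API marks them with a 'pull_request' key."""
    return [item for item in items if "pull_request" not in item]
```

Filtering a page of 100 raw records this way is how the requested 100 items reduce to the 40 true issues in the dataset.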
## Dataset Features
| Feature Name | Type | Description |
|---|---|---|
| `id` | int64 | Unique identifier of the GitHub issue |
| `title` | string | Issue title |
| `body` | string | Full text description of the issue |
| `author` | string | GitHub username of the issue creator |
| `labels` | list of string | Labels/tags associated with the issue |
| `created_at` | timestamp[ns, tz=UTC] | Date and time the issue was created |
| `comments_count` | int64 | Number of comments reported by the GitHub API |
| `url` | string | Direct link to the GitHub issue |
| `year` | int32 | Year extracted from `created_at` |
| `month` | int32 | Month extracted from `created_at` |
| `title_length` | int64 | Number of characters in the title |
| `body_length` | int64 | Number of characters in the body |
| `actual_comments` | int64 | Count of comments fetched via the API |
## Data Collection Method

- Data was collected using the GitHub Issues API (via the `requests` library in Python).
- Only public issues were retrieved; no private data or personally identifiable information was included.
- Derived fields (`year`, `month`, `title_length`, `body_length`) were added for analysis purposes.
- `actual_comments` was fetched from the issue comments endpoint to supplement the original `comments_count`.
## Licensing

- Data comes from public GitHub repositories, which are governed by GitHub’s Terms of Service.
- This dataset itself is released under the MIT License, allowing reuse and modification.
## Limitations

- Small sample size: only 40 issues (after filtering out pull requests); not representative of all GitHub repositories.
- Possible labeling bias, as labels depend on repository maintainer conventions.
- Only issues in English were collected.
- No guarantees on completeness; the dataset may contain truncated issue bodies.
## Ethical Considerations

- Only public data was used.
- No private, sensitive, or personally identifiable information was collected.
- Compliant with ethical and legal standards for open-source data use.
## References

- GitHub Issues API documentation: https://docs.github.com/en/rest/issues/issues
- Hugging Face `datasets` documentation: https://huggingface.co/docs/datasets