Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization

Chi Seng Cheang, Hou Pong Chan, Derek F. Wong, Xuebo Liu, Zhaocong Li, Yanming Sun, Shudong Liu, Lidia S. Chao
Accepted at EMNLP 2023 | arXiv:2305.01951


📄 Paper

  • Title: Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization

  • Authors: Chi Seng Cheang; Hou Pong Chan; Derek F. Wong; Xuebo Liu; Zhaocong Li; Yanming Sun; Shudong Liu; Lidia S. Chao

  • Abstract:

    Recent pre-trained language models (PLMs) achieve promising results on existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and fine-tuning datasets. Hence, the strong performance of PLMs may rely on the parametric knowledge that is memorized during pre-training and fine-tuning.
    Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects the generalization performance of PLMs on future data. In this work, we propose TempoSum, a novel benchmark that contains data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations for the research community on how to evaluate and improve the temporal generalization capability of text summarization models.

  • Publication: EMNLP 2023

  • ArXiv / DOI: https://arxiv.org/abs/2305.01951

  • BibTeX:

    @article{cheang2023can,
      title = {Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization},
      author = {Cheang, Chi Seng and Chan, Hou Pong and Wong, Derek F. and Liu, Xuebo and Li, Zhaocong and Sun, Yanming and Liu, Shudong and Chao, Lidia S.},
      journal = {arXiv preprint arXiv:2305.01951},
      year = {2023}
    }
    