Few-Shot and Zero-Shot Learning for Historical Text Normalization
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Documents
- OA-Few-Shot and Zero-Shot Learning for Historical Text Normalization: Final published version, 294 KB, PDF document
Abstract

Historical text normalization often relies on small training datasets. Recent work has shown that multi-task learning can lead to significant improvements by exploiting synergies with related datasets, but there has been no systematic study of different multi-task learning architectures. This paper evaluates 63 multi-task learning configurations for sequence-to-sequence-based historical text normalization across ten datasets from eight languages, using autoencoding, grapheme-to-phoneme mapping, and lemmatization as auxiliary tasks. We observe consistent, significant improvements across languages when training data for the target task is limited, but minimal or no improvements when training data is abundant. We also show that zero-shot learning outperforms the simple, but relatively strong, identity baseline.
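As a rough illustration of the kind of setup the abstract describes, the sketch below shows a character-level sequence-to-sequence model with one shared encoder and task-specific decoders, i.e., just one point in the space of sharing configurations such a study could evaluate. The framework (PyTorch), class and task names, and all hyperparameters are assumptions for illustration only, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultiTaskSeq2Seq(nn.Module):
    """Character-level seq2seq with a shared encoder and per-task decoders.

    Hypothetical sketch: the target task (normalization) and the auxiliary
    tasks (autoencoding, grapheme-to-phoneme, lemmatization) share encoder
    parameters but keep separate decoders and output projections.
    """

    def __init__(self, vocab_size, tasks, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encoder parameters are updated by every task's batches.
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # One decoder and one output layer per task.
        self.decoders = nn.ModuleDict(
            {t: nn.GRU(emb_dim, hid_dim, batch_first=True) for t in tasks})
        self.out = nn.ModuleDict(
            {t: nn.Linear(hid_dim, vocab_size) for t in tasks})

    def forward(self, src, tgt_in, task):
        _, h = self.encoder(self.embed(src))                  # encode source characters
        dec, _ = self.decoders[task](self.embed(tgt_in), h)   # teacher forcing
        return self.out[task](dec)                            # per-position logits

# Toy training loop: batches from each task update the shared encoder.
tasks = ["normalization", "autoencoding", "g2p", "lemmatization"]
model = MultiTaskSeq2Seq(vocab_size=100, tasks=tasks)
opt = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

src = torch.randint(0, 100, (8, 12))    # 8 source strings of 12 characters
tgt = torch.randint(0, 100, (8, 12))    # aligned target strings
for task in tasks:                      # in practice, sample a task per batch
    logits = model(src, tgt[:, :-1], task)            # predict next characters
    loss = loss_fn(logits.reshape(-1, 100), tgt[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()
```

In this configuration the auxiliary tasks influence the target task only through the shared encoder; the paper's 63 configurations presumably vary which components are shared, though this page does not spell them out.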
Original language | English |
---|---|
Title of host publication | Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019) |
Publisher | Association for Computational Linguistics |
Publication date | 2019 |
Pages | 104-114 |
DOIs | |
Publication status | Published - 2019 |
Event | 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo), Hong Kong, China. Duration: 3 Nov 2019 → 3 Nov 2019 |
Workshop
Workshop | 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo) |
---|---|
Country | China |
City | Hong Kong |
Period | 03/11/2019 → 03/11/2019 |
ID: 239617207