Data augmentation techniques have been widely used to improve machine learning performance. In this work, we propose a novel method to generate high-quality synthetic data for low-resource tagging tasks with language models, where the language model is trained on linearized labeled sentences. Our method is applicable to both supervised and semi-supervised settings. For the supervised setting, we conduct extensive experiments on named entity recognition (NER), part-of-speech (POS) tagging, and end-to-end target-based sentiment analysis (E2E-TBSA) tasks. For the semi-supervised setting, we evaluate our method on the NER task under two conditions: with unlabeled data only, and with unlabeled data plus a knowledge base. The results show that our method consistently outperforms the baselines, particularly when the amount of gold training data is small.
DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks
Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, and Chunyan Miao. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP'20), pages 6045–6057, 2020.
PDF Abstract BibTeX Slides
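The abstract mentions training a language model on "linearized labeled sentences". The exact linearization scheme is not specified in this excerpt, so the following is only a minimal sketch of the general idea, assuming each non-O tag is inserted immediately before its token and O tags are dropped; the function names `linearize` and `delinearize` and the tag prefixes are illustrative, not taken from the paper.

```python
# Hypothetical illustration of "linearized labeled sentences" for NER-style tagging.
# Assumption: a tag is placed right before the token it labels, and O tags are omitted,
# so a standard language model can be trained on (and generate) the flat token stream.

def linearize(tokens, tags):
    """Flatten a (tokens, tags) pair into a single token sequence."""
    out = []
    for token, tag in zip(tokens, tags):
        if tag != "O":          # assumption: O tags are dropped to keep sequences short
            out.append(tag)
        out.append(token)
    return out

def delinearize(sequence, tag_prefixes=("B-", "I-", "E-", "S-")):
    """Recover (tokens, tags) from a generated flat sequence."""
    tokens, tags, pending = [], [], "O"
    for item in sequence:
        if item.startswith(tag_prefixes):
            pending = item      # remember the tag for the next token
        else:
            tokens.append(item)
            tags.append(pending)
            pending = "O"
    return tokens, tags

if __name__ == "__main__":
    toks = ["Alice", "visited", "New", "York", "last", "week"]
    lbls = ["B-PER", "O", "B-LOC", "E-LOC", "O", "O"]
    lin = linearize(toks, lbls)
    print(lin)               # ['B-PER', 'Alice', 'visited', 'B-LOC', 'New', 'E-LOC', 'York', 'last', 'week']
    print(delinearize(lin))  # recovers the original tokens and tags
```

Under this kind of scheme, sentences sampled from the trained language model can be de-linearized back into token/tag pairs, yielding synthetic labeled data for the tagging task.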