Generative Data Intelligence

POINTER: Constrained Text Generation via Insertion-based Generative Pre-training. (arXiv:2005.00558v1 [cs.CL])

[Submitted on 1 May 2020]

Abstract: Large-scale pre-trained language models, such as BERT and GPT-2, have
achieved excellent performance in language representation learning and
free-form text generation. However, these models cannot be directly employed to
generate text under specified lexical constraints. To address this challenge,
we present POINTER, a simple yet novel insertion-based approach for
hard-constrained text generation. The proposed method operates by progressively
inserting new tokens between existing tokens in a parallel manner. This
procedure is recursively applied until a sequence is completed. The resulting
coarse-to-fine hierarchy makes the generation process intuitive and
interpretable. Since our training objective resembles the objective of masked
language modeling, BERT can be naturally utilized for initialization. We
pre-train our model with the proposed progressive insertion-based objective on
a 12GB Wikipedia dataset, and fine-tune it on downstream hard-constrained
generation tasks. Non-autoregressive decoding yields logarithmic time
complexity at inference. Experimental results on both News and Yelp
datasets demonstrate that POINTER achieves state-of-the-art performance on
constrained text generation. We intend to release the pre-trained model to
facilitate future research.
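The progressive insertion procedure described above can be illustrated with a short sketch. This is a conceptual illustration only, not the authors' released implementation: the `predict_insertions` interface and the `[NONE]` slot token are assumptions standing in for the paper's parallel, BERT-initialized insertion predictions.

```python
# Minimal sketch of POINTER-style progressive insertion decoding.
# `predict_insertions` is a hypothetical stand-in for the model that,
# in the paper, scores every insertion slot in parallel.

NONE = "[NONE]"  # assumed special token meaning "insert nothing in this slot"

def predict_insertions(tokens):
    """Return one predicted token per slot (before, between, and after the
    existing tokens), i.e. len(tokens) + 1 predictions. This toy version
    simply predicts no insertions everywhere."""
    return [NONE] * (len(tokens) + 1)

def pointer_decode(constraint_tokens, max_rounds=10):
    """Coarse-to-fine generation: start from the lexical constraints and
    repeatedly insert new tokens between existing ones until every slot
    predicts [NONE]. Each round can roughly double the sequence length,
    so about log(n) rounds suffice for an n-token output."""
    tokens = list(constraint_tokens)
    for _ in range(max_rounds):
        slot_preds = predict_insertions(tokens)
        if all(p == NONE for p in slot_preds):
            break  # no slot wants a new token; the sequence is complete
        # Interleave predictions with existing tokens: slot_0, tok_0, slot_1, ...
        new_tokens = []
        for slot, tok in zip(slot_preds, tokens + [None]):
            if slot != NONE:
                new_tokens.append(slot)
            if tok is not None:
                new_tokens.append(tok)
        tokens = new_tokens
    return tokens

# With the toy predictor the constraints pass through unchanged.
print(pointer_decode(["sushi", "Tokyo"]))
```

Because every slot is filled in parallel within a round, the number of decoding rounds grows with the depth of the insertion hierarchy rather than the output length, which is the source of the logarithmic inference-time claim in the abstract.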

Submission history

From: Yizhe Zhang
[v1] Fri, 1 May 2020 18:11:54 UTC (832 KB)

Source: https://arxiv.org/abs/2005.00558
