Generative Data Intelligence

Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning. (arXiv:2004.03070v1 [cs.CL])

(Submitted on 7 Apr 2020)

Abstract: We study the problem of generating inferential texts of events for a variety
of commonsense if-else relations. Existing approaches typically use limited
evidence from training examples and learn each relation individually. In this
work, we use multiple knowledge sources as fuel for the model. Existing
commonsense knowledge bases like ConceptNet are dominated by taxonomic
knowledge (e.g., isA and relatedTo relations) and contain only a limited amount
of inferential knowledge. We therefore use not only structured commonsense
knowledge bases but also natural language snippets retrieved from search-engine
results. These sources are incorporated into a generative base model via a
key-value memory network. In addition, we introduce a meta-learning based
multi-task learning algorithm: for each targeted commonsense relation, we
regard learning from examples of the other relations as the meta-training
process, and evaluation on examples of the targeted relation as the meta-test
process. We conduct experiments on the Event2Mind and ATOMIC datasets. Results
show that both the integration of multiple knowledge sources and the use of
the meta-learning algorithm improve performance.
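The two ingredients the abstract names can be sketched in toy form. The snippet below is a minimal numpy illustration, not the paper's implementation: a key-value memory read that aggregates external evidence by softmax attention, and a Reptile-style first-order simplification of the meta-learning loop (the paper's algorithm is MAML-flavored and operates on a full seq2seq model; all function names, shapes, and the linear inner task here are hypothetical).

```python
import numpy as np

def kv_memory_read(query, keys, values):
    """Attend over a key-value memory and return a weighted sum of values.
    query: (d,)  keys: (n, d)  values: (n, d)  ->  (d,)"""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax attention over memory slots
    return weights @ values               # aggregated evidence vector

def inner_adapt(theta, X, y, lr=0.1, steps=5):
    """A few gradient steps on one relation's examples (meta-training).
    Toy inner task: linear regression with squared loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ theta - y) / len(y)
        theta = theta - lr * grad
    return theta

def meta_round(theta, other_relation_tasks, outer_lr=0.5):
    """Adapt on every *other* relation, then move the shared initialization
    toward the mean of the adapted parameters (Reptile-style update)."""
    adapted = [inner_adapt(theta, X, y) for X, y in other_relation_tasks]
    return theta + outer_lr * (np.mean(adapted, axis=0) - theta)

rng = np.random.default_rng(0)
# Read from a toy memory of 3 slots with 4-dim embeddings.
evidence = kv_memory_read(rng.standard_normal(4),
                          rng.standard_normal((3, 4)),
                          rng.standard_normal((3, 4)))
# One meta-learning round over 4 toy "other relation" tasks.
tasks = [(rng.standard_normal((20, 3)), rng.standard_normal(20))
         for _ in range(4)]
theta = meta_round(np.zeros(3), tasks)
```

In the paper's setting, the memory keys and values would come from retrieved knowledge-base triples and search-engine snippets, and the meta-test step would evaluate the adapted model on the held-out targeted relation rather than reuse the training examples.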

Submission history

From: Duyu Tang
[v1]
Tue, 7 Apr 2020 01:49:18 UTC (484 KB)

Source: https://arxiv.org/abs/2004.03070
