Negative sampling improves hypernymy extraction based on projection learning

Dmitry Ustalov, Nikolay Arefyev, Chris Biemann, Alexander Panchenko

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

19 Citations (SciVal)

Abstract

We present a new approach to the extraction of hypernyms based on projection learning and word embeddings. In contrast to classification-based approaches, projection-based methods require no candidate hyponym-hypernym pairs. While it is natural to use both positive and negative training examples in supervised relation extraction, the impact of negative examples on hypernym prediction has not been studied so far. In this paper, we show that explicit negative examples used for regularization of the model significantly improve performance compared to the state-of-the-art approach of Fu et al. (2014) on three datasets from different languages.
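The abstract describes learning a projection matrix that maps hyponym embeddings onto hypernym embeddings (following Fu et al., 2014), regularized with explicit negative examples. The following is a minimal numpy sketch of that idea, not the authors' implementation: the loss, the negative-example weight `lam`, the dimensionality, and the synthetic data are all illustrative assumptions. A single linear projection is fitted in closed form to pull projected hyponyms toward their hypernym vectors while pushing them away from negative (non-hypernym) vectors.

```python
import numpy as np

# Toy sketch (NOT the authors' code): learn a linear projection phi with
#   loss(phi) = sum ||phi x - y||^2  -  lam * sum ||phi x - n||^2
# over positive pairs (x, y) and negative pairs (x, n). For lam < 1 the
# objective stays convex and has the closed-form solution below.

rng = np.random.default_rng(0)
d = 10                                   # embedding dimensionality (toy value)
true_phi = rng.normal(size=(d, d))       # synthetic "ground-truth" projection

X = rng.normal(size=(100, d))                            # hyponym embeddings
Y = X @ true_phi.T + 0.01 * rng.normal(size=(100, d))    # hypernym embeddings
N = rng.normal(size=(100, d))                            # negative examples

lam = 0.1  # weight of the negative-example penalty (illustrative)

# Setting the gradient to zero gives (1 - lam) X^T X phi^T = X^T (Y - lam N).
phi_T = np.linalg.solve((1 - lam) * X.T @ X, X.T @ (Y - lam * N))

P = X @ phi_T                                     # projected hyponyms
pos_err = np.mean(np.linalg.norm(P - Y, axis=1))  # distance to true hypernyms
neg_err = np.mean(np.linalg.norm(P - N, axis=1))  # distance to negatives
print(pos_err < neg_err)
```

On this synthetic data the projections land much closer to the hypernym vectors than to the negatives; the paper's contribution is showing that the negative term improves real hypernymy extraction across three languages.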

Original language: English
Title of host publication: Short Papers
Publisher: Association for Computational Linguistics (ACL)
Pages: 543-550
Number of pages: 8
ISBN (Electronic): 9781510838604
DOIs
Publication status: Published - 2017
Externally published: Yes
Event: 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Valencia, Spain
Duration: 3 Apr 2017 to 7 Apr 2017

Publication series

Name: 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017 - Proceedings of Conference
Volume: 2

Conference

Conference: 15th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2017
Country/Territory: Spain
City: Valencia
Period: 3/04/17 to 7/04/17
