Multi-context information for word representation learning
Published in Association for Computing Machinery, Inc
2019
Abstract
Word embedding techniques in the literature are mostly based on bag-of-words models, where words that co-occur are considered to be related. However, similar or related words need not occur in the same context window. In this paper, we propose a new approach that combines different types of resources for training word embeddings. The lexical resources used in this work are the dependency parse tree and WordNet. Beyond co-occurrence information, these additional resources let us incorporate semantic and syntactic information from the text when learning word representations. The learned representations are evaluated on multiple tasks, such as Semantic Textual Similarity and Word Similarity. The experimental results highlight the usefulness of the proposed methodology. © 2019 Association for Computing Machinery.
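The abstract describes mixing three context sources for embedding training: linear co-occurrence windows, dependency-parse arcs, and WordNet relations. A minimal sketch of how such (word, context) training pairs could be assembled is below; the sentence, parse arcs, synonym table, and all function names are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch: build (word, context) training pairs from three sources named in
# the abstract. All example data and helper names here are hypothetical.

def window_pairs(tokens, window=2):
    """Bag-of-words contexts: words within +/- `window` positions."""
    pairs = []
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((w, tokens[j]))
    return pairs

def dependency_pairs(arcs):
    """Syntactic contexts: each (head, label, dependent) arc yields a pair
    in each direction, labelled with the relation."""
    pairs = []
    for head, label, dep in arcs:
        pairs.append((head, f"{label}_{dep}"))
        pairs.append((dep, f"{label}I_{head}"))  # inverse-direction context
    return pairs

def synonym_pairs(tokens, synonyms):
    """Semantic contexts: pair each word with its WordNet-style synonyms."""
    return [(w, s) for w in tokens for s in synonyms.get(w, [])]

# Hypothetical sentence, dependency arcs, and synonym table.
tokens = ["australian", "scientist", "discovers", "star"]
arcs = [("discovers", "nsubj", "scientist"),
        ("discovers", "dobj", "star"),
        ("scientist", "amod", "australian")]
synonyms = {"discovers": ["finds"], "star": ["celestial_body"]}

pairs = (window_pairs(tokens)
         + dependency_pairs(arcs)
         + synonym_pairs(tokens, synonyms))
```

The combined `pairs` list could then feed a standard skip-gram-style trainer, so that dependency and WordNet contexts supplement, rather than replace, plain co-occurrence contexts.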
About the journal
Journal: Proceedings of the ACM Symposium on Document Engineering, DocEng 2019
Publisher: Association for Computing Machinery, Inc