Neural network-based word embeddings have demonstrated outstanding results across a variety of tasks and have become a standard input for deep learning methods in Natural Language Processing (NLP). Although these representations capture semantic regularities in language, some general questions, e.g., "what kinds of semantic relations do the embeddings represent?" and "how can those semantic relations be retrieved from an embedding?", remain open, and very little relevant work has been done. In this study, we propose a new approach to exploring the semantic relations represented in neural embeddings, based on WordNet and the Unified Medical Language System (UMLS). Our study demonstrates that neural embeddings do prefer certain semantic relations while also representing a diverse range of them. We also find that Named Entity Recognition (NER)-based phrase composition outperforms Word2phrase, and that word variants do not affect performance on analogy and semantic relation tasks.
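The analogy task mentioned above is typically solved with the vector-offset method: the relation between a word pair is modeled as a vector difference, and the answer is the vocabulary word closest (by cosine similarity) to that offset applied to a query word. The sketch below illustrates this with a tiny hypothetical embedding table; the vectors and vocabulary are invented for illustration, and real studies use learned embeddings (e.g., word2vec) with hundreds of dimensions.

```python
import numpy as np

# Hypothetical toy embeddings (4-d); real embeddings are learned from corpora.
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.9, 0.0]),
    "apple": np.array([0.0, 0.5, 0.0, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via the vector-offset method.

    Returns the vocabulary word whose embedding is closest to
    vec(b) - vec(a) + vec(c), excluding the three query words.
    """
    target = emb[b] - emb[a] + emb[c]
    candidates = {w: cosine(target, v) for w, v in emb.items()
                  if w not in (a, b, c)}
    return max(candidates, key=candidates.get)

print(analogy("man", "king", "woman"))  # → queen
```

The exclusion of the query words themselves is standard practice, since the query vectors are often the nearest neighbors of the offset vector.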