Introduction

In recent years, combining deep learning with Natural Language Processing techniques has greatly improved information mining on unstructured text data.

Models can now recognize natural language and speech at near-human levels. Despite these improvements, errors still occur, because the information is sometimes encoded deep in the syntax and syntactic structure of the corpus.

Example – Problem with Neural Networks

For example, a conversation system trained with a recurrent neural network produces the following results in two scenarios:

User: Hi, I took a horrible picture in a museum, can you tell where is it located?

Bot Reply 1: The museum is located at a horrible place

User: Hi, I took a horrible picture in a museum, can you tell where is it located?

Bot Reply 2: The horrible museum is located at this place

The two responses contain virtually the same tokens, but their different structures completely change the meaning. In this article, I will discuss the interdisciplinary field of Computational Linguistics, which deals with the structural aspects of text and is used to solve common text-related problems such as named entity extraction, coreference resolution, and machine translation.

Computational Linguistics

Computational Linguistics often overlaps with Natural Language Processing, as most tasks are common to both fields. While Natural Language Processing focuses on the tokens/tags and uses them as predictors in machine learning models, Computational Linguistics digs deeper into the relationships and links among them.

Structural aspects of text refer to the organization of tokens in a sentence and how the contexts among them are interrelated. This organization is often depicted by word-to-word grammar relationships, also known as dependencies. Dependency is the notion that syntactic units (words) are connected to each other by directed links that describe the relationships between the connected words.

These dependencies map directly onto a directed graph representation, in which the words of the sentence are nodes and the grammatical relations are edge labels. This directed graph representation is also called a dependency tree. For example, consider the dependency tree of the following sentence:

AnalyticsVidhya is the largest community of data scientists and provides best resources for understanding data and analytics.

The tree can be represented as follows:

-> community-NN (root)
    -> AnalyticsVidhya-NNP (nsubj)
    -> is-VBZ (cop)
    -> the-DT (det)
    -> largest-JJS (amod)
    -> scientists-NNS (pobj)
        -> of-IN (prep)
        -> data-NNS (case)
    -> and-CC (cc)
    -> provides-VBZ (conj)
        -> resources-NNS (dobj)
            -> best-JJS (amod)
        -> understanding-VBG (pcomp)
            -> for-IN (mark)
            -> data-NNS (dobj)
            -> and-CC (cc)
            -> analytics-NNS (conj)

In this graphical representation of the sentence, each term follows the pattern "-> Element_A – Element_B (Element_C)". Element_A is the word, Element_B is its part-of-speech tag, Element_C is the grammar relation between the word and its parent node, and the indentation before the "->" symbol represents the word's level in the tree. Here is the reference list for understanding the dependency relations.

The tree shows that the term "community" is the structural centre of the sentence: it is the root, linked to 7 nodes ("AnalyticsVidhya", "is", "the", "largest", "and", "scientists", "provides"). Of these 7 connected nodes, the terms "scientists" and "provides" are themselves the roots of two sub-trees. Each subtree is itself a dependency tree, with relations such as ("provides" <-> "resources" <by> "dobj" relation) and ("resources" <-> "best" <by> "amod" relation).

These trees can be generated in Python using libraries such as NLTK, spaCy, or Stanford CoreNLP, and can be used to obtain subject-verb-object triplets, noun and verb phrases, grammar dependency relationships, part-of-speech tags, etc. For example:

-> scientists-NNS (pobj) -> of-IN (prep) -> data-NNS (nn)
Grammar: <prep> <nn> <pobj>
POS: IN – NNS – NNS
Phrase: of data scientists

-> understanding-VBG (pcomp) -> for-IN (prep) -> data-NNS (dobj) -> and-CC (cc) -> analytics-NNS (conj)
Grammar: <dobj> <cc> <conj>
POS: NNS – CC – NNS
Phrase: data and analytics

Grammar: <prep> <pcomp> <prep> <dobj>
POS: VBG – IN – NNS
Phrase: for understanding data and analytics

Applications of Dependency Trees

Named Entity Recognition

Named-entity recognition (NER) is the process of locating and classifying named entities in textual data into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc.

To recognize named entities, one needs to parse the dependency tree top-down and identify the noun phrases. Noun phrases are the subtrees connected by a relation from the noun family, such as nsubj, dobj, or nmod. For example:

Donald Trump will be visiting New Delhi next summer for a conference at Google

-> visiting-VBG (root)
    -> Trump-NNP (nsubj)
        -> Donald-NNP (compound)
    -> will-MD (aux)
    -> be-VB (aux)
    -> Delhi-NNP (dobj)
        -> New-NNP (compound)
    -> summer-NN (tmod)
        -> next-JJ (amod)
    -> conference-NN (nmod)
        -> for-IN (case)
        -> a-DT (det)
        -> Google-NNP (nmod)
            -> at-IN (case)

In the above tree, the following noun phrases are detected from the grammar relations of the noun family:

Trump <-> visiting <by> nsubj

Delhi <-> visiting <by> dobj

summer <-> Delhi <by> nmod

conference <-> visiting <by> nmod

Google <-> conference <by> nmod

Named entities can be obtained by checking whether the root node of each noun phrase carries the NNP (proper noun) part-of-speech tag. For example, Trump, Delhi, and Google have the tag NNP. To generate the full proper-noun phrase linked with a root node, one needs to parse the subtree linked with that node. Using the grammar rules, the following named entities are obtained:

<compound> <nsubj> : Donald Trump

<compound> <dobj> : New Delhi

<nmod> : Google

Other tasks such as phrase chunking and entity-wise sentiment analysis can be performed using similar processes. For example, a single sentence may contain multiple sentiments, contexts, and entities, on which dictionary-based models may not perform well. In the following sentence, there are two contexts:

Sentence – His acting was good but the script was poor

Context 1: His acting was good

Context 2: the script was poor

The two contexts carry different sentiments: one is positive while the other is negative. In the dependency tree of this sentence, there are two sub-trees which correspond to the different contexts and can be used to extract them.

The subtrees can be extracted from this dependency tree and evaluated individually to compute the sentiment of each context. This paper from Stanford and this paper from Singapore University describe efficient approaches to performing NER using dependencies.

Coreference Resolution or Anaphora Resolution

Coreference resolution is the task of finding all expressions that refer to the same entity in a text. It is an important step for a lot of higher level NLP tasks that involve natural language understanding such as document summarization, question answering, and information extraction.

The coreference process

The first step is to create the dependency tree of the text. Once the dependency tree is created, it needs to be traversed recursively to create multiple features, such as:

John telephoned Bill. He lost his laptop.

Np -> noun phrases nodes

Hw -> headwords

Rl -> grammar relations

Lv -> level in the tree

Pos -> part of speech tag

Gen -> gender of part of speech tag

Next, identify the tokens having a part-of-speech tag from the pronoun family (identified by the PRP tag) as well as the proper nouns or named entities. By going through the different uses of the pronouns, they can be classified into groups according to usage.

Using a named entity recognizer, the identified named entities are Bill and John.

Using a gender API such as this, both named entities are identified as male.

Features of sentences:

Finally, map the pronoun tokens to the named entity tokens. Start from the bottommost sentence's features and identify each pronoun's part-of-speech tag and gender. To properly map the tokens, the following properties/features are exploited:

3.1 map the tokens with same gender of pronoun and named entity

3.2 map the tokens with same singularity / plurality

3.3 map the tokens with same grammar relations
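The gender and number checks in steps 3.1 and 3.2 can be sketched as plain filtering rules. The gender lookup below is a hypothetical stand-in for an external gender API:

```python
# Hypothetical gender lookup standing in for an external gender API
ENTITY_GENDER = {"John": "male", "Bill": "male"}

# Gender/number features of common pronouns (None = unconstrained)
PRONOUN_FEATURES = {
    "he": ("male", "singular"), "his": ("male", "singular"),
    "she": ("female", "singular"), "her": ("female", "singular"),
    "they": (None, "plural"),
}

def candidate_antecedents(pronoun, entities):
    """Return the entities compatible with the pronoun's gender feature."""
    gender, number = PRONOUN_FEATURES[pronoun.lower()]
    return [e for e in entities
            if gender is None or ENTITY_GENDER.get(e) == gender]

# "He" in "John telephoned Bill. He lost his laptop." matches both entities
print(candidate_antecedents("He", ["John", "Bill"]))
```

Because both entities survive the gender and number filters in this example, the grammar-relation feature (step 3.3) is needed to break the tie, favouring the subject "John" as the antecedent.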

This paper from Soochow University describes the use of dependency trees in coreference resolution.

Question Answering

Another important task where computational linguistics helps obtain highly relevant results is Question Answering, which is treated as one of the hardest tasks involving text data. Question answering systems based on computational linguistics use the syntactic structures of the query questions and match them against responses having similar syntactic structures; the responses with similar structures contribute the answer set for a particular question. For example:

Question: What is the capital of India?

Answer: New Delhi is the capital of India

Question tree:

-> capital-NN (root)
    -> what-WP (nsubj)
    -> is-VBZ (cop)
    -> the-DT (det)
    -> of-IN (prep)
        -> India-NNP (pobj)

Answer tree:

-> capital-NN (root)
    -> Delhi-NNP (nsubj)
        -> New-NNP (nn)
    -> is-VBZ (cop)
    -> the-DT (det)
    -> of-IN (prep)
        -> India-NNP (pobj)

Both the question and answer dependency trees have similar patterns, which can be used to generate answer responses to specific queries. This paper1 and paper2 describe approaches to performing question answering using dependency trees.

Other Tasks that Use Computational Linguistics

Machine Translation

Text Summarization

Natural Language Generation

Natural Language Understanding

Speech to Text

Generating Dependency Trees using Stanford Core NLP

from stanfordcorenlp import StanfordCoreNLP as scn

nlp = scn(r'/path/to/stanford-corenlp-full-2017-06-09/')
sentence = 'His acting was good but script was poor'

print('Part of Speech:')
print(nlp.pos_tag(sentence))

print('Dependency Parsing:')
print(nlp.dependency_parse(sentence))

Output:

[(u'His', u'PRP$'), (u'acting', u'NN'), (u'was', u'VBD'), (u'good', u'JJ'), (u'but', u'CC'), (u'script', u'NN'), (u'was', u'VBD'), (u'poor', u'JJ')]

[(u'ROOT', 0, 4), (u'nmod:poss', 2, 1), (u'nsubj', 4, 2), (u'cop', 4, 3), (u'cc', 4, 5), (u'nsubj', 8, 6), (u'cop', 8, 7), (u'conj', 4, 8)]

Generating Dependency Trees using spaCy

import spacy

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'The team is not performing well in the match')
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_)

Output:

The the DET det
team team NOUN nsubj
is be VERB aux
not not ADV neg
performing perform VERB ROOT
well well ADV advmod
in in ADP prep
the the DET det
match match NOUN pobj

The tree can also be visualized with displacy:

from spacy import displacy
displacy.serve(doc, style='dep')

End Notes

In this article, I discussed the field of computational linguistics and how grammar relations within sentences can be used in different tasks related to text data. If you feel there are other resources or tasks related to dependency trees and computational linguistics that I have missed, please feel free to comment with your suggestions and feedback.

