January 23, 2019

NLU and NLG — Going Deeper into NLP

By Gedalyah Reback
Reading Time: 2 minutes

Natural language processing is a wide and deepening field related to making AI more adept at working with human language. Inevitably, the field will spin off new disciplines for interacting with natural language in ways that are currently impossible.

To fully comprehend and use language on its own, AI relies on several NLP subprocesses that mimic how humans process language.

Two core stages of NLP, natural language understanding (NLU) and natural language generation (NLG), cover how AI digests and responds to human language. What follows is only a brief overview of a sophisticated process that plays out in mere seconds.

These processes of understanding and reacting to language happen virtually simultaneously when humans converse with one another. Achieving a similarly fluent listening-and-response mechanism in a machine might be the holy grail of artificial intelligence in general, not just NLP. Under the Turing Test, a machine whose language is so fluent that it fools a human should be considered indisputably ‘artificially intelligent.’

Natural Language Understanding

Meeting the Turing standard is tough. No programmer can yet claim to have met 100 percent of the test’s criteria. But don’t worry: INFI and a host of other teams are making rapid progress.

To reach this goal, machines must recognize, analyze, and respond to language input (whatever you might ask Siri or Alexa). Recognition dovetails tightly with analysis: NLU first identifies the correct words, then parses the input sentences to categorize data or generate text summaries.
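To make that concrete, here is a minimal sketch of the understanding step using the open-source spaCy library. It is an illustration, not INFI’s pipeline; the model name and the root-verb-as-intent shortcut are simplifying assumptions.

```python
# Minimal NLU sketch: tokenize, tag, and extract entities from one utterance.
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def understand(utterance: str) -> dict:
    """Return the pieces of an utterance an assistant would need."""
    doc = nlp(utterance)
    return {
        "tokens": [token.text for token in doc],
        "pos_tags": [(token.text, token.pos_) for token in doc],
        "entities": [(ent.text, ent.label_) for ent in doc.ents],
        # The sentence's main verb is a crude stand-in for "intent".
        "root_verb": next((t.lemma_ for t in doc if t.dep_ == "ROOT"), None),
    }

print(understand("Book me a flight to Boston on Friday"))
# -> entities like ('Boston', 'GPE') and ('Friday', 'DATE'); root_verb 'book'
```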

Many factors go into generating as deep an understanding of text or speech as possible: context, the user’s body language, semantic analysis, sentiment analysis, and opinion mining are all critical.
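As a toy example of one of those factors, the snippet below scores sentiment with NLTK’s off-the-shelf VADER analyzer. Production systems weigh far more signals, but the basic mechanic looks like this:

```python
# Sentiment-analysis sketch using NLTK's VADER lexicon-based analyzer.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon fetch

analyzer = SentimentIntensityAnalyzer()

for utterance in ["I love how fast this works!",
                  "This is the third time it has failed me."]:
    scores = analyzer.polarity_scores(utterance)
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    print(f"{scores['compound']:+.2f}  {utterance}")
```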

Natural Language Generation

NLG is even more difficult: weighing all of these factors in order to come up with an acceptable response to the original input. Technologies you use every day, such as the freely available Google Translate, apply methods similar to other NLP apps. Phrase-based analysis and statistical models both contribute to hybrid approaches for producing suitable conversational responses.
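The statistical-model idea can be shown in miniature: score candidate replies with a tiny bigram language model so the more natural phrasing wins. The corpus and the smoothing constant below are invented purely for illustration; real systems train on enormous corpora.

```python
# Toy bigram language model for ranking candidate responses by fluency.
from collections import Counter
from itertools import pairwise  # Python 3.10+

corpus = "thank you for your question . happy to help . glad to help you".split()
bigrams = Counter(pairwise(corpus))
unigrams = Counter(corpus)

def score(sentence: str) -> float:
    """Product of Laplace-smoothed bigram probabilities; higher = more fluent."""
    prob = 1.0
    for a, b in pairwise(sentence.lower().split()):
        prob *= (bigrams[(a, b)] + 1) / (unigrams[a] + len(unigrams))
    return prob

candidates = ["happy to help you", "help happy you to"]
print(max(candidates, key=score))  # the natural word order scores higher
```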

Machine responses approximate or complement a user’s lexicon, speech patterns, tone, and attitude in order to create as natural a conversation as possible. NLG thus involves lexical choices that prioritize certain information over other information. Subtasks like document structuring, content determination, and referring expression generation (REG) make up an intricate process.
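Those subtasks can be sketched end to end in a few lines. The weather facts and templates below are made up for illustration and are not INFI’s generation stack, but they show how content determination, document structuring, REG, and surface realization hand off to one another:

```python
# Classic NLG pipeline in miniature: facts -> selected messages -> ordered
# messages -> referring expressions -> surface text.
facts = {"city": "Tel Aviv", "temp_c": 31, "humidity": 0.78}

def determine_content(facts):
    """Content determination: keep only the facts worth mentioning."""
    chosen = [("temp", facts["temp_c"])]
    if facts["humidity"] > 0.7:
        chosen.append(("humidity", facts["humidity"]))
    return chosen

def structure(messages):
    """Document structuring: order messages most-important-first."""
    priority = {"temp": 0, "humidity": 1}
    return sorted(messages, key=lambda m: priority[m[0]])

def refer(facts, first_mention: bool) -> str:
    """Referring expression generation: full name first, pronoun after."""
    return facts["city"] if first_mention else "it"

def realize(facts, messages) -> str:
    """Surface realization: fill lexical templates for each message."""
    parts = []
    for i, (kind, value) in enumerate(messages):
        subject = refer(facts, first_mention=(i == 0))
        if kind == "temp":
            parts.append(f"{subject} is a hot {value}°C")
        elif kind == "humidity":
            parts.append(f"{subject} feels sticky at {value:.0%} humidity")
    text = ", and ".join(parts) + "."
    return text[0].upper() + text[1:]

print(realize(facts, structure(determine_content(facts))))
# -> "Tel Aviv is a hot 31°C, and it feels sticky at 78% humidity."
```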

The Next Phase of NLP

Most systems today fail to adequately factor user sentiment (much less fluctuating attitudes) into their NLG-backed responses. This is true everywhere from machine translation to avatar-based customer service. By calculating personality, user sentiment, and short-term moods, INFI Avatars constitute an advanced application of NLG. The result is generation that is more fluid, adapting to context and the timing of user interactions.
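One way to picture the mood-tracking idea, purely as a hypothetical sketch rather than INFI’s actual method, is an exponential moving average over per-turn sentiment that steers the response register. The alpha value and thresholds here are assumptions:

```python
# Hypothetical short-term mood tracker steering the response register.
def update_mood(mood: float, turn_sentiment: float, alpha: float = 0.4) -> float:
    """Blend the newest turn's sentiment into a running mood estimate."""
    return alpha * turn_sentiment + (1 - alpha) * mood

def choose_register(mood: float) -> str:
    """Map the running mood onto a tone for the generated reply."""
    if mood < -0.3:
        return "apologetic"
    if mood > 0.3:
        return "upbeat"
    return "neutral"

mood = 0.0
for sentiment in [0.2, -0.6, -0.8]:  # user turns trending negative
    mood = update_mood(mood, sentiment)
    print(f"mood={mood:+.2f} -> respond in a {choose_register(mood)} tone")
```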

As the field develops, system updates will integrate new language models. AI’s next major hurdle might be developing a better sense of nuance and cogently incorporating that skill into NLG.