Splunk Inc.

11/13/2023 | News release | Distributed by Public on 11/14/2023 07:37

An Intro to Natural Language Processing (NLP)

By Stephen Watts November 13, 2023

Simply defined, Natural Language Processing (NLP) is a practice in which computers are taught to process, understand and replicate natural human speech. As a discipline, it combines elements of computer science, computational linguistics, deep learning, artificial intelligence (AI) and machine learning (ML). NLP depends on the ability to ingest, process and analyze massive amounts of human speech - in written and verbal form - to interpret meaning and respond correctly. The ultimate goal of NLP is to allow humans to communicate with computers and devices as closely as possible to the way they interact with other humans.

The concept of NLP has existed since the 1950s, when computing pioneer Alan Turing proposed what he called the "imitation game" (later known as the Turing test), in which a human operator asks a series of questions through a text-only channel to determine if an unseen respondent is a human or computer. If the human can't tell, the computer has "passed the Turing test," which is often described as the ultimate goal of AI or NLP.

NLP powers applications from automated telephone response trees to speech-to-text to GPS systems to automated assistants such as Amazon's Alexa, Apple's Siri, Microsoft's Cortana and Google Assistant. It can be used to perform automated translation of text from one language to another, to respond to verbal commands as in the case of virtual assistants, to analyze and summarize large amounts of text and much more.

In this article, we'll discuss the types of NLP, how they work, some common NLP tasks and applications and talk about how artificial intelligence (AI) and machine learning (ML) contribute to NLP. We'll also take a look at the challenges and benefits of NLP and how it may evolve in the future.

Alan Turing, the computer scientist who developed the "Turing test" for AI- and NLP-based programs.

The Basics of Natural Language Processing

NLP transforms words into a format a computer can understand using a process known as text vectorization, which assigns a numeric vector (or array of numbers) to each word and compares it to the system's dictionary.
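Text vectorization can be sketched in a few lines of Python. The following is a minimal, illustrative bag-of-words approach, not how any particular NLP system implements it - real systems use far richer representations, such as learned embeddings - and the corpus and function names are hypothetical:

```python
from collections import Counter

def build_vocabulary(corpus):
    """Assign each unique word an index in the system's 'dictionary'."""
    vocab = {}
    for sentence in corpus:
        for word in sentence.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def vectorize(sentence, vocab):
    """Turn a sentence into a numeric vector of word counts."""
    counts = Counter(sentence.lower().split())
    return [counts.get(word, 0) for word in vocab]

corpus = ["the cat sat", "the dog sat on the mat"]
vocab = build_vocabulary(corpus)
print(vocab)  # {'the': 0, 'cat': 1, 'sat': 2, 'dog': 3, 'on': 4, 'mat': 5}
print(vectorize("the cat sat on the mat", vocab))  # [2, 1, 1, 0, 1, 1]
```

Each position in the output vector corresponds to one entry in the dictionary, so sentences become arrays of numbers that a computer can compare.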

With a large enough volume of data to compare against, ML makes this task more efficient: the NLP system draws better inferences about word meanings and automatically grows its dictionary, making future lookups faster and more accurate.

NLP systems are trained using machine learning algorithms, which are given specific data to teach the system the correlation between words and their associated numerical values. Once the system is trained, it can continue to learn new words, new contexts and new meanings using machine learning.

Three Types of Natural Language Processing

There are three main types of NLP models:

  • Symbolic NLP: The norm from the early 1950s through the 1980s, symbolic NLP represented early NLP systems that were hand-coded with a limited number of words programmed into the dictionary. The computer was given a defined set of rules and its responses were based on those rules.
  • Statistical NLP: Launched in the 1990s, statistical NLP introduced NLP algorithms from machine learning. With ML, NLP-based systems were able to use unstructured data - beyond their predefined dictionaries - and analyze and process it in real time, enabling significant advancement in NLP capabilities and applications.
  • Neural NLP: In the 2010s, deep neural network-style ML principles began to be applied to NLP. Powered by ML, neural networks are designed to mimic the way the human brain stores and uses information. While neural networks must be trained using ML algorithms, they have the ability to learn on their own, once they are trained.


Components of Natural Language Processing

In order to understand how NLP works, it is helpful to take a look at the components or subsets of NLP. These are closely related practices that power core NLP functions.

  • Natural Language Understanding (NLU): NLU is a subset of NLP in which human language is translated into a machine-readable format. NLP and NLU are similar in that they use machine learning and unstructured data, but NLU focuses specifically on the programming aspects that allow the computer to understand the semantics and syntax of human language. One example is part-of-speech tagging in customer service automation, where an NLP system understands and parses customer service tickets and routes them to the correct department based on context.
  • Natural Language Generation (NLG): While NLU focuses on helping computers understand human language, NLG focuses on teaching computers to create it. NLG allows the computer to write or speak in natural language based on a specific set of data. Text-to-speech, for example, is an application of NLG.
  • Language Processing and Optical Character Recognition (OCR): NLP relies on various datasets for speech recognition and to create human language. If the data is not already in machine-readable text form - for instance, the dialogue in a video or the text contained in a scanned document or image - then NLP uses language processing and optical character recognition (OCR) to convert it into searchable text.
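As a toy illustration of the ticket-routing idea mentioned under NLU, the sketch below substitutes a tiny keyword lexicon for the part-of-speech tagging and parsing a real NLU system would perform; the department names and keywords here are made up:

```python
# Toy NLU-style ticket routing: a small keyword lexicon stands in for
# the tagging and parsing a production system would use.
ROUTES = {
    "refund": "billing",
    "invoice": "billing",
    "password": "account-support",
    "login": "account-support",
    "crash": "engineering",
    "error": "engineering",
}

def route_ticket(text, default="general"):
    """Route a ticket to a department based on the keywords it contains."""
    for token in text.lower().split():
        token = token.strip(".,!?")
        if token in ROUTES:
            return ROUTES[token]
    return default

print(route_ticket("I cannot login after the update"))      # account-support
print(route_ticket("Please send me last month's invoice"))  # billing
```

A real system would resolve ambiguity and context (for example, distinguishing a complaint about a "crash" from a question about crash reports) rather than matching single words.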

The Role of NLP with Machine Learning

The terms machine learning (ML), artificial intelligence (AI) and natural language processing (NLP) are inextricably linked. In the context of computer science, NLP is often referred to as a branch of AI or ML. You will also see machine learning methods referred to as a core component of modern NLP. Generally, NLP and ML are both considered to be subsets of AI.

The earliest instances of symbolic NLP relied on comparing words to predefined dictionary definitions. ML allowed NLP to make huge strides in terms of applicability by giving NLP-based systems the ability to learn new words, new rules and use data to perform the core tasks of NLP.

ML is also vitally important to the future development of NLP. The more data available to NLP systems, the more accurate, conversational, fast and user-friendly they will be. ML gives NLP systems the ability to ingest and process increasingly large amounts of available data.

Natural Language Processing Use Cases and Applications

In order for NLP to function, it must perform a variety of tasks to understand the text in question and determine how to process it. These tasks are similar to the way the human brain understands and interprets language.

  • Text and speech processing covers turning speech into text and text into speech, including segmenting spoken language into individual words and distinguishing a proper noun from a common noun.
  • Morphological analysis is based on morphemes, the smallest meaningful components of language. In English, morphemes are often whole words, but they can also be smaller. Morphological analysis helps an NLP-based system to determine the root of a word, its part of speech and other factors key to understanding.
  • Syntactic analysis helps an NLP-based system understand the grammar of a sentence, break it down into words and related groups of words, and understand the relationship among words in order to better comprehend the meaning.
  • Lexical semantics (semantic analysis) is a broad category that allows the system to understand, for example, the meaning of words in context, analyze whether the sentiment of a group of text is negative or positive, disambiguate words that have multiple meanings and accurately link together groups of words that describe named entities (e.g., names of people, locations or companies).
  • Relational semantics breaks down the semantics of individual sentences to understand the relationships among named entities, converts groups of words into logical forms that can be understood by a computer and supplies the full meaning of individual words.
  • Discourse analysis helps the system understand relational semantics beyond those contained in individual sentences. It encompasses a wide variety of NLP tasks, such as question answering, that define how sentences and larger blocks of text relate to one another to produce meaning.
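The first two tasks in the list - text and speech processing and morphological analysis - can be illustrated with a deliberately crude Python sketch. The suffix list and stemming rule below are simplifications for illustration; production systems use trained morphological analyzers:

```python
import re

SUFFIXES = ["ing", "ed", "es", "s"]  # tiny, illustrative suffix list

def tokenize(text):
    """Text processing starts by splitting text into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def stem(word):
    """Crude morphological analysis: strip a known suffix to approximate the root."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

tokens = tokenize("The dogs were running and jumped over fences.")
print(tokens)
print([stem(t) for t in tokens])  # note: "running" becomes "runn" - crude rules fail here
```

Even this toy example shows why morphology is hard: naive suffix stripping mangles words like "running", which is why real analyzers learn rules from data instead.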

NLP has dozens of real-world applications from enterprise- to consumer-based. Some examples of who uses NLP include:

  • Healthcare and medical professionals use NLP to create patient notes rapidly and accurately on the fly, without needing to spend time writing them after sessions. NLP is also used to review patient notes for correlations of symptoms that might not be readily apparent.
  • Computer security experts use NLP as part of many protective measures, including analyzing email messages for words that indicate spam or phishing attempts.
  • Business professionals use NLP in a variety of ways, from grammar-checking applications that help them improve their written communication to speech-to-text for dictating documents and emails.
  • Customer experience service teams use NLP across their operations, from telephone trees and chatbots that route consumer calls to the right destination to sentiment analysis that identifies and prioritizes the most critical customer comments on websites.
  • Social media specialists use automated sentiment analysis in a similar manner to customer service teams, identifying keywords to determine which comments on tweets and other various social media channels should be addressed first or routed for additional attention.
  • Consumers use NLP in a variety of ways every day, from hands-free mobile applications to grammar-checking programs in their word processing applications to automated home control programs that interface with light, thermostat and music systems.

NLP is used by everyone from consumers and business professionals to social media specialists, healthcare professionals and security experts.

Natural Language Processing Applications

NLP is used in dozens of ways by computer systems and mobile applications to perform a wide variety of tasks. Here are some of the more common applications.

  • Text to speech: NLP powers automated readers that can be used to turn written text - in a book, email or other format - into spoken words.
  • Speech to text: NLP helps speech-to-text applications understand the context of human speech to accurately convert it to grammatically and semantically correct text.
  • Virtual assistants: Virtual assistants such as Google Assistant, Amazon's Alexa and Apple's Siri use NLP to understand the input from human users and deliver accurate and helpful answers. Speech-based GPS programs also use NLP to enable communication between humans and their devices.
  • Text summarization: NLP is used in applications that ingest and analyze large volumes of text and summarize it. NLP allows text summarization applications to reduce the amount of text without changing the meaning.
  • Grammatical error correction: Applications that automatically correct grammatical errors in written text rely on NLP to give them a more accurate understanding of meaning and therefore more contextually accurate suggestions and corrections.
  • Machine translation: Tools that automatically translate written or spoken text from one language to another rely on NLP to understand the context, syntax, grammar and other core components of language to deliver correct translations.
  • Spam and phishing detection: NLP takes the process of spam detection further by automatically understanding the full context of the text to more accurately predict spam and phishing attempts.
  • Social media analysis: Social media monitoring and engagement tools use NLP to help understand the context of a post or comment to see if it is negative or positive, which helps determine the type and priority of response.
  • Customer service: NLP can help a customer service application understand comments or complaints in a customer comment or support ticket and route it to the appropriate department for response or resolution.
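As one concrete example of the text summarization application above, here is a minimal extractive summarizer in Python: it scores each sentence by how often its words occur in the whole document and keeps the top-scoring sentences verbatim, so the meaning is not changed. The stop-word list and scoring rule are simplifications, not a production approach:

```python
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "is", "it", "and", "of", "to", "in"}

def summarize(text, n_sentences=1):
    """Keep the sentences whose words occur most often in the document."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP_WORDS]
    freq = Counter(words)
    scored = sorted(
        sentences,
        key=lambda s: sum(freq[w] for w in re.findall(r"[a-z]+", s.lower())),
        reverse=True,
    )
    top = set(scored[:n_sentences])
    return " ".join(s for s in sentences if s in top)  # preserve original order

text = ("NLP systems analyze language. "
        "NLP systems summarize language quickly. "
        "Cats sleep.")
print(summarize(text, n_sentences=1))  # NLP systems summarize language quickly.
```

Because the summary reuses whole sentences from the input, the wording is never altered - only reduced.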

Natural Language Processing Benefits

There are almost countless benefits to NLP. Here are a few of the most significant advantages.

  • Efficiency: Because NLP uses machine learning to quickly understand large volumes of text, it provides significant optimization benefits that go hand-in-hand with explosive data growth. NLP also accommodates an increasing need to process text with its ability to perform text summarization - analyzing large amounts of written text and presenting a more easily readable version, as well as enabling faster and easier web searches.
  • Accessibility: One of the most important aspects of NLP is its use in assistive technologies like speech-to-text, text-to-speech, text summarization and other applications that can be used by people with visual, speech, hearing, motor or cognitive disabilities.
  • Removing language barriers: Automated translation allows people to read text on websites and applications in languages other than their own. The ability to translate text in another language goes a long way toward removing barriers to travel, business and important communications.
  • Hands-free usability: NLP-based systems allow hands-free applications that enable drivers to search for directions or reply to a text message, for example, without taking their hands off the wheel.

Common Challenges with Natural Language Processing

While offering myriad benefits, NLP creates some challenges for users.

  • Ambiguity. In terms of NLP there can be several different kinds of ambiguity, including:
    • Lexical ambiguity, where there are multiple meanings for the same word. ("Jane is looking for a match.")
    • Syntactic ambiguity, where the structure of a sentence allows more than one interpretation. ("I saw a child with a telescope." Did the child have a telescope, or did the speaker see the child through a telescope?)
    • Referential ambiguity, where a pronoun used in a sentence could apply to more than one person. ("Maria spoke to Louise. She said, 'I am hungry.'" Who is hungry?)
  • Sentiment analysis presents challenges because understanding human language is often dependent on understanding idioms, slang, jargon and sarcasm. For instance, the phrase, "This pair of sunglasses is totally sick" would likely be interpreted as negative by automated sentiment analysis.
  • Bias. NLP systems can inherit bias from their training data, and the choice of which data to use is itself subject to bias. Words that are biased based on gender, race or sexual orientation can be removed from the training data, but the data may still be subject to representation bias, where fewer samples are drawn from underrepresented populations.
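The slang problem described in the sentiment analysis bullet is easy to reproduce with a naive word-list sentiment scorer; the word lists below are made up for illustration:

```python
# Naive word-list sentiment scoring: "sick" appears in the negative list,
# so the slang compliment below is misclassified.
POSITIVE = {"great", "love", "excellent", "awesome"}
NEGATIVE = {"sick", "terrible", "broken", "awful"}

def sentiment(text):
    words = {w.strip(".,!?") for w in text.lower().split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("This pair of sunglasses is totally sick"))  # negative (speaker means it's great)
print(sentiment("I love these sunglasses"))                  # positive
```

Handling idioms, slang and sarcasm requires models that account for context rather than fixed word lists.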

The Future of Natural Language Processing

NLP will only continue to grow in value and importance as humans increasingly rely on interaction with computers, smartphones and other devices. The ability to speak in a natural way and be understood by a device is key to the widespread adoption of automated assistance and the further integration of computers and mobile devices into modern life.

AI and ML are key to the future of NLP. They form the basis on which future advances in NLP will be built and will shape which methods become most widely used. The main limitation of NLP has been the sheer volume of data required to produce sufficiently human-like interactions, and the speed at which this can be achieved. Together, AI and ML offer the ability to overcome those obstacles and allow NLP-driven applications to interact in real time, with increasing comprehension of human speech in all its variations.

All of the current NLP applications will grow in ability and adoption as NLP capabilities continue to advance. For instance, NLP makes technology more accessible to those who work with data but are not experts in manipulating or processing it. As the role of the IT generalist becomes broader, technologies like NLP can ensure they can still interact with IT systems effectively. In business, NLP applications will provide more realistic, more helpful customer service as well as more efficiency in day-to-day computer interactions. The growth of virtual assistants depends largely on ease of use and accuracy of results - both of which depend on NLP. The future of NLP is closely tied to the future of AI, and vice versa.

The Bottom Line: NLP is a critical factor in the data revolution

The growth of computing lies in data, and much of that data is structured and unstructured text in written form. As the data revolution continues to evolve, the places where data intersects with human beings are often rendered in written text or spoken language. The ability to quickly and easily turn data into human language, and vice versa, is key to the continued growth of the data revolution. NLP helps drive this forward with its ability to provide sustainable, long-term, valuable assistance and benefits to people, in their work and personal lives.


This posting does not necessarily represent Splunk's position, strategies or opinion.