Dictionary Based Tokenization in NLP
In Natural Language Processing (NLP), dictionary-based tokenization is the process in which the text is split into tokens using a predefined dictionary of words, phrases and expressions. This method is useful when we need to treat multi-word expressions such as names or domain-specific phrases as a single token.
For example, in Named Entity Recognition (NER) the phrase "New York" should be recognized as one token, not as the two separate words "New" and "York". Dictionary-based tokenization makes this possible by referencing a predefined list of multi-word expressions. Unlike regular word tokenization, which simply splits a sentence into individual words, it groups specific terms together based on the dictionary entries. This ensures that important phrases are treated as a single unit, which matters for many NLP tasks.
How Does Dictionary-Based Tokenization Work?
In dictionary-based tokenization, the process of splitting the text into tokens is guided by a predefined dictionary of multi-word expressions, words and phrases. Let's see how the process typically works:
1. Input Text: We start with a string of text that needs to be tokenized. For example:
"San Francisco is a beautiful city."
2. Dictionary Lookup: Each word or multi-word expression in the input text is checked against the dictionary (a minimal sketch of this lookup appears after this list). If a word or phrase matches an entry in the dictionary, it is extracted as a single token.
3. Token Matching: If the word or phrase exists in the dictionary, it is grouped as a token. For example:
"San Francisco" will be treated as a single token.
4. Handling Unmatched Words: If a word is not found in the dictionary, it is either kept as-is or split further into smaller components. This can involve:
- Splitting the word into subwords or characters.
- Keeping it as a single token if no dictionary match is found.
For example, if a word like "NLP" is not in the dictionary, it may be broken into:
- subword units such as 'N' and 'LP', or treated as a special token if one is predefined in the dictionary.
5. Types of Dictionaries Used:
- Word Lists: These are basic dictionaries containing standard words.
- Subword Dictionaries: These include common prefixes, suffixes and smaller units that help in handling rare words or out-of-vocabulary (OOV) terms.
- Special Tokens: Predefined tokens for handling specific terms like numbers, punctuation or symbols (e.g., "!", "$").
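To make the lookup step concrete, here is a minimal sketch of a greedy longest-match tokenizer; the function name and the max_len parameter are illustrative, not taken from any library:
Python
def greedy_mwe_tokenize(words, dictionary, max_len=3):
    # dictionary is a set of tuples, e.g. {('san', 'francisco')}
    tokens, i = [], 0
    while i < len(words):
        match = None
        # Try the longest candidate span first, then shrink it
        for n in range(min(max_len, len(words) - i), 1, -1):
            candidate = tuple(words[i:i + n])
            if candidate in dictionary:
                match = candidate
                break
        if match:
            tokens.append('_'.join(match))  # merge the matched expression
            i += len(match)
        else:
            tokens.append(words[i])  # unmatched word is kept as-is
            i += 1
    return tokens

words = ['san', 'francisco', 'is', 'a', 'beautiful', 'city']
print(greedy_mwe_tokenize(words, {('san', 'francisco')}))
Output:
['san_francisco', 'is', 'a', 'beautiful', 'city']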
Steps for Implementing Dictionary-Based Tokenization
Let’s see the steps required to implement Dictionary-Based Tokenization in NLP using Python and NLTK (Natural Language Toolkit).
1. Importing Required Libraries
First, we need to import the necessary libraries. For this example we only need NLTK, specifically its word_tokenize function and its MWETokenizer class, plus a one-time download of the Punkt tokenizer models that word_tokenize relies on.
Python
import nltk
from nltk.tokenize import word_tokenize, MWETokenizer

# word_tokenize depends on the Punkt tokenizer models
nltk.download('punkt')
2. Preparing the Dictionary
The key part of dictionary-based tokenization is having a predefined dictionary that contains multi-word expressions. Here we create a custom dictionary of phrases that should be treated as single tokens, such as place names and organizations. Since the preprocessing step below lowercases the text, the dictionary entries are lowercase as well so that they match the processed tokens.
Python
custom_dict = [('san', 'francisco'), ('new', 'york'), ('united', 'nations')]
3. Preprocessing the Text
Before tokenizing, we need to clean the text. This may involve removing punctuation marks, stop words or any irrelevant characters to prepare the text for further processing.
Python
def preprocess_text(text):
    # Lowercase so the tokens match the lowercase dictionary entries
    text = text.lower()
    return text

sample_text = "San Francisco is a beautiful city. The United Nations meets regularly."
cleaned_text = preprocess_text(sample_text)
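The cleaning above only lowercases the text. If your application also calls for stop-word removal, a minimal sketch using NLTK's English stopword list might look like the following; note that dropping words before matching can break multi-word expressions, so it is safer to apply this after the dictionary-based step:
Python
from nltk.corpus import stopwords

nltk.download('stopwords')

def remove_stop_words(tokens):
    # Drop common function words such as 'is', 'a' and 'the'
    stop_words = set(stopwords.words('english'))
    return [t for t in tokens if t not in stop_words]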
4. Tokenizing the Text
Next, we split the cleaned text into individual words using a basic tokenizer. At this stage multi-word expressions are still separated; the dictionary comes into play in the next step, where they are merged back into single tokens.
Python
tokens = word_tokenize(cleaned_text)
print("Tokenized text:", tokens)
Output:
Tokenized text: ['san', 'francisco', 'is', 'a', 'beautiful', 'city', '.', 'the', 'united', 'nations', 'meets', 'regularly', '.']
5. Applying Dictionary-Based Tokenization
Now, we apply the dictionary-based tokenization. The MWETokenizer (Multi-Word Expression Tokenizer) in NLTK groups multi-word expressions from the predefined dictionary into single tokens, joining their words with an underscore by default.
Python
tokenizer = MWETokenizer(custom_dict)
tokenized_text = tokenizer.tokenize(tokens)
print("Dictionary-based tokenized text:", tokenized_text)
Output:
Dictionary-based tokenized text: ['san_francisco', 'is', 'a', 'beautiful', 'city', '.', 'the', 'united_nations', 'meets', 'regularly', '.']
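By default, MWETokenizer joins the words of a matched expression with an underscore. If you would rather keep a space inside merged tokens, it accepts a separator argument:
Python
space_tokenizer = MWETokenizer(custom_dict, separator=' ')
print(space_tokenizer.tokenize(tokens))
Output:
['san francisco', 'is', 'a', 'beautiful', 'city', '.', 'the', 'united nations', 'meets', 'regularly', '.']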
6. Handling Unmatched Tokens
In the tokenization process, words that are not found in the dictionary remain as individual tokens. This keeps the process flexible while ensuring multi-word expressions are correctly grouped. Since MWETokenizer joins matched expressions with an underscore, we can identify the unmatched tokens as those without one:
Python
# Tokens that were not merged into a multi-word expression
unmatched_tokens = [token for token in tokenized_text if '_' not in token]
print("Unmatched Tokens:", unmatched_tokens)
Output:
Unmatched Tokens: ['is', 'a', 'beautiful', 'city', '.', 'the', 'meets', 'regularly', '.']
7. Example of Dictionary-Based Tokenization in Action
To see dictionary-based tokenization in action, let’s consider the sentence "San Francisco is part of the United Nations."
Python
sentence = "San Francisco is part of the United Nations."
cleaned_sentence = preprocess_text(sentence)
tokens = word_tokenize(cleaned_sentence)
tokenized_sentence = tokenizer.tokenize(tokens)
print("Tokenized sentence:", tokenized_sentence)
Output:
Tokenized sentence: ['san_francisco', 'is', 'part', 'of', 'the', 'united_nations', '.']
8. Customizing the Dictionary
If we're working with domain-specific text, we can continuously expand the dictionary to include more multi-word expressions, ensuring accurate tokenization in specialized applications.
Python
custom_dict.extend([('machine', 'learning'), ('natural', 'language', 'processing')])
tokenizer = MWETokenizer(custom_dict)
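Equivalently, an existing MWETokenizer can be grown in place with its add_mwe method, without rebuilding the tokenizer; for example, to register the (illustrative) expression "deep learning":
Python
tokenizer.add_mwe(('deep', 'learning'))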
9. Visualizing Tokenization Output
To better understand the result, we can run the tokenizer over several sentences and print the output. This helps confirm whether multi-word expressions are accurately grouped as single tokens.
Python
sentences = [
    "San Francisco is a beautiful place.",
    "The United Nations is headquartered in New York.",
    "Machine Learning is a subset of Artificial Intelligence."
]

for sentence in sentences:
    cleaned_sentence = preprocess_text(sentence)
    tokens = word_tokenize(cleaned_sentence)
    tokenized_sentence = tokenizer.tokenize(tokens)
    print(f"Original: {sentence}")
    print(f"Tokenized: {tokenized_sentence}\n")
Output:
Original: San Francisco is a beautiful place.
Tokenized: ['san_francisco', 'is', 'a', 'beautiful', 'place', '.']

Original: The United Nations is headquartered in New York.
Tokenized: ['the', 'united_nations', 'is', 'headquartered', 'in', 'new_york', '.']

Original: Machine Learning is a subset of Artificial Intelligence.
Tokenized: ['machine_learning', 'is', 'a', 'subset', 'of', 'artificial', 'intelligence', '.']

Advantages of Dictionary-Based Tokenization
- Handling Multi-Word Entities: This method works great for handling complex entities like locations, names or other specific terms that should remain intact.
- Efficiency: It is faster compared to more complex techniques that rely on machine learning models.
- Simplicity: The approach is easy to implement and doesn’t require a large amount of training data, making it a good choice for smaller projects or real-time applications.
Challenges and Limitations
- Out-of-Vocabulary (OOV) Words: If a word or phrase isn't included in the dictionary, it could be split incorrectly or missed entirely (a minimal fallback sketch appears after this list).
- Limited Coverage: The dictionary may not be comprehensive enough to handle all possible variations or new terms in the text.
- Ambiguity: Some words might have different meanings based on context. For example, "lead" could be a noun or verb and dictionary-based tokenization might struggle to handle such ambiguities.
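As a sketch of one common mitigation for the OOV problem, a tokenizer can fall back to character (or subword) units whenever a word is missing from its vocabulary; the vocabulary and function name below are illustrative:
Python
def tokenize_with_fallback(word, vocabulary):
    # Known words stay whole; unknown words fall back to characters
    if word in vocabulary:
        return [word]
    return list(word)

vocab = {'san_francisco', 'is', 'a', 'beautiful', 'city'}
print(tokenize_with_fallback('city', vocab))  # ['city']
print(tokenize_with_fallback('nlp', vocab))   # ['n', 'l', 'p']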
By using dictionary-based tokenization, NLP systems can efficiently recognize and process multi-word expressions, enhancing their accuracy and performance across a wide range of language tasks.