Accessing Text Corpora and Lexical Resources using NLTK
Last Updated: 23 Jul, 2025
Accessing Text Corpora and Lexical Resources using NLTK provides efficient access to extensive text data and linguistic resources, empowering researchers and developers in natural language processing tasks. Natural Language Toolkit (NLTK) is a powerful Python library for natural language processing (NLP). It provides easy-to-use interfaces to over 50 corpora and lexical resources, including WordNet, along with a suite of text-processing libraries for classification, tokenization, stemming, tagging, parsing, and more.
This article will guide you through accessing text corpora and lexical resources using NLTK, illustrating with practical examples.
Accessing Text Corpora using NLTK
NLTK provides access to various text corpora, including books, news, chats, and more. Some popular corpora include:
- Gutenberg: Contains text from classic literature.
- Brown: The first million-word electronic corpus of English.
- Reuters: A collection of news documents.
- Inaugural: Presidential inaugural speeches.
NLTK is a comprehensive library that supports complex NLP tasks. It is ideal for academic and research purposes due to its extensive collection of linguistic data and tools.
Before proceeding with the implementation, make sure that you have installed NLTK and the necessary data.
pip install nltk
After installation, you need to download the data:
import nltk
nltk.download('all')
Loading and Using Corpora
You can load and use these corpora easily. For example, to access the Gutenberg corpus:
Python
from nltk.corpus import gutenberg
print(gutenberg.fileids())
Output:
['austen-emma.txt', 'austen-persuasion.txt', 'austen-sense.txt', 'bible-kjv.txt', 'blake-poems.txt', 'bryant-stories.txt', 'burgess-busterbrown.txt', 'carroll-alice.txt', 'chesterton-ball.txt', 'chesterton-brown.txt', 'chesterton-thursday.txt', 'edgeworth-parents.txt', 'melville-moby_dick.txt', 'milton-paradise.txt', 'shakespeare-caesar.txt', 'shakespeare-hamlet.txt', 'shakespeare-macbeth.txt', 'whitman-leaves.txt']
The output represents the list of file identifiers (fileids) available in the Gutenberg corpus of the NLTK library. Each file identifier corresponds to a text file containing a literary work included in the Gutenberg collection.
To read the text of a specific file:
Python
hamlet = gutenberg.words('shakespeare-hamlet.txt')
print(hamlet[:100])
Output:
['[', 'The', 'Tragedie', 'of', 'Hamlet', 'by', ...]
Working with Lexical Resources using NLTK
NLTK includes several lexical resources, with WordNet being the most significant. WordNet is a large lexical database of English that groups words into sets of synonyms.
Using WordNet with NLTK
To use WordNet:
Python
from nltk.corpus import wordnet as wn
Find the synonyms of a word:
Python
synonyms = wn.synsets('book')
print(synonyms)
Output:
[Synset('book.n.01'), Synset('book.n.02'), Synset('record.n.05'), Synset('script.n.01'), Synset('ledger.n.01'), Synset('book.n.06'), Synset('book.n.07'), Synset('koran.n.01'), Synset('bible.n.01'), Synset('book.n.10'), Synset('book.n.11'), Synset('book.v.01'), Synset('reserve.v.04'), Synset('book.v.03'), Synset('book.v.04')]
The output represents a list of synsets (synonym sets) for the word "book" from the WordNet lexical database in NLTK. Each synset corresponds to a different meaning or sense of the word "book." The notation Synset('book.n.01') provides the following information:
- book: The word for which the synset is defined.
- n: Indicates the part of speech (in this case, "n" for noun).
- 01: The sense number, distinguishing different meanings of the word.
To get definitions and examples:
Python
for syn in synonyms:
    print(syn.definition())
    print(syn.examples())
Output:
a written work or composition that has been published (printed on pages bound together)
['I am reading a good book on economics']
physical objects consisting of a number of pages bound together
['he used a large book as a doorstop']
a compilation of the known facts regarding something or someone
["Al Smith used to say, `Let's look at the record'", 'his name is in all the record books']
a written version of a play or other dramatic composition; used in preparing for a performance
[]
a record in which commercial accounts are recorded
['they got a subpoena to examine our books']
a collection of playing cards satisfying the rules of a card game
[]
a collection of rules or prescribed standards on the basis of which decisions are made
['they run things by the book around here']
the sacred writings of Islam revealed by God to the prophet Muhammad during his life at Mecca and Medina
[]
the sacred writings of the Christian religions
['he went to carry the Word to the heathen']
a major division of a long written composition
['the book of Isaiah']
a number of sheets (ticket or stamps etc.) bound together on one edge
['he bought a book of stamps']
engage for a performance
['Her agent had booked her for several concerts in Tokyo']
The output provides the definitions and example sentences for each sense (synset) of the word "book" as retrieved from WordNet, illustrating the various contexts in which the word can be used.
Practical Examples
Tokenizing Text
Tokenization is the process of breaking text into words or sentences.
- word_tokenize: Splits the text into individual words.
- sent_tokenize: Splits the text into sentences.
Python
from nltk.tokenize import word_tokenize, sent_tokenize
text = "Hello, world! This is a test."
print(word_tokenize(text))
print(sent_tokenize(text))
Output:
['Hello', ',', 'world', '!', 'This', 'is', 'a', 'test', '.']
['Hello, world!', 'This is a test.']
Finding Synonyms and Antonyms
Using WordNet, you can find synonyms and antonyms for words.
- wn.synsets('good'): Retrieves all synsets (sets of synonyms) for the word "good".
- lemma.name(): Extracts the lemma name from each synset.
- lemma.antonyms(): Checks if the lemma has antonyms and adds the first antonym's name to the antonyms list.
Python
from nltk.corpus import wordnet as wn
synonyms = []
antonyms = []
for syn in wn.synsets('good'):
    for lemma in syn.lemmas():
        synonyms.append(lemma.name())
        if lemma.antonyms():
            antonyms.append(lemma.antonyms()[0].name())
print("Synonyms:", set(synonyms))
print("Antonyms:", set(antonyms))
Output:
Synonyms: {'salutary', 'just', 'in_effect', 'full', 'skillful', 'adept', 'safe', 'skilful', 'beneficial', 'near', 'unspoiled', 'trade_good', 'secure', 'estimable', 'respectable', 'effective', 'unspoilt', 'soundly', 'serious', 'well', 'commodity', 'good', 'practiced', 'dependable', 'in_force', 'right', 'sound', 'expert', 'honest', 'upright', 'thoroughly', 'honorable', 'ripe', 'goodness', 'proficient', 'dear', 'undecomposed'}
Antonyms: {'evilness', 'badness', 'bad', 'evil', 'ill'}
Frequency Distributions
A frequency distribution counts how often each word occurs in a text, helping to identify the most common words.
- gutenberg.words('shakespeare-hamlet.txt'): Loads the words from the text of "Hamlet".
- FreqDist(text): Creates a frequency distribution of the words in the text.
- fdist.most_common(10): Returns the 10 most common words along with their frequencies.
Python
from nltk.corpus import gutenberg
from nltk.probability import FreqDist
text = gutenberg.words('shakespeare-hamlet.txt')
fdist = FreqDist(text)
print(fdist.most_common(10))
Output:
[(',', 2892), ('.', 1886), ('the', 860), ("'", 729), ('and', 606), ('of', 576), ('to', 576), (':', 565), ('I', 553), ('you', 479)]
Conclusion
NLTK is a versatile tool for NLP, offering access to a wealth of corpora and lexical resources. Whether you're performing text analysis, developing NLP applications, or conducting research, NLTK provides the functionalities needed to preprocess and analyze textual data effectively. By following the examples provided, you can leverage NLTK's capabilities to enhance your NLP projects.