Getting Started with Natural Language Processing


Hit the ground running with this in-depth introduction to the NLP skills and techniques that allow your computers to speak human.

In Getting Started with Natural Language Processing you'll learn about:

    Fundamental concepts and algorithms of NLP
    Useful Python libraries for NLP
    Building a search algorithm
    Extracting information from raw text
    Predicting sentiment of an input text
    Author profiling
    Topic labeling
    Named entity recognition

Getting Started with Natural Language Processing is an enjoyable and understandable guide that helps you engineer your first NLP algorithms. Your tutor is Dr. Ekaterina Kochmar, lecturer at the University of Bath, who has helped thousands of students take their first steps with NLP. Full of Python code and hands-on projects, each chapter provides a concrete example with practical techniques you can apply right away. If you're new to NLP and want to upgrade your applications with functions and features like information extraction, user profiling, and automatic topic labeling, this is the book for you.

Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.

About the technology
From smart speakers to customer service chatbots, apps that understand text and speech are everywhere. Natural language processing, or NLP, is the key to this powerful form of human/computer interaction. And a new generation of tools and techniques makes it easier than ever to get started with NLP!

About the book
Getting Started with Natural Language Processing teaches you how to upgrade user-facing applications with text and speech-based features. From the accessible explanations and hands-on examples in this book you’ll learn how to apply NLP to sentiment analysis, user profiling, and much more. As you go, each new project builds on what you’ve previously learned, introducing new concepts and skills. Handy diagrams and intuitive Python code samples make it easy to get started—even if you have no background in machine learning!
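For a flavor of the hands-on style described here, consider the kind of task the book's chapter on sentiment lexicons tackles: scoring a text's polarity by summing word-level sentiment values. The sketch below is illustrative only; the tiny lexicon and example sentences are placeholders, not material from the book.

```python
# Minimal lexicon-based sentiment scorer, in the spirit of
# "Your first sentiment analyzer using sentiment lexicons".
# The lexicon below is a toy placeholder, not a real resource.
sentiment_lexicon = {
    "great": 1, "enjoyable": 1, "clear": 1, "helpful": 1,
    "boring": -1, "confusing": -1, "poor": -1, "dull": -1,
}

def sentiment(text: str) -> str:
    """Sum lexicon scores over lowercased tokens and map to a polarity label."""
    tokens = text.lower().split()
    score = sum(sentiment_lexicon.get(tok.strip(".,!?"), 0) for tok in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("A great, clear and helpful introduction!"))  # positive
print(sentiment("Dull and confusing in places."))             # negative
```

Real systems refine this idea with larger lexicons (such as SentiWordNet, covered in chapter 8), word-sense handling, and negation handling, all of which the book develops step by step.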

What's inside

    Fundamental concepts and algorithms of NLP
    Extracting information from raw text
    Useful Python libraries
    Topic labeling
    Building a search algorithm

About the reader
You’ll need basic Python skills. No experience with NLP required.

About the author
Ekaterina Kochmar is a lecturer at the Department of Computer Science of the University of Bath, where she is part of the AI research group.

Table of Contents
1 Introduction
2 Your first NLP example
3 Introduction to information search
4 Information extraction
5 Author profiling as a machine-learning task
6 Linguistic feature engineering for author profiling
7 Your first sentiment analyzer using sentiment lexicons
8 Sentiment analysis with a data-driven approach
9 Topic analysis
10 Topic modeling
11 Named-entity recognition

Author(s): Ekaterina Kochmar
Publisher: Manning
Year: 2022

Language: English
Commentary: True PDF
Pages: 456

brief contents
contents
preface
acknowledgments
about this book
Who should read this book
How this book is organized: A road map
About the code
liveBook discussion forum
Other online resources
about the author
about the cover illustration
1 Introduction
1.1 A brief history of NLP
1.2 Typical tasks
1.2.1 Information search
1.2.2 Advanced information search: Asking the machine precise questions
1.2.3 Conversational agents and intelligent virtual assistants
1.2.4 Text prediction and language generation
1.2.5 Spam filtering
1.2.6 Machine translation
1.2.7 Spell- and grammar checking
Summary
Solution to exercise 1.1
2 Your first NLP example
2.1 Introducing NLP in practice: Spam filtering
2.2 Understanding the task
2.2.1 Step 1: Define the data and classes
2.2.2 Step 2: Split the text into words
2.2.3 Step 3: Extract and normalize the features
2.2.4 Step 4: Train a classifier
2.2.5 Step 5: Evaluate the classifier
2.3 Implementing your own spam filter
2.3.1 Step 1: Define the data and classes
2.3.2 Step 2: Split the text into words
2.3.3 Step 3: Extract and normalize the features
2.3.4 Step 4: Train the classifier
2.3.5 Step 5: Evaluate your classifier
2.4 Deploying your spam filter in practice
Summary
Solutions to miscellaneous exercises
3 Introduction to information search
3.1 Understanding the task
3.1.1 Data and data structures
3.1.2 Boolean search algorithm
3.2 Processing the data further
3.2.1 Preselecting the words that matter: Stopwords removal
3.2.2 Matching forms of the same word: Morphological processing
3.3 Information weighing
3.3.1 Weighing words with term frequency
3.3.2 Weighing words with inverse document frequency
3.4 Practical use of the search algorithm
3.4.1 Retrieval of the most similar documents
3.4.2 Evaluation of the results
3.4.3 Deploying search algorithm in practice
Summary
Solutions to miscellaneous exercises
4 Information extraction
4.1 Use cases
4.1.1 Case 1
4.1.2 Case 2
4.1.3 Case 3
4.2 Understanding the task
4.3 Detecting word types with part-of-speech tagging
4.3.1 Understanding word types
4.3.2 Part-of-speech tagging with spaCy
4.4 Understanding sentence structure with syntactic parsing
4.4.1 Why sentence structure is important
4.4.2 Dependency parsing with spaCy
4.5 Building your own information extraction algorithm
Summary
Solutions to miscellaneous exercises
5 Author profiling as a machine-learning task
5.1 Understanding the task
5.1.1 Case 1: Authorship attribution
5.1.2 Case 2: User profiling
5.2 Machine-learning pipeline at first glance
5.2.1 Original data
5.2.2 Testing generalization behavior
5.2.3 Setting up the benchmark
5.3 A closer look at the machine-learning pipeline
5.3.1 Decision Trees classifier basics
5.3.2 Evaluating which tree is better using node impurity
5.3.3 Selection of the best split in Decision Trees
5.3.4 Decision Trees on language data
Summary
Solutions to miscellaneous exercises
6 Linguistic feature engineering for author profiling
6.1 Another close look at the machine-learning pipeline
6.1.1 Evaluating the performance of your classifier
6.1.2 Further evaluation measures
6.2 Feature engineering for authorship attribution
6.2.1 Word and sentence length statistics as features
6.2.2 Counts of stopwords and proportion of stopwords as features
6.2.3 Distributions of parts of speech as features
6.2.4 Distribution of word suffixes as features
6.2.5 Unique words as features
6.3 Practical use of authorship attribution and user profiling
Summary
7 Your first sentiment analyzer using sentiment lexicons
7.1 Use cases
7.2 Understanding your task
7.2.1 Aggregating sentiment score with the help of a lexicon
7.2.2 Learning to detect sentiment in a data-driven way
7.3 Setting up the pipeline: Data loading and analysis
7.3.1 Data loading and preprocessing
7.3.2 A closer look into the data
7.4 Aggregating sentiment scores with a sentiment lexicon
7.4.1 Collecting sentiment scores from a lexicon
7.4.2 Applying sentiment scores to detect review polarity
Summary
Solutions to exercises
8 Sentiment analysis with a data-driven approach
8.1 Addressing multiple senses of a word with SentiWordNet
8.2 Addressing dependence on context with machine learning
8.2.1 Data preparation
8.2.2 Extracting features from text
8.2.3 Scikit-learn’s machine-learning pipeline
8.2.4 Full-scale evaluation with cross-validation
8.3 Varying the length of the sentiment-bearing features
8.4 Negation handling for sentiment analysis
8.5 Further practice
Summary
9 Topic analysis
9.1 Topic classification as a supervised machine-learning task
9.1.1 Data
9.1.2 Topic classification with Naïve Bayes
9.1.3 Evaluation of the results
9.2 Topic discovery as an unsupervised machine-learning task
9.2.1 Unsupervised ML approaches
9.2.2 Clustering for topic discovery
9.2.3 Evaluation of the topic clustering algorithm
Summary
Solutions to miscellaneous exercises
10 Topic modeling
10.1 Topic modeling with latent Dirichlet allocation
10.1.1 Exercise 10.1: Question 1 solution
10.1.2 Exercise 10.1: Question 2 solution
10.1.3 Estimating parameters for the LDA
10.1.4 LDA as a generative model
10.2 Implementation of the topic modeling algorithm
10.2.1 Loading the data
10.2.2 Preprocessing the data
10.2.3 Applying the LDA model
10.2.4 Exploring the results
Summary
Solutions to miscellaneous exercises
11 Named-entity recognition
11.1 Named entity recognition: Definitions and challenges
11.1.1 Named entity types
11.1.2 Challenges in named entity recognition
11.2 Named-entity recognition as a sequence labeling task
11.2.1 The basics: BIO scheme
11.2.2 What does it mean for a task to be sequential?
11.2.3 Sequential solution for NER
11.3 Practical applications of NER
11.3.1 Data loading and exploration
11.3.2 Named entity types exploration with spaCy
11.3.3 Information extraction revisited
11.3.4 Named entities visualization
Summary
Conclusion
Solutions to miscellaneous exercises
Appendix—Installation instructions
index