Applied Natural Language Processing in the Enterprise: Teaching Machines to Read, Write, and Understand

NLP has exploded in popularity over the last few years. But while Google, Facebook, OpenAI, and others continue to release larger language models, many teams still struggle with building NLP applications that live up to the hype. This hands-on guide helps you get up to speed on the latest and most promising trends in NLP. With a basic understanding of machine learning and some Python experience, you'll learn how to build, train, and deploy models for real-world applications in your organization. Authors Ankur Patel and Ajay Uppili Arasanipalai guide you through the process using code and examples that highlight the best practices in modern NLP.

• Use state-of-the-art NLP models such as BERT and GPT-3 to solve tasks such as named entity recognition, text classification, semantic search, and reading comprehension
• Train NLP models with performance comparable or superior to that of out-of-the-box systems
• Learn about the Transformer architecture and modern tricks like transfer learning that have taken the NLP world by storm
• Become familiar with the tools of the trade, including spaCy, Hugging Face, and fast.ai
• Build core parts of the NLP pipeline, including tokenizers, embeddings, and language models, from scratch using Python and PyTorch
• Take your models out of Jupyter notebooks and learn how to deploy, monitor, and maintain them in production
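To give a flavor of the workflow the book teaches, here is a minimal, illustrative sketch of pretrained-model inference with the Hugging Face transformers library (one of the tools covered). It is not code from the book itself: the checkpoint name dslim/bert-base-NER is an assumed public model from the Hugging Face Hub, and the sample sentence is invented.

    # Minimal sketch: named entity recognition with a pretrained Transformer
    # via the Hugging Face pipeline API (pip install transformers).
    # "dslim/bert-base-NER" is an assumed public Hub checkpoint, not one
    # prescribed by the book; any compatible NER model would work.
    from transformers import pipeline

    ner = pipeline(
        "ner",
        model="dslim/bert-base-NER",
        aggregation_strategy="simple",  # merge subword pieces into whole entities
    )

    for entity in ner("Ankur and Ajay wrote an NLP book for O'Reilly in Sebastopol."):
        print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))

Chapter 3 walks through this same task in more depth, including inference with spaCy's off-the-shelf NER model and training a custom one.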

Author(s): Ankur A. Patel, Ajay Uppili Arasanipalai
Edition: 1
Publisher: O'Reilly Media
Year: 2021

Language: English
Commentary: Vector PDF
Pages: 336
City: Sebastopol, CA
Tags: Google Cloud Platform;Amazon Web Services;Microsoft Azure;Machine Learning;Deep Learning;Natural Language Processing;Python;Recurrent Neural Networks;Core ML;Transfer Learning;Web Applications;TensorFlow;Natural Language Understanding;Long Short-Term Memory;fastText;PyTorch;Kaggle;H2O;word2vec;ImageNet;Transformers;Attention Mechanisms;Google Colaboratory;TensorBoard;GloVe;Pretrained Networks;BERT;Word Embeddings;spaCy;GPT;Hugging Face;BERTology;Neptune;Comet;Dataiku;ONNX;FloydHub;Streamlit

Copyright
Table of Contents
Preface
What Is Natural Language Processing?
Why Should I Read This Book?
What Do I Need to Know Already?
What Is This Book All About?
How Is This Book Organized?
Conventions Used in This Book
Using Code Examples
O’Reilly Online Learning
How to Contact Us
Acknowledgments
Ajay
Ankur
Part I. Scratching the Surface
Chapter 1. Introduction to NLP
What Is NLP?
Popular Applications
History
Inflection Points
A Final Word
Basic NLP
Defining NLP Tasks
Set Up the Programming Environment
spaCy, fast.ai, and Hugging Face
Perform NLP Tasks Using spaCy
Conclusion
Chapter 2. Transformers and Transfer Learning
Training with fastai
Using the fastai Library
ULMFiT for Transfer Learning
Fine-Tuning a Language Model on IMDb
Training a Text Classifier
Inference with Hugging Face
Loading Models
Generating Predictions
Conclusion
Chapter 3. NLP Tasks and Applications
Pretrained Language Models
Transfer Learning and Fine-Tuning
NLP Tasks
Natural Language Dataset
Explore the AG Dataset
NLP Task #1: Named Entity Recognition
Perform Inference Using the Original spaCy Model
Custom NER
Annotate via Prodigy: NER
Train the Custom NER Model Using spaCy
Custom NER Model Versus Original NER Model
NLP Task #2: Text Classification
Annotate via Prodigy: Text Classification
Train Text Classification Models Using spaCy
Conclusion
Part II. The Cogs in the Machine
Chapter 4. Tokenization
A Minimal Tokenizer
Hugging Face Tokenizers
Subword Tokenization
Building Your Own Tokenizer
Conclusion
Chapter 5. Embeddings: How Machines “Understand” Words
Understanding Versus Reading Text
Word Vectors
Word2Vec
Embeddings in the Age of Transfer Learning
Embeddings in Practice
Preprocessing
Model
Training
Validation
Embedding Things That Aren’t Words
Making Vectorized Music
Some General Tips for Making Custom Embeddings
Conclusion
Chapter 6. Recurrent Neural Networks and Other Sequence Models
Recurrent Neural Networks
RNNs in PyTorch from Scratch
Bidirectional RNN
Sequence to Sequence Using RNNs
Long Short-Term Memory
Gated Recurrent Units
Conclusion
Chapter 7. Transformers
Building a Transformer from Scratch
Attention Mechanisms
Dot Product Attention
Scaled Dot Product Attention
Multi-Head Self-Attention
Adaptive Attention Span
Persistent Memory/All-Attention
Product-Key Memory
Transformers for Computer Vision
Conclusion
Chapter 8. BERTology: Putting It All Together
ImageNet
The Power of Pretrained Models
The Path to NLP’s ImageNet Moment
Pretrained Word Embeddings
The Limitations of One-Hot Encoding
Word2Vec
GloVe
fastText
Context-Aware Pretrained Word Embeddings
Sequential Models
Sequential Data and the Importance of Sequential Models
RNNs
Vanilla RNNs
LSTM Networks
GRUs
Attention Mechanisms
Transformers
Transformer-XL
NLP’s ImageNet Moment
Universal Language Model Fine-Tuning
ELMo
BERT
BERTology
GPT-1, GPT-2, GPT-3
Conclusion
Part III. Outside the Wall
Chapter 9. Tools of the Trade
Deep Learning Frameworks
PyTorch
TensorFlow
JAX
Julia
Visualization and Experiment Tracking
TensorBoard
Weights & Biases
Neptune
Comet
MLflow
AutoML
H2O.ai
Dataiku
DataRobot
ML Infrastructure and Compute
Paperspace
FloydHub
Google Colab
Kaggle Kernels
Lambda GPU Cloud
Edge/On-Device Inference
ONNX
Core ML
Edge Accelerators
Cloud Inference and Machine Learning as a Service
AWS
Microsoft Azure
Google Cloud Platform
Continuous Integration and Delivery
Conclusion
Chapter 10. Visualization
Our First Streamlit App
Build the Streamlit App
Deploy the Streamlit App
Explore the Streamlit Web App
Build and Deploy a Streamlit App for Custom NER
Build and Deploy a Streamlit App for Text Classification on AG News Dataset
Build and Deploy a Streamlit App for Text Classification on Custom Text
Conclusion
Chapter 11. Productionization
Data Scientists, Engineers, and Analysts
Prototyping, Deployment, and Maintenance
Notebooks and Scripts
Databricks: Your Unified Data Analytics Platform
Support for Big Data
Support for Multiple Programming Languages
Support for ML Frameworks
Support for Model Repository, Access Control, Data Lineage, and Versioning
Databricks Setup
Set Up Access to S3 Bucket
Set Up Libraries
Create Cluster
Create Notebook
Enable Init Script and Restart Cluster
Run Speed Test: Inference on NER Using spaCy
Machine Learning Jobs
Production Pipeline Notebook
Scheduled Machine Learning Jobs
Event-Driven Machine Learning Pipeline
MLflow
Log and Register Model
MLflow Model Serving
Alternatives to Databricks
Amazon SageMaker
Saturn Cloud
Conclusion
Chapter 12. Conclusion
Ten Final Lessons
Lesson 1: Start with Simple Approaches First
Lesson 2: Leverage the Community
Lesson 3: Do Not Create from Scratch When Possible
Lesson 4: Intuition and Experience Trounce Theory
Lesson 5: Fight Decision Fatigue
Lesson 6: Data Is King
Lesson 7: Lean on Humans
Lesson 8: Pair Yourself with Really Great Engineers
Lesson 9: Ensemble
Lesson 10: Have Fun
Final Word
Appendix A. Scaling
Multi-GPU Training
Distributed Training
What Makes Deep Learning Training Fast?
Appendix B. CUDA
Threads and Thread Blocks
Writing CUDA Kernels
CUDA in Practice
Index
About the Authors
Colophon