Learning Deep Learning: Theory and Practice of Neural Networks, Computer Vision, NLP, and Transformers using TensorFlow

NVIDIA's Full-Color Guide to Deep Learning with TensorFlow: All You Need to Get Started and Get Results

Deep learning is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to deep learning with TensorFlow, the #1 Python library for building these breakthrough applications. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others, including those with no prior machine learning or statistics experience.

After introducing the essential building blocks of deep neural networks, Magnus Ekman shows how to use fully connected feedforward networks and convolutional networks to solve real problems, such as predicting housing prices or classifying images. You'll learn how to represent words from a natural language, capture semantics, and develop a working natural language translator. With that foundation in place, Ekman then guides you through building a system that inputs images and describes them in natural language.

Throughout, Ekman provides concise, well-annotated code examples using TensorFlow and the Keras API. (For comparison and easy migration between frameworks, complementary PyTorch examples are provided online.) He concludes by previewing trends in deep learning, exploring important ethical issues, and providing resources for further learning.

- Master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
- See how frameworks make it easier to develop more robust and useful neural networks
- Discover how convolutional neural networks (CNNs) revolutionize classification and analysis
- Use recurrent neural networks (RNNs) to optimize for text, speech, and other variable-length sequences
- Master long short-term memory (LSTM) techniques for natural language generation and other applications
- Move further into natural language processing (NLP), including understanding and translation
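To give a sense of the style of example the description refers to (concise Keras code applied to a problem such as housing-price prediction), here is a minimal sketch. It is not taken from the book; the synthetic data, layer sizes, and training settings are illustrative assumptions.

```python
# Minimal sketch (not from the book): a fully connected feedforward network
# in Keras that predicts a house price from numeric features.
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Synthetic stand-in data: 13 numeric features per house, one price target.
x_train = np.random.rand(404, 13).astype(np.float32)
y_train = (np.random.rand(404).astype(np.float32) * 50.0)

# Two hidden layers with ReLU activations, one linear output for regression.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(13,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])

# Mean squared error loss and the Adam optimizer are typical choices
# for this kind of regression task.
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x_train, y_train, epochs=10, batch_size=16, verbose=0)

# Predict the price of a single (synthetic) house.
print(model.predict(np.random.rand(1, 13).astype(np.float32)))
```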

Author(s): Magnus Ekman
Publisher: Addison-Wesley Professional
Year: 2021

Language: English
Pages: 800