Deep Learning from First Principles in Vectorized Python, R and Octave


This is the second edition of the book. The code has been formatted with a fixed-width font and includes line numbering. This book derives and builds a multi-layer, multi-unit Deep Learning network from the basics. The first chapter starts with the derivation and implementation of Logistic Regression as a Neural Network. This is followed by building a generic L-Layer Deep Learning network which performs binary classification. This Deep Learning network is then enhanced to handle multi-class classification, along with the necessary derivations for the Jacobian of softmax and cross-entropy loss. Further chapters cover different initialization types and regularization methods (L2, dropout), followed by gradient descent optimization techniques such as Momentum, RMSprop and Adam. Finally, the technique of gradient checking is elaborated and implemented. All the chapters include implementations in vectorized Python, R and Octave, with detailed derivations for each critical enhancement to the Deep Learning network. By the time you reach the last chapter, the implementation is a fully functional L-Layer Deep Learning network with all the bells and whistles, in vectorized Python, R and Octave.
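One of the key derivations the blurb mentions is that the softmax Jacobian, when combined with the cross-entropy loss, collapses to the simple expression A − Y. A minimal vectorized NumPy sketch of that result (the shapes, toy scores and labels here are assumptions for illustration, not taken from the book):

```python
import numpy as np

def softmax(Z):
    # Column-wise softmax; subtracting the column max keeps exp() stable
    e = np.exp(Z - Z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

# Toy scores: 3 classes x 4 examples (classes along rows, examples along columns)
Z = np.array([[2.0, 1.0, 0.1, 0.5],
              [1.0, 3.0, 0.2, 0.5],
              [0.1, 0.2, 3.0, 0.5]])
# One-hot labels, same shape as Z
Y = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

A = softmax(Z)
m = Z.shape[1]
# Average cross-entropy loss over the m examples
loss = -np.sum(Y * np.log(A)) / m
# Chaining the softmax Jacobian through cross-entropy gives simply (A - Y) / m
dZ = (A - Y) / m
```

Because the two Jacobians cancel so cleanly, backpropagation through a softmax output layer needs no explicit Jacobian matrix, which is what makes the vectorized implementation tractable.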
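The gradient checking technique covered in the final chapter compares the analytic gradient against a central-difference numerical approximation. A minimal sketch of the idea, using a hypothetical quadratic cost J = ½‖θ‖² (chosen here only so the analytic gradient, θ itself, is known in closed form):

```python
import numpy as np

def numerical_grad(f, theta, eps=1e-7):
    # Central difference: perturb one parameter at a time by +/- eps
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        t_plus = theta.copy()
        t_minus = theta.copy()
        t_plus.flat[i] += eps
        t_minus.flat[i] -= eps
        grad.flat[i] = (f(t_plus) - f(t_minus)) / (2 * eps)
    return grad

theta = np.array([1.5, -2.0, 0.3])
analytic = theta  # dJ/dtheta for J = 0.5 * ||theta||^2
numeric = numerical_grad(lambda t: 0.5 * np.sum(t ** 2), theta)

# Normalized difference; a correct analytic gradient yields a very small value
diff = np.linalg.norm(analytic - numeric) / (
    np.linalg.norm(analytic) + np.linalg.norm(numeric))
```

In practice the same check is run against the backpropagation gradients of the full network, with the parameters flattened into a single vector; a normalized difference well below about 1e-6 suggests the analytic gradients are correct.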

Author(s): Tinniam V Ganesh
Year: 2018

Language: English
Pages: 775

Preface......Page 6
Introduction......Page 10
1. Logistic Regression as a Neural Network......Page 13
2. Implementing a simple Neural Network......Page 36
3. Building an L-Layer Deep Learning Network......Page 79
4. Deep Learning network with the Softmax......Page 134
5. MNIST classification with Softmax......Page 161
6. Initialization, regularization in Deep Learning......Page 194
7. Gradient Descent Optimization techniques......Page 262
8. Gradient Check in Deep Learning......Page 313
Appendix A......Page 347
Appendix 1 – Logistic Regression as a Neural Network......Page 356
Appendix 2 – Implementing a simple Neural Network......Page 369
Appendix 3 – Building an L-Layer Deep Learning Network......Page 391
Appendix 4 – Deep Learning network with the Softmax......Page 424
Appendix 5 – MNIST classification with Softmax......Page 441
Appendix 6 – Initialization, regularization in Deep Learning......Page 495
Appendix 7 – Gradient Descent Optimization techniques......Page 564
Appendix 8 – Gradient Check......Page 662
References......Page 774