Deep Learning for Beginners
By:
Overview of this book
With information on the web increasing exponentially, it has become harder than ever to find reliable content to help you get started with deep learning. This book is designed for you if you're a beginner who wants to build deep learning models from scratch and who already has the basic mathematical and programming knowledge required to get started.
The book begins with a basic overview of machine learning and guides you through setting up popular Python frameworks. You will also learn how to prepare data for deep learning by cleaning and preprocessing it, before gradually moving on to neural networks. A dedicated section gives you insight into how neural networks work by getting you hands-on with training single and multiple layers of neurons. Later, you will cover popular neural network architectures such as CNNs, RNNs, AEs, VAEs, and GANs with the help of simple examples, and learn how to build each model from scratch. Each chapter ends with a question-and-answer section to help you test what you've learned.
By the end of this book, you'll be well-versed with deep learning concepts and have the knowledge you need to use specific algorithms with various tools for different tasks.
Table of Contents (20 chapters)
Preface
Section 1: Getting Up to Speed
Introduction to Machine Learning
Setup and Introduction to Deep Learning Frameworks
Preparing Data
Learning from Data
Training a Single Neuron
Training Multiple Layers of Neurons
Section 2: Unsupervised Deep Learning
Autoencoders
Deep Autoencoders
Variational Autoencoders
Restricted Boltzmann Machines
Section 3: Supervised Deep Learning
Deep and Wide Neural Networks
Convolutional Neural Networks
Recurrent Neural Networks
Generative Adversarial Networks
Final Remarks on the Future of Deep Learning
Other Books You May Enjoy