Posted in 2016
Exercise on sparse autoencoders
- 03 May 2016
This exercise is the first in a series of posts I am writing for those who want a mathematical and hands-on introduction to deep neural networks.
Read the series of notes on the topic of “Sparse Autoencoder” in the UFLDL Tutorial.
Exercise on deep neural networks
- 03 May 2016
Read the notes and complete the exercises for the section on “Building Deep Networks for Classification” in the UFLDL Tutorial. Complete all the programming tasks in Python. No starter code will be given for these exercises, but you may refer to the provided MATLAB code for hints if you are stuck.
Hashing
- 04 February 2016
Hashing is a method for compressing information from a high-dimensional space into a much smaller space. It is used throughout computer science for tasks such as duplicate detection and fast lookup. For instance, if two documents are (randomly) hashed to the same code, it is very likely that they are exactly the same. In computer vision, we sometimes hash images in a clever way so that similar or related images can be found by comparing their codes.
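As a minimal sketch of these two uses, the Python snippet below first hashes documents with a standard cryptographic hash to detect exact duplicates, and then builds a similarity-preserving binary code using random hyperplanes (one common locality-sensitive hashing scheme; the specific construction here is my illustrative choice, not necessarily the one the tutorial or post has in mind).

```python
import hashlib
import numpy as np

# --- Exact duplicates: identical documents get identical codes. ---
def doc_hash(text: str) -> str:
    """Compress a document into a short fixed-length code (SHA-256 hex digest)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

doc_a = "the quick brown fox"
doc_b = "the quick brown fox"
doc_c = "a completely different document"
print(doc_hash(doc_a) == doc_hash(doc_b))  # True: same content, same code
print(doc_hash(doc_a) == doc_hash(doc_c))  # False (with overwhelming probability)

# --- Similar items: random-hyperplane hashing of feature vectors (illustrative). ---
rng = np.random.default_rng(0)
n_bits, dim = 16, 128                      # code length; feature dimension (assumed)
planes = rng.standard_normal((n_bits, dim))

def lsh_code(x: np.ndarray) -> np.ndarray:
    """Binary code given by the sign pattern of x against random hyperplanes.
    Nearby vectors (e.g. image features) tend to differ in only a few bits."""
    return (planes @ x > 0).astype(np.uint8)

x = rng.standard_normal(dim)
y = x + 0.05 * rng.standard_normal(dim)    # a small perturbation of x
z = rng.standard_normal(dim)               # an unrelated vector
print(np.sum(lsh_code(x) != lsh_code(y)))  # few differing bits
print(np.sum(lsh_code(x) != lsh_code(z)))  # roughly n_bits / 2 differing bits
```

The Hamming distance between codes then serves as a cheap proxy for similarity in the original high-dimensional space.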
Hashing goes all the way back to Shannon, the father of information theory, who used random hashes in his source coding theorem. There are also interesting connections to compressed sensing that have not yet been fully explored.