Section Recap
Introduction
This short lesson summarizes key takeaways from section 44.
Objectives
You will be able to:
- Understand and explain what was covered in this section
- Understand and explain why this section will help you become a data scientist
Key Takeaways
The key takeaways from this section include:
- Autoencoders are unsupervised neural networks that are useful in the context of data compression
- Autoencoders are networks that aim to produce an output identical to their input, while compressing the input into a lower-dimensional code, often called the latent representation
- The compressed representation can be seen as a "bottleneck"
- Besides compressing the input and reconstructing the output, the hidden layer of an autoencoder can also learn useful features of the input data
- There are four hyperparameters to set before training an autoencoder: code size, number of layers, number of nodes per layer, and the loss function
- Just like with other neural networks, you can have simple, shallow autoencoders and deep autoencoders
- Denoising Autoencoders (DAEs) are used to remove noise from input data and create "clean" outputs
- Convolutional Autoencoders combine CNNs with autoencoders, enabling applications such as image reconstruction and image colorization
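To make the takeaways above concrete, here is a minimal NumPy sketch of a linear autoencoder (not any specific implementation from this section). It shows the hyperparameters mentioned above in action: a code size of 2 acting as the bottleneck, one encoder and one decoder layer, and mean squared error as the loss function. The data, dimensions, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie on a
# 2-D subspace, so a code size of 2 can reconstruct them well.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing

# Hyperparameters from the takeaways: code size, number of layers
# (one encoder + one decoder layer here), nodes per layer, and the
# loss function (mean squared error).
code_size = 2
W_enc = rng.normal(scale=0.1, size=(8, code_size))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(code_size, 8))  # decoder weights

def forward(X):
    code = X @ W_enc      # bottleneck: compressed representation
    recon = code @ W_dec  # reconstruction of the input
    return code, recon

_, recon = forward(X)
loss_before = np.mean((recon - X) ** 2)

# Plain gradient descent on the MSE reconstruction loss.
lr = 0.01
for _ in range(500):
    code, recon = forward(X)
    err = recon - X  # gradient of MSE w.r.t. recon (up to a constant)
    grad_dec = code.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

_, recon = forward(X)
loss_after = np.mean((recon - X) ** 2)
print(f"loss before: {loss_before:.4f}, after: {loss_after:.4f}")
```

Because input and target are the same array `X`, no labels are needed, which is what makes the training unsupervised; the 2-column `code` matrix is the compressed representation the bottleneck learns.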