source: https://blog.keras.io/building-autoencoders-in-keras.html


Autoencoders in Computer Vision

An implementation with Python

Valentina Alto
5 min read · Jan 6, 2021


In my previous article, I talked about Generative Adversarial Networks (GANs), a class of deep learning algorithms belonging to the category of generative models.

In this article, I’m going to introduce another way of generating new images, with a different approach. The algorithms we are going to use for this purpose belong to the class of Autoencoders. Generally speaking, an autoencoder (AE) learns to represent some input information (in our case, input images) by compressing it into a latent space, then reconstructing the input from its compressed form into a new, auto-generated output image (back in the original domain space).

The first step of the job (compressing the information into a latent space) is done by an encoder, while the decompression phase is done by a decoder. Performance is evaluated by measuring the distance between the original and the reconstructed data (in the case of computer vision, images).
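The encode-compress-decode pipeline can be sketched as two maps: one projecting the input down to the latent space, one projecting it back up. In a real autoencoder these weights are learned by training a network; in the sketch below they are random, purely to illustrate the shapes involved (the dimensions are assumptions, not values from the article):

```python
import random

random.seed(0)

INPUT_DIM = 64   # e.g. a flattened 8x8 grayscale image (illustrative choice)
LATENT_DIM = 8   # compressed latent representation

# Random matrices standing in for learned encoder/decoder weights.
enc_w = [[random.uniform(-0.1, 0.1) for _ in range(INPUT_DIM)]
         for _ in range(LATENT_DIM)]
dec_w = [[random.uniform(-0.1, 0.1) for _ in range(LATENT_DIM)]
         for _ in range(INPUT_DIM)]

def encode(x):
    # Encoder: compress the input vector into the latent space.
    return [sum(w * xi for w, xi in zip(row, x)) for row in enc_w]

def decode(z):
    # Decoder: map the latent code back to the input space.
    return [sum(w * zi for w, zi in zip(row, z)) for row in dec_w]

image = [random.random() for _ in range(INPUT_DIM)]
latent = encode(image)            # length LATENT_DIM
reconstruction = decode(latent)   # length INPUT_DIM

print(len(latent), len(reconstruction))  # 8 64
```

The key point is the bottleneck: the latent vector is much smaller than the input, so the decoder must reconstruct the image from a compressed summary rather than a copy.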

As you can see, in the field of computer vision the main difference between GANs and AEs is the way their performance is measured. With GANs, the generator aims at fooling the discriminator by generating images that are likely to be judged real rather than generated. With autoencoders, on the other hand, the performance of the algorithm…
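The reconstruction-based evaluation the article contrasts with the GAN setup is typically a pixel-wise distance such as mean squared error. A minimal sketch (function name and sample values are hypothetical):

```python
def mse(original, reconstructed):
    """Mean squared error between two equal-length pixel vectors."""
    assert len(original) == len(reconstructed)
    return sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)

# A perfect reconstruction has zero error; any deviation raises it.
perfect = mse([0.2, 0.5, 0.9], [0.2, 0.5, 0.9])
imperfect = mse([0.2, 0.5, 0.9], [0.1, 0.5, 0.9])
print(perfect, imperfect)
```

During training, the autoencoder minimizes exactly this kind of distance between input and output, whereas a GAN generator minimizes the discriminator's ability to tell its outputs apart from real images.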


Written by Valentina Alto

Data&AI Specialist at @Microsoft | MSc in Data Science | AI, Machine Learning and Running enthusiast
