
In my previous article, I talked about Autoencoders and their applications, especially in the field of Computer Vision.

Generally speaking, an autoencoder (AE) learns to represent input data (in our case, images) by compressing it into a latent space, then reconstructing the input from its compressed form into a new, auto-generated output image (back in the original domain space).

In this article, I’m going to focus on a particular class of Autoencoders introduced somewhat later: Variational Autoencoders (VAEs).

VAEs differ from plain AEs in that their main job is to learn a probabilistic model, living in the latent feature space of the compressed image, from which one can sample and generate new images. …
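To make the sampling idea concrete, here is a minimal NumPy sketch of the so-called reparameterization trick that VAEs use to draw latent codes. The shapes and the `reparameterize` helper are illustrative, not taken from any specific library:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps, with eps ~ N(0, I).

    This keeps the sampling step differentiable with respect to
    mu and log_var, which is what lets a VAE be trained end to end."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Pretend the encoder mapped a batch of 4 images to a 2-D latent space.
mu = np.zeros((4, 2))       # predicted latent means
log_var = np.zeros((4, 2))  # predicted log-variances (sigma = 1 here)

z = reparameterize(mu, log_var, rng)  # latent codes to feed the decoder
print(z.shape)  # (4, 2)
```

Each call draws a fresh `z` around the predicted mean, which is exactly what makes the decoder produce new, slightly different images from the same neighborhood of the latent space.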



GoogLeNet is a 22-layer deep convolutional network whose architecture was presented at the ImageNet Large Scale Visual Recognition Challenge in 2014 (main tasks: object detection and image classification). You can read the official paper here.

The main novelty in the architecture of GoogLeNet is the introduction of a particular module called Inception.

To understand why this introduction was such an innovation, we should say a few words about the architecture of standard Convolutional Neural Networks (CNNs) and the common trade-off one has to make while building them. …
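As a taste of that trade-off, here is a back-of-the-envelope parameter count showing why the Inception module’s 1×1 “bottleneck” convolutions are so effective. The channel sizes below are loosely modeled on one of the paper’s inception modules; treat them as an illustration, not a spec:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution (biases ignored for simplicity)."""
    return k * k * c_in * c_out

# Direct 5x5 convolution: 192 input channels -> 32 output channels
direct = conv_params(5, 192, 32)

# Inception-style bottleneck: 1x1 reduction to 16 channels, then 5x5
bottleneck = conv_params(1, 192, 16) + conv_params(5, 16, 32)

print(direct)      # 153600
print(bottleneck)  # 15872
```

Roughly a tenfold reduction in weights for the same receptive field, which is what lets Inception run several kernel sizes in parallel without exploding the compute budget.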


source: https://blog.keras.io/building-autoencoders-in-keras.html

An implementation with Python

In my previous article, I talked about Generative Adversarial Networks, a class of Deep Learning algorithms which belong to the category of generative models.

In this article, I’m going to introduce another way of generating new images, yet with a different approach. The algorithms we are going to use for this purpose belong to the class of Autoencoders. …
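As a warm-up, the compress-then-reconstruct idea can be sketched with a purely linear autoencoder, whose optimal encoder/decoder turn out to be the top principal directions of the data. This is a toy NumPy illustration, not the Keras implementation the article builds:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.standard_normal((100, 20))  # 100 samples, 20 features

def linear_ae_reconstruct(X, k):
    """Compress X into a k-dimensional latent space and reconstruct it.

    For a linear autoencoder with tied weights, the optimal
    encoder/decoder are the top-k principal directions (via SVD)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    W = Vt[:k]                     # decoder rows = top-k directions
    Z = Xc @ W.T                   # encode: project into latent space
    return Z @ W + X.mean(axis=0)  # decode: map back to input space

err5 = np.linalg.norm(X - linear_ae_reconstruct(X, 5))
err15 = np.linalg.norm(X - linear_ae_reconstruct(X, 15))
print(err15 < err5)  # larger latent space -> smaller reconstruction error
```

A nonlinear, multi-layer autoencoder generalizes exactly this picture: a smaller latent space forces a lossier but more abstract compression.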



The architecture behind

Neural Networks are a class of algorithms widely used in Deep Learning, the field of Machine Learning whose aim is to learn data representations via multi-layer, deep models.

So by definition, Neural Networks are made of multiple layers, each one extracting progressively more meaningful insight from the input data (if you want to learn more about NNs, you can read my former article here).

Convolutional Neural Networks are simply Neural Networks that employ, in at least one layer, the mathematical operation of convolution. …
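To see the operation itself, here is a naive NumPy implementation of a “valid” 2-D convolution (more precisely, the cross-correlation that deep learning frameworks call convolution):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image
    with a kernel: slide the kernel over every position and
    take the elementwise product-sum."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0  # 3x3 averaging filter
print(conv2d(image, kernel))    # [[5. 6.] [9. 10.]]
```

In a CNN, the kernel entries are not fixed like this averaging filter: they are the weights the network learns during training.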


An implementation with Python

Image segmentation is a technique used in Computer Vision whose goal is to partition a given image into segments, in order to output a data representation that is more meaningful than the original image.

There are two ways we can perform image segmentation:

  • Semantic segmentation → it treats as the same instance all the items which belong to the same semantic category. For example, all the instances representing dogs fall within the same segment.
source: https://www.warrenphotographic.co.uk/11238-two-pomeranians-with-paws-over
  • Instance segmentation → it treats each item as a unique instance, segmenting each one into its own region.
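The distinction can be sketched in a few lines of NumPy, using a hypothetical mask where two dogs carry the instance labels 1 and 2:

```python
import numpy as np

# Hypothetical instance mask: 0 = background, 1 and 2 = two different dogs
instance_mask = np.array([
    [0, 1, 1, 0],
    [0, 1, 0, 2],
    [0, 0, 2, 2],
])

# Semantic segmentation collapses both dogs into a single 'dog' class (label 1)
semantic_mask = (instance_mask > 0).astype(int)

print(np.unique(instance_mask))  # [0 1 2] -> two separate instances
print(np.unique(semantic_mask))  # [0 1]   -> one semantic class
```

Real segmentation models predict such per-pixel masks directly; this toy example only shows how the two labelings relate.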



An implementation with Python

Object Detection is a technique used in Computer Vision that aims at localizing and classifying objects in images or video. The main idea is to build bounding boxes that envelop specific objects, then apply a classification algorithm (in Computer Vision, typically from the CNN family) to each bounded object, in order to classify it with a probabilistic output.

So as you can see, there are two main steps in an object detection task: building the bounding boxes and classifying objects within them. …
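A standard building block of the bounding-box step is Intersection over Union (IoU), which measures how well a predicted box overlaps a ground-truth one. A minimal sketch, with boxes given as illustrative `(x1, y1, x2, y2)` tuples:

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # areas 4 and 4, overlap 1 -> 1/7
```

Detectors use IoU both to match predicted boxes to ground truth during training and to discard duplicate detections at inference time.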



An implementation with Python

Whenever we develop knowledge within a given field, we are not only specializing our skills in that specific field, but also developing more generic and abstract mental tools that will speed up the process of learning new tasks, even in very different domains of knowledge.

Probably the most common example (at least for Italian students who attended a classical high school) is the study of ancient Greek and its relation to Maths.

Two domains apparently far from each other (literature versus science) have a common denominator: the way of reasoning needed to get to a result. In other words, when we learn Greek grammar, we are also learning how to learn very complex topics, and we will gladly re-use this knowledge in other fields, such as Maths. …



A gentle introduction

Generative Adversarial Networks (GANs) are a class of algorithms used in Deep Learning which belong to the category of generative models.

By “generative models” we mean those models whose main goal is to describe, in terms of a probabilistic rule, how a dataset is generated. Whenever we sample from the obtained probabilistic rule, we end up with a new dataset that is different from the original one, yet seems to be generated by the same mechanism as the former.

GANs were first introduced in the 2014 paper by I. Goodfellow et al., “Generative Adversarial Networks”. Before deep-diving into the architecture of those models, let’s first have a look at the main idea behind them. …
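As a preview of that idea, the two networks optimize opposing objectives. Here is a toy sketch of the standard cross-entropy losses, evaluated at a few hypothetical discriminator outputs (numbers made up for illustration):

```python
import math

def d_loss(d_real, d_fake):
    """Discriminator's loss: it wants D(real) -> 1 and D(fake) -> 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def g_loss(d_fake):
    """Generator's (non-saturating) loss: it wants to fool D into D(fake) -> 1."""
    return -math.log(d_fake)

# Early in training: D easily spots the fakes, so G's loss is high
print(d_loss(0.9, 0.1), g_loss(0.1))

# At the theoretical equilibrium D outputs 0.5 everywhere
print(d_loss(0.5, 0.5))  # = 2 * log(2) ~ 1.386
```

Training alternates gradient steps on these two losses until, ideally, the discriminator can no longer tell generated samples from real ones.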


source: https://en.wikipedia.org/wiki/Monty_Hall_problem#/media/File:Monty_open_door.svg

An intuitive explanation

The Monty Hall Problem, named after its original host, is a statistical puzzle whose resolution is not that intuitive if you’re not very familiar with probability theory (or, as in my case, even if you are).

In this article, I’m going to present a proof of this “counterintuitive” result (which we will focus on in a minute), together with an implementation in Python.

In the Monty Hall problem, the following scenario is described. You, the player, are facing three closed doors. Behind two of them, there are two goats (one for each door), while behind the third one there is a car. …
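The famous result (switching wins about two times out of three) can also be checked empirically with a short Monte Carlo simulation; the `monty_hall` helper below is an illustrative sketch, not code from the article:

```python
import random

def monty_hall(n_trials, switch, seed=0):
    """Simulate n_trials games and return the fraction of wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # player's initial choice
        # The host opens a door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the only remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / n_trials

print(monty_hall(100_000, switch=True))   # ~ 2/3
print(monty_hall(100_000, switch=False))  # ~ 1/3
```

With 100,000 trials the estimates land very close to the theoretical 2/3 and 1/3, which is a reassuring sanity check before working through the proof.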



From Random Forest to LightGBM

Decision Trees are popular Machine Learning algorithms used for both regression and classification tasks. Their popularity mainly arises from their interpretability and representability, as they mimic the way the human brain makes decisions.

However, to be interpretable, they pay a price in terms of prediction accuracy. To overcome this drawback, some techniques have been developed with the goal of creating strong and robust models starting from ‘weak’ models. Those techniques are known as ‘ensemble’ methods (I discussed some of them in my previous article here).

In this article, I’m going to examine four different ensemble techniques, all using Decision Trees as base learners, with the aim of comparing their performance in terms of accuracy and training time. …
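To see why combining weak models helps at all, here is a toy numeric illustration of the variance-reduction mechanism behind bagging-style ensembles such as Random Forest (the data is simulated, not from any real benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = 3.0

# 200 'weak' predictions: unbiased but individually noisy base learners
weak_preds = y_true + rng.standard_normal(200)

# Ensemble prediction: average the base learners, as bagging does
ensemble_pred = weak_preds.mean()

weak_error = np.abs(weak_preds - y_true).mean()
ensemble_error = abs(ensemble_pred - y_true)
print(ensemble_error < weak_error)  # averaging cancels much of the noise
```

Averaging only cancels noise when the base learners' errors are not perfectly correlated, which is why Random Forest goes out of its way (bootstrap samples, random feature subsets) to decorrelate its trees.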

About

Valentina Alto

Cloud Specialist at @Microsoft | MSc in Data Science | Machine Learning, Statistics and Running enthusiast
