
An implementation with Python

Artificial Intelligence and its applications have opened innovative paths for societies and organizations. Today we can get a real-time transcript, in any language, of someone's speech, identify individuals via smart cameras, and so on.

Yet it all comes at a price: time and resources. Indeed, training the algorithms behind any AI system is no joke, especially in the computer vision field, where the algorithms of choice are Neural Networks, which can take days to train properly. Plus, in order to parallelize and speed up the process, one also needs special hardware powered by…


Source: https://www.istockphoto.com/it/immagine/balance?excludenudity=false&phrase=balance&sort=mostpopular

Customized Loss Function with Python

In the previous articles of this series, we introduced some techniques to deal with imbalanced data in binary classification tasks. Part 1 examined some resampling techniques; Part 2 focused on how to modify the algorithm by changing the threshold value.

In this Part 3, we are once more going to look at a way to intervene directly on the algorithm, this time on its loss function.

The idea is similar to that of the cutoff value examined in Part 2. Basically, given the following unbalanced situation:
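
That is, one can make errors on the rare class cost more than errors on the frequent one by weighting the loss accordingly. Below is a minimal sketch of such a customized loss in Keras, not the article's own code: the 9:1 class weights, the toy dataset and the tiny network are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Illustrative class weights: errors on the rare positive class cost 9x more.
W_POS, W_NEG = 9.0, 1.0

def weighted_bce(y_true, y_pred):
    """Binary cross-entropy in which each class contributes with its own weight."""
    y_true = tf.cast(y_true, tf.float32)
    eps = keras.backend.epsilon()
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    loss = -(W_POS * y_true * tf.math.log(y_pred) +
             W_NEG * (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    return tf.reduce_mean(loss)

# Toy imbalanced data: roughly 90% negatives, 10% positives.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5)).astype("float32")
y = (rng.random(1000) < 0.1).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(5,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss=weighted_bce,
              metrics=[keras.metrics.BinaryAccuracy()])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

Setting W_POS back to 1.0 recovers the standard, symmetric binary cross-entropy.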


Source: https://www.istockphoto.com/it/immagine/balance?excludenudity=false&phrase=balance&sort=mostpopular

Implementing different cutpoints with Python

In Part 1 of this series, I introduced the curse of class imbalance in binary classification tasks and some remedies to address it. More specifically, I focused on how to intervene directly on the dataset with different sampling techniques in order to make it more balanced.

In this article, I'm going to focus on one of a set of techniques that are applied directly to the training procedure rather than to the dataset. In particular, those techniques are:

  • Alternative cutoffs
  • Weighting instances with different values
  • Asymmetric loss function

Let’s examine the first one (next ones will…
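
As a quick illustration of the first technique: instead of labeling an instance as positive when its predicted probability exceeds the default 0.5, we can lower the cutoff so that the minority class is detected more often. A minimal scikit-learn sketch on a toy dataset follows; the 95/5 class split and the 0.2 cutoff are illustrative assumptions, not values from the article.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Toy imbalanced dataset: about 95% class 0, 5% class 1.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]  # predicted probability of the minority class

# Compare the default cutoff (0.5) with a lower, alternative cutoff (0.2).
for cutoff in (0.5, 0.2):
    y_pred = (proba >= cutoff).astype(int)
    print(f"cutoff={cutoff}: minority-class recall = {recall_score(y_test, y_pred):.2f}")
```

Lowering the cutoff typically raises recall on the minority class at the cost of more false positives.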


Source: https://www.istockphoto.com/it/immagine/balance?excludenudity=false&phrase=balance&sort=mostpopular

Re-Sampling procedures with Python

Whenever we set up a task for a Machine Learning model, the very first thing to do is analyze and reason about the data we are provided with and will be using for training/testing purposes. Indeed, it is often the case that, even before thinking about the model to use, we might need to restructure the dataset, or at least incorporate into the training procedure some mechanism to deal with the initial conditions of the data.

One of those conditions is that of unbalanced data, and in this article, I’m going to focus on unbalanced datasets within binary classification tasks.

The Curse of the Unbalanced Dataset

We face an imbalance in…
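
One of the simplest re-sampling remedies in such a situation is random oversampling: the minority class is sampled with replacement until the two classes have the same size. A minimal sketch (my own illustration, not the article's code), assuming a toy pandas DataFrame with a 950/50 split:

```python
import numpy as np
import pandas as pd
from sklearn.utils import resample

# Toy imbalanced dataset: 950 negatives vs. 50 positives (illustrative numbers).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "feature": rng.normal(size=1000),
    "target": np.r_[np.zeros(950, dtype=int), np.ones(50, dtype=int)],
})

majority = df[df.target == 0]
minority = df[df.target == 1]

# Random oversampling: draw minority rows with replacement
# until they match the majority class size.
minority_upsampled = resample(minority, replace=True,
                              n_samples=len(majority), random_state=42)
balanced = pd.concat([majority, minority_upsampled])
print(balanced.target.value_counts())  # now 950 vs. 950
```

Undersampling the majority class works the same way, with replace=False and n_samples=len(minority).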



A visual implementation with Python

Whenever we train a Machine Learning model, we need a set of tools that can give us an idea of how well our algorithm has performed. In the case of binary classification, the evaluation of performance can be pretty simple: we can directly count how many instances have been correctly labeled by our model. Ideally, we want to maximize this number.

Nevertheless, it is often not possible to do so without facing a trade-off in terms of other metrics. …
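
For instance, on a heavily skewed dataset a model that always predicts the majority class already reaches a high accuracy, which is why precision and recall are usually reported alongside it. A minimal scikit-learn sketch (the dataset and classifier are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

# Toy dataset with a 90/10 class split.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Accuracy alone can look flattering on skewed data;
# precision and recall make the trade-off explicit.
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
```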



In my previous article, I talked about Autoencoders and their applications, especially in the Computer Vision field.

Generally speaking, an autoencoder (AE) learns to represent some input information (in our case, input images) by compressing it into a latent space and then reconstructing the input from its compressed form into a new, auto-generated output image (again in the original domain space).

In this article, I'm going to focus on a particular class of Autoencoders introduced a bit later: Variational Autoencoders (VAEs).

VAEs are a variation of AEs in the sense that their main…
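
The key point is that in a VAE the encoder does not output a single latent code: it outputs the mean and log-variance of a Gaussian over the latent space, and the code is sampled from it via the reparameterization trick. A minimal Keras sketch of that encoder head (the 784-dimensional input and the 2-dimensional latent space are illustrative assumptions, not the article's code):

```python
import tensorflow as tf
from tensorflow import keras

LATENT_DIM = 2  # illustrative latent dimensionality

class Sampling(keras.layers.Layer):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder head: predicts the parameters of a distribution over the latent space
# instead of a single compressed vector, as a plain AE would do.
inputs = keras.Input(shape=(784,))  # e.g. a flattened 28x28 image
h = keras.layers.Dense(64, activation="relu")(inputs)
z_mean = keras.layers.Dense(LATENT_DIM)(h)
z_log_var = keras.layers.Dense(LATENT_DIM)(h)
z = Sampling()([z_mean, z_log_var])

encoder = keras.Model(inputs, [z_mean, z_log_var, z], name="vae_encoder")
encoder.summary()
```

A decoder then maps z back to the input space, and training adds a KL-divergence term to the reconstruction loss.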



GoogLeNet is a 22-layer deep convolutional network whose architecture was presented at the ImageNet Large-Scale Visual Recognition Challenge in 2014 (main tasks: object detection and image classification). You can read the official paper here.

The main novelty in the architecture of GoogLeNet is the introduction of a particular module called Inception.

To understand why this introduction was such an innovation, we should spend a few words on the architecture of standard Convolutional Neural Networks (CNNs) and the common trade-off the scientist has to make while building them. …
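
The gist of the Inception module is to stop choosing a single kernel size per layer: several filter sizes are applied in parallel to the same input and their outputs are concatenated along the channel axis. A simplified Keras sketch of such a block (the filter counts are illustrative assumptions; the actual GoogLeNet module also places 1x1 convolutions before the 3x3 and 5x5 branches to reduce dimensionality):

```python
from tensorflow import keras
from tensorflow.keras import layers

def inception_block(x, f1, f3, f5, fpool):
    """Parallel 1x1, 3x3 and 5x5 convolutions plus a pooling branch,
    concatenated along the channel axis (simplified version)."""
    branch1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    branch3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(x)
    branch5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(x)
    pooled = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    pooled = layers.Conv2D(fpool, 1, padding="same", activation="relu")(pooled)
    return layers.Concatenate()([branch1, branch3, branch5, pooled])

inputs = keras.Input(shape=(224, 224, 3))     # ImageNet-style input size
x = inception_block(inputs, 64, 128, 32, 32)  # illustrative filter counts
model = keras.Model(inputs, x)
model.summary()
```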


source: https://blog.keras.io/building-autoencoders-in-keras.html

An implementation with Python

In my previous article, I talked about Generative Adversarial Networks, a class of Deep Learning algorithms belonging to the category of generative models.

In this article, I'm going to introduce another way of generating new images, based on a different approach. The algorithms we are going to use for this purpose belong to the class of Autoencoders. …
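
As a preview, and in the spirit of the Keras blog post cited above, the simplest autoencoder is just an encoding layer that compresses the input and a decoding layer that reconstructs it, trained with the input as its own target. A minimal sketch (the 784-dimensional input and the 32-dimensional code are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

INPUT_DIM, CODE_DIM = 784, 32  # e.g. a flattened 28x28 image and its compressed code

inputs = keras.Input(shape=(INPUT_DIM,))
encoded = layers.Dense(CODE_DIM, activation="relu")(inputs)       # encoder: compress
decoded = layers.Dense(INPUT_DIM, activation="sigmoid")(encoded)  # decoder: reconstruct

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Training would use the input as its own target, e.g.:
# autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
#                 validation_data=(x_test, x_test))
autoencoder.summary()
```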



The architecture behind

Neural Networks are a class of algorithms widely used in Deep Learning, the field of Machine Learning whose aim is learning data representations via deep, multi-layer algorithms.

So, by definition, Neural Networks are made of multiple layers, each one extracting progressively more meaningful insights from the input data (if you want to learn more about NNs, you can read my former article here).

Convolutional Neural Networks are nothing but Neural Networks that employ, in at least one layer, the mathematical operation of convolution. …
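
In practice, a single convolutional layer is already enough to make a network "convolutional". A minimal Keras sketch (input shape and layer sizes are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

# A network with one Conv2D layer already qualifies as a CNN.
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),                       # e.g. grayscale 28x28 images
    layers.Conv2D(16, kernel_size=3, activation="relu"),  # the convolution layer
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),               # e.g. 10 output classes
])
model.summary()
```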


An implementation with Python

Image segmentation is a Computer Vision technique whose goal is to partition a given image into segments, in order to output a data representation that is more meaningful than the original image.

There are two ways we can perform image segmentation:

  • Semantic segmentation → it treats all the items belonging to the same semantic category as a single instance. Namely, it will place within the same segment all the pixels representing dogs.
source: https://www.warrenphotographic.co.uk/11238-two-pomeranians-with-paws-over
  • Instance segmentation → it treats each item as a unique instance, segmenting each one into its own region.
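
Both flavors are normally obtained with trained networks (fully convolutional models for semantic segmentation, Mask R-CNN-style models for instance segmentation). Just to illustrate the bare idea of partitioning an image into segments, here is a minimal, self-contained sketch that clusters pixels by color; the synthetic image and the use of K-means are my own illustrative choices, not the article's method:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic 100x100 RGB image: a reddish square on a dark background.
img = np.zeros((100, 100, 3), dtype=float)
img[30:70, 30:70] = [0.9, 0.2, 0.2]
img += np.random.default_rng(0).normal(scale=0.05, size=img.shape)

# Cluster the pixels by color: each cluster becomes one segment.
pixels = img.reshape(-1, 3)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
segments = labels.reshape(img.shape[:2])

print(np.unique(segments, return_counts=True))  # pixel count per segment
```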

Valentina Alto

Cloud Specialist at @Microsoft | MSc in Data Science | Machine Learning, Statistics and Running enthusiast
