A gentle introduction to the concepts of machine learning

Machine learning has attracted enormous attention, and the sophisticated problems it solves can make it seem like magic. But it is not. It is built on a foundation of mathematics and statistics developed over several decades.

A very practical way to think about machine learning is as a distinctive approach to computer programming. Most programming that isn't machine learning (including virtually every practical program humans have written over the past 50 years) is procedural: it is essentially a set of rules defined by humans. This set of rules is called an algorithm.

In machine learning, the underlying algorithms are still chosen or designed by humans. However, the algorithms learn the parameters that shape the predictive mathematical model from data, not from direct human intervention. Humans do not know or set those parameters; the machine does. In other words, a data set is used to train a mathematical model so that, when the model sees similar data in the future, it knows what to do with it. A model typically takes data as input and generates predictions of interest as output.
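To make the contrast concrete, here is a minimal sketch in Python (assuming the scikit-learn library; neither the library nor the toy example comes from the original discussion). A human could write the Celsius-to-Fahrenheit rule directly; a model instead recovers the same parameters purely from input-output samples.

    # A human-written rule versus parameters learned from data.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Procedural: a human encodes the rule (the algorithm) directly.
    def to_fahrenheit(celsius):
        return celsius * 9 / 5 + 32

    # Machine learning: the human picks the model family (linear), but the
    # parameters (slope and intercept) are learned from input-output samples.
    celsius = np.array([[0.0], [10.0], [20.0], [30.0], [40.0]])
    fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])

    model = LinearRegression().fit(celsius, fahrenheit)
    print(model.coef_[0], model.intercept_)  # ~1.8 and ~32.0, found from data
    print(model.predict([[25.0]]))           # ~77.0, an input it never saw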

Executives don't have to be machine learning experts, but even a little knowledge goes a long way. If you understand the basic kinds of problems ML can tackle, you'll know where to start and what to ask, and you won't have to blindly tell your tech team to "do some magic" and hope they succeed. In this post, we will give you just enough knowledge to be dangerous. We'll start with machine learning techniques you may have heard of, address a fundamental ML challenge, dive into deep learning, and discuss the physical, computational realities that make it all possible. Overall, we hope your interactions with data scientists and engineers become somewhat more fruitful.


Machine learning techniques

Machines learn in different ways, with varying amounts of "supervision": supervised, unsupervised, and semi-supervised. Supervised learning is the most widely deployed form of ML and also the easiest to grasp. Unsupervised learning, however, does not require labeled data and arguably has even more potential use cases.
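For contrast with the supervised examples that follow, here is a minimal, hypothetical sketch of unsupervised learning using scikit-learn's KMeans (the points are synthetic, chosen only for illustration). No labels are given; the algorithm finds the groups on its own.

    # Unsupervised learning: grouping unlabeled points into clusters.
    import numpy as np
    from sklearn.cluster import KMeans

    # Four unlabeled 2-D points: two near (1, 1) and two near (8, 8).
    points = np.array([[1.0, 1.1], [0.9, 1.0], [8.0, 8.2], [7.9, 8.1]])

    # The algorithm discovers the two groups with no human-supplied answers.
    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(points)
    print(clusters)  # e.g. [0 0 1 1]: structure found without supervision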

Machines often learn from sample data that contains both an example input and an example output. For example, in one data-sample pair the input might be an individual's credit history, and the corresponding output the associated credit risk (either assigned by humans or based on historical results). Given enough of these input-output samples, the machine learns to build a model that is consistent with the data on which it is trained.

From there, the model can be applied to new data it has never seen before – in this case, a new individual's credit history. After learning from sample data, the model applies what it has learned to the real world.

This class of machine learning is called "supervised learning" because the desired output is given in advance, and the model is "supervised" as it learns the associated model parameters. Humans know the correct answer and monitor the model as it learns how to find it. Because humans must label all the data, supervised learning is a time-intensive process.
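Here is a minimal sketch of this supervised workflow on the credit-risk example; the features, labels, and model choice below are illustrative assumptions, not a real credit model.

    # Supervised learning: input-output samples train a model, which is then
    # applied to an individual it has never seen.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is a (made-up) credit history:
    # [years of history, number of late payments]. Labels: 1 = high risk.
    X = np.array([[2, 5], [10, 0], [1, 3], [8, 1], [3, 4], [12, 0]])
    y = np.array([1, 0, 1, 0, 1, 0])  # answers supplied by humans: supervision

    model = LogisticRegression().fit(X, y)  # parameters learned from samples
    print(model.predict([[6, 2]]))          # predicted risk class for new data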

Classification

The goal of a classification problem is to determine which group a given input belongs to. For example, a medical model might classify a case as "disease present" or "disease absent". Another classic example is sorting animal pictures into a cat group and a dog group.

The machine is trained on data with many instances of an input (such as an image of an animal) and an associated, usually human-labeled, output (such as "cat" or "dog"). Train the model on a million pictures of dogs and cats, and it should be able to classify a picture of a new dog that was not in the training data.
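Real image classifiers learn from raw pixels, typically with deep networks; the toy sketch below swaps the pixels for two invented per-photo features so the classification workflow stays visible.

    # Classification: assigning a new input to one of several discrete groups.
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical features per animal photo: [ear pointiness, snout length].
    features = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]
    labels = ["cat", "cat", "dog", "dog"]

    classifier = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
    print(classifier.predict([[0.85, 0.25]]))  # -> ['cat'] for an unseen photo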


Regression

Like classification, regression is about inputs and related outputs. But where classification outputs are discrete categories (cat, dog), regression outputs a continuous number. In other words, the answer is not a 0 or a 1 but a value on a sliding scale. For example, given a radiological image, a model might predict how many more years the patient is likely to remain healthy.
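A minimal sketch to match: the model below outputs a continuous number of years rather than a class label. The single "imaging score" feature and all of the data are hypothetical.

    # Regression: predicting a continuous value instead of a discrete class.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[0.1], [0.3], [0.5], [0.7], [0.9]])  # made-up imaging scores
    y = np.array([18.0, 14.0, 9.0, 5.0, 2.0])          # years healthy observed

    model = LinearRegression().fit(X, y)
    print(model.predict([[0.6]]))  # a continuous prediction, roughly 7.5 years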
