Decoding machine learning

As it grows up, humanity develops increasingly powerful tools to do things faster, easier, and better. Buildings protect us from the elements; vehicles whiz back and forth to boost our mobility; farming technologies feed us, though few people farm today.

Apart from food and safety, however, a human being also has information needs. Every day we have to deal with data: store it, exchange it, and derive meaning from it.

For quite some time now, computers have been there to help. They store tons of information and can communicate it in no time to the other side of the world. Assign a task to a computer, explain exactly what you want done, give instructions on how to do it, and the machine will manage far better than any human.

The only catch is that some tasks are easier done than explained. It is much simpler, for example, to walk a hundred meters than to talk through all the mechanics of a single step, or to describe exactly which features you rely on to tell a dog from, say, a wolf.

Such skills are gained through practice and countless repetitions, usually when we are still kids. Think about how babies learn to speak. Nobody gives them instructions like “To create the ‘m’ sound, press your lips together, blocking the air from leaving your mouth. Make sure the soft palate drops, allowing air to pass out through the nose…” We just let them hear us speaking, and one fine day they utter the miraculous ‘mama’.

Back to computers. Machine learning is an advanced branch of data science that uses a lot of intricate math to deliver artificial intelligence, but the core idea is the same as with babies: instead of instructing machines how to perform each specific task, we let them learn from experience.

Thus, in machine learning, computers effectively program themselves, so we don’t have to spell out the details of how to accomplish each task.

To achieve this, various algorithms are used, such as linear regression, classification and regression trees, Naive Bayes, K-Nearest Neighbors (KNN), Support Vector Machines (SVM), and artificial neural networks. Many of them have been known for decades, but only recently have we had enough computing power and data to make them work well.
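To make this a bit more concrete, here is a minimal sketch (my own illustration, not part of the original article) that tries a few of the algorithms named above on the same toy problem using the scikit-learn library; the synthetic dataset and parameter choices are placeholders picked purely for demonstration.

# A hedged sketch: several classic algorithms learning the same task from examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for real measurements: 500 samples, 10 features, 2 classes.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(),
}

for name, model in models.items():
    model.fit(X_train, y_train)             # learn a pattern from the training examples
    accuracy = model.score(X_test, y_test)  # check it on data the model has not seen
    print(f"{name}: accuracy = {accuracy:.2f}")

The point is not which model wins on this toy data, but that each of them derives its own mapping from examples rather than from hand-written rules.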

Today, ML powers all kinds of solutions ranging from targeted ads, spam filtering and fraud detection to speech and image recognition, self-driving cars and medical decision-making.

Machine learning algorithms are generally divided into two major types:

Supervised learning, or learning by example, is the most common approach. The computer ‘practices’ on a set of training examples to learn a pattern, or function, that best maps the inputs to the outputs. Once the program has found the pattern and we have checked that it works for the known inputs, we can use it to analyze new data. The output can be a yes/no answer to questions like “Will this user like this video?” or “Is this email spam?”, or a trend prediction, such as “What will the weather be like next week in this city?” or “What will this company’s market value be in a year?” (A short code sketch contrasting supervised and unsupervised learning follows these two descriptions.)

Unsupervised learning is where you only have input data and no corresponding output values; there is no teacher to correct you. The aim is to recognize previously unknown patterns in the data and derive rules from them. Unsupervised learning is successfully applied in cybersecurity and natural language processing, and it is also handy in social media engines that suggest people to befriend.
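To contrast the two types in code, here is another minimal sketch (again my own illustration with scikit-learn, not from the article): the supervised model receives both the inputs and the known answers, while the unsupervised one has to find structure in the inputs alone; the toy dataset and model choices are assumptions made for demonstration only.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier

# Toy data: 300 two-dimensional points grouped around 3 centres.
X, y = make_blobs(n_samples=300, centers=3, random_state=42)

# Supervised: the algorithm gets inputs X and the known answers y,
# and learns a mapping it can apply to new, unseen points.
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("supervised prediction for a new point:", clf.predict([[0.0, 0.0]]))

# Unsupervised: only the inputs X, no answers and no teacher.
# The algorithm has to discover the structure (here, clusters) on its own.
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("unsupervised cluster labels found:", np.unique(km.labels_))

The classifier is judged by how well its answers match the labels we supplied; the clustering has no labels to match, so it is judged by how sensible the discovered groups turn out to be.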

To give a few real-world examples, some of the most talked-about ML technologies today are:

Uber’s Michelangelo, a fundamental decision-making tool for the transportation giant’s business,

Apple’s Siri, a personal assistant with a voice recognition system that imitates human interaction,

IBM’s Watson, a question-answering computer system that won $1 million on the Jeopardy! quiz show against legendary champions of the game,

Google’s AlphaZero, a board-game engine that within 24 hours mastered Go, chess, and shogi well enough to beat world-champion programs.