7 Key Notions to Catch On to Deep Learning


Deep learning is a booming field of computer science at the cutting edge of machine learning and artificial intelligence, and if you do not happen to have a solid tech background, making heads or tails of it can be quite challenging. Yet all high-complexity concepts can be broken down into simpler ones, and deep learning is no different. It is built upon a set of ideas as its keystones, and in this article we are going to turn some of those stones from stumbling ones into stepping ones.

Artificial Intelligence (AI) is about making machines do things that are characteristic of human intelligence: playing chess, driving a car, diagnosing diseases, or keeping up a conversation. Humanity has been toying with the concept since olden times, and nowadays it has finally matured from sci-fi tales like The Terminator or The Matrix into a genuine, fully fledged science. In theory, there can be many different ways to achieve AI. In practice, however, these days everybody uses

Machine learning (ML) — an approach where, instead of giving the computer specific instructions on how to perform each and every task, we have it learn from experience, just as we humans learn to walk, speak, or sort our surroundings into animals, trees, buildings, and so on. The algorithms used to ‘train’ machines are numerous and varied, but deep learning is first and foremost powered by

Artificial neural networks (ANNs), which roughly mimic the way our brain works. In fact, deep learning relies on neural networks so heavily that the two terms are often used interchangeably. ANNs contain myriads of simulated neurons that form umpteen synaptic connections with one another. Each connection is assigned a number, or weight, that shows how strong that connection is.

Initial weights are chosen before training, typically as small random values, which is one of the crucial pre-launch steps. Then, during training, the model adjusts them, weakening some connections while strengthening others, so that the lowest possible error rate can be reached.
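To make this concrete, here is a minimal sketch of a single simulated neuron trained with gradient descent. The toy OR dataset, the sigmoid activation, and the learning rate are assumptions made for illustration, not details from the article:

```python
import numpy as np

def sigmoid(z):
    # Squashes any number into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy dataset: learn the logical OR of two inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 1], dtype=float)

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=2)  # initial weights: small random values
bias = 0.0
learning_rate = 0.5                      # assumed hyperparameter

for epoch in range(5000):
    # Forward pass: weighted sum of the inputs, then the activation.
    predictions = sigmoid(X @ weights + bias)
    # Error: how far the predictions are from the targets.
    error = predictions - y
    # Backward pass: nudge each weight against the error gradient,
    # strengthening some connections and weakening others.
    grad = predictions * (1 - predictions) * error
    weights -= learning_rate * (X.T @ grad) / len(X)
    bias -= learning_rate * grad.mean()

print(np.round(sigmoid(X @ weights + bias), 2))  # approaches [0, 1, 1, 1]
```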

ANNs have been around for quite some time, living through a full hype cycle followed by years of oblivion. It was the idea of stacking them into multilayered cascades that gave ANNs a second wind. Actually, ‘deep’ refers to the cascade depth: the more layers we add, the deeper our learning is. What’s so grand about piling neural networks on top of one another? This arrangement unlocks three important capabilities that are out of reach for traditional, or shallow, learning approaches:

Hierarchical abstraction. Given raw data (e.g. pixels) and a task requiring a high level of abstraction (determining whether the pixels show a face), the system promotes data from the lowest layer to the highest, with each successive layer converting the data into a slightly more abstract form than the previous one. The output of the first layer is the input of the second, and so on, higher and higher up the hierarchy, until the desired level of abstraction is reached (see the sketch after these three points). Thus, where conventional ML systems would break the task into parts, solve them one by one, and then glue the results together, deep learning solves the problem end to end.

Scalability. The more data we have and the faster our computers work, the better the results we get. Traditional ML solutions cannot boast such a steady performance gain as they scale up.

Automated feature extraction. To derive meaning from input data, the computer needs to organize it into a structure using some feature. For example, if we rank a bunch of random numbers in increasing order, the feature is the value of each number and the structure is the increasing series. Formerly, it was the human’s job to decide which features were important and to develop extractors that made the computer focus on them. Deep learning systems, by contrast, are trained to extract the key features on their own.
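To picture the layer-on-layer arrangement from the first point, here is a minimal sketch of a forward pass through a stack of fully connected layers. The layer sizes, the random stand-in weights, and the ReLU activation are assumptions made for the example, not details from the article:

```python
import numpy as np

def relu(z):
    # A common activation: keeps positive values, zeroes out negatives.
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# Assumed layer sizes: 784 raw inputs (e.g. 28x28 pixels) funneled down
# to a single "is this a face?" score. The weights are random stand-ins
# for whatever training would have produced.
layer_sizes = [784, 128, 32, 1]
layers = [
    (rng.normal(scale=0.05, size=(n_in, n_out)), np.zeros(n_out))
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
]

def forward(x):
    # The output of each layer is the input of the next: data climbs
    # the hierarchy, growing slightly more abstract at every step.
    for weights, bias in layers:
        x = relu(x @ weights + bias)
    return x

pixels = rng.random(784)       # stand-in for a raw image
print(forward(pixels).shape)   # (1,) — a single high-level score
```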

Surely, those are the very basics, and there is much more to learn about deep learning, such as its three major architectures: multilayer perceptron networks, convolutional neural networks, and recurrent neural networks. Yet, as the saying goes, a journey of a thousand miles begins with a single step.
