LeapMind BLOG

The magic of deep learning

Today I will provide a comprehensive introduction to Machine Learning (engineers with machine learning experience are welcome to jump to the next blog post). In my honest opinion, understanding the philosophy behind it gives a better overview than the detailed mathematical formulas of specific algorithms. I will also outline how developments in Machine Learning lead us directly to Deep Learning.
Let’s forget all the maths stuff and dig into this fascinating topic!

What is Machine Learning (ML)?
In very simple terms, we can understand Machine Learning as the (very difficult) task of teaching a computer things that most humans can do seemingly effortlessly. Let's say we have a little robot and we wish to teach it how to walk (assuming we have a technically well-designed humanoid robot). If you think about the task of walking for two minutes, I think you will agree with me that it is actually quite complex: a coordinated interaction of your feet, legs, hips and even your arms, which also includes the exact timing of muscle activation. Small children usually need months to get from just lying around to walking steadily. This realization helps us to see that computers also need a learning phase to accomplish steady walking.
So we have our little robot, which also has a processing unit (let's say there is a small computer in its head). This unit has to control all the mechanical parts of the robot to make it walk. The difficult part is deciding how to set all these parameters for the complex walking pattern. Luckily, we do have some feedback: in our case, we can see how the robot's legs move depending on our parameter settings. But even if we had just 10 parameters with 10 possible settings each, trying all the possible combinations would take a really long time (that's 10^10, or ten billion, unique combinations). Sure, computers have become very fast at evaluating functions even with lots of input values, but many problems simply have such a huge number of possible settings that we can't try them all. For example, the Asian board game Go has more possible board positions than there are atoms in the observable universe.
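This combinatorial explosion is easy to check in a few lines of Python. A minimal sketch (the parameter counts are just the ones from the example above):

```python
# With n parameters and k candidate values per parameter,
# an exhaustive grid search must evaluate k**n combinations.
from itertools import product

def grid_size(num_params, values_per_param):
    """Number of combinations an exhaustive search has to try."""
    return values_per_param ** num_params

# 2 parameters with 3 settings each is still manageable...
small = list(product(range(3), repeat=2))
print(len(small))          # 9 combinations

# ...but 10 parameters with 10 settings each is not.
print(grid_size(10, 10))   # 10000000000 combinations
```

Each extra parameter multiplies the search space by another factor of 10, which is exactly why brute force stops being an option so quickly.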
If we can't try all the parameter settings, how can we find out how to set them? The answer is to teach the computer to be intelligent instead of merely diligent. Wow, that seems to go against the nature of a computer, right? Even in the 21st century we still know surprisingly little about intelligence. It is a highly controversial topic, and the exact factors and mechanisms behind real intelligence remain poorly understood. What we know for sure is that brain size alone does not make you smart: Albert Einstein's brain was of average size, yet he was a true genius. The same applies to computers in a way; simply giving them computational power doesn't make them smart.
Machine Learning addresses this problem by allowing computers to make mistakes. The number of mistakes and the processing time are generally directly correlated. A baby learning to walk will fall frequently at first if it is not guarded, but as time passes, it will walk more securely. In reality, even adults can fall when there is an unexpected step or after having too many beers. ML follows a similar philosophy: using lots of training data, the parameter settings are gradually adapted to the task of walking. We can see this in very popular applications such as route planning and automatic translation. Route planning algorithms sometimes lead users to inaccessible places or misestimate travelling times. Automatically translating a random Chinese text to Bulgarian, then to Swedish and finally to English is guaranteed to give weird results. Yet normal users will pay for these services, because the average person would most likely estimate the travelling time even less accurately and is very unlikely to make any sense of a random passage of Chinese text.
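The idea of "learning from mistakes" can be sketched with a toy update rule: instead of trying every setting, one parameter is nudged a little after each error. The target value and learning rate below are made up purely for illustration, not taken from any real system:

```python
# A minimal sketch of gradual parameter adaptation.
target = 3.0       # the ideal setting (unknown to the learner)
w = 0.0            # initial guess for our single parameter
learning_rate = 0.1

for step in range(100):
    error = w - target          # how wrong the current setting is (a "mistake")
    w -= learning_rate * error  # adjust slightly toward less error

print(round(w, 3))  # prints 3.0 -- the parameter has converged to the target
```

Early steps make large mistakes and large corrections; later steps make ever smaller ones, much like the baby falling often at first and rarely later on.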
To be continued.
Written by Schwende Isabel

Back to Index