Who needs understanding if you can be more accurate and make the right decision? In machine learning, whoever gets it right wins, and those who insist on understanding risk being left behind.
An example often raised is this: "If you or someone you care about had an illness, and your doctor proposed a treatment that he understands and can explain to you, but which has a 75% chance of succeeding, and then offered an alternative recommended by an algorithm that has a 95% chance of succeeding but which he cannot explain, which one would you choose?"
This example sums up what we will be facing, not only in medicine but in everything.
There is no doubt that narrow deep learning systems have surpassed the best human results in many areas. With the technological explosion witnessed over the last three years, computers are outperforming the best humans in more and more areas at an exponential rate. Already superior in domains with substantial data, such as voice recognition, image recognition, and many games, they will no doubt grow to encompass healthcare, transportation, legal work, market prediction, and many other fields.
Neural networks, the machine learning approach that has had the biggest impact recently thanks to a substantial increase in available data and processing power, are intrinsically powerful because they can digest large amounts of data very quickly and learn to improve their output through what is called backpropagation. Essentially, the network corrects itself using its own results, improving by trying, failing, and adjusting many times over. "Many times" means millions, possibly billions, of iterations, until its accuracy is beyond superhuman.
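The try-fail-improve loop described above can be sketched in a few lines. This is a deliberately minimal illustration, not a real framework: a single weight learns the rule y = 2x by repeatedly measuring its own error and nudging itself in the opposite direction. All names and numbers here are illustrative assumptions.

```python
# Minimal sketch of the "try, fail, improve" loop behind backpropagation.
# A single weight w learns the rule y = 2x purely by self-correction.

def train(steps=10_000, lr=0.01):
    w = 0.0                                  # start with a wrong guess
    data = [(x, 2.0 * x) for x in range(1, 5)]
    for _ in range(steps):                   # real systems repeat this millions of times
        for x, target in data:
            pred = w * x                     # try: make a prediction
            error = pred - target            # fail: measure how wrong it was
            w -= lr * error * x              # improve: push w against the error
    return w

print(train())  # converges very close to 2.0
```

The same three-step cycle, scaled up to millions of weights and billions of examples, is what makes deep networks so accurate, and also what makes their final weights impossible to interpret by hand.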
This is the reason it becomes so good at what it does. As they say: do something enough times and you become good at it. What if you could practice something millions or billions of times?
So what is the problem? The problem is that to understand a decision made by a neural network you would literally have to analyse every iteration, and that would defeat the purpose. No number of humans can go through millions or billions of considerations, and that is why it is called a black box.
So there is no real solution unless we make the learning algorithm simpler, much simpler, at the cost of performance. Which brings us back to the original question: would you choose a 75% success rate that you understand over a 95% success rate that you don't?
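The trade-off between a simpler, understandable model and an opaque one can be made concrete with a toy sketch. The risk formula and weights below are entirely made up for illustration; the point is only that a small linear rule can be read term by term, while a black box's weights explain nothing even when printed.

```python
# Illustrative contrast (made-up numbers): transparent rule vs. black box.

# Transparent model: a doctor can read this directly --
# "each year of age adds 0.03 risk, each pack/day of smoking adds 0.2".
def linear_risk(age, packs_per_day):
    return 0.03 * age + 0.2 * packs_per_day

# Black box: imagine millions of weights like these. Printing them
# tells you nothing about *why* any individual prediction was made.
black_box_weights = [0.0173, -0.4421, 0.0058]  # ...millions more in practice

print(linear_risk(50, 1))  # 1.7 -- every term of the answer is inspectable
```

The transparent model is the 75% option: weaker, but every step of its reasoning is visible. The black box is the 95% option: stronger, but its "reasoning" is just the residue of billions of corrections.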