When it comes to neural networks, or deep learning, a fixed-size network is repeatedly exposed to input data, whether that data is images or something else. The network's weights are initialised at random (and this is where it stops being science, because science demands repeatability), and then learning commences. The data is shown over and over, usually in a random sequence, on the theory that humans also learn things in no guaranteed order, and the network adapts so that its output comes to match the desired output. The error is the difference between the desired output and whatever the network actually outputs. That error is fed back through the network and its weights are adjusted, perhaps millions of times.
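The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular product's code: a single sigmoid unit, randomly initialised weights, data shown in random order, and the error fed back to adjust the weights.

```python
import math
import random

random.seed(0)  # fixed seed, purely so this sketch is repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy data: label 1.0 ("car") when the first feature dominates.
data = [([1.0, 0.1], 1.0), ([0.9, 0.2], 1.0),
        ([0.1, 0.9], 0.0), ([0.2, 1.0], 0.0)]

# The randomisation step the text objects to: weights start random.
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)
lr = 0.5  # learning rate

for epoch in range(1000):
    random.shuffle(data)  # data shown over and over in random order
    for x, target in data:
        out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = target - out  # desired output minus actual output
        # Feed the error back: adjust each weight a little.
        grad = err * out * (1 - out)
        for i in range(2):
            w[i] += lr * grad * x[i]
        b += lr * grad
```

After enough passes the output for a "car" input drifts toward 1 and for a "not car" input toward 0, but as the text goes on to argue, it never has to actually reach either value.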
So you show the network a picture of a car and it is meant to output something that represents a car.
The real problem is deciding what counts as a car. Say 1 means car and 0 means not a car, but the output ranges continuously from 0 to 1 and may come out at .5. In other words, it's nothing better than a coin flip. Yet the corporations claim the network has learned to distinguish cars, because they apply numeric rounding, i.e. anything at or above .5 counts as a car, and anything below does not. OUCH! And if the network had previously learned about trucks, it may completely forget trucks once it learns cars, because this scheme of learning keeps overwriting the same weights.
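The rounding rule criticised above fits in one function. This is a hypothetical sketch of the convention, not any vendor's actual code; note that an output of exactly 0.5, the coin-flip case, still gets reported as "car".

```python
def label(output, threshold=0.5):
    """Round a network output in [0, 1] to a hard class label.

    Under the usual convention, anything at or above the threshold
    is declared a car -- including an output of exactly 0.5, which
    carries no more information than a coin flip.
    """
    return "car" if output >= threshold else "not car"
```

So `label(0.5)` returns `"car"` even though the network is, on its own terms, maximally unsure.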
And this is not how humans work. We don't forget the letter 'x' just because we see it less often than 'e', and people who lose their sight do not completely forget how to process images if their eyesight is later restored.
This is a terrible model to base AI on.
But the hype is so strong that people believe this technology can now recognise people's faces, and people have been thrown into jail on the strength of a computer doing what a human could match, as accurately, with a coin flip.
Then there was the Google woman in charge of ethics who lost her job after the company's face-recognition software called a Black woman an animal! The ethicist lost her job, yet the problem wasn't hers; the problem is the actual 'state of the art', or rather the 'tech', and the tech is appalling.
Prof Marvin Minsky once said 'no one knows what's going on inside a neural network', and they still don't. The root of the problem is the gradient descent algorithm employed in neural, or deep learning, networks. But Adkodas does; Adkodas knows what's going on inside a neural network.
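For readers unfamiliar with the algorithm being blamed here, gradient descent is easy to state in miniature. This is a generic textbook sketch on a made-up one-parameter problem, minimising f(w) = (w - 3)^2 by repeatedly stepping against the slope; it is not tied to any system named in this piece.

```python
# Gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
w = 0.0    # arbitrary starting point
lr = 0.1   # learning rate (step size)

for _ in range(100):
    grad = 2 * (w - 3)  # derivative df/dw at the current w
    w -= lr * grad      # step downhill, against the gradient
```

Each step shrinks the distance to the minimum by a constant factor, so after 100 steps w sits essentially at 3. In a deep network the same update is applied to millions of weights at once, which is precisely why, as Minsky observed, no one can say what any individual weight has come to mean.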