Professor Emeritus, Carnegie Mellon, LTI and CSD
Complicated question, and one for which there is no real consensus among the experts. I’ll very briefly state my own opinion, but that is not necessarily a majority view.
For what it’s worth, I’m one of the relatively few researchers who have done serious work on both neural-net learning algorithms (including some that did a kind of deep learning 25 years ago) and on symbolic methods for human-like, common-sense knowledge representation, reasoning, planning, and language understanding.
I believe that deep-learning neural nets (but not necessarily the algorithms being used now) will play a very important role in the future of AI. If we want to emulate human capabilities, I think that the neural nets will pretty much take over the lower-level parts of sensory-motor processing, and speech/language understanding, probably up to and including the learning of sequential word patterns and syntax.
Roughly, this is the stuff that we humans do without being aware of what is going on or how we learned it: standing, walking, reaching and grasping, throwing; picking out the words in a noisy stream of speech; recognizing objects, their parts, and spatial relations in a scene.
I don’t believe that neural networks, as currently understood, will take over higher-level conscious thought and planning (including creative planning and design); the symbolic parts of knowledge representation and inference; and language understanding/generation tasks that involve meaning. We will need symbolic representations for these things. I will be surprised if distributed “thought vectors” are adequate representations for these tasks.
In the human, it is pretty clear that this higher-level, more symbolic stuff must also be implemented in some sort of neural network — that’s all there is in the brain — but these neurons are not operating like current feed-forward or generative neural-net models. Instead, these networks function more like conventional computers that manipulate symbols, but with some massively parallel symbolic search and inference capabilities built in.
The neural-net and symbolic levels have to work together, and what happens at the interface is a very interesting area for investigation. It’s pretty clear that the lower-level pattern-recognition parts are influenced by our expectations, some of which come from higher-level reasoning; it’s also pretty clear that the pattern-recognition and pattern-learning parts must be able to cause the creation of new symbols and relations that are accessible to the higher-level symbolic machinery.
By the way, my use of the terms “higher-level” and “lower-level” is not a value judgement, just a shorthand for the way most people classify certain mental functions. Some of the “highest-level” cognitive tasks, such as chess and calculus, were among the first things that AI researchers solved, while work on “lower-level” tasks such as manual dexterity and recognizing objects from images is only now starting to make real progress toward human-like performance.
Again, that is just one researcher’s best guess about where things are headed in AI. Read what other researchers are saying and you will get a variety of other viewpoints and guesses.