How thoughts originate and how the human brain processes abstract learning remain questions that science cannot yet answer.
However, a philosopher at the University of Houston believes that by deconstructing AI's complex networks, it is possible to decode how we process abstract learning, on the grounds that these systems emulate the way our brains work.
Why AI Presents the Only Option
There are several reasons why we have not been able to fully work out how the brain operates. One is that, even with willing brain donors, ethics rules do not permit risking life to that extent for research. A better option for studying the brain up close might have been the lab-grown brains now being cultivated, but the problem there, scientists say, is that they are not yet thinking brains.
That leaves us with only one model: artificial intelligence. AI has by now become remarkably sophisticated and has captured public attention by competing with human intellect. In other words, University of Houston philosopher Cameron Buckner is striking the right note by looking to machine learning to decode how certain functions of the brain work.
“These systems are becoming part of our lives and it’s important that we understand the way they work,” said Buckner, author of the paper, which appears in the current issue of the journal Synthese.
The Roots of Human Knowledge
The origin of human knowledge has been hotly debated among philosophers since the days of Plato. No one can say with confidence whether knowledge is innate and grounded in reason, or whether it derives from sensory experience.
Now, on Buckner’s view, arrived at by dissecting AI’s deep convolutional neural networks (DCNNs), knowledge stems from experience, a position known as empiricism.
“The DCNNs, with their nodes, emulate the way neurons process and pass information in the brain. This logically explains how exactly abstract knowledge is acquired,” said Buckner. That, he argues, makes them another significant tool in the fields of psychology and neuroscience.
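To make the analogy concrete, here is a minimal sketch of one convolutional layer, the basic building block of a DCNN. Each output "node" computes a weighted sum over a small patch of the input and then applies a nonlinearity, loosely mirroring a neuron with a local receptive field. The edge-detecting kernel below is hand-built for illustration; in a real DCNN these weights are learned from data.

```python
import numpy as np

def relu(x):
    # Nonlinear activation: a node "fires" only when its input is positive,
    # loosely analogous to a neuron's firing threshold.
    return np.maximum(0.0, x)

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in deep learning usage):
    each output node sees only a small patch of the input, like a neuron's
    local receptive field."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# A toy 6x6 "image": dark on the left, bright on the right.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A hand-built vertical-edge detector (an assumption for illustration;
# a trained network would learn kernels like this from examples).
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])

feature_map = relu(conv2d(image, kernel))
print(feature_map)  # responds only along the vertical edge
```

Stacking many such layers, each feeding its feature maps to the next, is what lets a DCNN build increasingly abstract representations out of raw pixels.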
Two things make these networks complex in the way our brains are: the perceptual tasks they handle, and their ability to discriminate between stimuli, something scientists are still figuring out.
Most AI research has focused mainly on results, what the machines can do and how to make them do it, rather than on understanding how their cognitive abilities work.
Buckner’s approach is different: he investigates abstract reasoning in machines. He looks at the mechanisms behind visual recognition of objects such as artwork, chairs, or animals, and at how AI performs surprisingly complex tasks and plays games across widely varying aspects such as style and color.
Buckner originally pursued computer science, focusing on logic-based approaches to machine intelligence. He shifted to philosophy when he began studying how humans and animals solve problems.
Yes, these systems learn from the data they receive, just as the human brain does. But the simple question is this: how do neural networks, whether artificial or biological, recognize a cat seen from above, from behind, or from the front? Somehow all these diverse perspectives must be unified into a reliable cat-detector.
Basically, the systems need to master all these nuisance variations: position, size, pitch of tone, and so on. The ability to account for, process, and represent all that diversity is the hallmark of abstract reasoning.
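One standard mechanism DCNNs use to cope with one such nuisance variation, position, is pooling: after convolving a learned pattern detector over the whole image, taking the maximum response reports *whether* the pattern is present, not *where*. The sketch below illustrates this with a toy hand-made pattern (an assumption for illustration, standing in for a learned visual feature).

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D convolution (cross-correlation, as in deep learning usage).
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def detector_score(image, template):
    """Slide the template over the image, then apply a global max pool:
    the score depends on the pattern's presence, not its location."""
    return conv2d(image, template).max()

# A tiny 3x3 pattern standing in for a visual feature of interest.
pattern = np.array([[0., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 0.]])

# Two 8x8 scenes containing the same pattern at different positions;
# position is exactly the kind of nuisance variation to be ignored.
scene_a = np.zeros((8, 8)); scene_a[0:3, 0:3] = pattern
scene_b = np.zeros((8, 8)); scene_b[4:7, 5:8] = pattern

# Thanks to max pooling, both scenes yield the same detector response.
print(detector_score(scene_a, pattern))  # 5.0
print(detector_score(scene_b, pattern))  # 5.0
```

Handling other nuisance variations, such as size, pose, or viewpoint, requires stacking many such convolution-plus-pooling stages, which is where the "deep" in deep convolutional networks does its work.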
So far, we can say that humans acquire intuitive knowledge of the world automatically. Machines can acquire similar knowledge, and grow more subtle, through abstract learning; the astonishing thing is that we cannot simply program that same knowledge into AI systems.