Neuroscience is a field for philistines. It’s based on the crudest form of physicalism: the mind is the brain, a hypothesis of a crudity that physics itself abandoned centuries ago, akin to saying “light is created in the sun.” At some level that’s true, for the sun is (locally) the distal source of light, but one would learn almost nothing about physics by attempting solar anatomy without knowing any chemistry or quantum physics. One simply doesn’t learn about matter by going about one’s business that way. Materialism of the neuroscientific kind is a poor theory of matter, let alone of mind. It is a bad philosophy for practical purposes as well. There are some situations where crude localization helps: if someone suddenly starts slurring their speech, we know where they might have had a stroke.
However, even for a generally flourishing mental life, the brain = sun theory of the mind is a poor theory. Just as the sun theory of light doesn’t help us build artificial light sources, since the sun’s workings are rather hard to replicate, the brain theory of the mind doesn’t help us build intelligent devices. For all its faults, Artificial Intelligence, with its focus on the abstract principles of intelligence, is a much better theory of intelligence than anything in the neurosciences. The reason is simple: as in any other creative science, we need to abstract the problem before we can make real progress. There’s a reason why levers and pulleys, and not a direct attempt to replicate the shape of clouds, helped us make progress in physics. The correct idealizations and abstractions are crucial.
At the same time, it’s clear that physics might have gone too far in the direction of abstraction as far as biology goes, and, closer to home, AI might have gone too far in the direction of abstraction as far as intelligence goes. Intelligence is not pure information processing, and whatever principles of information processing there might be can lead to good mathematics and, in controlled circumstances, better engineering, but it’s not the right way to approach the problem of understanding the remarkably flexible capacities of living creatures. We need new models and new artifacts. At the same time, paradoxically:
In order to make our ideas more complex, we might need to make our methods cruder. Our current capacities for probing neurons, neural systems, and entire brains are remarkably sophisticated, and the technology is getting better rapidly. However, the technology is rapidly outstripping our understanding of the mind, leaving us with a deluge of data that defies explanation.
Consider the following analogy: suppose we became an engineering civilization before we became a scientific one and we were able to probe the center of the earth well before we had a clue about atomic structure. At the same time, imagine that we had a theory that all heat was generated out of the earth’s core.
(This is a plausible alternate history of technological progress, and as it turns out, the core theory of heat has some truth to it. I am shifting from the sun to the earth since a civilization that can probe the sun’s core directly without any understanding of the underlying physics seems rather implausible.)
In that situation, by building ever more complex devices to collect ever more evidence about the different kinds of heat in different parts of the earth’s core, we would only be taking ourselves further away from a real understanding of thermodynamics. Similarly, we are getting too far ahead of ourselves by building ever more powerful mechanical and genetic techniques for probing the brain. Not only does that not enhance our understanding of the mind, it also increases our capacity to torture other species in the pursuit of data.