Normally, we tend to think of beliefs as psychological states with dubious epistemic properties. Beliefs are conceptualized as unregulated conceptual structures: for the most part hypothetical, and often fanciful or deluded. Thinking and reasoning, on the other hand, are seen as rational activities regulated by rules and governed by norms. Computational theories of the mind have focused on rule-governed behaviour, ultimately trying to reduce it to the rules of logic.
Beliefs are far fuzzier. When I say, “I believe in God,” what exactly do I mean? It seems that it's the very imprecision of beliefs that allows us to share them, even if we do not have the same understanding of a given belief such as “I believe in God.” Furthermore, beliefs have emotional components. For example, I can hate a belief (and I can hate you for holding that belief), but it seems strange to hate a logical rule, or to hate someone for using one. As I mentioned in my previous post on reason and emotion, it's precisely the separation of logic from emotion that makes logic attractive to many of us. But what if thinking were more like believing?
Note that we acquire most of our beliefs through verbal testimony. Not surprisingly, Western philosophers, from Plato onwards, have mostly refused to recognize language as a genuine source of knowledge. Indian philosophers, on the other hand (apart from the Buddhists), have always acknowledged sabda pramana, testimony as a valid means of knowledge. At best, Western philosophers say that language is a source of true belief. As a consequence, Western philosophy (and now cognitive science) has modeled the knowledge of language as knowledge of the meaning of sentences. Is that enough? Indian thinkers would disagree. When you tell me that the earth goes around the sun, not only do I understand that you mean “the earth goes around the sun,” but I also come to know that the earth actually goes around the sun. How else do we explain the fact that we get most of our knowledge of the material world from physics textbooks, given that most of us never do the experiments ourselves?
So why not expand our conception of thinking to include believing? Sure, beliefs are more likely to be false than strict modes of reasoning are, but so what? We should accept that knowledge is fallible. I would rather have the freedom to think and believe what I want, and to arrive at unexpected conclusions, than be strictly bound by rules, rules that give me a false certainty. We are all going to die anyway, right?