Tyler Cowen writes:
It seems to me that people are first choosing a mood or attitude, and then finding the disparate views which match to that mood and, to themselves, justifying those views by the mood. I call this the “fallacy of mood affiliation,” and it is one of the most underreported fallacies in human reasoning.
He also cites examples of the phenomenon:
2. People who see a lot of net environmental progress (air and water are cleaner, for instance) and thus dismiss or downgrade well-grounded accounts of particular environmental problems. There’s simply an urgent feeling that any “pessimistic” view needs to be countered.
4. People who see raising or lowering the relative status of Republicans (or some other group) as the main purpose of analysis, and thus who judge the dispassionate analysis of others, or for that matter the partisan analysis of others, by this standard. There’s simply an urgent feeling that any positive or optimistic or deserving view of the Republicans needs to be countered.
This is solid analysis, and Cowen is correct that it's highly underreported. It's not entirely new, however. Consider Robert Pirsig in his excellent book Lila:
Any person of any philosophic persuasion who sits on a hot stove will verify without any intellectual argument whatsoever that he is in an undeniably low-quality situation: that the value of his predicament is negative. This low quality is not just a vague, woolly-headed, crypto-religious, metaphysical abstraction. It is an experience. It is not a judgment about an experience. It is not a description of experience. The value itself is an experience.
Pirsig’s point is that value judgments precede rational analysis. For some people, any argument that is pessimistic about the environment, or that defends Republicans, immediately evokes a low-quality response, in the same way that sitting on a hot stove does. Cowen wants to shift the focus of discussion from our gut-level response to high-level analysis, which is admirable. The path to get there is to recognize this process, examine it, and be willing to compromise when gut-level response and high-level analysis contradict each other. Note that I’m not advocating that we ignore our gut-level value judgments (and Pirsig definitely isn’t); our value judgments are really important, and when they conflict with some analysis, it may be because the analysis is flawed.
Physician and philosopher Edward de Bono makes a similar case in his book, by illustrating the process by which the human brain filters ideas into buckets based on prior experience. The same way our mind associates the idea “cat” with a broad set of quite different prior experiences, it also associates the idea “bad argument” with a broad set of quite different prior experiences. Furthermore, we link these ideas to each other, based on prior experience. So if someone has been exposed to a decent number of arguments that they consider both bad and conservative, they become likely to assign the label “bad” to a new argument at the same time they recognize it as conservative, regardless of the merits of the argument. What differentiates thinkers from partisan hacks is that thinkers recognize the fallibility of their prior assumptions, and are able to analyze the world through multiple frameworks.