Monthly Archives: June 2011

Just as it’s easy to write computer code that would badly fail a Turing test, it’s easy for humans to fail an ideological Turing test.  All you have to do is express your own thinking.  For instance, here’s Brad DeLong:

I have never met a believer in Nozickianism who can [successfully explain Nozickian political philosophy], and I expect never to do so…if any Nozickian believer ever grasps the structure of the argument well enough to successfully explain it, they thereby cease to be a Nozickian believer. Nozickian believers are thus, in a sense, incapable of passing the Turing Test.

This is a failure of the ideological Turing test, since it’s obvious to any observer from this writing that its author is indeed not a Nozickian.  Indeed, DeLong is expressing his liberal interpretation of Nozick.  Which is a fine thing to do.  But it’s not something that Nozickians would ever do (unless they were trying to pass the ideological Turing test for Nozickians).

DeLong is arguing that an opposing view, if understood the way he understands it, is wrong.  This is not a way to win the ideological Turing test; it merely begs the question of whether DeLong actually understands the opposing view correctly.  There are other thinkers who understand Nozick differently and, unsurprisingly, they have different interpretations of the merits of Nozickianism.  The point of the ideological Turing test is not for thinkers with opposing views to angrily point fingers at each other and say “You’re wrong!” “No, you’re wrong!”  The point is for thinkers to try to show that they can express opposing views in a favorable light, such that a neutral observer would think the arguer supported these views.

Meanwhile, I’m pretty unimpressed by Bryan Caplan–who initially proposed the ideological Turing test–backtracking:

If someone wanted to make me fail an ideological Turing Test, what kinds of questions would they ask? … Questions that explicitly solicit arguments.  I’m apt to get carried away, and forget that these implicitly test whether you understand what people take for granted.  Even if I keep this fact in mind, it’s hard to strike a believably intermediate stance.

My prediction remains that Caplan would fail an ideological Turing test quite miserably, as would most thinkers with strong views.  Here he’s saying that while he can state liberal viewpoints, he can’t defend them the way a liberal would.  Or, in other words, he doesn’t think about liberalism the same way a liberal does, and couldn’t convince observers that he does.

Which is fine.  It’s okay that Bryan Caplan is a libertarian and Paul Krugman is a liberal, that David Gordon likes Nozick and that Brad DeLong does not.  Ideological diversity is a good thing.  What’s not fine is for Caplan or Krugman or anyone else to get up on a high horse and claim that they understand their opposition better than their opposition understands them.  Rather, they need to recognize that their views are theories, based on their own (highly limited) perception of the world, that their theories clash, and that in order for them to improve their and our understanding of the world, they need to talk to each other.

Responding to Paul Krugman’s claim that liberals better understand their opponents’ arguments than do conservatives, Bryan Caplan has an interesting idea to test whether intellectuals are able to correctly state their opponents’ positions:

If someone can correctly explain a position but continue to disagree with it, that position is less likely to be correct…the ability to pass ideological Turing tests – to state opposing views as clearly and persuasively as their proponents – is a genuine symptom of objectivity and wisdom…

Here’s just one approach.  Put me and five random liberal social science Ph.D.s in a chat room.  Let liberal readers ask questions for an hour, then vote on who isn’t really a liberal.  Then put Krugman and five random libertarian social science Ph.D.s in a chat room.  Let libertarian readers ask questions for an hour, then vote on who isn’t really a libertarian.

I’d tend to describe Krugman and Caplan, respectively, as a Thinking Liberal and a Thinking Libertarian, meaning that they actively engage opposing ideas.  But I also suspect they’d both fail a well-designed ideological Turing test quite miserably.  The key would be to ask questions that effectively challenge a belief structure.  It’s difficult to convincingly defend ideas against criticisms you find legitimate; either you concede defeat, or you resort to caricaturing the opposing view.  No matter how well you think you understand an opposing view, chances are that someone who actually believes it understands it, and can defend it, much better.

There’s a strain of reasoning that runs something like this:

  1. A trend is occurring.  (Usually backed by data in charts)
  2. Extrapolating far forward along that trend leads to a very bad result.  (Often uncertain or vague, but definitely bad)
  3. Therefore, we must do something right now.  (Often the specific something is an action the arguer wants to take anyway)
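The extrapolation step above can be made concrete with a toy sketch: fit a linear trend to a short, well-behaved series and project it decades forward.  All of the numbers and function names below are hypothetical illustrations, not data from any of the posts discussed; the point is just that a long horizon magnifies whatever is questionable about the fitted trend.

```python
def fit_linear_trend(xs, ys):
    """Ordinary least-squares slope and intercept for a simple linear trend."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Ten years of a hypothetical resource-consumption series (made-up data).
years = list(range(2001, 2011))
usage = [100, 103, 107, 110, 114, 118, 121, 125, 129, 133]

slope, intercept = fit_linear_trend(years, usage)

# Step 1: the trend exists.  Step 2: project it forward.  A one-year
# projection stays close to the data; a thirty-year projection compounds
# any error in the slope estimate thirtyfold.
near = slope * 2011 + intercept   # next year
far = slope * 2041 + intercept    # thirty years out
```

Nothing in the arithmetic distinguishes a trend that will continue from one that prices, substitutes, or behavioral changes will bend, which is precisely the weakness of step 2.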

This reasoning is applied to issues such as peak oil, climate change, fiscal deficits, trade deficits, overpopulation, disappearing bees, ozone depletion, etc.  Megan McArdle’s post on antibiotics seems to largely fit the bill.  Note that these issues are not Black Swan-type events; Black Swans are inherently unpredictable.  These problems are White Swans.

Posts about White Swans are often meant to evoke fear and panic, but my sense is that White Swan predictions almost never play out, and when the problem is averted, the reason has little to do with the proposed solution.  Indeed, any of the following may occur:

  1. The analysis is wrong, and the problem does not exist.  This may be the case with antibiotics, as suggested here (see the fifth chart especially).
  2. There is already a solution or safeguard in place.  This is how I feel about peak oil; the safeguard is pricing mechanisms.
  3. Raising awareness of the problem will lead to it being solved, without taking any policy action.  I’ve argued that this may be how climate change plays out.
  4. The problem requires a solution, and the proposed solution is the best solution.  (searching for examples)
  5. The problem requires a solution, but the proposed solution is not the best solution.  (searching for examples)

Overall, I’d say my thinking boils down to this: if you think there’s a problem that will manifest in 30 years, it’s appropriate to spend the next, say, five years talking about it, instead of rushing to action.

There’s a nasty rhetorical trick I’ve been seeing lately, which involves making a (weak) argument, then stating that opposition to the argument will occur for a particular reason, and then attacking that reason.  For instance, here’s Matt Yglesias, after citing a study that alleges wage discrimination based on gender:

Some people are going to be very resistant to this conclusion. They’ll think that in a competitive labor market with many employers and many workers, employers who discriminate against women in their salary offerings will be at a disadvantage. No firm will want to disadvantage itself in this way, thus the discrimination shouldn’t exist. Consequently, this apparently [sic] effect is almost certainly due to some other variable that’s not accounted for. So it’s worth pointing out that by this logic, the gender disparity in employment that existed in 1961 wouldn’t exist either. But obviously it did.

Yglesias is being intellectually lazy here, and by prematurely denouncing his opponents as partisan hacks, he becomes one himself.  In order to have intelligent dialog, thinking liberals–a term I’d normally use to describe Yglesias–need to engage their thinking conservative opponents.  Arguments opposing Matt Yglesias’s position are not constrained by Matt Yglesias’s ability to imagine opposing arguments.  Indeed, it’s possible to resist the study he cites without relying on the neoclassical theory of labor markets in its purest form.  For instance, there’s this study, which pegs gender-based wage discrimination at 5-7%, rather than 17%.

I’m not arguing that gender-based wage discrimination doesn’t exist.  I’m confident it does.  And that’s a bad thing.  But it does a disservice to the cause to cite one study from amongst many and then cut off opponents by accusing them of bad faith.