
Category Archives: Scientific Method

Amidst a poker game last night, I posited the following:

Every poker player has worse than average luck.

Call it the anti-Lake Wobegon Effect (aLWE).

This claim, at first glance, is entirely absurd.  Treating poker as a zero-sum game, one player’s good luck must be offset by another player’s bad luck, such that not all players can possibly have below average luck.  We learn this in kindergarten, and then again in advanced college mathematics.  So why am I trying to argue something that is patently absurd?  Below I will argue both why it is wise to believe aLWE, and also reasons why it may be true.
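
As a quick sanity check on the zero-sum arithmetic (a throwaway simulation with a made-up six-player table and made-up pot sizes, not the argument behind the Read More link), here is what happens when luck is forced to sum to zero across the table:

```python
import random

# Throwaway sketch: zero-sum "luck" at a made-up six-player table.
# Each hand, a random player wins a pot funded equally by everyone, so
# luck (results relative to expectation) sums to zero across the table.
PLAYERS = 6
HANDS = 10_000
POT = 60.0
ANTE = POT / PLAYERS

luck = [0.0] * PLAYERS
for _ in range(HANDS):
    winner = random.randrange(PLAYERS)
    for p in range(PLAYERS):
        luck[p] += (POT - ANTE) if p == winner else -ANTE

mean_luck = sum(luck) / PLAYERS
below = sum(1 for x in luck if x < mean_luck)
print(f"mean luck across the table: {mean_luck:.6f}")            # ~0 by construction
print(f"players below the table average: {below} of {PLAYERS}")  # never all 6
```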

Read More »

I’m puzzled by this chart from the Brookings Institution’s Hamilton Project, which attempts to predict how long it will take the United States to return to pre-recession employment levels.  The chart plots three scenarios: a pessimistic option, in which employment grows at 208,000 jobs per month, as it did in 2005; an optimistic option, in which employment grows at 472,000 jobs per month, as it did in the best month of the 2000s; and a middle option, in which employment grows at 321,000 jobs per month, as it did in 1994.  The takeaway from the graph, presumably, is that it will take a very long time to return to full employment.  The problem with the graph is that its assumptions are entirely arbitrary, to the point that its predictions are largely meaningless.
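
Here is a back-of-the-envelope version of the chart’s arithmetic (the 12-million-job gap is borrowed from the 2008-2010 decline discussed below and is only illustrative; the Hamilton chart’s actual gap, and any adjustment for labor-force growth, may differ):

```python
# Back-of-the-envelope sketch: months to close an assumed jobs gap under the
# three growth scenarios in the Hamilton Project chart. The 12-million gap is
# illustrative (taken from the 2008-2010 decline discussed below); the chart's
# own gap, and its adjustment for labor-force growth, may differ.
JOBS_GAP = 12_000_000

scenarios = {
    "pessimistic (208k/month, 2005 pace)": 208_000,
    "middle (321k/month, 1994 pace)": 321_000,
    "optimistic (472k/month, best month of the 2000s)": 472_000,
}

for name, rate in scenarios.items():
    months = JOBS_GAP / rate
    print(f"{name}: {months:.0f} months (~{months / 12:.1f} years)")
```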

The function of science, or social science, is to use observed data to create theories that make predictions.  In this case, Hamilton is observing the period 1990-2008, a period of time that neither included nor followed a large recession, then theorizing that 2012-2025, a period of time that does follow a large recession, will behave like 1990-2008.  Hamilton is essentially saying that because job growth never exceeded 472,000 jobs per month when unemployment was low, it cannot possibly exceed 472,000 jobs per month when unemployment is high.  It’s just bad science, and it’s exactly the same bad science that failed to predict the recession in the first place.  Any scenario planning based on historical data leading up to 2008 would have deemed it impossible that employment would fall by 12 million from 2008 to 2010.  Why, then, does Hamilton continue to use a forecasting method whose limitations have been so clearly exposed?

The more I think about this Andrew Gelman post, the more ridiculous it seems.  Gelman argues that economists, especially popular economists, use a pair of contradictory arguments to explain phenomena:

1. People are rational and respond to incentives. Behavior that looks irrational is actually completely rational once you think like an economist.

2. People are irrational and they need economists, with their open minds, to show them how to be rational and efficient.

In the comments, he clarifies his position:

My problem with some pop-economics is not with the use of arguments 1 or 2 but rather with what seems to me as the arbitrariness of the choice, accompanied by blithe certainty in its correctness.  This looks more to me like ideology than science.

I have no problem criticizing economists for their blithe certainty, a criticism I’d also apply to just about everyone, myself included.  But I don’t follow Gelman’s criticism of the fact that economists apply different models to different situations.  This happens in all disciplines, including Gelman’s field of statistics.  For instance, statisticians often apply one of the following arguments:

  1. Phenomenon X follows a normal distribution.
  2. Phenomenon X follows a log-normal distribution.

1 and 2 are entirely contradictory, and to a non-statistician, the choice between them would appear entirely arbitrary.  But to a statistician, there is a logic (part science, part art) as to whether to apply 1, 2, both, or neither.  Similarly, the choice between assuming rationality or non-rationality in a particular situation may appear arbitrary to Gelman, even though there is a consistent logic apparent to economists.  Ultimately, models should be judged on the reliability of their predictions, not on how arbitrary they appear to outsiders.
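
To make the “part science, part art” point concrete, here is a minimal sketch of one way a statistician might choose between 1 and 2 on simulated data (the data and the likelihood comparison are my own illustration, not anything Gelman discusses):

```python
import numpy as np
from scipy import stats

# Minimal sketch: simulated positive, right-skewed data (e.g., incomes).
rng = np.random.default_rng(0)
data = rng.lognormal(mean=10.0, sigma=0.8, size=2_000)

# Fit both candidate models and compare log-likelihoods. This is one crude
# version of the choice; in practice one would also look at Q-Q plots,
# domain knowledge, and what the model will be used for.
norm_params = stats.norm.fit(data)
lognorm_params = stats.lognorm.fit(data, floc=0)

ll_norm = np.sum(stats.norm.logpdf(data, *norm_params))
ll_lognorm = np.sum(stats.lognorm.logpdf(data, *lognorm_params))

print(f"log-likelihood, normal:     {ll_norm:.1f}")
print(f"log-likelihood, log-normal: {ll_lognorm:.1f}")
print("preferred:", "log-normal" if ll_lognorm > ll_norm else "normal")
```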

One purpose of this blog has been to question the way we think about problems.  I’m specifically interested in how philosophy of science, statistics, and rhetoric shape the way we think.  Another purpose has been to identify which world problems are most worthy of discussion but have received insufficient attention.

Today I followed Tyler Cowen’s link to Mike McGovern’s essay about development economics.  Development economics is a topic I don’t understand well but consider highly important and under-discussed.  It’s also a meta-analysis, exploring different ways to think about problems.  For instance:

The difference between poets and economists…there is an acceptance that there are many ways to write a great poem, just as there are many enlightening ways to read any great poem. Bound as it is to the model of the natural sciences, economics cannot accept that there might be two incommensurable but equally valuable ways of explaining a given group of data points…Paul Collier, William Easterly, and Jeffrey Sachs can all be tenured professors and heads of research institutes, despite the fact that on many points, if one of them were definitively right, one or both of their colleagues would have to be wrong. If economics really were like a natural science, this would not be the case.

I wasn’t expecting to find philosophy of science (or philosophy of poetry) in an essay about third world development, but I think this type of thinking is necessary to address the particulars of third world development.  It’s a slightly morbid point of view; most people who want to solve problems want to do something; instead I want to think about thinking.  But actions are driven by views, and views are driven by the way we think about the world; when we don’t analyze the ways we think, we’re more likely to hold misguided views and take misguided actions.

McGovern’s assessment of development economics is shaped by his philosophy of science; in the above paragraph, he first criticizes economists for trying to be scientists, and then criticizes economists for being bad scientists.  The two criticisms contradict each other, and neither accounts for the fact that throughout history, the hard sciences have regularly maintained contradictory points of view, whether in cosmology or mathematics.

My concerns about McGovern’s philosophy of science shouldn’t lead me to dismiss what he’s written; his concerns about development economists may have more to do with their rhetoric than their scientific thinking.  On the whole, his essay is a really interesting read, and I’ll continue to think about it throughout the day.

Happy July 4th.

Predicting the future is both really important and really hard.  It’s supposed to be science–we build a model, validate it, and use it to make projections.  But in reality, building the model inevitably involves choosing from a severely limited set of data points, and predictions become dependent on unvalidated assumptions.  To a considerable extent, assumptions cannot be validated, since they are assumptions about the future.  When a company predicts its next year’s revenue, it assumes that within the next year, aliens will not invade Earth and enslave all humans.  This assumption is hard to validate; the best we can do is model its likelihood based on a historical absence of extra-terrestrial invasions, and a lack of flying saucers in the sky.  I don’t object to making this assumption, but we should be aware that we’re making it, and that it could fail.  There are a vast number of low-likelihood, high-impact events that could occur and drastically disrupt our predictions.  The longer the time-frame of prediction, the more implicit future assumptions we make, and the less reliable our prediction becomes.
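
A toy illustration of that last point, with every parameter invented: a quantity that usually grows steadily but occasionally takes a large hit looks very predictable one year out and much less so forty years out.

```python
import random
import statistics

# Toy sketch (all parameters invented): a quantity grows ~2% a year, but each
# year there is a 3% chance of a rare, large shock (-30%). The further out we
# forecast, the wider the spread of outcomes around the naive "no-shock"
# projection.
def simulate(years, runs=10_000):
    finals = []
    for _ in range(runs):
        value = 100.0
        for _ in range(years):
            value *= 1.02
            if random.random() < 0.03:
                value *= 0.70  # rare, high-impact event
        finals.append(value)
    return finals

for horizon in (1, 5, 20, 40):
    outcomes = simulate(horizon)
    naive = 100.0 * 1.02 ** horizon  # projection that ignores rare shocks
    fifth_pct = sorted(outcomes)[len(outcomes) // 20]
    print(f"{horizon:>2} years: naive {naive:7.1f}, "
          f"median {statistics.median(outcomes):7.1f}, "
          f"5th percentile {fifth_pct:7.1f}")
```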

Of course, long time-frame future gazing is important.  When young people choose which careers to prepare for, they make assumptions about what careers will be available to them.  When politicians design budgets, they make assumptions about productivity growth over a long time-frame.  Both these sets of assumptions are highly dubious, as can be attested by auto-factory workers, young lawyers, and governments scrambling to deal with the aftermath of the financial crisis.

It’s with this skepticism I evaluate Paul Ryan’s budget proposal, which operates over a  forty-year time frame.  The analysis Ryan cites is obviously wrong.  But I’m also skeptical of this critical analysis, since it assumes zero economic growth.  There’s a lot of good discussion on this topic, such as here and here, but overall I’d like to see more robust analysis of the proposal, with more flexible assumptions about future events.

Matt Yglesias is one of the smartest and most economically literate liberal writers, so I find this recent post on school vouchers a little confusing:

Republicans design a program that’s not a voucher program, it’s just a “free money for a small number of poor kids in the District of Columbia” program.

That’s what I’ve always understood to be the definition of a voucher program.  In the comments section, Stephen Eldridge clarifies:

Vouchers are intended to replace funds used for Public Schools–that is, if you use a voucher, you’re “getting back” your money from your state or local taxes that would otherwise go to a public school and giving it to a private school instead…the DC program is *additional* money, so the use of it doesn’t defund a public school.

At some level, public-school spending and school vouchers are substitutable, since any spending on one could instead be spent on the other.  Yglesias and Eldridge seem unconcerned about funding a small vouchers program in DC, even though its funding could instead be applied to public schools in DC.  Meanwhile, Yglesias and Eldridge seem highly concerned with a system that could automatically make this same substitution at a much larger scale.

The underlying question of whether tax dollars are better spent on public schools or vouchers is a testable one, but it’s important to use the right test.  The test is not “Do DC students start using vouchers?”, as Yglesias worries it would be.  The test is also not “Does the vouchers program lead to better education outcomes?”  The correct test is “Does the vouchers program have a better benefit:cost ratio than public school funding?”  If that question is conclusively answered affirmatively, it provides some support for the larger type of vouchers program that Yglesias and Eldridge oppose.  But answering the previous questions provides no such support.
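
To see why the second and third tests can give different answers, here are some entirely hypothetical numbers (not estimates of anything):

```python
# Hypothetical numbers only, to illustrate why "better outcomes" and "better
# benefit:cost ratio" are different tests. Units are arbitrary.
programs = {
    "vouchers":       {"benefit": 120, "cost": 100},
    "public schools": {"benefit": 100, "cost": 70},
}

for name, p in programs.items():
    print(f"{name}: benefit {p['benefit']}, cost {p['cost']}, "
          f"ratio {p['benefit'] / p['cost']:.2f}")
# Here vouchers produce the better outcome (120 vs 100) but the worse ratio
# (1.20 vs 1.43), so the third test gives a different answer than the second.
```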

I think the strongest conservative viewpoint on this topic is Jim Manzi’s; he doesn’t support the large-scale voucher programs Yglesias and Eldridge fear:

I have argued for supporting charter schools instead of school vouchers for exactly this reason. Even if one has the theory (as I do) that we ought to have a much more deregulated market for education, I more strongly hold the view that it is extremely difficult to predict the impacts of such drastic change, and that we should go one step at a time (even if on an experimental basis we are also testing more radical reforms at very small scale).

Arnold Kling cites William Byers’ book The Blind Spot:

Until recently, the conventional scientific view was that mind could be reduced to brain, that the physical brain was primary phenomenon and the mind was merely an epiphenomenon. Yet, in recent years, evidence has emerged that the physical configuration of the brain is malleable and can change as a result of learning, thinking, and other mental activities–in short, that the mind can influence the brain…

The underlying question here seems to be whether mind consists of more than just the particles of the brain.  What’s at stake is whether minds can be fully understood, modeled, and replicated.  Particles of the brain, being particles, are somewhat predictable, whereas any non-particle metaphysical component of the mind might not be.  In other words, can the mind be modeled accurately, or not?  I consider this an open question.

Kling continues:

Byers afflicts the comfortable by emphasizing the role of ambiguity in science. Most people want science to play the role of resolving ambiguity. Byers argues that scientific progress comes from confronting and sometimes even embracing ambiguity–for example, the theory that an electron is both a wave and a particle. Thus, the role of ambiguity in science is….ambiguous.

Indeed, ambiguity results from accepting multiple conflicting models.

Will Wilkinson’s piece on the use of shaky evolutionary psychology to explain dating behavior reminds me of a question I’ve always found mysterious: why isn’t more serious thought put into explaining dating behavior?

Wilkinson:

One day a tidy disquisition explaining why human behavioral ecology is the bees non-vulgar knees will issue forth upon this page, but until that glorious day I present to you Lucia, a “dating/relationship expert specializing in Cougar relationships,” and two of her “12 Reasons Women Can’t Stand Nice Guys.”

It’s clear Wilkinson disapproves of this particular analysis of dating, but unless he presents an alternative explanation to a set of important and interesting questions, he hasn’t advanced the discussion very far.  I’ve never seen a particularly good discussion of the simple, quite empirically testable question, “What percentage of women can stand nice guys?”  I know the answer isn’t zero, since my dad’s a pretty nice guy, and my mom seems to like him okay.  Lacking serious-minded analysis, the rational response is to skeptically apply the weak theory that’s out there.  A map drawn by a five-year-old is better than no map at all.

Honestly, what would happen if you polled women across a range of ages and relationship statuses with two questions: “On a scale of 1-7, how much do you like nice men?” and, if appropriate, “On a scale of 1-7, how nice is the man you’re in a relationship with?”  A lot of words have been written on this topic, very few of them by particularly serious thinkers.  I once read Neil Strauss’s book The Game, which contains some fairly innovative theories, some of them supported by anecdotal evidence or by some of the wishy-washy evolutionary psych that Wilkinson decries.  Why don’t academic social scientists study these hypotheses?  Putting aside sweeping theories of why dating behavior is the way it is, start with a fact base.  For instance, testing whether stuff like this works.  It’s not a difficult study to design.
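
For what it’s worth, the analysis of that two-question poll would be almost trivially simple; here is a sketch using invented placeholder responses (not data from any survey):

```python
import statistics

# Invented placeholder responses to the two 1-7 questions above, one pair per
# respondent: (how much she likes nice men, how nice her current partner is).
# Real data would come from an actual survey; this only shows how little
# analysis the basic questions require.
responses = [(6, 5), (4, 3), (7, 6), (5, 5), (3, 2), (6, 4), (5, 6), (4, 4)]

likes = [r[0] for r in responses]
partner_nice = [r[1] for r in responses]

print(f"mean 'likes nice men' rating: {statistics.mean(likes):.2f}")
print(f"share rating nice men 5 or higher: "
      f"{sum(x >= 5 for x in likes) / len(likes):.0%}")
print(f"stated preference vs partner niceness, correlation: "
      f"{statistics.correlation(likes, partner_nice):.2f}")
```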

Of course, there is a small number of academics asking these questions, but at most universities, the field most likely to ask them is Gender Studies, which is generally classified in the humanities and doesn’t apply the scientific rigor I’m looking for.  This is a topic ripe for investigation by economists, psychologists, and sociologists, if not for the creation of a field devoted specifically to it.  Why are questions about dating considered so much less important than, say, questions about politics, which has multiple academic fields devoted to it?

I’m having issues with the comments section at Philosophy Bro, so I’ll post this here instead.  In a discussion of empiricism and rationalism, the author writes:

You are definitely over-committing yourself if you mean that everything we can know is mediated through the senses – I mean, mathematics? Logical tautologies?

As with my earlier discussion of science, the key missing idea here is that of the map/territory relation.  If scientists and philosophers would realize that they’re building models/maps to explain the world, and not defining rules that govern how the world/territory works, our understanding of these issues would improve drastically.

Frankly, I find it preposterous to suggest that mathematics can be known without relying on sensory perception. Show me a way to teach a child addition that doesn’t rely on sensory perception–no blocks, no pictures, no counting on your fingers.  Then, and only then, will I grant that mathematics can be intuited without sensory perception.  Yes, armchair philosophers can understand math without reference to specific measurements.  But only because they’ve previously perceived an enormous number of examples of mathematical theory working.

Based on observation, we create a model; we test it; it works; over time we accept it as a very strong theory.  That’s our inductive process, for science, for math, for logic.  After we have our model, we extrapolate, interpolate, and infer–deductive processes.  So we can figure out that 9823+2349=12172 without measuring.  But only because we’ve previously seen so many other applications of our addition theory work, without ever seeing one fail.  We trust that we don’t need to test it, based on the sheer weight of evidence supporting the model.  But our trust doesn’t make the theory true.
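
A toy way to put that in code (my own illustration, not from the original post): the symbolic model can still be checked against a counting-style “measurement”; the inductive point is just that we no longer feel the need to check.

```python
# Toy sketch: the symbolic addition "model" versus a counting "measurement".
# Counting one unit at a time is the kind of observation the model was
# originally built on; we trust 9823 + 2349 = 12172 without redoing it.
def add_by_counting(a: int, b: int) -> int:
    total = a
    for _ in range(b):
        total += 1  # still symbolic in Python, but mimics tallying one by one
    return total

model_answer = 9823 + 2349
measured_answer = add_by_counting(9823, 2349)
print(model_answer, measured_answer, model_answer == measured_answer)
# 12172 12172 True
```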

(Editorial note: Philosophy Bro writes casually about interesting issues in philosophy.  In a previous post about Free Will, the author used a word I found offensive, and I said so in the comments.  The word remains in that post, and as such I won’t link to it.  However, I like the blog, and am willing to write off one isolated case of bad judgment in linking to the site.)

Yesterday I linked to a blog post that referred to a model I’d proposed to map the political spectrum.  In it, the author linked to a different model of the political/social/cultural spectrum, and built an analysis upon that framework that conflicted with my interpretation, with this line being the crux:

To understand the fight between Wilkinson and Brooks, you need to understand that Wilkinson’s tribe is not “Moderate Conservative,” it is “Brahmin.”

Now I like my model.  I have an emotional attachment to it, given that I designed it, but I also think it’s a very good approximation of the state of political discourse.  So one response I could take is to attack the Moldbug model that conflicts with mine.  Indeed, I think Moldbug’s write-up is pretty vague, and I’m unclear about some classifications.  But overall, I like the model.  It’s interesting and different.  And while it may conflict with my model at times, they’re largely compatible.  Going forward, I’d much rather use both models to assess political dynamics; at times my model will probably work better, and at times Moldbug’s will.

The analogy here is to maps.  If maps could be perfectly accurate, you’d only need one.  But when they’re not perfect, it’s better to have two than one.  When they agree, you’re more likely to find truth.  When they disagree, you realize you have to put additional work into understanding the situation.

I’d say the same about liberalism, conservatism, libertarianism, realism, interventionism, and so on.  I don’t think any of these is entirely wrong or right.  They’re models, maps, that help explain the world.  In some cases I’ll prefer one model to another.  Generally speaking, I’m most confident about claims where all the maps agree.
