Are you psycho? Then help push this fat guy off a bridge.

I saw an interesting paper a while back, via Crooked Timber, that applied a mixture of psychology and experimental philosophy to see just what kind of people have utilitarian intuitions. The punchline:

Participants who indicated greater endorsement of utilitarian solutions had higher scores on measures of psychopathy, Machiavellianism, and life meaninglessness.

I sent a joking email to some friends, since I’m an advocate of consequentialism and therefore sympathetic to utilitarian arguments. I also noted some caveats and, satisfied that I’d justified my sanity, moved on. But the excellent Will Wilkinson picked up the topic today on his new Big Think blog, The Moral Sciences Club. That led to some noise in my Twitter stream on the subject so, while it’s outside the normal focus of the blog, I figured I should write up a few quick thoughts. First, here’s Wilkinson:

Since it seems implausible that we are best off governed by Machiavellian psychopaths, I take the findings of Bartels and Pizarro–that those attracted to utilitarianism tend toward the psychopathic and Machiavellian–as prima facie evidence that utilitarianism is “self-effacing,” that it recommends its own rejection. This is a study about how, if you are a utilitarian, you should probably do the world some good and shut up about what you really think is best.

My assumption is that this is in good fun. So I mean my response to be taken in the same vein. Here goes…

Being a utilitarian doesn’t make you a psychopath

No one said it did, but it’s still worth calling out. In fact, the authors were careful to say as much:

Nor do our results show that endorsing utilitarianism is pathological, as it is unlikely that the personality styles measured here would characterize all (or most) proponents of utilitarianism as an ethical theory (nor is the measure of psychopathic personality traits we used sufficient to conclude that any respondents reach clinical levels of psychopathy). It is also possible that possessing these sub-clinical psychopathic traits may be of moral value insomuch as individuals who are capable of such emotional detachment, while appearing to possess a questionable moral character in some situations, may be better able to act for the greater good in ways that would prove difficult for many (such as the very situations described in our target dilemmas).

Ok, so at least we have that out of the way.

BREAKING: Every ethical theory has problematic questions

Perhaps you are drawn to utilitarianism but get squeamish when asked about pushing a fat man off a bridge to stop a moving train from killing five others. Every moral theory is “problematic” in the sense that cases can be raised that tend to go against our moral intuitions. Consider Kant’s brand of deontology. Kant’s formulation famously would not allow lying, even in the scenario where you’re hiding Anne Frank in your attic and the Nazis come to the door and ask if anyone is hiding inside. Telling the truth in that scenario probably violates most of our moral intuitions. I’d like to see a study where the toughest questions of all prominent moral theories were asked, so I could see whether the answers of deontologists, contractualists, etc. correlate with measures of psychopathy and the like.

What I’m suggesting is that it’s somewhat unfair (though interesting!) to zero in on such difficult questions. What if we asked a bunch of people a question where the utilitarian answer is in line with our moral intuitions? You have $100 to distribute among yourself and 9 other people. Assume diminishing marginal utility for each dollar. How do you distribute the money? The person who answers “$10 to each person” is both a better utilitarian and less Machiavellian than the person who answers “$100 to myself.”
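
To make the arithmetic concrete, here’s a minimal sketch in Python. The square-root utility function is my own stand-in for diminishing marginal utility, not anything from the study; any concave function gives the same verdict.

```python
import math

def total_utility(allocation):
    """Sum of individual utilities, assuming utility = sqrt(dollars).
    The square root is just one illustrative concave utility function."""
    return sum(math.sqrt(dollars) for dollars in allocation)

equal_split = [10] * 10        # $10 to each of the 10 people
selfish = [100] + [0] * 9      # $100 to myself, nothing for anyone else

print(total_utility(equal_split))  # 10 * sqrt(10), about 31.6
print(total_utility(selfish))      # sqrt(100) = 10.0
```

The equal split wins handily, and it wins under any concave utility function, which is exactly what diminishing marginal utility means.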

Don’t like pushing people off bridges? You can still be a utilitarian

There’s a reason we have rule utilitarianism. It’s not 100% obvious that the correct utilitarian answer is to push someone off a bridge to save five others. A rule utilitarian could argue that the disutility of people incorrectly making snap moral judgments is so great that we should have a system of rules that are themselves designed to maximize utility, and that such a system doesn’t require you to murder someone to save others. There are other arguments one could make as well. You might think they’re copouts, but welcome to the vague moral calculus of utilitarianism. My point is that while the results of this study are interesting, we can’t take any particular decision as the definitive utilitarian position.

Endorsing utilitarianism does not mean rule by psychopaths

Wilkinson ends his post rather strangely (quote above). Sidgwick is right that it’s possible to be a utilitarian and not recommend that others act as utilitarians (for the greater utility!). But Wilkinson makes a move that I just can’t understand. He writes:

Since it seems implausible that we are best off governed by Machiavellian psychopaths

One response would be to say that it is plausible. And we could go there. For committed utilitarians, it could make some sense. But more practically, I think we can just argue that nothing here implies that utilitarian rule must be rule by Machiavellian psychopaths. As the above quote from the study suggests, there’s no reason to think that all utilitarians are psychopaths. Arguably, the reason such a high percentage seem to be is that the theory simply hasn’t permeated society very deeply. In other words, utilitarianism is unpopular; very few people truly hold it, say 1 in 100. Meanwhile, there are far more Machiavellian psychopaths in the population, so 10 out of 100 people give the “utilitarian” answer in one of these scenarios. Since the psychopaths happen to answer the same way as genuine utilitarians, it appears that most utilitarians are psychopaths, when the real problem is that we don’t have enough utilitarians! So, the utilitarian would argue, we just need to go out and recruit more non-psychopath utilitarians, who can then rule in a world that avoids Wilkinson’s critique.
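
Here’s the back-of-the-envelope version of that base-rate argument, using the made-up numbers above (1 in 100 are genuine utilitarians, 10 in 100 are Machiavellian psychopaths; both rates are purely illustrative):

```python
population = 1000
true_utilitarians = population * 1 // 100    # 1% genuinely hold the theory
psychopaths = population * 10 // 100         # 10% give the "utilitarian" answer anyway

# Both groups endorse pushing the fat man, so the dilemma can't tell them apart.
utilitarian_answerers = true_utilitarians + psychopaths

share_psychopathic = psychopaths / utilitarian_answerers
print(f"{share_psychopathic:.0%} of 'utilitarian' answerers are psychopaths")  # 91%
```

Even if every genuine utilitarian were perfectly sane, the people endorsing the bridge-pushing answer would still look overwhelmingly psychopathic, simply because psychopaths outnumber true believers.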

So there you have it. A few quick thoughts. I’ve definitely done the opposite of shutting up. Whether I’ve done some good is up for debate.


What is evolution good for?

In one of his essays in Philosophy and Social Hope, Richard Rorty noted the tendency of scientists to assume that they are best positioned to adjudicate questions in the philosophy of science. As Rorty compellingly detailed, they are not. I was reminded of this when reading “Can Darwinism Improve Binghamton?” in The New York Times Book Review.

The author starts off this way:

My undergraduate students, especially those bound for medical school, often ask why they have to study evolution. It won’t cure disease, and really, how useful is evolution to the average person? My response is that while evolutionary biology can explain, for example, the origin of antibiotic-resistant bacteria, we shouldn’t see evolution as a cure for human woes. Its value is explanatory: to tell us how, when and why we got here (by “we,” I mean “every organism”) and to show us how all species are related. In the end, evolution is the greatest tale of all, for it’s true.

This is, in my view, quite misguided. Without usefulness it soon becomes impossible to locate any measure by which to evaluate the truth of an explanation. This, to me (and I believe to Rorty), is the point of postmodern philosophy. Accepting that point doesn’t mean accepting all that falls under the heading of “postmodernism”; we can dodge the rest by embracing pragmatism. While I recommend Rorty to anyone looking to read more about science and pragmatism, I have occasionally come across succinct yet pointed statements on the subject, two of which I’d like to share here.
Economist Joseph Stiglitz put it simply but astutely in a recent paper: “Prediction is the test of a scientific theory.”

It’s as simple as that. Conservative writer and entrepreneur Jim Manzi had an equally useful (get it?) take on science and pragmatism a while back.

He wrote: “I claim that the purpose of science is to create useful, reliable, non-obvious predictive rules.”

We would do well to heed these words.*

*Yes, that’s circular reasoning, if you really think about it. But as Rorty might say, what’s the alternative?


Exposing sacred arguments

Moral psychologist Jonathan Haidt gave a talk in February arguing that the social psychology field was a “moral community” by virtue of its political liberalism, and that this was compromising its ability to do good science. I want to use one piece of his argument as a jumping off point to discuss what I see as one of the biggest obstacles to productive public discussion. Haidt:

Sacredness is a central and subtle concept in sociology and anthropology, but we can get a simple working definition of it from Phil Tetlock [a social psychologist at the University of Pennsylvania]. Tetlock defines a sacred value as “any value that a moral community implicitly or explicitly treats as possessing infinite or transcendental significance …” If something is a sacred value, you can’t make utilitarian tradeoffs; you can’t think in a utilitarian way. You can’t sell a little piece of it for a lot of money, for example. Sacredness precludes tradeoffs. When sacred values are threatened, we turn into “intuitive theologians.” That is, we use our reasoning not to find the truth, but to find ways to defend what we hold sacred…

…Sacralizing distorts thinking. These distortions are easy for outsiders to see, but they are invisible to those inside the force field.

For the most part there’s nothing wrong with sacredness, per se. The problem arises when the sacred principle is challenged by someone outside the moral community. As Haidt notes, the result is that reasoning comes to the aid of justifying a principle, and that leads to sloppy arguments. If your commitment to the principle of nonviolence is challenged, for instance, you may start arguing about the ineffectiveness of military interventions. But what’s really driving that argument isn’t the facts; it’s the desire to defend a principle that in your moral vision really doesn’t even need defending. If you’re a pacifist, that’s fine. What’s not fine is marshalling weak arguments when a sacred view is challenged.

Now in practice my guess is that few things are held as entirely sacred, but many things are held nearly sacred. By that I mean that for most people, few beliefs are beyond any tradeoffs, but quite a few principles are sacred enough to require that an exceptionally high bar be cleared before they’re willing to start trading them away.

There’s been some good back-and-forth in the libertarian blogosphere recently on the extent to which policy differences between liberals and libertarians are caused by different opinions on empirical matters, versus different values or principles. Ilya Somin at Volokh Conspiracy is thinking along the same lines as I am, writing:

Within political philosophy, many scholars are either pure utilitarian consequentialists (thinkers who believe that we are justified in doing whatever it takes to maximize happiness) or pure deontologists (people who argue that we must respect certain rights absolutely, regardless of consequences)… Outside philosophy departments, however, few people endorse either of these positions.

So sacredness in practice is probably a matter of extent. But that does nothing to detract from its importance in public debate. If someone is arguing in favor of a principle they hold sacred I want to know. If you’ve written an op-ed detailing all the reasons military intervention in Libya would be ill-advised, the fact that you’re a pacifist – that nonviolence is a sacred principle for you – is extremely relevant.

I see the identification of sacredness as a crucial challenge in the public sphere, and therefore a crucial challenge within media. As I’ve mentioned before, there’s lots of talk about the importance of transparency in the brave new world of online media, and I’m in favor of that. But transparency means a lot of things (again, as I’ve discussed before). It’s easy to say “I’m a liberal, I generally favor x, y and z and am a fan of these thinkers or politicians. I voted for so-and-so for president.” That’s one kind of transparency. But it’s a very thin transparency. I’d love to see some media experiments that go further and try to identify sacred principles. Let’s play around with ways of telling me the author is a pacifist.

This is a hard problem because most of us have a rough time identifying what we consider sacred. As Haidt notes, it’s often something that is obvious only to outsiders. And once degrees of sacredness are thrown into the mix, things get even messier. In a way the blogosphere offers a really rudimentary partial fix just by removing word and page limits. When there’s no limit on length you can talk endlessly about the principles behind the authors, as the libertarian discussion makes clear. But I think we can do better. I don’t have many good specifics on how just yet, but it’s something I think about. Ideas?


The epistemology of Wikipedia

The Atlantic’s tech channel has a great feature for Wikipedia’s 10th anniversary, with thoughts from a number of excellent contributors, including Shirky, Benkler, Zuckerman, Rosen and more. Check it out.

One point of interest for me was a contrast in epistemologies offered by novelist Jonathan Lethem and Clay Shirky.  Lethem:

Question: hadn’t we more or less come to understand that no piece of extended description of reality is free of agendas or ideologies? This lie, which any Encyclopedia implicitly tells, is cubed by the infinite regress of Wikipedia tinkering-unto-mediocrity. The generation of an infinite number of bogusly ‘objective’ sentences in an English of agonizing patchwork mediocrity is no cause for celebration.

Now compare that to Shirky:

A common complaint about Wikipedia during its first decade is that it is “not authoritative,” as if authority was a thing which Encyclopedia Britannica had and Wikipedia doesn’t. This view, though, hides the awful truth — authority is a social characteristic, not a brute fact.

So far, that’s basically the same critique that Lethem offers.  But unlike Lethem, Shirky offers a pragmatic version of epistemology:

Authoritativeness adheres to persons or institutions who, we jointly agree, have enough of a process for getting things right that we trust them. This bit of epistemological appraisal seems awfully abstract, but it can show up in some pretty concrete cases.

DARPA, the Pentagon’s famous R&D lab, launched something in late 2009 called “The Red Balloon Challenge.” They put up ten red weather balloons around the country, and said to contestants “If you can tell us the latitude and longitude of these balloons, within a mile of their actual positions, we’ll give you $40,000.” However, because the Earth is curved, DARPA also had to explain the Haversine formula, which converts latitude and longitude to distance.

Now, did DARPA want to write up a long, technical description of the Haversine formula? No, they did not; they had better things to do. So they did what you or I would have done: They pointed to Wikipedia. DARPA, in essence, told contestants “If you want to compete for this $40,000, you should understand this formula, and if you don’t, go look at this Wikipedia article.”

Shirky’s account strikes me as the kind of pragmatism advocated by Richard Rorty, of whom I’m a big fan.  What makes something true in a post-metaphysical world?  Well, how about whether or not it helps you track down the balloons and win $40K?  Hurray, pragmatism!
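
Incidentally, the Haversine formula Shirky mentions is short enough to sketch here. This is the standard textbook version (my illustration; the contestants, of course, got theirs from Wikipedia):

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (latitude, longitude) points,
    approximating the Earth as a sphere of radius 3959 miles."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 3959 * 2 * math.asin(math.sqrt(a))

# A contestant checking a guess against a balloon's reported position:
print(haversine_miles(38.8977, -77.0365, 38.8895, -77.0353))  # well under a mile
```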

I recognize all of the above is less about Wikipedia and more about philosophy… so thanks for indulging me in this post. But do go read The Atlantic’s package, particularly Benkler’s response. I’ll leave you with this Benkler nugget:

That, to me, is the biggest gift Wikipedia has given us: a way of looking at the world around us and seeing the possibility of effective human cooperation, on really complex, large projects, without relying on either market or government processes.