Sunday, December 28, 2008

Justice Lecture 3: The cost of a life

The case study in this lecture is an internal memo from Ford that assigns a dollar value to a human life in deciding whether or not to issue a recall of its Pinto cars. Most of the students in the class called this move objectionable, but the obvious objection to that position is: how else do you decide whether to do something? For example, we could certainly reduce the number of traffic deaths by imposing a mandatory 25 mph speed limit across the country. If the value of a human life were infinite, we would be compelled to take such an action. However, we don’t think that lives are infinitely valuable. An important point here is that no action can ever be guaranteed to save a life; we can only take actions that have some statistical probability of saving lives. This probability must enter into the calculation. Resources are scarce. We only have so many bags of rice or vaccines to ship around the world, so we have to make hard choices. I don’t think that we boost our ethical consciousness by ignoring the dollar value that we place on human lives. Rather, I think that by talking about the value of a life we will become more aware of how tough ethical decisions have to be made. It would also expose the dramatically distorted value that we place on human lives in, for example, Congo compared to human lives in California.
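To make the expected-value point concrete, here is a small Python sketch. Every number in it is invented purely for illustration; none of them come from the actual Ford memo.

```python
# Hypothetical numbers only -- nothing here comes from the actual Ford memo.
# The point: once a dollar value is assigned to a statistical life, a
# probability of death turns a recall decision into an expected-value comparison.

def expected_cost_no_recall(p_fatal_defect, fleet_size, value_per_life):
    """Expected liability if the defect is left unfixed."""
    expected_deaths = p_fatal_defect * fleet_size
    return expected_deaths * value_per_life

def recall_cost(fleet_size, repair_cost_per_car):
    """Total cost of recalling and repairing every car."""
    return fleet_size * repair_cost_per_car

# Illustrative inputs (all invented):
fleet = 12_500_000       # cars on the road
p_death = 1e-5           # chance a given car causes a fatality
life_value = 200_000     # dollar value assigned to a statistical life
repair = 11              # repair cost per car, in dollars

no_recall = expected_cost_no_recall(p_death, fleet, life_value)
do_recall = recall_cost(fleet, repair)
print(no_recall, do_recall, no_recall < do_recall)
```

Under these invented numbers the expected liability comes out far below the recall cost, which is exactly the kind of comparison the memo was criticized for making.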

Sandel offers a counterargument to the contention that we must assign a dollar value to human lives. He uses the example of the Roman Colosseum: maybe the combined utility of all the people enjoying the show would outweigh the loss of life for the slave who gets tortured and murdered in the arena. Therefore: killing for sport is justifiable. Here’s another (imaginary) example: Jon Krakauer writes Under the Banner of Heaven, which offends millions of Mormons. Therefore, Krakauer’s book should not be published. In each of these examples, an individual right is overcome by the aggregate utility of a large number of people. How does the rule-based utilitarian face these cases? I think the answer is that we believe in individual rights because over the long run they lead to a maximization of utility. There may be some fluke cases in which aggregate utility is increased by violating individual rights, but overall we want to live in a society where we are free from murder and free to express ourselves. So we make those rights near-absolute. When individual rights are violated, as is done for criminals, we want this done by a body (the criminal justice system) that follows fair procedures.

Sunday, December 21, 2008

Justice Lecture 2: The case of the cabin boy

First, Sandel describes the distinction between consequentialist and categorical approaches to morality. As I see it, any moral code must be consequentialist at its most fundamental level. There are no absolute moral laws in the universe; that is, morality does not exist independently of sentient beings. In a world with no consciousness, there is no right or wrong. There is no way to morally differentiate between, say, Mercury and Pluto. However, once there is consciousness (even if rudimentary), then we can start to make moral evaluations. At this foundational level, what we care about is minimizing suffering and maximizing pleasure and happiness (broadly defined). Again, we do not make moral judgments about inanimate objects, unless they have an impact on the quality of life of living beings.

Thus any moral code must at its most fundamental be utilitarian, in the most general form of the definition. That is to say, a moral code is utilitarian if it is grounded in promoting the quality of life of living beings. I assert that this is the only thing we value. There is nothing else that we really care about. If a moral code is *not* utilitarian, then what does it value? The shape of rocks? The temperature? The periods of planetary revolution?

[ASIDE: Here I am ignoring any moral claims that are derived from religious dogma. The justification for any such claim is an appeal to authority, which is not accepted by all people. Even if such an appeal were accepted by a majority (say, a majority in a single nation), I reject any moral argument of the form “I believe X because Y tells me so,” where Y is Moses, Jesus, Muhammad, God, etc. The only defensible arguments are ones that all reasonable people can accept. Rawls has talked a lot about this (in Political Liberalism), but it’s been a while since I read it, so I won’t go into it now.

[One might say: well, how are you going to live in a civil society with all those people who derive their morality from religious codes? My goal here is not to describe the morality actually followed by most people. Instead, I want to lay out my own argument for a justifiable moral code. I think it’s right, and I think people ought to adopt it for the reasons I describe. But I fully understand that many people will not follow this morality. Despite that, I still think that democracy is the best political structure, even if many members of the society do not share the moral/political values that I subscribe to. Ultimately, I think my moral code—let’s call it limited liberalism for now—will be the one that societies converge toward. END ASIDE.]

So let’s start from the assertion that all reasonable moral codes must be, at their most fundamental level, motivated by utilitarian values. Now, the question is how best to achieve this. One extreme approach, let’s call it “OCD Utilitarianism,” says that to make a moral judgment, one must perform an extended calculation of all the ramifications of one’s decision. For example, if I am a doctor and two patients arrive in the ER, I need to do a full background check—family members, career history, resume, etc.—in order to evaluate which one to save. Taken to this logical extreme, OCD Utilitarianism is obviously unworkable. We simply have too many decisions to make every day for this to be realistic. My point in proposing this ridiculous moral code is to demonstrate that we live in a world of complexity, uncertainty, and scarcity (of time and resources). To make decisions, we need simplifying rules of thumb. These are not absolute moral laws, but rather moral guidelines.

For example, one moral rule of thumb is to feed kids milk. We break this rule commonly, whenever a child is lactose-intolerant. Another moral rule of thumb that is less commonly broken is to stop at red lights. However, there are certainly some rare cases when it is morally justifiable to violate this rule, e.g., if a passenger is dying of a gunshot wound in the backseat and you’re trying to get to the hospital. Given that we must decide quickly, moral rules of thumb help us make moral decisions. Since we live in a world of uncertainty, sometimes following these rules of thumb will lead us to make decisions that, in hindsight, we regret. Many laws of the state are based on these kinds of moral rules of thumb: do not kill, do not steal, do not prevent people from worshiping as they choose, etc. These are not absolute moral laws but rather very strong rules of thumb which may be justifiably broken only under very rare circumstances.

So any moral code will need a way of deriving moral rules of thumb; OCD utilitarianism is simply untenable. The moral codes which claim to be alternatives to utilitarianism (such as the categorical imperative, or contractualism) really just disagree about how to generate the rules of thumb. The end goal is to create a set of laws that maximizes quality of living.

Now, one of the big outstanding questions to resolve is the distribution question: given scarce resources, the opportunity cost of a marginal addition to my utility is a forgone increment of utility for someone else. This is one of the big questions that Rawls tries to tackle in A Theory of Justice. It’s been a while since I read it, so I can’t describe the argument in detail. But the thrust is that the veil of ignorance is a powerful thought experiment which helps motivate his two basic principles: maximal liberty without infringing on others’ liberty, and supporting differences in income only to the extent that all people benefit. I won’t go into these in any depth now, but I think they’re fundamentally on the right track.

Now (finally!), let’s turn to the case of the cabin boy, described by Sandel in this lecture. As a brief recap: in the 19th century, a ship sinks and some of the crew survive in a lifeboat with almost no provisions. After 19 days, they realize they will die unless they kill someone, so the older men decide to kill the cabin boy, who is in his teens. They kill him with a penknife, eat his remains, and several days later are rescued. They are then taken to court and tried for murder.

I think this is a very interesting case because it is an example of humans acting not as moral agents but as animals designed to survive. In the animal world, there is no such thing as a moral judgment. We don’t say that a male bird is immoral for abandoning its offspring. We don’t say that a male sea lion is immoral for fighting off other male sea lions. We don’t say a praying mantis is immoral for eating her mate after he fertilizes her. We don’t say pandas are immoral for only wanting to have sex once per year—even if it’s not very good for the survival of their species. Animals are designed in many different ways to propagate their genes. Some strategies are more successful than others, but we don’t attribute moral value to any of their actions. Animals are not moral agents. Humans, by contrast, are not motivated only by survival. We have the capacity to overcome the design of our genes, and thank goodness we do—otherwise our society would be filled with plundering rapists.

Just because we have the capacity to act as moral agents does not mean that we always do. When threatened by death, we revert to our instincts for survival. As evidence, we may ponder whether the sailors would have acted differently had they known they would be convicted of murder. On the brink of starvation, I think most people would act in whatever way necessary to save their own lives. Thus, I don’t think it’s very useful to ponder the morality or even the legality of people’s actions in such a circumstance. What is the use of a moral code for a non-moral agent? I think this example holds a lot of lessons; let me give just two. Consider the drug addict who values a high over his family’s and his own safety, security, and comfort. For such an addict, the force of law holds little power of persuasion. There may be reasons to have laws against the use of such powerful drugs, but deterring the hardened addict should not be one of them.

As another example, let us consider the plight of someone like Ishmael Beah, the child in Sierra Leone whose parents were killed by rebels and who was then forced to fight. Twelve-year-old Beah may have performed actions (murder, to start with) that would be immoral for most people to commit, but he was doing them in order to survive. Thus he cannot be held morally responsible for those actions.

Sunday, December 7, 2008

Tweaking nature's laws

Dennett's main point in Freedom Evolves is that, if we assume we live in a deterministic universe, the interesting question becomes: how different would things have to be in order to produce a different outcome? I know that I can't run a two-minute mile--that would require deep physiological changes. I know that I can't speak Mandarin right now--that would require deep neural changes. But I might be able to calculate the differential cross-section for some scattering process. That's within the bounds of my capability, though I may fall short.

I think this approach is also valuable in thinking about science. It's useful to ask the question: "How different would the universe have to be in order for X to be the case?" In biology, we could imagine primates evolving to have 6 limbs. It's harder to imagine them evolving to have 2 on one side and 3 on the other, but not impossible.

In physics, we could imagine the value of G being different (even if it would create a universe that could not support life). It's harder to imagine a universe in which small oscillations around a point of equilibrium are described by a triangle wave rather than a sinusoidal wave. I think that's about like trying to imagine a universe in which 2 + 2 = 5. You sort of can, but it's very hard to do.

Well, here's my rumination on a question my brother asked me, about why E=mc^2 and not E=m(kc)^2. I imagine that this falls closer to the 2+2=5 case than the different-G case, but I'm not fully convinced by my explanation. I haven't seen a more complete explanation anywhere else, though.


Hey Stu,

Let me try to do a better job answering your question about why E=mc^2 and not E=m(kc)^2.

I think it really centers on frame-invariant quantities. If you want something to be frame-invariant, it needs to be represented in an appropriate 4-vector. An example is the position 4-vector (ct, x, y, z), for which s^2 = (ct)^2 - x^2 - y^2 - z^2 is invariant. If you're on a train and I'm on a platform, we may disagree about the time between 2 events or the space between 2 events, but we'll always agree on the s^2 of the 2 events.
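Here's a quick numerical check of that claim (my own sketch in Python, not part of the original email): boost an event along x and confirm that t and x change but s^2 does not.

```python
import math

def boost(t, x, v, c=1.0):
    """Lorentz boost along x with velocity v (defaults to units where c = 1)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_prime = gamma * (t - v * x / c**2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def interval2(t, x, c=1.0):
    """The invariant s^2 = (ct)^2 - x^2, with y and z suppressed."""
    return (c * t) ** 2 - x**2

t, x = 3.0, 2.0
tp, xp = boost(t, x, v=0.6)
print(interval2(t, x), interval2(tp, xp))  # both come out to 5.0 (up to rounding)
```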

Well, rest-mass is something that is also invariant. You and I will always agree on the rest-mass of an object, even if we disagree on its momentum and energy. We can make a 4-vector that obeys these properties [is this the only way to get a frame-invariant rest-mass?]: (A/c, px, py, pz), where

m^2*c^2 = A^2/c^2 - px^2 - py^2 - pz^2
        = A^2/c^2 - p^2.

What is A? Well, note that p = gamma * m * v, with gamma = (1-v^2/c^2)^(-1/2). [You need the gamma if you want conservation of momentum to hold in different reference frames.] Now, let's take an enlightened guess about what A is:

A = gamma * m * (kc)^2.

Then, we can expand the gamma with a Taylor series (trust me if you forget how to do this):

A = m*(kc)^2 * (1 + v^2/2c^2 + 3v^4/8c^4 + ...)
= m*(kc)^2 + (1/2)m(kv)^2 + (3/8)mv^4(k/c)^2 + ...

The third and higher terms are negligibly small, so let's drop those.
Then if we set k=1, we get

A = mc^2 + (1/2)mv^2
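A quick numerical sanity check on that expansion (my own addition, standard library only): at everyday speeds, gamma and the first three terms of its Taylor series agree to about one part in 10^12.

```python
import math

def gamma(v, c=1.0):
    """The exact Lorentz factor (1 - v^2/c^2)^(-1/2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

def gamma_series(v, c=1.0):
    """First three terms of the expansion: 1 + v^2/2c^2 + 3v^4/8c^4."""
    x = (v / c) ** 2
    return 1.0 + x / 2.0 + 3.0 * x**2 / 8.0

v = 0.01  # 1% of the speed of light -- already far faster than everyday speeds
print(gamma(v), gamma_series(v))  # the difference is of order (v/c)^6
```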

All that we can measure are changes in energy. So what we will measure are changes in (1/2)mv^2, which just happens to be the same as the expression for kinetic energy that we're familiar with. If, however, we had left k not equal to 1, then changes in A would not correspond to changes in Newtonian kinetic energy, and we would not be able to set A = E.
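To tie it together, here's one more small check (again my own, with invented function names): with k = 1, taking A = gamma*m*c^2 and p = gamma*m*v makes A^2/c^2 - p^2 come out to m^2*c^2 at every speed, i.e., in every frame.

```python
import math

def energy_momentum(m, v, c=1.0):
    """With k = 1: A = gamma*m*c^2 and p = gamma*m*v."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m * c**2, gamma * m * v

m = 2.0
for v in (0.0, 0.3, 0.9):
    A, p = energy_momentum(m, v)
    # In units where c = 1, A^2/c^2 - p^2 is just A^2 - p^2:
    print(v, A**2 - p**2)  # m^2*c^2 = 4.0 every time (up to rounding)
```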

Not sure if that helps. It's a good question, one that hits at some subtle issues in relativity. Most of the time we just use it because we know that it works! :)