More than thirty-five years later, the horror lived on in The Tipping Point, Malcolm Gladwell's groundbreaking book about social behavior, as an example of the "bystander effect," whereby the presence of multiple witnesses at a tragedy can actually inhibit intervention.
Today, more than forty years later, the Kitty Genovese saga appears in all ten of the top-selling undergraduate textbooks for social psychology. One text describes the witnesses remaining "at their windows in fascination for the 30 minutes it took her assailant to complete his grisly deed, during which he returned for three separate attacks."
How on earth could thirty-eight people stand by and watch as their neighbor was brutalized? Yes, economists always talk about how self-interested we are, but doesn't this demonstration of self-interest practically defy logic? Does our apathy really run so deep?
The Genovese murder, coming just a few months after President John F. Kennedy's assassination, seemed to signal a sort of social apocalypse. Crime was exploding in cities all across the United States, and no one seemed capable of stopping it.
For decades, the rate of violent and property crimes in the United States had been steady and relatively low. But levels began to rise in the mid-1950s. By 1960, the crime rate was 50 percent higher than it had been in 1950; by 1970, the rate had quadrupled.
Why?
It was hard to say. So many changes were simultaneously rippling through American society in the 1960s (a population explosion, a growing anti-authoritarian sentiment, the expansion of civil rights, a wholesale shift in popular culture) that it wasn't easy to isolate the factors driving crime.
Imagine, for instance, you want to know whether putting more people in prison really lowers the crime rate. This question isn't as obvious as it may seem. Perhaps the resources devoted to catching and jailing criminals could have been used more productively. Perhaps every time a bad guy is put away, another criminal rises up to take his place.
To answer this question with some kind of scientific certainty, what you'd really like to do is conduct an experiment. Pretend you could randomly select a group of states and command each of them to release 10,000 prisoners. At the same time, you could randomly select a different group of states and have them lock up 10,000 people, misdemeanor offenders perhaps, who otherwise wouldn't have gone to prison. Now sit back, wait a few years, and measure the crime rate in those two sets of states. Voilà! You've just run the kind of randomized, controlled experiment that lets you determine the relationship between variables.
Unfortunately, the governors of those random states probably wouldn't take too kindly to your experiment. Nor would the people you sent to prison in some states or the next-door neighbors of the prisoners you freed in others. So your chances of actually conducting this experiment are zero.
That's why researchers often rely on what is known as a natural experiment, a set of conditions that mimic the experiment you want to conduct but, for whatever reason, cannot. In this instance, what you want is a radical change in the prison population of various states for reasons that have nothing to do with the amount of crime in those states.
Happily, the American Civil Liberties Union was good enough to create just such an experiment. In recent decades, the ACLU has filed lawsuits against dozens of states to protest overcrowded prisons. Granted, the choice of states is hardly random. The ACLU sues where prisons are most crowded and where it has the best chance of winning. But the crime trends in states sued by the ACLU look very similar to trends in other states.
The ACLU wins virtually all of these cases, after which the state is ordered to reduce overcrowding by letting some prisoners go free. In the three years after such court decisions, the prison population in these states falls by 15 percent relative to the rest of the country.
What do those freed prisoners do? A whole lot of crime. In the three years after the ACLU wins a case, violent crime rises by 10 percent and property crime by 5 percent in the affected states.
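To get a feel for what those numbers imply, here is a rough back-of-the-envelope sketch in Python. It uses only the percentages quoted above and treats them as crude elasticities; the actual research controls for far more, so this is illustrative arithmetic, not the real method.

# Back-of-the-envelope arithmetic using only the figures quoted above;
# illustrative numbers, not the underlying study's data.
prison_population_change = -0.15   # prison population falls 15 percent
violent_crime_change = 0.10        # violent crime rises 10 percent
property_crime_change = 0.05       # property crime rises 5 percent

# Implied "elasticities": percent change in crime per percent change in prisoners.
violent_elasticity = violent_crime_change / prison_population_change      # about -0.67
property_elasticity = property_crime_change / prison_population_change    # about -0.33

print(f"violent-crime elasticity:  {violent_elasticity:.2f}")
print(f"property-crime elasticity: {property_elasticity:.2f}")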
So it takes some work, but using indirect approaches like natural experiments can help us look back at the dramatic crime increase of the 1960s and find some explanations.
One major factor was the criminal-justice system itself. The ratio of arrests per crime fell dramatically during the 1960s, for both property and violent crime. But not only were the police catching a smaller share of the criminals; the courts were less likely to lock up those who were caught. In 1970, a criminal could expect to spend an astonishing 60 percent less time behind bars than he would have for the same crime committed a decade earlier. Overall, the decrease in punishment during the 1960s seems to be responsible for roughly 30 percent of the rise in crime.
The postwar baby boom was another factor. Between 1960 and 1980, the fraction of the U.S. population between the ages of fifteen and twenty-four rose by nearly 40 percent, an unprecedented surge in the age group most at risk for criminal involvement. But even such a radical demographic shift can only account for about 10 percent of the increase in crime.
So together, the baby boom and the declining rate of imprisonment explain less than half of the crime spike. Although a host of other hypotheses have been advanced, including the great migration of African Americans from the rural South to northern cities and the return of Vietnam vets scarred by war, all of them combined still cannot explain the crime surge. Decades later, most criminologists remain perplexed.
The answer might be right in front of our faces, literally: television. Maybe Beaver Cleaver and his picture-perfect TV family weren't just a casualty of the changing times (Leave It to Beaver was canceled in 1963, the same year Kennedy was assassinated). Maybe they were actually a cause of the problem.
People have long posited that violent TV shows lead to violent behavior, but that claim is not supported by data. We are making an entirely different argument here. Our claim is that children who grew up watching a lot of TV, even the most innocuous family-friendly shows, were more likely to engage in crime when they got older.
Testing this hypothesis isn't easy. You can't just compare a random bunch of kids who watched a lot of TV with those who didn't. The ones who were glued to the TV are sure to differ from the other children in countless ways beyond their viewing habits.
A more believable strategy might be to compare cities that got TV early with those that got it much later.
We wrote earlier that cable TV came to different parts of India at different times, a staggered effect that made it possible to measure TV's impact on rural Indian women. The initial rollout of TV in the United States was even bumpier. This was mainly due to a four-year interruption, from 1948 to 1952, when the Federal Communications Commission declared a moratorium on new stations so the broadcast spectrum could be reconfigured.
Some places in the United States started receiving signals in the mid-1940s while others had no TV until a decade later. As it turns out, there is a stark difference in crime trends between cities that got TV early and those that got it late. These two sets of cities had similar rates of violent crime before the introduction of TV. But by 1970, violent crime was twice as high in the cities that got TV early relative to those that got it late. For property crime, the early-TV cities started with much lower rates in the 1940s than the late-TV cities, but ended up with much higher rates.
There may of course be other differences between the early-TV cities and the late-TV cities. To get around that, we can compare children born in the same city in, say, 1950 and 1955. So in a city that got TV in 1954, we are comparing one age group that had no TV for the first four years of life with another that had TV the entire time. Because of the staggered introduction of TV, the cutoff between the age groups that grew up with and without TV in their early years varies widely across cities. This leads to specific predictions about which cities will see crime rise earlier than others-as well as the age of the criminals doing the crimes.
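Here, as a minimal sketch in Python, is the exposure calculation that drives the comparison. The city names and TV arrival years are hypothetical, and the four-year window simply follows the example above; the point is only to show how the staggered rollout gives same-aged children in different cities very different amounts of early-childhood TV.

# Toy illustration of the cohort comparison described above.
# City names and arrival years are hypothetical.
def early_tv_exposure(birth_year, tv_arrival_year, window=4):
    """Years of TV availability during the first `window` years of life."""
    first_year_with_tv = max(birth_year, tv_arrival_year)
    return max(0, (birth_year + window) - first_year_with_tv)

cities = {"Earlyville": 1946, "Lateburg": 1954}   # hypothetical arrival years

for city, tv_year in cities.items():
    for cohort in (1950, 1955):
        years = early_tv_exposure(cohort, tv_year)
        print(f"{city}, born {cohort}: {years} of first 4 years with TV")

In the hypothetical Lateburg, which gets TV in 1954, the 1950 cohort spends its first four years without TV while the 1955 cohort has it the whole time, exactly the contrast described above.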
So did the introduction of TV have any discernible effect on a given city's crime rate?
The answer seems to be yes, indeed. For every extra year a young person was exposed to TV in his first fifteen years, we see a 4 percent increase in the number of property-crime arrests later in life and a 2 percent increase in violent-crime arrests. According to our analysis, the total impact of TV on crime in the 1960s was an increase of 50 percent in property crimes and 25 percent in violent crimes.
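To make the per-year figures concrete, here is a small illustrative calculation in Python. It takes a simple additive reading of "X percent per extra year of exposure"; whether the underlying estimates are additive or compounding is an assumption made here, so treat this as arithmetic shorthand rather than the actual model.

# Illustrative arithmetic only: apply the per-year effects quoted above
# to a hypothetical child with a given number of years of early TV exposure.
# An additive reading is assumed; the real estimates may be specified differently.
PROPERTY_EFFECT_PER_YEAR = 0.04   # 4 percent more property-crime arrests per year
VIOLENT_EFFECT_PER_YEAR = 0.02    # 2 percent more violent-crime arrests per year

def arrest_increase(exposure_years, per_year_effect):
    return exposure_years * per_year_effect

for years in (5, 10, 15):
    prop = arrest_increase(years, PROPERTY_EFFECT_PER_YEAR)
    viol = arrest_increase(years, VIOLENT_EFFECT_PER_YEAR)
    print(f"{years} years of exposure: property +{prop:.0%}, violent +{viol:.0%}")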
Why did TV have this dramatic effect?
Our data offer no firm answers. The effect is largest for children who had extra TV exposure from birth to age four. Since most four-year-olds weren't watching violent shows, it's hard to argue that content was the problem.
It may be that kids who watched a lot of TV never got properly socialized, or never learned to entertain themselves. Perhaps TV made the have-nots want the things the haves had, even if it meant stealing them. Or maybe it had nothing to do with the kids at all; maybe Mom and Dad became derelict when they discovered that watching TV was a lot more entertaining than taking care of the kids.
Or maybe early TV programs somehow encouraged criminal behavior. The Andy Griffith Show, a huge hit that debuted in 1960, featured a friendly sheriff who didn't carry a gun and his extravagantly inept deputy, named Barney Fife. Could it be that all the would-be criminals who watched this pair on TV concluded that the police simply weren't worth being afraid of?
As a society, we've come to accept that some bad apples will commit crimes. But that still doesn't explain why none of Kitty Genovese's neighbors (regular people, good people) stepped in to help. We all witness acts of altruism, large and small, just about every day. (We may even commit some ourselves.) So why didn't a single person exhibit altruism on that night in Queens?
A question like this may seem to fall beyond the realm of economics. Liquidity crunches and oil prices and even collateralized debt obligations, sure, but social behaviors like altruism? Is that really what economists do?
For hundreds of years, the answer was no. But around the time of the Genovese murder, a few renegade economists had begun to care deeply about such things. Chief among them was Gary Becker, whom we met earlier, in this book's introduction. Not satisfied with just measuring the economic choices people make, Becker tried to incorporate the sentiments they attached to such choices.
Some of Becker's most compelling research concerned altruism. He argued, for instance, that the same person who might be purely selfish in business could be exceedingly altruistic among people he knew; importantly, though (Becker is an economist, after all), he predicted that altruism even within a family would have a strategic element. Years later, the economists Doug Bernheim, Andrei Shleifer, and Larry Summers empirically demonstrated Becker's point. Using data from a U.S. government longitudinal study, they showed that an elderly parent in a retirement home is more likely to be visited by his grown children if they are expecting a sizable inheritance.
But wait, you say: maybe the offspring of wealthy families are simply more caring toward their elderly parents?
A reasonable conjecture, in which case you'd expect an only child of wealthy parents to be especially dutiful. But the data show no increase in retirement-home visits if a wealthy family has only one grown child; there need to be at least two. This suggests that the visits increase because of competition between siblings for the parent's estate. What might look like good old-fashioned intrafamilial altruism may be a sort of prepaid inheritance tax.
Some governments, wise to the ways of the world, have gone so far as to legally require grown children to visit or support their aging moms and dads. In Singapore, the law is known as the Maintenance of Parents Act.
Still, people appear to be extraordinarily altruistic, and not just within their own families. Americans in particular are famously generous, donating about $300 billion a year to charity, more than 2 percent of the nation"s GDP. Just think back to the last hurricane or earthquake that killed a lot of people, and recall how Good Samaritans rushed forward with their money and time.
But why?
Economists have traditionally assumed that the typical person makes rational decisions in line with his own self-interest. So why should this rational fellow (Homo economicus, he is usually called) give away some of his hard-earned cash to someone he doesn't know in a place he can't pronounce in return for nothing more than a warm, fuzzy glow?
Building on Gary Becker's work, a new generation of economists decided it was time to understand altruism in the world at large. But how? How can we know whether an act is altruistic or self-serving? If you help rebuild a neighbor's barn, is it because you're a moral person or because you know your own barn might burn down someday? When a donor gives millions to his alma mater, is it because he cares about the pursuit of knowledge or because he gets his name plastered on the football stadium?
Sorting out such things in the real world is extremely hard. While it is easy to observe actions (or, in the Kitty Genovese case, inaction), it is much harder to understand the intentions behind an action.
Is it possible to use natural experiments, like the ACLU-prison scenario, to measure altruism? You might consider, for instance, looking at a series of calamities to see how much charitable contribution they produce. But with so many variables, it would be hard to tease out the altruism from everything else. A crippling earthquake in China is not the same as a scorching drought in Africa, which is not the same as a devastating hurricane in New Orleans. Each disaster has its own sort of "appeal," and, just as important, donations are heavily influenced by media coverage. One recent academic study found that a given disaster received an 18 percent spike in charitable aid for each seven-hundred-word newspaper article and a 13 percent spike for every sixty seconds of TV news coverage. (Anyone hoping to raise money for a Third World disaster had better hope it happens on a slow news day.) And such disasters are by their nature anomalies (especially noisy ones, like shark attacks) that probably don't have much to say about our baseline altruism.
In time, those renegade economists took a different approach: since altruism is so hard to measure in the real world, why not peel away all the real world's inherent complexities by bringing the subject into the laboratory?
Laboratory experiments are of course a pillar of the physical sciences and have been since Galileo Galilei rolled a bronze ball down a length of wooden molding to test his theory of acceleration. Galileo believed, correctly as it turned out, that a small creation like his could lead to a better understanding of the greatest creations known to humankind: the earth's forces, the order of the skies, the workings of human life itself.
More than three centuries later, the physicist Richard Feynman reasserted the primacy of this belief. "The test of all knowledge is experiment," he said. "Experiment is the sole judge of scientific 'truth.'" The electricity you use, the cholesterol drug you swallow, the page or screen or speaker from which you are consuming these very words: they are all the product of a great deal of experimentation.