by JOHN CASSIDY
How poor is poor?
Issue of 2006-04-03
In the summer of 1963, Mollie Orshansky, a forty-eight-year-old statistician at the Social Security Administration, published a paper in the Social Security Bulletin on poverty among American children.
Orshansky’s timing was propitious. In December of 1962, President John F. Kennedy had asked Walter Heller, the chairman of the Council of Economic Advisers, to gather statistics on poverty. In early 1963, Heller gave the President a copy of a review by Dwight Macdonald, in The New Yorker, of Michael Harrington’s “The Other America: Poverty in the United States,” in which Harrington claimed that as many as fifty million Americans were living in penury.
The federal government had never
attempted to count the poor, and Orshansky’s paper proposed an ingenious and
straightforward way of doing so. Orshansky had experienced poverty firsthand.
Born in the Bronx in 1915 to Ukrainian immigrant parents, one of six daughters, she grew up in a family that often went without.
Orshansky never married or had children, but she was passionate about children’s welfare. From 1945 to 1958, she worked in the Department of Agriculture’s Bureau of Human Nutrition and Home Economics, where she helped devise a series of diets designed to provide poor American families with adequate nutrition at minimal cost. In painstaking detail, the food plans laid out the amount of meat, bread, potatoes, and other staples that families needed in order to eat healthily. These were “by no means subsistence diets,” Orshansky later wrote. “But they do assume that the housewife will be a careful shopper, a skillful cook, and a good manager who will prepare all the family’s meals at home.”
In 1958, Orshansky joined the
research department of the Social Security Administration, and decided to try
to estimate the incidence of child poverty. “Poor people are everywhere; yet they are invisible,” she later said. Drawing on her old food plans, she priced the cheapest of them, the economy plan, and the slightly more generous low-cost plan, for families of various sizes. A government survey had found that families of three or more typically spent about a third of their after-tax income on food, so she multiplied the cost of each plan by three, producing a lower and an upper set of income thresholds below which a family could be considered poor.
Orshansky compared these figures with the Census Bureau’s records on pre-tax family incomes and concluded that twenty-six per cent of families with children earned less than the upper poverty threshold and eighteen per cent earned less than the lower poverty threshold. In total, she estimated that between fifteen million and twenty-two million children were living in poverty, a disproportionate number of them in single-parent households and minority neighborhoods. “It would be one thing if poverty hit at random, and no one group were singled out,” she wrote. “It is another thing to realize that some seem destined to poverty almost from birth—by their color or by the economic status or occupation of their parents.”
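Orshansky’s arithmetic is simple enough to sketch in a few lines of code. The food-plan costs below are invented for the example, not her historical figures; the multiplier of three reflects the survey finding that families spent roughly a third of their after-tax income on food:

```python
# Illustrative sketch of Orshansky's method. The plan costs are
# hypothetical; the multiplier of three is hers.

FOOD_MULTIPLIER = 3  # food was roughly a third of a family budget

def poverty_threshold(annual_food_cost: float) -> float:
    """Annual income below which a family counts as poor."""
    return annual_food_cost * FOOD_MULTIPLIER

economy_plan = 1033.0   # hypothetical annual cost, family of four
low_cost_plan = 1311.0  # hypothetical annual cost, family of four

print(poverty_threshold(economy_plan))   # lower threshold: 3099.0
print(poverty_threshold(low_cost_plan))  # upper threshold: 3933.0
```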
Heller and his colleagues on the Council of Economic Advisers cited Orshansky’s paper in an “Economic Report to the President” that appeared in January, 1964, shortly after Kennedy’s successor, Lyndon B. Johnson, declared a “war on poverty” in his State of the Union address. In August of that year, Congress created the Office of Economic Opportunity, which used Orshansky’s method to determine eligibility for new anti-poverty programs, such as Head Start. Other federal agencies followed suit, and in 1969 the White House adopted a slightly modified version of Orshansky’s lower threshold—the one based on the economy food plan—as the official poverty line.
In the nineteen-sixties, many
economists believed that economic growth and government intervention would
eliminate poverty. Between 1964 and 1973, as Johnson’s Great Society programs
went into effect, the poverty rate fell from nineteen per cent of the population
to 11.1 per cent. But, while the nation’s inflation-adjusted gross domestic
product has virtually tripled since 1973, the poverty rate has hardly budged.
In 2004, the most recent year for which figures are available, it stood at
12.7 per cent, a slight increase over the previous year, and in some regions
the figure is much higher. The horror of Hurricane Katrina was not just the physical destruction it wrought but the economic hardship it exposed. In the Ninth Ward of New Orleans, one of the areas hardest hit, poverty had been endemic for decades.
The persistence of endemic poverty raises questions about how poverty is measured. In the past ten years or so, significant changes have been made in the way that inflation, gross domestic product, and other economic statistics are derived, but the poverty rate is still calculated using the technique that Orshansky invented. (Every twelve months, the Census Bureau raises the income cutoffs slightly to take inflation into account.)
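The annual update is a one-line calculation: scale each cutoff by the change in the consumer-price index. A minimal sketch, with made-up index values rather than the official CPI series:

```python
# Sketch of the Census Bureau's yearly inflation adjustment. The index
# values are invented; the real update uses the official CPI figures.

def update_threshold(threshold: float, cpi_old: float, cpi_new: float) -> float:
    """Raise an income cutoff in line with measured inflation."""
    return threshold * (cpi_new / cpi_old)

# A $19,000 cutoff after a year of three-per-cent inflation:
print(round(update_threshold(19000.0, 100.0, 103.0), 2))  # 19570.0
```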
This approach has some obvious shortcomings. To begin with, the poverty thresholds are based on pre-tax income, which means that they don’t take into account tax payments and income from anti-poverty programs, such as food stamps, housing subsidies, the Earned Income Tax Credit, and Medicaid, which cost taxpayers hundreds of billions of dollars a year. In addition, families’ financial burdens have changed considerably since Orshansky conducted her research. In the late fifties, most mothers didn’t have jobs outside the home, and they cooked their families’ meals. Now that most mothers work full time and pay people to help them take care of their kids, child care and commuting consume more of a typical family budget.
Another problem is that the
poverty thresholds are set at the same level all across the country. Last
year, the pre-tax-income cutoff for a couple with two children was $19,806.
This might be enough to support a family of four in a rural area where housing is cheap, but hardly in a big city, where housing costs are far higher.
Such considerations suggest that the official measures understate the extent of poverty, but the opposite argument can also be made. The poverty figures fail to distinguish between temporary spells of hardship, like those caused by a job loss or a divorce, and long-term deprivation. Surveys show that as many as forty per cent of people who qualify as poor in any given year no longer do so the following year. Middle-class families that suffer a temporary loss of income can spend their savings, or take out a loan, to maintain their living standard, and they don’t belong in the same category as the chronically impoverished. One way to remedy this problem is to consider how much households spend, rather than how much they earn: if, in the course of a year, a household spends less than some designated amount, it is classified as poor. Daniel T. Slesnick, an economist at the University of Texas, has tested this approach using figures from the Department of Labor’s Consumer Expenditure Survey, which tracks the buying habits of thousands of American families. Slesnick calculated that the “consumption poverty rate” for 1995—that is, the percentage of families whose spending fell below the poverty income threshold—was 9.5 per cent, which is 4.3 percentage points lower than the official poverty rate. Subsequent studies have confirmed Slesnick’s findings.
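Slesnick’s consumption measure amounts to classifying households by spending rather than income. A hedged sketch, using invented expenditure figures rather than actual Consumer Expenditure Survey data:

```python
# Sketch of a consumption-based poverty count: a household is poor if
# its annual spending falls below the poverty threshold. The spending
# figures and the $19,806 cutoff are used here purely for illustration.

def consumption_poverty_rate(spending: list[float], threshold: float) -> float:
    """Share of households whose annual spending falls below the threshold."""
    poor = sum(1 for s in spending if s < threshold)
    return poor / len(spending)

households = [12000.0, 25000.0, 18000.0, 40000.0, 15000.0]  # hypothetical
print(consumption_poverty_rate(households, 19806.0))  # 0.6
```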
In 1995, a panel of experts assembled by the National Academy of Sciences concluded that the Census Bureau measure “no longer provides an accurate picture of the differences in the extent of economic poverty among population groups or geographic areas of the country, nor an accurate picture of trends over time.” The panel recommended that the poverty line be revised to reflect taxes, benefits, child care, medical costs, and regional differences in prices. Statisticians at the Census Bureau have experimented with measures that incorporate some of these variables, but none of the changes have been officially adopted.
The obstacles are mainly political. “Poverty rates calculated using the experimental measures are all slightly higher than the official measure,” Kathleen Short, John Iceland, and Joseph Dalaker, statisticians at the Census Bureau, reported in a 2002 paper reviewing the academy’s recommendations. In addition to increasing the number of people officially classified as impoverished, revising the Census Bureau measure in the ways that the poverty experts suggested would mean that more elderly people and working families would be counted as poor.
Conservatives would prefer a measure that reduces the number of poor people. “The poverty rate misleads the public and our representatives, and it thereby degrades the quality of our social policies,” Nicholas Eberstadt, of the American Enterprise Institute, wrote in a 2002 article. “It should be discarded for the broken tool that it is.” In February, the conservatives appeared to make some headway when the Census Bureau released a report on some new ways of measuring poverty that could cut the official rate by up to a third.
Rather than trying to come up with a subsistence-based poverty measure about which everybody can agree, we should accept that there is no definitive way to decide who is impoverished and who isn’t. Every three years, researchers from the federal government conduct surveys about the number of appliances in the homes of American families. In 2001, ninety-one per cent of poor families owned color televisions; seventy-four per cent owned microwave ovens; fifty-five per cent owned VCRs; and forty-seven per cent owned dishwashers. Are these families poverty-stricken?
Not according to W. Michael Cox, an economist at the Federal Reserve Bank of Dallas, and Richard Alm, a reporter at the Dallas Morning News, who have argued, in their book “Myths of Rich and Poor,” that such possessions show ordinary Americans, including poor ones, to be far better off than the official statistics suggest.
Consider a hypothetical single mother with two teen-age sons living in a poor urban neighborhood. Her household may own a color television and a microwave, but in a society where most families own far more she and her sons are still, in a meaningful sense, deprived.
The concept of relative deprivation was first described by Adam Smith in “The Wealth of Nations,” in a passage on the “necessaries” of daily life:
By necessaries I understand not only the commodities which are indispensably necessary for the support of life, but whatever the custom of the country renders it indecent for creditable people, even of the lowest order, to be without. A linen shirt, for example, is, strictly
speaking, not a necessary of life. The Greeks and Romans lived, I suppose,
very comfortably, though they had no linen. But in the present times, through
the greater part of Europe, a creditable day-laborer would be ashamed to
appear in public without a linen shirt, the want of which would be supposed
to denote that disgraceful degree of poverty which, it is presumed, nobody
can well fall into, without extreme bad conduct. Custom, in the same manner,
has rendered leather shoes a necessary of life in England.
For decades, economists
overlooked Smith’s analysis, and it was left to sociologists and
anthropologists to study the impact of relative deprivation. During the
Second World War, Samuel A. Stouffer, a sociologist at the University of Chicago, found that American soldiers judged their circumstances not in absolute terms but by comparison with those around them: airmen, whose promotions came quickly, were less satisfied with their prospects for advancement than military policemen, whose promotions came slowly.
More recently, three economists at the University of Warwick published the results of a survey of sixteen thousand workers in a range of industries, in which they found that the workers’ reported levels of job satisfaction had less to do with their salaries than with how their salaries compared with those of co-workers. Human beings are also competitive with their neighbors. Erzo Luttmer, an economist at the John F. Kennedy School of Government, recently found that people with rich neighbors tend to be less happy than people whose neighbors earn about as much money as they do. It appears that, while money matters to people, their relative ranking matters more.
Relative deprivation is also bad
for your health. In a famous study conducted between 1967 and 1977, a team of
epidemiologists led by Sir Michael Marmot, of University College London,
monitored the health of more than seventeen thousand members of the British Civil Service. The study found a striking gradient: the lower a civil servant’s rank, the higher his risk of disease and early death, with those in the lowest grades roughly three times as likely to die over a given period as the senior administrators above them.
Initially, some critics suggested that these results could be attributed to differences in behavior: members of the lower ranks were more likely to smoke and drink and less likely to exercise and eat healthily than their better-paid superiors. To test this theory, Marmot and his team have been conducting a follow-up study of civil servants, which began in 1985 and continues to this day. This survey has confirmed the results of the first study, and has also suggested that less than a third of the difference in patterns of disease and mortality can be ascribed to behavior associated with coronary risk, such as smoking or lack of exercise. “The higher the social position, the longer people can expect to live, and the less disease they can expect to suffer,” Marmot explained in a recent paper. “This is the social gradient in health.”
The British findings have been
replicated in other parts of the world, including the United States.
The epidemiological studies don’t explain how relative deprivation damages people’s health; they simply suggest that there is a connection. One possibility is that subordination leads to stress, which damages the body’s immune system. In the animal kingdom, where there are bitter fights over relative status, there is evidence supporting this hypothesis. The neurobiologist Robert Sapolsky has described how dominant baboons in troops on the African plains harass and physically abuse their subordinates. When Sapolsky analyzed blood samples from low-ranking baboons, he found high levels of a hormone associated with stress. Other scientists have shown that dominant rhesus monkeys have lower rates of atherosclerosis (hardening of the arteries) than monkeys further down the social hierarchy, and when dominant female monkeys are relegated to a subordinate status their rate of heart disease goes up.
“Given the animal results,” Angus Deaton, a Princeton economist who is an expert on poverty, wrote in a recent paper about relative deprivation and mortality, “the degree to which low rank is harmful to an individual is likely to depend on the number of people of higher rank, because each such person is in a position to deliver the threats, insults, enforced obeisance, or ultimate violence that generate stress. Individuals who are insulted by those immediately above them insult those immediately below them, generating a cascade of threats and violence through which low-ranked individuals feel the burden, not just of their immediate superiors, but of the whole hierarchy above them.”
Poor health may be the most
dramatic consequence of relative deprivation, but there are more subtle
effects as well. Although many poor families own appliances once associated
with rich households, such as color televisions and dishwashers, they live in
a society in which many families also possess DVD players, cell phones,
desktop computers, broadband Internet connections, powerful game consoles,
S.U.V.s, health-club memberships, and vacation homes. Without access to these
goods, children from poor families may lack skills—such as how to surf the
Web for help-wanted ads—that could enhance their prospects in the job market.
In other words, relative deprivation may limit a person’s capacity for social
achievement. As the economist Amartya Sen has put it, “Being relatively poor in a rich country can be a great capability handicap, even when one’s absolute income is high in terms of world standards.” Research by Tom Hertz, an economist at American University, suggests that children born into poor families have only a slim chance of rising to the top of the income distribution.
Since relative deprivation confers many of the disadvantages of absolute deprivation, it should be reflected in the poverty statistics. A simple way to do this would be to classify a household as impoverished if its pre-tax income was, say, less than half the median income—the income of the household at the center of the income-distribution curve. In 2004, the median pre-tax household income was $44,684; a poverty line based on relative deprivation would have been $22,342. (As under the current system, adjustments could be made for different family sizes.)
Adopting a relative-poverty threshold would put to rest the debate over how to define a subsistence threshold. As long as the new measure captured those at the bottom of the social hierarchy, it wouldn’t matter much whether the income cutoff was set at forty per cent or fifty per cent of median income. If poverty is a relative phenomenon, what needs monitoring is how poor families make out compared with everybody else, not their absolute living standards.
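A relative-poverty line of this kind is easy to compute. The sketch below uses hypothetical household incomes; the `fraction` parameter corresponds to the forty- or fifty-per-cent cutoffs discussed above:

```python
# Sketch of a relative-poverty measure: a household is poor if its
# income falls below a fixed fraction of the median. Incomes invented.
from statistics import median

def relative_poverty_rate(incomes: list[float], fraction: float = 0.5) -> float:
    """Share of households earning less than `fraction` of the median."""
    cutoff = fraction * median(incomes)
    return sum(1 for x in incomes if x < cutoff) / len(incomes)

incomes = [9000.0, 21000.0, 30000.0, 44684.0, 52000.0, 61000.0, 90000.0]
# The median is 44,684, so the half-median cutoff is 22,342, as in the
# text; two of the seven hypothetical households fall below it.
print(relative_poverty_rate(incomes, 0.5))
print(relative_poverty_rate(incomes, 0.4))  # cutoff 17,873.60: one household
```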
Academics have proposed a relative-poverty line before, notably the British sociologist Peter Townsend, in 1962, and the American economist Victor Fuchs, who is now an emeritus professor at Stanford, in 1965. But the idea has never caught on among policymakers. “I still think that it is the right way to think about poverty, especially from a policy point of view,” Fuchs told me. Unfortunately, few politicians and poverty experts agree. Liberals fear that shifting the focus of policy away from hunger and physical need would make it even harder to win support for government anti-poverty programs; conservatives fear that adopting a relative-poverty rate would be tantamount to launching another costly war on poverty that the government couldn’t hope to win.
Neither of these fears is justified. Many Americans are skeptical about government anti-poverty programs, because they believe that the impoverished bear some responsibility for their plight by dropping out of high school, taking drugs, or committing crimes. Raising public awareness about relative deprivation could help to change attitudes toward the poor, by showing how those at the bottom of the social hierarchy continue to face obstacles even as they, along with the rest of society, become more prosperous. The Times recently reported that more than half of black men in inner cities fail to finish high school, and that, nationwide, almost three-quarters of black male high-school dropouts in their twenties are unemployed. “It doesn’t do a poor person any good to say ‘You are better off than you would have been thirty years ago,’ ” Fuchs said. “The pathologies we associate with poverty—crime, drug use, family disintegration—we haven’t eliminated them at all.”
The conservative case against a relative-poverty line asserts that since some people will always earn less than others the relative-poverty rate will never go down. Fortunately, this isn’t necessarily true. If incomes were distributed more equally, fewer families would earn less than half the median income. Therefore, the way to reduce relative poverty is to reduce income inequality—perhaps by increasing the minimum wage and raising taxes on the rich. Between 1979 and 2000, the inflation-adjusted earnings of the poorest fifth of Americans increased just nine per cent; the earnings of the middle fifth rose fifteen per cent; and the earnings of the top fifth climbed sixty-eight per cent.
In the Ninth Ward and in neighborhoods
like it, the gap between aspiration and reality has never been greater. As
Americans were shocked to learn, many residents lacked the means to pay for
transportation out of the city during Hurricane Katrina. But the poor of New Orleans are hardly unique; in cities across the country, families at the bottom of the income distribution are shut out of much that the rest of society takes for granted.
Introducing a relative-poverty
line would help shift attention to this larger problem of social exclusion.
Although few attempts have been made to address the issue, the results have
been promising. A recent long-term study of Head Start, which began in 1964,
as one of the original “war on poverty” initiatives, found that poor children
who participated in the program were more likely to finish high school and
less likely to be arrested for committing crimes than those who did not. And
in another initiative, undertaken between 1976 and 1998, the city of Chicago moved thousands of poor, mostly African-American families out of inner-city housing projects and into subsidized housing, much of it in middle-class suburbs. Children in the families that moved finished high school and went on to college at higher rates than their peers who stayed behind.
Mollie Orshansky, who is now ninety-one and living on Manhattan’s East Side, never warmed to the idea of a relative-poverty line—she was too concerned about people actually starving—but she wasn’t wedded to her method, either. “If someone has a better approach, fine,” she said in 1999. “I was working with what I had and with what I knew.”