Welfare

The Oxford Companion to American Law (Joel Grossman, ed.).  Oxford:  Oxford University Press (2002).

by

Sheldon Danziger and Jeffrey Lehman


The American welfare state emerged during the twentieth century.  Over the course of a hundred years, the nation fashioned a complex web of social insurance and means-tested programs, financed and administered at all levels of government.  Today, most Americans share a commitment to protecting the most vulnerable from extreme economic hardship.  What has yet to emerge is a true consensus about how that commitment should be balanced against a variety of other social policy goals that Americans value, such as a belief in limited government and the primacy of individual responsibility.  Moreover, the American welfare state remains less extensive and less generous than those in most other industrialized nations.  


The oldest roots of American social welfare policy are in England, in the Poor Laws of 1601 and 1834.  American anti-poverty policy was, in its colonial origins, a patchwork of locally administered programs offering minimal assistance to the most obviously blameless among the destitute.  Administrators were preoccupied with the social dangers of “pauperism,” a dispirited dependency among the poor.  While the programs reflected a salutary symbolism of public concern with the poor, in practice they were inadequate to the challenges of industrial society.


The 1900s ushered in the Progressive Era, and a wave of new public efforts to prevent poverty and protect children.  Many states created programs of social insurance, including workers’ compensation and unemployment insurance; in part, they hoped to induce employers to take better care of their workers.  Moreover, the “Child Saving Movement” led most states to enact “mothers’ pensions,” to help morally upright widows survive without abandoning their children.  None of these programs, however, could address the suffering brought on by the Great Depression of the 1930s, which left one-fourth of the workforce unemployed in 1933.


Franklin Delano Roosevelt’s New Deal made social welfare policy an overarching concern of the federal government.  The Federal Emergency Relief Administration provided funds to state governments to help the poor.  The Civil Works Administration provided public employment of last resort during the winter of 1933–1934.  And the successor Works Progress Administration created many low-wage, means-tested jobs.

The Supreme Court invalidated some New Deal legislation, such as the National Industrial Recovery Act (A.L.A. Schechter Poultry Corp. v. United States, 1935) and the Railroad Retirement Act (Railroad Retirement Board v. Alton Railroad, 1935).  Undeterred, Congress pressed forward with an ambitious interventionist agenda.  Roosevelt proposed to “pack” the Court with sympathetic Justices, and under political pressure the Court underwent a change of philosophy, allowing the national government a greater role in economic affairs (NLRB v. Jones & Laughlin Steel Corp., 1937).


The signature enactment of the New Deal was the Social Security Act of 1935.  Responding in part to a political crusade led by Francis Townsend, the Act created a federally financed and administered retirement insurance program of “old age pensions” for people who had worked in certain sectors of the economy and had, along with their employers, paid payroll taxes on their wages.  The Act also created a federally financed but state-administered unemployment insurance program.  The Act created means-tested programs to assist the elderly poor and the blind poor, in each case run by the states but partially financed by the federal government under a structure known as “cooperative federalism.”  And the Act created Aid to Dependent Children (“ADC,” later to become Aid to Families With Dependent Children, or “AFDC”), a program of cooperative federalism designed to support certain needy children.  


ADC authorized states to provide support for the children of divorced, separated, and never-married mothers, as well as the children of widows.  But, at least initially, states were not required to take full advantage of that authority.  They could choose to offer benefits only to those families where the mother maintained a “suitable home.”  Most states used that discretion to manage the sexual conduct and workforce participation of their clientele.


ADC is the paradigmatic “welfare” or “relief” program:  when Americans speak of “welfare,” they are usually alluding to ADC and its successor programs.  From the outset, the design and implementation of ADC highlighted the central conflicts of welfare policy.  Issues of race, gender, work, and parenting style were, then as now, matters of great social tension.  Those tensions created many disputes over whether a home was indeed “suitable” within the meaning of the Act.


From 1935 until 1960, changes in the structure of the welfare state took the form of relatively minor expansions of Social Security for widows and the disabled.  Then, during the Kennedy administration, poverty was “rediscovered” and a new set of policy goals came to the fore.  Policymakers began to speak of creating equal opportunity for all to compete in the marketplace, and of “rehabilitating” the poor by eliminating artificial barriers imposed by the circumstances of birth.


In his 1964 State of the Union address, President Johnson declared “unconditional war on poverty.”  He proposed a series of new legislative initiatives intended to usher in a “Great Society.”  They included creation of the Office of Economic Opportunity, which was to provide the poor themselves with “maximum feasible participation” in antipoverty programs, creation of Medicare to provide health insurance for the elderly, and creation of Medicaid to provide health care for the poor.  A federal legal services program (the forerunner of today’s Legal Services Corporation) was established to provide legal representation for the poor in non-criminal matters.


In the late 1960s, major cultural shifts transformed the country and put pressure on the federal government to further expand the welfare state.  Popular movements pressed the legal, political, and social cases for civil rights for African Americans, for women’s equality, and against the war in Vietnam.  A “welfare rights” movement advanced the claim that welfare was not an act of public charity, but instead an entitlement of the poor.


Legal Services lawyers, working collaboratively with welfare rights advocates, initiated a series of lawsuits that made the Supreme Court, beginning under the leadership of Chief Justice Earl Warren, a major actor in welfare policy.  The welfare-rights trilogy (King v. Smith, 1968; Townsend v. Swank, 1971; Carleson v. Remillard, 1972) interpreted the Social Security Act in ways that substantially reduced states’ discretion to condition welfare eligibility on conformity with caseworkers’ views about proper behavior.  Shapiro v. Thompson (1969) interpreted the Constitution to limit states’ authority to deny welfare benefits to new residents.  Goldberg v. Kelly (1970) interpreted the Constitution to prohibit states from terminating welfare benefits without due process.


The combination of Great Society legislation and welfare rights litigation sparked an expansion of the welfare rolls from about 4 million persons in the mid-1960s to about 6 million by 1969, leading to proposals for welfare reform.  Ever since, whether and how to curb welfare spending has been a prominent feature of political and legislative debate.


In 1969, President Nixon proposed the Family Assistance Plan (“FAP”) as a replacement for AFDC.  FAP included a national minimum welfare benefit coupled with a work requirement, but mothers of preschool-age children were to be exempt.  FAP and similar negative income tax (“NIT”) plans emphasized the extension of welfare to two-parent families, the establishment of a national minimum welfare benefit, the reduction of work disincentives arising from AFDC’s high marginal tax rate on earnings, and the decoupling of cash assistance and social services.  The NIT plans reflected the view that welfare recipients did not need assistance from social workers so much as they needed cash, and that increased benefits were all that was required to reduce their poverty.  FAP itself failed to gain legislative approval, but the Food Stamp program evolved into a kind of NIT that provided a national benefit in food coupons that varied by family size, regardless of state of residence, living arrangements, or marital status.
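The mechanics of an NIT can be sketched in a standard textbook formulation (the notation and figures here are purely illustrative, not the terms of FAP or any other bill).  With an income guarantee G and a benefit-reduction rate t, a family with earnings E receives a benefit

\[ B = \max(0,\; G - tE), \]

so total income is E + B, and benefits phase out entirely at the break-even point E = G/t.  For example, with a guarantee of $4,000 and t = 0.5, a family earning $3,000 would receive a $2,500 benefit, for a total income of $5,500, and would leave the program at $8,000 of earnings.  The closer t comes to 1 (critics argued that AFDC’s effective rate approached that level for many recipients), the more nearly each earned dollar is offset by a lost dollar of benefits; the NIT plans proposed rates well below 1 precisely to soften that work disincentive.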


As for the Supreme Court, its activism was short-lived, and changes in the Court’s membership during the early 1970s brought a different approach to welfare law.  Cases such as New York State Department of Social Services v. Dublino (1973) restricted the scope of the welfare-rights trilogy.  Mathews v. Eldridge (1976) limited the scope of Goldberg v. Kelly.  Wyman v. James (1971) and San Antonio Independent School District v. Rodriguez (1973) rejected efforts to use other provisions of the Constitution to claim additional economic or procedural rights for the poor.

To be sure, the American judiciary did not become wholly irrelevant to welfare policy after the end of the Warren Court.  In Saenz v. Roe (1999), for example, the Supreme Court struck down California’s effort to limit the level of benefits it provided new residents to whatever they would have been receiving in their state of origin.  And when Congress attempted to prevent legal services lawyers from using the courts to promote systemic welfare reform, the Supreme Court held that attempt unconstitutional, concluding that it constituted a form of “viewpoint discrimination” prohibited by the First Amendment (Legal Services Corporation v. Velazquez, 2001).  Still, it is fair to say that since the early 1970s, Congress has been the dominant actor in the evolving American welfare state.


In 1977, President Carter proposed the Program for Better Jobs and Income (“PBJI”), an NIT with one income guarantee for those not expected to work and a lower guarantee for those expected to work, with the latter group also eligible for minimum-wage public service employment (“PSE”) in a job of last resort.  As would have been true under FAP, a single mother with a child under seven years old would have been exempted from work.  Only those single mothers whose youngest child was over age fourteen would have been expected to work full-time.  By providing jobs of last resort and supplementing low earnings, PBJI was a precursor to the “make work pay” proposals articulated in the United States in the late 1980s and by the Blair government in the United Kingdom in the late 1990s.


Yet PBJI, like FAP before it, failed in Congress.  Unlike the academic policy community, Congress and the public never embraced the notion of a guaranteed income, not even when the income guarantee was linked to an expectation of work.  Moreover, the plan would have increased total federal welfare spending substantially, by expanding the welfare rolls and providing expensive PSE jobs.

Despite the rejection of FAP and PBJI, the income maintenance system expanded substantially between the late 1960s and the late 1970s, as new programs were introduced, benefit levels were increased, and eligibility requirements were liberalized.  The number of AFDC recipients increased from about 6 million to 11 million, and the Food Stamp program came to serve some 19 million recipients during this period.  As higher cash and in-kind benefits became available to a larger percentage of poor people, concern grew about the various costs of welfare programs.  The public and policymakers viewed increased welfare recipiency as evidence that the programs were subsidizing dependency, encouraging idleness, and enabling nonmarital childbearing.


With the arrival of the Reagan Administration, the 1980s began as an era of welfare retrenchment.  Early on, a decade-old federal PSE program (the Comprehensive Employment and Training Act) was repealed, on the theory that it was intruding unnecessarily into the labor market.  Ironically, the repeal meant that fewer welfare recipients were working at all, which only heightened public dissatisfaction with welfare.  During the remainder of the 1980s, concern mounted over whether the welfare system was doing enough to encourage mothers to obtain paid employment.


Real spending on cash welfare for the nonworking poor was cut back, but at the same time spending on programs to help the working poor increased.  A program that had been enacted in 1975 to raise the effective wage of low-income workers, the Earned Income Tax Credit (“EITC”), continued to draw bipartisan support.  New welfare-to-work experiments were initiated by the states with federal backing, such as the Community Work Experience Program.  And when evaluations of those experiments proved promising, a broad political consensus supported the Family Support Act (“FSA”) of 1988.  
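The logic of the EITC as a wage subsidy can be shown with a stylized example (the rate used here is hypothetical, not the statutory schedule).  In the credit’s phase-in range, each dollar of earnings generates an additional credit of s dollars, so a worker’s effective wage rises from w to

\[ w_{\text{effective}} = w(1 + s). \]

With a phase-in rate of 30 percent, for instance, a $6.00 hourly wage is effectively worth $7.80.  Once the credit reaches its statutory maximum it plateaus, and it is then phased out gradually as earnings rise further, so the subsidy is concentrated on low-earning families.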


The FSA broadened the safety net in exchange for tougher AFDC work requirements.  Drawing on the experience with the prior demonstration projects, it required state governments to establish a new training and education program:  Job Opportunities and Basic Skills (“JOBS”).  States were expected, through JOBS, to offer a range of education, skills training, job placement, and support services, and to extend them to a greater proportion of the caseload.  


Moreover, FSA required more mothers to participate in JOBS, even earlier in their children’s growth and development.  Once her youngest child reached age three, a mother had to participate for up to 20 hours per week; once that child reached age six, a mother could be required to participate for up to 40 hours per week.  Participating meant agreeing to a reasonable “employability plan” the state devised, as long as the state covered the costs of child care, transportation, and other work-related expenses.  Refusal to participate could lead to serious sanctions for the recipient family.


Significantly, the political consensus at the end of the 1980s stressed the concept of mutual responsibility.  The government had a responsibility to provide education, training, and work opportunities.  Welfare recipients had the responsibility to take advantage of those opportunities and to lead a generally responsible personal life.  If the state did not appropriate sufficient funds to provide a JOBS slot (and many states did not), the recipient was not sanctioned for the state’s failure.


Shortly after the FSA was enacted, the economic expansion of the 1980s came to an abrupt end, and the welfare rolls jumped from 11 million to 14 million recipients.  Critics of the welfare state became more vocal, renewing arguments that welfare programs might be creating incentives for individuals to engage in socially irresponsible behavior.  They argued that the welfare system was discouraging recipients from seeking paid employment.  In addition, some argued that the rise in nonmarital childbearing since the mid-1980s was attributable to the growing generosity of the welfare state.  In the context of these criticisms, presidential candidate Bill Clinton made welfare reform one of his central commitments, promising if elected to “end welfare as we know it.”


Four years later, legislation was passed, and its provisions made clear how much had changed since 1988.  The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (“PRWORA”) ended the nationwide entitlement to cash assistance that had begun with the Social Security Act.  It replaced AFDC with Temporary Assistance for Needy Families (“TANF”), a decentralized program of block grants to the states.  Each state now determines which families are eligible for benefits, subject only to a requirement that they receive “fair and equitable treatment.”


PRWORA left very few boundaries around the exercise of state and federal discretion.  It freed the federal government from any obligation to increase expenditures in response to future population growth, economic downturns, or inflation.  The federal government promised only to make an annual block grant to each state, equal to its 1994 contribution to that state’s welfare expenditures.  For their part, the states were required only to maintain total expenditures for needy families equal to 75 percent of their 1994 level of expenditures on AFDC, JOBS, child care, and Emergency Assistance.
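The financing arithmetic can be made concrete with hypothetical figures.  A state that had received $500 million in federal welfare funds in 1994, and had spent $400 million of its own on AFDC, JOBS, child care, and Emergency Assistance, would thereafter receive a flat $500 million block grant each year, regardless of caseload, recession, or inflation, and would be obliged to keep its own spending on needy families at or above $300 million, that is, 75 percent of its 1994 level.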


Of course, states were also free to create an even more supportive safety net than existed before.  Subject to only a few specific limitations, they were free to design whatever kind of program they chose.  In the strong macroeconomic environment of the late 1990s, however, almost all states chose to concentrate on reducing the size of the welfare caseload.


The most important boundary that PRWORA established around state discretion was its time limit.  As a general rule, the statute prohibited states from using federal block grant funds to provide more than a cumulative lifetime total of 60 months of cash assistance to any welfare recipient, no matter how hard she may be trying to satisfy public behavioral expectations.  Even that general rule admits an exception:  a state may exempt up to 20 percent of its total caseload from the lifetime limit and continue to use federal funds for those families.
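In concrete terms (the caseload figure is hypothetical):  a state with 100,000 families on its rolls could exempt as many as 20,000 of them from the 60-month limit and continue to draw federal funds for their benefits; assistance to any family beyond that 20 percent share would have to be financed by the state alone.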


With respect to procreative behavior, early legislative proposals would have made children born out-of-wedlock completely ineligible for cash assistance.  But as finally enacted, PRWORA imposed few constraints on state discretion.  It offered modest rewards for states that reduce nonmarital childbearing without increasing abortion rates.  And it encouraged states to make rule changes that promote marriage.


With respect to work, PRWORA imposed more constraints.  Continuing the trend of all welfare reform legislation since the 1960s, PRWORA required more mothers who receive federal money to work, still earlier in their children’s growth and development.  A single parent with no children under age one is expected to work at least 30 hours per week.  And states may push the age even lower.  Indeed, some states now exempt a mother for only 13 weeks following childbirth.


PRWORA also backed away from any strong requirements that states provide educational and training services at the early stages of job placement for welfare recipients.  As the legislation was being crafted, much attention was given to the experience of Riverside, California.  In its welfare-to-work experiment, Riverside had implemented a “work first” program, which required participants to pursue private-sector employment before they could receive any training services, and the program had proved relatively successful in getting recipients into jobs.  Many people found attractive the “work first” philosophy that any job is a good job and that the best way to succeed in the labor market is to develop work habits and skills on the job.


In the five years since PRWORA was enacted, the American welfare state has changed significantly.  First, PRWORA itself “ended welfare as we knew it” more decisively than most policy analysts expected when the legislation was signed.  Welfare caseloads dropped so dramatically that by 2001 the number of recipients had fallen to 5.8 million, the smallest fraction of the population since 1965.


Second, the economic and policy environment around PRWORA has also changed, so that welfare recipients face different incentives than did their predecessors.  Most significantly, the financial rewards for moving from welfare to work have increased substantially.  The EITC has been expanded and the minimum wage has been raised.  In 1997, Congress enacted the Children’s Health Insurance Program, and child care subsidies have been increased.  As a result, the dramatic caseload decline has not caused the surge in poverty or homelessness that many critics of the 1996 Act predicted.  Even though many who have left welfare are not working full-time, full-year, and many are working at low-wage jobs, a significant number are earning at least as much as they had received in cash welfare, and some now have higher net income because of the expanded income supplements.


Third, despite the large caseload reduction, the national poverty rate has fallen rather little.  Many who have left welfare for work remain poor and continue to depend on other forms of government assistance.  In recent years a great deal of evidence has accumulated about welfare recipients whose prospects for stable unsubsidized private-sector employment are limited by personal issues such as poor physical or mental health, or limited skills.  For those welfare recipients, there does not yet appear to be a successful and replicable programmatic alternative to cash support.


Fourth, we do not yet know how welfare reform will play out during a recession.  Because PRWORA placed a five-year lifetime limit on the receipt of cash assistance, women still receiving welfare are at risk of “hitting their time limits” during a period of slow economic growth or recession.  If more than 20 percent of any state’s caseload comprises recipients whose personal attributes prevent them from securing stable unsubsidized private-sector employment, the states themselves will have to consider providing extended cash benefits or PSE jobs without any contribution from the federal government whatsoever.


PRWORA comes up for reauthorization by Congress in 2002.  While it is unlikely that major changes will be legislated, reauthorization will clearly be an occasion for renewed debate about the ultimate goals of welfare policy.  Once again, it is fair to expect that the subtle balance among those goals will be readjusted in search of a new consensus.


Further Reading:  Sheldon Danziger and Robert Haveman (eds.), Understanding Poverty (2002); David Ellwood, Poor Support (1988); Christopher Jencks, Rethinking Social Policy (1992); Charles Murray, Losing Ground (1984); James Patterson, America’s Struggle Against Poverty in the Twentieth Century (2000); Charles Reich, “The New Property,” 73 Yale Law Journal 733 (1964); William Julius Wilson, The Truly Disadvantaged (1987).