Government Domestic Economic Policies. The American Economy: A Historical Encyclopedia

The federal government of the United States is a major promoter and regulator of the American economy. Vast bureaucratic agencies and commissions staffed with thousands of
experts monitor the economy and adjust various fiscal and
monetary levers in an ongoing and complicated effort to
maintain a healthy economy. These institutions include—but
certainly are not limited to—the White House, Congress, the
Federal Reserve, the Council of Economic Advisers, the
U.S. Treasury, and the Federal Trade Commission (FTC).
The United States has been the largest free market economy in the world since the late nineteenth century. But it is
hardly a “pure” capitalist system. Rather, government and the
private sector together comprise a “mixed” economy, one in
which government economic policy makers interact continually with entrepreneurs, corporations, workers, and consumers. This has been especially true since the
1930s, when the role of government in the economy grew
dramatically under President Franklin D. Roosevelt’s New
Deal. And since World War II, the scale and scope of the government’s economic expertise, programs, and policies have
grown exponentially.
Throughout the history of the American Republic (and
even during the colonial period), political leaders as well as
ordinary citizens have made sense of their economic lives by
relying on metaphors, models, and other frameworks for understanding individual and collective economic behavior. Political leaders and economic policy makers have shared a
common set of goals since the U.S. Constitution was drafted in 1787: robust growth of the economy, welfare for the citizenry, and low rates of unemployment and inflation. But they
have often disagreed about the best policies for achieving
these goals, and prevailing economic ideas have changed dramatically over time.
The Era of Promotionalism: From Constitution to
Civil War, 1787 to 1865
In 1776, the year the Declaration of Independence was
signed, the Scottish political economist Adam Smith published a work that would become one of the most influential
economic treatises of modern times: The Wealth of Nations.
Smith advocated that governments limit themselves to maintaining security, leaving economic affairs in private hands.
Markets, he argued, do a much better job of setting prices and
maintaining quality than governments.
But Smith’s vision was not reality in the British colonies of
North America in 1776. Instead, the economies of the
colonies were controlled by a variety of economic policies integral to the British imperial system. Operating a global
system of commerce designed to benefit the homeland, the
British monarchy defined what products the American
colonies could produce, export, and import. It also prohibited colonists from coining money. Although historians disagree about the precise economic toll these mercantilist controls exacted on the 13 colonies, there is little doubt that the
American Revolution was, in large measure, a fight for greater
economic independence.
The patriots who fought for independence faced, among
other things, the practical problem of raising and funding an
army without a central government. By 1775 the Continental
Congress had assumed many of the economic functions of a de facto central government, such as forming a postal system, issuing paper currency (known as Continentals), and requisitioning funds from the states (but not from individuals). However,
some states refused to tax their citizens and issued their own
paper currencies, which caused a massive devaluation of
Continentals. The situation was stabilized in 1781 when Congress retired the currency.
The U.S. Constitution defined a remarkable system of representative government but also held great economic significance. It empowered the central government to levy taxes and
collect duties on imports, to regulate domestic trade, to grant
patents, and to coin money. To establish the new nation’s
credit on a firm footing, it provided for the redemption of all
war debt. The Constitution also authorized a navy and army
to defend the nation and to protect and expand commerce.
In the 1790s the Federalists (led by Alexander Hamilton,
the first secretary of the Treasury) and the Anti-Federalists (or
Republicans, most notably Thomas Jefferson and James
Madison) struggled over the issue of central government
power. In the end, although states retained a good deal of power and independence in economic affairs and the Federalists did not achieve all of their aims, they made several important gains. These gains included the
establishment of a protective import tariff, national excise
taxes, a stronger army and navy, and a national bank. Congress
would also collect and publish statistics on the nation’s population (the ten-year census), build lighthouses and harbor facilities, and support scientific exploration.
During the antebellum period (1790–1860), federal economic policy was most influential in four areas. One pertained to tariffs and subsidies. Among those benefiting from
protective tariffs and subsidies were cod-fishing enterprises,
telegraph companies, stagecoach lines, and small-arms manufacturers. Merchant shipping was an especially large beneficiary: Congress imposed discriminatory duties, offered mail
contracts and generous subsidies ($14 million between 1845
and 1858), and excluded foreign competition in the coastal
trade.
A second important domain of federal economic policy
was banking. At Hamilton’s urging, Congress federally chartered the Bank of the United States (BUS) in 1791. Capitalized at $10 million, BUS issued much-needed paper currency,
provided loans to the Treasury and to responsible state banks,
and served as the federal government’s repository and fiscal
agent in foreign exchange. Although BUS helped stabilize the
nation’s banking and currency and facilitated commerce, Jefferson and other Anti-Federalists declared it a threat to sound
hard currency (gold and silver) and to agrarian interests, and
they prevented a renewal of the bank’s 20-year charter in
1811. The Second Bank of the United States (1817–1837) had
a similar history, though on a larger scale. Expanded to 29
branches by its aggressive president, Philadelphia banker
Nicholas Biddle, the Second BUS met strong opposition from
President Andrew Jackson as well as from competing state
and local banks. From the time the Second BUS’s charter renewal was denied until the Civil War, the United States had
no central bank. During that period, state banks, many of
them reckless “wildcats,” issued their own currencies, which
often fluctuated wildly in value. But many state governments
reined in such practices with various regulations—most notably, requirements that chartered banks hold a minimum
percentage of specie (money in coins) for every paper dollar
issued.
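To see how such a rule worked in principle, consider a brief sketch. The 25 percent ratio and the dollar figures below are hypothetical, chosen for illustration rather than drawn from any particular state’s statute:

```python
# Illustrative sketch of an antebellum specie-reserve rule. The 25 percent
# ratio and the dollar figures are hypothetical, not from any actual statute.

def max_note_issue(specie: float, min_specie_ratio: float) -> float:
    """Largest paper-note circulation a chartered bank could back with
    its specie (gold and silver coin) on hand."""
    return specie / min_specie_ratio

specie_on_hand = 250_000.0   # $250,000 in gold and silver coin
min_ratio = 0.25             # 25 cents of specie required per paper dollar

print(f"Maximum lawful note issue: ${max_note_issue(specie_on_hand, min_ratio):,.0f}")
# -> Maximum lawful note issue: $1,000,000
```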
Third, the federal government played a major economic
role through its land policies. Throughout the antebellum period, the government pursued an explicit strategy of territorial expansion that involved purchasing or, in some cases,
forcibly taking vast tracts of western lands. Large acquisitions from France in 1803 (the Louisiana Territory) and from Spain in 1819 (Florida), along with the annexation of Texas in 1845 and the settlement of the Oregon boundary with Great Britain in 1846, were supplemented by military takeovers of Mexican-controlled California and the Southwest and of vast
Indian lands. Federal government policies controlled the
transition of new lands to the status of territories and then
states.
To encourage the settlement and cultivation of new
lands—the central aim of federal land policy—Congress
passed several key laws. The Homestead Act of 1862 offered a quarter
section (160 acres) to any adult who lived on and cultivated
the land, at a price of just $1.25 per acre after six months or
for free after five years. Subsequent legislation—the Timber
Culture Act of 1873, the Desert Land Act of 1877, and the
Timber and Stone Act of 1878—offered landownership incentives to homesteaders who cleared or irrigated
marginal lands. But these policies largely failed to achieve
their intended goals. Of the 96 million acres distributed under the four acts, only one-sixth was given away as free homestead grants, and only one western farmer in ten was a true homesteader. Rather, cheating and speculation were rampant as
choice lands were gobbled up by large speculators and then
divided and subdivided for profit.
Fourth, the federal government played a major role in
building the nation’s transportation infrastructure. In 1806
Congress authorized construction of the National Road to
encourage western settlement and commerce. After an intense political battle, Cumberland, Maryland, was selected as
the eastern terminus, and the road reached Wheeling, Virginia (now West Virginia), on the Ohio River in 1818. Although Treasury Secretary Albert Gallatin spelled
out an ambitious plan for federal turnpike and canal building
in his Report on Roads and Canals (1808), it was scuttled by constitutional arguments against a strong central government, by rivalries among states and localities, and by
budgetary concerns.
Canals became major arteries for commerce in the antebellum period. The most successful was the Erie Canal, constructed between 1817 and 1825, when it connected Albany
and Buffalo—and thus the East Coast—to the Great Lakes.
The Erie Canal was built by the state of New York (with
strong support from its governor, DeWitt Clinton) but financed by domestic and foreign private investors. It sparked
many imitators in Pennsylvania, Ohio, and elsewhere. The
federal government provided surveyors and some land grants
to the states for these projects. But state governments played
a more overt role by directly financing many of the projects.
State public funds accounted for roughly three-quarters of
the $190 million spent to build about 4,000 miles of canals
(most of them linked to natural waterways) in the United
States between 1815 and 1860. By the late 1840s, however,
many of these projects had defaulted on their loans, and several states revised their constitutions to ban debt-financed
improvements. Nevertheless, these public-private projects
dramatically lowered transportation costs in many parts of
the Northeast and upper Midwest.
By this time, canals were being eclipsed by railroads, which
appeared in the 1830s. In Massachusetts, Pennsylvania, South
Carolina, and Georgia, state governments financed the first
rail companies. Although states turned away from direct investment in the railroads in the 1840s, they continued to
grant generous charters that often gave rail companies the
power to seize land through eminent domain and sometimes
exempted them from taxation and rate regulation. Meanwhile, municipalities and counties played a growing promotional role, offering to build free terminals, subscribe to
blocks of stock, and the like. For its part, the federal government made generous land grants to railroads, in part to encourage private investors to build them and in part to reap
the benefits, for railroad development boosted prices of
nearby government lands. Together, state and federal governments gave about 200 million acres of land (an area roughly
the size of Texas) to the railroads over the course of the nineteenth century, most of it during and after the Civil War.
The Civil War wrought massive disruptions, many of
them economic, that the federal government and the new
Confederacy in the South struggled to overcome. Federal
spending surged from $66.5 million in 1861 to $1.3 billion in
1865. Although the government introduced a new income tax
in 1862 (repealed after the war) as well as new excise taxes, it
financed most of its expenditures with loans and by issuing
$400 million of greenbacks, a new paper currency. These actions contributed to rapid inflation, which seriously
eroded the real buying power of Northerners. Consumer
prices roughly doubled in the North during the war, whereas
wages for skilled and unskilled workers actually fell. But the
situation was far worse in the South. The Confederacy imposed no income tax until 1863. Rather, it printed more than
$1 billion of new paper money, which became worthless with
the South’s surrender. By that time, the South owed more
than $2 billion to domestic and foreign creditors. Although
Southern money wages rose about 10-fold during the war,
key prices climbed more than 30-fold.
More important, however, the Civil War assured Republican control of Congress (as Southern Democrats withdrew
from the Union), which ushered in a set of economic policies
that favored many powerful economic interests in the North
and often encouraged economic development. These measures included the aforementioned Homestead Act; a new
wave of loans and land grants for railroads (including authorization of the first transcontinental railroad); the Morrill
Land-Grant College Act (which supported agricultural education and research); a contract-labor law that encouraged the importation of foreign workers; and a national banking system
with the power to charter and regulate banks.
In these and other ways, federal, state, and local governments encouraged the economic development of the nation
in the antebellum period. In general, government played a
promotional role—protecting infant industries against cheap
imports, opening new lands for settlement and cultivation,
and encouraging investment in transportation and communication networks. But the government’s efforts to foster a
stable and adequate system of money and banking were uneven at best, and its control over the nation’s natural resources too often led to speculation and reckless exploitation.
The federal government played virtually no direct role in the
slave-based cotton economy of the South. Roughly every 20
years throughout the nineteenth century, the U.S. economy
was plagued with a severe recession that left millions of urban
and farm workers destitute, yet the federal government did
little or nothing to correct these recessions. Very few people
thought that government could or should do much to control economic cycles, other than continue to protect the sanctity of private property and remove obstacles to entrepreneurial investment.
The Era of Industrialism: Regulating Trusts and
Competition, 1865 to 1914
In the generation after the Civil War, the United States
emerged as the world’s preeminent economic power. Its railroad networks possessed more track than existed in Europe and Russia combined, and its behemoth iron and steel mills and oil refineries far outproduced those of any foreign rival. Small and
medium-sized firms persisted and multiplied, but national
attention increasingly focused on the giant industrial corporations that were defining the era. Yet it was glaringly apparent that rapid industrialization, for all of its benefits, also
brought a host of economic and social problems. As a result,
whereas government had played a mainly promotional role
before the Civil War, it took on a second, regulatory role as well in the decades that followed.
The regulation of business corporations typically began
at the state level and later moved to the federal level. Railroads attracted the first intense regulatory scrutiny. Farmers
and shippers in the Midwest were frustrated by secret rebates to large shippers and by complicated railroad rate
schedules, especially those that forced them to pay higher
rates per mile to ship commodities over short distances
rather than long ones. Investors, large and small, were angered by the watering of railroad stocks (diluting the stocks’
value), bogus construction contracts, insider trading, and
the bond defaults that plagued many American railroads.
All complained of the railroads’ undue political influence.
In the late 1860s and early 1870s, Illinois, Iowa, Minnesota, and Wisconsin
passed the first state laws regulating railroads, which were
soon imitated in neighboring states. These so-called
Granger Laws were based on the principle that railroads
should be regulated because they were indispensable “natural monopolies” affected with a “public interest.” The
Granger movement encountered opposition from railroad
owners, who claimed the laws deprived them of property without due process, in violation of the Fourteenth Amendment. The Supreme Court first upheld the Granger Laws in Munn v. Illinois (1877) but reversed course in the Wabash case (1886), ruling that only Congress had the power to regulate interstate commerce.
The Wabash ruling left the door open for federal regulation, which many railroad executives also were advocating by
this time. They wanted to eliminate the worst abuses of less
responsible rivals, to deal with a single set of federal commissioners rather than scores of different state regulators,
and to take a hand in defining the new regulation. The result
was the Interstate Commerce Act of 1887, the first federal
regulation of business in U.S. history. Its major provisions
mandated “just and reasonable rates,” outlawed price discrimination, prohibited pooling arrangements, and established the five-member Interstate Commerce Commission
(ICC). Initially, the ICC had little impact. It held no explicit
powers of enforcement, and its language about “just and reasonable rates” was subject to broad interpretation. Moreover,
the courts more often than not ruled in favor of the railroads. But in the Progressive Era (see the discussion that follows), several additional laws—the Elkins Act (1903), the Hepburn Act (1906), and the Mann-Elkins Act (1910)—gave
the ICC investigative, enforcement, and rate-setting powers.
The second major federal regulation of corporations came
with the passage of the Sherman Anti-Trust Act of 1890. That
measure was designed to deal with the growing problem of
business concentration, especially in the manufacturing sector. State incorporation laws prohibited a corporation in one
state from owning a corporation in another. But giant firms saw this prohibition as a great obstacle to expansion and interstate
management of their assets. In 1882 a lawyer for Standard Oil
devised the first “trust,” a way of skirting the prohibition
against interstate corporate ownership. Soon, several other
industries consolidated as trusts. States responded with antitrust laws; 15 were passed between 1888 and 1890. In 1889
New Jersey, hoping to attract more business, passed a
holding-company law that gave corporations a new way of
legally consolidating their multistate operations.
The Sherman Anti-Trust Act of 1890 outlawed all “contracts, combinations, and conspiracies in restraint of trade.”
But like the Interstate Commerce Act, it was vaguely worded,
weakly enforceable, and usually interpreted by the courts in
favor of big business. In one crucial ruling, the Addyston Pipe case (decided by the Supreme Court in 1899), the Court affirmed that collusion was illegal but merger was legal. This ruling encouraged a massive wave of corporate mergers at the turn of the
century. But Congress put teeth in the antitrust law during
the Progressive Era. In 1911, in suits brought by the Justice Department, the Supreme Court broke up two of the world’s most powerful monopolies, Standard Oil and American Tobacco. In these cases, the Court articulated
a “rule of reason” that distinguished between “good” trusts,
which controlled a dominant market share but did not act
anticompetitively, and “bad” ones, which interfered with
competition. In 1914 antitrust law was significantly strengthened and expanded by passage of the Clayton Act and the Federal Trade Commission Act, the latter of which created the Federal Trade Commission; the commission was given the power to investigate anticompetitive practices and issue “cease and desist” orders. The Clayton Act also outlawed interlocking directorates, tying contracts, and
price discrimination. In this way, U.S. antitrust policy became more a matter of administrative governance than of court interpretation.
In the late nineteenth and early twentieth centuries, progressivism—a constellation of reformers and reform movements struggling with the effects of industrialization, urbanization, and immigration—fostered the passage of a wave of
economic legislation intended to reform business and improve labor conditions. The strengthening of railroad regulation and the antitrust legislation, discussed earlier, were
among the most important measures. Such reforms often
were pioneered at the state level before being emulated nationally. The so-called Progressive presidents, Theodore Roosevelt (1901–1906) and Woodrow Wilson (1913–1921), made
the greatest strides. Roosevelt saw that big corporations could
benefit society with their great efficiencies, but he also felt
that government should be given the power to rein in abusive
firms. He believed in using a combination of publicity, antitrust law, and regulations to keep corporations in line. In
1906 alone, his administration secured passage of the Hepburn Act, the
Pure Food and Drug Act, the Meat Inspection Act, and an
employer-liability law for the District of Columbia. Roosevelt
proposed measures that were even more ambitious, such as a
federal incorporation act and employer liability for all federal
workers, but probusiness forces defeated them.
The Wilson administration’s three major economic measures were the Clayton Act and the Federal Trade Commission
Act, discussed earlier, as well as the Federal Reserve Act of
1913. The last of these measures created the U.S. Federal Reserve system (commonly known as the Fed) to address a
number of weaknesses in the financial system that had long
plagued the economy. The Fed operated 12 district banks distributed throughout key economic regions of the country. Individual banks were encouraged to become members of the
system by subscribing to a portion of the stock of their regional Fed. Boards of directors chosen by member banks and by the central Federal Reserve Board ran the regional
Feds. The system therefore shared power between public and
private interests and represented and served diverse regional
economic interests.
To encourage responsible banking practices among its
members, the Fed enforced minimum reserve requirements
(the minimum cash on hand required of financial institutions under the law). To moderate business cycles, it raised or
lowered the rediscount rate—the rate at which it loaned
money to member banks. The Fed also acted as a clearinghouse for obligations among member banks and as the federal government’s fiscal agent. In the 1920s the Fed began to
ease or tighten credit by buying or selling large blocks of government securities—its so-called open market operations.
The Federal Reserve gave the nation a permanent and largely
effective central bank.
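The arithmetic behind these levers can be illustrated with the simple deposit multiplier. The sketch below is a stylized illustration, not a model of actual Fed practice; the 10 percent reserve ratio and the dollar amounts are assumed values:

```python
# Stylized illustration of two Fed levers: a minimum reserve requirement
# and an open market purchase. The reserve ratio and dollar amounts are
# assumed values, not historical figures.

def max_deposits(reserves: float, reserve_ratio: float) -> float:
    """Deposits the banking system can support if banks lend fully and
    loans are redeposited (the simple deposit multiplier)."""
    return reserves / reserve_ratio

reserve_ratio = 0.10       # hypothetical 10 percent reserve requirement
reserves = 1_000_000.0     # $1 million in member-bank reserves

print(f"Deposits supported:        ${max_deposits(reserves, reserve_ratio):,.0f}")

# Open market purchase: the Fed buys $100,000 of government securities,
# crediting the sellers' banks with new reserves and easing credit.
purchase = 100_000.0
print(f"After a $100,000 purchase: ${max_deposits(reserves + purchase, reserve_ratio):,.0f}")
# A sale of securities would drain reserves and tighten credit symmetrically.
```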
The Progressives also instituted many new forms of national labor regulation. Up to this time, employee-employer
relations were generally governed by the “fellow servant” rule,
which left employers free of any responsibility for worker injury or death on the job. Moreover, Progressive legislation severely limited the widespread practice of using the Sherman
Act against labor unions (as organizations “in restraint of
trade”) rather than to control corporations. Progressive legislation also provided minimum-wage laws for women workers, restricted the hours and working conditions for child
workers, and required pensions for indigent widows with
children. In spite of these gains for industrial workers, however, local, state, and federal governments usually sided with
employers during labor disputes. For instance, governors
often called out state militias to help manufacturers put
down strikes.
The rise of giant corporations after the Civil War encouraged the expansion of state and federal regulatory powers in
response. Government policies continued to foster economic
development in a variety of ways, from tariffs to liberal immigration laws to agricultural extension services, but the government now also played a larger role as the arbiter of disputes, enforcer of competition, and guardian of the industrial
worker.

The Era of National Emergencies: Economic Policies in
World War and Depression, 1914 to 1945
Three national crises—World War I (1914–1919), the Great
Depression (1929–1939), and World War II (1941–1945)—
ushered in a new era of relations between the government
and the economy. To mobilize for war, to soften the economic
disruptions of war, and to cope with the century’s most severe
economic depression, American citizens called on their government to dramatically expand its role in the nation’s economic affairs. But this expansion often was curtailed or limited by continuing fears of a strong central government and
by a continuing belief in the natural and inevitable character
of business cycles.
When World War I began in Europe in August 1914, the
United States was strongly isolationist. Although supplying
the Allies with large quantities of war-related foodstuffs, raw
materials, manufactured goods, and loans, the country did
not begin to seriously mobilize for war until mid-1916, and it
did not enter the war as a combatant until April 1917. But the
mobilization process did not go smoothly. In 1916 it was handled mainly by the Council of National Defense (CND) and
the U.S. Shipping Board, with the result that shortages of
tanks, planes, bombs, and critical materials were commonplace. In July 1917 the CND created the War Industries Board
(WIB) to set priorities and increase production of critical
materials. This was the first formal attempt at central economic planning in U.S. history.
But the WIB did not possess clear constitutional powers
to compel manufacturers to abide by its priorities. Many
companies did so voluntarily, motivated by patriotism,
profit seeking, or both. But the WIB did not become reasonably effective until 1918, when Wilson reorganized it, establishing a price-fixing committee within the WIB and appointing Wall Street tycoon Bernard Baruch as its chairman. Structuring the WIB more like a corporation, Baruch staffed it with business leaders, established functional divisions, and instituted a range of controls over the production and distribution of food and fuel
to discourage shortages, hoarding, and price discrimination.
The WIB also operated adjustment boards to control wages,
hours, and working conditions. And in the government’s most dramatic exercise of wartime power, Washington took over operation of the nation’s railroad system at the end of 1917 (through the U.S. Railroad Administration), followed in mid-1918 by the telephone and telegraph systems. Although most businesspeople initially viewed these actions with alarm, the wartime
business-government partnership proved to be mostly beneficial for American business. Corporate profits rose generously during the war. The government also relaxed antitrust
enforcement.
The federal government financed about one-third of the war effort by levying new or increased excise, estate, and income taxes, borrowing the rest largely through Liberty Bond drives. The Sixteenth Amendment, ratified in 1913, authorized an income tax, which soon was instituted on a
sharply progressive basis, ranging from 3 to 63 percent.
Meanwhile, the country suffered from severe price inflation,
brought on by a combination of heavy gold imports and liberal Federal Reserve credit policies.
The new enthusiasm among business leaders for a strong
business-government partnership dissipated rather quickly
with the return of peace. Under presidents Warren Harding
(1921–1923) and Calvin Coolidge (1923–1929), the federal
government raised tariffs, lowered taxes, made frequent antitrust exemptions, and staffed the FTC with business-friendly regulators but otherwise left big business alone. The
gross national product (the total market value of the goods
and services produced by the United States in a given year)
rose 43 percent between 1920 and 1929, spurred by the mass
production and mass marketing of automobiles, electricity,
and consumer durables. The agricultural sector suffered severely during the 1920s, but the government acted only to expand credit and to encourage cooperative efforts of farmers.
When engineer-businessman Herbert Hoover was elected
president in 1928, big business was held in high esteem, labor
union membership was declining, and stock prices on Wall
Street were skyrocketing. The stock market began a harrowing decline in October 1929 and did not hit bottom for three
years. Hoover blamed speculators and foreigners for the
Great Crash, and he called on business leaders to maintain
wages and prices. Drawing lessons from World War I
business-government cooperation and from his own Quaker
background, he advocated “associationalism,” an approach by
which business leaders would voluntarily cooperate to control wages, prices, and output for the nation’s good.
Although this approach proved naive, Hoover also took
some concrete measures to revive the economy. The centerpiece of his efforts was the Reconstruction Finance Corporation, which ultimately loaned more than $3 billion to ailing
railroads and financial institutions. He staunchly resisted direct government grants to either individuals or firms. Some
of Hoover’s economic policies were continued, in modified
form, under the Democratic administration of President
Franklin D. Roosevelt (1933–1945). Most notably, the National Industrial Recovery Act (1933) brought together leaders of big business to voluntarily set wages, prices, and output
levels, reminiscent of Hoover’s associationalism.
The federal government passed a dizzying array of economic legislation under Roosevelt’s New Deal—15 major
pieces of legislation in the first 100 days alone. To make sense
of these many laws and the “alphabet agencies” they created,
historians have employed various organizing schemes. One
views New Deal economic policies and programs in terms of
three fundamental goals: relief, recovery, and reform. Relief
programs, designed to help relieve the suffering of hard-hit
groups such as farmers or unemployed laborers, included the
Federal Emergency Relief Administration and the Public
Works Administration, which created thousands of construction jobs. Recovery legislation, intended to lift the economy
out of depression, similarly often involved job creation;
among the projects such legislation spawned was the Tennessee Valley Authority, a massive regional land reclamation
and electrification project. Reform legislation, designed to permanently correct structural flaws or weaknesses in the economy, focused on agriculture, public utilities, and banking.
The New Deal separated investment and commercial banking and created the Securities and Exchange Commission to
regulate Wall Street. Many programs were fashioned to
achieve more than one of these goals.
The New Deal also expanded the federal government’s role
as a guardian of social welfare and organized labor. In 1935
the National Labor Relations (Wagner) Act ensured the rights
of workers to organize and bargain collectively, and the Social
Security Act provided old-age, unemployment, and other
benefits. To some liberals, the New Deal did not go far
enough: It virtually ignored certain groups, and it preserved
the basic structure of American capitalism. Moreover, Roosevelt shared many of his predecessors’ traditional values, as
shown when he attempted to balance the budget in 1937,
thereby bringing on a new recession. Still, the New Deal,
whose dimensions are only suggested here, represented a dramatic expansion of federal economic power and activism.
World War II continued the trend. As Washington geared
up for war in the late 1930s, many economic planners strove
to avoid the production bottlenecks, shortages, and rampant
inflation that had plagued the nation in World War I. As in
that conflict, business executives played key managerial roles
in the economy in the 1930s and 1940s, and as in the New
Deal, Roosevelt again created scores of alphabet agencies. For
war production, the key agencies were the Office of Production Management (under General Motors president William Knudsen) and the Supply, Priorities, and Allocations Board,
both created in 1941. The U.S. economy converted to war
production remarkably quickly, its wartime output surpassing that of allies and enemies alike. Rationing and price controls were handled by the Office of Price Administration
(OPA, 1941), which succeeded in holding inflation well
below World War I levels. Labor unions sustained a no-strike
policy through most of the war, but in 1943 Congress passed
the War Labor Disputes Act, which strengthened the executive branch’s power to stop strikes at government war plants.
The federal budget soared between 1941 and 1945, with
nearly 90 percent of the $318 billion in expenditures going
directly to the war. This heavy price tag was funded by sharply higher taxes, the large-scale sale of government bonds, and deficits that reached $55 billion a year by the war’s end.
From Keynesianism to Neoconservatism: Managing the
Postwar Economy, 1945 to the Present
The wartime economy put into practice an economic theory
that had begun to gain attention in the late 1930s. In 1936
British economist John Maynard Keynes (pronounced “Kanes”) published The General Theory of Employment, Interest, and Money, arguably the century’s most influential
economic treatise. In his analysis of business cycles, Keynes
argued that recessions could be so severe that they would no
longer be self-correcting, as consumers hoarded money in
spite of falling prices. He contended that government deficit
spending (spending more than the government received in
taxes by borrowing money through the sale of treasury notes)
was needed to spark recovery.
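The logic is commonly summarized with the textbook Keynesian spending multiplier, sketched below. The 0.8 marginal propensity to consume and the $1 billion figure are assumed values for illustration, not numbers from Keynes:

```python
# Textbook Keynesian spending multiplier: why deficit-financed spending
# could "spark recovery." The MPC of 0.8 and the $1 billion stimulus are
# assumed illustrative values.

def spending_multiplier(mpc: float) -> float:
    """Income generated per dollar of new spending, 1 / (1 - MPC):
    each round of spending is partly re-spent (mpc) and partly saved."""
    return 1.0 / (1.0 - mpc)

mpc = 0.8                        # households re-spend 80 cents of each new dollar
stimulus = 1_000_000_000.0       # hypothetical $1 billion of deficit spending

k = spending_multiplier(mpc)
print(f"Multiplier: {k:.1f}")                         # 5.0
print(f"Total income effect: ${stimulus * k:,.0f}")   # $5,000,000,000
```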
Keynes’s ideas, which seemed to be validated by the
wartime recovery, were widely accepted by postwar U.S.
economists and policymakers. Often, too, they were oversimplified and “bastardized,” that is, used as an excuse to justify
policies that relied too heavily on short-term fiscal solutions.
(In fact, Keynes recommended monetary controls for most
economic conditions.) Still, Keynesian economics dominated
until the late 1970s. Far more than even during the New Deal,
the federal government was deemed responsible for the
health of the economy. This expectation was reflected in the
passage of the Employment Act of 1946, which created the
Council of Economic Advisers, a body of economists charged
with analyzing the economy and reporting on it to the president and Congress.
In the 1950s and 1960s, the economy performed extraordinarily well. New products and growing efficiencies deserved much of the credit, but so did the new variety of
macroeconomic management that gained prominence after
the war. Using new quantitative techniques from the emerging field of econometrics, economic policy makers began to
speak of “fine-tuning” an economy from which major business cycles had been virtually eliminated. Downturns became “corrections,” and mild slowdowns “soft landings.” Gone was the notion that business cycles were inevitable. In addition, heavy government spending on expanded social welfare programs and on the peacetime
“military-industrial complex” (to support the nation’s cold
war doctrine) provided a steady stimulus to the economy.
The federal government also made major investments in the
nation’s wealth-producing capacity through the 1944 Servicemen’s Readjustment Act, or GI Bill, and the 1956 Federal
Highway Act.
Postwar prosperity encouraged rising expectations about
the safety and quality of American life, which were translated
into a new wave of regulation. The New Social Regulation included some three dozen major new laws regulating the environment, the workplace, and consumer products. These included the Clean Air Act (1967, with later amendments), the
Occupational Safety and Health Act (1970), the Consumer
Products Safety Act (1972), and the Toxic Substances Control
Act (1976).
Growth rates remained high and unemployment and inflation were low until the mid-1960s, when—with the economy running close to capacity—President Lyndon Johnson
resisted raising taxes to pay for the aggressive, simultaneous
expansion of his Great Society social programs and the Vietnam War. Thus began the “great inflation” of the 1960s and
1970s. Historically, inflation and unemployment had moved
inversely, but by the early 1970s, both were topping 6 percent,
leaving economists at a loss to explain “stagflation.” In 1971
President Richard Nixon instituted wage-and-price controls,
a remarkable step for the conservative Republican. Similarly,
the Democrat Jimmy Carter, who entered the White House in 1977, began the broad-gauge deregulation of several
major transportation, energy, communications, and manufacturing sectors: airlines, trucking, railroads, petroleum and
natural gas, electricity, telecommunications, and financial
services. Both presidents had used unconventional measures
in grappling with a seemingly intractable economic slump.
The time was ripe for change, politically and within the economics profession. In the late 1970s, new research suggested that overregulation (such as the New Social Regulation) and overtaxation contributed to economic slowdown.
These ideas attracted the attention of some conservative Republican politicians, as did a theory proposed by University of Southern California economist Arthur Laffer—that cutting taxes would actually increase tax revenues.
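Laffer’s claim is usually pictured as a curve on which revenue is zero at tax rates of both 0 and 100 percent and peaks somewhere in between. The sketch below assumes a linear erosion of the tax base, so the peak at a 50 percent rate is an artifact of the illustration, not an empirical estimate:

```python
# Stylized Laffer curve: revenue = rate * base, with the taxable base
# assumed to shrink linearly as the rate rises. The functional form and
# the 50 percent peak are illustrative assumptions only.

def revenue(rate: float, base_at_zero_rate: float = 100.0) -> float:
    """Tax revenue at a given rate, with the base fully eroded at 100%."""
    base = base_at_zero_rate * (1.0 - rate)
    return rate * base

for rate in (0.2, 0.4, 0.5, 0.6, 0.8):
    print(f"rate {rate:.0%}: revenue {revenue(rate):5.1f}")
# Revenue rises to a peak and then falls; on the far side of the peak,
# cutting rates raises revenue, which was the supply-side argument.
```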
Presidential candidate Ronald Reagan ran on a supply-side economic platform, promising massive tax cuts to spur savings and investment, mild spending cuts, and a balanced budget. He and
his successor, George H. W. Bush (1988–1992), cut taxes,
scaled back health and safety and environmental regulation,
and greatly reduced antitrust enforcement. Inflation fell, but
the total national debt swelled to $4 trillion (rising from 23
percent to 69 percent of the gross domestic product [GDP],
or the total market value of the goods and services produced
by workers and capital within the United States in a year).
The administration of President Bill Clinton (1992–2000)
followed a mainstream economic program, which was supported by a massive stock market boom, historically low energy prices, and large technology-related productivity gains.
Since 1979 the economy also has benefited from excellent
leadership at the Federal Reserve, under chairs Paul Volcker
(from 1979 to 1987) and Alan Greenspan (since 1987). The
simultaneous high productivity rates, low inflation rates, and
high employment rates of the late 1980s and 1990s proved to
be as baffling to economists as stagflation had been, though
certainly more welcome.
Since the 1980s, economic policy makers have strongly favored opening up business to ever greater levels of free market competition, both domestically and globally. President
George W. Bush (who assumed office in 2001) has prescribed
supply-side policies much like those of his father. Yet, despite
the fact that Keynesian government intervention seems to
have fallen out of fashion, the federal government continues
to play an enormous role in the country’s mixed economy as
a customer, supplier, promoter, and regulator.
—David B. Sicilia
