Law. The American Economy: A Historical Encyclopedia

The United States of America, a former colony of the British
Empire, has a legal heritage descended from the English common law system. The American legal system maintains law
and order; manages large populations, commerce, and the
wealth of the nation; and reflects American culture. Through
judicial decisions and legislative action, the law has evolved to
remain up-to-date and to represent contemporary society.
Consequently, the U.S. Constitution, one of the governing
documents of American law, functions as a living organic law,
a product of the American experience. An understanding of
the American legal system requires an examination of the
common law system, how it evolved, and how it came to the
United States of America.
Common law refers to the system of laws developed in
England and adopted by most of the English-speaking world.
Common law uses the concept of stare decisis (let the decision stand) as the basis of its system, with past decisions serving as a principal source of authority. Judges draw their decisions from existing principles of law, thus reflecting the living values, attitudes, and ethical ideas of the people. English common law developed purely as a product of English constitutional development. By contrast, most countries of
continental Europe and the nations settled by them employ
the civil law system—the other principal legal system of the
democratic world. Civil law rests on Roman law, which was
extended to the limits of the Roman Empire. Islamic law, the
third major legal system, relies on the Koran, as interpreted
by tradition and juristic writings.
In 1066 the Normans, under William the Conqueror, brought to England their laws, which descended from the Scandinavian conquerors of western France. Anglo-Saxon law at that time was well established in England, but the Normans offered refined administrative skills. They established a system of government to deal with the highly decentralized English shires, bringing all the counties under one common rule. During the reign of Henry II (1154–1189), England adopted a system of royal courts and common law throughout the country. Centuries later, the Judicature Act of 1873 consolidated a series of statutes and swept away the whole classical structure of the English courts.
The colonists carried this system of laws to the British
colonies in the New World.
The early American legal system adhered to English law
but gradually changed over the centuries. Law emerged from
the necessary customs and morals of society, even though the
colonial judicial system of the eighteenth century in the
United States remained notably English. The common law
evolved from the customs of the royal courts, though as the
legal system developed, previous cases became a source of
law. The skeleton of colonial law was shaped in the courts but
followed English practice. Unlike the English system, though, each colony started off with a single court that passed the necessary laws. Until 1776 law libraries contained mainly English materials, chief among them William Blackstone's Commentaries on the Laws of England (1765–1769), a concise and current survey of the basics of English law that is still consulted today. Early American legal literature remained quite sparse.
Although many of the old English laws and traditions prevailed in the colonies, no standardized law existed there. Each
colony developed its own system of law, just as each state does today (a diversity that also accounts for the distinct civil law systems of Quebec in Canada and Louisiana in the United States). In 1776 the colonies declared themselves independent. The founding fathers drew
up the Articles of Confederation, but they proved unsatisfactory. After the failure of the Articles due to a lack of taxing
power, delegates to the Constitutional Convention drafted
the federal Constitution, signed in 1787 and subsequently ratified by the states. The
states also drew up their own constitutions, and federal
courts served as the courts of appeal for major state courts.
Ultimately, debates developed as to whether the common law
system should be overthrown.
Doubts existed as to whether the English common law system would come to dominate North America. With the different nations that were colonizing the North American continent came varying legal systems: The British, French, and
Spanish and even the Dutch in Delaware carried with them
their own legal cultures and heritages as they settled into their respective territories across the continent. However, by the
turn of the nineteenth century, the common law system had
taken a firm hold in the United States, and there was little risk
that it would be supplanted by the French Napoleonic Code,
the only real alternative. Just two remnants of the French legal
system continue today in two of France’s old colonies—the
Province of Quebec in Canada and the State of Louisiana.
By the middle of the nineteenth century, the preconditions
for a separate and distinct American jurisprudence had been
achieved. Enough time had elapsed since the Declaration of
Independence for an American legal heritage to develop.
American precedents had been built up, legal texts had been
written, and lawyers had been trained in the United States.
The American legal system was not yet completely autonomous, and judges still referred to English law for precedents where American law was lacking, but those areas became fewer and fewer as the years went by. One clear
distinction came with the transition in land laws. In England
the legal system facilitated land inheritance through primogeniture. A significant break came in the 1850s when the
United States rejected the notion of passing on all land to the
eldest son. This decision reflected the emergence of a legal
system independent from English law.
Legal Terms and Applications
Two types of court cases—civil and criminal—exist in the
United States. Plaintiffs initiate civil cases, in which a company or an individual sues for financial reparations, whereas
the state prosecutes criminal cases, which carry punishments of fines or imprisonment. Common law and equity remain separate in that equity deals with remedies beyond simple financial reparations. In England, the Courts of Chancery and the Star Chamber, which dealt with equity matters, had the authority to force people to undertake certain actions, such as selling property—something not done in an ordinary civil case. Equity receiverships allow courts to take possession of assets and redistribute them. In the United States, the process of equity receivership was not settled until the formulation of stable bankruptcy laws in 1898.
Most legal thought develops institutionally, not individually, through processes occurring in the courtroom and the legislative chambers. Legislation, which is promulgated by the legislative branch of the government, is prospective: a new rule or law takes effect from a specified date forward. Case law, by contrast, is retroactive. Taxes offer a good case study in this regard. Under legislation, individuals can be taxed only on money earned from the moment the law was passed, whereas under case law, a ruling can deem that individuals owe the government back taxes. For this reason, courts must take into account the effects their decisions will have; consequently, they usually issue conservative decisions.
A contract constitutes a binding agreement that two or
more individuals or entities enter into—an enforceable
promise that is to be carried out at a future date. Two types of
contracts exist. A contract of sale is the most common and is
usually made instantaneously, as when purchasing goods.
The second involves a more complicated transaction, usually
associated with a trading or commercial situation, involving
a guarantee to provide goods or services in the future. In
Anglo-American law, contracts can be formal (written documents) or informal (implied in speech or writing). A stable
society requires both types of contracts.
For almost 700 years, the jury system has been an important part of the legal system. There are two types of juries.
The petit jury hears both civil cases (to establish damages that
will be awarded) and criminal cases (to establish guilt). The
grand jury, which functions as an accusatory body, establishes, based on evidence presented to it, whether a case warrants trial. The jury system is much criticized for being flawed
because jurors tend to make their decisions based on emotion
rather than rational thought. Presently, the grand jury exists in only half of the states and in the federal courts.
Commerce Clause
The commerce clause, as presented in the U.S. Constitution,
gives the government the power “to regulate commerce with
foreign nations, and among the several states, and with the
Indian tribes.” In order to regulate enormously powerful
business corporations, to carry forward programs of social
welfare and economic justice, to safeguard the rights of individual citizens, and to allow that diversity of state legislation
so necessary in a federal system of government, the Supreme
Court eventually defined what constituted commerce.
The period from 1824 to 1937 saw several important
events in the adjudication of the commerce clause before the
Supreme Court.
Gibbons v. Ogden (1824) was the first case in
which the Court interpreted and applied that particular
clause of the U.S. Constitution. The commerce clause came
about because states erected barriers to protect manufacturers within their borders.
Gibbons v. Ogden emerged because
the state of New York prevented Thomas Gibbons, a resident
of Elizabethtown, New Jersey, from running his ferry service
between New Jersey and New York, in competition with the
ferry service of Col. Aaron Ogden, who operated under an exclusive license granted by New York. Lawyers argued the steamboat case before the Supreme Court in
February 1824. Daniel Webster and William Wirt (the U.S. attorney general from 1817 to 1829) represented Gibbons, and
Thomas J. Oakley and Thomas A. Emmet represented
Ogden. Webster argued that the federal government retained
the sole authority over commerce and that the states lacked
the power to enact laws affecting it. Emmet, for his part, argued for a narrow definition of commerce. He contended
that Congress might have an incidental power to regulate
navigation but only insofar as that navigation occurred for
the limited purposes of commerce. Emmet argued that the
individual states had always exercised the power of making
material regulations respecting commerce.
On March 2, 1824, Chief Justice John Marshall handed down his decision. He rejected the premise that the expressly granted powers of the Constitution should be construed strictly. He took the word commerce and gave it a broad definition, he extended the federal power to regulate commerce within state boundaries, and he gave wide scope to the constitutional grant in applying these powers.
Following the Gibbons v. Ogden case, the Supreme Court presided over the watershed case Cooley v. Board of Wardens of the Port of Philadelphia (1852), which cleared up questions raised in the Gibbons v. Ogden decision. First, the Supreme
Court held that certain subjects of national importance demanded uniform congressional regulation, whereas others of
strictly local concern properly remained under the jurisdiction of state regulation. Second and perhaps most important,
the Court gave itself great power by becoming the final arbiter in decisions that would affect the core of the American
federal system. The commerce clause has proven extremely
important in America’s legal history because through it, the
government has exercised a tremendous amount of centralized authority. Using the commerce clause, the government
could weld the diverse parts of the country into a single nation.
As a result of Cooley v. Board of Wardens, states were able to impose tariffs on shipping through their territories, but the courts would strike down laws if state regulation favored local businesses. On February 4, 1887, Congress passed the Interstate Commerce Act to regulate runaway rail rates. The act also established the five-person Interstate Commerce Commission (ICC), but the commission lacked effective enforcement power until the passage of the Hepburn Act of 1906, the Mann-Elkins Act of 1910, and the
Federal Transportation Act of 1920. Around 1900 Congress
used the commerce clause to regulate the national economy
and certain businesses as well. The Supreme Court, in the
process, gave an expanded interpretation of the scope of national authority contained in that delegated power, but it
never gave complete free rein to the commerce clause, which
led to the rise of the doctrine of dual federalism.
The concept of dual federalism involves the notion that
the national government functions as one of two powers and
that the two levels of government—national and state—operate as sovereign and equal entities within their respective
spheres. With dual federalism, state powers expanded. And as
a direct consequence of dual federalism, the federal government could not regulate child labor: The Supreme Court reasoned that child labor remained purely a local matter, keeping it out of the regulatory reach of the federal government.
With the New York Stock Market crash in 1929 and the
onset of the Great Depression, the Court reversed its policy
on dual federalism. To deal with the depression, President
Franklin D. Roosevelt implemented his reforms in economics, agriculture, banking and finance, manufacturing, and
labor, all of which involved statutes that the Court had struck
down before. Congress passed the National Labor Relations Act (Wagner Act) on July 5, 1935, regulating labor-management relations in industry and creating the National Labor Relations
Board (NLRB).
National Labor Relations Board v. Jones & Laughlin (1937) became the first test case before the Supreme Court. The circuit courts had ruled in favor of the Jones & Laughlin Steel Corporation of Pittsburgh, citing Carter v. Carter Coal Co., which distinguished between production
and commerce. The Supreme Court did not uphold this distinction, and as a result, the NLRB was able to order companies to desist from certain labor practices if they adversely affected commerce in any way. By the end of 1938, the
authority of the NLRB extended to companies that were
wholly intrastate, that shipped goods in interstate commerce,
or that provided essential services for the instrumentalities of
commerce.
The two other important cases dealing with the commerce
clause were
United States v. Darby (1941) and Wickard v. Filburn (1942). The rulings from these cases resolved the confusion surrounding the commerce clause once and for all. The
Supreme Court found that the clause “could reach any individual activity, no matter how insignificant in itself, if, when
combined with other similar activities, it exerted a ‘substantial economic effect’ on interstate commerce.” The Court did
away with the old distinction between commerce and production, bringing manufacturing, mining, and agriculture
into—and making them inseparable from—commerce. The
Supreme Court also did away with the constitutional doctrine of dual federalism and denied states the power to limit
the delegated powers of the federal government.
Since 1937, the Court’s interpretation of the commerce
clause has given Congress broad and sweeping powers to regulate labor-management relations. By the end of 1942, the
Supreme Court had also given Congress extensive authority
to regulate commerce, but this authority did not extend to
the insurance industry because insurance was deemed more
of a contract than a business. The Court refused to hear cases
dealing with insurance until 1944 in United States v. South-Eastern Underwriters Association, a case in which Justice
Hugo L. Black held that both the commerce clause and the
Sherman Anti-Trust Act could be applied to the insurance
business.
Bankruptcy Law
Bankruptcy law in the United States gives more favorable
treatment to debtors than to creditors. Moreover, the courts
view bankruptcy not as a last resort but rather as another option to resolve financial difficulties. Famous individuals declare bankruptcy quite frequently and for different reasons;
for example, they may use bankruptcy to get out of a contract.
Another characteristic of U.S. bankruptcy law is that lawyers handle the declaration of bankruptcy, whereas in other nations, bankruptcy decisions are made through an administrative process. A bankruptcy judge oversees the process in the United States, and both the debtor and the creditor usually retain counsel. By contrast, in England, another market-based economy, an administrator supervises the process, and
the debtor (whether an individual or a business) rarely has
the option of being represented by counsel. This is an interesting development, given the fact that when U.S. bankruptcy
laws were first enacted in 1800, they resembled the English
laws almost exactly.
Two types of bankruptcies exist in the U.S. legal system—
one for individuals and another for corporations. For individuals, Chapter 7 bankruptcy involves a straight liquidation,
whereby all of the individual’s assets are liquidated and used
to pay off creditors. The court then relieves the debtor of his
or her entire burden. An individual may also file a Chapter 13
bankruptcy. This chapter of the Bankruptcy Code provides
for a rehabilitation case, whereby the debtor pays a portion of
the debt over a period of three to five years—making this a
less stigmatizing form of bankruptcy. Thus, an individual has
two options when declaring bankruptcy: either liquidation
(Chapter 7) or rehabilitation (Chapter 13). In both cases, the
debtor can retain certain assets in order to be able to make a
fresh start. A debtor or creditor can initiate a bankruptcy
claim, but most of the time, such claims are made voluntarily by the debtor.
As with individual bankruptcy, a company can file for either liquidation or reorganization. For the corporation,
Chapter 7 involves liquidation, but it is complete and with no
exemptions. Chapter 11 allows for the rehabilitation of companies. On occasion, individuals can invoke Chapter 11 and
small businesses can file Chapter 13 bankruptcies.
In the late eighteenth century, bankruptcy law involved an
ideological struggle between opposing groups. On the one
hand, Alexander Hamilton and the Federalists believed that
the future of America lay with commerce and that bankruptcy laws were essential to protect both creditors and
debtors; they argued that these laws would encourage credit,
thereby fueling commercial growth. Thomas Jefferson and
the Republicans, on the other hand, feared that a federal
bankruptcy law would erode the importance of farmers' property rights and shift power from the states to the federal courts.
Debates raged throughout the nineteenth century on such
issues as whether only debtors could invoke bankruptcy laws.
Congress enacted three bankruptcy laws (in 1800, 1841, and
1867) but repealed each of them a few years later, since legislators had hastily formulated the acts to respond to grave economic distress. The bankruptcy legislation of 1898, however,
had staying power. In the end, the nation’s first large-scale
corporate reorganization, which involved the bankruptcy of
many railroads during the 1890s, resulted in stable bankruptcy laws. The courts, not Congress, dealt with this problem, creating a process known as equity receivership.
Effective U.S. bankruptcy laws went through three eras.
The first involved the enactment of the 1898 Bankruptcy Act
and the perfection of the equity receivership technique for
large-scale reorganizations. The Great Depression and the
New Deal marked the second era, during which bankruptcy
reforms reinforced and expanded the general bankruptcy
practice and completely reshaped the landscape of large-scale
corporate reorganization. The enactment of the 1978 Bankruptcy Code and the revitalization of bankruptcy practice
initiated the final era.
Antitrust Law
Today, antitrust law shapes the policy of almost every large
company in the world. Following World War II, the United
States wanted to impose its antitrust tradition on the rest of
the world. Contradictions existed between nations, as most
industrial countries tolerated (or even encouraged) cartels
whereas the United States banned them. The antitrust concept has a hallowed place in American economic and political life. Antitrust legislation focuses on preventing collusion
among competing firms hoping to raise prices and hinder
competition. European governments, by contrast, often set minimum prices and cooperated with cartels. This policy protected smaller firms and stabilized both individual markets and the overall economy.
In the 50 years before World War II, nations backed away
from the idea of economic competition as promoting the
common good. The pace of the retreat, at first gradual, picked
up with the outbreak of World War I. The expansion of cartels was among the chief manifestations of this trend: cartels played an ever-growing role in domestic and international trade, and by 1939 they had become a major factor in the
world economy. The United States remained the only country of the industrialized world to reject the notion of cartels,
and it reacted to cartels abroad by increasing tariff barriers.
Americans respected the efficiency of big business but feared
its economic and political powers. They placed great confidence in economic competition as a check on the power of
big business, and they looked askance at cartels. As a result,
Washington regulated the activities of large firms, outlawing
cartels and imposing other restrictions on companies.
Congress passed the Sherman Act of 1890 as the first
measure directed against big business. In 1914, during the administration of President Woodrow Wilson, Congress also
passed the Clayton Anti-Trust and Federal Trade Commission Acts. With the Great Depression, however, Franklin Roosevelt secured passage of the National Industrial Recovery Act (NIRA), which suspended the antitrust laws and allowed cartels during the economic downturn under "codes of fair competition" for each industry. In his second term, Roosevelt went on a strong antitrust crusade, creating the Temporary National Economic Committee (TNEC) and reinvigorating the Justice Department's Antitrust Division under Thurman Arnold. Before the outbreak of
war in Europe in 1939, Arnold concentrated on domestic
conditions. But the war forced him to pay more attention to
foreign affairs. His Antitrust Division operated constructively
in peacetime, but he failed to see the importance of cartels in
wartime, when free market rules are suspended and close cooperation is needed. Although the government retreated
from its antitrust position during the war, Washington would
pick it up again afterward.
With the onset of World War II, American firms participating in cartels experienced difficulties, as did those involved
in the antitrust drive. Since the United States remained technically neutral, cartel agreements with German firms remained in place. American businesses did not sever their ties
because of the advantages gained, such as access to innovations, and Congress did not suspend cartel agreements because if it had, the executive branch would have had to admit
that war with Germany remained a possibility. Furthermore,
the need to coordinate mobilization and placate the business
community led to sharp restrictions on the antitrust drive.

After World War II, the United States began to focus its attention on foreign cartels. A small group associated with the
Antitrust Division of the Justice Department took an interest
in foreign affairs and used the division’s position in the world
to attack foreign cartels, believing that Europe’s failures resulted from its lack of an antitrust tradition. But domestic
markets outside the United States facilitated cartels because
they remained necessary to the smaller economies. According
to Wyatt Wells, in his work
Antitrust and the Formation of the
Postwar World,
the successful export of the antitrust concept
depended on economic development abroad. After 1945 the
nations of Western Europe integrated their markets, stabilized their currencies, and built or reinforced democratic governments. In this context, companies could afford competition, and most European governments responded to
Washington’s urging and enacted antitrust statutes roughly
comparable to those in American law. Yet in the absence of
favorable conditions—for example, in Japan—antitrust
foundered.
The postwar attack on cartels was advanced, in part, under
the banner of free trade. However, long-term goals such as
commercial liberalization would have to wait, as nations simply tried to stabilize the postwar world economy. They proposed the International Trade Organization (ITO) to deal with this concern, and few firms (the De Beers diamond cartel and
shipping businesses being the notable exceptions) escaped
the blows dealt by the U.S. courts. In the early 1950s, as Western nations achieved a measure of prosperity, cartel policy
also achieved a certain equilibrium. Radical decartelization
failed in Japan and Germany, but court decisions in the United States had seriously weakened international cartels. Monopoly remained suspect, and cartels were
largely forbidden, but big business would continue as long as
competition persisted. In practice, some cartels were allowed
to exist if they could cite special circumstances or command
substantial political support.
Legal Education
In the early days of the colonies, lawyers played a small role
and were generally unwelcome; indeed, pleading for hire was
prohibited by the Massachusetts
Body of Liberties (1641).
Over time, however, lawyers came to fulfill two important
functions in the legal system: providing advice and practicing
advocacy. Today, some lawyers specialize in courtroom work
(like English barristers), and others work in their offices (like
English solicitors/attorneys). In Britain, the two specializations remain separate, though this is not the case in the
United States. In America, lawyers receive training at law
schools, which are usually affiliated with a university, whereas
in Britain, barristers train at one of the four Inns of Court, a combination of law school and professional organization.
The history of the law school in the United States differs
from that of legal education in the rest of the common law
system. Only in North America can a law school function
completely apart from the rest of the university with which it
is affiliated. Before the Civil War, law schools played a minor
role in the training of lawyers. The trend of educating attorneys in law schools began only in the early years of the twentieth century, and it developed for numerous reasons, mainly
to achieve higher standards, establish standardization, and
exclude immigrants from the field. (The American Bar Association [ABA] and the Association of American Law Schools [AALS] wanted to exclude immigrants because they did not
espouse the values of the dominant Anglo-Saxon Protestants.) Clearly, the raising of standards played an important
role, for elite lawyers (like elites in other fields of the time)
wished to establish more rigorous academic instruction.
The ABA and AALS campaigned on two fronts: (1) to increase standards required of accredited universities, and (2)
to secure legislation that would impose these higher standards. Not until 1928 did states require attorneys to attend
law school before practicing in the field. This mandatory policy was aimed largely at competing schools that taught law on a part-time basis or at night and that could not meet the required standards. These schools fiercely resisted any attempt
at change, but the economic situation of the Great Depression forced many of them to shut down.
With the closure of the “lesser” law schools, the ABA and
AALS had the freedom to implement a legal training system
of their choosing. The bar exam became compulsory, and
without passing it, lawyers could not practice in any state.
The standards of the bar rose, making it more difficult to pass
the exam. Harvard University played a large part in setting
these standards. Christopher Columbus Langdell, the first
dean of the Harvard Law School, promoted graduate professional education for lawyers in order to elevate the Harvard
program from mediocrity to distinction. Other universities
quickly followed suit by establishing law schools of their own
or by bringing independent institutions under their auspices.
Acceptance into law school became more selective, especially
with the implementation of the Law School Admission Test
(LSAT) in 1948.
Today’s law schools in the United States produce considerable legal writings in their law reviews. Most of these schools
publish journals, and eminent lawyers and law professors
write the lead articles. These works are probably more valuable than any other secondary legal source. Indeed, doctrinal
writing holds an important place as a secondary source of law
in the Anglo-American legal system.
Conclusion
The American legal system, once intrinsically linked with
English law, has come into its own over the past couple of
centuries. Today, it has become a model for many of the
emerging democracies. Through the judicial and legislative
branches of the government, American law has adequately
managed the commerce and the wealth of the nation, while
also reflecting American values. At the turn of the twentieth
century, antitrust legislation, bankruptcy legislation, and the
commerce clause all emerged to deal with the rise of big business. In addition, modern American law schools successfully
train American lawyers, thus maintaining an independent
American legal tradition.
—Matthieu J-C. Moss
References
Benson, Paul R., Jr. The Supreme Court and the Commerce Clause, 1937–1970. New York: Dunellen Publishing, 1970.
Billias, George Athan, ed. Law and Authority in Colonial America: Selected Essays. Barre, MA: Barre Publishers, 1965.
Friedman, Lawrence M. A History of American Law. New York: Simon and Schuster, 1973.
Horwitz, Morton J. The Transformation of American Law, 1780–1860. New York: Oxford University Press, 1992.
Kempin, Frederick G., Jr. Historical Introduction to Anglo-American Law in a Nutshell. St. Paul, MN: West Publishing, 1973.
Schwartz, Bernard. The Law in America: A History. New York: McGraw-Hill, 1974.
Skeel, David A., Jr. Debt's Dominion: A History of Bankruptcy Law in America. Princeton, NJ: Princeton University Press, 2001.
Stevens, Robert. Law School: Legal Education in America from the 1850s to the 1980s. Chapel Hill: University of North Carolina Press, 1983.
Wells, Wyatt. Antitrust and the Formation of the Postwar World. New York: Columbia University Press, 2002.
