History of tariffs in the United States
Tariffs have historically played a key role in the trade policy of the United States. Economic historian Douglas Irwin classifies U.S. tariff history into three periods: a revenue period (ca. 1790–1860), a restriction period (1861–1933), and a reciprocity period (from 1934 onwards). In the revenue period, average tariffs rose from 20 percent to 60 percent before declining again to 20 percent. In the restriction period, average tariffs rose to 50 percent and remained at that level for several decades. From 1934 onwards, in the reciprocity period, the average tariff declined substantially until it leveled off at 5 percent. Especially after 1942, the U.S. began to promote worldwide free trade. After the 2016 presidential election, the U.S. increased trade protectionism.
According to Irwin, tariffs were intended to serve three primary purposes: "to raise revenue for the government, to restrict imports and protect domestic producers from foreign competition, and to reach reciprocity agreements that reduce trade barriers."
According to Irwin, a common myth about U.S. trade policy is that low tariffs harmed American manufacturers in the early 19th century and that high tariffs then made the United States into a great industrial power in the late 19th century. Although the country's share of global manufacturing rose from 23% in 1870 to 36% in 1913, the admittedly high tariffs of the time came with a cost, estimated at around 0.5% of GDP in the mid-1870s. In some industries, they might have sped up development by a few years. However, U.S. economic growth during its protectionist era was driven more by the country's abundant resources and its openness to people and ideas.