The four channels are:
1) “a direct effect from economic dislocation to demands for anti-elite, redistributive policies”
2) “through amplification of cultural and identity divisions”
3) “through political candidates adopting more populist platforms in response to economic shocks”
4) “through adoption of platforms that deliberately inflame cultural and identity tensions.”
In order to get a better sense of what underpinned Trump’s populist appeal, Rodrik focused on a specific bloc of voters — those who switched from supporting Obama in 2012 to Trump in 2016. He found that
Switchers to Trump are different both from Trump voters and from other Obama voters in identifiable respects related to social identity and views on the economy in particular. They differ from regular Trump voters in that they exhibit greater economic insecurity, do not associate themselves with an upper social class and they look favorably on financial regulation. They differ from others who voted for Obama in 2012 in that they exhibit greater racial hostility, more economic insecurity and more negative attitudes toward trade agreements and immigration.
In an email, Rodrik wrote:
Automation hits the electorate the same way that deindustrialization and globalization have done, hollowing out the middle classes and enlarging the potential vote base of right-wing populists — especially if corrective policies are not in place. And the overall impact of automation and new technologies is likely to be much larger and more sustained, compared to the China shock. This is something to watch.
In their December 2017 paper, “Artificial intelligence, worker-replacing technological progress and income distribution,” the economists Anton Korinek, of the University of Virginia, and Joseph E. Stiglitz, of Columbia, describe the potential of artificial intelligence to create a high-tech dystopian future.
Korinek and Stiglitz argue that without radical reform of tax and redistribution policies, a “Malthusian destiny” of widespread technological unemployment and poverty may ensue.
Humans, they write, “are able to apply their intelligence across a wide range of domains. This capacity is termed general intelligence. If A.I. reaches and surpasses human levels of general intelligence, a set of radically different considerations apply.” As for when that moment might arrive, “the median estimate in the A.I. expert community is around 2040 to 2050.”
Once parity with the general intelligence of human beings is reached, they continue, “there is broad agreement that A.I. would soon after become super‐intelligent, i.e., more intelligent than humans, since technological progress would likely accelerate.”
Without extraordinary interventions, Korinek and Stiglitz foresee two scenarios, both of which could have disastrous consequences:
In the first, “man and machine will merge, i.e., that humans will ‘enhance’ themselves with ever more advanced technology so that their physical and mental capabilities are increasingly determined by the state of the art in technology and A.I. rather than by traditional human biology.”
Unchecked, this “will lead to massive increases in human inequality,” they write, because intelligence is not distributed equally among humans and “if intelligence becomes a matter of ability-to-pay, it is conceivable that the wealthiest (enhanced) humans will become orders of magnitude more productive — ‘more intelligent’ — than the unenhanced, leaving the majority of the population further and further behind.”
In the second scenario, “artificially intelligent entities will develop separately from humans, with their own objectives and behavior, aided by the intelligent machines.”