Joseph Stiglitz & Anton Korinek on AI and Inequality
Over the next decades, AI will dramatically change the economic landscape. It may also magnify inequality, both within and across countries. Joseph E. Stiglitz, Nobel Laureate in Economics, joined us for a conversation with Anton Korinek on the economic consequences of increased AI capabilities. They discussed the relationship between technology and inequality, the potential impact of AI on the global economy, and the economic policy and governance challenges that may arise in an age of transformative AI. Korinek and Stiglitz have co-authored several papers on the economic effects of AI.
Joseph Stiglitz is University Professor at Columbia University. He is also the co-chair of the High-Level Expert Group on the Measurement of Economic Performance and Social Progress at the OECD, and the Chief Economist of the Roosevelt Institute. A recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979), he is a former senior vice president and chief economist of the World Bank and a former member and chairman of the US President’s Council of Economic Advisers. Known for his pioneering work on asymmetric information, Stiglitz’s research focuses on income distribution, risk, corporate governance, public policy, macroeconomics and globalization.
Anton Korinek is an Associate Professor at the University of Virginia, Department of Economics and Darden School of Business, as well as a Research Associate at the NBER, a Research Fellow at the CEPR and a Research Affiliate at the AI Governance Research Group. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence on macroeconomic dynamics and inequality.
You can watch a recording of the event here or read the transcript below:
Joslyn Barnhart [0:00]
Welcome, I’m Joslyn Barnhart, a Visiting Senior Research Fellow at the Centre for the Governance of AI (GovAI), which is organizing this series. We are part of the Future of Humanity Institute at the University of Oxford. We research the opportunities and challenges brought by advances in AI and related technologies, so as to advise policy to maximise the benefits and minimise the risks from advanced AI. Governance, this key term in our name, refers [not only] descriptively to the ways that decisions are made about the development and deployment of AI, but also to the normative aspiration that those decisions emerge from institutions that are effective, equitable, and legitimate. If you want to learn more about our work, you can go to http://www.governance.ai.
I’m delighted today to introduce our conversation featuring Joseph Stiglitz in discussion with Anton Korinek. Professor Joseph Stiglitz is University Professor at Columbia University. He’s also the co-chair of the high-level expert group on the measurement of economic performance and social progress at the OECD, and the chief economist of the Roosevelt Institute, a recipient of the Nobel Memorial Prize in Economic Sciences in 2001, and the John Bates Clark Medal in 1979. He is a former senior vice president and chief economist of the World Bank, and a former member and chairman of the US President’s Council of Economic Advisers. Known for his pioneering work on asymmetric information, Professor Stiglitz’s research focuses on income distribution, risk, corporate governance, public policy, macroeconomics and globalisation.
Professor Korinek is an associate professor at the University of Virginia, Department of Economics and Darden School of Business, as well as a research associate at the NBER, research fellow at the CEPR, and a research affiliate at the Centre for the Governance of AI. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence [on] macroeconomic dynamics and inequality.
Over the next decades, AI will dramatically change the economic landscape and may also magnify inequality both within and across countries. Anton and Joe will be discussing the relationship between technology and inequality, the potential impact of AI on the global economy, and the economic policy and governance challenges that may arise in an age of transformative AI. We will aim for a conversational format between Professor Korinek and Professor Stiglitz. I also want to encourage all audience members to type your questions using the box below. We can’t promise that [your questions] will be answered but we will see them and try to integrate them into the conversation. With that, Anton and Joe, we look forward to learning from you and the floor is yours.
Anton Korinek [3:09]
Thank you so much, Joslyn, for the kind introduction. Inequality has been growing for decades now and has been further exacerbated by the K-shaped recovery from COVID-19. In some ways, this has catapulted the question of how we can engineer a fairer economy and society to the top of the policy agenda all around the world. As Joslyn has emphasised, what is of particular concern for us at the Centre for the Governance of AI is that modern technologies, and to a growing extent artificial intelligence, are often said to play a central role in increasing inequality. There are concerns that future advances in AI may in fact further turbo-charge inequality.
I’m extremely pleased and honoured that Joe Stiglitz is joining us for today’s GovAI webinar to discuss AI and inequality with us. Joe has made some of the most pathbreaking contributions to economics in the 20th century. As we have already heard, his work was recognised by the Nobel Prize in Economics in 2001. I should say that he has also been the formative intellectual force behind my education as an economist. What I have always really admired in Joe — and I still admire every time we interact — is that he combines a razor-sharp intellect with a big heart, and that he is always optimistic about the ability of ideas to improve the world.
We will start this webinar with a broader conversation on emerging technologies and inequality. Over the course of the webinar, we will move more and more towards AI and ultimately the potential for transformative AI to reshape our economy and our society.
Let me welcome you again, Joe. Let’s start with the following question: Can you explain what we mean by inequality? What are the dimensions of inequality that we should be most concerned about?
Joseph Stiglitz [5:33]
[Inequality is the] disparities in the circumstances of individuals. One is always going to have some disparities, but not of the magnitude and not of the multiplicity of dimensions [that we see today]. When economists talk about inequality, they first talk about inequalities of income, wealth, labour income, and other sources of income. [These inequalities] have grown enormously over the last 40 years. In the mid-1950s, Simon Kuznets, a great economist who got a Nobel Prize, thought that in the early stages of development, inequality would increase, but then [in later stages of development], inequality would decrease. And the historical record was not inconsistent with that [model] at the time he was writing. But then, beginning in the mid-1970s and early 1980s, [inequality] started to soar. [Inequality] has continued to increase until today, and the pandemic’s K-shaped recovery has exposed and exacerbated this inequality.
Now beyond that, there are many other dimensions of inequality, like access to health[care], especially in countries like the United States [without] a national health service. As a result, the US has the largest disparities in health among advanced countries, and even before 2019, saw declining life expectancy and overall health standards. There are disparities in access to justice and other dimensions that make for a decent life. One of the concerns that has been highlighted in the last year is the extent to which those disparities are associated with race and gender. That has given rise to the huge [movement], “Black Lives Matter.” [This movement] has reminded us of things that we knew, but were not always conscious of, [including] the tremendous inequalities across different groups in our society.
Anton Korinek [8:23]
Thank you. Can you tell us about what motivated you personally to dedicate so much of your work to inequality in recent decades? I’ve heard you speak of your experience growing up in Gary, Indiana. I have heard a lot about your role as a policymaker, as a chair of the President’s Council of Economic Advisers, and as a chief economist of the World Bank in the 1990s. How has all of this shaped your thinking on inequality?
Joseph Stiglitz [8:55]
I grew up, as you said, in Gary, Indiana, which was emblematic of industrial America, though of course I didn’t realise that as I was growing up. [In Gary], I looked at my surroundings, and I saw enormous inequalities in income and across races; [I saw] discrimination. That was really hard to reconcile with what I was being taught about the American Dream: that everybody has the same opportunity and that all people are created equal. All those things that we were told about America, which I believed on one level, seemed inconsistent with what [I saw].
That was why I had planned [to study economics]. Maybe it seems strange, but I had wanted to be a theoretical physicist. [But with all] the problems that I had seen growing up around inequality, suddenly, at the end of my third year in college, I wanted to devote my life to understanding and doing something about inequality. I entered economics with that very much on my mind, and I wrote my thesis on inequality. But life takes its turn, [so I spent] much of the time [from then until] about 10 years ago on issues of imperfect information and imperfect markets. This was related, in some sense, to inequalities because the inequalities in access to information were very much at the core of some of the inequalities in our society. [For example,] inequalities in education played a very important role in the perpetuation of inequalities. So, the two were not [part of] a totally disparate agenda.
From the very beginning, I also spent a lot of time thinking about development, which interacted with my other work on theoretical economics. It may seem strange, but I did go to Africa in 1969: [I went to] Kenya not long after it got its independence. I’m almost proud to say that some people in Africa claim me to be the first African Nobel Prize winner: [Africa] had such an important role in shaping my own research. That strand of thinking about inequality between the developing countries and the developed countries was also very important [to my understanding of inequality].
Finally, to answer your question, when I was in the Clinton administration, we had a lot of, you might say, fights about inequality. Everybody was concerned about inequality, but some were more concerned than others. Some wanted to put it at the top of the agenda, and [others] said, “We should worry about it, but we don’t have the money to deal with it.” It was a question of prioritisation. On one side, Bob Reich, who was the Secretary of Labor, and I were very much concerned about this inequality. We were concerned about corporate welfare: giving benefits to rich corporations meant that we had less money to help those who really needed it. Our war against corporate welfare actually led to huge internal conflicts between us and some of the more corporatist or financial members of the Clinton team.
Anton Korinek [13:53]
That brings us perhaps directly to a more philosophical question. What would you say is the ethical case for [being concerned with] inequality? In particular, why should we care about inequality in itself and not just about absolute levels of income and wealth?
Joseph Stiglitz [14:17]
The latter [question] you can answer more easily from an economic point of view. There is now a considerable body of theory and empirical evidence that societies that are marked by large disparities and large inequalities behave differently and overall perform more poorly than societies with fewer inequalities. Your own work has highlighted the term “macroeconomic externalities,” which [describes when a system’s functioning] is adversely affected by the presence of inequality. An example, for instance, is that when there are a lot of inequalities, those at the bottom engage in “keeping up with the Joneses,” as we say, and that leads them to be more in debt. That higher level of debt introduces a kind of financial fragility to the economy which makes it more prone to economic downturns.
There are a number of other channels through which economic inequality adversely affects macroeconomic performance. The argument can be made that even those at the top can be worse off if there’s too much inequality. I reflected this view in my book, The Price of Inequality, where I said that our society and our economy pay a high price for inequality. This view has moved into the mainstream, which is why the IMF has put concerns about inequality [at] the fore of their agenda. And as Strauss-Kahn, who was the Managing Director of the IMF at the time, said, [inequality] is an issue of concern to the IMF because the IMF is concerned about macroeconomic stability and growth, and the evidence is overwhelming that [inequality] does affect macroeconomic performance and growth.
[There is a] moral issue which economists are perhaps less well-qualified to talk about rigorously. Economists and philosophers have used utilitarian models and equality-preferring social welfare functions. [These models build on] a whole literature of [philosophy], of which Rawls is an example. [Rawls] provides a philosophical basis [for] why, behind the veil of ignorance, you would prefer to be born into a society with greater equality.
Anton Korinek [17:40]
So that means there is both a moral and an economic efficiency reason to engage in measures that mitigate inequality. Now, this brings us to a broader debate: what are the drivers of inequality? Is inequality driven by technology, or by institutions [and] policies, broadly defined? There is a neoclassical caricature of the free market as the natural state of the world. In this caricatured description of the world, everything is driven by technology, and technology may naturally give rise to inequality, and everything we would do [to mitigate inequality] would be bad for economic efficiency. Can you explain the interplay of technology and institutions more broadly and tell us what is wrong with this caricature?
Joseph Stiglitz [18:46]
[To put it in another way:] is inequality the result of the laws of nature, or the laws of man? And I’m very much of the view that [inequality] is a result, overwhelmingly, of the laws of men and our institutions. One way of thinking about this, which I think [provides] compelling evidence for my perspective, is that the laws of nature are universal: globalization and [technological advancement] apply to every country. Yet, in different countries, we see markedly different levels of inequality in market incomes and even more so in after-tax and transfer incomes.
It is clear that countries that should be relatively similar have been shaped in different ways by the laws. What are some of those laws? Well, some of them are pretty obvious. If you have labour laws that undermine the ability of workers to engage in collective bargaining, workers are going to get short-changed; they’re not going to be treated well. You see that in the United States: one of the main [reasons for] the weakening of the share of labour in the United States is, I believe, the weakening of labour laws and the power to unionise.
At the other extreme, more corporate market power [allows companies to raise prices, which] is equivalent to lowering wages, because [people] care about what [they] can purchase. The proceeds of [higher prices] go to those who own the monopolies, who are disproportionately those at the top. During Covid-19 we saw Jeff Bezos do a fantastic job of making billions of dollars while the bottom 40% of Americans suffered a great deal. The laws governing antitrust competition policy are critical.
But actually, a host of other details and institutional arrangements that we sometimes don’t notice [drive inequality]. United States [policy] illustrates that we do things so much worse than other countries. Bankruptcy laws, which deal with what happens if a debtor can’t pay [back] all of the money, give first priority [to banks]. In the United States, the first claimant is the banks, who sell derivatives – those risky products that led to the financial crisis of 2008. On the other hand, if you borrow money to get ahead in life or to finance education, you cannot discharge your debt. So, students are at the bottom, and banks are at the top. So that’s another example [of how laws drive inequality].
Corporate governance laws that give CEOs enormous scope for setting their salaries in any way they want result, in the United States, in CEOs getting 300 times the compensation of average workers. That’s another example [of how laws create inequality].
But there are a whole host of things that we often don’t even think of as institutions, but [they] really are. When we make public investments in infrastructure, do we provide for public transportation systems, which are very important for poor people? When we have public transportation systems, do we connect poor people with jobs? In Washington D.C. they made a deliberate effort not to do that. When we’re running monetary policy, are we focusing on making sure that there’s [as] close to full employment as possible, which increases workers’ bargaining power? Or do we focus on inflation, which might be bad for bondholders?
Monetary policy, in the aftermath of the 2008 crisis, led to unprecedented wealth inequality, but didn’t succeed very well in creating jobs. 91% of the gains that occurred in the first three years of that recovery went to the top 1% in the United States. So, [inequality stems from] an amalgam of an enormous number of decisions.
Now, even when [considering] the issue of technology, we forget that [it is] man-made to a large extent — [it is] not like the laws of quantum mechanics! Technology [itself], and where we direct our attention [within technology], is man-made, and the extent to which we make access to technology available to all is our decision. Whether we steer technology to save the planet, or to save unskilled jobs, we can determine whether we’re going to have a high level of unemployment of low-skilled people or whether we’re going to have a healthier planet. [We witnessed] fantastic success in quickly developing COVID-19 vaccines. But now the big debate is, should those vaccines be available only to rich countries? Or should we waive the intellectual property rights in order to allow poor countries to produce these vaccines? That’s an issue being discussed right now at the WTO. Unfortunately, although a hundred countries want a waiver, the US and a few European countries say “no”. We put the profits of our drug companies over [people’s] lives, not only over [lives] in developing countries, but possibly over the lives of people in our own country. As long as the disease rages [in developing countries], a mutation may come that is vaccine-resistant, and our own lives are at risk. It’s very clear that this is a battle between institutions, and that right now, unfortunately, drug companies are winning.
Anton Korinek [26:04]
It’s a battle of institutions within the realm of a new technology.
If we now turn to another new technology, AI, you hear a lot of concern about AI increasing inequality. What are the potential channels that you see that we should be concerned about? To what extent could AI be different from other new technologies when it comes to [AI’s] impact on inequality?
Joseph Stiglitz [26:36]
AI is often lumped together with other kinds of innovations. People look historically, and they say “Look, innovations are always going to be disturbing, but over the long run, ordinary people gain.” [For example,] the makers of buggy whips lost out when automobiles came along, but the number of new jobs created in auto repair far exceeded the old jobs, and overall, workers were better off. In fact, [automobiles] created the wonderful middle class era of the mid-20th century.
I think this time may be different. There’s every reason to believe that it is different. First, these new technologies are labour-replacing and labour-saving, rather than increasing the productivity of labour. And so [these technologies are] substituting for labour, which drives down wages. There’s no a priori theory that says that an innovation [must] be of one form or the other. Historically, [innovations] were labour-augmenting and labour-enhancing; [historically, innovations] were intelligence-assisting innovations, rather than labour-replacing. But the evidence now is that [new innovations] may be more labour-replacing. Secondly, the new technologies have a winner-take-all characteristic associated with them: [these new technologies] have augmented the potential of monopoly power. Both characteristics mean there will be a less competitive market and greater inequality resulting from this increased market power, and almost everybody may lose.
In the case of developing countries, the problems are even more severe for two reasons. The first [reason] is that the strategy that has worked so well to close the gap between developing and developed countries, which was manufacturing export-led growth, may be coming to an end. Globally, employment in manufacturing is declining. Even if all the jobs in manufacturing shifted, say, from China to Africa, [this shift] would [hardly] increase the labour force in Africa. I and some others have been trying to understand: why was manufacturing export-led growth so successful? And what [strategies] can African countries [employ] today if [manufacturing export-led growth] doesn’t work? Are there other strategies that will [be effective]? The conclusion is that there are other things that work, but they’re going to be much more difficult [to implement]. And there won’t likely be the kind of success that East Asia had beginning 50 years ago.
The second point [concerns] inequalities that occur within our country [as a result of AI]. [For example,] when Jeff Bezos [becomes] richer or Bill Gates [becomes] richer, we always have the potential to tax these gainers and redistribute some of their gains to the losers. The result, [which] you and I wrote about in one of our papers, shows that in a wide class of cases we can make sure that everybody could be better off [via redistributive taxation]. While [implementation is] a matter of politics, at least in principle, everybody could be made better off. However, [AI] innovations across countries [drive down] the value of unskilled labour and certain natural resources, which are the main assets of many developing countries. [Therefore, developing countries are] going to be worse off. Our international arrangements for redistribution are [very limited]. In fact, our trade agreements, our tax provisions, and our international [arrangements] work to the disadvantage of developing countries. We don’t have the instruments to engage in redistribution, and the current instruments actually disfavour developing countries.
Anton Korinek [32:44]
Let me turn to a longer-term question now. Many technologists predict that AI will have the potential to be really transformative if it reaches the ability to perform substantially everything that human workers can do. This [degree of capacity] is sometimes labelled as “transformative AI,” though people have also [described] closely-related concepts like Artificial General Intelligence and human-level machine intelligence. There are quite a few AI experts who predict that such transformative advances in AI may happen within the next few decades. This could lead to a revolution of similar or greater magnitude than the agrarian or industrial revolutions, which could make all human labour redundant. This would make human labour, in economic speak, a “dominated technology.”
[When we consider inequality,] the dilemma is that in our present world labour is the main source of income. Are you willing to speculate, as a social scientist, and not as a technologist, [about] the likelihood and timeframe of transformative AI happening? What do you see as the main reasons why it may not be happening soon? [Alternatively,] what would be the main arguments in favour of transformative AI happening soon? And how should we think about the potential impacts of transformative AI, from your perspective?
Joseph Stiglitz [34:36]
There is a famous quip by Yogi Berra, who is viewed as one of the great thinkers in America. I’m not sure everybody in the UK knows about him. He was a famous baseball player who had simple perspectives on life and one of them was “forecasting is really difficult, especially about the future.”
The point is that we don’t know. But we certainly could contemplate this happening, and we ought to think about that possibility. So as social scientists, we ought to be thinking about all the possible contingencies, but obviously devote more of our work to those [scenarios] that are going to be most stressful for our society. Now, you don’t think that people should train to be a doctor to deal just with colds. You want your doctor to be able to respond to serious maladies. I don’t want to call [transformative AI] a malady – it could be a great thing. But it would certainly be a transformative moment that would put very large stresses on our economic, social [and] political system.
The important point is that […] these advances in technologies make our society as a whole wealthier. These [advances] move out what we could do, and in principle, everyone could be made better off. So the question is: can we undertake the social, economic, [and] political arrangements to ensure that everyone, or at least a vast majority, will be made better off [by advances in AI]? When we engage in this sort of speculative reasoning, one could also imagine [a world in which] a few people [are] controlling these technologies, and that our society [may be] entering into a new era of unprecedented inequality – with a few people having all the wealth, and everybody else just struggling to get along and [effectively] becoming serfs. This would be a new kind of serfdom, a 21st- or 22nd-century serfdom that is different from the serfdom of the 12th and 13th centuries. For the vast majority [of people, this serfdom would not be] a good thing.
Anton Korinek [37:59]
For the sake of argument, let’s take it as a given that this type of transformative AI will arrive by, say, 2100. What would you expect to be the effects of [transformative AI] on economic growth, on the labour share, and in particular, on inequality? What would be the [impact] on inequality in non-pecuniary, non-monetary terms?
Joseph Stiglitz [38:36]
The effect [of transformative AI] on inequality, income, wealth, and monetary aspects will depend critically on the institutions that we described earlier in two key [ways]. If we move beyond hoarding knowledge via patents and other means, and gain wide[spread] and meaningful access to intellectual property, then competition can lower prices and the benefits of [transformative AI] can be widely shared.
This was what we experienced in the 19th and 20th century [during the Industrial Revolution]. Eventually, when competition got ideas out into the marketplace, profits eroded. While the earlier years of the [Industrial] Revolution were not great for ordinary workers, eventually, [ordinary workers] did benefit and competition [served to ensure] that the benefit of the technological advances were widely shared. There is a concern about whether our legal and institutional framework can ensure that that will happen with artificial intelligence. That’s one aspect of our institutional structure.
Even if we fail to do the right thing in that area, we have another set of instruments, which are redistributive taxes. We could tax multibillionaires like Jeff Bezos or Bill Gates. From the point of view of incentives, most economists would agree that if multibillionaires were rewarded with 16 billion dollars, rather than 160 billion [dollars], they would probably still work hard. They probably wouldn’t say “I’m going to take my marbles and not play with you anymore.” They are creative people who want to be at the top, but you can be at the top with 16 [billion dollars], rather than 160 billion [dollars]. You take that [extra tax revenue] and use it [for] more shared prosperity. Then, obviously, the nature of our society would be markedly different.
If we think more broadly, right now, President Biden is talking a lot about the “caring economy.” Jobs are being created in education, health, care for the aged, [and] care for the sick. Wages in those jobs are relatively low, because of the legacy of discrimination against women and people of colour who have [worked] in these areas. Our society has been willing to take advantage of that history of discrimination and pay [these workers] low wages. Now, we might say, why do that? Why not let the wages reflect our value of how important it is to care for these parts of our society? [We can] tax the very top, and use that [tax revenue] to create new jobs that are decently paid, [which would create] a very different outcome [for the economy]. I think, optimistically, this new era could create shared prosperity. There would still be some inequality, but not the nightmare scenario of the new serfdom that I talked about before.
Anton Korinek [42:49]
Let’s turn to economic policy. You have already foreshadowed a number of interesting points on this theme. But let’s talk about economic policy to combat inequality more generally. People often refer to redistribution and pre-distribution as the main categories of economic policy to combat inequality. Can you explain what these two [policy categories] mean? What are the main instruments of redistribution and of pre-distribution? And how do [these policies] relate to our discussion on inequality?
Joseph Stiglitz [43:37]
Pre-distribution [looks at] the factors that determine the distribution of market income. If we create a more equal distribution of market income, then we have less burden on redistribution to create a fair society. There are two factors that go into the market distribution of income. [The first factor] is the distribution, or the ownership, of assets. [The second factor] is how much you pay each of those assets. For instance, if you have a lot of market power, and weak labour power, you [end] up with capital getting a high return relative to workers and [high] monopoly profits relative to workers’ [incomes] — that’s an example of the exercise of market power leading to greater inequality. The progressive agenda in the United States emphasises increasing the power of unions and curbing the power of big tech giants to create factor prices that are conducive to more market equality.
We can [also consider] the ownership of two types of assets: human capital and financial capital. The general issue here is: how do we prevent the intergenerational transmission of advantage and disadvantage? Throughout the ages, there have always been parents who want to help their children, which is not an issue. [Rather, the issue is] the magnitude of that [helping]. In the United States, for instance, we have an education system which is locally-based. We have more and more economic segregation, which means that rich people live with rich [people] and poor [people] with poor [people.] If schools in [rich] neighbourhoods give kids a really good education and conversely [in poor neighbourhoods, then even] public education perpetuates inequality.
The most important [instruments limiting] the intergenerational transmission of financial wealth are inheritance taxes and capital taxation. Under Trump, [Congress] eviscerated the inheritance taxes. So [now] the question is how to [reinstate these taxes] to create a more equal market distribution — [what is] called pre-distribution.
Anton Korinek [47:34]
[You began to address] taxation in the context of estate taxation. For the non-economists in the room, I should emphasise that among the many contributions that Joe has made to economics is a 1980 textbook with Tony Atkinson, frequently referred to as the “Bible of Public Finance,” which lays out the basic theory of taxation and still underlies basically all theoretical economic work on taxes.
In recent decades, the main focus of this debate has been on taxing labour versus capital. A lot of economists argue that we should not tax capital, because it’s self-defeating: [taxation of capital] will just discourage the accumulation of capital and ultimately hurt workers. My question to you is: do you agree? [If not,] what is wrong with this standard argument?
Joseph Stiglitz [48:43]
It is an argument that one has to take seriously: a tax on capital could lead to less capital accumulation, which would in turn lead to lower wages, and even if the proceeds of the tax were redistributed to workers, workers could be worse off. You can write down theoretical models in which that happens. The problem is that this is not the world we live in. In fact, [there are] other instruments at [our] disposal. For instance, as the government taxes [private] capital, [the government] can invest in public capital, education, and infrastructure. [These investments lead to an increase in] wages. Workers can be doubly benefited: not only [do workers benefit] from direct redistribution, but [they also benefit from a greater] equality of market income caused by allocating capital to education and infrastructure.
Many earlier theories were predicated on the assumption that we were able to tax away all rents and all pure profit. We know that’s not true: the corporate profit tax rate is now 21% in the United States, and the amount of wealth that the people at the top are accumulating [provides evidence that] we are not taxing away all pure profits. Taxing away [these pure profits] would not lead to less capital accumulation, [but instead] could lead to more capital accumulation.
[Let’s] look broadly at the nature of capitalism in the late 20th and early 21st century. We used to talk about the financial sector intermediating, which meant [connecting] households and firms by bringing [households’] savings into corporations. [This process] helped savings and helped capital accumulation. [However,] the evidence is that over the last 30 or 40 years, the financial sector has been disintermediating. The financial sector, [rather than] investing monopoly profits, has been redistributing [these profits] to the very wealthy, [to facilitate] the wealthy’s consumption or increase the value of their assets, [including their international assets], and their land. [Ultimately], this simple model [of financial intermediation] doesn’t describe [late] 20th and [early] 21st century capitalism.
Anton Korinek [52:17]
Should we think of AI as [the same kind of] capital described in theories of capital taxation in economics, or is AI somehow inherently different? Should we impose what Bill Gates calls a “robot tax” [on AI]?
Joseph Stiglitz [52:36]
That’s a really good question. [If we had had more time, I would have] distinguished between intangible capital, [such as] R&D, and [tangible capital, like] buildings and equipment. 21st century capital is mostly intangible capital, which is the result of investment in R&D. [Intangible capital] is more productive in many ways than buildings, and so in that sense it is real capital, and is [well-described by the word] “intangible.” [Intangible capital is also the] result of investment: people make decisions to hire workers to think about [certain] issues, or individuals decide themselves to think about these issues, when [employers or individuals otherwise] could have done something else. [In this way, intangible capital] is capital: it requires resources, which could have been put to other uses, [and these alternative uses are foregone] for future-oriented returns.
The question is: is this [intangible] capital getting excess returns? Are there social consequences of those investments, that [the investors] don’t take into account? We call [these social consequences] externalities. People who invest in coal-fired power plants may make a lot of money, but [their investment] destroys the planet. If we don’t tax carbon, then society — rather than the investor — bears these costs. Gates’s robot tax is based on the same [concept]. If we replace workers, and [these workers] go on the unemployment roll, then we as a society bear the cost of [these workers’] unemployment. [Gates argues that] we ought to think about those costs, [though] how we balance the tax and appropriate its excess returns is another matter. Clearly, [the robot tax] is an example of steering innovation. You and I, [in our research,] have [also argued that we must] steer innovation to save the planet [rather than] create more unemployment.
Anton Korinek [55:32]
How would you recommend that we should reform our present system of taxation to be ready for not only [our present time in the] 21st century but also for a future in which human labour plays less of a role? How should we tax to make sure that we can still support an equitable society?
Joseph Stiglitz [56:02]
Let me first emphasise that not just taxation, but also investment, is important. [Much of the economy’s direction is determined by] the basic research decisions of the National Science Foundation and science foundations in other countries. [These decisions inform which] technologies are accessible to those in the private sector. Monetary policy [is also important]. We don’t think the central bank [affects] innovation, but it actually does. [At a] zero interest rate, the cost of capital is going to be low relative to the cost of labour, [which will] encourage investors to think about saving labour rather than saving capital. So monetary policy is partly to blame for distortions in the direction of innovation. The most important thing is to be sensitive to how every aspect of policy, including tax policy, shapes our innovative efforts and [directs where we] devote our research. Are we devoting our research to saving unskilled labour or to augmenting the power of labour? We talked before about intelligence-assisting innovations like microscopes and telescopes which make us more productive as human beings. We can replace labour, or we can make labour more productive. [While this distinction can be] hard to specify, it’s very clear that we have tools to think about these various forms of innovation.
Anton Korinek [58:16]
On the expenditure side, one policy solution that a lot of technologists are big fans of is a universal basic income. What is your perspective on a UBI: do you advocate it or do you believe there are other types of expenditure policy that are more desirable? Do you think [UBI] may be a good solution if we arrive at a far-future – or perhaps near-future – [scenario] in which labour is displaced?
Joseph Stiglitz [58:53]
I am quite against the UBI [being implemented] in the next 30 or 40 years. The reason is very simple: for the next 30 years, the major challenge of our society is the Green Transition, which will take a lot of resources and a lot of labour. Some people ask if we can afford it, and [I argue that] if we redirect our resources, labour, and capital [toward the Green Transition] then we can afford it. Ben Bernanke [describes] a surplus of capital and a savings glut. However, if [we look] at the challenges facing the world, [we understand Bernanke’s assertion] is nonsense. Our financial system isn’t [developing] the [solutions] our society needs [like] the Green Transition.
I also see deficiencies in infrastructure and in education in so many parts of the world. I see a huge need for investments over the next 30 to 40 years such that everybody who wants a job will be fully employed. It is our responsibility [to ensure] that everybody who wants a job should be able to get one. We must have policies to make sure that [workers] are decently paid. This should be our objective now.
[If] in the far-future [we don’t need] labour, we have the infrastructure that we need, we’ve made the Green Transition, and we have wonderful robots that produce other robots and all of the goods, food, and services that we need, then we will have to consider the UBI. We would [then] be engaged in a discussion of what makes life meaningful. While work has been part of that story of meaningfulness, there are ways of serving other people that don’t have to be monetised and can be very meaningful. While I’m willing to speculate about [this scenario,] it’s a long way off, and [is] well after my time here on this earth.
Anton Korinek [1:01:46]
Would you be willing to revise your timelines if progress in AI occurs faster than what we are currently anticipating?
Joseph Stiglitz [1:01:59]
I cannot see [a scenario where we] have excess labour and capital [over] the next 30 or 40 years, even if [AI] proceeds very rapidly, given the needs that we have in public investment and the Green Transition. We could have miracles, but if that happens, we would face the emergency of this unintended manna from heaven, and we would step up to it.
Anton Korinek [1:02:51]
We are already [nearing] the end of our time. Let me ask you one more question, and then I would like to bring in a few questions posed by the audience. My question is: what are the other dimensions of AI that matter for inequality, independent of purely economic [considerations]? What is your perspective [on these dimensions of inequality] and how we can combat them?
We’ve talked about meaning in life and meaningful work. If AI takes away work, we will have to find meaning in other places. In the shorter term, AI will take away routine jobs, which will mean that we as a society will be able to devote more labour to non-routine jobs. This should open up possibilities [for people to be] more creative. Many people [have] thought the flourishing of our society is based on creativity. It would be great for our society if we could devote more of our talents to doing non-routine, creative things.
The audience had a question about workplace surveillance, which is one element of [AI] that could potentially greatly reduce the well-being of workers. What are your thoughts on [workplace surveillance]?
Joseph Stiglitz [1:05:06]
I agree [that AI could reduce the well-being of workers]. There are many [adverse effects of AI] we haven’t talked about. We are in an early stage [of AI policy], and our inadequate regulation allows for a whole set of societal harms from AI. Surveillance is one [example of these harms]. Economists talk about corporations’ ability to acquire information in order to appropriate consumer surplus for themselves, or in other words, to engage in discriminatory pricing. Anybody who wants to buy an airline ticket knows what I’m talking about: firms are able to judge whether you really want to [fly] or not. Companies are using AI now to charge different prices for different people by judging how much [each consumer] wants a good. The basis for market efficiency is that everybody faces the same price. In a new world, where Amazon — or the internet — uses AI, everybody [faces] a different price. This discrimination is very invidious: it has a racial, gender, and vocational component.
Information targeting has other adverse [implications], like manipulation. [AI] can sense if somebody has a predilection to be a gambler and can encourage those worst attributes by getting [the person] to gamble. [AI] can target misinformation at somebody who is more likely to be anti-vax and give [them] the information to reinforce that [belief]. [AI] has already been used for political manipulation, and political manipulation is really important because [it impacts] institutions. The institutions — the rules of the game — are set by a political process, so if you can manipulate that political process, you can manipulate our whole economic system. In the absence of guardrails, good rules, and regulations, AI can be extraordinarily dangerous for our society.
Anton Korinek [1:08:25]
That relates closely to another question from the audience: do you think there is a self-correcting force within democracy against high inequality and in particular against the inequality that AI may lead to?
Joseph Stiglitz [1:08:47]
I wish I felt convinced that there were a self-correcting force. [Instead], I see a force that [works] in the [opposite] direction. This [perception] may be [informed] by my experience as an American: [in the US], a high level of inequality [causes] distortions and [gives] money a role in the political system. This has changed the rules in the political and economic system. Money’s [increasing] power in both the political system and the economic system has reinforced the creation of that kind of plutocracy that I talked about [earlier].
[The changes] we’ve seen in the last few years in the United States are shocking, but in some ways are what I predicted in my 2012 book The Price of Inequality. The Republican Party has openly said, “We don’t believe in democracy. We want to suppress voters and their right to vote. [We want to] make it more difficult for them to vote.” [They’ve said this without] any evidence of voter fraud. It’s almost blatant voter suppression. In some sense, this [scenario] is what Nancy MacLean [described] in her book Democracy in Chains, though it has come faster [than she predicted].
I’ve become concerned that what many had hoped would be a self-correcting mechanism isn’t working. We hope we are at a moment when we can turn back the tide. As more and more Americans see the extremes of inequality, they will turn to vote before it’s too late, before they lose the right to vote. This will be a watershed moment in which we will go in a different direction. I feel we’re at the precipice, and while I’m willing to bet that we’re going to go the right way, I would give [this path] just over 50% odds.
Anton Korinek [1:11:37]
I think that, fortunately, Joe and all his work on the topic are part of the self-correcting force.
The top question in terms of Q&A box votes is whether AI will be a driver for long run convergence or divergence in global inequalities. Do you believe that current laggards, or poor countries, will be able to catch up with the front runners more easily or less easily [because of AI]?
Joseph Stiglitz [1:12:12]
I’m afraid that we may be at the end of the era of convergence that we saw over the last 50 years. There was widespread convergence in China and India, and though some countries in Africa did not converge, we broadly saw a convergence [occurring]. I think [that now] there is a great risk of divergence: AI is going to decrease the value of unskilled labour and many natural resources, which are the main assets of poor countries. There will be [complexity]: oil countries will find that oil is not worth as much if we make the Green Transition. A few countries like Bolivia, that have large deposits of lithium, are going to be better off, but that will be more the exception than the rule. Access to [AI] technology may be more restricted. A larger fraction of the research is [occurring] inside corporations. The model of innovation [used to be] that universities were at the centre, and [innovators received a] patent with a disclosure, which means that the information was public and others built on that [information]. However, AI [innovation] so far has been within companies that have hoarded information. [Companies can’t protect all information]: one non-obvious [path forward] is that [members of the public] could still access the underlying mathematical theorems that are in the public domain. While that’s an open possibility, I [still] worry that we will be seeing an era of divergence.
Anton Korinek [1:14:41]
Thank you so much, Joe, for sharing your thoughts on AI and inequality with us. We are almost at time for our event. I am wondering if I may ask you a parting question that comes in two parts. What would be your message to, on the one hand, young AI engineers and, on the other hand, young social scientists and economists, who are beginning their careers and who are interested in contributing to making the world a better and more equitable place?
Joseph Stiglitz [1:15:30]
Engineers are working for companies, and a company consists of people. Talented people are the most important factors of production in these companies. In the end, the voice of these workers is very important. We [must] conduct ourselves in ways that mitigate the extent to which we contribute to increases in inequality. There are many people, understandably, within Facebook and other tech giants who are using all their talents to increase the profits of, say, Facebook, regardless of the social consequences and regardless of whether it results in a genocide in Myanmar. These things do not just happen, but rather are a result of the decisions that people make.
To give another example, I often go to conferences out in Silicon Valley. When we discuss these issues, they say, “there is no way we can determine if our algorithms engage in discrimination.” [However], the evidence overwhelmingly is that we can. While the algorithms are always changing, taking in new information, and evolving, at any moment in time we can assess precisely whether [algorithms] are engaging in discrimination. Now, there are groups that are trying — at great cost — to see who is getting [certain] ads. You can create sampling spaces to see how [ads] are working.
I think it is nihilistic to say [that gauging discrimination is] beyond our ability and that we have created a monster out of our control. These companies’ workers need to take a sense of responsibility, because the companies’ actions are a consequence of their workers’ actions. When working for these companies, one has to take a moral position and a responsibility for what the companies do. One can’t just say, “Oh, that’s other people that are doing this.” One has to take some responsibility.
For social scientists, I think this is a very exciting time because AI and new technologies are changing our society. They may even be changing who we are as individuals. There is a lot of discussion about what [new technologies] are doing to attention span and how we spend our time. [These technologies] have profound effects on the way that individuals interact with each other.
Of course, social science is about society and how we interact with each other. [It is about] how we act as individuals. [It is about] market power [and] how we curb that market power. The basic business model of many tech giants [relies on] information about individuals. Policy [determines] what we allow those corporations to do with our [personal] information and whether [these corporations] can store [our information] and use it for other purposes. It is clear that AI has opened up a whole new set of policy issues that we had not even begun to think about 20 years ago. My Nobel Prize was in the economics of information, but when I did my work, I had not thought about the issue of disinformation and misinformation. [At the time], we thought we had laws dealing with [misinformation], which [are called] fraud laws and libel laws. We put [misinformation] aside because we thought it was not a problem. Today, [misinformation] is a problem. I mention that because we are going to have to deal with a whole new set of problems that AI is presenting to our society.
Anton Korinek [1:21:46]
Thank you, Joe. Thank you for this really inspiring call to action. Let me invite everybody to give a round of virtual applause. Have a good rest of the day.