Stephanie Bell and Katya Klinova on Redesigning AI for Shared Prosperity
AI poses a risk of automating and degrading jobs around the world, with harmful effects on vulnerable workers’ livelihoods and well-being. How can we deliberately account for the impacts on workers when designing and commercializing AI products in order to benefit workers’ prospects while simultaneously boosting companies’ bottom lines and increasing overall productivity? The Partnership on AI’s recently released report Redesigning AI for Shared Prosperity: An Agenda puts forward a proposal for such accounting. The Agenda outlines a blueprint for how industry and government can contribute to AI that advances shared prosperity.
Stephanie Bell is a Research Fellow at the Partnership on AI affiliated with the AI and Shared Prosperity Initiative. Her work focuses on how workers and companies can collaboratively design and develop AI products that create equitable growth and high-quality jobs. She holds a DPhil in Politics and an MPhil in Development Studies from the University of Oxford, where her ethnographic research examined how people can combine expertise developed in their everyday lives with specialized knowledge to better advocate for their needs and well-being.
Katya Klinova is the Head of AI, Labor, and the Economy Programs at the Partnership on AI. In this role, she oversees the AI and Shared Prosperity Initiative and other workstreams which focus on the mechanisms for steering AI progress towards greater equality of opportunity and improving working conditions along the AI supply chain. She holds an M.Sc. in Data Science from the University of Reading and an MPA in International Development from Harvard University, where her work examined the potential impact of AI advancement on the economic growth prospects of low- and middle-income countries.
Robert Seamans is an Associate Professor at New York University’s Stern School of Business. His research focuses on how firms use technology in their strategic interactions with each other, as well as on the economic consequences of AI, robotics, and other advanced technologies. His research has been published in leading academic journals and has been cited in numerous outlets, including The Atlantic, Forbes, Harvard Business Review, The New York Times, The Wall Street Journal, and others. During 2015-2016, Professor Seamans was a Senior Economist for technology and innovation on President Obama’s Council of Economic Advisers.
You can watch a recording of the event here or read the transcript below:
Anton Korinek 00:12
Welcome. I’m Anton Korinek. I’m a professor of economics at the University of Virginia and a research fellow at the Centre for the Governance of AI, which is organizing this event.
Today’s topic is redesigning AI for shared prosperity. We have three distinguished speakers: Katya Klinova and Stephanie Bell, who will be giving the main presentation, followed by discussion by Rob Seamans.
Let me first introduce Katya and Stephanie, and I will introduce Rob right before his discussion. Stephanie Bell is a research fellow at the Partnership on AI, affiliated with the AI and Shared Prosperity Initiative. Her work focuses on how workers and companies can collaboratively design and develop AI products that create equitable growth and high-quality jobs. Stephanie holds a DPhil in politics and an MPhil in development studies from Oxford where [she conducted] ethnographic research which examined how people can combine expertise developed in their everyday lives with specialized knowledge to better advocate for their needs and well-being.
Katya Klinova is the Head of the AI, Labour, and the Economy program at the Partnership on AI. In this role, she oversees the AI and Shared Prosperity Initiative—which has developed the report that the two will be presenting today—and other work streams, which focus on mechanisms for steering AI progress towards greater equality of opportunity and towards improving the working conditions along the AI supply chain. She holds a Master of Science in data science from the University of Reading and an MPA in international development from Harvard where her work examined the potential impact of AI on economic growth for low-income and middle-income countries.
The concern underlying today’s webinar is that AI poses a risk of automating and degrading jobs all around the world, which would create harmful effects for vulnerable workers’ livelihoods and well-being. The question is: how can we deliberately account for the impact on workers when designing and commercializing AI? [How can we ensure] AI benefits workers’ prospects while also boosting companies’ bottom lines and increasing overall productivity in the economy? With this short introduction, let me hand the mic over to Stephanie and Katya.
Katya Klinova 03:15
Anton, thank you so much for hosting us. Thank you to the Centre for the Governance of AI—it is an absolute pleasure to be here. Thanks to everyone who is joining us for this hour to talk about redesigning AI for shared prosperity.
As Anton said, the work that we’re presenting today is part of the AI and Shared Prosperity Initiative. Recently, we released the Initiative’s agenda, which is our plan for research and action. You can download this agenda at partnershiponai.org/shared-prosperity. The [agenda] is not authored by Stephanie and me alone. [Rather], it is the result of a multi-stakeholder collaboration by the steering committee, which consists of distinguished thinkers from academia, from industry, from civil society, and from human rights organizations. It was also supported by a research group that included someone very dear to us who is now at the Future of Humanity Institute—Avital Balwit. We want to say thank you to Avital and to everyone who supported this work.
The goal of the AI and Shared Prosperity Initiative is to make sure that AI adoption advances an abundance of good jobs, not just for a select profile of workers, but for workers who have different skills and different demographics all around the world. To advance that goal we decided, under the guidance of the steering committee, on the method of introducing shared prosperity targets. [These shared prosperity targets] are measurable commitments by the AI industry to expand the number of good jobs in the broader economy. These targets can be adopted voluntarily, or they can be adopted with regulatory encouragement.
The agenda that we’re [discussing] today is our plan for developing these targets and thinking through their accompanying questions. The agenda is structured in two parts, which will be broadly mirrored by our talk today. We begin by [describing] the background against which AI advancement is happening today. [Next,] we introduce the proposal for shared prosperity targets. Then, we analyse the critical stakeholders and their interests and concerns when it comes to adopting, encouraging, or even opposing the shared prosperity targets. Our presentation [will follow this format] today: we’ll briefly discuss why [setting] targets [can] expand access to good jobs, the structure of the shared prosperity targets proposal [itself], and the key stakeholders, their interests, and the constraints that they’re facing.
Let’s begin by discussing the motivation for this work. Many of you have seen this graph before. It has certainly been shown in this very seminar before by David Autor. [This graph] shows the polarization of wages, which is especially pronounced for men in the US, though women also very much experience [this polarization]. This graph [reveals] that despite the economy growing almost threefold in real terms since the sixties, not everyone has [benefitted from] that growth. You can see that people with graduate degrees have experienced [large] wage growth, while other skill and educational attainment groups did not. [In fact], wages stagnated or even declined in real terms, which is quite staggering. [This pattern] definitely cannot be called “shared prosperity.” The risk and the worry are that AI will exacerbate or continue this wage polarization, [because] AI can be a skill-biased technology, [meaning it is] biased in favour of people with higher levels of educational attainment.
A [much-discussed] and very important solution is upskilling, or re-skilling, which we should [certainly] invest in. [This requires] educating people to help them navigate changing skill demands in the labour market. Nobody will ever argue against [improving] education [quantity and quality]. However, we need to be aware that if the overall demand for human labour goes down in the long term, upskilling itself will not fix the [core] issue: the scarcity of good jobs. [No matter] how much we retrain people, if there’s a declining number of good jobs in the economy, [retraining] will always be a losing battle.
The graphs you’re looking at are from a paper by Acemoglu and Restrepo, which shows that automation has picked up in the last 30 years, [a departure from the historical trend]. [Historically,] automation existed: automation is not something that only AI introduced into our economy. [But] automation was displacing humans from tasks [at roughly the same rate as] new tasks [were created]. [That balance has broken down over the last 30 years,] and the risk is that AI will continue this trend [of rapid displacement]. The last [concern] that I want to mention is that the impacts [of automation] tend to be global. There are no mechanisms for global redistribution of [automation’s] gains, which tend to be concentrated in a few countries’ firms and accrue to just a handful of individuals.
There is a memorable anecdote that I want to share with you. You’re looking at a picture of self-order kiosks introduced in fast food restaurants. Once the investment [in these kiosks] had been made in California and the rest of the United States, the cost of deploying [this technology] everywhere around the world became so low that no matter how low the wages were in some of the low-income and middle-income countries, workers [simply] couldn’t compete with the technology. The picture you’re looking at was taken in South Africa. Even before COVID, the unemployment [rate] in [South Africa] was 29%; it was not the time to be eliminating formal sector jobs. [However,] the march of technology knows no [limits].
[Given AI’s] global impact and [our current inability to] redistribute AI’s gains globally—either through taxation or other transfers—we need to think ahead and [consider how we can ensure] AI supports a globally inclusive economic future. One [frequent] recommendation, [supported] by a growing [body] of literature, is that AI [should] complement, rather than replace, human labour. While this sounds intuitive, in practice it can be difficult to differentiate between technology that complements labour and [technology] that replaces [labour]. [Our concept of] shared prosperity targets addresses exactly [this question]: how do you differentiate between labour-complementing and labour-displacing technology?
What makes this differentiation hard? In economic terms, the definition assumes that you know the [ex post] outcomes of technological advancements. A technology is called labour saving if it reduces overall labour demand in the economy, and [a technology] is called labour using if it increases overall labour demand in the economy. [However,] it’s very difficult to know [in advance how a technology will impact] labour demand. Early research in [a technology] can be used [in] many different applications down the road. Some of those [applications] can be labour saving and some can be labour using.
Deployment contexts very much matter. The same application [of technology] can be used for different purposes: in the workplace, something can be used to augment people’s productivity or to surveil them. [While the underlying technology is the same,] how [it] is used depends on the values and the orientation of the employer actually introducing [the technology] into the workplace. It’s also difficult to map the micro[economic] impacts of a given technology to macro[economic] trends in the economy, because the economy is a dynamic system with [many] interacting parts. It is very difficult to predict ex ante [how a technology investment will] impact that dynamic system, just as it is difficult to predict how a business or technology investment will impact the climate, because the climate is also a very complex, dynamic system. And yet, people came up with the idea of tracking carbon emissions as a proxy for impact on global warming. The shared prosperity targets are inspired by carbon emission targets in their pursuit of appropriate proxies that would, despite all of these constraints and sources of uncertainty, introduce a good enough measure to [determine] whether a product or a system is likely to be labour displacing or labour complementing down the line.
I want to spend some time unpacking the connection between the micro[economic] impact of introducing technology in the given workplace and the macro[economic] consequences in the rest of the economy. Of course, there are direct consequences [of introducing technology]: there are people who [a firm] might be firing or hiring directly as [they] introduce technology in the workplace because now [the firm] needs fewer people of certain skill groups and more people of other skill groups. This is very intuitive.
We [also] want to make sure we’re not missing [technology’s] broader impacts, which can [occur] up or down the value chain. [For example,] after [a firm] introduces a new technology, [they] might require a different volume of interim inputs from [their] suppliers. [These suppliers] in turn might hire or fire workers to expand or reduce their workforce. [These are all examples of] indirect effects.
If introducing a new technology into the production process improves goods’ and services’ quality or lowers their prices, then some of the gains [of technology] are passed along to the consumers. The consumers are now, in real terms, richer: they can spend their free income on something else in the economy, which may create new jobs. We want to keep these indirect impacts in mind when we’re talking about the impact of technology.
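To make this accounting concrete, here is a deliberately toy sketch in Python. It is illustrative only: the Agenda does not prescribe this formula, and the function name and all numbers below are hypothetical.

```python
# Toy illustration of the channels described above: the net labour-demand
# effect of deploying a technology is the sum of the direct workplace
# effect, the value-chain (supplier) effect, and the jobs supported by
# consumers respending their gains. All values are hypothetical.

def net_job_impact(direct: float, value_chain: float, respending: float) -> float:
    """Sum the three channels (job counts, positive or negative)."""
    return direct + value_chain + respending

# Hypothetical deployment: 100 roles automated at the firm, suppliers add
# 20 roles to produce new inputs, and lower prices free up consumer income
# that supports an estimated 30 jobs elsewhere in the economy.
print(net_job_impact(direct=-100, value_chain=20, respending=30))  # -50.0
```

The point of the decomposition is that the sign of the direct effect alone does not determine the economy-wide effect.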
Finally, [changes in] labour demand [affect] not [only] the [size of] the workforce—whether it is expanded or downsized—but also the quality of jobs and their level of compensation. Under lower demand for labour, jobs can become more precarious or [worse] paid, even if the total size of the workforce does not change. The ambiguity that I [described] between labour-displacing and labour-complementing technology gets even more complicated when people start describing their technology as “labour augmenting.” As of today, anybody can claim the title of “worker augmenting,” whether the technology grows the productivity of workers and makes them more valuable to the labour market or squeezes [every] last bit of productivity from [workers] using exploitative measures like movement tracking and not allowing [workers] to take an extra break. [The distinction] can be [extremely] blurry.
Shared prosperity targets would allow [genuine] producers of worker-augmenting technology to credibly differentiate themselves: if [producers] adopt ways to measure their impact on the availability of good jobs in the economy, then they would have receipts to show for calling themselves worker augmenting [rather than] worker exploiting. Shared prosperity targets are a proposal for firm-level commitments to produce labour-friendly AI while also keeping broader economic effects in mind.
There are three components that we want to track with shared prosperity targets: job availability, job compensation, and job quality, i.e. worker well-being, along with the distribution of each. [Job distribution describes to whom] new, good jobs are available and for whom [good jobs] are getting scarcer. These groups can be split by skill, by geographic location, or by demographic factors. [Now, I’ll] turn it over to Stephanie to talk about incorporating workers’ voices into designing shared prosperity targets.
Stephanie Bell 20:21
Great, thanks so much Katya. Thank you as well to Anton and to FHI for having us here today to talk about the Shared Prosperity Agenda. We’re really excited to be in this conversation with you all.
In thinking about the next phase of applied research with workers, [we’re considering] the [most important] areas in the context of the shared prosperity targets—which Katya just mentioned—to ensure we’re taking into account workers’ priorities and needs. There’s been substantial research into what constitutes a good job, or “decent work,” in the words of the ILO. Much more recently, there’s been research into the impact of artificial intelligence on different aspects of worker well-being and worker power.
Setting [shared prosperity] targets [requires] finding sufficient depth to address [workers’] real needs while also creating targets that are clear and straightforward enough for companies to implement. [A framework] that covers all [worker concerns] is going to be less useful than one that is focused on workers’ high-priority needs. Our goals include identifying how to track job quality within the shared prosperity targets, as well as identifying possible mechanisms, and required conditions, for workers themselves to participate in AI design and AI development. [Workers have] largely been left out of this process. [We aim to] identify places for workers to participate directly in this process and to identify how technologies can not just avoid harming workers, but [actively improve] workers’ ability to do their jobs and boost their workplace satisfaction. This would be a tremendous advance in the trajectory of these technologies.
Our approach [relies on] qualitative field research at different field sites around the world. Given the context of COVID, this is largely going to take place digitally, using [approaches] like diary studies, contextual observation, and semi-structured interviews to learn what workers have observed about the implementation of AI in their workplace as well as any [insights they have which will allow us to develop the] job quality component of the shared prosperity targets. Some might question the necessity of actively incorporating workers and [ensuring] we talk to a variety of people in different industries, occupations, and geographies. The rationale is that workers, regardless of their wage, formal skills, training, or credentialing, are experts in their own roles. [Workers] know the most about what it takes to get their jobs done: what the tasks are and how they experience their working conditions. By going directly to workers, we have an opportunity to understand their needs and make sure that we’re addressing their well-being and their power within workplaces, [rather than relying on] managers or company leaders as proxies and potentially misidentifying workers’ interests or missing the nuances of [workers’ experiences]. Finally, [incorporating workers is critical to our] process’ integrity. The entire point of this initiative [is to address workers’ needs]. Leaving [workers] out of the conversation would surely be malfeasance on our part, if we’re trying to make sure that we’re creating a set of targets that really does meet the needs of people whose voices are often left out of these conversations. We frequently have and [witness] conversations about the future of work that never [address] the future of workers, [and we’re trying to remedy this problem through our] work.
Stephanie Bell 24:55
[Let’s transition to the] section of the agenda focused on key stakeholders’ interests and constraints. The first group that we’d like to give an overview of is workers themselves. We have two major areas of concern. [The first] is the impact of AI on worker power and worker well-being. In what ways are these technologies degrading or potentially benefiting workers? Katya mentioned [examples] like worker exploitation, aggressive surveillance, and privacy invasions. Another [area of concern is how] these systems impact workers’ ability to organize on the job, improve their working conditions, and grow their ability to participate in workplace decision-making. The [less contact] workers have with other humans during work, [the fewer opportunities workers have] to discuss their job quality and the harder it is for workers to effect change within their workplace. For example, you can introduce an effective scheduling software system that’s able to anticipate customer demand and then tailor your shift scheduling efficiently. [However,] this can radically disrupt workers’ lives by calling [workers] in at the last minute, forcing them to rearrange childcare, or causing them to worry that their job is in jeopardy if they aren’t able to match those needs. What we would want is for workers to be able to advocate for themselves—to have the opportunity to have a conversation with their supervisor, to make sure that their job is one that they can perform without having to worry about last-minute disruptions to their lives. However, once these decisions are no longer stemming from human-to-human conversations, you open up the opportunity for what Mary Gray called “algorithmic cruelty” to become the decision-making power within a workplace.
Stephanie Bell 27:05
The second area that we’re focused on is how worker voice [can direct] AI development and deployment. As I mentioned earlier, workers have a tremendous amount of expertise in their own tasks and [insight into how they can] improve their efficiency and productivity. For example, perhaps there are opportunities to improve safety or working conditions using technology. Depending on [whose concerns a] technology is designed to address, very different technologies are implemented. We believe that we [must] take workers seriously: they are impacted by these technologies, and [their insights can be] quite generative and a real benefit to AI development companies.
Then the big question is: what are the mechanisms for change? We’ve identified three major [avenues through which] workers can create opportunities for their participation, the first of which is unions and worker organizations. This is probably an obvious [approach] to this audience, but always worth noting. However, [it is tenuous to rely on] unions and worker organizations as the [sole] avenue for change: around the world, unionization rates are at historic lows, which means that workers might not be in a position of power when they come to these conversations.
Second, companies often take into account user and stakeholder research and testing, if not with the actual workers in a given company, then with workers who are in some way similar to them. [Workers could better participate in technological decisions if they had the] opportunity to contribute to the [research and testing] processes in a way that actually had teeth, [for example, by saying,] “This is a step too far in terms of its impact on me and my co-workers.” [Alternatively, workers might say,] “Hey, there’s a design feature that you hadn’t thought about, that would be really useful to build.” We see real opportunity for workers to collaborate with AI designers as well as their corporate leadership to create “win-win” situations.
Finally, I think there are opportunities [for worker empowerment] within corporate governance and ownership structures. While this area is less defined in the context of artificial intelligence, historically, there are [successful models] like codetermination, cooperative ownership, shadow boards, and worker boards in which company leaders get the opportunity to have a sense of what workers think of a given product.
The second audience to discuss is businesses. One of the big questions in this work is: what would a business get out of committing [to shared prosperity targets]? As Katya pointed out, there is an opportunity for [businesses] to differentiate and gain credibility, especially when [they create] a genuinely worker-augmenting product as opposed to a worker-exploiting, worker-surveilling, or worker-replacing product.
There are also opportunities in the product development cycle. On the left side [of this slide] is a simplified graphic of the AI product development cycle. Ideally, AI-developing businesses find ways to commercialize research and develop workplace AI products, which they sell to AI-deploying businesses—frequently a different set of businesses entirely. Those AI-deploying businesses purchase and implement AI products and then offer feedback, both through their purchases and through direct communication with the firms from which they buy their technology.
[However, this idealized model] isn’t how the development cycle actually functions. Instead, [a great deal of development] is driven by research breakthroughs. This isn’t necessarily a bad thing. [However,] research breakthroughs [require] use case identification, and many of these use cases follow the pattern that Katya has already described: they are quite anti-worker in that they automate tasks even where doing so doesn’t increase productivity, or they exploit workers for the sake of [maximizing profits]. While this is not the whole universe of use cases, one reason [for the trend toward worker exploitation] is that businesses are [not] engaging with and listening to [workers about how products impact them]. The more [businesses] build in conversations with workers—and frontline workers in particular—the more opportunity [businesses] have to identify additional use cases and different ways these technologies can be implemented. Oftentimes, [engaging with workers] can [allow businesses and developers to] expand the productivity frontier, [rather than] swapping out a human worker—one form of productivity—for a robot or an algorithm—another form of productivity.
Other business stakeholders are involved as well, [including] researchers, developers, and product managers, many of whom get into this kind of work because of the intellectual challenge and the opportunity but don’t want their products to harm other people. [This creates an] opportunity for conversations with workers within tech companies. [Another stakeholder group is] artificial intelligence investors. Investors, and particularly large institutional investors, frequently invest in spaces where it [is profitable] to have a robust labour market. Investment in automation technology creates problems for other investments within [these investors’] portfolios. We speak about this in more detail in the agenda.
The last audience that I’ll talk about is government. We see three major opportunities for government to participate in steering AI [in a better direction] for society and to support workers in addressing their particular challenges. Right now, [there is a great deal] of government investment in basic research and [along] the commercialization chain. [However, this investment doesn’t come with] any kind of constraints on technologies whose most obvious use cases are going to be harmful for society and concentrate gains in the hands of a few.
[First,] there are opportunities to assess the way that governments deploy their research funding and procurement processes to support an AI trajectory that is broadly socially beneficial in an economic sense. Second, there’s [been a focus on] identifying opportunities to support workers who would be navigating challenges created by AI. What would be the role of government if the trajectory were different—[if we were working toward mitigating AI-related risks]? [In this scenario, we may still implement] reskilling and universal basic income. The question is: how do we [avoid] creating a problem that we have to solve down the road, if [right now] we have an opportunity to [prevent] some of the most devastating impacts? Finally, low-income and middle-income countries have some very specific challenges that they need to work through. As Katya showed with her earlier example, these technologies, once created, have a very low marginal cost to implement anywhere that the company is operating, which could result in massive labour market disruption [without distributed gains] because there are no redistribution mechanisms. We think substantial work needs to take place in this space to ensure that low-income and middle-income countries don’t end up continuing to bear the brunt of the growth of the Global North. Katya, I’ll hand it over to you to cover international institutions and civil society stakeholders.
Katya Klinova 35:47
Thank you, Stephanie. I want to highlight a [section] from our chapter on international organizations and civil society. As Daron Acemoglu once said, “AI can become the mother of all inappropriate technologies for low and middle-income countries.” I showed you a photo from Twitter when I was talking about spillover effects of automation in developing countries because there are no graphs and no data that measure the magnitude or the extent of these spillover effects. We need much more research and attention to understand [these effects well]. If there is one thing we’ve learned from globalization, it’s that the expansion of trade can produce incredibly large gains and those gains can be quite concentrated. There are very real losers from [free trade], and [certain] populations can be hurt very badly. We shouldn’t repeat this story. Now, the expansion of very powerful technology is widening the frontier of the kinds of activities that can be automated, in a way that is not globally controllable. We need to be much more attentive to these trends. The role of the international organizations can be very meaningful in balancing, pacing, and understanding the cross-border impact of [technology].
So, with that, we’ll hand it back to Anton. Just to remind everyone: if you’d like to read the full agenda, it is on our website. You can also email or tweet me and Stephanie—please get in touch if you’d like to be involved with this work. Thank you very much.
Anton Korinek 37:53
Thank you so much, Katya and Stephanie, for a very clear and inspiring presentation. Let me invite all the members of the audience to contribute questions for Katya and Steph in the Q&A box. You can also upvote questions asked by others to express your interest in them.
Robert Seamans has kindly agreed to be our discussant. Rob is an associate professor at New York University’s Stern School of Business. His research focuses on how firms use technology in their strategic interactions with each other and also focuses on the economic consequences of AI, robotics, and other advanced technologies. His research has been published in leading academic journals and has been cited in numerous outlets, including the Atlantic, Forbes, HBR, The New York Times, Wall Street Journal, and others. And in 2015-2016, Rob was a senior economist for technology and innovation on President Obama’s Council of Economic Advisers. Let me hand the mic over to Rob.
Robert Seamans 39:38
Anton, thank you very much for inviting me to discuss this paper. Let me start off by saying that I’ve been following the Partnership on AI for a number of years. I think it’s an excellent organization and I like the impact that the organization has been having. This particular initiative is very important work and very ambitious.
I’m going to start at a fairly high level with a definition. We’re using the term artificial intelligence, AI, which sounds very fancy. I think it’s useful to dumb it down. Here’s my definition of artificial intelligence: [AI] is a group of computer software techniques. At the end of the day, AI is highly sophisticated software, and its algorithmic techniques rely on a lot of data. I’m not a computer scientist; [AI] is outside the realm of what I can [create]. However, that doesn’t mean that I can’t talk about AI—one does not need to be an expert in a specific technology in order to think through its effects on the economy and society. As a perhaps tortured analogy, I don’t know how to build a car; I certainly don’t know how to fix an engine. I would probably even have trouble changing the oil in my car. But that doesn’t prevent me from thinking deeply about how changes in cars might affect the economy and society. The same is true [for AI] and for any technology.
AI and robotic technologies are developing and commercializing rapidly. They’ll likely lead to innovation and productivity growth, which is the good news. But according to some, the effects on human labour are unclear and potentially a cause for concern. I’ll spend half my time on the first part—[the good news]—and half my time on that very last bullet point—[the bad news].
First, [I’ll discuss] some basic facts. AI has been developing very rapidly. There have been many breakthroughs. Here is one example: this picture tracks progress on [the] ImageNet [image recognition benchmark]. So, the y-axis shows error rate: the lower you go, the better off you are; the lower you go, the more progress there is. The x-axis shows what’s happening over time. Over time, [performance on] ImageNet image recognition is dramatically improving. By 2015 or 2016, [algorithms surpass] human capacity in image recognition. This is one piece of evidence that [suggests] we have rapid breakthroughs [occurring] in the lab. Moreover, these rapid breakthroughs have led to these technologies’ commercialization. The panel on the left shows venture capital funding for mostly US-based AI start-ups—[the data] comes from Crunchbase. You can see a dramatic increase [in funding] starting roughly in 2010. Why is it useful to point this out? Well, venture capitalists have very strong incentives to make sure that they’re getting these investments right. They believe that breakthroughs in the lab have commercial applications, which provides some evidence [for the emergence of] commercial applications of AI.
It’s also useful to talk about what’s happening with robotics. Robots, of course, have been around longer than [AI] and to date have had more of an impact [than AI], particularly on manufacturing. There are some things happening with robots that are [useful for understanding] what might happen with AI. I’m going to talk about robots a little bit in my remarks.
The panel on the right looks at worldwide robot shipments. [There were] about 100,000 units sold annually until about 2010. Then, there is a dramatic increase, and by about 2016, three times that amount [were sold annually]. Once again, [this demonstrates] rapid commercialization of a new technology.
While it’s probably too early to say, I would bet there’s going to be a lot of productivity growth as a result of AI, as we’re already seeing with robots. Graetz and Michaels had a fantastic paper come out in The Review of Economics and Statistics in 2018. According to their study, robots added an average of 0.4 percentage points of annual GDP growth in the seventeen countries that they were studying between 1993 and 2007. This was about a tenth of GDP growth for those countries during that time period. I think that’s as good of a benchmark as we can expect. I suspect we’ll get a similar, if not greater, boost from AI, though frankly, it’s still too early to tell.
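Taking the quoted figures at face value (a back-of-the-envelope reading of the numbers as stated in the talk, not of the paper itself), the implied arithmetic is:

$$
0.4\ \text{pp} \approx \tfrac{1}{10}\, g \;\Longrightarrow\; g \approx 4\ \text{pp average annual GDP growth in the sample.}
$$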
We can look at robots to get excited about what AI can do. We can also look at prior episodes of automation. All prior episodes of automation, and particularly steam engines and electrification, have led to growth. One of my favourite studies is by David Autor and Anna Salomons from 2018 in the Brookings Papers on Economic Activity. They’re looking at episodes of productivity growth and the effect that had on labour. The last column [shows productivity’s] net effect—when you have productivity boosts, you see an increase in labour.
What I find interesting, though, is that there’s a fair amount of heterogeneity in the supply chain. The direct effect is negative, the final demand effect is positive, the upstream effect is positive, and the downstream effect is noisy. There are two big takeaways from this. [The first takeaway] is that the net effect is positive for labour. The second takeaway is that there’s a lot of heterogeneity. We can think about other sources of heterogeneity, for example, within a given firm across different occupations in that firm.
Let’s [consider AI]. Katya and Anton, [in their] paper “AI and Shared Prosperity,” write that future advances in AI, which automate away human labour, may have stark implications for labour markets and inequality. I agree with that statement, and [there are two components worth highlighting]. The first is [the claim] that AI is automating away human labour. The [second is] inequality. [My—albeit nascent—work] may [provide] early evidence [on both points]. [My research] suggests that [at this stage] firms are using AI for augmentation rather than for replacement. However, there’s also early evidence that the augmentation is disproportionately benefiting [only] some [people: the gains are not widely shared]. [This provides evidence for AI’s] heterogeneous [impact] across occupations.
Over the past several years I’ve worked with Jim Bessen and a couple of other co-authors to survey AI-enabled start-ups. In the first wave of [surveys], we asked these AI start-ups, “What is the goal of this AI-enabled product that you’re creating? What is this product’s KPI when you’re trying to sell to customers?” [Start-ups] could [choose one or more from a range of possible answers]. Most [start-ups answered that their products were aimed at] making better predictions or decisions, managing and understanding data, and gaining new capabilities. [These answers suggested that technologies] augmented, rather than replaced, human labour. [On the slide,] I’ve highlighted in red the answers “automate routine tasks” and “reduce labour costs,” [as these answers suggest] replacement. However, these [replacement-indicating reasons were not among the] top reasons that these firms gave. [While there may be] some evidence that AI is being used to replace human workers, [most] technologies are being used to augment work.
[Now, let’s consider] inequality. [We’ll turn to] a paper [I wrote] with Ed Felten, a computer science professor at Princeton, and Manav Raj, a PhD student I work a lot with. We came up with an AI Occupational Exposure Score: for each occupation in the US, we’ve come up with a way to describe how exposed that occupation has been to AI. Now let’s [segment these occupations] into three [categories]: low-income, middle-income, and high-income occupations. Let’s look at employment growth and wage growth over a ten-year period. The positive coefficient for high-income workers’ employment growth suggests that as the high-income group is more exposed to AI, they will see larger employment growth. The same holds true for wage growth: as these occupations are more exposed to AI, they will see faster wage growth. [In contrast], the [opposite is true] for low-income workers’ employment: [as AI exposure increases, employment growth decreases]. This suggests AI may be exacerbating inequality.
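To show the shape of this kind of analysis, here is a hedged sketch of an occupation-level regression of employment growth on an exposure score interacted with income group. The data are simulated to mimic the qualitative pattern described above; none of the variable names, magnitudes, or code are from Felten, Raj, and Seamans’ actual study.

```python
# Illustrative sketch only: simulated occupation-level data, regressing
# ten-year employment growth on a hypothetical AI exposure score,
# with a separate exposure slope for each income group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # illustrative number of occupations

occupations = pd.DataFrame({
    "exposure": rng.uniform(0, 1, n),  # stand-in for an AI exposure score
    "income_group": rng.choice(["low", "middle", "high"], n),
})

# Simulate the qualitative pattern from the talk: exposure is associated
# with growth for high-income occupations and decline for low-income ones.
slope = occupations["income_group"].map({"low": -0.05, "middle": 0.0, "high": 0.08})
occupations["emp_growth"] = slope * occupations["exposure"] + rng.normal(0, 0.02, n)

# Interaction model: exposure * income group yields group-specific slopes.
model = smf.ols("emp_growth ~ exposure * C(income_group)", data=occupations).fit()
print(model.summary())
```

In the fitted summary, the exposure coefficient and its interactions with the income-group dummies recover the signs built into the simulation, which is the pattern the talk reports finding in real data.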
So, what’s the solution? You’ve heard about one solution from the authors of the [Shared Prosperity Agenda]: they’re developing a framework to enable ethically minded companies to create and deploy AI systems. I think that this is a very good solution. The first question that I have is: “How likely is it that firms would self-regulate to adopt such a framework?” With Jim Bessen and co-authors, I’ve asked a [similar] question by surveying AI start-up firms [to learn] how many of them have adopted a set of ethical AI principles. I expected, ex ante, that a low number of firms [would have ethical AI principles]. We learned that about 60% of these firms said that they did [have ethical AI principles]. Perhaps unsurprisingly, there’s a fair bit of heterogeneity across different industries.
Coming up with a framework is a useful solution, because firms will actually adopt one. Now, under what conditions might firms be most likely to adopt these principles? Based on correlational evidence from the survey that we did, [represented on the slide in] columns one and two, we found that when AI start-up firms are collaborating closely with a large tech firm—like Microsoft, Google, or Amazon—these smaller firms are much more likely to adopt AI ethical principles. One potential [strategy this evidence suggests could be effective] is to [develop a] framework [for ethical AI] and specifically target large companies to adopt this framework first so that smaller companies will follow.
Katya and Stephanie highlighted some of the larger macroeconomic and labour market trends [and described] increasing inequality and declining union membership. Other [trends include] falling labour force participation and rising industry concentration. These high-level macroeconomic trends are important to keep in mind because [they differentiate this episode of automation] from prior episodes. The conditions under which electrification or steam [power were introduced] were very different than they are now. [These changing conditions] are useful to keep in mind.
How do we measure corporate shared prosperity targets? While it’s easy to [call] something measurable, it’s much harder to actually measure it. [This is something I hope to] push the authors on.
[Finally, let’s consider] customers [as a lever of change]. Stephanie described the three different mechanisms that could be used to push for change, but she [didn’t discuss] customers or “end-users.” We know that customers are an important stakeholder and can [cause] firms to adopt [ethical] standards. [Consider a] perhaps tortured analogy. When I’m purchasing eggs, I care about how the chickens were treated and I’m willing to pay a [small] premium [for better treatment]. You could imagine the same kind of mechanism at play here. When customers—people like you and me—are willing to pay a little bit more for a product that’s been certified as treating workers [fairly, that can motivate firms to adopt ethical standards]. [Stephanie and Katya] can add customers to the set of stakeholders that they’re thinking about.
Anton Korinek 55:21
Thank you so much, Rob, for the insightful discussion and comments. Let me give Katya and Steph an opportunity to offer their reactions.
Katya Klinova 55:31
Rob, thank you so much. These were great comments that we will use in the future. I couldn’t agree more with you. For the record, we’re not anti-automation; historically, automation has gone [well]. [However,] as you described, the societal and economic conditions [today] are different [than they were historically]. We cannot [uncritically] rely on past successes to be confident about our future success. I completely agree with your point about measurement: [now, the work is to understand how to measure these targets we’ve developed].
Stephanie Bell 56:34
Thank you so much for these comments, Rob. I think they’re extremely insightful and helpful as we forge ahead with all of this. I really appreciated your point about customers and their role [in motivating ethical behaviour]. The evidence—at least that I’ve seen—seems mixed about the degree to which people [are] willing to trade off [some of their] surplus in order to help workers. However, there are people who are willing to buy free-range and cage-free chicken eggs, and there are people who prefer Patagonia to North Face as a result of [Patagonia’s] supply chain and environmental principles. I think [there’s certainly a group of people who] are an audience for this work, and we’re thinking hard about how we engage [this group].
The other part of your comments that I really appreciated was [your examination of] automation versus productivity and what that [distinction means] for workers. As I’ve [read] the [literature] on AI’s impact on workers, [I’ve learned] the degree to which many so-called augmentation technologies are a new-fangled Taylorism or Fordism. [Augmentation technologies employ a] very old managerial style [that is] technologically enabled to be much more aggressive. [For example,] injury rates in Amazon warehouses that have AI-enabled robotics are much higher [than in warehouses without robots]. [AI isn’t creating a] Terminator scenario—[rather, robots are] colliding with people on the warehouse floor. [The danger] is about increasing work and job intensity to the point where people are badly injured from repetitive stress injuries. [As we think about] measurement, [we have to draw a] fine line between augmentation and exploitation.
Anton Korinek 58:35
Thank you, Katya and Steph. The agenda that you laid out proposes [a path] to ensure that progress in AI will be broadly shared with workers. I agree that [this is very important] for the short-term and medium-term future.
At GovAI we are also interested in preparing for a long-term future in which potentially all jobs can be performed more effectively by machines [than by people]. In this future scenario, it would cost less to pay a machine than a human for the same type of work. Equilibrium wages would not even cover the basic subsistence [needs of humans]. Steph has already hinted at this possibility during her presentation. One of the members of the audience, Vemir Michael, phrased it this way: “In the long term, can shared prosperity be [managed] within the company environment, as workers [self-advocate]? [Or, will there need to be] deeper [governance, in the form of a] government structure? [Or, must there] be a societal shift?”
So, let me ask you: how do you think about this potential long-term future in which all jobs may be automated? How does it square with the agenda that you are advancing? How do you [negotiate] the tension between using work as the vehicle to deliver shared prosperity and the fear—that some in the field have—that there may be no work in the future?
Katya Klinova 1:00:12
[It’s essential to] make sure there is work in the interim [period], [given that during this interim,] work will still be the main vehicle for distributing prosperity around the world and the main source of income for the majority of the global population. [This work availability] is actually a precondition for long-term AI progress. If the decline in labour demand and the elimination of good jobs happens too quickly, there will be so much social discontent that it could preclude technological progress from happening. We need to pace the development of technology [with] the development of social institutions that enable redistribution. Eventually, we may need to decouple people’s prosperity, dignity, and well-being from their employment. Right now, [however,] we are in a society in which [work and well-being] are tightly coupled. We cannot pretend we’ve already figured out how decoupling can be done painlessly and globally. Even the boldest proposals [don’t propose] large [redistributions: they are in the range of] $10,000 to $13,000 per year. I don’t want to say anything at all against UBI—I think social safety nets are incredibly important and very much needed. We just need to be realistic about [what is] sufficient in the interim. [This] interim period is a precondition for success in a future in which nobody needs to work to survive.
Stephanie Bell 1:02:35
I fully agree [with Katya]. The devil really is in the details when [considering] the feasibility of different approaches to the trajectory of [AI] and its impact on people’s livelihoods. I think, based on my previous work in democratic theory and trust building across different social groups, and considering the current political environment, that we are more likely to convince an important subset of capitalist companies to ever so slightly decrease their bottom line than to put in place large-scale redistribution. And that’s just [considering redistribution] in a given nation state, let alone across nation states. [Redistribution requires] functioning democratic governments. Unfortunately, right now, we’re seeing many governments—which would consider themselves to be democratic—backtracking. Given this, what does a transition period look like? What is the best way to work toward a jobless future? How do we ensure that [our path to this future is] humane for everybody involved? Unfortunately, I’m not optimistic that near-term redistribution is the solution.
Anton Korinek 1:04:08
Thank you, Steph and Katya. Let me read the next question from Markus: “Could you say more about the role of policy in shifting AI technology in a labour-augmenting direction?”
I’ll add my own follow-up question for Rob specifically. The agenda for redesigning AI for shared prosperity has focused on making AI more worker friendly, [especially] in the private sector. I think we all agree this is an important starting point. Rob, you also have considerable experience in public policy settings. I wanted to ask you [how you would approach creating] public policies to support shared prosperity. What would be your advice on how to best go about making public policy work useful and appealing to policymakers?
Robert Seamans 1:05:27
I agree with much, though maybe not all, of what Stephanie and Katya have said. [While they didn’t cover this,] I worry about [a specific segment of] AI policy [focused on] addressing inequality. The first reason [I’m concerned relates to what] Katya said earlier: it’s very difficult to know ex ante if technology will be labour displacing or labour augmenting. We can only [make this distinction] ex post. I don’t think it makes any sense to try to create a policy focused on taxing certain technologies because we think [these technologies] are going to be labour replacing—I worry about the distortions that [tax] would impose. The second reason [I worry about this policy] is that the larger trends we’ve touched on, like declining union membership, increasing inequality, declining labour force participation, and increasing industry concentration, are first-order concerns. We want to address these before coming up with policy that’s specific to AI. That being said, I like The Partnership on AI’s approach because it gets firms to engage in self-regulation, which I think [is a better approach than] government-imposed [regulation]. There is a role for government to play as a convener of different firms, stakeholders, workers, and customers to arrive at a set of principles that firms might be more willing to adopt rather than less willing to adopt.
Anton Korinek 1:08:00
Katya and Steph, would you like to add your thoughts on policy?
Katya Klinova 1:08:07
It’s, of course, scary if the government begins taxing something that is likely to be labour displacing ex ante. However, the government does fund a great deal of technology R&D which can [affect development] in the private sector. If the government, [in addition to implementing] other policies, starts thinking [ahead] and lays the groundwork for labour-complementing technologies, it [could steer] AI away from becoming excessively automating. Interest rate policy and immigration policy can influence the supply of labour and [impact how likely firms are to] invest in automation. We want the government to be aware of AI’s capacity to [increase] inequality by benefiting high-skilled workers and to think through what [it] can do to create conditions in which the private sector [makes] investments in labour-complementing technology.
Stephanie Bell 1:09:58
I wholeheartedly agree with Rob’s point: a tax that targets AI specifically is likely to cause quite a few distortionary effects, as many of the problems that emerge from AI also emerge from other technologies. To the extent that [we focus] on dealing with the impacts of technological change on workers’ well-being, worker power, and worker livelihoods, a more encompassing set of regulations or approaches would be [warranted].
[Currently, a great deal of] AI research is targeted at human parity metrics: how well can this technology replace a human doing the same task? That’s a very different kind of metric than one focused on what we can achieve when technology is working together with a person on a [given] task. Using something other than a human parity metric to measure success could help the government [steer] AI research to be more augmenting and potentially less exploitative.
A second thought [concerns Katya’s comment on the] taxation scheme. Capital and labour are treated differently in tax schemes around the world. If [government] makes it much cheaper—at least in terms of accounting gains—for a company to purchase software or a robot to do a given task, then [the government is] disadvantaging a worker who could be doing those tasks instead. If aggressive depreciation gives [firms] tax advantages—[as happens in] the United States—on any piece of equipment or capital investment, but all labour-related [expenses] incur a payroll tax, then [the government has created] two very different incentives, [tilting firms toward] replacing labour [with capital].
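As a stylized numerical illustration of the asymmetry Stephanie describes: the rates below are rough US-style figures chosen for illustration, not a statement of actual tax law, and the helper functions are hypothetical.

```python
# Toy comparison (hypothetical, simplified) of the after-tax cost of a
# machine under immediate expensing versus the same spend on wages that
# also triggers an employer-side payroll tax. Both outlays are assumed
# deductible against a corporate income tax.

CORPORATE_TAX = 0.21   # illustrative corporate income tax rate
PAYROLL_TAX = 0.0765   # illustrative employer payroll tax rate

def after_tax_machine_cost(price: float) -> float:
    # Immediate expensing: the full price is deducted right away.
    return price * (1 - CORPORATE_TAX)

def after_tax_labour_cost(wages: float) -> float:
    # Wages plus employer payroll tax, both deductible in this sketch.
    return wages * (1 + PAYROLL_TAX) * (1 - CORPORATE_TAX)

# A $50,000 machine versus $50,000 of annual wages for the same task:
print(after_tax_machine_cost(50_000))  # 39500.0
print(after_tax_labour_cost(50_000))   # ~42522, about 7.65% more
```

In this crude sketch the wedge equals the payroll tax rate, since both outlays receive the same corporate deduction; the point is only that the tax code, rather than relative productivity, can tip the choice toward capital.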
Finally, I think many of these problems [stem from] labour law. Places like the United States in particular would benefit from having more stringent laws to protect workers from workplace injuries and exploitation and to safeguard workers’ livelihoods, wages, and hours. Putting [these protections] in place, either through additional rules or heavier fines for breaking these laws, would steer companies away from using more exploitative technologies.
Robert Seamans 1:12:44
I completely agree with these comments, Stephanie and Katya. In particular, I think the point about the different ways that capital and labour are taxed is very important.
[Let’s consider a scenario] that I would like to get your reaction to. Let’s say the Partnership on AI successfully comes up with the framework that you’re in the process of developing. Might [implementing] a policy that the government can only purchase from firms that have adopted this framework [create an incentive] for firms to adopt [these principles]?
Katya Klinova 1:13:27
I would love that. Right now, the government procures a lot of technology. If the government recognized the long-term decline in labour demand as [an important problem], how would [it set] criteria for which technology to buy, and from whom? Would [it] just decide based on marketing [information] on a website that says, “this technology augments workers”? Or would [the government] ask for disclosures? [If so,] what kind of disclosures would [it] be looking for? What would be measured? We think this framework could be useful—even if not mandated as law—as a [means] to inform decision makers who handle government procurement of technology.
Stephanie Bell 1:14:58
A question from Michelle: “What do each of you believe will be the biggest challenge in redesigning AI for shared prosperity? Will [the challenge] be [from] a specific industry? The engagement from a specific stakeholder group? Or [will it be] something else? Is there a [consensus] on the largest challenges among your research team?”
I should also add that Michelle is asking how she can best continue the conversation, so perhaps tell us again how to find out more about the Shared Prosperity Agenda on the PAI’s website.
Katya Klinova 1:15:38
Michelle, thank you for the question. For you and for everyone that would like to stay in touch with us, there is a form to leave your email and sign up for updates on these discussions and conversations. All of this is on partnershiponai.org/shared-prosperity.
We will see right now if there is agreement on the biggest challenge. The immediate challenge for us is to figure out a reliable, robust way to measure [our goals] that would be intellectually honest and substantive, but at the same time intuitive and simple enough to explain that a lot of people could get behind it. [Referring back to the] example of eggs in the store—there is one simple label you’re looking for, the “free range” label. [To consider another example,] carbon emission targets [proxy] a very complex system [in a way that is] easy to understand and get behind, though it [still] took two decades to build momentum behind corporate carbon emission targets. And governments [still have difficulty] deciding which investments are environmentally sustainable and which are not.
We don’t have decades for this work because AI progress, and its impacts on labour, are happening [so quickly]. How do we quickly [develop] a metric that is substantive but intuitive? This is the question that keeps me up at night.
I should add that Anton is [a] Senior Advisor to the initiative and [is] on our steering committee. I couldn’t have done [this work] without his support.
Stephanie Bell 1:17:58
I agree with Katya: getting this set of metrics right is going to be our biggest challenge because developing intellectually honest and rigorous metrics is challenging. [Another challenge is] finding a way to translate that rigor into something that’s easily implementable, especially for companies who don’t have a team of in-house macroeconomists and microeconomists. [We have to distil our metrics] so that companies can [understand them], support [our] cause, and [feel capable of implementing these targets]. Our work over the next couple of years will be to figure out how to make [metrics] that are coherent and actionable.
Anton Korinek 1:18:51
Thank you, Katya and Steph. Now let me ask you, perhaps as a concluding question, if one of the members in our audience is an AI developer, what tangible next steps would you recommend that they take to advance shared prosperity through their work?
Katya Klinova 1:19:24
If you read the companion paper “AI and Shared Prosperity” on our website, we lay out steps that could be [useful for] AI developers. If you would give us feedback on [whether these steps] work for you and whether they’re helpful, that would be [very] appreciated. [Another way to help would be to] spread the word: AI developers and innovators at large have a responsibility to think about their economic impacts on labour and on the distribution of good jobs. I do not think that this notion of [developer and innovator responsibility] is broadly accepted. [You can advance the cause by helping this] become more of a norm and an expectation.
Stephanie Bell 1:20:15
[I echo] everything that Katya just said. [It’s important to] push for economic impact as a fundamental part of AI ethics. [AI ethics] has advanced impressively along a number of different tracks. For whatever reason, the economic impact of these technologies is not a part of that conversation. The more we’re able to bring awareness to how [AI’s economic impacts affect] people’s livelihoods, the better the opportunity we have for success in [steering AI in a] positive [direction].
Anton Korinek 1:20:55
Let me say thank you to Katya, Steph and Rob, not only for your presentations and the discussion, but also for the thoughtful conversation that we have had thereafter.
Thank you and we hope to see you at our next webinar.