GovAI Webinar #1 COVID-19 and the Economics of AI
Introduced by Allan Dafoe
Moderated by Anton Korinek
Featuring Daron Acemoğlu, Diane Coyle,
and Joseph Stiglitz
Welcome to our inaugural webinar on the governance and economics of AI. It is extremely exciting to see so many audience members from around the world. I see in the chat Portugal, Shanghai, Brazil represented, so that’s great. I am Allan Dafoe, the director of the Centre for the Governance of AI which is organizing this series. We are based at the Future of Humanity Institute at the University of Oxford. For those of you who don’t know about our work, we study the opportunities and challenges brought by advances in AI so as to advise policy to maximize the benefits and minimize the risks. We understand AI as broadly referring to the cluster of technologies associated with machine intelligence, especially the recent progress in machine learning, but also including advances in computing power, sensors, robotics and our digital infrastructure. The term governance, which may not be familiar to many of you, refers both descriptively to the ways that decisions are in fact made about the development and deployment of AI, but also to the normative aspiration that those decisions emerge from institutions that are effective, equitable and legitimate.
We have a special interest in understanding the long run impact of artificial intelligence. Over the past few years, it has become increasingly common for economists to identify AI as a general-purpose technology or GPT, as I expect we’ll hear about more today. If AI turns out to be anything like previous transformative GPTs such as electricity and the internal combustion engine, then we can expect massive changes in our culture, politics, and in the character of war.
More speculatively, AI might even turn out to be something more than another GPT in a long line of GPTs. A number of scholars, including those attending today, have begun to explore more radical possibilities and their associated challenges, such as massive labor displacement, extreme inequality, rapidly accelerating economic growth, and the maintenance of human oversight of highly intelligent artificial systems. This webinar series will continue these conversations. In the coming months, we will host a conversation on challenges for US-China cooperation and the governance of AI, on the impact of AI on democracy, on forecasting methodology and insights for trends in AI, as well as many more discussions of the economics of AI.
This series is put on in partnership with Anton Korinek of the University of Virginia — who is sharing the screen with me — who will be moderating today’s event. Anton is one of the leading economists who has been thinking seriously about the economic implications of advanced AI. Anton first came to my attention because of his excellent paper coauthored with Joseph Stiglitz, also here with us today, on the implications of AI for income distribution and unemployment. In this paper they discuss with subtlety and insight the many challenges to making technological progress broadly beneficial due to failures in insurance markets for technological displacement, and the costs and feasibility of redistribution. From my conversations with Anton, I’ve learned a lot more about the economics of AI and I encourage you all to follow his work. I will now turn the mic over to Anton to introduce and moderate this event.
Let me thank the GovAI team, Markus Anderljung and Anne le Roux, for making this event possible. Let me also thank Allan for hosting us and for the kind introduction. I have followed Allan’s work for a number of years. What I find really admirable is that he focuses on how to put into practice many of the policy proposals that economists like myself only consider in theory.
In the economics of AI, a big theme is that smart machines may substitute for human labor rather than complement it, and that this may be unlike what earlier technological revolutions entailed. The fear is that this will progressively lead to a decline in relative and perhaps even absolute demand for labor, driving down wages, and when wages cannot fall, causing unemployment. This would exacerbate inequality, poverty, and social and political tension.
Well, just like doctors learn the most about the human body when it is sick or injured, economists learn the most about the economy when it is in crisis.
When Allan and I first spoke about this webinar series, we felt that it would be a fitting theme for our inaugural event to invite three of the world’s top thinkers on the economics of AI to share with us what they have learned from the ongoing pandemic and what lessons this provides us for how we as a society can prepare for the advent of ever smarter machines.
Aside from devastating health effects, Covid-19 has led to hundreds of millions of jobs lost around the world — probably one of the largest negative labor demand shocks in human history, although it was a policy-induced and temporary one. It has also led to unprecedented government actions to support the jobless while simultaneously giving rise to significant political tension. So one important question is: what can we learn to prepare for potential future labor demand shocks that may arise from automation?
Another issue is that Covid-19 has also spurred a massive technological transition into the virtual world, in which the marginal cost of distribution is zero. So instead of holding an in-person conference on our topic today (and I would very much enjoy being with you in person), we have to live stream this event on the web. There are some obvious benefits: it democratizes attendance, but it also risks exacerbating the superstar phenomenon and deepening inequality in our world. Another really important question is: what can we learn from Covid-19 about a future that is increasingly digital? More broadly, let me ask our panelists: what lessons have you learned from the pandemic that we can carry over to the governance of AI?
Without further ado, let me introduce three superstars who are our panelists today, Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz.
Daron Acemoğlu is the Elizabeth and James Killian Professor of Economics and Institute Professor at the Massachusetts Institute of Technology, where he has taught since 1993, and he is a winner of the John Bates Clark Medal. He is also the coauthor of Why Nations Fail: The Origins of Power, Prosperity, and Poverty.
It’s a great pleasure to be here even though we cannot all be in the same room. And I think Anton gave an excellent introduction to what I wanted to say, which is that we are living through a transformative moment. This is true for many dimensions of our lives, but two that are particularly important are the future of technology, especially related to AI, and the future of institutions. Both because the current state of institutions is shaping how we react to the crisis, and because this is a window for potentially transformative changes for the future. I’m going to spend my eight minutes or so equally on these two points. First, on AI. During this hour of need, we are all grateful for the digital technologies that keep us from being completely isolated from the rest of the world, but there are dangers as well as opportunities in how we use AI. To understand that, I think it’s useful to look at what has happened in the labor market over the last three decades, and why.
Here [indicating a slide] I’m showing the labor share in the US, but the pattern is similar in other OECD countries. The US is simpler and sharper, though. You see a huge decline in the labor share in national income from around 2000. There is some decline going on before then, but it’s small; especially when you look at the industry share, composition adjusted, it’s a very remarkable decline of almost 10 points in the course of about 15 years. So what’s going on with this? Well, the explanation that Pascual Restrepo and I have pushed for over the last several years in our research is that this is mostly about automation. Partly AI, but really mostly the forerunners of AI.
One way of seeing that is in the next three graphs, and those are going to be the background on which I’ll put some thoughts on the future of AI and on this current crisis. Here, when you look at the left graph, what you see is private sector wage bill growth in the United States. That’s an inclusive measure of labor demand growth in the private sector in the US. It’s a remarkable picture. It would be even more remarkable if I also showed you wage inequality and the wages at the bottom, but it’s essentially a picture of shared growth for about four decades: labor demand is growing at above 2% a year, every year, very steadily. Wages are more or less keeping up. And then when you come to the post-1990 period, which is the right panel, you see a completely different picture.
First, the growth of labor demand becomes anemic, and then it essentially stops after 2000. There’s really no growth in the wage bill or overall labor demand in the US private sector. Where is this coming from? Pascual and I argue that factors like monopsony, monopoly, and rent sharing have also played a role, but mostly this is about the types of technologies that we have adopted. If you look again at the four decades after World War II, the red line is what we call technological displacement: technologies that reduce the labor share by substituting machines for labor — mostly numerically controlled machines, specialized software, robotics, and very recently AI. The blue is when you find new ways of doing tasks that increase labor demand. These are industries where the labor share is actually going up. You see that the blue and the red are roughly balanced, and the yellow in the middle is essentially the sum of the two. So the reason why labor demand is growing very steadily and the labor share is constant is that automation technologies are being counterbalanced by new tasks and other human-friendly technologies. Fast forward to the last 30 years and you see a completely different picture. The blue line here is now about 30% to 40% slower than the previous one, so there’s much less of these human-friendly technologies. The displacement curve is much faster, about 30-40% faster than what it was before 1987. We are doing much more automation and much less human-friendly technology. Why is that?
Well, there are a number of reasons for this. I don’t have the time to get into all of them right now, but I want to highlight one of them. It is not the most important one, but it’s one of the top three, it’s easy to talk about, and it highlights the other point that I want to make, which is the inefficiency of this capital-labor substitution. If you look at the US tax code, labor taxes have been roughly constant, but capital taxes, especially on software and equipment, which are the purple and the red curves here, have been much, much lower, essentially getting into zero territory. So we are subsidizing the use of capital while at the same time taxing the use of labor. And that’s encouraging a lot of automation, and much of that marginal automation is actually not super productive.
So against this background, what are we going to experience in this crisis period? I think one of the things that we are already seeing (we don’t have hard data on this, but the surveys are very clear) is that firms are using more and more AI in order to substitute for workers, because the lockdown is making labor supply even harder for firms, and the demand for machines is increasing exponentially. So against the background of very fast and perhaps already excessive automation, there is now a danger that we’re going to repeat exactly this pattern: not enough use of AI for helping humans and too much for replacing them, rather than the more balanced pattern of the four decades after World War II. But of course, if we are to hope that things will work out rather than the worst-case scenario in how we use technology, and in what that implies for wages, unemployment, the labor share, and income distribution, we have to turn to institutions. Institutions can actually help us redirect technology in the right way. I have argued in a lot of my work over the last two and a half decades that the path of technology is not preordained. It is the choices of firms, workers, and scientists, and especially regulators’ choices in redirecting technology, in this instance AI technology, that are going to play a critical role.
So can we hope that we have the right institutions to guide us in the right way? Actually, that’s where any cautious optimism one might have becomes more jaded, because we have seen a spectacular failure of institutions during the current crisis. This is really a combination of two things. One is that we have seen an erosion of expertise, technology, and autonomy in institutions. I think the sorry state of the CDC — which was actually very successful a short while ago during the Ebola crisis but has been an utter failure during this crisis — is related to that. But the role of institutions is also becoming much, much more difficult because of a collapse of trust in institutions. If you look at trust in government and the state from the World Values Survey, you end up with a very paradoxical and disturbing pattern. In autocracies such as China, Turkey, and Singapore, you have relatively high trust in state institutions and government, while in democracies, including some places that have done extremely well during this crisis such as Taiwan and South Korea, but also the United States, you have very low and falling trust in institutions. That’s really making life much more complicated.
But then what does the future hold? There’s no doubt in my mind that the Covid-19 crisis has created what Jim Robinson and I have called a critical juncture. There will be changes in institutions because their inadequacy has been laid bare. There are many possible futures for these institutions. I have outlined them in some talks and articles, but since time is short, let me not go into each one in detail. We may do nothing, which would be completely tragic. We may try to emulate China, which would also be tragic, because we couldn’t emulate their good parts, such as a competent bureaucracy that has lived for 2,500 years under an authoritarian hierarchical system. We would end up emulating their bad parts, such as a lack of respect for civil liberties, autocracy, and repression. We could turn to large tech companies, motivated by the failure of our government, in the hope that perhaps they would serve us better than a failing government. But I think there is another option, which is to remake our welfare state.
The current crisis has highlighted that we need new responsibilities for the state: combating inequality, climate change, and pandemics, and providing better regulation. But I think a lot of people are worried about whether that’s going to happen starting from the current sorry state. They’re worried, as Hayek was after the Beveridge report, which led him to write The Road to Serfdom, about whether, once the state becomes very powerful — economically much larger, much more administratively in control of wages and the allocation of resources — that is going to be a tenable state of affairs. Well, this is actually my last slide, and I’ll conclude by noting that this is what James Robinson and I tackled in our new book, The Narrow Corridor. We came up with a framework for arguing why Hayek was actually wrong and there is a way for society to adapt to greater state power as long as it deepens democracy, and we outlined the dynamics of that. Since time is short, let me not get into the details, in the hope that somebody will ask during the Q&A session and I can explain more about what the main thesis is and why, despite all of the difficulties we’re facing, I’m saying that a little cautious optimism might be possible. Let me conclude here, pass it to the other panelists, and come back to these issues during the Q&A session. Thank you.
Thank you very much, Daron, for your insightful remarks. And let me now turn it over to Diane. But before doing so, let me also make an announcement to all of us attending the webinar. Please feel free to click on the link “ask a question” at the bottom of your screen and to add any questions that you may have for our panelists. You can also upvote existing questions that other people have already posed.
Diane Coyle is an economist and former advisor to the UK Treasury, and the Bennett Professor of Public Policy at the University of Cambridge, where she has also codirected the Bennett Institute since 2018. She was Vice Chairman of the BBC Trust, the governing body of the British Broadcasting Corporation, was a member of the UK Competition Commission from 2001 to 2019, and has just published a book, Markets, State, and People: Economics for Public Policy.
Thank you. Hello everybody. It’s a great pleasure to have this opportunity. We panelists haven’t coordinated beforehand, but I think what I’m going to say complements Daron’s comments without overlapping with them. I’ve got about eight minutes, and I want to make three main points. The first is that the crisis has crystallized tensions around expertise and to what extent modelling can or should inform policy choices. I think we need to reflect on the lessons for AI because machine learning systems are technocrats par excellence. The second point is that the companies that can operate AI at scale are being strengthened by the crisis and they’re going to emerge even more powerful. So we need to double down on the policies that will make them accountable and make the markets in which they operate contestable. The third point is that data is everything and we’re going to need to understand much better the creation and the distribution of value in data value chains and the trade-offs between private and collective benefit.
Let me start with the first of those: expertise. Machine learning systems have been programmed and trained to act just like Homo economicus. They maximize some well-specified objective function subject to constraints, and they use the rules of logic. In the pre-crisis world, this was already problematic as machine learning systems were starting to be deployed in public policy decisions. We live in a very complex socioeconomic system; there are multiple conflicting aims and trade-offs. Just as target setting can distort public sector behavior, AIs can game the objectives they’re set. And there are what political scientists call “incompletely theorized agreements,” what others might call political fudge, which means that quite often we don’t want to specify too clearly what objective we’re aiming for, in order to achieve some consensus on actions. We’re already seeing during the crisis the kinds of problems that economists are very familiar with in using models to forecast. You get model drift, and you get what’s called the Lucas critique, where structural breaks mean that the relationships you’ve modelled break down. In some domains this doesn’t matter: if you’re thinking about algorithms determining online shopping offers, the fact that they’ve broken down doesn’t matter; they’ll fix those quickly. There are also some reasonably narrowly specified domains, such as biomedical discovery, in which AI is proving really useful at the moment. So I think the main lesson of the pandemic is actually about the limitations of the kinds of models we can build and train. And I think we’re really far from capturing the complex interactions that policymakers need to consider now: the genetics and other aspects of the virus itself, individual and group susceptibility, social and economic conditions, behavioral responses to the pandemic and lockdown policies, responses to climate, and so on. This is really complicated.
I think this is a real lesson in learning the limitations of what we can and should be trying to do with AI in policy.
The second point is about power. Before the crisis, a number of countries around the world were recommending tougher competition and regulatory policies toward big tech. Now big tech is getting even bigger. This is a moment for governments to hold their nerve, as our society is dependent on digital companies as never before. And that means that, even more than before, they need to be held accountable for the market power and also the political power that they hold. We need a lot more thought about the governance of AI, and I welcome Allan’s comments about that in his introduction. Can we avoid a geopolitical arms race? What are the national and global institutions that will deliver accountability? With some of my colleagues at the Bennett Institute, we’re starting a project looking at the history of different governance frameworks for new technologies. It’s not straightforward. It depends on the cost of entry to the technology; if the technology changes, the governance framework may need to change as well. It depends on the market context, and on what the interaction between public and private sectors looks like in developing new technologies, and so on. I was a member of Jason Furman’s panel in the UK looking at competition in digital markets. Just before the lockdown, the government announced that our recommendation for a digital markets unit to be set up would go ahead, and that needs to happen. It needs to happen in other countries too. I think the more we can get regulatory alignment and alignment of competition policies between countries, the more effective we will all be. We also need to reflect on the skills base and national capabilities in AI. If you’re sitting, as I am, in Europe and in London, being in between the United States and China — who have the leading capabilities at the moment — when they seem to be embarking on a phase of geopolitical rivalry of which AI is a part, that’s not a very comfortable position.
And so for everybody, for a number of reasons, thinking about sharing the skills needed to use AI and deploying it and building up those national capabilities will be important.
My final set of comments is about data. It’s one of the barriers to entry that we identified in the Furman review. We recommended looking at and enforcing interoperability, and enforcing rules about APIs and some data sharing. The key issue emerging in the pandemic is health and location data. I think that’s been really unfortunately shaped by the narrative that data is personal. Almost no data is personal. There might be quite a lot that we want to be kept private, but that’s a different matter, because the information content of data is almost always relational and contextual. Daron has an excellent paper about the negative externalities of potential privacy loss from the provision and sharing of data. But there are also substantial potential positive externalities from the aggregation and sharing of data. With colleagues, I have a policy paper on that, and we are working on an academic paper also. In the current context of the pandemic, my health and location status has substantial implications for other people; it is a very large externality. That, to my mind, outweighs concerns about privacy (not data security, but privacy in the sense of not sharing data at all), particularly in the context of the huge removal of civil liberties from lockdowns in various countries. We shouldn’t have to rely on the goodwill of companies like Google and Apple to provide limited data on what’s happening during the lockdown, or on the APIs that they are developing. The democratic public interest in this is too large. I think the Covid example is an instance of a much broader debate that we need to start having about data: about the positives and the negatives, the individual and the social, about how to capture that value and how to distribute the benefits.
Also about what kinds of institutions can be trusted to govern data and data access both in terms of security and privacy, but also in terms of the rights of access to various forms of information that can be used for the good of individuals and the good of the public. And I will stop there. Thank you.
Thank you so much for your insightful remarks. Let me now hand the microphone over to Joseph Stiglitz, who is an economist, public policy analyst, and University Professor at Columbia University. He is the recipient of the Nobel Prize in Economics in 2001 and the John Bates Clark Medal in 1979. He is also a former Senior Vice President and Chief Economist of the World Bank and a former member and Chairman of the US President’s Council of Economic Advisers. His most recent book is Measuring What Counts: The Global Movement for Well-Being. So Joe, the floor is yours.
Thank you very much, Anton. It’s really good to be here. I again join the others in saying that I wish it could be in person. I agree with the point that you made in the beginning that we’ve learned a lot about our society, about our economy, about our government from this pandemic. It’s like pathology in medicine, you learn a lot from putting the system under stress. And I think at least in the United States, we found things didn’t go quite as well as we would have hoped. We’ve seen a lack of resilience. Our private sector, our markets couldn’t even produce masks and protective gear and distribute them to where they were needed. We’ve seen the importance of government. We all turn to government in times of disaster. And this is clearly a time of disaster. We’ve seen that 40 years of denigrating the role of government has actually worked. It’s worked in weakening the institutions. Daron pointed out the weakening of the CDC, which had been a very strong institution, the abandonment of the White House Office of Pandemics. We had created institutional structures designed to prepare us for a pandemic, but then a weakening of institutions led to the abandonment of those institutions.
As we look at what has happened, it’s natural to think about it in relation to other crises, and we can see some shared underlying factors. In the 2008 crisis, the last crisis, we saw a weakening of the state, with financial deregulation being one of the conditions leading to the crisis. There too we saw short-sighted behavior, on the part of the banks, leading to the crisis. Here, it’s short-sighted behavior on the part of firms that has resulted in an economic system lacking resilience.
We’ve also seen in this crisis that this is not an equal opportunity disease. It goes after those with poor health. Those with poor health are disproportionately people who are poor, especially in the United States where we have not recognized the right of access to healthcare as a basic human right. I’ll make some comments later on that. Wealth inequality is clearly part of the preconditions that have exposed the United States so strongly to the disease, and one of the reasons why we’ve had the highest rate of death. The problem of inequality is going to be exacerbated by the crisis. And I’ll try to explain that.
But the topic of the seminar is about AI. AI is a major structural change in the economy. One of the things that we’ve seen over a long period of time, and that is going to be exacerbated by the pandemic, is that markets don’t handle these kinds of large structural changes well. That’s not one of the strengths of markets, and it inevitably requires government assistance to manage that. What we’ve seen is maybe not so optimistic. There are three things which are going to reinforce, I hope, what has already been said. The first is that the long-standing weaknesses of the American economy, but also other economies, have been exposed. The second is that there’s a clear possibility of further adverse effects from the pandemic. But third, echoing what’s been said, that it’s not inevitable. It’s a matter of policy. And then the final question which I won’t get to, I hope we get to in the Q&A, is one that Daron raised at the end of his discussion: the question is whether we actually do what we could do. And that is a matter of how our democratic institutions respond.
Let me begin by noting one aspect of the pandemic: it has led to a fundamental shift in the cost of labor versus machines or robots. Daron pointed out very clearly that what has been happening is a shift in technology, labor-replacing versus labor-augmenting innovation. That is one of the reasons why the labor market is not working well and why the output share of labor has gone down. This pandemic has emphasized, even increased, the virtues of robots. Robots don’t get the coronavirus (though, obviously, computers do get computer viruses). And there is an ongoing war in both spheres, with some uncertainty and some hope that the good guys, the antivirals, will win over the virals. Robots, even if they do get viruses, don’t need to be socially distanced. And all of this adds to the shadow price of labor. It makes labor less attractive relative to capital. And that will exacerbate, I worry, some of the trends that Daron talked about. There was an interesting article this morning in the New York Times about a city in the UK where robots are being used for deliveries. The company had already been set up before the pandemic, but afterwards it found a vastly larger market. If this is so, it will mean the problems with unemployment and inequality that we were facing before Covid-19 will be even worse.
There is a failure to design an adequate response in the United States to growing unemployment. The unemployment rate in the United States is clearly already at 20%, and a broader measure of unemployment, which we call U-6, is clearly north of 25%. And this growing unemployment is in spite of massive spending — almost $3 trillion fiscal support, and an equivalent amount of monetary support. What is equally disturbing is an unwillingness on the part of some to continue to support this spending, even though it’s obviously needed. That’s obviously very worrying, it’s a clear sign that some aspects of our solutions may not be working as well as they should.
At one level, one can say it’s not a surprise that things didn’t work out as well as we would have hoped. Everything had to be done in a rush. But the fact is that countries all over the world had to do it in a rush, and in some countries the institutions actually worked. That’s the hopeful side. In New Zealand, not only did they avoid the massive increase in unemployment that we have had in the United States; the disease was almost brought down to zero, and there’s strong social cohesion. Other democracies have done so as well. As Daron said, some of the more authoritarian countries have also brought down disease numbers, but in ways that obviously wouldn’t be acceptable to us. The good news is that there are countries like New Zealand and South Korea, democracies, that have brought it down and gotten the disease under control.
What is most disconcerting is the markedly different perceptions and beliefs about the disease, its consequences, and what to do about it, reflecting and deepening pre-existing divides. And that goes to the point that Daron and Diane emphasized: the importance of trust in science and trust in experts. In large parts of our society there is a lack of that trust, and that’s been exposed very strongly by the pandemic. To me that suggests that we may not be able to respond appropriately to the enormous social and economic challenges that AI may present going forward. Now, some have suggested that the pandemic will, in the short run, reduce the problems posed by AI and robotization because it is causing onshoring. But I think that’s an overly optimistic note. Onshoring will be done by robots, or by machines more broadly. The jobs that have been lost to robotization and de-industrialization won’t be regained.
In fact, as I said earlier, the short-run impacts are going to be just the opposite. Those who can do remote work will work remotely; the high-tech workers have been relatively little affected. We've gone on with our teaching on Zoom. It's the others, the people who work in restaurants, who have faced job losses. In the short run, the problems of inequality to which I referred are likely to get worse. The disease has exposed, and is likely to exacerbate, these inequalities.
In the United States, the disease has also exposed the weaknesses of our whole system of social protection. America has the least adequate system of social protection, lacking even basic provisions such as paid sick leave. This really illustrates the worries about our institutions. Congress recognized the importance of paid sick leave: we don't want people who are sick with COVID-19 going to work. And since almost half of all Americans are living paycheck to paycheck, if they get sick and there is no paid sick leave, they have to go to work. Congress passed a law requiring paid sick leave just for COVID-19, but then, under lobbying from major companies, companies with more than 500 workers were exempted. That reflects a kind of short-sightedness on the part of the companies; you can say it also reflects a lack of humanity. And it reflects an inadequacy in our political process that they would let this group win the day. These companies employ almost 50% of all workers in the private sector.
Another example is that we have asked workers to go to work without protective gear. We have an agency within the government called OSHA, that’s supposed to protect workers, but it has still not issued regulations concerning the disease. I referred earlier to the lack of resilience in our economy, but it’s a lack of resilience for which the poor pay the highest price. Indeed, the rapid restructuring of the economy, accelerating change already going on, such as in retail, will create a pool of unemployed that would, even in a normal recession, take some time to work off.
The next point I want to make is the same point that Diane emphasized: the restructuring of the economy has advantaged large digital firms, firms with large elements of monopoly power related in part to superstar and network effects. The problem of the lack of competition in this key sector, something that I talk about in my book, People, Power and Profits, is getting worse as a result of the crisis. So too are the problems of inequality, which are linked to this monopoly power and to some of the other effects I talked about. In the medium term, we shouldn't have a problem there, but our politics may lead us to have one. We will need massive investments for the green transition, and there are gaps left by twenty years of underinvestment in our infrastructure. Filling those gaps should create more jobs than we will be losing. But that will require government revenue, and that's the question: will we have the political will to make these investments?
I have even more worries about Africa. The cheap labor that enabled export growth in manufactured goods was at the center of the development strategy in East Asia, and that won't work in Africa. As I said, we shouldn't have a problem in the medium term. And in the longer term too, we shouldn't have a problem: we should be able to use our tax system and our intellectual property rights system to ensure the benefits are shared by all. This is particularly important in light of COVID-19, which can be viewed as a large negative technology shock. Negative technology shocks and similar events give rise to distributive battles: who will bear the cost of the reduced standard of living? Such distributive battles can be particularly ugly in countries lacking a certain degree of underlying social solidarity, as we have seen in the United States.
I want to end on a couple of more positive notes. The first is that we should be able to steer innovation toward what has been called intelligence-assisting innovation rather than labor-replacing innovation. Maybe that steering is itself a task that AI could be trained to help with. Daron emphasized the problems of misguided incentives encouraging labor-replacing innovation. There are others too: the fact that monetary policy has kept the cost of capital down to a negative real interest rate obviously exacerbates the incentives to adopt human-replacing robots. But if we can steer innovation in another direction, then the problems that we have with AI will be mitigated.
The second more positive note is that government has never intervened more strongly in the economy. Never has there been so much spending and so much lending; in the midst of this pandemic, governments are making life and death decisions over enterprises.
We are shaping the economy, or failing to do so. The choices we make now will have long-lasting effects. So we have the potential to use conditionality on public lending programs in ways that can really reshape our economy and make us better able to handle the problems of inequality we're facing, and some of the governance problems we're facing. The question is: will we have the institutions that will direct this money to create the post-pandemic society and economy that we would like? So far in the United States the answer is no. So far in other countries, the answer is partially yes. Let me stop there and we can have a discussion.
Thank you so much Joe. Let me now also bring all panelists on screen. We have just heard three really thoughtful perspectives on the effects of the pandemic on our economy and also how to think about our societal response to other large shocks.
I thought I would start the panel discussion by posing a perhaps somewhat personal question. What has surprised you over the past few months, and are there any specific lessons you feel you have learned that give you a new perspective on how easily, and in what ways, our economy and our society can adapt to large shocks? And how should this inform how we react to the prospect of ever more automation?
Let me make one remark which will be a partial answer, and also a riff off what Diane said, because I think it's an illustration of the power and the dangers of technology and our governance challenges. Before this crisis I was probably close to one extreme on issues of privacy, in that I saw the control of data by governments and the control of data by companies as a real threat to democracy. I have partially changed my mind, in the way that Diane already anticipated. It is clear in the midst of the pandemic that data sharing, use of data on infections, and contact tracing are all critical for saving lives. So how do you square that with the issues I worried about? In fact, I think this is a critical test case for some of the issues that both the other panelists and I talked about. I've been somewhat frustrated by conversations I've had over the last few weeks with computer scientists who a year ago would not have paid sufficient attention to issues of privacy and their importance to democracy, and who now object to the use of data sharing to combat the pandemic. I think all of these conflicted responses are an implication of our inability to visualize, understand, and imagine a better governance for data.
I think in an ideal world, what we would say is that of course right now we have to use all the data we can in order to combat the pandemic, but do that with a proactive plan for doubling down on protecting privacy as soon as the pandemic is over. That means both controlling the use and abuse of data by governments and controlling and containing the use and abuse of data by companies. Now, some people come to very different conclusions because they have different views on what is feasible institutionally. For example, if you believe that once you open the gates to companies or governments using private data, you can never take that back, you're going to be much more cautious. If you think that our institutions have failed so badly at the moment that we can never double down on protecting privacy and strengthening democracy, you might have a very different view. I think this privacy issue is a test case, and I do still retain a cautious optimism that recognizing the issues, publicly debating them, and understanding what sorts of institutions can deal with them will open the way to a better governance of data. That is very closely related to the governance of AI.
One of the questions that I saw is: do we need broad institutions that protect us in terms of inequality, public safety, and democracy, or do we need technology-specific institutions? I do very much believe that broad institutions are the first line of defense, but we also need technology-specific governance structures, and data is one of them. AI is another, both because of its ability to change the political discourse, to transform privacy and political activism by individuals, and also because of its labor market effects. Replacing labor with machines is sometimes very productivity-enhancing, but it also has external effects, because it really damages the very fabric of society. It has to be balanced against other social objectives. Thank you.
I think Daron is absolutely right to point out that this is a critical moment for thinking about the kinds of institutions that we trust to handle data, and technology more generally. So that kind of thinking is really important. But to answer your question more directly, the thing that struck me is the way that people sit in intellectual silos, even in the face of a major crisis like this. We're a self-selected sample of people who talk to computer scientists a lot, so we are already crossing disciplinary boundaries in that way. But I've been quite struck in all the discussions I've observed in UK government and elsewhere that medics talk about medical issues, geneticists talk about genetic issues, and as for the epidemiologists and the economists, maybe they're starting to talk to each other. This really highlights to me the importance of thinking about ways to integrate social science and different strands of science, because the problems that we're facing, be it the covid pandemic or climate change or geopolitical disruption, don't fit into narrow silos. That surprised me and concerned me. I hope we can also take this opportunity to do some more of that joining up, because if we put a lot of effort into medical innovation only, and not into the social context and the institutions that would lead people to trust the health system delivering it, then we're going to fail in tackling this crisis.
I agree very much that the key is creating institutions. I'm optimistic that we can create them, but let me express a concern that one has to go a little bit beneath that. The question is why isn't there trust in our institutions? And why should there be some skepticism? Well that goes back to, you might say, the word power or inequality in our society. If we think that Facebook is in one way or another going to write the rules, we're not going to feel comfortable with the rules that come out. And if we think our society has a lot of inequality, which it does, and that we have a political system where that economic inequality translates into political inequality, then we're not going to trust the institutions that emerge out of the political process that are supposed to protect us. They'll be protecting the one-tenth of 1%. That's why I've always said at the root, we have to begin by dealing with the underlying problems of inequality, the problems of ensuring that we have competition, and of course, that's interactive. How do we do that without good institutions?
Let me give one more example that's a little different from the data problem, but one that was of great concern to me before the pandemic and has been made very clear by it: misinformation. The spreading of misinformation about the pandemic response has been a major problem. What's interesting is that before the pandemic, Facebook and other technology companies said they didn't have the technology to address misinformation. None of us really believed it, because AI has the technology to do it, not necessarily perfectly, but reasonably well. Then finally, when it became clear that our country's health was at risk, they did come forward and say they were going to take down misinformation about responses to the pandemic. But they remain very hesitant to take down misinformation about the pandemic put up by political leaders. So again, a political and institutional decision, which is obviously a problem.
Finally, let me say, in response to your question about what has surprised me: one thing was the willingness to come up with a sizeable response, on the one hand, and the magnitude of the failures in the design of that response on the other, which I find quite colossal. It wasn't as if they didn't know about the alternatives that were being discussed. And a third surprise is the willingness of one of the two parties not to have a comprehensive program and not to have a sustained program, saying we ought to pause now. The social divisions in our society that are forming over this issue are actually a surprise. We can't even, on this particular issue, come to some agreement about reality.
Thank you Joe. A number of people have posed questions in our question box around a familiar theme: automation will on the one hand create more abundance, but on the other hand, we are concerned about whether the resulting prosperity will be shared or whether it will just benefit the few. What is your take on this question, and do you view it differently in a post-COVID world? Are you more optimistic or maybe more pessimistic about how we can resolve this tension? Let me go in inverse order now: starting with Joe, then Diane, then Daron.
Absolutely, the fact that we have more resources means in principle every group in our society could be better off. I alluded very briefly in my introductory remarks to the fact that we can use intellectual property rights and taxation; we have lots of incentives that we can use to make sure that the benefits are shared. Part of that is competition policy, to make sure that you don't have an agglomeration of market power. There are lots of things that we could do. I guess I have an ambiguous reaction coming out of the pandemic as to whether we will. On the one hand, I certainly get a very strong feeling that a lot of people have realized that the pandemic has exposed the magnitude of inequality in our society, and there is a lot of discussion of inequality and unfairness, and a lot of resolve to deal with that. On the other hand, there is the point I made before: the divisions in our society have led some of the people who should be the strongest advocates of pro-equality policies to actually resist the kinds of policies that would enable us to deal more effectively with these problems.
If you think about the 19th century, technological change and automation brought about a long period of great inequality and low wage growth. Then if you think about the 1950s and 60s, which also saw a lot of automation, we had the opposite outcome: reduced inequality, lots of good jobs for middle-class people, and rapid wage growth. So the question is how you steer yourself into that mid-20th-century pattern rather than that late-19th-century pattern. One of the keys for me is the skills that you need. Anybody who deals with big data sets and AI now knows that actually handling the data is a craft skill, and people don't have any very systematic ways of passing on that skill. It's a learning-by-doing system: you learn it at the feet of the master and gain those skills yourself. So what we need to do is both make the technology itself more routine and change the supply of labor, the people with skills, to make sure that the technology becomes less of an inequality machine than it has been to date. The pandemic is probably an opportunity to start to create some of those skills, because governments are going to have to think about how to avoid the scarring of the large groups of young people coming into the labor market and needing to find themselves a good career and good job prospects. So on balance, I think I'm probably a little bit optimistic about that, but this is very uncertain. Who knows?
I think this is a really interesting question. I’ve thought a lot about it. I’m going to just ever so slightly disagree with the other panelists in the sense that I think even though automation is an enormous engine for productivity growth, it is also potentially very disastrous if the attitude is “automate everything in sight.” And the reason for that is threefold.
First, it isn’t actually true that automation always increases productivity. Automation has the promise of increasing productivity, but if it involves substituting machines that are only slightly more profitable than labor, it doesn’t increase TFP. And if there are policy distortions such as the ones I hinted at, and there are many others related to labor market structure, it may actually reduce TFP.
Second, my belief, on the basis of my work and data analysis, is that periods such as the one Diane described (middle-class wage growth, broadly shared prosperity, stable or sometimes even declining inequality), though they coincide with automation, critically depend on other technological changes. Periods that have mostly automation and no other technological changes have never brought that kind of prosperity. The reason economists have often not been clear on this is that we have imposed on the data models with only one type of technology, blinding ourselves to the critical question of which types of technologies are doing what. So automation can increase productivity, but it is generally a force towards greater inequality and slower wage growth. It needs to be counterbalanced by other technology.
That brings me to reiterate what I said earlier: technology policy, redirecting technological change away from just automation, is critical, especially for AI, which has so much promise to be complementary to humans. My reason for being very worried about "let's just do AI on everything and get rid of the troublesome humans, who are now proving even more troublesome because they can get COVID-19" is that we also have no great experience of generating shared prosperity based on redistribution. There was a question on predistribution, and I very much agree with it: predistribution is critical. It is very difficult, for political reasons but also for social reasons, to create a harmonious, well-functioning, democratized society when everybody depends on bread and circuses from the hands of government, or its new version, UBI.
We really need people to be earning less unequally distributed wages, and that means middle-class wages generated by the technologies, working conditions, and bargaining situations in the workplace. That's going to become more and more difficult if automation just gets out of control. Of course, redistribution helps, especially for a social safety net, providing public services, and keeping in check, through progressive means, the incomes at the top of the distribution; but it can never replace the market system generating more equal wages. And that will never be possible if we double down on automation, because if you just have more and more automation technologies, bargaining power cannot survive: if workers ask for higher wages, firms will just shift to machines that are getting better, whereas humans are not. So it's absolutely critical that we find ways of investing our ingenuity, especially in the field of AI, in making humans more productive as well, not just machines. Thank you.
Thank you, Daron. And you have just touched upon the next question that a member of the audience has posted which was on predistribution versus redistribution. So I wanted to ask Diane and Joe if they could also share their thoughts on this question with us.
I don’t really have a lot to add on that. The point about increasing labor skills is the point about predistribution: shaping the configuration of labor supply and demand. That's exactly why I put emphasis on it in my previous answer. One other point to make, perhaps, looking back again at that history, is the role of institutional innovation. We've been talking about automation, but we might also want to think in this context about what kinds of new institutions we would want to see emerging out of this. They might not be able to deliver financial redistribution or predistribution, but they can alter things like the distribution of social capital and the distribution of natural capital among people. And, you know, income matters a lot, finance matters a lot, but these are other assets that people really need.
I just want to say I strongly agree with what both Diane and Daron said. I think there should be a focus on predistribution, or what used to be called just market income. I want to add that it's a comprehensive issue of the rules of the game and the investments. What do I mean by that? We've talked about competition policy; there is also corporate governance policy. One of the sources of inequality is CEOs being able to shape what the firm does, getting more for themselves, and making decisions about labor-saving innovation versus other kinds of innovation. If we add more representation of workers on boards, we might get different decisions: they might not view labor as an irritant but as part of the objective of society.
One of the striking things about the pandemic, as I mentioned, was that employers did not provide sick leave or protective gear; in many cases it was only the unions that succeeded in getting that kind of protective gear. That is an extreme manifestation of the lack of social responsibility, or short-sightedness, on the part of corporations. And I mentioned before monetary policy, which shapes the incentives between labor-replacing innovation and intelligence-assisting innovation, the kind that strengthens the productivity of labor and increases the demand for labor. There is a whole set of policies that shape how our economy works and affect the market distribution of income, and we ought to be focusing a lot more on them.
Thank you. We are almost at the end of our webinar, and time is always way too short. But I wanted to ask our panelists if they are willing to leave us with a very short, thirty-second parting thought on this theme of what we can learn from the pandemic for the future of governing AI. Let me go through in alphabetical order again, so with Daron first.
Let me agree with one thing that Joe said. The ability of most Western societies, and others beyond, to respond to the crisis with large stimulus packages, and the general agreement within society, despite a lot of misinformation, that this problem has to be dealt with both by containing the virus and by bolstering healthcare systems, are, I think, hopeful signs that when push comes to shove, there will be some agreement on key issues. That is the one straw we can cling to in terms of remaking institutions in the future.
I certainly think the mood has changed. People are ready for a different kind of system. They're very aware of the many inequalities that have been exposed and exacerbated by this crisis, so that makes this an opportunity. There is a cliché: don't let a good crisis go to waste. Grab that opportunity. My concern is that we recently let a good crisis go to waste, in 2008; we did far less than I expected coming out of that. All of us who are engaged in this debate really need to make sure we grab the opportunity now.
I agree very strongly again. In fact, maybe this is one of those instances where the second time around you actually learn the lesson that you should have learned the first time. And the lesson is very much that we need a better balance of the market and the state. We put too much weight on the view that markets will solve all problems, and we didn't realize how much you need regulation, public investment in science, good institutions, and trust in experts; you need to build up trust in these institutions rather than tearing them down. You won't be able to get that unless you have societies with more solidarity, and that kind of solidarity will only be achieved if we get a society with more shared prosperity, more equality. So that agenda of equality is both the object of what we're trying to achieve and a necessary condition for the kind of society that we want.
Thank you very much, Daron, Diane, Joe, for sharing your thoughts with us. Thank you to everybody in the audience who has joined us today. I should also let you know that all three of our panelists today have agreed to give a full webinar on more specific topics in the coming months. So please check back on our website frequently as we announce future events. I hope to see all of you back with us soon. Goodbye.
Acemoğlu, Daron and James A. Robinson (2013), Why Nations Fail: The Origins of Power, Prosperity, and Poverty, Penguin Random House.
Acemoğlu, Daron, Ali Makhdoumi and Azarakhsh Malekian (2019), Can we have too much data? VoxEU.
Coyle, Diane (2020), Markets, State, and People: Economics for Public Policy, Princeton University Press.
Coyle, Diane (2020), The Value of Data, Bennett Institute, Cambridge University.
Korinek, Anton and Joseph E. Stiglitz (2019), Artificial Intelligence and Its Implications for Income Distribution and Unemployment, in The Economics of Artificial Intelligence: An Agenda, eds. Ajay Agrawal, Joshua Gans, and Avi Goldfarb, University of Chicago Press.
Stiglitz, Joseph E., Jean-Paul Fitoussi, and Martine Durand (2019), Measuring What Counts: The Global Movement for Well-Being, The New Press.
Stiglitz, Joseph E. (2020), People, Power and Profits, W.W. Norton.