GovAI Webinar #3: Noah Feldman, Sophie-Charlotte Fischer, and Gillian Hadfield on the Design of Facebook’s Oversight Board
Introduced by Allan Dafoe
Featuring Noah Feldman, Sophie-Charlotte Fischer,
and Gillian Hadfield
Allan Dafoe 00:09
Okay, welcome. Hopefully you can all hear and see us. So I am Allan Dafoe, Director of the Center for the Governance of AI, which we often abbreviate GovAI, which is at the University of Oxford’s Future of Humanity Institute. Before we start today, I wanted to mention a few things. One, we are currently hiring a project manager for the center, as well as researchers at all levels of seniority, for GovAI and the rest of the Future of Humanity Institute, including researchers interested in further work on this topic. So, for those of you in the audience, take a look. A reminder that you can ask questions in this interface at the bottom, and you can vote on which questions you find most interesting. We can’t promise that we will answer them, but we will try to see them and integrate them into the conversation.
Okay, so we have a very exciting event scheduled. We will hear from Professor Noah Feldman about the Facebook oversight board and his views about what a meaningful review board for the AI industry would look like. Noah is a professor of law at Harvard Law School, an expert on constitutional law, and a prominent author and public intellectual. We’re also fortunate to have two excellent discussants with us today. Gillian Hadfield, who’s in my bottom right, maybe it’s the same for you, is a longtime friend of GovAI. She is the director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto, where she’s also professor of law and of strategic management. She also has affiliations at the Vector Institute for Artificial Intelligence and OpenAI. Gillian has produced a lot of fascinating work, including some on AI governance, and specifically, I’ll call out her work on regulatory markets for AI safety. She’s also doing some interesting work on how machine learning can learn and adapt to human norms. Our second discussant is Sophie-Charlotte Fischer. I’ve actually known Sophie from before GovAI was even established. She was a founding member of GovAI’s predecessor, the Global Politics of AI Research Group, at Yale in 2016, if you can believe those old days. She is currently a PhD candidate at the Center for Security Studies at ETH Zurich, and she continues to work with us as a GovAI affiliate. She has done great work on a range of topics, including the ethics and law of autonomous weapons, US export controls for AI technology, and the idea of a CERN for AI, specifically, perhaps, one that might be put in Switzerland. In 2018, she was listed as one of the 100 Brilliant Women in AI Ethics, and in 2019 she was the Mercator technology fellow at the German Foreign Ministry. So thank you, both of you, for joining us.
Now I’ll share a little bit of background on how I came to this topic and learned about Noah Feldman. I first learned about him from reading his fairly recent book on the US founding father James Madison. At the time I was, and I still am, struck by how much of the work in AI governance has the character of a constitutional moment: we have an opportunity, it seems, to set not just norms for the future, but also to deeply root institutions which could shape decisions for decades to come. So at the time, I wanted to learn more about James Madison, as he seemed to be one of the best examples of how someone who is deeply committed to scholarship can have a centuries-long impact through the formation of long-lasting institutions. Noah Feldman, as I understand it, came to this conversation the other way around. In 2017, he had just finished publishing this biography of James Madison, and he was visiting the Bay Area talking to some people in the tech community, staying with his friend Sheryl Sandberg, when this insight, this constitutional insight, came to him: that what Facebook needed was a Supreme Court. He sketched what that would look like; Sheryl Sandberg and Mark Zuckerberg were interested. And now, two-plus years later, the oversight board is on the cusp of starting its work and represents, in my view, a radical new experiment in corporate governance and technology governance. I find this origin story so fascinating because it shows, as with James Madison’s life, how a life of committed scholarship can suddenly, and potentially profoundly, offer useful insights that can shape history. So with that, we will now hear from Professor Noah Feldman about his thoughts on the Facebook oversight board and the governance of AI. Noah.
Noah Feldman 04:30
Thank you so much for that generous introduction, I’m really grateful for it. I have a special feeling for Oxford from when I was a doctoral student. And you say that 2017 was a long time ago, but I am of the generation for whom, when we were students at Oxford, the computer center was one room by the Old Parsonage Hotel with a bunch of mainframe computers that you could use for email. And the idea that the university, which treated my work on medieval Islamic political philosophy [garbled] talk about Aristotle and Plato than it was to talk about the Middle Ages, would eventually become a leader in spaces like the governance of AI was literally unimaginable. So I’m thrilled by that, and very excited to be here with all of you today, under the auspices of the GovAI webinar.
As you mentioned, I came to the issues here from the standpoint of governance, and specifically from the governance standpoint of constitutional design. If you think about it, constitutional design as a field is about the management of complex social conflicts through the creation of governance institutions. That’s not a terrible summing up of what the whole field of constitutional design is about. And as you mentioned too, I was thinking about constitutional design in a specifically American context because of this book I wrote about James Madison, who was, after all, the chief designer of the US Constitution. But I’d also been lucky enough to work most recently in Tunisia on constitutional design there after the Arab Spring, and earlier in Iraq on constitutional design there, although under much different and worse circumstances of US occupation. So the design issues were of recurring interest to me. But those were always in the context of states. It was always the state as the locus for the creation of a governance institution to manage some form of social conflict. And I was in fact at Stanford, giving a talk at Michael McConnell’s seminar. We’ll come back to Michael McConnell in a few moments because, as some of you may know, he’s one of the new co-chairs of this Facebook oversight board, and he is a constitutional law professor and former judge. I was speaking entirely about Madison, and that was what was on my mind. And then, as you say, I was also having some conversations with people at Facebook about content moderation, because like so many other people in the field of free expression, which is one of my fields, I was trying to figure out what free expression was going to look like in a world where more and more expression took place on platforms.
It was that juxtaposition, thoughts about the new challenges of content moderation and, simultaneously, the idea of institutional design to manage social conflicts through constitutional mechanisms, that I think led me to think, on a long bike ride up in the hills behind Palo Alto, that Facebook and other platforms could benefit from the introduction of a governance mechanism that has traditionally been used, and intensively for the last 50 or 60 years in liberal democracies, to manage the social conflict around what speech should be allowed and what speech should not be allowed: namely, the Constitutional Court or Supreme Court model. That is, in its essence, a model where there is an independent body that is not directly answerable to the primary first-order decision maker; that has a set of clearly articulated principles on which it relies to make decisions; and that transparently describes to the world why it has made the decisions it has made, via an explicit balancing of competing social values, such as the value of equality, the value of dignity, the value of safety, and the value of free expression.
And so I thought, perhaps an institution like that could be tried in the context of a private-sector corporation, even though essentially it had never been done before. And the reason it hadn’t been done, I think, is largely that we imagine the institutional governance solutions associated with states as solely appropriate for the public sector, and not as appropriate for private actors or private entities. And of course, the difficulty with that restrictive view is that it deprives us of a whole realm where serious attempts to solve institutional governance problems have been made, on the ground that, well, this is the private sector and not the public sector. If you imagine the kind of cognitive divide that people often make, they think: well, if the government is going to regulate us, then it would be appropriate for the government to use its institutional governance mechanisms. But if we’re going to be regulated by a private-sector entity, a whole other set of mechanisms are appropriate and kick in. And that is really an artificial divide. I wouldn’t say it’s arbitrary, but it’s an artificial divide in the negative sense of the word artificial. I know with this audience the word artificial is itself subject to deep analysis. But it’s a divide that is not necessarily valuable pragmatically; it’s simply something to treat as an opening reality that one can then explore and potentially explode. And so that’s essentially what Facebook subsequently did. And I was lucky enough to be advising them throughout these last two and three-quarters years, to the point now where the oversight board is in existence; it has four co-chairs and 20 total members. One of them, as I mentioned, is Michael McConnell. Others are people of different backgrounds.
There’s a former prime minister of Denmark, there’s a prominent constitutional law professor at Columbia University Law School, and there’s the dean of a law school in Colombia who is also a special rapporteur for the United Nations on free expression. They’re a diverse group of people from all over. The core design remains the one that basically struck me on the bike ride. Just to reiterate: it’s independent; its members are appointed to three-year terms that are automatically renewable, and therefore they are not hired and fired by Facebook. They are paid, but not by Facebook; they’re paid by an independent trust that was funded by Facebook and then spun out of Facebook to become independent. Their decisions will be rendered transparently and publicly, they will give reasons, and reason-giving is hugely important in this context. And their decisions will balance competing values. Not least, in addition to its so-called community standards, which are Facebook’s content moderation rules, Facebook has also articulated a set of high-level principles, what they call values, that function effectively as a set of constitutional values here and that are also relevant to the decisions that will be made. International legal principles will inform, but not dictate, results.
So that’s the basic structure of what’s going on here. I’m thrilled to answer questions about the technical sides of this, the difficulty of it, the design of it. I want to say one or two words about its purpose overall, and about two ways to look at it: a more optimistic way and a more cynical way. And then from there, I’m going to tack to talking about ways that similar or related governance models could potentially be used in other contexts, including the governance of AI. So that’s my thought roadmap. Let me start with two ways of thinking about the purpose of the oversight board; let’s call them a publicly interested way and a more cynical, corporate-interest way. So let’s start with the more publicly interested approach. Because I do constitutional law as my day job, we always have to look at everything through these two lenses. Every constitution in the world expresses high-flown values and is institutionally implemented by people who often really believe in those values. And yet every constitution is also a distribution of power by real people in the context of real governments and real states, where politics and self-interest dominate decision making, as we all understand it in the real world. So for those of you who don’t move in a world where these two frames are constantly going back and forth, I just want to flag that these are the two frames that I use all the time, and I’m going to use them here.
From the publicly interested frame, it’s just really clear that crucial decisions on issues that affect billions of people should not be made, ultimately, by unelected tech founders, CEOs, and COOs. And I think that’s rather obviously true in the case of free expression, as Mark Zuckerberg is the first to acknowledge: he should not be deciding himself whether the President of the United States has or has not breached some ethical principle of safety when he criticizes Black Lives Matter protesters, and therefore should have his content taken down, or whether the President of the United States, running for office, is participating in a political process that needs to be facilitated, and therefore what he says should be left up. That’s just much too important a decision to be left to Mark, or to Mark and Sheryl, or to the excellent teams that they nevertheless put together. I would argue that it goes even beyond those kinds of hot-button issues and extends to the more in-the-weeds but hugely important questions. What counts as hate speech? What hate speech should be prohibited? What hate speech should be permitted because it’s necessary to have some forms of free expression? What forms of human dignity are respected by displays of the human body? What forms of human dignity might be violated by certain displays of the human body or certain human behavior or conduct? These are questions on which reasonable people can and do disagree. They’re questions that implicate major forms of social conflict. I’m not a relativist; I don’t think there are no right answers on these questions. But I do think there’s a lot of variation in what different societies might come up with as the right answers. And especially when you consider that the platforms cross social and political and legal boundaries, it just makes almost no sense for the power to make those ultimate decisions to be concentrated in just a few people.
Now, that doesn’t mean that the decisions don’t have to be made; there has to be responsibility taken. And so the objective of a devolutionary strategy, which is what the oversight board uses, is to ensure that there are people making these decisions who are accountable in the sense that they give reasons for their decisions, accountable in the sense that they explain transparently what they’re doing, accountable in the sense that they can be criticized, but who are nevertheless not easily removable by the for-profit actors who are involved. The result of this, again speaking in terms of the public interest, should be – it may not be, but experimentally, it ought to be – some legitimacy for the decision-making process. And what I’m talking about is legitimacy in what philosophers call the normative sense: something is legitimate because it should be legitimate, it ought to be considered legitimate. And from a publicly interested perspective, we should all want important decisions to be made in ways, and with values, that ultimately serve the goal of public legitimacy. Now, let me turn briefly to the cynical, self-interested perspective. Facebook is a for-profit company; it is governed under the corporate law of the United States. And by virtue of being governed in that way, its management and its board of directors have certain fiduciary duties to its shareholders, which include the duty to make it an effective and profitable company.
If Facebook’s senior management hadn’t believed that it was in the interests of the company to devolve decision-making power on these issues away from senior management, they would actually have been in breach of their fiduciary duties in advocating and then adopting the strategy. So in that sense, when somebody says to me, and people say this to me all the time, “Well, Facebook just did this because it’s in Facebook’s self-interest,” my answer to that is twofold. First of all, yes, that’s absolutely right; they would have been in breach of their own fiduciary obligations if they had thought they were acting against the company’s interests. And second: please show me any example, anywhere in the world, of any person or entity with power giving up power for any reason other than that they believed, in that given circumstance, they had more to gain by giving up that power than by keeping it. This is an insight from constitutional studies. Any would-be dictator would like to just be the dictator all the time. It’s really nice to be the dictator. But we recognize that governments based on dictatorial principles are frequently, not always, but frequently, unstable, and in effect lead to bad outcomes, not just for the general public but also for the dictators, who have a bad habit of ending up dead rather than, you know, beloved and in retirement. And so systems of power that involve power sharing are always shot through with structures of self-interest.
So then that raises the question of why anybody should trust an oversight board, or any other devolutionary governance experiment, that is adopted by for-profit actors. We might imagine that if the state imposes something, it would reflect the public interest, but if it’s adopted by private actors, we might say it should never be trusted. Well, part of the answer to that is that even state bodies aren’t purely publicly interested. Political science as a field has spent much of the last half century showing the ways that state actors, including governmental actors, are privately interested, notwithstanding that they have jobs where, in principle, they’re supposed to be answerable to the public. So there is no perfect world where everybody is perfectly publicly interested. But more importantly, the reason the public should be able, under some circumstances, to trust a system put in place through the self-interest of corporate actors is that it is in the self-interest of those corporate actors to be trusted. And to be trusted in this day and age, they must impose transparency, independence, and reason-giving, not because it’s their first-order preference. After all, this is not how content moderation was initially designed on any major platform, but because they realize that they have so much to lose by continuing the model they’ve been following, and they need to try something new. So the cynical view is that in this day and age, companies can’t get away with merely appearing to devolve power or appearing to be legitimate; they have to actually go ahead and do it. You might say, well, their most effective game-theoretic strategy is to appear to be not getting away with it, but actually to be getting away with it. That might be true.
And it’s an empirical proposition to say that in this day and age, with as much scrutiny and skepticism as exists, it’s very difficult for a corporate actor to get away with that, in a way that might not have been true as recently as a quarter century ago.
Okay, let me say a word now about other contexts and other possible governance solutions. Having spent the better part of the last three years incredibly focused on the problem of content moderation and the solution of a governance institution modeled loosely on a constitutional court, I have now shifted my own attention to thinking about other kinds of governance institutions, which could also be borrowed from different places, shapes, and contexts, and which might be appropriate to the governance of other kinds of social conflicts that arise in technology spaces. And here I come close to the topic of your ongoing seminar and your program: namely, the question of the governance of AI. Now, I actually had right in front of my face, not right when I was designing this thing in the very beginning, but very quickly in the process, the example of the Google AI committee, which came into existence and went out of existence in an incredibly short period of time, a story that you all know better than I do. So I had in front of me, from an early moment, exactly what not to do.
So we can stipulate that the model of a corporate-appointed group of advisors, on its own and without more, is a high-risk and unstable model to adopt in circumstances where the corporate actor would react very negatively to criticisms of the membership of the board. But that doesn’t mean there aren’t other mechanisms worth exploring, other models of governance. So let me just name a few of them, and then we can talk more in our conversation about which of these might be adaptable, in different circumstances, to different aspects of AI governance. One interesting model comes not purely from the public sector but from the educational and medical sectors: the model of the institutional review board, or IRB. Those of you who are social scientists are used to dealing with IRBs, and the same will be true of those of you in the harder sciences whose work interfaces with important ethical considerations. IRBs are quasi-independent bodies, typically constituted and composed by institutions, most typically universities and hospitals, that have full authority to approve or disapprove proposed research plans or programs. Their power is enormous, as anybody who’s ever dealt with an IRB knows. It’s subject to abuse, like all great powers, and the question of how to govern IRBs is itself a rich and important question. But the IRB model is a model that, remarkably, hasn’t really been tried in the private corporate sector. Sometimes there’s overlap: if you are a researcher at Harvard Medical School and you have a great idea, you form a company, but you also continue to do research in the university, and so you need to both go through an IRB and discuss it with your investors. So there are some points of overlap. But we don’t really have an institutionalized IRB model in place in the private corporate sector.
Now, IRBs have something in common with the oversight board that Facebook has created, because they’re meant to be institutionally independent, but they still belong to the institution. So the Supreme Court of the United States is part of the US government, but it’s also independent, and its independence is assured by certain institutional features, life tenure and so on. It’s not without government influence; we see that right now in the United States, where we’re in the middle of a huge fight over our next Supreme Court appointee. So there’s a politicization of one aspect of the process. But part of the reason for the intensity of that fight is that, once appointed, the Justice will have complete independence.
IRBs are typically, technically, part of the university or hospital with which they’re affiliated. So in that sense, they’re part of that entity and therefore internalize some sense of responsibility, but their members typically come from the outside, and they cannot have their judgment overruled by the institutional actor that convenes them. So could corporations create IRBs on their own? One option is that corporations could create independent IRBs of their own, if they offloaded management and devolved it through a [garbled] foundation in the way that Facebook has done. That’s very expensive, and it requires long-term commitments, but it can be done. Another alternative is to have IRB-like independent entities created by third parties. Those could be nonprofit foundations that produce their own IRBs, which are then selectively employed by companies looking for independent judgment. Or, and I’m toying with trying to create one of these right now, one can also imagine a private entity being created, either for profit or not for profit, but a private entity not growing out of an existing foundation, that maintains an IRB, or multiple IRBs, with subject-matter expertise, which can be, as it were, rented by a corporation that says: gee, we’re going to be making the following difficult decisions about deploying our AI over the next two or five years; we publicly commit ourselves to submit those decisions, at a given juncture, to this independent IRB, which has AI subject-matter expertise alongside ethicists, stakeholder representatives, and other sorts of interests. Now, there are all kinds of technical issues that need to be worked out here, which I’m happy to talk about, but I think they’re all in the realm of tractable problems.
The overall model, though, would be to actually devolve some meaningful power to these IRBs, and for their decisions to be not merely advisory but to function as actual choke points for the corporate actor. You may ask why any corporate actor would ever agree to do that. And the answer is self-interest: the corporate actor might be aware that in order to get credibility for its decisions, it needs to have those decisions blessed by a body that can only give a meaningful blessing if it can also prohibit or block certain lines of conduct or behavior. And I think there is a game-theoretic situation where that becomes desirable and even necessary from the standpoint of the company. Transparency is a really interesting issue here. And I don’t need to tell all of you that transparency, challenging as it is in any corporate domain, is doubly or triply hard in the context of AI, where you have to deal first with proprietary technologies, but also with the, fascinating to me as an outsider to AI, conceptual problem of what counts as transparency in the case of certain machine learning functions that may not be fully interpretable. It’s a fascinating conceptual question; I’m sure you’ve all spent time on this. When I taught a seminar on some of these issues a couple of years ago, we spent a couple of sessions on this fascinating issue of what counts as transparency in a situation where you have a genuinely uninterpretable algorithm, where again “algorithm,” I understand, is also a debatable term, but an algorithm that we are not able to interpret under given circumstances. There are very rich and fascinating questions there that deserve close scrutiny and attention.
That said, it is possible using an IRB structure to maintain selective confidentiality. So you could imagine a fintech company that is using a proprietary machine learning algorithm to sort the creditworthiness of applicants. Profound social conflict is inevitably going to arise there, and I can say a few more words about that if people are interested. There are many subtle questions to be worked through. For example: does the algorithm pick out discriminatory patterns that already exist in society? Does it reinforce those? If the algorithm is, quote unquote, “formally instructed” to ignore those patterns, will it replicate them nevertheless, by virtue of picking out a proxy that the algorithm is capable of picking out? These are incredibly rich, fascinating issues. I know you’ve spoken about them before here, and I’m happy to discuss them as well. But one could imagine a private company with a proprietary algorithm saying to the IRB: listen, we will show you what’s under the hood; you will agree not to share that with anybody else, but in your public account, what you will say is: we have been under the hood, and we say that what we consider to be the cutting-edge techniques that can be used to manage and limit the discriminatory effects have been employed here, and those techniques are such and such.
So imagine you agree with a very brilliant new professor at Columbia Law School, Talia Gillis, a recent graduate of the PhD and SJD programs at Harvard, who worked with me. One of Talia’s arguments is that the only really reliable mechanism for evaluating discriminatory effects in a range of algorithmic contexts is running empirical tests of those algorithms and measuring outcomes, much in the way that, historically, governmental actors, or private actors trying to use existing law to constrain private discriminatory conduct in, say, the housing context or the employment context, ran empirical tests to see whether a given company or institution was discriminating. So imagine one holds Talia’s view; it’s not the only possible view, but imagine one holds it. Then what the IRB would do is say: we self-certify that we’ve run those tests, we’ve taken the cutting-edge approach, and we’ve created a protocol, a supervisory protocol, under which those tests will be run regularly on the data as it develops. And so we’re not showing you what’s under the hood, but we’re telling you transparently what our approach is, we’re telling you transparently what the research is, and we’ll probably be able to show you the results transparently, or compel the private actor, the corporate actor that has the proprietary algorithm, to do so. That’s just a sketch of an example of how this kind of institutional governance mechanism might potentially work.
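[Editor's illustration] The outcome-testing idea described here can be made concrete with a minimal sketch of the kind of audit an IRB might self-certify. Everything in the code is hypothetical: the stand-in scoring rule, the field names, and the tiny audit pool are invented for illustration, and this is not Gillis's actual methodology. The one real ingredient is the common "four-fifths" rule of thumb from US disparate-impact analysis, under which a selection rate for one group below 80% of the highest group's rate flags possible disparate impact.

```python
# Hypothetical outcome test for a lending model. A simple rule-based
# "model" stands in for the proprietary algorithm; the audit only needs
# its decisions on a test pool, not its internals.

def model_approve(applicant):
    """Stand-in scoring rule; in practice this is the black box under audit."""
    return applicant["income"] >= 50_000 and applicant["debt_ratio"] < 0.4

def approval_rates(applicants, decide):
    """Approval rate per group, computed purely from observed outcomes."""
    totals, approved = {}, {}
    for a in applicants:
        g = a["group"]
        totals[g] = totals.get(g, 0) + 1
        approved[g] = approved.get(g, 0) + (1 if decide(a) else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group rate divided by highest; a value below 0.8 flags
    possible disparate impact under the four-fifths rule of thumb."""
    return min(rates.values()) / max(rates.values())

# Invented audit pool for illustration.
audit_pool = [
    {"group": "A", "income": 60_000, "debt_ratio": 0.3},
    {"group": "A", "income": 55_000, "debt_ratio": 0.2},
    {"group": "A", "income": 40_000, "debt_ratio": 0.5},
    {"group": "B", "income": 52_000, "debt_ratio": 0.35},
    {"group": "B", "income": 30_000, "debt_ratio": 0.6},
    {"group": "B", "income": 45_000, "debt_ratio": 0.3},
]

rates = approval_rates(audit_pool, model_approve)
ratio = disparate_impact_ratio(rates)
print(rates)        # per-group approval rates
print(ratio < 0.8)  # True if the four-fifths threshold is breached
```

The design point matches the confidentiality argument above: the auditor needs only the model's decisions on a test pool, which is what would let an IRB certify outcomes publicly while the algorithm itself stays under the hood.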
So that’s the private-IRB or independent-IRB type of approach. Then there are some other potential governance mechanisms worth thinking about that go outside the IRB context and could also be borrowed from various institutional structures. There are industry-level regulatory bodies that could be created, which are always subject to the skepticism that they’re just like the Motion Picture Association, the MPAA, controlled largely by their members. But it’s possible to create more robust, industry-wide regulatory actors which, again, by use of transparency, independent funding, and real independence from the corporations that constitute them, could engage in regulation of a kind analogous to what a governmental regulatory agency might do, but could do it more efficiently. And they could potentially also maintain certain kinds of confidentiality to a greater degree than a government institution might be able to. So there you have a full range of different regulatory mechanisms, European governance is different from Chinese governance is different from US governance, and one can pick and choose in the institutional design process to obtain the best or most appropriate features. And so there’s a full-scale set of options for what I would call private, collective regulatory governance that, again, looks familiar in the context of state regulation but avoids some of the problems that scientists and corporate actors alike inevitably fear when they start thinking about government regulation: among them, external political influence on that regulation; among them, a tendency to always be as conservative as possible to avoid criticism, to, you know, cover yourself against the worst-case scenario, danger, or risk. So that’s yet another set of techniques that can be borrowed from the public sector, suitably adapted and tweaked.
I could go on about other possible directions; I won’t, because I want to leave as much time as possible for conversation. So I’m going to pause there and just say, in conclusion, that I’m eager to talk about the particularities of how the Facebook model is working, but I’m also really eager to speak about other potential directions and options that might be more suitable in some of these AI contexts than the full-on, Constitutional Court-like review board. Those may be IRB-style, they may be regulatory-style, and there may be other techniques, too – I have some other ideas that are not as well developed, but maybe you can get me to throw them out there in conversation. What all these potential directions have in common is the idea that we can learn from institutional governance solutions from different contexts and try to adapt and adopt them. We should never say, “that will never work here because it comes from that context.” Rather, we should say: in these other contexts, these things have these benefits and these costs; how might we try to adapt them to our needs, such that we capture some of the benefits while reducing some of the costs? So thank you all for listening, and I’m looking forward to our conversation.
Allan Dafoe 35:17
Thank you, Noah. That was fantastic. And I’m sure if we were live, there would be a very enthusiastic round of applause from the 90-plus people in this room. I have lots of thoughts and questions, and I was very stimulated by this. But it’s now Gillian’s turn, and our honor to hear her thoughts.
Gillian Hadfield 35:36
Great. Thanks, Allan. And thanks, Noah, that was terrific, really. It’s such an important thing to be discussing, and I really liked the way you wound up there with: we need to be looking for alternative regulatory models, we should look at and draw on other models out there, while thinking creatively about what the demands of regulating in the AI context are, and how we meet those demands. Lots and lots for us to discuss; I want to try to keep focused on a couple of points. First, I think it’s fantastic that somebody with your background in thinking about the origins of democracy and the development of the constitutional system is bringing that context here, because I do think we are at – maybe Allan is the one who used the phrase – a constitutional moment. I do think we are at a point in time where we are seeing kind of the same question as, you know, Magna Carta 1215, where we have entities – they’re not monarchs, but they’re private corporations – that have become the dominant aggregations of wealth and power, and they’re defining so much about the way our lives work. So to be exploring how we democratize that process is absolutely critical. I think it does raise the question: is that the right way to do this? Is it both feasible and desirable to democratize private technology companies? You’re exactly right to frame it up as you did, and I’m pretty sure I agree with you that the people running these corporations hate this – look, I don’t want to be the person deciding whether Trump’s tweets can be left up or what postings can be there. But as you pointed out, this is a global platform with two and a half billion people on it. It’s just something we’ve never seen before.
And so I think both questions about democratizing that – is it feasible, and is it desirable – are the [garbled], so I think that’s exactly the way we have to be thinking about this moment in time. Then I want to say a little bit about the appeal to existing institutions. We’re talking about the Facebook Supreme Court particularly, but your comments sit in the broader context of thinking more generally about that – IRBs, and so on. And one of the things I think we’re also at a point of recognizing is that the set of institutions we created over the last couple of hundred years – in the context originally of the commercial revolution, then the Industrial Revolution, the mass-manufacturing economy, the nation-state-based economy and society – those institutions in many ways worked fabulously well for significant periods. But there are lots of ways in which they are no longer working very well. So here you’re using the model of the high-level Constitutional Court, but a lot of the issues we’re facing are like: you know, I’m user 5262 – I got in there early, I guess – and I’ve got a picture from my party that I want to post, or I have a political statement I want to make, and those numbers are in the millions. I was trying to get the figure – something like 8 billion pieces of content were removed from Facebook in 2019. These are just massive, massive numbers. And one of the things we know about our existing institutions, which are heavily process-based and phenomenally expensive, is that the vast majority of people have zero access to them. Certainly that’s true if we go up to the level of the Supreme Court, and you’re not proposing that we create something that is going to be responsive to every individual who has a complaint.
So I totally get why you focus there – you jump up to the Supreme Court rather than saying, you know what, let’s start with our trial courts. But I think that’s actually a really critical thing for us to be thinking about: these processes are incredibly expensive. They end up being like little pinholes of light into this big, big area. What can we be doing to, in fact, bring many, many more people into this process of expressing and communicating and constituting the norms of what we consider to be okay and not okay on our platforms – just to focus on that framing in the context of free speech, and the other values that are lined up against free speech? What can we be doing to incorporate that? I think that’s where we just have to come to grips with this massive mismatch between the huge volume and the cost of the process, and – I’ll go back to that language – the democratization of that process. I think we will not develop methods that are responsive that don’t ultimately involve AI. You’ve mentioned some of those. And actually, your concept of privately recruiting an IRB to review, under confidentiality provisions, what’s under the hood in a model, and so on, I think is great.
I’ve been thinking about comparable models – Allan mentioned at the outset some of this work on regulatory markets – and I think we do need to be figuring out how we are going to simultaneously get investment in methods of, in this instance, content moderation that are still responsive and legitimate. But I think we’re going to have to figure out ways to incorporate a lot more people. So I’m particularly interested in thinking through the technical as well as legitimacy challenge of how you can get many more people involved in that process. And I actually think that’s really important, not just from the point of view of thinking about equality, or equal participation, but also because it’s fundamentally critical for the constituting of social order that our norms are deeply rooted in ordinary people, their ordinary lives, and their communities. And of course, once you start talking global, that’s where it becomes tremendously difficult. I worry a little bit that the model of the Supreme Court, with an elite group and so on, is going to make it actually pretty difficult to make that progress. One of the things I also want us to think about – going back to this point about democratizing the private technology company – is whether we can let the market work here, in the sense that part of the global issue is that Facebook is such a massive platform and dominates the space so incredibly. Is there a role for communities here? Can we develop groups within our platforms – multiple platforms – where people can basically, in the phrase from political scientists, vote with their feet – vote with their browser – for which platform, and which values and norms, they want to follow?
I think there are a lot of challenges to think about. This is the great challenge: how do we figure out how to respond to this massive scale, and to the global nature of these platforms, without taking decision-making further and further away from ordinary people and their experiences – their experience of being a member of the community, of being seen and recognized in these environments? I’ll stop there. Thanks.
Allan Dafoe 43:14
Thanks, Gillian, that was great. I’m thinking, actually, perhaps Noah, we can give you some time to reflect and respond. And I’ll use the fact that I have the mic right now to add to Gillian’s points the specific question of the choice between having a global court and a global moderation policy versus culturally specific policies. You discussed at one point having regions or countries, but that’s an interesting question.
Noah Feldman 43:43
Thank you. Thank you, Gillian. Those are really very rich and hugely important issues that you’re raising. So let me just say a few words about them. If I could summarize, and maybe slightly oversimplify, the argument you’re making, it’s that we need greater democratization, greater public access, more people involved – I think you said – in order to aim at constituting social order. And you expressed a concern, which is completely correct, that the Constitutional Court model – and this I think would also be true of an IRB model – tends to rely on a smaller, elite group of people to make the relevant decisions. I think that’s a correct analysis of what’s going on in both of those contexts. So I want to start by just acknowledging how incredibly challenging this problem is in democracies – not on platforms, not in AI, not on social media, but just in democracy. How does one get genuine public participation in decision making?
It remains the central problem in most developed democracies: some have great turnout, where lots of people show up to vote, but many have relatively weak turnout, where not that many people show up to vote. Voting, as political science has repeatedly demonstrated, is subject to all kinds of strange problems of principal-agent control, and doesn’t always give ordinary people all the options that they would like to see represented. And there are tweaks for that – proportional representation tweaks – which have their own consequences, like the production of a parliament with so many parties in it that it becomes very difficult for anything to get done. Even though a greater set of points of view are represented, there’s a set of complex trade-offs that arise there as well. The strongest critics of contemporary liberal democracy would probably say that one of the worst things about it is that it purports to give the public opportunities to participate, and doesn’t actually give that to them, or gives them some simulacrum of participation. So that’s just to deepen the problem that you’re describing. Even if we could borrow some of the features that come from democracy, that might not solve our problems, because democracy itself is struggling. Making it much harder is the problem that I like to sum up with an example that I’m sure most or all of you know about – the example that arose a few years ago, when Great Britain decided to use an online voting process to name a new research ship. As you will recall, the eventual winner was not Intrepid, or Valor, or Harry and Kate, but Boaty McBoatface. And sometimes in our conversations at Facebook around the difficulties of democratization, we just summed up this problem, which I’ll say a word about, in that phrase: Boaty McBoatface.
And, you know, to formalize the Boaty McBoatface problem: so far, it seems that when voting techniques are used online, the fact that the end user is very far from internalizing the costs of his or her vote makes it appealing – or at least not unappealing, maybe more importantly – to cast votes that are silly, or frivolous, or humorous. Facebook had actually experimented, more than a decade before I got involved with them, with a regulatory democratization approach, which is famous only in the small circle of people who care about online regulatory democratization. It was an utter disaster. They said they wouldn’t make certain kinds of major changes on the platform without getting a certain number of votes from a certain percentage of users. They couldn’t get participation comparable to what they needed to get anything done. And it was also very subject to capture – again, a political science concept that’s very familiar here – by small, concentrated groups of people who had an interest and could generate votes. And sometimes I wonder, how did it happen that a random constitutional law person made a suggestion and Facebook decided to do it? Of course, the reason was that when I came to Sheryl with this idea, and then she brought it to Mark, Mark had actually been thinking for years, for at least a decade, about potential ways to devolve power. But the problem that he and the very, very smart people around Facebook kept bumping into was that if you devolve power, you want to democratize it. And if you democratize it, you run into cycling problems, and capture problems, and Boaty McBoatface problems. Just to finish the thought: when I look at it from the outside, I think, no wonder they liked this solution.
Because it was about devolution without democratization. It was devolution into an institutional structure like a court, which is not, technically, a small democratizing structure. So this is all by way of acknowledgement. You should speak, Gillian, and then I’ll say a word about what I think might be done.
Gillian Hadfield 48:58
Yeah. I just want to jump in there and say: I think the challenge of developing a regulatory model here that is democratically responsive is as big a challenge as building AI. And it’s also why I focus on regulatory markets models, because I think we need to attract investment into this problem in the same way that we attract investment into building AI. So when I think about voting – I think that’s inevitably going to be a poor fit. It was a technology that worked at various times, but I don’t think it is going to work here; you’ve said that’s been tried. But being able to read the normative environment is something for which we now have tremendous tools at our disposal – like, what is the reaction to different kinds of content? I think we could be building machine learning models that read the rich, dense, massive volume of responses, and I think we should be figuring out how to do that and how to make it more legible. But I don’t think it’s only voting; we shouldn’t just say we want the idea of voting. As you say, that’s kind of broken in our offline worlds, and I’m not surprised it doesn’t carry over. Anyway, I just wanted to jump in there with that thought as well.
Noah Feldman 50:15
A couple of thoughts on that. First, I actually think it’s harder than AI, because we’re still in an early stage of AI, and yet the problem that you and I are talking about now – giving the public legitimate access to democratic participation – was posed explicitly by Plato and Aristotle. In about 2,500 years, smart people have been thinking about it, and nobody’s really solved it. You could say that the most intense process probably goes back to the French Revolution – trying to mobilize a mass democratic public to make decisions effectively. So let’s just say it’s been the last 200-plus years that people have been trying to do it, and a lot of really smart people have focused on it and haven’t really solved it. So I think it’s even harder. I also think it’s interesting when you say maybe we could use AI to solve it. And there will be hundreds of people out there – if there are hundreds of people listening; I’ve got a 222-people mark at the bottom of my screen, but I don’t know if that’s the number of people listening – who are better than I am at answering the technical question of whether current techniques of aggregation are promising for doing what I would call normative political theory: substantive analysis of what people are saying out there, so as to glean not just a direction, maybe, but a set of arguments about legitimacy. That’s a hard problem. I don’t claim that it’s an insoluble problem, just that it’s a genuinely hard one. And if we were over in the seminar room where we talk about administrative and regulatory law, and Gillian were to say, you know, we should improve our legitimacy by using machine learning tools to get a sense of what all the comments out there are, I would say: interesting; doubtful; tell me more. And maybe it would work.
Just a last thought on this. I have a kind of approach to the problem that Gillian is talking about, and the approach is to say that we actually have a series of legitimating techniques that we use when mass voting doesn’t work very well. Those include transparent reason-giving and subjection to intense public criticism. When a regulatory body is silent, operates behind closed doors, and is not easily open to analysis, it tends to lose legitimacy. Those of you who are in the UK and lived through the Brexit process probably know – wherever you were on that issue – that the perception (I’m not speaking of realities now) that European regulation was insufficiently transparent, and therefore could not be subject to detailed criticism, played a crucial role, I would argue, in the delegitimation within the UK of the project of European regulation. It’s not a coincidence that one of the most powerful leave arguments was, rhetorically, a claim of illegitimate regulation: illegitimate because non-democratic, and non-democratic because non-transparent. So transparency can play an important role, because then we have other institutions – institutions like advocacy groups, institutions like the press – that can engage in criticism of what are perceived as bad regulatory outcomes. So to me, in the absence of a magic-bullet solution, I am interested in finding ways to use existing mechanisms of legitimation – what I would call democratic legitimation in the absence of mass voting – to improve participation and to improve access. These are not perfect solutions at all, they’re very far from perfect, but they’re definitely starts in that direction. And they’re identifiable and they’re concrete. You can point to them and say: this regulatory process is good because people know what’s happening.
They know the reasons, and the decisions can be criticized and discussed; a process is bad when they don’t.
Allan Dafoe 54:26
Thanks. Sophie, over to you.
Allan Dafoe 54:35
Oops, sorry about that. We’re having –
Sophie Fisher 54:37
Okay, can you hear me now? Perfect. Okay, so we’re already in the middle of this really interesting discussion. I just want to take a couple of steps back and talk about how we actually got to the point where we’re now talking about the Facebook oversight board, before offering some reflections on the limitations, but then also the strengths, of this approach, and maybe some of the lessons that we can learn for other regulatory models, maybe for the case of AI. Now, we all know that Facebook has for a long time made important decisions about what kind of content it removes or leaves up on its platform, decisions that affect its 2.7 billion users around the world. And we’ve just heard from Noah that within Facebook, too, there has been a lot of thinking about how to make this process of content moderation more participatory. But I think what has really changed outside of Facebook over the last couple of years are the new challenges brought about by, for example, the interference in the 2016 US elections, in which Facebook played a prominent role, or the persecution of targeted populations, most notably the Rohingya minority in Myanmar. I think these cases have really shown that the stakes inherent in handling the kind of content we see on a platform like Facebook have changed. These incidents have not only emphasized the difficulty of balancing freedom of expression against removing harmful content from the platform in different national and cultural contexts; but – and I think this is important to stress again – they also created tangible economic costs for Facebook, due to a notable loss of consumer trust, which threatened Facebook’s business model and future growth.
So I think these developments have emphasized, again, the need for new, participatory measures to evaluate content in a fair and transparent manner, to maintain the trust of Facebook users in the long term. And what we’re looking at now is the Facebook oversight board, which is certainly one of the most ambitious private governance experiments to date: a transnational platform’s mechanism to govern something which is vital to the public and an essential human right – speech. Now, the board hasn’t even started operations yet, and we’re still at a very early stage, but different facets of its design have already been criticized widely, for example by journalists but also by nonprofit organizations. I very briefly want to get into one of the most fundamental criticisms, and that is the, at present, very limited mandate of the board. The limited mandate implies that the board most probably won’t be in a position to really solve some of the most critical issues related to the content on these platforms that does the most harm. For example, it probably won’t tackle the selection and amplification of certain content made visible to users by Facebook’s algorithms, including disinformation; it won’t necessarily minimize coordinated attacks on democracies around the world; and although there is an expedited procedure to bring issues more quickly to the attention of the board, the board won’t be able to react quickly to, and prevent, the spread of harmful content, such as the livestreaming of the Christchurch shooting a while ago. Now, some of these limitations are probably inherent in the function of a court-like body such as the board, which exerts influence by making clear how the law applies to cases.
But the problem is that many of the most contentious incidents – and I’ve named a couple before – that [garbled] the past few years, and that have shown that the stakes in handling this kind of content have changed, won’t be tackled by the organization that was at least partially established in response to them, to begin to regain user trust and to safeguard Facebook’s future growth. So I would argue that there’s a risk that the board could distract regulators from addressing some of the most fundamental and most harmful activities on the platform and by the company, which will remain. Apart from that, I would also argue that when we judge the board based on its mandate and its court-like function, from what we know about it today, its design is very thoughtful and also very promising. Not only is it a clear improvement on the system that we currently have in place, but I would also argue that we can learn different lessons from the way the board was set up, especially with regard to one of the key challenges of industry self-governance: how to structure a private governance mechanism and establish legitimacy already in the institution-building process, given that it originates with the organization that it is supposed to check. And by legitimacy, I mean here how to ensure meaningful transparency, impartiality, and accountability.
And I briefly want to reflect on five of the lessons that I think we have learned from the process of how the board was established. The first is probably a very banal one: power sharing. First of all, we need to reach a situation, when we look at these tech firms, where they’re actually willing to share power. And I think Facebook is a really extreme case here, because due to its dual-class stock structure, the exclusive power over contentious decisions lay for a very long time with the CEO, Mark Zuckerberg. Now the board has the power to actually overrule Zuckerberg on contentious decisions, and also previous decisions made by content moderators. The second aspect is public outreach. What I found very fascinating about the way in which this board was set up is that there was actually a months-long consultation process all around the world, with users and stakeholders in different countries, and that this feedback was published afterwards; you can see that it has flowed into the design of the board. So developing a public process that incorporates listening to outside users and stakeholders, and showing as a company that you take this feedback seriously, is a really important thing to keep in mind. The third aspect is diversity. Facebook’s community standards looked very American for a long time, and I think they’ve shifted towards more of a European approach, but input from the Global South has largely been absent. And while the composition of the board as it looks now is definitely not perfect, I think it reflects much better the diversity of the user base in the very broadest sense, representing different cultural backgrounds, professional experiences, languages, etc. The fourth aspect is independent judgment – a really fundamental one.
And I think if a private governance initiative is to be perceived as legitimate, it is of course important that the people working in these kinds of boards or outside organizations not be working for the company. And there’s a chicken-and-egg problem that Facebook has also faced: how to select the first members of this kind of institution, who will then select other members. But I think the solution of using a non-charitable purpose trust to pay the members, and setting up a limited liability company to run the operations of the board, is actually quite an elegant solution that we can learn from. And the last aspect is transparency. I think here, too, Facebook did quite a good job of making all the steps and key decisions taken on the design of the board transparent. And [garbled] plans to make the decisions of the board transparent, including how they’re being implemented, and also to explain to the public how policy recommendations issued by the board are being implemented or, if not, why they’re not being implemented. And I think being transparent all along the way also really increases the cost for Facebook of just dropping the board or threatening its independence.
So those were basically the five lessons that I think we can really learn from this process. To conclude: the oversight board, as it stands, is certainly no silver bullet to reform Facebook, and it shouldn’t distract regulators from tackling some of the remaining, probably most harmful, activities that are happening on the platform and that are, to a certain extent, also promoted by the platform. However, within the scope of what an outside body with a limited mandate like the board can do, it is certainly a really important step towards more transparency, and also towards empowering users by providing them with a potential lever for accountability and a mechanism for due process. I also want to stress, at the end, that I think it is way too early to really say how meaningful and effective the board will eventually be, and whether its operations will be independent, before it has even started operations. And there are many other important unknowns outside the realm of the board and Facebook, including how exactly foreign or national governments will react to the board, how national courts will react to it, and how other platforms will perceive it. So for now, to close, we can just impatiently wait for the board to finally start its work and see how things unfold. Thank you very much.
Allan Dafoe 1:03:11
Thanks, Sophie. And Noah’s muted. There we go.
Noah Feldman 1:03:14
Let me make just a few responses, and in the process, I think I’ll also try to answer Allan’s question, which I didn’t answer before, about the global versus the regional. I agree with, you know, 95% of what Sophie said. And it’s important to note that experiments need to evolve in the real world, and that evolutionary experimentalism and incrementalism are sometimes the right thing. When you’re trying something radical – a radical experiment – you don’t necessarily want to roll it out giving it all of the power to do everything it could possibly do, because it might not work well. Instead, a little incrementalism is appropriate, and in fact, every Constitutional Court in the world has only gradually and incrementally increased its power. You also have to realize that in the process of institutional design, the oversight board faced two opposite criticisms from within Facebook. One was: it will be much too powerful, it’s going to take over the core decision making that goes to our business function and shut us down; we can’t have this. The other was: this will be a big waste of time and money, it will be purely symbolic, it will have no impact, it won’t help us at all. And my response to both was to say: you’re both completely correct that these are risks, but they can’t both be correct. Either it will turn out to be so powerful that it threatens Facebook’s business model, or it will turn out to be purely symbolic. The history of constitutional courts is a history of gradually expanding powers, sometimes having to pull back after they’ve gotten too much power. But you also couldn’t possibly have convinced the board of directors of a major company, or the management of the company, or the leading shareholder in the case of Mark, to do something they thought was going to destroy the company. And in fact, that wouldn’t have been responsible on his part.
So I think we will see whether the mandate stays limited. First of all, that mandate is described already in the documents as intended to expand. Second, there are many things the board can do to expand its mandate right out of the box. They can say to Facebook: we don’t like your rules, write new ones in light of these values – and they have the capacity to do that written into their mandate, which is a very, very great power. In the first instance, they’re supposed to decide whether Facebook is following its own rules and whether those rules accord with its values; in the second instance, they can say: your rules don’t fit your values, write new rules. So I’m agreeing with Sophie that we’re at the beginning of the experiment, and we’ll see how it goes. And I hope that we remain patient rather than impatient, because it will take time for this experiment to play out. It’s not going to solve all of the problems at Facebook, and it’s not going to solve them all right away.
With respect to the global versus the local: that was a really interesting and important design question, Allan, from the beginning, and it may be relevant in the AI context as well. It was very relevant with respect to content moderation, because reasonable cultures, let's say, could have different solutions to the question, and there are real cultural value differences on the platform. So, you know, what is culturally appropriate to wear to the beach in San Jose is different from what is culturally appropriate to wear on Main Street in Jeddah at prayer time. I like being in both of those places, but they have very different cultural norms for what dress is appropriate. And I mention that because nudity policy is, you know, one of the most basic policies that a social media platform has to cope with. In all of the consultation that Facebook did, I didn't encounter anybody who said Facebook should have such radical free expression that it's open to pornography. But there is such a view out there, and one could imagine it: there has been a real fight on Instagram about the extent to which sex workers' accounts should be constrained or limited, with organized sex workers in some places in Northern Europe arguing for a greater range of expression in order to facilitate their businesses. So this is a kind of everyday, day-in-day-out difficult thing to deal with. I think the difficulty of going down the every-culture-on-its-own route is basically a line-drawing one. You know, where do you draw the line? What do you say is the definitive view within a given culture? You know, some women in Saudi Arabia really don't want to wear the hijab, and some consider the hijab to be liberating and say so. Who's right?
That's a very difficult social question, which couldn't be answered without some independent base of [garbled]. As Sophie says, community standards have traditionally been very American in their orientation. Opening that up is risky, because of what it may lead to. You know, I was often asked in Facebook's internal deliberations: what are the things you imagine could happen in terms of interest-group politics? And I said, well, if you're going to break groups down by interests, the single largest group of Facebook users is Muslims.
Noah Feldman 1:07:59
Right. And so, you know, not all Muslims agree on all things; many Muslims disagree on a wide range of things. But imagine that there were agreement among Muslims on some set of issues. Would one then want the views held by Muslims to govern the platform? What about the views of Christians? What about the views of... you know, these are hard and genuine questions. And I think Facebook in the end decided that, hard as it is to have standards that fit the whole platform, it would be harder to divide the world up, in a kind of quasi-map-making way, to create different Facebooks for different contexts and places. And I think that's what drove that decision, coupled with Facebook's ongoing vision of wanting to be a global community. And we had a fascinating conversation about what a global community is. Can there be a global community of two and a half billion people? What does the word community even mean in that context? But that is also part of the aspirational picture. So, you know, there's much more to be said about all these topics, but I think our time is coming to its end, if I'm not mistaken. So I just want to thank all of you for great questions and comments. And if we have more time, I'm happy to keep talking; I'm leaving that up to you.
Allan Dafoe 1:09:06
Great. Well, I'm torn, because formally we said it would end in one minute, but of course I would love to keep talking. Why don't we see if there are any burning last thoughts from our discussants? And maybe I'll say something, and then, Noah, you can reflect again, and then we'll close. Gillian, Sophie, do you have anything last you want to share?
Gillian Hadfield 1:09:24
So I think this question about the global and the local is really quite critical. And that challenge is: how do you have a global platform that nonetheless allows smaller subgroups to have different values? Somebody in the chat has picked up this idea of, you know, competition between those different subgroups. The challenge of harmonizing standards globally is one we've been struggling with in many, many domains for decades, and I don't think it's reasonable to think we'll get there. Allan, I've had a lot of conversations along these lines over time, so I think the real challenge is: how can you have a global community where people nonetheless feel that there are smaller communities to which they belong and in which they feel reflected and respected?
Sophie-Charlotte Fischer 1:10:17
I agree. And I think it's also going to be very interesting to see what the support staff of the board will be able to contribute in terms of acquiring the local knowledge that may be necessary to really get into the culture of these individual cases. It's not only about the diversity of the board members as such, but really also about the support staff and what they can contribute.
Allan Dafoe 1:10:40
Maybe I'll just add to this. So I find this decision fascinating politically, and I can completely believe that global is just the most viable solution, because, as you say, are you going to make the standards national? Are you going to start defining the boundaries of cultural and social networks? Me, I'm imagining maybe there's some clever social-network clustering algorithm that could allow subgroups to self-identify and self-select. And maybe this gets to a broader governance question about Facebook, which is the ability of users to define the mechanisms of their interaction. You know, maybe different users would like different weightings of what kind of media they're provided with: news versus family updates versus political inputs. Maybe I'll say one last thing, which is that I think your argument is right; it makes sense that you want to start with the lowest-hanging fruit. If we think this kind of governance initiative is promising, you want to start with something that ideally will succeed, that ideally is good for Facebook, good for Facebook's shareholders, good for users, and good for the public, and then you can grow from there. I can imagine that speech moderation is in many ways the easiest of the governance issues facing a company like Facebook, because there aren't as many trade-offs between Facebook's profit and the decisions being made, compared with other decisions, like how to personalize advertising, or anything around advertising, or, say, the addictiveness of the device: to what extent you use notification techniques or other techniques to keep people engaged. So maybe a worry is that it's going to be much more difficult to have these sorts of solutions for domains where there is more of a trade-off between the profit motive and the legitimate decision. I'll conclude there.
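[As a purely illustrative aside on the "clustering algorithm" Allan imagines here: one standard family of techniques for letting subgroups emerge from a social graph is label propagation, where each user repeatedly adopts the group label most common among their connections, so densely connected clusters settle on shared labels. The sketch below is a toy, not anything Facebook is known to use; the function name, tie-breaking rule, and example graph are all invented for illustration.]

```python
def label_propagation(edges, max_iters=100):
    """Toy community detection: each node repeatedly adopts the label
    most common among its neighbours, so densely linked subgroups
    converge on a shared label while sparsely linked ones stay apart."""
    # Build undirected adjacency lists from the edge list.
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    labels = {n: n for n in neighbours}  # every node starts as its own group
    for _ in range(max_iters):
        changed = False
        for node in sorted(neighbours):  # fixed order keeps the run deterministic
            counts = {}
            for nb in neighbours[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            best = max(counts.values())
            candidates = {lab for lab, c in counts.items() if c == best}
            # Deterministic tie-break: keep the current label if it is
            # among the winners, otherwise take the largest one.
            new = labels[node] if labels[node] in candidates else max(candidates)
            if new != labels[node]:
                labels[node] = new
                changed = True
        if not changed:  # fixed point reached: no node wants to switch
            break
    return labels

# Two tight friend groups joined by a single "bridge" edge c-x.
edges = [("a", "b"), ("b", "c"), ("a", "c"),
         ("x", "y"), ("y", "z"), ("x", "z"),
         ("c", "x")]
groups = label_propagation(edges)
# Each triangle ends up sharing one label, despite the bridge edge.
```

[The interesting design point, which mirrors the discussion above, is that the bridge edge alone is not enough to merge the two groups: boundaries emerge from density of connection rather than from anyone drawing a map.]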
So, over to Noah, if you have last thoughts.
Noah Feldman 1:12:40
Just briefly, again thanking everybody for great comments: I think it's worth noting that the problems we're talking about are the problems of human societies. They're problems that we face at the local level, at the sub-state level, and problems we face at the global level. One interesting thing about the social media platforms is that they're both not state problems, because this is a private corporation, not a state. Facebook doesn't have an army, it can be shut down by states, it's weaker in many ways than most states. But at the same time, they're also super-state problems, because they cross borders and involve users globally. And these are problems that in international affairs, international relations, and international law we also haven't solved. You know, at the United Nations we have the Universal Declaration of Human Rights, which is defined at such a high level of generality that lots of countries can adopt it, but many of those countries don't follow those principles, because that generality was the only way you could get the consensus. So you have both sub-state-level problems and super-state problems. And I think that carries through to the AI context as well, insofar as AI is deployed by platforms that have this kind of reach, and insofar as it's to a certain degree shaped and controlled at the highest end by corporations that are multinational and present in many different contexts. And I guess I would end with a plea to people who are listening in to remember that, in order for us to make good decisions about governance, whether in AI or other tech contexts, we need to be deeply aware of the body of social conflict, and the body of thought and debate, that exists around the deepest governance problems that we face as human beings. I mean, in the end, you know, when Aristotle said that humans were political animals,
he didn't just mean that we do politics. He meant that we live in a polis, and that we make a politeia, which is a constitution. You know, humans uniquely have the capacity not just to live socially, lots of animals are social, but to have a conscious, thought-through set of publicly articulated values and norms by which we try to live together. And that, to me, is the challenge of governance. And I'm all for doing that across the disciplines; the less we hive ourselves off, the better we'll do. We also have to have modesty in knowing that, unlike some problems in science, and unlike some problems in AI, which may actually be soluble by better work and faster processors and more sophisticated algorithmic design, some of the problems that we're talking about here don't admit of definitive solutions. If they did, we would have converged on one system of government sometime in the last 3,000 or, say, 10,000 years since we started making constitutions. But we haven't converged, because there is a range of different possibilities, a range of different viewpoints, again, about which reasonable people can disagree. So some degree of epistemological modesty is called for. I mean, that's always good in life to have epistemological modesty, but I'm not the one to tell anybody who works in the scientific domain to be epistemologically modest. What I can say is that in the domain of governance, that kind of modesty is very much called for, and for people like me, and like you, who want to contribute to doing better governance, it behooves us to be modest, and incremental, and cautious, and experimental. So thanks to all of you for a great conversation, and thanks to those who listened in for listening in.
Allan Dafoe 1:16:29
Fantastic, what a great conclusion. So yes, thank you again to our wonderful discussants, and to Noah for this great conversation.
Gillian Hadfield 1:16:41
All right. Thanks, everybody. Bye bye.