MARK ZUCKERBERG TALKS TO WIRED ABOUT FACEBOOK’S PRIVACY PROBLEM
FOR THE PAST four days, Facebook has been taken to the woodshed by critics, the stock market, and regulators after it was reported that the data-science firm Cambridge Analytica obtained the data of 50 million Facebook users. Until Wednesday, Mark Zuckerberg had stayed silent. On Wednesday afternoon, though, he addressed the problem in a personal Facebook post and laid out some of the solutions he will introduce.
He then gave an interview to WIRED in which he discussed the recent crisis, the mistakes Facebook made, and different models for how the company could be regulated. He also discussed the possibility that another—Russian—shoe could drop. Here is a transcript of that conversation:
Nicholas Thompson: You learned about the Cambridge Analytica breach in late 2015, and you got them to sign a legal document saying the Facebook data they had misappropriated had been deleted. But in the two years since, there were all kinds of stories in the press that could have made one doubt and mistrust them. Why didn’t you dig deeper to see if they had misused Facebook data?
Mark Zuckerberg: So in 2015, when we heard from journalists at The Guardian that Aleksandr Kogan seemed to have shared data with Cambridge Analytica and a few other parties, the immediate actions that we took were to ban Kogan’s app and to demand a legal certification from Kogan and all the other folks who he shared it with. We got those certifications, and Cambridge Analytica had actually told us that they actually hadn’t received raw Facebook data at all. It was some kind of derivative data, but they had deleted it and weren’t [making] any use of it.
In retrospect, though, I think that what you’re pointing out here is one of the biggest mistakes that we made. And that’s why the first action that we now need to go take is to not just rely on certifications that we’ve gotten from developers, but [we] actually need to go and do a full investigation of every single app that was operating before we had the more restrictive platform policies—that had access to a lot of data—and for any app that has any suspicious activity, we’re going to go in and do a full forensic audit. And any developer who won’t sign up for that we’re going to kick off the platform. So, yes, I think the short answer to this is that’s the step that I think we should have done for Cambridge Analytica, and we’re now going to go do it for every developer who is on the platform who had access to a large amount of data before we locked things down in 2014.
NT: OK, great. I did write a piece this week saying I thought that was the main mistake Facebook made.
MZ: The good news here is that the big actions that we needed to take to prevent this from happening today we took three or four years ago. But had we taken them five or six years ago, we wouldn’t be here right now. So I do think early on, on the platform, we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences. And I think if we’d internalized that sooner and had made these changes that we made in 2014 in, say, 2012 or 2010, then I also think we could have avoided a lot of harm.
NT: And that’s a super interesting philosophical change, because what interests me the most about this story is that there are hard tradeoffs in everything. The critique of Facebook two weeks ago was that you need to be more open with your data, and now it’s that certain data needs to be closed off. You can encrypt data more, but if you encrypt data more it makes it less useful. So tell me the other philosophical changes that have been going through your mind during the past 72 hours as you’ve been digging into this.
MZ: Well that’s the big one, but I think that that’s been decided pretty clearly at this point. I think the feedback that we’ve gotten from people—not only in this episode but for years—is that people value having their data be less accessible above having the ability to more easily bring social experiences with their friends’ data to other places. And I don’t know, I mean, part of that might be philosophical, it may just be in practice what developers are able to build on the platform, and the practical value exchange, that’s certainly been a big one. And I agree. I think at the heart of a lot of these issues we face are tradeoffs between real values that people care about. You know, when you think about issues like fake news or hate speech, right, it’s a tradeoff between free speech and free expression and safety and having an informed community. These are all the challenging situations that I think we are working to try to navigate as best we can.
NT: So is it safe to assume that, as you went through the process over the past few days, you’ve been talking about the tradeoffs, looking at a wide range of solutions, and you picked four or five of them that are really good, that are solid, that few people are going to dispute? But that there’s a whole other suite of changes that are more complicated that we may hear about from you in the next few weeks?
MZ: There are definitely other things that we’re thinking about that are longer term. But there’s also a lot of nuance on this, right? So there are probably 15 changes that we’re making to the platform to further restrict data, and I didn’t list them all, because a lot of them are kind of nuanced and hard to explain—so I kind of tried to paint in broad strokes what the issues are, which were first, going forward, making sure developers can’t get access to this kind of data. The good news there is that the most important changes there had been made in 2014. But there are still several other things that, upon examination, it made sense to do now. And then the other is just that we want to make sure that there aren’t other Cambridge Analyticas out there. And if they were able to skate by giving us, say, fraudulent legal certification, I just think our responsibility to our community is greater than to just rely on that from a bunch of different actors who might have signals, as you say, of doing suspicious things. So I think our responsibility is to now go and look at every single app and to, any time there’s anything suspicious, get into more detail and do a full audit of them. Those, I think, are the biggest pieces.
NT: Got it. We’re learning a lot every day about Cambridge Analytica, and we’re learning what they did. How confident are you that Facebook data didn’t get into the hands of Russian operatives—into the Internet Research Agency, or even into other groups that we may not have found yet?
MZ: I can’t really say that. I hope that we will know that more certainly after we do an audit. You know, for what it’s worth on this, the report in 2015 was that Kogan had shared data with Cambridge Analytica and others. When we demanded the certification from Cambridge Analytica, what they came back with was saying: Actually, we never actually received raw Facebook data. We got maybe some personality scores or some derivative data from Kogan, but actually that wasn’t useful in any of the models, so we’d already deleted it and weren’t using it in anything. So yes, we’ll basically confirm that we’ll fully expunge it all and be done with this.
So I’m not actually sure where this is going to go. I certainly think the New York Times and Guardian and Channel 4 reports that we received last week suggested that Cambridge Analytica still had access to the data. I mean, those sounded credible enough that we needed to take major action based on it. But, you know, I don’t want to jump to conclusions about what is going to be turned up once we complete this audit. And the other thing I’d say is that we have temporarily paused the audit to cede to the UK regulator, the ICO [Information Commissioner’s Office], so that they can do a government investigation—I think it might be a criminal investigation, but it’s a government investigation at a minimum. So we’ll let them go first. But we certainly want to make sure that we understand how all this data was used and fully confirm that no Facebook community data is out there.
NT: But presumably there’s a second level of analysis you could do, which would be to look at the known stuff from the Internet Research Agency, to look at data signatures from files you know Kogan had, and to see through your own data, not through the audited data, whether there’s a potential that that information was passed to the IRA. Is that investigation something that’s ongoing?
MZ: You know, we’ve certainly looked into the IRA’s ad spending and use in a lot of detail. The data that Kogan’s app got, it wasn’t watermarked in any way. And if he passed along data to Cambridge Analytica that was some kind of derivative data based on personality scores or something, we wouldn’t have known that, or ever seen that data. So it would be hard to do that analysis. But we’re certainly looking into what the IRA did on an ongoing basis. The more important thing, though, that I think we’re doing there is just trying to make sure the government has all the access to the content that they need. So they’ve given us certain warrants, we’re cooperating as much as we can with those investigations, and my view, at least, is that the US government and special counsel are going to have a much broader view of all the different signals in the system than we’re going to—including, for example, money transfers and things like that that we just won’t have access to be able to understand. So I think that that’s probably the best bet of coming up with a link like that. And nothing that we’ve done internally so far has found a link—doesn’t mean that there isn’t one—but we haven’t identified any.
NT: Speaking of Congress, there are a lot of questions about whether you will go and testify voluntarily, or whether you’ll be asked in a more formal sense than a tweet. Are you planning to go?
MZ: So, here’s how we think about this. Facebook regularly testifies before Congress on a number of topics, most of which are not as high profile as the recent Russia investigation. And our philosophy on this is: Our job is to get the government and Congress as much information as we can about anything that we know so they have a full picture, across companies, across the intelligence community, and they can put that together and do what they need to do. So, if it is ever the case that I am the most informed person at Facebook in the best position to testify, I will happily do that. But the reason why we haven’t done that so far is because there are people at the company whose full jobs are to deal with legal compliance or some of these different things, and they’re just fundamentally more in the details on those things. So as long as it’s a substantive testimony where what folks are trying to get is as much content as possible, I’m not sure when I’ll be the right person. But I would be happy to if I were.
NT: OK. When you think about regulatory models, there’s a whole spectrum. There are kind of simple, limited things, like the Honest Ads Act, which would be more openness on ads. There’s the much more intense German model, or what France has certainly talked about. Or there’s the ultimate extreme, like Sri Lanka, which just shut social media down. So when you think about the different models for regulation, how do you think about what would be good for Facebook, for its users, and for civic society?
MZ: Well, I mean, I think you’re framing this the right way, because the question isn’t “Should there be regulation or shouldn’t there be?” It’s “How do you do it?” And some of the ones, I think, are more straightforward. So take the Honest Ads Act. Most of the stuff in there, from what I’ve seen, is good. We support it. We’re building full ad transparency tools; even though it doesn’t necessarily seem like that specific bill is going to pass, we’re going to go implement most of it anyway. And that’s just because I think it will end up being good for our community and good for the internet if internet services live up to a lot of the same standards, and even go further than TV and traditional media have had to in advertising—that just seems logical.
There are some really nuanced questions, though, about how to regulate which I think are extremely interesting intellectually. So the biggest one that I’ve been thinking about is this question of: To what extent should companies have a responsibility to use AI tools to kind of self-regulate content? Here, let me kind of take a step back on this. When we got started in 2004 in a dorm room, there were two big differences about how we governed content on the service. Basically, back then people shared stuff and then they flagged it and we tried to look at it. But no one was saying, “Hey, you should be able to proactively know every time someone posts something bad,” because the AI tech was much less evolved, and we were a couple of people in a dorm room. So I think people understood that we didn’t have a full operation that can go deal with this. But now you fast-forward almost 15 years and AI is not solved, but it is improving to the point where we can proactively identify a lot of content—not all of it, you know; some really nuanced hate speech and bullying, it’s still going to be years before we can get at—but, you know, nudity, a lot of terrorist content, we can proactively determine a lot of the time. And at the same time we’re a successful enough company that we can employ 15,000 people to work on security and all of the different forms of community [operations]. So I think there’s this really interesting question of: Now that companies increasingly over the next five to 10 years, as AI tools get better and better, will be able to proactively determine what might be offensive content or violate some rules, what therefore is the responsibility and legal responsibility of companies to do that? That, I think, is probably one of the most interesting intellectual and social debates around how you regulate this. I don’t know that it’s going to look like the US model with Honest Ads or any of the specific models that you brought up, but I think that getting that right is going to be one of the key things for the internet and AI going forward.
NT: So how does government even get close to getting that right, given that it takes years to make laws and then they’re in place for more years, and AI will be completely different in two years from what it is now? Do they just set you guidelines? Do they require a certain amount of transparency? What can be done, or what can the government do, to help guide you in this process?
MZ: I actually think it’s both of the things that you just said. So I think what tends to work well is transparency, which I think is an area where we need to do a lot better; we are working on that and are going to have a number of big announcements over the course of the year about transparency around content. And I think guidelines are much better than dictating specific processes.
So my understanding with food safety is there’s a certain amount of dust that can get into the chicken as it’s going through the processing, and it’s not a large amount—it needs to be a very small amount—and I think there’s some understanding that you’re not going to be able to fully solve every single issue if you’re trying to feed hundreds of millions of people—or, in our case, build a community of 2 billion people—but that it should be a very high standard, and people should expect that we’re going to do a good job getting the hate speech out. And that, I think, is probably the right way to do it—to give companies the right flexibility in how to execute that. I think when you start getting into micromanagement, of “Oh, you need to have this specific queue or this,” which I think what you were saying is the German model—you have to handle hate speech in this way—in some ways that’s actually backfired. Because now we are handling hate speech in Germany in a specific way, for Germany, and our processes for the rest of the world have far surpassed our ability to handle, to do that. But we’re still doing it in Germany the way that it’s mandated that we do it there. So I think guidelines are probably going to be a lot better. But this, I think, is going to be an interesting conversation to have over the coming years, maybe, more than today. But it’s going to be an interesting question.
NT: Last question. You’ve made a lot of big changes: the meaningful interactions update was a huge change; the changes in the ways that you’ve found and stopped the spread of misinformation; the changes today, in the way you work with developers. Big changes, right. Lots of stuff happening. When you think back to how you set up Facebook, are there things, choices, directional choices, you wish you had done a little differently that would have prevented us from being in this situation?
MZ: I don’t know; that’s tough. To some degree, if the community—if we hadn’t served a lot of people, then I think that some of this stuff would be less relevant. But that’s not a change I would want to go back and reverse. You know, I think the world is changing quickly. And I think social norms are changing quickly, and people’s definitions around what is hate speech, what is false news—which is a concept people weren’t as focused on before a couple of years ago—people’s trust and fear of governments and different institutions is rapidly evolving, and I think when you’re trying to build services for a community of 2 billion people all over the world, with different social norms, I think it’s pretty unlikely that you can navigate that in a way where you’re not going to face some thorny tradeoffs between values, and need to shift and adjust your systems, and do a better job on a lot of stuff. So I don’t begrudge that. I think that we have a serious responsibility. I want to make sure that we take it as seriously as it should be taken. I’m grateful for the feedback that we get from journalists who criticize us and teach us important things about what we need to do, because we need to get this right. It’s important. There’s no way that sitting in a dorm in 2004 you’re going to solve everything upfront. It’s an inherently iterative process, so I don’t tend to look at these things as: Oh, I wish we had not made that mistake. I mean, of course I wish we didn’t make the mistakes, but it wouldn’t be possible to avoid the mistakes. It’s just about, how do you learn from that and improve things and try to serve the community going forward?
–
This article first appeared on www.wired.com