Facebook's decision last week to disable accounts belonging to NYU researchers reveals what the company really considers the risks of independent, public-interest research, and why the only solution is legislation that requires and enables transparency.
On August 3, Facebook announced that it had "disabled accounts, apps, Pages, and platform access" for researchers involved in New York University's Ad Observatory project. Facebook said that one of the tools used in the project, a browser extension called Ad Observer, was "scraping" data about Facebook users who had not consented to having their data collected or analyzed. The company claimed that the NYU application violated its privacy policies and had to be shut down under the terms of its consent order with the FTC.
NYU researchers have disputed every one of Facebook's claims about the project, as have extensive reports by journalists like Protocol's Issie Lapowsky and Wired's Gilad Edelman. Mozilla has also denied Facebook's claim that Ad Observer presents privacy concerns, noting in a statement that it conducted extensive reviews of the tool before recommending it to users. The FTC publicly rebuked Facebook, accusing the company of falsely claiming that the consent decree required it to take action against NYU. And more than 250 researchers, scientists, and other private citizens signed an open letter I wrote asking the company to reinstate the researchers' accounts.
Some people see the logic in Facebook's position and explain why the company is cautious about academic research projects like this one. For example, Neil Chilson, former acting Chief Technologist at the FTC and currently a research fellow at Stand Together and the Charles Koch Institute, wrote a thoughtful Twitter thread explaining that, because of possible data leaks, "the Ad Observatory research project raised the legal risk for Facebook even without an FTC settlement and the FTC settlement increases the risk significantly." He argues that things can go wrong with third-party tools even when they are built and vetted as carefully as Ad Observer.
I have been talking to people at Facebook about this topic for many years and have heard the same argument from company employees again and again. I understand the argument. But ultimately I'm not convinced. The issue is not really about avoiding every possible data leak and privacy threat. It is that the company weighs the potential consequences of transparent, public-interest research for its reputation and its bottom line.
To understand why, we need to look at the NYU case a bit more closely.
For starters, Chilson is absolutely right that the very existence of NYU's Ad Observer does increase the company's risk, particularly in the context of the FTC consent order.
But so does every other third-party use of Facebook data.
That includes anyone with access to Facebook's APIs. (Note that while developers and business partners can be approved for API access, academic researchers cannot.) It also includes the many actors who continuously scrape the platform for private, commercial gain.
So why enforce in this instance? Where is the line drawn? Given the privacy protections NYU put in place and the particular use case of this tool, it is hard to see Ad Observer as among the riskiest third-party uses of data Facebook might worry about.
Moreover, there was a simple way for Facebook to get at least some clarity about the risk in question: ask the FTC.
No, the FTC would never say, "Approve away, Facebook! We promise not to come after you if something goes wrong with Ad Observer." That's not how these things work. But Facebook knew that its public statements implying the consent decree dictated its decision rested on shaky foundations, and it could have requested guidance on use cases like NYU's Ad Observer.
Granted, the FTC might not have responded to such a request. But asking would have made Facebook's claims and its decision more plausible, allowing the company to argue that, absent FTC clarification, it could not accept the risk.
Facebook's executives knew all of this. The company's capable lawyers and its policy and communications teams knew they could approach the FTC. But they didn't, likely because the FTC would have said exactly what it did say: the consent decree does not prohibit NYU's work, and it also "does not bar Facebook from creating exceptions for good-faith research in the public interest. …"
This makes clear that Facebook can decide for itself where to draw the line. Paradoxically, that is a major problem for the company, because even if Facebook did the right thing and created an exception for good-faith, public-interest research, things would get harder for the business in two ways:
First, research like this is a constant source of stress for the communications staff and other teams (in NYU's case, the teams that work on advertising). Independent scrutiny is uncomfortable, and those subjected to it usually resist it. (That is the wrong response, but it is easy to see why the people involved would rather not have the FTC tell them they have a choice to make.)
Second, someone would have to decide what the new policy covers: where exactly to draw the line between "good-faith" and "bad-faith" uses of Facebook data. It is far more convenient to claim the matter is out of your hands. Otherwise the company would have to disclose, and clearly justify, how much risk is too much risk, and why one outside party should be trusted more than another.
Which brings me back to my earlier point. Yes, Ad Observer and similar cases of good-faith research pose privacy risks. And as Chilson notes, those risks are heightened by the existence of an FTC consent decree. But that is true of every third-party use of data. It always has been. And as with other forms of data access, Facebook could take steps to mitigate the risks inherent in independent research (for instance, by requiring regular independent audits and "pen testing" of research tools).
There are many ways to do this. There always have been. But because independent research does not help the bottom line, and may in fact hurt it, it was, and remains, very easy for Facebook to say, "We'd rather not."
Ultimately, the NYU case shows why third-party data access needs transparency regulation. As long as Facebook, or any other major tech company, gets to decide where these lines are drawn, public-interest research, transparency, and accountability will suffer.