In Search Of A Grand Unified Theory Of Free Expression And Privacy – Techdirt

Posted: May 28, 2020 at 7:53 am

from the time-for-a-gut-check dept

Every time I ask anyone associated with Facebook's new Oversight Board whether the nominally independent, separately endowed tribunal is going to address misuse of private information, I get the same answer: that's not the Board's job. This means that the Oversight Board, in addition to having such an on-the-nose proper name, falls short in a more important way: its architects imagined that content issues can be tackled substantively without addressing privacy issues. Yet surely the scandals that have plagued Facebook and some other tech companies in recent years have shown us that private-information issues and harmful-content problems have become intimately connected.

We can't turn a blind eye to this connection anymore. We need the companies, and the governments of the world, and the communities of users, and the technologists, and the advocates, to unite behind a framework that emphasizes the deeper-than-ever connection between privacy problems and free-speech problems.

What we need most now, as we grapple more fiercely with the public-policy questions arising from digital tools and internet platforms, is a "unified field theory" (or, more properly, a "Grand Unified Theory," a.k.a. GUT) of free expression and privacy.

But the road to that theory is going to be hard. From the beginning three decades ago, when digital civil liberties emerged as a distinct set of issues that needed public-policy attention, the relationship between freedom of expression and personal privacy in the digital world has been a bit strained. Even the name of the first big conference to bring together all the policy people, technologists, government officials, hackers, and computer cops reflected the tension. The first Computers, Freedom and Privacy conference, held in Burlingame, California, in 1991, made sure that attendees knew that Privacy was not just a kind of Freedom but its own thing that deserved its own special attention.

The tensions emerged early on. It seemed self-evident to most of us back then that freedom of expression (and freedom of assembly and freedom of inquiry) had to have some limits, including limits on what any of us could do with the private information of other people. But while it's conceptually easy to define in fairly clear terms what counts as "freedom of expression," the consensus about what counts as a privacy interest is murkier. Because I started out as a free-speech guy, I liked the law-school-endorsed framework of "privacy torts," which carved out some fairly narrow privacy exceptions to the broad guarantees of expressive freedom. That "privacy torts" setup meant that, at least when we talked about "invasion of privacy," I could say what counted as such an invasion and what didn't. Privacy in the American system was narrow and easy to grasp.

But this wasn't the universal view in the 1990s, and it's certainly not the universal view in 2020. In the developed world, including the developed democracies of the European Union, the balance between privacy and free expression has been struck in a different way. The presumptions in the EU favor greater protection of personal information (and related interests like reputation) and somewhat less protection of freedom of expression. Sure, the international human-rights source texts like the Universal Declaration of Human Rights (in Article 19) may protect freedom "to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers." But ranked above those informational rights (in both the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights) is the protection of private information, correspondence, honor, and reputation. This difference in balance is reflected in European rules like the General Data Protection Regulation.

The emerging international balance, driven by the GDPR, has created new tensions between freedom of expression and what we loosely call "privacy." (I use quotation marks because the GDPR regulates not just the use of private information but also the use of personal information that may not be private, like old newspaper reports of government actions to recover social-security debts. This was the issue in the leading "right to be forgotten" case prior to the GDPR.) Standing by itself, the emerging international consensus doesn't provide clear rules for resolving those tensions.

Don't get me wrong: I think the idea of using international human-rights instruments as guidance for content approaches on social-media platforms has its virtues. The advantage is that in international forums and tribunals it gives the companies as strong a defense as one might wish in the international environment for allowing some (presumptively protected) speech to stay up in the face of criticism and for removing some (arguably illegal) speech. The disadvantages are harder to grapple with. Countries will differ on what kind of speech is protected, but the internet does not quite honor borders the way some governments would like. (Thailand's lèse-majesté law is a good example.) In addition, some social-media platforms may want to create environments that are more civil, or child-friendly, or whatever, which will entail more content-moderation choices and policies than human-rights frameworks would normally allow. Do we want to say that Facebook or Google *can't* do this? That Twitter should simply be forbidden to tag a presidential tweet as unsubstantiated? Some governments and other stakeholders would disapprove.

If a human-rights framework doesn't resolve the free-speech/privacy tensions, what could? Ultimately, I believe that the best remedial frameworks will involve multistakeholderism, but I think they also need to begin with a shared (consensus) ethical framework. I present the argument in condensed form here: "It's Time to Reframe Our Relationship With Facebook." (I also published a book last year that presents this argument in greater depth.)

Can a code of ethics be a GUT of free speech and privacy? I don't think it can, but I do think it can be the seed of one. But it has to be bigger than a single company's initiative, which more or less is the best we can reasonably hope Facebook's Oversight Board (assuming it sets out ethical principles as a product of its work on content cases) will ever be. I try not to be cynical about Facebook, which has plenty of people working on these issues who genuinely mean well, and who are willing to forgo short-term profits to put better rules in place. While it's true at some sufficiently high level that the companies privilege profits over the public interest, the fact is that once a company is market-dominant (as Facebook is), it may well trade off short-term profits as part of a grand bargain with governments and regulators. Facebook is rich enough to absorb the costs of compliance with whatever regimes the democratic governments come up with. (A more cynical read of Zuckerberg's public writings in the aftermath of the company's various scandals is that he wants the governments to get the rules in place, and then Facebook will comply, as it can afford to do so better than most other companies, and then its compliance will be a defense against subsequent criticism.)

But the main reason I think reform has to come in part at the industry level rather than at the company level is that company-level reforms, even if well-intended, tend to instantiate a public-policy version of Wittgenstein's "private language" problem. Put simply, if the ethical rules are internal to a company, the company can always change them. If they're external to a company, then there's a shared ethical framework we can use to criticize a company that transgresses the standards.

But we can't stop at the industry level either: we need governments and users and other stakeholders to be able to step in and say to the tech industries that, hey, your industry-wide standards are still insufficient. You know that industry standards are more likely to be adequate and comprehensive when they're buttressed both by public approval and by law. That's what happened with medical ethics and legal ethics: the frameworks were crafted by the professions but then recognized as codes that deserve to be integrated into our legal system. There's an international consensus that doctors have duties to patients ("First, do no harm") and that lawyers and other professionals have fiduciary duties to their clients. I outline how fiduciary approaches might address Big Tech's consumer-trust problems in a series of Techdirt articles that begins here.

The fiduciary, code-of-ethics approach to free-speech and privacy problems for Big Tech is the only way I see of harmonizing digital privacy and free-speech interests in a way that will leave most stakeholders satisfied (as most stakeholders are now satisfied with medical-ethics frameworks and with lawyers' obligations to protect and serve their clients). Because lawyers and doctors are generally obligated to tell their clients the truth (or, if for some reason they can't, to end the relationship and refer the clients to other practitioners), and because they're also obligated to do no harm (e.g., not to allow personal information to be used in a manipulative way or to violate clients' privacy or autonomy), these professions already have a Grand Unified Theory that protects both speech and privacy in the context of clients' relationships with practitioners.

Big Tech has a better shot at resolving the contradictory demands on its speech and privacy practices if it aspires to do the same, and if it embraces an industry-wide code of ethics that is acceptable to users (who deserve client protections even if they're not paying for the services in question). Ultimately, if the ethics code is backed by legislators and written into law, you have something much closer to a Grand Unified Theory that harmonizes privacy, autonomy, and freedom of expression.

I'm a big booster of this GUT, and I've been making versions of this argument before now. (Please don't call it "Godwin-Unified Theory": having one law named after me is enough.) But here in 2020 we need to do more than argue about this approach: we need to convene and begin to hammer out a consensus about a systematic, harmonized approach that protects human needs for freedom of expression, for privacy, and for autonomy that's reasonably free of the psychological-warfare tactics of informational manipulation. The issue is not just false content, and it's not just personal information: open societies have to incorporate a fairly high degree of tolerance for unintentionally false expression and for non-malicious or non-manipulative disclosure or use of personal information. But an open society also needs to promote an ecosystem, a public sphere of discourse, in which neither the manipulative crafting of deceptive and destructive content nor the manipulative targeting of it based on our personal data is the norm. That's an ecosystem that will require commitment from all stakeholders to build: a GUT based not on gut instincts but on critical rationalism, colloquy, and consensus.

Filed Under: data protection, facebook oversight board, fiduciary duty, free speech, grand unified theory, greenhouse, multi-stakeholder, oversight board, privacy
