FAQ: Why Brazil’s Plan to Mandate Traceability in Private Messaging Apps Will Break User’s Expectation of Privacy and Security – EFF

Despite widespread complaints about its effects on human rights, the Brazilian Senate has fast-tracked the approval of PLS 2630/2020, the so-called Fake News bill. The bill lacked the broad and intense social participation that characterized the development of the 2014 Brazilian Civil Rights Framework for the Internet, and it is now in the Chamber of Deputies. The Chamber has been holding a series of public hearings whose input should be considered before a new draft text is released.

The traceability debate has mostly focused on malicious coordinated action on WhatsApp, which is the most popular encrypted messaging tool in Brazil. There has been minimal discussion of the impact on other tools and services such as Telegram, Signal, or iMessage. WhatsApp uses a specific privacy-by-design implementation that protects users by making forwarded messages indistinguishable, to the private messaging service, from other kinds of communications. When a WhatsApp user forwards a message using the forward arrow, the forwarding information is marked on the client side (including whether the message has been forwarded more than five times), but the fact that the message has been forwarded is not visible to the WhatsApp server. In such a scenario, the traceability mandate would take this information, which was previously invisible to the server, and make it visible, breaking the privacy-by-design implementation and undermining users' expectations of privacy and security.
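To illustrate the kind of design described above, here is a minimal, hypothetical sketch (in Python, and not WhatsApp's actual code) of how a client can track forwarding state entirely inside the end-to-end encrypted payload, so that the server only ever relays ciphertext and routing information. The function names and payload fields are our own illustration.

```python
# Hypothetical sketch, not WhatsApp's actual protocol: the forwarding marker
# lives inside the end-to-end encrypted payload, so only the sending and
# receiving clients can see it; the server relays opaque ciphertext.
import json

def build_forwarded_payload(received: dict) -> dict:
    """What a client might construct when the user taps the forward arrow."""
    return {
        "text": received["text"],
        # Incremented purely client-side; used for labels such as "forwarded"
        # or "forwarded many times" (e.g. more than five hops).
        "forward_count": received.get("forward_count", 0) + 1,
    }

def encrypt_for_recipient(plaintext: bytes) -> bytes:
    """Placeholder for the end-to-end encryption step (e.g. a Signal-protocol
    session); the server never sees the plaintext."""
    raise NotImplementedError

def send_forward(received: dict) -> bytes:
    """Returns the only thing the server ever handles: ciphertext."""
    return encrypt_for_recipient(json.dumps(build_forwarded_payload(received)).encode())
```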

While we do not know how a service provider will implement any traceability mandate, nor at what cost to security and privacy, ultimately any implementation will break users' expectations of privacy and security, and would be hard to implement in a way that matches current security and privacy standards. Such changes move companies away from the privacy-focused engineering and data minimization principles that should characterize secure private messaging apps. Below, we take a deep dive into a series of questions and answers to explain why the current language of two critical provisions of the Senate's bill would undermine human rights:

PROBLEM I: A tech mandate to force private messaging servers to track massively forwarded messages sent to groups or lists

Article 10 of the bill compels private messaging applications to retain, for three months, the chain of all communications that have been massively forwarded. The data to be retained includes the users who did the mass forwarding, the date and time of the forwardings, and the total number of users who received the message. The bill defines mass forwarding as the sending of the same message by more than five users, in an interval of up to fifteen days, to chat groups, transmission lists, or similar mechanisms that group together multiple recipients. This retention obligation applies only to messages whose content has reached 1,000 or more users within fifteen days. The retained logs must be deleted if the virality threshold of 1,000 users has not been met within fifteen days.
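To make these thresholds concrete, the sketch below translates the retention rule as described above into code. This is purely illustrative: the bill does not prescribe any implementation, and all names and data structures here are our own assumptions.

```python
# Illustrative sketch of Article 10's thresholds as summarized above; the bill
# does not prescribe any particular implementation, and these names are ours.
from datetime import datetime, timedelta

WINDOW = timedelta(days=15)            # interval for mass forwarding and virality
MASS_FORWARDING_USERS = 5              # "more than five users"
VIRALITY_THRESHOLD = 1_000             # recipients reached within the window
RETENTION_PERIOD = timedelta(days=90)  # "three months"

def must_retain_chain(forwarding_users: set,
                      total_recipients: int,
                      first_forward: datetime,
                      now: datetime) -> bool:
    """True if the forwarding chain would have to be kept; otherwise the
    retained logs are supposed to be deleted once the 15-day window closes."""
    mass_forwarded = len(forwarding_users) > MASS_FORWARDING_USERS
    viral = total_recipients >= VIRALITY_THRESHOLD
    within_window = now - first_forward <= WINDOW
    return mass_forwarded and viral and within_window
```

Evaluating even this minimal rule requires the provider to observe the forwarding behavior of every message, which leads directly to the problems discussed next.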

Many of the most obvious implementations of this article would require companies to keep massive amounts of metadata about all users' communications, or else to break encryption in order to get access to the payload of an encrypted message. Even if other implementations are possible, we don't know exactly how any given provider will ultimately decide to comply, and at what cost to security, privacy, and human rights. Ultimately, all such implementations are moving away from the privacy-focused engineering and data minimization that should characterize secure private messaging apps.

When does access to the traceability records occur?

The third paragraph of Article 10 states that access to these records will only occur with the purpose of determining liability for the mass forwarding of illicit content, to constitute evidence in criminal investigation and procedural penal instruction, and only by court order, as defined in the Brazilian Civil Rights Framework for the Internet. (In Brazil, defamation liability can be obtained through a moral damage claim under civil law. But it is also a crime. Criminal defamation has been widely criticized by UN Special Rapporteurs on Free Expression and others for hindering free expression.)

The text is ambiguous. In one interpretation, both the mass forwarding purpose and the criminal investigation requirement are mandatory elements. This means that the metadata could only be accessed in criminal investigations that involve the mass forwarding of a message. In another interpretation, this article may allow a much broader range of uses of the recorded message history information. In this reading, the elements related to responsibility for massive forwarding of illegal content and to use in criminal investigations are separate, independently permitted uses of the data. In that case, the retained metadata could also be used to investigate illegal acts under civil law related to massively forwarded messages, and could also be used for criminal investigations unrelated to massively forwarded messages.

How does traceability break users' expectation of secure and private messaging?

In common implementations, including WhatsApp's, probabilistic end-to-end encryption ensures that an adversary can neither confirm nor disconfirm guesses about a message's content. That also includes confirming a specific guess that the message was not about a certain topic. In such scenarios, traceability allows someone with access to the metadata to confirm that a user sent a message that was identical to another message (even when the content of that message is unknown). This disconfirms the guess that the user was actually talking about something else entirely, disconfirms the guess that the user was writing something original, and disconfirms many other possible guesses about the content. In general, whether a user forwarded a message or wrote something new is a kind of activity that is fundamentally related to knowing something about the content.
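As a toy illustration of this guess-confirmation problem, consider a hypothetical traceability log that links messages by a content-derived identifier (a hash here, though any stable per-message ID has the same effect). The names and data below are our own illustration, not any provider's real design.

```python
# Toy, hypothetical illustration: a retained traceability log keyed by a
# content-derived message ID lets anyone holding the log confirm or disconfirm
# guesses about what a user forwarded, without ever decrypting a message.
import hashlib

def message_id(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

# Hypothetical retained log: user -> IDs of messages they mass-forwarded.
retained_log = {
    "+55-11-90000-0000": {message_id(b"some widely forwarded text")},
}

def confirms_guess(user: str, guessed_content: bytes) -> bool:
    """True if the log shows the user forwarded exactly this guessed content."""
    return message_id(guessed_content) in retained_log.get(user, set())
```

A negative answer is revealing too: it disconfirms the guess, which is exactly the kind of inference a well-designed end-to-end encrypted system is meant to rule out.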

In some cases, the fact that a person forwarded something could be extra-sensitive even when the forwarded item is not necessarily illegal, e.g. when someone who made a threat wants to punish someone for forwarding the threat, or when someone wants to punish a leaker for leaking something. WhatsApp made a specific privacy-by-design implementation that protects users by making forwarding indistinguishable, to the WhatsApp server, from other kinds of communications.

How does traceability for criminal and civil cases interfere with the right to privacy and data protection?

Traceability in civil and criminal cases creates serious concerns about privacy and freedom of expression. Revealing the complete chain of communication for a massively forwarded message can also be intrusive in a distinctive way, beyond the intrusion of revealing individual relationships: the complete history for certain messages may reveal the structure and membership of a whole community, such as people who all share a certain belief or interest, or who speak a certain minority language, even when none of them is actually involved with illegal activities. This opens wide avenues for abuse.

Brazil is one of the few democracies with a Constitution prohibiting anonymity exclusively in the context of freedom of expression. However, that prohibition extends neither to the protection of privacy nor to accessing information anonymously. Moreover, such a restriction on anonymous speech cannot serve to impede expression altogether when this protection is crucial to enable someone to speak in circumstances where her life or physical integrity might be at risk.

The Inter-American Commission on Human Rights (IACHR) Office of the Special Rapporteur for Freedom of Expression has explained that privacy should be understood in a broad sense as "every personal and anonymous space that is free from intimidation or retaliation, and necessary for an individual to be able to freely form an opinion and express his or her ideas as well as to seek and receive information, without being forced to identify him or herself or reveal his or her beliefs and convictions or the sources he or she consults." Anonymity does not shield Internet users who engage in illegal speech in accordance with international human rights law. In all those cases, the IACHR Office has noted that judicial authorities would be authorized to take reasonable measures to disclose the identity of a user engaged in an illegal act, as provided by law. At the United Nations, the Special Rapporteur on Freedom of Expression has also noted that encryption and anonymity provide individuals and groups with "a zone of privacy online to hold opinions and exercise freedom of expression without arbitrary and unlawful interference or attack."

What could go wrong with achieving a traceability mandate?

First, forwarding a popular message does not mean you should automatically be under suspicion. In fact, the virality of a message does not change the privacy and due process rights of the original sender nor the presumption of innocence, a core requirement of international human rights law.

Second, the first person to introduce some content into a particular private messaging system could be wrongly viewed as or assumed to be the author who massively forwarded an allegedly illegal message.

Third, a person who forwarded content by any means other than an app's forwarding interface could be wrongly viewed as or assumed to be the author. People could be framed as authors of content that they were not actually involved in creating. People could also be more frightened about sharing information if they think it's more likely that someone will try to punish them for their role in disseminating it (which is also a very disproportionate measure for the huge majority of innocent users of messaging systems).

Finally, the line between originating and forwarding messages can be blurred either by the government, leading to overzealous policing, or in the public's eyes, leading to self-censorship. The latter also creates a serious concern for freedom of expression.

Which assumptions are wrong in the traceability debate in Brazil?

Article 10 seeks to trace back everyone who has massively forwarded a message for the purpose of investigation or prosecution of alleged crimes. This includes the originator as well as everyone who forwarded the message, regardless of whether the distribution was done maliciously or not. The supporters of the bill have argued that mass retention of the chain of communication is needed to help trace back who the originator of the message was.

That assumption is wrong from the outset.

First, while the details of how traceability would be carried out depend on the provider's implementation choices, the mandate doesn't necessarily imply that there will be mass centralized retention. However, that would be the simplest implementation, so we have serious concerns about it. Mass data retention is a disproportionate measure that would affect millions of innocent users instead of only those investigated or prosecuted for an illegal act under criminal or civil law. Mass data retention programs can be arbitrary, even if they serve a legitimate aim and have been adopted on the basis of law. On this front, the UN High Commissioner for Human Rights stated that "it will not be enough that the [legal] measures are targeted to find certain needles in a haystack; the proper measure is the impact of the measures on the haystack, relative to the harm threatened; namely, whether the measure is necessary and proportionate." These measures are not necessary and proportionate to the problem being solved.

Second, legislators should take into account that metadata is personal data under Brazil's data protection law when it relates to an identified or identifiable natural person. This means that companies should limit personal data collection, storage, and usage to legitimate, specific, and explicit purposes, and such processing should be relevant, proportional, and non-excessive in relation to the purposes for which the data is processed. Recently, the Brazilian Supreme Court issued a landmark decision stressing the constitutional grounds for the protection of personal data as a fundamental right, separate from the right to privacy. As Bruno Bioni and Renato Leite have argued, "The new precedent of the Supreme Court is such a remarkable shift of how the Court has been analyzing privacy and data protection because it changes the focus from data that is secret to data that is attributed to persons and might impact their individual and collective lives, regardless of whether they are kept in secrecy or not." Legislators should consider the impact on the right to data protection when imposing a traceability mandate in light of such developments.

Third, the bill assumes that only messages that become widely forwarded need to be traceable, regardless of whether the distribution of the message was done maliciously or not. This assumption is wrong on both counts.

Fourth, the bill ignores the fact that data minimization is essential in every privacy-by-design system, and is a key component of Brazil's data protection law. Some systems have been developed to retain less data by not tracking the relevant information and don't necessarily have a sensible way to begin to track it, which may lead to technological changes that would break users' expectations of privacy and security.

Fifth, traceability will not help track back the originator of a message. Users of private messaging apps routinely use them to share media that they got somewhere else. For example, WhatsApp users might share a cartoon that they originally found on a website or a social media site, or that they previously received through a different messaging app like Telegram or iMessage, or through WhatsApp Desktop. In that case, a version of WhatsApp with traceability still doesn't have any way of distinguishing between the case where the first user drew the cartoon herself and the case where she found it in one of these other media. She's simply tracked as the first person to introduce that cartoon into a particular forwarding chain on WhatsApp, but that's obviously different from having created it herself. Similarly, for text messages, anyone who retyped a text message, or copied and pasted it (maybe from a different app or medium), would still be tracked as the original author by virtue of having been the first one to introduce the message into the particular app.

Forwarding something other than by using a traceability-compliant in-app forwarding feature would presumably break and restart the chain. For example, WhatsApp users who receive text messages could copy and paste them instead of using the forward button inside WhatsApp or WhatsApp Desktop. The software wouldn't have a way to correctly identify this as a form of forwarding. Likewise, if the phone number used is a virtual number or a foreign, non-Brazilian one, neither the non-Brazilian account nor the virtual number will be covered by this law. In such scenarios, the software won't be able to trace back the foreign originator. Similarly, in WhatsApp, the originator's identity is not strongly and reliably authenticated by technical means. It is simply maintained as a metadata field within the forwarded encrypted message that can be seen by the client applications but not by the WhatsApp server. For example, the encrypted message headers might say that a certain message had originated from the user with an indicated telephone number. Official client software that complied with the requirements of these proposals would then copy that header, with no changes, when forwarding a message to new recipients. So people using unofficial client software could remove or obscure it, or could even frame someone else as responsible for a message. It would not be practical to confirm by technical means whether the reported sender was really involved in originating the message or not. (Other proposals may be able to solve these problems, but at a significant cost to privacy, since the service provider would need to have much more access to confirm for itself exactly what its users are doing before the malicious act happens.)
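To see why the in-band originator field described above cannot be trusted, here is a hypothetical sketch contrasting what a compliant client, a copy-and-paste user, and a modified client would produce. None of these names correspond to any real implementation.

```python
# Hypothetical sketch of why an in-band originator header is unreliable: it is
# just data that compliant clients copy along, while copy/paste, re-typing, or
# a modified client can reset or forge it.

def official_forward(payload: dict, sender: str) -> dict:
    """A compliant client copies the originator header unchanged, unverified."""
    return {"text": payload["text"],
            "originator": payload.get("originator", sender)}

def copy_paste_send(text: str, sender: str) -> dict:
    """Copy/paste or re-typing breaks the chain: the sender looks like the origin."""
    return {"text": text, "originator": sender}

def forged_forward(payload: dict, framed_user: str) -> dict:
    """A modified (unofficial) client can attribute the message to someone else."""
    return {"text": payload["text"], "originator": framed_user}
```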

Why are calls to separate private, encrypted conversations from group conversations misguided?

One argument for traceability is that, while private conversations and mass media or mass discussions should each be able to exist, they shouldn't be combined. In other words, a particular tool or medium should either be private and secure (and only practical for use by small groups of people) or public (and visible, at least to some extent, for others in society to notice and respond to either in the media or via the legal system). This argument criticizes existing services for having both a private character (in terms of the confidentiality of contents and users' behavior) and a quasi-mass media character (in terms of the extremely large audience for some forwarded items). But these arguments ignore the fact that, even under this traceability mandate, messages can be forwarded from person to person while not preserving their ultimate origin, or entire forwarding history, making it much less likely that the true original sender of very widely distributed content can ever be identified with confidence.

Many existing private messaging systems already do not necessarily provide traceability. Why not?

Consider email: you can forward an email message without necessarily forwarding any information about where you got it from, and you can also edit it when forwarding it, to remove or change that information. Systems like email don't have traceability because they're somewhat decentralized, and because they give users complete control over the content of the messages they send (so users can simply edit out any information that they don't want to include).

Encryption and privacy features have also discouraged traceability because modern systems are typically designed so that the developer or service provider doesn't know exactly who is writing what, or what the content of a message is, including properties like whether or not two messages have the same or similar contents. (Even when WhatsApp, for example, centrally stores a copy of media attachments so that users don't have to use up time and data re-uploading things that they forward, the design of the system avoids letting the company know which media is or isn't included as an attachment to a particular message.)
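The parenthetical above refers to a general encrypt-then-reference technique. The sketch below shows the idea in simplified form; it is not WhatsApp's exact protocol, and a real system would use an authenticated cipher rather than the toy one here. The attachment is encrypted with a fresh random key before upload, so the server stores an opaque blob, while the key and blob reference travel only inside the end-to-end encrypted message.

```python
# Simplified sketch of encrypt-then-reference for attachments; not WhatsApp's
# exact protocol. The server stores only the opaque ciphertext blob and cannot
# tell which messages, if any, reference it.
import os
import hashlib

def toy_stream_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy placeholder so the sketch runs; a real system would use an AEAD cipher.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def prepare_attachment(media: bytes) -> tuple:
    key = os.urandom(32)                              # fresh key per attachment
    blob = toy_stream_encrypt(media, key)             # uploaded to the server
    reference = {
        "blob_id": hashlib.sha256(blob).hexdigest(),  # how recipients fetch it
        "key": key.hex(),                             # sent only inside the E2EE message
    }
    return blob, reference
```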

Regardless of why, many recently developed messaging tools also do not allow traceability: some for the same reasons as email, some simply because their developers don't feel that it would be in the users' interest overall, and they may want to reduce users' anxiety about being punished or threatened over information that they have passed along.

Why will newer technologies or messaging systems have difficulties complying with these proposals?

Though the messaging apps themselves may not appear decentralized, as email is, the idea of tracking when a user forwards a message may depend on a degree of control over client applications that simply doesn't exist. It's implausible to imagine that all client applications will cooperate with restrictions and limitations in the same way, or even can.

Some systems are too decentralized (there is no central operator who could be responsible for compliance). This mandate assumes that application providers are always able to identify and distinguish forwarded and non-forwarded content, and also able to identify the origin of a forwarded message. This depends in practice on the service architecture and on the relation between the application and the service. When the two are independent, as is often the case with email, it is common that the service cannot differentiate between forwarded and non-forwarded content, and that the application does not store the forwarding history except on the user's device.

This architectural separation is very traditional in Internet services, and while it is less common today in the most-used private messaging applications, the obligation would limit the use of XMPP or similar solutions. This could also negatively impact open source messaging applications.

Is there any connection between traceability and innovation according to Articles 10 and 11 of the Senate's version of the bill?

Article 10 compels private messaging applications to retain the chain of communications that have been massively forwarded based upon a virality threshold. Article 11 states that the use and trading of external tools by the private messaging service providers aimed at mass message forwarding are forbidden, except in the case of standardized technological protocols regarding Internet application interaction. The bill requires that a private messaging service provider adopt policies, within the technical limits of its service, to cope with the use of these tools.

We don't know how a provider will comply with either Article 10 or Article 11, but it will presumably require developers to actively try to block and suppress the use of third-party software that interacts with their platforms by strictly controlling the client applications (to ensure that they cooperate with tracking forwarding history by recording whether they had or had not forwarded a message, and by updating the records about that history).

Many traceability proposals may require the developer of a communication system to stop other people from developing or using third-party software that interacts with that system. So the developer may be expected or required to monopolize the ability to make client application tools, and in turn to be the only one who is allowed to change or improve those tools. This limits interoperability in a way that will likely be damaging to competition and innovation.

How does traceability relate to other efforts to regulate messaging services?

Some countries, such as China, Russia, and Turkey, have threatened to ban messaging tools that don't comply with data localization and mandatory legal identification of users. This traceability mandate would force similar practices on Brazilian users. No one's government should keep them from practicing private, secure communication, and Brazil's government should not consider joining the ranks of countries whose residents are at risk of prosecution and privacy invasion simply for using secure messaging.

As a result of this article, large social networks and private messaging apps (those offering service in Brazil to more than two million users) may demand a valid ID document from users where there are complaints of violations of the "fake news" law, or when there are reasons to suspect that accounts are automated bots not identified as such, or that they are behaving inauthentically, such as assuming someone else's identity to deceive the public. The system for submitting complaints for violations of the law could also create serious new unintended consequences by opening the door to abusive, inaccurate claims. For example, malicious actors may file false claims as a means to identify a certain account in order to harass the user.

The bill also exempted parody and humor, as well as pseudonyms, from the application of the law. But this supposed failsafe won't protect pseudonymous users; while users are explicitly permitted to use pseudonyms, the service provider may still demand their legal identities.

Article 7 (sole paragraph) compels social networks and private messenger apps to create technical measures to detect fraud in account creation and in the use of accounts that fail to comply with this bill. Providers will be forced to convey those new mechanisms in their terms of use and other documents available to users. Read together with Article 5, I ("identified account" means that the application provider has fully identified the account owner, with confirmation of data previously provided by the owner), these new provisions seem to match many companies' existing practices but may be expanded and enforced in cases of non-compliance with this bill.

How will companies' obligation to identify users impact human rights?

Compelling these companies to identify an online user should only be done in response to a request by a competent authority, not by default and without legal process. Currently, Brazil's Civil Rights Framework exempts subscriber data from the usual requirement of a judicial order for competent authorities. Competent administrative authorities can already directly demand these types of data in certain crimes. Police authorities have also already claimed the ability to directly access subscriber data, and at a recent hearing at the Chamber of Deputies, the representative of the Federal Prosecutors' Office agreed that the information already collected by application providers is enough to identify users in investigations. Also according to the prosecutor, demanding the collection of ID numbers would be disproportionate, run afoul of data minimization concerns, and could bring issues regarding ID counterfeiting as well as authenticity challenges.

Ultimately, forcing companies to demand identification of users will not solve the fake news problem; it will create a new series of problems, and will disproportionately impact users.

Conclusion

There are policy responses and technical solutions that can improve the situation: for example, limiting the number of recipients of a forwarded message, or labeling viral messages to indicate that they did not originate from a close contact. Silencing millions of other users, invading their privacy, or undermining their security are not viable solutions. While this bill has several serious flaws, we hope the Chamber of Deputies will take into account these particularly egregious ones and recognize the danger, and ineffectiveness, of the traceability mandate.
