The perils of opening the mind – The Boston Globe

Used properly and creatively, new technologies that open up the mind could let us learn more about ourselves than introspection alone can reveal, control far-flung environments, and even transcend the boundaries of self by participating in multi-brain networks. But such technologies may also allow unprecedented monitoring, loss of autonomy and liberty, discrimination, and loss of the default right to exclude others from the inner mental and physical self.

Existing laws, regulations, and business practices are no match for these prospects. As it stands now, nothing appropriately defines when and how neurotech devices may be used: not the Constitution, not federal or state laws, and certainly not the user-consent agreements on neurotech products. Nothing limits access to the newly emerging class of information derived directly from our brains.

This matters greatly because an ability to get inside our minds creates concerns about privacy and autonomy that go beyond those presented by other kinds of data. Data from GPS and social media, for example, can inform others (appropriately or not) of facts we largely already know about ourselves and that we have expressed through our movements, clicks, and posts. Brain-derived data, by contrast, enables others to know things about us that we have not outwardly expressed and that we may not even know about ourselves, since we cannot know the workings of our brains by introspection. Unknown to us and outside of our control, these mental and neurological processes nevertheless may have greater predictive validity than other forms of data about our health and behavior.

WHAT COULD BE done with information derived from the brain? Large stores of data at the population level could advance neurological research. Long-term information about your brain activity could help you make lifestyle choices like identifying the best methods of reducing stress in your life. The data also could help you and your doctor spot neurological problems earlier than generally happens now.

But imagine, as well, the following scenarios:

An employer wants to reduce the risk of on-the-job disability, so it screens applicants for neurological markers indicating a predisposition to chronic pain and depression.

A school system equips students with headbands that monitor their state of focus, restricting students' cognitive freedom and perhaps justifying cutting back on teachers.

A gaming company tracks a user's arousal patterns, fine-tuning the game to his or her precise tastes, inducing behavioral addiction.

A political campaign buys large volumes of neurological data from a data broker to identify individuals with hallmarks of impulsivity and aggression, then targets them with politically radical social media messaging and advertising.

Workers and students in some parts of the world already are made to wear headbands that read their brains' EEG signals or are watched by affect recognition systems that monitor their attention and mood. The data from such systems may not truly be helpful or relevant; it may be of middling accuracy or provide an unrepresentative type of insight into performance. But given that employers, education systems, and governments have screened individuals using all kinds of dubious and debunked instruments, from handwriting analysis to spurious personality tests and unreliable polygraphs, even inappropriate neurotechnologies could be put into widespread use and have substantial consequences.

The holes in existing privacy laws are easy to see. Health privacy laws, for example, dictate that if a device transmits information about your mood to your doctor, your doctor has to keep the information confidential. But if the same information is also held by the device manufacturer, or you keep it on an app on your phone, the device manufacturer and app maker are not bound by these obligations. Many of the new neurotech devices don't even have medical applications.

In the realm of criminal law, apart from the warrant requirement under the Fourth Amendment, nothing limits the ability of the state to obtain and use neurological information to probe memory, evaluate veracity, or predict future risk. The Fifth Amendment right not to incriminate oneself and the First Amendment protection against compelled speech also may fail to apply. Just as the state can cause a person to take a blood-alcohol test but cannot make him admit that he is drunk, the state potentially could require a suspect to undergo a neurological test when it cannot compel him to make a statement.

ON THE CUSP of our minds becoming transparent by default, we need to consider how to shape the kind of open-minded world we want.

Frequently, the United States avoids regulation by relying on the notion of consent. We allow individuals to opt into all kinds of things, including the sale of our data. That apparent liberty is taken off the table only in a few cases, mostly relating to sale of the self and physical body. But consent is meaningless where there are great asymmetries of power or knowledge. If employers require certain neurological testing or monitoring, how free is an individual to make the choice not to be employed? If neurological data harvested from an individual today could be used against that person five years into the future, in a way that is currently unforeseeable, how meaningful was the consent? Consent falls apart if a person cannot know the content of that to which they are consenting.

Another option would be to selectively restrict the conduct of companies or organizations that would use neurotechnologies. The Genetic Information Nondiscrimination Act (GINA), a visionary piece of legislation preventing discrimination based on biological data, provides a model for how to allow progress in scientific and commercial development while limiting related social ills. The 2008 law prohibits genetic discrimination in employment and health insurance coverage by blocking employers and insurers from mandating DNA testing. But a pitfall of this approach is apparent as well. Only 12 years later, employers and insurers can obtain genetic information from third-party providers, just as law enforcement currently accesses third-party GPS and genetic data.

A third approach could be to regulate neurotech devices and data in a manner that focuses on the values we want to protect, as GINA does, with some of the adaptability of successful anti-discrimination laws. For example, the Americans with Disabilities Act broadly prohibits employers from discriminating based on an individual's disability or the employer's perception of disability. This transcends specific facts or acts; it does not specify the conditions that constitute disabilities, the particular conduct that amounts to discrimination, or the particular accommodations that employers have to make. This protects workers who are in jobs that didn't exist when the law was drafted and whose needs can be met with tools that didn't exist at the time.

Neurotech and the information it generates touch on values core to American society, from nondiscrimination to cognitive liberty and self-determination. The right way to guard these values, before we lose them unwittingly, is for authoritative bodies to convene wide-ranging conversations among developers, investors, researchers, citizens of many perspectives, law enforcement, ethicists, lawyers, and lawmakers to describe the precise harms that could occur and values that require safeguards. Eventually, it may even be wise to have a standing body to regulate uses of this technology in light of established, consensus principles.

The key is to start these conversations now and then legislate incrementally and appropriately, so that we do not mindlessly slide into our open-mindedness.

Amanda Pustilnik is a professor at the University of Maryland School of Law and a faculty member at the Center for Law, Brain, and Behavior at Massachusetts General Hospital.
