What we need to know about artificial intelligence and privacy rights – The Daily Star

Posted: May 20, 2021 at 5:07 am

Nobel Prize-winning author Kazuo Ishiguro's latest novel Klara and the Sun, his first since receiving the award in literature in 2017, has some relevance for policymakers and ordinary citizens across the globe. The protagonist of this dystopian science fiction story is Klara, an artificial friend (AF): a human-like teenager who behaves and thinks almost like her peers of the same age and is a fast learner, as any device or robot using artificial intelligence (AI) can be expected to be. What we also learn, however, is that if robots, even super-intelligent ones, are allowed to make decisions that affect the lives of humans, the consequences may be unintended unless there are strict guidelines protecting privacy and other individual rights.

Many discerning readers might already be aware that AI is whipping up quite a storm, particularly as it makes inroads into facial recognition software, law enforcement, and hiring decisions in the corporate world. Policymakers in many countries are alarmed, realising the pros as well as the cons of this revolutionary technology. The European Union (EU), which has been at the forefront of safeguarding privacy rights, unveiled strict regulations in April 2021 to govern the use of AI, a first-of-its-kind policy that outlines how companies and governments can use AI technology, often referred to as "machine learning". While these new rules are still on the drawing board and have yet to be implemented, their impact is expected to be similar to that of the General Data Protection Regulation (GDPR), which was drawn up to keep global technology companies such as Amazon, Google, Facebook and Microsoft in check.

The EU has been the world's most aggressive watchdog of the technology industry, with its policies often used as blueprints by other nations. As mentioned, the bloc has already enacted the world's most far-reaching data privacy regulation, the GDPR, and is debating additional antitrust and content-moderation laws. "We have to be aware that GDPR is not made for blockchain, facial or voice recognition, text and data mining [...] artificial intelligence," said Axel Voss, a German member of the European Parliament and one of the creators of GDPR.

AI, in simple terms, is a computer programme that enables a machine to learn to mimic the problem-solving and decision-making capabilities of the human mind. An AI-enabled device learns how to respond to certain situations, using algorithms and historical data to recognise a face, predict the weather, or support a search engine like Google. AI is playing a critical role in autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines, online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, and much more.
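To make the idea of "learning from historical data" concrete, here is a minimal sketch in Python. It uses the freely available scikit-learn library and its bundled handwritten-digits dataset, neither of which is mentioned in the article; it is an illustration of the general pattern, not of any particular system discussed here.

    # A minimal sketch of machine learning: fit a model to labelled historical
    # examples, then ask it to label images it has never seen before.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # historical, labelled images of handwritten digits
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=2000)  # a simple learning algorithm
    model.fit(X_train, y_train)                # "training" on past examples
    print("accuracy on unseen images:", model.score(X_test, y_test))

Facial recognition, weather prediction and search ranking rely on far larger models and data sets, but the underlying pattern of learning from past examples is the same.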

Large technology companies have poured billions of dollars into developing AI, as have scores of others that use it to develop medicine, underwrite insurance policies, and judge creditworthiness. Governments use versions of AI in criminal justice and in allocating public services like income support. The potential for AI is enormous, and therein lies the rub. To what extent can machines be manipulated to gain financial or strategic advantage by those who fund this research or manufacture them? Are they really neutral? Can machines adapt to the humans they interact with, or do existing biases become hard-wired?

Facial recognition algorithms have been at the centre of privacy and ethics debates. Imagine a scenario where a government buys facial recognition software and uses it to track attendees during protest marches. In Hong Kong, the police used a system created by Cellebrite of Israel to access the phones of 4,000 protesters. A researcher at a global AI conference claimed to be able to generate faces, including aspects of a person's age, gender and ethnicity, based on voices.

Digital rights groups across the US, UK and EU have already raised many issues prompted by advances in AI research, including privacy violations, ethical concerns, and a lack of human control over AI. Ria Kalluri, a machine-learning scientist at Stanford University in California, said, "because training data are drawn from human output, AI systems can end up mimicking and repeating human biases, such as racism and sexism." Kalluri urged her colleagues to dedicate more effort to the scientific questions that make algorithms more transparent, and to create ways for non-experts to challenge a model's inner workings.
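The point about inherited bias can be illustrated with a small, entirely synthetic sketch; the data and numbers below are invented for illustration and do not describe any real hiring system. If past decisions favoured one group regardless of merit, a model trained on those decisions simply reproduces the same gap.

    # Synthetic illustration: a model trained on biased historical hiring
    # decisions learns, and then repeats, the historical bias.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)          # a protected attribute (0 or 1)
    skill = rng.normal(0, 1, n)            # the legitimate signal
    # Historical decisions favoured group 1 regardless of skill:
    hired = ((skill + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5).astype(int)

    X = np.column_stack([skill, group])
    model = LogisticRegression().fit(X, hired)
    probs = model.predict_proba(X)[:, 1]
    for g in (0, 1):
        print(f"average predicted hiring chance, group {g}: {probs[group == g].mean():.2f}")

The gap between the two printed averages is the historical bias, now hard-wired into the model's predictions unless it is deliberately corrected.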

This controversy brings to the fore the role humans play in training the machines. We each have our biases and preferences, and the machines may inherit them. There is increasing support for efforts to "debias the algorithms". "Algorithms exclude older workers, trans people, immigrants, children," said Abeba Birhane, a cognitive scientist at University College Dublin, citing uses of AI in hiring and surveillance. AI has already drawn criticism from the late Prof Stephen Hawking, one of Britain's pre-eminent scientists, and Elon Musk, the pioneering entrepreneur, has also raised a red flag.

Having said that, it cannot be gainsaid that the AI revolution is here and is here to stay. The important question is, how do we harness the power of AI and manage its negative influence? On the positive side, research has shown that AI can be at least as good as humans at making medical diagnoses. However, rights groups have argued that "people should be told when their medical diagnosis comes from AI and not a human doctor." By the same logic, one could propose that a similar notice be issued if, for example, advertising or political speech is AI-generated.

Coming back to Klara and the Sun, Henry Capaldi, a scientist and artist, says to Klara, "there's growing and widespread concern about AF (artificial friends) right now [...] People are afraid what's going on inside," meaning inside the "black box" of AI. Capaldi then goes on to outline the source of the anxieties that wider society has about the decision-making processes of AI: a lack of transparency about how decisions are made inside the black box and about the rules used to make them. For example, if a company is using AI to judge an applicant's creditworthiness, the rules the algorithm uses to reach its decision might give rise to some discomfort. If an applicant is denied a loan for a small business, one could ask, "Was this a fair decision?" Similarly, if the government uses AI software in criminal justice or in allocating public services like income support, there again is a need for accountability, transparency, and public confidence in the process.
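One way researchers try to ease the black-box worry is to expose which inputs pushed a particular decision one way or the other. The sketch below uses hypothetical features and made-up data rather than any real credit model; it simply shows the idea for a simple linear scoring model, where each feature's contribution to one applicant's decision can be listed and questioned.

    # Hypothetical illustration of decision transparency: for a linear credit
    # model, show which inputs pushed a single applicant's score towards denial.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income", "debt_ratio", "years_employed", "late_payments"]
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))                               # stand-in loan history
    y = (X @ np.array([1.0, -1.5, 0.5, -2.0]) > 0).astype(int)   # stand-in outcomes

    model = LogisticRegression().fit(X, y)

    applicant = np.array([-0.2, 1.1, 0.3, 1.8])      # one hypothetical applicant
    contributions = model.coef_[0] * applicant        # per-feature push on the score
    for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
        print(f"{name:15s} {c:+.2f}")
    print("decision:", "approved" if model.predict(applicant.reshape(1, -1))[0] else "denied")

Real credit-scoring systems are far more complex and often not linear, which is precisely why rights groups want transparency requirements written into law rather than left to goodwill.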

There is some good news on this front. There is a proposal going around that all machine-learning research papers should include a section on societal harms, as well as the provenance of their data sets. Other areas where regulatory oversight may be needed include the use of AI by politicians to influence voters and by marketing companies to create promotions that persuade people to buy their products. "Government is already undermined when politicians resort to compelling but dishonest arguments. It could be worse still if victory at the polls is influenced by who has the best algorithm," according to an editorial in April in the prestigious journal Nature. "With artificial intelligence starting to take part in debates with humans, more oversight is needed to avoid manipulation and harm." Another idea is that, in addition to meeting transparency standards, AI algorithms could be required to undergo trials, akin to those required for new drugs, before they can be approved for public use.

Dr Abdullah Shibli is an economist, currently serving as a Senior Research Fellow at the International Sustainable Development Institute (ISDI), a think-tank based in Boston, USA.
