A Citizen's Guide To Artificial Intelligence: A Nice Focus On The Societal Impact Of AI – Forbes

Posted: April 13, 2021 at 6:28 am

Artificial Intelligence

A Citizen's Guide to Artificial Intelligence, by a cast of thousands (John Zerilli, John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat, and Merel Noorman), is a nice high-level view of some of the issues surrounding the adoption of artificial intelligence (AI). The author bios describe them all as lawyers and philosophers except for Noorman, and with that crowd it's no surprise the book is much better at discussing the higher-level impacts than AI itself. Luckily, there's a whole lot more of the former than there is of the latter. The real issue is they're better at explaining things than at coming to logical conclusions. We'll get to that, but it's still a useful read.

The issue with their understanding of AI shows up early, when they first give a nice explanation of false positives and false negatives, but then write, "It's hard to measure the performance of unsupervised learning systems because they don't have a specific task." As this column has repeatedly mentioned, a key use of unsupervised learning is the task of detecting anomalous behavior, especially when anomalies are sparse. The difference between supervised and unsupervised learning is in knowing what you're looking for:

Supervised learning: Hey, here's attack XYZ!

Unsupervised learning: Hey, here's this weird thing that might be an attack!
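To make the distinction concrete, here is a minimal sketch of the unsupervised case, not drawn from the book: flagging values that sit far from the bulk of the data, without any labeled examples of what an "attack" looks like. The function name, data, and threshold are all illustrative assumptions.

```python
# Illustrative unsupervised anomaly detection: no labels, no definition
# of an attack, just "this point is far from everything else."
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

# Hypothetical traffic measurements with one sparse anomaly.
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 500]
print(flag_anomalies(traffic))  # the spike is flagged without ever being labeled
```

A supervised system would instead be trained on examples already labeled "attack XYZ" and could only recognize what it had been shown; the unsupervised detector surfaces the weird point precisely because no such labels exist.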

So skim chapter one to get to the good stuff. Chapter two is about transparency, and Figure 2.1 is a nice little graphic about the types of transparency they are describing. What I really like is that accessibility is in the top tier. It doesn't matter if the designers and owners of a system are claiming to be responsible and are also inspecting the results to check accuracy; if the information isn't accessible to all parties involved in and impacted by the AI system, there's a problem.

The one issue I have with the transparency chapter is in the section on human explanatory standards. They seem to be claiming that since we're hard to understand, why should we expect better from AI systems? They state, "A crucial premise of this chapter has been that standards of transparency should be applied consistently, regardless of whether we're dealing with humans or machines." Yes, a silly premise. We didn't create ourselves. We're building AI systems for the same reasons we've built other things: to do things more easily or more accurately than we can do them ourselves. Since we're building the system, we should expect to be able to require more transparency to be built into it.

The next three chapters are on bias, responsibility & liability, and control. They are good overviews of those issues. The control chapter is intriguing because it's not just about us controlling the systems; it also covers issues about giving up control to systems.

Privacy is a critical issue, and chapter six is nice coverage of that. The most interesting section is on inferred data. We talk about inference engines making inferences on the data; the extension of that to privacy is to say there might be ethical limits to what engines should be allowed to infer. There's the old case of a system knowing a young woman was pregnant and sending pregnancy sales pitches to her home before she had told her parents, but there are far worse situations. Consider societies that are intolerant of certain sexual orientations, where orientation can be inferred from other data. A government could use that to persecute people. There's a wide spectrum in between those examples, and the chapter does a nice job of getting people to think about the issue.

The next chapter covers autonomy and makes some very good points. One is that humans have always challenged each other's autonomy, but that AI, combined with a lack of laws and regulations, makes it far easier for governments and a few companies to remove our autonomy in much more opaque ways than have previously been available.

Algorithms in government and employment are given a good introduction in the next chapters, but with a lot of the same information seen elsewhere. The most interesting part of the back portion of the book comes in chapter ten, about oversight and regulation. There is a suggestion that, given the complexity of AI, there is logic to creating a new oversight agency for the national government: as they point out, an FDA for AI. Think of it in business terms: it's a center of excellence in AI, able to formulate national policy for business and citizens, while also serving to help other agencies adapt the general policies to their specific oversight areas. That makes excellent sense.

No book is perfect, but I'm somewhat surprised that a book with so many authors attached flows as well as it does. Then I remember they are all academics, used to research papers with multiple authors. Of course, with that many academics, the risk is always that a book will sound like a research paper. Fortunately, they seem to have escaped that problem. A Citizen's Guide is a good read to help people understand key issues around the major impact AI will have on society. More people need to realize that quickly and get governments to focus on protecting people.
