The dreams, the nightmares and the interesting middle ground: why AI should be our collective concern

In a new series, in close collaboration with the Dutch AI Coalition, Innovation Origins wants to show the impact of Artificial Intelligence (AI) on our society. How do we, as human beings, keep track of the consequences of this social revolution? Will we ever be able to? A search for the fears, the opportunities, and the dilemmas. Today, part 1: how can AI become our collective concern?

Artificial Intelligence is just like soccer: everyone has an opinion on it. As a discipline, it is reserved for a select group of technical experts, but at the same time it can unleash the sharpest opinions among large groups of laymen. Two principal streams stand out: the utopia in which AI solves the world's major problems (poverty, disease, the energy crisis, etc.) and the fear of a world in which machines will completely control humans. "Neither of these scenarios will come true," says AI researcher Rudy van Belkom, who recently completed an 18-month study into the future of AI.

For this series, we also asked random Dutch people about their expectations of artificial intelligence. Nightmares and dreams appear to more or less balance each other out. The moment when you no longer know whether you are dealing with a human or an AI bot is widely feared. We were also warned several times that this trend could only lead to a modern dictatorship, because a computer will one day be able to make better and more rational decisions than a human being. But there is also hope, for example, for a future in which AI enables us to live more sustainably.

Van Belkom recognizes the comments, both the optimistic and the pessimistic ones. "The negative people talk about losing control of our lives; the positive ones hope that AI will cure all diseases and bring about world peace. But that's not what AI can do." Nevertheless, Van Belkom sees value in all those images: "After all, in order to determine your goals, you must first agree on what that ideal society would look like. Above all, AI should not be a goal in itself, which it now often is."

Based on the contributions of a series of expert panels, Van Belkom drew up five scenarios for the future, ranging from the total disappearance of AI at one extreme to a situation in which AI has taken over everything at the other. Neither extreme will come true, which makes it all the more interesting to look at the intermediate scenarios. The interesting middle ground is where the relevant ethical questions can be asked, Van Belkom argues. "But we shouldn't primarily approach them as scholars, which has been done very often already." Instead, he devised an Ethical Design Game, in which participants are asked to think about tricky themes such as the autonomous car, the doorbell with video recognition, or manipulated DNA: all areas in which AI can play a significant role. Van Belkom expects that the game will help raise awareness of the different perspectives and can contribute to a more constructive discussion about AI and ethics. "This will enable us to narrow the gap between technicians and ethicists."

The only way to fully control AI is not to use it at all. But that's not a realistic option.

Many people who try to look into the future see a world in which machines will eventually be more intelligent than humans. Not everyone, by the way, sees that as an objection; on the contrary. Van Belkom, too, has asked himself how far this non-human intelligence will reach. "For now, I don't see AI taking over my complete job. But that doesn't mean that certain parts of it can't be left to machines. Searching files, clustering data, carrying out analyses: that's all perfectly possible. Interpreting is another story; that's typically a human skill. Because so much of the understanding in our world is implicit, it's almost impossible to turn that over to an AI." But whoever might want to conclude that this will save us from an artificial super-intelligent system has to think twice. "Even without a human level of intelligence, the integration of hundreds of small systems automatically leads to something much bigger. We have to take that into account: instead of 'General AI Gone Bad', you'll see 'Narrow AI Everywhere'."

It is high time for a more nuanced vision of AI, Van Belkom says. Only by freeing ourselves from the extremes can we give the conversation real value. "The only way to fully control AI is not to use it at all. That's not a realistic option. That means we will have to accept that there will be disadvantages to it that we'll never fully control. But hey, that's not only true for AI. Why are we so strict in our judgment of AI when it comes to this aspect of control, when we do accept it in all kinds of other areas? As if we can control the economy. Or our health. Some things just happen. It's no different with AI."

AI is not only about technology. Philosophers, neurologists, biologists, economists: everyone needs to participate in the conversation.

That doesn't mean we're entirely on the sidelines as human beings. We can concentrate on the question of how to teach a system to do what we want it to do. Van Belkom: "How do you get a system to behave according to our principles, whatever the circumstances? Solving that puzzle means that we have to involve all disciplines; otherwise, we will indeed no longer understand what is happening under the bonnet of the system. AI is not only about technology. Philosophers, neurologists, biologists, economists: everyone has to participate in the conversation."

The challenge is perhaps not so much to get that conversation going but to give it direction. And not just to involve the specialists mentioned by Van Belkom, but to turn it into a broad dialogue. So that in the future, opinions will also reach beyond the extremes we hear on the street.

Innovation Origins will, with the support of the Dutch AI Coalition, continue to look for this middle ground in the coming period. "I hope that AI can improve our lives by helping with health care, education, quality of life in the city, and a better government that makes policy more effective and efficient," one of the AI optimists assured us. "With less waste, and less fraud too," he added. As happy as we would be if this were the outcome, his statement also shows that a little more realism is welcome in the AI debate.

More about the research Rudy van Belkom conducted (including the Ethical Design Game) can be found at detoekomstvanai.nl.

This article is part of a series in collaboration with the Dutch AI Coalition. The series is meant to stimulate discussion about AI. Responses to this article or ideas for follow-up topics are welcome in the space below. You can also contact the AI Coalition directly: [emailprotected]

