Artificial Intelligence might not destroy us after all – New York Post

Elon Musk famously equated artificial intelligence with "summoning the demon" and has sounded the alarm that AI is advancing faster than anyone realizes, posing an existential threat to humanity. Stephen Hawking has warned that AI could take off and leave the human race, limited by evolution's slow pace, in the dust. Bill Gates counts himself in the camp concerned about superintelligence. And although Mark Zuckerberg is dismissive of AI's potential threat, Facebook recently shut down an AI engine after reportedly discovering that it had created a new language humans can't understand.

Concerns about AI are entirely logical if all that exists is physical matter. If so, it would be inevitable that AI, designed by our intelligence but built on a better platform than biochemistry, would exceed human capabilities that arose by chance.

In fact, in a purely physical world, fully realized AI should be recognized as the appropriate outcome of natural selection; we humans should benefit from it while we can. After all, sooner or later, humanity will cease to exist, whether from the sun burning out or something more mundane, including AI-driven extinction. Until then, wouldn't it be better to maximize human flourishing with the help of AI rather than forgo its benefits in hopes of extending humanity's end date?

As plausible as all this might seem, what we know about the human mind strongly suggests that full AI will not happen. Physical matter alone is not capable of producing whole, subjective experiences, such as watching a sunset while listening to seagulls, and the mechanisms proposed to address the known shortfall of matter versus mind, such as emergent properties, are inadequate and falsifiable. Therefore, it is highly probable that we have immaterial minds.

Granted, forms of AI are already achieving impressive results. They use brute force, huge and fast memory, rules-based automation, and layers of pattern matching to perform their extraordinary feats. But this processing is not awareness, perception, feeling, or cognition. The processing doesn't go beyond its intended activities, even if the outcomes are unpredictable. Technology based on this level of AI will often be quite remarkable and definitely must be managed well to avoid dangerous repercussions. However, in and of itself, this AI cannot lead to a true replication of the human mind.
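To make the point concrete, here is a minimal sketch of what "layers of pattern matching" amounts to in practice. The weights, sizes, and function names are illustrative assumptions, not taken from any real system; the arithmetic below matches patterns, but nothing in it perceives or understands anything.

```python
import numpy as np

# A toy two-layer "pattern matcher": weighted sums followed by a squashing
# function. Real systems stack many such layers and tune the weights on huge
# datasets, but the character of the computation is the same: arithmetic.

def relu(x):
    # Keep positive signals, discard negative ones.
    return np.maximum(0, x)

rng = np.random.default_rng(0)

# Illustrative weights; in practice these are adjusted automatically to fit data.
W1 = rng.standard_normal((4, 3))   # first layer: 3 inputs -> 4 detected features
W2 = rng.standard_normal((2, 4))   # second layer: 4 features -> 2 output scores

def classify(inputs):
    """Map three input numbers to two output scores via layered pattern matching."""
    hidden = relu(W1 @ inputs)     # detect simple patterns in the raw input
    scores = W2 @ hidden           # combine them into higher-level scores
    return scores

print(classify(np.array([0.2, -1.0, 0.5])))  # numbers in, numbers out
```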

Full AI, that is, artificial intelligence capable of matching and perhaps exceeding the human mind, cannot be achieved unless we discover, via material means, the basis for the existence of immaterial minds, and then learn how to confer that on machines. In philosophy, the underlying issue is known as the qualia problem. Our awareness of external objects and colors; our self-consciousness; our conceptual understanding of time; our experiences of transcendence, whether simple awe before beauty or mathematical truth; and our mystical states all clearly point to something qualitatively different from the material world. Anyone with a decent understanding of physics, computer science and the human mind ought to recognize this, especially those most concerned about AI's possibilities.

That those who fear AI don't see its limitations indicates that even the best minds fall victim to their biases. We should be cautious about believing that exceptional achievements in some areas translate to exceptional understanding in others. For too many, including some in the media, the mantra "Question everything" applies only within certain boundaries. They never question methodological naturalism, the belief that nothing exists outside the material world, and that blinds them to other possibilities. Even among seemingly more open-minded thinkers, some suffer from a lack of imagination or will. For example, Peter Thiel believes that the human mind and computers are deeply different, yet he doesn't acknowledge that this implies the mind comprises more than physical matter. Thomas Nagel believes that consciousness could not have arisen via materialistic evolution, yet he explicitly limits the implications of that view because he doesn't want God to exist.

Realizing that we have immaterial minds, i.e., genuine souls, is far more important than merely speculating on AI's future. Without immaterial minds, there is no sustainable basis for believing in human exceptionalism. When human life is viewed only through a materialistic lens, it gets valued based on utility. No wonder the young "nones," young Americans who don't identify with a religion, think their lives are meaningless and some begin to despair. It is time to understand that evolution is not a strictly material process but one in which the immaterial mind plays a major role in the adaptation and selection of humans, and probably of all sentient creatures.

Deep down, we all know we're more than biological robots. That's why almost everyone rebels against materialism's implications. We don't act as though we believe everything is ultimately meaningless.

We're spiritual creatures, here by intent, living in a world where the supernatural is the norm; each and every moment of our lives is our souls in action. Immaterial ideas shape the material world and give it true meaning, not the other way around.

In the end, the greatest threat that humans face is a failure to recognize what we really are.

If we're lucky, what people learn in the pursuit of full AI will lead us to the rediscovery of the human soul, where it comes from, and the important understanding that goes along with that.
