Recently, Walter Bradley Center director Robert J. Marks interviewed Oxford mathematician John Lennox on his latest book 2084: Artificial Intelligence and the Future of Humanity (2020). Marks focused on why Lennox chose that theme and how far we have caught up with George Orwell's 1984. Here are some excerpts from the combined interviews in John Lennox on Artificial Intelligence and Humanity:
A partial transcript follows, along with highlights:
Robert J. Marks (starting at roughly 1:40 min): Many of Orwell's predictions about communism were borne out. So what will be the effects of AI a century later, in the year 2084? Replacing George Orwell is Dr. John Lennox, who has written 2084: Artificial Intelligence and the Future of Humanity. How will AI, not communism, affect the future?
Dr. Lennox is able to look at the AI phenomenon from a number of different perspectives. He is an emeritus professor of mathematics at Oxford University and a pastoral advisor at Green Templeton College, Oxford. The first obvious question to ask is: why did you write this book?
John Lennox (pictured, starting at roughly 3:24 min): Well, I have been interested in futuristic scenarios for a long time. I'm not a sci-fi addict, but I was deeply impressed by C.S. Lewis's sci-fi trilogy. And he raises the question in the third of those books, That Hideous Strength. He imagines scientists trying to increase their power by preserving a human brain. And as I read that book and saw the issues it raised, it put into my mind the idea that it might be important to think through this stuff as it develops, as it has.
But the major reason for writing it was that I was asked to give a lecture on the topic in connection with the Book of Genesis. And I said, "Look, I think you've come to the wrong person." And they said, "No. We think you're the person to do this." Well, I decided in the end to do it because it initially had to do with artificial intelligence and the nature of humanity.
As I started the background reading of various people, I discovered that a lot had been said and a lot had been written, but there was a real need, in my view, to evaluate it. And so it has ended up with this book.
Robert J. Marks: Well, could you give us a quick overview, a thumbnail sketch, of what the book is about?
John Lennox: Well, the book really has several purposes. I want to demystify the good side of AI, so that people, particularly Christians, but not only Christians, are not afraid of it. And secondly, I want to take some of the hype out of the science fiction side, artificial general intelligence. Narrow artificial intelligence tends to do one thing superbly well that normally takes human intelligence to do. But the machinery, which consists of a computer with the capacity to dig into a large database and recognize patterns there, that's impressive. And there are wonderful examples, particularly in medicine, of it working very efficiently.
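Lennox's description of narrow AI, a computer digging into a database of examples and recognizing patterns for a single task, can be sketched in a few lines. This is only an illustrative toy: the data points, labels, and the nearest-neighbour rule below are invented for the sketch, not drawn from any system the book discusses.

```python
# A minimal sketch of "narrow" AI: a system that recognizes patterns in
# stored examples for exactly one task. Real medical systems train on
# millions of examples; these points and labels are invented.
from math import dist

# Toy "database": (feature vector, label) pairs for a single narrow task.
examples = [
    ((1.0, 1.2), "benign"),
    ((0.9, 1.0), "benign"),
    ((3.1, 3.0), "malignant"),
    ((2.9, 3.3), "malignant"),
]

def classify(point):
    """Label a new point by its nearest stored example (1-nearest-neighbour)."""
    _, label = min(examples, key=lambda ex: dist(ex[0], point))
    return label

print(classify((1.1, 1.1)))  # falls in the "benign" cluster
print(classify((3.0, 3.1)))  # falls in the "malignant" cluster
```

The point of the sketch is Lennox's own: the system does one thing well by comparing against stored patterns, with no understanding or consciousness involved.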
But the second kind of artificial intelligence, AGI, artificial general intelligence, is really the quest for a superintelligence of one of two kinds: either enhancing human beings as they exist and building a biological superintelligence, or else discovering ways of uploading or downloading the contents, say, of the human mind onto silicon, so that we remove the dependence on an organic substrate. And there, it seems to me, the likelihood of building a superintelligence that exceeds human capacity in every direction is very slim, because human intelligence is conscious. And we don't know what consciousness is. No scientist knows what consciousness is. And the most serious people recognize that that is a huge barrier. How can you build a conscious being when you don't even know what consciousness is?
Here are some further excerpts from the podcast:
2084 vs 1984: The difference AI could make to Big Brother: We buy a book and a few days later up pops a little message that says, "People that bought that book are also interested in this book," and your attention is drawn to buying the second book. Well, that can be very useful or it can be very irritating. What many people do not realize is that that system is actually harvesting a great deal of information about us: where we go, who we meet, what our buying preferences are. It's being sold on to third parties without our permission. This is what is, in a way, called "surveillance capitalism."
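The "people that bought that book also bought" mechanism described above can be sketched as a simple co-occurrence count over purchase histories. This is a hedged illustration, not how any particular retailer's system works: the baskets are invented, and production recommenders add weighting, ranking models, and (as the passage notes) extensive profile data.

```python
# Sketch of a co-occurrence recommender: count which items appear in the
# same purchase histories as a given item, then suggest the most frequent
# companions. The purchase data here is invented for illustration.
from collections import Counter

purchases = [
    {"1984", "2084", "Brave New World"},
    {"1984", "2084"},
    {"1984", "That Hideous Strength"},
]

def also_bought(item, top=2):
    """Items most often bought alongside `item`."""
    counts = Counter()
    for basket in purchases:
        if item in basket:
            counts.update(basket - {item})
    return [title for title, _ in counts.most_common(top)]

print(also_bought("1984"))  # "2084" co-occurs most often, so it ranks first
```

Even this toy version makes Lennox's point visible: to recommend anything, the system must first collect and retain a record of what each person has bought.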
Lennox: Whether the surveillance that AI technology enables is an advantage is something we need to think about seriously before we're engulfed by it.
In Dan Brown's AI hype novel Origin, the hero stumbles onto God. It's not clear that was supposed to happen, but stories do get away from their authors at times.
The question of God, for me, lies in understanding the difference between codes and patterns. Patterns occur everywhere in nature, the spiraling seeds of a sunflower, the hexagonal cells of a honeycomb and so on. Codes are special. Codes by definition must carry information. Codes must transmit data and convey meaning.
And he ends up by saying, "Codes are the deliberate inventions of intelligent consciousness. They don't appear organically; they must be created."
And one of the other female heroes in the book says, "You think DNA was created by an intelligence?"
And he just goes as far as saying, "I feel as if I'm seeing a living footprint, the shadow of some greater force that is just beyond our grasp."
John Lennox: Utterly fascinating. Someone who's trying to bring down religion by the use of AI is actually heightening evidence for the existence of God.
Can AI replace the need for belief in God? Lennox contends that science should increase our respect for what God has created and allowed us to do. The problem, as he sees it, is that atheism does not provide grounds for believing in rationality: I spent most of my life contending with people who think that science replaces God. And I see that as a very foolish argument, really. It's like saying that if you understand how a Ford motor car works, you don't need to believe in Henry Ford. It's a confusion between different kinds of explanation. And I often say to people, look, the God explanation no more competes with the scientific explanation than Henry Ford competes with the law of internal combustion to explain a motor car engine.
And in fact, you need both levels of explanation, the scientific one and the one in terms of the creative agency of God, to give you a complete explanation. And so it's been clear to me for many years that a lot of the heat could be taken out of this science-versus-God thing if people could only realize that explanation comes at different levels.
Do some passages in the Book of Revelation appear to talk about AI? Revelation is notoriously obscure, but a passage about a future total-control state gives pause for thought. Revelation 13:15–17: "15 The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed. 16 It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, 17 so that they could not buy or sell unless they had the mark, which is the name of the beast or the number of its name."
John Lennox: And what is intriguing and rather chilling actually in the light of our AI developments is that freedom to buy and sell is determined by the wearing of some kind of mark, an implanted chip.
Max Tegmark talks about a bracelet that people may have to wear that will determine whether or not they're regarded as socially acceptable. And we've already got that kind of social acceptability factor in the social credit system that's being rolled out in China today. So it's relatively easy to see how this kind of thing could come about.
Could techno-immortality ever be the real thing? Oxford mathematician John Lennox looks at Ray Kurzweil's techno-immortality from a Christian perspective: People are already, particularly in Sweden, getting chips put under their skin so that they can pay for things and all this kind of stuff. So in bits and pieces, people are becoming part biological and part mechanical, which we often describe as a cyborg. Something like this will happen, but whether it will reach Kurzweil's extent, I doubt. I'm always amused that they say this is going to happen within thirty to fifty years.
It's an AI immortality where we are told, for example, that we won't need tongues because we can tap right into our taste buds.
Excerpts from 2084 (2020)
John Lennox: How AI raises the stakes for all of us: The brilliant play Copenhagen by Michael Frayn explores the question of whether scientists should simply follow the mathematics and physics without regard to the consequences of what they are developing or whether they should have moral qualms about it. The context of the play is the research that led to nuclear fission. Exactly the same issues are raised by AI, except that AI is accessible by many more people than atomic physics and does not need very sophisticated and expensive facilities.
AI could cause more serious problems than nuclear energy. You cannot build a bomb in your bedroom, but you could hack your way around the world.
Transhumanism is not a new idea: John Lennox points out that, in the 20th century, both the Communists and the Nazis had attempted transhumanist projects. For example, In the former Soviet Union, attempts were made to use science to create a New Man. In 1924, Leon Trotsky wrote: Man will make it his purpose to master his own feelings, to raise his instincts to the heights of consciousness, to make them transparent, to extend the wires of his will into hidden recesses, and thereby to raise himself to a new plane, to create a higher social biologic type, or, if you please, a superman.
In his view, the likely outcome of all transhumanist attempts to re-engineer humanity will be the extinction of humanity.
Oxford mathematician: Atheism detracts from science. Atheism, he says, undermines the rationality needed to develop and understand an argument, especially a scientific one, by positing a meaningless universe. The problem, as he sees it, is that atheism does not provide grounds for believing in rationality: Thought is replaced by electrochemical neural events. Two such events cannot confront each other in rational discourse. They are neither right nor wrong. They simply happen . . . The world of rational discourse dissolves into the absurd chatter of firing synapses. Quite frankly that cannot be right and none of us believes it to be so.
Our exclusive interview with John Lennox
Here, Lennox answers questions from Mind Matters News about AI in 2084: In his new book, 2084, the Oxford mathematician doubts that AI, now or then, will out-think humans. Our real worry is how AI will be used.
Mind Matters News: Surveying the scene in China, isn't that the biggest problem? Not that the machines will outsmart us, but that they will be used by powerful forces to control us in more detail than was ever possible before?
John Lennox: Yes, this is the much greater danger, since it comes from (narrow) AI that has already been developed and is now in use, particularly in China. However, the point has been made that all the necessary equipment to produce a totalitarian surveillance state is available in the West. The only difference is that it is not (yet) under centralised state control.