A futurist who isn’t worried about AI – POLITICO

Posted: December 22, 2023 at 7:55 pm

A silhouette of a person in front of computer code. | Clement Mahoudeau/AFP via Getty Images

Predictions are hard.

But some people have a better track record than others. Near this newsletter's beginning in March of last year, we featured the work of Peter Leyden, founder of the strategic foresight firm Reinvent Futures and a former managing editor at Wired, where he wrote an unusually prescient list of predictions for the future back in 1997.

We've interviewed Leyden a few times since then, so the end of this year seemed like a good opportunity to bring him back and ask him what surprised him in 2023, what he missed and, yes, what he thinks is on deck for 2024.

We talked about the runaway success of generative AI (and the muddled policy conversation surrounding it), why he's more of a techno-optimist than ever despite predictions of AI doom, and what happens when the public gradually wakes up to the technological revolution it's living through.

The following has been edited for length and clarity:

What happened in 2023 that you didn't expect, and why do you think you missed it?

I did not expect artificial intelligence to arrive in such an explosive manner and to move with such speed into the center of our national and international conversations. Our current approach to AI via neural networks and LLMs had been working its way through the tech world for the last decade and picking up momentum in the last several years behind the scenes, but the public launch of GPT-4 this March grabbed the attention of everyone faster than I've ever seen with any technology.

One reason that surprised me is that I had been through the last tech revolution that's remotely comparable, the arrival of the internet in the 1990s, when I was working with the founders of Wired. We spent the bulk of our time trying to convince anyone who would listen to pay attention to this digital revolution. However, the internet was an infrastructure play that took 25 years to fully build out, and only really started fundamentally changing things a decade or more in.

Generative AI essentially is a software play, and could happen almost immediately. We spent the last 25 years boosting the power of computer chips, building out a wireless high-bandwidth internet, digitizing all data and storing it in the cloud. That took time. AI took that foundation as a starting point and could go from zero to 60 pretty much overnight.

What has surprised you most about the generative AI boom?

How well generative AI worked right from the start. I was not alone in my surprise here in Silicon Valley. Many AI experts I had gotten to know over the last 25 years here were similarly blown away by what GPT-4 and the like could do. Even those who had always been skeptics quickly changed their tunes. Several members of the old guard literally said they never believed they would live to see this breakthrough.

I also was surprised to see how many technologists with real expertise freaked out about the generative AI breakthrough and then talked up the possibility of AI moving toward a super-intelligence that could threaten human extinction. The vast majority of AI experts I know think that existential threat is ridiculous, or so far in the future that we don't even have to begin worrying about it now.

I would caution those in government to keep in mind the vested interests of those who make these far-fetched claims. Many come from the large tech giants that could benefit from early regulation that would overburden AI startups; others warn of dangers partly to gain attention from a media that always gravitates to potential disasters.

Why do you think that AI risk has so gripped the public imagination?

There's a rule of thumb in the strategic foresight business, where I operate, that anyone can spin a negative scenario of how things screw up in the future. It's much harder (and more valuable) to build up a scenario of how things could come together in positive ways. Add to that the default tendency of the media, from Hollywood to newspapers to online posts, to always gravitate toward sensational disasters, and you have your answer for why the public is currently preoccupied with the risks.

This is unfortunate because now governments feel compelled to do something about the risks before we even understand all the positive possibilities that this supertool of AI could unleash. Regulating too early is worse than regulating too late, as Europe is soon going to find out when its AI sector implodes.

What do you find hardest to predict for 2024, and why?

The explosion of positive uses for generative AI that will proliferate throughout the year as millions of entrepreneurs apply their creativity in myriad directions. You gotta remember that AI is a general purpose technology that can and ultimately will be applied to almost everything over time, in every industry, every field. What would not benefit from applying machines that can now think?

The closest thing we have in recent memory is the arrival of the internet, and AI is way bigger than that. The 1990s saw an explosion of startups as entrepreneurs from all over the world poured into the San Francisco Bay Area with crazy ideas about what to do with that new capability of connectivity. I was there back in the day, and I can tell you today San Francisco is every bit as energized with the even larger capabilities of AI.

Are you more or less techno-optimistic than you were at the beginning of 2023, and why?

I'm way more techno-optimistic. Step back and look at the big picture: generative AI opened up artificial intelligence to everyone, and will be understood over time as marking the beginning of the AI age. This is a technological development of world-historic importance. AI gives humans a step change in our capabilities on a par with the couple dozen general purpose technologies in our history like fire, the printing press and electricity. It's a very, very big deal.

The amazing thing about the 2020s is that AI is not the only world-historic technology that is giving humans a step change in our capabilities. We also now have entered the age of bioengineering, given our increasing mastery of genetics and our ability to design living things. Plus we have entered the age of clean energy, with a throughline to how we could have cheap, abundant clean energy from a variety of sources, including possibly the holy grail of fusion energy.

What's the prediction you're most confident making about 2024?

That many more people will understand that we're living through an extraordinary moment in history.

Every general purpose technology can be used for good and for bad. Electricity can light our homes, but it can electrocute those who mishandle it. We didn't shut down the development of electricity because of the risks. We figured out how to reduce the risks to manageable levels in order to take advantage of the many benefits.

The same is going to happen with AI. With time we will come to understand something like an 80/20 rule: maybe 80 percent of what AI brings is good, and maybe 20 percent will potentially be bad. But we will figure out the way forward. Humans always have, and we always will.

The highest court in the United Kingdom has ruled AI systems cannot hold patents.

POLITICO's Joseph Bambridge reported for Pros on the ruling, which holds that AI can't be considered an inventor under current U.K. patent law, but noted legislators could change that.

"The court was not concerned with the broader question [of] whether technical advances generated by machines acting autonomously and powered by AI should be patentable," wrote judge David Kitchin on behalf of the justices, emphasizing that he was strictly ruling on whether this was possible under the current, circa-1977 version of British patent law.

The outcome mirrors decisions by judges in the United States and Europe in similar cases, including one in the U.S. brought by the same computer scientist whose case the U.K. court ruled on here, Stephen Thaler.

One climate activist is arguing the green revolution promised at this year's United Nations COP28 climate summit will simply reproduce existing inequalities.

In an op-ed for POLITICO Europe, Max Lawson, co-chair of the People's Vaccine Alliance and head of inequality policy at Oxfam International, writes that his experience with inequality in the response to the COVID-19 pandemic gives him a grim view of how COP28's promised climate revolution might play out.

Lawson singles out intellectual property monopolies as the locus of this problem: "As health campaigners know all too well from the COVID-19 pandemic and many health crises before it, corporations that patent life-saving technologies rarely respond to emergencies with altruism," he writes, arguing that new green technologies will be restricted to rich countries over patent concerns. "Rather, their governments tend to close ranks, protecting monopoly profits over humanitarian considerations."

He cites U.N. Secretary-General António Guterres' recent call to liberalize intellectual property laws, and concludes that unless the climate movement takes on this cause, we may see a "green technology apartheid."

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); Nate Robson ([emailprotected]) and Daniella Cheslow ([emailprotected]).

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.
