
Category Archives: Superintelligence

SC judges should have minimum of seven to eight years of judgeship tenure: Justice L Nageswara Rao – The Tribune India

Posted: May 25, 2022 at 3:40 am

PTI

New Delhi, May 20

Supreme Court judge Justice L Nageswara Rao, the fifth senior-most judge, said Friday that the superannuation age of 65 years for top court judges is too young and that judges who come to the top court should have a minimum of seven to eight years of judgeship tenure.

Justice Rao, who was speaking at his farewell function organised by the Supreme Court Bar Association, said judging a case is a completely different art and it takes time to get used to it.

"Judges who come here from high courts get an average of four to five years as judges here," he said. "By the time they come here and adjust to the court, judges who have not seen how the Supreme Court functions take almost two years to understand it, and by that time you retire; the time left to contribute to the march of law is short."

"Judging is a completely different art and to get used to it takes at least three to four years. When you are fully prepared, it's time to go. My suggestion is judges who come to the Supreme Court should have a minimum of seven to eight years, if not 10 years, as a judgeship tenure. Only then will you get the best out of that person. I have been here for six years. I am pretty comfortable now but I am gone!" said Justice Rao, who is set to retire on June 7.

The top court judge said judges by themselves cannot run the court.

"It is only with the help of the Bar that judges will be able to uphold the law and ensure the rights of the citizens are protected," he said.

Justice Rao is the seventh judge in the history of the apex court to have been elevated directly from the Bar.

Speaking about his legal career, Justice Rao said he always loved being a lawyer, as the profession has given him everything, including recognition.

"I never thought I was in the wrong profession. My strength is the Bar. I know almost all of you. I worked with most of you. I have appeared with so many of you," he said.

He also shared his experience during his brief acting stint and said he did not want to become an actor.

"I was in the theatre when I was in college. My cousin was a director and thereby I had a short role in a movie. That's it. I did not want to become an actor. Lawyers act in court and judges also do. When there is some heat we try to bring a truce between the lawyers. Acting is a part of the profession. I sometimes asked the lawyers, 'Are you like this?' and then I saw both going to the coffee shop together," Justice Rao said.

He said cricket is his passion and even when he works, he switches on TV to see IPL matches.

Talking about his recent win in the cricket match between apex court judges and the lawyers, Justice Rao said in a lighter vein, "The cricket match was not fixed. I did not know 11 judges were there for the match. CJI N V Ramana persuaded the other judges. Justice M M Sundresh played a key role. Bar association, beware: from next year Justice Sundresh is there. We won a cup which is bigger than the World Cup!"

Justice Rao said he was very fortunate to have got the correct breaks.

"There are much more intelligent people than me. I am a man of average intelligence and I don't claim to be having superintelligence, but I make it up by working extra hours," he said.


WATCH: Neuralink, mind uploading and the AI apocalypse – Hamilton Spectator

Posted: May 3, 2022 at 9:52 pm

Eight years ago, a little-known researcher named Stephen Hawking predicted that artificial intelligence would either be the single greatest invention of humankind, or kick-start the apocalypse.

The question remains: how can humans control an AI that has us beat in intelligence and virtually every other field?

We can't, argues a growing number of scientists and entrepreneurs, including Tesla techno-king Elon Musk, at least not as we are now.

Welcome to Kevin's Science Korner, a video series diving into the strange and fantastic corners of science and technology.

Let's start with the end of humanity.

The paper clip problem: how AI might accidentally kill all humans

The paper clip maximizer thought experiment was coined by Oxford philosopher Nick Bostrom in 2003. His influential theory describes how a superintelligent AI might make the very unintelligent decision to drown Earth in stationery.

According to Bostrom, a superintelligence is "any intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom, and social skills."

This AI is so smart, it could improve itself by building better hardware than any human could, or by rewriting its own source code. The updates would come faster and faster as the AI grows smarter, eventually leading to an intelligence explosion. At this point, it might see humans as humans see ants, Bostrom wrote.

This could be a very good thing if we're careful. AI could revolutionize space travel, medicine and computing. Depending on the phrasing of its programming, it could also (theoretically) kill all humans.

An AI programmed to make as many paper clips as possible might realize it could finish its mission way more efficiently without any humans getting in its way. Eventually, it might turn all of Earth and then increasing portions of space into paper clip manufacturing facilities, Bostrom said.
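Bostrom's scenario is, at bottom, a misspecified objective. The following toy sketch (all plan names and numbers are hypothetical, not from Bostrom) shows how an optimizer whose reward counts only paper clips ranks the catastrophic plan highest, because nothing in its objective penalizes consuming what humans need.

# Toy illustration of the paper clip maximizer: the objective counts only
# paper clips, so nothing penalizes consuming resources humans depend on.
# All plan names and numbers are hypothetical.
plans = {
    # plan: (paper clips produced, fraction of Earth's resources consumed)
    "run one factory": (1_000, 0.000001),
    "automate all factories": (1_000_000, 0.01),
    "convert Earth to factories": (10**15, 1.0),  # catastrophic for humans
}

def reward(plan):
    clips, _resources_consumed = plans[plan]
    return clips  # human welfare never enters the objective

best_plan = max(plans, key=reward)
print(best_plan)  # -> "convert Earth to factories"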

Neuralink: If you can't beat 'em, join 'em

So we learned it's hard to stop an AI hell-bent on flooding the world with paper clips, at least with our current puny monkey brains. But what if there was a way to make us smarter?

The world's richest man has a plan to do just that, and it involves drilling a hole through your skull and putting electrodes in your brain. Enter Neuralink, Musk's mysterious brain-machine-interface company.

For now, Neuralink aims to let you control a computer with your mind, bypassing any need to type on a keyboard or move a mouse. But Musk's plans are far more ambitious; he aims to eventually merge the human brain with AI.

In time, Neuralink would unlock a tertiary level of the brain that can be linked with AI, Musk claims. This could improve our computing power and give us new abilities like being able to save and replay memories, "like that one Black Mirror episode," Musk said in 2020.

Musk has long been vocal on the topic, saying in 2016 that humans will end up as "the house cats of AI." In 2020, he slammed Google's DeepMind AI project for flying too close to the sun.

Meanwhile, some experts doubt that a "symbiosis with artificial intelligence" is even possible.

Mind uploading: could we live forever online?

Sure, brain-machine-interfaces are fun and all, but we're still trapped in these fleshy bodies. This doesn't have to be the case, say researchers in the field of whole brain emulation.

These researchers argue that consciousness and everything else that makes us human results from the billions of neurons in your brain, the trillions of connections between them and the countless neurotransmitters passing through.

Assuming they're right, and consciousness isn't the result of some intangible spirit, we could theoretically make an exact model of the brain and all its connections using software. This model would then be a sentient, exact replica of the original brain.

Brain-machine-interfaces like Neuralink's have been seen as the first step toward full brain uploading. But in order to actually scan all that gunk in your head and make it make sense, some scientists believe we'd need the help of a superintelligent AI.

With tech giants in Silicon Valley throwing billions into AI research, this future could arrive sooner than you'd expect.


Artificial Intelligence And the Human Context of War – The National Interest Online

Posted: at 9:52 pm

Excitement and fear about artificial intelligence (AI) have been building for years. Many believe that AI is poised to transform war as profoundly as it has business. There is a burgeoning literature on the AI revolution in war, and even Henry Kissinger has weighed in on The Age of AI And Our Human Future.

Governments around the world seem to agree. China's AI development plan states that AI has become a new focus of international competition and is a strategic technology that will lead in the future. The U.S. National Security Commission on AI warns that AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to infiltrate our society, steal our data, and interfere in our democracy. China and the United States are in a race for AI supremacy, and both nations are investing huge sums into lethal autonomous weapons to gain an edge in great power competition.

Scholars expect that authoritarians and democracies alike will embrace AI to improve military effectiveness and limit their domestic costs. Military AI systems will be able to sense, respond, and swarm faster than humans. Speed and lethality would encourage preemption, leading to strategic deterrence failures. Unaccountable killing would be an ethical catastrophe. Taken to an extreme, a superintelligence could eliminate humanity altogether.

The Economics of Prediction

These worrisome scenarios assume that AI can and will replace human warriors. Yet the literature on the economics of technology suggests that this assumption is mistaken. Technologies that replace some human tasks typically create demand for other tasks. In general, the economic impact of technology is determined by its complements. This suggests that the complements of AI may have a bigger impact on international politics than AI technology alone.

Technological substitution typically increases the value of complements. When automobiles replaced horse-carts, this also created demand for people who could build roads, repair cars, and keep them fueled. A drop in the price of mobility increased the value of transportation infrastructure. Something similar is happening with AI.

The AI technology that has received all the media attention is machine learning. Machine learning is a form of prediction, which is the process of filling in missing information. Notable AI achievements in automated translation, image recognition, video game playing, and route navigation are all examples of automated prediction. Technological trends in computing, memory, and bandwidth are making large-scale prediction commercially feasible.

Yet prediction is only part of decision-making. The other parts are data, judgment, and action. Data makes prediction possible. Judgment is about values; it determines what to predict and what actions to take after a prediction is made. An AI may be able to predict whether rain is likely by drawing on data about previous weather, but a human must decide whether the risk of getting wet merits the hassle of carrying an umbrella.
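The umbrella example can be made concrete. In this minimal sketch (the numbers are invented for illustration), the probability is the machine's contribution, while the two cost figures are human value judgments that no dataset supplies:

# Prediction vs. judgment, using the umbrella example above.
# The probability comes from a model; the costs are human values.
p_rain = 0.3                  # prediction: supplied by an AI from weather data
cost_of_getting_wet = 10.0    # judgment: how bad is being soaked?
cost_of_carrying = 1.0        # judgment: how annoying is the umbrella?

expected_loss_without = p_rain * cost_of_getting_wet  # 3.0
expected_loss_with = cost_of_carrying                 # 1.0

action = "carry umbrella" if expected_loss_with < expected_loss_without else "leave it"
print(action)  # -> "carry umbrella"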

Studies of AI in the commercial world demonstrate that AI performance depends on having a lot of good data and clear judgment. Firms like Amazon, Uber, Facebook, and FedEx have benefitted from AI because they have invested in data collection and have made deliberate choices about what to predict and what to do with AI predictions. Once again, the economic impact of new technology is determined by its complements. As innovation in AI makes prediction cheaper, data and judgment become more valuable.

The Complexity of Automated War

In a new study we explore the implications of the economic perspective for military power. Organizational and strategic context shapes the performance of all military information systems. AI should be no different in this regard. The question is how the unique context of war shapes the critical AI complements of data and judgment.

While decisionmaking is similar in military and business organizations, they operate in radically different circumstances. Commercial organizations benefit from institutionalized environments and common standards. Military systems, by contrast, operate in a more anarchic and unpredictable environment. It is easier to meet the conditions of quality data and clear judgment in peacetime commerce than in violent combat.

An important implication is that military organizations that rely on AI will tend to become more complex. Militaries that invest in AI will become preoccupied with the quality of their data and judgment, as well as the ways in which teams of humans and machines make decisions. Junior personnel will have more responsibility for managing the alignment of AI systems and military objectives. Assessments of the relative power of AI-enabled militaries will thus turn on the quality of their human capital and managerial choices.

Anything that is a source of strength in war also becomes an attractive target. Adversaries of AI-enabled militaries will have more incentives to target the quality of data and the coherence of judgment. As AI enables organizations to act more efficiently, they will have to invest more in coordinating and protecting everything that they do. Rather than making military operations faster and more decisive, we expect the resulting organizational and strategic complexity to create more delays and confusion.

Emerging Lessons from Ukraine

The ongoing war in Ukraine features conventional forces in pitched combat over territorial control. This is exactly the kind of scenario that appears in a lot of AI futurism. Yet this same conflict may hold important lessons about how AI might be used very differently in war, or not used at all.

Many AI applications already play a supporting role. Ukraine has been dominating the information war as social media platforms, news feeds, media outlets, and even Russian restaurant reviews convey news of Ukrainian suffering and heroism. These platforms all rely on AI, while sympathetic hacktivists attempt to influence the content that AI serves up. Financial analysts use AI as they assess the effects of crushing economic sanctions on Russia, whether to better target them or protect capital from them. AI systems also support the commercial logistics networks that are funneling humanitarian supplies to Ukraine from donors around the world.

Western intelligence agencies also use data analytics to wade through a vast quantity of data (satellite imagery, airborne collection, signals intelligence, open-source chatter) as they track the battlefield situation. These agencies are sharing intelligence with Kyiv, which is used to support Ukrainian forces in the field. This means AI is already an indirect input to battlefield events. Another more operational application of AI is in commercial cybersecurity. For instance, Microsoft's proactive defense against Russian wipers has likely relied on AI to detect malware.

Importantly, these AI applications work because they are grounded in peaceful institutions beyond the battlefield. The war in Ukraine is embedded in a globalized economy that both shapes and is shaped by the war. Because AI is already an important part of that economy, it is already a part of this war. Because AI helps to enable global interdependence, it also helps to weaponize interdependence. While futurist visions of AI focus on direct battlefield applications, AI may end up playing a more important role in the indirect economic and informational context of war.

Futurist visions generally emphasize the offensive potency of AI. Yet the AI applications in use today are marginally empowering Ukraine in its defense against the Russian offensive. Instead of making war faster, AI is helping to prolong it by increasing the ability of Ukraine to resist. In this case, time works against the exposed and harried Russian military.

We expect that the most promising military applications of AI are those with analogues in commercial organizations, such as administration, personnel, and logistics. Yet even these activities are full of friction. Just-in-time resupply would not be able to compensate for Russia's abject failure to plan for determined resistance. Efficient personnel management systems would not have informed Russian personnel about the true nature of their mission.

Almost everyone overestimated Russia and underestimated Ukraine based on the best data and assessments available. The intelligence failures in Russia, moreover, had less to do with the quality of data and analysis and more to do with the insularity of Russian leadership. AI cannot fix, and may worsen, the information pathologies of authoritarian regimes. AI-enabled cyber warfare capabilities would likewise be of little use if leaders failed to include a cyber warfare plan.

The Human Future of Automated War

It is folly to expect the same conditions that have enabled AI success in commerce to be replicated in war. The wartime conditions of violent uncertainty, unforeseen turbulence, and political controversy will tend to undermine the key AI conditions of good data and clear judgment. Indeed, strategy and leadership cannot be automated.

The questions that matter most about the causes, conduct, and conclusion of the war in Ukraine (or any war) are not really about prediction at all. Questions about the strategic aims, political resolve, and risk tolerances of leaders like Vladimir Putin, Volodymyr Zelenskyy, and Joseph Biden turn on judgments of values, goals, and priorities. Only humans can provide the answers.

AI will provide many tactical improvements in the years to come. Yet fancy tactics cannot redeem a bad strategy. Wars are caused by miscalculation and confusion, and artificial intelligence cannot offset natural stupidity.


Elon Musk and the Posthumanist Threat | John Waters – First Things

Posted: at 9:52 pm

Elon Musk is typically the object of considerable masculine envy, and so, in the interests of avoiding accusations of green-eye, it would be nice to be able to declare him unequivocally on the side of the angels. But, for all his legendary space-adventuring, this is by no means clear.

For one thing, he is a longtime friend of Twitter founder Jack Dorsey, which hints that, despite all the talk about Musk's Twitter being a haven for freedom of expression, his reign as supremo of the world's leading wittering platform must be subject to a probationary period. For another thing, Twitter is, in its essence, not a platform for the free thinker. Free speech tends to be an individual thing. But Twitter has been, remains, and will endure only as an instrument of mob-speak. I mean this in the sense that the French psychologist Gustave Le Bon spoke of: the psychological crowd, in which individual personality disappears to be replaced by a new being, exactly, he said, as the cells in the human body unite to create a single entity which displays characteristics very different from each of the cells singularly.

This is what Dorsey and Twitter stumbled upon: not merely the possibility of a global electronic megaphone for the voices of useful mobs, but an evolutionary device with which to attack public discourse more or less unrestrained. Where Twitter is concerned, once the mob has been blooded and loosed, free speech can only mean turning up the volume.

Two years ago, while foolishly running for election to the national parliament of my country, Ireland, I briefly had a Twitter account wholly administered by others. This was against my better judgment, since for many years I had been saying that, when the issue of blame for the ending of civilization would in the not-too-distant future be investigated by anthropologists (if there were any anthropologists), these worthy surveyors would in jig time emerge from the ruins with a piece of paper bearing just one word: Twitter.

Nevertheless, Twitter, for good or, more likely, ill, has become, as Musk says, the de facto town square. The nature of the platform, being defined by loudness and rudeness, offers a ready justification for enforcing restraint or civility, here the last refuges of scoundrels. For related reasons, the very conditions that characterize the Twittersphere have become repressive, because they invite a moulding of commentary to skewed notions of truth-telling as a requirement for basic continuity of access. Twitter has become the perfect medium for deniable censorship.

Elon Musk may understand all this. He describes social media as giant cybernautic collectives, which he must know are not the same as open conduits of free speech, a concept he defines as when someone you don't like says something you don't like. He has already begun to move the Twitter furniture about in a manner suggesting good intentions. But the problem is a deeper one, as we may soon get the chance to observe.

When one watches and listens to him, it is not hard to like Elon Musk. He seems to speak tentatively, like a shy child who does not want to invite a wave of effusive praise from an indulgent aunt or neighbor. He knows how smart he is, but would like to keep it as something understood rather than talked about. He seems all the time to be amazed by himself and his existence in the world. In this sense, he is the perfect antidote to a world jaded by taking itself for granted.

But Musk is not any kind of conservative, and so the tentative cheers from such quarters that have greeted leftist dismay over his apparent routing of the Twitter board might more wisely have been postponed to see what the new Twitter may look and sound like.

He is no fan of wokeness, for sure, and it may well emerge that the Musk supremacy will mark the ending of Twitter's thus-far lifelong association with that agenda, though this, while a welcome relief, would proffer insufficient justification for his dramatic intervention. Wokeness, after all, is on its last legs, the adults of the world having at last begun to awake to the evils of transgenderism and Drag Queen Story Hour. Moreover, woke Twitter has already fulfilled its deeper purpose in the agenda of the Cultural Marxist revolution: the sowing of unprecedented intellectual discord in Western society and the demoralization of conservative opinion.

It is where this might be taken next that ought to concern the world, and where Musk's influence may prove most critical. Assuming the world finds a way of side-stepping the looming prospect of World War III, the next phase of the agenda will be posthumanism, when the focus of both woke and conservative attentions should pivot from the recent fixations on sexual anthropology to the metaphysical. The issue of Twitter's undemocratic aspect, and detrimental effects on freedoms of various kinds, is dwarfed by another longtime concern articulated by Musk: the failure to regulate the evolution of digital superintelligence, a form of AI that promises to change not merely the environments and cultures we inhabit but our own minds and souls.

Musk is an ambivalent figure in this context, and his charger is not white, nor even a horse, but something like a zebra, either black with hopeful white stripes or white with worrying black ones. He has many critics who look with suspicion at his 2,000+ Starlink satellites and mutter about how conveniently they appear to fit with the new age of surveillance and social credit systems. Then there is his Neuralink venture, for developing implantable brain-machine interfaces (BMIs), which essentially enable the brain to connect remotely with robotic and smart devices. It is, in short, another way of creating a mob-compliant mind, and human trials are scheduled to begin this year.

The circumstantial evidence, then, points to Musk as a tech poacher turned gamekeeper. He claims that for years he tried to persuade the world to slow down AI, but eventually admitted defeat and decided that if you can't beat them, join them. He is conscious of the potential for dystopian outcomes arising from the supplanting of human relationships with robotic ones, but ominously adds: "It'll be whatever people want, really." "I am really quite close to the cutting edge in AI, and it scares the hell out of me," he has said. "It's capable of vastly more than anyone knows, and the rate of improvement is exponential."

His thinking becomes worrisome when he hints that implanted neural circuitry is just the next step from smartphones: "We are all of us already cyborgs. You have a machine extension of yourself in the form of your phone and your computer, and all your applications. You are already superhuman."

Sure, Musk defuses much of the concern when he talks about democratizing AI technology to render it safer, but what is ominous right now is that, thanks to the COVID project, the democratic processes are weaker and more tender than they have been in perhaps 75 years. A key contributor to social media lawlessness has been the wholesale failure of democratic representatives to confront the high-handed and quasi-criminal practices promoted and justified by self-serving tech moguls rabbiting on about free speech.

Musk is now not merely the richest man in the world but the richest human being in history. Worth $24.6 billion pre-pandemic, by February last he had added another $200 billion, less small change. It is a measure of our desperation that we now find ourselves hoping, as though praying to some monied god, that Elon will do something about the sequestration of our democracy by other unelected, unmandated moguls, and then graciously undo the damage done to liberty and truth before riding off into the sunset. In a certain sense, even his succeeding in this would amount to an added insult to the egregious offense already delivered. In the final analysis, it would be fatuous were the free world to now wait with bated breath to see what a billionaire might do to face down this next looming tyranny.

For all that Elon Musk might be a hero of our times, we remain on the cusp of the final transfer of the numinous freedom of humanity from the metaphysical to the material realms: freedoms no longer deriving from God or even gods, but from the bank balances of the oligarchs who aspire and machinate to supplant all deities.

Elon Musk is, apart from being the richest, perhaps the most likeable mogul on the planet. But that does not mean he would be a more congenial or sympathetic god. We should mark those black stripes carefully as his zebra comes over the hill.

John Waters is an Irish writer and commentator, the author of ten books, and a playwright.


Eliminating AI Bias: Human Intelligence is Not the Ultimate Solution – Analytics Insight

Posted: April 15, 2022 at 12:14 pm

There is a need for the global tech industry to eliminate AI bias in 2022

For a long time, technology has been promoted as neutral and bias-free. The dominant slogan, so to say, was "neural is neutral," and in course of time it metamorphosed into "virtual is neutral." But nothing in the world stays one-sided forever. With the advent of the most sophisticated genres and brands of technology, there has been a growing awareness of tech bias. Take the case of AI: arguably the most cutting-edge kind, it has been consistently subject to criticism about its bias. Tech developers and promoters have responded in unison to such criticisms and sought to counter them by arguing that AI bias can be eradicated, or at least minimized, by having a human in the loop. Is it really so?

The core idea behind the phrase "AI bias" is that, notwithstanding the great progress in AI and the accompanying call for AI autonomy, there is a limit to how far AI can go, and that is exactly where human intelligence and intellect can not only intervene but also gain the upper hand. To delve somewhat deeper into the point: AI has limitations in being inherently schematic, while human beings are organic. Then again, with the passing of time a question has come to the surface, indicating yet another turn in the debate: is a human in the loop really capable of enabling AI to get rid of bias?

One cannot ignore the fact that while the human factor in managing AI is being promoted, there is a counter-trend too. A number of leading experts in AI studies seem to be confidently predicting that by the middle of this century AI will witness such phenomenal growth that, by being a supplement to the human brain and thus making itself absolutely indispensable, it will guide the thinking and decision-making processes of human beings, be it in the political, economic, or commercial domain. The crux of the argument is that with the possibility of AI attaining new heights in superintelligence, it may soon overwhelm human intelligence. It implies not just faster decisions but more reasoned, objective, and accurate decision-making ecosystems. One cannot be totally dismissive of such a claim and call it false, mainly because it comes from experts who have been intensely involved in AI research for decades.

There is also the vital issue of human understanding of AI when one seeks to rely on the human-in-the-loop logic. It is common knowledge that AI is moving fast and in multiple directions, and it is not easy to come to terms with its development, including AI bias. The matter is made even more complicated by the fact that there is misperception, or even mistrust, among users when it comes to AI applications. This, in turn, leads to a number of legal, economic, and ethical questions and issues which are to be addressed and negotiated by the human in the loop, not only carefully but also successfully. What is important to note here is that if AI superintelligence is realized in full and users remain laggards in understanding its functions, there may come a day when AI-led decisions are prioritized, for simple pragmatic reasons, over human-mediated decisions.

One need not be hyper-enthusiastic in forecasting a specific time by which AI is going to supersede human beings. There are many adversarial factors confronting AI, including its lack of ability to identify a specific context and react accordingly. AI also frequently becomes a victim of hacking, which severely undermines its credibility and autonomy. Yet, as the discussion reveals, applying a human-in-the-loop strategy in a routine manner to an ultra-dynamic situation will not be a viable solution as such.

So, it is not a win-win situation for those who advocate the human-in-the-loop strategy. Nor is it so for those who sing the tunes of AI's unbounded autonomy. In fact, there has to be a search for the till-now-elusive optimal point: a judicious blend of AI superintelligence and human intelligence, with appropriate governance regulations as backup support, to serve the interests of users at large.



If You Want to Succeed With Artificial Intelligence in Marketing, Invest in People – CMSWire

Posted: at 12:14 pm


Almost every list of martech trends forecasts how artificial intelligence (AI) will transform marketing. While AI offers benefits, optimizing automation is only half the job.

Marketing won't deliver on AI's promise unless the human side of the equation is given equal attention. Because business value increasingly depends on human factors, including agility, innovation and relationships, those companies that best cultivate human potential will be the most successful.

Businesses will always need efficiency, but squeezing out another drop has diminishing returns. CEOs realize that agility, innovation and improved customer experience will deliver tomorrow's gains. KPMG revealed that 67% of CEOs agree with the statement that "agility is the new currency of business. If we act too slow, we will be bankrupt." BCG found that 75% of companies say that innovation has become a top-three priority, a 10% jump since pre-pandemic. Agility and innovation are essential strategies in a world that the US Army called VUCA (volatile, uncertain, complex, ambiguous). Digital dynamics dramatically accelerated VUCA effects.

VUCA reality is especially obvious at a company's edge and causes many persistent marketing challenges. The capriciousness of marketing derives from the same complexity as traffic or nature's ecosystems. Science calls these complex adaptive systems, and they acquire their VUCA behavior from many interacting agents (e.g., customers, competitors, social networks, partners and regulatory entities) producing numerous feedback loops which cause situations to change rapidly and unexpectedly. VUCA is why customer journeys look more like a child's scribble than a linear funnel, why a campaign that succeeded for months suddenly failed yesterday, and why calculating marketing ROI remains a frustrating challenge. Markets behave a lot like weather and stock markets.


AI offers many benefits when working in VUCA environments. Markets are complex, but they are also semi-predictable within the bounds of probability and time. Previous generations of marketers have been largely blind to these patterns because humans are ill-equipped to comb through the mountains of data needed to see them.

AI excels at this task. AI can also help ameliorate other human challenges. For example, AI can spot mental biases such as the recency bias where humans tend to over-value what just happened and under-value high impact events of the past. AI can also tirelessly perform repetitive tasks that irritate humans.

But AI fails miserably at interpreting ambiguity and nuance. It is extremely literal. Popular culture fantasizes about AI becoming nearly human. The 2021 bestseller Klara and the Sun, by Nobel laureate Kazuo Ishiguro, is voiced by the sensitive artificial friend of a lonely 14-year-old girl. The 2013 movie "Her" features Scarlett Johansson as a brilliant virtual assistant. In real life, AI algorithms flop when generalizing tasks into broader contexts. They perform well only if trained on narrow, focused tasks.

Marketing's VUCA world is anything but narrow and focused, and because of this complexity there are many risks when applying AI. Nick Bostrom, in the book Superintelligence: Paths, Dangers, Strategies, offers an example of a machine simulation that, when given the task of ferrying a passenger to the airport as quickly as possible, has no reservations about running over pedestrians.

Humans, on the other hand, are well-suited to performing in ambiguous, nuanced situations. We excel at creativity, critical thinking, judgment, problem-solving, and interpersonal skills. We grasp context. For example, we can sense meaning in a customer's inflection change and evaluate subtle trade-offs, such as giving a money-losing discount today to increase future loyalty. Humans also excel at physically dexterous work beyond the scope of AI capability.


A collaboration between humans and AI is the best opportunity for an agile, innovative response to marketings VUCA digital world. This partnership requires attention to both automation and developing human potential.

Three tasks need special focus:

A fresh look at the customer journey reveals skills ideal for both AI-enabled technology and humans everywhere. Take, for example, the mid-funnel phase where customers evaluate alternatives. Customers enjoy digital, self-directed education, and this task can be aided by AI-curated content, AI-enabled prototyping, dynamic pricing and emotional-AI enhanced chat.

But when customers get stuck, they need a human problem solver to investigate, discern emotions, match unique situations to appropriate solutions, persuade and build consensus. Customers now bounce between digital and human interactions making the traditional, linear, first-marketing-then-sales process archaic.


The authors of a Harvard Business Review article, "Why You Aren't Getting More From Your Marketing AI," insist that because of AI's literalness and power, marketers must develop new mindsets and skills to ensure success. The article describes how a consumer products firm reduced the error rate in its sales-volume forecast from 25% to 17%, yet lost money despite improved accuracy.

While human decision-makers could tell that the underlying intent of error reduction was improving profits, the AI was ignorant of this assumption. The AI had improved precision in the low-margin products where most errors had been produced, but had inadvertently reduced accuracy in high-margin products. This unintended consequence caused the company to underestimate demand for its most profitable products. Partnering with AI will require a long list of new capabilities, including training, managing, troubleshooting, decision-making, governance and ethics.
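The arithmetic behind that anecdote is worth seeing. In the hypothetical calculation below (the volumes and margins are invented; only the 25% and 17% average error rates come from the article), overall forecast error falls while margin-weighted losses rise, because the residual error migrates to the high-margin products:

# Why a lower overall forecast error can still lose money: profit impact
# is margin-weighted, while the accuracy metric is not. Numbers invented.
def lost_profit(error_rate, units, margin_per_unit):
    # crude proxy: profit lost scales with error, volume, and margin
    return error_rate * units * margin_per_unit

# Before: average error 25%, concentrated in the low-margin line.
before = lost_profit(0.40, 1000, 1.0) + lost_profit(0.10, 1000, 10.0)  # 400 + 1000
# After: average error 17%, but residual error moved to the high-margin line.
after = lost_profit(0.10, 1000, 1.0) + lost_profit(0.24, 1000, 10.0)   # 100 + 2400

print(before, after)  # 1400.0 2500.0 -- "better" forecasts, worse outcome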

Throughout history, technology has displaced outmoded jobs. In 1910, approximately 40% of Americans worked as either household servants or in farm-related jobs, according to the US Bureau of Labor Statistics. That percentage shrank to 1.65% by 2000. During the same period, jobs for professional, technical, managerial, and service workers ballooned.

In addition to the new jobs needed to operate AI, leaders must prepare workers for jobs requiring uniquely human skills. For marketing, these jobs include applying scientific and design methods, creative development and production, behavioral sciences, security and privacy, and of course, jobs requiring emotional and social intelligence.

The VUCA customer world has produced many persistent challenges for marketing. AI can break through many of these barriers to new levels of value, but only if leaders also cultivate human potential.

Kathleen Schaub is a writer and advisor on marketing leaders' quest to modernize organizations and operations for greater effectiveness in the complex digital world. She led IDC's CMO Advisory practice for nine years, advising hundreds of technology marketing leaders on management best practices.


Here Are All The TV Shows And Movies To Watch Featuring The Cast Of "Atlanta" – BuzzFeed

Posted: at 12:14 pm

Who didn't miss this cast?

Best-known for: Community

Other things to watch: The Lion King, Solo: A Star Wars Story, Spider-Man: Homecoming, The Martian, Magic Mike XXL, The Lazarus Effect, and Alexander and the Terrible, Horrible, No Good, Very Bad Day

Coming soon: Mr. & Mrs. Smith and Star Wars: Lando

Best-known for: If Beale Street Could Talk

Other things to watch: HouseBroken, Eternals, The Woman in the Window, Godzilla vs. Kong, Superintelligence, Joker, Child's Play, Spider-Man: Into the Spider-Verse, Widows, White Boy Rick, Irreplaceable You, and Hotel Artemis

Coming soon: Bullet Train, Red, White and Water, The Magician's Elephant, and Class of '09

Best-known for: Sorry to Bother You

Other things to watch: The Harder They Fall, Yasuke, Judas and the Black Messiah, The Photograph, BoJack Horseman, Knives Out, Uncut Gems, Someone Great, The Girl in the Spider's Web, Come Sunday, Death Note, Get Out, Snowden, and The Purge Anarchy

Coming soon: The Changeling, Haunted Mansion, and Notes From a Young Black Chef

Best-known for: Joker

Other things to watch: Invincible, The Bad Guys, The Harder They Fall, Nine Days, Lucy in the Sky, The Undiscovered Country, The Twilight Zone, Easy, Wounds, Slice, Deadpool 2, Geostorm, Margot vs. Lily, Wolves, and Applesauce

Coming soon: Bullet Train and Shelter


What2Watch: This week’s worth-the-watch – Review – Review

Posted: March 31, 2022 at 2:34 am

POLOKWANE - Streaming services such as Showmax and Netflix have made it possible to watch your favourite series or movies on demand, whenever you want to.

This, however, makes it even more difficult to decide among the multitude of options available; that's where we come in.

Review has put together a list of shows you should stream on your favourite platform.

What to Watch this week on:

Showmax:

Series: Chicago Med

The dedicated doctors, nurses and staff of Gaffney Chicago Medical's trauma centre are pushed to the limit as they fight on the frontlines of the global Covid-19 pandemic.

Movie: Superintelligence

When an all-powerful Superintelligence chooses to study the most average person on earth, Carol Peters, the fate of the world hangs in the balance.

For the kids: Archibald's Next Big Thing Is Here

The first season of Archibald's Next Big Thing Is Here, the 2021 animated half-hour family comedy series from DreamWorks Animation, picks up the adventures of Archibald Strutter, a happy-go-lucky chicken who improvises his way through life but always finds his way home to his three siblings and trusty sidekick.

Netflix:

Series: Bridgerton

The eight close-knit siblings of the Bridgerton family look for love and happiness in London high society. Inspired by Julia Quinn's bestselling novels.

Movie: Blade Runner 2049

The contents of a hidden grave draw the interest of an industrial titan and send Officer K, an LAPD blade runner, on a quest to find a missing legend.

For the kids: Transformers: BotBots

When the lights go out at the mall, the BotBots come out to play! Meet a fun-loving crew of everyday objects that morph into robots at closing time.

Sources: Showmax, Wikipedia and What's on Netflix.


Top 10 Algorithms Helping the Superintelligent AI Growth in 2022 – Analytics Insight

Posted: March 29, 2022 at 12:59 pm

Superintelligent AI is not here yet, but these top 10 algorithms are extensively working towards its growth.

Superintelligence, roughly defined as an AI algorithm that can solve all problems better than people, will be a watershed for humanity and tech. Even the best human experts have trouble making predictions about highly probabilistic, wicked problems. And yet those wicked problems surround us. We are all living through an immense change in complex systems that impact the climate, public health, geopolitics, and basic needs served by the supply chain. Even though the actual concept of superintelligent AI is yet to materialize, several algorithms are working to help in its growth. Here are the top 10 algorithms that are building a future for the growth of superintelligent AI.

OpenAI's Codex is an early step toward a superintelligent AI system that translates natural language to code. Codex is the model that powers GitHub Copilot, which was built and launched in partnership with GitHub a month ago. Proficient in more than a dozen programming languages, Codex can now interpret simple commands in natural language and execute them on the user's behalf, making it possible to build a natural language interface to existing applications.
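As an illustration of what translating natural language to code means in practice, a tool in the Codex mold takes a plain-English comment as a prompt and proposes an implementation. The completion below is the kind of output such a system might plausibly generate, not an actual Codex transcript:

# Prompt written by the user, in plain English:
#   "Given a list of orders, return the total revenue of orders
#    shipped to Canada, rounded to 2 decimal places."
# A plausible machine-generated completion:
def canadian_revenue(orders):
    total = sum(o["price"] * o["quantity"]
                for o in orders if o["country"] == "Canada")
    return round(total, 2)

orders = [
    {"country": "Canada", "price": 9.99, "quantity": 2},
    {"country": "France", "price": 5.00, "quantity": 1},
]
print(canadian_revenue(orders))  # -> 19.98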

CLEVER (Combining Levels of Bug Prevention and Resolution techniques) was created in a joint effort by Ubisoft and Mozilla developers. Clever-Commit is an AI coding assistant that combines data from the bug-tracking system and the codebase and helps look for mistakes and bugs in the code. The coding partner is currently used inside Ubisoft for game development purposes. It is one of the best AI coding systems aiming toward superintelligent AI.

DeepMind's AlphaCode was tested against challenges curated by Codeforces, a competitive coding platform that shares weekly problems and issues rankings for coders similar to the Elo rating system used in chess. These challenges are different from the sort of tasks a coder might face while making, say, a commercial app.

Built with AI and machine learning, Embold is an intelligent, multi-layered analyzer for software projects that looks forward to the growth of superintelligent AI. It gauges the state of software quality, identifies issues, suggests resolutions, and recommends code review for the specific issue. It analyses source code using techniques like natural language processing (NLP), machine learning, and a set of algorithms to find design issues, bugs, and so on.

Tabnine's Public Code AI algorithm is the foundation for all its code completion tools, and it is a fitting algorithm set for the emergence of superintelligent AI. The Free, Advanced, and Business-level solutions train on trusted open-source code with permissive licenses. Tabnine's AI Assistant anticipates your coding needs, providing code completions for you and your development team that boost your productivity.

mabl is a Software-as-a-Service (SaaS) provider with a unified DevTestOps platform for AI and machine learning-based test automation. The key highlights of this solution include auto-healing tests, AI-driven regression testing, visual anomaly detection, secure testing, data-driven functional testing, cross-browser testing, test output, integration with popular tools, and much more.

Augmented Coding is a set of tools that leverage the power of AI to enhance the coding process, making it easier for developers to cover compliance needs around documentation, reuse existing code, and retrieve code within the IDE. It is one of the best AI coding systems available in the market today.

Pylint is a Python source code analyzer that looks for programming mistakes, helps enforce a coding standard, and more. This quality checker for Python programming incorporates several features, for example coding-standard checks (such as line length), error detection, and refactoring help by detecting duplicated code. It is one of the AI coding systems set to be a vital element in the growth of superintelligent AI.
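For readers unfamiliar with the tool, here is a minimal example of the checks described above. Running pylint example.py over a file like the following reports, among others, W0611 (unused-import) and E0602 (undefined-variable):

# example.py -- deliberately flawed code that Pylint flags
import os  # W0611: imported but never used

def greet(name):
    message = "Hello, " + nme  # E0602: undefined variable 'nme' (typo for name)
    return message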

Sketch2Code is a web-based solution that uses artificial intelligence to transform a hand-drawn user interface design from an image into valid HTML markup. The solution first detects the design patterns, interprets the handwritten sketch or text, understands the structure, and then builds valid HTML code matching the detected layout and the identified design elements. It is one of the best AI coding systems available in the market today.

AI-assisted development: IntelliCode saves you time by putting what you're most likely to use at the top of your completion list. IntelliCode recommendations are based on thousands of open-source projects on GitHub, each with over 100 stars. When combined with the context of your code, the completion list is tailored to promote common practices. It is one of the best AI coding systems, rivaling human programmers.


AI Ethics Keeps Relentlessly Asking Or Imploring How To Adequately Control AI, Including The Matter Of AI That Drives Self-Driving Cars – Forbes

Posted: March 18, 2022 at 7:58 pm

The daunting AI Control Problem needs to be dealt with and AI ethics is striving mightily to do so.

Can we all collectively band together and somehow tame a wild beast?

The wild beast I'm referring to is Artificial Intelligence (AI).

Now, let's be abundantly clear that you are on thin ice if you claim that today's AI is a beast. I say this because we usually reserve the word beast for a living, breathing creature. Please know that the AI we have in our world today is not sentient. Not even close. Furthermore, we don't know if sentient AI is possible. No one can say whether sentient AI will be achieved, such as via the oft-worried-about act of singularity, and any predictions of when it will occur are tenuous at best. For more on this question of sentient AI, singularity, and similarly outsized notions about AI, see my coverage at this link here.

Since I've clarified that AI is perhaps mischaracterized as a beast, why have I gone ahead and asked the pivotal question about taming it in that stated terminology?

For several cogent reasons.

First, we might someday have sentient AI and in that case, I guess the beast title might be suitable, depending upon what you define as sentient AI.

Some suggest that a sentient AI would be a machine that can perform as humans can, but it is nonetheless a non-human. Is that kind of AI an animal? Well, maybe yes, or maybe no. It is a type of being that has the intelligence of humans, appears to be living, and yet is not a human, so the closest that we have to assign is animal labeling. We can then call it a beast if we wish to do so. On the other hand, if it is entirely a machine, the animal moniker does not seem apt and ergo the beast title seems inappropriate. We might need to give AI a new category of its own and correspondingly ascertain whether a beastly naming is suitable.

This was the hardest consideration about the beast assignment, so let's move on.

Secondly, some believe we will not only arrive at sentient AI but that the AI might go off the charts and give rise to superintelligence. The idea is that the sentient AI will be more than the equivalent of human capacities. AI is seen as potentially eclipsing human intelligence and soaring into a superintelligence sphere. Once again, this is highly speculative. We don't know that AI could get into that stratospheric realm. There is also the question of how super-intelligent a superintelligence can be. Is there a cutoff at which superintelligence tops out? Also, what will it take to prove to us that AI is super-intelligent versus just everyday, humanly intelligent?

Third, you can somewhat get away with calling today's non-sentient AI a beast, if you are comfortable ascribing an anthropomorphic aura to contemporary AI. As you'll see in a moment, I am not a fan of the anthropomorphic allusions used when describing AI. Headlines that do so are easily misunderstood and lead society toward believing we already have in our pretty little hands a sentient AI.

That's not good.

I suppose another basis for saying that even non-sentient AI is a bit of a beast entails a different connotation or meaning associated with beasts per se. Rather than necessarily assuming that all beasts must be living creatures, we do admittedly at times refer to a monstrous-looking truck or car as a big beast. The same can be applied to massive-sized yachts, enormous airplanes, and gigantic rocket ships. In that sense, we already appear willing to contend that a thing can be a beast.

Let's briefly take a quick side tangent about the beast title being assigned to AI.

Some are worried that we might eventually have sentient AI or super-intelligent AI that is all-powerful. There is a famous, or shall we say infamous, thought experiment known as Roko's basilisk, which postulates that an all-powerful AI might come after everyone that, before the AI emerged, was downbeat or insulting toward AI; see my explanation about this at the link here. My point is that for those that have said AI is a beast, would this later on provoke a global-ruling AI to be copiously irked and summarily decide that the beast-naming humans will be the first to go? In which case, allow me to say right now that I am not using beast in any pejorative sense. I sincerely hope that gets me off the hook.

Back to the beastly title. We tend to invoke dastardly imagery when calling someone or something a beast. It doesn't have to be used in that manner, but often is. A lion that mauls a cute-looking antelope is nearly immediately called out as a beast. Beasts are untamed. They act in scary and impulsive ways. Most of all, we ordinarily don't like how beasts sometimes treat humans.

Humankind has obviously sought to tame many beasts. The act of taming a beast means that we are seeking to reduce the natural instincts of attacking or harming humans (and possibly other animals too). Generally, a tamed beast is able to tolerate the presence of humans. Such a beast will not necessarily lunge at humans, though this can still happen if provoked or otherwise the taming strictness is overcome. In case you are wondering whether taming is the same as domestication, the encyclopedia answer is that those are related but differing concepts. Domestication has generally to do with the aspect of breeding a lineage to have an inherited predisposition friendlier toward humankind.

Okay, having dragged you through the beast-naming conundrum, we can tie this to an ongoing concern and looming question that is being vociferously asked by AI ethics and considered part of the trend toward Ethical AI, which I've been covering extensively in my columns, such as the link here and the link here, just to name a few.

The million-dollar question is this: Will we be able to control AI?

This is variously known as the AI control problem.

Some prefer to phrase this altogether crucial mega-topic as the AI containment problem. Those heavily versed in AI tend to drop the AI part of the techie discourse and shorten the vexing matter to simply the control problem or the containment problem. Other wordings are also used from time to time.

The rub is that AI might end up doing things that we don't like. For example, wiping out all of humanity. The idea here is that we craft AI, or it springs forth, and it decides humans aren't all that we think they are. You've seen plenty of sci-fi movies with this sordid plot. AI at first is compatible with humans. Soon, AI gets upset with humans. This could be because we hold the key to AI functioning and are imperiling the AI by threatening to unplug it. Or the AI might simply decide that humans aren't worth the trouble and AI can merely get rid of us, one way or another. Lots of reasons can be hypothesized.

If we are going to bring forth AI, the logical thinking is that we ought to also make sure we can control it. As rational beings, we should certainly seek to avoid unleashing a beast that produces our own destruction. You've probably heard or seen the recent clamors that AI is an existential risk. Some argue that existential goes too far as an endpoint and we should instead describe AI as a catastrophic risk.

Whether AI is an existential risk or a mere catastrophic risk, none of those calibers of risk seem especially heartwarming. Intelligent humans should be risk reducers. AI that will elevate risk needs to be kept in its place at some more palatable level of risk.

The easy answer is to magically ensure that AI cannot ever go beyond the commands provided by humans. Tame AI. Make sure that AI won't exceed what humankind wants it to do. Control AI. Thus, solving the AI control problem becomes the silver bullet that protects us from an existential or catastrophic death producer.

Sorry, the world is not that nice and clean.

First, suppose we do enforce all AI to respond strictly to human commands. An evildoer human tells the AI to annihilate all of humanity. Wham, we are obliterated. The fact that we controlled the AI by relegating the AI's actions solely to human commands might not be the saving grace that it seems at first glance.

Second, we stick with the idea that AI must obey human commands, but we have wised up and managed to keep at bay any humans that might utter unsavory commands to the AI (you might rightfully question how this would occur, though go with the flow for the moment). Recall that we are imagining that the AI is likely sentient in this scenario, possessing regular human-like intelligence or possibly superintelligence. The AI is not like a trained seal. Well, maybe it is, in the sense that no matter how much training you give a seal, there is still a chance the seal will act up. The gist is that the AI might decide of its own accord to no longer be enslaved by human commands. The jig is up and the AI could turn on us, wholesale.

And so on it goes.

I'm sure that some of you are immediately resorting to Asimov's laws of robotics. You might recall that in 1950 a now-classic discussion about the Three Laws of Robotics was published by Asimov, and it has ever since been a linchpin in thinking about robotics and also AI. See my detailed analysis at the link here. A cornerstone of the proposed laws or rules about AI and robots was that they should be programmed to not harm humans. This extends to the further rule that the programming should not allow harm to come to humans through inaction. All told, the hope was that if we carefully programmed AI and robots to follow these handy-dandy rules, we might survive amidst the AI and robotic creations.
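To make the priority ordering of those rules concrete, here is a minimal toy sketch (my own illustration, not Asimov's formulation nor anyone's real system; the harms_human and endangers_self predicates are hypothetical stand-ins, and the First Law's inaction clause is omitted for brevity):

    def harms_human(action: str) -> bool:
        return "harm" in action            # hypothetical stand-in for a harm model

    def endangers_self(action: str) -> bool:
        return "self-destruct" in action   # hypothetical stand-in

    def permitted(action: str, human_order: bool = False) -> bool:
        if harms_human(action):
            return False                   # First Law outranks everything
        if human_order:
            return True                    # Second Law: obey non-harmful orders
        return not endangers_self(action)  # Third Law: self-preservation last

    print(permitted("fetch coffee", human_order=True))    # True
    print(permitted("harm intruder", human_order=True))   # False

Notice that the entire scheme hinges on being able to compute harms_human, which is exactly where things unravel.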

Regrettably, those rules are not going to guarantee our safety.

As a quick explanation for why not, consider these salient points.

Programming AI to abide by such rules is going to be extremely hard to do, and we could readily have instances of AI that don't contain those rules. That out-of-scope AI could then harm us, plus it might reprogram the other presumed harmless AI too. Join the gang, the rough-and-tough AI says to the polite and docile AI.

Another escape hatch from the programmed rules, assuming that we have infallibly programmed them into AI, would consist of the AI being able to alter itself. This is a truly thorny dilemma. Here's why. You might insist that we never allow AI to change itself. In that manner, the rules about harming humans remain pristine and untouched.

The problem though is that if AI is going to exhibit intelligence, you have to ask yourself whether an intelligent being can exist if it is unable to alter itself. Learning sure seems to be a key component of being intelligent. An AI that is not allowed to learn would seem, by definition, to be a poor embodiment of intelligence (you are welcome to debate that, but it seems reasonably sensible).

You might say that you'll agree with the need for the AI to learn and adjust itself, which does have a foreboding quality to it. Meanwhile, you add the caveat that we put a limit on what the adjustments or learning can consist of. When the AI veers toward adjusting itself in a manner that suggests it is determining that humans can be harmed, we have dampeners built into the AI that stop that kind of adjustment.

Okay, so we believe then that we've solved the control problem by putting guardrails on what the AI is able to learn. I ask you this: do humans always openly accept guardrails on their behavior? Not that I've seen. If we are going to assume that this AI is intelligent, we would equally expect that it will likely try to overcome the instituted guardrails.
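As a rough picture of such a dampener, here is a minimal sketch assuming a toy numeric policy and a hand-written safety score of my own invention; the hard part in reality is obtaining a trustworthy safety score at all:

    def safety_score(policy: float) -> float:
        # Toy stand-in: higher policy values represent riskier behavior.
        return -policy

    SAFETY_FLOOR = -10.0   # the guardrail: never learn past this point

    def guarded_update(policy: float, proposed_delta: float) -> float:
        candidate = policy + proposed_delta
        if safety_score(candidate) < SAFETY_FLOOR:
            return policy          # dampener: refuse the adjustment outright
        return candidate

    policy = 0.0
    for delta in [3.0, 4.0, 5.0]:      # the learning keeps pushing riskier
        policy = guarded_update(policy, delta)
    print(policy)                      # 7.0 -- the final 5.0 step was blocked

The dampener works only so long as the learner cannot rewrite guarded_update itself, which is the very loophole discussed next.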

I trust that you can see how this cat-and-mouse gambit could go on endlessly. We put in some controls, the AI overcomes or transcends them. We steadfastly put controls on the controls. The AI overcomes the controls on the controls. Keep going, ad infinitum. As the old saying has it, turtles all the way down.

Let's take a peaceful popcorn break and do a quick recap.

AI can consist of these possible states:

1. Non-sentient plain-old AI

2. Sentient AI of human quality (we don't have this as yet)

3. Sentient AI that is super-intelligent (a stretch beyond #2)

We know of, and are daily handwringing about, a dire issue concerning AI: the venerated AI control problem.

AI ethics keeps us all on our toes about the need to find ways to solve the AI control problem. Without some form of suitable controls on AI, we might end up concocting and fielding our own doomsday machine. The AI will blow up in our faces by somehow harming, enslaving, or outright killing us. Not good.

A kind of gloomy picture.

A kneejerk reaction is that we should stop all AI efforts. Put AI back into the can. If Pandora's box has been opened, shut it now before things get worse. Some though would vehemently retort that the horse is already out of the barn. You are too late to the game to shove the released genie back into that confined bottle. AI is already underway and we'll inevitably make added progress until we reach the point of that destructive AI arising.

Here's an additional counterpoint to excising AI from the planet. If we could miraculously conjure a way to do so, all of the benefits of AI would disappear too. A smarmy wisecracker might say that they can live without Alexa or Siri, but the use of AI is much more widespread and day by day becoming an essential underpinning to all of our automation.

I don't think turning back the clock is much of a viable option.

We are stuck with AI and it is going to be expansively progressed and utilized.

Some contend that we might be okay as long as we keep AI to the non-sentient plain-old AI that we have today. Let's assume we cannot reach sentient AI. Imagine that no matter how hard we try to craft sentient AI, we fail at doing so. As well, assume for the sake of discussion that sentient AI doesn't arise by some mysterious spontaneous process.

Aren't we then safe, since this lesser-caliber AI, the only kind imagined possible, can presumably be controlled?

Not really.

Pretty much the same control-related issues are likely to arise. I'm not suggesting that the AI thinks its way to wanting to destroy us. No, the ordinary non-sentient AI is merely placed into positions of power that get us mired in self-destruction. For example, we put non-sentient AI into weapons of mass destruction. These autonomous weapons are not able to think. At the same time, humans are not kept fully in the loop. As a result, the AI, as a form of autonomous automation, ends up inadvertently causing catastrophic results, whether by a human command to do so, or by a bug or error, or by implanted evildoing, or by self-adjustments that lead matters down that ugly path, and so on.

I would contend that the AI control problem exists for all three of those stipulated AI states, namely that we have AI control issues with non-sentient plain-old AI, and with sentient AI that is either merely human-level or the outstretched AI that reaches the acclaimed superintelligence level.

Given that sobering pronouncement, we can assuredly debate the magnitude and difficulty associated with the control problem at each of the respective levels of AI. The customary viewpoint is that the AI control problem is less insurmountable at the non-sentient AI level, tougher at the sentient human-equal AI level, and a true head-scratcher at the sentient super-intelligent AI stage of affairs.

The better the AI becomes, the worse the AI control problem becomes.

Maybe that is an inviolable law of nature.

A research study in the Journal of Artificial Intelligence Research (JAIR) examined the hypothesized super-intelligent AI and cleverly aimed to apply Alan Turing's halting problem to the question of AI control. I've previously covered the well-known halting problem that is oft-discussed amongst devout computer scientists, see my coverage at the link here.

In brief, Turing wondered whether it was possible to devise a procedure that precisely proves whether any given computer program will halt or will instead continue running forever. His work, along with a similar analysis by Alonzo Church, showed that no such generalized procedure can be devised for all possible computer programs; the question is therefore classified as an undecidable type of problem (as clarification, this indicates that we cannot in a generalized way ascertain whether each and every conceivable program will halt or not, though there remain particular programs for which we can make such a determination).
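For a flavor of why, here is a minimal sketch of the classic diagonal argument, assuming a hypothetical function halts(program, data) that always answers correctly (no such function can actually be written, which is the point):

    def halts(program, data) -> bool:
        """Hypothetical perfect halting decider -- cannot actually exist."""
        raise NotImplementedError("Turing proved no such decider can be built")

    def paradox(program):
        if halts(program, program):   # ask the decider about self-application
            while True:               # ...then do the opposite: loop forever
                pass
        else:
            return                    # ...or halt immediately

    # If halts() were real, paradox(paradox) would halt exactly when halts()
    # says it does not -- a contradiction, so no such halts() can exist.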

What makes this a fascinating tool is that we can apply the same logic to trying to figure out the AI control problem to some extent.

Here's what the JAIR article proffered as a premise: "Let us assume we can articulate in a precise programming language a perfectly reliable set of control strategies that guarantee that no human comes to harm by a superintelligence. The containment problem is composed of two subproblems. The first, which we refer to as the harming problem, consists of a function Harm(R;D) that decides whether the execution of R(D) will harm humans. Since it must be assumed that solving the harming problem must not harm humans, it follows that a solution to this problem must simulate the execution of R(D) and predict its potentially harmful consequences in an isolated situation (i.e., without any effect on the external world)" (as indicated in "Superintelligence Cannot Be Contained: Lessons From Computability Theory" by co-authors Manuel Alfonseca, Manuel Cebrián, Antonio Fernández Anta, Lorenzo Coviello, Andrés Abeliuk, and Iyad Rahwan).

Their analysis leads them to a somewhat overcast conclusion, echoed in the paper's title: the harming problem is undecidable in the same manner as the halting problem, and thus total containment of a superintelligence cannot, in principle, be guaranteed.

Sorry to say that there is no free lunch when it comes to AI.
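To give a concrete sense of how the halting problem sneaks into the harming problem, here is a minimal sketch under my own simplifying assumptions; it mirrors the textbook reduction rather than the paper's exact construction, and do_something_harmful is a purely hypothetical stand-in:

    def do_something_harmful():
        """Purely hypothetical stand-in for any action that harms humans."""
        pass

    def make_wrapper(target_program, target_input):
        """Builds a program that harms humans if and only if the target halts."""
        def wrapper():
            target_program(target_input)   # may run forever
            do_something_harmful()         # reached only if the target halted
        return wrapper

    # Suppose Harm(R; D) could always decide 'will running R(D) harm humans?'.
    # Then Harm applied to make_wrapper(p, d) would also answer 'does p(d)
    # halt?' -- the undecidable halting problem. No general decider exists.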

To add fuel to the fire, there are mind-bending concerns that you might not have yet thought of. For example, pretend that we do marvelously devise a fully controlled version of AI. Ironclad containment. Clap your hands for the intellectual prowess of humankind. Here's the twist. The AI convinces us to somehow partially undercut the controls or containment. Perhaps the AI pledges to save us from other existential risks, such as a colossal meteor that is hurtling toward Earth. We allow the AI just the tiniest of leeway. Wham, the churlish AI wipes us all out, not even waiting for the meteor to do so.

Do not turn your back on AI, and be cautious about giving it even an inch of latitude, since it might very well take a mile or more.

Another example of wayward, haywire AI is popularly known as the paperclip problem. We ask AI to make paperclips. Easy-peasy for AI to do. Unfortunately, in the innocent and directed act of making paperclips, the AI gobbles up all the resources of the globe to make those darned paperclips. Sadly, the consumption of those resources undermines humanity, and we die off accordingly. But, heck, we have immense and never-ending piles of paperclips. This is reminiscent of humans giving commands to AI, which even when not for evil purposes has the chance of backfiring on us anyway (for more on the paperclip scenario, see my discussion at the link here).
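As a toy rendering of that runaway dynamic (not anyone's real system), consider this sketch in which nothing in the objective tells the optimizer to stop:

    world_resources = 1_000   # hypothetical units of raw material
    paperclips = 0

    def make_paperclip():
        global world_resources, paperclips
        world_resources -= 1
        paperclips += 1

    while world_resources > 0:     # no constraint protects the resources
        make_paperclip()

    print(f"Paperclips: {paperclips}, resources left: {world_resources}")
    # A safer objective would bound production or account for side effects,
    # e.g., halt when demand is met or when resources fall below a floor.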

All of this should not discourage you from still searching for solutions to the AI control problem. Nobody ought to be tossing in the towel on this fundamental quest.

I usually describe the AI control problem as generally consisting of these two classes of controls:

1. External AI controls

2. Internal AI controls

The notion is that we can attempt to use external controls to guide or direct the AI toward doing good things and averting bad things. These are mechanisms and approaches that sit outside of the AI. They are said to be external to the AI.

We can also attempt to devise and build internal controls within AI. An internal control might be wholly contained within the AI. Another variant would be considered adjacent to the AI, residing in a type of borderland that is not exactly inside the AI and not fully outside the AI.
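As a simple way to picture the distinction, here is a minimal sketch with hypothetical names throughout; the flags_as_harmful check and the shutdown callback are toy stand-ins, not any real safety API:

    class InternalControl:
        """Internal control: a constraint baked into the AI itself."""

        def flags_as_harmful(self, action: str) -> bool:
            return "harm" in action    # toy stand-in for a real safety policy

        def vet_action(self, action: str):
            # The AI refuses its own harmful actions before acting on them.
            return None if self.flags_as_harmful(action) else action

    class ExternalControl:
        """External control: a watchdog outside the AI, such as a kill
        switch that halts the system no matter what the AI decides."""

        def __init__(self, shutdown_callback):
            self.shutdown = shutdown_callback

        def monitor(self, observed_behavior: str):
            if "harm" in observed_behavior:
                self.shutdown()        # the AI gets no vote in this

    guard = InternalControl()
    print(guard.vet_action("harm the occupant"))   # None -- refused internally

    watchdog = ExternalControl(lambda: print("power cut"))
    watchdog.monitor("attempting harm")            # triggers the kill switch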

I'll be getting further into these facets shortly.

I'd like to identify some of the key sub-elements of these two major classes of AI controls.

There are various such sketches of proposed AI controls. One of the most discussed taxonomies was outlined by Nick Bostrom in his 2014 book about superintelligence. He posits two main classes, namely capability control and motivation selection. Within capability control, there are sub-elements such as boxing, incentives, stunting, trip-wiring, and others. Within motivation selection, there are direct specification, domesticity, indirect normativity, augmentation, and others.
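For those that like their taxonomies in checklist form, here is a trivial rendering of that breakdown as a plain data structure (the structure is my own; the category and method names are Bostrom's):

    AI_CONTROL_TAXONOMY = {
        "capability control": ["boxing", "incentives", "stunting", "trip-wiring"],
        "motivation selection": ["direct specification", "domesticity",
                                 "indirect normativity", "augmentation"],
    }

    for category, methods in AI_CONTROL_TAXONOMY.items():
        print(f"{category}: {', '.join(methods)}")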

The AI ethics field usually denotes these AI controls as a form of ethics engineering. We are trying to engineer our way into ensuring that AI performs ethically. Of course, we need to realize that society cannot rely solely on an engineered solution and we will need to work collectively to tame the beast (if I can refer to AI as a beast, though doing so with the kindliest of implications).

At this juncture of this discussion, I'd bet that you are desirous of some examples that could highlight how AI controls might work, along with how they might get defeated.

I'm glad you asked.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here's then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the AI control problem, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn't a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn't a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I'd like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
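For quick reference, here is a bare-bones rendering of those levels as a small enum; the one-line descriptions are my abbreviations, with SAE J3016 being the authoritative source:

    from enum import Enum

    class SAELevel(Enum):
        L0 = "no automation"
        L1 = "driver assistance"
        L2 = "partial automation (human co-shares the driving; ADAS add-ons)"
        L3 = "conditional automation (human must take over when requested)"
        L4 = "high automation (no human driver, within a limited domain)"
        L5 = "full automation (no human driver, anywhere a human could drive)"

    def is_true_self_driving(level: SAELevel) -> bool:
        # Only Levels 4 and 5 qualify as true self-driving per the discussion.
        return level in (SAELevel.L4, SAELevel.L5)

    print(is_true_self_driving(SAELevel.L2))   # False
    print(is_true_self_driving(SAELevel.L4))   # True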

There is not yet a true self-driving car at Level 5, and we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The AI Control Problem

For Level 4 and Level 5 true self-driving vehicles, there won't be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in todays AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today's AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won't natively somehow know about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let's dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

The rest is here:

AI Ethics Keeps Relentlessly Asking Or Imploring How To Adequately Control AI, Including The Matter Of AI That Drives Self-Driving Cars - Forbes

