Hulk Just Exploded Into Bits and Pieces in The Immortal Hulk #35 – Screen Rant

Hulk just overloaded with green energy and exploded in The Immortal Hulk #35. Is the 'Immortal Hulk' more mortal than readers have believed?

Warning! Spoilers for Immortal Hulk #35 below

The Hulk is known for his extreme resilience, a durability bolstered by his ability to increase his strength through anger. He is so close to immortal that he is often pitted against Marvel's strongest heroes and villains. Hulk has survived countless destructive events and deadly foes, but in The Immortal Hulk #35, his body was completely obliterated by an explosion from within.

The Hulk is a hero so powerful that some of his fellow heroes once tricked him into leaving Earth because of his capacity for destruction. While he appears immortal (and has become seemingly immortal in the current run), he has momentarily died in several comics, only to be saved or revived by a miraculous event. His death in The Immortal Hulk #35 was one of the most complete and brutal, because his entire body is reduced to shreds of skin and bone in the final panel.

Related: Hulk's Strong Enough To Wear TWO Infinity Gauntlets (in Comics)

The Immortal Hulk #35, by Al Ewing, Mike Hawthorne, Mark Morales, Paul Mounts, Cory Petit, and Alex Ross, ends with Bruce Banner's body left in tatters after he internally combusts. The issue finds Hulk helping build a house as part of his quest to do good and control his extreme emotions. His good deed turns into a big press scene for the local mayor. Little does Bruce know, one of his greatest foes, The Leader, is mere feet away from him, disguised as Bruce's close friend Rick Jones. Just as Hulk calms himself down from a close call with losing his temper, The Leader strolls up to him in Rick's body and lays a hand on his shoulder, charging Hulk's body with a green light that appears to be gamma radiation.

The green energy fills The Hulk's body, first escaping through his eyes and mouth, until it finally explodes, obliterating Hulk's body and killing two bystanders. The Leader is one of the most capable Hulk villains thanks to the superintelligence at his disposal; like The Hulk, he was a product of gamma radiation. Fans have seen Hulk's body endure all kinds of punishment, from getting his neck snapped in the comics to being scorched in Avengers: Endgame. Despite coming extremely close to dying many times, Hulk has rarely had his entire body destroyed as it is in The Immortal Hulk #35.

The Leader has historically been one of The Hulk's fiercest opponents in the comics, so has he finally done what seemed impossible? It is difficult to fathom what writer Al Ewing has in store for Marvel's big green powerhouse. This suspenseful story poses a question that Hulk fans have been pondering since the character's inception: is the 'Immortal Hulk' truly immortal?

Next: How Powerful The Hulk Really Is In Each MCU Movie

Source: The Immortal Hulk #35


Charles Singh is a reader, writer, and huge geek. He is based in The Bronx, New York. He received a Bachelor of Arts degree in English literature in 2019 from Lehman College. He has worked for several non-profit organizations including The Harlem Children's Zone, MMCC, and The GO Project, assisting New York City's youth and spreading a love for literature.


Should You Get Down (And Occasionally Dirty) With Star Trek: Lower Decks? – PRIMETIMER

Star Trek: Lower Decks (CBS)

Media companies know you have a choice when it comes to streaming platforms, so they must differentiate themselves from one another. That's easy for Netflix, which was first in the game; or Disney+, which leverages a content catalog and a brand identity that's been nearly a century in the making. Beyond that, it gets a little more difficult for a streamer to stand out. From the start, CBS All Access has defined itself as the destination for new series in the Star Trek franchise: it launched in 2017 with Star Trek: Discovery, a new space adventure. Star Trek: Picard, which picks back up with the beloved Next Generation captain after his retirement, followed in January of this year. This week sees the premiere of Star Trek: Lower Decks, which... is a bit of a departure.

Up to now, the exploits chronicled in the Star Trek franchise (including both seasons of Star Trek: The Animated Series) have tended toward thrilling heroics. This animated series goes another way. For starters, Lower Decks is set on a Starfleet vessel, the U.S.S. Cerritos, which is tasked with handling second contact with new civilizations; Cerritos officers visit for utilitarian follow-ups after other crews have already swept in and out to collect first-contact glory. But the series doesn't even focus on the second-contact Cerritos officers: its main characters are the ensigns who live and work in the ship's titular bowels, doing tasks of commensurate crappiness.

Ensign Tendi (voiced by Noël Wells), a newly arrived medical trainee, is eager and geeky. Ensign Rutherford (Eugene Cordero), still getting used to his new cybernetic upgrades, is nearly as dedicated as Tendi, and her likeliest love interest; he's a superstar in Engineering. Ensign Boimler (Jack Quaid) is a teacher's pet, focused on moving up in the ranks, possibly to the exclusion of any wonderment he might be experiencing at, you know, his job as a space traveler. Constantly needling him is Ensign Mariner (Tawny Newsome, remaining in the cosmos after her stint in Netflix's Space Force this spring), a former officer who's been demoted; rather than being concerned about her career trajectory, she loves her low-pressure lower-decks life. The series was created by Mike McMahan (Rick and Morty, Solar Opposites); other writers include Katie Krentz (Over the Garden Wall) and Chris Kula (Close Enough), with Alex Kurtzman and Rod "Gene's Son" Roddenberry, both keepers of the Trek flame across all the new series, among its executive producers.

Boimler and Mariner, at the center of the story, are a classic sitcom odd couple: intense overachiever and cheerful slacker. Even if we didn't know anything about the Star Trek franchise (and, nearly 60 years in, even Trek abstainers are somewhat conversant with the basics just by osmosis), the pilot makes it clear early on which ensign's attitude Starfleet endorses: Captain Freeman (Dawnn Lewis) tells Boimler she knows Mariner is a goldbricker, ordering him to spy on her and report back on any incidents of Mariner failing to follow protocol. Almost immediately, while visiting an alien planet, Mariner departs from her orders. Boimler thinks she's selling residents Starfleet tech to enrich herself, but it turns out to be farm equipment they would get from Starfleet if they went through channels; it would just take much longer. Mariner's ways may not be Boimler's, but he decides not to snitch on her (after which we find out why Captain Freeman is taking such a personal interest in Mariner's performance).

Portraying Starfleet as a behemoth bogged down by pointless bureaucracies is actually kind of a subversive take on what is generally portrayed as an intergalactic United Nations for a post-politics utopia. The first season of Picard challenged Starfleet orthodoxy too, specifically on a late-in-the-Next-Generation-film-franchise plot point involving androids whose superintelligence may have threatened humanity. But after two seasons of DC Universe's animated series Harley Quinn, a spectacularly violent, sexually adventurous, routinely profane, gleefully meta thrill ride (a side character appears in a "Release The Snyder Cut" t-shirt, for example), Lower Decks can't help seeming tame by comparison. (The writers clearly know their Trek lore, and there's a joke in the second episode about a viral video of a "Vice-Admiral Gibson" falling off a stage that I bet was "Capt. Morgan Bateson," the character Kelsey Grammer played in a 1992 episode of TNG, before a higher-up nixed it for being too mean.) It's not entirely fair to compare the two shows (Lower Decks seems aimed at older kids and tweens, whereas Harley Quinn is absolutely not for children), but even in spirit, Lower Decks feels, at least in these early episodes, a little too reverential toward the franchise.

That said, even over the first four episodes that were released to critics, the show did give the sense of unfolding into greater playfulness, and I was reminded that the first few episodes of another fine animated sci-fi show, Futurama, were also a little tentative before the show's characters started to settle into what would be their ultimate forms. The more I saw of Lower Decks, the more I wanted to see. Congratulations, CBS All Access; you've managed to filch another couple of months' worth of subscription fees out of me.

Star Trek: Lower Decks drops on CBS All Access on August 6th.


Writer, editor, and snack enthusiast Tara Ariano is the co-founder of Television Without Pity and Fametracker (RIP). She co-hosts the podcasts Extra Hot Great and Again With This (a compulsively detailed episode-by-episode breakdown of Beverly Hills, 90210), and has contributed to New York, the New York Times magazine, Vulture, Decider, Salon, and Slate, among many others. She lives in Austin.


The Era Of Autonomous Army Bots is Here – Forbes

When the average person thinks about AI and robots, what often comes to mind are post-apocalyptic visions of scary, superintelligent machines taking over the world, or even the universe. The Terminator movie series is a good reflection of this fear of AI, with its intelligent machines powered by Skynet, described as an artificial neural network-based conscious group mind and artificial general superintelligence. However, the AI of today looks nothing like this worrisome science fiction representation. Rather, AI is performing many tedious and manual tasks and providing value in everything from recognition and conversation systems to predictive analytics, pattern matching, and autonomous systems.

In that context, the fact that governments and military organizations are investing heavily in AI shouldn't be so much concerning as intriguing. The ways that machine learning and AI are being implemented are both mundane, in the sense of enabling humans to do their existing tasks better, and very interesting, in how machines are being made more intelligent to give humans better understanding and control of the environment around them.

John Fossaceca, APM for AI & ML for Maneuver & Mobility at the US Army Research Laboratory (ARL), who spoke at a recent AI in Government event, shares some insights as to how AI is being applied on a day-to-day basis, as well as where things are heading with autonomous bots and other machines in the US Army.

How is the Army currently leveraging AI?

John Fossaceca: The Army is leveraging AI in many ways, for example in predictive maintenance. AI techniques can help predict when vehicle parts need to be replaced or serviced before the vehicle breaks down. If this can be done well, it will save money and increase operational safety. This is being implemented with the Bradley Fighting Vehicle as well as other platforms.

The Army has a vast amount of data, and many AI and machine learning (AI/ML) techniques require large amounts of data. Programs that leverage data include Project Maven, which consumes data from drones and helps automate some of the work that analysts do. Project Maven leverages standard AI tools such as Google's TensorFlow as well as customized tools built internally.

The Army has active, ongoing research using AI to enhance autonomous vehicles, electronic warfare and signals intelligence, sensor fusion, and augmented reality. AI will improve situational awareness on the battlefield and improve decision-making through programs such as the Joint All-Domain Command & Control (JADC2) initiative.

Another area where AI plays a role for the Army is in talent management. The Army's AI Task Force (AITF), part of Army Futures Command, has an initiative to use AI to identify the competencies and attributes that lead to successful performance, which can then be used to find potential candidates for positions in the Army.

At the Combat Capabilities Development Command's Army Research Laboratory (ARL), artificial intelligence is considered a primary research area. ARL is the Army's corporate research laboratory and has many initiatives that leverage artificial intelligence. For example, the essential research program entitled Artificial Intelligence for Maneuver and Mobility (AIMM) is leading the way for how the Army will imbue the Next Generation Combat Vehicles (NGCV) with the ability to operate off-road without needing to be supervised by a soldier with a remote-control radio. These next-generation intelligent vehicles will be able to reason about specific situations and environmental conditions and make decisions about the best action to take, while keeping soldier teammates informed and improving overall situational awareness. There are many other essential research programs (ERPs) at ARL that also leverage AI methods, and all of these ERPs are producing innovations that will greatly benefit Army operations in the future.

In the near term, the Army is using AI to leverage inputs from multiple sensors in order to build an accurate picture of battlefield threats and speed up the targeting and decision-making process in Project Convergence, an initiative led by Army Futures Command.

What are some challenges in the Army when it comes to AI/ML adoption?

John Fossaceca: Commercial AI relies on vast computing resources and large amounts of data, including cloud-computing reachback when necessary. Battlefield AI, on the other hand, must operate within the constraints of edge devices: processors must be relatively light and small, with potentially constrained communication bandwidth under adversarial conditions.

In Army applications, there is often either not enough training data, or the data is corrupted or noisy. Operational environments tend to be dynamically changing and sometimes unstructured, with damaged roads, buildings, and infrastructure. There is heterogeneous data from many sources, and sometimes this data is deceptive or influenced by adversaries.

Today's AI techniques tend to be brittle and can break down even under ideal operating conditions. These methods are very limited in their ability to reason, especially in real time. Some deployed systems tout AI capabilities but are limited to hard-coded rules; they lack the ability to reason and infer from the inputs of sensors and other systems, and they do not provide enhanced situational assessment.

Many AI approaches depend on supervised learning (e.g., deep learning), and these techniques create massive models, often with 10 to 100 million parameters, learned in a batch-based mode on powerful computing infrastructure. The Army needs alternatives to these offline and time-consuming training methods.

Ultimately, current systems are not able to operate autonomously and require constant human attention, intervention, and manual control. Back in 2018 we were looking at learning from feedback, where a human observer would simply provide a positive or negative signal to the intelligent agent, and we demonstrated that we could reduce the learning time by orders of magnitude. We are extending this research to Learning from Demonstration, which I'll discuss soon.
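To make the idea concrete, here is a minimal sketch of the kind of human-feedback learning Fossaceca describes: a toy Q-learning agent whose sparse environment reward is augmented by a +1/-1 signal from an observer. Everything here (the one-dimensional grid world, the reward values, the simulated observer) is an illustrative assumption, not ARL's actual system.

```python
# Toy sketch of learning from human feedback (all names illustrative):
# a Q-learning agent on a 1-D corridor whose sparse goal reward is
# augmented by a +1/-1 signal from a (simulated) human observer.
import random
from collections import defaultdict

ACTIONS = (-1, +1)  # step left, step right
GOAL = 9

def human_signal(state, action):
    """Stand-in for a human observer: approve moves toward the goal."""
    return 1.0 if (action == +1) == (state < GOAL) else -1.0

def train(episodes=200, alpha=0.5, gamma=0.9, epsilon=0.1, use_human=True):
    Q = defaultdict(float)
    for _ in range(episodes):
        state = 0
        while state != GOAL:
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt = min(max(state + action, 0), GOAL)
            reward = 1.0 if nxt == GOAL else 0.0   # sparse environment reward
            if use_human:
                reward += human_signal(state, action)  # dense human feedback
            target = reward + gamma * max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = nxt
    return Q

Q = train()
# Greedy policy after training: should step right (+1) everywhere.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```

With the human signal enabled, the dense reward shortens exploration dramatically compared with relying on the sparse goal reward alone, which is the effect Fossaceca describes.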

As our research progressed, we realized that we need a way to interact and communicate with intelligent agents in a natural way. Beyond just natural dialog and grounding, a lot of issues crop up due to a lack of shared understanding of the world and commonsense reasoning. These shortcomings are being addressed through several research programs in AIMM's second line of effort, Context-Aware Decision Making.

How is the Army working towards getting their data in a usable state for AI/ML?

John Fossaceca: There are many data collection and labelling initiatives being worked on by the Army and across the DoD to prepare data for use by AI algorithms. For example, Project Maven has a lot of video from military drones. Sometimes labelling is done through crowdsourcing techniques, depending on the level of classification. Other initiatives include ARL's work to collect data internally from various locations and, with research partners, to curate and label data from a variety of terrains. ARL has a Robotics Research Collaboration Campus (R2C2) in Maryland where data is collected and autonomous experiments are conducted.

In addition to Project Maven, there are several efforts across the DoD for intelligence analysis using state-of-the-art tools. Many of these projects focus on detecting specific objects in images using deep learning methods, and each of these programs requires that large amounts of data be cleaned, curated, and labelled in order to be useful. These efforts also require an AI pipeline that consists of storage, algorithmic toolkits, computing resources, and testing and deployment tools. Often, data format standards are developed to ensure consistency between experiments and tests and to provide users with a familiar environment. These data repositories need to be cataloged and accessible to users, and to carry useful descriptions of the data contained within them. There are efforts to standardize this access information across several databases to make it easier for the intelligence community to use.

How is the Army leveraging AI-enabled autonomous vehicles for Maneuver and Mobility?

John Fossaceca: In the Army's Robotic and Autonomous Systems (RAS) strategy, General Daniel B. Allyn, Vice Chief of Staff, states, "The integration of RAS will help future Army forces, operating as part of Joint teams, to defeat enemy organizations, control terrain, secure populations, and consolidate gains. RAS capabilities will also allow future Army forces to conduct operations consistent with the concept of multi-domain battle, projecting power outward from land into maritime, space, and cyberspace domains to preserve Joint Force freedom of movement and action."

According to the RAS strategy, "Effective integration of RAS improves U.S. forces' ability to maintain overmatch and renders an enemy unable to respond effectively. The Army must pursue RAS capabilities with urgency because adversaries are developing and employing a broad range of advanced RAS technologies as well as employing new tactics to disrupt U.S. military strengths and exploit perceived weaknesses."

In order to accomplish the vision laid out in the RAS strategy, autonomous vehicles will need to ensure freedom of maneuver while decreasing risks to soldiers. This will require autonomous collaboration between humans and machines. Vehicles will be teammates for soldiers on the battlefield rather than just another piece of equipment. These integrated human-machine teams will allow forces to learn, adapt, fight, and win under uncertain situations.

AI is one of the key enablers for these intelligent autonomous systems. These systems will be able to deal with near-peer or peer adversaries who can operate at fast speeds, by allowing our forces to make decisions more quickly. The Army will also have to contend with the fact that our adversaries will be using their own autonomous systems. With more autonomy, robotic autonomous systems will be less dependent on communication links that are often unreliable in battlefield conditions due to jamming or capacity issues.

In terms of priorities, the RAS strategy calls for near-term improvements in situational awareness and for reducing the physical load on soldiers. In the midterm, automated convoy operations will not only help with sustainment but will protect soldiers. In the longer term, autonomous vehicles will execute advanced tactical maneuvers and increase capabilities within brigade combat teams.

What are some unique environmental challenges that impact the research that goes into autonomous vehicles and equipment?

John Fossaceca: In addition to the complex terrain and unstructured environments where the Army operates, the environment often contains adversaries, and these adversaries may be unpredictable. The Army has research that focuses specifically on so-called tactical behaviors: What specific formations should the autonomous vehicles utilize? How can an autonomous vehicle achieve a position of advantage over an adversary? How can an autonomous vehicle operate without being detected by an enemy force? The Army has done research in autonomous subterranean exploration as well, and in order to operationalize autonomy, next-generation combat vehicles will need to be able to reason about all potential routes, even water crossings.

How does ARL's research in autonomous vehicles differ from what industry is doing?

John Fossaceca: In Army contexts, large amounts of militarily relevant, labeled data are often not available, so a very important research area ARL is pursuing is AI algorithms that can learn from far fewer examples than traditional supervised approaches require. In concert with these, the Army has developed unsupervised approaches for tasks like scene segmentation, which can use self-labeling methods. However, such methods still require a lot of computing power, and running them in real time on an autonomous vehicle is challenging. To help address this problem, the Army has several computer scientists who specialize in computer architecture and algorithms, working to make state-of-the-art methods fit within the processor size and power constraints of Army autonomous vehicles.

The Army has unique technical challenges that the commercial sector is not addressing. Commercial autonomous vehicles generally do not operate in an environment that is contested in all domains. Certainly there are people, obstacles, and sometimes unexpected events; however, military operations occur in very uncertain environments and in complex, dangerous terrain which may be filled with adversaries and other dangers.

The first instantiation of this will be tele-operated, and as the Army operates these vehicles, we will learn how to employ robots on the battlefield. This will inform the autonomous behaviors that we need to develop. Ultimately, Next Generation Combat Vehicles will have the capability to learn in the field, adapt to the current situation, and reason and act effectively in support of the Multi-Domain Operations mission.

What is a unique insight into your AI challenges that others might be interested to learn?

John Fossaceca: Recent Army research has found success with deep reinforcement learning techniques that leverage human demonstration and feedback. Newer methods have been successful in greatly reducing the time it takes to train a system on new tasks. Other research that involves learning from human demonstration is showing early promise and utility for battlefield retraining, with the potential for real-time learning using limited examples. These techniques appear to allow for transfer learning, that is, learning under one set of conditions and operating under a new set of conditions without the need to train from scratch.

How does the Army envision the warfighter and battlefield of the future?

John Fossaceca: The Army's vision of the future battlefield has an unmanned formation several kilometers in advance of a manned formation. One goal is to have the autonomous systems do area and route reconnaissance to find or make contact with the enemy while providing standoff for soldiers.

How important is AI to the Army's vision of the future?

John Fossaceca: AI will be a critical enabler for future success in Multi-Domain Operations. According to the former Secretary of the Army and current Secretary of Defense, Mark Esper, "if we can master AI, then I think it will just really position us better to make sure we protect the American people." Winning on the future battlefield requires us to act faster than our enemies while placing our troops and resources at lower risk. "Whoever gets there first will maintain a decisive edge on the battlefield for years to come."

The current Secretary of the Army, Ryan McCarthy, has stated that cloud-based technologies and capabilities are key to maximizing AI. Mr. McCarthy wants to see cloud infrastructure put into place as a driver for AI progress; according to him, this will be critical for decision-making on the battlefield.

What is the Army's perspective on ethics and responsible use of AI?

John Fossaceca: The Army, and the DoD as a whole, is concerned with AI ethics, and last October a draft of Recommendations on the Ethical Use of Artificial Intelligence was released. These rules will apply across the U.S. military, and the U.S. military will keep humans in control of all AI-enabled systems.

The Army's AI Task Force has an ethics officer who helps inform AI ethics policies. Per Secretary of the Army Ryan McCarthy: "A system can crunch the data very quickly and give you an answer, but it doesn't have context," he said. "Only a human being can bring the context to a decision."

What are you doing to get an AI ready workforce and war fighter? Are you providing training and education around AI?

John Fossaceca: ARL and the Army offer many opportunities for students to do internships, as well as SMART Scholarships that help students pay for their education in exchange for working for the Army for a period of time. ARL also hires new doctoral graduates as postdocs and brings them in to do cutting-edge research; eventually, some of these postdocs become employees. Because artificial intelligence is a key competency area, the Army is increasingly hiring scientists and engineers with this expertise.

What are you doing now to train soldiers to make them more comfortable working alongside autonomous systems and robots?

John Fossaceca: Since the types of autonomous systems we are talking about are still under development, we use simulation in our training environments to help soldiers get comfortable operating with autonomous systems. The Army is early in the process, but there are ongoing initiatives such as the Reconfigurable Virtual Collective Trainer (RVCT), covering both ground and air platforms, which provides the ability to rehearse missions with simulated data.

Many of the training efforts focus on realistic simulations of intelligent semi-autonomous and autonomous systems, providing soldiers an immersive training experience. Soldiers train against virtual opponents in this Synthetic Training Environment (STE). These virtual opponents are imbued with intelligent behaviors that have a certain unpredictability, to simulate adversaries, as well as a reasonable level of cognition, achieved by leveraging state-of-the-art artificial intelligence within realistic environments.

At the foundational research level, ARL has had soldiers interact with autonomous prototypes to learn how soldiers speak and what commands they tend to use. This has also helped some soldiers learn how autonomous systems behave. In fact, as soldiers train with autonomous systems, they tend to adapt their language over time to communicate with and control such systems more effectively.

What AI technologies are you most looking forward to in the coming years?

John Fossaceca: We are making advances in using artificial intelligence to reason about the environment and to recommend specific courses of action to soldier teammates. This represents a move beyond narrow AI, where autonomous agents can do very specific tasks well, toward agents that can adapt to new, never-before-seen situations. They will determine what actions are possible and the probability of success for each of these actions. This is not general AI, that is, AI that can reason at a level close to human beings. What we envision in the future is the ability for autonomous systems to do sophisticated reasoning about a given situation, make complex decisions, and anticipate what the outcomes might be to ensure mission success.


AI Could Overtake Humans in 5 Years, Says Elon Musk, Whose ‘Top Concern’ is Google-Owned DeepMind – International Business Times, Singapore Edition

Elon Musk tweeted that Tesla's self-driving cars would cost more after July

Artificial intelligence is the future of this world and a prime example of technological development. But tech billionaire Elon Musk warns the world about its dark side, saying there is a strong possibility that AI will overtake humans within the next five years.

The CEO of SpaceX and co-founder of the AI research lab OpenAI has been sounding alarm bells about the rising threat of advanced AI for the past few years.

As reported by The New York Times, Musk said: "My assessment about why AI is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false."

The billionaire technology entrepreneur claimed that the experience he has gathered while working with different types of AI at Tesla has given him the confidence to say that the world is heading toward a situation where AI is "vastly smarter than humans." Musk said that the time frame is probably less than five years, but that doesn't mean "everything goes to hell in five years. It just means that things get unstable or weird."

'Top Concern'

Almost four years ago, the Tesla CEO sounded an alarm saying that humans could become the equivalent of "house cats" to their AI rulers. His point of view about AI appears not to have changed at all: he recently said that the highly secretive London research lab DeepMind, run by Demis Hassabis, is his "top concern" when it comes to AI technology. DeepMind was acquired by Google in 2014 for a reported $600 million.

"Just the nature of the AI that they're building is one that crushes all humans at all games," Musk said adding that "I mean, it's basically the plotline in 'WarGames'"a 1983 movie, in which a teenager unintentionally connects to an AI-controlled government supercomputer that used to run war simulations. Going with the background of the movie, after starting a game called "Global Thermonuclear War," the teen leads the computer to activate the country's nuclear arsenal in response to his simulated threat as the Soviet Union.

In 2017, at the Beneficial AI conference, Musk and Hassabis sat on a panel, "Superintelligence: Science or Fiction?", along with Oxford professor and Superintelligence author Nick Bostrom, Skype cofounder Jaan Tallinn, Google engineering director Ray Kurzweil, and many other experts from the tech industry. At the start of the panel, everyone agreed that some form of superintelligence is possible. When Musk was asked whether it will actually happen, he said "yes."

The 49-year-old South African tech billionaire is currently busy bringing out new advancements via Neuralink, a startup founded in 2016 to develop an "ultra-high bandwidth brain-machine interface." But his stand on AI remains the same.


Superintelligence – Wikipedia

Hypothetical immensely superhuman agent

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The chess program Fritz falls short of superintelligence, even though it is much better than humans at chess, because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to, either as a single being or as a new species, become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
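As a quick check on the arithmetic in the passage above, the two ratios can be computed directly; this minimal sketch just plugs in the figures Bostrom quotes:

```python
import math

neuron_hz = 200     # peak firing rate of a biological neuron, per Bostrom
cpu_hz = 2e9        # clock rate of a modern microprocessor (~2 GHz)
axon_mps = 120      # spike propagation speed along axons, m/s
light_mps = 3e8     # optical signalling between processing cores, m/s

# 2 GHz / 200 Hz = 1e7, i.e. the "full seven orders of magnitude"
print(math.log10(cpu_hz / neuron_hz))   # 7.0
# Light-speed signalling is ~2.5 million times faster than axonal spikes
print(light_mps / axon_mps)             # 2500000.0
```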

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
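Bostrom's selection figures follow from the expected maximum of N draws from a normal distribution. A Monte Carlo sketch reproduces them under one assumption of mine, not taken from his text: that the genetic component visible to selection has a standard deviation of about 7.5 IQ points, a value chosen because it matches the quoted numbers.

```python
# Expected IQ gain from keeping the best of n embryos, modeled as the
# expected maximum of n draws from a normal genetic component. The sigma
# of 7.5 IQ points is an illustrative assumption that reproduces the
# figures quoted above.
import random
import statistics

def expected_gain(n_embryos, sigma=7.5, trials=5000):
    return statistics.mean(
        max(random.gauss(0, sigma) for _ in range(n_embryos))
        for _ in range(trials)
    )

print(round(expected_gain(2), 1))     # ~4.2  (select 1 of 2)
print(round(expected_gain(1000), 1))  # ~24.3 (select 1 of 1000)
```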

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).[16]

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain-computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[18]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft Academic Search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals, among them humanity's "coherent extrapolated volition" (CEV) and "moral rightness" (MR).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR)... MR would also appear to have some disadvantages. It relies on the notion of "morally right," a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by "morally right." If the AI could grasp the meaning, it could search for actions that fit...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in ways that are morally impermissible.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity.[22] Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.[23]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[24] Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[25]

This presents the AI control problem: how to build an intelligent agent that will aid its creators while avoiding inadvertently building a superintelligence that will harm them. The danger of not getting control design right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.


Superintelligence: Paths, Dangers, Strategies by Nick Bostrom

Superintelligence asks the questions: what happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.


The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful--possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?


Artificial Intelligence – A Way To Superintelligence …


Industrialization and digitization have changed the way humans look at life. The world is moving at a technological pace wherein with every passing minute an invention is made, and a lot of these inventions are beyond human imagination. One such technology is Artificial Intelligence, or AI as it is popularly known. AI has brought extensive changes around the world. AI is a term for simulated intelligence in machines.[1]

According to the father of Artificial Intelligence, John McCarthy, it is "the science and engineering of making intelligent machines, especially intelligent computer programs".[2] In simple words, it is anything that can learn and perform functions on its own without any intervention by humans. AI is the ability of simulated machines to mimic human thought processes like problem solving and learning. These machines can also understand human language and speech and are skilled in strategic thinking.

Artificial Intelligence is a part of our daily lives. Siri, Alexa, Google Maps, Uber, Turnitin, and other machine learning applications are all products of AI. AI is touted as the future of mankind, and it has already started making its mark in a plethora of fields and industries, including healthcare, education, transportation, agriculture, and many more.

To embrace this new wave called AI, India has made a modest beginning this year by devoting a substantial amount of money to it. In his Budget Presentation on February 1, finance minister Arun Jaitley announced a national programme on AI to be spearheaded by Niti Aayog.[3] Approximately USD 480 million has been dedicated to artificial intelligence, machine learning, and IoT this year.[4] Many industries and institutions are taking a leap in the field of AI. Some of the most robust inventions are coming in the field of healthcare, which is utilizing AI in collecting, storing, normalizing, and tracing data. With the smartphone and health tracker revolutions, it has become possible for a user to analyze all relevant data or simply stay up to date about his or her health.

Currently, a low-cost, portable, home-based rehabilitation device has been produced that helps patients exercise flexion and extension of the wrist and fingers. The device has been tested on 20 stroke patients at the All India Institute of Medical Sciences (AIIMS), Delhi, and can also be combined with a brain stimulation device.

Moreover, institutions like the Indian Institute of Technology (IIT), Delhi have developed many AI-based innovations, ranging from an "intelligent" prosthetic limb and a non-hazardous, long-lasting "flow battery" to a new type of loom and a technology to convert agricultural waste into pulp that uses 40% less water and energy than usual. Further, its Centre for Biomedical Engineering has developed a new intelligent artificial leg for people who have lost a leg above the knee. These artificial devices are cheap and durable and use smart sensing technology in the shoes to adapt to the movement of the individual.

As they say, every coin has two sides. With AI's benefits come certain challenges that can be a major threat to mankind: for example, the security of the large amounts of data that AI would store. Another major issue is the privacy of the personal data that AI would gather.

No doubt artificial intelligence has unimaginable potential. The next few decades will likely mark a shift from machine intelligence to artificial superintelligence and set forth a new era in which a computer's cognitive ability is superior to a human's. Nevertheless, along with the new AI inventions, we as a country also need to invest in strong steps to fight the challenges that this necessary evil will bring along with it.

Footnotes

1. https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp

2. https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_overview.htm

3. Economic Times report, available here.

4. https://analyticsindiamag.com/where-artificial-intelligence-research-in-india-is-heading/



AI governance and the future of humanity with the Rockefeller Foundation's senior VP of innovation – The Sociable

We are the progenitors of artificial intelligence, and how we care for and nurture this paradigm-shifting technology will determine how it grows up alongside humanity.

There are many paths ahead for AI and society, and depending on which ones we follow, we may find ourselves on a road to peace and prosperity or one towards a dark dystopia, with several gray areas in-between.

"We need to now create a new institution that can continue being the gardener for AI because AI is going to leave home soon, and we hope it becomes a productive member of society" – Zia Khan

Zia Khan, Senior VP of Innovation at the Rockefeller Foundation, tells The Sociable that AI will be deeply integrated into the entire human experience, and that how we choose to govern it will determine our future alongside it.

While the Bretton Woods agreements gave birth to the rule-making institutions of the World Bank and International Monetary Fund, the Rockefeller Foundation is looking to develop a practical, Bretton Woods-inspired rule-making framework to govern AI.

In October, the Foundation brought together some of the brightest technologists, economists, philosophers, and thinkers, who would come away inspired to create a collection of ideas and calls to action in a single report based on their discussions: AI+1: Shaping Our Integrated Future.

"The conversation wasn't always easy," said Khan, "but at the core, it was a fantastic conversation, and the area we landed on was the need for governance for AI."

If left unchecked, AI could be governed by a select few elitists with their own agendas, or the AI itself could assume more autonomy on the way toward artificial superintelligence. So who governs AI, how they govern, and on whose authority they do so are all serious issues facing humanity's future with this game-changing technology.

"AI is a teenager who is about to leave home [...] The teenager is starting to express its personality now" – Zia Khan

I put the question to Khan: if AI could be personified as a child with humans as its parents, what stage of life would it be in right now? He indulged.

"If I were to guess, I would say AI is a teenager who is about to leave home," he said.

"When it was in the lab, the scientists were more or less providing for AI, feeding it and caring for it."

"The teenager is starting to express its personality now; it's a little rebellious. We saw some applications that weren't great. Some issues are coming, like facial recognition, that we know we need to deal with, but it's about to leave home, in my view."

"I think it's about to have this explosive proliferation into society," the Rockefeller senior VP added.

AI may be likened to a teenager right now, but unlike humans, its growth will be exponential and at lightning speed.

"What's really interesting about technology is that we learn more about humans as we understand technology" – Zia Khan

Continuing with the parenting metaphor, do we want to care for our artificial offspring like carpenters, defining all the rules early on and following the plan, or do we want to be like gardeners, allowing the algorithms to flourish within a set framework while trying to nurture them and maintain boundaries?

"My view of it is that we need to now create a new institution that can continue being the gardener for AI, because AI is going to leave home soon, and we hope it becomes a productive member of society, but there's a lot of ways people can go when they leave home," said Khan.

For the Rockefeller Foundation senior VP, a new institution should be created to govern AI, but what would that look like?

Should the future of AI governance be held to a democratic vote of the people, or should it be placed under the stewardship of philanthropists, technologists, or other organizations with deep pockets and agendas?

"We need some political mechanism to decide what are the goals that we want as a society when AI is incorporated" – Zia Khan

While Khan admits that he doesn't have all the answers on who should be behind the institutions that govern AI, he is certain that they need to exist.

Going back to the teenager metaphor, he says, "When someone leaves home, there's lots of things they can do. They can go to university. They can nod off. They can be an entrepreneur [...] but we still expect them to follow some basic laws around goals that we see as a society."

"We need some political mechanism to decide what are the goals that we want as a society when AI is incorporated in that, and then, how do we ensure that the technology meets those goals?"

And that is one of the biggest debates in artificial intelligence circles right now, one highlighted in the AI+1 report: rules-based governance or outcome-based?

Focus too much on the rules and you can get unexpected outcomes. A few years back, Microsoft had to kill its AI chatbot Tay after it turned into a foul-mouthed racist in less than 24 hours, and more recently, OpenAI created a virtual game of hide and seek in which the AI unexpectedly broke the program's simulated laws of physics to win.

By focusing on outcomes instead, the rules can bend and flex within a specific framework, governed and guided by the new institution the Rockefeller Foundation senior VP sees a need for.

"I think that AI is overestimated in some cases and underestimated in other cases" – Zia Khan

At present, there are a lot of misconceptions about what AI can and cannot do, but as Khan points out, the more we study AI, the more we find out about ourselves.

"What's really interesting about technology is that we learn more about humans as we understand technology," he said.

"For example, you still don't have a robot that can really open a door. Someone said once that when the killer robots come, all you have to do is close the door. You see all these crazy videos of robots doing flips and gymnastics. It's a pretty simple problem, relatively speaking, but friction?! They can't handle it."

He added that it's in studying robots that we learned our sense of touch is about a thousand times more sensitive than we thought before, and similarly with our hearing and our smell.

But when it comes to decision making, right now AI is really good at the intuitive tasks that we don't think much about, like recognizing languages, images, and counting things.

Human consciousness, on the other hand, keeps our minds occupied on many thoughts while juggling a plethora of emotions simultaneously at any given moment.

"As we understand AI better, we're actually understanding human consciousness" – Zia Khan

That's something, according to Khan, that AI can't do right now, and being able to manage multiple thought processes is like an executive function that only people possess at present.

"As we understand AI better, we're actually understanding human consciousness, and we're understanding the role of emotion in helping with our cognition," he said.

"These are the interesting frontiers; we're learning about the human mind and human body as AI progresses."

The more we understand machines, the more we understand ourselves, and many companies working with AI are applying what they've learned and developed to directly benefit society in truly unique ways.

And there are some groups that have figured out that their AI solutions for one industry could prove beneficial in another.

For example, the Rockefeller Foundation works with a group called DataKind, "a fantastic organization that has an army of volunteer data scientists who want to apply their skills to social problems," says Khan.

"They identify some social problems, and they get volunteer teams to help develop tools and applications."

The Rockefeller senior VP cited DataKind's work in Haiti as an example, where the team was able to optimize routes for waste disposal while maximizing pickups using AI, which in turn could be applied to community health workers in Africa, who can better optimize their routes between communities.
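To make the routing idea concrete, here is a minimal sketch of the simplest form such optimization can take: a nearest-neighbour heuristic over made-up pickup coordinates. The data and function names are illustrative assumptions, not DataKind's actual system, which the article does not detail.

```python
import math

# Hypothetical pickup locations as (x, y) coordinates -- illustrative only.
pickups = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 6)]

def nearest_neighbour_route(points, start=0):
    """Greedy route: always visit the closest unvisited point next."""
    unvisited = set(range(len(points)))
    route = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

print(nearest_neighbour_route(pickups))  # [0, 1, 4, 3, 2] for the data above
```

The transferability Khan describes falls out naturally here: nothing in the heuristic cares whether the points are waste-collection stops in Haiti or villages on a health worker's rounds.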

"Anytime we can find something where one solution can be applied to another problem, it just really increases the efficiency of how we can solve all the challenges that we're trying to solve," said Khan.

"All of these AI systems have a problem around bias, and that's something we're really starting to worry about" – Zia Khan

While algorithms can be redistributed to serve multiple purposes, problems arise when they pass along inherent biases in the code.

"All of these AI systems have a problem around bias," says Khan, adding, "that's something we're really starting to worry about. In many ways, these tools can just reproduce and amplify the human biases that we have."

The Rockefeller Foundation recently launched the $4 million Lacuna Fund, aimed specifically at correcting the gaps and biases in data for AI solutions in order to mobilize labeled datasets that solve urgent problems in low- and middle-income contexts globally.

"The Lacuna Fund is meant to identify where there are opportunities where we can fund labeled datasets that round out the training data available to algorithms, so that those algorithms can train themselves and remove the bias," said Khan.
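To see the kind of gap the fund targets, consider class imbalance, one standard way training data ends up biased: when one group dominates a dataset, models trained on it skew toward that group. Below is a minimal sketch, with made-up label counts, of computing inverse-frequency weights, the sort of correction that richer labeled datasets reduce the need for.

```python
from collections import Counter

# Made-up labels for a dataset that over-represents one class.
labels = ["urban"] * 900 + ["rural"] * 100

counts = Counter(labels)
n_total, n_classes = len(labels), len(counts)

# Inverse-frequency weights: rare classes count for more during training.
weights = {cls: n_total / (n_classes * cnt) for cls, cnt in counts.items()}
print(weights)  # {'urban': 0.56, 'rural': 5.0} (approximately)
```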

"COVID has laid bare a lot of the really deep and important problems" – Zia Khan

As AI permeates every industry and facet of society, bias will be a main issue to tackle, but moving beyond biases, this technology has the power to help make sure every human on earth is fed, clothed, and sheltered, depending on how it's used and governed.

The arrival of the coronavirus pandemic has accelerated the discussion on how AI can best serve humanity and society at large.

For Khan, "Something like the COVID crisis gives us the opportunity to rethink big paradigm shifts."

"In some way, COVID has laid bare a lot of the really deep and important problems, and I think it has heightened the urgency to think about new solutions," he said.

"The current urgency of this crisis is demanding new thinking, and I think there are opportunities to deploy and apply AI to help in those cases."

"That's going to help us learn about what AI can do, and hopefully we'll keep an eye on the risks and manage those risks," he added.

"The disruption that's been created by COVID on so many different fronts gives us the opportunity to rethink really major paradigms" – Zia Khan

AI will be a technology that cuts across society, and the Rockefeller senior VP believes that AI governance will be directly linked to economics.

"I think there's a linkage between how we think about regulating AI and a lot of the thinking that's going on with people in economics," he said.

"I think people are realizing that we need a new form of economics. The neo-liberal economic paradigm of maximizing shareholder value, not accounting for the cost and nature, etc., just isn't working."

"I think we have to do some hard thinking around what is the value of data, how are we accounting for the value of data, and I think that will lead to how we think about regulating and managing AI, but also the broader economic rules, and market rules, and the role of government. I think these will be more tightly coupled going forward," he added.

"How we think about managing AI will be coupled with how we think about economic models" – Zia Khan

For Khan, "The disruption that's been created by COVID on so many different fronts gives us the opportunity to rethink really major paradigms," and "how we think about managing AI will be coupled with how we think about economic models."

The AI teenager is about to leave home. Will it go off and learn to do what is best for society, or will its own experiences shape it into a rebellious force of destruction?

The way forward, according to the Rockefeller Foundation's senior VP of innovation, is to create a framework for governance that guides AI towards a prosperous future for humanity.


Excerpt from:

AI governance and the future of humanity with the Rockefeller Foundations senior VP of innovation - The Sociable

neXt Trailer: It’s Not Paranoia If The Threat Is Real – Bleeding Cool News

Viewers are hoping that good things come to those who wait, with FOX releasing the official trailer and premiere date for its upcoming sci-fi/tech thriller neXt. From creator and executive producer Manny Coto (24: Legacy) and executive producers and directors John Requa and Glenn Ficarra (This Is Us), neXt is a fact-based thriller starring John Slattery (Mad Men) that looks at what would happen if a deadly, rogue Artificial Intelligence found its way into every aspect of our everyday lives. Have we become so bonded with our technology that we're losing our humanity?

Silicon Valley pioneer PAUL LEBLANC (Emmy Award nominee John Slattery, "Mad Men," "Veep") built a fortune and legacy on the world-changing innovations he dreamed up, while ignoring and alienating the people around him, including his own daughter, ABBY (Elizabeth Cappucino, "Jessica Jones," "Deception"), and his short-sighted younger brother, TED (Jason Butler Harner, "Ozark," "Ray Donovan"), who now runs Paul's company. After discovering that one of his own creations, a powerful artificial intelligence called neXt, might spell doom for humankind, Paul tried to shutter the project, only to be kicked out of the company by his own brother, leaving him with nothing but mounting dread about the fate of the world.

When a series of unsettling tech mishaps points to a potential worldwide crisis, LeBlanc joins forces with Special Agent SHEA SALAZAR (Fernanda Andrade, "The First," "Here and Now"). Having escaped crime, poverty and a deadly criminal father to remake herself as a force for good, Salazar's strict moral code and sense of duty have earned her the respect of her team, a talented but contentious group held together by her faith in their ability to defy expectations and transcend their differences, including GINA (Eve Harlow, "Agents of S.H.I.E.L.D.," "Heroes Reborn"), a high-strung cybercrime agent; BEN (Aaron Moten, "Disjointed," "Mozart in the Jungle"), a straight-laced, buttoned-up hard worker, who is boring to the point of being interesting; and CM (Michael Mosley, "Ozark," "Seven Seconds"), an ex-con hacker with a genius IQ. But the demands of Shea's challenging job have taken their toll on her home life, where Salazar's young son, OWEN (Evan Whitten, "The Resident," "Mr. Robot"), has been raised primarily by his father, TY (Gerardo Celasco, "How to Get Away with Murder," "The Haves and the Have Nots"), a recovering alcoholic.

Now, LeBlanc and Salazar are the only ones standing in the way of a potential global catastrophe, fighting an emergent superintelligence that, instead of launching missiles, will deploy the immense knowledge it has gleaned from the data all around us to recruit allies, turn people against each other and eliminate obstacles to its own survival and growth.

neXt stars John Slattery (Paul LeBlanc), Fernanda Andrade (Shea Salazar), Michael Mosley (CM), Jason Butler Harner (Ted LeBlanc), Eve Harlow (Gina), Aaron Moten (Ben), Gerardo Celasco (Ty Salazar), Elizabeth Cappucino (Abby), and Evan Whitten (Owen Salazar). Written by Manny Coto, the series' pilot episode was directed by John Requa and Glenn Ficarra, and executive produced by Coto, Requa, Ficarra, and Charlie Gogolak. From 20th Century Fox Television/Zaftig Films and FOX Entertainment.

Serving as Television Editor since 2018, Ray began five years earlier as a contributing writer/photographer before being brought on board as staff in 2017.


Read the original:

neXt Trailer: It's Not Paranoia If The Threat Is Real - Bleeding Cool News

The Famous AI Turing Test Put In Reverse And Upside-Down, Plus Implications For Self-Driving Cars – Forbes

AI and the Turing Test, turned round and round.

How will we know when the world has arrived at AI?

To clarify, there are lots of claims these days about computers that embody AI, implying that the machine is the equivalent of human intelligence, but you need to be wary of those rather brash and outright disingenuous assertions.

The goal of those who develop AI is to one day have a computer-based system that can exhibit human intelligence, doing so in the widest and deepest of ways that human intelligence exists and showcases itself.

No such AI has yet been devised.

The confusion over this matter has gotten so out-of-hand that the field of AI has been forced into coming up with a new moniker to express the outsized revered goal of AI, proclaiming now that the goal is to arrive at Artificial General Intelligence (AGI).

This is being done in hopes of emphasizing to laymen and the public-at-large that the vaunted and desired AI would include common-sense reasoning and a slew of other intelligence-like capacities that humans have (for details about the notion of Strong AI versus Weak AI, along with Narrow AI too, see my explanation at this link here).

Since there is quite some muddling going on about what constitutes AI and what does not, you might wonder how we will ultimately be able to ascertain whether AI has been unequivocally attained.

We rightfully should insist on having something more than a mere provocateur proclamation and we ought to remain skeptical about anyone that holds forth an AI system that they declare is the real deal.

Looks alone would be insufficient to attest to the arrival.

There are plenty of parlor stunts in the AI bag-of-tricks that can readily fool many into believing that they are witnessing an AI of amazing human-like qualities (see my coverage of such trickery at this link here).

No, just taking someone's word for AI having been accomplished or simply kicking the tires of the AI to feebly gauge its merits is insufficient and inarguably will not do.

There must be a better way.

Those within the AI field have tended to consider a type of test known as the Turing Test to be the gold standard for seeking to certify AI as being the venerated AI or semantically the AGI.

Named after its author, Alan Turing, the well-known mathematician and early pioneer of computer science, the Turing Test was devised in 1950 and remains pertinent today (here's a link to the original paper).

Parsimoniously, the Turing Test is relatively easy to describe and indubitably straightforward to envision (for my deeper analysis on this, see the link here).

Here's a quick rundown about the nature of the Turing Test.

Imagine that we had a human hidden behind a curtain, and a computer hidden behind a second curtain, such that you could not by sight alone discern what or who is residing behind the two curtains.

The human and the computer are considered contestants in a contest that will be used to try and figure out whether AI has been reached.

Some prefer to call them subjects rather than contestants, due to the notion that this is perhaps more of an experiment than it is a game show, but the point is that they are participants in a form of challenge or contest involving wits and intelligence.

No arm wrestling is involved, nor any other physical acts.

The testing process is entirely about intellectual acumen.

A moderator serves as an interrogator (also referred to as a judge because of the designated deciding role in this matter) and proceeds to ask questions of the two participants that are hidden behind the curtains.

Based on the answers provided to the questions, the moderator will attempt to indicate which curtain hides the human and which curtain hides the computer. This is a crucial judging aspect. Simply stated, if the moderator is unable to distinguish between the two contestants as to which is the human and which is the computer, presumably the computer has sufficiently proven that it is the equivalent of human intelligence.

Turing originally coined this the imitation game since it involves the AI trying to imitate the intelligence of humans. Note that the AI does not necessarily have to be crafted in the same manner as humans, and thus there is no requirement that the AI has a brain or uses neurons and such. Thus, those devising AI are welcome to use Legos and duct tape if that will do the job to achieve the equivalence of human intelligence.

To successfully pass the Turing Test, the computer embodying AI will have had to answer the posed questions with the same semblance of intelligence as a human. An unsuccessful passing of the Turing Test would occur if the moderator was able to announce which curtain housed the computer, thus implying that there was some kind of telltale clue that gave away the AI.
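As a toy illustration of that structure (and nothing more), here is a minimal sketch in which a judge questions two hidden contestants and then guesses which curtain hides the AI. The respondent and judge functions are placeholder assumptions, not a serious implementation of the test.

```python
import random

# Placeholder contestants -- stand-ins for the hidden person and machine.
def human_answer(question):
    return "You soak the beans overnight, then cook them slowly with spices."

def ai_answer(question):
    return "Step 1: acquire beans. Step 2: heat tortilla. Step 3: assemble."

def turing_test(questions, judge):
    """Shuffle the contestants behind two 'curtains' and let the judge guess."""
    curtains = [("human", human_answer), ("ai", ai_answer)]
    random.shuffle(curtains)
    transcripts = [[answer(q) for q in questions] for _, answer in curtains]
    guess = judge(transcripts)  # index the judge believes hides the AI
    actual = [label for label, _ in curtains].index("ai")
    return guess == actual  # True: AI detected; False: the AI 'passed'

# A naive judge that flags the more mechanical-sounding transcript as the AI.
def naive_judge(transcripts):
    return max(range(len(transcripts)),
               key=lambda i: transcripts[i][0].count("Step"))

print(turing_test(["How do you make a bean burrito?"], naive_judge))
```

The same harness also accommodates the Upside-Down variant discussed later: swap the judge function for another model rather than a human-written heuristic.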

Overall, this seems to be a rather helpful and effective way to ferret out AI that is the aspirational AGI versus AI that is something less so.

Of course, like most things in life, there are some potential gotchas and twists to this matter.

Imagine we have set up a stage with two curtains and a podium for the moderator. The contestants are completely hidden from view.

The moderator steps up to the podium and asks one of the contestants how to make a bean burrito, and then asks the other contestant how to make a bologna sandwich. Let's assume that the answers are apt and properly describe the effort involved in making a bean burrito and in making a bologna sandwich, respectively.

The moderator decides to stop asking any further questions.

"Voila," the moderator announces, "the AI is indistinguishable from human intelligence and therefore this AI is declared forthwith as having reached the pinnacle of AI, the long sought after AGI."

Should we accept this decree?

I don't think so.

This highlights an important element of the Turing Test, namely that the moderator needs to ask a sufficient range and depth of questions that will help root out the embodiment of intelligence. When the questions are shallow or insufficient, any conclusion reached is spurious at best.

Please know too that there is not a specified set of questions that have been vetted and agreed upon as the right ones to be asked during a Turing Test. Sure, some researchers have tried to propose the types of questions that ought to be asked, but this is an ongoing debate and to some extent illuminates that we are still not even quite sure of what intelligence per se consists of (it is hard to identify metrics and measures for that which is relatively ill-defined and ontologically squishy).

Another issue exists about the contestants and their behavior.

For example, suppose the moderator asks each of the contestants whether they are human.

The human can presumably answer yes, doing so honestly. The AI could say that it is not a human, opting to be honest, but then this decidedly ruins the test and seemingly undermines the spirit of the Turing Test.

Perhaps the AI should lie and say that it is the human. There are ethicists though that would decry such a response and argue that we do not want AI to be a liar, therefore no AI should ever be allowed to lie.

Of course, the human might lie, and deny that they are the human in this contest. If we are seeking to make AI that is the equivalent of human intelligence, and if humans lie, which we all know humans certainly do from time to time, shouldn't the AI also be allowed to lie?

Anyway, the point is that the contestants can either strive to aid the Turing Test or can try to undermine or distort it, which some say is fine, and that it is up to the moderator to figure out what to do.

All's fair in love and war, as they say.

How tricky do we want the moderator to be?

Suppose the moderator asks each of the contestants to calculate the answer to a complex mathematical equation. The AI can speedily arrive at a precise answer of 8.27689459, while the human struggles to do the math by hand and comes up with an incorrect answer of 9.

Aha, the moderator has fooled the AI into revealing itself, and likewise the human into revealing that they are a human, doing so by asking a question that the computer-based AI readily could answer and that a human would have a difficult time answering.

Believe it or not, for this very reason, AI researchers have proposed the introduction of what some describe as Artificial Stupidity (for detailed facets of this topic, see my coverage here). The idea is that the AI will purposely attempt to be stupid by sharing answers as though they were prepared by a human. In this instance, the AI might report that the answer is 8, thus the response is a lot like the one by the human.
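A minimal sketch of that "answer like a human" idea, deliberately coarsening a precise computation before reporting it; the rounding-and-slipping scheme here is an assumption for illustration, not a published method.

```python
import random

def precise_answer(x):
    """What the machine actually computes."""
    return x ** 0.5  # e.g. a square root the human would estimate by hand

def humanlike_answer(x):
    """Deliberately degrade precision so the answer resembles mental math."""
    rounded = round(precise_answer(x))   # humans tend to round
    if random.random() < 0.3:            # ...and occasionally slip by one
        rounded += random.choice([-1, 1])
    return rounded

print(precise_answer(68.5))    # 8.2765...
print(humanlike_answer(68.5))  # typically 8, occasionally 7 or 9
```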

You can imagine that having AI purposely try to make mistakes or falter (this is coined as the Dimwit ploy by AI, see my explanation at this link here), seems distasteful, disturbing, and not something that everyone necessarily agrees is a good thing.

We do allow humans to make gaffes, but having AI that does so, especially when it knows better, would seem like a dangerous and undesirable slippery slope.

The Reverse Turing Test Rears Its Head

I've now described for you the overall semblance of the Turing Test.

Next, let's consider a variation that some like to call a Reverse Turing Test.

Here's how that works.

The human contestant decides they are going to pretend that they are the AI. As such, they will attempt to provide answers that are indistinguishable from the AI's type of answers.

Recall that the AI in the conventional Turing Test is trying to seem indistinguishable from a human. In the Reverse Turing Test, the human contestant is trying to reverse the notion and act as though they were the AI and therefore indistinguishable from the AI.

Well, that seems mildly interesting, but why would the human do this?

This might be done for fun, a kind of laugh for people that enjoy developing AI systems. It could also be done as a challenge, trying to mimic or imitate an AI system, and betting whether you can do so successfully or not.

Another reason and one that seems to have more chops or merit consists of doing what is known as a Wizard of Oz.

When a programmer is developing software, they will sometimes pretend that they are the program and use a facade front-end or interface to have people interact with the budding system, though those users do not know that the programmer is watching their interaction and ready to interact too (doing so secretively from behind the screen and without revealing their presence).

Doing this type of development can reveal how the end-users are having difficulties using the software, and meanwhile, they remain within the flow of the software by the fact that the programmer intervened, quietly, to overcome any of the computer system deficiencies that might have disrupted the effort.

Perhaps this makes clear why it is often referred to as a Wizard of Oz, involving the human staying in-the-loop and secretly playing the role of Oz.

Getting back to the Reverse Turing Test, the human contestant might be pretending to be the AI to figure out where the AI is lacking, and thus be better able to enhance the AI and continue on the quest toward AGI.

In that manner, a Reverse Turing Test can be used for perhaps both fun and profit.

Turing Test Upside-Down And Right Side Up

Some believe that we might ultimately be headed toward what is sometimes called the Upside-Down Turing Test.

Yes, that's right, this is yet another variant.

In the Upside-Down Turing Test, replace the moderator with AI.

Say what?

This less discussed variant involves having AI be the judge or interrogator, rather than a human doing so. The AI asks questions of the two contestants, still consisting of an AI and a human, and then renders an opinion about which is which.

Your first concern might be that the AI seems to have two seats in this game, and as such, it is either cheating or simply a nonsensical arrangement. Those that postulate this variant are quick to point out that the original Turing Test has a human as a moderator and a human as a contestant, thus, why not allow the AI to do the same.

The instant retort is that humans are different from each other, while AI is presumably the same thing and not differentiable.

That's where those interested in the Upside-Down Turing Test would say you are wrong in that assumption. They contend that we are going to have multitudes of AI, each of which will be its own differentiable instance, and be akin to how humans are each distinctive instances (in brief, the argument is that AI will be polylithic and heterogeneous, rather than monolithic or homogeneous).

The counterargument is that the AI is presumably going to be merely some kind of software and a machine, all of which can be readily combined into other software and machines, but that you cannot readily combine humans and their brains. We each have a brain intact within our skulls, and there are no known means to directly combine them or mesh them with others.

Anyway, this back-and-forth continues, each proffering a rejoinder, and it is not readily apparent that the Upside-Down variant can be readily discarded as a worthwhile possibility.

As you might imagine, there is an Upside-Down Turing Test and also an Upside-Down Reverse Turing Test, mirroring the aspect of the conventional Turing Test and its counterpart the Reverse Turing Test (some, by the way, do not like the use of Upside-Down and instead insist that this added variant is merely another offshoot of the Reverse Turing Test).

You might begrudgingly agree to let the AI be in two places at once, and have one AI as the interrogator and one as a contestant.

What good does that do anyway?

One thought is that it helps to potentially further showcase whether AI is intelligent, which might be evident as to the questioning and the nature of how the AI digests the answers being provided, illustrating the AIs capacity as the equivalent of a human judge or interrogator.

That's the mundane or humdrum explanation.

Are you ready for the scary version?

It has to do with intelligence, as I'll describe next.

Some believe that AI will eventually exceed human intelligence, arriving at Artificial Super Intelligence (ASI).

The word super is not meant to imply superman or superwoman kinds of powers; instead, it means that the intelligence of the AI is beyond our human intelligence, though not necessarily able to leap tall buildings or move faster than a speeding bullet.

Nobody can say what this ASI or superintelligence might be able to think of, and perhaps we as humans are so limited in our intelligence that we cannot see beyond our limits. As such, the ASI might be intelligent in ways that we cannot foresee.

That's why some are considering AI or AGI to potentially be an existential threat to humanity (this is something that, for example, Elon Musk has continued to evoke, see my coverage at this link here), and the ASI is presumed to be even more of a potential menace.

If you are interested in this existential threat argument, as I've pointed out repeatedly (see the link here), there are just as many ways to conjure that the AI or AGI or ASI will help mankind and aid us in flourishing as there are doomsday scenarios of our being squashed like a bug. Also, there is a rising tide of interest in AI Ethics, fortunately, which might aid in coping with, avoiding, or mitigating the coming AI calamities (for more on AI Ethics, see my discussion at this link here).

That being said, it certainly makes sense to be prepared for the doom-and-gloom scenario, due to the rather obvious discomfort and sad result that would accrue going down that path. I presume that none of us want to be summarily crushed out of existence like some annoying and readily dispatched pests.

Returning to the Upside-Down Turing Test, it could be that an ASI would sit in the moderator's seat and be judging whether conventional AI has yet reached the aspirational level of AI that renders it able to pass the Turing Test and be considered indistinguishable from human intelligence.

Depending on how far down the rabbit hole you want to go on this, at some point the Turing Test might have two seats for the ASI, and one seat for AI. This means that the moderator would be an ASI, while there is conventional AI as a contestant and another ASI as the other contestant.

Notice that there is not a human involved at all.

Maybe we ought to call this the Takeover Turing Test.

No humans needed; no humans allowed.

Conclusion

It is unlikely that AI is going to be crafted simply for the sake of making AI, and instead, there will be a purpose-driven rationale for why humans opt to create AI.

One such purpose involves the desire to have self-driving cars.

A true self-driving car is one that has AI driving the car and there is no need for a human driver. The only role of a human would be as a passenger, but not at all as a driver.

A vexing question right now is what level or degree of AI is needed to achieve self-driving cars.

Some believe that until AI has arrived at the aspirational AGI, we will not have true self-driving cars. Indeed, those with such an opinion would likely say that the AI has to achieve sentience, perhaps doing so in a moment of switchover from automation into a spark of being that is called the moment of singularity (for more on this, see my analysis at this link here).

Hogwash, some counter, and insist that we can get AI that is not necessarily Turing Test worthy but that can nonetheless safely and properly drive cars.

To be clear, right now there is not any kind of AI self-driving car that approaches anything like AGI, and so for the moment, we are faced with trying to decide if plain vanilla AI can be sufficient to drive a car. Quick aside, for those interested in AI, some refer to any symbolic approach to AI as GOFAI or Good Old-Fashioned Artificial Intelligence, which is both endearing and to some degree a backhanded slight, all at the same time (see more at my explanation here).

Follow this link:

The Famous AI Turing Test Put In Reverse And Upside-Down, Plus Implications For Self-Driving Cars - Forbes

Scoop: Coming Up on a Rebroadcast of MATCH GAME on ABC – Sunday, July 26, 2020 – Broadway World

"Sam Richardson, Jane Krakowski, Ben Schwartz, Caroline Rhea, James Van Der Beek, Vivica A. Fox" - There is something for everyone on this week's "Match Game." We've got singing, a world record-holding strongman and host Alec Baldwin battling celebrity panelist Caroline Rhea for the title of "America's Sweetheart," airing SUNDAY, JULY 26 (10:00-11:00 p.m. EDT), on ABC. (TV-14, DL) Produced by Fremantle, "Match Game" features four contestants each week vying for the chance to win $25,000, as they attempt to match the answers of six celebrities in a game of fill-in-the-blank. Episodes can also be viewed on demand and Hulu. (Rebroadcast. OAD: 6/7/20)Celebrity panelists include the following:Sam Richardson ("The Tomorrow War"; "Superintelligence")Jane Krakowski (Tony winner; "30 Rock"; "Unbreakable Kimmy Schmidt"; "Dickinson")Ben Schwartz ("Space Force"; "Middleditch & Schwartz")Caroline Rhea("Women of a Certain Age"; "The COMEDY CENTRAL ROAST of Alec Baldwin")James Van Der Beek ("Varsity Blues"; "What Would Diplo Do")Vivica A. FOX ("Empire"; "Arkansas"; podcast "Hustling with Vivica A. Fox")Joining the celebrity panelists are contestants Marisa Aull (hometown: Lexington, Kentucky), Adam Burnes (hometown: Sacramento, California), Shirene Warner (hometown: Stafford, Virginia) and Vincent Panico (hometown: Whitehouse Station, New Jersey)."Match Game" isexecutive produced by Scott St. John, Alec Baldwin, Mallory Schwartz and Fremantle's Jennifer Mullin.From This AuthorTV Scoop

Excerpt from:

Scoop: Coming Up on a Rebroadcast of MATCH GAME on ABC - Sunday, July 26, 2020 - Broadway World

Consciousness Existing Beyond Matter, Or in the Central Nervous System as an Afterthought of Nature? – The Daily Galaxy –Great Discoveries Channel

Posted on Jul 11, 2020 in Science

Does human consciousness exist separate from matter, or is it embodied, with the body a critical player in anything that has to do with mind? "We are not thinking machines that feel; rather, we are feeling machines that think," answers neuroscientist Antonio Damasio, who pioneered the field of embodied consciousness, the bodily origins of our sense of self. "We may smile and the dog may wag the tail, but in essence," he says, "we have a set program and those programs are similar across individuals in the species. There is no such thing as a disembodied mind."

Consciousness is considered by leading scientists to be the central unsolved mystery of the 21st century. "I have a much easier time imagining how we understand the Big Bang than I have imagining how we can understand consciousness," says Edward Witten, a theoretical physicist at the Institute for Advanced Study in Princeton, New Jersey, who has been compared to Isaac Newton and Einstein, about a phenomenon that has been described as assuming the role spacetime did before Einstein invented his theory of relativity.

Some scientists have asked how we can be sure that the source of consciousness lies within our bodies at all. One popular, if mystical, idea, writes astrophysicist Paul Davies in The Demon in the Machine, is that flashes of mathematical inspiration can occur by the mathematician's mind somehow breaking through into a Platonic realm of mathematical forms and relationships that not only lies beyond the brain but beyond space and time altogether.

The English astronomer, Fred Hoyle, infamous for his rejection of the Big Bang theory, suggested an even more radical hypothesis: that quantum effects in the human brain leave open the possibility of a superintelligence in the cosmic future using a subtle but well-known backwards-in-time property of quantum mechanics in order to steer scientific progress.

Four billion years ago, writes Damasio in The Strange Order of Things: Life, Feeling, and the Making of the Cultural Mind, the first primitive organisms monitored changes in their bodily state, equivalent to hunger, thirst, pain and so on, and had feedback mechanisms to maintain equilibrium. The relic of those primitive mechanisms is our autonomic nervous system, which controls bodily functions such as heartbeat and digestion, and of which we are largely unconscious.

Consciousness is Like Spacetime Before Einstein's Relativity

Then, about half a billion years ago, the central nervous system, featuring a brain, evolved, an afterthought of nature, says Damasio, who proposes a three-layered theory of consciousness based on a hierarchy of stages, with each stage building upon the last. The most basic representation of the organism is referred to as the Protoself, next is Core Consciousness, and finally, Extended Consciousness.

Damasio, who is an internationally recognized leader in neuroscience, was educated at the University of Lisbon and currently directs the University of Southern California Brain and Creativity Institute. The human brain, he argues, became the anchor of what had once been a more distributed mind. Changes in bodily state were projected onto the brain and experienced as emotions or drives: the emotion of fear, say, or the drive to eat. Subjectivity evolved later again, he argues. It was imposed by the musculoskeletal system, which evolved as a physical framework for the central nervous system and, in so doing, also provided a stable frame of reference: the unified "I" of conscious experience.

Ultimate Mystery of the Universe, Human Consciousness: We're Like Neanderthals Trying to Understand Astronomy

Life was regulated at first without feelings of any sort; there was no mind and no consciousness. There was, Damasio writes, "a set of homeostatic mechanisms blindly making the choices that would turn out to be more conducive to survival." The arrival of nervous systems, capable of mapping and image making, opened the way for simple minds to enter the scene. During the Cambrian explosion, after numerous mutations, certain creatures with nervous systems would have generated not just images of the world around them but also an imagetic counterpart to the busy process of life regulation that was going on underneath. This would have been the ground for a corresponding mental state, the thematic content of which would have been valenced in tune with the condition of life, at that moment, in that body. The quality of the ongoing life state would have been felt.

Enter Sarah Garfinkel, at the University of Sussex, UK, who joins Damasio in arguing that our thoughts, feelings and behaviors are shaped in part by the internal signals that arise from our body. But, she reports in New Scientist, "it goes beyond that." It is leading her and others to a surprising conclusion: that the body helps to generate our sense of self and is a key part of consciousness. This idea has practical implications in assessing people who show little sign of consciousness. It may also force us to reconsider where we draw the line between life and death, and provide a new insight into how consciousness evolved.

"Since 2000," concludes Damasio, "I have been defending the idea that the body is a critical player in anything that has to do with mind."

The Daily Galaxy, Max Goldberg, via New Scientist and Antonio R. Damasio, Descartes' Error and The Strange Order of Things: Life, Feeling, and the Making of the Cultural Mind, and Paul Davies, The Demon in the Machine (all Kindle editions)

Image credit: Shutterstock License

Read more from the original source:

Consciousness Existing Beyond Matter, Or in the Central Nervous System as an Afterthought of Nature? - The Daily Galaxy --Great Discoveries Channel

If you can’t beat ’em, join ’em: Elon Musk tweets out the mission statement for his AI-brain-chip Neuralink – Business Insider India


The announcement of Neuralink's mission statement comes after Musk claimed in May that the AI brain chip could potentially be ready to be put into a human patient within a year.


"Think of humanity as a biological boot loader for digital superintelligence," Musk told Alibaba's Jack Ma during the World AI Conference in Shanghai. A boot loader is a tiny piece of code without which a computer cannot load the features it needs to start up.

According to Musk, comparing human intelligence to AI is like comparing chimpanzees to humans. He believes that there's no doubt that AI will be much, much smarter than the smartest human.

How Neuralink wants to change the world

Unlike AI solutions that have come before Neuralink, Musk wants to achieve this symbiosis with minimal invasion. The use of flexible threads is less likely to damage the brain than the materials currently used in brain-machine interfaces. These threads are thinner than a human hair.

Using threads also brings the possibility of being able to transfer a higher volume of data. The white paper by Elon Musk & Neuralink claims that the proposed system could include as many as 3,072 electrodes per array distributed across 96 threads.
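Taken at face value, those figures work out to 3,072 ÷ 96 = 32 electrodes on each thread.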

SEE ALSO: Elon Musk hints at new Tesla Gigafactory in Asia but not in China

Panasonic's CEO says Elon Musk is a genius but can be 'overly optimistic'

Elon Musk's Boring Company is hosting a competition to see who can dig tunnels faster than a snail

Go here to read the rest:

If you can't beat 'em, join 'em Elon Musk tweets out the mission statement for his AI-brain-chip Neuralink - Business Insider India

The Shadow of Progress – Merion West

In a worldview that prizes purity above progress, the flawed and erroneous are stains to be expunged. Their remembrance is not only deplorable but damning by association.

"Indeed, history is nothing more than a tableau of crimes and misfortunes. History is nothing but a pack of tricks that we play upon the dead."

~Voltaire

We are at war with the past. What began as a stand against state-sponsored violence has metastasized; it has spread to every facet of politics and culture and has spiraled to the brink of complete moral frenzy. The anti-racism Left, still well in the throes of George Floyd's death, has moved away from the police, politicians, and partisan prejudice towards a new (or, rather, not-so-new) nemesis: the pages of history themselves.

In its crusade against racial and social injustice, Black Lives Matter and its ideological peers are making no exceptions for either the ancient or the antiquated. They make no distinctions among those who lived 50, 100, or even 1,000 years ago. Indeed, from the indignant throngs of recent weeks, we have stood witness to a second wave of statue removals, ranging from democratic campaigns to criminal defenestrations, across the United States and beyond, in what can only be described as some desperate attempt at historical redaction. In a worldview that prizes purity above progress, the flawed and erroneous are stains to be expunged. Their remembrance is not only deplorable but damning by association.

It is this latter sentiment that should have us most concerned. While it is the nature of cynical traditions to deny progress and its many achievements, it is an entirely new form of pessimism to deplore its very existence. If one's worldview is a mixture of mistrust and misanthropy, it should come as no surprise that one's past appears populated by villains and reprobates. It should come as no surprise in principle, as we shall see, but it is a novel and enfeebling mistake to bear such wickedness as one's own. In reaching so deep into the gutters of the past, we are finding ourselves sullied with regard to the present. We find ourselves sickened by the legacies of evil. In merely perceiving the long-since departed, we find ourselves shackled, and in many cases sentenced, by the sins of our fathers.

Such is the nature of our new, historical masochism. It is a fallacy that owes, in large part, to presentism: the tendency to judge the past by today's morality. It is a mistake that centers on days gone by, but it threatens everything that we have achieved and stand for in the future. This is not hyperbole: The war against history is a philosophical mistake bordering on existential threat, not because those who do not learn history are doomed to repeat it, but because it was never really about history in the first place. It is about progress. The presentism paradox is all about how we can only perceive past evils from a position of virtue. Its mistake is to conflate the two. The result is a war not against those historical failures we deplore, but against their corrections. We are at war with our achievements.

As a society, we stand at a unique perspective throughout history. We exist at the pinnacle of all scientific, technological, and moral understanding after a long and distinguished career of misery. Fans of Steven Pinker's 2011 book The Better Angels of Our Nature will be familiar with this position, as well as his trademark brand of quantitative optimism. Those who are not may think it perverse to even suggest. How could a society racked with injustice, plagued with war, and all but enthralled by the specter of power be anything but detestable? How could a civilization poised to destroy itself be anything other than falling apart?

The answers are, in part, factual and, in part, philosophical. The short version is: It is not true. It is not true that we have reached new heights of death and despair. It is not true that our destruction is imminent. Indeed, Pinker's work remains our greatest rebuke of such despondency. He shows us how the opposite is true; he shows us all the ways in which we are healthier, happier, more wealthy, more peaceful, more compassionate, and more loving than ever before. By every metric, material and meaning-filled, we are leading the way to a better tomorrow. We have known this for some time now; ever since the Enlightenment and its exceptional achievements, heretical visionaries have dared to honor an unprecedented success. Pinker is just the latest in a long line of heroic optimists, building upon the sentiments of such Enlightenment figures as William Godwin and Anthony Ashley-Cooper and, some centuries later, the philosopher Karl Popper. In his 1963 book Conjectures and Refutations, it was Popper who wrote:

In spite of our great and serious troubles, and in spite of the fact that ours is surely not the best possible society, I assert that our own free world is by far the best society which has come into existence during the course of human history.

Not quite convinced? That is okay. In any other argument of this type, contemporary optimism would require further defense. There is more to be said about destitution, climate change, existential risk, Our Final Hour, and Superintelligence; there is more to be discussed if one hopes to dispel an adored desperation. But it is the miraculous irony of our newest affliction, standing in the face of such a robust and wistful gloom, that the fight against history is itself optimistic. In order to admonish with righteous authority, one must first assume some measure of moral advantage. One must first contend some basis by which abolition supplants enslavement.

Concealed within the logic of our new-found presentism is a commitment to moral realism. After all, crimes are only crimes if we are correct in our convictions. This stands in stark contrast to the moral and epistemological relativism so treasured by the Left: a relativism from which many derive contempt towards a uniquely Western hubris. However, as we have seen, it is a hubris shared across oceans of time if not water, against those less fortunate in wisdom. I am sure that the relativist Left, alerted to their spatial and temporal hypocrisy, shall be quick to renounce such bigotry: one they so selectively despise.

But probably not; it is foundational to their cherished masochism. They have arrived at a contemporary optimism by accident; they subvert it to pessimistic ends. This is the error I am referring to: a bizarre new form of moral and historical inversion that holds solutions accountable for their problems, progress accountable for its obstacles, and the present accountable for its past. In their view, our superior vantage is merely a window into damnation. In 2020, hindsight is blinding.

But there is another way! Despite its seductive nature, historical pessimism is a surprisingly easy mistake to correct for, if you know how. The answer is gratitude. The correct response to fortune is thanks and compassion to those with less, not guilt and hatred of those with more. And if history is the shadow cast by progress, then we should feel grateful that it is cast behind us, not forward or downwards, and be careful not to heed its familiar call. As Pinker urges us to recall:

If the past is a foreign country, it is a shockingly violent one. It is easy to forget how dangerous life used to be, how deeply brutality was once woven into the fabric of daily existence. Cultural memory pacifies the past, leaving us with pale souvenirs whose bloody origins have been bleached away.

It is easy to forget just how far we have come. It is easy to forget just how mistaken we can be, and have been, and are; and we should be thankful. We should be thankful that in place of past monsters we have only their monuments, that in place of old slavers we have only their memory. The shadow of progress is an illusion cast by self-doubt. It is a mistake. When we do look towards the past, towards those figures less privileged than ourselves, we should do so with compassion, and forgiveness, for the right to condemn is, itself, a sign of good fortune. We should embrace our privileges as gifts, not sins. And we must understand that, more than any other, our greatest privilege is the time in which we find ourselves. Our greatest privilege is today. We should be quick to salute it.

Tom Hyde is a graduate of University College London and a freelance writer. He is primarily interested in how science and philosophy influence cultural trends.

More:

The Shadow of Progress - Merion West

Josiah Henson: the forgotten story in the history of slavery – The Guardian

From its very first moments, Harriet Beecher Stowe's debut novel Uncle Tom's Cabin was a smashing success. It sold out its 5,000-copy print run in four days in 1852, with one newspaper declaring that "everybody has read it, is reading, or is about to read it." Soon, 17 printing presses were running around the clock to keep up with demand. By the end of its first year in print, the book had sold more than 300,000 copies in the US alone, and another million in Great Britain. It went on to become the bestselling novel of the 19th century.

Before reading Uncle Tom's Cabin, I only knew that Stowe's novel had been credited with influencing the debate at the heart of the American civil war. I had an expensive education, but sadly I learned very little about black history at school; by my early 20s, only names such as Frederick Douglass or Harriet Tubman still rang a bell. All that changed when I discovered that Stowe's novel was based on the life of a real man, named Josiah Henson, whose cabin in Ontario was just a few hours from my home.

As I walked the four-acre grounds and tiny museum, an astounding story unfolded. Henson was entertained at both Windsor Castle and the White House. He won a medal at the first World's Fair, the Great Exhibition at the Crystal Palace in Hyde Park. The British prime minister Lord John Russell threw him a surprise banquet. The archbishop of Canterbury wept after hearing his story. Henson rescued 118 enslaved people, including his own brother, and helped build a 500-person freeman settlement, called Dawn, that was known as one of the final stops on the Underground Railroad. But before all this, he was brutally enslaved for more than 40 years.

And few people have heard of him. Henson has largely been lost to history. Every month, nearly 1.4 million people Google Abraham Lincoln, 228,000 look up Frederick Douglass, and 135,000 search for Confederate general Robert E Lee. Around 3,400 seek out Henson. But after I visited his cabin, I had to know more, so I set off on a 3,000-mile journey to retrace his footsteps from birth to freedom.

Henson was born near Port Tobacco, Maryland, around 1789. His first memory was of his father being whipped to the bone, having his ear cut off and being sold south, all as punishment for striking a white man who had attempted to rape his wife. He never saw his father again. Several years later, Henson was separated from his mother and sold to a child trafficker, but soon fell dangerously ill. The slave trader offered the boy to Henson's mother's owner, an alcoholic blacksmith named Isaac Riley, at a price he couldn't refuse: free of charge if the boy died, some horseshoeing work if he survived.

Henson not only survived but rose to the position of farm overseer and Riley's market man in the nation's capital. There he rubbed shoulders with lawyers, businessmen and Methodist ministers, one of whom taught him how to preach and helped him fundraise to buy his freedom.

After receiving a $350 down payment on his emancipation, about three years' wages for a white farm labourer, Riley swindled Henson by sending him to Kentucky to visit his brother Amos, who attempted to sell him south to New Orleans. Henson narrowly avoided that harsh fate through a highly providential twist of events: Riley's nephew Amos Junior, the young man tasked with selling Henson, contracted malaria. Rather than letting the teenager die, Henson honourably loaded him on a steamship, then returned north.

In 1830, Henson escaped Kentucky by water on a moonless night. Travelling by night and sleeping by day, Henson, his wife and four children made the 600-mile journey to the Canadian border on foot, assisted in part by Quakers and Native Americans, but mostly by their own pluck. Upon reaching the Niagara River, a kindly Scottish captain paid to send the Henson family across. According to one edition of Henson's autobiography, the captain asked if Henson would be a good man in his new land.

"Yes," Henson replied. "I'll use my freedom well."

And indeed, the overarching theme of Henson's story is the stewardship of freedom. Rather than using his prodigious business and oratory skills to simply build a comfortable life for himself, he agitated for equality of opportunity, smuggled friends and family to safety, planted churches, and defended himself against imprisonment after supporting families who sent sons to fight in the civil war. He embarked on a nearly 100-stop British speaking tour to raise funds for the cause. With the help of American supporters and British Christian philanthropists, he constructed a settlement for African American refugees, fundraised for black social enterprises including a sawmill and brickworks, and even built a desegregated school, nearly a century before the end of Jim Crow in the 1960s.

Inspired in part by Henson's story, Stowe penned her novel. The backlash came rapidly and rabidly; authors and columnists rushed to defend their romantic and chivalric southern ideals from this Yankee onslaught, arguing that Stowe's writing was nothing more than sectarian propaganda.

In response, Stowe published A Key to Uncle Tom's Cabin. In it, she named all the real people who inspired Mr Haley, George Harris, Eliza, Simon Legree, and the rest. As for Uncle Tom, Stowe wrote: "The character of Uncle Tom has been objected to as improbable; and yet the writer has received more confirmations of that character, and from a great variety of sources, than of any other in the book." Laying out the inspiration for various scenes in Uncle Tom's story, she declared: "A last instance parallel with that of Uncle Tom is to be found in the published memoirs of the venerable Josiah Henson, now pastor of the missionary settlement at Dawn, in Canada."

Among all the readers of Stowe's Key, there was one whose influence could not be overstated. According to the Library of Congress's circulation records, Lincoln borrowed The Key to Uncle Tom's Cabin on 16 June 1862, and returned it 43 days later, on 29 July. The dates correspond exactly to the time during which he drafted the Emancipation Proclamation. We may never know the degree to which Stowe influenced Lincoln, but it is clear that during that critical time, he had Henson's story near at hand.

Henson's story played a major role in Lincoln's election as well. His Republican party distributed 100,000 copies of Uncle Tom's Cabin during the presidential campaign of 1860, as a way to stir up anti-slavery support. Without the abolitionist press and Stowe's book, it's possible that Lincoln would not have garnered enough support to win. As fellow Republican US senator Charles Sumner declared: "Had there been no Uncle Tom's Cabin, there would have been no Lincoln in the White House."

Despite the fact that Henson played a pivotal role in world history, entrenched values are not easily uprooted. After Stowe published the Key and identified Henson, his supporters rebranded him "the real Uncle Tom". It was a good thing, at the time. Today "Uncle Tom" has a very derogatory meaning, due to its bastardisation at the hands of racist blackface playwrights in the late 19th and early 20th centuries. The man who sacrificed himself to win freedom for others was turned into a subservient and cowardly slave who curries favour with the white man. In a cruel disfigurement of a fictional hero, humility became baboonery, martyrdom became traitorhood. For more than 40 years after his death, blackface Tom shows played within a half-mile of Henson's grave. And within a generation, his story was nearly lost.

This is perhaps the greatest travesty of the white-centric narratives we are taught about our nations' pasts: that a bona fide international hero can be erased because of the colour of his skin. Black history has been intentionally lost and destroyed, on a huge scale. There are certainly many more figures like Henson, but we don't even know what we don't know. Harper's Magazine once estimated that the US owes more than $100 trillion in reparations for forced labour between 1619 and 1865. After slavery was abolished in the US, it continued overseas; there may be more people enslaved today than at any point in history, while other means of repression were quickly institutionalised to deal with the "black problem" in the US. Jim Crow tactics eventually failed, as will manufactured inequality, mass incarceration, and violence-based policing. But who knows what the future holds, in the age of AI superintelligence, algorithmic blockchains, and surveillance corporatism.

Henson died at 93 in Ontario, in 1883. Today, he has been consigned to obscurity. There are no riverfront statues of him, nor are there any parks, schools, or universities named in his honour. He is little taught in British, American or Canadian history classrooms, nor has his story been portrayed on the big or small screen. But thankfully, Henson's legacy continues through his descendants, who include Arctic explorer Matthew Henson, Oscar-nominated actor Taraji P Henson, and the hundreds-strong family reunion that takes place every summer, rotating between Michigan, Ontario, and Maryland. So long as the Henson family lives, there will be torchbearers to keep his story alive. It may take another 100 years before school children know the name Henson as readily as they do Lincoln and Washington, but as monuments to racism topple around the globe, they leave space for worthier replacements.

Read the rest here:

Josiah Henson: the forgotten story in the history of slavery - The Guardian

The world’s best virology lab isn’t where you think – Spectator.co.uk

If you ever doubt how clever evolution can be, remember that it may take a year or more for the brightest minds on the planet to find and approve a vaccine for the coronavirus. Yet 99 per cent of otherwise healthy people seem to have an immune system that can crack the problem in under a week.

When I posted this on Twitter, I got a little abuse from a few strange people who thought I was calling scientists dumb. Quite the reverse. 99 per cent may be too high a figure, but it is surely evidence of some bizarre superintelligence within the human body that many of us can unconsciously do something that the combined brains of the world's pharmaceutical industries so far cannot match. In a matter of days, the immune system can spot, target, test, and devise an antibody to eliminate a hostile pathogen it has never encountered before. Each of us walks around every day without realising that we are home to the world's best virology lab.

True, the immune system does not have to wait for FDA approval. But it does have to do something similar - ensure that the cure does not do more harm than the disease. (Diseases such as lupus, multiple sclerosis, and rheumatoid arthritis are examples of what happens when the system goes rogue.) And it's also worth noting that a human vaccine does not, in fact, cure the disease - it simply hacks the immune system to create its own cure.

A few dissident thinkers - including me and the economist Robin Hanson - have wondered aloud whether, in the time before a vaccine is available, there might be a role for an earlier practice called 'variolation'. This was introduced to Britain from the Ottoman Empire by Lady Mary Wortley Montagu in the early eighteenth century as a treatment against smallpox. Montagu controversially infected her own children with smallpox, the assumption being that the body coped better when presented with a small initial dose of the virus than with a larger one. She gained a PR coup for the procedure when the then Princess of Wales adopted it for her two daughters. Seven prisoners awaiting hanging at Newgate prison had been offered their freedom in exchange for undergoing the procedure - all seven survived. (Horrible to say it, but one small advantage of the death penalty is that it does solve certain problems in medical ethics.) Once Edward Jenner (and, earlier, Benjamin Jesty) came up with a cowpox vaccine, variolation sensibly fell out of favour.

We don't yet know whether the scale of the initial dose affects the course or outcome of the disease, and it would be heinous to act without this information. So far, strangely, most models of the disease assume infection is a binary question - you are either infected or you are not. Is this a safe assumption, or are there gains to be had from also ensuring that if you are infected, you aren't infected very much?
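To make the binary-versus-dose distinction concrete, here is a minimal illustrative sketch in Python. Everything in it is hypothetical - the dose figures and the per-particle parameter k are invented, and the exponential dose-response form is just one standard way such models are written, not anything proposed in this article:

```python
import math

# Toy contrast between a binary infection model and a dose-response model.
# All values are hypothetical illustrations, not epidemiological estimates.

def binary_infection(exposed: bool) -> float:
    """Binary model: exposure either infects you or it does not."""
    return 1.0 if exposed else 0.0

def dose_response_infection(dose: float, k: float = 0.01) -> float:
    """Exponential dose-response model: P(infection) = 1 - exp(-k * dose),
    where 'dose' is the number of viral particles received and 'k' is a
    hypothetical per-particle infection probability."""
    return 1.0 - math.exp(-k * dose)

if __name__ == "__main__":
    for dose in (10, 100, 1000):
        print(f"dose={dose:5}: binary={binary_infection(True):.2f}, "
              f"dose-response={dose_response_infection(dose):.2f}")
```

Under the binary model every exposure is equivalent; under a dose-response form, a tenth of the dose can mean a far lower probability of infection - which is precisely the gap in current models the piece is pointing at.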

I'm not taking any chances. While everyone else was stockpiling toilet paper, I invested in one of these.

Link:

The world's best virology lab isn't where you think - Spectator.co.uk

Is Artificial Intelligence (AI) A Threat To Humans? – Forbes

Are artificial intelligence (AI) and superintelligent machines the best or worst thing that could ever happen to humankind? The question has been with us since the 1940s, when computer scientist Alan Turing began to believe there would come a time when machines could have an unlimited impact on humanity through a process that mimicked evolution.

When Oxford University Professor Nick Bostrom's New York Times best-seller, Superintelligence: Paths, Dangers, Strategies, was first published in 2014, it struck a nerve at the heart of this debate with its focus on all the things that could go wrong. However, in my recent conversation with Bostrom, he also acknowledged there's an enormous upside to artificial intelligence technology.

Since Bostrom wrote his book in 2014, progress in artificial intelligence, machine learning, and deep learning has been very rapid. Artificial intelligence is now part of the public discourse, and most governments have some sort of strategy or road map to address AI. In his book, he described humanity's position as a little like children playing with a bomb that could go off at any time.

Bostrom explained, "There's a mismatch between our level of maturity in terms of our wisdom, our ability to cooperate as a species on the one hand and on the other hand our instrumental ability to use technology to make big changes in the world. It seems like we've grown stronger faster than we've grown wiser."

There are all kinds of exciting AI tools and applications beginning to affect the economy in many ways. These shouldn't be overshadowed by hype about a hypothetical future point where we get AIs with the same general learning and planning abilities humans have, as well as superintelligent machines. These are two different contexts that require attention.

Today, the more imminent threat isn't from a superintelligence, but from the useful - yet potentially dangerous - applications AI is put to presently.

How is AI dangerous?

If we focus on what's possible today with AI, here are some of the potential negative impacts of artificial intelligence that we should consider and plan for:

Change the jobs humans do/job automation: AI will change the workplace and the jobs that humans do. Some jobs will be lost to AI technology, so humans will need to embrace the change and find new activities that provide the social and mental benefits their jobs once did.

Political, legal, and social ramifications: As Bostrom advises, rather than avoid pursuing AI innovation, "Our focus should be on putting ourselves in the best possible position so that when all the pieces fall into place, we've done our homework. We've developed scalable AI control methods, we've thought hard about the ethics and the governments, etc. And then proceed further and then hopefully have an extremely good outcome from that." If our governments and business institutions don't spend time now formulating rules, regulations, and responsibilities, there could be significant negative ramifications as AI continues to mature.

AI-enabled terrorism: Artificial intelligence will change the way conflicts are fought, from autonomous drones and robotic swarms to remote and nanorobot attacks. In addition to being concerned with a nuclear arms race, we'll need to monitor the global autonomous-weapons race.

Social manipulation and AI bias: So far, AI is still at risk of being biased by the humans who build it. If there is bias in the data sets an AI is trained on, that bias will shape the AI's actions. In the wrong hands, AI can be used, as it was in the 2016 U.S. presidential election, for social manipulation and to amplify misinformation.

AI surveillance: AI's face-recognition capabilities give us conveniences, such as unlocking phones and entering buildings without keys, but they have also launched what many civil liberties groups consider alarming surveillance of the public. In China and other countries, police and governments are invading public privacy by using face-recognition technology. Bostrom explains that AI's ability to monitor global information systems - surveillance data, cameras, and mined social-network communications - has great potential for good and for bad.

Deepfakes: AI technology makes it very easy to create "fake" videos of real people. These can be used without an individual's permission to spread fake news, create porn in the likeness of a person who isn't actually in it, and more - damaging not only an individual's reputation but also their livelihood. The technology is getting so good that the possibility of people being duped by it is high.

As Nick Bostrom explained, "The biggest threat is the longer-term problem: introducing something radical that's superintelligent and failing to align it with human values and intentions. This is a big technical problem. We'd succeed at solving the capability problem before we succeed at solving the safety and alignment problem."

Today, Nick describes himself as a "frightful optimist" who is very excited about what AI can do if we get it right. He said, "The near-term effects are just overwhelmingly positive. The longer-term effect is more of an open question and is very hard to predict. If we do our homework, and the more we get our act together as a world and a species in whatever time we have available, the better we are prepared for this, the better the odds for a favorable outcome. In that case, it could be extremely favorable."

For more on AI and other technology trends, see Bernard Marr's new book Tech Trends in Practice: The 25 Technologies That Are Driving the 4th Industrial Revolution, which is available to pre-order now.

Here is the original post:

Is Artificial Intelligence (AI) A Threat To Humans? - Forbes

Elon Musk dings Bill Gates and says their conversations were underwhelming, after the Microsoft billionaire buys an electric Porsche – Pulse Nigeria

No one is safe from Elon Musk's barbs, it seems - not even Bill Gates.

Musk dissed the billionaire in a tweet sent Tuesday, claiming his conversations with the Microsoft founder had been "underwhelming."

Musk made the remark after an unofficial Tesla news account expressed disappointment with Gates' recent decision to buy a Porsche Taycan instead of a Tesla.

The Porsche Taycan is the German automaker's first all-electric vehicle and a direct rival to many of Tesla's models. Its starting price is $103,800.

Gates said he'd ordered the "very, very cool" vehicle during an interview with YouTuber Marques Brownlee, published Friday.

"That's my first electric car, and I'm enjoying it a lot," he said.

During the interview, the 64-year-old tech grandee discussed the state of electric cars in general, noting that their range still falls below that of traditional gasoline vehicles. Consumers may experience "anxiety" about this when buying one, he said.

Still, Gates and Musk have more in common than the Tesla CEO might like to admit.

They have both, for example, spoken about the dangers posed by artificial intelligence.

Both men have endorsed a book by Oxford philosophy professor Nick Bostrom, "Superintelligence," which warns of the risks to human life posed by AI.

Musk said the book was "worth reading" in a 2014 tweet, while Gates endorsed it in a 2015 interview with Baidu CEO Robin Li.

Originally posted here:

Elon Musk dings Bill Gates and says their conversations were underwhelming, after the Microsoft billionaire buys an electric Porsche - Pulse Nigeria

Thinking Beyond Flesh and Bones with AI – Ghana Latest Football News, Live Scores, Results – Ghanasoccernet.com

"The best way to predict the future is to invent it," goes the quote. If you are someone who is interested in discovering and inventing things, then "Artificial Intelligence" is the right domain for you. It will not only make your life interesting, but it will also let you make other people's lives simpler and easier!

What does thinking beyond bones and flesh mean? Artificial intelligence is not just about inventing robots to replace humans; it is also about taking over every hard, slogging activity. For example, AI can be applied across the medical field, civil engineering, military services, machine learning, and other areas. Put simply, artificial intelligence enables computers or software to reason about problems the way a person does. As a result, the field is vast, and you can get your hands on whichever lane seems most alluring to you.

The ultimate goal of AI is to achieve human goals through computer programming! AI is about mimicking human intelligence with a computer program and a little help from data - the way humans think, act, and respond to problems.

One of the most significant applications of AI is Israel's new military robots, built to stand in for human soldiers. This is not only effective, but it also reduces the loss of life in each war, and the design minimizes damage to the robot itself. A sensitive, yet knowledgeable and useful, invention! The future of work depends on how easily any task can be accomplished, and that future is nothing less than artificial intelligence.

Now, let us see what types of AI there are.

Artificial Narrow Intelligence (ANI)

ANI refers to designing a computer or machine to perform a single task with high intelligence. It understands only the individual task it must perform efficiently, and it is considered the most rudimentary form of AI.

E.g.:

Artificial Superintelligence (ASI)

Artificial superintelligence describes intelligence more powerful and sophisticated than human intelligence. While human intelligence is considered the most capable and adaptable we know, a superintelligence would surpass it.

It would be able to perform abstractions that are impossible for human minds even to conceive; the human brain, after all, is constrained to some billions of neurons.

Artificial intelligence has the ability to mimic human thought. ASI goes a step beyond and acquires cognitive abilities superior to those of humans.

Artificial General Intelligence (AGI)

As the name suggests, AGI is designed for general purposes. Its smartness can be applied to a variety of tasks, and it can learn and improve itself. It is as intelligent as a human brain, and unlike ANI it can improve its own performance.

E.g.: AlphaGo. It is currently used only to play the game Go, but its intelligence could be applied at various levels and in various fields.

Scope of AI

The global demand for experts with relevant AI knowledge has doubled in the past three years and will continue to increase. There are ever more openings in voice recognition, expert systems, AI-enabled equipment, and more.

Artificial intelligence is the future. So why not contribute to the future of the planet? In recent years, AI jobs have increased by almost 129%. In the United States alone, the number of open AI-related jobs runs as high as 4,000!

Well, to catch the lightning-strike opportunity in AI, you need a bachelor's degree in computer science, data science, information science, math, or a similar field. If you are an undergraduate, you can then break into the AI domain with a reputable online certification course in AI. Doing this, you can earn anywhere between INR 600,000 and INR 1,000,000 in India! In the United States, salaries run from US$50,000 to US$100,000.

In this smart world, it's easy to find online certification courses. Some focus only on the simple foundations of AI, while others offer full professional programs. All you have to do is choose the lane you want to follow and start your route.

You would be glad to know that Intellipaat offers one of the industry's best AI course programs, meticulously designed to industry standards and conducted by subject-matter experts (SMEs). This will not only enhance your knowledge but also help you apply a share of that knowledge in the field.

To shine in this field, you need to master certain skills, such as programming and robotics, along with domain knowledge in areas like autonomous cars and space research. You will also need special skills in mathematics, statistics, analytics, and engineering. Good communication skills are always appreciated if you aspire to the business side, where you must explain the technology and deliver the right solution to people.

Learners fascinated by the profession of artificial intelligence will discover numerous options in the field. Up-and-coming AI careers can be pursued in a variety of environments, such as finance, government, private agencies, healthcare, the arts, research, agriculture, and more. The range of jobs and opportunities in AI is vast.

See more here:

Thinking Beyond Flesh and Bones with AI - Ghana Latest Football News, Live Scores, Results - Ghanasoccernet.com

Liquid metal tendons could give robots the ability to heal themselves – Digital Trends

Since fans first clapped eyes on the T-1000, the shape-shifting antagonist from 1991's Terminator 2: Judgment Day, many people have been eagerly anticipating the day when liquid metal robots become a reality. And by "eagerly anticipating," we mean "had the creeping sense that such a thing is a Skynet eventuality, so we might as well make the best of it."

Jump forward to the closing days of 2019 and, while robots haven't quite advanced to the level of the 2029 future sequences seen in T2, scientists are getting closer. In Japan, roboticists from the University of Tokyo's JSK Lab have created a prototype robot leg with a metal tendon fuse that's able to repair fractures. How does it do this? Simple: by autonomously melting itself and then reforming as a single piece. The work was presented at the recent 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

The self-healing module consists of two halves connected by magnets and springs. Each half is filled with an alloy with a low melting point of just 50 degrees Celsius (122 degrees Fahrenheit). When the fuse breaks, cartridges heat the module, melting the alloy and allowing the two halves to fuse together again. While the re-fused joint is not as strong as it was before the break, the researchers observed that gently vibrating the joint as it melts and reforms yields a joint with up to 90% of its original strength. This could be further optimized in the future.
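As a rough illustration of the healing sequence just described, here is a hypothetical Python control-loop sketch. The class, its sensor and actuator method names, the target temperature above the alloy's 50 C melting point, and the timings are all invented for illustration; the team's actual controller is not described in the source:

```python
import time

MELT_POINT_C = 50.0  # reported melting point of the alloy (122 F)

class SelfHealingFuse:
    """Hypothetical sketch of the tendon fuse's repair cycle.
    Hardware methods are stubs; a real module would drive heater
    cartridges, a vibration motor, and a fracture sensor."""

    def fracture_detected(self) -> bool:
        # e.g., a continuity or strain sensor across the two halves
        return True  # stubbed for illustration

    def heal(self) -> None:
        # 1. Heat the internal cartridges until the alloy is molten.
        self.set_heater_target(MELT_POINT_C + 10.0)
        # 2. Gently vibrate while the halves re-fuse; the researchers
        #    report this recovers up to ~90% of the original strength.
        self.set_vibration_amplitude(0.1)  # mm, hypothetical value
        time.sleep(60.0)                   # hypothetical reflow time
        # 3. Stop vibrating and switch off the heater so the alloy sets.
        self.set_vibration_amplitude(0.0)
        self.set_heater_target(None)

    # Stub actuator interfaces (invented names).
    def set_heater_target(self, target_c) -> None: ...
    def set_vibration_amplitude(self, mm: float) -> None: ...
```

The detail the researchers single out - gentle vibration during melting and reforming - appears here as step 2 of the cycle.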

It's still very early in the development process. But the ultimate ambition is to develop ways for robots to heal themselves, rather than having to rely on external tools to do so. Since roboticists regularly borrow from nature for biomimetic solutions to problems, the idea of robots that can heal like biological creatures makes a lot of sense.

Just like breakthroughs in artificial muscles and continued research toward superintelligence, it takes us one step closer to the world envisioned in Terminator. Where's John "savior of all humanity" Connor when you need him?

Link:

Liquid metal tendons could give robots the ability to heal themselves - Digital Trends