The Prometheus League
Breaking News and Updates
Daily Archives: November 17, 2019
Mexico has reached a turning point: Will it destroy the cartels or itself? – AZCentral
Posted: November 17, 2019 at 2:34 pm
Opinion: The slaughter of women and children caught America's attention, but the real turning point for Mexico came with cartel violence weeks before.
Mexican soldiers patrol around the city of Culiacán on Friday, Oct. 18, 2019. (Photo: Augusto Zurita/Associated Press)
We were the only two people sitting at a very long backyard picnic table, because our kids and wives had become great friends. All around us, kids were running and playing.
The topic was culture, or actually the contrast of cultures.
He and his family are citizens of Mexico, who, because of his work, live in Arizona. He was describing to me the differences between our two countries.
He pointed to the end of the picnic table, to a cellphone I hadn't noticed, but that one of the kids or moms must have left there. He said, "If this were Mexico, that would be gone."
If there is an important distinction between the two nations, it is the petty corruption that permeates daily life south of the border. People do not have much, he explained, and thus would not pass up the opportunity to take something of value that could put food on the table tomorrow.
I don't believe he was making a value judgment. He is a proud Mexican, and once put his culture on full display at an outdoor quinceañera for his 15-year-old daughter. There was music, dancing, food, all redolent of a culture fully alive. When it was over, I told him, "In my next life, I want to be born Mexican."
But he and his wife are Mexicans living in the United States, separated from their families and enjoying the relative peace from the troubles down there.
Mexico today is becoming more and more a narco state on the road to collapse. Wide swaths of the country have been taken over by drug gangs who, if they don't overtly tell you who is in charge, will in a moment settle the question with guns.
This is a seminal year for our neighbors down south, not because a family of white European fundamentalist Mormons was gunned down in broad daylight, nine mothers and children murdered.
That kind of slaughter is all too common in Mexico.
The people of Mexico are unsettled and sensing a turning point because of what happened only weeks earlier in Culiacán, the capital city of Sinaloa State. There a patrol of about 30 soldiers from Mexican President Andrés Manuel López Obrador's newly formed National Guard was moving through a residential area, probably with intent, when someone started shooting at them.
They followed the gunfire to a home where they found four men, one of them Ovidio Guzmán López, the 29-year-old son of drug lord Joaquín "El Chapo" Guzmán, once considered the world's biggest drug dealer.
El Chapo, the former leader of the Sinaloa cartel, is now rotting in a maximum-security prison cell in Colorado, serving multiple life sentences. Two of his sons, including Ovidio, have been indicted, and so Ovidio was taken into custody.
Very soon after, the soldiers found themselves surrounded by an armed force of some 400 sicarios, the Spanish word for hitmen or drug-gang militias. Vehicles outfitted with large-caliber automatic weapons descended on the city. The narcos were taking over.
They set tanker trucks and other vehicles on fire in major intersections to make it hard for national security forces to respond and started firing upon the National Guard troops.
The people of Culiacán were horrified by the piercing sound of large-weapons fire and took cover. Black clouds boiled up over their city and made Culiacán look like hell on earth.
The origin of the place name Culiacán is uncertain, but it may have been taken from the word coahuacan, or "palace of snakes." Surrounded by snakes, the National Guard decided this was not the time to go to war with the Sinaloa cartel. They stood down. The drug lords held the town for a while and used that time to break out 30 of their compadres from jail.
López Obrador, the president the Mexicans call "AMLO," said, "This is no longer a war."
AMLO was elected in part to end the policies of two prior Mexican presidents, Felipe Calderón and Enrique Peña Nieto, who had declared war on the cartels and had pursued, with the help of the United States, a decapitation strategy of taking out the cartel bosses.
"This is no longer about force, confrontation, annihilation, extermination, or killing in the heat of the moment," AMLO said. "You can't put out fire with fire."
When you take fire to the cartels, fire erupts and the murder rate leaps even higher, as it did to record levels. AMLO came in promising a new tactic: abrazos no balazos, or hugs, not bullets.
His strategy is social programs to fight poverty, a plan to end corruption, a call to all Mexicans to exemplify good behavior.
But how do you hug the men who have completely corrupted your institutions, your courts, your police departments? How do you exhort to good behavior those who would kidnap 43 young college students from Iguala and murder them all at once?
How do you communicate with men who communicate by rolling five severed human heads onto a dance floor in Uruapan? Or who ambush and murder 14 police officers in Michoacán?
How can you ever make peace with those who would fire more than 200 rounds from assault weapons at women and children, killing nine of them?
The only way you do that is to surrender. And that is what AMLO did.
"If Calderón and Peña Nieto relied solely on enforcement, López Obrador has chosen to give up the legitimate power of the state," wrote a Mexican journalist and Univision anchor in Los Angeles in an op-ed in the Washington Post. "The Mexican government capitulated. Cartels will surely take notice."
"That's an absolute destruction of the rule of law and it's going to get worse," said Derek Maltz, former chief of special operations at the Drug Enforcement Administration, to the Wall Street Journal.
"What we saw in Culiacán," said Edgardo Buscaglia, an expert on organized crime at Columbia University, to the (London) Guardian, "was the parallel state showing itself."
"And in Mexico, this was a watershed," said Ismael Bojórquez, editor of the investigative Sinaloa weekly Río Doce, to the Guardian. "Life goes on, yes, but not in the same way. We don't know if this will now be the reaction every time criminal groups feel threatened and we know even less what the federal government intends to do about it."
Those who believe marijuana legalization in the United States will end this scourge need to understand that marijuana is not the cash crop it was once believed to be for the Mexican mob.
In 2009, the RAND research organization found that the estimates of marijuana in cartel exports were wildly overstated:
"The claim that 60 percent of Mexican DTO (drug trafficking organizations) gross drug export revenues comes from marijuana is not credible. There is no public documentation about how this figure is derived, and government analyses reveal great uncertainty. RAND's exploratory analysis on this point suggests that 15-26 percent is a more credible range."
"The range of drugs and drug markets are expanding and diversifying as never before," reports the UN World Drug Report for 2018. And Mexican cartels are pushing out in the trafficking of heroin, cocaine, methamphetamines and fentanyl.
The United States will not in our lifetime legalize those dangerous street drugs.
That means Mexican cartels will control market share. In fact, they have become diversified conglomerates.
"[In] this global business logic through franchises, Sinaloa resembles the hamburger chains we all know, and that's why we say this cartel is a multinational drug company," Jorge Hernández Tinajero, author and researcher at the Universidad Nacional Autónoma de México, told Small Wars Journal.
So the Mexicans have a problem. They don't have security. They don't control their state. They face an elemental struggle to be a civilized society, and it won't be achieved by words or safety nets.
The cartel bosses and their minions are not Mexicans. They are enemies of the people and the state. They have to be destroyed. Defeating them will require force, yes, but also shrewd tactics and policies.
Mexico will need leadership from the top and from the grass roots.
Leadership at the top needs to shore up the court system so it actually dispenses justice. They'll need to build up the army and local law enforcement to start protecting the Mexican people. Today they do not. Ninety-eight percent of all violent crime in Mexico now goes unsolved, reports the Washington Post.
Leadership from the bottom, from the Mexican people, needs to declare that the casual corruption in daily life is no longer tolerable. That a cellphone that disappears from a backyard picnic table ultimately gives license to bullets that fly in Culiacán.
Phil Boas is editorial page editor of The Arizona Republic. He can be reached at 602-444-8292 or phil.boas@arizonarepublic.com.
Read or Share this story: https://www.azcentral.com/story/opinion/op-ed/philboas/2019/11/16/mexico-has-reached-turning-point-destroy-cartels-itself/4178290002/
Excerpt from:
Mexico has reached a turning point: Will it destroy the cartels or itself? - AZCentral
Posted in War On Drugs
What is AI (artificial intelligence)? – Definition from …
Posted: at 2:33 pm
Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.
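The distinction between hand-coded reasoning and learned rules can be made concrete with a small example. The sketch below is purely illustrative (the toy data, threshold and labels are invented for this article); it simply contrasts an expert-system-style rule written by a person with a rule a model induces from examples.

```python
# Illustrative sketch only: a hand-coded "expert system" rule versus a rule
# learned from data. The toy data and threshold are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hand-coded rule: a human encodes the knowledge directly.
def expert_system_rule(temperature_c: float) -> str:
    return "fever" if temperature_c >= 38.0 else "normal"

# Learned rule: the same knowledge is acquired from labelled examples.
temperatures = [[36.5], [37.0], [37.4], [38.1], [38.6], [39.2]]
labels = ["normal", "normal", "normal", "fever", "fever", "fever"]

model = DecisionTreeClassifier(max_depth=1).fit(temperatures, labels)

print(expert_system_rule(38.4))        # rule written by a person
print(model.predict([[38.4]])[0])      # rule induced from data
```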
AI can be categorized as either weak or strong. Weak AI, also known as narrow AI, is an AI system that is designed and trained for a particular task. Virtual personal assistants, such as Apple's Siri, are a form of weak AI. Strong AI, also known as artificial general intelligence, is an AI system with generalized human cognitive abilities. When presented with an unfamiliar task, a strong AI system is able to find a solution without human intervention.
Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings, as well as access to Artificial Intelligence as a Service (AIaaS) platforms. AI as a Service allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment. Popular AI cloud offerings include Amazon AI services, IBM Watson Assistant, Microsoft Cognitive Services and Google AI services.
While AI tools present a range of new functionality for businesses, the use of artificial intelligence raises ethical questions. This is because deep learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human selects what data should be used for training an AI program, the potential for human bias is inherent and must be monitored closely.
Some industry experts believe that the term artificial intelligence is too closely linked to popular culture, causing the general public to have unrealistic fears about artificial intelligence and improbable expectations about how it will change the workplace and life in general. Researchers and marketers hope the label augmented intelligence, which has a more neutral connotation, will help people understand that AI will simply improve products and services, not replace the humans that use them.
Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, categorizes AI into four types, from the kind of AI systems that exist today to sentient systems, which do not yet exist: reactive machines, limited memory, theory of mind and self-awareness.
AI is incorporated into a variety of different types of technology, including automation, machine learning, machine vision, natural language processing and robotics.
Artificial intelligence has also made its way into a number of application areas, including healthcare, business, education and finance.
The application of AI in the realm of self-driving cars raises security as well as ethical concerns. Cars can be hacked, and when an autonomous vehicle is involved in an accident, liability is unclear. Autonomous vehicles may also be put in a position where an accident is unavoidable, forcing the programming to make an ethical decision about how to minimize damage.
Another major concern is the potential for abuse of AI tools. Hackers are starting to use sophisticated machine learning tools to gain access to sensitive systems, complicating the issue of security beyond its current state.
Deep learning-based video and audio generation tools also present bad actors with the tools necessary to create so-called deepfakes, convincingly fabricated videos of public figures saying or doing things that never took place.
Despite these potential risks, there are few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI only indirectly. For example, federal Fair Lending regulations require financial institutions to explain credit decisions to potential customers, which limits the extent to which lenders can use deep learning algorithms, which by their nature are typically opaque. Europe's GDPR puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.
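To picture the tension between opaque models and explainable credit decisions, here is a hypothetical sketch. The feature names and figures are invented; the point is only that a simple model such as logistic regression exposes per-feature weights that can be cited as reasons for a decision, which deep neural networks generally do not.

```python
# Hypothetical illustration of an "explainable" credit model: logistic
# regression coefficients can be mapped back to human-readable reasons.
# All feature names and figures are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income_thousands", "debt_ratio", "late_payments"]
X = np.array([[60, 0.2, 0], [35, 0.5, 2], [80, 0.1, 0], [28, 0.7, 4]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

# Each coefficient is a reason that can be cited in a credit decision,
# e.g. "late payments lowered the score" if its weight is negative.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```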
In 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered. Since that time the issue has received little attention from lawmakers.
Posted in Ai
Microsoft sends a new kind of AI processor into the cloud – Ars Technica
Posted: at 2:33 pm
Microsoft rose to dominance during the '80s and '90s thanks to the success of its Windows operating system running on Intel's processors, a cosy relationship nicknamed Wintel.
Now Microsoft hopes that another hardware-software combo will help it recapture that success and catch rivals Amazon and Google in the race to provide cutting-edge artificial intelligence through the cloud.
Microsoft hopes to extend the popularity of its Azure cloud platform with a new kind of computer chip designed for the age of AI. Starting today, Microsoft is providing Azure customers with access to chips made by the British startup Graphcore.
Graphcore, founded in Bristol, UK, in 2016, has attracted considerable attention among AI researchers, and several hundred million dollars in investment, on the promise that its chips will accelerate the computations required to make AI work. Until now it has not made the chips publicly available or shown the results of trials involving early testers.
Microsoft, which put its own money into Graphcore last December as part of a $200 million funding round, is keen to find hardware that will make its cloud services more attractive to the growing number of customers for AI applications.
Unlike most chips used for AI, Graphcore's processors were designed from scratch to support the calculations that help machines to recognize faces, understand speech, parse language, drive cars, and train robots. Graphcore expects it will appeal to companies running business-critical operations on AI, such as self-driving-car startups, trading firms, and operations that process large quantities of video and audio. Those working on next-generation AI algorithms may also be keen to explore the platform's advantages.
Microsoft and Graphcore today published benchmarks that suggest the chip matches or exceeds the performance of the top AI chips from Nvidia and Google using algorithms written for those rival platforms. Code written specifically for Graphcore's hardware may be even more efficient.
The companies claim that certain image-processing tasks work many times faster on Graphcore's chips, for example, than on its rivals' using existing code. They also say they were able to train a popular AI model for language processing, called BERT, at rates matching those of any other existing hardware.
BERT has become hugely important for AI applications involving language. Google recently said that it is using BERT to power its core search business. Microsoft says it is now using Graphcore's chips for internal AI research projects involving natural language processing.
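For readers unfamiliar with the workload being benchmarked, a BERT fine-tuning step typically looks like the generic sketch below, written against the open-source Hugging Face transformers library. This is only an illustration of the task; it is not the code Microsoft or Graphcore used, which is not published here.

```python
# Generic illustration of the kind of BERT fine-tuning workload such
# benchmarks measure; this is not Microsoft's or Graphcore's code.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

texts = ["the chip trained quickly", "the run was painfully slow"]
labels = torch.tensor([1, 0])  # toy sentiment labels

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)   # forward pass computes the loss
outputs.loss.backward()                   # backward pass
optimizer.step()                          # one optimization step
```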
Karl Freund, who tracks the AI chip market at Moor Insights, says the results show the chip is cutting-edge but still flexible. A highly specialized chip could outperform one from Nvidia or Google but would not be programmable enough for engineers to develop new applications. "They've done a good job making it programmable," he says. "Good performance in both training and inference is something they've always said they would do, but it is really, really hard."
Freund adds that the deal with Microsoft is crucial for Graphcore's business, because it provides an on-ramp for customers to try the new hardware. The chip may well be superior to existing hardware for some applications, but it takes a lot of effort to redevelop AI code for a new platform. With a couple of exceptions, Freund says, the chip's benchmarks are not eye-popping enough to lure companies and researchers away from the hardware and software they are already comfortable using.
Graphcore has created a software framework called Poplar, which allows existing AI programs to be ported to its hardware. Plenty of existing algorithms may still be better suited to software that runs on top of rival hardware, though. Google's TensorFlow AI software framework has become the de facto standard for AI programs in recent years, and it was written specifically for Nvidia and Google chips. Nvidia is also expected to release a new AI chip next year, which is likely to have better performance.
Nigel Toon, cofounder and CEO of Graphcore, says the companies began working together a year after his company's launch, through Microsoft Research Cambridge in the UK. His company's chips are especially well-suited to tasks that involve very large AI models or temporal data, he says. One customer in finance supposedly saw a 26-fold performance boost in an algorithm used to analyze market data thanks to Graphcore's hardware.
A handful of other, smaller companies also announced today that they are working with Graphcore chips through Azure. This includes Citadel, which will use the chips to analyze financial data, and Qwant, a European search engine that wants the hardware to run an image-recognition algorithm known as ResNext.
The AI boom has already shaken up the market for computer chips in recent years. The best algorithms perform parallel mathematical computations, which can be done more effectively on graphics chips (GPUs), which have hundreds of simple processing cores, than on conventional chips (CPUs), which have a few complex processing cores.
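That parallelism argument can be seen in a few lines of code: a large matrix multiplication breaks into many independent multiply-accumulate operations, which is exactly what a GPU's many simple cores handle well. The PyTorch sketch below is only a rough illustration; actual speedups depend on the hardware and the problem size.

```python
# Rough illustration: the same matrix multiplication, on CPU and (if
# available) on a GPU. Real speedups depend on hardware and problem size.
import time
import torch

a = torch.randn(2048, 2048)
b = torch.randn(2048, 2048)

start = time.time()
torch.matmul(a, b)                      # runs on a few complex CPU cores
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    torch.cuda.synchronize()
    start = time.time()
    torch.matmul(a_gpu, b_gpu)          # spread across thousands of GPU cores
    torch.cuda.synchronize()
    print(f"GPU: {time.time() - start:.3f}s")
```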
The GPU-maker Nvidia has ridden the AI wave to riches, and Google announced in 2017 that it would develop its own chip, the Tensor Processing Unit, which is architecturally similar to a GPU but optimized for TensorFlow.
Graphcore's chips, which it calls intelligence processing units (IPUs), have many more cores than GPUs or TPUs. They also feature memory on the chip itself, which removes a bottleneck that comes with moving data onto a chip for processing and off again.
Facebook is also working on its own AI chips. Microsoft has previously touted reconfigurable chips made by Intel and customized by its engineers for AI applications. A year ago, Amazon revealed it was also getting into chipmaking, but with a more general-purpose processor optimized for Amazon's cloud services.
More recently, the AI boom has sparked a flurry of startup hardware companies to develop more specialized chips. Some of these are optimized for specific applications such as autonomous driving or surveillance cameras. Graphcore and a few others offer much more flexible chips, which are crucial for developing AI applications but also much more challenging to produce. The company's last investment round gave it a valuation of $1.7 billion.
Graphcore's chips might first find traction with top AI experts who are able to write the code needed to exploit their benefits. Several prominent AI researchers have invested in Graphcore, including Demis Hassabis, cofounder of DeepMind; Zoubin Ghahramani, a professor at the University of Cambridge and the head of Uber's AI lab; and Pieter Abbeel, a professor at UC Berkeley who specializes in AI and robotics. In an interview with WIRED last December, AI visionary Geoffrey Hinton discussed the potential for Graphcore chips to advance fundamental research.
Before long, companies may be tempted to try out the latest thing, too. As Graphcore's CEO Toon says, "Everybody's trying to innovate, trying to find an advantage."
This story originally appeared on wired.com.
Listing image by Graphcore
Read this article:
Microsoft sends a new kind of AI processor into the cloud - Ars Technica
Posted in Ai
"AI washing" threatens to overinflate expectations for the technology – Axios
Posted: at 2:33 pm
Zealous marketing departments, capital-hungry startup founders and overeager reporters are casting the futuristic sheen of artificial intelligence over many products that are actually driven by simple statistics or hidden people.
Why it matters: This "AI washing" threatens to overinflate expectations for the technology, undermining public trust and potentially setting up the booming field for a backlash.
The big picture: The tech industry has always been infatuated with the buzzword du jour. Before AI landed in this role, it belonged to "big data." Before that, everyone was "in the cloud" or "mobile first." Even earlier, it was "Web 2.0" and "social software."
Plenty of companies rely on one or the other of those tactics, which straddle the line between attractive branding and misdirection.
"It's really tempting if you're a CEO of a tech startup to AI-wash because you know you're going to get funding," says Brandon Purcell, a principal analyst at Forrester.
The tech sector's fake-it-till-you-make-it attitude plays into the problem.
The confusion and deception get an assist from the fuzzy definition of AI. It covers everything from state-of-the-art deep learning, which powers most autonomous cars, to 1970s-era "expert systems" that are essentially huge sets of human-coded rules.
The rest is here:
"AI washing" threatens to overinflate expectations for the technology - Axios
Posted in Ai
Comments Off on "AI washing" threatens to overinflate expectations for the technology – Axios
Toyota's AI Bets Go Beyond Automotive – Forbes
Posted: at 2:33 pm
Vehicle manufacturers know that they need to invest in autonomous technologies if they want to continue to remain relevant. As such, it should be no surprise that many car companies are investing in AI technologies to keep themselves competitive and relevant. Interviewed on an AI Today podcast episode, Jim Adler, Founding Managing Director of Toyota AI Ventures, shared insights into the sort of investments Toyota AI Ventures is making in the industry, how the automotive industry is benefiting from these investments, and what non-automotive related AI and ML investments they are making.
Jim Adler, Founding Managing Director at Toyota AI Ventures
Why AI-Related Investments are so important for Toyota
Founded in 2017, Toyota AI Ventures raised a $100 million fund to invest in artificial intelligence, cloud-based data, and robotics that may also leverage AI and cloud-based data. Toyota AI Ventures is a subsidiary of the Toyota Research Institute and helps AI ventures around the world bring new artificial intelligence technology to market. There are so many companies working to develop AI technology that can help improve the quality of life around the world.
Most of Toyota AI Ventures' investments have been in very early-stage startups with seed and series A funding. Not only is Toyota looking to get in early, but it is also looking for the most emerging technologies. In the future, Toyota AI Ventures might consider also investing in later-stage startups, but for now, it is sticking with the front-line investments.
Investing in startups helps Toyota learn what is working in the industry and where customers' interests are changing and evolving. The investments can also help Toyota learn about ways in which its own products might succeed. Adler says that if one of their startups succeeds, they celebrate that success with the startup, and if it fails, they take it as a learning experience. What Toyota looks for when it comes to selecting investments are applications that start whole new markets. Companies must show that they are willing to develop a detailed, full-spectrum approach to their development.
AI is such a hot area for research and development at the moment, and opportunities for investment are abundant. Many would think that Toyota is focused only on those areas that deal with vehicles or other technology that directly impacts Toyota; however, Toyota supports any technology that can change the future. It invests in a wide range of technologies.
Specific AI investments
One company that is part of Toyota's AI portfolio is Intuition Robotics. Intuition Robotics is focused on developing artificial intelligence solutions that act as a companion for the elderly. These AIs can converse with users to remind them to take their medication, suggest being more active, and otherwise help them to live healthier lives as they continue to age. Interactions like this have been proven to help seniors become healthier and create healthy habits. It can also help them to feel as if they have more socialization, especially if they live alone.
Another company that is part of Toyota's AI investment portfolio is Joby Aviation, a company aiming to deliver safe and affordable public air travel. This can get people off the road, lower commute times, and be better for the environment. Joby is developing a series of all-electric aircraft that are capable of utilizing VTOL (Vertical Take-Off and Landing) in order to transport people from one place to another. These airplanes will travel faster than helicopters and use complex software onboard to help with flight.
In a similar vein, SLAMcore, a London-based startup that works with drones, has also received funding from Toyota AI Ventures. Almost all drones currently on the market rely on GPS to be able to fly themselves. With SLAMcore's AI, however, both robots and drones can use spatial sensors to detect where they are and navigate.
These various companies exist in markets and areas that are outside the current core of Toyota's business. However, Toyota is making these investments to help discover what may be next for the company. The Toyota Research Institute was originally started to help Toyota develop self-driving cars, but now the company is realizing there is potential for more on the horizon, such as home robots or mobility options.
Speaking more toward the automotive industry, Adler says that Toyota is focusing on data-centricity when it comes to the future of Toyota and AI. Companies in other industries, such as Netflix or Amazon, have become more data-focused and are leaders in their industries, so Toyota sees this as important for itself as well. Adler mentioned that you can't fake it in the AI world, especially when it comes to using AI technology in cars. The technology has to be real and has to have a full approach. Multiple companies have been found trying to pass off humans as AI, and that won't work for many applications.
The future of AI
Adler acknowledges many of the challenges facing AI and their portfolio investments. While advancements in computer vision, machine learning, AI, and robotics are showing that the technology is able to do more than it has in the past, such as being able to deliver products to customers autonomously, successfully navigating real streets and terrain while avoiding obstacles and arriving at the correct destination consistently is something that hasn't been as easily achieved.
Autonomous vehicles as well as just more intelligent human-driven vehicles are areas where a lot of development is going on and even more innovative products are on the way. Toyota is working on a program known as Guardian that works to guard the driver against dangers on the road. The AI-enabled Guardian is designed to help make sure that drivers do not end up in situations that could be dangerous to them.
Toyota is just one of many companies to be increasing their AI investments and applications. Startups around the world have promised a lot of new AI applications and some have already delivered on them. What is in store for technology with the possibilities brought to the table by AI is very promising.
Posted in Ai
What Is The Future Of Enterprise AI? – Forbes
Posted: at 2:33 pm
With AI-driven automation on its way to becoming a weapon of war, given the increasing involvement of state players in automation warfare, what will it mean for an enterprise to stay competitive and survive?
Introduction
Artificial intelligence is redefining the very meaning of being an enterprise. The rapidly advancing artificial intelligence (AI) capability is on its way to revolutionizing every aspect of an enterprise. The ability to access data has leveled the playing field and brought every enterprise a unique possibility of progress. What needs to be seen is, in this level playing field, which enterprises will be able to compete and lay a new foundation for fundamental transformation and which ones will decline.
Acknowledging this evolving reality, Risk Group initiated a much-needed discussion on The Future of Enterprise AI with Ankur Dinesh Garg on Risk Roundup.
Disclosure: I am the CEO of Risk Group LLC.
Risk Group discusses "The Future of Enterprise AI" with Ankur Dinesh Garg, chief of artificial intelligence at Hotify Inc., board member and chief of artificial intelligence at Sonasoft, a board member at Iamwire, advisor to many companies and member at Forbes Technology Council based in the United States.
Purpose of Enterprise AI
Enterprises across industries are undergoing a profound and lasting shift in the relative balance of AI adoption. AI application will offer each enterprise as many opportunities as it does challenges. While access to technology, data, and information is common to all enterprises, what is not common is how each enterprise uses that information, and for what reason. While AI has given enterprises across industries and nations the same starting point in access to AI technology, it is crucial to understand the parameters that will define their individual and collective success.
There are many variables in each enterprise ecosystem that will determine whether an enterprise will be able to use the data and information from its ecosystem to develop AI, automate, and transform to succeed. Ankur Garg expands on this notion on Risk Roundup: "All the enterprises are running the race of AI, and who is going to win largely depends on many crucial elements. For example, how accurately enterprise leaders can articulate the problem that they are facing, and the business impact and the value associated with the problem."
As the state of AI deployment accelerates, it is difficult to grasp what staying competitive means for an enterprise's survival. It is an understatement that enterprises across nations are expected to face extraordinary challenges and changes in the coming years, with automation-driven growth as the only constant in those changes. As a result, it is vital to understand what AI-driven growth means for enterprises.
Emerging Trends
The emerging trends in AI-driven automation reflect significant shifts of players and actions in the AI sphere that reveal the reconfigurations of ideas, interests, influence, and investments in the AI domain of enterprise adoption and transformation. Enterprises are beginning to understand the consequences of the evolving artificial intelligence-driven automation ecosystem far beyond narrow artificial intelligence, crossing economic, commerce, education, governance, and trade supply chains. While the relationship between enterprises and automation is complicated, and at times indirect, the force and pace of AI-driven automation change expected in the coming years will present each enterprise with challenges and opportunities for its products, services, processes, operations, and supply chains. From what it seems, the AI applications of tomorrow will be hybrid systems composed of several components and reliant on many different data sets, methodologies, and models.
The growing layers of cyberspace are connecting humans and machines across cyberspace, aquaspace, geospace, and space (CAGS). It is not only the human users that are getting connected, but the growing number of internet of things (IoT) devices are also getting active and operational with the rollout of 5G. Individually and collectively, the ever-increasing connectivity of man and machines, living and non-living, is creating enormous amounts of data and is driving the rapid expansion of AI across enterprises.
However, so far, there has not been enough processing power for enterprises to implement ideal AI techniques. While AI-driven automation emerged a few years ago, it is only now maturing, as cloud computing and massively parallel processing systems advance AI implementation further. As a result, AI-driven automation adoption is now progressing further as an essential trend.
There are many functional parts of enterprises that are already benefiting from the AI transformation. From R&D projects, customer service, finance, and accounting to IT, there are rapid shifts from experimental to applied AI technology across enterprises. There is no doubt that each enterprise will benefit, from intelligent decision making to streamlined supply chains, customer relations to recruitment practices. At the same time, AI-driven automation is on its way to becoming a war weapon, as shown by the increasing involvement of state players in automation warfare. This is aimed at crippling AI competition and is progressing rapidly despite the growing complexities and challenges.
As Enterprise AI demand grows, so does the rise of AI-as-a-service. Moreover, AI-driven automation, data analytics, and low-code platforms are converging as AI fundamentally shifts the competitive landscape. New organizational capabilities are becoming critical, and so is the need to effectively manage the growing security risks of dual-use of AI.
When common sense tasks become more straightforward for computers to process, AI-driven intelligent applications and robots will become extremely useful in enterprise operations and supply chains. While a limited understanding of use cases (what problems can be solved using AI, where to apply AI, what data sets to use, how to get credible data and skilled resources) still slows down AI adoption, company culture also plays a vital role in AI adoption strategies and is proving to be a barrier to AI adoption.
Enterprise Digital Data Infrastructure
While enterprises are taking advantage of AI and are beginning to harness these technologies and benefits, the AI growth for any industry is driven and shaped by several variables and external factors, many of which can be amplified or influenced by data choices made at the enterprise or industry level. So, how will availability, affordability, accessibility, and integrity of data impact potential AI growth for enterprises across nations?
As seen, many enterprises lack the necessary digital data infrastructure. The lack of digital support, in turn, discourages opportunities and innovations in AI, making it challenging to address enterprise needs adequately and leaving enterprises with outdated data, information, and intelligence. Moreover, the credibility of the data sets is also an emerging concern. That brings us to two important questions: how are enterprises addressing digital data infrastructure challenges? What are the different data types that are important for enterprises?
While enterprises are currently using AI in areas for which they already have some data and analytics in place, many meaningful data partnerships are emerging. The emerging integrated structured data and text, when available to train AI systems, will bring necessary progress in enterprise AI. It will be interesting to see how this new data-driven world reality brings each enterprise across industries both opportunities and risks.
What Next?
The potential of Enterprise AI can transform the enterprise ecosystem in many ways. From decision making to supply chain intelligence and tracking capabilities to the automation of business processes, AI can change the entire enterprise ecosystem across CAGS. The time is now to understand its risks and rewards.
Posted in Ai
Where AI and ethics meet – Cosmos
Posted: at 2:33 pm
By Stephen Fleischresser
Given a swell of dire warnings about the future of artificial intelligence over the last few years, the field of AI ethics has become a hive of activity.
These warnings come from a variety of experts such as Oxford University's Nick Bostrom, but also from more public figures such as Elon Musk and the late Stephen Hawking. The picture they paint is bleak.
In response, many have dreamed up sets of principles to guide AI researchers and help them negotiate the maze of human morality and ethics. Now, a paper in Nature Machine Intelligence throws a spanner in the works by claiming that such high principles, while laudable, will not give us the ethical AI society we need.
The field of AI ethics is generally broken into two areas: one concerning the ethics guiding humans who develop AIs, and the other machine ethics, guiding the moral behaviour of the AIs or robots themselves. However, the two areas are not so easily separated.
Machine ethics has a long history. In 1950 the great science fiction writer Isaac Asimov clearly articulated his now famous three laws of robotics in his work I, Robot, and proposed them as such:
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Later a zeroth law was added: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These laws together were Asimov's (and editor John W. Campbell's) musing on how to ensure an artificially intelligent system would not turn on its creators: a safety feature designed to produce friendly and benevolent robots.
Isaac Asimov articulated his three laws of robotics in 1950. (Photo: Alex Gotfryd/CORBIS/Corbis via Getty Images)
Asimov explored the limits of the three laws in numerous writings, often finding them wanting. While the laws were a literary device, they have nonetheless informed the real-world field of AI ethics.
In 2004, the film adaptation of I, Robot was released, featuring an AI whose interpretation of the three laws led to a plan to dominate human beings in order to save us from ourselves.
To highlight the flaws in the ethical principles of the three laws, an organisation called the Singularity Institute for Artificial Intelligence (now the Machine Intelligence Research Institute), headed up by the American AI researcher Eliezer Yudkowsky, started an online project called Three Laws Unsafe.
Yudkowsky, an early theorist of the dangers of super-intelligent AI and proponent of the idea of Friendly AI, argued that such principles would be hopelessly simplistic if AI ever developed to the stage depicted in Asimov's fictions.
Despite widespread recognition of the drawbacks of the three laws, many organisations, from private companies to governments, nonetheless persisted with projects to develop principle-based systems of AI ethics, with one paper listing 84 documents containing ethical principles or guidelines for AI that have been published to date.
This continued focus on ethical principles is partly because, while the three laws were designed to govern AI behaviour alone, principles of AI ethics apply to AI researchers as well as the intelligences that they develop. The ethical behaviour of AI is, in part, a reflection of the ethical behaviour of those that design and implement them, and because of this, the two areas of AI ethics are inextricably bound to one another.
AI development needs strong moral guidance if we are to avoid some of the more catastrophic scenarios envisaged by AI critics.
A review published last year by AI4People, an initiative of the international non-profit organisation Atomium-European Institute for Science, Media and Democracy, reports that many of these projects have developed sets of principles that closely resemble those in medical ethics: beneficence (do only good), nonmaleficence (do no harm), autonomy (the power of humans to make individual decisions), and justice.
This convergence, for some, lends a great deal of credibility to these as possible guiding principles for the development of AIs in the future.
However, Brent Mittelstadt of the Oxford Internet Institute and the British Government's Alan Turing Institute, an ethicist whose research concerns primarily digital ethics in relation to algorithms, machine learning, artificial intelligence, predictive analytics, Big Data and medical expert systems, now argues that such an approach, called principlism, is not as promising as it might look.
Mittelstadt suggests significant differences between the fields of medicine and AI research that may well undermine the efficacy of the former's ethical principles in the context of the latter.
His first argument concerns common aims and fiduciary duties, the duties in which trusted professionals, such as doctors, place others' interests above their own. Medicine is clearly bound together by the common aim of promoting the health and well-being of patients, and Mittelstadt argues that it is a defining quality of a profession for its practitioners to be part of a moral community with common aims, values and training.
For the field of AI research, however, the same cannot be said. "AI is largely developed by the private sector for deployment in public (for example, criminal sentencing) and private (for example, insurance) contexts," Mittelstadt writes. "The fundamental aims of developers, users and affected parties do not necessarily align."
Similarly, the fiduciary duties of the professions and their mechanisms of governance are absent in private AI research.
"AI developers do not commit to public service, which in other professions requires practitioners to uphold public interests in the face of competing business or managerial interests," he writes. "In AI research, public interests are not granted primacy over commercial interests."
In a related point, Mittelstadt argues that while medicine has a professional culture that lays out the necessary moral obligations and virtues stretching back to the physicians of ancient Greece, AI development does not have a comparable history, homogeneous professional culture and identity, or similarly developed professional ethics frameworks.
Medicine has had a long time over which to learn from its mistakes and the shortcomings of the minimal guidance provided by the Hippocratic tradition. In response, it has codified appropriate conduct into modern principlism which provides fuller and more satisfactory ethical guidance.
AI research is obviously a far younger field, devoid of these rich historical opportunities to learn. Further complicating the issue is that the context of application for medicine is comparatively narrow, whereas AI can in principle be deployed in any context involving human expertise, leading it to be radically multi- and interdisciplinary, with researchers coming from varied disciplines and professional backgrounds, which have incongruous histories, cultures, incentive structures and moral obligations.
This makes it extraordinarily difficult to develop anything other than broadly acceptable principles to guide the people and processes responsible for the development, deployment and governance of AI across radically different contexts of use. The problem, says Mittelstadt, is translating these into actual good practice. At this level of abstraction, he warns, meaningful guidance may be impossible.
Finally, the author points to the relative lack of legal and professional accountability mechanisms within AI research. Where medicine has numerous layers of legal and professional protections to uphold professional standards, such things are largely absent in AI development. Mittelstadt draws on research showing that codes of ethics do not themselves result in ethical behaviour, without those codes being embedded in organisational culture and actively enforced.
"This is a problem," he writes. "Serious, long-term commitment to self-regulatory frameworks cannot be taken for granted."
All of this together leads Mittelstadt to conclude: "We must therefore hesitate to celebrate consensus around high-level principles that hide deep political and normative disagreement."
Instead he argues that AI research needs to develop binding and highly visible accountability structures at the organisational level, as well as encouraging actual ethical practice in the field to inform higher level principles, rather than relying solely on top-down principlism. Similarly, he advocates a focus on organisational ethics rather than professional ethics, while simultaneously calling for the professionalisation of AI development, partly through the licensing of developers of high-risk AI.
His final suggestion for the future of AI ethics is to exhort AI researchers not to treat ethical issues as design problems to be solved. "It is foolish to assume," he writes, "that very old and complex normative questions can be solved with technical fixes or good design alone."
Instead, he writes that intractable principled disagreements should be expected and welcomed, as they reflect both serious ethical consideration and diversity of thought. They do not represent failure, and do not need to be solved. Ethics is a process, not a destination. The real work of AI ethics begins now: to translate and implement our lofty principles, and in doing so to begin to understand the real ethical challenges of AI.
Posted in Ai
Why business leaders are short sighted on AI – ZDNet
Posted: at 2:33 pm
Artificial intelligence is one of the most marketed (hyped, you might say) and ill-defined technology categories being packaged and lobbed at those in the enterprise these days. You might think overexposure would lead to fatigue and a healthy dose of skepticism.
Not so, according to original research from IFS, a global enterprise applications company. IFS recently released the findings of a global research study into attitudes and strategies towards artificial intelligence among business leaders. The study polled 600 business leaders worldwide, across a broad spectrum of industries, who are involved with their companies' enterprise technology, including enterprise resource planning, enterprise asset management, and field service management.
The range of findings are worth a closer look, but the headline is that business leaders across a variety of industries are convinced AI will be an essential component of their companies' success in the near future. In fact, about 90% of respondents reported at least some plans to implement AI in various parts of their business. That's a telling statistic. Whether motivated by fear of missing out or clear-eyed optimism, business leaders seem sold on AI's promise.
According to the study, industrial automation was the most commonly reported area of investment, with 44.6% planning AI projects, while customer relationship management and inventory planning and logistics tied for second place at 38.9%.
"AI is no longer an emerging technology. It is being implemented to support business automation in the here and now, as this study clearly proves," IFS VP of AI and RPA Bob De Caux said. "We are seeing many real-world examples where technology is augmenting existing decision-making processes by providing users with more timely, accurate and pertinent information. In today's disruptive economy, the convergence of technologies such as AI, RPA, and IoT is bolstering a new form of business automation that will provide companies that are brave enough with the tools and services they need to be more competitive and outflank larger competitors."
When asked how they plan to use AI, 60.6% of respondents to the IFS study said they expected it would help them make existing workers more productive. Just under half, 47.9%, said they would use AI to add value to products and services they sell to customers. About 18.1% said they would proactively use it to replace existing workers.
The data suggest industrial and business leaders are enthusiastically planning to involve AI in their business. They may still be coming to terms, however, with the implications of the resulting transformation. While a majority cite increased productivity to justify AI investments, many executives aren't looking ahead to the inevitable reduced demand for labor. As the report points out, it's unlikely consumption levels or demand will increase in proportion to productivity.
That is, if technologies like RPA can really live up to the productivity hype. Generalized confidence in an outcome should never be mistaken for proof of that outcome.
Just how stark is the failure on the part of business leaders to think about the effect the technology could have on workers? Consider that while a majority of respondents anticipated productivity increases from AI, only 29.3% anticipated AI would lead to a reduction in headcount in their industry.
It doesn't take AI to recognize that something seems mighty amiss there.
Posted in Ai
Perception won’t be reality, once AI can manipulate what we see | TheHill – The Hill
Posted: at 2:33 pm
Voice-spoofing technology was used to steal a quarter-million dollars in March from the unwitting CEO of an energy company, who thought he was talking to his (German) boss. A recent study showed that 72 percent of people reading an AI-generated news story thought it was credible. In September, a smartphone app called Zao became a viral sensation in China; before the government abruptly outlawed it, Zao allowed people to seamlessly swap themselves into famous movie scenes.
Then there is that infamous case of doctored video of the House Speaker Nancy Pelosi (D-Calif.) that went viral before being detected as being manipulated to make her appear drunk.
Most of the recent advances in AI (artificial intelligence) have come in the realm of perceptual intelligence. This has enabled our devices to see (and recognize faces of our friends, for example), to hear (and recognize that song) and even to parse text (and recognize the rough intent of the email in your mailbox). Today's AI technology can also generate these percepts: our devices can generate scenes and faces that never existed, clone voice to generate speech, and even write pithy (if stilted) responses to the emails in your inbox.
This ability to generate perceptions puts AI in a position of great promise and great peril.
Synthetic media can have many beneficial applications. After all, inducing suspension of disbelief in the audience is the cornerstone of much of entertainment. Nevertheless, it is the potential misuses of the technology, especially those going under the name of deep fakes, that are raising alarms.
If perception is reality, then what happens to reality when AI can generate or manipulate perceptions? Although forgeries, fakes and spoofs have existed for much of human history, they had to be crafted manually until now. The advent of perceptual AI technology has considerably reduced the effort needed to generate convincing fakes. As we saw, the Zao app allowed lay users to swap themselves into movie scenes. What is more, as the technology advances, it will become harder to spot the fakes. Sites such as Which Face is Real? show that, already, most people cannot tell AI-generated images from real ones.
Easy generation and widespread dissemination of synthetic media can have quite significant adverse consequences for many aspects of civil society. Elections can be manipulated through the spread of deep fake videos that put certain candidates in compromising positions. Spoofing voice and video calls can unleash a slew of new consumer scams. Individual privacy can be invaded by inserting people's likenesses into compromising (and sometimes pornographic) pictures and videos.
What are our options in fighting this onslaught of AI-enabled synthetic media? To begin with, AI technology itself can help us detect deep fakes by leveraging the known shortfalls in the current AI technology; there are techniques that spot fake text, voice, images and video. For example, in the case of images, fakes can be detected by imperceptible pixel-level imperfections or background inconsistencies; it is hard for most fake-generators to get the background details correct. (In much the same way, when we remember our dreams in the morning, the parts that don't make sense are often not the faces of the people but, rather, the background story.) For detecting fake videos of people, current techniques focus on the correlations between lip movements, speech patterns and gestures of the original speaker. Once detected, fake media can be added to some global databases of known fakes, helping with their faster identification in the future.
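As a very rough sketch of the pixel-level idea, one family of detectors extracts high-frequency noise residuals from an image and trains an ordinary classifier on simple statistics of those residuals. The toy example below uses random arrays as stand-ins for real and generated images and is illustrative only; production detectors are far more sophisticated.

```python
# Toy illustration of residual-based fake detection: high-pass filter each
# image, summarise the residual, and train a simple classifier. Random
# arrays stand in for real/generated images; real systems are far richer.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.linear_model import LogisticRegression

def residual_features(image: np.ndarray) -> list:
    # High-frequency residual: image minus a smoothed copy of itself.
    residual = image - median_filter(image, size=3)
    return [residual.mean(), residual.std(), np.abs(residual).max()]

rng = np.random.default_rng(0)
real = [rng.normal(0.5, 0.2, (64, 64)) for _ in range(20)]
fake = [rng.normal(0.5, 0.25, (64, 64)) for _ in range(20)]  # stand-in "fakes"

X = [residual_features(img) for img in real + fake]
y = [0] * len(real) + [1] * len(fake)

clf = LogisticRegression().fit(X, y)
print(clf.predict([residual_features(rng.normal(0.5, 0.25, (64, 64)))]))
```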
Beyond detection, there are incipient attempts at regulation. California recently passed Assembly Bill 730, which makes certain deep fake videos illegal and provides some measure of protection against invasion of individual privacy. Twitter is establishing its own guidelines for tagging synthetic media (deep fakes) with community help. Non-profit organizations like the Partnership on AI have established steering committees to study approaches to ensuring the integrity of perceptual media. Other technology companies, including Facebook and AI Foundation, have supported gathering and sharing benchmark data sets to help accelerate research into deep fake detection. AI Foundation has released a platform, called Reality Defender 2020, specifically to help combat the impact of deep fakes on the 2020 elections.
While policies are important, so is educating the public about the need to be skeptical of perceptions in this age of AI. After all, the shortcomings of today's generation technology are not likely to persist into the future. In the long term, we should expect AI systems to be capable of producing fakes that cannot be spotted either by us or by our AI techniques. We have to gird ourselves for a future in which our AI-generated doppelgangers may come across to our acquaintances as more authentic than we do. Hopefully, by then, we will have learned not to trust our senses blindly and, instead, to insist on provenance, such as cryptographic authentication, to establish the trustworthiness of what we perceive. Asking our loved ones on the phone to authenticate themselves may offend our sense of trust, but it may be the price we have to pay as AI's ability to generate and manipulate media becomes ever more sophisticated.
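As a rough illustration of what such cryptographic provenance could look like, the Python sketch below uses the third-party cryptography package to sign media bytes with an Ed25519 key and verify them later. It is a toy, not a production provenance system; key distribution, metadata and trust in the signer are all glossed over.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# The creator generates a key pair once; the public key is shared out of band.
creator_key = ed25519.Ed25519PrivateKey.generate()
public_key = creator_key.public_key()

def sign_media(media_bytes):
    # Signature attached at capture or publication time.
    return creator_key.sign(media_bytes)

def is_authentic(media_bytes, signature):
    # Any viewer holding the public key can check the bytes were not altered.
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

video = b"...raw video bytes..."
sig = sign_media(video)
print(is_authentic(video, sig))          # True
print(is_authentic(video + b"x", sig))   # False: any alteration breaks the signature

The point is not the particular algorithm but the shift it represents: trust moves from "does this look real?" to "can its origin be verified?"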
As deep fakes increase in sophistication, so will our immunity to them: We will learn not to trust our senses, and to insist on authentication. The scary part of the deep fake future is not the long term but the short term, before we outgrow our "seeing is believing" mindset. One consolation is that this vulnerable short term may also be the only period in which AI itself can still be an effective part of the solution to the problem it has wrought.
Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and chief AI officer for AI Foundation, which focuses on the responsible development of AI technologies. He served as president and is now past-president of the Association for the Advancement of Artificial Intelligence and was a founding board member of Partnership on AI. He can be followed on Twitter @rao2z.
More:
Perception won't be reality, once AI can manipulate what we see | TheHill - The Hill
Posted in Ai
Comments Off on Perception won’t be reality, once AI can manipulate what we see | TheHill – The Hill
How artificial intelligence is redefining the role of manager – World Economic Forum
Posted: at 2:33 pm
Artificial intelligence (AI) will impact every job, in every industry and every country. There are significant fears that AI will eliminate jobs altogether. Many reports have exposed the harsh realities of workforce automation, especially for certain types of jobs and demographics. For instance, the Brookings Institution found that automation threatens 25% of all US jobs, with an emphasis on low-wage earners in positions where tasks are routine-based. A separate study by the Institute for Women's Policy Research found that women hold 58% of the jobs at highest risk of automation.
Yet despite these realities, we are beginning to accept our new AI world and adopt these technologies as we see the potential new opportunities. Other studies emphasize how AI will create more jobs or just remove tasks within jobs. A new global study by Oracle and Future Workplace of 8,370 employees, managers and HR leaders across 10 countries, found that almost two-thirds of workers are optimistic, excited and grateful about AI and robot co-workers. Nearly one-quarter went as far as saying they have a loving and gratifying relationship with AI at work, showing an appreciation for how it simplifies and streamlines their lives.
[Chart] Proportion of respondents who believe robots will one day replace their managers. Image: Oracle & Future Workplace AI@Work Study 2019
Surprisingly, last year, we discovered that the majority of workers would trust orders from a robot. This year, almost two-thirds of workers said they would trust orders from a robot over their manager, and half have already turned to a robot instead of their manager for advice. At American Express, decisions like figuring out what product offer is most relevant to different customer segments are now handled by AI, eliminating the need for managers and employees to discuss these tasks.
Now that AI is removing many of the administrative tasks typically handled by managers, their roles are evolving to focus more on soft over hard skills. The survey found that workers believe robots are better than their managers at providing unbiased information, maintaining work schedules, problem-solving and budget management, while managers are better at empathy, coaching and creating a work culture.
Anthony Mavromatis, vice-president of customer data science and platforms at American Express, points out another way that AI is changing the manager's role: "AI is increasingly freeing up their time and allowing them to focus on the essence of their job. Going forward, what really matters is the very human skill of being able to be creative and innovate, something that AI isn't good at yet." By cutting them loose from tasks traditionally expected of them, AI allows managers to focus on forging stronger relationships with their teammates and having a greater impact in their roles.
Companies such as Hilton that were early in using AI to simplify their recruiting process are now expanding its use to other applications, like digital assistants, for processes including feedback and performance reviews. They envision that a digital assistant will allow an employee to say something like, "I want to take next Friday off, please schedule," and the necessary HR steps will be taken. The digital assistant will be usable from a mobile device or a desktop, whichever is most convenient. "When you think about the number of hotel employees who work throughout our hotels serving guests with limited or no time on a computer, and the time constraints we all face, this mobile capability will be a game-changer," says Kellie Romack, Hilton's vice-president of digital HR and strategic planning. The company is primed to use AI to help it focus on the needs of both employees and guests.
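As a rough, hypothetical sketch of the kind of request handling such a digital assistant performs, the Python snippet below maps a free-text utterance to a structured HR action. The intent rule, date logic and action names are invented for illustration and are not Hilton's actual system.

import re
from datetime import date, timedelta

def next_weekday(target, today):
    # 0 = Monday ... 4 = Friday; always returns a date strictly after today.
    days_ahead = (target - today.weekday()) % 7 or 7
    return today + timedelta(days=days_ahead)

def handle_utterance(text, today=None):
    # Map a free-text request to a structured HR action (illustrative rule only).
    today = today or date.today()
    if re.search(r"\b(take|book)\b.*\b(off|leave|vacation)\b", text, re.IGNORECASE):
        when = next_weekday(4, today) if "friday" in text.lower() else None
        return {"action": "request_time_off", "date": when}
    return {"action": "unknown", "date": None}

print(handle_utterance("I want to take next Friday off, please schedule"))
# {'action': 'request_time_off', 'date': <next Friday's date>}

Real assistants replace the regular expression with a trained language-understanding model and wire the resulting action into the HR system of record, but the structure of the task is the same.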
AI won't be replacing a manager's job; it will be supplementing it. The future of work is one where robots and humans work side by side, helping each other get work done faster and more efficiently than ever before. As Mavromatis puts it: "AI plus human equals the future. It's not one or the other."
Written by
Dan Schawbel, Partner and research director, Future Workplace
The views expressed in this article are those of the author alone and not the World Economic Forum.
See the original post here:
How artificial intelligence is redefining the role of manager - World Economic Forum
Posted in Ai
Comments Off on How artificial intelligence is redefining the role of manager – World Economic Forum