MEPs in ‘urgent’ call for new laws on artificial intelligence and robotics – The Register

The European Parliament today called for EU-wide liability laws to cover robotics and artificial intelligence. MEPs also want researchers to adopt ethical standards that "respect human dignity".

In a resolution today MEPs noted that several countries are planning robotics regulations and that the EU needs to take the lead on setting these standards, so as not to be forced to follow those set by third countries.

According to a European Parliamentary press release, MEPs said draft legislation was urgently needed to clarify liability in accidents involving self-driving cars.

Although manufacturers including Volvo, Google, and Mercedes say they will accept full liability if their autonomous vehicles cause a collision, this is not currently a legal requirement.

MEPs recommended a mandatory insurance scheme and a supplementary fund to ensure that victims of accidents involving driverless cars are fully compensated.

Additionally, they propose a voluntary ethical code of conduct for robotics researchers and designers to ensure that machines operate in accordance with legal and ethical standards, and that robot design and use respect human dignity.

The resolution arises from a report by Mady Delvaux MEP, which was adopted by the European Parliament's committee on legal affairs in January.

Several of its clauses regarding the potential introduction of a basic income to deal with the effect that robotics and artificial intelligence may have on the jobs market were removed, prompting Delvaux to complain: "Although I am pleased that the plenary adopted my report on robotics, I am also disappointed that the right-wing coalition of ALDE, EPP and ECR refused to take account of possible negative consequences on the job market. They rejected an open-minded and forward-looking debate and thus disregarded the concerns of our citizens."

MEPs also asked the Commission to consider creating a European agency for robotics and artificial intelligence, which would be available to supply public authorities with technical, ethical and regulatory expertise.

The European Parliament resolution will now be answered by the European Commission, which alone has legislative initiative in the EU. The Commission is not obliged to draft new laws but must explain its rationale for rejecting Parliamentary resolutions.

Therese Comodini Cachia MEP, of the Maltese centre-right Nationalist Party and Parliament's rapporteur for robotics, said: "Despite the sensations reported in the past months, I wish to make one thing clear: Robots are not humans and never will be," EU Reporter reports. "No matter how autonomous and self-learning they become, they do not attain the characteristics of a living human being. Robots will not enjoy the same legal physical personality.

"However for the purposes of the liability for damages caused by robots, the various legal possibilities need to be explored. Who will bear responsibility in case of an accident of an automated car? How will any legal solution affect the development of robotics, those who own them and victims of the damage?

"We invite the European Commission to consider the impact of different solutions to make sure that harm caused to persons and to our environment is properly addressed," she concluded.


Artificial intelligence has brought doubt and suspicion to the ancient world of Japanese chess – Quartz

Japan's embrace of modern technology has never been fully comfortable or all-encompassing. Robot animals keep nonagenarians company in nursing homes, even as banking remains firmly stuck in the past. Robot dinosaurs tend to guests at a hotel, while ...


Artificial intelligence doesn’t have to be a job killer – ZDNet


What impact will artificial intelligence (AI) have on the workforce? Will smart machines really replace a large number of people in a variety of jobs?


These questions have been on the minds of a lot of people of late -- especially as AI becomes even more advanced. Clearly the technology will take away the need for some functions that are now performed by humans. But there's good reason to believe that AI will actually create a lot of new jobs as well -- at least in some areas of the economy.

"For information workers, the near-term opportunity is to leverage machine learning and natural language processing to make sense of a disconnected and cacophonic set of information sources, so people can focus on what matters most to them," said David Lavenda, vice president of product strategy at mobile-enterprise collaboration company Harmon.ie, who does academic research on information overload in organizations.

AI automation now is best geared toward specific, highly-contextual tasks, Lavenda said. "In the consumer world, we are seeing things like customer service bots," he said. "But information workers typically operate in a broad range of tasks and responsibilities. Without a definite context, AI will struggle to make decisions independently."

For example, IBM is focusing Watson's AI capabilities on highly-contextual business cases such as evaluating health studies and helping doctors make decisions.

Still, organizations and individuals need to prepare for the growing role of AI in the workplace.

"The trick is to make it easier for workers to consume the increasing amount of disconnected information, not make them learn new skills," Lavenda said. "People want to focus on the business, not on learning new technology. If anything, the promise of AI is that people won't have to know more IT skills to be effective."

The focus on AI in the enterprise should be on making workers' lives simpler, not more difficult, Lavenda said. "People are already inundated by continuous new software and gadgets," he said. "They just can't keep up. The future lies in hiding complexity, not introducing new complexity."

Some industries are feeling the impact of AI sooner than others. For instance, healthcare is already seeing an impact from IBM's AI-based Watson technology, Lavenda said. "Since AI is a horizontal technology, it will appear first in industries where suppliers identify key use cases," he said.

One promising use case Lavenda cites is helping salespeople close more business by connecting disconnected information from sources such as Salesforce, Zendesk, SharePoint, email, Yammer, and Chatter into one coherent picture of what's happening with their business. "Without having to learn any new skills or install new apps, AI-based solutions can present this information in a coherent fashion right within email or within a document window, so that salespeople can focus on closing business, not using technology," he said.

Long term, there is no doubt that AI will impact jobs. "Like in the past, all new technology displaces professions," Lavenda said. "We don't have many telegraph or telephone operators today, to say nothing of keypunch data entry clerks. Yet new technologies bring new opportunities, and at least so far the new technologies increase the number of job opportunities, not lessen them."



Could artificial intelligence hold the key to predicting earthquakes? – CBS News

Women cry in front of damaged houses in a street in the central Italian village of Illica on August 24, 2016, following a powerful earthquake.


Can artificial intelligence, or machine learning, be deployed to predict earthquakes, potentially saving thousands of lives around the world? Some seismologists are working to find out. But they know such efforts are eyed with suspicion in the field.

"You're viewed as a nutcase if you say you think you're going to make progress on predicting earthquakes," Paul Johnson, a geophysicist at Los Alamos National Laboratory, told Scientific American.

In the past, scientists have used various criteria to try to predict earthquakes, including foreshocks, electromagnetic disturbances and changes in groundwater chemistry. Slow slip events, that is, tectonic motion that unfolds over weeks or months, have also been placed under the microscope for clues to certain earthquakes.


But no approach thus far has made a significant difference.

Johnson and his colleagues are now trying a new approach: they are applying machine-learning algorithms to massive data sets of measurements taken continuously before, during and after lab-simulated earthquake events, trying to discover hidden patterns that can illuminate when future artificial quakes are most likely to happen. The team is also applying machine-learning analysis to raw data from real temblors.

The research has already produced interesting results.

"The researchers found the computer algorithm picked up on a reliable signal in acoustical data: creaking and grinding noises that continuously occur as the lab-simulated tectonic plates move over time," Scientific American reported. "The algorithm revealed these noises change in a very specific way as the artificial tectonic system gets closer to a simulated earthquake, which means Johnson can look at this acoustical signal at any point in time and put tight bounds on when a quake might strike."

"This is just the beginning," Johnson told the magazine. "I predict, within the next five to 10 years, machine learning will transform the way we do science."

© 2017 CBS Interactive Inc. All Rights Reserved.


Artificial Intelligence Is Becoming A Major Disruptive Force In Banks’ Finance Departments – Forbes


A combination of elements including massive distributed computing power, the decreasing cost of data storage, and the rise of open source frameworks is helping to accelerate the application of artificial intelligence (AI). Our own research indicates ...



Can Artificial Intelligence Predict Earthquakes? – Scientific American

Predicting earthquakes is the holy grail of seismology. After all, quakes are deadly precisely because they're erratic: striking without warning, triggering fires and tsunamis, and sometimes killing hundreds of thousands of people. If scientists could warn the public weeks or months in advance that a large temblor is coming, evacuation and other preparations could save countless lives.

So far, no one has found a reliable way to forecast earthquakes, even though many scientists have tried. Some experts consider it a hopeless endeavor. "You're viewed as a nutcase if you say you think you're going to make progress on predicting earthquakes," says Paul Johnson, a geophysicist at Los Alamos National Laboratory. But he is trying anyway, using a powerful tool he thinks could potentially solve this impossible puzzle: artificial intelligence.

Researchers around the world have spent decades studying various phenomena they thought might reliably predict earthquakes: foreshocks, electromagnetic disturbances, changes in groundwater chemistry, even unusual animal behavior. But none of these has consistently worked. Mathematicians and physicists even tried applying machine learning to quake prediction in the 1980s and '90s, to no avail. "The whole topic is kind of in limbo," says Chris Scholz, a seismologist at Columbia University's Lamont-Doherty Earth Observatory.

But advances in technology (improved machine-learning algorithms and supercomputers, as well as the ability to store and work with vastly greater amounts of data) may now give Johnson's team a new edge in using artificial intelligence. "If we had tried this 10 years ago, we would not have been able to do it," says Johnson, who is collaborating with researchers from several institutions. Along with more sophisticated computing, he and his team are trying something in the lab no one else has done before. They are feeding machines raw data: massive sets of measurements taken continuously before, during and after lab-simulated earthquake events. They then allow the algorithm to sift through the data to look for patterns that reliably signal when an artificial quake will happen. In addition to lab simulations, the team has also begun doing the same type of machine-learning analysis using raw seismic data from real temblors.

This is different from how scientists have attempted quake prediction in the past: they typically used processed seismic data, called earthquake catalogues, to look for predictive clues. These data sets contain only earthquake magnitudes, locations and times, and leave out the rest of the information. By using raw data instead, Johnson's machine-learning algorithm may be able to pick up on important predictive markers.

Johnson and collaborator Chris Marone, a geophysicist at The Pennsylvania State University, have already run lab experiments using the school's earthquake simulator. The simulator produces quakes randomly and generates data for an open-source machine-learning algorithm, and the system has achieved some surprising results. The researchers found the computer algorithm picked up on a reliable signal in acoustical data: creaking and grinding noises that continuously occur as the lab-simulated tectonic plates move over time. The algorithm revealed these noises change in a very specific way as the artificial tectonic system gets closer to a simulated earthquake, which means Johnson can look at this acoustical signal at any point in time and put tight bounds on when a quake might strike.

For example, if an artificial quake was going to hit in 20 seconds, the researchers could analyze the signal to accurately predict the event to within a second. "Not only could the algorithm tell us when an event might take place within very fine time bounds; it actually told us about physics of the system that we were not paying attention to," Johnson explains. "In retrospect it was obvious, but we had managed to overlook it for years because we were focused on the processed data." In their lab experiments the team looked at the acoustic signals and predicted quake events retroactively, but Johnson says the forecasting should work in real time as well.
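The lab setup described above (continuous acoustic measurements in, time remaining until the next simulated slip event out) is, in machine-learning terms, a supervised regression problem. The sketch below illustrates that framing only: the "acoustic" signal is synthetic, and the feature choices and random-forest model are assumptions for illustration, not the Los Alamos team's actual pipeline.

```python
# Illustrative sketch: learn "time remaining until the next slip event"
# from summary statistics of an acoustic signal. The signal is synthetic;
# its noise amplitude grows as failure approaches, standing in for the
# creaking and grinding the lab instruments record.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_cycle(n_windows=200, window=100):
    """Generate one stick-slip cycle as (features, time-to-failure) pairs."""
    X, y = [], []
    for i in range(n_windows):
        ttf = (n_windows - i) / n_windows            # 1.0 at cycle start, ~0 at failure
        sig = rng.normal(0.0, 1.0 + 3.0 * (1.0 - ttf), window)
        # Features: simple statistics of each acoustic window
        X.append([sig.std(), np.abs(sig).max(), np.percentile(np.abs(sig), 90)])
        y.append(ttf)
    return np.array(X), np.array(y)

X_train, y_train = make_cycle()
X_test, y_test = make_cycle()

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
mae = float(np.mean(np.abs(pred - y_test)))
print(f"mean absolute error (fraction of cycle): {mae:.3f}")
```

Because this toy signal's variance encodes time-to-failure directly, the regressor recovers it to within a small fraction of the cycle; in real seismic data any such signal, if it exists at all, would be far weaker and buried in noise.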

Of course, natural temblors are far more complex than lab-generated ones, so what works in the lab may not hold true in the real world. For instance, seismologists have not yet observed in natural seismic systems the creaking and grinding noises the algorithm detected throughout the lab simulations (although Johnson thinks the sounds may exist, and his team is looking into this). Unsurprisingly, many seismologists are skeptical that machine learning will provide a breakthrough, perhaps in part because they have been burned by so many failed past attempts. "It's exciting research, and I think we'll learn a lot of physics from [Johnson's] work, but there are a lot of problems in implementing this with real earthquakes," Scholz says.

Johnson is also cautious, so much so that he hesitates to call what he is doing earthquake prediction. "We recognize that you have to be careful about credibility if you claim something that no one believes you can do," he says. Johnson also notes he is currently only pursuing a method for estimating the timing of temblors, not the magnitude; he says predicting the size of a quake is an even tougher problem.

But Scholz and other experts not affiliated with this research still think Johnson should continue exploring this approach. "There's a possibility it could be really great," explains David Lockner, a research geophysicist at the U.S. Geological Survey. "The power of machine learning is that you can throw everything in the pot, and the useful parameters naturally fall out of it." So even if the noise signals from Johnson's lab experiments do not pan out, he and other scientists may still be able to apply machine learning to natural earthquake data and shake out other signals that do work.

Johnson has already started to apply his technique to real-world data: the machine-learning algorithm will be analyzing earthquake measurements gathered by scientists in France, at Lawrence Berkeley National Laboratory and from other sources. If this method succeeds, he thinks it is possible experts could predict quakes months or even years ahead of time. "This is just the beginning," he says. "I predict, within the next five to 10 years, machine learning will transform the way we do science."


This Startup Has Developed A New Artificial Intelligence That Can (Sometimes) Beat Google – Forbes


The entire tech industry has fallen hard for a branch of artificial intelligence called deep learning. Also known as deep neural networks, the AI involves throwing massive amounts of data at a neural network to train the system to understand things ...


Artificial Intelligence Enters The Classroom – News One

Artificial intelligence increasingly touches our lives, from driverless cars to interactions with our smartphones.

Dennis Bonilla, executive dean of information systems and technology at the University of Phoenix, told NewsOne that the transformational technology reaches into classrooms and impacts how students learn.

For example, in flipped classrooms, teachers assign students homework that utilizes artificial intelligence technology. The software can send the instructor a detailed analysis of students' comprehension of the assignment, enabling the teacher to prepare more effectively for interactive learning in the classroom the next day.

With that data in hand, the teacher can begin her lesson with the material that large swaths of students struggled to understand. The software can also recommend student pairings for effective group activities in class.

That scenario, however, is not playing out equally. Many school districts with concentrated populations of low-income and students of color are left out of access to the latest education tools.

Artificial intelligence and how it's used in classrooms

Artificial intelligence, or simply AI, is a subset of computer science that involves teaching computers how to learn, reason, and make decisions like humans do. Bonilla said the technology has been around since the 1950s, but advances have led to everyday applications that have made people more aware of the technology.

In the classroom, AI enables customized learning. The software can analyze student comprehension, identifying which areas individual students are struggling to master and why they have trouble learning the material. It can also understand how each student learns and create a roadmap for academic success.

Another application of the technology helps educators improve lesson plans and curriculum. Some of the software can grade and evaluate essays and exams, quickly compiling a database that reveals, among other things, patterns of wrong answers.

Are teachers still needed?

While AI is a powerful tool, it cannot replace teachers. "While machines are better at analyzing data, they lack the social quality of a human being, empathy, the human touch," said Bonilla.

He explained that job security for teachers is not really a major issue. Rather, technology is transforming the profession. It eliminates time-consuming tasks, such as grading papers. It can also serve as a training tool for inexperienced educators, as well as those with years of experience.

At the same time, the technology addresses the teacher shortage by reducing the number of instructors needed in each classroom.

Some students face barriers to accessing technology

These new education tools will likely bypass scores of students. The cost of purchasing and installing the software means that students in poorly funded school districts will not benefit from AI.

"Yes, it's expensive, but cost is only part of the problem," Bonilla stated. "The real issue is how school systems can integrate the technology when they're challenged by incorporating all the other technology that's available."

In general, when it comes to computer science learning, scores of students are left out. A joint survey conducted by Google and Gallup, titled "Searching for Computer Science: Access and Barriers in U.S. K-12 Education," found that low-income students and Black students have the least access to computer science education.

What's more, there are notable differences in the role of technology between low-income and wealthier school districts.

A Pew survey of nearly 2,500 teachers found that 56 percent of educators who teach low-income students said they cannot incorporate certain technology into their lesson plans because their students lacked resources, such as digital devices and high-speed internet at home.

Bonilla is optimistic that the technology gap will close. He said some large tech companies are underwriting free platforms that will help to make the technology widely available. At the same time, several education nonprofit organizations are helping school districts integrate the technology.



RPI artificial intelligence expert looks at Westworld – Albany Times Union

Artificial intelligence expert and RPI professor Selmer Bringsjord will lecture Wednesday on the concepts behind the HBO series Westworld.



Troy

Fans of the innovative HBO series "Westworld," a futuristic tale of life-like robots mixing with guests at a Wild West-styled adult theme park, can hear Wednesday how close such technology is from a Rensselaer Polytechnic Institute professor involved in artificial intelligence research for the U.S. military.

"'Westworld' is an HBO series that deals with the 'big questions' of artificial intelligence (AI) in an undeniably vivid and timely way," said Selmer Bringsjord, director of the RPI Artificial Intelligence and Reasoning Lab. "The real world will ineluctably move toward giving experiences to humans in environments that are at once immersive and populated with sophisticated AIs and robots."

Currently, Bringsjord is working on a multi-million dollar AI development project with support from the U.S. Office of Naval Research, which wants to advance military robotics for logistics and other missions. His work focuses on how to program a form of moral sense into AI, so that a robot not under continuous human control can make appropriate choices such as not harming innocent humans or causing unnecessary damage when faced with unexpected circumstances.

In "Westworld," robots are residents (called "hosts") of a corporate-owned Wild West theme park where they meet paying human guests who seek adventures including violence and sex, all while overseen by human staff. The first season was the highest-rated initial season in HBO history, and the schedule for the second season has yet to be announced.

While all the technology necessary for such robotics does not exist today, much of it is rapidly developing, said Bringsjord, who also heads the RPI Department of Cognitive Science. His lecture, "Is 'Westworld' Our (Near) Future?", is set for noon Wednesday in Room 4101 of the Russell Sage Building on campus.

His research relies on the development of increasing levels of AI in computer systems, and then using that computing power to contain and employ concepts of morality, expressed as algorithms in programming language. What humans can choose through free will, and have developed through experience, philosophy and religious strictures, machines will have to grasp through mathematics and logic.

While the physical aspects and appearance of lifelike robots are now very possible, one of the biggest challenges facing AI today is creating a robot that can react, empathize and improvise when dealing with humans and its other surroundings.

The challenge is how to write computer code that can make "story-based entertainment and, for that matter, art engaging, and at the same time new and improvisational," said Bringsjord. "'Macbeth' is great, yes; but the witches give us the same ghoulish deal in every run, and Lady Macbeth has her way with her man in every run as well."

Such a repetitive, static experience at a robotic theme park would soon become tiresome to a human guest. "'Westworld' is based on the dream of allowing humans to enter stories in immersive environments in which new narrative is created on the fly by AIs themselves, drawing humans in," he said.

Currently, there is no known method to impart such improvisational ability to AI, as is possessed by human actors and authors. Some theme parks with robotic attractions have tried to work around this issue by also deploying human actors, so that some characters' reactions to visitors can be spontaneous, he said.

bnearing@timesunion.com 518-454-5094 @Bnearing10


Terrifyingly, Google’s Artificial Intelligence acts aggressive when cornered – Chron.com


DeepMind's AI recently acted aggressively when threatened in a computer game.


LIST: The biggest science and tech predictions for 2017

The first baby with three parents will be born

A new fertility technique allows doctors to replace defective DNA found within a mother's egg with the DNA from another female donor. The result is a baby born with the DNA of two mothers. The first three-parent baby may potentially be born around Christmas of 2017.

Source: The Telegraph


Scientists will discover the truth behind "dark matter"

Dark matter, a mysterious type of matter that makes up a little more than a quarter of the universe, is several experiments away from being detected. Dr. Katherine Freese, an expert in the field of dark matter, says 2017 may be the year "the 80-year-old dark matter puzzle will finally be solved."

Source: NBC


The first "artificial pancreas" for people with type 1 diabetes will hit the market

The "MiniMed 670G," an FDA-approved artificial pancreas, will monitor blood sugar and deliver insulin doses. It is set to be available by Spring 2017.

Source: CBS News


Genetically modified mosquitoes might be released to fight Zika in the U.S.

A company that creates genetically modified mosquitoes that have their offspring die when they mate with wild female mosquitos, may begin trials in Florida in 2017.

Source: NPR


Customer service will depend more on social media

"Social messaging channels such as Facebook Messenger and Twitter Direct Message are becoming increasingly important tools for brand engagement and customer service resolution. Big brands are already seeing a major shift from public posts to private messages."

Source: Inc


The first "human head transplant" may occur

Sergio Canavero, an Italian neuroscientist, is preparing to perform the first human head transplant. The surgery is slated for 2017.

Source: CBS News


A new space race

Buzz Aldrin, the second human on the moon, told NBC and Americans to "get ready for intense competition in the development of human spaceflight systems." He said the space race will lead to "technical and business innovations we don't yet appreciate or understand."

Source: NBC


Robot chefs will cook our food

Moley Robotics, a company that is building a robot chef capable of cooking 2,000 recipes, will begin selling in early 2017.

Source: Time


For the first time in 99 years, a total solar eclipse will cross the U.S. from coast to coast.

Source: Wall Street Journal


The Cassini spacecraft's 20-year mission will come to an end

Since arriving at Saturn in 2004, Cassini has provided scientists with valuable data and images. NASA said the Cassini mission will end on September 15, 2017, when the spacecraft plunges into Saturn's atmosphere to burn.

Source: NASA


2017 will be less hot than 2016

While 2017 is still expected to be one of the hottest years on record because of climate change, it won't be as hot as 2016 due to the absence of El Niño and the warming conditions it creates. Forecasters predict a 1.13°F drop in average temperatures.

Source: Climatecentral.org

Hackers will use artificial intelligence

James R. Clapper, the director of National Intelligence, said artificial intelligence will make life easier for everyone, even hackers.

Source: New York Times

The first HIV vaccine

"PRO 140," a drug currently undergoing trials, will have "expected commercialization in 2017.

Source: HIVequal.org

More laptops will be able to double as tablets

"Its becoming increasingly difficult to innovate on a traditional clamshell laptop design. Consequently, PC makers are putting most of their attention on innovating around what the industry calls 2-in-1s, which feature a tablet-style design with an attachable keyboard."

Source: Time

China's lunar mission will bring back moon samples for the first time in 40 years

China has scheduled an unmanned moon sample-return mission, known as Chang'e 5, for 2017.

Source: Space.com

Investments in artificial intelligence (AI) start-ups will explode, but it might be a bust

"[Venture capitalist] will swarm startups in these spaces like sharks smelling chum in the water... Most of these startups will crash and burn without ever turning a profit. That said, a select few will drive truly deep innovation, and in doing so, reshape the world."

Source: Inc.

The first (real) images of the Milky Way's supermassive black hole

A network of nine telescopes around the globe is adding the finishing touches to the project: in early 2017, the telescopes will snap the first images of Sagittarius A*, the black hole at the center of the Milky Way.

Source: BBC

Terrifyingly, Google's Artificial Intelligence acts aggressive when cornered

Being a sore loser is not an admired quality, especially when it's a sophisticated piece of artificial intelligence that's lashing out.

Researchers at DeepMind, Google's artificial intelligence lab, recently performed a number of tests by having its most complex AI play a series of games with a version of itself.

In the first game, two AI agents, one red and one blue, scramble to see who can collect the most apples, or green squares.

Go here to see the original:

Terrifyingly, Google's Artificial Intelligence acts aggressive when cornered - Chron.com

Artificial Intelligence and The Confusion of Our Age – Patheos (blog)

Elon Musk is saying outlandish things again. Several months ago, the Tesla and SpaceX CEO said that chances are we are all living in a simulation. Thankfully, other writers have contested this in a kinder manner than I would have (the words I have for Musk's theory are something along the lines of "utter nonsense" and "logically self-defeating," but I digress).

Well, now Musk thinks that humans must merge with machines, or else become defunct from the threat of advanced artificial intelligence. I guess he no longer thinks we live in a computer simulation. Why worry about humans becoming defunct if we are all brains in a vat?

Having millions of dollars does not mean that one can construct logically coherent chains of thought.

All that aside, I have several major issues with Musk's assessment.

On an argumentative level, Musk's claims paint artificial intelligence as some sort of monster we have no control over. He talks about the threat of A.I. while ignoring that humans are the ones who create and control it, and thus that we could simply stop working on it as it currently stands (as this Skynet-esque threat) if we are really so concerned about it displacing people.

Further, claims like Musk's ignore the reality that no matter how advanced A.I. becomes, it is still artificial and reliant on programming put into it by human minds that are ontologically distinct from mere neurological matter and functions.

But really, the underlying presupposition of Musk's confused plea for the merger of humans and machines is the biggest problem here. It implicitly assumes that humans are mere technology to be exploited for profit and material success. In this view humans are not persons, with an ultimate goal of flourishing, but mere biological machinery that needs to be upgraded to a biomechanical level. When one's ultimate meaning has no transcendent anchor or reference point (e.g. God as the transcendent Source and Ground of reality), humans will inevitably be reduced to mere technology. The bloodbath that was the secularized 20th century bears stark witness to this.

Of course, Musk and those like him fundamentally misunderstand that mind is quite distinct from brain. True, the mental and the neurological are inextricably related. But to think that consciousness is derived or secreted from neurological matter is a fundamental confusion of categories, the product of an age that has forgotten to think deeply about the nature of reality and what persons (not just human beings, but human persons) really and truly are.

Artificial intelligence, no matter how complex, is not the same as human consciousness:

Computational models of the mind would make sense if what a computer actually does could be characterized as an elementary version of what the mind does, or at least as something remotely like thinking. In fact, though, there is not even a useful analogy to be drawn here. A computer does not even really compute. We compute, using it as a tool. We can set a program in motion to calculate the square root of pi, but the stream of digits that will appear on the screen will have mathematical content only because of our intentions, and because we (not the computer) are running algorithms. The computer, in itself, as an object or a series of physical events, does not contain or produce any symbols at all; its operations are not determined by any semantic content but only by binary sequences that mean nothing in themselves. The visible figures that appear on the computer's screen are only the electronic traces of sets of binary correlates, and they serve as symbols only when we represent them as such, and assign them intelligible significances. The computer could just as well be programmed so that it would respond to the request for the square root of pi with the result "Rupert Bear"; nor would it be wrong to do so, because an ensemble of merely material components and purely physical events can be neither wrong nor right about anything (in fact, it cannot be about anything at all). Software no more thinks than a minute hand knows the time or the printed word "pelican" knows what a pelican is.

David Bentley Hart, The Experience of God: Being, Consciousness, Bliss, p. 219

Read more here:

Artificial Intelligence and The Confusion of Our Age - Patheos (blog)

Google’s DeepMind artificial intelligence becomes ‘highly aggressive’ when stressed. Skynet, anyone? – Mirror.co.uk

Google's DeepMind is one of the most famous examples of artificial intelligence.

Last year it famously defeated the world's best Go player at the tricky Chinese board game. It's also being used at Moorfields Eye Hospital to recognise eye diseases from scans.

But new research shows that DeepMind reacts to social situations in a similar way to a human. Notably, it started to act in an "aggressive manner" when put under pressure.

Google's computer scientists ran 40 million turns of Gathering, a fruit-gathering video game in which two DeepMind agents compete against each other to collect the most apples.

When there were enough apples to share, the two computer combatants were fine - efficiently collecting the virtual fruit. But as soon as the resources became scarce, the two agents became aggressive and tried to knock each other out of the game and steal the apples.

The video below shows the process - with the DeepMind "gamers" represented in red and blue while the apples are green. The laser beams are yellow - and while the combatants don't get any reward for a hit, it does knock the opponent out of the game for a set period of time.

"We characterize how learned behavior in each domain changes as a function of environmental factors including resource abundance," the team wrote in a paper explaining their results.

"Our experiments show how conflict can emerge from competition over shared resources and shed light on how the sequential nature of real world social dilemmas affects cooperation.

"We noted that the policies learned in environments with low abundance or high conflict-cost were highly aggressive while the policies learned with high abundance or low conflict cost were less aggressive. That is, the Gathering game predicts that conflict may emerge from competition for scarce resources, but is less likely to emerge when resources are plentiful."

The results are interesting in that they show computers are able to adapt to situations and modify their behaviour accordingly.

Many experts have warned of the dangers of true artificial intelligence in machines. Elon Musk singled out DeepMind in particular as one to keep an eye on.

"The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like DeepMind, you have no idea how fast it is growing at a pace close to exponential," he wrote in 2014.

"I am not alone in thinking we should be worried."

"The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen..."

So while Google's super-smart computers may be content to beat each other up in a race to collect virtual apples, the prospects for the future could be worrying. Especially if your name's Sarah Connor.

Read this article:

Google's DeepMind artificial intelligence becomes 'highly aggressive' when stressed. Skynet, anyone? - Mirror.co.uk

Tinder’s Sean Rad On How Technology And Artificial Intelligence Will Change Dating – Forbes


Visit link:

Tinder's Sean Rad On How Technology And Artificial Intelligence Will Change Dating - Forbes

Elon Musk Says We Must Become Cyborgs To Interface With Artificial Intelligence – CleanTechnica

February 13th, 2017 by Steve Hanley

Originally published on Gas2.

Elon Musk was in Dubai on February 13th to officially announce that Tesla would soon open a showroom there and is working to install 5 new Superchargers in the United Arab Emirates. Musk being Musk, he didn't fly halfway around the world just to cut a ceremonial ribbon. He also addressed the World Government Summit in Dubai while he was there. Here is some of what he had to say.

"Over time I think we will probably see a closer merger of biological intelligence and digital intelligence," Musk told his audience. "It's mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output."

Computers can communicate at a trillion bits per second. Humans, on the other hand, who do most of their communicating by typing on various digital devices with their fingers, are limited to a woeful 10 bits per second. As artificial intelligence technology improves, at some point humans will become irrelevant. That's why they must learn to merge with machines, according to Musk. "Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence and maybe solves the control problem and the usefulness problem," Musk explained.

Musk has spoken often on his deep-seated fear of deep artificial intelligence. That is intelligence that goes far beyond systems that can make cars drive themselves, all the way to what he calls artificial general intelligence. He describes it as "smarter than the smartest human on earth" and calls it a "dangerous situation." The technology he proposes would create a new layer in the human brain that could access information quickly and tap into artificial intelligence.

"The most near term impact from a technology standpoint is autonomous cars... That is going to happen much faster than people realize and it's going to be a great convenience," Musk said. He claims that within 20 years, up to 15% of the world's workforce will be rendered redundant by artificial intelligence. "There are many people whose jobs are to drive. In fact I think it might be the single largest employer of people, driving in various forms. So we need to figure out new roles for what do those people do, but it will be very disruptive and very quick."

Last November, Musk told CNBC that governments will be forced to provide a universal basic income when machines start doing most jobs. Better pray Mitch McConnell and Paul Ryan are not still in charge of Congress when that happens.

Source: CNBC Photo credit: YouTube

Reprinted with permission

Buy a cool T-shirt or mug in the CleanTechnica store! Keep up to date with all the hottest cleantech news by subscribing to our (free) cleantech daily newsletter or weekly newsletter, or keep an eye on sector-specific news by getting our (also free) solar energy newsletter, electric vehicle newsletter, or wind energy newsletter.

Tags: artificial intelligence, basic living wage, cyborg, Musk at World Government Summit in Dubai

Steve Hanley writes about the interface between technology and sustainability from his home in Rhode Island. You can follow him on Google+ and on Twitter.

See the rest here:

Elon Musk Says We Must Become Cyborgs To Interface With Artificial Intelligence - CleanTechnica

Ford snaps up artificial-intelligence startup Argo – SiliconBeat


"We founded Argo AI to tackle one of the most challenging applications in computer science, robotics and artificial intelligence: self-driving vehicles," the company wrote. "While technology exists today to augment the human driver and automate the ..."

Originally posted here:

Ford snaps up artificial-intelligence startup Argo - SiliconBeat

Artificial Intelligence To Reveal The Biggest Secret In Oil – OilPrice.com

The EIA is, and will continue to be, the gold standard for reporting U.S. oil inventories, and the administration's weekly reports have the power to immediately swing crude oil prices depending on builds or draws in oil stocks. But while the U.S. is a transparently reporting country, other nations are not revealing oil stocks data, or are reporting, at best, opaque figures in which the markets have little faith.

No one really knows how much oil the countries around the world are storing, creating uncertainty in the supply side of the oil markets.

But one of the hottest new technologies may shed more light on oil storage around the globe, especially in countries that are keeping inventory figures to themselves. The tech is artificial intelligence (AI), an unlikely term to find in the same sentence as oil and inventories.

Yet, U.S. geospatial analytics company Orbital Insight has been using a form of AI, convolutional neural networks (CNNs), to analyze satellite images and identify and quantify crude oil storage tanks. The tanks have floating roofs, so the volume of oil is visible: Orbital Insight uses shadow-detection technology and calculates how full a storage tank is from the size of the crescent-shaped shadow the tank wall casts on the roof.
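Orbital Insight has not published its exact pipeline, but the geometry behind the shadow method can be sketched. Because the floating roof sits directly on the oil, the length of the wall's shadow on the roof, together with the sun's elevation at the time the image was taken, gives the roof's depth below the rim. The function name and figures below are illustrative assumptions:

```python
import math

def fill_fraction(shadow_len_m, sun_elev_deg, tank_height_m):
    """Estimate how full a floating-roof tank is from its interior shadow.

    The roof floats on the oil, so as the tank empties the roof sinks
    and the wall casts a crescent-shaped shadow onto it:
    roof_depth = shadow_len * tan(sun_elevation); fill = 1 - depth/height.
    """
    roof_depth = shadow_len_m * math.tan(math.radians(sun_elev_deg))
    return max(0.0, min(1.0, 1.0 - roof_depth / tank_height_m))

# Tank 20 m tall, sun 45 degrees above the horizon, 5 m shadow:
# tan(45°) = 1, so the roof sits 5 m below the rim -> ~75% full.
print(round(fill_fraction(5.0, 45.0, 20.0), 2))  # 0.75
```

Summing such per-tank estimates across every tank farm visible from orbit is what yields country-level storage figures.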

That approach to estimating oil storage capacity and volumes may reveal how much oil closed economies have or plan to have. One such nation is China, a huge consumer of oil and oil products, and a closed economy not eager to share its reserves and storage figures with the world. But China's storage has the potential to sway global oil markets.

Last year, Orbital Insight said that it had found 2,100 commercial and strategic petroleum reserve tanks across China, with the capacity to store 900 million barrels of oil as of the end of 2014. That's four times more than the 500 tanks reported in the industry-standard database of tank farms at TankTerminals.com. Orbital Insight's estimates showed that China had around 600 million barrels of oil supply on its territory as of May 2016, and that's not counting underground storage.

Orbital Insight has tracked U.S. and China oil storage so far and is currently analyzing this data for the world. Its plans are to launch oil storage estimates for countries like Russia, Brazil, India, Venezuela, Angola, Nigeria, and Iran.

If AI analyses of satellite images can reveal how many oil storage tanks these countries have, and how full they are (above ground, at least), they could shed more light on oil supplies around the world.

For now, tracking and reporting crude oil supply, storage, and flows is being done (if at all) in a variety of ways depending on the country: some estimate supply by calculating domestic consumption and tracking tankers, others just give some figures without revealing details, and a third group of nations report supply by collecting and aggregating data from companies.

"These are also prone to misinformation and are sometimes not retroactively reviewed in case some companies did not report in the survey," Michael D. Cohen, an analyst at Barclays, recently told Rigzone.

Although satellite technology "can tell you a lot, but not everything," as Sandy Fielden, director of research for commodities and energy at Morningstar, told Rigzone, AI may have the power to increase, even if just a little bit, transparency in oil storage and inventories reporting. Because as sensitive as the oil industry is, there will always be countries that will not be sharing transparent and independent oil inventory figures.

By Tsvetana Paraskova for Oilprice.com

More Top Reads From Oilprice.com:

Read this article:

Artificial Intelligence To Reveal The Biggest Secret In Oil - OilPrice.com

Artificial Intelligence Is Not a ThreatYet – Scientific American

In 2014 SpaceX CEO Elon Musk tweeted: "Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes." That same year University of Cambridge cosmologist Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." Microsoft co-founder Bill Gates also cautioned: "I am in the camp that is concerned about super intelligence."

How the AI apocalypse might unfold was outlined by computer scientist Eliezer Yudkowsky in a paper in the 2008 book Global Catastrophic Risks: "How likely is it that AI will cross the entire vast gap from amoeba to village idiot, and then stop at the level of human genius?" His answer: "It would be physically possible to build a brain that computed a million times as fast as a human brain.... If a human mind were thus accelerated, a subjective year of thinking would be accomplished for every 31 physical seconds in the outside world, and a millennium would fly by in eight-and-a-half hours." Yudkowsky thinks that if we don't get on top of this now it will be too late: "The AI runs on a different timescale than you do; by the time your neurons finish thinking the words 'I should do something' you have already lost."
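Yudkowsky's quoted figures are easy to verify with back-of-the-envelope arithmetic:

```python
# Sanity-check the numbers: a mind running 1,000,000x faster experiences
# one subjective year in about 31 wall-clock seconds, and a subjective
# millennium in under nine wall-clock hours.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~31.6 million seconds
speedup = 1_000_000

one_year = SECONDS_PER_YEAR / speedup          # wall-clock seconds
millennium_hours = 1000 * one_year / 3600      # wall-clock hours

print(round(one_year, 1))          # ~31.6 seconds
print(round(millennium_hours, 1))  # ~8.8 hours
```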

The paradigmatic example is University of Oxford philosopher Nick Bostrom's thought experiment of the so-called paperclip maximizer presented in his Superintelligence book: An AI is designed to make paperclips, and after running through its initial supply of raw materials, it utilizes any available atoms that happen to be within its reach, including humans. As he described in a 2003 paper, from there it starts transforming "first all of earth and then increasing portions of space into paperclip manufacturing facilities." Before long, the entire universe is made up of paperclips and paperclip makers.

I'm skeptical. First, all such doomsday scenarios involve a long sequence of if-then contingencies, a failure of which at any point would negate the apocalypse. University of the West of England Bristol professor of electrical engineering Alan Winfield put it this way in a 2014 article: "If we succeed in building human equivalent AI and if that AI acquires a full understanding of how it works, and if it then succeeds in improving itself to produce super-intelligent AI, and if that super-AI, accidentally or maliciously, starts to consume resources, and if we fail to pull the plug, then, yes, we may well have a problem. The risk, while not impossible, is improbable."

Second, the development of AI has been much slower than predicted, allowing time to build in checks at each stage. As Google executive chairman Eric Schmidt said in response to Musk and Hawking: "Don't you think humans would notice this happening? And don't you think humans would then go about turning these computers off?" Google's own DeepMind has developed the concept of an AI off switch, playfully described as a "big red button" to be pushed in the event of an attempted AI takeover. As Baidu vice president Andrew Ng put it (in a jab at Musk), it would be like worrying about "overpopulation on Mars when we have not even set foot on the planet yet."

Third, AI doomsday scenarios are often predicated on a false analogy between natural intelligence and artificial intelligence. As Harvard University experimental psychologist Steven Pinker elucidated in his answer to the 2015 Edge.org Annual Question "What Do You Think about Machines That Think?": "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world." It is equally possible, Pinker suggests, that artificial intelligence will naturally develop along female lines: "fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."

Fourth, the implication that computers will "want" to do something (like convert the world into paperclips) means AI has emotions, but as science writer Michael Chorost notes, "the minute an A.I. wants anything, it will live in a universe with rewards and punishments, including punishments from us for behaving badly."

Given the zero percent historical success rate of apocalyptic predictions, coupled with the incrementally gradual development of AI over the decades, we have plenty of time to build in fail-safe systems to prevent any such AI apocalypse.

Read the rest here:

Artificial Intelligence Is Not a ThreatYet - Scientific American

Google’s Artificial Intelligence System – TechMalak (blog)

Artificial intelligence and machine learning are much-discussed topics in and outside of the tech community these days. In the coming years, we could see artificial intelligence being used almost everywhere, from the spaces of your home to large industrial applications.

To explain in layman's terms, AI and machine learning technology aim at making self-learning machines that can in the future exhibit human-like capabilities and perform similar tasks.

Many tech giants like Facebook, Google, Amazon, Tesla and others have already started working in this segment. However, there are some people expressing concern about developing AI because of the potential threat to the existence of humans.

Stephen Hawking has previously said that AI will be "either the best, or the worst thing, ever to happen to humanity."

Similarly, Tesla boss and CEO Elon Musk, while talking about AI, said: "AI systems today have impressive but narrow capabilities. It seems that we'll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task. It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly."

Some interesting results have recently come out of the tests performed with Googles DeepMind AI system. In this test, the Google AI demonstrated an ability of self-learning from its own memory.

The team at DeepMind ran about 40 million tests of a fruit-gathering computer game in which two DeepMind agents compete to gather the maximum number of virtual apples.

Initially, with enough apples to gather, things went well between the two agents. But as soon as the number of available apples began to shrink, the two agents became extremely aggressive, to the point of using laser beams to knock each other out and gather more apples.

Studying this behavioral pattern, the researchers at Google said that DeepMind turns aggressive when it senses it is about to lose, in order to come out on top.

The algorithm was designed so that if one agent tags the opponent with a laser beam, the opponent is left out of the game for some time, meaning the first agent can then gather more apples and ultimately win.

Rhett Jones of Gizmodo notes that with simpler networks, the DeepMind agents demonstrated peaceful coexistence.

With more complex networks, however, the agents turned competitive and aggressive, each trying to defeat the other.

The Google team also performed a test with three AI agents in a video game called Wolfpack. Two of the three agents are wolves, while the third is prey.

Unlike the aggressive behavior in Gathering, the two wolf agents exhibited cooperative behavior. This is because the algorithm rewards both wolf agents with extra points once they come near the prey, regardless of which one takes it down.

In their paper, the Google team writes: "The idea is that the prey is dangerous: a lone wolf can overcome it, but is at risk of losing the carcass to scavengers. However, when the two wolves capture the prey together, they can better protect the carcass from scavengers, and hence receive a higher reward."
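The shared-reward rule in that quote can be sketched in a few lines. This is a toy illustration, not DeepMind's implementation; the capture radius and reward values are assumed:

```python
# Illustrative sketch (not DeepMind's code) of the Wolfpack reward rule:
# when the prey is captured, every wolf within a "capture radius" of it
# shares the reward, which makes hunting together pay off.
CAPTURE_RADIUS = 2.0  # assumed value
CAPTURE_REWARD = 1.0  # assumed value

def wolfpack_rewards(wolf_positions, prey_position):
    """Return each wolf's reward at the moment the prey is captured."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    # Every wolf close to the capture is rewarded, not just the one
    # that made the kill -- so cooperation dominates lone hunting.
    return [CAPTURE_REWARD if dist(w, prey_position) <= CAPTURE_RADIUS else 0.0
            for w in wolf_positions]

# Both wolves near the prey: both rewarded.
print(wolfpack_rewards([(0, 0), (1, 1)], (0.5, 0.5)))  # [1.0, 1.0]
# One wolf far away: only the nearby wolf is rewarded.
print(wolfpack_rewards([(0, 0), (9, 9)], (0.5, 0.5)))  # [1.0, 0.0]
```

Because a second nearby wolf costs nothing and earns its own reward, policies trained under this rule learn to converge on the prey together.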

We are just in the nascent stage of development for artificial intelligence, and these results already exhibit an early, human-like competitiveness.

This technology is likely to have a lot of merit and make many tasks easier for humans. But we believe that developers should take every care with their creations.

For all of you who are interested in knowing more about it and analyzing the data of these experiments, you can have a look at the official DeepMind blog.

See the original post:

Google's Artificial Intelligence System - TechMalak (blog)

Inside Intel Corporation’s Artificial Intelligence Strategy – Motley Fool

A much discussed area in technology these days is artificial intelligence, a field that leans heavily on machine learning. Artificial intelligence is a workload that requires an immense amount of processing power, which is why companies like microprocessor giant Intel (NASDAQ:INTC) -- a company that brings in tens of billions of dollars from sales of processors -- see this market as an interesting long-term growth opportunity.

Interestingly, although Intel is a major supplier of processors for artificial intelligence workloads, the company doesn't get nearly as much attention for its efforts in this market as does graphics specialist NVIDIA (NASDAQ:NVDA) -- a company that has seen significant revenue and profit growth from artificial intelligence applications as its long-term investments in this space are paying off.

Intel CEO Brian Krzanich at the company's AI day back in November 2016. Image source: Intel.

Intel went over its artificial intelligence strategy at its Feb. 9 investor meeting. Let's look at what the company had to say about the market and how it plans to win in it.

According to Intel, only 7% of server sales in 2016 were used for artificial intelligence workloads, but it is the "fastest-growing data center workload."

Within that 7%, the company says that 60% of those servers were used for "classical machine learning" while the remaining 40% were used for "deep learning."

The company then went on to show that of the servers used for classical machine learning, 97% used Intel Xeon processors to handle the computations, 2% used alternative architectures, and 1% used Intel processors paired with graphics processing units (likely from NVIDIA).

Among servers used for deep learning applications, the chipmaker says that 91% use just Intel Xeon processors to handle the computations, 7% use Xeon processors paired with graphics processing units, while 2% use alternative architectures altogether.
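Combining Intel's own percentages shows why it claims dominance; a quick back-of-the-envelope check (using only the figures above):

```python
# Share of 2016 AI servers running on Xeon CPUs alone, combining
# Intel's stated splits for classical ML and deep learning workloads.
classical_share, deep_share = 0.60, 0.40        # split of AI servers
xeon_only = {"classical": 0.97, "deep": 0.91}   # Xeon-only within each

overall = (classical_share * xeon_only["classical"]
           + deep_share * xeon_only["deep"])
print(f"{overall:.1%}")  # 94.6%
```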

The point that Intel is trying to make is that its chips overwhelmingly dominate the market for servers that run artificial intelligence workloads today.

Intel clearly views graphics processors from the likes of NVIDIA as a threat to its position in the artificial intelligence market -- a reasonable viewpoint considering that NVIDIA's data center graphics processor business continues to grow at a phenomenal rate (revenue was up 145% in the company's fiscal year 2017).

The risk is that those graphics processors, though usually paired with Intel Xeon processors, will reduce the demand for said Xeon processors (i.e., if some number of Xeon processors can be replaced by one Xeon processor and some smaller number of graphics processors, then Intel loses).

Intel's strategy, then, appears to be to cast a very wide net with a wide range of different architectures and hope that it can offer better solutions for specific types of artificial intelligence workloads than the graphics chipmakers like NVIDIA can.

Intel's broad AI product portfolio. Image source: Intel.

Look at the slide above and you'll notice Intel has different solutions for different types of workloads. It's promoting its next-generation Xeon processor (known as Skylake-EP) as the standard, general-purpose artificial intelligence processor.

From there, the offerings get more targeted. For some workloads, it will offer a specialized version of its Xeon Phi processor called Knights Mill. For others, it's going to offer Xeon processors combined with field-programmable gate array (FPGA) chips. And, for still others, the company plans to offer a chip that combines a Xeon processor with a specialized deep learning chip called Lake Crest (based on technology that Intel acquired when it picked up start-up Nervana Systems).

Intel's strategy looks as solid as it can possibly be as it seems to be throwing its entire technical arsenal at the problem -- I'd say the company is well positioned to profit from the continued proliferation of artificial intelligence workloads.

What will only become evident in time, though, is how much market share Intel will ultimately be able to capture in this market. The underlying market growth should mean that Intel's revenue and profits here will grow, but obviously, the magnitude of that growth will depend on its ability to defend its market share while at the same time defending its average selling prices.

Ashraf Eassa owns shares of Intel. The Motley Fool owns shares of and recommends Nvidia. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

Read more from the original source:

Inside Intel Corporation's Artificial Intelligence Strategy - Motley Fool