Artificial intelligence market: Weighing the IT channel’s role – TechTarget

Like mobile technology, cloud computing, big data and IoT before it, artificial intelligence may just be the next big thing that channel partners should have on their radars. But as with any new technology that comes along, partners need to ensure they have the right business skill sets for system implementations.

The hype about AI is rising, although the jury is still out on whether the artificial intelligence market will be a great opportunity for the channel, said Seth Robinson, senior director of technology analysis at CompTIA.

"AI is not going to be on its own something that is all that tangible that you can grasp and pursue," he said.

A sure sign of the growing interest is recent vendor product releases, noted Steve White, program vice president of channels and alliances at IDC. "When we spoke to channel and alliance folks 18 months ago, it was IoT, and now it's AI. It's the new golden child," he said.

Once vendors like Microsoft, Google, Salesforce and Cisco announce offerings, it creates greater access to and interest in the technology, "and they're not investing unless they see opportunity," White said. "That's definitely why the channel should be interested."

Another appeal of the artificial intelligence market is that the technology is applicable for use in most industries, observers said.

That said, AI is still "very early in overall adoption cycle" and people are mainly curious about the technology and are in the exploratory phase right now, Robinson said.

He believes AI platforms will be complicated and there needs to be a deeper conversation with customers about the business needs for deploying the technology. That conversation should be about what AI is and how it fits into their business, Robinson said. "We haven't seen much readiness to move into that strategic conversation yet."

Without a clear understanding of the business objectives, companies may not utilize the technology's features, such as machine learning and cognitive computing, and then they obviously won't reap the benefits, he said. For example, if a company spends extra on help desk software with AI baked in but uses it in a standard way without utilizing AI, "you haven't moved the needle," he cautioned. "So, channel firms have to be careful about overselling without helping companies transition to new processes and workflows and the best usage of these things that are available today."

Probably the best and earliest example of AI in the enterprise is IBM's Watson technology, Robinson said. Within the channel, he said, there has been a lot of buzz about CrushBank, a spinoff of a managed service provider (MSP) that built an IT help desk application on top of Watson. "They're building a help desk application that utilizes Watson, so if you want to get CrushBank's product or are working with them, you'll get this new app with AI baked into it," he said. This is not an example of reselling or installing AI, but rather, incorporating the technology into apps, "which a lot of MSPs and VARs [value-added resellers] aren't thinking about," Robinson said.

In CrushBank's case, they are helping customers change workflow processes to utilize help desk features in a more efficient way, he said. "And that gets out of the wheelhouse of channel firms," since the channel historically has been built on management of technology, Robinson added.

"AI is a very natural way to supplement the tasks and work we do every day to support our clients," CrushBank co-founder and CTO David Tan said."I think the need has been there, but the growth in technology and platforms has made it more pronounced, and the technology to power the solutions is finally becoming mature."

CrushBank sells its platform to other MSPs, Tan said. The next step the channel needs to focus on is to really integrate technology into a business, which is at the core of what digital transformation is all about, he said. AI, according to Tan, can be particularly effective at making that happen.

Another example of a company using AI to change business processes is Actionable Science. The company has created AI-powered bots to help medium- and large-sized businesses improve productivity, enhance customer experiences, increase employee satisfaction and reduce costs, said Manish Sharma, co-founder and head of business development.

The bots address a range of tasks for sales, servicing, IT help desk, HR help desk and other functions. Actionable Science's advanced bots have natural language conversations, evolve using machine learning and execute tasks by leveraging robotic process automation, Sharma said.

The company has about a dozen partners so far, he said, adding that the artificial intelligence market "has got to be one of the top priorities for channel partners that want to stay relevant and grow their business in the future." They can do that by developing "an expertise in one or several specific applications," Sharma said.

The skills he believes a partner needs for AI work include a combination of process analytics, user experience and "requirements management that is very specific to AI."

White concurred that if a partner is already doing work in business intelligence or analytics, AI "would seem like a fairly obvious add-on that they should be looking at" because it takes the products they're offering to their customers to the next level.

"At the end of the day, AI is even smarter to leverage that platform you've already built,'' and expand upon it as an opportunity for growth, White said.

Partners also need to be able to build a consulting practice around AI, White believes. "This [technology] is going to be a lot more consulting-heavy, so you have to have those professional consulting folks with a depth of knowledge around [AI]. Like most tech trends, we see the partners who act quicker, funnily enough, are the ones who are more successful."

Elon Musk Is Wrong Again. AI Isn’t More Dangerous Than North Korea. – Fortune

Elon Musk's recent remark on Twitter that artificial intelligence (AI) is more dangerous than North Korea is based on his bedrock belief in the power of thought. But this philosophy has a dark side.

If you believe that a good idea can take over the world and if you conjecture that computers can or will have ideas, then you have to consider the possibility that computers may one day take over the world. This logic has taken root in Musk's mind and, as someone who turns ideas into action for a living, he wants to make sure you get on board too. But he's wrong, and you shouldn't believe his apocalyptic warnings.

Here's the story Musk wants you to know but hasn't been able to boil down to a single tweet. By dint of clever ideas, hard work, and significant investment, computers are getting faster and more capable. In the last few years, some famously hard computational problems have been mastered, including identifying objects in images, recognizing the words that people say, and outsmarting human champions in games like Go. If machine learning researchers can create programs that can replace captioners, transcriptionists, and board game masters, maybe it won't be long before they can replace themselves. And, once computer programs are in the business of redesigning themselves, each time they make themselves better, they make themselves better at making themselves better.

The resulting intelligence explosion would leave computers in a position of power, where they, not humans, control our future. Their objectives, even if benign when the machines were young, could be threatening to our very existence in the hands of an intellect dwarfing our own. That's why Musk thinks this issue is so much bigger than war with North Korea. The loss of a handful of major cities wouldn't be permanent, whereas human extinction by a system seeking to improve its own capabilities by turning us into computational components in its mega-brain would be forever.

Musk's comparison, however, grossly overestimates the likelihood of an intelligence explosion. His primary mistake is extrapolating from the recent successes of machine learning to the eventual development of general intelligence. But machine learning is not as dangerous as it might look on the surface.

For example, you may see a machine perform a task that appears to be superhuman and immediately be impressed. When people learn to understand speech or play games, they do so in the context of the full range of human experiences. Thus when you see something that can respond to questions or beat you soundly in a board game, it is not unreasonable to infer that it also possesses a range of other human capacities. But that's not how these systems work.

In a nutshell, here's the methodology that has been successful for building advanced systems of late: First, people decide what problem they want to solve and express it in the form of a piece of code called an objective function, a way for the system to score itself on the task. They then assemble perhaps millions of examples of precisely the kind of behavior they want their system to exhibit. After that, they design the structure of their AI system and tune it to maximize the objective function through a combination of human insight and powerful optimization algorithms.
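
To make that recipe concrete, here is a deliberately tiny Python sketch of the same three steps: a hand-written objective function, a batch of examples, and an optimizer that tunes parameters to drive the objective down. The linear model and synthetic data are invented stand-ins for the deep networks and labeled datasets the article alludes to.

```python
import numpy as np

# Step 2 of the recipe: examples of the behavior we want the system to imitate.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000)
y = 3.0 * x + 0.5  # the target behavior, normally supplied by labeled data

# Step 1: the objective function, a score the system computes on its own output.
def objective(w, b):
    return np.mean((w * x + b - y) ** 2)  # lower is better for this task

# Step 3: optimization, adjusting the parameters to minimize the objective.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y
    w -= lr * np.mean(2 * err * x)  # gradient of the objective w.r.t. w
    b -= lr * np.mean(2 * err)      # gradient of the objective w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}, objective={objective(w, b):.5f}")
```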

At the end of this process, they get a system that, often, can exhibit superhuman performance. But the performance is on the particular task that was selected at the beginning. If you want the system to do something else, you probably will need to start the whole process over from scratch. Moreover, the game of life does not have a clear objective function, so current methodologies are not suited to creating a broadly intelligent machine.

Someday we may inhabit a world with intelligent machines. But we will develop together and will have a billion decisions to make that shape how that world develops. We shouldn't let our fears prevent us from moving forward technologically.

Michael L. Littman is a professor of computer science at Brown University and co-director of Brown's Humanity Centered Robotics Initiative.

Messenger Launches New Artificial Intelligence Features – Huffington Post Australia

Messaging app 'Messenger' launched a range of new artificial intelligence (AI) features in Australia on Wednesday.

The AI, called 'M', works almost like a prompting service, where it recognises words and phrases used in a conversation and then suggests relevant content and actions based on the chat between the two users.

For example, if you're speaking to someone on their birthday, 'M' will recognise, either through a phrase used or through their Messenger profile, when their birthday is, and then prompt you to send a birthday message.

Similarly, if you are chatting about making plans or struggling to come to a group decision about something, the AI will suggest you make a plan or start a group poll, respectively. If you are chatting in a one-on-one conversation and one person raises the idea of making a call, 'M' will prompt you to start a video or voice chat.

Other features include stickers for commonly used phrases such as 'thank you' or 'bye-bye', and a prompt to share your location with someone if phrases like 'where are you?' and 'see you soon' are used. Messenger also launched a content-saving option that encourages you to save videos, Facebook posts and pages from your conversations to look at later.
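
Facebook has not published how 'M' matches conversations to suggestions, but the behaviour described above can be illustrated with a simple rule table mapping trigger phrases to actions. Every phrase and action name in this Python sketch is hypothetical.

```python
# Hypothetical trigger-phrase rules; invented for illustration only.
SUGGESTION_RULES = {
    ("where are you", "see you soon"): "share_location",
    ("thank you", "thanks so much"): "sticker:thank_you",
    ("happy birthday",): "send_birthday_message",
    ("should we call", "let's call"): "start_voice_or_video_chat",
}

def suggest_actions(message):
    """Return the suggested actions whose trigger phrases appear in the text."""
    text = message.lower()
    return [action for phrases, action in SUGGESTION_RULES.items()
            if any(phrase in text for phrase in phrases)]

print(suggest_actions("Running late, where are you? See you soon!"))
# -> ['share_location']
```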

If you tire of the notifications and suggestions from Messenger, it's easy to opt-out of the AI technology by adjusting your Messenger settings. It's also possible to dismiss a suggestion made by 'M' if you feel it is irrelevant.

The 'M' artificial intelligence technology was first launched in the U.S. in April and is also currently available in Mexico and Spain. Canada, South Africa and the U.K. will gain access to the technology at the same time as all of us here in Australia.

MIT’s new artificial intelligence could kill buffering – Alphr

For some, the sight of the buffer circle is enough to bring on spasms of existential angst. When that spinning circle of death appears, the digital world cracks, its illusory sense of control slips from your sweaty palm, and you are reminded, however briefly, that you are not the master of this realm, and you have no real idea how the machine you are using works. It's also very annoying if you're trying to show a video to someone.

Researchers at MIT may have come up with a way to stave off techno-existential panic for good, thanks to a new artificial intelligence system that can keep video streaming buttery smooth.

Buffering happens because video streaming occurs in chunks, with your device downloading sequential portions of a file that are then stitched together. This means you can start watching the video before downloading the entire thing, but if the connection wavers, you might finish one chunk before the next has been fully downloaded.

Sites like YouTube use Adaptive Bitrate (ABR) algorithms to work out what resolution a video should display at. In a nutshell, these allow the system to maintain the flow of images by measuring a network's speed and lowering the resolution appropriately, or by working to maintain a sufficient buffer ahead of the current point in the video. The issue is that neither of these techniques on its own can prevent annoying pauses in a clip if the network has a sudden drop in traffic flow: say, if you're in a particularly crowded area, or if you're moving in and out of tunnels.
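
The two families of ABR logic described here can be sketched in a few lines of Python: one picks the bitrate from measured network speed, the other from how many seconds of video are already buffered. The bitrate ladder and thresholds below are invented for illustration; production players use far more elaborate variants.

```python
BITRATES_KBPS = [400, 1200, 2500, 5000]  # illustrative quality ladder, low to high

def rate_based_choice(throughput_kbps):
    """Pick the highest bitrate the measured network speed can sustain."""
    viable = [b for b in BITRATES_KBPS if b <= throughput_kbps]
    return viable[-1] if viable else BITRATES_KBPS[0]

def buffer_based_choice(buffer_seconds, low=5.0, high=20.0):
    """Pick a bitrate from how much video is buffered ahead of playback:
    a near-empty buffer forces the lowest rate, a full one allows the top rate."""
    frac = min(max((buffer_seconds - low) / (high - low), 0.0), 1.0)
    return BITRATES_KBPS[round(frac * (len(BITRATES_KBPS) - 1))]

print(rate_based_choice(3000.0))  # -> 2500
print(buffer_based_choice(12.0))  # -> 1200, a mid-ladder rate
```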

MIT's Computer Science and Artificial Intelligence Lab (CSAIL) AI, dubbed Pensieve, takes these algorithms but uses a neural network to work out intelligently when a system should flip between one and the other. The AI was trained on a month's worth of video content and given reward and penalty conditions to push it to calculate the most effective times to switch between ABR algorithms.
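
The article does not quote the exact reward used, but reinforcement-learning ABR systems of this kind typically score each chunk of playback with a quality-of-experience formula: reward high bitrate, penalise stalls and abrupt quality switches. Here is a hedged sketch with invented weights.

```python
def step_reward(bitrate_kbps, prev_bitrate_kbps, rebuffer_seconds,
                rebuffer_penalty=4.0, smooth_penalty=1.0):
    """Score one chunk of playback: quality, minus penalties for stalling
    and for jumping between quality levels. All weights are illustrative."""
    quality = bitrate_kbps / 1000.0
    smoothness = abs(bitrate_kbps - prev_bitrate_kbps) / 1000.0
    return quality - rebuffer_penalty * rebuffer_seconds - smooth_penalty * smoothness

# A high-quality chunk that stalled for half a second still scores poorly:
print(step_reward(2500, 1200, rebuffer_seconds=0.5))  # -> -0.8
```

Raising the rebuffering penalty favours smooth playback; lowering it favours resolution, which is the kind of per-provider tuning described next.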

This system is adjustable, meaning it can be tweaked depending on what a content provider might want to prioritise, such as consistent image quality or smoother playback. "Our system is flexible for whatever you want to optimise it for," commented MIT professor Mohammad Alizadeh in a statement. "You could even imagine a user personalising their own streaming experience based on whether they want to prioritise rebuffering versus resolution."

While the death of the buffer symbol might be cause for celebration, the researchers also point to the benefits the AI system could have for virtual reality, potentially making it much easier for people to stream high-resolution VR games and films. "This is really just the first step in seeing what we can do," noted Alizadeh.

Beyond HAL: How artificial intelligence is changing space systems – SpaceNews

This computer-generated view depicts part of Mars at the boundary between darkness and daylight, with an area including Gale Crater beginning to catch morning light. Curiosity was delivered in 2012 to Gale Crater, a 155-kilometer-wide crater that contains a record of environmental changes in its sedimentary rock. Credit: NASA/JPL-Caltech

This article originally appeared in the July 3, 2017 issue of SpaceNews magazine.

Mars 2020 is an ambitious mission. NASA plans to gather 20 rock cores and soil samples within 1.25 Mars years, or about 28 Earth months, a task that would be impossible without artificial intelligence because the rover would waste too much time waiting for instructions.

It currently takes the Mars Science Laboratory team at NASA's Jet Propulsion Laboratory eight hours to plan daily activities for the Curiosity rover before sending instructions through NASA's oversubscribed Deep Space Network. Program managers tell the rover when to wake up, how long to warm up its instruments and how to steer clear of rocks that damage its already beat-up wheels.

Mars 2020 will need far more autonomy. "Missions are paced by the number of times the ground is in the loop," said Jennifer Trosper, Mars Science Laboratory mission manager. "The more the rover can do on its own, the more it can get done."

The $2.4 billion Mars 2020 mission is just one example of NASA's increasing reliance on artificial intelligence, although the term itself makes some people uneasy. Many NASA scientists and engineers prefer to talk about machine learning and autonomy rather than artificial intelligence, a broad term that in the space community sometimes evokes images of HAL 9000, the fictional computer introduced in Arthur C. Clarke's 2001: A Space Odyssey.

To be clear, NASA is not trying to create HAL. Instead, engineers are developing software and algorithms to meet the specific requirements of missions.

"Work we are doing today focuses not so much on general intelligence but on trying to allow systems to be more independent, more self-reliant, more autonomous," said Kelly Fong, the NASA Ames Research Center's senior scientist for autonomous systems and director of the Intelligent Robotics Group.

For human spaceflight, that means giving astronauts software to help them respond to unexpected events ranging from equipment failure to medical emergencies. A medical support tool, for example, combines data mining with reasoning and learning algorithms to help astronauts on multi-month missions to Mars handle everything from routine care to ailments or injuries "without having to talk to a roomful of flight controllers shadowing them all the time," Fong said.

Through robotic Mars missions, NASA is demonstrating increasingly capable rovers. NASAs Mars Exploration Rovers, Spirit and Opportunity, could do very little on their own when they bounced onto the red planet in 2004, although they have gained some autonomy through software upgrades. Curiosity, by comparison, is far more capable.

Last year, Curiosity began using software called Autonomous Exploration for Gathering Increased Science that combines computer vision with machine learning to select rocks and soil samples to investigate based on criteria determined by scientists. The rover can zap targets with its ChemCam laser, analyze the gases that burn off, package the data with images and send them to Earth.
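
The article does not spell out those scientist-set criteria, but ranking candidate targets can be sketched as a weighted score over image-derived features. The fields, weights and range limit in this Python sketch are all invented for illustration; they are not the software's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    brightness: float  # 0..1, from the onboard image (invented feature)
    size: float        # apparent size, 0..1 (invented feature)
    distance_m: float  # range from the rover to the target

def score(t, weights=(0.5, 0.3, 0.2), max_range_m=7.0):
    """Combine scientist-chosen criteria into a single ranking score."""
    reachability = max(0.0, 1.0 - t.distance_m / max_range_m)
    w_bright, w_size, w_reach = weights
    return w_bright * t.brightness + w_size * t.size + w_reach * reachability

targets = [Target("rock_a", 0.9, 0.2, 3.0), Target("rock_b", 0.6, 0.8, 6.5)]
best = max(targets, key=score)
print(f"aim ChemCam at {best.name}")  # -> rock_a in this toy example
```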

"Scientists on the mission have been excited about this because in the past they had to look at images, pick targets, send up commands and wait for data," said Kiri Wagstaff, a researcher in JPL's Machine Learning and Instrument Autonomy Group.

Although data can travel between Earth and Mars in 10 to 30 minutes, mission controllers can only send and receive data during their allotted time on the Deep Space Network.

"Even if the rover could talk to us 24/7, we wouldn't be listening," Wagstaff said. "We only listen to it in a 10-minute window once or twice a day because the Deep Space Network is busy listening to Cassini, Voyager, Pioneer, New Horizons and every other mission out there."

The Mars 2020 rover is designed to make better use of limited communications with mission managers by doing more on its own. It will wake itself up and heat instruments to their proper temperatures before working through a list of mandatory activities, plus additional chores it can perform if it has enough battery power remaining.
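
That plan, mandatory activities first and optional chores while the battery allows, can be sketched as a simple greedy scheduler. The activity names and energy costs below are invented; the real planner is far more sophisticated.

```python
MANDATORY = [("warm_instruments", 8.0), ("drive_to_waypoint", 25.0)]
OPTIONAL = [("extra_panorama", 10.0), ("zap_bonus_target", 6.0), ("soil_scan", 12.0)]

def plan_day(battery_wh):
    """Run every mandatory activity, then add the cheapest optional chores
    that still fit within the remaining energy budget."""
    plan, remaining = [], battery_wh
    for name, cost_wh in MANDATORY:
        plan.append(name)
        remaining -= cost_wh
    for name, cost_wh in sorted(OPTIONAL, key=lambda item: item[1]):
        if cost_wh <= remaining:
            plan.append(name)
            remaining -= cost_wh
    return plan, remaining

print(plan_day(55.0))
# -> (['warm_instruments', 'drive_to_waypoint', 'zap_bonus_target',
#      'extra_panorama'], 6.0)
```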

"Ideally, we want to say, 'This area is of interest to us. We want images of objects and context from the instruments. Call us when you've got all that and we will use the information to get a sample,'" Trosper said.

NASA isn't there yet, but Mars 2020 takes the agency in that direction with software to enable the rover to drive from point to point through Martian terrain while avoiding obstacles. "It's the kind of basic skill toddlers learn, not to run into things, but it's a good skill," Fong said. "That type of autonomy is increasingly being added to our space systems. Going forward, I see us adding more and more of these intelligent skills."

Future missions like NASA's Europa Clipper will need robust artificial intelligence to look for plumes rising from a subsurface ocean and cracks in the moon's icy surface caused by hydrothermal vents. When scientists can't predict when or where they will make discoveries, they need artificial intelligence "to watch for things, notice them, capture data and send it back to us," Wagstaff said.

As the Europa Clipper's instruments collect data, the spacecraft's onboard processor will need to assign priorities to the observations and downlink the most interesting ones to Earth, Wagstaff said. "We always can collect more data than we can transmit."
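
A minimal sketch of that prioritisation, assuming each observation carries an "interestingness" score and a size: rank by score and downlink greedily until the day's bandwidth budget runs out. All identifiers and numbers are invented.

```python
observations = [
    {"id": "img_001", "score": 0.91, "size_mb": 40},
    {"id": "img_002", "score": 0.15, "size_mb": 55},
    {"id": "spec_007", "score": 0.78, "size_mb": 10},
    {"id": "img_003", "score": 0.60, "size_mb": 35},
]

def select_for_downlink(obs, budget_mb):
    """Most interesting first, skipping anything that no longer fits
    within the remaining downlink budget."""
    chosen, used_mb = [], 0.0
    for o in sorted(obs, key=lambda o: o["score"], reverse=True):
        if used_mb + o["size_mb"] <= budget_mb:
            chosen.append(o["id"])
            used_mb += o["size_mb"]
    return chosen

print(select_for_downlink(observations, budget_mb=60))
# -> ['img_001', 'spec_007']
```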

That is particularly true of missions beyond Mars, which cannot count on the NASA orbiters that relay data from the red planet. Missions to Europa or Saturn's moon Enceladus also will experience communication delays because of the distance.

NASA has developed software on Earth-observation satellites that could be used in future missions to ocean worlds. The Intelligent Payload Experiment cubesat launched in 2013 relied on machine learning to analyze images and highlight anything that stood out from its surroundings.

"It has its eyes open to look for anything that doesn't match what we expect or anything that stands out as being different," Wagstaff said. "We can't predict what we are going to find. We don't want to miss something just because we haven't trained instruments to look for it."
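
One classic way to give an instrument "open eyes" without telling it what to look for is to flag anything statistically far from the scene's background. This toy Python version is a stand-in, not the cubesat's published algorithm.

```python
import numpy as np

def flag_anomalies(pixels, threshold=5.0):
    """Flag values that stand out: anything more than `threshold`
    standard deviations from the scene mean."""
    return np.abs(pixels - pixels.mean()) > threshold * pixels.std()

rng = np.random.default_rng(1)
scene = rng.normal(100.0, 5.0, size=(64, 64))  # ordinary, expected background
scene[10, 20] = 180.0                           # something that doesn't match
print(np.argwhere(flag_anomalies(scene)))       # almost surely just [[10 20]]
```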

A proposed future mission to bore through Europa's ice to investigate whether life exists in an ocean below would require even more onboard intelligence. NASA probably would design software to look for inconsistencies in chemical composition or temperature. "That would keep you from having to say what life would look like, what it would be eating and its energy source," Wagstaff said.

Before engineers send hardware or software into space, they test it extensively in analogous environments on Earth. Engineers test Mars missions in the desert. The best analog for Europa missions may be glacial deposits in the Arctic.

"We are acutely aware of risk mitigation because we are dealing with spacecraft that cost hundreds of millions or even billions of dollars," Wagstaff said. "Everything we do is thoroughly tested, for years in some cases, before it is ever put on the spacecraft."

AI at the controls

The capsules SpaceX and Boeing are building to ferry astronauts between Earth and the International Space Station are designed to operate autonomously from the minute they launch, through the demanding task of docking and on their return trip.

NASA crews will spend far less time learning to operate the spacecraft than preparing to conduct microgravity research and maintain the orbiting outpost, said Chris Ferguson, the former space shuttle commander who directs crew and mission operations for Boeing's CST-100 Starliner program.

"It provides a lot of relief in the training timeframe. They don't have to learn everything. They just have to learn the important things and how to realize when the vehicle is not doing something it's supposed to be doing," Ferguson told SpaceNews.

The Starliner flight crew will train to monitor the progress of the spaceship. If something goes wrong, they will know how to take control manually and work with the ground crew to fix the problem, he added.

NASA insisted on that high degree of autonomy, in part, to ensure the crew capsules could serve as lifeboats in case of emergencies.

"If there's a bad day up there and the crew needed to come home quickly, they could pop into the vehicle with very little preparation, close the hatch and set a sequence of events into play that would get them home very quickly," Ferguson said.

In many ways, Starliner's autonomy in flight is similar to an airplane's. Whether on commercial airplanes or spacecraft, "everyone is beginning to realize pilots are turning into systems monitors more than active participants," Ferguson said.

When Starliner docks with the space station, the crew will be monitoring sophisticated sensors and image processors. Boeing relies on cameras, infrared imagers and Laser Detection and Ranging (LADAR) sensors that create three-dimensional maps of the approach. A central processor will determine which sensor is more likely to be accurate and will weight the data accordingly, to ensure that two vehicles that were previously traveling quickly relative to one another come into contact at about four centimeters per second.
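
Boeing's fusion logic is proprietary, but weighting sensors by how accurate they currently look is commonly done with inverse-variance weighting: the smaller a sensor's error variance, the more its reading counts. A sketch with invented numbers:

```python
def fuse(measurements):
    """Inverse-variance weighting of (value, variance) pairs."""
    weights = [1.0 / var for _, var in measurements]
    weighted = sum(w * value for w, (value, _) in zip(weights, measurements))
    return weighted / sum(weights)

# Closing-speed estimates in m/s from three sensor types (values invented):
readings = [(0.041, 0.0004),    # camera
            (0.038, 0.0001),    # infrared imager
            (0.040, 0.00005)]   # LADAR
print(f"fused closing speed: {fuse(readings):.4f} m/s")  # near 4 cm/s
```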

In spite of the complexity, astronauts will view displays that look similar to the ones airplane pilots see on instrument landing systems, Ferguson said.

Intel CEO Brian Krzanich will discuss the future of artificial intelligence and more at Disrupt SF – TechCrunch

From smart assistants like Alexa and Siri to the latest bleeding-edge advancements in robotics, there's no buzzier buzzword in the tech world than artificial intelligence. The topic of AI has been a primary focus for Intel's Brian Krzanich as he works to expand the chipmaker's scope from PCs to the next generation of technology breakthroughs.

Intel's chief executive will be joining us on stage at TechCrunch Disrupt San Francisco 2017 in September to discuss the company's recent massive investments in AI, from multibillion-dollar acquisitions to the formation of the Artificial Intelligence Products Group, which reports directly to Krzanich.

Intel's CEO has been extremely bullish about forward-facing technologies since taking the helm in 2013. Along with AI, under Krzanich's watch, the silicon juggernaut has become a leader in developing the underlying technologies that power 5G networks, self-driving cars, drones and cloud computing.

It marks a strong contrast from the Intel Krzanich inherited as chief, which was still reeling from a failure to fully embrace mobile. Instead, the company ceded much of the market it dominated during its '90s heyday, as other chipmakers rushed in to dominate smartphones and tablets.

Krzanich has seen plenty of ups and downs during his time at the company, having first come on board as an engineer 35 years ago. In the intervening years, he's held a wide range of different roles at the company, serving as a fabrication plant manager and holding leadership positions in the company's manufacturing organization, before becoming COO in 2012.

Since his most recent promotion, the company's PC focus has shifted from 80 percent of the business to around 50 percent, with the other half shifting toward more forward-facing technologies. Recently, the company has made a big investment in drones, including last year's Super Bowl halftime show, which featured 300 drones flying in tandem alongside Lady Gaga (subsequent displays have featured up to 500).

Of course, diversification doesn't always take. Intel's massive investment in wearables doesn't appear to have panned out. That wing of the company has taken a pretty notable hit as the rest of the industry has flatlined. On the whole, however, most of the company's recent moves appear to have put Intel on the right track as it looks to take on the future of the ever-changing tech world head on.

Late last night, Krzanich joined several other executives and left one of Trump's advisory councils in the wake of the white supremacist rally in Charlottesville, which many felt was inadequately addressed by the president and his administration. In a post on Policy@Intel, the company's public policy blog, Krzanich wrote that he resigned from the American Manufacturing Council on Monday to call attention to the serious harm our divided political climate is causing to critical issues, including the serious need to address the decline of American manufacturing.

Krzanich will join us to discuss how a company with roots as deep as Intels plans for the future. You can plan on being there to hear it first hand. Tickets are now available at an early bird rate.

People are far more likely to be killed by artificial intelligence than nuclear war with North Korea, warns Elon Musk – The Independent

Elon Musk says artificial intelligence poses more of a risk than a potential nuclear conflict between the US and North Korea.

The CEO of Tesla issued the warning after an AI built by OpenAI, a company founded by Mr Musk, defeated the world's best Dota 2 players after just two weeks of training.

"If you're not concerned about AI safety, you should be. Vastly more risk than North Korea," he tweeted shortly after the bot's victory, along with a picture of a poster bearing the slogan: "In the end, the machines will win."

The poster, incidentally, is actually about gambling.

"Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too," he added later.

"Biggest impediment to recognizing AI danger are those so convinced of their own intelligence they can't imagine anyone doing what they can't."

A recent University of Oxford study concluded that AI will be better than humans at all tasks within 45 years, and many people, including Stephen Hawking, believe humans will be in trouble in the future if our goals don't align with those of machines.

However, following the exchange of increasingly heated words between Donald Trump and Kim Jong-un, some Twitter users pointed out that nuclear war might wipe humans out before AI even gets the chance to.

Mr Musk has spoken out about the potential dangers of AI on numerous occasions, and recently engaged in a war of words with Mark Zuckerberg, who has a very different outlook to him.

After Mr Musk called AI a "fundamental existential risk for human civilisation", the Facebook founder branded his views as "negative" and "pretty irresponsible".

Mr Musk hit back by saying Mr Zuckerberg's understanding of the subject was "limited".

He wants the companies working on AI to slow down to ensure they don't unintentionally build something unsafe, and says it needs to be regulated.

"I think we should be really concerned about AI, and I think we should... AI's a rare case where I think we need to be proactive in regulation instead of reactive," he said last month.

"Because I think by the time we are reactive in AI regulation, it's too late."

Tiny IDF Unit Is Brains Behind Israeli Army Artificial Intelligence – Haaretz

The operational research unit of the Military Intelligence Unit, the software unit of the Israeli army's J6/C4i Directorate's Lotem Unit, doesn't look like the kind of place where state-of-the-art artificial intelligence is being put to work.

There are no espresso machines, brightly colored couches or views of Tel Aviv from the top floors of an office tower. The unit conducts its work in the backwater of Ramat Gan and has the look and feel of any other army office.

But the unit is engaged in the same kind of AI work that the world's biggest tech companies, like Google, Facebook and China's Baidu, are doing in a race to apply machine learning to such functions as self-driving cars, analysis of salespeople's telephone pitches and cybersecurity, or, in this case, to fight Israel's next war more intelligently.

Maj. Sefi Cohen, 34, is head of the unit, which in effect makes him the army's chief data officer. As he explains it, his unit's mission is to provide soldiers in the field with data-based insights with the help of smart tools. "We embed these capabilities in applications that help commanders in the field," he said.

One example is a system for predicting rocket launches from the Gaza Strip. "After Operation Protective Edge, we developed an app that learns from field sensors and other data we collected what are the most likely areas launchers will be set up in and at what hours. That enables us to know in advance what will happen and what areas should be attacked in order to fight them more effectively," he explained.
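
Cohen gives no details of the model, but the simplest version of "most likely areas and hours" is a frequency count over historical events, which a learned model would then refine. A toy sketch with invented data:

```python
from collections import Counter

# Invented history: (area_id, hour_of_day) for past launch events.
history = [("a3", 5), ("a3", 5), ("a3", 6), ("a7", 22), ("a3", 5), ("a7", 23)]

def likely_setups(events, top_n=3):
    """Rank (area, hour) cells by how often launches occurred there before."""
    return Counter(events).most_common(top_n)

print(likely_setups(history))  # ('a3', 5) dominates in this toy data
```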

In one project, the unit built a system based on neural networks whose purpose is to extract from a video a suspicious object and describe it in writing. "It won't replace human observers, but instead of looking at five cameras, it will be able to be responsible for dozens," said Cohen.

Cohen said the amount of data at his disposal from the army is endless, reaching into petabytes (one million gigabytes) in some areas. It also makes use of data from outside sources, and the apps it develops use open-source code. "We return to the world things that we use," Cohen says. "Models that are operational obviously do not go out."

Cohen got his start in the combat signals corps. Near the end of his compulsory service he completed a course in Lotem and spent another 10 years at its command and control systems unit. "I've always loved algorithms. I was already involved with them in high school and worked in the field. When I was drafted, I wanted to combine the technology with combat," he recalls.

Cohen set up the unit he now leads with the help of local high-tech executives. "I convinced my commanders that we could use machine learning in combat, and from there I started to bring in more and more people," he said. The unit now comprises about 20 officers, all of them in the career army and holding advanced degrees in computer science, focusing on AI.

The unit's only female member left recently, so for the moment it's an all-male team. Cohen says most are graduates of the army's elite Talpiot program; the one who isn't has a master's from the Technion-Israel Institute of Technology. "Everyone who's here is the tops. I learn a lot from them," he said.

What does AI mean for the future of manufacture? – Telegraph.co.uk

The world is on the brink of the fourth industrial revolution, and it could change the way we use everything from cars to shoes.

The first three industrial revolutions brought us mechanisation, mass production and automation. Now, more than half a century after the first robots worked on production lines, artificial intelligence (AI) and machine learning are shaking things up again.

Industry 4.0 uses technologies such as the internet of things to make manufacturing smarter, allowing companies to revolutionise the way they make and ship goods. "Manufacturing is becoming less about muscle and more about brains," says Greg Kinsey, vice president of Hitachi Insight Group.

"It becomes less place-specific. You start to look at 3D printing. The shoe industry is contemplating: do we actually need to produce all these shoes in lots of variations in southeast Asia, ship them around the world, only to go to the shop and it doesn't have your size? Why not produce them at the point of sale: put your foot in the scanner, measure the size and shape, swipe your credit card and pick your shoes up later that day?"

The digital transformation of manufacturing and supply chains means that data from factories is directly analysed using technologies such as machine learning and AI. The process can lead to drastic efficiency gains of up to 10pc, says Mr Kinsey. Companies can also see manufacturing lead times slashed in half.

"Consumers will see a wider variety of products, to the point of mass customisation, where you can design your own," says Mr Kinsey. "Product will become linked to emerging demand, so we'll never be in a position where things are just out of stock."

The first stage, says Mr Kinsey, is to get rid of paper-based processes, something that many factories still rely on. Once digitised, the data can be crunched to ensure factories are operating efficiently. But the idea isn't to get rid of people; it's to augment what they do.

"When I graduated from university, I was heavily into industrial robots," says Mr Kinsey. "Everyone said that robots were going to take our jobs. But the companies that invested heavily in robots, like German car makers, are now world leaders, employing many more people than they would otherwise have done."

"When we use AI tools to predict bad quality, or to optimise the settings for a production line, we can manage it with more confidence. We have had a lot of clients tell us that this technology helps them improve the way they work. This should be the real driver of innovation."
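
As an illustration of "AI tools to predict bad quality", here is a minimal scikit-learn sketch that trains a classifier on fabricated line-sensor readings and scores the defect risk of the next part. The feature names and data are invented for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Invented features: spindle temperature, vibration, feed rate (standardised).
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.0).astype(int)  # 1 = part was defective

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

next_part = [[1.4, 0.9, -0.2]]  # sensor readings for the part on the line now
print("defect risk:", model.predict_proba(next_part)[0][1])
```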

European companies are currently leading the charge in the digital transformation of industry, says Mr Kinsey. Many are also working closely with start-ups to enhance industrial processes.

"There's a lot of interest in working with start-ups," Mr Kinsey explains. "When you embark on innovation, you don't always know what the solutions are."

The resulting Industry 4.0 may change the way we all think about products, Mr Kinsey says, and the first signs are already here.

"In Europe, you have a lot of people thinking: 'Do I need to own a car?' That would have been unthinkable 20 or 30 years ago. Michelin already has aircraft tyres that are on a pay-per-use basis: people pay based on the number of times the jet takes off."

"You need to embrace this technology; if you don't, because you fear that you might lose some jobs, you are going to lose all the jobs, as your company will no longer be competitive. In fact, digital technologies can improve the workplace and quality of work."

What An Artificial Intelligence Researcher Fears About AI – IFLScience

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

Fear of the unforeseen

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

Fear of misuse

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
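
That loop (evaluate every creature, keep the best, reproduce with mutation) is the skeleton of any evolutionary algorithm. In this toy Python version, the "creature" is just a list of numbers and the task is a trivial stand-in for the virtual environments described above.

```python
import random

def fitness(genome):
    """Stand-in evaluation: creatures score best with every gene near 0.5."""
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome, rate=0.1):
    """Copy a parent with small random changes."""
    return [g + random.gauss(0, rate) for g in genome]

population = [[random.random() for _ in range(8)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # the best performers are selected to reproduce
    population = [mutate(random.choice(parents)) for _ in range(50)]

print(f"best fitness after evolution: {fitness(max(population, key=fitness)):.4f}")
```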

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that well find unintended consequences in simulation, which can be eliminated before they ever enter the real world.

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution, and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

‘It knew what you were going to do next’: AI learns from pro gamers then crushes them – Washington Post

For decades, the world's smartest game-playing humans have been racking up losses to increasingly sophisticated forms of artificial intelligence.

The defeats began in the 1990s when IBM's Deep Blue computer conquered chess master Garry Kasparov. More recently, Ke Jie, until then the world's best player of the ancient Chinese board game Go, was defeated by a Google computer program in May.

Now the AI supergamers have moved into the world of e-sports. Last week, an artificial intelligence bot created by the Elon Musk-backed start-up OpenAI defeated some of the world's most talented players of Dota 2, a fast-paced, highly complex, multiplayer online video game that draws fierce competition from all over the globe.

OpenAI unveiled its bot at an annual Dota 2 tournament where players walk away with millions in prize money. It was a pivotal moment in gaming and in AI research, largely because of how the bot developed its skills and how long it took to refine them enough to defeat the world's most talented pros, according to Greg Brockman, co-founder and chief technology officer of OpenAI.

The somewhat frightening reality: It only took the bot two weeks to go from laughable novice to world-class competitor, a period in which Brockman said the bot gathered lifetimes of experience by playing itself.

During that period, players said, the bot went from behaving like a bot to behaving in a way that felt more alive.

Danylo "Dendi" Ishutin, one of the game's top players, was defeated twice by his AI competition, which "felt a little like human, but a little like something else," he said, according to the Verge.

Brockman agreed with that perspective: "You kind of see that this thing is super fast and no human can execute its moves as well, but it was also strategic, and it kind of knows what you're going to do," he said. "When you go off screen, for example, it would predict what you were going to do next. That's not something we expected."

Brockman said games are a great testing ground for AI because they offer a defined set of rules with baked-in complexity that allow developers to measure a bot's changing skill level. He said one of the major revelations of the Dota 2 bot's success was that it was achieved via self-play, a form of training in which the bot continuously played against a copy of itself, amassing more and more knowledge and improving incrementally.
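
Self-play as described, a bot improving by playing a copy of itself and keeping whichever version wins, can be sketched schematically. The Agent class, the skill number and the match model below are all invented stand-ins for OpenAI's actual system.

```python
import random

class Agent:
    """Toy learner whose ability is summarised by a single 'skill' number."""
    def __init__(self, skill=0.0):
        self.skill = skill
    def tweaked(self):
        return Agent(self.skill + random.gauss(0.05, 0.1))

def beats(a, b):
    """Elo-style match model: the higher-skill agent wins more often."""
    return random.random() < 1.0 / (1.0 + 10 ** (b.skill - a.skill))

agent = Agent()
for _ in range(1000):
    challenger = agent.tweaked()  # propose a slightly modified version of itself
    wins = sum(beats(challenger, agent) for _ in range(20))
    if wins > 10:                 # keep it only if it beats the old self
        agent = challenger

print(f"final skill estimate: {agent.skill:.2f}")
```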

For a game as complicated as Dota 2, which incorporates more than 100 playable roles and thousands of moves, self-play proved more organic and comprehensive than having a human program the bot's behavior.

"If you're a novice playing against someone who is awesome (playing tennis against Serena Williams, for example), you're going to be crushed, and you won't realize there are slightly better techniques or ways of doing something," Brockman said. "The magic happens when your opponent is exactly balanced with you, so that if you explore and find a slightly better strategy, it is then reflected in your performance in the game."

Tesla chief executive Elon Musk hailed the bot's achievement in historic fashion on Twitter before going on to once again express his concerns about artificial intelligence, which he said poses "vastly more risk than North Korea."

Musk unleashed a debate on the dangers of AI last month when he tweeted that Facebook chief executive Mark Zuckerberg's understanding of the threat posed by AI is "limited."

Elon Musk Is Very Freaked Out by This Artificial Intelligence System’s Victory Over Humans – Inc.com

With all that's happening in the world, Elon Musk wants to make sure you don't forget about what he thinks is the biggest danger to humanity.

Over the weekend, Musk returned to tweeting about one of his favorite topics of discussion: artificial intelligence. He referenced the threat of nuclear war with North Korea to help make his point.

Musk's tweets came hours after an A.I. system developed by OpenAI defeated some of the world's best players at a military strategy game called Dota 2. According to a blog post by OpenAI, successfully playing the game involves predicting how an opponent will move, improvising in unfamiliar scenarios, and convincing the opponent's allies to help you instead.

OpenAI is the nonprofit artificial intelligence company Musk co-founded along with Peter Thiel and Sam Altman. The company's purpose is to research and develop A.I. and develop best practices to help ensure that the technology is used for good.

Musk has in the past called A.I. humanity's "biggest existential threat." A known A.I. fear monger, he recently got in a brief public spat with Mark Zuckerberg about the danger that the technology poses to humans. Zuckerberg, whose Facebook, like Tesla, invests heavily in artificial intelligence, referred to Musk's prophesizing about doomsday scenarios as "irresponsible." Musk responded on Twitter the next day by calling Zuckerberg's understanding of the topic "limited."

Comparing the threat of A.I. to that of nuclear war with North Korea is clearly a tactic meant to shock, as Musk has been wont to do on this topic. Earlier this year, he laid out a scenario in which A.I. systems meant to farm strawberries could lead to the destruction of mankind.

Even if Musk is speaking in hyperbole, though, it's not hard to see why an A.I. system that outsmarts humans at military strategy might be cause for concern.

Musk's opinions on the technology have been at odds with those of tech leaders like Zuckerberg, Amazon's Jeff Bezos, and Google co-founders Larry Page and Sergey Brin. All have advocated for A.I. in recent years with few, if any, reservations.

Even as Tesla relies heavily on artificial intelligence in developing self-driving cars, Musk keeps sounding the alarm. In July, he told a group at the National Governors Association Summer Meeting in Rhode Island that he believes A.I. should be regulated proactively, before the need for such limitations even arises.

"I have exposure to the very cutting-edge A.I.," he said, "and I think people should be really concerned about it."

Four jobs artificial intelligence (AI) won’t destroy – Telegraph.co.uk

Given the trajectory that artificial intelligence is on, machines will soon do everything that people do today. In a world of increasingly powerful technology, which in aggregate will make the world a better, richer place but at the micro, personal level will make a lot of skills less relevant and less valuable, it is smart to try to figure out how to beat the bot. These are four areas and skills that are AI-proof (well, at least for a little while).

According to the job website CareerCast, data science is the toughest job to fill in 2017. That is because all sorts of businesses (banks, airlines and manufacturers, not just technology companies) know they need to run their operations based on data (rather than guesswork) and are scrambling to hire the talent.

You do not have to be a maths savant to be a data scientist. The biggest trend this year is the growth of the citizen data scientist. Get started by working with software from Tableau or Qlik.

Aaron Levie, chief executive of cloud storage vendor Box, recently said: "If you want a job for the next few years, work in technology. If you want a job for life, work in cybersecurity."

The battle between black hats and white hats gets more and more intense each year as the modern-day equivalents of Willie Sutton, a notorious US career bank robber of the 20th century, go where the money is, ie, hacking code.

Keeping 16-year-old Ukrainians and state-sponsored operatives at bay is a task without end. You might not be able to talk about your work, but your bank balance will know.

Apple's design sensibility (beautiful objects, beautiful online and retail experiences) has changed the face of modern business. Now every company and organisation knows it needs to upgrade its customer-facing game to stay in tune with changing demographics and changing times.

Design, once an afterthought when engineers and accountants had done the real work, is front and centre in every critical decision businesses are making. Consequently, design companies are being acquired right, left, and centre by big consulting and technology firms.

If you do not have a STEM (science, technology, engineering, maths) background, but are more artsy, design (of products and services and user interfaces) is one of the surest ways for a non-technologist to thrive in an increasingly technocentric world.

In recent research conducted by the Cognizant Centre for the Future of Work, almost all of the 2,500 leading executives who were interviewed agreed that humans need to be more strategic in the face of growing automation. What does that mean?

Rote tasks, which still represent a substantial proportion of most people's day-to-day work, are morphing into the machine, freeing up time and energy to ask better questions, craft better directions and generate more impactful innovation.

This is happening at the executive level within your organisation and in the small department where perhaps you work.

The need to elevate the role of human relative to machine is the great challenge and opportunity in front of us all. So there will be plenty of work for strategists to help chief executives and boards understand what their company should do when machines do everything.

And there will be plenty of work for people who can think strategically about the work they do and how to do it as software and robots become more and more intelligent, and more and more useful.

A final thought is that only a third of the survey respondents thought that the rise of artificial intelligence would lead to large-scale reductions in the number of people needed to do work, which is the widespread meme in the zeitgeist about artificial intelligence (AI) and robots.

The vast majority believe, as does Cognizant, that unquenchable human ingenuity will continue to find plenty of work for human hands and brains to do to satisfy existing and emerging wants and needs. When machines do everything there will still be plenty for humans to do. You should get on with it.

Ben Pring is a co-author of What To Do When Machines Do Everything (Wiley 2017) and leads Cognizant's Centre for the Future of Work.

This article was originally produced and published by Business Reporter. View the original article at business-reporter.co.uk

Is Artificial Intelligence No Longer Cutting Edge? – Bloomberg Big Law Business

ILTACON Series Will Explore Myths, Realities of AI and Automation in the Law

The legal technology industry is gearing up for one of its premier events of the year, the International Legal Technology Association Conference (ILTACON). The massive event, which kicks off on Aug. 14 at the Mandalay Bay Resort and Casino in Las Vegas, will boast four major keynotes, almost 200 peer-based educational sessions with over 350 speakers, and an exhibit hall featuring more than 200 service and software vendors to the legal market.

A theme pervasive to many of the sessions and, as a result, much of the conversation in the hallways and planned networking events, is artificial intelligence. AI, which in the legal world takes the shape of machine learning and natural language processing, for example, will be extensively deliberated during a three-part series, Artificial Intelligence in Law. Experts will not only discuss how AI is being used to leverage data, automate legal work, reduce costs and enhance efficiencies, but also share what it takes to implement AI initiatives in legal departments and law firms.

"It begins with an understanding of what the role of AI is in the delivery of legal services, and what it isn't. In the legal industry, the mention of AI is too often met with either fear, disbelief or irrational exuberance. But none of those reactions is warranted," said Martin Tully, co-chair of the data law practice at Akerman and a panelist at the AI kickoff session on Monday, Aug. 14, entitled "The Myths, Realities and Future of Artificial Intelligence and Automation in the Law (Part 1 of 3)."

As Tully explains, the legal industry should think of AI as meaning "augmented intelligence": something that allows lawyers and their clients to far better understand information and data, and to make smarter, more efficient decisions, "but not replacing humans with an army of legal robots," he said.

There are at least four common AI uses for which firms or law departments can easily license commercial, off-the-shelf AI software and deploy it in a manner similar to other practice technologies, according to Ron Friedman, partner at consulting firm Fireman & Company. Friedman will be a panelist during the "Artificial Intelligence in Law: AI in Action (Part 2 of 3)" session on Tuesday, Aug. 15 at 11 a.m. The four use cases are:

"All four uses address clearly defined problems lawyers face. All four have multiple providers that offer off-the-shelf software. These products are straightforward to deploy from both the IT and user training/adoption perspectives," Friedman said. Many lawyers have used predictive coding for years. One company that provides software to accelerate due diligence reviews recently stated publicly that it has over 200 law firm licensees running 1,000 projects per month. The AI-driven legal research products have seen rapid uptake. Deploying AI is no longer cutting edge.

In some circles, AI may no longer be vanguard, but it's still a notion that many law firms are only beginning to accept or embrace. With pressure coming from corporate legal departments not only to improve efficiencies and save money, but also to be more technologically advanced, credible AI technologies can be an attractive option for firms when they see them as a way to create more business value and make their jobs easier.

"The first thing they have to do is be receptive. Pick up the phone or reach out. The power behind AI is extremely complex. Sometimes engineers don't even understand all the parts. What's important is that the interface be very simple. And it has to be easier than anything you were doing in the past," said Jake Heller, CEO of AI-based legal research firm Casetext. "Law firms are trying to solve real problems for the firm and for their clients. Clients are asking them to increase the quality of services and efficiency. That can seem paradoxical, but that pressure is driving the receptiveness to AI solutions."

Certainly, law firms, as well as legal departments, vary in the degree to which they are adopting AI-based technologies, but the majority of law firms, as in most other business verticals, don't have mature AI strategies, according to Alex Lazo, CIO of Mullen Coughlin.

"AI is such a new technology and just now starting to trend," he said. "You will see some early adopters with any new tech that start to pave the way."

ILTACON's third session in its AI series, "Artificial Intelligence in Law: From Theory to Practice (Part 3 of 3)," will take place on Aug. 16 at 11 a.m.

See the rest here:

Is Artificial Intelligence No Longer Cutting Edge? - Bloomberg Big Law Business

Artificial Intelligence More Dangerous Than North Korea, Elon Musk Tweets – CleanTechnica

August 14th, 2017 by Steve Hanley

We would expect Elon Musk to be a champion of artificial intelligence. After all, it is the cornerstone of the autonomous driving system known as Autopilot that is featured in Tesla automobiles. But he has been warning about the potential dangers of AI since 2014, when he called it the biggest existential threat to humanity ever known. How can someone be a champion of a new technology he finds so potentially dangerous? Easy: Musk is not constrained by conventional thinking. His ability to see not only both sides of a coin but also the edge and what's inside is legendary.

In 2015, Musk co-founded OpenAI, whose mission is "to develop artificial intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." His involvement in AI research is partly to keep an eye on what's going on in the field.

Last month, during a wide-ranging presentation to the National Governors Association, Musk called for more government regulation of artificial intelligence before it's too late. Last week, OpenAI beat all human competitors at an international competition for the multiplayer online battle arena game Dota 2. The OpenAI program was able to predict where human players would deploy forces and improvise on the spot, in a game where sheer speed of operation does not correlate with victory. That means the OpenAI entrant was simply better, not just faster, than the best human players. After the victory, Musk tweeted:

Asked how one regulates something that can be done by a single hacker operating at midnight from a corner of his mother's basement, Musk replied, "It is far too complex for that. Requires a team of the world's best AI researchers with massive computing resources."

The public was exposed to the potential for abuse of artificial intelligence in the 2002 movie Minority Report, based on a short story by Philip K. Dick. That film was supposedly set in the year 2054 (the same year in which all cars will be electric, supposedly), but the predictive power shown by the game-playing software from OpenAI is already a reality.

In 2004, tech guru Peter Thiel, who once was associated with Elon Musk during their work on PayPal, formed Palantir together with Nathan Gettings, Joe Lonsdale, Stephen Cohen, and Alex Karp. Lord of the Rings fans may remember that the palantir was a seeing stone that gave Saruman the ability to see in darkness or blinding light.

Palantir literally means "one that sees from afar," or a mythical instrument of omnipotence. Others may associate a palantir with the mysterious, all-seeing television screens that featured prominently in George Orwell's 1984. Still others may see a link to Jeremy Bentham's Panopticon, an idea he promoted near the end of the 18th century.

According to The Guardian, Palantir "watches everything you do and predicts what you will do next in order to stop it." As of 2013, its client list included the CIA, the FBI, the NSA, the Centers for Disease Control, the Marine Corps, the Air Force, Special Operations Command, West Point and the IRS. Up to half of its business is with the public sector; In-Q-Tel, the CIA's venture arm, was an early investor. The other half of its business is with Wall Street hedge funds and investment banks.

Palantir is working closely with police in Los Angeles and Chicago. Some may applaud its ability to predict which people represent a danger to society, but critics contend that such data-mining techniques only reinforce stereotypes that some segments of law enforcement already hold, especially when it comes to black males. An officer who comes into contact with someone whom Palantir has labeled a threat is likely to behave differently than if that person had not been pre-targeted by an algorithm.

Even more troubling is a firm known as Cambridge Analytica, formed in 2013 specifically to use data mining to influence elections. Its principal backer, Robert Mercer, and Peter Thiel are both staunch supporters of Donald Trump. Both organizations operate in extreme secrecy bordering on paranoia. It is not a stretch of the imagination to suggest that Cambridge Analytica may have had as much influence on the results of the last presidential election as the alleged Russian interference in the campaign.

Certainly, organizations that exert so much control over federal, state, and local governments cry out for oversight. Musks call for more regulation of AI will be vehemently opposed by Palantir, Cambridge Analytica, and any other companies looking to make a buck from compiling data and selling their conclusions to those in power.

Musk's concerns were dramatized in the movie I, Robot, starring Will Smith, in which a machine imbued with artificial intelligence seeks to take control of society. It is only a movie, of course, but it raised disturbing questions about what the future may hold as machines become ever more sophisticated.

Recently, Musk got into a public spat with Facebook founder Mark Zuckerberg over the dangers of AI. Musk characterized Zuckerberg as someone with limited understanding of the subject after Zuckerberg accused Musk of scaremongering about the dangers of artificial intelligence. But when it comes down to the nitty gritty and society needs authoritative, practical advice about AI, who you gonna call: Musk or Zuckerberg? Exactly.



Steve Hanley writes about the interface between technology and sustainability from his home in Rhode Island. You can follow him on Google+ and on Twitter. "There may be times when we are powerless to prevent injustice, but there must never be a time when we fail to protest." (Elie Wiesel)

Read more:

Artificial Intelligence More Dangerous Than North Korea, Elon Musk Tweets - CleanTechnica

Swift creator Chris Lattner moving to Google Brain’s artificial intelligence effort – AppleInsider (press release) (blog)

By Mike Wuerthele Monday, August 14, 2017, 12:40 pm PT (03:40 pm ET)

Lattner announced in a Tweet that he was starting at Google Brain around August 21.

Given the relatively open-source nature of Swift, Lattner can continue to contribute to the language, to some extent even after his departure from Apple.

Lattner studied computer science at the University of Portland, Ore. After co-authoring LLVM, he was hired by Apple in 2005 and was instrumental in the advancement of Xcode, Apple's OpenGL implementation, and every aspect of Apple's Swift rollout and continued development.

Tesla hired Lattner to serve as the company's Vice President of Autopilot Software. The match only lasted about six months, with Lattner ultimately stating that the position wasn't a good fit for him.

At the time of Lattner's departure, Apple coder Ted Kremenek was selected to lead the Swift development team.

Google Brain is Alphabet's division focused on machine learning and artificial intelligence. It emphasizes practical application of the technology across Google's entire product line, and the group's stated goals include advancing the discipline well beyond the company's halls.

Read this article:

Swift creator Chris Lattner moving to Google Brain's artificial intelligence effort - AppleInsider (press release) (blog)

Finding Harmony Between Human and Artificial Intelligence – Customer Think

[Image Source: Interactions.com]

Currently, artificial intelligence (AI) technologies are having an increasing impact on many aspects of daily life. I recently spoke with Interactions' Dr. Michael Johnston, a veteran of speech and language technology with over 25 years of experience in the industry, to discuss the benefits of combining artificial intelligence with human understanding.

Artificial intelligence refers to the capability of a machine to mimic or approximate human abilities. Examples include:

Increasingly, systems combining constellations of AI technologies that previously were only found in research prototypes are coming into daily use by consumers in applications such as mobile and in-home virtual assistants (e.g. Siri, Cortana, and Alexa).

Despite these successes, significant challenges remain in the application of AI, especially in language applications, as we scale from simpler information-seeking and control tasks ("play David Bowie," "turn on the lights") to more complex tasks involving richer language and dialog (e.g. troubleshooting for technical support, booking multi-part travel reservations, giving financial advice). Among enterprise applications of AI, one approach that is gaining popularity is to forgo the attempt to create a fully autonomous AI-driven solution in favor of leveraging an effective blend of human and machine intelligence.

HUMAN INTELLIGENCE HAS ALWAYS PLAYED A CRITICAL ROLE IN MACHINE LEARNING

Specifically, in supervised learning, human intelligence is generally applied to assign labels, or richer annotations, to the examples used to train AI models, which are then deployed in fully automated systems. Effective solutions are now emerging that involve the symbiosis of human and artificial intelligence in real time. These approaches vary in whether a human agent or an artificial agent is the driver of the interaction.
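
As a concrete illustration of that label-then-deploy loop, here is a minimal sketch in Python, assuming scikit-learn is available; the utterances, intent labels and test query are invented for illustration and stand in for a real human-annotation pipeline.

```python
# A minimal sketch of human-labeled supervised learning (assumes scikit-learn).
# The utterances and intent labels below are invented; in practice human
# annotators supply the labels, and the trained model then runs unattended.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "my bill looks wrong this month",
    "I want to add a line to my plan",
    "the internet keeps dropping out",
    "please cancel my subscription",
]
human_labels = ["billing", "sales", "tech_support", "retention"]

# Train on the human-labeled examples, then deploy the model fully automated.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, human_labels)

# A new utterance is classified with no human in the loop.
print(model.predict(["my bill is wrong"]))  # expected to route to "billing"
```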

In the case of an artificial agent fielding calls, text messages or other inputs from a user, human intelligence can be engaged in real time to provide live supervision of the automated solution's behavior at various levels (human-assisted AI). For example, human agents can listen to audio and assist with hard-to-recognize speech inputs, assigning a transcription and/or semantic interpretation to the input. They can also assist with higher-level decisions, such as which path to take in an interactive dialog flow, or how best to generate an effective response to the user. In these cases, the goal is to contain the interaction in what appears to the customer to be an automated solution, but one that leverages just enough human intelligence to maintain robustness and a high quality of interaction.
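
One simple way to realize this human-assisted pattern is confidence-based escalation: the automated model handles high-confidence inputs itself and quietly routes low-confidence ones to a human. The sketch below is a hypothetical illustration, not any vendor's actual architecture; the stand-in model, intents and threshold are all assumptions.

```python
# Hypothetical confidence-based escalation: the automated agent answers when
# confident, and silently hands low-confidence inputs to a human annotator.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # assumed operating point, tuned per deployment

@dataclass
class Interpretation:
    intent: str
    confidence: float

def automated_interpret(utterance: str) -> Interpretation:
    # Stand-in for a real speech/NLU model returning a scored hypothesis.
    if "balance" in utterance:
        return Interpretation("check_balance", 0.95)
    return Interpretation("unknown", 0.40)

def human_interpret(utterance: str) -> Interpretation:
    # Stand-in for a live agent assigning a transcription/interpretation;
    # the customer still experiences a single automated interaction.
    print(f"[routed to human annotator] {utterance!r}")
    return Interpretation("billing_question", 1.00)

def interpret(utterance: str) -> Interpretation:
    hypothesis = automated_interpret(utterance)
    if hypothesis.confidence < CONFIDENCE_THRESHOLD:
        hypothesis = human_interpret(utterance)
    return hypothesis

print(interpret("what is my account balance").intent)     # machine handles it
print(interpret("er, the thing on my statement").intent)  # human assists
```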

In contrast, in AI-assisted human interaction, the driver of the interaction is a human agent, and the user's perception is that they are interacting with a person. The role of the AI is to provide assistance to the human agent in order to optimize and enhance their performance. For example, an AI solution assisting a contact-center agent might suggest a possible response to return as text or to read out to a customer.

Several companies have recently explored the application of sequence-to-sequence models built on deep neural networks to formulate a response, or multiple responses, that an agent can adopt or edit. One of the great advantages of this setting for applying new machine learning algorithms is the reduced risk of failure, since the human agent retains the final say on whether to adopt the suggested response or use another. In addition, human decisions to adopt, reject or edit suggested responses provide critical feedback for improving the AI models that make the suggestions.
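
A rough sketch of that adopt/reject/edit feedback loop follows; the suggestion function is a placeholder for a real sequence-to-sequence model, and the logging format is an assumption made for illustration.

```python
# Toy agent-assist loop: a model suggests replies, the human agent adopts,
# edits, or rejects them, and every outcome is logged as training feedback.
import json
from typing import Optional

def suggest_responses(customer_message: str) -> list[str]:
    # Stand-in for a sequence-to-sequence model returning candidate replies.
    return [
        "I'm sorry to hear that. Let me look into your account.",
        "Could you share the exact error message you're seeing?",
    ]

feedback_log: list[dict] = []

def agent_turn(customer_message: str,
               chosen: Optional[int] = None,
               edited: Optional[str] = None) -> str:
    # chosen only -> suggestion adopted verbatim; chosen + edited -> edited
    # suggestion; edited only -> agent wrote their own reply (all rejected).
    suggestions = suggest_responses(customer_message)
    final = edited if edited is not None else suggestions[chosen]
    feedback_log.append({
        "message": customer_message,
        "suggestions": suggestions,
        "chosen": chosen,
        "final": final,
    })
    return final

print(agent_turn("my app crashes on login", chosen=1))
print(json.dumps(feedback_log[-1], indent=2))
```

Each logged record can later serve as a labeled example for retraining: adopted suggestions are positive signals, rejected ones negative, and edits show the model what a better reply looked like.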

Another example of AI-assisted human interaction is the application of predictive models, based on user profiles and interaction history, to support a financial advisor with suggestions they can make to a client, or to assist a salesperson in recommending the optimal strategy for up-selling a product. Yet further applications of AI empowering human agents include within-call analytics that track customer or agent emotion and provide live feedback to the human agent on their own emotional state or that of the customer.

Perhaps the best solutions for customer care will combine both humans assisting AI and AI assisting humans. Customers will first engage with automated virtual assistants that respond to their calls, texts, messages and other inputs, with human assistance playing a role in optimizing performance. Then, if the call requires transfer to a human agent, that agent will be supported by an AI-enabled solution that quickly brings them up to speed on the history of the interaction and can assist them in real time as they respond to and engage with the customer.

Tara Wildt

Interactions

Tara is a content marketing professional with experience in digital and social marketing. As Content Marketing Manager at Interactions, she is responsible for the overall content development and social media strategy. Tara holds a BA in International Relations from the University of San Diego and an MBA from Northeastern University.


See the original post:

Finding Harmony Between Human and Artificial Intelligence - Customer Think

Are robots moving sculptures? On Art, illusion and artificial intelligence – Salon

Traditional art has an element of illusionism to it. This has long been commented on, and is responsible for the prevalent thought (at least among the general public) that the more realistic the artwork, the more a man-made creation looks like a nature-made one, the better it must be. The ancients praised the lifelike naturalism of painters, with Pliny relating the famous story of a duel between two artists, one of whom was able to fool a bird into swooping in to peck at his painted grapes, whereas the other was able to fool the first artist, tricking him into trying to pull aside a curtain that was, in fact, his painting of a curtain. Fooling a human trumps fooling an animal, and the ability to inspire awe, wonder and the how-did-they-do-that expression has long been the goal of most traditional art. Think of the tale of Pygmalion, in which an ivory sculpture of a naked woman was so realistic, and its sculptor's love for it so strong, that it actually came to life.

And so it is with robots, particularly the latest generation of artificial intelligence, which strives for a human-like appearance, yes, but also an ability to make human-like decisions and responses. From films like Ex Machina, AI and I, Robot to the AI that lives in our pockets and living rooms, like Siri and Amazon Echo, we want artificial intelligence to feel lifelike. But we also want to know how and why it works. If we cannot explain why, if the illusionism feels too real, it can frighten.

Perhaps the most famous example of a sculpture come to life, a historical robotic AI conundrum, was a man-shaped machine called The Turk. This metal automaton was first unveiled in 1769, presented to the court of Empress Maria Theresa of Austria by one Wolfgang von Kempelen, a Hungarian inventor of, among other things, pontoon bridges, water pumps, steam turbines, a typewriter for a blind pianist, and a speaking machine that functioned like a mechanical model of the human vocal tract. That last invention took 20 years to produce, and used bellows of the sort that would stoke a fire, reeds from bagpipes, the bell of a clarinet and other components to produce, on demand, sounds reminiscent of human speech.

While many of von Kempelen's inventions are impressive, he is best known for his Turk, a full-sized manikin in the form and attire of a mustachioed Ottoman man, smoking a long pipe with one hand and seated behind a table upon which a chessboard sat. The automaton appeared to move on its own and consider its human opponent's chess game, reacting appropriately and winning most of its matches while it was in constant use (it was destroyed in 1854 by a fire at a Philadelphia theater, which damaged the neighboring museum in which it was stored). The Turk was victorious against several famous opponents, including Benjamin Franklin and Napoleon Bonaparte.

Von Kempelen got the idea to build The Turk after seeing illusionist Francois Pelletier perform at Schonbrunn Palace in Austria. Von Kempelen promised to return to the palace with an illusion that would outdo Pelletier's act. Return he did, with The Turk in tow. The automaton was designed with a stage magician's style in mind, for viewers would logically think that there might be some human hidden inside. So when it was presented, von Kempelen would open a series of cabinet drawers to show the audience that the base of the table was empty. Doors on the left of the cabinet showed brass gears and mechanics that looked like the inside of a clock. The back doors of the left side could be opened to show the audience all the way through to the other side. The right side also included brass structures, but these could be removed. There were hidden doors beneath the manikin, and thus behind the table, showing further clock-like workings. In short, the entire base of the automaton could be shown to the audience, to assure them that there was no person hidden inside.

But this is where the magician's sleight of hand came in. The middle of the table, beneath the chess set, did not open all the way to the far side. There was a compartment under the table that was not visible when the left and right cabinets were opened. Instead, there was a seat that could smoothly slide from side to side, where the chess master sat, rather contorted. When von Kempelen opened the left cabinet, the chess master would slide to the right. When he then closed the left cabinet to open the right, the chess master would slide left. Each time the seat slid, it automatically shifted fake gearworks into place to fill a cabinet that was otherwise empty when the cabinet doors were closed.

When the audience was satisfied that the base of the table contained nothing but gears, the chess master would take his place on the right side and use those same brass gears to manipulate the manikin's arms and even his facial expressions. The chess master could see the board because each piece was magnetized, so the underside of the chessboard had pieces on it that indicated where the real chess pieces sat on the board above.

As a further diversionary move, von Kempelen would place a small wooden box, in the shape of a coffin, on top of the table, adjacent to the chessboard, when the game began, and would periodically look inside it, never showing the audience what it contained, but leading them to conclude that it held some key to the functioning of the robot. Not only would the robot defeat opponents and react to them (even tsk-tsking them if they tried to cheat), it could also perform a complex chess puzzle called the knight's tour, in which a player must move a knight so that it lands on every square of the board exactly once. To top it off, The Turk had a sort of Ouija board, through which it could speak to opponents and bystanders by spelling out its replies in German (though oddly not in Turkish).

In point of fact, The Turk was a hoax. Well, sort of. It was not a computer-programmed automaton, but rather a human-operated one. The trick was that a real (and preferably very small) human chess master was concealed inside the table component of the automaton and would engage the chess opponents by manipulating the movements of The Turk through a system of levers. At least six known chess masters operated The Turk at some point (including a Bavarian rabbi and the very first chess Grandmaster).

Von Kempelen was not happy about his invention's popularity, as word of it spread, books were written about it, and it was in demand across Europe. He tried to dismiss his creation as a mere bagatelle, and even once dismantled it to discourage invitations, while he plowed ahead on other projects. This was likely because of the logistical difficulties in procuring chess masters and the fear that showcasing it too often would lead to the unmasking of its workings. He reassembled it only on the direct command of Emperor Joseph II, and he subsequently sent it on a tour of Europe.

While The Turk lost to several leading chess masters, it won almost all of its games, including the besting of Benjamin Franklin while he was American ambassador in Paris. Philip Thicknesse, Thomas Gainsborough's dear friend, published a book on The Turk, trying to expose it as a hoax; he was almost right, in thinking that a small child was concealed inside it. After von Kempelen's death, The Turk passed through various hands and was eventually sent to the United States, where Edgar Allan Poe's personal doctor bought it.

The Turk is but one story among many of a high-profile automaton that captured the world's imagination. It is a sculpture, and therefore a work of art, but one that had the illusion of life breathed into it; thus it was a proto-robot. Most who saw it considered it an act of illusionism, not a real automaton but some trick of the inventor's, which was deemed pleasurable by its audience. The game was to figure out how it worked, knowing that it was not actually a man-built machine that could think and act on its own.

The Turk was, of course, the precursor to Deep Blue, the computer chess program that actually is programmed to think for itself, without the need for the showmanship of a mechanical manikin. Immersion in the liveliness of The Turk made it feel not like an artwork, not a metal statue, but something new: a magician's prop or clockwork mechanism. But of course it was both, art and artificial intelligence.

As is all AI, whether or not its creators feel the need to place it into a naturalistic shape, like a metal Ottoman. Today's AI inhabits the realm of minimalist or abstract art, with Amazon Echo as a sort of Brancusian monolith. There's even a new robot you can have sex with, meant not just as an object of lust-satisfaction, but also a companion. It's the ancient story of Pygmalion, the sculptor who falls in love with his work, Galatea, only for it to come to life. AI is art: man-made approximations of nature, whatever the look of their skin.

View original post here:

Are robots moving sculptures? On Art, illusion and artificial intelligence - Salon

Buy these seven shares to profit from driverless cars and artificial intelligence – Telegraph.co.uk

Autonomous vehicles

Fully autonomous cars are estimated to be just five years away, depending on both the technology and the development of a regulatory system. This will dramatically increase the market for the components required.

For now, much of the growth comes from advanced driver assistance systems, such as automatic braking or adaptive cruise control.

Market value: 19.5bn

Last year's pre-tax profit: 763m

This semiconductor firm was tipped by all of the technology fund managers we spoke to. It makes components used in systems such as emergency braking and battery management.

Hyunho Sohn, manager of the 2bn Fidelity Global Technology fund, said: "Infineon exemplifies a company poised to gain from the move to electric and autonomous cars. It has a market-leading position and, as the technology going into each vehicle increases, it should experience increases in revenue and margin."

Market value: 18.7bn

Last year's pre-tax profit: 1.9bn

"Delphi integrates different technologies into packages that meet the rigorous standards of the automotive industry," Mr Sohn explained.

He said: "The firm has strong relationships with the major car manufacturers, and is well positioned to profit from both the rapid proliferation in low-level systems, and the eventual roll-out of fully autonomous driving."

Read the rest here:

Buy these seven shares to profit from driverless cars and artificial intelligence - Telegraph.co.uk

Teaching AI Systems to Behave Themselves – New York Times

For years, Mr. Musk, along with other pundits, philosophers and technologists, has warned that machines could spin outside our control and somehow learn malicious behavior their designers didn't anticipate. At times, these warnings have seemed overblown, given that today's autonomous car systems can get tripped up by even the most basic tasks, like recognizing a bike lane or a red light.

But researchers like Mr. Amodei are trying to get ahead of the risks. In some ways, what these scientists are doing is a bit like a parent teaching a child right from wrong.

Many specialists in the A.I. field believe a technique called reinforcement learning, a way for machines to learn specific tasks through extreme trial and error, could be a primary path to artificial intelligence. Researchers specify a particular reward the machine should strive for, and as it navigates a task at random, the machine keeps close track of what brings the reward and what doesn't. When OpenAI trained its bot to play Coast Runners, the reward was more points.
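
The reward-driven trial-and-error loop that paragraph describes can be shown with a toy tabular Q-learning sketch. The five-cell "game" below is invented for illustration and is vastly simpler than Coast Runners, but the mechanics, random exploration, a reward signal, and a running estimate of which actions pay off, are the same in spirit.

```python
# Toy tabular Q-learning: the reward is +1 for reaching the rightmost cell,
# and the agent discovers this through trial and error. The environment is
# invented for illustration.
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(200):
    state = 0
    while state != GOAL:
        # Sometimes explore at random; otherwise exploit what has been learned.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy should be "step right" from every non-goal state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)})
```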

This video game training has real-world implications.

If a machine can learn to navigate a racing game like Grand Theft Auto, researchers believe, it can learn to drive a real car. If it can learn to use a web browser and other common software apps, it can learn to understand natural language and maybe even carry on a conversation. At places like Google and the University of California, Berkeley, robots have already used the technique to learn simple tasks like picking things up or opening a door.

All this is why Mr. Amodei and Mr. Christiano are working to build reinforcement learning algorithms that accept human guidance along the way. This can ensure systems don't stray from the task at hand.
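
One published form of such guidance, the human-preference approach behind the joint OpenAI/DeepMind research mentioned below, replaces a hand-coded reward with human comparisons of behavior samples. The toy sketch that follows captures only the gist; the clips, the simulated rater and the linear "reward model" are stand-ins for illustration, not the actual algorithm.

```python
# Toy version of reward learning from human preferences: a rater compares two
# behavior "clips" and the preferred one nudges a linear reward model up.
import random

def human_prefers(clip_a: list[str], clip_b: list[str]) -> list[str]:
    # Stand-in for a human rater who favors clips with more on-task steps.
    return clip_a if clip_a.count("on_task") >= clip_b.count("on_task") else clip_b

reward_weights = {"on_task": 0.0, "off_task": 0.0}

def update_reward_model(preferred: list[str], rejected: list[str], lr: float = 0.1) -> None:
    # Push the preferred clip's steps up and the rejected clip's steps down.
    for step in preferred:
        reward_weights[step] += lr
    for step in rejected:
        reward_weights[step] -= lr

for _ in range(100):
    clip_a = [random.choice(["on_task", "off_task"]) for _ in range(5)]
    clip_b = [random.choice(["on_task", "off_task"]) for _ in range(5)]
    winner = human_prefers(clip_a, clip_b)
    loser = clip_b if winner is clip_a else clip_a
    update_reward_model(winner, loser)

# "on_task" should end with the higher weight; an agent would then be trained
# against this learned reward instead of a hand-coded one.
print(reward_weights)
```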

Together with others at the London-based DeepMind, a lab owned by Google, the two OpenAI researchers recently published some of their research in this area. Spanning two of the world's top A.I. labs, and two that hadn't really worked together in the past, these algorithms are considered a notable step forward in A.I. safety research.

"This validates a lot of the previous thinking," said Dylan Hadfield-Menell, a researcher at the University of California, Berkeley. "These types of algorithms hold a lot of promise over the next five to 10 years."

The field is small, but it is growing. As OpenAI and DeepMind build teams dedicated to A.I. safety, so too is Google's stateside lab, Google Brain. Meanwhile, researchers at universities like U.C. Berkeley and Stanford University are working on similar problems, often in collaboration with the big corporate labs.

Read the rest here:

Teaching AI Systems to Behave Themselves - New York Times