Daily Archives: May 11, 2017

Cisco acquires conversational AI startup MindMeld for $125 million – TechCrunch

Posted: May 11, 2017 at 12:54 pm


TechCrunch: MindMeld, originally called Expect Labs, was launched on the stage of TechCrunch Disrupt SF 2012. At that time the startup wanted to build an iPad app that could listen in on your conversations and provide relevant contextual information. Since then ...

Related coverage:
Cisco taps into AI, collaboration with $125M MindMeld buy - Network World
Cisco To Acquire AI Startup MindMeld For $125 Million To Boost Spark UC Products - CRN
Cisco Adds AI in MindMeld Buy - No Jitter
Also covered by The Register, ZDNet, and CNET.

Read more:

Cisco acquires conversational AI startup MindMeld for $125 million - TechCrunch

Posted in Ai | Comments Off on Cisco acquires conversational AI startup MindMeld for $125 million – TechCrunch

Microsoft’s bid to bring AI to every developer is starting to make sense – Ars Technica

Posted: at 12:54 pm

SEATTLE - For the third year in a row, Microsoft is heavily promoting machine-learning services at its Build developer conference. Over those three years, some of the language used around the services has changed: the term "machine learning" seems to have fallen out of favor, replaced by the better-known "artificial intelligence," and Microsoft has added many more services. But the bigger change is that ubiquitous intelligence now seems a whole lot more feasible than it did three years ago.

Three years ago, the service selection was narrow: a language service that identified important elements in natural language, speech-to-text and text-to-speech, an image-recognition service, and a facial-recognition service. But outside of certain toy applications, such as Microsoft's age-guessing website, the services felt more than a little abstract. They felt disconnected from real-world applications.

Last year, the services took shape a little more. The bot bandwagon was just getting started, with Microsoft offering a framework for developers to build their own chatbots, and the right plumbing components had been published to hook those bots up to things like Skype and Teams. The appeal of the bots seemed perhaps limited, but other components on display, such as a training user interface to help refine the language-understanding service, looked more promising. They showed ways in which a developer who wasn't an expert in machine learning or artificial intelligence could not only build systems that used machine-learning components, but also tailor those components to the specific problem area the developer was interested in.

This year, the machine-learning story is improving once again. More services have been added, broadening what the platform can do. Some of these are similar to the old services; for example, there's an image-recognition service, "Custom Vision." The difference between this and the old vision service is that the new one is trainable. The old service has a corpus of objects that it understands, and if it sees them in a picture, it'll tell you. But if that corpus doesn't match the needs of your application, there's no way to add to it. The new service lets you upload small amounts of training data (about 20 representations of each object, typically) to generate a new image-recognition model. The model generation itself, however, is entirely handled by the service; developers don't need to understand how it works.
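To make the "trainable" idea concrete: a classifier can be generated from just a handful of labeled examples per class. The sketch below is an illustrative stand-in, not Microsoft's actual Custom Vision API; a nearest-centroid model over toy feature vectors plays the role of the image model the service would build for you.

```python
from math import dist  # Euclidean distance (Python 3.8+)

def train(examples):
    """Build a nearest-centroid model from {label: [feature vectors]}.

    Mirrors the trainable-service idea: a small number of labeled
    examples per class is enough to generate a usable model.
    """
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        centroids[label] = tuple(
            sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))
        )
    return centroids

def predict(centroids, vector):
    """Classify a feature vector by its nearest class centroid."""
    return min(centroids, key=lambda label: dist(centroids[label], vector))

# Toy "image features": two classes, a few labeled examples each.
model = train({
    "cat": [(0.9, 0.1), (0.8, 0.2), (0.95, 0.15)],
    "dog": [(0.1, 0.9), (0.2, 0.8), (0.15, 0.85)],
})
print(predict(model, (0.85, 0.2)))  # → cat
```

A real image service would first run the pictures through a deep network to extract features; the point here is only that model generation from a small labeled set can be fully automated.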

Microsoft also has what it calls "Cognitive Services Labs," where developers can create more experimental AI-like services. The first of these is a gesture-recognizing service.

As well as working to build more trainable services, Microsoft is also working to train its bots to recognize certain standard processes, such as specifying a date or taking payment information.

These various machine-learning components are starting to become versatile enough and useful enough that they can solve problems that couldn't be solved before. Last year, for example, Rolls-Royce developed a system that takes the buzzwords ("Internet of Things," "machine learning") and does something useful with them. Rolls-Royce makes jet engines used in commercial airliners, and its latest jet engines are Internet of Things jet engines: they collect tons of telemetry data about operating conditions and upload it to Azure. The telemetry data is then combined with plane-level information such as altitude and flight plan.

Rolls-Royce has used machine learning to build a model that takes all this data and estimates when engine components will fail. This, in turn, allows preventative maintenance to be performed; the system can estimate which components are near the end of their lifetime (even if that lifetime has been prematurely shortened, as would be the case for an engine on a plane used only for short flights). The system then advises that maintenance be performed to swap out the parts before they actually fail. This is even tied into inventory management, so the system can suggest making a replacement a little sooner than otherwise necessary if it knows that the plane is flying somewhere that doesn't have the right parts available.
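The decision logic described, estimating remaining life from wear and swapping parts early when spares are unavailable at the destination, can be sketched as a toy rule. The numbers, field names, and thresholds here are invented for illustration and have nothing to do with Rolls-Royce's actual model:

```python
def remaining_hours(component):
    """Estimate remaining life; wear_factor > 1 models accelerated wear,
    e.g. an engine on a plane used only for short flights."""
    return component["rated_life"] - component["hours_flown"] * component["wear_factor"]

def maintenance_advice(component, hours_to_destination, spares_at_destination,
                       threshold=100):
    """Advise a swap before failure, earlier if no parts await at the destination."""
    remaining = remaining_hours(component)
    if remaining <= threshold:
        return "replace now"
    if not spares_at_destination and remaining <= threshold + hours_to_destination:
        return "replace before departure"  # inventory-aware early swap
    return "ok"

pump = {"rated_life": 5000, "hours_flown": 4000, "wear_factor": 1.2}
print(maintenance_advice(pump, hours_to_destination=150,
                         spares_at_destination=False))  # → replace before departure
```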

Hand-in-hand with these intelligent services, Microsoft has promoted its bot framework. Many people have misgivings about the industry-wide focus on bots, finding it hard to envisage a world in which we routinely type or talk to computer programs. However, Microsoft says that the bots have been instrumental in letting people learn how to use the cognitive services, and the company has seen substantial growth in developer interest in bots, especially in business-to-consumer roles. Using text chat on the Web to talk to a low-level sales rep or tech-support person is a pretty common activity, for example, and some of this workload is a good match for bots with a suitable understanding of the problem domain.

Culture appears to play a significant role. We all remember Microsoft's neo-Nazi chatbot, Tay, but what's often forgotten is that Redmond had a different chatbot, XiaoIce, that spoke Chinese to Chinese users. That chatbot didn't have any of the problems that Tay did, and the Chinese market uses XiaoIce in a very different way; as well as using the bot's interactive or conversational features, Microsoft has found that people will just talk to it, unwinding from the day's stresses or using it as a sounding board of sorts.

Some of these differences are obvious when explained; for example, we were told that adoption of speech-to-text was much higher in China than in other countries because keyboard entry of Chinese text is much more awkward. Others were a little more surprising. Microsoft has found that even when the input modality is the same, audience demographics change the kind of language that's used with bots, and the things people ask the bots to do. While Facebook Messenger and Kik are both text chat, the older audience on Messenger uses bot services differently than the younger Kik crowd.

Even bot-averse users might find that they're more amenable to the concept in, for example, Teams or Slack. The conceptual shift from typing to your colleagues to typing to a bot feels much smaller.

But the cognitive services don't live or die on the success of bots anyway. We're already seeing hints of more subtle interfaces, such as Cortana reading your e-mails and figuring out if you have committed to any particular actions within them: she'll remind you to call people if you previously promised to do something by a given date. Doing this effectively requires natural-language parsing comparable to a chatbot's, but it transforms the intelligence from a system that must be explicitly interacted with into one that's altogether more transparent.
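A crude sketch of what commitment-spotting in e-mail might look like; the pattern below is a hypothetical keyword-level stand-in, nowhere near the parsing a production assistant would need:

```python
import re

# Hypothetical pattern: a first-person promise followed by a deadline phrase.
COMMITMENT = re.compile(
    r"\bI(?:'ll| will)\s+(?P<action>.+?)\s+(?P<deadline>by \w+|tomorrow|next week)",
    re.IGNORECASE,
)

def find_commitments(email_body):
    """Return (action, deadline) pairs the sender promised in an e-mail."""
    return [(m.group("action"), m.group("deadline"))
            for m in COMMITMENT.finditer(email_body)]

print(find_commitments("Thanks! I'll send the report by Friday."))
# → [('send the report', 'by Friday')]
```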

It's still early days for machine learning, and these capabilities are far from ubiquitous. The shift to "artificial intelligence" terminology is also unfortunate, as it sets users up for disappointment: these systems are still a long way short of rivaling Lt. Cmdr. Data or the Terminator, and those fictional characters arguably define the widespread perception and understanding of "artificial intelligence."

But the overall movement is positive. Over the last couple of years, Microsoft's cognitive services have gone from abstract and somewhat impenetrable to a useful set of tools that developers of all kinds can integrate into their apps, all without having to be experts in machine learning or artificial intelligence.

The rest is here:

Microsoft's bid to bring AI to every developer is starting to make sense - Ars Technica


Facebook’s AI Translates 9X Faster Than Rivals – Investopedia

Posted: at 12:54 pm


Investopedia: Facebook Inc. (FB) wants the world to use its social media networks and communicate with users around the globe, and to that end it announced it has developed an artificial intelligence (AI)-based tool that can translate languages nine times faster ...

Related coverage:
Facebook's New AI Could Lead to Translations That Actually Make Sense - WIRED
Facebook's new AI aims to destroy the language barrier - Engadget
Facebook Is Using AI To Make Language Translation Much Faster - Fast Company
Also covered by ABC News and Digital Trends.

Visit link:

Facebook's AI Translates 9X Faster Than Rivals - Investopedia


What Is Intelligence? 20 Years After Deep Blue, AI Still Can’t Think Like Humans – Live Science

Posted: at 12:54 pm

World Chess champion Garry Kasparov (left) ponders a chess move during the sixth and final game of his match with IBM's Deep Blue computer on May 11, 1997.

When the IBM computer Deep Blue beat the world's greatest chess player, Garry Kasparov, in the last game of a six-game match on May 11, 1997, the world was astonished. This was the first time any human chess champion had been taken down by a machine.

That win for artificial intelligence was historic, not only for proving that computers can outperform the greatest minds in certain challenges, but also for showing the limitations and shortcomings of these intelligent hunks of metal, experts say.

Deep Blue also highlighted that, if scientists are going to build intelligent machines that think, they have to decide what "intelligent" and "think" mean. [Super-Intelligent Machines: 7 Robotic Futures]

During the multigame match, which lasted days at the Equitable Center in Midtown Manhattan, Deep Blue beat Kasparov two games to one, with three games drawn. The machine approached chess by looking ahead many moves and working through possible combinations, a strategy known as searching a "decision tree" (think of each decision as describing a branch of a tree). Deep Blue "pruned" some of these decisions to reduce the number of "branches" and speed up the calculations, and was still able to "think" through some 200 million moves every second.
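The pruning idea can be demonstrated in miniature with alpha-beta search over a toy game tree (this sketch omits everything chess-specific, including Deep Blue's custom evaluation hardware):

```python
def alphabeta(node, alpha, beta, maximizing, visited):
    """Minimax with alpha-beta pruning over a tree of nested lists.

    Leaves are position scores; `visited` collects evaluated leaves,
    showing how pruning cuts the number of branches explored.
    """
    if isinstance(node, int):  # leaf: a scored position
        visited.append(node)
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, alpha, beta, not maximizing, visited)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:  # prune: the opponent will never allow this line
            break
    return best

tree = [[3, 5], [6, 9], [1, 2]]  # a tiny two-ply game; leaves are scores
visited = []
best = alphabeta(tree, float("-inf"), float("inf"), True, visited)
print(best, len(visited))  # → 6 5  (best score 6; only 5 of 6 leaves examined)
```

Deep Blue's real search evaluated hundreds of millions of positions per second, but the payoff of pruning is the same: whole subtrees are skipped once they provably cannot change the outcome.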

Despite those incredible computations, however, machines still fall short in other areas.

"Good as they are, [computers] are quite poor at other kinds of decision making," said Murray Campbell, a research scientist at IBM Research. "Some doubted that a computer would ever play as well as a top human.

"The more interesting thing we showed was that there's more than one way to look at a complex problem," Campbell told Live Science. "You can look at it the human way, using experience and intuition, or in a more computer-like way." Those methods complement each other, he said.

Although Deep Blue's win proved that humans could build a machine that's a great chess player, it underscored the complexity and difficulty of building a computer that could handle a board game. IBM scientists spent years constructing Deep Blue, and all it could do was play chess, Campbell said. Building a machine that can tackle different tasks, or that can learn how to do new ones, has proved more difficult, he added.

At the time Deep Blue was built, the field of machine learning hadn't progressed as far as it has now, and much of the computing power wasn't available yet, Campbell said. IBM's next intelligent machine, named Watson, for example, works very differently from Deep Blue, operating more like a search engine. Watson proved that it could understand and respond to humans by defeating longtime "Jeopardy!" champions in 2011.

Machine learning systems that have been developed in the past two decades also make use of huge amounts of data that simply didn't exist in 1997, when the internet was still in its infancy. And programming has advanced as well.

The artificially intelligent computer program called AlphaGo, for example, which beat the world's champion player of the board game Go, also works differently from Deep Blue. AlphaGo played many board games against itself and used those patterns to learn optimal strategies. The learning happened via neural networks, or programs that operate much like the neurons in a human brain. The hardware to make them wasn't practical in the 1990s, when Deep Blue was built, Campbell said.

Thomas Haigh, an associate professor at the University of Wisconsin-Milwaukee who has written extensively on the history of computing, said Deep Blue's hardware was a showcase for IBM's engineering at the time; the machine combined several custom-made chips with others that were higher-end versions of the PowerPC processors used in personal computers of the day. [History of A.I.: Artificial Intelligence (Infographic)]

Deep Blue also demonstrated that a computer's intelligence might not have much to do with human intelligence.

"[Deep Blue] is a departure from the classic AI symbolic tradition of trying to replicate the functioning of human intelligence and understanding by having a machine that can do general-purpose reasoning," Haigh said, hence the effort to make a better chess-playing machine.

But that strategy was based more on computer builders' idea of what was smart than on what intelligence actually might be. "Back in the 1950s, chess was seen as something that smart humans were good at," Haigh said. "As mathematicians and programmers tended to be particularly good at chess, they viewed it as a good test of whether a machine could show intelligence."

That changed by the 1970s. "It was clear that the techniques that were making computer programs into increasingly strong chess players did not have anything to do with general intelligence," Haigh said. "So instead of thinking that computers were smart because they play chess well, we decided that playing chess well wasn't a test of intelligence after all."

The changes in how scientists define intelligence also show the complexity of certain kinds of AI tasks, Campbell said. Deep Blue might have been one of the most advanced computers at the time, but it was built to play chess, and only that. Even now, computers struggle with "common sense," the kind of contextual information that humans generally don't think about because it's obvious.

"Everyone above a certain age knows how the world works," Campbell said. Machines don't. Computers have also struggled with certain kinds of pattern-recognition tasks that humans find easy, Campbell added. "Many of the advances in the last five years have been in perceptual problems," such as face and pattern recognition, he said.

Another thing Campbell noted computers can't do is explain themselves. A human can describe her thought processes, and how she learned something. Computers can't really do that yet. "AIs and machine learning systems are a bit of a black box," he said.

Haigh noted that even Watson, in its "Jeopardy!" win, did not "think" like a person. "[Watson] used later generations of processors to implement a statistical brute force approach (rather than a knowledge-based logic approach) to Jeopardy!," he wrote in an email to Live Science. "It again worked nothing like a human champion, but demonstrated that being a quiz champion also has nothing to do with intelligence," in the way most people think of it.

Even so, "as computers come to do more and more things better than us, we'll either be left with a very specific definition of intelligence or maybe have to admit that computers actually are intelligent, but in a different way from us," Haigh said.

Because humans and computers "think" so differently, it will be a long time before a computer makes a medical diagnosis, for example, all by itself, or handles a problem like designing residences for people as they age and want to remain in their homes, Campbell said. Deep Blue showed the capabilities of a computer geared to a certain task, but to date, nobody has made a generalized machine learning system that works as well as a purpose-built computer.

For example, computers can be very good at crunching lots of data and finding patterns that humans would miss. They can then make that information available to humans to make decisions. "A complementary system is better than a human or machine," Campbell said.

It's also probably time to tackle different problems, he said. Board games like chess or Go allow players to know everything about their opponent's position; this is called a complete-information game. Real-world problems are not like that. "A lesson we should have learned by now: There's not that much more that we can learn from board games." (In 2017, the artificially intelligent computer program called Libratus beat the best human poker players in a 20-day No-Limit Texas Hold 'em tournament, which is considered a game of incomplete information.)

As for Deep Blue's fate, the computer was dismantled after the historic match with Kasparov; components of it are on display at the National Museum of American History in Washington, D.C., and the Computer History Museum in Mountain View, California.

Original article on Live Science.

Continue reading here:

What Is Intelligence? 20 Years After Deep Blue, AI Still Can't Think Like Humans - Live Science


The next 5 years in AI will be frenetic, says Intel’s new AI chief – PCWorld

Posted: at 12:54 pm


Research into artificial intelligence is going gangbusters, and the frenetic pace won't let up for about five years, after which the industry will concentrate around a handful of core technologies and leaders, the head of Intel's new AI division predicts.

Intel is keen to be among them. In March, it formed an Artificial Intelligence Products Group headed by Naveen Rao. He previously was CEO of Nervana Systems, a deep-learning startup Intel acquired in 2016. Rao sees the industry moving at breakneck speed.

"It's incredible," he said. "You go three weeks without reading a paper and you're behind. It's just amazing."

It wasn't so long ago that artificial intelligence research was solely the domain of university research labs, but tech companies have stormed into the space in the last couple of years and sent technical hurdles tumbling.

"We've hit upon a set of fundamental principles, and now we can really get to that point where we can innovate and iterate quickly on them and build really new cool things," he said.

Rao likened it to the development of concrete. It took a while for humans to invent and perfect concrete, but once that happened, all sorts of things suddenly became possible.

"That's why I think the next five or six years are going to be really, really fast moving. It will stabilize at that point, after we figure out what the stack looks like and who the players are in the stack," he said.

Intel's new AI group represents its biggest step yet toward being one of those leaders. The group brings together all of the company's hardware and software research tied to machine learning, algorithms, and deep learning.

While Intel is best known as a chip maker, its AI research also includes software packages that help developers add AI capabilities to Intel-based hardware. By doing some of the software work, Intel aims to make it easier for its customers to build AI-based systems. That, in turn, will help it sell hardware.

The company does something similar in other areas of its business. One of the areas it's already focused on is self-driving cars. The vehicles use artificial intelligence to make split-second decisions about how to navigate roads and are a good example of a research area that's seen rapid progress.

A car used by Intel to test the company's autonomous driving technology, as seen on May 3, 2017, in San Jose, California.

A lot of Intels competition comes from the big tech companies of Silicon Valley. The U.S. is one of the biggest players in AI, thanks to companies like Google and Facebook, and Rao also credits Canada and the U.K. as pioneers. But China is beginning to make its presence felt.

"I was in China a few months ago. It's really taking off," he said. "The folks there are very hungry to build these kinds of things, and the skill sets are building up really quick, so I think in the next couple of years you'll start seeing China be a major player."

Martyn Williams covers general technology news for the IDG News Service and is based in San Francisco. He was previously based in Tokyo.

Continue reading here:

The next 5 years in AI will be frenetic, says Intel's new AI chief - PCWorld


Cray Announces New, AI-Focused Supercomputers – ExtremeTech

Posted: at 12:54 pm

Deep learning, self-driving cars, and AI are all huge topics these days, with companies like Nvidia, IBM, AMD, and Intel all throwing their hats into the ring. Now Cray, which helped pioneer the very concept of a supercomputer, is also bringing its own solutions to market.

Cray announced a pair of new systems: the Cray CS-Storm 500GT and the CS-Storm 500NX. Both are designed to work with Nvidia's Pascal-based Tesla GPUs, but they offer different feature sets and capabilities. The CS-Storm 500GT supports up to 8x 450W or 10x 400W accelerators, including Nvidia's Tesla P40 or P100 GPU accelerators. Add-in boards like Intel's Knights Landing and FPGAs built by Nallatech are also supported in this system, which uses PCI Express for its peripheral interconnect. The 500GT platform uses Intel's Skylake Xeon processors.

The Cray CS-Storm 500NX, by contrast, supports up to eight Tesla P100 GPUs and taps Nvidia's NVLink connector rather than PCI Express. Xeon Phi and Nallatech devices aren't listed as being compatible with this system architecture. Full specs on each are listed below:

The CS-Storm 500NX uses NVLink, which is why Cray can list it as supporting up to eight P100 SXM2 GPUs without needing eight PCIe 3.0 slots (just in case that was unclear).

"Customer demand for AI-capable infrastructure is growing quickly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer," said Fred Kohout, Cray's senior vice president of products and chief marketing officer. "The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly scalable and tuned infrastructure."

Nvidia's NVLink fabric can be used to attach GPUs without using PCI Express.

The surge in self-driving cars, AI, and deep learning technology could be a huge boon to companies like Cray, which once dominated the supercomputing industry. Cray went from an early leader in the space to a shadow of its former self after a string of acquisitions and unsuccessful products in the late 1990s and early 2000s. From 2004 forwards the company has enjoyed more success, with multiple high-profile design wins using AMD, Intel, and Nvidia hardware.

So far, Nvidia has emerged as the overall leader in HPC workload accelerators. Of the 86 systems on the TOP500 list that use an accelerator, 60 use Nvidia's Fermi, Kepler, or Pascal GPUs (Kepler is the clear winner, with 50 designs). The next-closest is Intel, which has 21 Xeon Phi wins.

AMD has made plans to enter these markets with deep-learning accelerators based on its Polaris and Vega architectures, but those chips haven't actually launched in-market yet. By all accounts, these are the killer growth markets for the industry as a whole, and they help explain why even some game developers like Blizzard want to get in on the AI craze. As compute resources shift toward Amazon, Microsoft, and other cloud service providers, the companies that can provide the hardware these workloads run on will be best positioned for the future. Smartphones and tablets didn't really work for Nvidia or Intel (making AMD's decision to stay out of those markets look very, very wise in retrospect), but both are positioned well to capitalize on these new dense-server trends. AMD is obviously playing catch-up on the CPU and GPU front, but Ryzen should deliver strong server performance when Naples launches later this quarter.

Read the original post:

Cray Announces New, AI-Focused Supercomputers - ExtremeTech


Three ways Artificial Intelligence will advance media businesses – Videonet

Posted: at 12:53 pm

As consumers get more comfortable with AI and understand its ability to improve their lives, companies are making investments to improve key supporting technology, such as translation, algorithms and discovery.

In IBB Consulting's work with media companies, we've identified three key areas where opportunity exists to integrate AI and machine learning to improve the customer experience, boost revenues, increase productivity, and more. From chatbots and content creation to new levels of personalization, the following areas are opportunities to integrate AI into everything from consumer-facing properties to internal functions.

AI-driven chatbots can already interact with people on the web or on mobile to help find information, answer questions, and sell services. We're early in the game when it comes to how they will be deployed, with the goal being to ultimately get to a place where communicating with a chatbot feels no different than chatting with a real person.

Most customer requests and issues are basic and can be handled by programming chatbots to understand questions and trigger words, then provide answers by querying specific data sources. From a customer-care perspective, chatbots and virtual assistants can be a cost-effective way to meet the needs of customers at any time of day, with no wait. If necessary, chatbots can be programmed to hand off to a human agent should the conversation become too complicated. Over time, this will become unnecessary as machine learning drives continuous improvements in the chatbot's understanding of how to address issues it once could not.
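A minimal version of that trigger-word pattern, with a human handoff for conversations the bot shouldn't handle (the intents and canned answers here are invented for illustration):

```python
# Invented trigger words mapped to canned answers; None means hand off.
INTENTS = {
    "bill": "Your latest bill is available under Account > Billing.",
    "outage": "No outages are reported in your area right now.",
    "cancel": None,  # too sensitive: route to a human agent
}

def respond(message):
    """Match trigger words in a customer message to a canned answer,
    handing off to a human agent when no safe answer exists."""
    words = {w.strip("?!.,") for w in message.lower().split()}
    for trigger, answer in INTENTS.items():
        if trigger in words:
            return answer or "Let me connect you with a human agent."
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond("Where can I see my bill?"))
# → Your latest bill is available under Account > Billing.
```

In a production system the canned answers would be replaced by queries against live data sources, and the unmatched-message branch is where the human (or, over time, a better model) takes over.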

Chatbots can also provide new customer experiences. For example, The Food Network built a Facebook chatbot that recommends meals to interested customers by searching its database of more than 60,000 recipes. HBO built a customized website for Westworld that included a chatbot named Aeden that interacted with visitors and helped them explore the park.

This is just the beginning of what's possible. Content owners can create destinations that feature chatbots modeled on favorite characters. Fans can chat about their favorite topics, ask questions about the show, or really just about anything. AI-driven chatbots will reduce OPEX as they become more effective, and they will create better engagement with consumers by giving them another way to interact with their favorite content and characters. The added layer of engagement will then drive more tune-in to popular content and create more impression opportunities for lesser-known content.

AI can also serve the role of content creator. It's not going to start writing TV scripts, but today AI is smart enough to pull data from multiple sources to generate financial reports, sports commentaries, and brief event summaries; it can aid in research or even basic content creation. However, while AI can report facts, it cannot add emotional responses or opinions. It also struggles to create detailed storylines. As machine learning continues to evolve, media companies can lean on the technology to produce other kinds of content, such as series and movie reviews that pull from, and aggregate into a cohesive story, what consumers are saying on social media.

AI can also support video production efforts. Fuisz Video is an interactive video and technology company that allows marketers to simultaneously shoot videos from multiple perspectives. Integrated AI then automatically produces a seamlessly edited clip that is ready for distribution or finishing touches. In an age when getting content online as fast as possible is a key to driving social sharing and large audiences, media companies need to capitalize on every advantage available to them.

AI can also be used to accomplish the opposite task. Instead of providing a single edit, AI can create a range of edit choices. Maybe different cuts need to feature different actors more prominently than others. Or different edits may be needed to fill different time requirements. This functionality will be critical as programmers aim to get hyper-surgical in targeting audiences across different social networks and platforms.

Today, when users interact with companies, they can sometimes be made to feel like just another number. They're asked to enter account information and PINs. The rep they speak with typically does not have time to comb through their history or profile before a conversation starts. This couldn't be more at odds with the kind of experience that customers value most.

Today, personalization is a major differentiator as media properties compete for customers. Viewers favor tailored recommendations, like Spotify's Discover Weekly. Netflix's recommendation algorithm has been estimated to save the company $1B annually by keeping users engaged and reducing churn.

AI can be used to customize service interfaces or web homepage experiences, hiding content that would not appeal to a certain customer, while putting a spotlight on precisely what will keep them watching, listening or clicking for long periods of time. AI can also be trained over time to offer recommendations that are personalized, but not too personal (aka creepy). IRIS.TV has already started curating video libraries for media companies to align with each viewers preferences and behaviors, while Vidora is helping news companies customize communication to reduce churn.

Media companies should consider building more robust recommendation engines that leverage deep learning to create more personalized experiences. Having the capability in-house, versus using a vendor solution, gives content providers more flexibility and the option to use the technology in other parts of the business. As the technology continues to learn, it will become an increasingly valuable asset for target marketing and other monetization opportunities.
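As a sketch of the underlying idea (using simple user-based collaborative filtering rather than deep learning, with invented viewers and titles), a recommendation engine scores the titles a user hasn't seen by similarity-weighted ratings from other users:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(ratings, user, k=1):
    """User-based collaborative filtering over {user: {title: rating}}:
    rank unseen titles by similarity-weighted ratings from other users."""
    titles = sorted({t for r in ratings.values() for t in r})
    vec = lambda u: [ratings[u].get(t, 0) for t in titles]
    scores = {}
    for title in set(titles) - ratings[user].keys():
        num = den = 0.0
        for other in ratings:
            if other == user or title not in ratings[other]:
                continue
            sim = cosine(vec(user), vec(other))
            num += sim * ratings[other][title]
            den += sim
        if den:
            scores[title] = num / den
    return sorted(scores, key=scores.get, reverse=True)[:k]

ratings = {
    "ann": {"Westworld": 5, "Stranger Things": 4},
    "bob": {"Westworld": 5, "Stranger Things": 4, "The Crown": 2},
    "cat": {"Stranger Things": 1, "The Crown": 5},
}
print(recommend(ratings, "ann"))  # → ['The Crown']
```

Production-scale engines layer deep models on top, but the shape is the same: user histories in, a ranked list of unseen titles out.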

Kicking off AI efforts in a smart way

AI's possibilities and support requirements can be overwhelming, but investments in machine learning and the application of AI across the media business are a critical step toward staying ahead of the competition. After all, digital-native businesses are building from the foundation with AI capabilities in mind.

Media companies should identify and prioritize AI business opportunities that either solve a current problem or transform the business in a visionary new direction. Developing rapid proofs of concept using proven vendors and technologies can return immediate insights. With so many AI companies emerging, potential acquisitions should also be evaluated.

We've only seen the beginning of what's possible with AI. While there is a long road ahead, moving forward with a strategy is the only way to address the challenges and opportunities media companies will face along the way.

See the article here:

Three ways Artificial Intelligence will advance media businesses - Videonet

Posted in Artificial Intelligence | Comments Off on Three ways Artificial Intelligence will advance media businesses – Videonet

A Minority Report on Artificial Intelligence – NewCo Shift

Posted: at 12:53 pm

Want to feel old? Steven Spielberg's Minority Report was released fifteen years ago.

It casts a long shadow. For a decade after the film's release, it was referenced at least once at every conference relating to human-computer interaction. Unsurprisingly, most of the focus has been on the technology in the film. The hardware and interfaces in Minority Report came out of a think tank assembled in pre-production. It provided plenty of fodder for technologists to mock and praise in subsequent years: gestural interfaces, autonomous cars, miniature drones, airpods, ubiquitous advertising and surveillance.

At the time of the film's release, a lot of the discussion centered on picking apart the plot. The discussions had the same tone as those about time-travel paradoxes, the kind thrown up by films like Looper and Interstellar. But Minority Report isn't a film about time travel, it's a film about prediction.

Or rather, the plot is about prediction. The film, like so many great works of cinema, is about seeing. It's packed with images of eyes, visions, fragments, and reflections.

The theme of prediction was rarely referenced by technologists in the subsequent years. After all, that aspect of the story, as opposed to the gadgets, gizmos, and interfaces, was one rooted in a fantastical conceit: the idea of people with precognitive abilities.

But if you replace that human element with machines, the central conceit starts to look all too plausible. It's suggested right there in the film:

To which the response is:

Suppose that Agatha, Arthur, and Dashiell weren't people in a flotation tank, but banks of servers packed with neural nets: the kinds of machines that are already making predictions on trading stocks and shares, traffic flows, mortgage applications, and, yes, crime.

Rewatching Minority Report now, it holds up very well indeed. Apart from the misstep of the final ten minutes, it's a fast-paced, twisty noir thriller. For all the attention to detail in its world-building and technology, the idea that may yet prove to be most prescient is the concept of Precrime, introduced in the original Philip K. Dick short story, "The Minority Report".

Minority Report works today as a commentary on Artificial Intelligence, which is ironic given that Spielberg directed a film one year earlier ostensibly about A.I. In truth, that film has little to say about technology, but much to say about humanity.

Like Minority Report, A.I. was very loosely based on an existing short story: "Super-Toys Last All Summer Long" by Brian Aldiss. It's a perfectly crafted short story that is deeply, almost unbearably, sad.

When I had the great privilege of interviewing Brian Aldiss, I tried to convey how much the story affected me.

At the time of its release, the general consensus was that A.I. was a mess. It's true. The film is a mess, but I think that, like Minority Report, it's worth revisiting.

Watching now, A.I. feels like a horror film to me. The horror comes not, as we first suspect, from the artificial intelligence. The horror comes from the humans. I don't mean the cruelty of the flesh fairs. I'm talking about the cruelty of Monica, who activates David's unconditional love only to reject it (watching now, both scenes, the activation and the rejection, are equally horrific). Then there's the cruelty of the people who created an artificial person capable of deep, never-ending love, without considering the implications.

There is no robot uprising in the film. The machines want only to fulfil their purpose. But by the end of the film, the human race is gone and the descendants of the machines remain. Based on the conduct of humanity that we're shown, it's hard to mourn our species' extinction. For a film that was panned for being overly sentimental, it is a thoroughly bleak assessment of what makes us human.

The question of what makes us human underpins A.I., Minority Report, and the short stories that spawned them. With distance, it gets easier to brush aside the technological trappings and see the bigger questions beneath. As Al Robertson writes, it's about leaving the future behind:

This was originally posted on my own site.

Link:

A Minority Report on Artificial Intelligence - NewCo Shift

Posted in Artificial Intelligence | Comments Off on A Minority Report on Artificial Intelligence – NewCo Shift

BNY Mellon advances artificial intelligence tech across operations – Reuters

Posted: at 12:53 pm

By Anna Irrera | NEW YORK

NEW YORK - The Bank of New York Mellon Corp (BK.N) has developed and deployed more than 220 automated computer programs, or "bots", across its businesses over the past 15 months, seeking more efficiency and lower costs as the adoption of artificial intelligence technology in banking increases.

The 233-year-old custodian bank says its new army of robotics, or software created to carry out often repetitive tasks that would normally be performed by humans, ranges from automated programs that respond to data requests from external auditors to systems that correct formatting and data mistakes in requests for dollar funds transfers.

BNY Mellon said 20 bots have been placed in production since the start of the year. The robotics push comes as the banking sector ramps up the use of artificial intelligence (AI) and automation to save money and time on cumbersome and manual processes, ranging from back office tasks to customer service.

The bank estimates that its funds transfer bots alone are saving it $300,000 annually, by cutting down the time its employees need to spend on identifying and dealing with data mistakes and accelerating payments processing.

Bots are becoming particularly popular in banking for customer service, as banks are hopeful they can replace their costly large call centers.

More than three quarters of 600 top bankers surveyed by consultancy Accenture Plc (ACN.N) for a recent report said they believe AI will be the primary way banks interact with customers within the next three years.

One of the objectives of BNY Mellon's bot strategy is to help the organization "get rid of the mind-numbing tasks" so that employees could focus on different activities, said Doug Shulman, senior executive vice president at BNY Mellon.

The bots also help provide a better client experience and reduce costs, Shulman added.

The bank said, for example, that bots replying to auditors' information requests on financial statements enabled it to cut its response time from six to 10 business days down to 24 hours. The process previously required humans to manually sift through large swathes of data stored across six different IT systems.

Another bot in production allows the bank to process trades that fail to be automatically processed on its custody platform because of issues such as incorrect information. The bot investigates the reason for the failure and corrects the issue, cutting down processing times by 30 percent, the bank said.
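Reuters does not describe how the bot works internally, but failed-trade repair bots of this kind typically follow a rule-based inspect-fix-escalate pattern. A minimal illustrative sketch of that pattern; the field names and rules below are assumptions for illustration, not BNY Mellon's actual system:

```python
# Hypothetical sketch of a rule-based repair bot for failed trade records.
# Field names and repair rules are illustrative, not BNY Mellon's.

REQUIRED_FIELDS = ("account", "security_id", "amount")

def repair_trade(trade: dict) -> tuple[dict, list[str]]:
    """Inspect a failed trade, apply known fixes, and log what was done."""
    fixed = dict(trade)
    log = []
    # Rule 1: normalize amount strings like "1,250.00" into numbers.
    if isinstance(fixed.get("amount"), str):
        fixed["amount"] = float(fixed["amount"].replace(",", ""))
        log.append("normalized amount format")
    # Rule 2: strip stray whitespace from identifiers.
    for field in ("account", "security_id"):
        value = fixed.get(field)
        if isinstance(value, str) and value != value.strip():
            fixed[field] = value.strip()
            log.append(f"trimmed whitespace in {field}")
    # Rule 3: anything still missing cannot be auto-repaired and must
    # be escalated to a human operator.
    missing = [f for f in REQUIRED_FIELDS if not fixed.get(f)]
    if missing:
        log.append(f"escalated: missing {missing}")
    return fixed, log

trade, actions = repair_trade(
    {"account": " ACC-42 ", "security_id": "US0378331005", "amount": "1,250.00"}
)
```

The value of the pattern is exactly what the article describes: each rule encodes a failure cause that humans previously diagnosed by hand, and anything the rules cannot fix is escalated rather than guessed at.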

But as AI advances, many banks face significant challenges with older systems as they have been around for decades and are not built for newer forms of computing.

(*This version of the story corrects spelling of last name to Shulman, not Schulman)

(Reporting by Anna Irrera; Editing by Bernard Orr)


Read more:

BNY Mellon advances artificial intelligence tech across operations - Reuters

Posted in Artificial Intelligence | Comments Off on BNY Mellon advances artificial intelligence tech across operations – Reuters

Job losses due to artificial intelligence and robotics 'could reach 40% by 2030' – Metro

Posted: at 12:53 pm


'We use the term augmented intelligence [rather than artificial intelligence],' Paul Ryan, head of Watson Artificial Intelligence, IBM UK, tells Metro.co.uk at the AI Summit. 'We are not pursuing general AI, we're not building systems that are trying ...

See the rest here:

Job losses due to artificial intelligence and robotics 'could reach 40% by 2030' - Metro

Posted in Artificial Intelligence | Comments Off on Job losses due to artificial intelligence and robotics 'could reach 40% by 2030' – Metro