How AI will revolutionize manufacturing – MIT Technology Review

Ask Stefan Jockusch what a factory might look like in 10 or 20 years, and the answer might leave you at a crossroads between fascination and bewilderment. Jockusch is vice president for strategy at Siemens Digital Industries Software, which develops applications that simulate the conception, design, and manufacture of products like cell phones or smart watches. His vision of a smart factory is abuzz with independent, moving robots. But they don't stop at making one or three or five things. No, this factory is self-organizing.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review's editorial staff.

"Depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product," Jockusch says. "It will self-organize itself to do something different."

Behind this factory of the future is artificial intelligence (AI), Jockusch says in this episode of Business Lab. But AI starts much, much smaller, with the chip. Take automaking. The chips that power the various applications in cars today, and the driverless vehicles of tomorrow, are embedded with AI, which supports real-time decision-making. They're highly specialized, built with specific tasks in mind. The people who design chips then need to see the big picture.

"You have to have an idea if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving. You have to have an idea of how many images that chip has to process or how many things are moving on those images," Jockusch says. "You have to understand a lot about what will happen in the end."

This complex way of building, delivering, and connecting products and systems is what Siemens describes as "chip to city": the idea that future population centers will be powered by the transmission of data. Factories and cities that monitor and manage themselves, Jockusch says, rely on continuous improvement: AI executes an action, learns from the results, and then tweaks its subsequent actions to achieve a better result. Today, most AI is helping humans make better decisions.

"We have one application where the program watches the user and tries to predict the command the user is going to use next," Jockusch says. "The longer the application can watch the user, the more accurate it will be."
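To make that concrete, the core of such a feature can be approximated with a simple frequency model over a user's command history. The sketch below is a minimal, editorially added illustration of the general idea, not Siemens' implementation; all names and commands are invented.

```python
from collections import Counter, defaultdict

class NextCommandPredictor:
    """Toy bigram model: predict a user's next command from the last one."""

    def __init__(self):
        # last_command -> counts of what the user did next
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, command):
        """Record a command the user just issued."""
        if self.last is not None:
            self.transitions[self.last][command] += 1
        self.last = command

    def predict(self):
        """Suggest the most likely next command, or None if unseen."""
        counts = self.transitions.get(self.last)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

# The longer the model watches, the better its suggestions, mirroring
# the continuous-learning behavior described above.
predictor = NextCommandPredictor()
for cmd in ["sketch", "extrude", "fillet", "sketch", "extrude", "chamfer", "sketch"]:
    predictor.observe(cmd)
print(predictor.predict())  # -> "extrude", the most common follower of "sketch"
```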

Applying AI to manufacturing can result in cost savings and big gains in efficiency. Jockusch gives an example from a Siemens factory that makes printed circuit boards, which are used in most electronic products. The milling machine used there has a tendency to goo up over time, to get dirty. The challenge is to determine when the machine has to be cleaned so it doesn't fail in the middle of a shift.

"We are using actually an AI application on an edge device that's sitting right in the factory to monitor that machine and make a fairly accurate prediction when it's time to do the maintenance," Jockusch says.
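What might such an edge-side check look like? A minimal sketch, assuming a single health signal (say, spindle current) that drifts upward as the machine gets dirty: flag maintenance when a rolling average moves too far from the clean-machine baseline. This illustrates the general pattern only, not the Siemens application; the signal and thresholds are invented.

```python
from collections import deque

class MaintenancePredictor:
    """Toy edge-device monitor: flag maintenance when a health signal drifts."""

    def __init__(self, baseline, window=50, tolerance=0.15):
        self.baseline = baseline        # reading expected from a clean machine
        self.readings = deque(maxlen=window)
        self.tolerance = tolerance      # allowed relative drift before flagging

    def update(self, reading):
        """Ingest one sensor reading; return True if maintenance is due."""
        self.readings.append(reading)
        if len(self.readings) < self.readings.maxlen:
            return False                # not enough data to judge yet
        avg = sum(self.readings) / len(self.readings)
        return abs(avg - self.baseline) / self.baseline > self.tolerance

# Simulated slow drift as the mill goos up over a shift.
monitor = MaintenancePredictor(baseline=10.0)
for t in range(200):
    if monitor.update(10.0 + 0.02 * t):
        print(f"maintenance recommended at step {t}")  # fires around t=100
        break
```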

The full impact of AI on business, and the full range of opportunities the technology can uncover, is still unknown.

"There's a lot of work happening to understand these implications better," Jockusch says. "We are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole."

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Siemens Digital Industries Software.

"Siemens helps Vietnamese car manufacturer produce first vehicles," Automation.com, September 6, 2019

"Chip to city: the future of mobility," by Stefan Jockusch, The International Society for Optics and Photonics Digital Library, September 26, 2019

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is artificial intelligence and physical applications. AI can run on a chip, on an edge device, in a car, in a factory, and ultimately, AI will run a city with real-time decision-making, thanks to fast processing, small devices, and continuous learning. Two words for you: smart factory.

My guest is Dr. Stefan Jockusch, who is vice president for strategy for Siemens Digital Industries Software. He is responsible for strategic business planning and market intelligence, and Stefan also coordinates projects across business segments and with Siemens Digital Leadership. This episode of Business Lab is produced in association with Siemens Digital Industries. Welcome, Stefan.

Stefan Jockusch: Hi. Thanks for having me.

Laurel: So, if we could start off a bit, could you tell us about Siemens Digital Industries? What exactly do you do?

Stefan: Yeah, in the Siemens Digital Industries, we are the technical software business. So we develop software that supports the whole process from the initial idea of a product like a new cell phone or smartwatch, to the design, and then the manufactured product. So that includes the mechanical design, the software that runs on it, and even the chips that power that device. So with our software, you can put all this into the digital world. And we like to talk about what you get out of that, as the digital twin. So you have a digital twin of everything, the behavior, the physics, the simulation, the software, and the chip. And you can of course use that digital twin to basically do any decision or try out how the product works, how it behaves, before you even have to build it. That's in a nutshell what we do.

Laurel: So, staying on that idea of the digital twin, how do we explain the idea of chip to city? How can manufacturers actually simulate a chip, its functions, and then the product, say, as a car, as well as the environment surrounding that car?

Stefan: Yeah. Behind that idea is really the thought that in the future, and today already, we have to build products enabling the people who work on that to see the whole, rather than just a little piece. So this is why we make it as big as to say from chip to city, which really means, when you design a chip that runs in a vehicle of today, and more so in the future, you have to take a lot of things into account while you are designing that chip. You have to have an idea: if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving, you have to have an idea how many images that chip has to process, or how many things are moving on those images, and obviously pedestrians, what recognition do you have to do. You have to understand a lot about what will happen in the end. So the idea is to enable a designer at the chip level to understand the actual behavior of a product.
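To make that kind of budgeting concrete, a designer might start from back-of-envelope arithmetic like the following; the camera count, frame rate, and resolution here are purely illustrative assumptions, not figures from Siemens.

```python
# Rough sensing load for a hypothetical autonomous-driving vision chip.
cameras = 8          # assumed cameras on the vehicle
fps = 30             # frames per second per camera
megapixels = 2.0     # assumed resolution per frame

frames_per_second = cameras * fps
gigapixels_per_second = frames_per_second * megapixels / 1000
print(f"{frames_per_second} frames/s, {gigapixels_per_second:.2f} gigapixels/s")
# -> 240 frames/s, 0.48 gigapixels/s to process in real time, every second
```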

And what's happening today, especially, is that we don't develop cars anymore just with a car in mind, we more and more are connecting vehicles to the environment, to each other. And one of the big purposes, as we all know, is of course to improve the contamination in cities and also the traffic in cities, so really to make these metropolitan areas more livable. So that's also something that we have to take into account in this whole process chain, if we want to see the whole as a designer. So this is the background of this whole idea, chip to city. And again, the way it should look for a designer, if you think about, I'm designing this vision module in a car, and I want to understand how powerful it has to be: I have a way to immerse myself into a simulation, a very accurate one, and I can see what data my vehicle will see, what's in them, how many sensor inputs I get from other sources, and what I have to do. I can really play through all of that.

Laurel: I really like that framing of being able to see the whole, not just the piece of this incredibly complex way of thinking, building, delivering. So to get back down to that piece level, how does AI play a role at the chip level?

Stefan: AI is a lot about supporting or even making the right decision in real time. And that's I think where AI and the chip level become so important together, because we all know that a lot of smart things can be done if you have a big computer sitting somewhere in a data center. But AI at the chip level is really very targeted at these applications that need real-time performance, and a performance that doesn't have time to communicate a lot. And today it's really evolving so that the chips that do AI applications are now designed in a very specialized way, whether they have to deliver a lot of compute power, or whether they have to conserve energy as best as they can, so very low power consumption, or whether they need more memory. So yeah, it's becoming a more and more commonplace thing that we see AI embedded in tiny little chips, and then probably in future cars, we will have a dozen or so semiconductor-level AI applications for different things.

Laurel: Well, that brings up a good point because it's the humans who are needing to make these decisions in real time with these tiny chips on devices. So how does the complexity of something like continuous learning with AI, not just help the AI become smarter but also affect the output of data, which then eventually, even though very quickly, allows the human to make better decisions in real time?

Stefan: I would say most applications of AI today are rather designed to help a human make a good decision rather than making the decision. I don't think we trust it quite that much yet. So as an example, in our own software, like so many makers of software, we are starting to use AI to make it easier and faster to use. So for example, you have these very complex design applications that can do a lot of things, and of course they have hundreds of menus. So we have one application where the program watches the user and tries to predict the command the user is going to use next. So just to offer it and just say, "Aren't you about to do this?" And of course, you talked about the continuous improvement, continuous learning: the longer the application can watch the user, the more accurate it will be.

It's currently already at a level of over 95%, but of course continuous learning improves it. And by the way, this is also a way to use AI not just to help a single user but to start encoding a knowledge, an experience, a varied experience of good users and make it available to other users. If a very experienced engineer does that and uses AI and you basically take those learned lessons from that engineer and give it to someone less experienced who has to do a similar thing, that experience will help the new user as well, the novice user.

Laurel: That's really compelling because you're right, you're building a knowledge database, an actual database of data. And then also this all helps the AI eventually, but then also really does help the human, because you are trying to extend this knowledge to as many people as possible. Now, when we think about that and AI at the edge, how does this change opportunities for the business, whether you're a manufacturer or the person using the device?

Stefan: Yeah. And in general, of course, it's a way for everyone who makes a smart product to differentiate, to create differentiation, because all these functions enabled by AI of course are smart, and they give some differentiation. But the example I just mentioned, where you can predict what a user will do, that of course is something that many pieces of software don't have yet. So it's a way to differentiate. And it certainly opens lots of opportunities to create these very highly differentiated pieces of functionality, whether it's in software or in vehicles, or in any other area.

Laurel: So if we were actually to apply this perhaps to a smart factory and how people think of a manufacturing chain, first this happens, and then that happens and a car door is put on and then an engine is put in or whatever. What can we apply to that kind of traditional way of thinking of a factory and then apply this AI thinking to it?

Stefan: Well, we can start with the oldest problem a factory has had. I mean, factories have always been about producing something very efficiently and continuously and leveraging the resources. So any factory tries to be up and running whenever it's supposed to be up and running, have no unpredicted or unplanned downtime. So AI is starting to become a great tool to do this. And I can give you a very hands-on example from a Siemens factory that does printed circuit boards. And one of the steps they have to do is milling of these circuit boards. They have a milling machine and any milling machine, especially one like that that's highly automated and robotic, it has a tendency to goo up over time, to get dirty. And so one challenge is to have the right maintenance because you don't want the machine to fail right in the middle of a shift and create this unplanned downtime.

So one big challenge is to figure out when this machine has to be maintained, without, of course, maintaining it every day, which would be very expensive. So we are using actually an AI application on an edge device that's sitting right in the factory, to monitor that machine and make a fairly accurate prediction when it's time to do the maintenance and clean the machine so it doesn't fail in the next shift. So this is just one example, and I believe there are hundreds of potential applications that may not be totally worked out yet in this area of really making sure that factories produce consistent high quality, that there's no unplanned downtime of the machines. There's, of course, a lot of use already of AI in visual quality inspections. So there's tons and tons of applications on the factory floor.

Laurel: And this has massive implications for manufacturers, because as you mentioned, it saves money, right? So is this a tough shift, do you think, for executives to think about investing in technology in a bit of a different way to then get all of those benefits?

Stefan: Yeah. It's like with every technology, I wouldn't think it's a big block, there's a lot of interest at this point and there are many manufacturers with initiatives in that space. So I would say it's probably going to create significant progress in productivity, but of course, it also means investment. And I would say it's fairly predictable to see what the payback of this investment will be. As far as we can see, there's a lot of positive energy there, to make this investment and to modernize factories.

Laurel: What kind of modernization do you need for the workforce in the factories when you are installing and applying, kind of retooling to have AI applications in mind?

Stefan: That's a great question, because sometimes I would say many users of artificial intelligence applications probably don't even know they're using one. So you basically get a box, and it will tell you, it is recommended to maintain this machine now. The operator probably will know what to do, but not necessarily know what technology they're working with. But that said, of course, there will probably be some, I would say, almost emerging specialties or emerging skills for engineers around how to use and how to optimize these AI applications that they use on the factory floor. Because as I said, we have these applications that are up and running and working today, but to get those applications to be really useful, to be accurate enough, that of course, to this point needs a lot of expertise, and at least some iteration as well. And there's probably not too many people today who really are experienced enough with the technologies and also understand the factory environment well enough to do this.

I think this is a fairly rare skill these days, and to make this a more commonplace application, of course, we will have to create more of these experts who are really good at making AI factory-floor-ready and getting it to the right maturity.

Laurel: That seems to be an excellent opportunity, right? For people to learn new skills. This is not an example of AI taking away jobs and the more negative connotations that you get when you talk about AI and business. In practice, if we combine all of this and talk about VinFast, the Vietnamese car manufacturer that wanted to do things quite a bit differently than traditional car manufacturing: first, they built a factory, but then they applied that kind of overarching thinking of chip to factory and then eventually to city. So coming back full circle, why is this thinking unique, especially for a car manufacturer, and what kind of opportunities and challenges do they have?

Stefan: Yeah. VinFast is an interesting example because when they got into making vehicles, they basically started on a greenfield. And that is probably the biggest difference between VinFast and the vast majority of the major automakers: all of them are a hundred or more years old and have of course a lot of history, which then translates into having existing factories or having a lot of things that were really built before the age of digitalization. So VinFast started from a greenfield, and that of course is a big challenge, it makes it very difficult. But the advantage was that they really had the opportunity to start off with a fully digitalized approach, that they were able to use software. Because they were basically constructing everything, they could really start off with this fairly complete digital twin of not only their product; they also designed the whole factory on a computer before even starting to build it. And then they built it in record time.

So that's probably the big, unique aspect, that they had this opportunity to be completely digital. And once you are at that state, once you can already say, my whole design, of course my software running on the vehicle, but also my whole factory, my whole factory automation, I already have this in a fully digital way and I can run through simulations and scenarios, that also means you have a great starting point to use these AI technologies to optimize your factory or to help the workers with the additional optimizations and so on.

Laurel: Do you think it's impossible to be one of those hundred-year-old manufacturers and slowly adopt these kinds of technologies? You probably don't have to have a greenfield environment, it just makes everything easy or I should say easier, right?

Stefan: Yeah. All of them, I mean, the auto industry has traditionally been one of the ones that invested most in productivity and in digitalization. So all of them are on that path. Again, they don't have, or rarely have, this very unique situation that you can really start from a blank slate. But a lot of the software technology, of course, is also adapted to that scenario. Where, for example, you have an existing factory, it doesn't help you a lot to design a factory on the computer if you already have one. So you use these technologies that allow you to go through the factory and do a 3D scan. So you know exactly what the factory looks like from the inside without having it designed in a computer, because you essentially produce that information after the fact. So that's definitely what the established or the traditional automakers do a lot, and where they're also basically bringing the digitalization even into the existing environment.

Laurel: We're really discussing the implications when companies can use simulations and scenarios to apply AI. So when you can, whether or not it's greenfield or you're adopting it for your own factory, what happens to the business? What are the outcomes? Where are some of the opportunities that are possible when AI can be applied to the actual chip, to the car, and then eventually to the city, to a larger ecosystem?

Stefan: Yeah. When we really think about the impact to the business, I frankly think we are at the beginning of understanding and calculating what the value of the faster and more accurate decisions enabled by AI really is. I don't think we have a very complete understanding at this point, but it's fairly obvious to everybody that digitalizing the design process and the manufacturing process not only saves R&D effort and R&D money, but also helps optimize the supply chain inventories, the manufacturing costs, and the total cost of the new product. And that is really where different aspects of the business come together. And I would frankly say, we start to understand the immediate effects: we start to understand that if I have an AI-driven quality check, it will reduce my waste, so I can understand that kind of business value.

But there is a whole dimension of business value of using this optimization that really translates to the whole enterprise. And I would say there's a lot of work happening to understand these implications better. But I would say at this point, we are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole.

Laurel: So optimization, continuous learning, continuous improvement, this makes me think of, and cars, of course, The Toyota Way, which is that seminal book that was written in 2003, which is amazing, because it's still current today. But with lean manufacturing, is it possible for AI to continuously improve that at the chip level, at the factory level, at the city to help these businesses make better decisions?

Stefan: Yeah. In my view, The Toyota Way, again, the book published in the early 2000s, with continuous improvement... in my view, continuous improvement of course always can do a lot, but there's a little bit of recognition in the last, I would say, five to 10 years, somewhere like that, that continuous improvement might have hit the wall of what's possible. So there is a lot of thought since then about what is really the next paradigm for manufacturing, when you stop thinking about evolution and optimization and you think more about revolution. And one of the concepts that has been developed here is called Industry 4.0, which is really the thought about turning upside down the idea of how manufacturing or how the value chain can work. And really think about, what if I get to factories that are completely self-organizing? Which is kind of a revolutionary step. Because today, mostly a factory is set up around a certain idea of what products it makes, and you have lines and conveyors and stuff like that, and they're all bolted to the floor. So it's fairly static, the original idea of a factory. And you can optimize it in an evolutionary way for a long time, but you'd never break through that threshold.

So the newest thought or the other concepts that are being thought about are, what if my factory consists of independent, moving robots, and the robots can do different tasks. They can transport material, or they can then switch over to holding a robot arm or a gripper. And depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product and it will self-organize itself to do something different. So those are some of the paradigms that are being thought of today, which of course, can only become a reality with heavy use of AI technologies in them. And we think they are really going to revolutionize at least what some kinds of manufacturing will do. Today we talk a lot about lot size one, and that customers want more options and variations in a product. So the factories that are able to do this, to really produce very customized products, very efficiently, they have to look much different.

So in many ways, I think there's a lot of validity to the approach of continuous improvement. But I think we right now live in a time where we think more about a revolution of the manufacturing paradigm.
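As an editorial aside, the self-organizing behavior described here can be pictured as a capability-matching problem: when a new product arrives, mobile robots take on whatever roles its task list requires. The sketch below is a deliberately simplified, hypothetical illustration of that idea, not Siemens technology; all task and robot names are invented.

```python
def self_organize(tasks, robots):
    """Greedily assign mobile robots to a new product's task list.

    tasks:  list of (task_name, required_capability)
    robots: dict of robot_name -> set of capabilities
    Returns a task -> robot plan, reshuffled for every new product.
    """
    free = dict(robots)
    plan = {}
    for task, need in tasks:
        for name, caps in list(free.items()):
            if need in caps:
                plan[task] = name
                del free[name]        # each robot takes one role per product
                break
        else:
            plan[task] = None         # no capable robot free: a gap to flag
    return plan

robots = {
    "bot1": {"transport", "gripper"},
    "bot2": {"gripper", "welding"},
    "bot3": {"transport"},
}

# The same fleet organizes itself differently for two different products.
print(self_organize([("move stock", "transport"), ("weld frame", "welding")], robots))
print(self_organize([("pick part", "gripper"), ("place part", "gripper")], robots))
```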

Laurel: That's amazing. The next paradigm is revolution. Stefan, thank you so much for joining us today in what has been an absolutely fantastic conversation on the Business Lab.

Stefan: Absolutely. My pleasure. Thank you.

Laurel: That was Stefan Jockusch, vice president of strategy for Siemens Digital Industries Software, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events online and around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.


AI is for the Birds in a New Computer Science Project | Newsroom – UC Merced University News

The Soundscapes to Landscapes (S2L): Monitoring Animal Biodiversity from Space Using Citizen Scientists program is supported by $1.1 million over three years through NASA's Citizen Science for Earth Systems program. It uses citizen scientists to deploy AudioMoth acoustic recorders. Other birders knowledgeable in bird calls will annotate a subset of the recordings, which serve as the training data for the AI models.

This summer, Newsam also received a $90,000, one-year AI for Earth Innovation grant from Global Wildlife Conservation in partnership with Microsoft. The nonprofit relies on research to work with local communities to address the root causes of threats to wildlife.

Newsam's is one of only five projects funded out of 135 applications. The grant supports AI projects that can scale quickly. The research will benefit many other projects because it is open source.

For Newsam, there are many questions about processing the data, and many technical challenges. The recordings have biophony, geophony and anthrophony noise, and the bird calls are often faint. Some species have different calls for different communications: warning calls, mating calls and others. Which one should the AI focus on?

"Birds often modify their calls by changing frequency, for example, if other birds are also calling," Newsam said. "I am learning a lot about bird calls."

Baligar hears the calls as something more than just bird communication.

"I like to think of birds as musical instruments," he said. "All the violins are orange-crowned warblers, but no two violins are the same. A bird song plays different notes, and every bird likes to play a song differently every time."

Each AudioMoth gathers about 2,000 minutes of data per site. So far, the team has more than 500,000 minute-long recordings, more than 8,000 hours of data, from over 600 locations, and terabytes of data to manage.

However, training the AI model requires a lot of annotated data.

"Deep learning is data hungry," Baligar said. "The more data the better. On average, we have just 650 training clips per bird species, which is not a lot."
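For readers curious what training on such clips involves, a common pattern is to turn each recording into a mel-spectrogram and classify it with a small convolutional network. The sketch below is a generic illustration of that approach under assumed shapes and hyperparameters, not the S2L team's code.

```python
import torch
import torch.nn as nn
import torchaudio

# Turn a one-minute clip (16 kHz mono) into a mel-spectrogram "image".
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

class BirdCallClassifier(nn.Module):
    """Small CNN over mel-spectrograms; one output score per bird species."""

    def __init__(self, num_species):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_species),
        )

    def forward(self, waveform):
        spec = mel(waveform).unsqueeze(1)   # (batch, 1, mels, time)
        return self.net(spec)

model = BirdCallClassifier(num_species=10)
clips = torch.randn(4, 16000 * 60)          # a batch of 4 one-minute clips
labels = torch.randint(0, 10, (4,))         # annotated species labels
loss = nn.CrossEntropyLoss()(model(clips), labels)
loss.backward()
# With only ~650 clips per species, augmentation (noise, time shifts)
# and pretrained backbones usually matter as much as the architecture.
```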

Newsam, who co-founded the Spatial Analysis Research Center (SpARC) at UC Merced, is an expert in image analysis and understanding.

"Image and audio are sensorily very different, but in the end, it is just data, data that we are turning into information through several processes," he said.

Baligar did not set out to study sound or bird calls when he was a master's student. He was more interested in time-series questions. Now, audio over time is the focus of his dissertation, and potentially the basis for a company he hopes to launch after graduation.

Computer science and environmental science are two of UC Merced's growing number of strengths, said Professor Josh Viers, director of the Center for Information Technology Research in the Interest of Society at UC Merced.

"Professor Newsam's research is indicative of the progress UC Merced has made in attracting top talent and solving important global problems," Viers said. "Shawn is a leader in developing computer science tools that interpret and integrate massive amounts of information, from Earth imagery to sound recordings, and his research is pushing the envelope on innovation in sustainability and technology. It is really exciting to see this example of artificial intelligence used to benefit wildlife conservation efforts."

Future work for the team includes trying to identify individual birds and to track them over their range.

"If we can overcome some of the modeling challenges," Newsam said, "we might be able to replace satellites with much more fine-scaled information about all kinds of wildlife."


This AI Generates Photos Using Only Text Captions as a Guide – PetaPixel

Researchers at the Allen Institute for Artificial Intelligence (AI2) have created a machine learning algorithm that can produce images using only text captions as its guide. The results are somewhat terrifying, but if you can look past the nightmare fuel, this creation represents an important step forward in the study of AI and imaging.

Unlike some of the genuinely mind-blowing machine learning algorithms we've shared in the past (see here, here, and here), this creation is more of a proof-of-concept experiment. The idea was to take a well-established computer vision model that can caption photos based on what it sees in the image, and reverse it: producing an AI that can generate images from captions, instead of the other way around.

This is a fascinating area of study and, as MIT Technology Review points out, it shows in real terms how limited these computer vision algorithms really are. Even a small child can do both of these things readily: describe an image in words, or conjure a mental picture of an image based on those words. But when the Allen Institute researchers tried to generate a photo from a text caption using a model called LXMERT, it generated nonsense in return.

So they set out to modify LXMERT and created X-LXMERT. And while the results that X-LXMERT generates given a text caption aren't exactly coherent, they're not nonsense either: the general idea is usually there. Here are some example images created by the researchers using various captions:

And here are a few examples we generated by plugging various captions into a live demo they created using their model:

The above are all based on captions provided by the researchers, and most of them seem to at least contain the major concepts in each description. However, when we tried to create totally new captions based on more esoteric concepts like "photographer," "photography studio," or even the word "camera," the results fell apart:

While the results from and limitations of X-LXMERT probably don't inspire either awe or the fear of the impending AI revolution, the groundbreaking masking technique that the researchers developed is an important first step in teaching an AI to fill in the blanks that any text description inherently leaves out.
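To give a flavor of what "filling in the blanks" means mechanically, here is a toy transformer that learns to predict masked image patches conditioned on caption tokens. It is a heavily simplified, editorially added sketch of the general masked-prediction idea, not AI2's X-LXMERT; every dimension and name is invented.

```python
import torch
import torch.nn as nn

class ToyMaskedImageModel(nn.Module):
    """Predict hidden image patches from caption tokens plus visible patches."""

    def __init__(self, vocab=1000, dim=64, patch_dim=48):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)         # caption token embeddings
        self.patch_in = nn.Linear(patch_dim, dim)   # project patches to model dim
        self.mask = nn.Parameter(torch.zeros(dim))  # learned [MASK] vector
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.patch_out = nn.Linear(dim, patch_dim)  # reconstruct patch pixels

    def forward(self, caption_ids, patches, masked):
        p = self.patch_in(patches)
        # Swap hidden patches for the learned mask vector. (A real model
        # would also add positional information; omitted in this toy.)
        p = torch.where(masked.unsqueeze(-1), self.mask.expand_as(p), p)
        x = torch.cat([self.tok(caption_ids), p], dim=1)
        h = self.encoder(x)[:, caption_ids.shape[1]:]   # keep patch positions
        return self.patch_out(h)

model = ToyMaskedImageModel()
caption = torch.randint(0, 1000, (2, 8))    # e.g., "a red boat ..." as token ids
patches = torch.randn(2, 16, 48)            # 16 flattened image patches per image
masked = torch.rand(2, 16) < 0.5            # hide about half the patches
pred = model(caption, patches, masked)
loss = ((pred - patches)[masked] ** 2).mean()   # reconstruct only hidden patches
loss.backward()
```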

This will eventually lead to better image recognition and computer vision, which can only help improve tasks that actually matter to the readers of this site. In other words: the better a computer is at understanding what you mean when you describe an image or image editing task, the more complex the tasks it will be able to perform on that image.

To learn more about this creation or see some more creepy AI-generated images, read the full research paper here or check out an interactive live demo of the model at this link.

(via DPReview)


VMware and Nvidia make the power of AI accessible to every enterprise – SiliconANGLE News

What do you get if you mix Nvidia Corp.'s artificial intelligence smarts with VMware Inc.'s virtualization and cloud expertise? Attendees at VMworld 2020 virtual found out when the partners announced the release of a jointly engineered solution that promises to bring AI to every enterprise.

"This is a great moment in time where AI has finally come to life, because the hardware and software has come together to make it possible," said Manuvir Das (pictured, right), head of enterprise computing at Nvidia Corp.

Das and Krish Prasad (pictured, left), senior vice president and general manager of the Cloud Platform Business Unit at VMware Inc., joined John Furrier, host of theCUBE, SiliconANGLE Media's livestreaming studio, during VMworld. They discussed the partnership between Nvidia and VMware, as well as the democratization of AI. (* Disclosure below.)

Nvidia is more usually associated with graphics processing units than AI, but the company recently closed a deal to buy Arm Holdings Inc. in a move it described as creating the world's premier computing company for the age of AI.

VMware has been on a transformation journey of its own, morphing from offering a platform for running virtual machines into a hybrid cloud management platform that can run either Kubernetes or VM workloads on-premises or in the cloud, or clouds. "This vastly simplifies the operational complexity that our customers have to deal with," Prasad said. The next chapter in VMware's journey is doing the same thing for AI workloads, he added.

"There is some real deep computer science here between the engineers at VMware and Nvidia," said Das, describing how the technology works. He suggested imagining the process as a three-layer stack: the foundation is the hardware to run the algorithms, which Nvidia has with its GPUs. "On top is the AI-enabled software stack with all the right algorithmics that take advantage of that hardware," Das stated. "This is actually where Nvidia spends most of its effort today."

Providing the middle layer that marries the software and hardware is the VMware platform. Wire these three components together with the right algorithms and you get real acceleration, according to Das.

Early use-case examples come from the healthcare field, where cancer detection has been increased exponentially through the application of AI. "The workload is running 30 times faster than it was running before this integration," Das stated.

Prasad concluded: "We think that this is going to vastly accelerate the adoption of AI and essentially democratize AI in the enterprise."

Watch the complete video interview below, and be sure to check out more of SiliconANGLE's and theCUBE's coverage of VMworld. (* Disclosure: VMware Inc. sponsored this segment of theCUBE. Neither VMware nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)



9 Soft Skills Every Employee Will Need In The Age Of Artificial Intelligence (AI) – Forbes

Technical skills and data literacy are obviously important in this age of AI, big data, and automation. But that doesn't mean we should ignore the human side of work: skills in areas that robots can't do so well. I believe these softer skills will become even more critical for success as the nature of work evolves, and as machines take on more of the easily automated aspects of work. In other words, the work of humans is going to become altogether more, well, human.


With this in mind, what skills should employees be looking to cultivate going forward? Here are nine soft skills that I think are going to become even more precious to employers in the future.

1. Creativity

Robots and machines can do many things, but they struggle to compete with humans when it comes to our ability to create, imagine, invent, and dream. With all the new technology coming our way, the workplaces of the future will require new ways of thinking, making creative thinking and human creativity an important asset.

2. Analytical (critical) thinking

As well as creative thinking, the ability to think analytically will be all the more precious, particularly as we navigate the changing nature of the workplace and the changing division of labor between humans and machines. That's because people with critical thinking skills can come up with innovative ideas, solve complex problems, and weigh up the pros and cons of various solutions, all using logic and reasoning rather than relying on gut instinct or emotion.

3. Emotional intelligence

Also known as EQ (as in, emotional IQ), emotional intelligence describes a person's ability to be aware of, control, and express their own emotions, and to be aware of the emotions of others. So when we talk about someone who shows empathy and works well with others, we're describing someone with a high EQ. Given that machines can't easily replicate humans' ability to connect with other humans, it makes sense that those with high EQs will be in even greater demand in the workplace.

4. Interpersonal communication skills

Related to EQ, the ability to successfully exchange information between people will be a vital skill, meaning employees must hone their ability to communicate effectively with other people using the right tone of voice and body language in order to deliver their message clearly.

5. Active learning with a growth mindset

Someone with a growth mindset understands that their abilities can be developed and that building skills leads to higher achievement. They're willing to take on new challenges, learn from their mistakes, and actively seek to expand their knowledge. Such people will be much in demand in the workplace of the future because, thanks to AI and other rapidly advancing technologies, skills will become outdated even faster than they do today.

6. Judgement and decision making

We already know that computers are capable of processing information better than the human brain, but ultimately, it's humans who are responsible for making the business-critical decisions in an organization. It's humans who have to take into account the implications of their decisions in terms of the business and the people who work in it. Decision-making skills will, therefore, remain important. But there's no doubt that the nature of human decision making will evolve: specifically, technology will take care of more menial and mundane decisions, leaving humans to focus on higher-level, more complex decisions.

7. Leadership skills

The workplaces of the future will look quite different from today's hierarchical organizations. Project-based teams, remote teams, and fluid organizational structures will probably become more commonplace. But that won't diminish the importance of good leadership. Even within project teams, individuals will still need to take on leadership roles to tackle issues and develop solutions, so common leadership traits, like being inspiring and helping others become the best versions of themselves, will remain critical.

8. Diversity and cultural intelligence

Workplaces are becoming more diverse and open, so employees will need to be able to respect, understand, and adapt to others who might have different ways of perceiving the world. This will obviously improve how people interact within the company, but I think it will also make the business's services and products more inclusive, too.

9. Embracing change

Even for me, the pace of change right now is startling, particularly when it comes to AI. This means people will have to be agile and cultivate the ability to embrace and even celebrate change. Employees will need to be flexible and adapt to shifting workplaces, expectations, and required skillsets. And, crucially, they'll need to see change not as a burden but as an opportunity to grow.

Bottom line: we needn't be intimidated by AI. The human brain is incredible. It's far more complex and more powerful than any AI in existence. So rather than fearing AI and automation and the changes this will bring to workplaces, we should all be looking to harness our unique human capabilities and cultivate these softer skills, skills that will become all the more important for the future of work.

AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.


What investment trends reveal about the global AI landscape – Brookings Institution

"We aren't what we were in the '50s and '60s and '70s," former Secretary of Defense Ash Carter recently reflected. "In those days, all technology of consequence for protecting our people, and all technology of any consequence at all, came from the United States and came from within the walls of government. Those days are irrevocably lost." "To get that technology now, I've got to go outside the Pentagon no matter what," Carter added.

The former Pentagon chief may be overstating the case, but when it comes to artificial intelligence, there's no doubt that the private sector is in command. Around the world, nations and their governments rely on private companies to build their AI software, furnish their AI talent, and produce the AI advances that underpin economic and military competitiveness. The United States is no exception.

With Big Tech's titans and endless machine-learning startups racing ahead on AI, it's easy to imagine that the public sector has little to contribute. But the federal government's choices on R&D policy, immigration, antitrust, and government contracting could spell the difference between growth and stagnation for America's AI industry in the coming years. Meanwhile, as AI booms in other countries, diplomacy and trade policy can help the United States and its private sector take greatest advantage of advances abroad, and protective measures against industrial espionage and unfair competition can help keep America ahead of its adversaries.

Smart policy starts with situational awareness. To achieve the outcomes they intend and avoid unwanted distortions and side effects in the market, American policymakers need to understand where commercial AI activity takes place, who funds it and carries it out, which real-world problems AI companies are trying to solve, and how these facets are changing over time. Our latest research focuses on venture capital, private equity, and M&A deals from 2015 through 2019, a period of rapid growth and differentiation for the global AI industry.

Although the COVID-19 pandemic has since disrupted the market, with implications for AI that are still unfolding, studying this period helps us understand the foundations of today's AI sector, and where it may be headed.

America leads, but doesn't dominate

Contrary to narratives that Beijing is outpacing Washington in this field, the United States remains the leading destination for global AI investments. China is making meaningful investments in AI, but in a diverse, global playing field it is one player among many.

As of the end of 2019, the United States had the world's largest investment market in privately held AI companies, including startups as well as large companies that aren't traded on stock exchanges. We estimate AI companies attracted nearly $40 billion globally in disclosed investment in 2019 alone, as shown in Figure 1. American companies attracted the lion's share of that investment: $25.2 billion in disclosed value (64% of the global total) across 1,412 transactions. (These disclosed totals significantly understate U.S. and global investment, since many deals and deal values are undisclosed, so total transaction values were probably much higher.)

Around the world, private-market AI investment grew tremendously from 2015 to 2019, especially outside China. Notwithstanding occasional claims in the media that China is outstripping U.S. investment in AI, we find that Chinese investment levels in fact continue to lag behind those of the United States. Consistent with broader trends in China's tech sector, the Chinese AI market saw a dramatic boom from 2015 to 2017, prompting many of those media claims. But over the following two years, investment sharply declined, resulting in little net growth in the annual level of investment from 2015 to 2019.

Figure 1: Total disclosed value of equity investments in privately held AI companies, by target region

Although America's nearest rival for AI supremacy may not have taken the lead, our data suggest the United States shouldn't grow complacent. America's AI companies remain ahead in overall transaction value, but they account for a steadily shrinking percentage of global transactions. And by our estimates, investment outside the United States and China is quickly expanding, with Israel, India, Japan, Singapore, and many European countries growing faster than their larger competitors by some or all metrics.

Figure 2: Investment activity and growth in the top 10 target countries (ranked by disclosed value)

Chinese investors play a meaningful but limited role

China's investments abroad are attracting mounting scrutiny, but in the American AI investment market, Chinese investors are relatively minor players. In 2019, we estimate that disclosed Chinese investors participated in 2% of investments into American AI companies, down from a peak of only 5% in 2016. As Figure 3 makes clear, the Chinese investors in our dataset generally seem to invest in Chinese AI companies instead.

Figure 3: Investment events with at least one Chinese investor participant, by target region

There was also little evidence in our data that disclosed Chinese investors seek out especially sensitive companies or technologies, such as defense-related AI, when they invest outside China. That said, our data are limited; some Chinese investors may be undisclosed or operate through foreign subsidiaries that obscure their interests. And aggregate trends are of course only one part of the picture. Some China-based investors clearly invest abroad in order to extract security-sensitive information or technology. These efforts deserve scrutiny. But overall, it seems that disclosed Chinese investors, and any bad actors among them, are a relatively small piece of a larger and more diverse AI investment market.

Few AI companies focus on public-sector needs

When it comes to specific applications, we found that most AI companies are focused on transportation, business services, or general-purpose applications. There are some differences across borders: Compared to the rest of the world, investment into Chinese AI companies is concentrated in transportation, security and biometrics (including facial recognition), and arts and leisure, while in the United States and other countries, companies focused on business uses, general-purpose applications, and medicine and life sciences attract more capital.

Across all countries, though, relatively few private-market investments seem to be flowing to companies that focus squarely on military and government AI applications. Even the related category of security and biometrics is relatively small, though materially larger in China. Governments can and do adapt commercial AI tools for their own purposes, but for the time being, relatively few AI startups seem to be working and raising funds with public-sector clients in mind, especially outside China.

Figure 4: Regional investment targets by application area

The bottom line on global AI

The world's AI landscape is changing fast, and a plethora of unpredictable geopolitical factors, from U.S.-China decoupling to COVID-related disruptions, counsel against confident claims about where the global AI landscape is headed next. Still, our estimates of investment around the world point to fundamental, longer-term trends unlikely to vanish anytime soon. These trends have important implications for policy.


The North America artificial intelligence in healthcare diagnosis market is projected to reach from US$ 1,716.42 million in 2019 to US$ 32,009.61…

New York, Sept. 30, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "North America Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Regional Analysis by Diagnostic Tool ; Application ; End User ; Service ; and Country" - https://www.reportlinker.com/p05974389/?utm_source=GNW

The healthcare industry has always been a leader in innovation. The constant mutating of diseases and viruses makes it difficult to stay ahead of the curve.

However, with the help of artificial intelligence and machine learning algorithms, it continues to advance, creating new treatments and helping people live longer and healthier. A study published by The Lancet Digital Health compared the performance of deep learning, a form of artificial intelligence (AI), in detecting diseases from medical imaging versus that of healthcare professionals, using a sample of studies carried out between 2012 and 2019.

The study found that, in the past few years, AI has become more precise in identifying disease diagnoses in these images and has become a more feasible source of diagnostic information. With advancements in AI, deep learning may become even more efficient in identifying diagnoses in the coming years.

Moreover, it can help doctors with diagnoses and notify them when patients are weakening so that medical intervention can occur sooner, before the patient needs hospitalization. It can save costs for both hospitals and patients. Additionally, the precision of machine learning can detect diseases such as cancer quickly, thus saving lives.

In 2019, the medical imaging tool segment accounted for a larger share of the North America artificial intelligence in healthcare diagnosis market. Its growth is attributed to the increasing adoption of AI technology for diagnosis of chronic conditions, which is likely to drive the growth of the diagnostic tool segment.

In 2019, the radiology segment held a considerable share of the North America artificial intelligence in healthcare diagnosis market, by application. This segment is also predicted to dominate the market by 2027, owing to rising demand for AI-based applications in radiology.

A few major primary and secondary sources for the artificial intelligence in healthcare diagnosis market included the US Food and Drug Administration and the World Health Organization.

Read the full report: https://www.reportlinker.com/p05974389/?utm_source=GNW
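As a quick sanity check on the headline figures, the growth implied by the forecast, from US$ 1,716.42 million in 2019 to US$ 32,009.61 million in 2027, works out to a compound annual growth rate of roughly 44%:

```python
# Implied CAGR over the 2019-2027 forecast window (8 years).
start, end, years = 1716.42, 32009.61, 2027 - 2019
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # -> 44.2% per year
```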

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Inside the Army’s futuristic test of its battlefield artificial intelligence in the desert – C4ISRNet

YUMA PROVING GROUND, Ariz. - After weeks of work in the oppressive Arizona desert heat, the U.S. Army carried out a series of live fire engagements Sept. 23 at Yuma Proving Ground to show how artificial intelligence systems can work together to automatically detect threats, deliver targeting data and recommend weapons responses at blazing speeds.

Set in the year 2035, the engagements were the culmination of Project Convergence 2020, the first in a series of annual demonstrations utilizing next generation AI, network and software capabilities to show how the Army wants to fight in the future.

The Army was able to use a chain of artificial intelligence, software platforms and autonomous systems to take sensor data from all domains, transform it into targeting information, and select the best weapon system to respond to any given threat in just seconds.

Army officials claimed that these AI and autonomous capabilities have shortened the sensor-to-shooter timeline, the time it takes from when sensor data is collected to when a weapon system is ordered to engage, from 20 minutes to 20 seconds, depending on the quality of the network and the number of hops between where it's collected and its destination.

"We use artificial intelligence and machine learning in several ways out here," Brigadier General Ross Coffman, director of the Army Futures Command's Next Generation Combat Vehicle Cross-Functional Team, told visiting media.

"We used artificial intelligence to autonomously conduct ground reconnaissance, employ sensors and then passed that information back. We used artificial intelligence and aided target recognition and machine learning to train algorithms on identification of various types of enemy forces. So, it was prevalent throughout the last six weeks."

Promethean Fire


The first featured exercise shows how the Army stacked AI capabilities to automate the sensor-to-shooter pipeline. In that example, the Army used space-based sensors operating in low Earth orbit to take images of the battleground. Those images were downlinked to a TITAN ground station surrogate located at Joint Base Lewis-McChord in Washington, where they were processed and fused by a new system called Prometheus.

Currently under development, Prometheus is an AI system that takes the sensor data ingested by TITAN, fuses it, and identifies targets. The Army received its first Prometheus capability in 2019, although its targeting accuracy is still improving, according to one Army official at Project Convergence. In some engagements, operators were able to send in a drone to confirm potential threats identified by Prometheus.

From there, the targeting data was delivered to the Tactical Assault Kit, a software program that gives operators an overhead view of the battlefield populated with both blue and red forces. As new threats are identified by Prometheus or other systems, that data is automatically entered into the program to show users their location. Specific images and live feeds can be pulled up in the environment as needed.

All of that takes place in just seconds.
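The flow just described (satellite imaging, downlink, fusion, map update) can be summarized as a simple pipeline. The sketch below is purely illustrative: the class and function names are invented and do not correspond to the Army's actual TITAN, Prometheus, or Tactical Assault Kit software, and the "detection model" is a stand-in that reads pre-scored detections.

    # Hypothetical sketch of the sensor-to-shooter data flow described above.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        lat: float
        lon: float
        threat_type: str
        confidence: float

    def downlink(satellite_frames):
        # Step 1: imagery from low Earth orbit arrives at the ground station.
        return list(satellite_frames)

    def fuse_and_identify(frames, threshold=0.8):
        # Step 2: a Prometheus-like stage fuses frames and keeps likely targets.
        detections = [d for frame in frames for d in frame]
        return [d for d in detections if d.confidence >= threshold]

    def update_common_picture(targets):
        # Step 3: confirmed threats auto-populate the operators' map view.
        for t in targets:
            print(f"[map] {t.threat_type} at ({t.lat:.4f}, {t.lon:.4f})")

    frames = [[Detection(35.1042, -117.3020, "armor", 0.92),
               Detection(35.2001, -117.1115, "truck", 0.55)]]
    update_common_picture(fuse_and_identify(downlink(frames)))

The point is the shape of the automation: each stage hands structured target data to the next, with no human transcription in between.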

Once the Army has its target, it needs to determine the best response. Enter the real star of the show: the FIRES Synchronization to Optimize Responses in Multi-Domain Operations, or FIRESTORM.

What is FIRESTORM? "Simply put, it's a computer brain that recommends the best shooter, updates the common operating picture with the current enemy and friendly situation, and missions the effectors that we want to eradicate the enemy on the battlefield," said Coffman.

Army leaders were effusive in praising FIRESTORM throughout Project Convergence. The AI system works within the Tactical Assault Kit. Once new threats are entered into the program, FIRESTORM processes the terrain, available weapons, proximity, number of other threats and more to determine the best firing system to respond to a given threat. Operators can assess and follow through on the system's recommendations with just a few clicks of the mouse, sending orders to soldiers or weapons systems within seconds of identifying a threat.

Just as important, FIRESTORM provides critical target deconfliction, ensuring that multiple weapons systems aren't redundantly firing on the same threat. Right now, that sort of deconfliction would have to take place over a phone call between operators. FIRESTORM speeds up that process and eliminates any potential misunderstandings.
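FIRESTORM itself is not public, but the weapon-target pairing problem it addresses can be shown in miniature: score each free, in-range shooter against a threat, commit the best one, and refuse to double-assign a threat that is already covered. Everything below, from the shooter names and ranges to the distance-only scoring rule, is a hypothetical sketch rather than the actual system.

    import math

    shooters = [
        {"name": "ERCA battery", "range_km": 70,  "pos": (0.0, 0.0),  "busy": False},
        {"name": "Gray Eagle",   "range_km": 200, "pos": (10.0, 5.0), "busy": False},
    ]
    engaged = set()  # threats that already have a shooter assigned

    def recommend_shooter(threat_id, threat_pos):
        # Deconfliction: never pair a second weapon with a covered threat.
        if threat_id in engaged:
            return None
        # Candidates must be free and within range of the threat.
        pool = [s for s in shooters if not s["busy"]
                and math.dist(s["pos"], threat_pos) <= s["range_km"]]
        if not pool:
            return None
        # Toy scoring rule: prefer the closest shooter. A real system would
        # also weigh terrain, munition effects, and the other active threats.
        best = min(pool, key=lambda s: math.dist(s["pos"], threat_pos))
        best["busy"] = True
        engaged.add(threat_id)
        return best["name"]

    print(recommend_shooter("T1", (40.0, 0.0)))  # ERCA battery
    print(recommend_shooter("T1", (40.0, 0.0)))  # None: threat already covered

In the real workflow, operator approval would sit between the recommendation and the fire order, matching the human-in-the-loop arrangement Coffman describes later in this piece.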

In that first engagement, FIRESTORM recommended the use of the Extended Range Cannon Artillery. Operators approved the algorithm's choice, and the cannon promptly fired a projectile at the target located 40 kilometers away. The process from identifying the target to sending those orders happened faster than it took the projectile to reach the target.

Perhaps most surprising is how quickly FIRESTORM was integrated into Project Convergence.

"This computer program has been worked on in New Jersey for a couple years. It's not a program of record. This is something that they brought to my attention in July of last year, but it needed a little bit of work. So we put effort, we put scientists and we put some money against it," said Coffman. "The way we used it is, as enemy targets were identified on the battlefield, FIRESTORM quickly paired those targets with the best shooter in position to put effects on it. This is happening faster than any human could execute. It is absolutely an amazing technology."

Dead Center

Prometheus and FIRESTORM werent the only AI capabilities on display at Project Convergence.

In other scenarios, an MQ-1C Gray Eagle drone was able to identify and target a threat using the on-board Dead Center payload. With Dead Center, the Gray Eagle was able to process the sensor data it was collecting, identifying a threat on its own without having to send the raw data back to a command post for processing and target identification. The drone was also equipped with the Maven Smart System and Algorithmic Inference Platform, a product created by Project Maven, a major Department of Defense effort to use AI for processing full-motion video.

According to one Army officer, the capabilities of the Maven Smart System and Dead Center overlap, but placing both on the modified Gray Eagle at Project Convergence helped the service see how they compared.

With all of the AI engagements, the Army ensured there was a human in the loop to provide oversight of the algorithms' recommendations. When asked how the Army was implementing the Department of Defense's principles of ethical AI use adopted earlier this year, Coffman pointed to the human barrier between AI systems and lethal decisions.

"So obviously the technology exists to remove the human, right? The technology exists. But the United States Army is an ethics-based organization that's not going to remove a human from the loop to make decisions of life or death on the battlefield, right? We understand that," explained Coffman. "The artificial intelligence identified geo-located enemy targets. A human then said, yes, we want to shoot at that target."

Here is the original post:

Inside the Army's futuristic test of its battlefield artificial intelligence in the desert - C4ISRNet

Posted in Ai

Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage – PRNewswire

To do this, Admiral Seguros is using an AI solution, developed by the technology company Tractable, which accurately evaluates vehicle damage with photos sent through a web application. The app, via the AI, completes the complex manual tasks that an advisor would normally perform and produces a damage assessment in seconds, often without the need for further review.

Upon receiving the assessment, Admiral Seguros will use it to make immediate payment offers to policyholders when appropriate, allowing them to resolve claims in minutes, even on the first call.

Jose Maria Perez de Vargas, Head of Customer Management at Admiral Seguros, said: "Admiral Seguros continues to advance in digitalisation as a means to provide a better service to our policyholders, providing them with an easy, secure and transparent means of evaluating damages without the need for travel, achieving compensation in a few hours. It's a simple, innovative and efficient claims management process that our clients will surely appreciate."

Adrien Cohen, co-founder and president of Tractable, said: "By using our AI to offer immediate payments, Admiral Seguros will resolve many claims almost instantly, to the delight of its customers. This is central to our mission of using Artificial Intelligence to accelerate recovery, converting the process from weeks to minutes."

Tractable's AI uses deep learning for computer vision, in addition to machine learning techniques. The AI is trained with many millions of photographs of vehicle damage, and the algorithms learn from experience by analyzing a wide variety of different examples. Tractable's technology can be applied globally to any vehicle.
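As a rough illustration of what photo-based assessment involves, the sketch below attaches a stock convolutional backbone (from the open-source PyTorch and torchvision libraries) to a small set of invented damage classes and cost estimates. Tractable's actual models, labels, and training data are proprietary; this shows only the general shape of such a system.

    import torch
    from torchvision import models

    DAMAGE_CLASSES = ["no_damage", "scratch", "dent", "severe"]  # hypothetical labels
    EST_COST_EUR = {"no_damage": 0, "scratch": 150, "dent": 450, "severe": 2800}

    # Untrained stand-in for a model fine-tuned on millions of damage photos.
    backbone = models.resnet18(weights=None)
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, len(DAMAGE_CLASSES))
    backbone.eval()

    def assess(photo_batch):
        # photo_batch: float tensor of shape (N, 3, 224, 224), one photo per row.
        with torch.no_grad():
            logits = backbone(photo_batch)
        labels = [DAMAGE_CLASSES[i] for i in logits.argmax(dim=1).tolist()]
        return [(label, EST_COST_EUR[label]) for label in labels]

    # With random weights the output is arbitrary; a trained model would
    # return a predicted damage class and cost estimate per photo.
    print(assess(torch.rand(2, 3, 224, 224)))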

The AI enables insurers to assess car damage, share recommended repair operations, and guide the claims management process to ensure claims are processed and settled as quickly as possible.

According to Admiral Seguros, the application of this technology in the insurance sector is a major step in digitization and will markedly improve the customer experience of Admiral's insurance brands in Spain, Qualitas Auto and Balumba.

About Tractable:

Tractable develops artificial intelligence for accident and disaster recovery. Its AI solutions have been deployed by leading insurers across Europe, North America and Asia to accelerate accident recovery for hundreds of thousands of households. Tractable is backed by $55m in venture capital and has offices in London, New York City and Tokyo.

About Admiral Seguros

In Spain, Admiral Group plc has been based in Seville since 2006, following the creation of Admiral Seguros. More than 700 people work there, serving the entire national territory and building and marketing its two commercial brands, Qualitas Auto and Balumba.

Recognized as the third best company to work for in Spain, the sixth in Europe and the eighteenth in the world by the consultancy Great Place to Work, Admiral Seguros is committed to a corporate culture focused on people.

SOURCE Tractable

https://tractable.ai

Originally posted here:

Admiral Seguros Is The First Spanish Insurer To Use Artificial Intelligence To Assess Vehicle Damage - PRNewswire

Posted in Ai

Industry Voices: AI doesn't have to replace doctors to produce better health outcomes – FierceHealthcare

Americans encounter some form of artificial intelligence and machine learning technologies in nearly every aspect of daily life: We accept Netflix's recommendations on what movie we should stream next, enjoy Spotify's curated playlists and take a detour when Waze tells us we can shave eight minutes off our commute.

And it turns out that we're fairly comfortable with this new normal: A survey released last year by Innovative Technology Solutions found that, on a scale of 1 to 10, Americans give their GPS systems an 8.1 trust and satisfaction score, followed closely by a 7.5 for TV and movie streaming services.

But when it comes to higher stakes, we're not so trusting. When asked whether they would trust an AI doctor to diagnose or treat a medical issue, respondents scored it just a 5.4.


Overall skepticism about medical AI and ML is nothing new. In 2012, we were told that IBM's AI-powered Watson was being trained to recommend treatments for cancer patients. There were claims that the advanced technology could make medicine personalized and tailored to millions of people living with cancer. But in 2018, reports surfaced indicating that the research and technology had fallen short of expectations, leaving users to question the accuracy of Watson's predictive analytics.


Patients have been reluctant to trust medical AI and ML out of fear that the technology would not offer a unique or personalized recommendation based on individual needs. A piece in Harvard Business Review in 2019 referenced a survey in which 200 business students were asked to take a free health assessment: 40% of students signed up when told their doctor would perform the diagnosis, while only 26% signed up when told a computer would perform the diagnosis.

These concerns are not without basis. Many of the AI and ML approaches being used in healthcare today, chosen for their simplicity and ease of implementation, strive for population-level performance by fitting to the characteristics most common among patients. They aim to do well in the general case, and so fail to serve large groups of patients and individuals with unique health needs. This, however, is a limitation of how AI and ML are being applied, not a limitation of the technology itself.

If anything, what makes AI and ML exceptional, if done right, is the ability to process huge sets of data covering a diversity of patients, providers, diseases and outcomes, and to model the fine-grained trends that could have a lasting impact on a patient's diagnosis or treatment options. This ability to use data "in the large" for representative populations and to obtain inferences "in the small" for individual-level decision support is the promise of AI and ML. The whole process might sound impersonal or cookie-cutter, but the reality is that advancements in precision medicine and delivery will make care decisions more data-driven and thus more exact.

Consider a patient choosing a specialist. It's anything but data-driven: they'll search for a provider in-network, or maybe one that is conveniently located, without understanding the potential health outcomes of that choice. The issue is that patients lack the data and information they need to make informed choices.


That's where machine intelligence comes into play: an AI/ML model that can accurately predict the right treatment, at the right time, by the right provider for a patient, which could drastically reduce the rate of hospitalizations and emergency room visits.

As an example, research published last month in AJMC looked at claims data from 2 million Medicare beneficiaries between 2017 and 2019 to evaluate the utility of ML in the management of severe respiratory infections in community and post-acute settings. The researchers found that machine intelligence for precision navigation could be used to mitigate infection rates in the post-acute care setting.

Specifically, at-risk individuals who received care at skilled nursing facilities (SNFs) that the technology predicted would be the best choice for them had a relative reduction of 37% for emergent care and 36% for inpatient hospitalizations due to respiratory infections compared to those who received care at non-recommended SNFs.

This advanced technology can comb through and analyze an individual's treatment needs and medical history so that the most accurate recommendations can be made based on that individual's personalized needs and the doctors or facilities available to them. In turn, matching a patient to the optimal provider can drastically improve health outcomes while also lowering the cost of care.
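In miniature, that navigation step looks like the sketch below: score every candidate facility with a model's predicted risk for this particular patient, then recommend the lowest-risk option. The features, weights, and facility data are invented for illustration; a production model would be learned from large-scale claims data like that used in the AJMC study.

    patient = {"age": 78, "copd": 1, "prior_admits": 2}

    facilities = [
        {"name": "SNF A", "resp_care_score": 0.9, "staff_ratio": 0.8},
        {"name": "SNF B", "resp_care_score": 0.4, "staff_ratio": 0.6},
    ]

    def predicted_risk(patient, facility):
        # Hypothetical linear risk model: patient acuity pushes risk up,
        # facility quality pulls it down.
        acuity = (0.02 * patient["age"] + 0.3 * patient["copd"]
                  + 0.1 * patient["prior_admits"])
        mitigation = 0.8 * facility["resp_care_score"] + 0.4 * facility["staff_ratio"]
        return max(0.0, acuity - mitigation)

    best = min(facilities, key=lambda f: predicted_risk(patient, f))
    print(f"Recommended: {best['name']}")  # the lower predicted-risk facility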

We now have the technology to use machine intelligence to optimize some of the most important decisions in healthcare. The data show results we can trust.

Zeeshan Syed is the CEO and Zahoor Elahi is the COO of Health at Scale.

Link:

Industry VoicesAI doesn't have to replace doctors to produce better health outcomes - FierceHealthcare

Posted in Ai

Will artificial intelligence have a conscience? – TechTalks

Does artificial intelligence require moral values? We spoke to Patricia Churchland, neurophilosopher and author of Conscience: The Origins of Moral Intuition.

This article is part of "the philosophy of artificial intelligence," a series of posts that explore the ethical, moral, and social implications of AI today and in the future.

Can artificial intelligence learn the moral values of human societies? Can an AI system make decisions in situations where it must weigh and balance between damage and benefits to different people or groups of people? Can AI develop a sense of right and wrong? In short, will artificial intelligence have a conscience?

This question might sound irrelevant when considering today's AI systems, which are only capable of accomplishing very narrow tasks. But as science continues to break new ground, artificial intelligence is gradually finding its way into broader domains. We're already seeing AI algorithms applied to areas where the boundaries of good and bad decisions are not clearly defined, such as criminal justice and job application processing.

In the future, we expect AI to care for the elderly, teach our children, and perform many other tasks that require moral human judgement. And then, the question of conscience and conscientiousness in AI will become even more critical.

With these questions in mind, I went in search of a book (or books) that explained how humans develop conscience and gave an idea of whether what we know about the brain provides a roadmap for conscientious AI.

A friend suggested Conscience: The Origins of Moral Intuition by Dr. Patricia Churchland, neuroscientist, philosopher, and professor emerita at the University of California, San Diego. Dr. Churchland's book, and a conversation I had with her after reading Conscience, taught me a lot about the extent and limits of brain science. Conscience shows us how far we've come in understanding the relation between the brain's physical structure and workings and the moral sense in humans. But it also shows us how much further we must go to truly understand how humans make moral decisions.

It is a very accessible read for anyone who is interested in exploring the biological background of human conscience and reflecting on the intersection of AI and conscience.

Here's a very quick rundown of what Conscience tells us about the development of moral intuition in the human brain. With the mind being the main blueprint for AI, better knowledge of conscience can tell us a lot about what it would take for AI to learn the moral norms of human societies.

"Conscience is an individual's judgment about what is normally right or wrong, typically, but not always, reflecting some standard of a group to which the individual feels attached," Churchland writes in her book.

But how did humans develop the ability to understand and adopt these rights and wrongs? To answer that question, Dr. Churchland takes us back through time, to when our first warm-blooded ancestors made their appearance.

Birds and mammals are endotherms: their bodies have mechanisms to preserve their heat. In contrast, in reptiles, fish, and insects, which are cold-blooded organisms, the body adapts to the temperature of the environment.

The great benefit of endothermy is the capability to gather food at night and to survive colder climates. The tradeoff: endothermic bodies need a lot more food to survive. This requirement led to a series of evolutionary steps in the brains of warm-blooded creatures that made them smarter. Most notable among them is the development of the cortex in the mammalian brain.

The cortex can integrate diverse signals and pull out abstract representation of events and things that are relevant to survival and reproduction. The cortex learns, integrates, revises, recalls, and keeps on learning.

The cortex allows mammals to be much more flexible to changes in weather and landscape, as opposed to insects and fish, who are very dependent on stability in their environmental conditions.

But again, learning capabilities come with a tradeoff: mammals are born helpless and vulnerable. Unlike snakes, turtles, and insects, which hit the ground running and are fully functional when they break their eggshells, mammals need time to learn and develop their survival skills.

And this is why they depend on each other for survival.

The brains of all living beings have a reward and punishment system that makes sure they do things that support their survival and the survival of their genes. The brains of mammals repurposed this function to adapt for sociality.

"In the evolution of the mammalian brain, feelings of pleasure and pain supporting self-survival were supplemented and repurposed to motivate affiliative behavior," Churchland writes. "Self-love extended into a related but new sphere: other-love."

The main beneficiaries of this change are the offspring. Evolution has triggered changes in the circuitry of the brains of mammals to reward care for babies. Mothers, and in some species both parents, go to great lengths to protect and feed their offspring, often at a great disadvantage to themselves.

In Conscience, Churchland describes experiments showing how biochemical reactions in the brains of different mammals reward social behavior, including care for offspring.

"Mammalian sociality is qualitatively different from that seen in other social animals that lack a cortex, such as bees, termites, and fish," Churchland writes. "It is more flexible, less reflexive, and more sensitive to contingencies in the environment and thus sensitive to evidence. It is sensitive to long-term as well as short-term considerations. The social brain of mammals enables them to navigate the social world, for knowing what others intend or expect."

The brains of humans have the largest and most complex cortex among mammals. The brain of Homo sapiens, our species, is three times as large as that of chimpanzees, with whom we shared a common ancestor 5 million to 8 million years ago.

The larger brain naturally makes us much smarter, but it also has higher energy requirements. So how did we come to pay the calorie bill? "Learning to cook food over fire was quite likely the crucial behavioral change that allowed hominin brains to expand well beyond chimpanzee brains, and to expand rather quickly in evolutionary time," Churchland writes.

With the body's energy needs supplied, hominins eventually became able to do more complex things, including the development of richer social behaviors and structures.

So the complex behavior we see in our species today, including the adherence to moral norms and rules, started off as a struggle for survival and the need to meet energy constraints.

"Energy constraints might not be stylish and philosophical, but they are as real as rain," Churchland writes in Conscience.

Our genetic evolution favored social behavior. Moral norms emerged as practical solutions to our needs. And we humans, like every other living being, are subject to the laws of evolution, which Churchland describes as "a blind process that, without any goal, fiddles around with the structure already in place." The structure of our brain is the result of countless experiments and adjustments.

"Between them, the circuitry supporting sociality and self-care, and the circuitry for internalizing social norms, create what we call conscience," Churchland writes. "In this sense your conscience is a brain construct, whereby your instincts for caring, for self and others, are channeled into specific behaviors through development, imitation, and learning."

This is a very sensitive and complicated topic, and despite all the advances in brain science, many of the mysteries of the human mind and behavior remain unsolved.

"The dominant role of energy requirements in the ancient origin of human morality does not mean that decency and honesty must be cheapened. Nor does it mean that they are not real. These virtues remain entirely admirable and worthy to us social humans, regardless of their humble origins. They are an essential part of what makes us the humans we are," Churchland writes.

In Conscience, Churchland discusses many other topics, including the role of reinforcement learning in the development of social behavior and the human cortex's far-reaching capacity to learn by experience, to reflect on counterfactual situations, develop models of the world, draw analogies from similar patterns and much more.

Basically, we use the same reward system that allowed our ancestors to survive, and draw on the complexity of our layered cortex to make very complicated decisions in social settings.

"Moral norms emerge in the context of social tension, and they are anchored by the biological substrate. Learning social practices relies on the brain's system of positive and negative reward, but also on the brain's capacity for problem solving," Churchland writes.
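The reward-and-punishment mechanism Churchland describes has a well-known computational analogue in reinforcement learning: an agent's estimate of an action's value is nudged up after positive feedback and down after negative feedback. The toy sketch below illustrates only that idea; it is not a model of the brain, and the "social" environment is invented.

    import random

    actions = ["share", "hoard"]
    value = {a: 0.0 for a in actions}  # learned "goodness" of each action
    alpha = 0.1                        # learning rate

    def social_feedback(action):
        # Toy environment: the group usually rewards sharing, punishes hoarding.
        approves = random.random() < 0.9
        return 1.0 if (action == "share") == approves else -1.0

    for _ in range(500):
        a = random.choice(actions)  # explore both behaviors
        value[a] += alpha * (social_feedback(a) - value[a])

    print(value)  # "share" ends up with the much higher learned value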

After reading Conscience, I had many questions in mind about the role of conscience in AI. Would conscience be an inevitable byproduct of human-level AI? If energy and physical constraints pushed us to develop social norms and conscientious behavior, would there be a similar requirement for AI? Does physical experience and sensory input from the world play a crucial role in the development of intelligence?

Fortunately, I had the chance to discuss these topics with Dr. Churchland after reading Conscience.

What is evident from Dr. Churchland's book (and other research on biological neural networks) is that physical experience and constraints play an important role in the development of intelligence, and by extension conscience, in humans and animals.

But today, when we speak of artificial intelligence, we mostly talk about software architectures such as artificial neural networks. Today's AI is mostly disembodied lines of code that run on computers and servers and process data obtained by other means. Will physical experience and constraints be a requirement for the development of truly intelligent AI that can also appreciate and adhere to the moral rules and norms of human society?

"It's hard to know how flexible behavior can be when the anatomy of the machine is very different from the anatomy of the brain," Dr. Churchland said in our conversation. "In the case of biological systems, the reward system, the system for reinforcement learning, is absolutely crucial. Feelings of positive and negative reward are essential for organisms to learn about the environment. That may not be true in the case of artificial neural networks. We just don't know."

She also pointed out that we still don't know how brains think. "In the event that we were to understand that, we might not need to replicate absolutely every feature of the biological brain in the artificial brain in order to get some of the same behavior," she added.

Churchland also noted that while the AI community initially dismissed neural networks, they eventually turned out to be quite effective once their computational requirements were met. And while current neural networks have limited intelligence in comparison to the human brain, we might be in for surprises in the future.

"One of the things we do know at this stage is that mammals with cortex and with reward system and subcortical structures can learn things and generalize without a huge amount of data," she said. "At the moment, an artificial neural network might be very good at classifying faces but hopeless at classifying mammals. That could just be a numbers problem."

"If you're an engineer and you're trying to get some effect, try all kinds of things. Maybe you do have to have something like emotions, and maybe you can build that into your artificial neural network."

One of my takeaways from Conscience was that while humans generally align themselves with the social norms of their society, they also challenge those norms at times. And the unique physical structure of each human brain, the genes we inherit from our parents, and the experiences we acquire through our lives make for the subtle differences that allow us to come up with new norms and ideas and sometimes defy what was previously established as rule and law.

But one of the much-touted features of AI is its uniform reproducibility. When you create an AI algorithm, you can replicate it countless times and deploy it in as many devices and machines as you want. They will all be identical down to the last parameter values of their neural networks. Now, the question is, when all AIs are equal, will they remain static in their social behavior and lack the subtle differences that drive the dynamics of social and behavioral progress in human societies?

"Until we have a much richer understanding of how biological brains work, it's really hard to answer that question," Churchland said. "We know that in order to get a complicated result out of a neural network, the network doesn't have to have wet stuff; it doesn't have to have mitochondria and ribosomes and proteins and membranes. How much else does it not have to have? We don't know."

"Without data, you're just another person with an opinion, and I have no data that would tell me that you've got to mimic certain specific circuitry in the reinforcement learning system in order to have an intelligent network."

Engineers will try and see what works.

We have yet to learn much about human conscience, and even more about if and how it would apply to highly intelligent machines. "We do not know precisely what the brain does as it learns to balance in a headstand. But over time, we get the hang of it," Churchland writes in Conscience. "To an even greater degree, we do not know what the brain does as it learns to find balance in a socially complicated world."

But as we continue to observe and learn the secrets of the brain, hopefully we will be better equipped to create AI that serves the good of all humanity.

Read more here:

Will artificial intelligence have a conscience? - TechTalks

Posted in Ai

Why Artificial Intelligence Should Be on the Menu this Season – FSR magazine

The perfect blend of AI collaboration needs workers to focus on the tasks where they excel.

Faced with the business impacts of one of the largest health crises to date, restaurants of all sizes are at a pivotal moment where every decision, short term and long term, counts. For their businesses to survive, restaurant owners have had to act fast by rethinking operations and introducing pandemic-related initiatives.

Watching the world's largest chains all the way down to the local mom-and-pops become innovators in such extreme times has shown the industry's tenacity and survival instinct, even with the odds stacked against it. None of these initiatives would be possible without technology as the driving factor.

Why AI is on the Menu This Season

A recent Dragontail Systems survey found that 70 percent of respondents would be more comfortable with delivery if they were able to monitor their order's preparation from start to finish. Consumers want to be at the forefront of their meal's creation: they don't want to cook it, but they do want to know it was prepared in a safe environment and delivered hot and fresh to their door.

Aside from AI's role on the back end, helping with preparation-time estimation and driver scheduling, the technology is now being used in cameras, for example, which share real-time images with consumers so that they can be sure their orders are handled with care. Amid the pandemic, this means making sure that gloves and masks are used during the preparation process and that workspaces are properly sanitized.

It is clear that AI is already radically altering how work gets done in and out of the kitchen. Fearmongers often tout AI's ability to automate processes and make better decisions faster than humans can, but restaurants that deploy it mainly to displace employees will see only short-term productivity gains.

The perfect blend of AI collaboration needs workers to focus on the tasks where they excel, like customer service, so that the human element of the experience is never lost, only augmented.

AI on the Back-End

Ask any store or shift manager how they feel about workforce scheduling, and almost none will say it's their favorite part of the job. It's a Catch-22: even when it's done, it's never perfect. However, when AI is in charge, everything looks different.

Parameters such as roles in the restaurant, peak days and hours, special events such as a presidential debate, overtime, seniority, skills, days off and more can be easily tracked. Managers not only save time by handing off this daunting task, but also allow the best decisions to be made for optimal restaurant efficiency.
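As a minimal sketch of what such a scheduler juggles, the snippet below assigns shifts greedily while respecting skill requirements and weekly hour caps. Commercial products treat this as a full constraint-optimization problem with many more parameters; the staff, shifts, and greedy rule here are invented for illustration.

    staff = [
        {"name": "Ana",  "skills": {"grill", "register"}, "max_hours": 30, "hours": 0},
        {"name": "Ben",  "skills": {"register"},          "max_hours": 20, "hours": 0},
        {"name": "Cleo", "skills": {"grill"},             "max_hours": 40, "hours": 0},
    ]

    shifts = [  # (shift, required skill, length in hours)
        ("Fri dinner", "grill", 6),
        ("Fri dinner", "register", 6),
        ("Sat lunch", "grill", 5),
    ]

    schedule = []
    for shift, skill, hours in shifts:
        # Greedy rule: pick the qualified worker with the fewest hours so far.
        pool = [s for s in staff if skill in s["skills"]
                and s["hours"] + hours <= s["max_hours"]]
        if not pool:
            continue  # a real scheduler would flag the uncovered shift
        pick = min(pool, key=lambda s: s["hours"])
        pick["hours"] += hours
        schedule.append((shift, skill, pick["name"]))

    print(schedule)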

Another aspect is order prioritization: by nature, most kitchens and restaurants prepare meals on a FIFO (first-in, first-out) basis. When using AI that enhances kitchen prioritization, for example, cooks are told when to start an order, ensuring that there are actually drivers available to deliver it to the customer in a timely manner.

Delivery management then allows drivers to make more deliveries per hour just by following the system's decisions, which improve and optimize dispatching.
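The fire-when-a-driver-is-ready idea reduces to a small scheduling rule: find the earliest handoff each driver could support and start cooking so the food finishes exactly then. The numbers and rule below are hypothetical; real systems would also forecast cook times and driver ETAs.

    orders = [
        {"id": 101, "cook_minutes": 12},
        {"id": 102, "cook_minutes": 6},
    ]
    driver_free_at = [15, 9]  # predicted minutes until each driver returns

    def fire_time(order):
        # Earliest feasible handoff per driver is max(driver return time,
        # cook time if we start now); pick the soonest, then back-schedule.
        pickup = min(max(t, order["cook_minutes"]) for t in driver_free_at)
        return pickup - order["cook_minutes"]

    for o in orders:
        print(f"order {o['id']}: start cooking in {fire_time(o)} min")

Here order 102 waits three minutes before firing, so it finishes hot exactly when a driver is back rather than sitting under a heat lamp.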

The Birth of the Pandemic Intelligent Kitchen/Store

With the pandemic, our awareness of sanitation and cleanliness rose dramatically, and the demand for solutions came with it. AI cameras give customers exactly that: a real-time, never-before-seen view inside the kitchen to monitor how their order is being prepped, managed, and delivered.

AI also comes in handy as restaurants move away from dine-in toward more take-out and drive-thru. When a customer places an order online and picks it up in their car, an AI camera can detect the car's plate number, in addition to the customer's location via phone GPS, as the car enters the drive-thru area, so a runner from the restaurant can provide faster service.

In addition, the new concept of contactless menus, where the whole menu is online behind a quick scan of a QR code, is another element gaining popularity during the pandemic. The benefits go beyond minimizing contact with physical menus; when a restaurant implements a smart online menu, it can collect data and offer personalized suggestions based on customers' favorite foods, food and drink combos, weather-based recommendations, personalized upsells and cross-sells, and more, all powered by AI.
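A smart-menu suggestion rule of this sort can be shown in a few lines: combine a customer's order history with the current weather to rank items. The menu data, weights, and weather rule below are invented purely to show the shape of such a recommender.

    menu = [
        {"item": "ramen",           "tags": {"hot", "soup"}},
        {"item": "iced latte",      "tags": {"cold", "drink"}},
        {"item": "pepperoni pizza", "tags": {"hot"}},
    ]
    history = {"pepperoni pizza": 5, "iced latte": 1}  # past order counts
    weather = "cold"                                   # e.g. from a weather API

    def score(entry):
        s = history.get(entry["item"], 0)      # favorites rank higher
        if weather == "cold" and "hot" in entry["tags"]:
            s += 2                             # push hot food on cold days
        return s

    suggestions = sorted(menu, key=score, reverse=True)
    print([m["item"] for m in suggestions])  # pizza, then ramen, then iced latte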

Restaurants Can No Longer Afford an Aversion to Technology

Challenges associated with technology, including implementation and long roadmaps, are fading away: most technology providers now offer plug-and-play products or services, and most of them work on a SaaS model. This means there's no commitment, they are easy to use, and they integrate seamlessly with the POS.

Restaurants don't have to make a big investment to reap the benefits technology brings: taking small steps that steadily improve restaurant operations and customer experience can still lead to increased growth and higher profit margins, especially during the pandemic when money is tight.

Technology enhances the experience, giving consumers a reason to keep ordering from their favorite places at a time when the stakes have never been so high and the competition has never been as fierce. The pandemic is far from over, but the changes we are seeing will be here for a lifetime. That's why it is so important to leverage technology and AI now in order to see improvements in customer satisfaction and restaurant efficiency in the long term.

Continued here:

Why Artificial Intelligence Should Be on the Menu this Season - FSR magazine

Posted in Ai

Global AI in Asset Management Market By Technology, By Deployment Mode, By Application, By End User, By Region, Industry Analysis and Forecast, 2020 -…

New York, Sept. 28, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global AI in Asset Management Market By Technology, By Deployment Mode, By Application, By End User, By Region, Industry Analysis and Forecast, 2020 - 2026" - https://www.reportlinker.com/p05975407/?utm_source=GNW The main areas where AI is gaining traction in financial asset management are personal financial management, investment banking, and fraud detection. With the aid of technologies like machine learning, AI, and predictive analytics, financial institutions can manage their financial assets more effectively and can also adapt to changing consumer behavior. This will be beneficial for organizations looking to improve their business operations and processes through automation, which in turn results in an enhanced customer experience.

The increasing demand for automation in financial products, along with changing customer behavior, are the two major factors contributing to the growing demand for artificial intelligence (AI) in the asset management market. AI is largely dependent on the digital data produced by sources like business processes and customer service. Financial providers such as investment banks and other institutions are using artificial intelligence to recognize and analyze hidden patterns in the collected data to improve the proficiency of asset management. By adopting these technologies, companies can prepare themselves to deal with the continually changing compliance and regulatory environment surrounding market risks.

The exponential increase in data volumes, low interest rates, and strict regulations are prompting asset managers to reconsider their traditional business strategies. Furthermore, new technological advancements have extended artificial intelligence into asset management: numerous FinTech companies have adopted natural language processing (NLP) techniques, domain-enriched machine learning (ML), and related approaches to deliver enhanced financial and investment services.

Based on Technology, the market is segmented into Machine Learning, Natural Language Processing and Others. Based on Deployment Mode, the market is segmented into On-Premise and Cloud. Based on Application, the market is segmented into Portfolio Optimization, Risk & Compliance, Conversational Platform, Process Automation, Data Analysis and Others. Based on End User, the market is segmented into BFSI, Automotive, Healthcare, Retail & eCommerce, Energy & Utilities, Media & Entertainment and Others. Based on Regions, the market is segmented into North America, Europe, Asia Pacific, and Latin America, Middle East & Africa.

The major strategies followed by the market participants are Partnerships and Product Launches. Based on the analysis presented in the Cardinal matrix, Microsoft Corporation is the major forerunner in the AI in Asset Management Market. Companies such as Amazon.com, Inc., IBM Corporation, Salesforce.com, Inc., BlackRock, Inc., Genpact Limited, IPsoft, Inc., Lexalytics, Inc., Infosys Limited, and Narrative Science, Inc. are some of the key innovators in the market.

The market research report covers the analysis of key stakeholders of the market. Key companies profiled in the report include IBM Corporation, Microsoft Corporation, Genpact Limited, Infosys Limited, Amazon.com, Inc., BlackRock, Inc. (PNC Financial Services Group), IPsoft, Inc., Salesforce.com, Inc., Lexalytics, Inc. and Narrative Science, Inc.

Recent strategies deployed in AI in Asset Management Market

Partnerships, Collaborations, and Agreements:

Sep-2020: Salesforce announced its collaboration with ServiceMax, the leader in asset-centric field service management. Following the collaboration, the latter company launched ServiceMax Asset 360 for Salesforce, a new product built on Salesforce Field Service. This product would bring ServiceMax's asset-centric approach and decade-plus of experience to more customers across a broader set of industries to help them keep critical assets running.

Aug-2020: IPsoft announced its partnership with Sterling National Bank, the wholly-owned operating bank subsidiary of Sterling Bancorp. Following the partnership, Sterling National Bank deployed Amelia, the industry-leading Digital Employee platform. Amelia would accelerate Sterling's digital transformation and provide enhanced customer service.

Aug-2020: Salesforce entered into a collaboration with Sitetracker, a cloud-based project management company. Sitetracker customers can now benefit from Einstein Analytics' native artificial intelligence to predict project outcomes and durations, gain insights on financial and project performance, and enhance the utilization of vendors, project managers, and field teams. Sitetracker updated its platform to provide customers AI-driven predictive reporting and dashboarding through Salesforce Einstein Analytics. With this upgrade, Sitetracker customers can gather specific, deep, and up-to-the-minute strategic actionable insights on how they deploy and maintain critical infrastructure.

Aug-2020: Microsoft teamed up with SimCorp, following which the latter company integrated its front-to-back investment management platform, SimCorp Dimension, with Microsoft Azure as part of the firm's cloud transformation. The multi-asset investment management solutions provider will now be able to serve clients with a scalable, secure, and cost-efficient public cloud solution amid heightened market conditions, increased competition, and regulation.

Jul-2020: BlackRock entered into a partnership with Citi, under which Citi works with BlackRock and its Aladdin business to improve the administration of securities services for mutual clients. Aladdin is an end-to-end investment management platform that combines risk analytics, portfolio management, trading, and operations on a unified platform. By integrating with the Aladdin Provider network, Citi has been optimizing its operating model to support both BlackRock's asset management business and the wider Aladdin system.

Jul-2020: IPsoft partnered with Microsoft, following which Amelia, a comprehensive digital employee, would be available in the Microsoft Azure Marketplace, an online store providing applications and services for use on Azure. The addition of Amelia aimed to enable Microsoft sellers, partners, and customers to easily integrate and take advantage of her conversational AI for the enterprise.

Jul-2020: IBM entered into a partnership with Verizon, a telecommunications company. Under this partnership, IBM's AI, hybrid cloud, and asset management tools have been integrated with Verizon's 5G and edge computing technologies. The partnership combined low-latency 5G networking and multi-access edge computing with the wireless carrier's ThingSpace IoT platform and an asset tracking system. IBM would provide its Watson AI tools along with data analytics and its Maximo asset monitor.

Jul-2020: Infosys partnered with Vanguard, an American registered investment advisor. The partnership would deliver a technology-driven approach to plan administration and fundamentally reshape the corporate retirement plan experience for its sponsors and participants.

Jul-2020: Microsoft Corporation entered into a partnership with MSCI Inc. The partnership would accelerate innovation in the global investment industry. By bringing together the power of Microsoft's cloud and AI technologies with MSCI's global reach through its portfolio of investment decision support tools, the companies aim to unlock new innovations for the industry and enhance MSCI's client experience among the world's most sophisticated investors, including asset managers, asset owners, hedge funds and banks.

Jun-2020: BlackRock signed a partnership agreement with Northern Trust, a financial services company. Following the partnership, the latter company deployed BlackRock's Aladdin investment management platform. The partnership connected Northern Trust's fund accounting, fund administration, asset servicing, and middle office capabilities to BlackRock's Aladdin platform, creating greater connectivity between the asset manager and asset servicer.

Jun-2020: IBM extended its partnership with Siemens following which the companies announced a new solution designed to optimize the Service Lifecycle Management (SLM) of assets by dynamically connecting real-world maintenance activities and asset performance back to design decisions and field modifications. This new solution established an end-to-end digital thread between equipment manufacturers and the owner/operators of that equipment by using elements of the Xcelerator portfolio from Siemens Digital Industries Software and IBM Maximo. The combined capabilities of IBM and Siemens can help companies create and manage a closed-loop, end-to-end digital twin that breaks down traditional silos to service innovation and revenue generation.

May-2020: IPsoft extended its partnership with Unisys Corporation to embed cognitive AI capabilities within InteliServe, the Unisys pervasive workplace automation platform. Together, the companies provide an integrated suite of best-in-class cognitive technology that resolves all workplace issues from tech and HR to legal and finance. Amelia is now the first point of contact for InteliServe, bringing a consistent experience and reaching all users regardless of work location (home, office, or on the run).

May-2020: Infosys announced a partnership with Avaloq, a leading wealth management software and digital technology provider. The partnership was focused on providing end-to-end (e2e) wealth management capabilities through digital platforms. Infosys would be an implementation partner for Avaloq's wealth management suite of solutions to help clients modernize and transform their legacy systems into cutting-edge digital advisory platforms.

Apr-2020: BlackRock partnered with Microsoft Corporation to host BlackRock's Aladdin infrastructure on the Microsoft Azure cloud platform. The partnership was focused on bringing enhanced capabilities to BlackRock and its Aladdin clients, which include many of the world's most sophisticated institutional investors and wealth managers. By adopting Microsoft Azure, BlackRock accelerated innovation on Aladdin through greater computing scale and unlocked new capabilities to enhance the client experience.

Mar-2020: Genpact partnered with HighRadius, an enterprise SaaS (software-as-a-service) fintech company. The partnership was focused on improving enterprise accounts receivable by bringing together digital automation solutions powered by advanced machine learning and artificial intelligence. The companies would deliver a transformative digital automation solution that enables businesses to maximize their working capital while enhancing customer and user experiences.

Acquisitions and Mergers:

Aug-2020: Salesforce completed its acquisition of Tableau Software, an interactive data visualization software company. The acquisition helped Salesforce in enabling companies around the world to tap into data across their entire business and surface deeper insights to make smarter decisions, drive intelligent, connected customer experiences, and accelerate innovation.

May-2019: BlackRock acquired eFront, the French alternative investment management software and solutions provider. The acquisition expanded its presence and technology capabilities in France, Europe, and across the world. Additionally, eFront extended Aladdin's end-to-end processing capabilities in alternative asset classes, enabling clients to get an enterprise view of their portfolio.

May-2018: Microsoft announced the acquisition of Semantic Machines, a developer of new approaches for building conversational AI. Together the companies will develop their work in conversational AI with Microsoft's digital assistant Cortana and social chatbots like XiaoIce.

Mar-2017: Genpact completed the acquisition of Rage Frameworks, a leader in knowledge-based automation technology and services providing AI for the enterprise. The acquisition extended the frontier of AI for the enterprise. Genpact embedded Rage's AI in business operations and applied it to complex enterprise issues, allowing clients to generate insights and drive decisions and action at a scale and speed that humans alone could not achieve.

Product Launches and Product Expansions:

Sep-2020: Salesforce launched the next generation of Salesforce Field Service, with new appointment scheduling and optimization capabilities, artificial intelligence-driven guidance for dispatchers, asset performance insights, and automated customer communications. This service equipped teams across industries with AI-powered tools to deliver trusted, mission-critical field service.

Aug-2020: AWS launched Contact Center Intelligence (CCI) solutions, a combination of services powered by AWS's machine learning technology. These solutions aimed to help enterprises add ML-based intelligence to their contact centers. AWS CCI solutions enabled organizations to leverage machine learning functionality such as text-to-speech, translation, enterprise search, chatbots, business intelligence, and language comprehension in their current contact center environments.

Nov-2019: IBM introduced the Maximo Asset Monitor, a new AI-powered monitoring solution. The solution was designed to help maintenance and operations leaders better understand and improve the performance of their high-value physical assets. This solution helps in generating essential insights with AI-powered anomaly detection and provides enterprise-wide visibility into critical equipment performance.

Nov-2019: Microsoft Research's Natural Language Processing Group unveiled the dialogue generative pre-trained transformer, or DialoGPT, a deep-learning natural language processing model for use in automatic conversation response generation. The model was trained on more than 147 million dialogues and achieves strong results on several benchmarks.
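DialoGPT was subsequently released publicly and can be run through the Hugging Face transformers library. The snippet below follows the standard usage pattern for the microsoft/DialoGPT-medium checkpoint (downloaded on first run) to generate a single conversational reply; the sampling settings are illustrative.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
    model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

    # Encode one user turn, terminated by the end-of-sequence token.
    prompt = "Does money buy happiness?" + tokenizer.eos_token
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample a response; do_sample keeps replies varied between runs.
    output_ids = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,
    )

    # Decode only the newly generated tokens (the model's reply).
    reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                             skip_special_tokens=True)
    print(reply)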

Jun-2019: Amazon Connect introduced AI-Powered Speech Analytics, a solution that provides customer insights in real time. It helps agents and supervisors better understand and respond to customer needs so they can resolve customer issues and improve the overall customer experience. The solution includes pre-trained AWS artificial intelligence (AI) services that enable customers to transcribe, translate, and analyze each customer interaction in Amazon Connect and present this information to assist contact center agents during their conversations.

May-2019: Salesforce released Einstein Analytics for Financial Services, a customizable analytics solution. The solution delivers AI-augmented business intelligence for wealth advisors, retail bankers, and managers. Einstein Analytics for Financial Services includes actionable insights powered by AI, built-in industry dashboards, a customizable platform to analyze external data, and built-in compliance with industry regulations.

Scope of the Study

Market Segmentation:

By Technology

Machine Learning

Natural Language Processing

Others

By Deployment Mode

On-Premise

Cloud

By Application

Portfolio Optimization

Risk & Compliance

Conversational Platform

Process Automation

Data Analysis

Others

By End User

BFSI

Automotive

Healthcare

Retail & eCommerce

Energy & Utilities

Media & Entertainment

Others

By Geography

North America

o US

o Canada

o Mexico

o Rest of North America

Europe

o Germany

o UK

o France

o Russia

o Spain

o Italy

o Rest of Europe

Asia Pacific

o China

o Japan

o India

o South Korea

o Singapore

o Malaysia

o Rest of Asia Pacific

LAMEA

o Brazil

o Argentina

o UAE

o Saudi Arabia

o South Africa

o Nigeria

o Rest of LAMEA

Companies Profiled

IBM Corporation

Microsoft Corporation

Genpact Limited

Infosys Limited

Amazon.com, Inc.

BlackRock, Inc. (PNC Financial Services Group)

IPsoft, Inc.

Salesforce.com, Inc.

Lexalytics, Inc.

Narrative Science, Inc.

Originally posted here:

Global AI in Asset Management Market By Technology, By Deployment Mode, By Application, By End User, By Region, Industry Analysis and Forecast, 2020 -...

Posted in Ai

Banner Health is the first to bring AI to stroke care in Phoenix – AZ Big Media

Banner Health has partnered with Viz.ai to bring the first FDA-cleared computer-aided triage system to the Phoenix metro area. This new technology will help facilitate early access to the most advanced stroke care for Banner Health's patients across the state, including machine-learning rapid analysis of suspected large vessel occlusion (LVO) strokes, which account for approximately one in four acute ischemic strokes.

Having developed the first Joint Commission-certified Primary Stroke Center in Arizona, Banner University Medical Center Phoenix, part of the Banner Health network, continues its commitment to leveraging advanced innovations to improve access to optimal treatments for patients suffering an acute stroke. Viz.ai solutions will allow Banner Health to further enhance the power of its stroke care teams through rapid detection and notification of suspected LVO strokes. The technology also allows Banner's stroke specialists to communicate securely to synchronize care and determine the optimal patient treatment decision, potentially saving critical minutes, even hours, in the triage, diagnosis, and treatment of strokes.

"Treating a patient suffering from a stroke requires quick and decisive action. Just 15 minutes can make a difference in saving someone's life," said Jeremy Payne, MD, PhD, director of the Stroke Center at Banner University Medicine Neuroscience Institute. "Viz.ai's solutions will truly transform the way that we deliver stroke care to our community, which we believe will result in improved outcomes for our patients."

This applied artificial intelligence-based technology is being deployed throughout the Banner Health network, including at Banner University Medical Center Phoenix, Banner Del E. Webb Medical Center in Sun City West, and Banner Desert Medical Center in Mesa. Within the next few months, it is expected to be used as an early-warning system for strokes throughout the entire network of Banner hospitals in Arizona.

"With this technology our stroke specialists can be automatically notified of potential large strokes within minutes of imaging completion, and the computerized platform often recognizes the stroke before the patient has left the CT scanner," Payne added. "We can immediately access the specialized imaging results on our phones, and then communicate with the Emergency Department physician in a matter of minutes. This dramatically accelerates our ability to initiate treatment."

Combining groundbreaking applied artificial intelligence with seamless communication, Viz.ai's image analysis facilitates the fast and accurate triage of suspected LVOs in stroke patients and better collaboration between clinicians at comprehensive and referral hospitals. Viz.ai synchronizes care across the whole care team, enabling a new era of synchronized care, where the right patient gets to the right doctor at the right time.
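As a hypothetical sketch of the alerting flow described above: when a CT series finishes, an automated service scores it for suspected LVO and, above a threshold, notifies the on-call stroke specialist. Viz.ai's actual pipeline, model, and thresholds are not public, so every name and number below is invented.

    import time

    LVO_ALERT_THRESHOLD = 0.85  # hypothetical operating point

    def score_lvo(ct_series):
        # Stand-in for the cleared detection model; returns a probability.
        return ct_series["model_score"]

    def notify_specialist(ct_series, score):
        # Real deployments push the imaging itself to the specialist's phone
        # through a secure app; here we just log the alert.
        print(f"{time.strftime('%H:%M:%S')} ALERT: suspected LVO "
              f"(p={score:.2f}) for study {ct_series['study_id']}")

    def on_ct_complete(ct_series):
        score = score_lvo(ct_series)
        if score >= LVO_ALERT_THRESHOLD:
            notify_specialist(ct_series, score)

    on_ct_complete({"study_id": "CT-20931", "model_score": 0.93})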

"We are excited to bring our technology to Banner Health," said Dr. Chris Mansi, co-founder and CEO of Viz.ai. "The exceptional care provided by the Banner Health stroke network will be enhanced by our cutting-edge applied artificial intelligence platform, which will enable faster coordination of care for the sickest patients and improve access to life-saving therapy for the community they serve."

To learn more about Banner Health's stroke program, visit bannerhealth.com/stroke.

See the article here:

Banner Health is the first to bring AI to stroke care in Phoenix - AZ Big Media

Posted in Ai

Artificial Intelligence What it is and why it matters | SAS

The term "artificial intelligence" was coined in 1956, but AI has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.

Early AI research in the 1950s explored topics like problem solving and symbolic methods. In the 1960s, the US Department of Defense took interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) completed street mapping projects in the 1970s. And DARPA produced intelligent personal assistants in 2003, long before Siri, Alexa or Cortana were household names.

This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and smart search systems that can be designed to complement and augment human abilities.

While Hollywood movies and science fiction novels depict AI as human-like robots that take over the world, the current evolution of AI technologies isn't that scary, or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry. Keep reading for modern examples of artificial intelligence in health care, retail and more.

Why is artificial intelligence important?

See the article here:

Artificial Intelligence What it is and why it matters | SAS

Posted in Ai

What Is Artificial Intelligence (AI)? | PCMag

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."

McCarthy called this new field of study "artificial intelligence," and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could "use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, more than six decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans.

But true artificial intelligence, as McCarthy conceived it, continues to elude us.

A great challenge with artificial intelligence is that it's a broad term, and there's no clear agreement on its definition.

As mentioned, McCarthy proposed AI would solve problems the way humans do: "The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans," McCarthy said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: "Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence."

But our understanding of "human intelligence" and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as "aspirational, a moving target based on those capabilities that humans possess but which machines do not." In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, and processing voice commands.

The first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence.

But several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI) or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. But it soon became evident that contrary to early perceptions, human-level intelligence was not right around the corner, and scientists were hard-pressed to reproduce the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the "AI winter," a long period during which public interest and funding in AI dampened.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, narrow or weak AI doesn't aim to reproduce the functionality of the human brain, and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that "full artificial intelligence could spell the end of the human race."

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. (Musk has since departed.)

Others believe that artificial general intelligence is a pointless goal. "We don't need to duplicate humans. That's why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own," says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, allowing them to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies' networks.

Early AI-creation efforts were focused on transforming human knowledge and intelligence into static rules. Programmers had to meticulously write code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as "good old-fashioned artificial intelligence" (GOFAI), is that humans have full control over the design and behavior of the system they develop.

Rule-based AI is still very popular in fields where the rules are clear-cut. One example is video games, in which developers want AI to deliver a predictable user experience.
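To make the if-then idea concrete, here is a minimal, purely illustrative sketch in Python. The video-game guard scenario, the thresholds, and the actions are all invented for illustration, not taken from any real game engine:

```python
# A minimal sketch of rule-based ("GOFAI") behavior: every decision is an
# explicit, hand-written if-then rule. The guard scenario is hypothetical.

def guard_behavior(distance_to_player: float, player_is_armed: bool) -> str:
    """Return an action for a video-game guard based on hand-coded rules."""
    if distance_to_player < 2.0:
        return "attack" if player_is_armed else "arrest"
    if distance_to_player < 10.0:
        return "chase"
    return "patrol"

print(guard_behavior(1.5, player_is_armed=True))   # attack
print(guard_behavior(5.0, player_is_armed=False))  # chase
```

Because every rule is spelled out by hand, the designer retains full control over the system's behavior, which is exactly the property that makes this approach attractive for games.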

The problem with GOFAI is that contrary to McCarthy's initial premise, we can't precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images (a complex feat that humans accomplish instinctively) is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers "train" their models by providing them with a massive amount of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model can train on large volumes of historical sales data for a company and then make sales forecasts.
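As a purely illustrative sketch of that train-then-forecast pattern, the snippet below fits a model to synthetic monthly sales figures and predicts the next month. The data is invented, and a real forecasting model would be far more involved:

```python
# A toy sketch of learning from examples: fit a model to (synthetic)
# historical sales, then forecast the next month. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(24).reshape(-1, 1)   # months 0..23 as the input feature
sales = 100 + 5 * months.ravel() + np.random.default_rng(0).normal(0, 8, 24)

model = LinearRegression().fit(months, sales)   # learn the trend from data
forecast = model.predict([[24]])                # predict month 24
print(f"Forecast for next month: {forecast[0]:.1f} units")
```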

Deep learning, a subset of machine learning, has become very popular in the past few years. It's especially good at processing unstructured data such as images, video, audio, and text documents. For instance, you can create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, such as medicine, self-driving cars, and education.

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them complex and opaque. Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate. To be clear, AI still has a long way to go before it masters human language, but so far, advances are spectacular.

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won't replace your doctor, but it could help to bring about better health services, especially in underprivileged areas, where AI-powered health assistants can take some of the load off the shoulders of the few general practitioners who have to serve large populations.

In our quest to crack the code of AI and create thinking machines, we've learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There's also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don't need to work at all.

There's also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and eventually drive humans into slavery.

We still don't know which direction AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow's computers.

What is AI? Everything you need to know about Artificial …

It depends who you ask.

Back in the 1950s, the fathers of the field, Minsky and McCarthy, described artificial intelligence as any task performed by a program or a machine that, if a human carried out the same activity, we would say the human had to apply intelligence to accomplish the task.

That obviously is a fairly broad definition, which is why you will sometimes see arguments over whether something is truly AI or not.

AI systems will typically demonstrate at least some of the following behaviours associated with human intelligence: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

AI is ubiquitous today, used to recommend what you should buy next online, to understand what you say to virtual assistants such as Amazon's Alexa and Apple's Siri, to recognise who and what is in a photo, to spot spam, or detect credit card fraud.

At a very high level artificial intelligence can be split into two broad types: narrow AI and general AI.

Narrow AI is what we see all around us in computers today: intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so.

This type of machine intelligence is evident in the speech and language recognition of the Siri virtual assistant on the Apple iPhone, in the vision-recognition systems on self-driving cars, in the recommendation engines that suggest products you might like based on what you bought in the past. Unlike humans, these systems can only learn or be taught how to do specific tasks, which is why they are called narrow AI.

There are a vast number of emerging applications for narrow AI: interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines, organizing personal and business calendars, responding to simple customer-service queries, co-ordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location, helping radiologists to spot potential tumors in X-rays, flagging inappropriate content online, detecting wear and tear in elevators from data gathered by IoT devices, the list goes on and on.

Artificial general intelligence is very different, and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience. This is the sort of AI more commonly seen in movies, the likes of HAL in 2001 or Skynet in The Terminator, but which doesn't exist today and AI experts are fiercely divided over how soon it will become a reality.

A survey conducted among four groups of experts in 2012/13 by AI researcher Vincent C. Müller and philosopher Nick Bostrom reported a 50 percent chance that artificial general intelligence (AGI) would be developed between 2040 and 2050, rising to 90 percent by 2075. The group went even further, predicting that so-called 'superintelligence' -- which Bostrom defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" -- was expected some 30 years after the achievement of AGI.

That said, some AI experts believe such projections are wildly optimistic given our limited understanding of the human brain, and believe that AGI is still centuries away.

There is a broad body of research in AI, much of which feeds into and complements each other.

Currently enjoying something of a resurgence, machine learning is where a computer system is fed large amounts of data, which it then uses to learn how to carry out a specific task, such as understanding speech or captioning a photograph.

Key to the process of machine learning are neural networks. These are brain-inspired networks of interconnected layers of algorithms, called neurons, that feed data into each other, and which can be trained to carry out specific tasks by modifying the importance attributed to input data as it passes between the layers. During training of these neural networks, the weights attached to different inputs will continue to be varied until the output from the neural network is very close to what is desired, at which point the network will have 'learned' how to carry out a particular task.
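As a minimal NumPy sketch of that idea, the snippet below trains a tiny two-layer network whose weights are repeatedly nudged until its output is close to the desired targets (here, the classic XOR function). The network size, learning rate, and iteration count are arbitrary choices for illustration:

```python
# A toy training loop: forward pass, measure the error, then adjust the
# weights between layers until the output is close to what is desired.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 8))   # weights: input layer -> hidden layer
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # weights: hidden layer -> output layer
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    hidden = sigmoid(X @ W1 + b1)      # data flows through layer 1
    output = sigmoid(hidden @ W2 + b2) # ...and then through layer 2
    error = output - y                 # how far from the desired output?
    # Backward pass: estimate how much each weight contributed to the error
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hid
    b1 -= 0.5 * grad_hid.sum(axis=0)

# Should approach [0, 1, 1, 0]; exact values depend on the random seed.
print(output.round(3).ravel())
```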

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fuelled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently refining a more effective form of deep neural network called long short-term memory (LSTM), allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The structure and training of deep neural networks.

Another area of AI research is evolutionary computation, which borrows from Darwin's theory of natural selection, and sees genetic algorithms undergo random mutations and combinations between generations in an attempt to evolve the optimal solution to a given problem.

This approach has even been used to help design AI models, effectively using AI to help build AI. This use of evolutionary algorithms to optimize neural networks is called neuroevolution, and could have an important role to play in helping design efficient AI as the use of intelligent systems becomes more prevalent, particularly as demand for data scientists often outstrips supply. The technique was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.
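As a toy illustration of the generational loop described above, here is a minimal genetic algorithm. The task (maximising the number of 1s in a bitstring) is a standard textbook example, not anything from the Uber papers:

```python
# A toy genetic algorithm: candidates are selected, recombined, and
# randomly mutated each generation, and the fittest survive.
import random

random.seed(0)
GENES, POP, GENERATIONS = 20, 30, 40

def fitness(bits):
    return sum(bits)   # count the 1s; higher is fitter

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP // 2]             # selection
    children = []
    while len(children) < POP - len(survivors):
        mom, dad = random.sample(survivors, 2)
        cut = random.randrange(1, GENES)
        child = mom[:cut] + dad[cut:]              # crossover
        if random.random() < 0.2:                  # occasional mutation
            i = random.randrange(GENES)
            child[i] ^= 1
        children.append(child)
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))  # near 20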

Finally, there are expert systems, where computers are programmed with rules that allow them to take a series of decisions based on a large number of inputs, allowing that machine to mimic the behaviour of a human expert in a specific domain. An example of these knowledge-based systems is an autopilot system flying a plane.

The biggest breakthroughs for AI research in recent years have been in the field of machine learning, in particular within the field of deep learning.

This has been driven in part by the easy availability of data, but even more so by an explosion in parallel computing power in recent years, during which time the use of GPU clusters to train machine-learning systems has become more prevalent.

Not only do these clusters offer vastly more powerful systems for training machine-learning models, but they are now widely available as cloud services over the internet. Over time the major tech firms, the likes of Google and Microsoft, have moved to using specialised chips tailored to both running, and more recently training, machine-learning models.

An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which useful machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train up models for DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine-learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end graphics processing units (GPUs).

As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.

Supervised learning

A common technique for teaching AI systems is by training them using a very large number of labelled examples. These machine-learning systems are fed huge amounts of data, which has been annotated to highlight the features of interest. These might be photos labelled to indicate whether they contain a dog, or written sentences that have footnotes to indicate whether the word 'bass' relates to music or a fish. Once trained, the system can then apply these labels to new data, for example to a dog in a photo that's just been uploaded.

This process of teaching a machine by example is called supervised learning, and the role of labelling these examples is commonly carried out by online workers, employed through platforms like Amazon Mechanical Turk.
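A minimal sketch of the 'bass' example using scikit-learn follows; the handful of invented, hand-labelled sentences below stand in for the millions of examples a production system would need:

```python
# A tiny supervised-learning sketch: hand-labelled sentences teach a
# classifier whether "bass" means music or fish. Data is invented and
# far too small for real use.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "the bass line drives the whole song",
    "he tuned the bass before the gig",
    "we caught a huge bass in the lake",
    "bass fishing season opens in spring",
]
labels = ["music", "music", "fish", "fish"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)   # learn from the labelled examples
print(model.predict(["he tuned the bass before the song"]))  # ['music']
```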

Training these systems typically requires vast amounts of data, with some systems needing to scour millions of examples to learn how to carry out a task effectively -- although this is increasingly possible in an age of big data and widespread data mining. Training datasets are huge and growing in size -- Google's Open Images Dataset has about nine million images, while its labelled video repository YouTube-8M links to seven million labelled videos. ImageNet, one of the early databases of this kind, has more than 14 million categorized images. Compiled over two years, it was put together by nearly 50,000 people -- most of whom were recruited through Amazon Mechanical Turk -- who checked, sorted, and labeled almost one billion candidate pictures.

In the long run, having access to huge labelled datasets may also prove less important than access to large amounts of compute power.

In recent years, Generative Adversarial Networks (GANs) have shown how machine-learning systems that are fed a small amount of labelled data can then generate huge amounts of fresh data to teach themselves.

This approach could lead to the rise of semi-supervised learning, where systems can learn how to carry out tasks using a far smaller amount of labelled data than is necessary for training systems using supervised learning today.

Unsupervised learning

Unsupervised learning, in contrast, uses algorithms that try to identify patterns in data, looking for similarities that can be used to categorise that data.

An example might be clustering together fruits that weigh a similar amount or cars with a similar engine size.

The algorithm isn't set up in advance to pick out specific types of data; it simply looks for data that can be grouped by similarity, for example Google News grouping together stories on similar topics each day.
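A toy sketch of that clustering idea, grouping invented fruit weights with k-means: no labels are provided, only the number of groups to look for.

```python
# An unsupervised sketch: the algorithm groups items by similarity in
# weight, with no labels given. The weights are invented.
import numpy as np
from sklearn.cluster import KMeans

weights_g = np.array([[120], [130], [118], [1150], [1200], [1180]])  # grams
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(weights_g)
print(clusters)   # e.g. [0 0 0 1 1 1]: light fruit vs heavy fruit
```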

Reinforcement learning

A crude analogy for reinforcement learning is rewarding a pet with a treat when it performs a trick.

In reinforcement learning, the system attempts to maximise a reward based on its input data, basically going through a process of trial and error until it arrives at the best possible outcome.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has been used to best human performance in a variety of classic video games. The system is fed pixels from each game and determines various information, such as the distance between objects on screen.

By also looking at the score achieved in each game, the system builds a model of which action will maximise the score in different circumstances: for instance, in the case of the video game Breakout, it learns where the paddle should be moved in order to intercept the ball.
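A heavily simplified sketch of that trial-and-error loop follows, using tabular Q-learning on an invented five-state corridor rather than raw game pixels:

```python
# A toy reinforcement-learning loop: the agent learns, state by state,
# which action leads to the eventual reward. The corridor is invented.
import random

random.seed(1)
N_STATES = 5                 # reward sits at the right end, state 4
ACTIONS = [0, 1]             # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        if random.random() < epsilon:
            action = random.choice(ACTIONS)                  # explore
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a]) # exploit
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward plus discounted future value
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

print([round(max(q), 2) for q in Q])  # values rise toward the goal state
```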

Many AI-related technologies are approaching, or have already reached, the 'peak of inflated expectations' in Gartner's Hype Cycle, with the backlash-driven 'trough of disillusionment' lying in wait.

With AI playing an increasingly major role in modern software and services, each of the major tech firms is battling to develop robust machine-learning technology for use in-house and to sell to the public via cloud services.

Each regularly makes headlines for breaking new ground in AI research, although it is probably Google, with its DeepMind AlphaGo AI, that has made the biggest impact on public awareness of AI.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to GPU arrays for training and running machine learning models, with Google also gearing up to let users use its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

All of the necessary associated infrastructure and services are available from the big three: cloud-based data stores capable of holding the vast amounts of data needed to train machine-learning models, services to transform data to prepare it for analysis, visualisation tools to display the results clearly, and software that simplifies the building of models.

These cloud platforms are even simplifying the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires no machine-learning expertise from the user.

Cloud-based, machine-learning services are constantly evolving, and at the start of 2018, Amazon revealed a host of new AWS offerings designed to streamline the process of training up machine-learning models.

For those firms that don't want to build their own machine-learning models but instead want to consume AI-powered, on-demand services -- such as voice, vision, and language recognition -- Microsoft Azure stands out for the breadth of services on offer, closely followed by Google Cloud Platform and then AWS. Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella -- and recently investing $2bn in buying The Weather Company to unlock a trove of data to augment its AI services.

Internally, each of the tech giants -- and others such as Facebook -- use AI to help drive myriad public services: serving search results, offering recommendations, recognizing people and things in photos, on-demand translation, spotting spam -- the list is extensive.

But one of the most visible manifestations of this AI war has been the rise of virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft Cortana.

The Amazon Echo Plus is a smart speaker with access to Amazon's Alexa virtual assistant built in.

A huge amount of tech goes into developing these assistants, which rely heavily on voice recognition and natural-language processing, as well as an immense corpus to draw upon when answering queries.

But while Apple's Siri may have come to prominence first, it is Google and Amazon whose assistants have since overtaken Apple in the AI space -- Google Assistant with its ability to answer a wide range of queries and Amazon's Alexa with the massive number of 'Skills' that third-party devs have created to add to its capabilities.

Despite being built into Windows 10, Cortana has had a particularly rough time of late, with the suggestion that major PC makers will build Alexa into laptops adding to speculation about whether Cortana's days are numbered, although Microsoft was quick to reject this.

It'd be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo are investing heavily in AI in fields ranging from ecommerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core national industry, one that will be worth 150 billion yuan ($22bn) by 2020.

Baidu has invested in developing self-driving cars, powered by its deep learning algorithm, Baidu AutoBrain, and, following several years of tests, plans to roll out fully autonomous vehicles in 2018 and mass-produce them by 2021.

Baidu's self-driving car, a modified BMW 3 series.

Baidu has also partnered with Nvidia to use AI to create a cloud-to-car autonomous car platform for auto manufacturers around the world.

The combination of weak privacy laws, huge investment, concerted data-gathering, and big data analytics by major firms like Baidu, Alibaba, and Tencent, means that some analysts believe China will have an advantage over the US when it comes to future AI research, with one analyst describing the chances of China taking the lead over the US as 500 to one in China's favor.

While you could try to build your own GPU array at home and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.

All of the major tech firms offer various AI services, from the infrastructure to build and train your own machine-learning models through to web services that allow you to access AI-powered tools such as speech, language, vision and sentiment recognition on demand.

There are too many to put together a comprehensive list, but some recent highlights include: in 2009, Google showed it was possible for its self-driving Toyota Prius to complete more than 10 journeys of 100 miles each -- setting society on a path towards driverless vehicles.

In 2011, the computer system IBM Watson made headlines worldwide when it won the US quiz show Jeopardy!, beating two of the best players the show had ever produced. To win the show, Watson used natural language processing and analytics on vast repositories of data that it processed to answer human-posed questions, often in a fraction of a second.

IBM Watson competes on Jeopardy! on January 14, 2011.

In June 2012, it became apparent just how good machine-learning systems were getting at computer vision, with Google training a system to recognise an internet favorite, pictures of cats.

Since Watson's win, perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, an ancient Chinese game whose complexity stumped computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational point of view. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training these deep learning networks can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

And AI continues to sprint past new milestones: last year, a system trained by OpenAI defeated the world's top players in one-on-one matches of the online multiplayer game Dota 2.

That same year, OpenAI created AI agents that invented their own language to cooperate and achieve their goal more effectively, shortly followed by Facebook training agents to negotiate and even lie.

Robots and driverless cars

The desire for robots to be able to act autonomously and understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, use of AI is helping robots move into new areas such as self-driving cars, delivery robots, as well as helping robots to learn new skills. General Motors recently said it would build a driverless car without a steering wheel or pedals by 2019, while Ford committed to doing so by 2021, and Waymo, the self-driving group inside Google parent Alphabet, will soon offer a driverless taxi service in Phoenix.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's image, with tools already being created to convincingly splice famous actresses into adult films.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95 percent. Recently Microsoft's Artificial Intelligence and Research group reported it had developed a system able to transcribe spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99 percent accuracy, expect speaking to computers to become the norm alongside more traditional forms of human-machine interaction.
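Transcription accuracy of this kind is usually reported via word error rate: the edit distance between the system's output and a reference transcript, divided by the length of the reference. Here is a minimal sketch with invented sentences:

```python
# A minimal word-error-rate (WER) calculation using the standard
# dynamic-programming edit distance. The example sentences are invented.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

# One wrong word out of five = 20% WER, i.e. 80% word accuracy.
print(wer("switch on the kitchen lights", "switch on the kitchen light"))
```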

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99 percent accuracy, providing the face is clear enough on the video. While police forces in western countries have generally only trialled using facial-recognition systems at large events, in China the authorities are mounting a nationwide program to connect CCTV across the country to facial recognition and to use AI systems to track suspects and suspicious behavior, and are also trialling the use of facial-recognition glasses by police.

Although privacy regulations vary across the world, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread elsewhere.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which is trained by oncologists at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where it will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Again, it depends who you ask. As AI-powered systems have grown more capable, so warnings of the downsides have become more dire.

Tesla and SpaceX CEO Elon Musk has claimed that AI is a "fundamental risk to the existence of human civilization". As part of his push for stronger regulatory oversight and more responsible research into mitigating the downsides of AI he set up OpenAI, a non-profit artificial intelligence research company that aims to promote and develop friendly AI that will benefit society as a whole. Similarly, the esteemed physicist Stephen Hawking has warned that once a sufficiently advanced AI is created it will rapidly advance to the point at which it vastly outstrips human capabilities, a phenomenon known as the singularity, and could pose an existential threat to the human race.

Yet the notion that humanity is on the verge of an AI explosion that will dwarf our intellect seems ludicrous to some AI researchers.

Chris Bishop, Microsoft's director of research in Cambridge, England, stresses how different the narrow intelligence of AI today is from the general intelligence of humans, saying that when people worry about "Terminator and the rise of the machines and so on? Utter nonsense, yes. At best, such discussions are decades away."

The possibility of artificially intelligent systems replacing much of modern manual labour is perhaps a more credible near-future possibility.

While AI won't replace all jobs, what seems to be certain is that AI will change the nature of work, with the only question being how rapidly and how profoundly automation will alter the workplace.

There is barely a field of human endeavour that AI doesn't have the potential to impact. As AI expert Andrew Ng puts it: "many people are doing routine, repetitive jobs. Unfortunately, technology is especially good at automating routine, repetitive work", saying he sees a "significant risk of technological unemployment over the next few decades".

The evidence of which jobs will be supplanted is starting to emerge. Amazon has just launched Amazon Go, a cashier-free supermarket in Seattle where customers just take items from the shelves and walk out. What this means for the more than three million people in the US who work as cashiers remains to be seen. Amazon again is leading the way in using robots to improve efficiency inside its warehouses. These robots carry shelves of products to human pickers who select items to be sent out. Amazon has more than 100,000 bots in its fulfilment centers, with plans to add many more. But Amazon also stresses that as the number of bots has grown, so has the number of human workers in these warehouses. However, Amazon and small robotics firms are working to automate the remaining manual jobs in the warehouse, so it's not a given that manual and robotic labor will continue to grow hand-in-hand.

Amazon bought Kiva Systems in 2012 and today uses Kiva robots throughout its warehouses.

How AI will automate cybersecurity in the post-COVID world – VentureBeat

By now, it is obvious to everyone that widespread remote working is accelerating the trend of digitization in society that has been happening for decades.

What takes longer for most people to identify are the derivative trends. One such trend is that increased reliance on online applications means that cybercrime is becoming even more lucrative. For many years now, online theft has vastly outstripped physical bank robberies. Willie Sutton said he robbed banks because "that's where the money is." If he applied that maxim even 10 years ago, he would definitely have become a cybercriminal, targeting the websites of banks, federal agencies, airlines, and retailers. According to the 2020 Verizon Data Breach Investigations Report, 86% of all data breaches were financially motivated. Today, with so much of society's operations being online, cybercrime is the most common type of crime.

Unfortunately, society isn't evolving as quickly as cybercriminals are. Most people think they are only at risk of being targeted if there is something special about them. This couldn't be further from the truth: cybercriminals today target everyone. What are people missing? Simply put: the scale of cybercrime is difficult to fathom. The Herjavec Group estimates cybercrime will cost the world over $6 trillion annually by 2021, up from $3 trillion in 2015, but numbers that large can be a bit abstract.

A better way to understand the issue is this: In the future, nearly every piece of technology we use will be under constant attack, and this is already the case for every major website and mobile app we rely on.

Understanding this requires a Matrix-like radical shift in our thinking. It requires us to embrace the physics of the virtual world, which break the laws of the physical world. For example, in the physical world, it is simply not possible to try to rob every house in a city on the same day. In the virtual world, it's not only possible, it's being attempted on every house in the entire country. I'm not referring to a diffuse threat of cybercriminals always plotting the next big hacks. I'm describing constant activity that we see on every major website: the largest banks and retailers receive millions of attacks on their users' accounts every day. Just as Google can crawl most of the web in a few days, cybercriminals attack nearly every website on the planet in that time.

The most common type of web attack today is called credential stuffing. This is when cybercriminals take stolen passwords from data breaches and use tools to automatically log in to every matching account on other websites to take over those accounts and steal the funds or data inside them. These account takeover (ATO) events are possible because people frequently reuse their passwords across websites. The spate of gigantic data breaches in the last decade has been a boon for cybercriminals, reducing cybercrime success to a matter of reliable probability: in rough terms, if you can steal 100 users' passwords, on any given website where you try them, one will unlock someone's account. And data breaches have given cybercriminals billions of users' passwords.

Source: Attacks Against Financial Services, via F5 Security Incident Response Team, 2017-2019

What's going on here is that cybercrime is a business, and growing a business is all about scale and efficiency. Credential stuffing is only a viable attack because of the large-scale automation that technology makes possible.

This is where artificial intelligence comes in.

At a basic level, AI uses data to make predictions and then automates actions. This automation can be used for good or evil. Cybercriminals take AI designed for legitimate purposes and use it for illegal schemes. Consider one of the most common defenses attempted against credential stuffing: CAPTCHA. Invented a couple of decades ago, CAPTCHA tries to protect against unwanted bots by presenting a challenge (e.g., reading distorted text) that humans should find easy and bots should find difficult. Unfortunately, cybercriminal use of AI has inverted this. Google did a study a few years ago and found that machine-learning-based optical character recognition (OCR) technology could solve 99.8% of CAPTCHA challenges. This OCR, as well as other CAPTCHA-solving technology, is weaponized by cybercriminals, who include it in their credential-stuffing tools.

Cybercriminals can use AI in other ways too. AI technology has already been created to make cracking passwords faster, and machine learning can be used to identify good targets for attack, as well as to optimize cybercriminal supply chains and infrastructure. We see incredibly fast response times from cybercriminals, who can shut off and restart attacks with millions of transactions in a matter of minutes. They do this with a fully automated attack infrastructure, using the same DevOps techniques that are popular in the legitimate business world. This is no surprise, since running such a criminal system is similar to operating a major commercial website, and cybercrime-as-a-service is now a common business model. AI will be further infused throughout these applications over time to help them achieve greater scale and to make them harder to defend against.

So how can we protect against such automated attacks? The only viable answer is automated defenses on the other side. Here's what that evolution will look like as a progression:

Right now, the long tail of organizations is at level 1, but sophisticated organizations are typically somewhere between levels 3 and 4. In the future, most organizations will need to be at level 5. Getting there successfully across the industry requires companies to evolve past old thinking. Companies with the "war for talent" mindset of hiring huge security teams have started pivoting to also hire data scientists to build their own AI defenses. This might be a temporary phenomenon: while corporate anti-fraud teams have been using machine learning for more than a decade, the traditional information security industry has only flipped in the past five years from curmudgeonly cynicism about AI to excitement, so they might be over-correcting.

But hiring a large AI team is unlikely to be the right answer, just as you wouldn't hire a team of cryptographers. Such approaches will never reach the efficacy, scale, and reliability required to defend against constantly evolving cybercriminal attacks. Instead, the best answer is to insist that the security products you use integrate with your organizational data to be able to do more with AI. Then you can hold vendors accountable for false positives and false negatives, and the other challenges of getting value from AI. After all, AI is not a silver bullet, and it's not sufficient to simply be using AI for defense; it has to be effective.

The best way to hold vendors accountable for efficacy is by judging them based on ROI. One of the beneficial side effects of cybersecurity becoming more of an analytics and automation problem is that the performance of all parties can be more granularly measured. When defensive AI systems create false positives, customer complaints rise. When there are false negatives, ATOs increase. And there are many other intermediate metrics companies can track as cybercriminals iterate with their own AI-based tactics.
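As a minimal sketch of that kind of measurement, here is how false-positive and false-negative rates might be computed by comparing a defensive system's verdicts against later-confirmed outcomes. The data below is invented for illustration:

```python
# A toy measurement of defensive accuracy: compare what the system
# flagged against what was later confirmed malicious. Data is invented.
flagged =   [True, True, False, False, True, False, False, True]
malicious = [True, False, False, False, True, True, False, True]

fp = sum(f and not m for f, m in zip(flagged, malicious))  # legit users blocked
fn = sum(m and not f for f, m in zip(flagged, malicious))  # attacks missed
legit = sum(not m for m in malicious)
attacks = sum(malicious)

print(f"false-positive rate: {fp / legit:.0%}")   # drives customer complaints
print(f"false-negative rate: {fn / attacks:.0%}") # drives account takeovers
```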

If you're surprised that the post-COVID internet sounds like it's going to be a Terminator-style battle of good AI vs. evil AI, I have good news and bad news. The bad news is, we're already there to a large extent. For example, among major retail sites today, around 90% of login attempts typically come from cybercriminal tools.

But maybe that's the good news, too, since the world obviously hasn't fallen apart yet. This is because the industry is moving in the right direction, learning quickly, and many organizations already have effective AI-based defenses in place. But more work is required in terms of technology development, industry education, and practice. And we shouldn't forget that sheltering in place has given cybercriminals more time in front of their computers too.

Shuman Ghosemajumder is Global Head of AI at F5. He was previously CTO of Shape Security, which was acquired by F5 in 2020, and was Global Head of Product for Trust & Safety at Google.

3 Predictions For The Role Of Artificial Intelligence In Art And Design – Forbes

Christie's made headlines in 2018 when it became the first auction house to sell a painting created by AI. The painting, named "Portrait of Edmond de Belamy," ended up selling for a cool $432,500, but more importantly, it demonstrated how intelligent machines are now perfectly capable of creating artwork.

It was only a matter of time, I suppose. Thanks to AI, machines have been able to learn more and more human functions, including the ability to see (think facial recognition technology), speak, and write (chatbots being a prime example). Learning to create is a logical step on from mastering these basic human abilities. But will intelligent machines really rival humans' remarkable capacity for creativity and design? To answer that question, here are my top three predictions for the role of AI in art and design.

1. Machines will be used to enhance human creativity (enhance being the key word)

Until we can fully understand the brain's creative thought processes, it's unlikely machines will learn to replicate them. As yet, there's still much we don't understand about human creativity: those inspired ideas that pop into our brain seemingly out of nowhere; the "eureka!" moments of clarity that stop us in our tracks. Much of that thought process remains a mystery, which makes it difficult to replicate the same creative spark in machines.

Typically, then, machines have to be told what to create before they can produce the desired end result. The AI painting that sold at auction? It was created by an algorithm that had been trained on 15,000 pre-20th-century portraits and was programmed to compare its own work with those paintings.

The takeaway from this is that AI will largely be used to enhance human creativity, not replicate or replace it, a process known as "co-creativity." As an example of AI improving the creative process, IBM's Watson AI platform was used to create the first-ever AI-generated movie trailer, for the horror film Morgan. Watson analyzed visuals, sound, and composition from hundreds of other horror movie trailers before selecting appropriate scenes from Morgan for human editors to compile into a trailer. This reduced a process that usually takes weeks down to one day.

2. AI could help to overcome the limits of human creativity

Humans may excel at making sophisticated decisions and pulling ideas seemingly out of thin air, but human creativity does have its limitations. Most notably, we're not great at producing a vast number of possible options and ideas to choose from. In fact, as a species, we tend to get overwhelmed and less decisive the more options we're faced with! This is a problem for creativity because, as American chemist Linus Pauling (the only person to have won two unshared Nobel Prizes) put it, "You can't have good ideas unless you have lots of ideas." This is where AI can be of huge benefit.

Intelligent machines have no problem coming up with infinite possible solutions and permutations, and then narrowing the field down to the most suitable options, the ones that best fit the human creative's vision. In this way, machines could help us come up with new creative solutions that we couldn't possibly have come up with on our own.

For example, award-winning choreographer Wayne McGregor has collaborated with Google Arts & Culture Lab to come up with new, AI-driven choreography. An AI algorithm was trained on thousands of hours of McGregor's videos, spanning 25 years of his career, and as a result, the program came up with 400,000 McGregor-like sequences. In McGregor's words, the tool "gives you all of these new possibilities you couldn't have imagined."

3. Generative design is one area to watch

Much like in the creative arts, the world of design will likely shift towards greater collaboration between humans and AI. This brings us to generative design, a cutting-edge field that uses intelligent software to enhance the work of human designers and engineers.

Very simply, the human designer inputs their design goals, specifications, and other requirements, and the software takes over to explore all possible designs that meet those criteria. Generative design could be utterly transformative for many industries, including architecture, construction, engineering, manufacturing, and consumer product design.
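As a deliberately simplified sketch of that explore-then-filter loop: the "beam" model, the constraint, and the numbers below are all invented, and real generative-design software is vastly more sophisticated, but the shape of the process is the same.

```python
# A toy generative-design loop: the designer states goals and constraints;
# the software proposes many candidates and keeps the best feasible one.
import random

random.seed(7)

def generate_candidate():
    """Randomly propose a beam design: width, height, thickness (cm)."""
    return {k: random.uniform(1.0, 20.0) for k in ("width", "height", "thickness")}

def meets_requirements(c):
    # Designer's constraint: the beam must be stiff enough (invented formula).
    stiffness = c["width"] * c["height"] ** 2 * c["thickness"]
    return stiffness >= 5000

def material_used(c):
    return c["width"] * c["height"] * c["thickness"]  # proxy for material

candidates = [generate_candidate() for _ in range(10_000)]
feasible = [c for c in candidates if meets_requirements(c)]
best = min(feasible, key=material_used)               # least material wins
print({k: round(v, 1) for k, v in best.items()},
      "material:", round(material_used(best), 1))
```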

In one exciting example of generative design, renowned designer Philippe Starck collaborated with software company Autodesk to create a new chair design. Starck and his team set out the overarching vision for the chair and fed the AI system questions like, "Do you know how we can rest our bodies using the least amount of material?" From there, the software came up with multiple suitable designs to choose from. The final design an award-winning chair named "AI" debuted at Milan Design Week in 2019.

Machine co-creativity is just one of 25 technology trends that I believe will transform our society. Read more about these key trends, including plenty of real-world examples, in my new books, Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution and The Intelligence Revolution: Transforming Your Business With AI.

This know-it-all AI learns by reading the entire web nonstop – MIT Technology Review

This is a problem if we want AIs to be trustworthy. That's why Diffbot takes a different approach. It is building an AI that reads every page on the entire public web, in multiple languages, and extracts as many facts from those pages as it can.

Like GPT-3, Diffbot's system learns by vacuuming up vast amounts of human-written text found online. But instead of using that data to train a language model, Diffbot turns what it reads into a series of three-part factoids that relate one thing to another: subject, verb, object.

Pointed at my bio, for example, Diffbot learns that Will Douglas Heaven is a journalist; Will Douglas Heaven works at MIT Technology Review; MIT Technology Review is a media company; and so on. Each of these factoids gets joined up with billions of others in a sprawling, interconnected network of facts. This is known as a knowledge graph.
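A minimal sketch of how such subject-verb-object factoids might be stored and queried follows. The triples mirror the article's examples, but the query helper is hypothetical, not Diffbot's actual API:

```python
# A toy knowledge graph: facts stored as (subject, verb, object) triples
# and retrieved by pattern matching. The query helper is illustrative only.
triples = {
    ("Will Douglas Heaven", "is a", "journalist"),
    ("Will Douglas Heaven", "works at", "MIT Technology Review"),
    ("MIT Technology Review", "is a", "media company"),
}

def query(subject=None, verb=None, obj=None):
    """Return all triples matching the given (possibly partial) pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (verb is None or t[1] == verb)
            and (obj is None or t[2] == obj)]

print(query(subject="Will Douglas Heaven"))  # everything known about him
print(query(verb="is a"))                    # all type facts in the graph
```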

Knowledge graphs are not new. They have been around for decades, and were a fundamental concept in early AI research. But constructing and maintaining knowledge graphs has typically been done by hand, which is hard. This difficulty also stopped Tim Berners-Lee from realizing what he called the semantic web, which would have included information for machines as well as humans, so that bots could book our flights, do our shopping, or give smarter answers to questions than search engines.

A few years ago, Google started using knowledge graphs too. Search for Katy Perry and you will get a box next to the main search results telling you that Katy Perry is an American singer-songwriter with music available on YouTube, Spotify, and Deezer. You can see at a glance that she is married to Orlando Bloom, she's 35 and worth $125 million, and so on. Instead of giving you a list of links to pages about Katy Perry, Google gives you a set of facts about her drawn from its knowledge graph.

But Google only does this for its most popular search terms. Diffbot wants to do it for everything. By fully automating the construction process, Diffbot has been able to build what may be the largest knowledge graph ever.

Alongside Google and Microsoft, it is one of only three US companies that crawl the entire public web. "It definitely makes sense to crawl the web," says Victoria Lin, a research scientist at Salesforce who works on natural-language processing and knowledge representation. "A lot of human effort can otherwise go into making a large knowledge base." Heiko Paulheim at the University of Mannheim in Germany agrees: "Automation is the only way to build large-scale knowledge graphs."

To collect its facts, Diffbot's AI reads the web as a human would, but much faster. Using a supercharged version of the Chrome browser, the AI views the raw pixels of a web page and uses image-recognition algorithms to categorize the page as one of 20 different types, including video, image, article, event, and discussion thread. It then identifies key elements on the page, such as headline, author, product description, or price, and uses NLP to extract facts from any text.

Every three-part factoid gets added to the knowledge graph. Diffbot extracts facts from pages written in any language, which means that it can answer queries about Katy Perry, say, using facts taken from articles in Chinese or Arabic, even if they do not contain the term "Katy Perry."

Browsing the web like a human lets the AI see the same facts that we see. It also means it has had to learn to navigate the web like us. The AI must scroll down, switch between tabs, and click away pop-ups. "The AI has to play the web like a video game just to experience the pages," says Tung.

Diffbot crawls the web nonstop and rebuilds its knowledge graph every four to five days. According to Tung, the AI adds 100 million to 150 million entities each month as new people pop up online, companies are created, and products are launched. It uses more machine-learning algorithms to fuse new facts with old, creating new connections or overwriting out-of-date ones. Diffbot has to add new hardware to its data center as the knowledge graph grows.

Researchers can access Diffbot's knowledge graph for free. But Diffbot also has around 400 paying customers. The search engine DuckDuckGo uses it to generate its own Google-like boxes. Snapchat uses it to extract highlights from news pages. The popular wedding-planner app Zola uses it to help people make wedding lists, pulling in images and prices. NASDAQ, which provides information about the stock market, uses it for financial research.

Adidas and Nike even use it to search the web for counterfeit shoes. A search engine will return a long list of sites that mention Nike trainers. But Diffbot lets these companies look for sites that are actually selling their shoes, rather than just talking about them.

For now, these companies must interact with Diffbot using code. But Tung plans to add a natural-language interface. Ultimately, he wants to build what he calls a "universal factoid question answering system": an AI that could answer almost anything you asked it, with sources to back up its response.

Tung and Lin agree that this kind of AI cannot be built with language models alone. But better yet would be to combine the technologies, using a language model like GPT-3 to craft a human-like front end for a know-it-all bot.

Still, even an AI that has its facts straight is not necessarily smart. "We're not trying to define what intelligence is, or anything like that," says Tung. "We're just trying to build something useful."
