Amazon, Google, Salesforce And Leading Roboticists On The Golden Age Of AI – Forbes


The 11th annual MIT Tech Conference, a student-led event organized by the MIT Sloan Tech Club, had exponential technologies as its theme this year. Here's what I learned from the event's morning sessions which covered artificial intelligence ...

Read more here:

Amazon, Google, Salesforce And Leading Roboticists On The Golden Age Of AI - Forbes

Google to expand AI-powered flood forecasts in India for monsoon season – VentureBeat

Google is growing its flood prediction AI for India to cover more than 11,000 square kilometers along the Ganga and Brahmaputra rivers, the company announced today. Approximately 20% of global flood fatalities occur in India.

Monsoon rains have been above average in India for three consecutive weeks, Reuters reported Wednesday.

Since the flood forecasting initiative's trials first began in the Patna region about a year ago, 800,000 notifications have been sent to smartphone users. Notifications also go to a network of human volunteers with the nonprofit SEEDS, who spread emergency warnings to people without phones.

The expansion will be supported by new forecasting methodologies, including a recently developed approach to creating digital elevation models (DEMs) and an inundation model optimized to run on tensor processing units (TPUs), which supplies predictions 85 times faster than with CPUs alone.

"Correlating and aligning the images in large batches, we adjust their camera models (and simultaneously solve for a coarse terrain) to make the images mutually consistent. Then we create a depth map for each image. To make the elevation map, we optimally fuse the depth maps together at each location," Google senior software engineer Sella Nevo said in a blog post. "For additional efficiency improvements, we're also looking at using machine learning to replace some of the physics-based algorithms, extending data-driven discretization to two-dimensional hydraulic models, so we can support even larger grids and cover even more people."
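The fusion step Nevo describes, combining many per-image depth maps into a single elevation value at each location, can be illustrated with a short sketch. The snippet below is a simplified, hypothetical illustration in Python/NumPy, not Google's actual pipeline; the robust inlier-median fusion rule is an assumption on our part.

```python
import numpy as np

def fuse_depth_maps(depth_stack, max_spread=2.0):
    """Fuse a stack of per-image depth maps (n_images x H x W) into one elevation map.

    depth_stack uses NaN where an image has no coverage. For each grid cell we take
    a robust estimate: the median of values that lie close to the overall median,
    which discards outlier depths from bad image matches.
    """
    median = np.nanmedian(depth_stack, axis=0)            # coarse per-cell estimate
    spread = np.abs(depth_stack - median)                 # each image's deviation
    inliers = np.where(spread <= max_spread, depth_stack, np.nan)
    return np.nanmedian(inliers, axis=0)                  # refined estimate from inliers

# Toy example: three noisy 2x2 depth maps, one containing an outlier.
maps = np.array([
    [[10.0, 11.0], [12.0, 13.0]],
    [[10.2, 10.8], [12.1, 13.2]],
    [[10.1, 25.0], [11.9, 12.9]],   # 25.0 is a bad match / outlier
])
print(fuse_depth_maps(maps))
```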

In addition to using TPUs to improve the accuracy of flood predictions, Google has begun drawing on imagery from the European Space Agency's Sentinel-1 satellite constellation.

In January, Google said flood predictions had achieved 75% accuracy. "Model prediction levels have remained the same," a company spokesperson told VentureBeat in an email.

Flood prediction systems could become increasingly important as climate change worsens. AI models are being built to protect people and property from all kinds of natural disasters, with researchers this week publishing a paper on machine learning to predict large wildfires.

Go here to read the rest:

Google to expand AI-powered flood forecasts in India for monsoon season - VentureBeat

Misinformation research relies on AI and lots of scrolling – NPR


What sorts of lies and falsehoods are circulating on the internet? Taylor Agajanian used her summer job to help answer this question, one post at a time. It often gets squishy.

She reviewed a social media post where someone had shared a news story about vaccines with the comment "Hmmm, that's interesting." Was the person actually saying that the news story was interesting, or insinuating that the story isn't true?

Agajanian often read around and between the lines while working at the University of Washington's Center for an Informed Public, where she reviewed social media posts and recorded misleading claims about COVID-19 vaccines.

As the midterm election approaches, researchers and private sector firms are racing to track false claims about everything from ballot harvesting to voting machine conspiracies. But the field is still in its infancy even as the threats to the democratic process posed by viral lies loom. Getting a sense of which falsehoods people online talk about might sound like a straightforward exercise, but it isn't.

"The broader question is, can anyone ever know what everybody is saying?" says Welton Chang, CEO of Pyrra, a startup that tracks smaller social media platforms. (NPR has used Pyrra's data in several stories.)

Automating some of the steps the University of Washington team uses humans for, Pyrra uses artificial intelligence to extract names, places and topics from social media posts. Using the same technologies that have in recent years enabled AI to write remarkably like humans, the platform generates summaries of trending topics. An analyst reviews the summaries, weeds out irrelevant items like advertising campaigns, gives them a light edit and shares them with clients.
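As a rough sketch of the kind of pipeline described above, extracting names, places and topics and then summarizing them for an analyst, off-the-shelf open-source tools can do both steps. This is not Pyrra's system; it is an illustrative combination of spaCy (entity extraction) and a Hugging Face summarization pipeline over hypothetical post text.

```python
# Illustrative only: extract entities from posts, then draft a digest item.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")        # small English model for named entities
summarizer = pipeline("summarization")     # default abstractive summarization model

posts = [
    "Claims about energy infrastructure spread on a fringe forum today ...",
    # ... more scraped posts on the same trending topic ...
]

for post in posts:
    doc = nlp(post)
    entities = [(ent.text, ent.label_) for ent in doc.ents]   # names, places, orgs
    print("Entities:", entities)

# Summarize the batch for an analyst to review, edit and share with clients.
topic_text = " ".join(posts)
summary = summarizer(topic_text, max_length=60, min_length=15, do_sample=False)
print("Draft digest item:", summary[0]["summary_text"])
```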

A recent digest of such summaries includes the unsubstantiated claim "Energy infrastructure under globalist attack."

The University of Washington's and Pyrra's approaches sit at the more extreme ends of the spectrum in terms of automation: few teams have so many staff (around 15) just to monitor social media, or rely so heavily on algorithms to synthesize material into output.

All methods carry caveats. Manually monitoring and coding content can miss developments; and while capable of processing huge amounts of data, artificial intelligence struggles to handle nuances like satire and sarcasm.

Although incomplete, having a sense of what's circulating in the online discourse allows society to respond. Research into voting-related misinformation in 2020 has helped inform election officials and voting rights groups about what messages to emphasize this year.

For responses to be proportionate, society also needs to evaluate the impact of false narratives. Journalists have covered misinformation spreaders who seem to have very high total engagement numbers but limited impact, which risks "spreading further hysteria over the state of online operations," wrote Ben Nimmo, who now investigates global threats at Meta, Facebook's parent company.

While language can be ambiguous, it's more straightforward to track who's been following and retweeting whom. Other researchers analyze networks of actors as well as narratives.

The plethora of approaches is typical of a field that's just forming, says Jevin West, who studies the origins of academic disciplines at the University of Washington's Information School. Researchers come from different fields and bring the methods they're comfortable with to start, he says.

West corralled research papers from the academic database Semantic Scholar that mention 'misinformation' or 'disinformation' in their title or abstract, and found that many papers come from medicine, computer science and psychology, with others from geology, mathematics and art.

"If we're a qualitative researcher, we'll go...and literally code everything that we see," West says. More quantitative researchers do large-scale analysis like mapping topics on Twitter.

Projects often use a mix of methods. "If [different methods] start converging on similar kinds of...conclusions, then I think we'll feel a little bit better about it," West says.

One of the very first steps of misinformation research - before someone like Agajanian starts tagging posts - is identifying relevant content under a topic. Many researchers start their search with expressions they think people talking about the topic could use, see what other phrases and hashtags appear in the search results, add that to the query, and repeat the process.
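That iterative query expansion (search with seed terms, harvest co-occurring hashtags, add them, repeat) can be sketched as a small loop. Everything below is hypothetical: the tiny in-memory corpus stands in for a real platform search API.

```python
import re
from collections import Counter

# Tiny in-memory corpus standing in for a platform search API (made-up posts).
CORPUS = [
    "Worried about #ballotharvesting in my county",
    "More #ballotharvesting claims, now tied to #votingmachines",
    "The #votingmachines story keeps spreading",
]

def search_posts(term):
    """Stand-in for a real search API: return posts containing the term."""
    return [p for p in CORPUS if term.lower().strip("#") in p.lower()]

def expand_keywords(seed_terms, rounds=3, top_k=5):
    """Snowball a keyword list: search, harvest frequent co-occurring hashtags, repeat."""
    terms = set(t.lower() for t in seed_terms)
    for _ in range(rounds):
        posts = [p for t in terms for p in search_posts(t)]
        hashtags = Counter(tag.lower() for p in posts for tag in re.findall(r"#\w+", p))
        new_terms = {tag for tag, _ in hashtags.most_common(top_k)} - terms
        if not new_terms:          # nothing new surfaced; the query has stabilized
            break
        terms |= new_terms
    return terms

print(expand_keywords(["#ballotharvesting"]))
```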

It's possible to miss out on keywords and hashtags, not to mention that they change over time.

"You have to use some sort of keyword analysis. " West says, "Of course, that's very rudimentary, but you have to start somewhere."

Some teams build algorithmic tools to help. A team at Michigan State University manually sorted over 10,000 tweets into pro-vaccine, anti-vaccine, neutral and irrelevant categories as training data. The team then used the training data to build a tool that sorted over 120 million tweets into these buckets.

For the automatic sorting to remain relatively accurate as the social conversation evolves, humans have to keep annotating new tweets and feeding them into the training set, Pang-Ning Tan, a co-author of the project, told NPR in an email.
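A sketch of the kind of supervised classifier the Michigan State team describes: hand-labeled tweets train a model that then sorts new tweets into the same buckets, and fresh annotations keep it current. This is a generic scikit-learn baseline with made-up example tweets, not the team's actual code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled training data (hypothetical examples standing in for ~10,000 annotated tweets).
texts = [
    "Got my shot today, feeling great",
    "Vaccines are a scam, do not trust them",
    "The clinic opens at 9am tomorrow",
    "Buy my new album now",
]
labels = ["pro-vaccine", "anti-vaccine", "neutral", "irrelevant"]

# TF-IDF features plus logistic regression is a common baseline for this kind of sorting.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

# Sort new tweets into the same buckets; periodically retraining on fresh annotations
# keeps the classifier from drifting as the conversation changes.
print(model.predict(["another vaccine hoax story going around"]))
```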

If the interplay between machine detection and human review rings familiar, that might be because you've heard large social platforms like Facebook, Twitter and TikTok describe similar processes for moderating content.

Unlike the platforms, researchers face another fundamental challenge: data access. Much misinformation research uses Twitter data, in part because Twitter is one of the few social media platforms that easily lets users tap into its data pipeline, known as an Application Programming Interface, or API. This allows researchers to easily download and analyze large numbers of tweets and user profiles.
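That API access is what makes Twitter-centric studies comparatively easy to set up. A minimal sketch using the tweepy client for the v2 recent-search endpoint might look like the following; the bearer token and query are placeholders.

```python
import tweepy

# Placeholder credentials; a real bearer token comes from a Twitter developer account.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

# Pull recent tweets matching a misinformation-related query (English, no retweets).
response = client.search_recent_tweets(
    query='"ballot harvesting" -is:retweet lang:en',
    tweet_fields=["created_at", "author_id"],
    max_results=100,
)

for tweet in response.data or []:
    print(tweet.created_at, tweet.author_id, tweet.text)
```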

The data pipelines of smaller platforms tend to be less well-documented and could change on short notice.

Take the recently deplatformed Kiwi Farms as an example. The site served as a forum for anti-LGBTQ activists to harass gay and trans people. "When it first went down, we had to wait for it to basically pop back up somewhere, and then for people to talk about where that somewhere is," says Chang.

"And then we can identify, okay, the site is now here - it has this similar structure, the API is the same, it's just been replicated somewhere else. And so we're redirecting the data ingestion and pulling content from there."

Facebook's data service CrowdTangle, while purporting to serve up all publicly available posts, has been found not to have consistently done so. On another occasion, Facebook bungled data sharing with researchers. Most recently, Meta is winding down CrowdTangle, with no alternative announced to take its place.

Other large platforms, like YouTube and TikTok, do not have an accessible API, a data service or a collaboration with researchers at all. TikTok has promised more transparency for researchers.

In such a vast, fragmented, and shifting landscape, West says there's no great way at this point to say what's the state of misinformation on a given topic.

"If you were to ask Mark Zuckerberg, what are people saying on Facebook today? I don't think he could tell you." says Chang.

Continued here:

Misinformation research relies on AI and lots of scrolling - NPR

How AI Is Already Changing Business – Harvard Business Review

Erik Brynjolfsson, MIT Sloan School professor, explains how rapid advances in machine learning are presenting new opportunities for businesses. He breaks down how the technology works and what it can and can't do (yet). He also discusses the potential impact of AI on the economy, how workforces will interact with it in the future, and suggests managers start experimenting now. Brynjolfsson is the co-author, with Andrew McAfee, of the HBR Big Idea article "The Business of Artificial Intelligence." They're also the co-authors of the new book, Machine, Platform, Crowd: Harnessing Our Digital Future.


SARAH GREEN CARMICHAEL: Welcome to the HBR IdeaCast from Harvard Business Review. I'm Sarah Green Carmichael.

It's a pretty sad photo when you look at it. A robot, just over a meter tall and shaped kind of like a pudgy rocket ship, lying on its side in a shallow pool in the courtyard of a Washington, D.C. office building. Workers (human ones) stand around, trying to figure out how to rescue it.

The security robot had only been on the job for a few days when the mishap occurred. One entrepreneur who works in the office complex wrote: "We were promised flying cars. Instead we got suicidal robots."

For many people online, the snapshot symbolized something about the autonomous future that awaits. Robots are coming, and computers can do all kinds of new work for us. Cars can drive themselves. For some people this is exciting, but there is also clearly fear out there about dystopia. Tesla CEO Elon Musk calls artificial intelligence an existential threat.

But our guest on the show today is cautiously optimistic. He's been watching how businesses are using artificial intelligence and how advances in machine learning will change how we work. Erik Brynjolfsson teaches at MIT Sloan School and runs the MIT Initiative on the Digital Economy. And he's the co-author with Andrew McAfee of the new HBR article, "The Business of Artificial Intelligence."

Erik, thanks for talking with the HBR IdeaCast.

ERIK BRYNJOLFSSON: It's a pleasure.

SARAH GREEN CARMICHAEL: Why are you cautiously optimistic about the future of AI?

ERIK BRYNJOLFSSON: Well, actually, that story you told about the robot that had trouble was a great lead-in, because in many ways it epitomizes some of the strengths and weaknesses of robots today. Machines are quite powerful, and in many ways they're superhuman. You know, just as a calculator can do arithmetic a lot better than me, we're having artificial intelligence that's able to do all sorts of functions, in terms of recognizing different kinds of cancer images, or now getting superhuman even in speech recognition in some applications. But they're also quite narrow. They don't have general intelligence the way people do. And that's why partnerships of humans and machines are often going to be the most successful in business.

SARAH GREEN CARMICHAEL: You know it's funny, 'cause when you talk about image recognition I think about a fantastic image in your article that is called Puppy or Muffin. I was amazed at how much puppies and muffins look alike, and sort of even more amazed that robots can tell them apart.

ERIK BRYNJOLFSSON: Yeah, it's a funny image. It always gets a laugh, and I encourage people to go take a look at it. And there are lots of things that humans are pretty good at, like distinguishing different kinds of images. And for a long time, machines were nowhere near as good. As recently as seven, eight years ago, machines made about a 30 percent error rate on ImageNet, this big database that Fei-Fei Li created of over 10 million images. Now machines are down to, you know, less than 5%, 3-4% depending on how it's set up. Humans still have about a 5% error rate. Sometimes they get those puppies and muffins wrong. Be careful what you reach for next time you're at that breakfast bar. But that's a good example.

The reason it's improved so much in the past few years is because of this new approach using deep neural nets that's gotten much more powerful for image recognition and really all sorts of different applications. I think that's a big reason why there's so much excitement these days.

SARAH GREEN CARMICHAEL: Yeah, it's one of those things where we all kind of like to make fun of machines that get it wrong, but also it's sort of terrifying when they get it right.

ERIK BRYNJOLFSSON: Yeah. Machines are not going to be perfect drivers, they're not going to be perfect at making credit decisions, they're not going to be perfect at distinguishing, you know, muffins and puppies. And so, we have to make sure we build systems that are robust to those imperfections. But the point we make in the article, Andy and I point out, is that, you know, humans aren't perfect at any of those tasks either. And so, the benchmark for most entrepreneurs and managers is: who's going to be better for solving this particular task, or better yet, can we create a system that combines the strengths of both humans and machines and does something better than either of them would do individually.

SARAH GREEN CARMICHAEL: With photo recognition and facial recognition, I know that Facebook's facial recognition software can't tell the difference between me wearing makeup and me not wearing makeup, which is also sort of funny and horrifying, right? But at the same time, you know, I think a lot of us struggle to recognize people out of context; we see someone at the grocery store and we think, you know, I know that person from somewhere. So, it's something that humans don't always get right either.

ERIK BRYNJOLFSSON: Oh yeah. I'm the world's worst. You know, at conferences I would love it if there was a little machine whispering in my ear who this person is and how I met them before. So there, you know, there are those kinds of tradeoffs. But it can lead to some risks. For instance, you know, if machines are making bad decisions on important things, like who should get parole or who gets credit or not, that could be really problematic. Worse yet, sometimes they have biases that are built in from the data sets they use. If the people you hired in the past all had a certain kind of ethnic or gender tilt to them, then if you use that as a training set and teach the machine how to hire people, it will learn the same biases that the humans had previously. And, of course, that can be perpetuated and scaled up in ways that we wouldn't like to see.

SARAH GREEN CARMICHAEL: There is a lot of hype right now around AI or artificial intelligence. Some people say machine learning, other people come along and say: hold on, hold on, hold on, a lot of this is just software and we've been using it for a long time. So how do you kind of think through the different terms and what they really mean?

ERIK BRYNJOLFSSON: Well, there's a really important difference between the way the machines are working now versus previously. You know, Andy McAfee and I wrote this book The Second Machine Age, where we talked about having machines do more and more cognitive tasks. And for most of the past 30 or 40 years, that's been done by us painstakingly programming, writing code for exactly what we want the machine to do. You know, if it's doing tax preparation, add up this number and multiply it by that number, and of course we had to understand exactly what the task was in order to specify it.

But now the new machine learning approaches literally have the machines learn on their own things that we don't know how to explain. Face recognition is a perfect example. It would be really hard for me to describe, you know, my mother's face, you know, how far apart are her eyes or what does her ear look like.

ERIK BRYNJOLFSSON: I can recognize it, but I couldn't really write code to do it. And the way the machines are working now is, instead of having us write the code, we give them lots and lots of examples. You know, here are pictures of my mom from different perspectives, or here are pictures of cats and dogs, or here's a piece of speech, you know, with the word yes and the word no. And if you give them enough examples, the machine learning algorithms figure out the rules on their own.

That's a real breakthrough. It overcomes what we call Polanyi's paradox. Michael Polanyi, the polymath and philosopher from the 1960s, famously said, "We all know more than we can tell." But with machine learning we don't have to be able to tell or explain what to do. We just have to show examples. That change is what's opening up so many new applications for machines and allowing them to do a whole set of things that previously only humans could do.
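Brynjolfsson's point, that you show examples instead of writing rules, is essentially what a supervised learning loop looks like in code. Here is a tiny, hypothetical illustration with scikit-learn: synthetic "images" are flattened to vectors, labeled, and a model infers the decision rule itself rather than having it hand-coded.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples: tiny 8x8 grayscale images, flattened to 64-value vectors.
# In Brynjolfsson's terms, these stand in for "pictures of my mom" vs. everyone else.
rng = np.random.default_rng(0)
class_a = rng.normal(loc=0.3, scale=0.1, size=(200, 64))   # examples of class A
class_b = rng.normal(loc=0.7, scale=0.1, size=(200, 64))   # examples of class B
X = np.vstack([class_a, class_b])
y = np.array([0] * 200 + [1] * 200)

# No rules are written down; the model fits them from the labeled examples.
model = LogisticRegression(max_iter=1000).fit(X, y)

new_image = rng.normal(loc=0.7, scale=0.1, size=(1, 64))
print("Predicted class:", model.predict(new_image)[0])      # expected: 1
```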

SARAH GREEN CARMICHAEL: So, it's interesting to think about kind of the human work that has to just go into training the machines, like someone who would sit there literally looking at pictures of blueberry muffins and tagging them muffin, muffin, muffin so the machine, you know, learns that's not a Chihuahua, that's a blueberry muffin. Is that the kind of thing where, in the future, you could see that kind of rote machine-training work being kind of a low-paid dead-end job, whereas maybe that person once would have had a more interesting job but now the machine has the more interesting job?

ERIK BRYNJOLFSSON: I don't think that's going to be a big source of employment, but it is true there are places like Amazon's Mechanical Turk where thousands of people do exactly what you said, they tag images and label them. That's how ImageNet, the database of millions of images, got labeled. And so, there are people being hired to do that. Companies sometimes find that training machines by having humans tag the data is one way to proceed.

But often they can find ways of having data that's already tagged in some way, that's generated from their enterprise resource planning system or from their call center. And if they're clever, that will lead to the creation of this tagged data. And I should back up a bit and say that one of machines' big weaknesses is that they really do need tagged data. That's the most powerful kind of algorithm, sometimes called supervised learning, where humans have in advance tagged and explained what the data means.

And then the machine learns from those examples and eventually can extrapolate to other kinds of examples. But unlike humans, they often need thousands or even millions of examples to do a good job, whereas, you know, a two-year-old probably would learn after one or two times what a cat was versus a dog; you wouldn't have to show, you know, 10,000 pictures of a cat before they got it.

SARAH GREEN CARMICHAEL: Right. Given where we are with AI and machine learning right now, on balance, do you feel like this is something that is overhyped, and people talk about it in sort of too science-fiction terms, or is it something that's not quite hyped enough, and actually people are underestimating what it could do in the relatively near future?

ERIK BRYNJOLFSSON: Well, it's actually both at the same time, if you can believe it. I think that people have unrealistic expectations about machines having all these general capabilities, kind of from watching science fiction like the Terminator. And if a machine can understand Chinese characters, you might think it also could understand Chinese speech, and it could recommend a good Chinese restaurant, know a little bit about the Xing dynasty, and none of that would be true. A machine that can play expert chess can't even play checkers or Go or other games. So, in a way they're very narrow and fragile.

But on the other hand, I think the set of applications for those narrow capabilities is quite large. Using those supervised learning algorithms, I think there are a lot more specific tasks that could be done that we've only scratched the surface of, and because they've improved so much in the past five or 10 years, most of those opportunities have not yet really been explored or even discovered yet. There are a few places where the big giants like Google and Microsoft and Facebook have made rapid progress, but I think that there are literally tens of thousands of more narrow applications that small and medium businesses could start using machine learning for in their own areas.

SARAH GREEN CARMICHAEL: What are some examples of ways that companies are using this technology right now?

ERIK BRYNJOLFSSON: Well, one of my favorite ones I learned from my friend Sebastian Thrun. He's the founder of Udacity, the online learning company, which by the way is a good way to learn more about these technologies. But he found that when people were coming to his site and asking questions in the chat room, some of the salespeople were doing a really good job of guiding them to the right course and closing the sale, and others, well, not so much. This created a set of training data.

He and his grad student realized that if they took the transcripts, they would see that certain sets of words in certain dialogues led to success in sales and others didn't. And he fed that information into a machine learning algorithm, and it started identifying which patterns of phrases and answers were the most successful.

But what happened next was, I think, especially interesting: instead of just trying to build a bot that would answer all the questions, they built a bot that would advise the human salespeople. So now when people go to the site, the bot kind of looks over the shoulder of the human, and when it sees some of those key words it whispers into his or her ear: hey, you know, you might want to try this phrase, or you might want to point him to this particular course.

And that works well for the most common kinds of queries, but the human is much better at the more obscure ones that the bot has never seen before. And this kind of partnership is a great example of an effective use of AI, and also of how you can turn existing data into a tagged data set that the supervised learning system benefits from.
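A rough sketch of the approach described in this anecdote: label past chat transcripts by whether they ended in a sale, fit a simple text model, and surface the phrases most associated with success so they can be suggested to a human. The transcripts and outcomes below are invented; the actual Udacity system is not public.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical transcripts labeled by outcome (1 = led to an enrollment, 0 = did not).
transcripts = [
    "you could start with the intro course and upgrade later",
    "that nanodegree fits your background, here is the syllabus",
    "not sure, maybe look around the site",
    "we do not really cover that topic",
]
outcomes = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(transcripts)
model = LogisticRegression().fit(X, outcomes)

# Phrases with the largest positive weights are the ones to whisper to the salesperson.
weights = model.coef_[0]
top = np.argsort(weights)[-5:][::-1]
terms = vectorizer.get_feature_names_out()
print("Phrases associated with closed sales:", [terms[i] for i in top])
```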

SARAH GREEN CARMICHAEL: So how did these people feel about being coached by a bot?

ERIK BRYNJOLFSSON: Well, it's helped them close their sales, so it's made them more productive. Sebastian says it's about 50% more successful when they're using the bot. So I think it's been beneficial in helping them learn more rapidly than they would have if they just kind of stumbled along.

Going forward, I think this is an example of how the bots are often good at the more routine, repetitive kinds of tasks; the machines can do the ones that they have lots of data for. And the humans tend to excel at the more unusual tasks. For most of us, I think that's kind of a good trade-off. Most of us would prefer having more interesting and varied work lives rather than doing the same thing over and over.

SARAH GREEN CARMICHAEL: So, sales is a form of knowledge work, right, and you sort of gave an example there. One of the big challenges in that kind of work is that it's really hard to scale up one person's productivity. If you are a law firm, for example, and you want to serve more clients, you have to hire more lawyers. It sounds like AI could be one way to finally get around that conundrum.

ERIK BRYNJOLFSSON: Yeah, AI certainly can be a big force multiplier. It's a great way of taking some of your best, you know, lawyers or doctors and having them explain how they go about doing things and give examples of successes, and the machine can learn from those and replicate it, or be combined with people who are already doing the jobs and, in a way, coach them or handle some of the cases that are most common.

SARAH GREEN CARMICHAEL: So, is it just about being more productive or did you see other examples of human machine collaboration that tackled different types of business challenges?

ERIK BRYNJOLFSSON: Well, in some cases it's a matter of being more productive; in many cases, it's a matter of doing the job better than you could before. So there are systems now that can help read medical images and diagnose cancer quite well. The best ones often are still combined with humans, because the machines make different kinds of mistakes than the humans do: the machine often will create what are called false positives, where it thinks there's cancer but there's really not, and the humans are better at ruling those out. You know, maybe there was an eyelash on the image or something that was getting in the way.

And so, by having the machine first filter through all the images and say, hey, here are the ones that look really troubling, and then having a human look at those ones and focus more closely on the ones that are problematic, you end up getting much better outcomes than if that person had to look at all the images herself or himself and maybe overlook some potentially troubling cases.

SARAH GREEN CARMICHAEL: Why now? Because people predicted for a long time that AI was just around the corner, and it sounds like it's finally starting to happen and really make its way into businesses. Why are we seeing this finally start to happen right now?

ERIK BRYNJOLFSSON: Yes, that's a great question. It's really the combination of three forces that have come together. The first one is simply that we have much better computer power than we did before. So, Moore's Law, the doubling of computer power, is part of it. There are also specialized chips called GPUs and TPUs that are another tenfold or even a hundredfold faster than ordinary chips. As a result, training a system that might have taken a century or more if you'd done it with 1990s computers can be done in a few days today.

And so obviously that opens up a whole new set of possibilities that just wouldn't have been practical before. The second big force is the explosion of digital data. Data is the lifeblood of these systems; you need it to train them. And now we have so many more digital images, digital transcripts, digital data from factory gauges keeping track of information, and that all can be fed into these systems to train them.

And as I said earlier, they need lots and lots of examples. And now we have digital examples in a way we didn't previously, and with the Internet of Things you can imagine there's going to be a lot more digital data going forward. And last but not least, there have been some significant improvements in the algorithms; the men and women working in these fields have improved on the basic algorithms. Some of them were first developed literally 30 years ago, but they've now been tweaked and improved, and by having faster computers and more data you can learn more rapidly what works and what doesn't work. When you put these three things together, computer power, more data, and better algorithms, you get sometimes as much as a millionfold improvement on some applications, for instance recognizing pedestrians as they cross the street, which of course is really important for applications like self-driving cars.

SARAH GREEN CARMICHAEL: If those are sort of the factors that are pushing us forward, what are some of the factors that might be inhibiting progress?

ERIK BRYNJOLFSSON: What's not holding us back is the technology; what is holding us back is the imagination of business executives to use these new tools in their businesses. You know, with every general-purpose technology, whether it's electricity or the internal combustion engine, the real power comes from thinking of new ways of organizing your factory, new ways of connecting to your customers, new business models. That's where the real value comes from. And one of the reasons we were so happy to write for Harvard Business Review was to reach out to people and help them be more creative about using these tools to change the way they do business. That's where the real value is.

SARAH GREEN CARMICHAEL: I feel like so much of the broader conversation about AI is about, will this create jobs or destroy jobs? And I'm just wondering, is that a question that you get asked a lot, and are you sick of answering it?

ERIK BRYNJOLFSSON: Well, of course it gets asked a lot. And I'm not sick of answering it, because it's really important. I think the biggest challenge for our society over the next 10 years is going to be, how are we going to handle the economic implications of these new technologies. And you introduced me in the beginning as a cautious optimist, I think you said, and I think that's about right. I think that if we handle this well, this can and should be the best thing that ever happened to humanity.

But I don't think it's automatic. I'm cautious about that. It's entirely possible for us to not invest in the kind of education and retraining of people, to not adopt the kinds of new policies that encourage business formation and new business models even. Income distribution has to be rethought, and tax policy, things like the earned income tax credit in the United States and similar wage subsidies in other countries.

ERIK BRYNJOLFSSON: We need to make a bunch of changes across the board at the policy level. Businesses need to rethink how they work. Individuals need to take personal responsibility for learning the new skills that are going to be needed going forward. If we do all those things, I'm pretty optimistic.

But I wouldn't want people to become complacent, because already over the past 10 years a lot of people have been left behind by the digital revolution that we've had so far. And looking forward, I'd say we ain't seen nothing yet. We have incredibly powerful technologies, especially in artificial intelligence, that are opening up new possibilities. But I want us to think about how we can use technology to create shared prosperity for the many, not just the few.

SARAH GREEN CARMICHAEL: Are there tasks or jobs that machine learning, in your opinion, cant do or wont do?

ERIK BRYNJOLFSSON: Oh, there are so many. Just to be totally clear, most things machine learning can't do. It's able to do a few narrow areas really, really well, just like a calculator can do a few things really, really well. But humans are much more general, with a much broader set of skills, and the set of skills that humans can do is being encroached on.

Machines are taking over more and more tasks, and teaming up with humans on more and more tasks, but in particular, machines are not very good at very broad-scale creativity, you know. Being an entrepreneur or writing a novel or developing a new scientific theory or approach, those kinds of creativity are beyond what machines can do today, by and large.

Secondly, and perhaps with an even broader impact, is interpersonal skills, connecting with humans. You know, we're wired to trust and care for and be interested in other humans in a way that we aren't with machines.

So, whether it's coaching or sales or negotiation or caring for people, persuading people, those are all areas where humans have an edge. And I think there will be an explosion of new jobs, whether it's for personal coaches or trainers or team-oriented activities. I would love to see more people learning those kinds of softer skills that machines are not good at. That's where there will be a lot of jobs in the future.

SARAH GREEN CARMICHAEL: I was surprised to see in the article though, that some of these AI programs are actually surprisingly good at recognizing human emotions. I was really startled by that.

ERIK BRYNJOLFSSON: I have to be careful. One of the main things I learned working with Andy and going to visit all these places is never say never; any particular thing that one of us said, oh, this will never happen, you know, we'd find out that someone is working on it in a lab.

So my advice is that there are relative strengths and relative weaknesses, and emotional intelligence, I still think, is a relative strength of humans. But there are particular narrow applications where machines are improving quite rapidly. Affectiva, a company here in Boston, has gotten very good at reading emotions, which is part of what you need to do to be a good coach, to be a caring person. It's not the whole picture, but it is one piece of the interpersonal skills that machines are helping with.

SARAH GREEN CARMICHAEL: What do you see as the biggest risks with AI?

ERIK BRYNJOLFSSON: I think there are a few. One of the big risks is that these machine learning algorithms can have implicit biases and they can be very hard to detect or correct. If the training data is biased, has some kind of racial or ethnic or other biases in its data, then those can be perpetuated in the sample. And so, we need to be very careful about how we train the systems and what data we give them.

And it's especially important because they don't have the kind of explicit rules that earlier waves of technology had. So, it's hard to even know. It's unlikely to have a rule that says, you know, don't give loans to black people or whatever, but it may implicitly have its thumb on the scale in one way or the other if the training data were biased.

SARAH GREEN CARMICHAEL: Right. Because it might notice, for instance, that, statistically speaking, black people get turned down more for loans, that kind of thing.

ERIK BRYNJOLFSSON: Yeah, if the people who had made those decisions before were biased and you use that for the training data, that could end up creating a biased training set. And you know, maybe nobody explicitly says that they were biased, but it sort of shows up in other subtle ways based on, you know, the zip code that someone's coming from or their last name or their first name or whatever. So those would be subtle things that you need to be careful of.

The other thing is what we touched on earlier, just the whole question of what's happening with income inequality and opportunity as the machines get better at many kinds of tasks, you know, driving a truck or handling a call center. The people who had been doing those jobs need to find new things to do. And often those new jobs won't pay as well if we aren't careful. So that could be a real income hit. Already we see growing income inequality.

We have to be aggressive about thinking how we can create broadly shared prosperity. One of the things we did at MIT is we launched something called the Inclusive Innovation Challenge, which recognizes and rewards organizations that are using technology to create shared prosperity, they're innovating in ways that do that. I'd love to see more and more entrepreneurs think in that way, not just how they can create concentrated wealth, but how they can create broadly shared prosperity.

SARAH GREEN CARMICHAEL: Elon Musk has been out there saying artificial intelligence could be an existential threat to human beings. Other people have talked about fears that the machines could take over and turn against us. How do you feel about those kinds of concerns?

ERIK BRYNJOLFSSON: Well, like I said earlier, you can never say never, and, you know, as machines keep getting more and more powerful, I can imagine them having enormous powers, especially as we delegate more of the operations of our critical infrastructure, our electricity and our water system and our air traffic control and even our military operations, to them. But the reason I didn't list it is I don't see it as the most immediate risk right now. The technologies that are being rolled out right now have effects on bias and decision-making, they have effects on jobs and income. But by and large they don't have those kinds of existential risks.

I think it's important that we have researchers working in those areas and thinking about them, but I wouldn't want to panic Congress or the people right now into doing something that would probably be counterproductive if we overreacted.

I think its an area for research but in terms of devoting billions of dollars of effort, I would put that towards education and retraining and handling bias the things that are facing us right now and will be facing us for the next five and 10 years.

SARAH GREEN CARMICHAEL: What do you feel is the appropriate role of regulation as AI develops?

ERIK BRYNJOLFSSON: I think we need to be watchful, because there's the potential for AI to lead to more concentration of power and more concentration of wealth. The best antidote to that is competition.

And what we've seen in the tech industries, for most of the past 10, 20, 30 years, is that as one monopolist, whether it's IBM or Microsoft, gets a lot of power, another company comes along and knocks it off its perch. I remember teaching a class about 15 years ago where a speaker said, you know, Yahoo has search locked up, no one's ever going to displace Yahoo. So you know, we need to be humble and realize that the giants of today face threats and could be overturned.

That said, if there becomes a sort of a stagnant loss of innovation and these companies have a stranglehold on markets and maybe have other adverse effects in areas like privacy, then it would be right for government to step in. My instinct right now would be sort of watchful waiting, keeping an eye on these companies and doing what we could to foster innovation and competition as the best way to protect consumers.

SARAH GREEN CARMICHAEL: So, if all of this still sounds quite futuristic to the average manager, if they're kind of like: OK, you know, this is sort of way outside of what I'm working on in my role, what are the sort of things that you'd advise people to keep in mind or think about?

ERIK BRYNJOLFSSON: Well, it starts with realizing this is not futuristic and way out there. There are lots of small and medium-sized companies that are learning how to apply this right now, whether it's, you know, sorting cucumbers to be more effective, somebody wrote an application that did that, to helping with recommendations online. There's a company I'm advising called Infinite Analytics that is giving customers better recommendations about what products they should be choosing, to helping with, you know, credit decisions.

There are so many areas where you can apply these technologies right now. You can take courses, or you can have people in your organization take courses, or you can hire people, at places like Udacity or fast.ai, my friend Jeremy Howard runs a great course in that area, and put it to work right away and start with something small and simple.

But definitely don't think of this as futuristic. Don't be put off by the science fiction movies, whether, you know, the Terminator or other AI shows. That's not what's going on. It's a bunch of very specific practical applications that are completely feasible in 2017.

SARAH GREEN CARMICHAEL: Erik, thanks so much for talking with us today about all of this.

ERIK BRYNJOLFSSON: It's been a real pleasure.

SARAH GREEN CARMICHAEL: That's Erik Brynjolfsson. He's the director of the MIT Initiative on the Digital Economy. And he's the co-author with Andrew McAfee of the new HBR article, "The Business of Artificial Intelligence."

You can read their HBR article, and also read about how Facebook uses AI and machine learning in almost everything you see, and you can watch a video (shot in my own kitchen!) about how IBM's Watson uses AI to create new recipes. That's all at hbr.org/AI.

Thanks for listening to the HBR IdeaCast. I'm Sarah Green Carmichael.

See the article here:

How AI Is Already Changing Business - Harvard Business Review

How AI And Real-Time Machine Data Helps Kone Move Millions Of People A Day – Forbes


KONE's mission is to improve the flow of urban life. The Finnish-headquartered elevator and escalator engineering and maintenance company is responsible for 1.1 million elevators worldwide. As well as offices and apartments, it runs people moving ...

Visit link:

How AI And Real-Time Machine Data Helps Kone Move Millions Of People A Day - Forbes

R&D Roundup: Supercomputer COVID-19 insights, ionic spiderwebs, the whiteness of AI – TechCrunch

I see far more research articles than I could possibly write up. This column collects the most interesting of those papers and advances, along with notes on why they may prove important in the world of tech and startups. This week: supercomputers take on COVID-19, beetle backpacks, artificial spiderwebs, the overwhelming whiteness of AI and more.

First off, if (like me) you missed this amazing experiment where scientists attached tiny cameras to the backs of beetles, I don't think I have to explain how cool it is. But you may wonder: why do it? Prolific UW researcher Shyam Gollakota and several graduate students were interested in replicating some aspects of insect vision, specifically how efficient the processing and direction of attention is.

The camera backpack has a narrow field of view and uses a simple mechanism to direct its focus rather than processing a wide-field image at all times, saving energy and better imitating how real animals see. "Vision is so important for communication and for navigation, but it's extremely challenging to do it at such a small scale. As a result, prior to our work, wireless vision has not been possible for small robots or insects," said Gollakota. You can watch the critters in action below, and don't worry, the beetles lived long, happy lives after their backpack-wearing days.

The health and medical community is always making interesting strides in technology, but it's often pretty niche stuff. These two items from recent weeks are a bit more high-profile.

One is a new study being conducted by UCLA in concert with Apple, which, especially with its smartwatch, has provided lots of excellent data to, for example, studies of arrhythmia. In this case, doctors are looking at depression and anxiety, which are considerably more difficult to quantify and detect. But by using Apple Watch, iPhone and sleep monitor measurements of activity levels, sleep patterns and so on, a large body of standardized data can be amassed.

Go here to read the rest:

R&D Roundup: Supercomputer COVID-19 insights, ionic spiderwebs, the whiteness of AI - TechCrunch

From drugs to galaxy hunting, AI is elbowing its way into boffins’ labs – The Register

Feature Powerful artificially intelligent algorithms and models are all the rage. They're knocking it out of the park in language translation and image recognition, but autonomous cars and chatbots? Not so much.

One area where machine learning could do surprisingly well is science research. As AI advances, its potential is being seized on by academics. The number of natural science studies that use machine learning is steadily rising.

Two separate papers showing how neural networks can be trained to pinpoint when the precise shuffle of particles leads to a physical phase transition, something that could help scientists understand phenomena like superconductivity, were published on the same day earlier this month in Nature Physics.

Science has had an affair with AI for a while, said Marwin Segler, a PhD student studying chemistry under Professor Mark Waller at the University of Münster, Germany. However, until now, the relationship hasn't been terribly fruitful.

Segler is interested in retrosynthesis, a technique that reveals how a desired molecule can be broken down into simpler chemical building blocks. Chemists can then carry out the necessary reaction steps to craft the required molecule from these building blocks. These molecules can then be used in drugs and other products.

"A good analogy would be something like a cooking recipe," Segler told The Register. "Imagine you're trying to make a complicated cake. Retrosynthesis will show you how to make the cake, and the ingredients you need."

In the 1990s, before the deep learning hype kicked off, expert systems were used to perform retrosynthesis. Rules for reactions had to be manually programmed in: this is tedious work, and it never delivered any convincing results.

Now things are starting to look more promising with modern AI techniques. Retrosynthesis has strong analogies to puzzle games, particularly Go. Software can attempt to solve retrosynthesis problems in the same way it solves Go challenges: splintering the problem ahead into component parts and finding the best route to the solution.

All the viable moves in a Go match can be fanned out into a large search tree, and the winning moves are identified using a Monte Carlo tree search, an algorithm used by AlphaGo to defeat Lee Sedol, a Korean Go champion.

Just like how AlphaGo was trained to triumph in Go games, Segler's AlphaChem program is trained to determine the best move to find the puzzle pieces that fit together to build the desired molecule. The code is fed a library containing millions of chemical reactions to obtain the necessary bank of knowledge to ultimately break down molecules into building blocks.

"Chemists rely on their intuition, which they master during long years of work and study, to prioritize which rules to apply when retroanalyzing molecules. Analogous to master move prediction in Go, we showed recently that, instead of hand-coding, neural networks can learn the master chemist moves," the AlphaChem paper [PDF], submitted to AI conference ICLR 2017 in January, reads.

There are thousands of possible moves per position to play on the Go board, just as there are multiple pathways to consider when trying to break down a molecule into simpler components.

AlphaGo and AlphaChem both cut down on computational costs by pruning the search tree, so there are fewer branches to consider. Only the top 50 most-promising moves are played out, so it doesn't take a fancy supercomputer packing tons of CPU cores and accelerators to perform the retrosynthesis; an Apple MacBook Pro will do.
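A greatly simplified, hypothetical sketch of policy-guided Monte Carlo tree search for retrosynthesis follows. The `policy_topk`, `expand_molecule` and `is_building_block` functions are stand-ins for the trained network, the reaction-template step and the purchasable-compound check; none of this is AlphaChem's actual code.

```python
import math
import random

def policy_topk(molecule, k=50):
    """Return up to k candidate disconnections with prior scores (stand-in for the network)."""
    return [(f"{molecule}->frag{i}", random.random()) for i in range(k)]

def expand_molecule(molecule, move):
    """Apply a retrosynthetic move, returning simpler precursor 'molecules' (stand-in)."""
    return [move.split("->")[1]]

def is_building_block(molecule):
    """Terminal test: is this a purchasable building block? (stand-in)"""
    return molecule.startswith("frag") and molecule.endswith(("0", "1", "2"))

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    """Upper confidence bound: balances exploiting good moves and exploring untried ones."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(math.log(node.parent.visits) / node.visits)

def mcts(root_state, iterations=200):
    root = Node(root_state)
    for _ in range(iterations):
        # 1. Selection: walk down the tree picking the child with the best UCB score.
        node = root
        while node.children:
            node = max(node.children, key=ucb)
        # 2. Expansion: add children only for the top-k policy moves (pruned search tree).
        if not is_building_block(node.state):
            for move, _prior in policy_topk(node.state, k=5):
                for precursor in expand_molecule(node.state, move):
                    node.children.append(Node(precursor, parent=node))
            node = random.choice(node.children)
        # 3. Evaluation: reward routes that reach purchasable building blocks.
        reward = 1.0 if is_building_block(node.state) else 0.0
        # 4. Backpropagation: update visit counts and values up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children, key=lambda n: n.visits).state

print(mcts("target_molecule"))   # most promising first disconnection in this toy setup
```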

During the testing phase, AlphaChem was pitted against two other more-traditional search algorithms to find the best reactions for 40 molecules. Although AlphaChem proved slower than the best-first search algorithm, it was more accurate, solving the problem up to 95 per cent of the time.

Segler hopes AlphaChem will one day be used to find new ways of making drugs more cheaply or to help chemists manufacture new molecules. It is possible the software will, in future revisions, reveal reactions and techniques humans had not considered.

"It's true that using AI is fashionable right now, and interest has been piqued in science because of the hype," he said. "But on the other hand, it's getting used more because it's producing better results."

Investment in AI has led to better algorithms, and a lot of the frameworks, such as TensorFlow, Caffe, and PyTorch, are publicly available, making it easier for non-experts to use.

"I coded the Monte Carlo tree search algorithm myself, but for the neural network stuff I used Keras," Segler told us.
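That division of labor, a hand-coded search wrapped around a learned move-scoring network, maps onto something like the following minimal Keras sketch. The fingerprint size, template count and random training data are all hypothetical placeholders, not details from the paper.

```python
# Minimal sketch of a policy network in Keras: given a molecule fingerprint,
# score which reaction templates ("moves") to try first.
import numpy as np
from tensorflow import keras

n_fingerprint_bits = 2048      # e.g., a Morgan/ECFP-style bit vector (assumed size)
n_reaction_templates = 10000   # library of known transformation rules (assumed size)

model = keras.Sequential([
    keras.Input(shape=(n_fingerprint_bits,)),
    keras.layers.Dense(512, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(n_reaction_templates, activation="softmax"),  # move probabilities
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Dummy data standing in for (molecule fingerprint, template used) pairs from a reaction database.
X = np.random.randint(0, 2, size=(64, n_fingerprint_bits)).astype("float32")
y = np.random.randint(0, n_reaction_templates, size=(64,))
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

# At search time, the top-scoring templates become the candidate moves to expand.
probs = model.predict(X[:1], verbose=0)[0]
top_moves = probs.argsort()[-50:][::-1]
print("Top candidate reaction templates:", top_moves[:5])
```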

Although AI has been used in chemistry for over 40 years, it's more challenging to apply it in chemistry compared to other subjects, Segler said. "Gathering training data is very expensive in chemistry, because every data point is a laboratory experiment. We cannot simply annotate photos or gather lots of text from the internet, as in computer vision or natural language processing."

For one thing, a lot of medical-related data is kept confidential, and companies don't generally share this information with chemists and biochemists for training systems.

View post:

From drugs to galaxy hunting, AI is elbowing its way into boffins' labs - The Register

Ava expands its AI captioning to desktop and web apps, and raises $4.5M to scale – TechCrunch

The worldwide shift to virtual workplaces has been a blessing and a curse to people with hearing impairments. Having office chatter occur in text rather than speech is more accessible, but virtual meetings are no easier to follow than in-person ones which is why real-time captioning startup Ava has seen a huge increase in users. Riding the wave, the company just announced two new products and a $4.5 million seed round.

Ava previously made its name in the deaf community as a useful live transcription tool for real-life conversations. Start up the app and it would instantly hear and transcribe speech around you, color-coded to each speaker (and named if they activate a QR code). Extremely useful, of course, but when meetings stopped being in rooms and started being in Zooms, things got a bit more difficult.

"Use cases have shifted dramatically, and people are discovering the fact that most of these tools are not accessible," co-founder and CEO Thibault Duchemin told TechCrunch.

And while some tools may have limited captioning built in (for example Skype and Google Meet), it may or may not be saved, editable, accurate or convenient to review. For instance, Meet's ephemeral captions, while useful, only last a moment before disappearing, and are not specific to the speaker, making them of limited use for a deaf or hard of hearing person trying to follow a multi-person call. And the languages they are available in are limited as well.

As Duchemin explained, it began to seem much more practical to have a separate transcription layer that is not specific to any one service.


Thus Ava's new product, a desktop and web app called Closed Captioning, works with all major meeting services and online content, captioning it with the same on-screen display and making the content accessible via the same account. That includes things like YouTube videos without subtitles, live web broadcasts and even audio-only content like podcasts, in more than 15 languages.

Individual speakers are labeled, automatically if an app supports it, like Zoom, or by having people in the meeting click a link that attaches their identity to the sound of their voice. (There are questions of privacy and confidentiality here, but they will differ case by case and are secondary to the fundamental capability of a person to participate.)
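For a rough sense of the building blocks involved (not Ava's implementation), open-source speech recognition can produce the raw transcript, to which a speaker label supplied by the meeting app is then attached; a human can clean up the result afterwards. The model name and file path below are just public placeholders.

```python
# Illustrative only: transcribe an audio clip with an open-source ASR model,
# then attach a speaker label supplied out-of-band (e.g., from the meeting app).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

def caption(audio_path, speaker_name):
    result = asr(audio_path)                    # returns {'text': '...'}
    return f"{speaker_name}: {result['text']}"

print(caption("meeting_clip.wav", "Alex"))      # placeholder file and speaker name
```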

The transcripts all go to the person's Ava app, letting them check through at their leisure or share with the rest of the meeting. That in itself is a hard service to find, Duchemin pointed out.

"It's actually really complicated," he said. "Today if you have a meeting with four people, Ava is the only technology where you can have accurate labeling of who said what, and that's extremely valuable when you think about enterprise." Otherwise, he said, unless someone is taking detailed notes (unlikely, expensive, and time-consuming), meetings tend to end up black boxes.

For such high-quality transcription, speech-to-text AI isn't good enough, he admitted. "It's enough to follow a conversation, but we're talking about professionals and students who are deaf or hard of hearing," Duchemin said. "They need solutions for meetings and classes and in-person, and they aren't ready to go full AI. They need someone to clean up the transcript, so we provide that service."


Ava Scribe quickly brings in a human trained not in direct transcription but in the correction of the product of speech-to-text algorithms. That way a deaf person attending a meeting or class can follow along live, but also be confident that when they check the transcript an hour later it will be exact, not approximate.

Right now transcription tools are being used as value-adds to existing products and suites, he said, ways to attract or retain customers. They aren't beginning with the community of deaf and hard of hearing professionals and designing around their needs, which is what Ava has striven to do.

The explosion in popularity and obvious utility of their platform has led to this $4.5 million seed round, as well, led by Initialized Capital and Khosla Ventures; Day One Ventures also participated.

Duchemin said they expected to double the size of their team with the money, and start really marketing and finding big customers. "We're very specialized, so we need a strong business model to grow," he said. A strong, unique product is a good place to start, though.

Read more:

Ava expands its AI captioning to desktop and web apps, and raises $4.5M to scale - TechCrunch

A new AI tool to fight the coronavirus – Axios

A coalition of AI groups is forming to produce a comprehensive data source on the coronavirus pandemic for policymakers and health care leaders.

Why it matters: A torrent of data about COVID-19 is being produced, but unless it can be organized in an accessible format, it will do little good. The new initiative aims to use machine learning and human expertise to produce meaningful insights for an unprecedented situation.

Driving the news: Members of the newly formed Collective and Augmented Intelligence Against COVID-19 (CAIAC) announced today include the Future Society, a non-profit think tank from the Harvard Kennedy School of Government, as well as the Stanford Institute for Human-Centered Artificial Intelligence and representatives from UN agencies.

What they're saying: "With COVID-19 we realized there are tons of data available, but there was little global coordination on how to share it," says Cyrus Hodes, chair of the AI Initiative at the Future Society and a member of the CAIAC steering committee. "That's why we created this coalition to put together a sense-making platform for policymakers to use."

Context: COVID-19 has produced a flood of statistics, data and scientific publications, more than 35,000 of the latter as of July 8. But raw information is of little use unless it can be organized and analyzed in a way that can support concrete policies.

The bottom line: Humans aren't exactly doing a great job beating COVID-19, so we need all the machine help we can get.

Read more:

A new AI tool to fight the coronavirus - Axios

AI Project Produces New Styles of Art – Smithsonian


Artificial intelligence is getting pretty good at besting humans in things like chess and Go and dominating at trivia. Now, AI is moving into the arts, aping van Gogh's style and creating a truly trippy art form called Inceptionism. A new AI project is continuing to push the envelope with an algorithm that only produces original styles of art, and Chris Baraniuk at New Scientist reports that the product gets equal or higher ratings than human-generated artwork.

Researchers from Rutgers University, the College of Charleston and Facebook's AI Lab collaborated on the system, which is a type of generative adversarial network, or GAN, that uses two independent neural networks to critique each other. In this case, one of the systems is a generator network, which creates pieces of art. The other network is the discriminator network, which is trained on 81,500 images from the WikiArt database, spanning centuries of painting. The algorithm learned how to tell the difference between a piece of art versus a photograph or diagram, and it also learned how to identify different styles of art, for instance impressionism versus pop art.

The MIT Technology Review reports that the first network created random images, then received analysis from the discriminator network. Over time, it learned to reproduce different art styles from history. But the researchers wanted to see if the system could do more than just mimic humans, so they asked the generator to produce images that would be recognized as art, but did not fit any particular school of art. In other words, they asked it to do what human artists do: use the past as a foundation, but interpret it to create a style of its own.

At the same time, researchers didn't want the AI to just create something random. They worked to train the AI to find the sweet spot between low-arousal images (read: boring) and high-arousal images (read: too busy, ugly or jarring). "You want to have something really creative and striking, but at the same time not go too far and make something that isn't aesthetically pleasing," Rutgers computer science professor and project lead Ahmed Elgammal tells Baraniuk. The research appears on arXiv.
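
That trade-off can be made concrete with a small code sketch. Below is a minimal, hypothetical PyTorch rendering of the two-part generator objective a Creative Adversarial Network uses: one term rewards images the discriminator accepts as art, while a style-ambiguity term pushes the discriminator's style classifier toward indecision. The generator and discriminator interfaces, and the equal weighting of the two terms, are assumptions for illustration, not the researchers' released code.

```python
# Sketch of a CAN-style generator objective (assumed interfaces, not the
# authors' code): discriminator(x) returns (art_logit, style_logits).
import torch
import torch.nn.functional as F

def can_generator_loss(generator, discriminator, noise, num_styles):
    fake_images = generator(noise)
    art_logit, style_logits = discriminator(fake_images)

    # Term 1: standard GAN objective -- the output should look like art.
    art_loss = F.binary_cross_entropy_with_logits(
        art_logit, torch.ones_like(art_logit))

    # Term 2: style ambiguity -- the style classifier should be maximally
    # unsure which school the image belongs to (uniform over known styles).
    log_probs = F.log_softmax(style_logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / num_styles)
    ambiguity_loss = F.kl_div(log_probs, uniform, reduction="batchmean")

    # Equal weighting here is an arbitrary illustrative choice.
    return art_loss + ambiguity_loss
```

Tuning the balance between those two terms is, loosely speaking, the knob that keeps the output between "boring" and "too jarring."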

The team wanted to find out how convincing its AI artist was, so they displayed some of the AI artwork on the crowd-sourcing site Mechanical Turk along with historical Abstract Expressionism and images from Art Basel's 2016 show in Basel, Switzerland, reports MIT Technology Review.

The researchers had users rate the art, asking how much they liked it, how novel it was, and whether they believed it was made by a human or a machine. It turns out, the AI art rated higher in aesthetics than the art from Basel, and was found "more inspiring." The viewers also had difficulty telling the difference between the computer-generated art and the Basel offerings, though they were able to differentiate between the historical Abstract Expressionism and the AI work. "We leave open how to interpret the human subjects' responses that ranked the CAN [Creative Adversarial Network] art better than the Art Basel samples in different aspects," the researchers write in the study.

As such networks improve, the definition of art and creativity will also change. MIT Technology Review asks, for instance, whether the project is simply an algorithm that has learned to exploit human emotions rather than something truly creative.

One thing is certain: it will never cut off an ear for love.


Read the original:

AI Project Produces New Styles of Art - Smithsonian

Israel’s operation against Hamas was the world’s first AI war – The Jerusalem Post

Having relied heavily on machine learning, the Israeli military is calling Operation Guardian of the Walls the first artificial-intelligence war.

"For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy," an IDF Intelligence Corps senior officer said. "This is a first-of-its-kind campaign for the IDF. We implemented new methods of operation and used technological developments that were a force multiplier for the entire IDF."

In 11 days of fighting in the Gaza Strip, the Israeli military carried out intensive strikes against Hamas and Palestinian Islamic Jihad targets. It targeted key infrastructure and personnel belonging to the two groups, the IDF said.

While the military relied on what was already available on the civilian market and adapted it for military purposes in the years prior to the fighting, the IDF established an advanced AI technological platform that centralized all data on terrorist groups in the Gaza Strip onto one system, enabling the analysis and extraction of intelligence.

Soldiers in Unit 8200, an elite Intelligence Corps unit, pioneered algorithms and code that led to several new programs called Alchemist, Gospel and Depth of Wisdom, which were developed and used during the fighting.

Collecting data using signal intelligence (SIGINT), visual intelligence (VISINT), human intelligence (HUMINT), geographical intelligence (GEOINT) and more, the IDF has mountains of raw data that must be combed through to find the key pieces necessary to carry out a strike.

Gospel used AI to generate recommendations for troops in the research division of Military Intelligence, which used them to produce quality targets and then passed them on to the IAF to strike.


Here is the original post:

Israel's operation against Hamas was the world's first AI war - The Jerusalem Post

The emergence of the professional AI risk manager – VentureBeat

When the 1970s and 1980s were colored by banking crises, regulators from around the world banded together to set international standards on how to manage financial risk. Those standards, now known as the Basel standards, define a common framework and taxonomy on how risk should be measured and managed. This led to the rise of professional financial risk managers, which was my first job. The largest professional risk associations, GARP and PRMIA, now have over 250,000 certified members combined, and there are many more professional risk managers out there who haven't gone through those particular certifications.

We are now beset by data breaches and data privacy scandals, and regulators around the world have responded with data regulations. GDPR is the current role model, but I expect a global group of regulators to expand the rules to cover AI more broadly and set the standard on how to manage it. The UK ICO just released a draft but detailed guide on auditing AI. The EU is developing one as well. Interestingly, their approach is very similar to that of the Basel standards: specific AI risks should be explicitly managed. This will lead to the emergence of professional AI risk managers.

Below I'll flesh out the implications of a formal AI risk management role. But before that, there are some concepts to clarify:

The Basel framework is a set of international banking regulation standards developed by the Bank for International Settlements (BIS) to promote the stability of the financial markets. By itself, BIS does not have regulatory powers, but its position as the central bank of central banks makes Basel regulations the world standard. The Basel Committee on Banking Supervision (BCBS), which wrote the standards, formed at a time of financial crises around the world. It started with a group of 10 central bank governors in 1974 and is now composed of 45 members from 28 jurisdictions.

Given the privacy violations and scandals in recent times, we can see GDPR as a Basel standard equivalent for the data world. And we can see the European Data Protection Supervisor (EDPS) as the BCBS for data privacy. (EDPS is the supervisor of GDPR.) I expect a more global group will emerge as more countries enact data protection laws.

There is no leading algorithm regulation yet. GDPR only covers a part of it. One reason is that it is difficult to regulate algorithms themselves and another is that regulation of algorithms is embedded into sectoral regulations. For example, Basel regulates how algorithms should be built and deployed in banks. There are similar regulations in healthcare. Potential conflicting or overlapping regulations make writing a broader algorithmic regulation difficult. Nevertheless, regulators in the EU, UK, and Singapore are taking the lead in providing detailed guidance on how to govern and audit AI systems.

Basel I was written more than three decades ago in 1988. Basel II in 2004. Basel III in 2010. These regulations set the standards on how risk models should be built, what the processes are to support those models, and how risk will affect the banks business. It provided a common framework to discuss, measure, and evaluate the risks that banks are exposed to. This is what is happening with the detailed guidance being published by EU/UK/SG. All are taking a risk-based approach and helping define the specific risks of AI and the necessary governance structures.

A common framework allows professionals to quickly share concepts, adhere to guidelines, and standardize practices. Basel led to the emergence of financial risk managers and professional risk associations. A new C-level position was also created, the Chief Risk Officer (CRO). Bank CROs are independent from other executives and often report directly to the CEO or board of directors.

GDPR jumpstarted this development for data privacy. It required that organizations with over 250 employees have a data protection officer (DPO). This caused a renewed interest in the International Association of Privacy Professionals. Chief Privacy and Data Officers (CPOs and CDOs) are also on the rise. With broader AI regulations coming, there will be a wave of professional AI risk managers and a global professional community forming around it. DPOs are the first iteration.

The job will combine some duties and skill sets of financial risk managers and data protection officers. A financial risk manager needs technical skills to build, evaluate, and explain models. One of their major tasks is to audit a bank's lending models while they are being developed and when they're in deployment. DPOs have to monitor internal compliance, conduct data protection impact assessments (DPIAs), and act as the contact point for top executives and regulators. AI risk managers have to be technically adept yet have a good grasp of regulations.

AI development will be much slower. Regulation is the primary reason banks have not been at the forefront of AI innovation. Lending models are not updated for years to avoid additional auditing work from internal and external parties.

But AI development will be much safer as well. AI risk managers will require that a model's purpose be explicitly defined and that only the required data is copied for training. No more sensitive data in a data scientist's laptop.

The emergence of the professional AI risk manager will be a boon to startups building in data privacy and AI auditing.

Data privacy. Developing models on personal data will automatically require a DPIA. Imagine data scientists having to ask for approval before they start a project. (Hint: not good) To work around this, data scientists would want tools to anonymize data at scale or generate synthetic data so they can avoid DPIAs. So the opportunities for startups are twofold: There will be demand for software to comply with regulations, and there will be demand for software that provides workarounds to those regulations, such as sophisticated synthetic data solutions.

AI auditing. Model accuracy is one AI-related risk for which we already have common assessment techniques. But for other AI-related risks, there are none. There is no standard for auditing fairness and transparency. Making AI models robust to adversarial attacks is still an active area of research. So this is an open space for startups, especially those in the explainable AI space, to help define the standards and be the preferred vendor.
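
For a sense of what a future AI auditor might actually run, here is a minimal sketch of one widely used fairness measure, the demographic parity gap. The data, group labels, and the 5-percentage-point flag threshold are all hypothetical.

```python
# Illustrative fairness check: gap in positive-prediction rates between groups.
import numpy as np

def demographic_parity_gap(y_pred, group, a="A", b="B"):
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == a].mean() - y_pred[group == b].mean())

# Hypothetical audit: flag the model if the gap exceeds 5 percentage points.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}", "FLAG" if gap > 0.05 else "OK")
```

A real audit would look at several such metrics, on real traffic, and document why the chosen thresholds are acceptable.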

Kenn So is an associate at Shasta Ventures investing in AI/smart software startups. He was previously an associate at Ernst & Young, building and auditing bank models and was one of the financial risk managers that emerged out of the Basel standards.

Continue reading here:

The emergence of the professional AI risk manager - VentureBeat

The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act – Fasken

Laws governing technology have historically focused on the regulation of information privacy and digital communications. However, governments and regulators around the globe have increasingly turned their attention to artificial intelligence (AI) systems. As the use of AI becomes more widespread and changes how business is done across industries, there are signs that existing declarations of principles and ethical frameworks for AI may soon be followed by binding legal frameworks. [1]

On June 16, 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes to enact, among other things, the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first effort to regulate AI systems outside of privacy legislation. [2]

If passed, AIDA would regulate the design, development, and use of AI systems in the private sector in connection with interprovincial and international trade, with a focus on mitigating the risks of harm and bias in the use of high-impact AI systems. AIDA sets out positive requirements for AI systems as well as monetary penalties and new criminal offences on certain unlawful or fraudulent conduct in respect of AI systems.

Prior to AIDA, in April 2021, the European Commission presented a draft legal framework for regulating AI, the Artificial Intelligence Act (EU AI Act), which was one of the first attempts to comprehensively regulate AI. The EU AI Act sets out harmonized rules for the development, marketing, and use of AI and imposes risk-based requirements for AI systems and their operators, as well as prohibitions on certain harmful AI practices.

Broadly speaking, AIDA and the EU AI Act are both focused on mitigating the risks of bias and harms caused by AI in a manner that tries to be balanced with the need to allow technological innovation. In an effort to be future-proof and keep pace with advances in AI, both AIDA and the EU AI Act define artificial intelligence in a technology-neutral manner. However, AIDA relies on a more principles-based approach, while the EU AI Act is more prescriptive in classifying high-risk AI systems and harmful AI practices and controlling their development and deployment. Further, much of the substance and details of AIDA are left to be elaborated in future regulations, including the key definition of "high-impact" AI systems to which most of AIDA's obligations attach.

The table below sets out some of the key similarities and differences between the current drafts of AIDA and the EU AI Act.

High-risk system means:

The EU AI Act does not apply to:

AIDA does not stipulate an outright ban on AI systems presenting an unacceptable level of risk.

It does, however, make it an offence to:

The EU AI Act prohibits certain AI practices and certain types of AI systems, including:

Persons who process anonymized data for use in AI systems must establish measures (in accordance with future regulations) with respect to:

High-risk systems that use data sets for training, validation and testing must be subject to appropriate data governance and management practices that address:

Data sets must:

Transparency. Persons responsible for high-impact systems must publish on a public website a plain-language description of the AI system which explains:

Transparency. AI systems which interact with individuals and pose transparency risks, such as those that incorporate emotion recognition systems or risks of impersonation or deception, are subject to additional transparency obligations.

Regardless of whether or not the system qualifies as high-risk, individuals must be notified that they are:

Persons responsible for AI systems must keep records (in accordance with future regulations) describing:

High-risk AI systems must:

Providers of high-risk AI systems must:

The Minister of Industry may designate an official to be the Artificial Intelligence and Data Commissioner, whose role is to assist in the administration and enforcement of AIDA. The Minister may delegate any of their powers or duties under AIDA to the Commissioner.

The Minister of Industry has the following powers:

The European Artificial Intelligence Board will assist the European Commission in providing guidance and overseeing the application of the EU AI Act. Each Member State will designate or establish a national supervisory authority.

The Commission has the authority to:

Persons who commit a violation of AIDA or its regulations may be subject to administrative monetary penalties, the details of which will be established by future regulations. Administrative monetary penalties are intended to promote compliance with AIDA.

Contraventions of AIDA's governance and transparency requirements can result in fines:

Persons who commit more serious criminal offences (e.g., contravening the prohibitions noted above or obstructing or providing false or misleading information during an audit or investigation) may be liable to:

While both acts define AI systems relatively broadly, the definition provided in AIDA is narrower. AIDA only encapsulates technologies that process data autonomously or partly autonomously, whereas the EU AI Act does not stipulate any degree of autonomy. This distinction in AIDA is arguably a welcome divergence from the EU AI Act, which as currently drafted would appear to include even relatively innocuous technology, such as the use of a statistical formula to produce an output. That said, there are indications that the EU AI Act's current definition may be modified before its final version is published, and that it will likely be accompanied by regulatory guidance for further clarity. [4]

Both acts are focused on avoiding harm, a concept they define similarly. The EU AI Act is, however, slightly broader in scope as it considers serious disruptions to critical infrastructure a harm, whereas AIDA is solely concerned with harm suffered by individuals.

Under AIDA, high-impact systems will be defined in future regulations, so it is not yet possible to compare AIDA's definition of high-impact systems to the EU AI Act's definition of high-risk systems. The EU AI Act identifies two categories of high-risk systems. The first category is AI systems intended to be used as safety components of products, or as products themselves. The second category is AI systems listed in an annex to the act and which present a risk to the health, safety, or fundamental rights of individuals. It remains to be seen how Canada would define high-impact systems, but the EU AI Act provides an indication of the direction the federal government could take.

Similarly, AIDA also defers to future regulations with respect to risk assessments, while the proposed EU AI Act sets out a graduated approach to risk in the body of the act. Under the EU AI Act, systems presenting an unacceptable level of risk are banned outright. In particular, the EU AI Act explicitly bans manipulative or exploitive systems that can cause harm, real-time biometric identification systems used in public spaces by law enforcement, and all forms of social scoring. AI systems presenting low or minimal risk are largely exempt from regulations, except for transparency requirements.

AIDA only imposes transparency requirements on high-impact AI systems, and does not stipulate an outright ban on AI systems presenting an unacceptable level of risk. It does, however, empower the Minister of Industry to order that a high-impact system presenting a serious risk of imminent harm cease being used.

AIDA's application is limited by the constraints of the federal government's jurisdiction. AIDA broadly applies to actors throughout the AI supply chain, from design to delivery, but only as their activities relate to international or interprovincial trade and commerce. AIDA does not expressly apply to intra-provincial development and use of AI systems. Government institutions (as defined under the Privacy Act) are excluded from AIDA's scope, as are products, services, and activities that are under the direction or control of specified federal security agencies.

The EU AI Act specifically applies to providers (although this may be interpreted broadly) and users of AI systems, including government institutions but excluding where AI systems are exclusively developed for military purposes. The EU AI Act also expressly applies to providers and users of AI systems insofar as the output produced by those systems is used in the EU.

AIDA is largely silent on requirements with respect to data governance. In its current form, it only imposes requirements on the use of anonymized data in AI systems, most of which will be elaborated in future regulations. AIDA's data governance requirements will apply to anonymized data used in the design, development, or use of any AI system, whereas the EU AI Act's data governance requirements will apply only to high-risk systems.

The EU AI Act sets the bar very high for data governance. It requires that training, validation, and testing datasets be free of errors and complete. In response to criticisms of this standard for being too strict, the European Parliament has introduced an amendment to the act that proposes to make error-free and complete datasets an overall objective to the extent possible, rather than a precise requirement.

While AIDA and the EU AI Act both set out requirements with respect to assessment, monitoring, transparency, and data governance, the EU AI Act imposes a much heavier burden on those responsible for high-risk AI systems. For instance, under AIDA, persons responsible for such systems will be required to implement mitigation, monitoring, and transparency measures. The EU AI Act goes a step further by putting high-risk AI systems through a certification scheme, which requires that the responsible entity conduct a conformity assessment and draw up a declaration of conformity before the system is put into use.

Both acts impose record-keeping requirements. Again, the EU AI Act is more prescriptive, but contrary to AIDA, its requirements will only apply to high-risk systems, whereas AIDA's record-keeping requirements would apply to all AI systems.

Finally, both acts contain notification requirements that are limited to high-impact (AIDA) and high-risk (EU AI Act) systems. AIDA imposes a slightly heavier burden, requiring notification for all uses that are likely to result in material harm. The EU AI Act only requires notification if a serious incident or malfunction has occurred.

Both AIDA and the EU AI Act provide for the creation of a new monitoring authority to assist with administration and enforcement. The powers attributed to these entities under both acts are similar.

Both acts contemplate significant penalties for violations of their provisions. AIDA's penalties for more serious offences (up to $25 million CAD or 5% of the offender's gross global revenues from the preceding financial year) are significantly greater than those found in Quebec's newly revised privacy law and the EU's General Data Protection Regulation (GDPR). The EU AI Act's most severe penalty is higher than both the GDPR's and AIDA's most severe penalties: up to €30 million or 6% of gross global revenues from the preceding financial year for non-compliance with prohibited AI practices or the quality requirements set out for high-risk AI systems.

In contrast to the EU AI Act, AIDA also introduces new criminal offences for the most serious offences committed under the act.

Finally, the EU AI Act would also grant discretionary power to Member States to determine additional penalties for infringements of the act.

While both AIDA and the EU AI Act have broad similarities, it is impossible to predict with certainty how similar they could eventually be, given that so much of AIDA would be elaborated in future regulations. Further, at the time of writing, Bill C-27 has only completed first reading, and is likely to be subject to amendments as it makes its way through Parliament.

It is still unclear how much influence the EU AI Act will have on AI regulations globally, including in Canada. Regulators in both Canada and the EU may aim for a certain degree of consistency. Indeed, many have likened the EU AI Act to the GDPR, in that it may set global standards for AI regulation just as the GDPR did for privacy law.

Regardless of the fates of AIDA and the EU AI Act, organizations should start considering how they plan to address a future wave of AI regulation.

For more information on the potential implications of the new Bill C-27, the Digital Charter Implementation Act, 2022, please see our bulletin, The Canadian Government Undertakes a Second Effort at Comprehensive Reform to Federal Privacy Law, on this topic.

[1] There have been a number of recent developments in AI regulation, including the United Kingdom's Algorithmic Transparency Standard, China's draft regulations on algorithmic recommendation systems in online services, the United States' Algorithmic Accountability Act of 2022, and the collaborative effort between Health Canada, the FDA and the United Kingdom's Medicines and Healthcare products Regulatory Agency to publish Guiding Principles on Good Machine Learning Practice for Medical Device Development.

[2] In the public sphere, the Directive on Automated Decision-Making guides the federal government's use of automated decision systems.

[3] This prohibition is subject to three exhaustively listed and narrowly defined exceptions where the use of such AI systems is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks: (1) the search for potential victims of crime, including missing children; (2) certain threats to the life or physical safety of individuals or a terrorist attack; and (3) the detection, localization, identification or prosecution of perpetrators or suspects of certain particularly reprehensible criminal offences.

[4] As an indication of potential changes, the Slovenian Presidency of the Council of the European Union tabled a proposed amendment to the act in November 2021 that would effectively narrow the scope of the regulation to machine learning.

Continued here:

The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act - Fasken

Australians have low trust in artificial intelligence and want it to be better regulated – The Conversation Australia

Every day we are likely to interact with some form of artificial intelligence (AI). It works behind the scenes in everything from social media and traffic navigation apps to product recommendations and virtual assistants.

AI systems can perform tasks or make predictions, recommendations or decisions that would usually require human intelligence. Their objectives are set by humans but the systems act without explicit human instructions.

As AI plays a greater role in our lives both at work and at home, questions arise. How willing are we to trust AI systems? And what are our expectations for how AI should be deployed and managed?

To find out, we surveyed a nationally representative sample of more than 2,500 Australians in June and July 2020. Our report, produced with KPMG and led by Nicole Gillespie, shows Australians on the whole don't know a lot about how AI is used, have little trust in AI systems, and believe it should be carefully regulated.

Trust is central to the widespread acceptance and adoption of AI. However, our research suggests the Australian public is ambivalent about trusting AI systems.

Nearly half of our respondents (45%) are unwilling to share their information or data with an AI system. Two in five (40%) are unwilling to rely on recommendations or other output of an AI system.

Further, many Australians are not convinced about the trustworthiness of AI systems: more are likely to perceive AI as competent than to believe it is designed with integrity and humanity.

Despite this, Australians generally accept (42%) or tolerate AI (28%), but few approve (16%) or embrace (7%) it.

When it comes to developing and using AI systems, our respondents had the most confidence in Australian universities, research institutions and defence organisations to do so in the public interest. (More than 81% were at least moderately confident.)

Australians have least confidence in commercial organisations to develop and use AI (37% no or low confidence). This may be due to the fact that most (76%) believe commercial organisations use AI for financial gain rather than societal benefit.

These findings suggest an opportunity for businesses to partner with more trusted entities, such as universities and research institutions, to ensure that AI is developed and deployed in an ethical and trustworthy way that protects human rights. They also suggest businesses need to think further about how they can use AI in ways that create positive outcomes for stakeholders and society more broadly.

Read more: Your questions answered on artificial intelligence

Overwhelmingly (96%), Australians expect AI to be regulated and most expect external, independent oversight. Most Australians (over 68%) have moderate to high confidence in the federal government and regulatory agencies to regulate and govern AI in the best interests of the public.

However, the current regulation and laws fall short of community expectations.

Our findings show the strongest driver of trust in AI is the belief that the current regulations and laws are sufficient to make the use of AI safe. However, most Australians either disagree (45%) or are ambivalent (20%) that this is the case.

These findings highlight the need to strengthen the regulatory and legal framework governing AI in Australia, and to communicate this to the public, to help them feel comfortable with the use of AI.

What do Australians expect when AI systems are deployed? Most of our respondents (more than 83%) have clear expectations of the principles and practices they expect organisations to uphold in the design, development and use of AI systems in order to be trusted.

These include:

high standards of robust performance and accuracy

data privacy, security and governance

human agency and oversight

transparency and explainability

fairness, inclusion and non-discrimination

accountability and contestability

risk and impact mitigation.

Read more: Will we ever agree to just one set of rules on the ethical development of artificial intelligence?

Most Australians (more than 70%) would also be more willing to use AI systems if there were assurance mechanisms in place to bolster standards and oversight. These include independent AI ethics reviews, AI ethics certifications, national standards for AI explainability and transparency, and AI codes of conduct.

Organisations can build trust and make consumers more willing to use AI systems, when they are appropriate, by clearly supporting and implementing ethical practices, oversight and accountability.

Most Australians (61%) report having a low understanding of AI, including low awareness of how and when it is used. For example, even though 78% of Australians report using social media, almost two in three (59%) were unaware that social media apps use AI. Only 51% report even hearing or reading about AI in the past year. This low awareness and understanding is a problem given how much AI is being used in our daily lives.

The good news is most Australians (86%) want to know more about AI. When we consider these factors together, there is a need and an appetite for a public literacy program in AI.

One model for this comes from Finland, where a government-backed course in AI literacy aims to teach more than 5 million EU citizens. More than 530,000 students have enrolled in the course so far.

Overall, our findings suggest public trust in AI systems can be improved by strengthening the regulatory framework for governing AI, living up to Australians expectations of trustworthy AI, and strengthening Australias AI literacy.

Read more: Your questions answered on artificial intelligence

Here is the original post:

Australians have low trust in artificial intelligence and want it to be better regulated - The Conversation Australia

Google’s latest AI experiment lets software autocomplete your doodles – The Verge

Google Brain, the search giant's internal artificial intelligence division, has been making substantial progress on computer vision techniques that let software parse the contents of hand-drawn images and then recreate those drawings on the fly. The latest release from the division's AI experiments series is a new web app that lets you collaborate with a neural network to draw doodles of everyday objects. Start with any shape, and the software will then auto-complete the drawing to the best of its ability using predictions and its past experience digesting millions of user-generated examples.

Google's AI is constantly improving thanks to human-drawn doodles

The software is called Sketch-RNN, and Google researchers first announced it back in April. At the time, the team behind Sketch-RNN revealed that the underlying neural net is being continuously trained using human-made doodles sourced from a different AI experiment first released back in November called Quick, Draw! That program asked human users to draw various simple objects from a text prompt, while the software attempted to guess what it was every step of the way. Another spinoff from Quick, Draw! is a web app called AutoDraw, which identified poorly hand-drawn doodles and suggested clean clip art replacements.

All of these programs improve over time as more people use them and keep feeding the AI learning mechanism instructive data. The end goal, it appears, is to teach Google software to contextualize real-world objects and then recreate them using its understanding of how the human brain draws connections between lines, shapes, and other image components. From there, Google could reasonably deploy even better versions of its existing image recognition tools, or perhaps even train future AI algorithms to help robots tag and identify their surroundings.

In the case of this new web app, users can now work alongside Sketch-RNN to see how well it takes a starting shape and transforms it into the desired object or thing you're trying to draw. For instance, select "pineapple" from the drop-down list of preselected subjects and start with just an oval. From there, Sketch-RNN attempts to make sense of the object's orientation and decides where to try and doodle in the fruit's thorny protruding leaves:

The image list is pretty diverse, with everything from fire hydrant to power outlet to the Mona Lisa. Sketch-RNN is also pretty hit or miss when it comes to more complicated drawings. This is the software trying its (virtual and disembodied) hand at doodling a roller coaster:

There are a number of other Sketch-RNN demos you can check out to get a deeper understanding of how the program functions. One, called Multiple Predict, lets Sketch-RNN generate numerous different versions of the same subject. For instance, when given a prompt to draw a mosquito, you just need to draw what looks like a thorax or abdomen and Sketch-RNN will take it from there, while showing you how else it predicts the image could be completed:

There are two other demos, titled Interpolation and Variational Auto-Encoder, that will have Sketch-RNN try to move between two different types of similar drawings in real time and also try to mimic your drawing with slight tweaks it comes up with on its own:

The whole set of programs is a fascinating look underneath the hood of the modern computer vision and image and object recognition tool sets tech companies have at their disposal. If you don't mind drawing crudely with a computer mouse or trackpad and have some free time on your hands, it's worth an afternoon trying to see how much better or demonstrably worse Sketch-RNN can make your doodles.
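
For readers curious about what "auto-completing a doodle" looks like in code, here is a rough, hypothetical sketch of the stroke-sequence idea behind models like Sketch-RNN: a drawing is a sequence of pen offsets, and the model keeps sampling the next stroke after the user's starting shape. The model interface (sample_next, is_end_of_sketch) is invented for illustration and is not Google's actual API.

```python
# Hypothetical autocomplete loop over a stroke-based drawing representation.
from dataclasses import dataclass
from typing import List

@dataclass
class Stroke:
    dx: float       # pen movement since the previous point
    dy: float
    pen_up: bool    # True when the pen lifts, ending the current line

def autocomplete(model, start: List[Stroke], max_steps: int = 200) -> List[Stroke]:
    """Feed the user's starting strokes to the model, then keep sampling."""
    drawing = list(start)
    for _ in range(max_steps):
        nxt = model.sample_next(drawing)    # hypothetical: predict one stroke
        drawing.append(nxt)
        if model.is_end_of_sketch(nxt):     # hypothetical end-of-drawing signal
            break
    return drawing

# A rough oval, the kind of starting shape the pineapple example begins with.
oval = [Stroke(5, 0, False), Stroke(3, 4, False), Stroke(-3, 4, False),
        Stroke(-5, 0, False), Stroke(-3, -4, False), Stroke(3, -4, True)]
```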

See the article here:

Google's latest AI experiment lets software autocomplete your doodles - The Verge

Bringing bots to life with AI – VentureBeat

Any video gamer knows how boring NPCs (non-playable characters) in digital worlds are. Their behavior is simple and predictable and their words entirely scripted by a staff of writers. This makes them uninteresting opponents and unsatisfying companions.

We're far more likely to emotionally attach to lifelike characters, like the emo robot sidekicks in the Star Wars franchise, but crafting believable, autonomous entities you can actually interact with is no easy feat.

Character models built by artificial intelligence aim to escape the uncanny valley and imbue inanimate objects and digital characters with an aura of realism and life. Normally this is accomplished by modeling CG (computer generated) characters after humans wearing sensors, but this tactic limits you to the actor's exact movements.

What if you want believable behavior that humans can't model, such as a zombie with a missing head and limbs? The DWANGO Artificial Intelligence Laboratory in Japan recently presented artificial intelligence technology that does precisely this to legendary animator Hayao Miyazaki of Studio Ghibli.

The creepy and grotesque realism of the demo prompted Miyazaki to proclaim the technology "an insult to life itself" and bemoan that we are nearing the end of times.

If you build AI that replaces drawing, you shouldn't be surprised to piss off a man who spent his entire life drawing.

Not everyone is as pessimistic about AI as Miyazaki. Brad Knox from the MIT Media Lab sees incredible potential for machine learning to create engaging, emotional, and authentic characters, robots, and toys. "I'm unaware of any NPCs or electronic toy characters that can sustain an illusion of life over more than an hour," says Knox, whose company Bots Alive creates exactly this illusion.

Their first offering is a smartphone kit that gives lifelike autonomy to the popular Hexbug Spider toy. This robot spider is normally controlled manually with a remote, but the Bots Alive kit gives the toy a brain to intelligently and autonomously navigate around obstacles while quirkily looking around and precociously bumping into things the same way a live spider might. If you pick up two robots, the two can play together either as friends or foes.

Knox and his team developed the robot's autonomous behavior by extending the machine learning technique called learning from demonstration, in which the system is trained on recorded examples of the desired behavior rather than on hand-coded rules. A minimal sketch of the idea follows.
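
As a hedged illustration of learning from demonstration in general (not Bots Alive's actual pipeline), the sketch below trains a tiny policy on made-up (observation, action) pairs and then uses it to pick a motor command.

```python
# Behavioral-cloning sketch with invented data; the feature layout and model
# choice are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical demonstrations: observation = [distance_to_obstacle, angle_to_target]
# action: 0 = forward, 1 = turn left, 2 = turn right
observations = np.array([[0.9,  0.0],
                         [0.8,  0.6],
                         [0.8, -0.6],
                         [0.15, 0.0]])
actions = np.array([0, 1, 2, 2])

policy = DecisionTreeClassifier(max_depth=3, random_state=0).fit(observations, actions)

# At play time, the learned policy maps what the robot "sees" to a command.
print(policy.predict([[0.95, 0.0]]))  # expected: [0] (drive forward)
```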

Bots Alive keeps the kit at an economical $35 by leveraging your smartphone processor and camera instead of expensive hardware sensors. The Hexbug Spider, not included, is an affordable $25 add-on which many robot enthusiasts already own. The total price tag is one third of the cost of Cozmo, another autonomous toy robot made by Anki, currently selling for $180 on Amazon.

Want to see for yourself whether intelligent autonomy enhances your play experience? Head over to the Bots Alive Kickstarter campaign to pick up your own kit.

From Tamagotchi to The Sims, we humans spend hours playing with and building emotional attachment to inanimate toys and digital characters. Now we have immersive VR games like Loading Human that feature complex emotional entanglements with NPCs.

For better or worse, making characters, robots, and toys more believable with artificial intelligence enhances their reality and thus our attachment to them. Knox expects that limitations with current machine learning methods, such as optimally sensing and encoding contextual information, will be improved by deep learning and new research.

Will we live harmoniously alongside lifelike robots and digital avatars, or will AI-powered characters bring about Miyazaki's end of times? We can only wait to see.

This article appeared originally at TopBots.

Go here to see the original:

Bringing bots to life with AI - VentureBeat

Forget quantum and AI security hype, just write bug-free code … – The Register

RSA USA Every year, the RSA Conference in San Francisco brings out the best and the brightest for its crypto panel, and the view from the floor was simple. Ignore the fads and hyped technology, and concentrate on the basics: good, clean, secure programming.

The panelists were unimpressed with recent moves to build artificially intelligent security systems, despite the success of programs like the DARPA Cyber Grand Challenge, saying it was too early to consider such systems reliable and warning that some may never be.

"I'm skeptical of AI on security," said Ronald Rivest, MIT Institute professor and the R in RSA. "Where we are seeing it becoming a wedge issue with the recent election is with AI bots in chat rooms. In 10 or 15 years you'll be competing to find a real human in a sea of chat bots."

His former colleague at RSA, Adi Shamir, currently the Borman professor of computer science at the Weizmann Institute, was similarly skeptical about AI systems in security. Attempting to train such a device could lead to interesting problems.

"Fifteen years from now we will give all data to AI systems, it will think, and [then] say that in order to save the internet I'll have to kill it," he semi-joked. "The internet is beyond salvaging; we need to start over with something better."

Some AI systems might be useful for IT defense, Shamir said, given the ability of computers to handle large volumes of data and check for anomalies. But you need a human touch to find zero-day flaws and attack using them, he opined.

Shamir was equally dismissive of quantum computing systems and quantum cryptography, saying it was "not on my list of worries." He was far more concerned about using large-scale computing to hack existing encryption algorithms.

Susan Landau, professor of cybersecurity policy at Worcester Polytechnic Institute, said she was worried about quantum systems. There hasn't been enough research into building quantum computing-proof algorithms, and the industry was missing a trick, she insisted.

Meanwhile Whitfield Diffie, one of the inventors of public key encryption, said that the issues facing the industry weren't going to be fixed by a magic AI or quantum bullet. Instead, the industry needs to go back to fundamentals, he suggested.

"If the resources spent on interactive security, such as firewalls and antivirus and the like, were spent on improvements in the logical functioning of devices and a big improvement in quality of programming, we would get much better results," Diffie said.

Continued here:

Forget quantum and AI security hype, just write bug-free code ... - The Register

Humans and AI will work together in almost every job, Parc CEO … – Recode

Artificial intelligence is poised to continue advancing until it is everywhere, and before it gets there, Tolga Kurtoglu wants to make sure it's trustworthy.

Kurtoglu is the CEO of Parc, the iconic Silicon Valley research and development firm previously known as Xerox Parc. Although it's best known for its pioneering work in the early days of computing, developing technologies such as the mouse, object-oriented programming and the graphical user interface, Parc continues to help companies and government agencies envision the future of work.

"A really interesting project that we're working on is about how to bring together these AI agents, or computational agents, and humans, in a way that they form sort of collaborative teams, to go after tasks," Kurtoglu said on the latest episode of Recode Decode, hosted by Kara Swisher. "And robotics is a great domain for exploring some of the ideas there."

Whereas today you might be comfortable asking Apple's Siri for the weather or telling Amazon's Alexa to add an item to your to-do list, Kurtoglu envisions a future where interacting with a virtual agent is a two-way street. You might still give it commands and questions, but it would also talk back to you in a truly conversational way.

"What we're talking about here is more of a symbiotic team between an AI agent and a human," he said. "They solve the problems together; it's not that one of them tells the other what to do. They go back and forth. They can formulate the problem, they can build on each other's ideas. It's really important because we're seeing significant advancements and penetration of AI technologies in almost all industries."

You can listen to Recode Decode on Apple Podcasts, Google Play Music, Spotify, TuneIn, Stitcher and SoundCloud.

Kurtoglu believes that both in our personal lives and in the office, every individual will be surrounded by virtual helpers that can process data and make recommendations. But before artificial intelligence reaches that level of omnipresence, it will need to get a lot better at explaining itself.

"At some point, there is going to be a huge issue with people really taking the answers that the computers are suggesting to them without questioning them," he said. "So this notion of trust between the AI agents and humans is at the heart of the technology we're working on. We're trying to build trustable AI systems."

"So, imagine an AI system that explains itself," he added. If you're using an AI to do medical diagnostics and it comes up with a seemingly unintuitive answer, then the doctor might want to know: Why? Why did you come up with that answer as opposed to something else? And today, these systems are pretty much black boxes: You put in the input, and it just spits out what the answer is.

So, rather than just spitting out an answer, Kurtoglu says virtual agents will explain what assumptions they made and how they used those assumptions to reach a conclusion: here are the paths I've considered, here are the paths I've ruled out, and here's why.
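
The "show your paths" idea already exists in simple form for interpretable models. Below is a hedged sketch using scikit-learn's decision_path on a decision tree trained on a public diagnostic dataset; it is meant only to illustrate the kind of trace Kurtoglu describes, not Parc's technology, and real diagnostic systems are far more complex.

```python
# Illustrative only: print the decision path a tree followed for one sample.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

data = load_breast_cancer()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

sample = data.data[:1]
node_path = clf.decision_path(sample).indices   # nodes visited for this sample
tree = clf.tree_

print("Prediction:", data.target_names[clf.predict(sample)[0]])
for node in node_path:
    if tree.children_left[node] == tree.children_right[node]:
        continue  # leaf node: no test applied here
    name = data.feature_names[tree.feature[node]]
    threshold = tree.threshold[node]
    went_left = sample[0, tree.feature[node]] <= threshold
    print(f"  checked {name} <= {threshold:.2f} -> {'yes' if went_left else 'no'}")
```

More powerful models need heavier machinery, such as surrogate models or attribution methods, but the goal is the same: surface the assumptions and the paths that were considered.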

If you like this show, you should also sample our other podcasts:

If you like what were doing, please write a review on Apple Podcasts and if you dont, just tweet-strafe Kara.

Continued here:

Humans and AI will work together in almost every job, Parc CEO ... - Recode

New AI tech to bridge the culture gap in organisations: IT experts – BusinessLine

Digital transformation (DX) is set to bridge the culture gap, with DX requiring a new level of collaboration between business leaders, employees, and IT staff, according to IT experts.

In 2020, a cultural shift and collaborative mentality will become just as important as the technology itself, said Don Schuerman, CTO, VP product marketing, Pegasystems.

"Organisations will look at the DX culture and ramp up efforts to ensure that DX is optimised for success. Expect traditional organisational boundaries between IT and business lines to start breaking down, and new roles like citizen developer and AI Ethicist that blend IT and business backgrounds to grow," he added.

Mankiran Chowhan, Managing Director, Indian Subcontinent, SAP Concur, noted that as we move towards the fourth industrial revolution, workers looking to save time will kick demand for AI into overdrive, and in 2020, workplace changes related to AI will become a noticeable trend.

A recent PwC report revealed that 67 per cent would prefer AI assistance over humans as office assistants. Band-aid transformation is also expected to lose out to deeper DX efforts. "Offering consumers a slick interface or a cool app only scratches the surface of a true digital transformation," said Pegasystems' Schuerman. He added that next year is bound to witness visible failures of organisations and projects that do not take their transformation efforts below the surface.

AI is also expected to move out of the lab. "Rubber will truly meet the road, with DX tech, which has been in a constant state of being in the labs, moving out," explains Schuerman.

While societal tension around AI will continue, Chowhan said that workers' openness to automation will incrementally drive change. "For example, millennials, who now represent the majority of workers, are instinctively comfortable using AI. As consumers, they are more likely to approve of AI-provided customer support, automated product recommendations, and even want AI to enhance their experience watching sports," he said.

AI and emotional intelligence are expected to converge. Customers are individuals with similar needs: to feel important, heard and respected. As a result, empathetic AI is increasingly applied in advertising, customer service, and to measure how engaged a customer is in their journey.

A report from Accenture showed that AI has the potential to add $957 billion, or 15 per cent of India's current gross value added, to the economy in 2035. Chowhan said that in 2020, this trend will kick into gear, with more technology companies infusing empathy into their AI.

"As companies use empathetic AI to bring more of the benefits of advanced technology to life, they will instill more trust, create better user experiences, and deliver higher productivity," said the SAP Concur official.

Machine learning (ML) is also expected to move from a novelty to a routine function. "In 2020, ML will be less of a novelty, as it proliferates under the hood of technology services everywhere, especially behind everyday workflows," said Chowhan. Apart from that, data is expected to move from an analytical to a decision-making tool.

"In 2020, the shift to leveraging data for real-time decision-making will accelerate for a number of business functions," he added, noting that in the coming years, more organisations will start to realise the potential of their data to intelligently guide business decisions and leverage them to reach even greater levels of success.

Dave Russell, Vice-President of Enterprise Strategy at Veeam Software, noted that all applications will become mission-critical. The number of applications that businesses classify as mission-critical will rise during 2020, paving the way to a landscape in which every app is considered a high-priority, as businesses become completely reliant on their digital infrastructure.

A Veeam Cloud Data Management report showed IT decision-makers saying their business can tolerate two hours of downtime for mission-critical apps.

Application downtime costs organisations $20.1 million globally in lost revenue and productivity annually, he said.

Visit link:

New AI tech to bridge the culture gap in organisations: IT experts - BusinessLine

Artificial Intelligence in Fintech – Global Market Growth, Trends and Forecasts to 2025 – Assessment of the Impact of COVID-19 on the Industry -…

DUBLIN--(BUSINESS WIRE)--The "AI in Fintech Market - Growth, Trends, Forecasts (2020-2025)" report has been added to ResearchAndMarkets.com's offering.

The global AI in Fintech market was estimated at USD 6.67 billion in 2019 and is expected to reach USD 22.6 billion by 2025. The market is also expected to witness a CAGR of 23.37% over the forecast period (2020-2025).

Artificial Intelligence improves results by applying methods derived from the aspects of human intelligence but beyond human scale. The computational arms race since the past few years has revolutionized the fintech companies. Further, data and the near-endless amounts of information are transforming AI to unprecedented levels where smart contracts will merely continue the market trend.

Key Highlights

Major Market Trends

Quantitative and Asset Management to Witness Significant Growth

North America Accounts for the Significant Market Share

Competitive Landscape

The AI in Fintech market is moving towards fragmentation owing to the presence of many global players. Further, various acquisitions and collaborations among large companies, focused on innovation, are expected to take place shortly. Some of the major players in the market are IBM Corporation, Intel Corporation, and Microsoft Corporation, among others.

Some recent developments in the market are:

Key Topics Covered

1 INTRODUCTION

1.1 Study Deliverables

1.2 Scope of the Study

1.3 Study Assumptions

2 RESEARCH METHODOLOGY

3 EXECUTIVE SUMMARY

4 MARKET DYNAMICS

4.1 Market Overview

4.2 Industry Attractiveness - Porter's Five Force Analysis

4.2.1 Bargaining Power of Suppliers

4.2.2 Bargaining Power of Buyers/Consumers

4.2.3 Threat of New Entrants

4.2.4 Threat of Substitute Products

4.2.5 Intensity of Competitive Rivalry

4.3 Emerging Use-cases for AI in Financial Technology

4.4 Technology Snapshot

4.5 Introduction to Market Dynamics

4.6 Market Drivers

4.6.1 Increasing Demand for Process Automation Among Financial Organizations

4.6.2 Increasing Availability of Data Sources

4.7 Market Restraints

4.7.1 Need for Skilled Workforce

4.8 Assessment of Impact of COVID-19 on the Industry

5 MARKET SEGMENTATION

5.1 Offering

5.1.1 Solutions

5.1.2 Services

5.2 Deployment

5.2.1 Cloud

5.2.2 On-premise

5.3 Application

5.3.1 Chatbots

5.3.2 Credit Scoring

5.3.3 Quantitative and Asset Management

5.3.4 Fraud Detection

5.3.5 Other Applications

5.4 Geography

5.4.1 North America

5.4.2 Europe

5.4.3 Asia-Pacific

5.4.4 Rest of the World

6 COMPETITIVE LANDSCAPE

6.1 Company Profiles

6.1.1 IBM Corporation

6.1.2 Intel Corporation

6.1.3 ComplyAdvantage.com

6.1.4 Narrative Science

6.1.5 Amazon Web Services Inc.

6.1.6 IPsoft Inc.

6.1.7 Next IT Corporation

6.1.8 Microsoft Corporation

6.1.9 Onfido

6.1.10 Ripple Labs Inc.

6.1.11 Active.ai

6.1.12 TIBCO Software (Alpine Data Labs)

6.1.13 Trifacta Software Inc.

6.1.14 Data Minr Inc.

6.1.15 Zeitgold GmbH

7 INVESTMENT ANALYSIS

8 MARKET OPPORTUNITIES AND FUTURE TRENDS

For more information about this report visit https://www.researchandmarkets.com/r/y1fj00

View original post here:

Artificial Intelligence in Fintech - Global Market Growth, Trends and Forecasts to 2025 - Assessment of the Impact of COVID-19 on the Industry -...