Category Archives: Ai

AI and maths to play bigger role in global diplomacy, says expert – The Guardian

Posted: October 15, 2021 at 9:05 pm

International diplomacy has traditionally relied on bargaining power, covert channels of communication and personal chemistry between leaders. But a new era is upon us in which the dispassionate insights of AI algorithms and mathematical techniques such as game theory will play a growing role in deals struck between nations, according to the co-founder of the world's first centre for science in diplomacy.

Michael Ambühl, a professor of negotiation and conflict management and former chief Swiss-EU negotiator, said recent advances in AI and machine learning mean that these technologies now have a meaningful part to play in international diplomacy, including at the Cop26 summit starting later this month and in post-Brexit deals on trade and immigration.

"These technologies are partially already used and it will be the intention to use them more," said Ambühl. "Everything around data science, artificial intelligence, machine learning: we want to see how it can be made beneficial for multilateral or bilateral diplomacy."

The use of AI in international negotiations is at an early stage, he said, citing the use of machine learning to assess the integrity of data and detect fake news to ensure the diplomatic process has reliable foundations. In the future, these technologies could be used to identify patterns in economic data underpinning free trade deals and help standardise some aspects of negotiations.

The Lab for Science in Diplomacy, a collaboration between ETH Zürich, where Ambühl is based, and the University of Geneva, will also focus on "negotiation engineering", where existing mathematical techniques such as game theory are used either to help frame a discussion, or to play out different scenarios before engaging in talks.

These tools are not new. Game theory was developed in the 1920s by the Hungarian-American mathematician John von Neumann, initially to formalise the concept of bluffing in poker and later used to weigh up nuclear strike scenarios during the cold war. However, until recently such techniques fell out of mainstream use, not due to a lack of technology but rather a lack of knowledge, according to Ambühl. "Diplomats are not that accustomed to it."

But as the world becomes more tech- and data-savvy, those who ignore quantitative methods risk selling themselves short. Ambühl said that, as Switzerland's chief EU negotiator, he ran a game theory simulation ahead of the talks that led to Switzerland joining the Schengen area and a raft of agreements with the EU on tax, trade and security. The analysis indicated that it was in Switzerland's interest for the negotiations to take place as a package rather than sequentially, and so the Swiss government insisted on this as a basis for talks.

"Did the EU do their own analysis? I don't think so," said Ambühl. "We didn't tell them that we did game theory."
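The intuition behind preferring a package deal can be sketched with a toy payoff model. All issue names and numbers below are invented for illustration and bear no relation to the actual Swiss analysis: the point is only that when each side values the issues differently, bundling lets both concede where they care least.

```python
# Toy illustration (all payoff numbers invented) of why negotiating issues
# as a package can beat negotiating them one by one.

swiss_value = {"tax": 3, "security": 9}  # hypothetical utilities
eu_value = {"tax": 8, "security": 2}

# Sequential talks: each issue settled in isolation, split down the middle.
swiss_sequential = sum(v / 2 for v in swiss_value.values())  # 6.0
eu_sequential = sum(v / 2 for v in eu_value.values())        # 5.0

# Package talks: logrolling lets each side win the issue it values most.
swiss_package = swiss_value["security"]  # 9
eu_package = eu_value["tax"]             # 8

# Both sides come out ahead of the issue-by-issue split.
print(swiss_package > swiss_sequential and eu_package > eu_sequential)  # True
```

Under these assumed payoffs the package dominates for both sides, which is exactly the kind of result a pre-negotiation simulation is meant to surface.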

Taking a mathematical approach can also help de-emotionalise underlying conflicts, according to Ambühl. He cites talks between Iran and the P5+1 countries in Geneva in 2005, where, as facilitator, he came up with a mathematical formula for the rate at which Iran would reduce its number of nuclear centrifuges. "When we presented the idea it was, 'Now let's talk about the size of this gradient, alpha, which is between 0 and 1,'" he said. "You discuss it on a more technical level."
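The kind of formula described can be sketched as a simple geometric reduction whose rate is set by the gradient alpha. The functional form and figures here are assumptions for illustration; the actual formula used in the talks is not given in the article.

```python
# Illustrative sketch (assumed functional form): a gradient alpha in (0, 1)
# sets how fast the centrifuge count falls; each round keeps (1 - alpha).

def reduce_centrifuges(n0: float, alpha: float, rounds: int) -> list[float]:
    """Return the projected centrifuge count after each round."""
    counts = [float(n0)]
    for _ in range(rounds):
        counts.append(counts[-1] * (1 - alpha))
    return counts

# A larger alpha means a steeper reduction schedule.
print(reduce_centrifuges(1000, 0.5, 3))  # [1000.0, 500.0, 250.0, 125.0]
```

Framing the dispute this way turns a political standoff into a discussion about a single technical parameter, which is the de-emotionalising effect Ambühl describes.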

Can deep-rooted political issues really be distilled down into a gradient on a curve? Ambühl said that this misses the point, which is to crystallise what is under negotiation, not to offer a fully formed solution. "It's not about making a technical agreement," he said. "It's a political question, but you break it down. You divide it into problems and sub-problems and sub-sub-problems."

A more scientific approach does not mean ditching traditional methods. "I'm not pretending you can only negotiate well if you do it this way," he said. "It still depends very much on other factors, like how much bargaining power you have, whether you have a charming negotiator, whether you have a PM behind you who supports tough negotiations and how well you have prepared."

Are there risks that any of these new approaches could backfire, with rival AIs escalating conflicts or arriving at diplomatic solutions that are mathematically optimal, but have disastrous real-world consequences?

"You're not going to war only because a blind algorithm decides it; it goes without saying that this would be idiotic," said Ambühl. "It's always only a decision tool."

"You cannot just blindly rely on it, but you also cannot blindly rely on the gut feeling of these politicians," he added. "You have to make a clever combination of new technologies and the political analysis."


Spot AI emerges from stealth with $22M for a platform to draw out more intelligence from organizations' basic security videos – TechCrunch


Security cameras, for better or for worse, are part and parcel of how many businesses monitor spaces in the workplace for security or operational reasons. Now, a startup is coming out of stealth with funding for tech designed to make the video produced by those cameras more useful. Spot AI has built a software platform that reads that video footage, regardless of the type or quality of camera it was created on, and makes it searchable by anyone who needs it, both by words and by the images in the frames shot by the cameras.

Spot AI has been quietly building its technology and customer base since 2018, and already has hundreds of customers and thousands of users. Notably, its customers reach well beyond tech early adopters, spanning from SpaceX to transportation company Cheeseman, Mixt and Northland Cold Storage.

Now that Spot AI is releasing its product more generally, it is disclosing $22 million in funding: a $20 million Series A led by Redpoint Ventures with Bessemer Venture Partners also participating, and a previous $2 million seed round from angels, Village Global and Stanford StartX (where the three founders studied). Other investors are not being disclosed.

The gap in the market that Spot AI is aiming to fill is the one created by some of the more legacy technology used by organizations today: a huge number of security cameras (estimated in 2019 at 70 million in the U.S. alone, although that figure also includes public video surveillance) are in use in the workplace today, usually set up around entrances to buildings, in office buildings themselves, in factories and other campus environments and so on, both to track the movement of people and the state of inanimate objects and locations used by the business (for example, machines, doorways and rooms).

The issue is that many of these cameras are very old, analogue set-ups; and whether they are older or newer hardware, the video that is produced on them is of a very basic nature. It's there for single-purpose uses; it is not indexable, and older video gets erased; and often it doesn't even work as it is supposed to. Indeed, security cam footage is neglected enough that people usually only realize how badly something works, or didn't work at all, when they actually need to see some footage (only to discover it is not there). And some of the more sophisticated solutions that do exist are very expensive and unlikely to be adopted quickly by the wider market of very non-tech, analogue companies.

On top of all this, security cameras have a very bad rap, not helped by their multifaceted, starring role in video surveillance systems. Backlash happens both because of how they get used in public environments (perhaps in the name of public safety, but still there as quiet observers and recorders of everything we do, whether we want them there or not) and because of how private security video footage gets appropriated after being recorded. Some of that is intentional, such as when Amazon's Ring has shared footage with police. And some is unintentional: see the disclosure of hackers accessing and posting video from another startup building video systems for enterprises, Verkada.

Spot AI is entering the above market with all good intentions, CEO and co-founder Tanuj Thapliyal said in an interview. The startup's theory is that security cameras are already important, and the point is to figure out how to use them better, for more productive purposes that can cover not just security but also health and safety and operations working as they should.

"If you make the video data [produced by these cameras] more useful and accessible to more people in the workplace, then you transform it from this idea of surveillance to the idea of video intelligence," said Thapliyal, who co-founded the company with Rish Gupta and Sud Bhatija. "It can help you make all sorts of important decisions." Its ethos seems to come out of the idea that these cameras are here, so we need to find ways of using them more effectively and responsibly.

The Spot AI system currently comes in three parts. The first is a set of cameras that Spot AI offers to any of its customers free of charge, currently theirs to keep even if the customer decides to stop working with Spot AI. These cameras, 5MP IP-based devices, are designed to upgrade the quality of video feeds, although Thapliyal points out that the Spot AI system can work with footage from any camera at all if necessary.

The second part is a network video recorder that captures video from all of the cameras you have deployed. These are edge computers, fitted with AI chips, that process and begin reading and categorizing the video that is captured, turning it into data that can then be searched through the third part of Spot AI's system.

That third part is a dashboard that lets users both search through a company's video troves by keywords or processes and create frames and alerts on current streams to flag when something has occurred in the frame (for example, a door opening, a person entering a space, or even something not working as it should).
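As a rough sketch of the alert idea described above: each detected event carries a label and a camera zone, and a rule fires when a labelled event appears in a watched zone. The event fields and rule format here are invented for illustration; the article does not describe Spot AI's actual API.

```python
# Hypothetical sketch of label-plus-zone alert rules over a detection stream.
from dataclasses import dataclass


@dataclass
class Event:
    label: str  # e.g. "person", "door_open" (invented labels)
    zone: str   # e.g. "loading_dock" (invented zone names)


def matching_alerts(events, rules):
    """Return the (label, zone) rules triggered by the event stream."""
    fired = []
    for rule in rules:
        if any(e.label == rule[0] and e.zone == rule[1] for e in events):
            fired.append(rule)
    return fired


stream = [Event("person", "loading_dock"), Event("door_open", "office")]
rules = [("person", "loading_dock"), ("forklift", "warehouse")]
print(matching_alerts(stream, rules))  # [('person', 'loading_dock')]
```

The same rule shape covers the examples in the text: a door opening, a person entering a space, or a machine zone going quiet.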

The idea is that this part of the video service will become more sophisticated over time (and indeed more features are being added even in the move from stealth into GA). While there are a number of IoT plays out there designed to help monitor connected devices, the pitch with Spot AI is that it can be more attuned to how things are moving about in physical spaces, regardless of whether they involve connected devices or not.

I asked Thapliyal about the security issues reported at Verkada: both the incident involving malicious hackers earlier this year and an accusation, going back years, about how some employees at the company itself abused its video systems. The company is close enough in targeting similar markets (and coincidentally both have a connection to Meraki, a WiFi tech company acquired by Cisco, in that both were founded by ex-Meraki employees) that I couldn't help but wonder how Spot AI might insulate itself from similar issues, something that customers presumably also ask.

"Verkada sells hardware, and their cloud software only works with their hardware," he responded, adding that "they're pretty expensive; up to a few thousand dollars per camera." (Spot AI's cameras are free, while the deployments begin at $2,200.) "They also sell more hardware for building security like access control, environmental sensors, etc.," he added, calling it "a terrific set of products with terrific software."

But, he noted, "We're not in the hardware business. Our only focus is to make video easier to access and use. We only charge for software, and give away all camera hardware for free if customers want them. Our bet is that if we can help more customers get more value out of video, then we can earn more of their business through a software subscription."

And on security, the company's concept is very different, built around a zero-trust architecture that silos access between customers and requires multi-factor authentication for any systems access, he said.

"Like other technology companies, we are always reviewing, challenging and improving our cybersecurity. Our goal is to provide a great web dashboard, and let customers choose what's best for them. For example, cloud backup of video is an optional feature that customers can opt into, at no extra charge. The product fully works with private and local storage, already included in the subscription. This is particularly helpful for healthcare customers that have HIPAA requirements."

It's good to see the company having a position, and a product set, that aims to address issues around security. The proof will be in the pudding, and it remains predicated on the basic idea of video surveillance being something that can be used without being abused. That could make it a non-starter for many. It's worth pointing out that, for now, Spot AI has no intention of selling to public safety or government: its focus is solely on private enterprises and the opportunity to rethink their security camera investments and usage.

And indeed, it seems that for investors, the main message is how the company has created a tech platform with enough utility that it is finding traction in as wide a way as possible, including with non-tech customers.

"There is a flood of new users and companies driving daily decisions using their cameras. In an industry crowded with legacy vendors, Spot AI's software-focused model is by far the simplest choice for customers," said Tomasz Tunguz, MD at Redpoint Ventures, in a statement.

"Today, only the world's biggest businesses have access to proprietary AI camera systems, while most small and midsize businesses are left behind," added Byron Deeter, a partner at Bessemer Venture Partners. "Spot AI's easy-to-use technology is accelerating the consumption of video data across all businesses, big and small."


Spice AI wants to help developers build smarter applications – TechCrunch


Spice AI, a Seattle-based startup that aims to make it significantly easier for developers to leverage AI in their applications, today announced that it has raised a $1 million seed funding round.

That's obviously not a huge round, but the investors will likely make you perk up a bit: Madrona Venture Group, Picus Capital, TA Ventures and angels like GitHub CEO Nat Friedman and Microsoft Azure CTO Mark Russinovich. And the team behind the platform also has serious credentials: CEO Luke Kim spent a decade at Microsoft, where he co-created the Incubations team at Azure and led the engineering work to create Dapr, while CTO Phillip LeBlanc worked on Azure Active Directory, Visual Studio App Center and GitHub Actions.

The team argues that even today, building AI into an application is still far too hard. During his time at Microsoft, Kim started working on a personal project that focused on neurofeedback. To make this kind of therapy more accessible, he wanted to build an AI system that could analyze the time series data from an EEG and, in the process, he realized how difficult building systems like this still is.

"It was super hard," he said. "It's funny. Because I was at Microsoft, I had all the resources. And I was on this side project: no resources. And in both cases, I saw that people were struggling with integrating true AI/ML in their applications."

He noted that while there has been tremendous progress in AI in the last decade, there is still a wide gap between taking that progress and building intelligent software.

Image Credits: SpiceAI

"I think about it like the last mile. The fiber infrastructure got built up, but actually connecting it to your house took a long time. This is the theme I see for really using ML in applications. We're looking to really fill that gap and make it super easy for developers," Kim said.

He noted that in building Spice AI, the team took a lot of what they learned from Dapr, but also looked at what Vercel is doing with Next.js, for example.

Now, all of this may sound a bit familiar. After all, there's a plethora of startups out there that want to democratize AI. But Kim argues that most of them simply focus on making AI available to anyone, which makes data analytics and business intelligence easier and more accessible to more people. Spice AI, however, wants to help developers integrate AI into their applications. Unsurprisingly, that means the company's target audience is professional developers, not data science teams.

One interesting aspect of how Spice AI is building its system is that it focuses on reward functions. The idea here is that developers specify what the algorithm should optimize for. If the application controls an air-conditioning system, for example, the outcome to optimize for would be lower electricity usage. In a project the company is trialing with an Australian retailer, the focus is on finding the ideal pickup location for a customer's order, which may not always be the closest location, depending on variables like drive times, item availability, etc.
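The reward-function idea can be sketched like this; the weights, fields and location data are invented stand-ins for the retailer's real variables, not Spice AI's actual code. The developer's only job is to say what "good" looks like, and the learner optimizes for that score.

```python
# Hypothetical sketch of a developer-supplied reward function for picking
# a pickup location. All weights and fields are invented for illustration.

def pickup_reward(drive_minutes: float, item_in_stock: bool,
                  wait_minutes: float) -> float:
    """Higher is better: penalize drive time and waiting, require stock."""
    if not item_in_stock:
        return -100.0  # an unusable location
    return -(drive_minutes + 0.5 * wait_minutes)

locations = [
    {"name": "downtown", "drive": 5, "stock": False, "wait": 0},
    {"name": "suburb", "drive": 12, "stock": True, "wait": 4},
    {"name": "outlet", "drive": 20, "stock": True, "wait": 0},
]
best = max(locations,
           key=lambda l: pickup_reward(l["drive"], l["stock"], l["wait"]))
print(best["name"])  # suburb
```

Note how the closest location (downtown) loses because the item is out of stock, matching the article's point that the ideal pickup spot is not always the nearest one.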

The company is also building a package manager (called Spicerack) that will allow developers to publish manifests with their reward functions so that others can reuse them for their own use cases.

Like similar projects, the Spice AI team is launching its idea as an open source project. The idea is to then later release a commercial version with enterprise support, but the team is also thinking about a hosted version, as well as private registries to let enterprises host their models (the company calls these Spicepods).

"Madrona has been investing in intelligent applications for nearly a decade, and we are excited by Luke and Phillip's vision to seamlessly bring AI development into existing workflows for developers to accelerate and build high-quality applications," said Madrona Venture Group partner Aseem Datar, who until recently was the GM/COO for the Microsoft Cloud. "This is just the beginning, and I am excited to be on this journey together and work with such a talented team from day one to make it real."


Beethoven’s Unfinished 10th Symphony Brought to Life by Artificial Intelligence – Scientific American


Teresa Carey: This is Scientific American's 60-Second Science. I'm Teresa Carey.

Every morning at five o'clock, composer Walter Werzowa would sit down at his computer to anticipate a particular daily e-mail. It came from six time zones away, where a team had been working all night (or day, rather) to draft Beethoven's unfinished 10th Symphony, almost two centuries after his death.

The e-mail contained hundreds of variations, and Werzowa listened to them all.

Werzowa: So by nine, 10 o'clock in the morning, it's like, I'm already in heaven.

Carey: Werzowa was listening for the perfect tune, a sound that was unmistakably Beethoven.

But the phrases he was listening to weren't composed by Beethoven. They were created by artificial intelligence, a computer simulation of Beethoven's creative process.

Werzowa: There were hundreds of options, and some are better than others. But then there is that one which grabs you, and that was just a beautiful process.

Carey: Ludwig van Beethoven was one of the most renowned composers in Western music history. When he died in 1827, he left behind musical sketches and notes that hinted at a masterpiece. There was barely enough to make out a phrase, let alone a whole symphony. But that didn't stop people from trying.

In 1988 musicologist Barry Cooper attempted it. But he didn't get beyond the first movement. Beethoven's handwritten notes on the second and third movements are meager, not enough to compose a symphony.

Werzowa: A movement of a symphony can have up to 40,000 notes. And some of his themes were three bars, like 20 notes. It's very little information.

Carey: Werzowa and a group of music experts and computer scientists teamed up to use machine learning to create the symphony. Ahmed Elgammal, the director of the Art and Artificial Intelligence Laboratory at Rutgers University, led the AI side of the team.

Elgammal: When you listen to music read by AI to continue a theme of music, usually it's a very short few seconds, and then they start diverging and becoming boring and not interesting. They cannot really take that and compose a full movement of a symphony.

Carey: The team's first task was to teach the AI to think like Beethoven. To do that, they gave it Beethoven's complete works, his sketches and notes. They taught it Beethoven's process, like how he went from those iconic four notes to his entire Fifth Symphony.

[CLIP: Notes from Symphony no. 5]

Carey: Then they taught it to harmonize with a melody, compose a bridge between two sections and assign instrumentation. With all that knowledge, the AI came as close to thinking like Beethoven as possible. But it still wasn't enough.

Elgammal: The way music generation using AI works is very similar to the way, when you write an e-mail, you find that the e-mail thread predicts what the next word is for you or what the rest of the sentence is for you.

Carey: But let the computer predict your words long enough and, eventually, the text will sound like gibberish.

Elgammal: It doesn't really generate something that can continue for a long time and be consistent. So that was the main challenge in dealing with this project: How can you take a motif or a short phrase of music that Beethoven wrote in his sketch and continue it into a segment of music?

Carey: That's where Werzowa's daily e-mails came in. On those early mornings, he was selecting what he thought was Beethoven's best. And, piece by piece, the team built a symphony.

Matthew Guzdial researches creativity and machine learning at the University of Alberta. He didn't work on the Beethoven project, but he says that AI is overhyped.

Guzdial: Modern AI, modern machine learning, is all about just taking small local patterns and replicating them. And it's up to a human to then take what the AI outputs and find the genius. The genius wasn't there. The genius wasn't in the AI. The genius was in the human who was doing the selection.

Carey: Elgammal wants to make the AI tool available to help other artists overcome writer's block or boost their performance. But both Elgammal and Werzowa say that the AI shouldn't replace the role of an artist. Instead, it should enhance their work and process.

Werzowa: Like every tool, you can use a knife to kill somebody or to save somebody's life, like with a scalpel in a surgery. So it can go any way. If you look at the kids, like kids are born creative. It's like everything is about being creative, creative and having fun. And somehow we're losing this. I think if we could sit back on a Saturday afternoon in our kitchen and, because maybe we're a little bit scared to make mistakes, ask the AI to help us to write us a sonata, song or whatever, in teamwork, life will be so much more beautiful.

Carey: The team released the 10th Symphony over the weekend. When asked who gets credit for writing it, Beethoven, the AI or the team behind it, Werzowa insists it is a collaborative effort. But, suspending disbelief for a moment, it isn't hard to imagine that we're listening to Beethoven once again.

Werzowa: I dare to say that nobody knows Beethoven as well as the AI did, as well as the algorithm. I think music, when you hear it, when you feel it, when you close your eyes, it does something to your body. Close your eyes, sit back and be open for it, and I would love to hear what you felt after.

Carey: Thanks for listening. For Scientific American's 60-Second Science, I'm Teresa Carey.

[The above text is a transcript of this podcast.]


New Clemson-MUSC partnership adds power of artificial intelligence to health care – Medical University of South Carolina


Clemson University and the Medical University of South Carolina have joined forces to harness the power of artificial intelligence to improve health care in South Carolina.

Artificial intelligence, or AI, has the power to sift through the billions of chemical and electrical signals in the brain to differentiate a simple blink of an eye from an abnormality that may help diagnose neurological diseases like Alzheimer's, for example. AI can analyze complex medical images to detect early signs of a tumor. Or it could predict a stroke or early onset cancer.

These are a few examples of limitless potential, but to put the power of AI to use in health care, research collaborations between AI experts, medical researchers and clinicians are essential. The new Clemson-MUSC AI Hub aims to build those collaborations, invest in promising research projects and equip researchers with the knowledge, tools and experts to apply AI to their work.

The Clemson-MUSC AI Hub features two key components: an AI Advocates cohort of experts and an Augmentation Grant program to invest in interdisciplinary research. Additionally, the leadership team plans to hold an AI Summit in January for researchers to share ideas and form collaborations.

The Clemson-MUSC AI Hub is led by Brian Dean, professor and chair of the Division of Computer Science at Clemson; Christopher McMahan, an associate professor in the School of Mathematical and Statistical Sciences at Clemson; and Hamilton Baker, M.D., pediatric cardiologist at MUSC Health.

"We want to be a catalyst between the fields of public health, medical research and AI and machine learning to advance science," said McMahan, who is using AI to understand how genetics impacts addiction, in hopes of developing customizable individual treatment plans.

"The reality is we use AI in lots of places already; most people just don't realize it," said Baker, who currently uses AI to study language processing. "There are so many projects out there that could benefit tremendously from what we're doing."

The AI Hub will help grow the baseline of research teams conducting AI medical research. "Recruiting faculty will be very important," Dean said. "If you are involved in research that is generating a ton of data, AI is something you should consider, particularly if key patterns of interest in the data are very subtle. Modern advancements in AI can be game changers."

The online Clemson-MUSC AI Hub will include details on both programs, a list of upcoming events, tutorials and other information.


China’s AI dominance worries Pentagon and other US officials – Vox.com


The Pentagon's first-ever chief software officer abruptly quit earlier this month, and now we know exactly why: Nicolas Chaillan, former CSO of the United States Air Force and Space Force, told the Financial Times that the United States has "no competing fighting chance against China in 15 to 20 years" when it comes to cyberwarfare and artificial intelligence.

Chaillan, a 37-year-old tech entrepreneur, added that cyber defenses at many government agencies are at "kindergarten level," and that companies like Google were doing the US a disservice by not working with the military more on AI, since Chinese companies were making "a massive investment in AI" without getting all hung up on the ethics of it all. And while quitting your job because America has already lost the AI race is a bit dramatic, Chaillan isn't the only one who's concerned about China's dominance in this arena.

A growing number of leaders in Washington and Silicon Valley are worried about the US falling behind in the race to AI supremacy. Congressional hearings on the future of AI have been going on since 2016, and Chaillan said he plans to testify in some upcoming ones. Earlier this year, the National Security Commission on AI, a project chaired by former Google CEO Eric Schmidt, also boldly declared that China is poised to surpass the US as the world's AI superpower. In a statement signed by Elon Musk, Jack Dorsey and Stephen Hawking, among thousands of scientists, the commission said, "AI technology has reached a point where the deployment of such systems is practically if not legally feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms."


We can all agree that nobody wants China to invent a real-world version of Skynet, the all-powerful AI that takes over the planet in the Terminator movies. But we don't want the US to do that either. And what does the finish line in this AI race actually look like? Does the US really want to win at all costs?

For years, pundits have been comparing the AI race to the space race and warning that the US is losing it. It's a handy analogy, since it helps Americans put current conflicts with countries like China and Russia into the familiar context of the Cold War. Many have argued that we've found ourselves in a second Cold War and that the country that wins the AI race will take the throne as the dominant superpower. But the AI revolution isn't just about fighting wars or geopolitical dominance. What we're racing to build will transform almost every aspect of our lives, from how we run businesses to how we process information to how we get around.

So it's imperative that the US be thoughtful as it charges quickly into a future filled with autonomous cars, boundless data collection and full-time surveillance. These are the applications that next-generation AI will enable, and if a small group of powerful tech companies and/or the US military pushes for innovation without putting the proper guardrails in place, this world-changing technology could lead to some grim unintended consequences. President Biden called for the US and Europe to work together on developing new technology responsibly in a February speech at the Munich Security Conference.

"We must shape the rules that will govern the advance of technology and the norms of behavior in cyberspace, artificial intelligence, biotechnology so that they are used to lift people up, not used to pin them down," Biden said. "We must stand up for the democratic values that make it possible for us to accomplish any of this, pushing back against those who would monopolize and normalize repression."

You could also look to present-day China to see what the near future of a more AI-centric society might look like. As Kai-Fu Lee argues in his book AI Superpowers: China, Silicon Valley, and the New World Order, China has been more aggressive about implementing AI breakthroughs, especially in surveillance and data collection applications, thanks in part to government support and a lack of oversight that has let some tech companies there leapfrog the competition and dominate entire industries. WeChat and its parent company, Tencent, are perfect examples of this. On WeChat, privacy does not seem to be a priority, but the vast quantities of data the app can collect are certainly helpful for training AI.

"Imagine, if you will, that Facebook acquired Visa and Mastercard and integrated everything into its functions, as well as invested money into Amazon and Uber and OpenTable and so on and so forth, and made an ecosystem that once you log into Facebook, all these things are one click away and then you could pay for them with another click," Lee told New York magazine. "That is the kind of convenience that WeChat brought about, and its true worth is the gigantic data set of all the user data that goes through it."

This is the sort of winning-at-all-costs approach that appears to give China a leg up in the AI race. Yet China also appears to be playing catch-up when it comes to establishing standards for algorithmic ethics: just last week, the country issued its first-ever guidelines on AI ethics. The US has long known that algorithms can be racist or sexist, and the Pentagon adopted its guidelines on ethical AI nearly two years ago. And as we've learned more recently, the AI that companies like Facebook and YouTube use to serve up content can also be used to radicalize people and undermine democracy. That's why, especially in the wake of Facebook's whistleblower scandal, which revealed internal research showing that its products were harmful to some users (including teenage girls), lawmakers in the US lately seem more interested in talking about how to regulate algorithms than how to beat China in the AI race.

The two things aren't mutually exclusive, by the way. Chaillan, the former military software chief, has certainly earned his right to an opinion about how quickly the US is developing its cyber defenses and artificially intelligent computers. And now that he's taking his knowledge of how the Pentagon works to the private sector, he'll probably make good money addressing his concerns. For the rest of us, the rise of AI shouldn't feel like a race against China. It's more like a high-stakes poker game.

This story first published in the Recode newsletter.

See the article here:

China's AI dominance worries Pentagon and other US officials - Vox.com


AI project brings the climate crisis to your home – The Next Web


Scientists have developed a novel way of making people care about climate change: flooding their homes.

Not their real homes, of course; the destruction is merely a simulation for now. But projecting catastrophic consequences onto familiar places could generate awareness through empathy.

"Shock is not the endgame here," said study lead author Victor Schmidt, a PhD candidate at the Mila Quebec AI Institute in Montreal, Canada. "We want to trigger and leverage emotions towards actions."

The images of floods, wildfires, and smog are created via a deep learning model the researchers call ClimateGAN.

Their architecture harnesses generative adversarial networks (GANs), which create new images by pitting two neural networks against each other: a generator and a discriminator.

The generator produces artificial content, such as pictures of flooded streets. The discriminator then compares the fake images to real photos. After numerous iterations, the generator learns how to fool the discriminator into believing the artificial images are real.
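The adversarial loop described above can be sketched in a few dozen lines. This is a toy illustration of the GAN idea only, not the ClimateGAN architecture: the "generator" and "discriminator" here are single-parameter-pair models on 1-D data, and all values (the target distribution, learning rate, step count) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real data: samples from a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.0

# Generator g(z) = a*z + b and discriminator D(x) = sigmoid(w*x + c),
# each just a pair of scalars so the alternating updates stay visible.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, batch, steps = 0.05, 64, 3000

for _ in range(steps):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(x) + log(1 - D(g(z)))).
    real = rng.normal(REAL_MEAN, REAL_STD, batch)
    fake = a * rng.normal(size=batch) + b
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step (non-saturating loss): push D(fake) toward 1,
    # i.e. learn to fool the discriminator.
    z = rng.normal(size=batch)
    fake = a * z + b
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# After training, generated samples should cluster near the real mean.
samples = a * rng.normal(size=1000) + b
print(round(float(samples.mean()), 2))
```

After many iterations the generator's output distribution drifts toward the real one, at which point the discriminator can do no better than chance, which is the "fooling" equilibrium the article describes.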

The visualizations are then projected onto photos of real places. You can see them for yourself at a website called This Climate Does Not Exist.

Just enter an address listed on Google Street View and the system will slap your choice of flood, wildfire, or smog onto the location.

The images do not exist, but the disasters they depict could make the abstract impacts of climate change more concrete.


Read the original post:

AI project brings the climate crisis to your home - The Next Web


Putting artificial intelligence at the heart of health care with help from MIT – MIT News


Artificial intelligence is transforming industries around the world and health care is no exception. A recent Mayo Clinic study found that AI-enhanced electrocardiograms (ECGs) have the potential to save lives by speeding diagnosis and treatment in patients with heart failure who are seen in the emergency room.

The lead author of the study is Demilade Demi Adedinsewo, a noninvasive cardiologist at the Mayo Clinic who is actively integrating the latest AI advancements into cardiac care and drawing largely on her learning experience with MIT Professional Education.

Identifying AI opportunities in health care

A dedicated practitioner, Adedinsewo is a Mayo Clinic Florida Women's Health Scholar and director of research for the Cardiovascular Disease Fellowship program. Her clinical research interests include cardiovascular disease prevention, women's heart health, cardiovascular health disparities, and the use of digital tools in cardiovascular disease management.

Adedinsewo's interest in AI emerged toward the end of her cardiology fellowship, when she began learning about its potential to transform the field of health care. "I started to wonder how we could leverage AI tools in my field to enhance health equity and alleviate cardiovascular care disparities," she says.

During her fellowship at the Mayo Clinic, Adedinsewo began looking at how AI could be used with ECGs to improve clinical care. To determine the effectiveness of the approach, the team retroactively used deep learning to analyze ECG results from patients with shortness of breath. They then compared the results with the current standard of care, a blood test analysis, to determine if the AI enhancement improved the diagnosis of cardiomyopathy, a condition in which the heart is unable to adequately pump blood to the rest of the body. While she understood the clinical implications of the research, she found the AI components challenging.

"Even though I have a medical degree and a master's degree in public health, those credentials aren't really sufficient to work in this space," Adedinsewo says. "I began looking for an opportunity to learn more about AI so that I could speak the language, bridge the gap, and bring those game-changing tools to my field."

Bridging the gap at MIT

Adedinsewo's desire to bring together advanced data science and clinical care led her to MIT Professional Education, where she recently completed the Professional Certificate Program in Machine Learning & AI. To date, she has completed nine courses, including AI Strategies and Roadmap.

"All of the courses were great," Adedinsewo says. "I especially appreciated how the faculty, like professors Regina Barzilay, Tommi Jaakkola, and Stefanie Jegelka, provided practical examples from health care and non-health care fields to illustrate what we were learning."

Adedinsewo's goals align closely with those of Barzilay, the AI lead for the MIT Jameel Clinic for Machine Learning in Health. "There are so many areas of health care that can benefit from AI," Barzilay says. "It's exciting to see practitioners like Demi join the conversation and help identify new ideas for high-impact AI solutions."

Adedinsewo also valued the opportunity to work and learn within the greater MIT community alongside accomplished peers from around the world, explaining that she learned different things from each person. "It was great to get different perspectives from course participants who deploy AI in other industries," she says.

Putting knowledge into action

Armed with her updated AI toolkit, Adedinsewo was able to make meaningful contributions to Mayo Clinic's research. The team successfully completed and published their ECG project in August 2020, with promising results. In analyzing the ECGs of about 1,600 patients, the AI-enhanced method was both faster and more effective, outperforming the standard blood tests with a performance measure (AUC) of 0.89 versus 0.80. This improvement could enhance health outcomes by improving diagnostic accuracy and increasing the speed with which patients receive appropriate care.
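For readers unfamiliar with the metric, AUC (area under the ROC curve) can be read as the probability that a randomly chosen diseased patient is scored higher than a randomly chosen healthy one, so 0.89 versus 0.80 means the AI model ranks patients correctly noticeably more often. The sketch below computes AUC from first principles via this rank interpretation; the labels and scores are invented toy data, not the study's.

```python
def auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as half a win (the Mann-Whitney U formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: the same 8 patients scored by two models.
labels        = [1, 1, 1, 1, 0, 0, 0, 0]
model_scores  = [0.9, 0.8, 0.7, 0.4, 0.5, 0.3, 0.2, 0.1]  # one misranking
weaker_scores = [0.9, 0.8, 0.5, 0.3, 0.7, 0.6, 0.2, 0.1]  # more misrankings

print(auc(labels, model_scores), auc(labels, weaker_scores))  # → 0.9375 0.75
```

In practice a library routine such as scikit-learn's `roc_auc_score` does the same computation at scale; the point here is only what the number means.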

But the benefits of Adedinsewo's MIT experience go beyond a single project. Adedinsewo says that the tools and strategies she acquired have helped her communicate the complexities of her work more effectively, extending its reach and impact. "I feel more equipped to explain the research and AI strategies in general to my clinical colleagues. Now, people reach out to me to ask, 'I want to work on this project. Can I use AI to answer this question?'" she says.

Looking to the AI-powered future

What's next for Adedinsewo's research? Taking AI mainstream within the field of cardiology. While AI tools are not currently widely used in evaluating Mayo Clinic patients, she believes they hold the potential to have a significant positive impact on clinical care.

"These tools are still in the research phase," Adedinsewo says. "But I'm hoping that within the next several months or years we can start to do more implementation research to see how well they improve care and outcomes for cardiac patients over time."

Bhaskar Pant, executive director of MIT Professional Education, says, "We at MIT Professional Education feel particularly gratified that we are able to provide practitioner-oriented insights and tools in machine learning and AI from expert MIT faculty to frontline health researchers such as Dr. Demi Adedinsewo, who are working on ways to markedly enhance clinical care and health outcomes in cardiac and other patient populations. This is also very much in keeping with MIT's mission of 'working with others for the betterment of humankind'!"

See the rest here:

Putting artificial intelligence at the heart of health care with help from MIT - MIT News


AI at scale with MLOps: What CEOs need to know – McKinsey


What if a company built each component of its product from scratch with every order, without any standardized or consistent parts, processes, and quality-assurance protocols? Chances are that any CEO would view such an approach as a major red flag, preventing economies of scale and introducing unacceptable levels of risk, and would seek to address it immediately.

Yet every day this is how many organizations approach the development and management of artificial intelligence (AI) and analytics in general, putting themselves at a tremendous competitive disadvantage. Significant risk and inefficiencies are introduced as teams scattered across an enterprise regularly start efforts from the ground up, working manually without enterprise mechanisms for effectively and consistently deploying and monitoring the performance of live AI models.

Ultimately, for AI to make a sizable contribution to a company's bottom line, organizations must scale the technology across the organization, infusing it in core business processes, workflows, and customer journeys to optimize decision making and operations daily. Achieving such scale requires a highly efficient AI production line, where every AI team quickly churns out dozens of race-ready, risk-compliant, reliable models. Our research indicates that companies moving toward such an approach are much more likely to realize scale and value, with some adding as much as 20 percent to their earnings before interest and taxes (EBIT) through their use of AI as they tap into the $9 trillion to $15 trillion in economic value potential the technology offers.

CEOs often recognize their role in providing strategic pushes around the cultural changes, mindset shifts, and domain-based approach necessary to scale AI, but we find that few recognize their role in setting a strategic vision for the organization to build, deploy, and manage AI applications with such speed and efficiency. The first step toward taking this active role is understanding the value at stake and what's possible with the right technologies and practices. The highly bespoke and risk-laden approach to AI applications that is common today is partly a function of decade-old data science practices, necessary in a time when there were few (if any) readily available AI platforms, automated tools, or building blocks that could be assembled to create models and analytics applications and no easy way for practitioners to share work. In recent years, massive improvements in AI tooling and technologies have dramatically transformed AI workflows, expediting the AI application life cycle and enabling consistent and reliable scaling of AI across business domains. A best-in-class framework for ways of working, often called MLOps (short for machine learning operations), now can enable organizations to take advantage of these advances and create a standard, company-wide AI factory capable of achieving scale.

In this article, we'll help CEOs understand how these tools and practices come together and identify the right levers they can pull to support and facilitate their AI leaders' efforts to put these practices and technologies firmly in place.

Gone are the days when organizations could afford to take a strictly experimental approach to AI and analytics broadly, pursuing scattered pilots and a handful of disparate AI systems built in silos. In the early days of AI, the business benefits of the technology were not apparent, so organizations hired data scientists to explore the art of the possible with little focus on creating stable models that could run reliably 24 hours a day. Without a focus on achieving AI at scale, the data scientists created shadow IT environments on their laptops, using their preferred tools to fashion custom models from scratch and preparing data differently for each model. They left on the sidelines many scale-supporting engineering tasks, such as building crucial infrastructure on which all models could be reliably developed and easily run.

Today, market forces and consumer demands leave no room for such inefficiencies. Organizations recognizing the value of AI have rapidly shifted gears from exploring what the technology can do to exploiting it at scale to achieve maximum value. Tech giants leveraging the technology continue to disrupt and gain market share in traditional industries. Moreover, consumer expectations for personalized, seamless experiences continue to ramp up as they are delighted by more and more AI-driven interactions.

Thankfully, as AI has matured, so too have roles, processes, and technologies designed to drive its success at scale. Specialized roles such as data engineer and machine learning engineer have emerged to offer skills vital for achieving scale. A rapidly expanding stack of technologies and services has enabled teams to move from a manual and development-focused approach to one that's more automated, modular, and fit to address the entire AI life cycle, from managing incoming data to monitoring and fixing live applications. Start-up technology companies and open-source solutions now offer everything from products that translate natural language into code to automated model-monitoring capabilities. Cloud providers now incorporate MLOps tooling as native services within their platforms. And tech natives such as Netflix and Airbnb that have invested heavily in optimizing AI workflows have shared their work through developer communities, enabling enterprises to stitch together proven workflows.

Alongside this steady stream of innovation, MLOps has arisen as a blueprint for combining these platforms, tools, services, and roles with the right team operating model and standards for delivering AI reliably and at scale. MLOps draws from existing software-engineering best practices, called DevOps, which many technology companies credit for enabling faster delivery of more robust, risk-compliant software that provides new value to their customers. MLOps is poised to do the same in the AI space by extending DevOps to address AI's unique characteristics, such as the probabilistic nature of AI outputs and the technology's dependence on the underlying data. MLOps standardizes, optimizes, and automates processes, eliminates rework, and ensures that each AI team member focuses on what they do best.

Since MLOps is relatively new and still evolving, definitions of what it encompasses within the AI life cycle can vary. Some, for example, use the term to refer only to practices and technologies applied to monitoring running models. Others see it as only the steps required to move new models into live environments. We find that when the practice encompasses the entire AI life cycle (data management, model development and deployment, and live model operations) and is supported by the right people, processes, and technologies, it can dramatically raise the bar for what companies can achieve.

To understand the business impact of end-to-end MLOps, it is helpful to examine the potential improvements from four essential angles: productivity and speed, reliability, risk, and talent acquisition and retention. Inefficiencies in any of these areas can choke an organization's ability to achieve scale.

We frequently hear from executives that moving AI solutions from idea to implementation takes nine months to more than a year, making it difficult to keep up with changing market dynamics. Even after years of investment, leaders often tell us that their organizations aren't moving any faster. In contrast, companies applying MLOps can go from idea to a live solution in just two to 12 weeks without increasing head count or technical debt, reducing time to value and freeing teams to scale AI faster. Achieving productivity and speed requires streamlining and automating processes, as well as building reusable assets and components, managed closely for quality and risk, so that engineers spend more time putting components together instead of building everything from scratch.

Organizations should invest in many types of reusable assets and components. One example is the creation of ready-to-use data products that unify a specific set of data (for instance, combining all customer data to form a 360-degree view of the customer), using common standards, embedded security and governance, and self-service capabilities. This makes it much faster and easier for teams to leverage data across multiple current and future use cases, which is especially crucial when scaling AI within a specific domain where AI teams often rely on similar data.

An Asian financial-services company, for example, was able to reduce the time to develop new AI applications by more than 50 percent, in part by creating a common data-model layer on top of source systems that delivered high-quality, ready-to-use data products for use in numerous product- and customer-centric AI applications. The company also standardized supporting data-management tooling and processes to create a sustainable data pipeline, and it created assets to standardize and automate time-consuming steps such as data labeling and data-lineage tracking. This was a stark difference from the company's previous approach, where teams structured and cleaned raw data from source systems using disparate processes and tools every time an AI application was being developed, which contributed to a lengthy AI development cycle.

Another critical element for speed and productivity improvements is developing modular components, such as data pipelines and generic models that are easily customizable for use across different AI projects. Consider the work of a global pharmaceutical company that deployed an AI recommendation system to optimize the engagement of healthcare professionals and better inform them of more than 50 drug-country combinations, ultimately helping more appropriate patient populations get access to and benefit from these medicines. By building a central AI platform and modular premade components on top, the company was able to industrialize a base AI solution that could rapidly be tailored to account for different drug combinations in each market. As a result, it completed this massive deployment in under a year and with only ten AI project teams (a global team and one in each target country), five times faster and less resource intensive than if it had delivered in the traditional way. To get there, executives made investments in new operating models, talent, and technologies. For example, they erected an AI center of excellence, hired MLOps engineers, and standardized and automated model development to create model production pipelines that speed time to value and reduce errors that can cause delays and introduce risks.

Organizations often invest significant time and money in developing AI solutions only to find that the business stops using nearly 80 percent of them because they no longer provide value, and no one can figure out why that's the case or how to fix them. In contrast, we find that companies using comprehensive MLOps practices shelve 30 percent fewer models and increase the value they realize from their AI work by as much as 60 percent.

One way they're able to do this is by integrating continuous monitoring and efficacy testing of models into their workflows, instead of bolting them on as an afterthought, as is common. Data integrity and the business context for certain analytics can change quickly, with unintended consequences, making this work essential to create always-on AI systems. When setting up a monitoring team, organizations should, where possible, make sure this team is independent from the teams that build the models, to ensure independent validation of results.

The aforementioned pharmaceutical company, for instance, put a cross-functional monitoring team in place to ensure stable and reliable deployment of its AI applications. The team included engineers specializing in site reliability, DevOps, machine learning, and cloud, along with data scientists and data engineers. The team had broad responsibility for managing the health of models in production, from detecting and solving basic issues, such as model downtime, to complex issues, such as model drift. By automating key monitoring and management workflows and instituting a clear process for triaging and fixing model issues, the team could rapidly detect and resolve issues and easily embed learnings across the application life cycle to improve over time. As a result, nearly a year after deployment, model performance remains high, and business users continue to trust and leverage model insights daily. Moreover, by moving monitoring and management to a specialized operations team, the company reduced the burden on those developing new AI solutions, so they can maintain a laser focus on bringing new AI capabilities to end users.
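Detecting "model drift" of the kind this monitoring team handled is often automated with a distribution-shift statistic on a model's inputs or scores. One widely used choice is the Population Stability Index (PSI), where values above roughly 0.2 to 0.25 are conventionally treated as an alert. The sketch below is a generic illustration under those assumptions, not the company's actual tooling, and the sample data are synthetic.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference sample (e.g. scores at
    training time) and a live sample; larger values mean a bigger shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) / division by zero in sparsely populated bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
training_scores = rng.normal(0.0, 1.0, 10_000)  # reference distribution
stable_scores   = rng.normal(0.0, 1.0, 10_000)  # same population in production
drifted_scores  = rng.normal(0.8, 1.0, 10_000)  # shifted population

print(psi(training_scores, stable_scores))   # small: no alert
print(psi(training_scores, drifted_scores))  # large: triage for retraining
```

A monitoring pipeline would run a check like this on a schedule and route any model whose PSI crosses the alert threshold into the triage process the article describes.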

Despite substantial investments in governance, many organizations still lack visibility into the risks their AI models pose and what, if any, steps have been taken to mitigate them. This is a significant issue, given the increasingly critical role AI models play in supporting daily decision making, the ramp-up of regulatory scrutiny, and the weight of reputational, operational, and financial damage companies face if AI systems malfunction or contain inherent biases.

While a robust risk-management program driven by legal, risk, and AI professionals must underlie any company's AI program, many of the measures for managing these risks rely on the practices used by AI teams. MLOps bakes comprehensive risk-mitigation measures into the AI application life cycle by, for example, reducing manual errors through automated and continuous testing. Reusable components, replete with documentation on their structure, use, and risk considerations, also limit the probability of errors and allow for component updates to cascade through AI applications that leverage them. One financial-services company using MLOps practices has documented, validated, and audited deployed models to understand how many models are in use, how those models were built, what data they depend on, and how they are governed. This provides its risk teams with an auditable trail so they can show regulators which models might be sensitive to a particular risk and how they're correcting for this, enabling them to avoid heavy penalties and reputational damage.

In many companies, the availability of technical talent is one of the biggest bottlenecks for scaling AI and analytics in general. When deployed well, MLOps can serve as part of the proposition to attract and retain critical talent. Most technical talent gets excited about doing cutting-edge work with the best tools that allow them to focus on challenging analytics problems and seeing the impact of their work in production. Without a robust MLOps practice, top tech talent will quickly become frustrated by working on transactional tasks (for instance, data cleansing and data integrity) and not seeing their work have a tangible business impact.

Implementing MLOps requires significant cultural shifts to loosen firmly rooted, siloed ways of working and focus teams on creating a factory-like environment around AI development and management. Building an MLOps capability will materially shift how data scientists, engineers, and technologists work as they move from bespoke builds to a more industrialized production approach. As a result, CEOs play a critical role in three key areas: setting aspirations, facilitating shared goals and accountability, and investing in talent.

As in any technology transformation, CEOs can break down organizational barriers by vocalizing company values and their expectations that teams will rapidly develop, deliver, and maintain systems that generate sustainable value. CEOs should be clear that AI systems operate at the level of other business-critical systems that must run 24/7 and drive business value daily. While vision setting is key, it pays to get specific on whats expected.

Among the key performance metrics CEOs can champion are the following:

Fully realizing such goals can take 12 to 24 months, but with careful prioritization of MLOps practices, many teams we work with see significant progress toward those goals in just two to three months.

One of the fundamental litmus tests for impact is the degree to which goals are shared across business leaders and the respective AI, data, and IT teams. Ideally, the majority of goals for AI and data teams should be in service of business leaders' goals. Conversely, business leaders should be able to articulate what value they expect from AI and how it will come to fruition.

Another measure is the level of collaboration around strategic technology investments to provision tooling, technologies, and platforms that optimize AI workflows. With the rapid pace of technological change, IT often struggles to balance the need for new AI tooling and technologies with concerns that short-term fixes increase technology costs over the long term. Comprehensive MLOps practices ensure a road map to reduce both complexity and technical debt when integrating new technologies.

Most AI leaders we know spend significant time building strong relationships with their IT counterparts to gain the support they need. But when CEOs actively encourage these partnerships, it accelerates their development considerably.

The role of data scientists, for example, is changing. While they previously depended on low-level coding, they must now possess knowledge of software engineering to assemble models from modular components and build production-ready AI applications from the start.

Newer roles needed on AI teams have emerged as well. One is that of the machine learning engineer who is skilled in turning AI models into enterprise-grade production systems that run reliably. To build out its ML engineering team, a North American retailer combined existing expertise of internal IT developers who understood and could effectively navigate the organizations systems with new external hires who brought broad experience in MLOps from different industries.

AI is no longer just a frontier for exploration. While organizations increasingly realize value from AI applications, many fail to scale up because they lack the right operational practices, tools, and teams. As demand for AI has surged, so has the pace of technological innovations that can automate and simplify building and maintaining AI systems. MLOps can help companies incorporate these tools with proven software-engineering practices to accelerate the development of reliable AI systems. With knowledge of what good MLOps can do and what levers to pull, CEOs can facilitate the shift to more systematic AI development and management.

Continued here:

AI at scale with MLOps: What CEOs need to know - McKinsey


Study: Artificial Intelligence Can Predict Risk of Recurrence for Women With Common Breast Cancer – Pharmacy Times



A study from Gustave Roussy and the startup Owkin shows that deep learning analysis of digitized pathology slides can use artificial intelligence (AI) to help classify patients with localized breast cancer as being at high or low risk of metastatic relapse within the next 5 years.

This could help in therapeutic decision making and avoid unnecessary chemotherapy affecting the personal, professional, and social lives for low-risk women, according to the researchers. They added that this is one of the first proofs of concept illustrating the power of an AI model for identifying parameters associated with relapse that the human brain could not detect.

The RACE AI study was conducted among a cohort of 1400 patients managed at Gustave Roussy between 2005 and 2013 for localized hormone-sensitive (hormone receptor-positive, human epidermal growth factor receptor 2-negative) breast cancer, and these women were treated with surgery, radiotherapy, hormone therapy, and sometimes chemotherapy to reduce the risk of distant relapse.

Gustave Roussy and Owkin proposed a new method to direct patients identified as being at high risk towards new innovative therapies and to avoid unnecessary chemotherapy for low-risk patients. An AI model developed in the RACE AI study would assess the risk of relapse with an area under the curve of 81% to help the practitioner determine the benefit/risk balance of chemotherapy.

The calculation is based on the patient's clinical data combined with the analysis of stained and digitized histological slides of the tumor. The slides contain rich information for the management of cancer, making it unnecessary to develop a new technique or to equip a specific technical platform, according to the study.

A slide scanner, which digitizes the morphological information present on the slide, is the only important piece of equipment required, and it can be found in most laboratories, according to the researchers.

This study opens up next steps for using the model on an independent cohort of patients treated outside of Gustave Roussy. If the results are confirmed, this AI tool could prove to be a valuable aid to clinicians' therapeutic decisions, according to the study.

REFERENCE

Artificial intelligence predicts the risk of recurrence for women with the most common breast cancer. EurekAlert! September 21, 2021. Accessed September 22, 2021. https://www.eurekalert.org/news-releases/929023

View post:

Study: Artificial Intelligence Can Predict Risk of Recurrence for Women With Common Breast Cancer - Pharmacy Times

