Just what can AI in IT operations accomplish? – TechTarget

Artificial intelligence is a much-misused term. Many so-called AI systems on the market are nothing more than fairly simple rules-based engines. When it comes to AI in IT operations, the application of an "if this, then that" or "if this, else that" methodology does not really denote the presence and use of AI.

To gain real insight into AIOps use cases and how artificial intelligence should function, it is better to look at examples of how tasks have been dealt with historically -- and then consider how AI might assume those duties. This involves a comparison between AI and the thing it is meant to replace or augment: the human brain. Humans will usually notice when something is wrong within a normal pattern. For example, we can spot where a single pixel is dead on an HD or 4K screen. Across a full IT platform, however, this ability falls short: there are too many variables for humans to keep an eye on at one time.

As such, enterprise IT monitoring systems aggregate system logs and carry out basic pattern matching to see if everything is OK. When these systems see something amiss, they flag the problem via some mechanism -- often a traffic-light notification on the sys admin's screen -- so that a person can step in to take appropriate action.

When rules-based engines are applied to such systems, common problems can be codified. A standard automated response then rectifies the problem. Although this has helped in the operation of complex platforms, it still leaves a lot to be desired. This is where AI should come to the fore.

At the most basic level, we have three approaches for how AI can be applied: applying a known fix to a known problem; recovering when a known fix fails; and working through a problem that has never been seen before.

Let's explore how these approaches to AI in IT operations might play out in hypothetical scenarios.

Imagine that you are presented with a problem that you have encountered before. Your brain applies a simplistic approach: "If I have come up against this before, and I fixed it successfully, then I may as well apply the same solution again." This is essentially a rules-based engine approach. It works, mostly.

AIOps systems can easily apply such logic. All they need is a system that comes with a set of known problems and solutions out of the box and a capability to learn as it goes, adding new known issues specific to its environment. Examples would include the application of a patch to a system or the allocation of extra resources to meet a workload's needs.
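
To make the idea concrete, here is a minimal sketch of such a known-issues engine. All symptom and fix names are hypothetical, invented for illustration rather than drawn from any real AIOps product:

```python
# Minimal sketch of a rules-based remediation engine. Every name here
# is illustrative; no real AIOps product's catalog is being quoted.

KNOWN_ISSUES = {
    "disk_full": "purge_temp_files",
    "memory_leak_service_a": "restart_service_a",
    "cpu_saturation": "allocate_extra_resources",
}

def remediate(symptom, known_issues=KNOWN_ISSUES):
    """Return the canned fix for a known symptom, or None to escalate."""
    return known_issues.get(symptom)

def learn(symptom, fix, known_issues=KNOWN_ISSUES):
    """Record an environment-specific issue after a successful manual fix."""
    known_issues[symptom] = fix

learn("nightly_job_hang", "restart_scheduler")
print(remediate("nightly_job_hang"))  # -> restart_scheduler
print(remediate("unseen_symptom"))    # -> None (hand off to a person)
```

Nothing in this loop is intelligent: the engine can only replay fixes it has already been told about, which is exactly why the approach "works, mostly."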

What if the simplicity described above doesn't work? For example, what if the patch is applied but fails to fix the issue? The human brain would think, "If only we could go back to where we were with a working system."

An AIOps system should be able to create a restore point before it applies any change; that way, in the event of trouble, the system would know to fall back to that restore point. Even better, AIOps should try to identify the reason a patch failed. Was it due to something that can possibly (or probably) be fixed? What are the odds that the fix will work? How long will it take -- and what will be the effect on the operation of the workload? Can the system evaluate this scenario -- not just in technical terms, but also in a way that considers which responses would minimize the financial hit the business might suffer? These types of questions and answers require real AI capabilities.
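
One plausible shape for that evaluation, sketched here with invented probabilities and costs rather than any vendor's actual logic, is an expected-cost comparison between attempting a repair and simply falling back to the restore point:

```python
# Hedged sketch: decide between retrying a fix and rolling back by
# comparing expected downtime cost. All numbers are illustrative.

def expected_cost(p_success, fix_minutes, rollback_minutes, cost_per_minute):
    """Expected cost of attempting the fix: the fix time is always paid,
    and the rollback time is paid only when the fix fails."""
    return cost_per_minute * (fix_minutes + (1 - p_success) * rollback_minutes)

def choose_action(p_success, fix_minutes, rollback_minutes, cost_per_minute):
    attempt = expected_cost(p_success, fix_minutes, rollback_minutes, cost_per_minute)
    rollback_only = cost_per_minute * rollback_minutes
    return "attempt_fix" if attempt < rollback_only else "rollback"

# A quick, likely-to-work fix is worth attempting...
print(choose_action(0.9, 5, 30, 100.0))   # -> attempt_fix
# ...while a slow long shot is not.
print(choose_action(0.2, 60, 30, 100.0))  # -> rollback
```

The interesting part is the last input: the cost per minute of downtime is a business quantity, not a technical one, which is what the questions above are really asking the AI to reason about.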

What happens when AIOps comes up against a problem it has not seen before? The human mind will consider a range of options, from the fight-or-flight response against an unknown threat to the let's-think-about-this reaction to a less-threatening problem.

In this AIOps use case, the AI needs to be able to work through the problem in its own way. A malicious attack against a platform, for example, would be a fight-or-flight situation. Should a DDoS attack be blocked completely (flight), or can the workload be redeployed elsewhere to minimize impact (fight)? Is an intrusion attack something that needs to be blocked, or could it be a false positive caused by someone trying to access the system from an unexpected location or device?

In the fight case, AIOps isn't quite ready. Let's assume that the platform shows signs that something is wrong, but the available data does not point toward a clear root cause. People would try to think beyond their direct experiences. Maybe they have come up against something similar elsewhere that could spark ideas. If not, they might ask others for guidance or suggestions.

Similarly, an AIOps system would look at its installed rules and interrogate its platform-specific issue database. When there is nothing there that's able to address the problem at hand, it could then go to the cloud and see if any other platform with the same AIOps system has seen something similar. In much the same way that humans need to be able to describe a problem in a meaningful way -- "it's not working" isn't helpful -- tools that deploy AI in IT operations need a standardized taxonomy to offer a description of the problem so that the AI can get back meaningful responses from other possible resources.
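
Such a taxonomy entry might look something like the sketch below. The field names and vocabulary are purely hypothetical; the point is that a controlled, structured description is something a peer system can match against, where free text like "it's not working" is not:

```python
# Hypothetical problem descriptor using a shared, controlled vocabulary.
problem_report = {
    "component": "database",              # from a fixed vocabulary, not free text
    "symptom": "latency_degradation",
    "severity": "high",
    "metrics": {"p99_latency_ms": 850, "baseline_p99_ms": 120},
    "environment": {"platform": "kubernetes", "region": "eu-west"},
}

def same_problem(a, b):
    """Two reports describe the same class of problem when their
    controlled-vocabulary fields match, whatever the local details."""
    return (a["component"], a["symptom"]) == (b["component"], b["symptom"])

peer_report = {"component": "database", "symptom": "latency_degradation"}
print(same_problem(problem_report, peer_report))  # -> True
```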

The AI engine must then be able to take the information and analyze the problem again, coming up with possible solutions and the probabilities that those options would work. Any possible solution must be weighed against the potential risks the business would face should that option fail. Where the business risk is too high, the AIOps system must revert to the old standard of alerting admins, who could then apply human intelligence to the problem.
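
That risk-weighted decision could be sketched as follows. The candidate fixes, probabilities, and threshold are illustrative assumptions, not any product's real policy:

```python
# Illustrative only: rank candidate fixes by likelihood of success, but
# escalate to a human when every option's downside risk is too large.

def pick_solution(candidates, max_acceptable_risk):
    """candidates: list of (name, p_success, failure_cost) tuples."""
    safe = [c for c in candidates if (1 - c[1]) * c[2] <= max_acceptable_risk]
    if not safe:
        return "alert_admins"  # business risk too high for autonomous action
    # Among acceptable-risk options, prefer the one most likely to succeed.
    return max(safe, key=lambda c: c[1])[0]

candidates = [
    ("restart_service", 0.7, 1_000),   # expected loss on failure: 300
    ("failover_to_dr", 0.95, 50_000),  # expected loss on failure: 2,500
]
print(pick_solution(candidates, max_acceptable_risk=500))  # -> restart_service
print(pick_solution(candidates, max_acceptable_risk=100))  # -> alert_admins
```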

We are still at the early stages of real AI in IT operations, but advancements will be rapid. We should expect to see these technologies mature significantly in the coming years, and adoption will increase in pace with those improvements. Just make sure that any system chosen now can grow with your needs -- and that you understand the capabilities and challenges of AIOps before you fully commit.

Read more from the original source:

Just what can AI in IT operations accomplish? - TechTarget

Building The World's Top AI Industry Community – Lessons From Ai4 – Forbes

When it comes to artificial intelligence, and technology in general, we as a society are often guilty of thinking of it as separate from humanity. However, AI and humanity are of course intricately entwined, as AI is built by humans. Just as the saying goes that no man is an island, the same can be said of what we build.

Artificial intelligence (AI) has now permeated all sectors of business - and for good reason.

Across multiple industries, difficult or expensive tasks can be automated by AI/ML and, as a result, catapult even failing businesses into success. AI boasts a seemingly infinite list of applications - from improving customer experience to curing sleep disorders.

However, as the use of artificial intelligence rises, so does the need for cross-industry communication on the topic.

Ai4 is unique in that it creates a bridge between industries, leaders, and technologists. The company provides a common framework for what AI means to both the enterprise and the future of our globe as we transition into a new era of responsible human-machine collaboration.

One attendee, a senior manager at The Aldo Group, commented on the benefits of this approach: "I gained great insights into how my peers and competitors are leveraging AI & ML."

Ai4 Live Event

Ai4 was started by co-founders Marcus Jecklin and Michael Weiss as a small 350-person AI for financial services conference at a hotel in Brooklyn, NY. Since then, it has grown to be the top community for industry professionals seeking to learn about artificial intelligence.

Ai4 convenes thousands of people each year and reaches tens of thousands more through offline and online events, AI enterprise trainings, AI blogs and newsletters, AI matchmaking programs, and an AI jobs board.

Ai4 2020 (originally scheduled to take place at the MGM Grand in Las Vegas, now taking place digitally) promises to be an incredibly impactful event.

By gathering leaders of enterprise from across industry, government organizations, disruptive startups, investors, research labs, academia, associations, open source projects, media and analysts, Ai4 is creating the largest and most influential venue for AI-related idea-sharing, commerce, and technological progress.

Speakers for this year's event include Salahuddin Khawaja, Managing Director - Automation / Global Risk, Bank of America Merrill Lynch; Stephen Wong, Chief Informatics Officer, Houston Methodist; Ameen Kazerouni, Head of ML/AI Research and Platforms, Zappos; Barret Zoph, Staff Research Scientist, Google Brain; and Meltem Ballan, Data Science Lead, General Motors.

Ai4 Live Event

"The speakers were amazing," commented an Assortment & Space Analyst at BJ's Wholesale regarding past Ai4 live events. "They covered a wide range of topics that will certainly help push our AI initiatives forward."

Success in business can often be attributed to networking - and this is no different when it comes to technology. Networks foster the exchange of ideas, as well as mutual confidence and understanding.

They also enable best practices to be created and distributed. Herein lies the genius of Ai4's AI Matchmaking system. Through this system, Ai4 arranges digital 1-1 meetings between industry leaders and vetted AI companies from the Ai4 community.

The results of this model seem to speak for themselves, according to participants. "As far as recommendations go, I don't believe there is anything currently on the market that competes with or provides as much value as Ai4's 1:1 virtual meetings," said the founder and CEO at Medlytics.

For any technology, a lack of open channels for communication will not only stall progress; in the case of AI, it could also mean profound impacts for society. Simply put, the more perspectives we encourage in this field, the more comprehensive the discussions of ethical implications and inclusive development become.

Ai4 has been able to address this need in the AI community by facilitating not only virtual spaces, such as its AI Slack community, but also conversations: it frequently hosts webinars led by AI industry leaders on pressing topics.

Ai4 event

Additionally, Ai4 provides AI training in the form of open-enrollment courses for data scientists and executives, as well as enterprise AI training advisory services, ensuring that Ai4 community members remain on the cutting edge. With the explosion of AI education providers in recent years, Ai4 is using its expertise to help enterprises navigate the AI education landscape and find the optimal curriculum at a fair price.

"The individual presentations and moderated panels had a great combination of thoughtful commentary and technical details," commented the founder and CEO at RCM Brain, "satisfying a diverse audience of technologists and business leaders."

Perhaps now more than ever, it is crucial that we remain connected - especially when it comes to the innovations that will shape our future. Ai4 is demonstrating the right way to build an advanced technology community: with global, virtual conversations made up of diverse, cross-cultural, cross-industry perspectives and led by the world's preeminent experts.

Link:

Building The World's Top AI Industry Community - Lessons From Ai4 - Forbes

Don't believe the hype: AI is no silver bullet – ComputerWeekly.com

You could be forgiven for thinking the entire world is now powered by artificial intelligence (AI) systems. McKinsey predicted a couple of years ago that the technology would add $13tn (€10.8tn/£9.7tn) to the global economy by 2030, and it's currently easier to list the cyber security firms that aren't shouting about their AI and machine learning capabilities than those that are.

Unfortunately, the reality of how it's currently used doesn't map to the marketing hyperbole. We want to believe in the narrative because, like flying cars and jetpacks, the technology is so appealing to us. The cold, hard truth is very different.

Chief information security officers (CISOs) looking for new security partners must therefore be pragmatic when assessing what's out there. AI is helpful, in limited use cases, to take the strain off stretched security teams, but its algorithms still have great difficulty recognising unknown attacks. It's time for a reality check.

We live in a world where cyber attackers seem to hold all the cards – or, if they don't, they're certainly on an impressive winning streak.

The Identity Theft Resource Center (ITRC) revealed a 17% increase in data breaches in 2019 versus 2018, with more than 164 million records exposed across virtually every vertical you can imagine. The vast cyber crime economy that supports these endeavours is estimated to be worth $1.5tn annually, almost as much as the GDP of Russia.

Covid-19 has only made things more challenging for CISOs. An explosion in unmanaged home-working endpoints, distracted employees, stretched IT support staff, overloaded virtual private networks (VPNs), and unpatched remote access infrastructure has ramped up cyber risk levels. Skilled security professionals remain worryingly hard to find – there's now a global shortage of more than four million.

All of this sets the scene for AI to ride in and save the day. But while intelligent algorithms have been developed to beat the world's best Go players, power the voice assistants in our homes, and unlock our smartphones via facial recognition, a breakthrough remains as elusive as ever when it comes to cyber security.

Let's be clear. Machine and deep learning are good at some things. Give the system plenty of data and train it to spot subtle patterns and it can do so quite successfully. This could be useful in flagging known security threats and misconfigurations that human eyes may otherwise miss.

It's good in areas like anti-fraud tooling, for example, because scammers usually riff off the same underlying ideas when trying to defraud banks and businesses. By spotting these needles in the haystack, AI can help in-demand security professionals to do their jobs more efficiently and effectively.

Yet in this respect, AI is similar to a Google search engine, filtering through large volumes of data that humans couldn't possibly sort. What we haven't achieved yet is the creation of independent learning machines that can draw new conclusions from patterns. The much-touted capability of baselining "normal" and then being able to spot abnormalities that could indicate suspicious patterns is actually much harder than it sounds.

Networks are incredibly complex, and the bigger they are, the harder they are to map. Add to this the fact that commercial networks are constantly changing and developing new behaviours and interactions, and you have even more complexity. That means AI systems end up flagging even the regular evolution of a healthy network as suspicious, resulting in an overwhelming number of false positives.
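
A toy example shows why. The simple z-score detector below, fed made-up traffic numbers rather than anything from a real product, flags a healthy 20% growth in load as anomalous simply because its baseline was learned on older, quieter data:

```python
# Sketch of why static "baseline normal" detection produces false
# positives on an evolving network. Illustrative numbers only.
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag value if it sits more than `threshold` standard deviations
    from the mean of the learned baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

# Baseline learned on last month's traffic (requests/sec).
baseline = [100, 102, 98, 101, 99, 100, 103, 97]

# A network that has legitimately grown 20% gets flagged anyway.
print(is_anomalous(baseline, 120))  # -> True: a false positive
```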

Cyber criminals also have a few tricks up their sleeve. By making their behaviour appear as normal as possible, they can trick these intelligent systems. Meanwhile, well-documented adversarial techniques can fool AI into making the wrong decisions by creating the digital equivalent of optical illusions.

So where do we go from here? Can we design improvements into AI systems to make them more effective in cyber security? The single biggest challenge in this field is transparency: the ability of a system to explain why it arrived at a particular decision.

Unfortunately, those AI systems that can explain how they came up with an answer are less effective than the more inscrutable black boxes. Users don't trust results from these opaque systems and find it challenging to follow up leads that simply say "something unusual happened" without explaining what made it worth flagging and why it matters to the business.

The lesson here for CISOs is buyer beware. Absolutely invest in AI systems for spotting well-established patterns that can make your security team more productive. But don't imagine the tech will be able to achieve sophisticated detection of new and unknown threats or replace human security analysts.

AI doesn't automate the work humans were already doing, because there was never any way they could search through vast datasets in the first place. Your human experts will still need to take the lead, albeit with some help.

For what seems like the past 50 years, we have been a decade away from a breakthrough into artificial general intelligence (AGI). Despite the industry hype, this vision remains as elusive today as it ever was.

See the original post:

Don't believe the hype: AI is no silver bullet - ComputerWeekly.com

AI Used to Prevent Californian Wildfires – RTInsights

The utility industry is adopting AI to keep their equipment in good working order and keep people and property safe.

A high-voltage transmission line failure in Sonoma County is considered the cause of the Kincade Fire, which burned 77,000 acres of forest and destroyed over 120 buildings.

It's not the first time faulty power lines have been the cause of major wildfires, and these faults are expected to occur more frequently as the aging grid infrastructure requires more maintenance.

To reduce the likelihood of a fault leading to evacuations and destruction, Buzz Solutions has developed an artificial intelligence solution that warns utilities about potential problems before they happen.

The current review process of electrical infrastructure takes six to eight months, as engineers have to sift through thousands of images and then conduct in-person inspections before fixing the faults.

Buzz Solutions' AI platform reduces the investigation time to a couple of days, as it runs all of the images through an asset-tracking system, which can detect faults and vegetation encroachment.

"It is definitely time to move forward using AI to reduce the wildfire threat. We believe the utility industry is ready to use a far better approach to keep their equipment in good working order and keep people and property safe," said Kaitlyn Albertoli, co-founder and CEO at Buzz Solutions.

Alongside the asset-tracking platform, Buzz Solutions also offers a predictive maintenance solution, which feeds historical, asset and fault, and weather data into its models to determine future high-risk areas.

"Our vision is to use innovative technology to safeguard our infrastructure and environment today and help to predict where problems will crop up in the future," said Vikhyat Chaudhry, co-founder and CTO at Buzz Solutions. "This is even more important as we are seriously impacted by climate change."

Several utilities across the U.S. are piloting the platform, including utilities in Southern California and New York. Buzz Solutions recently received $1.2 million in seed funding to expand its operations.

Read the original here:

AI Used to Prevent Californian Wildfires - RTInsights

AI Weekly: In a chaotic year, AI is quietly accelerating the pace of space exploration – VentureBeat

The year 2020 continues to be difficult here on Earth, where the pandemic is exploding again in regions of the world that were once successful in containing it. Germany reported a record number of cases this week alongside Poland and the Czech Republic, as the U.S. counted 500,000 new cases. It's the backdrop to a tumultuous U.S. election, which experts fear will turn violent on election day. Meanwhile, Western and Southern states like Oregon, Washington, California, and Louisiana are reeling from historically destructive wildfires, severe droughts, and hurricanes.

Things are calmer in outer space, where scientists are applying AI to make exciting new finds. Processes that would have taken hours each day if performed by humans have been reduced to minutes, a testament to the good AI can achieve when used in a thoughtful way. While not necessarily groundbreaking, unprecedented, or state-of-the-art with regard to technique, the innovations are inspiring stories of discovery at a time when there isn't a surfeit of hope.

Earlier this month, researchers at NASA's Jet Propulsion Laboratory in California announced they had trained an algorithm on 6,830 images taken by the Context Camera on NASA's Mars Reconnaissance Orbiter (MRO) to identify changes to the Martian surface. Given 112,000 images taken by the Context Camera, the AI tool spotted a cluster of craters in the Noctis Fossae region of Mars, including 20 new areas of interest that might have formed from a meteor impact between March 2010 and May 2012. NASA hopes to use similar classification technology on future Mars orbiters, which might provide a more complete picture of how often meteors strike Mars.

In August, researchers at the University of Warwick built a separate AI algorithm to dig through NASA data containing thousands of potential planet candidates. The team trained the system on data collected by NASA's now-retired Kepler Space Telescope, which spent nine years in deep space searching for new worlds. Once it learned to separate planets from false positives, it was used to analyze datasets that hadn't yet been validated, which is when it found 50 exoplanets.

And last week, Intel, the European Space Agency (ESA), and startup Ubotica detailed what they claim is the first AI-powered satellite to orbit Earth: the desktop-sized PhiSat-1. It aims to solve the problem of clouds obscuring satellite photos by collecting a large number of images from space in the visible, near-infrared, and thermal-infrared parts of the electromagnetic spectrum and then filtering out cloud-covered images using AI algorithms. Future versions of the PhiSat-1 could look for fires when flying over areas prone to wildfire and notify responders in minutes rather than hours. Over oceans, which are typically ignored, they might spot rogue ships or environmental accidents, and over ice, they could track thickness and melting ponds to help monitor climate change.

AI is problematic in many respects; it's biased, discriminatory, and harmful at its worst. We have written about how facial recognition algorithms tend to be less accurate when applied to certain racial and ethnic groups. Natural language processing models embed implicit and explicit gender biases, as well as toxic theories and conspiracies. And governments are investigating the use of AI and machine learning to wage deadly warfare.

This being the case, some AI like that applied to Martian landscapes, telescope snapshots, and cloudy satellite images can be a force for good. And in a year marked by tragedy and general skepticism about technology (and the tech industry), this positivity isnt just encouraging, but sorely needed.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Here is the original post:

AI Weekly: In a chaotic year, AI is quietly accelerating the pace of space exploration - VentureBeat

From vampires to AI, Twilight director Catherine Hardwicke offers a new take on being human – CNET

Catherine Hardwicke (left) speaks with cast members Don Cheadle and Helena Howard on the set of Don't Look Deeper.

Catherine Hardwicke knows a few things about telling stories with teens. After working as a production designer in Hollywood, she co-wrote and directed -- for a total of $3 -- the 2003 teen drama Thirteen and followed that up two years later with Lords of Dogtown, a look at skateboarding culture in Southern California.

Then in 2008, Hardwicke did something remarkable while telling the story of a human teen named Bella who falls in love with a vampire named Edward. The movie -- Twilight, based on the book of the same name -- cost $40 million to make but brought in $400 million at the box office. That's not bad, given that Hardwicke was told no one would even go to see it, besides a few teenage girls.

Instead, Twilight was the start of a five-film series that grossed over $3 billion around the world and helped popularize other film series with a young woman protagonist, including The Hunger Games and The Divergent series. Despite all of that success, Hardwicke was shocked that none of the follow-ups was directed by a woman. More than a decade later, Hardwicke says things are getting better for women directors, calling out Patty Jenkins for directing the blockbuster Wonder Woman. Hardwicke's own latest work is for Quibi, the new streaming service built around short films and episodes for our mobile times that run 10 minutes or less.

In Don't Look Deeper, Hardwicke taps into teenage angst about self-identity and about growing up through Aisha (played by Helena Howard), a biracial young woman living with her dad (Don Cheadle) in the central California city of Modesto, in a story set "15 minutes into the future." But pretty quickly, we -- and Aisha -- learn she's not human, a discovery she tries to explain to her analyst (Emily Mortimer), who quickly reboots and wipes Aisha's memory. Or so she thinks. That sets up the first season, which Hardwicke describes as 14 chapters that explore a world with technology advanced enough to raise the question: What does it mean to be human?

"On one level, we have our character going through all those things that a teenager goes through. But when she's struggling with her own identity and something seems a little bit off, suddenly she finds out that it's really way more off than she thought," Hardwicke tells me from her home in Los Angeles for CNET's I'm So Obsessed podcast series.

As for the short-form storytelling format, Hardwicke says she's a fan (though the jury is out on whether Quibi will be a success). "Sometimes we just cannot commit to sit down or even stay awake at night for an hour show or a two-hour movie," she says. "You just want something to change your palette and take you to a different place from your workday and be able to go to sleep and dream about something new."

Jan Luis Catellano, Ema Horvarth and Helena Howard star in Quibi's Don't Look Deeper.

I talked to Hardwicke about the unexpected success of Twilight, about women directors (we need more), about what it will be like returning to movie theaters when we're through coronavirus quarantines, and about her relationship with tech, including her Alexa speaker and Tesla (she hopes one day soon it can park itself). And of course, I asked her about why she, a former architect-turned-product designer-turned director who loved watching Clint Eastwood westerns growing up in South Texas, is obsessed with creativity and "how you can nurture your creativity to do something that amazes even yourself."

Listen to my entire conversation with Hardwicke on Apple Podcasts, on Spotify or in the player embedded above, and subscribe to I'm So Obsessed on your favorite podcast app. In each episode, my series co-host Patrick Holland and I catch up with artists, actors and creators to learn about their work, their career and their current obsession.

Read the original here:

From vampires to AI, Twilight director Catherine Hardwicke offers a new take on being human - CNET

Is the oil & gas sector seeing the beginnings of an AI investment boom? – Offshore Technology

The oil & gas industry is seeing an increase in artificial intelligence (AI) investment across several key metrics, according to an analysis of GlobalData data.

AI is gaining an increasing presence across multiple sectors, with top companies completing more AI deals, hiring for more AI roles and mentioning it more frequently in company reports at the start of 2021.

GlobalData's thematic approach to sector activity seeks to group key company information on hiring, deals, patents and more by topic to see which companies are best placed to weather the disruptions coming to their industries.

These themes, of which AI is one, are best thought of as any issue that keeps a CEO awake at night, and by tracking them it becomes possible to ascertain which companies are leading the way on specific issues and which are dragging their heels.

According to this method, Shell, Gazprom, and Rosneft are classed as dominant players in AI in the sector, with an additional seven companies classified as leaders. Nine companies are considered to be vulnerable due to a lack of investment in AI.

One area in which there has been some decrease in AI investment among oil & gas companies is the number of deals. GlobalData figures show that there were 11 AI deals in oil & gas in the first quarter of 2019. By the first quarter of 2021, that number was one.

Hiring patterns within the oil & gas sector as a whole are pointing towards an increase in the level of attention being shown to AI-related roles. There was a monthly average of 478 actively advertised-for open AI roles within the industry in April this year, up from a monthly average of 333 in December 2020.

It is also apparent from an analysis of keyword mentions in financial filings that AI is occupying the minds of oil & gas companies to an increasing extent.

There have been 390 mentions of AI across the filings of the biggest oil & gas companies so far in 2021, equating to 9.9% of all tech theme mentions. This figure represents an increase compared to 2016, when AI represented 7.5% of the tech theme mentions in company filings.

AI is increasingly fueling innovation in the oil & gas sector, particularly in the past six years. There were, on average, 61 oil & gas patents related to AI granted each year from 2000 to 2014. That figure has risen to an average of 131 patents since then, reaching 245 in 2020.

Read the original:

Is the oil & gas sector seeing the beginnings of an AI investment boom? - Offshore Technology

AI helps radiologists improve accuracy in breast cancer detection with lesser recalls – Healthcare IT News

A new study, conducted by Korean academic hospitals and Lunit, a medical AI company specializing in developing AI solutions for radiology and oncology, demonstrated the benefits of AI-aided breast cancer detection from mammography images. The study was published online on 6 February 2020 in The Lancet Digital Health and features large-scale data of over 170,000 mammogram examinations from five institutions across South Korea, the USA, and the UK, consisting of Asian and Caucasian female breast images.

TOP FINDINGS

One of the major findings showed that AI, in comparison to the radiologists, displayed better sensitivity in detecting cancer with mass (90% vs 78%) and distortion or asymmetry (90% vs 50%). The AI was also better in the detection of T1 cancers, which are categorized as early-stage invasive cancers: AI detected 91% of T1 cancers and 87% of node-negative cancers, whereas the radiologist reader group detected 74% of both.

Another finding was a significant improvement in the performance of radiologists before and after using AI. According to the study, the AI alone showed 88.8% sensitivity in breast cancer detection, whereas radiologists alone showed 75.3%. When radiologists were aided by AI, their sensitivity increased by 9.5 percentage points to 84.8%.
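
For readers unfamiliar with the metric, sensitivity is simply the fraction of actual cancers a reader detects. The short sketch below, using illustrative counts rather than the study's raw data, reproduces the headline percentages:

```python
# Sensitivity (true-positive rate) as used in diagnostic studies.
# The counts below are invented to illustrate the arithmetic.

def sensitivity(true_positives, false_negatives):
    """Fraction of actual positives that were detected."""
    return true_positives / (true_positives + false_negatives)

# Out of 1,000 confirmed cancers, catching 753 gives 75.3% sensitivity;
# catching 848 gives 84.8%.
print(round(sensitivity(753, 247), 3))  # -> 0.753
print(round(sensitivity(848, 152), 3))  # -> 0.848
```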

An important factor in diagnosing mammograms is breast density, and dense breast tissues, found mostly in the Asian population, are harder to interpret, as dense tissue is more likely to mask cancers in mammograms. According to the study's findings, the diagnostic performance of AI was less affected by breast density, whereas radiologists' performance was: they showed higher sensitivity for fatty breasts, at 79.2%, compared to dense breasts, at 73.8%. When aided by AI, the radiologists' sensitivity when interpreting dense breasts increased by 11%.

THE LARGER TREND

Findings from a study published in Nature indicated that Google's AI model spotted breast cancer in de-identified screening mammograms with greater accuracy and fewer false positives and false negatives than experts, HealthcareITNews reported.

Lunit recently raised $26M in Series C funding from Korean and Chinese investors, which the company said was its biggest funding round to date, according to a DealStreetAsia report in January.

ON THE RECORD

"It is an unprecedented quantity of data with accurate ground truth -- especially the 36,000 cancer cases, which is seven times larger than the usual datasets of similar previous studies," said Hyo-Eun Kim, the first author of the study and Chief Product Officer at Lunit.

Prof. Eun-Kyung Kim, the corresponding author of the study and a breast radiologist at Yonsei University Severance Hospital, said: "One of the biggest problems in detecting malignant lesions from mammography images is that, to reduce false negatives -- missed cases -- radiologists tend to increase recalls, casting a wider safety net, which brings an increased number of unnecessary biopsies.

"It requires extensive experience to correctly interpret breast images, and our study showed that AI can help find more breast cancers with fewer recalls, also detecting cancers in their early stages of development."

Continue reading here:

AI helps radiologists improve accuracy in breast cancer detection with lesser recalls - Healthcare IT News

Google’s AI thinks women wearing masks have mouths covered with duct tape – ZDNet

AI may not know what's going on here.

Artificial intelligence is a work in progress.

Or, as some critics might say, a work in abject regress that will wreck humanity's remaining faith in itself.

Even some tech companies seem a touch unsure about their own AI systems. Why, not too long ago IBM announced it was withdrawing from the facial recognition business altogether.

We'll come back to IBM in a moment. You see, I've just been handed the results of a study that leaves a lot to consider.

Performed by marketing company Wunderman Thompson's Data group, the study examined whether well-known visual AI systems look at men wearing PPE masks in the same way as they do women.

The researchers took 256 images of each gender -- of varying qualities and taken in varying locations -- and then used generic models trained by some of the larger names in tech: Google Cloud Vision, Microsoft Azure's Cognitive Services Computer Vision, and IBM's Watson Visual Recognition.

The results were a little chilling.

Though none of the systems were particularly stellar at spotting masks, they were twice as likely to identify male mask-wearers as female mask-wearers.

So what did they think the women were wearing? Well, Google's AI identified 28% of the images as being women with their mouths covered by duct tape. In 8% of cases, the AI thought these were women with facial hair. Quite a lot of facial hair, it seems.

IBM's Watson took things a little further. In 23% of cases, it saw a woman wearing a gag. In another 23% of cases, it was sure this was a woman wearing a restraint or chains.

Microsoft's Computer Vision may need a little more accurate coding too. It suggested that 40% of the women were wearing a fashion accessory, while 14% were wearing lipstick.
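Percentages like these come from tallying, across each set of 256 images, how often a given label was returned. A rough sketch of that bookkeeping, with invented label outputs standing in for the real vision-API responses:

```python
from collections import Counter

def label_rates(label_lists):
    """Given one list of predicted labels per image, return the share of
    images on which each label appeared at least once."""
    counts = Counter()
    for labels in label_lists:
        counts.update(set(labels))  # count each label once per image
    total = len(label_lists)
    return {label: n / total for label, n in counts.items()}

# Toy per-image outputs (hypothetical -- not the study's actual data).
women = [["duct tape", "face"], ["mask", "face"], ["duct tape", "person"], ["gag"]]
rates = label_rates(women)
print(rates["duct tape"])  # 0.5 -- "duct tape" appeared on 2 of 4 images
```

Comparing the resulting rate dictionaries for the male and female image sets is all it takes to surface the disparities the researchers describe.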

Such results may make many wonder where these AIs get their ideas from. A simple answer might be "men."

The researchers, however, suggested the machines were looking for inspiration in "a darker corner of the web where women are perceived as victims of violence or silenced."

It's hard not to imagine that's true and it's something that potentially has awful consequences as we disappear ever more readily into AI's odiferous armpit.

The researchers say they're not trying to demonize AI. (AI is quite good at doing that for itself.)

Instead, as Wunderman Thompson's director of data science Ilinca Barsan put it: "If we want our machines to do work that accurately and responsibly reflects society, we need to help them understand the social dynamics that we live in to stop them from reinforcing existing inequalities through automation and put them to work for good instead."


Still, when I asked the researchers what they thought about IBM withdrawing from the facial recognition business, they replied: "Our research focused on visual label recognition rather than facial recognition, but if it's this easy for an (admittedly general) AI model to confuse someone wearing a mask with someone being gagged or restrained, then withdrawing from a business that is so prone to misuse, privacy violation, and training bias seems to be the right (and smart) thing to do for IBM."

Humanity hasn't done too good a job of helping machines understand vital elements. Humanity itself, for example. Partly because machines just don't have that instinct. And partly because humans struggle to understand themselves.

How often have you been driven toward head-butting walls during even the briefest encounter with customer service AI?

I fear, though, that too many AI systems have already been dragged into a painfully biased view of the world, one from which they may never entirely return.

How much more darkness does that risk propagating?


THE OUTER LIMITS: Successfully Implementing AI at the Edge – Electronic Design

Date: Thursday, June 04, 2020. Time: 2:00 PM Eastern Daylight Time. Sponsor: Avnet. Duration: 1 Hour.


Summary

The explosive -- and often disruptive -- growth of the Internet of Things has accelerated its expansion in the vertical markets of countless industries. In response, edge computing has presented itself as a solution to issues ranging from heavy use of server-oriented IoT functionality and excessive bandwidth use to advanced security and enhanced functionality.

As AI has evolved into a significant force-multiplier in intelligent IoT devices and products, striking a balance between cloud and edge intelligence has become crucial to implementation. Presented by Alix Paultre, this webinar will cover the make-or-break aspects of selecting and implementing hardware for AI-powered solutions at the edge -- and what we'll see as next-generation smart infrastructures emerge.

Overview of Topics:

PLUS AI EVERYWHERE. An exploration of the key ways AI at the edge will impact smart cities, facilities, and homes through tomorrow's intelligent infrastructures.

Speaker

Alix Paultre, Senior Technology Editor and European Correspondent

Alix Paultre is an embedded electronics industry writer and journalist with over two decades of experience in the field. He currently resides in Wiesbaden, Germany, working as a Contributing Editor and European Correspondent for a variety of industry publications. Alix has also served as the Editor in Chief of Power Systems Design and the Editorial Director for the Electronic Design Group at Advantage Business Media, overseeing Electronic Component News and Wireless Design and Development. Alix started in the electronics media field as an Editor at Electronic Products (under Hearst), and gained his early electronics experience as an Electronic Warfare/Signals Intelligence Analyst for the former U.S. Army Security Agency (ASA).

The Amazing AI Giveaway

To qualify, register below and join the event by 2:00 PM ET on June 4 for a chance to win one of the following prizes. Winners will be notified the following day.



Wearable AI Detects Tone Of Conversation To Make It Navigable (And Nicer) For All – Forbes


Made possible in part by the Samsung Strategy and Innovation Center, the work centered on using both physical feedback and audio data to train AI to analyze and recognize when conversations take a turn. Study participants were asked ...


Read the original post:

Wearable AI Detects Tone Of Conversation To Make It Navigable (And Nicer) For All - Forbes

Realizing the Potential of AI Localism | by Stefaan G. Verhulst & Mona Sloane – Project Syndicate

With national innovation strategies focused primarily on achieving dominance in artificial intelligence, the problem of actually regulating AI applications has received less attention. Fortunately, cities and other local jurisdictions are picking up the baton and conducting policy experiments that will yield lessons for everyone.

NEW YORK -- Every new technology rides a wave from hype to dismay. But even by the usual standards, artificial intelligence has had a turbulent run. Is AI a society-renewing hero or a jobs-destroying villain? As always, the truth is not so categorical.

As a general-purpose technology, AI will be what we make of it, with its ultimate impact determined by the governance frameworks we build. As calls for new AI policies grow louder, there is an opportunity to shape the legal and regulatory infrastructure in ways that maximize AI's benefits and limit its potential harms.

Until recently, AI governance has been discussed primarily at the national level. But most national AI strategies -- particularly China's -- are focused on gaining or maintaining a competitive advantage globally. They are essentially business plans designed to attract investment and boost corporate competitiveness, usually with an added emphasis on enhancing national security.

This singular focus on competition has meant that framing rules and regulations for AI has been ignored. But cities are increasingly stepping into the void, with New York, Toronto, Dubai, Yokohama, and others serving as laboratories for governance innovation. Cities are experimenting with a range of policies, from bans on facial-recognition technology and certain other AI applications to the creation of data collaboratives. They are also making major investments in responsible AI research, localized high-potential tech ecosystems, and citizen-led initiatives.

This AI localism is in keeping with the broader trend in New Localism, as described by public-policy scholars Bruce Katz and the late Jeremy Nowak. Municipal and other local jurisdictions are increasingly taking it upon themselves to address a broad range of environmental, economic, and social challenges, and the domain of technology is no exception.

For example, New York, Seattle, and other cities have embraced what Ira Rubinstein of New York University calls privacy localism, by filling significant gaps in federal and state legislation, particularly when it comes to surveillance. Similarly, in the absence of a national or global broadband strategy, many cities have pursued broadband localism, by taking steps to bridge the service gap left by private-sector operators.


As a general approach to problem solving, localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the tradeoffs involved. By calibrating algorithms and AI policies for local conditions, policymakers have a better chance of creating positive feedback loops that will result in greater effectiveness and accountability.

Feedback loops can have a massive impact, particularly when it comes to AI. In some cases, local AI policies could have far-reaching effects on how technology is designed and deployed elsewhere. For example, by establishing an Algorithms Management and Policy Officer, New York City has created a model that can be emulated worldwide.

AI localism also lends itself to greater policy coordination and increased citizen engagement. In Toronto, a coalition of academic, civic, and other stakeholders came together to ensure accountability for Sidewalk Labs, an initiative launched by Alphabet (Google's parent company) to improve services and infrastructure through citywide sensors. In response to this civic action, the company has agreed to follow six guidelines for responsible artificial intelligence.

As this example shows, reform efforts are more likely to succeed when local groups, pooling their expertise and influence, take the lead. Similarly, in Brooklyn, New York, the tenant association of the Atlantic Plaza Towers (in collaboration with academic researchers and nongovernmental organizations) succeeded in blocking a plan to use facial recognition technology in lieu of keys. Moreover, this effort offered important cues for how AI should be regulated more broadly, particularly in the context of housing.

But AI localism is not a panacea. The same tight local networks that offer governance advantages can also result in a form of regulatory capture. As such, AI localism must be subject to strict oversight and policies to prevent corruption and conflicts of interest.

AI localism also poses a risk of fragmentation. While national approaches have their shortcomings, technological innovation (and the public good) can suffer if AI localism results in uncoordinated and incompatible policies. Both local and national regulators must account for this possibility by adopting a decentralized approach that relies less on top-down management and more on coordination. This, in turn, requires a technical and regulatory infrastructure for collecting and disseminating best practices and lessons learned across jurisdictions.

Regulators are only just beginning to recognize the necessity and potential of AI localism. But academics, citizens, journalists, and others are already improving our collective understanding of what works and what doesn't. At The GovLab, for example, we are deepening our knowledge base and building the information-sharing mechanisms needed to make city-based initiatives a success. We plan to create a database of all instances of AI localism, from which to draw insights and a comparative list of campaigns, principles, regulatory tools, and governance structures.

Building up our knowledge is the first step toward strengthening AI localism. Robust governance capacities in this domain are the best way to ensure that the remarkable advances in AI are put to their best possible uses.

Here is the original post:

Realizing the Potential of AI Localism | by Stefaan G. Verhulst & Mona Sloane - Project Syndicate

AI to the rescue: technology can protect nesting birds – Sustainability Times

The technology is far superior to human eyes, which can be a big bonus in conservation.

Birds nesting on the ground across farmlands in Europe often face a singular threat: plows and other agricultural tools. Each spring numerous breeding farmland birds fall victim to agricultural activities as people fail to spot them in time before destroying their nests by accident.

Yet science can come to the rescue in the form of drones and artificial intelligence.

A team of researchers from the University of Helsinki decided to fly a drone equipped with a thermal camera over some agricultural fields in southern Finland, then fed the resulting images to an AI algorithm designed to identify nests of northern lapwings (Vanellus vanellus).

During a pilot study, the researchers found that thermal vision, when used at ground level, was hampered by dense vegetation and objects in the way. So they decided to give the camera a bird's-eye view by making it airborne with a drone.

The technique worked like a charm. "The thermal imaging system works best on cloudy days and when the temperature is colder. At least at high latitudes, the temperature of these nests is typically higher than that of the surrounding environment," explains Andrea Santangeli, a fellow at the Finnish Museum of Natural History Luomus at the University of Helsinki.

The technology is far superior to human eyes, which can be a big bonus in protecting threatened birds that are fast losing their habitats to agricultural activities, Santangeli says. "We have been involved in conservation of ground-nesting farmland birds for years, and realized how difficult it is to locate nests on the ground," he notes.

Drones equipped with sensors are already in use in precision agriculture for mapping the spread of crop diseases and monitoring other threats. The new AI technology could now be employed effectively in conservation efforts, such as by integrating nest detection within the precision-agriculture systems that heavily rely on drone-borne sensors, the scientists explain in a study on their findings.
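The detection principle the researchers describe -- nests read warmer than their surroundings in a thermal frame -- can be illustrated with a simple threshold over a temperature grid. The real system uses a trained AI model rather than a fixed threshold, and every value below is invented:

```python
def warm_spots(thermal, ambient, delta=2.0):
    """Return (row, col) cells whose temperature exceeds the ambient
    reading by more than `delta` degrees -- candidate nest locations."""
    return [
        (r, c)
        for r, row in enumerate(thermal)
        for c, t in enumerate(row)
        if t - ambient > delta
    ]

# 3x4 thermal frame in degrees C (values invented for illustration).
frame = [
    [8.1, 8.0, 8.3, 8.2],
    [8.2, 12.9, 8.1, 8.0],  # warm cell at (1, 1): a possible nest
    [8.0, 8.1, 8.2, 8.3],
]
print(warm_spots(frame, ambient=8.0))  # [(1, 1)]
```

This also makes clear why the method works best on cool, cloudy days: the smaller the ambient temperature, the larger the nest-to-background contrast the detector has to work with.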

"The conservation community must be ready to embrace technology and work across disciplines and sectors in order to seek efficient solutions," Santangeli stresses. "This is already happening, with drone technology becoming rapidly popular in conservation."

The next step involves fine-tuning the system for use in other environments to protect other threatened species. The scientists hope that soon their system will be fully integrated into agricultural practices, so that detecting and saving nests from mechanical destruction will become a fully automated part of food production.

Read the original post:

AI to the rescue: technology can protect nesting birds - Sustainability Times

This Project Democratizes AI Investments On The Blockchain – Entrepreneur

The D.AI.SY project powered by Endotech lets people get income upfront, as well as a residual income from trading profits


February 4, 2021. 3 min read.

Opinions expressed by Entrepreneur contributors are their own.

Blockchain technology is quickly becoming a popular method to create and sustain various businesses. It was first recognized when cryptocurrency went mainstream. However, it has not been heavily utilized in the financial industry until now.

A noteworthy example of emergent blockchain technology is D.AI.SY, a crowdfunding model that enables cryptocurrency holders to receive equity in various forms, as well as peer-to-peer rewards, through a secure system of crowdfunding supported by blockchain technology.

D.AI.SY lets people get income upfront, as well as a residual income from trading profits. It also provides investors with stock equity through its proprietary PACESETTER equity bonus system.

The first undertaking of D.AI.SY is a crowdfunding project with a major artificial intelligence company, Endotech. The D.AI.SY-Endotech team is looking to develop AI-powered investing to unlock high-risk/high-return alpha from aggregated financial data. This tactical investing technology can produce substantially increased probability and reduced risk while keeping high-return potential for investors.

D.AI.SY is currently working on a Tron smart contract. There are many benefits to smart-contract technology: it allows for safe interaction between D.AI.SY and its members, as well as the scaling of transaction capacity with low transaction fees. All smart-contract transactions are transparent for verification on the blockchain. Further, the smart contract is also immutable and indestructible, with the ability to persist to the end of time after it is launched.

For context, Endotech has specialized for years in developing fully automated tactical investment platforms based on dynamic artificial-intelligence modeling.

D.AI.SY is Endotech's newest project, which is set to deliver a new standard for predicting the probability of success in trading markets like forex, cryptocurrency, and commodities, among other traditional markets.

The team is spearheaded by CEO and co-founder Dr. Anna Becker, and COO and co-founder Dmitry Gooshchin. The co-founders possess immense knowledge of blockchain technology and artificial intelligence, and their ingenuity is set to be reflected in their project for years to come.

Dr. Becker has achieved a plethora of success in the fintech space of artificial intelligence. She founded Strategy Runner (acquired by MFGlobal), a trading software tool that provided full automation capabilities for over 50,000 clients with over 300 professional strategy developers. She has also worked with over 35 systematic funds and artificial intelligence technologies for brokers during her time with the Gilboa Fund-of-Funds.

Dr. Becker manages a team that oversees over 20 proprietary artificial intelligence systems currently operating in EndoTech. She has extensive knowledge and experience, having worked with over 300 brokers and served as a compliance officer working with regulatory entities such as the NFA and CFTC.

Endotech has big plans for the development of the D.AI.SY software technology, which aims to produce more predictable, stable, and reduced-risk investment gains in various trading markets. The Daisy solution aims to rebalance the investment ecosystem by harnessing technological developments for improved investment opportunities and sustainability, while offering certified network participants access to automated investment.

Read more here:

This Project Democratizes AI Investments On The Blockchain - Entrepreneur

How wearable AI could help you recover from covid – MIT Technology Review

The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person's vital signs. The monitoring system alerts clinicians remotely when a patient's vitals -- such as heart rate -- shift away from their usual levels.

Typically, patients recovering from covid might get sent home with just a pulse oximeter. PhysIQ's developers say their system is much more sensitive because it uses AI to understand each patient's body, and they claim it is much more likely to anticipate important changes.

"It's an enormous benefit," says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: "When you work in the emergency department it's sad to see patients who waited too long to come in for help. They would require intensive care on a ventilator. You couldn't help but ask, 'If we could have warned them four days before, could we have prevented all this?'"

Like Angela Mitchell, most of the study participants are African-American. Another large group is Latino. Many are also living with risk factors such as diabetes, obesity, hypertension, or lung conditions that can complicate covid-19 recovery. Mitchell, for example, has diabetes, hypertension, and asthma.

African-American and Latino communities have been hardest hit by the pandemic in Chicago and across the country. Many are essential workers or live in high-density, multigenerational housing.

For example, there are 11 people in Mitchell's house, including her husband, three daughters, and six grandchildren. "I do everything with my family. We even share covid-19 together!" she says with a laugh. Two of her daughters tested positive in March 2020, followed by her husband, before Mitchell herself.

Although African-Americans make up only 30% of Chicago's population, they accounted for about 70% of the city's earliest covid-19 cases. That percentage has declined, but African-Americans recovering from covid-19 still die at rates two to three times those of whites, and vaccination drives have been less successful at reaching this community. The PhysIQ system could help improve survival rates, the study's researchers say, by sending patients to the ER before it's too late, just as it did with Mitchell.

PhysIQ founder Gary Conkright has previous experience with remote monitoring, but not in people. In the mid-1990s, he developed an early artificial-intelligence startup called Smart Signal with the University of Chicago. The company used machine learning to remotely monitor the performance of equipment in jet engines and nuclear power plants.

"Our technology is very good at detecting subtle changes that are the earliest predictors of a problem," says Conkright. "We detected problems in jet engines before GE, Pratt & Whitney, and Rolls-Royce because we developed a personalized model for each engine."

Smart Signal was acquired by General Electric, but Conkright retained the right to apply the algorithm to the human body. At the time, his mother had COPD and had been rushed to intensive care several times, he said. The entrepreneur wondered if he could remotely monitor her recovery by adapting his existing AI system. The result: PhysIQ and the algorithms now used to monitor people with heart disease, COPD, and covid-19.

Its power, Conkright says, lies in its ability to create a unique baseline for each patient -- a snapshot of that person's norm -- and then detect exceedingly small changes that might cause concern.

The algorithms need only about 36 hours to create a profile for each person.

"The system gets to know how you are looking in your everyday life," says Vanden Hoek. "You may be breathing faster, your activity level is falling, or your heart rate is different than the baseline. The advanced practice provider can look at those alerts and decide to call that person to check in." If there are concerns -- such as potential heart or respiratory failure, he says -- the patient can be referred to a physician, or even to urgent care or the emergency department.

In the pilot, clinicians monitor the data streams around the clock. The system alerts medical staff when a participant's condition changes even slightly -- for example, if their heart rate is different from what it normally is at that time of day.
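PhysIQ's models are proprietary, but the underlying idea described above -- learn each patient's own baseline from about 36 hours of data, then alert on deviations from it -- can be sketched with a simple per-patient z-score check. All class names and numbers below are illustrative, not the company's actual implementation:

```python
from statistics import mean, stdev

class VitalsBaseline:
    """Toy per-patient baseline: alert when a new reading deviates from the
    patient's own history by more than `z_limit` standard deviations."""

    def __init__(self, history, z_limit=3.0):
        self.mu = mean(history)      # this patient's typical level
        self.sigma = stdev(history)  # this patient's typical variability
        self.z_limit = z_limit

    def is_alert(self, reading):
        return abs(reading - self.mu) > self.z_limit * self.sigma

# Resting heart-rate samples from the learning window (invented values).
history = [62, 64, 63, 65, 61, 63, 64, 62, 63, 65]
baseline = VitalsBaseline(history)
print(baseline.is_alert(64))  # False -- within this patient's norm
print(baseline.is_alert(88))  # True  -- flag for a clinician to review
```

The point of personalization is visible here: 88 bpm would be unremarkable against a population-wide threshold, but it is a large excursion for this particular patient's narrow baseline.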


AI in Social Media Market Key Growth Factors, development trends, key manufacturers and competitive forecast 2025 – The Think Curiouser

This newly added research report on the global AI in Social Media market presents an elaborate description of the market scenario, analyzing industry developments across timelines to support accurate forecasts.

The report identifies dominant AI in Social Media market developments and events influenced by macro- and microeconomic factors, drawing on crucial primary and secondary information sourced across multiple platforms.

Access the PDF sample of AI in Social Media market report @ https://www.orbisresearch.com/contacts/request-sample/4051803?utm_source=Atish

This report focuses on the global top players covered: Google, Facebook, Microsoft, AWS, IBM, Adobe Systems, Baidu, Salesforce, Twitter, Snap, Clarabridge, Converseon, Sprinklr, Unmetric, Isentium, Cluep, Netbase, Spredfast, Synthesio, Crimson Hexagon, Hootsuite, Sprout Social, Vidora, Meltwater, and Talkwalker.

A detailed competition analysis has also been included in the report to deliver insightful understanding on core vendors, leading players as well as their effective growth strategies based on which new and established players can deploy remunerative business decisions.

Make an enquiry of this report @ https://www.orbisresearch.com/contacts/enquiry-before-buying/4051803?utm_source=Atish

By Type, the product can be split into Machine Learning and Deep Learning, and Natural Language Processing (NLP).

By Application, the market can be split into Retail and eCommerce; Banking, Financial Services, and Insurance (BFSI); Media and Advertising; Education; Public Utilities; and Others.

The report delivers crucial details on primary applications of the product and services that align with end-user requirements. The report sheds light on management and production details incorporating detailed assessment of trends that play crucial roles in decision enablement across businesses. The report also delivers details on vendor landscape and commercial environment.

Browse the complete AI in Social Media market report @ https://www.orbisresearch.com/reports/index/global-ai-in-social-media-market-report-history-and-forecast-2014-2025-breakdown-data-by-companies-key-regions-types-and-application?utm_source=Atish

Crucial reference data on the competition spectrum has also been included to identify competitors' management tactics and to understand their AI in Social Media market stance across geographical terrains and growth hubs. Each of the players covered in the report has been specifically assessed to derive logical deductions about their tactical decisions as well as their performance.

About Us: Orbis Research (orbisresearch.com) is a single-point aid for all your market research requirements. We have a vast database of reports from leading publishers and authors across the globe. We specialize in delivering customized reports per the requirements of our clients. We have complete information about our publishers and hence are sure about the accuracy of the industries and verticals of their specialization. This helps our clients map their needs, and we produce the perfect required market research study for them.

Contact Us: Hector Costello, Senior Manager, Client Engagements, 4144 N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: +1 (972)-362-8199; +91 895 659 5155


3 ways AI is changing the game for recruiters and talent managers – Forbes

AI transforms the nature of work, but doesn't change the jobs to be done.

From voice-activated smart speakers like Google Home to the spam filter on our work emails, AI has infiltrated our daily lives. Depending on who you talk to, AI will either enable us to do our jobs better or make them completely redundant. The reality is that AI transforms the nature of work but doesn't change the jobs to be done. The aspects that make us inherently human -- critical reasoning, communication, and empathy -- will still be vital attributes in the future of work.

If you give a computer a problem, it learns from its interactions with the problem to identify a solution faster than humans can. But, if you ask a computer to look at two paintings and say which is more interesting, it cannot. Unlike people, artificial intelligence is not able to think abstractly and emotionally.

By supplementing human intelligence and creativity with technology that reduces menial processes, there is a great opportunity to enable recruiters, not replace them. McKinsey research shows that over two-thirds of businesses (69%) believe AI brings value to their human resources function.

Here are three ways AI improves recruitment practices:

1. Reducing unconscious bias

People have an unintentional tendency to make decisions based on their underlying beliefs, experiences, and feelings -- it's how we make sense of the world around us. And recruiting is no different. In fact, there's bias in something as straightforward as the words we choose.

Research shows that job descriptions using descriptive words like "support" and "understanding" are biased towards female applicants, whereas "competitive" and "lead" are biased towards males. When we use these loaded words, we're limiting the pool of candidates who will apply for an open role, making the recruiting process biased and affecting hiring outcomes. AI-enabled tools such as Textio can help recruiters identify biased wording in role descriptions. Removing these words and making descriptions neutral and inclusive can lead to 42% more applications.
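The kind of screening a tool like Textio performs can be approximated with a word-list scan over a job description. The word lists below are tiny and purely illustrative -- commercial tools use far larger, empirically derived vocabularies and more sophisticated language models:

```python
# Illustrative gender-coded word lists (not Textio's actual vocabulary).
FEMALE_CODED = {"support", "understanding", "collaborative"}
MALE_CODED = {"competitive", "lead", "dominant"}

def wording_report(job_description: str) -> dict:
    """Count gender-coded words in a job description."""
    words = job_description.lower().split()
    return {
        "female_coded": sum(w.strip(".,") in FEMALE_CODED for w in words),
        "male_coded": sum(w.strip(".,") in MALE_CODED for w in words),
    }

ad = "We need a competitive self-starter to lead our sales team."
print(wording_report(ad))  # {'female_coded': 0, 'male_coded': 2}
```

A recruiter seeing a lopsided count like this could then swap the flagged words for neutral alternatives before posting the role.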

Unconscious bias can extend beyond our choice of words to the decisions we make about candidates. Unintentionally, recruiters and hiring managers can decide to interview someone based on the university they attended or even where they are from, or view them as a cultural fit based on their answers. But decisions based on these familiarities disregard important factors like a candidate's previous work experience and skills. When AI is used to select the shortlist for interviews, it can circumvent bias that would be introduced by manually scanning resumes.

While AI can reduce bias, this is only true if the programs themselves are designed carefully. Machine learning algorithms are subject to the potentially biased programming choices of the people who build them and the data sets they're given. While this technology is still being fine-tuned, we need to focus on finding the balance between artificial intelligence and human intelligence. We shouldn't rely solely on one or the other, but instead use them to complement each other.

2. Improving recruitment for hiring managers, recruiters and candidates

It goes without saying that recruitment is a people-first function. Candidates want to speak to a recruiter or hiring manager and form an authentic connection, which they won't be able to get from interacting with a machine.

Using AI, recruiters can remove tedious and time-consuming processes, freeing more time to focus on engaging candidates as part of the assessment process.

XOR is a good example of this. The platform enables pre-screening of applications and qualifications and automatic interview scheduling. By taking these tedious administrative tasks out of a recruiter's day, it lets them optimise their time and focus on finding the best fit for the role.

AI also helps create an engaging and personalised candidate experience. AI can be leveraged to nurture talent pools by serving relevant content to candidates based on their previous applications. At different stages of the process, AI can ask candidates qualifying questions, learn what types of roles they would be interested in, and serve them content that assists in their application.

But AI does have a different impact on the candidate experience depending on the stage of the recruitment process at which it is implemented. Some candidates prefer interacting with a chatbot at the start of the application process, as they feel more comfortable asking general questions about things such as salary and job location. For delivery firm Yodel, implementing chatbots at the initial stage of the application process resulted in a decrease in applicant drop-off rates. Now only 8% of applicants choose not to proceed with their application, compared to a previous drop-off rate of 50-60%.

When it comes to more meaningful discussions, such as how the role aligns with a candidate's career goals and how they can progress within the company, human interaction is highly valued. Considering when and how you use AI to enhance the recruitment experience is key to getting the best results.

3. Identifying the best candidate for a role

At its core, recruitment is about finding the best person for a role. During the screening process, recruiters can use AI to identify key candidates by mapping the traits and characteristics of previous high-performing employees in the same role to find a match. This means recruiters are able to fill open roles more quickly and ensure that new hires are prepared to contribute to their new workplace.

PredictiveHire is one of these tools. It uses AI to run initial screening of job applications, making the process faster and more objective by pulling data and trends from a company's previous high-performing employees and scanning them against candidate applications. With 88% accuracy, PredictiveHire identifies the traits inherent to a company's high performers so recruiters can progress matching candidates to the interview stage.
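One simple way to implement this kind of trait matching is to average the trait scores of past high performers into a profile and rank candidates by cosine similarity to it. The trait names and scores below are invented for illustration; commercial tools like PredictiveHire build such profiles from real assessment data and far richer features.

```python
# Hypothetical sketch of trait-based candidate screening:
# rank candidates by similarity to the average high-performer profile.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Trait vectors: [conscientiousness, teamwork, adaptability] (illustrative)
high_performers = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.6]]
# Average each trait across high performers to build the target profile
profile = [sum(t) / len(high_performers) for t in zip(*high_performers)]

candidates = {"A": [0.85, 0.85, 0.65], "B": [0.2, 0.4, 0.9]}
ranked = sorted(candidates, key=lambda c: cosine(candidates[c], profile),
                reverse=True)
print(ranked)  # ['A', 'B'] -- candidate A matches the profile more closely
```

As the article notes, the quality of such a ranking depends entirely on the historical data: if past high performers reflect a biased hiring history, the profile will reproduce that bias.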

Undoubtedly, we will continue to see more exciting applications of AI in the next few years. The talent search process can certainly be streamlined and improved by incorporating AI. For recruiters, it is about finding the right balance in marrying AI application and human intelligence to make the hiring process what it should be: seamless and engaging.

Read more:

3 ways AI is changing the game for recruiters and talent managers - Forbes

AI enhanced content coming to future Android TVs – Android Authority

Whenever an event like IFA rolls around, the artificial intelligence buzzword emerges to dazzle prospective customers and investors. However, the number of genuinely impressive use cases for AI is increasing. TCL, one of the industry's biggest TV brands, showcased the AI capabilities of its second-generation AiPQ Engine onstage at IFA. Get ready for Android TVs and other smart TVs with AI enhancements in the near future.

TCL's little chip leverages machine learning to recognize parts of video content, such as landscape backgrounds or faces, to ensure accurate skin mapping. The AI processor can also adjust audio playback based on the scene content or music, and it can raise or lower the volume based on ambient sounds in your living room. TCL also envisions this being used to dynamically upscale 4K content using super-resolution enhancements.

The bottom line is that this AI display processor can detect and enhance both audio and visual content dynamically, rather than relying on adaptive presets and standard settings. The video embedded below showcases some of the processor's features.

It looks pretty nifty, especially when combined with TCL's other TV innovations, which include QLED and mini-LED display technology, hands-free living room voice controls, and pop-up cameras for making calls and chatting on social media. Keep an eye out for future TCL Android TVs sporting these enhanced AI capabilities.

See also: AI-enhanced displays are coming to affordable smartphones

Read more:

AI enhanced content coming to future Android TVs - Android Authority

‘Why not change the world?’: Grant will fast-track AI tools for screening high-risk COVID cases – Health Imaging

While many existing algorithms fail to account for comorbidities during screening, Yan said his team will incorporate such information, including scan data to assess lung function, demographic information, vital signs and laboratory blood tests.

Many imaging societies, including the American College of Radiology, have urged physicians to avoid using CT as a first-line tool to screen patients during the pandemic. Other countries, including hard-hit Northern Italy, however, have leaned heavily on the modality.

Yan's project is among the latest in a long line harnessing the power of AI to spot people at higher risk from the novel virus.

"It is tremendously important to me and my team that we can contribute our knowledge and skills to fight the COVID-19 pandemic," Yan said. "It is our way to answer 'Why not change the world?', the unofficial Rensselaer motto."

Massachusetts General Hospital is also partnering with RPI to bring this project to fruition.

View post:

'Why not change the world?': Grant will fast-track AI tools for screening high-risk COVID cases - Health Imaging

Protecting privacy in an AI-driven world – Brookings Institution

Our world is undergoing an information Big Bang, in which the universe of data doubles every two years and quintillions of bytes of data are generated every day.1 For decades, Moore's Law on the doubling of computing power every 18-24 months has driven the growth of information technology. Now, as billions of smartphones and other devices collect and transmit data over high-speed global networks, store it in ever-larger data centers, and analyze it using increasingly powerful and sophisticated software, Metcalfe's Law comes into play. It treats the value of a network as a function of the square of the number of nodes, meaning that network effects exponentially compound this historical growth in information. As 5G networks and eventually quantum computing are deployed, this data explosion will grow even faster and bigger.
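Metcalfe's Law can be made concrete with a small sketch: a network of n nodes has n(n-1)/2 distinct pairwise connections, which grows roughly as n squared, so doubling the number of connected devices roughly quadruples the potential connections. The function name below is just for illustration.

```python
# Metcalfe's Law sketch: network value scales with the square of the
# number of nodes, here counted as distinct pairwise connections.
def metcalfe_value(n_nodes: int) -> int:
    # n choose 2: the number of distinct node pairs, ~ n^2 / 2 for large n
    return n_nodes * (n_nodes - 1) // 2

print(metcalfe_value(10))  # 45 connections
print(metcalfe_value(20))  # 190 -- doubling the nodes ~quadruples the value
```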

The impact of big data is commonly described in terms of three Vs: volume, variety, and velocity.2 More data makes analysis more powerful and more granular. Variety adds to this power and enables new and unanticipated inferences and predictions. And velocity facilitates analysis as well as sharing in real time. Streams of data from mobile phones and other online devices expand the volume, variety, and velocity of information about every facet of our lives and put privacy into the spotlight as a global public policy issue.

Artificial intelligence is likely to accelerate this trend. Much of the most privacy-sensitive data analysis today, such as search algorithms, recommendation engines, and ad-tech networks, is driven by machine learning and decisions by algorithms. As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising the analysis of personal information to new levels of power and speed.


Facial recognition systems offer a preview of the privacy issues that emerge. With the benefit of rich databases of digital photographs available via social media, websites, driver's license registries, surveillance cameras, and many other sources, machine recognition of faces has progressed rapidly from fuzzy images of cats3 to rapid (though still imperfect) recognition of individual humans. Facial recognition systems are being deployed in cities and airports around America. However, China's use of facial recognition as a tool of authoritarian control in Xinjiang4 and elsewhere has awakened opposition to this expansion and prompted calls for bans on the use of facial recognition. Owing to these concerns, the cities of Oakland, Berkeley, and San Francisco in California, as well as Brookline, Cambridge, Northampton, and Somerville in Massachusetts, have adopted bans on the technology.5 California, New Hampshire, and Oregon have all enacted legislation banning the use of facial recognition with police body cameras.6

This policy brief explores the intersection between AI and the current privacy debate. As Congress considers comprehensive privacy legislation to fill growing gaps in the current checkerboard of federal and state privacy laws, it will need to consider whether or how to address the use of personal information in artificial intelligence systems. In this brief, I discuss some potential concerns regarding artificial intelligence and privacy, including discrimination, ethical use, and human control, as well as the policy options under discussion.

The challenge for Congress is to pass privacy legislation that protects individuals against any adverse effects from the use of personal information in AI, but without unduly restricting AI development or ensnaring privacy legislation in complex social and political thickets. The discussion of AI in the context of the privacy debate often brings up the limitations and failures of AI systems, such as predictive policing that could disproportionately affect minorities7 or Amazon's failed experiment with a hiring algorithm that replicated the company's existing disproportionately male workforce.8 Both raise significant issues, but privacy legislation is complicated enough even without packing in all the social and political issues that can arise from uses of information. To evaluate the effect of AI on privacy, it is necessary to distinguish between data issues that are endemic to all AI, like the incidence of false positives and negatives or overfitting to patterns, and those that are specific to the use of personal information.

The privacy legislative proposals that involve these issues do not address artificial intelligence by name. Rather, they refer to "automated decisions" (borrowed from EU data protection law) or "algorithmic decisions" (used in this discussion). This language shifts the focus from the use of AI as such to the use of personal data in AI and to the impact this use may have on individuals. This debate centers in particular on algorithmic bias and the potential for algorithms to produce unlawful or undesired discrimination in the decisions to which the algorithms relate. These are major concerns for civil rights and consumer organizations that represent populations that suffer undue discrimination.

Addressing algorithmic discrimination presents basic questions about the scope of privacy legislation. First, to what extent can or should legislation address issues of algorithmic bias? Discrimination is not self-evidently a privacy issue, since it presents broad social issues that persist even without the collection and use of personal information and fall under the domain of various civil rights laws. Moreover, opening these laws for debate could effectively open a Pandora's box because of the charged political issues they touch on and the multiple congressional committees with jurisdiction over them. Even so, discrimination is based on personal attributes such as skin color, sexual identity, and national origin. Use of personal information about these attributes, either explicitly or, more likely and less obviously, via proxies, for automated decision-making that is against the interests of the individual involved thus implicates privacy interests in controlling how information is used.


Second, protecting such privacy interests in the context of AI will require a change in the paradigm of privacy regulation. Most existing privacy laws, as well as current Federal Trade Commission enforcement against unfair and deceptive practices, are rooted in a model of consumer choice based on notice-and-choice (also referred to as notice-and-consent). Consumers encounter this approach in the barrage of notifications and banners linked to lengthy and uninformative privacy policies and terms and conditions that we ostensibly consent to but seldom read. This charade of consent has made it obvious that notice-and-choice has become meaningless. For many AI applications, with the smart traffic signals and other sensors needed to support self-driving cars as one prominent example, it will become utterly impossible.

Although almost all bills on Capitol Hill still rely on the notice-and-choice model to some degree, key congressional leaders as well as privacy stakeholders have expressed a desire to change this model by shifting the burden of protecting individual privacy from consumers to the businesses that collect data.9 In place of consumer choice, their model focuses on business conduct by regulating companies' processing of data: what they collect and how they can use and share it. Addressing data processing that results in algorithmic discrimination can fit within this model.

A model focused on data collection and processing may affect AI and algorithmic discrimination in several ways:

In addition to these provisions of general applicability that may affect algorithmic decisions indirectly, a number of proposals specifically address the subject.10

The responses to AI that are currently under discussion in privacy legislation take two main forms. The first targets discrimination directly. A group of 26 civil rights and consumer organizations wrote a joint letter advocating to prohibit or monitor use of personal information with discriminatory impacts on people of color, women, religious minorities, members of the LGBTQ+ community, persons with disabilities, persons living on low incomes, immigrants, and other vulnerable populations.11 The Lawyers' Committee for Civil Rights Under Law and Free Press Action have incorporated this principle into model legislation aimed at data discrimination affecting economic opportunity, public accommodations, or voter suppression.12 This model is substantially reflected in the Consumer Online Privacy Rights Act, which was introduced in the waning days of the 2019 congressional session by Senate Commerce Committee ranking member Maria Cantwell (D-Wash.). It includes a similar provision restricting the processing of personal information that discriminates against or classifies individuals on the basis of protected attributes such as race, gender, or sexual orientation.13 The Republican draft counterproposal addresses the potential for discriminatory use of personal information by calling on the Federal Trade Commission to cooperate with agencies that enforce discrimination laws and to conduct a study.14

This approach to algorithmic discrimination implicates debates over private rights of action in privacy legislation. The possibility of such individual litigation is a key point of divergence between Democrats aligned with consumer and privacy advocates on one hand, and Republicans aligned with business interests on the other. The former argue that private lawsuits are a needed force multiplier for federal and state enforcement, while the latter express concern that class action lawsuits, in particular, burden business with litigation over trivial issues. In the case of many of the kinds of discrimination enumerated in algorithmic discrimination proposals, existing federal, state, and local civil rights laws enable individuals to bring claims for discrimination. Any federal preemption or limitation on private rights of action in federal privacy legislation should not impair these laws.

The second approach addresses risk more obliquely, with accountability measures designed to identify discrimination in the processing of personal data. Numerous organizations and companies as well as several legislators propose such accountability. Their proposals take various forms:

A sense of fairness suggests such a safety valve should be available for algorithmic decisions that have a material impact on individuals' lives. Explainability requires (1) identifying algorithmic decisions, (2) deconstructing specific decisions, and (3) establishing a channel by which an individual can seek an explanation. Reverse-engineering algorithms based on machine learning can be difficult, and even impossible, a difficulty that increases as machine learning becomes more sophisticated. Explainability therefore entails a significant regulatory burden and constraint on the use of algorithmic decision-making and, in this light, should be concentrated in its application, as the EU has done (at least in principle) with its "legal effects or similarly significant effects" threshold. As understanding increases about the comparative strengths of human and machine capabilities, having a human in the loop for decisions that affect people's lives offers a way to combine the power of machines with human judgment and empathy.

Because of the difficulties of foreseeing machine learning outcomes as well as reverse-engineering algorithmic decisions, no single measure can be completely effective in avoiding perverse effects. Thus, where algorithmic decisions are consequential, it makes sense to combine measures to work together. Advance measures such as transparency and risk assessment, combined with the retrospective checks of audits and human review of decisions, could help identify and address unfair results. A combination of these measures can complement each other and add up to more than the sum of the parts. Risk assessments, transparency, explainability, and audits also would strengthen existing remedies for actionable discrimination by providing documentary evidence that could be used in litigation. Not all algorithmic decision-making is consequential, however, so these requirements should vary according to the objective risk.

The window for this Congress to pass comprehensive privacy legislation is narrowing. While the Commerce Committees in each house of Congress have been working on a bipartisan basis throughout 2019 and have put out discussion drafts, they have yet to reach agreement on a bill. Meanwhile, the California Consumer Privacy Act went into effect on Jan. 1, 2020,21 impeachment and war powers have crowded out other issues, and the presidential election is going into full swing.


In whatever window remains to pass privacy legislation before the 2020 election, the treatment of algorithmic decision-making is a substantively and politically challenging issue that will need a workable resolution. For a number of civil rights, consumer, and other civil society groups, establishing protections against discriminatory algorithmic decision-making is an essential part of legislation. In turn, it will be important to Democrats in Congress. At a minimum, some affirmation that algorithmic discrimination based on personal information is subject to existing civil rights and nondiscrimination laws, as well as some additional accountability measures, will be essential to the passage of privacy legislation.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon and Intel provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Read more:

Protecting privacy in an AI-driven world - Brookings Institution