Daily Archives: July 18, 2021

As the Use of AI Spreads, Congress Looks to Rein It In – WIRED

Posted: July 18, 2021 at 5:43 pm

There's bipartisan agreement in Washington that the US government should do more to support the development of artificial intelligence technology. The Trump administration redirected research funding toward AI programs; President Biden's science adviser Eric Lander said of AI last month that "America's economic prosperity hinges on foundational investments in our technological leadership."

At the same time, parts of the US government are working to place limits on algorithms to prevent discrimination, injustice, or waste. The White House, lawmakers from both parties, and federal agencies including the Department of Defense and the National Institute of Standards and Technology are all working on bills or projects to constrain the potential downsides of AI.

Biden's Office of Science and Technology Policy is working on addressing the risks of discrimination caused by algorithms. The National Defense Authorization Act passed in January introduced new support for AI projects, including a new White House office to coordinate AI research, but it also required the Pentagon to assess the ethical dimensions of the AI technology it acquires, and NIST to develop standards to keep the technology in check.

In the past three weeks, the Government Accountability Office, which audits US government spending and management and is known as "Congress's watchdog," released two reports warning that federal law enforcement agencies aren't properly monitoring the use and potential errors of algorithms used in criminal investigations. One took aim at face recognition, the other at forensic algorithms for face, fingerprint, and DNA analysis; both were prompted by lawmaker requests to examine potential problems with the technology. A third GAO report laid out guidelines for responsible use of AI in government projects.

Helen Toner, director of strategy at Georgetown's Center for Security and Emerging Technology, says the bustle of AI activity provides a case study of what happens when Washington wakes up to a new technology.

"As this technology is being used in the real world, you get problems that you need policy and government responses to."

Helen Toner, director of strategy, Georgetown Center for Security and Emerging Technology

In the mid-2010s, lawmakers didn't pay much notice as researchers and tech companies brought about a rapid increase in the capabilities and use of AI, from conquering champions at Go to ushering smart speakers into kitchens and bedrooms. The technology became a mascot for US innovation and a talking point for some tech-centric lawmakers. Now the conversations have become more balanced and businesslike, Toner says. "As this technology is being used in the real world, you get problems that you need policy and government responses to."

Face recognition, the subject of GAO's first AI report of the summer, has drawn special focus from lawmakers and federal bureaucrats. Nearly two dozen US cities have banned local government use of the technology, usually citing concerns about accuracy, which studies have shown is often worse on people with darker skin.

The GAO's report on the technology was requested by six Democratic representatives and senators, including the chairs of the House oversight and judiciary committees. It found that 20 federal agencies that employ law enforcement officers use the technology, with some using it to identify people suspected of crimes during the January 6 assault on the US Capitol, or the protests after the killing of George Floyd by Minneapolis police in 2020.

Fourteen agencies sourced their face recognition technology from outside the federal government, but 13 did not track which systems their employees used. The GAO advised agencies to keep closer tabs on face recognition systems to avoid the potential for discrimination or privacy invasion.

The GAO report appears to have increased the chances of bipartisan legislation constraining government use of face recognition. At a hearing of the House Judiciary Subcommittee on Crime, Terrorism, and Homeland Security held Tuesday to chew over the GAO report, Representative Sheila Jackson Lee (D-Texas), the subcommittee chair, said she believed it underscored the need for regulations; the technology is currently unconstrained by federal legislation. Ranking member Representative Andy Biggs (R-Arizona) agreed. "I have enormous concerns; the technology is problematic and inconsistent," he said. "If we're talking about finding some kind of meaningful regulation and oversight of facial recognition technology, then I think we can find a lot of common ground."

See more here:

As the Use of AI Spreads, Congress Looks to Rein It In - WIRED

Posted in Ai | Comments Off on As the Use of AI Spreads, Congress Looks to Rein It In – WIRED

Beware explanations from AI in health care – Science

Posted: at 5:43 pm

Artificial intelligence and machine learning (AI/ML) algorithms are increasingly developed in health care for diagnosis and treatment of a variety of medical conditions (1). However, despite the technical prowess of such systems, their adoption has been challenging, and whether and how much they will actually improve health care remains to be seen. A central reason for this is that the effectiveness of AI/ML-based medical devices depends largely on the behavioral characteristics of their users, who, for example, are often vulnerable to well-documented biases or algorithmic aversion (2). Many stakeholders increasingly identify the so-called black-box nature of predictive algorithms as the core source of users' skepticism, lack of trust, and slow uptake (3, 4). As a result, lawmakers have been moving in the direction of requiring the availability of explanations for black-box algorithmic decisions (5). Indeed, a near-consensus is emerging in favor of explainable AI/ML among academics, governments, and civil society groups. Many are drawn to this approach to harness the accuracy benefits of noninterpretable AI/ML such as deep learning or neural nets while also supporting transparency, trust, and adoption. We argue that this consensus, at least as applied to health care, both overstates the benefits and undercounts the drawbacks of requiring black-box algorithms to be explainable.

Read more from the original source:

Beware explanations from AI in health care - Science

Posted in Ai | Comments Off on Beware explanations from AI in health care – Science

AI and financial processes: Balancing risk and reward – VentureBeat

Posted: at 5:43 pm

All the sessions from Transform 2021 are available on-demand now. Watch now.

Of all the enterprise functions influenced by AI these days, perhaps none is more consequential than AI in financial processes. People don't like when other people fiddle with their money, let alone an emotionless robot.

But as it usually goes with first impressions, AI is winning converts in monetary circles, in no small part due to its ability to drive out inefficiencies and capitalize on hidden opportunities, basically creating more wealth out of existing wealth.

One of the ways it does this is to "reduce the cost of accuracy," says Sanjay Vyas, CTO of Planful, a developer of cloud-based financial planning platforms. His take is that while finance has lagged in the adoption of AI, it is starting to catch up as more tech-savvy professionals enter the field. A key challenge in finance is to push data accuracy as far as you can without it costing more than you are either saving or earning.

To date, this effort has been limited largely by the number of man-hours you are willing to devote to achieving accuracy, but AI turns this equation on its head since it can work all day and all night focusing on the most minute of discrepancies.

This will likely be a particular boon for smaller organizations that lack the resources and the scale to make this kind of data analysis worthwhile. And as we've seen elsewhere, it also frees up time for human finance specialists to concentrate on higher-level, strategic initiatives.

AI is also contributing to the financial sector in other novel ways: fraud detection, for example. GoodData senior content writer Harry Dix recently highlighted the multiple ways in which careful analysis of data trails can quickly lead to fraud discovery and the takedown of perpetrators. Most frauds require careful coordination among multiple players to disguise their crimes as normal transactions, but a properly trained AI model can drill down into finite data sets to detect suspicious patterns. And it can do this much faster than a human examiner, often detecting the fraud before it has been fully carried out and assets have gone missing.
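Pattern-based fraud detection of this kind can be illustrated with a toy example. The sketch below flags transactions whose amounts deviate sharply from an account's history, using a robust z-score; it is purely illustrative and not the approach GoodData or any vendor actually ships.

```python
from statistics import median

def flag_suspicious(amounts, threshold=3.5):
    """Return indices of transactions whose amounts are extreme outliers.

    Uses a robust z-score based on the median absolute deviation (MAD),
    so one huge transfer cannot mask itself by inflating the scale.
    Purely illustrative; real fraud models use many more signals
    (timing, counterparties, velocity) and trained classifiers.
    """
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# Mostly routine activity, plus one transfer wildly out of line.
history = [120, 95, 130, 110, 105, 98, 125, 9000]
suspicious = flag_suspicious(history)  # flags index 7, the 9000 transfer
```

The MAD is used instead of the standard deviation because a single large outlier inflates the standard deviation enough to hide itself; the median-based scale stays stable.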

Implementing AI in financial processes is not just a way to get ahead, social media entrepreneur Annie Brown says on Forbes; it is necessary to remain afloat in an increasingly challenging economy. With fintech and digital currencies now mainstream, organizations that cannot keep up with the pace of business will find themselves on the road to obsolescence in short order.

New breeds of financial services, everything from simple banking and transaction processing to sophisticated trading and capital management, are cropping up every day, and virtually all of them use AI in one form or another to streamline processes, improve customer service, and produce greater returns.

Still, the overriding question regarding AI in financial processes is how to ensure the AI behaves honestly and ethically. While honesty and ethics haven't exactly been hallmarks of the financial industry throughout its human-driven history, steps can be taken to ensure AI will not knowingly deliver poor outcomes to users. The European Commission, for one, is developing a legal framework to govern the use of AI in areas like credit checks and chatbots.

At the same time, the IEEE has compiled a guidebook with input from more than 50 leading financial institutions from the U.S., U.K., and Canada on the proper way to instill trust and ethical behavior in AI models. The guide offers multiple tips on how to train AI with fairness, transparency and privacy across multiple domains, such as cybersecurity, loan and deposit pricing and hiring.

It seems that finance is feeling the push and pull of AI more than other disciplines. On the one hand is the lure of greater profits and higher returns; on the other is the fear that something could go wrong, terribly wrong.

The solution: Avoid the temptation to push AI into finance-related functions until the enterprise is ready. Just like any employee, AI must be trained and seasoned before it can be entrusted with higher levels of responsibility. After all, you wouldn't promote someone fresh out of college to CFO on their first day. Start AI out with low-level financial responsibilities and let it prove itself worthy of promotion, just like any other employee.

Go here to read the rest:

AI and financial processes: Balancing risk and reward - VentureBeat

Posted in Ai | Comments Off on AI and financial processes: Balancing risk and reward – VentureBeat

Mimecasts new AI tools protect against the sneakiest phishing attacks – VentureBeat

Posted: at 5:43 pm


Email security provider Mimecast this week launched Mimecast CyberGraph, an AI-driven add-on to Mimecast Secure Email Gateway (SEG) that sniffs out sophisticated and hard-to-detect phishing and impersonation threats, the company said.

Mimecast CyberGraph uses machine learning technology to detect and prevent phishing attacks delivered via email, the Lexington, Massachusetts-based company said in a statement. The addition to Mimecast's flagship email security product creates an identity graph, a chart of relationships between email senders, and uses an AI engine to identify potential threat actors and warn employees of possible cyber threats.
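The identity-graph idea, stripped to its essentials, amounts to tracking which senders a recipient has historically corresponded with and treating messages from unknown or lookalike senders as candidates for a warning. The sketch below is a hypothetical illustration; Mimecast's actual graph and ML scoring are proprietary and far richer.

```python
from collections import defaultdict

class IdentityGraph:
    """Toy relationship graph: which senders each recipient has heard from.

    Hypothetical illustration only; a production identity graph would
    also weight relationships, decay them over time, and feed an ML model.
    """
    def __init__(self):
        self.known_senders = defaultdict(set)  # recipient -> senders seen

    def observe(self, sender, recipient):
        """Record a legitimate historical message."""
        self.known_senders[recipient].add(sender)

    def is_unfamiliar(self, sender, recipient):
        """True if this sender has no prior relationship with the
        recipient, e.g. a lookalike domain impersonating a contact."""
        return sender not in self.known_senders[recipient]

graph = IdentityGraph()
graph.observe("ceo@acme.com", "bob@acme.com")

# 'acrne.com' is a lookalike domain with no prior relationship,
# so a warning banner would be warranted.
warn = graph.is_unfamiliar("ceo@acrne.com", "bob@acme.com")  # True
```

In a real system the "warn" signal would drive the kind of color-coded banner described below, rather than blocking outright, since first contact from a legitimate new sender is common.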

"Phishing and impersonation attacks are getting more sophisticated, personalized, and harder to stop. If not prevented, these attacks can have devastating results for an enterprise organization," Mimecast VP of product management for threat intelligence Josh Douglas said.

"Security controls need to be constantly updated and improved to outsmart threat actors. CyberGraph leverages our AI and machine learning technologies to help keep employees one step ahead with real-time warnings, directly at the point of risk."

Mimecast SEG customers can integrate and activate the add-on without disrupting their email security operations, Douglas said. He also noted that the addition of CyberGraph's capabilities means enterprise SEG customers no longer need to find a third-party point product to provide high-level protection against email threats.

In addition to the identity graph, CyberGraph includes other capabilities to prevent cyber attacks, like blocking embedded trackers and warning users of potential threats with color-coded banners.

Douglas said the release is timely because email threats have never been more pervasive or sophisticated than they are in the COVID-19 era, which greatly increased the exposure of remote workforces to threats. He cited Mimecast research published in the company's State of Email Security Report, which found that both the number of threats and the number of employees falling for threats dramatically increased during the pandemic.

CyberGraph is available now to Mimecast SEG customers in the United States and United Kingdom, with availability in more regions coming soon, the company said.

Read more from the original source:

Mimecasts new AI tools protect against the sneakiest phishing attacks - VentureBeat

Posted in Ai | Comments Off on Mimecasts new AI tools protect against the sneakiest phishing attacks – VentureBeat

Largo Medical 1st in US to offer new AI for colonoscopies – ABC Action News

Posted: at 5:43 pm

LARGO, Fla. -- Leading the way, right here in Tampa Bay, Largo Medical Center is now the first hospital in the country to offer a newly FDA-approved technology to help doctors detect colon cancer early.

This is great news for patients, as the recommended age for a first colonoscopy for those at average risk of colon cancer is now 45 instead of 50, according to the American Cancer Society.

The newly FDA-approved technology is an artificial intelligence system that aids doctors during colonoscopies. Patients won't feel a difference during the colonoscopy. It's still the same scope, but now it's smarter.

"We know that usually, it's around a 20-25 percent miss rate as far as polyps. And what we, as GI physicians, are afraid of, it's something called interval cancer, which means between the colonoscopy we have done and the next one, the patient develops colon cancer," said Dr. Meir Mizrahi, Medical Director of Advanced GI Services for HCA Healthcare and Largo Medical Center and the Program Director for the Advanced Endoscopy Fellowship at Largo Medical Center.

But now, with this new AI, called GI Genius by Medtronic, they're missing fewer polyps. That's because the machine is programmed to detect 31 million different kinds of polyps.

"I've done in my life more than 10,000 colonoscopies, maybe saw 8,000 polyps. I cannot compare myself to an artificial intelligence," said Dr. Mizrahi.

Here's how it works:

"When the system will recognize a polyp, there will be another square, which is green again, around the polyp. So based on the size of the polyp, the actual square will change and will give you almost the whole margins of the polyp," said Dr. Mizrahi.

Essentially, a green square will pop up on the screen around each polyp that it sees, allowing the doctor to go back and take a closer look at the polyp.


A recent study found that GI Genius was able to identify precancerous or cancerous tumors at a 13 percent higher rate than a standard colonoscopy.

"Which might prevent 14 percent of interval cancers," said Dr. Mizrahi.

Dr. Mizrahi is confident that this added intelligence will prolong lives, and he urges people to get their colonoscopies.

The American Cancer Society recently lowered the recommended screening age for those at average risk of colon cancer from 50 to 45.

And Dr. Mizrahi says colonoscopy prep is now easier than ever.

"Today we have a very, very good product that we are using, and actually the new product is tabs, it's not even liquid. You just take those tabs and drink a lot of water, and amazing, amazing results as far as colon cleaning," said Dr. Mizrahi.

Colorectal cancer, or colon cancer, is the third leading cause of death from cancer in the United States for both men and women. It usually starts from polyps or other precancerous growths in the colon.

Read more:

Largo Medical 1st in US to offer new AI for colonoscopies - ABC Action News

Posted in Ai | Comments Off on Largo Medical 1st in US to offer new AI for colonoscopies – ABC Action News

Fixing ‘concept drift’: Retraining AI systems to deliver accurate insights at the edge – GCN.com

Posted: at 5:43 pm

INDUSTRY INSIGHT

If you're like many people, you view more streaming content now than ever. To keep you watching, content providers rely on machine learning algorithms that recommend relevant new content.

But when the COVID-19 pandemic hit, viewing habits changed radically. Suddenly, different people were streaming different content at different times and in different ways. Were the ML algorithms now making less relevant recommendations? And were they falsely confident in the accuracy of their less precise predictions?

Such are the vagaries of concept drift, an issue few users of artificial intelligence are aware of. As government organizations leverage more AI in more far-flung locations, concept drift is a problem they'll have to address. It presents particular challenges when AI is deployed at the network's edge.

Yet by being aware of the problem and its solutions, agencies can make sure their analysts, data scientists and systems integrators take steps to optimize the accuracy and confidence of their AI deployments.

Growth in government AI

While AI remains an emerging technology, both military and civilian government organizations increasingly deploy the capability -- particularly ML -- in a variety of situations.

Many of these applications operate at the edge. The edge, however, presents unique challenges, because the models must be lean enough to run with limited processing power and network bandwidth. Those constraints become bigger factors when retraining algorithms to address concept drift.

Concept drift: High confidence in low accuracy

A simple way to think about AI algorithms is to say they accept data inputs and produce predictive outputs. Inputs could include images of cars, specifications such as machine tolerances or environmental factors such as temperature. Outputs could include identification of road hazards or forecasts of when equipment will require maintenance.

Concept drift occurs when the behaviors or features of the outputs being predicted change over time, such that predictions become inaccurate for similar input data. Let's say an ML algorithm designs shipping routes based on inputs such as the location of manufacturing sites, seasonal weather patterns, fuel costs, and geopolitical realities. If the optimal shipping route changes over time, perhaps because sea currents shift due to climate change, the model concept will have drifted. This will cause the algorithm to make recommendations based on an out-of-date mapping between the input data and the outputs being predicted.

Two key problems result from concept drift. First, the algorithm starts making predictions that are less accurate -- often, much less accurate. So it might recommend a shipping route that's slow, costly or even dangerous.

Second, and more deceptive, the algorithm continues to report a high level of confidence in its predictions, even though they're markedly less accurate. The model might accurately identify anomalous network behavior only 70 times out of 100, yet report that it's 99% confident in the accuracy of its identifications.

Retraining at the edge

Technology vendors are developing AI training algorithms that can both determine when a model concept has drifted and identify the new inputs that will most efficiently retrain the model. In the meantime, when AI produces results that don't align with what's expected, data scientists or systems integrators should consider whether concept drift is to blame. If so, they should take these steps:

Identify root causes. Re-establish the ground truth of the algorithm by checking its results against what has been established as reality. Select a few samples, manually create accurate labels for them, and compare the model's confidence against its actual accuracy.
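That ground-truth check can be expressed in a few lines. The sketch below compares a model's mean reported confidence against its measured accuracy on a small hand-labeled sample; a large positive gap is the overconfidence symptom described above. Names and data are illustrative, not from any particular system.

```python
def confidence_accuracy_gap(predictions, labels):
    """Compare mean reported confidence with measured accuracy.

    predictions: list of (predicted_label, confidence) pairs from the model.
    labels: hand-verified ground-truth labels for the same samples.
    A gap well above zero means the model is overconfident -- a
    classic symptom of concept drift.
    """
    correct = sum(pred == truth
                  for (pred, _), truth in zip(predictions, labels))
    accuracy = correct / len(labels)
    mean_confidence = sum(conf for _, conf in predictions) / len(predictions)
    return mean_confidence - accuracy

# Illustrative numbers: the model is sure of itself but right only
# half the time on the hand-labeled sample.
preds = [("cat", 0.99), ("cat", 0.98), ("dog", 0.97), ("cat", 0.99)]
truth = ["cat", "dog", "dog", "dog"]
gap = confidence_accuracy_gap(preds, truth)  # roughly 0.48: overconfident
```

Even a few dozen hand-labeled samples are usually enough to reveal a gap this large; a gap near zero suggests the problem lies elsewhere.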

If confidence is high but accuracy is low, investigate how the inputs have changed. Let's say the inputs of an autonomous vehicle have been corrupted by dirt on its camera lens. That's a problem of data drift, not concept drift. But if the vehicle was trained in a temperate environment and is now being used in a desertscape, concept drift might have occurred.

Retrain the algorithm. There are two basic approaches to retraining: continual learning and transfer learning. Continual learning makes small, regular updates to the model over time. In this case, samples are manually selected and labeled so they can be used to retrain the model to maintain accuracy.
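A minimal sketch of continual learning, under the assumption that a small stream of freshly labeled samples is available at the edge: the model keeps a sliding window of recent examples and refits after each update, so its decision boundary tracks the drift. The one-dimensional threshold "model" is a deliberate stand-in for a real classifier.

```python
from collections import deque

class ContinualClassifier:
    """Sketch of continual learning: small, regular updates over time.

    Keeps a sliding window of recently labeled samples and refits after
    each one, so the decision boundary follows drift. Updates this cheap
    are what make retraining feasible with edge-class hardware.
    """
    def __init__(self, window=100):
        self.samples = deque(maxlen=window)  # recent (value, label) pairs
        self.threshold = 0.0

    def observe(self, value, label):
        self.samples.append((value, label))
        self._refit()

    def _refit(self):
        pos = [v for v, y in self.samples if y == 1]
        neg = [v for v, y in self.samples if y == 0]
        if pos and neg:
            # Midpoint between class means: a minimal 'update step'.
            self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, value):
        return 1 if value >= self.threshold else 0

clf = ContinualClassifier(window=4)
for v, y in [(1, 0), (2, 0), (8, 1), (9, 1)]:      # old regime
    clf.observe(v, y)
for v, y in [(11, 0), (12, 0), (18, 1), (19, 1)]:  # after drift
    clf.observe(v, y)
# The boundary has followed the drift to the new midpoint, 15.0.
```

The bounded `deque` is doing the real work here: old-regime samples age out of the window, so the model never has to be rebuilt from scratch.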

Transfer learning reuses the existing model as the foundation for a new model. Let's say the initial model's basic features are solid, but its classification capability is attuned to data inputs that no longer reflect reality. Transfer learning allows the classification capability to be retrained without rebuilding the model from scratch.
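Transfer learning can be sketched the same way: the "backbone" feature extractor trained in the data center stays frozen, and only the final decision rule is refit on newly labeled data. The code below is a toy stand-in, not a recipe for any particular framework.

```python
# Frozen 'backbone' trained centrally: maps raw input to features.
# Here it is a trivial function, standing in for a network's early layers.
def backbone(x):
    return (x, x * x)

def retrain_head(labeled_samples):
    """Refit only the final decision rule on newly labeled data.

    Transfer learning in miniature: the backbone is untouched, and the
    'head' is a threshold on the first feature -- a stand-in for
    retraining a network's final layer without rebuilding the model.
    """
    pos = [backbone(x)[0] for x, y in labeled_samples if y == 1]
    neg = [backbone(x)[0] for x, y in labeled_samples if y == 0]
    return (min(pos) + max(neg)) / 2  # boundary between the two classes

def predict(head_threshold, x):
    return 1 if backbone(x)[0] >= head_threshold else 0

# The class boundary moved (concept drift); a handful of fresh labels
# realigns the head without touching the backbone.
fresh_labels = [(1.0, 0), (2.0, 0), (6.0, 1), (7.0, 1)]
head = retrain_head(fresh_labels)  # boundary lands at (2.0 + 6.0) / 2 = 4.0
```

Because only the tiny head is refit, the update needs a handful of labeled samples and negligible compute, which is exactly the constraint at the edge.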

The ability to realign without starting over is crucial at the edge. Creating an AI algorithm typically involves data volumes large enough to require the processing power of a centralized data center. Limited processing power and network bandwidth dictate that edge-based updates to AI algorithms be only incremental.

Building trust in AI outputs

Ultimately, agencies want their AI to deliver accurate insights and predictions. Just as important, they want those outputs to be trusted by the people who rely on them. That's where addressing concept drift becomes crucial.

AI is still new to many people. Government employees and citizens alike might be hesitant to trust AI analyses and recommendations. The more often AI outputs are found to be inaccurate, the more user skepticism will grow. By actively addressing concept drift, agencies can ensure the accuracy and confidence of their AI models. In particular, they can avoid false positives and false negatives that erode trust.

Content-streaming services use AI for purposes that are helpful but hardly high stakes. Government agencies will increasingly deploy AI in mission-critical use cases that can have a significant impact on personnel and citizens. Managing concept drift can make sure those algorithms deliver the insights and predictions agencies need -- and drive acceptance that maximizes investments in AI.

About the Author

Sean McPherson is research scientist and manager of AI and ML for Intel.

See the original post here:

Fixing 'concept drift': Retraining AI systems to deliver accurate insights at the edge - GCN.com

Posted in Ai | Comments Off on Fixing ‘concept drift’: Retraining AI systems to deliver accurate insights at the edge – GCN.com

Announcing the winners of the Women in AI Awards at Transform 2021 – VentureBeat

Posted: at 5:43 pm


One of the goals of Transform 2021 is to bring a broad variety of expertise, views, and experiences to the stage -- virtual this year -- to illustrate all the different ways AI is changing the world. As part of VentureBeat's commitment to supporting diversity and inclusion in AI, that also means being mindful of who is represented on the panels and talks.

The Women in AI Awards end a week that kicked off with the Women in AI Breakfast, with several talks on inclusion and bias in between. Margaret Mitchell, a leading researcher on responsible AI, spoke, as did executives from Pinterest, Redfin, Intel, and Salesforce.

VentureBeat leadership made the final selections from the more than 100 women who were nominated during the open nomination period. Selecting the winners was difficult because it was clear that each of these nominees is a trailblazer who has made outstanding contributions in the AI field.

This award honors women who have started companies showing great promise in AI and considers factors such as business traction, the technology solution offered by the company, and impact in the AI space.

Briana Brownell, founder and CEO of Pure Strategy, was the winner of the AI Entrepreneur Award for 2021. Brownell and her team at Pure Strategy designed Annie (ANIE), an Automated Neural Intelligence Engine that helps humans understand unstructured data. Annie has been used by doctors, specialists, and physician assistants to communicate with patients and with each other across gaps in cultural knowledge, overcoming biases, phobias, and anxieties.

This award honors those who have made a significant impact in an area of research in AI, helping accelerate progress either within their organization, as part of academic research, or impacting AI approaches in technology in general.

Dr. Nuria Oliver, chief scientific advisor of the Vodafone Institute, received the AI Research Award for 2021. Oliver is the named inventor on 40 filed patents, including patents on computational modeling of human behavior via machine learning techniques and on the development of intelligent interactive systems. She's been named an ACM Distinguished Scientist and Fellow, as well as a Fellow of the IEEE and of EurAI. She also pioneered not-for-profit business and academic research using anonymized mobile data to track and prevent the spread of Ebola and malaria in Africa, an approach that was deployed across Africa and Europe in a matter of days in 2020 to track and prevent the spread of COVID-19. What's more, she has proposed that all of the data scientists involved in her humanitarian efforts work on those projects pro bono.

This award honors those who demonstrate exemplary leadership and progress in the growing topic of responsible AI. This year, there was a tie.

Haniyeh Mahmoudian, the global AI ethicist at DataRobot, and Noelle Silver, founder of the AI Leadership Institute, both received the Responsibility & Ethics Award for 2021.

Mahmoudian was an early adopter of bringing statistical bias measures into development processes. She wrote Statistical Parity, along with natural-language explanations for users, a feat that has resulted in a priceless improvement in model bias that scales exponentially as the platform is used across hundreds of companies and verticals such as banking, insurance, tech, CPG, and manufacturing. A contributing member of the Trusted AI team's culture of inclusiveness, Mahmoudian operates under the core belief that diversity of thought results in thoughtful and improved outcomes. Mahmoudian's research into the risk level for COVID contagion outside of racial bias was used at the federal level to inform resource allocation, and also by Moderna during vaccine trials.

A consistent champion of public understanding of AI and tech fluency, Silver has launched and established several initiatives supporting women and underrepresented communities within AI, including the AI Leadership Institute, WomenIn.AI, and more. She's a Red Hat Managed OpenShift Specialist in AI/ML, a WAC Global Digital Ambassador, and a Microsoft MVP in Artificial Intelligence, holds numerous other honors, and was a 2019 winner of the VentureBeat Women in AI Mentorship Award.

This award honors leaders who helped mentor other women in the field of AI, provided guidance and support, and encouraged more women to enter the field of AI.

Katia Walsh, Levi Strauss's chief strategy and AI officer, was the recipient of the AI Mentorship Award for 2021. Walsh has been an early influencer for women in AI since her work at Vodafone, actively searching for female candidates for the team, mentoring younger female colleagues, and serving as a strategy advisor to Fellowship.AI, a free data science training program. At Levi Strauss, Walsh created a digital upskilling program that is the first of its kind in the industry, with two-thirds of its bootcamp participants being women.

This award honors those in the beginning stages of their AI career who have demonstrated exemplary leadership traits.

The Rising Star Award for 2021 was awarded to Arezou Soltani Panah, a research fellow at Deakin University in Australia.

Panah's work at the Swinburne Social Innovation Research Institute focuses on solving complex social problems such as loneliness, family violence, and social stigma. While her work demands substantial cross-disciplinary research and collaboration with subject-matter experts like social scientists and governmental policy advisors, she has created a range of novel structured machine learning solutions that span those disciplines to produce responsible AI research. Panah's focus on social inequality and disempowerment has used the power of natural language processing to measure language and algorithmic bias. One such project quantified the extent of gender bias in the featuring of female athletes in Victoria, Australia news coverage, and how women's achievements are attributed to non-individual efforts, such as their team, coach, or partner, compared with their male counterparts'. Another project looked at gender biases in news reporting on obesity and the consequences for weight stigmatization in public health policies.

One thing was very clear from reading over the nominations that came in: There are many leaders doing meaningful work in AI. It was very inspiring to see the caliber of executives and scientists leading the way in AI and making a difference in our world. The list of nominations is full of leaders who will continue to make their mark over the next few years, and there will be more opportunities to hear about their work.

Original post:

Announcing the winners of the Women in AI Awards at Transform 2021 - VentureBeat

Posted in Ai | Comments Off on Announcing the winners of the Women in AI Awards at Transform 2021 – VentureBeat

Announcing the AI Innovation Awards winners at Transform 2021 – VentureBeat

Posted: at 5:43 pm


After hearing from AI executives, scientists, and leaders during the Transform 2021 virtual conference, it is clear that innovation in the field abounds. The AI Innovation Awards cap off a week of celebrating companies and individuals pushing the boundaries of AI to discover new capabilities.

The third annual AI Innovation Awards honors people and companies engaged in compelling and influential work in five areas: natural language processing and understanding, business applications, edge innovation, Startup Spotlight, and AI for Good.

Many things become possible when machines understand the languages people speak and write. Among those gains, smart assistants can handle more tasks across diverse industries.

Hugging Face received the Innovation in Natural Language Processing/Understanding Award for 2021 for the team's work in democratizing NLP.

While research is essential, the true impact of AI comes from practical applications tackling real-world problems.

Pilot received the Innovation in Business Applications Award for 2021 for making the back-office experience better for small businesses without deep finance teams. The software's predictive insights help small businesses make better budgeting and spending decisions.

Edge AI is going to become even more important with the boom in the internet of things and near-ubiquitous network capabilities promised by 5G.

SambaNova Systems received the Innovation in Edge Award for 2021 for developing systems that run AI and data-intensive apps from the datacenter to the edge.

The Startup Spotlight focused on companies that work with AI, have raised $35 million or less in funding, have been in operation for no more than two years, and have the potential to make a significant contribution to the field in the years to come.

Parity received the Startup Spotlight Award for 2021 for its tools and services designed to identify and remove bias from AI systems. And after a week of deep conversations about the importance of ethics and responsible AI, it's clear this is going to be a very important area of focus.

AI for Good recognizes AI technology, the application of AI, and advocacy or activism in the field of AI that protects or improves human lives or operates to fight injustice, improve equality, and better serve humanity.

Folding@Home, based at the School of Medicine at Washington University in St. Louis, with support from its other main labs at Memorial Sloan Kettering Cancer Center and Temple University, received the AI for Good Award for 2021. By employing crowdsourced computer-processing power to run molecular calculations, Folding@Home helps scientists study how proteins misfold and cause disease, and develop therapies based on that research. Folding@Home solved some basic problems in SARS-CoV-2 research, helping scientists in their work on COVID-19 vaccines.

The nominees were selected by a committee consisting of Vijoy Pandey, the VP of engineering and CTO of cloud and distributed systems at Cisco; Raffael Marty, senior VP of product and cybersecurity at ConnectWise and the former chief research and intelligence officer at Forcepoint; and Stacey Shulman, VP of the IoT Group and general manager of Health, Life Sciences, and Emerging Technologies at Intel. Each of the nominees is a trailblazer involved in influential work in AI, and there will be more opportunities to hear their stories in the years to come.

View original post here:

Announcing the AI Innovation Awards winners at Transform 2021 - VentureBeat


Feeding the machine: We give an AI some headlines and see what it does – Ars Technica

Posted: at 5:43 pm

Image caption: Turning the lens on ourselves, as it were.

There's a moment in any foray into new technological territory when you realize you may have embarked on a Sisyphean task. Staring at the multitude of options available to take on the project, you research your options, read the documentation, and start to workonly to find that actually just defining the problem may be more work than finding the actual solution.

Reader, this is where I found myself two weeks into this adventure in machine learning. I familiarized myself with the data, the tools, and the known approaches to problems with this kind of data, and I tried several approaches to solving what on the surface seemed to be a simple machine-learning problem: based on past performance, could we predict whether any given Ars headline will be a winner in an A/B test?

Things have not been going particularly well. In fact, as I finished this piece, my most recent attempt showed that our algorithm was about as accurate as a coin flip.

But at least that was a start. And in the process of getting there, I learned a great deal about the data cleansing and pre-processing that goes into any machine-learning project.

Our data source is a log of the outcomes from 5,500-plus headline A/B tests over the past five years; that's about as long as Ars has been doing this sort of headline shootout for each story that gets posted. Since we have labels for all this data (that is, we know whether it won or lost its A/B test), this would appear to be a supervised learning problem. All I really needed to do to prepare the data was to make sure it was properly formatted for the model I chose to use to create our algorithm.

I am not a data scientist, so I wasn't going to be building my own model any time this decade. Luckily, AWS provides a number of pre-built models suitable to the task of processing text and designed specifically to work within the confines of the Amazon cloud. There are also third-party models, such as those from Hugging Face, that can be used within the SageMaker universe. Each model seems to need data fed to it in a particular way.

The choice of the model in this case comes down largely to the approach we'll take to the problem. Initially, I saw two possible approaches to training an algorithm to get a probability of any given headline's success:

The second approach is much more difficult, and there's one overarching concern with either of these methods that makes the second even less tenable: 5,500 tests, with 11,000 headlines, is not a lot of data to work with in the grand AI/ML scheme of things.

So I opted for binary classification for my first attempt, because it seemed the most likely to succeed. It also meant the only data point I needed for each headline (besides the headline itself) was whether it won or lost the A/B test. I took my source data and reformatted it into a comma-separated value file with two columns: titles in one, and "yes" or "no" in the other. I also used a script to remove all the HTML markup from headlines (mostly a few HTML tags for italics). With the data cut down almost all the way to essentials, I uploaded it into SageMaker Studio so I could use Python tools for the rest of the preparation.
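The cleanup script itself isn't shown, but the tag-stripping step can be sketched in a few lines; the helper name and regex here are assumptions for illustration, not the script from the article:

```python
import re

# Remove simple HTML tags (mostly <em>/<i> italics) from a headline.
def strip_tags(headline):
    return re.sub(r"</?[A-Za-z][^>]*>", "", headline)

cleaned = strip_tags("Hackers hit <em>another</em> pipeline")
print(cleaned)  # Hackers hit another pipeline
```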

Next, I needed to choose the model type and prepare the data. Again, much of data preparation depends on the model type the data will be fed into. Different types of natural language processing models (and problems) require different levels of data preparation.

After that comes tokenization. AWS tech evangelist Julien Simon explains it thusly: "Data processing first needs to replace words with tokens, individual tokens." A token is a machine-readable number that stands in for a string of characters. "So 'ransomware' would be word one," he said, "'crooks' would be word two, 'setup' would be word three ... so a sentence then becomes a sequence of tokens, and you can feed that to a deep-learning model and let it learn which ones are the good ones, which ones are the bad ones."
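As a toy illustration of that word-to-token mapping (the numbering scheme here is invented for clarity, not BlazingText's actual internals):

```python
# Assign each previously unseen word the next integer token,
# then encode the sentence as a sequence of those tokens.
def tokenize(sentence, vocab):
    tokens = []
    for word in sentence.split():
        if word not in vocab:
            vocab[word] = len(vocab) + 1  # "word one", "word two", ...
        tokens.append(vocab[word])
    return tokens

vocab = {}
tokens = tokenize("ransomware crooks setup ransomware", vocab)
print(tokens)  # [1, 2, 3, 1]
```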

Depending on the particular problem, you may want to jettison some of the data. For example, if we were trying to do something like sentiment analysis (that is, determining whether a given Ars headline was positive or negative in tone) or grouping headlines by what they were about, I would probably want to trim the data down to the most relevant content by removing "stop words": common words that are important for grammatical structure but don't tell you what the text is actually saying (like most articles).

However, in this case, the stop words were potentially important parts of the dataafter all, we're looking for structures of headlines that attract attention. So I opted to keep all the words. And in my first attempt at training, I decided to use BlazingText, a text processing model that AWS demonstrates in a similar classification problem to the one we're attempting. BlazingText requires the "label" datathe data that calls out a particular bit of text's classificationto be prefaced with "__label__". And instead of a comma-delimited file, the label data and the text to be processed are put in a single line in a text file, like so:
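To make that format concrete, a couple of lines of a BlazingText input file might look like this (the headlines are invented for illustration):

```
__label__yes vaccine rollout hits new milestone as states expand eligibility
__label__no states expand vaccine eligibility in latest rollout step
```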

Another part of data preprocessing for supervised training ML is splitting the data into two sets: one for training the algorithm, and one for validation of its results. The training data set is usually the larger set. Validation data generally is created from around 10 to 20 percent of the total data.

There has been a great deal of research into what is actually the right amount of validation data; some of that research suggests that the sweet spot relates more to the number of parameters in the model being used to create the algorithm than to the overall size of the data. In this case, given that there was relatively little data to be processed by the model, I figured my validation data would be 10 percent.

In some cases, you might want to hold back another small pool of data to test the algorithm after it's validated. But our plan here is to eventually use live Ars headlines to test, so I skipped that step.

To do my final data preparation, I used a Jupyter notebook (an interactive web interface to a Python instance) to turn my two-column CSV into a data structure and process it. Python has some decent data-manipulation and data-science-specific toolkits that make these tasks fairly straightforward, and I used two in particular here: pandas and scikit-learn (sklearn).

Here's a chunk of the code in the notebook that I used to create my training and validation sets from our CSV data:
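A minimal sketch of the kind of notebook cell described here, assuming pandas and scikit-learn are installed; the filename and sample rows are invented stand-ins for the real headline data:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Build a tiny stand-in CSV with the same two-column shape as the real data
# (filename and headlines are hypothetical).
pd.DataFrame({
    "title": ["Ransomware Crooks Strike Again", "A Quiet Week In Tech"] * 10,
    "label": ["yes", "no"] * 10,
}).to_csv("headlines.csv", index=False)

# Import the CSV into a DataFrame and peek at the columns and first rows.
dataset = pd.read_csv("headlines.csv")
print(dataset.head())

# Bulk-add the "__label__" prefix BlazingText requires, and use a lambda
# to force all headline words to lower case.
dataset["label"] = "__label__" + dataset["label"]
dataset["title"] = dataset["title"].apply(lambda t: t.lower())

# Split into a training set and a 10 percent validation set.
train, validation = train_test_split(dataset, test_size=0.1, random_state=1)
print(len(train), len(validation))  # 18 2
```

From there, each row can be written out as a single "__label__yes headline text" line in the format BlazingText expects.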

I started by using pandas to import the data structure from the CSV created from the initially cleaned and formatted data, calling the resulting object "dataset." Using the dataset.head() command gave me a look at the headers for each column that had been brought in from the CSV, along with a peek at some of the data.

The pandas module allowed me to bulk-add the string "__label__" to all the values in the label column as required by BlazingText, and I used a lambda function to process the headlines and force all the words to lower case. Finally, I used the sklearn module to split the data into the two files I would feed to BlazingText.

Follow this link:

Feeding the machine: We give an AI some headlines and see what it does - Ars Technica


How The AI Revolution Is Augmenting The New World Of Work – Techfinancials.co.za

Posted: at 5:43 pm

It's an amusing irony that, for many of us, the quickest way to find answers to questions like these is via the technology that inspired them.

Artificial Intelligence (AI) has become part of the fabric of our daily lives, with smart assistants and technology now in millions of homes around the world. According to Statista, sales of Amazon's Echo units reached 32 million in 2018 and are expected to hit 130 million by 2025.

Advances in technology also form the foundation of the world's shift towards hybrid work, which has been accelerated by the COVID-19 pandemic.

"Herding people to the office is looking increasingly obsolete, expensive and inconvenient," says IWG Founder and CEO Mark Dixon.

"In some cases, data saved in the cloud isn't even in the same country as the staff accessing it. So why should workers go to the effort and expense of dragging themselves into work to spend the day working on a device that they have brought with them, and will return home with at the end of the day?"

Now, companies as diverse as Standard Chartered Bank, NTT and Google are committing to hybrid working for the long term, recognising its benefits for business as well as for the work-life balance of employees. IWG has added more than a million new users to its global network of flexible workspaces in the first half of 2021.

Like the hybrid approach, modern AI can enhance our working lives, helping us to increase productivity and improve our wellbeing. Here, we explore how AI is making us happier and more effective at work.

In large firms, AI is already playing a part in the recruitment process, pre-screening candidates before members of the HR team review their applications.

Skills assessments, which help companies to decide who to invite for interviews, have been gamified by AI firms including Pymetrics. Its software allows candidates' cognitive and emotional qualities to be appraised without fear that biases based on their race, gender or socioeconomic status might influence the outcome.

Modern Hire, another company that provides AI solutions for recruitment, has worked with more than 700 brands including Amazon, P&G and Walmart. It claims that services including automated interview scoring have been proven to be over three times less biased than human interview scorers and can help to ensure a fair, complete and objective hiring experience.

In theory, then, AI can help prevent you from being recruited for a role that may not suit you. In addition, some companies are using AI to further screen unsuccessful applicants, inviting them to try for alternative roles they might find more fitting.

When it comes to onboarding, forward-thinking companies such as Unilever have harnessed chatbot technology to help make sure no new hire's question, no matter how small, goes unanswered. Its tool, Unabot, can offer information on everything from payroll problems to where workers will find parking spaces.

People who work in customer-facing roles might worry more than most about the rise of AI, and of chatbots in particular. While it's true that they're capable of handling a high percentage of basic queries, smart business leaders understand that chatbots need to work alongside human beings rather than replace them.

Chatbots can triage customer issues, addressing simple problems and freeing up staff to deal with more complex questions.

AI has a role to play when it comes to retaining, as well as recruiting, staff. Technology can support ongoing professional development, which is always a priority for ambitious workers who are keen to learn from more experienced colleagues.

Engineering firm Honeywell has developed virtual reality (VR) and AI-based training tools that allow users to test their competency in challenging situations. The VR software presents them with simulated problems and also footage of workers real-life experiences, which is captured by engineers wearing specially designed headsets.

AI can also augment our working lives by performing dull, repetitive tasks such as arranging meetings or creating to-do lists on our behalf.

Microsoft Office 365, already standard software in many workplaces around the world, has an array of simple AI features. Outlook, for instance, can scan messages and offer users a daily reminder of things they've committed to do. Its calendar can link specific documents to scheduled meetings, identifying what might be pertinent based on analysis of titles, contents and origins.

AI is also being used at a deeper level by some companies keen to improve productivity, on the basis that data can identify problems more objectively than people.

In his upcoming book, Scary Smart, Mo Gawdat argues that AI systems make mistakes because the data they are fed reflects our imperfect world. Gawdat, former Chief Business Officer of Google [X], predicts that by 2049 AI will be a billion times more intelligent than humans but says that this does not mean a Terminator-style calamity is inevitable.

"We are replicating human intelligence with machine learning," Gawdat says. "Just like an 18-month-old infant, machines are learning by observing." What we show AI is critical, Gawdat argues, as it is "fair to imagine that AI might be the last technology we humans invent ... Once [systems] are smart enough, they will solve the next problem on our behalf."

Gawdat is clear that, for all its superior processing power, the buck stops with human beings when it comes to the effects of the AI we develop.

"We need to fill the world with compassion and kindness if this is what we want to pass on to future generations," he insists. "We need to make sure that those machines work on our side."

And, as we all look forward to a post-COVID world, it is going to be fascinating to see how the world of work emerges after the pandemic. Certainly, the digital transformation that the pandemic has so dramatically accelerated is here to stay. Equally permanent will be the move towards more flexible workspaces and job profiles.

Employers are realising that flexibility increases productivity, and that giving their staff the opportunity to engage with their work on a variety of platforms, both physical and digital, increases collaboration, productivity and employee loyalty.



Go here to see the original:

How The AI Revolution Is Augmenting The New World Of Work - Techfinancials.co.za
