The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: July 2021
Qualcomm’s Vision: The Future Of … AI – Forbes
Posted: July 21, 2021 at 12:25 am
Mobile Chip Leader's AI Starts In Mobile, And Grows To The Clouds
Company acquires assets from Twenty Billion Neurons GmbH to bolster its AI Team.
Qualcomm Technologies (QTI) is running a series of webinars titled "The Future of...", and the most recent edition is on AI. In this lively session, I hosted a conversation with Ziad Asghar, QTI VP of Product Management, Alex Katouzian, QTI SVP and GM Mobile Compute and Infrastructure, and Clément Delangue, Co-Founder and CEO of the open source AI model company Hugging Face, Inc. I've also penned a short Research Note on the company's AI strategy, which can be found here on Cambrian-AI, where we outline some impressive AI use cases.
Qualcomm believes AI is evolving exponentially thanks to billions of smart mobile devices, connected by 5G to the cloud, fueled by a vibrant ecosystem of application developers armed with open-source AI models. Other semiconductor companies might say something similar, but in Qualcomm's case it uniquely starts with mobile. The latest Snapdragon 888 has a sixth-generation AI engine powerful enough to process significant AI models on the phone, enabling applications such as on-device speech processing and even real-time spoken language translation. Qualcomm complements the edge devices with cloud processing using the Cloud AI100, which recently demonstrated leading performance efficiency on the MLPerf V1.0 benchmarks. Qualcomm calls this approach Distributed Intelligence.
Qualcomm envisions a tightly coupled network of AI processors across the cloud, edge cloud, and on-device endpoints.
To add more talent and IP to Qualcomm's AI research lab, the company announced today that it has acquired a team from Twenty Billion Neurons GmbH (TwentyBN), including their high-quality video dataset that is widely used by the AI research community. The founding CEO is Roland Memisevic, who co-led the world-renowned MILA AI institute with Yoshua Bengio of the Université de Montréal.
While few smartphone users realize they are using AI every time they take a picture, even fewer understand that AI helps keep them connected. QTI embeds AI both in applications, such as computational photography and accurate voice interaction, and in the operation of the mobile handset itself, optimizing 5G to extend the network's reach and power management to prolong battery life. Here are a few snippets from "The Future Of..." session.
Alex Katouzian notes that "Strategically, what we do is we use our largest channel, which is mobile, to create inventions that will spiral and get reused in adjacent markets that have the same mobile traits. For example, PCs and XR or auto. And then, getting into some of the infrastructure-based designs as well, because AI can get used in edge cloud, in some private networks environments."
Ziad Asghar notes that "We took our pedigree, which is amazing power efficiency, and applied it to AI processing. We've taken that from mobile, taken our learnings and applied it to the cloud side. If you look at what some of the major platforms are seeing today, there's a huge problem with power consumption, where the power is basically doubling every year on the cloud side. So what we did was we took our AI expertise, we developed a new architecture, and came up with a product that's specifically designed for inferencing. That's what's given us an ability to be able to show performance at a power level that nobody else can show."
Hugging Face's Clément Delangue believes that transfer learning will be the next big thing in AI. "I think, in five years, most of the machine learning models out there will be transfer learning models. And that's super exciting because they have new capabilities but also new opportunities for them to be smaller, more compressed, to be trained on smaller datasets, thanks to the unique characteristics and capabilities of transfer learning."
Qualcomm is one of the few, if not the only, semiconductor companies to offer AI engines both in SoCs for mobile edge processing and in cloud servers. With over a decade of experience in mobile and now data center AI, the company is in a unique position to build the future:
We believe that this comprehensive strategy, coupled with leadership performance and power efficiency, will position Qualcomm well for significant growth in AI.
Scientists Are Giving AI The Ability to Imagine Things It’s Never Seen Before – ScienceAlert
Artificial intelligence (AI) is proving very adept at certain tasks, like inventing human faces that don't actually exist or winning games of poker, but these networks still struggle when it comes to something humans do naturally: imagine.
Once human beings know what a cat is, we can easily imagine a cat of a different color, or a cat in a different pose, or a cat in different surroundings. For AI networks, that's much harder, even though they can recognize a cat when they see it (with enough training).
To try and unlock AI's capacity for imagination, researchers have come up with a new method for enabling artificial intelligence systems to work out what an object should look like, even if they've never actually seen one exactly like it before.
"We were inspired by human visual generalization capabilities to try to simulate human imagination in machines," says computer scientist Yunhao Gefrom the University of Southern California (USC).
"Humans can separate their learned knowledge by attributes for instance, shape, pose, position, color and then recombine them to imagine a new object. Our paper attempts to simulate this process using neural networks."
The key is extrapolation: being able to use a big bank of training data (like pictures of a car) to then go beyond what's seen into what's unseen. This is difficult for AI because of the way it's typically trained to spot specific patterns rather than broader attributes.
What the team has come up with here is called controllable disentangled representation learning, and it uses an approach similar to those used to create deepfakes: disentangling different parts of a sample (so, separating face movement and face identity, in the case of a deepfake video).
It means that if an AI sees a red car and a blue bike, it will then be able to 'imagine' a red bike for itself even if it has never seen one before. The researchers have put this together in a framework they're calling Group Supervised Learning.
Extrapolating new data from training data. (Itti et al., 2021)
One of the main innovations in this technique is processing samples in groups rather than individually, and building up semantic links between them along the way. The AI is then able to recognize similarities and differences in the samples it sees, using this knowledge to produce something completely new.
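In spirit, this attribute-swapping idea can be sketched with a toy example. Note that the dictionary encoding and function names below are illustrative inventions, not the paper's actual Group Supervised Learning framework, which learns latent attribute codes with neural networks:

```python
def disentangle(sample):
    """Split a sample into independent attribute factors."""
    return {"shape": sample["shape"], "color": sample["color"]}

def recombine(factors_a, factors_b):
    """'Imagine' a new object by swapping one factor between samples."""
    return {"shape": factors_b["shape"], "color": factors_a["color"]}

# Two observed objects: a red car and a blue bike.
seen = [
    {"shape": "car",  "color": "red"},
    {"shape": "bike", "color": "blue"},
]

red_car, blue_bike = (disentangle(s) for s in seen)
imagined = recombine(red_car, blue_bike)  # a red bike, never observed
print(imagined)  # {'shape': 'bike', 'color': 'red'}
```

The real system does this with learned continuous representations rather than symbolic labels, but the recombination step is the same in principle.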
"This new disentanglement approach, for the first time, truly unleashes a new sense of imagination in AI systems, bringing them closer to humans' understanding of the world," says USC computer scientist Laurent Itti.
These ideas aren't completely new, but here the researchers have taken the concepts further, making the approach more flexible and compatible with additional types of data. They've also made the framework open source, so other scientists can make use of it more easily.
In the future, the system developed here could guard against AI bias by removing more sensitive attributes from the equation, helping to make neural networks that aren't racist or sexist, for example.
The same approach could also be applied in the fields of medicine and self-driving cars, the researchers say, with AI able to 'imagine' new drugs, or visualize new road scenarios that it hasn't been specifically trained for in the past.
"Deep learning has already demonstrated unsurpassed performance and promise in many domains, but all too often this has happened through shallow mimicry, and without a deeper understanding of the separate attributes that make each object unique," says Itti.
The research has been presented at the 2021 International Conference on Learning Representations and can be read here.
Freshworks: 93% of IT managers have deployed AI, or plan to soon – VentureBeat
Nearly all IT managers (93%) are currently exploring or deploying some level of AI to streamline help desk systems, according to a new report from Freshworks. Half of IT managers said they have already implemented AI tools.
Nearly 70% of IT managers said AI is either critical or very important for upgrading and modernizing their service desk capabilities. Even so, respondents said there are certain prerequisites for AI-enabled solutions. While the most desired characteristic of AI tools is their ease of integration with existing IT infrastructure, a majority of respondents indicated that any AI solutions for IT service management (ITSM)/IT operations management (ITOM) should be intuitive, scalable, collaborative, and fast and easy to deploy.
The survey explored a key metric associated with today's demanding IT environment: the number of IT service inquiries received by the IT support desk each day. That number ranged from an average of 44 inquiries per day for small companies to 725 per day for large organizations.
ITSM chatbots were the clear leader in planned or actual AI deployments. The survey found that 25% of respondents expected AI-powered technologies to reduce IT staff workloads, and that 39% have already experienced this benefit.
Survey respondents also named their top considerations when implementing AI: speed of implementation (40%), integration with legacy systems (40%), overall cost of implementation (38%), and training the AI bot solution to return the most accurate response (39%).
Conducted across 14 countries and surveying more than 850 senior IT executives, the survey reveals that AI has hit the mainstream.
Read the full "Right sizing AI" report from Freshworks.
The Pentagon Is Bolstering Its AI Systems by Hacking Itself – WIRED
The Pentagon sees artificial intelligence as a way to outfox, outmaneuver, and dominate future adversaries. But the brittle nature of AI means that without due care, the technology could perhaps hand enemies a new way to attack.
The Joint Artificial Intelligence Center, created by the Pentagon to help the US military make use of AI, recently formed a unit to collect, vet, and distribute open source and industry machine learning models to groups across the Department of Defense. Part of that effort points to a key challenge with using AI for military ends. A machine learning red team, known as the Test and Evaluation Group, will probe pretrained models for weaknesses. Another cybersecurity team examines AI code and data for hidden vulnerabilities.
Machine learning, the technique behind modern AI, represents a fundamentally different, often more powerful, way to write computer code. Instead of writing rules for a machine to follow, machine learning generates its own rules by learning from data. The trouble is, this learning process, along with artifacts or errors in the training data, can cause AI models to behave in strange or unpredictable ways.
For some applications, machine learning software is "just a bajillion times better" than traditional software, says Gregory Allen, director of strategy and policy at the JAIC. But, he adds, machine learning also breaks in different ways than traditional software.
A machine learning algorithm trained to recognize certain vehicles in satellite images, for example, might also learn to associate the vehicle with a certain color of the surrounding scenery. An adversary could potentially fool the AI by changing the scenery around its vehicles. With access to the training data, the adversary also might be able to plant images, such as a particular symbol, that would confuse the algorithm.
Allen says the Pentagon follows strict rules concerning the reliability and security of the software it uses. He says the approach can be extended to AI and machine learning, and notes that the JAIC is working to update the DoD's standards around software to include issues around machine learning.
We don't know how to make systems that are perfectly resistant to adversarial attacks.
Tom Goldstein, associate professor, computer science, University of Maryland
AI is transforming the way some businesses operate because it can be an efficient and powerful way to automate tasks and processes. Instead of writing an algorithm to predict which products a customer will buy, for instance, a company can have an AI algorithm look at thousands or millions of previous sales and devise its own model for predicting who will buy what.
The US and other militaries see similar advantages, and are rushing to use AI to improve logistics, intelligence gathering, mission planning, and weapons technology. China's growing technological capability has stoked a sense of urgency within the Pentagon about adopting AI. Allen says the DoD is moving in a responsible way that prioritizes safety and reliability.
Researchers are developing ever-more creative ways to hack, subvert, or break AI systems in the wild. In October 2020, researchers in Israel showed how carefully tweaked images can confuse the AI algorithms that let a Tesla interpret the road ahead. This kind of adversarial attack involves tweaking the input to a machine learning algorithm to find small changes that cause big errors.
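The arithmetic behind such attacks can be sketched on a toy linear classifier. This is a hedged illustration of the fast-gradient recipe, not any specific attack on Tesla's systems; the weights and input here are random stand-ins for a trained model:

```python
import numpy as np

# Toy "model": a linear scorer, so its gradient w.r.t. the input is just w.
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # classifier weights (stand-in for a trained model)
x = rng.normal(size=16)   # an input example

# Fast-gradient-style perturbation: nudge every feature a small step in
# the direction that most decreases the model's score.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

# Each feature changes by at most epsilon, yet the score drops by
# epsilon * sum(|w|), which grows with the input dimension.
print(float(w @ x), float(w @ x_adv))
```

For deep networks the gradient must be computed by backpropagation rather than read off directly, but the principle is identical: many tiny, coordinated input changes add up to a large change in the output.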
Dawn Song, a professor at UC Berkeley who has conducted similar experiments on Tesla's sensors and other AI systems, says attacks on machine learning algorithms are already an issue in areas such as fraud detection. Some companies offer tools to test the AI systems used in finance. "Naturally there is an attacker who wants to evade the system," she says. "I think we'll see more of these types of issues."
A simple example of a machine learning attack involved Tay, Microsoft's scandalous chatbot gone wrong, which debuted in 2016. The bot used an algorithm that learned how to respond to new queries by examining previous conversations; Redditors quickly realized they could exploit this to get Tay to spew hateful messages.
Operators Look to Computer Vision and AI for Valuable Insights – QSR magazine
Restaurants are known to have the slimmest of margins, and now, with severe labor shortages plaguing the industry, leading quick-service restaurants are looking for new ways to streamline processes and improve productivity. As well, with digital transformation in full swing across most restaurant chains, artificial intelligence and automation are critical for scaling new processes and increasing operational efficiencies.
Leading restaurant brands are looking to computer vision and AI to provide real-time data and insights on their physical spaces to measure, analyze and act upon key data points that are used to optimize production processes and labor efficiencies.
Restaurants are looking for accurate and reliable data collected in real time with a 360-degree view of food preparation and production processes. Computer vision is an ideal solution for digitizing the on-location operation. Easy to deploy and scalable, edge-based smart devices (iPhones and iPads) and an app are all that is needed to deploy visual intelligence.
Once the devices are in place, automated AI models are used for collecting accurate and reliable data from the location, all without any human intervention. Data is processed in real-time, and results are used to compare across different tests and experiments while allowing for rapid ideation, all without the need for significant human resources. Here are just some of the examples of leading companies using computer vision to accelerate innovation:
Having a flexible system that adapts to different tests and accommodates different formats, layouts, and experiments is critical. Because computer vision can be easily deployed beyond the lab for field testing in real store environments and does not require a major retrofit of existing stores, it is a key solution for accelerating innovation.
Nomad Go is the leading visual intelligence provider for foodservice organizations. Traditional computer vision systems are costly and time-intensive to deploy and so Nomad Go has developed a solution that deploys quickly and cost-effectively, allowing restaurants of any size or scale to get real-time insights about their spaces, out of the box, instantly.
Nomad Go unlocks actionable data such as people counting, speed of service, and other production processes. The innovative Edge Computer AI Vision technology generates anonymized real-time data streams about physical spaces for analytics, alerts, and real-time management of production processes, employee productivity, and customer experiences.
News and information presented in this release has not been corroborated by QSR, Food News Media, or Journalistic, Inc.
AI adoption and analytics are rising, survey finds – VentureBeat
The need for enterprise digital transformation during the pandemic has bolstered investments in AI. AI startups raised a collective $73.4 billion in Q4 2020, a $15 billion year-over-year increase. And according to a new survey from ManageEngine, the IT division of Zoho, business deployment of AI is on the rise.
In the survey of more than 1,200 tech execs at organizations looking at the use of AI and analytics, 80% of respondents in the U.S. said that they'd accelerated their AI adoption over the past two years. Moreover, 20% said they'd boosted their usage of business analytics compared with the global average, a potential sign that trust in AI is growing.
"The COVID-19 pandemic forced businesses to adopt and adapt to new digital technologies overnight," ManageEngine VP Rajesh Ganesan, a coauthor of the survey, said in a press release. "These findings indicate that, as a result, organizations and their leaders have recognized the value of these technologies and have embraced the promises they are offering even amidst global business challenges."
ManageEngine's survey found that the dominant motivation behind business analytics technologies, at least in the U.S., is data-driven decision-making. Seventy-seven percent of respondents said that they're using business analytics for augmented decision-making, while 69% said they'd improved the use of available data with business analytics. Sixty-five percent said that business analytics helps them make decisions faster, reflecting increased confidence in AI.
Execs responding to the survey also emphasized the importance of customer experience in their AI adoption decisions, with 59% in the U.S. saying that they're leveraging AI to enhance customer services. Beyond customer experience, 61% of IT teams saw an uptick in applying business analytics, while marketing leaders saw a 44% surge, R&D teams saw 39%, software development and finance saw 38%, sales saw 37%, and operations saw 35%.
HR was among the groups that showed the lowest increase in business analytics usage, according to the survey. Research shows that companies are indeed struggling to apply data strategies to their HR operations. A Deloitte report found that more than 80% of HR professionals score themselves low in their ability to analyze, a troubling fact in a highly data-driven field.
Still, Ganesan said that the report's findings reinforce the notion that AI is a critical business enabler, particularly when combined with cloud solutions that can support remote workers. "Increased reliance on AI and business analytics is fueling data-driven decisions to operate the organization more efficiently and make customers happier," he continued.
AI Machine Learning Could be Latest Healthcare Innovation – The National Law Review
As we mentioned in the early days of the pandemic, COVID-19 has been accompanied by a rise in cyberattacks worldwide. At the same time, the global response to the pandemic has accelerated interest in the collection, analysis, and sharing of data (specifically, patient data) to address urgent issues, such as population management in hospitals, diagnosis and detection of medical conditions, and vaccine development, all through the use of artificial intelligence (AI) and machine learning. Typically, AI/ML churns through huge amounts of real-world data to deliver useful results. This collection and use of data, however, gives rise to legal and practical challenges. Numerous and increasingly strict regulations protect the personal information needed to feed AI solutions. The response has been to anonymize patient health data in time-consuming and expensive processes (HIPAA alone requires the removal of 18 types of identifying information). But anonymization is not foolproof and, after stripping data of personally identifiable information, the remaining data may be of limited utility. This is where synthetic data comes in.
A synthetic dataset comprises artificial information that can be used as a stand-in for real data. The artificial dataset can be derived in different ways. One approach starts with real patient data: algorithms process the real patient data and learn patterns, trends, and individual behaviors, then replicate those patterns, trends, and behaviors in a dataset of artificial patients, such that, if done properly, the synthetic dataset has virtually the same statistical properties as the real dataset. Importantly, the synthetic data cannot be linked back to the original patients, unlike some de-identified or anonymized data, which have been vulnerable to re-identification attacks. Other approaches involve the use of existing AI models to generate synthetic data from scratch, or a combination of existing models and real patient data.
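The simplest version of that first approach can be sketched with a Gaussian fit: estimate the statistical properties of a real dataset, then sample artificial records from the fitted distribution. The two-column "patient" data below is invented for illustration, and real synthetic-data platforms use far richer generative models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are real patients: columns = (age, systolic blood pressure),
# with a positive correlation between the two.
real = rng.multivariate_normal(mean=[55, 130],
                               cov=[[100, 40], [40, 225]], size=5000)

# Learn the dataset's statistics, then sample synthetic patients from the fit.
mu = real.mean(axis=0)
sigma = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, sigma, size=5000)

# The synthetic cohort mirrors the real one statistically, but no row
# corresponds to an actual person, so no record can be linked back.
print(real.mean(axis=0), synthetic.mean(axis=0))
```

The privacy caveat discussed below applies even to this sketch: if the generative model memorizes rather than summarizes, synthetic records can end up too close to real ones.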
While the concept of synthetic data is not new, it has recently been described as a promising solution for healthcare innovation, particularly at a time when secure sharing of patient data has been challenged by lab and office closures. Synthetic data in the healthcare space can be applied flexibly to fit different use cases, and it can be expanded to create more voluminous datasets.
Synthetic data's other reported benefits include the elimination of human bias and the democratization of AI (i.e., making AI technology and the underlying data more accessible). Critically, too, regulations governing personal information, such as HIPAA, the EU General Data Protection Regulation (GDPR), and the California Consumer Privacy Act (CCPA), may be read to permit the sharing and processing of original patient data (subject to certain obligations) such that the resulting synthetic datasets may carry less privacy risk.
Despite the potential benefits, the creation and use of synthetic data has its own challenges. First, there is the risk that AI-generated data is so similar to the underlying real data that real patient privacy is compromised. Additionally, the reliability of synthetic data is not yet firmly established. For example, it is reported that no drug developer has yet relied on synthetic data for a submission to the U.S. Food and Drug Administration, because it is not known whether that type of data will be accepted by the FDA. Perhaps most importantly, synthetic data is susceptible to adjustment, for good or ill. On the one hand, dataset adjustments can be used to correct for biases embedded in real datasets. On the other, adjustments can also undermine trust in healthcare and medical research.
As synthetic data platforms proliferate and companies increasingly engage those services to develop innovative solutions, care should be exercised to guard against the potential privacy and reliability risks.
2021 Proskauer Rose LLP. National Law Review, Volume XI, Number 201
Research shows AI is often biased. Here’s how to make algorithms work for all of us – World Economic Forum
Can you imagine a just and equitable world where everyone, regardless of age, gender or class, has access to excellent healthcare, nutritious food and other basic human needs? Are data-driven technologies such as artificial intelligence and data science capable of achieving this or will the bias that already drives real-world outcomes eventually overtake the digital world, too?
Bias represents injustice against a person or a group. A lot of existing human bias can be transferred to machines because technologies are not neutral; they are only as good, or bad, as the people who develop them. To explain how bias can lead to prejudices, injustices and inequality in corporate organizations around the world, I will highlight two real-world examples where bias in artificial intelligence was identified and the ethical risk mitigated.
In 2014, a team of software engineers at Amazon was building a program to review the resumes of job applicants. Unfortunately, in 2015 they realized that the system discriminated against women for technical roles, and Amazon recruiters did not use the software to evaluate candidates because of these discrimination and fairness issues. Meanwhile, in 2019, San Francisco legislators voted against the use of facial recognition, believing the technology was prone to errors when used on people with dark skin or women.
The National Institute of Standards and Technology (NIST) conducted research that evaluated facial-recognition algorithms from around 100 developers from 189 organizations, including Toshiba, Intel and Microsoft. Speaking about the alarming conclusions, one of the authors, Patrick Grother, says: "While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the algorithms we studied."
Ridding AI and machine learning of bias involves taking their many uses into consideration. Image: British Medical Journal
Sources of fairness and non-discrimination risk in the use of artificial intelligence include: implicit bias, sampling bias, temporal bias, over-fitting to training data, and edge cases and outliers.
Implicit bias is discrimination or prejudice against a person or group that is unconscious to the person with the bias. It is dangerous because the person is unaware of the bias whether it be on grounds of gender, race, disability, sexuality or class.
Sampling bias is a statistical problem in which random data selected from the population do not reflect the distribution of the population. The sample data may be skewed towards some subset of the group.
Temporal bias is based on our perception of time. We can build a machine-learning model that works well right now but fails in the future because we didn't factor possible future changes into the model.
Over-fitting happens when an AI model can accurately predict values from the training dataset but cannot predict new data accurately. The model adheres too closely to the training dataset and does not generalize to a larger population.
Edge cases and outliers are data outside the boundaries of the training dataset. Outliers are data points outside the normal distribution of the data. Errors and noise are classified as edge cases: errors are missing or incorrect values in the dataset; noise is data that negatively impacts the machine-learning process.
Analytical techniques require meticulous assessment of the training data for sampling bias and unequal representations of groups in the training data. You can investigate the source and characteristics of the dataset. Check the data for balance. For instance, is one gender or race represented more than the other? Is the size of the data large enough for training? Are some groups ignored?
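A balance check of this kind needs nothing more than counting group shares in the training data. A minimal sketch, where the tiny records and the 40% parity threshold are invented for illustration:

```python
from collections import Counter

# A toy training set of labeled records.
training_set = [
    {"gender": "female", "label": "approved"},
    {"gender": "male",   "label": "approved"},
    {"gender": "male",   "label": "rejected"},
    {"gender": "male",   "label": "approved"},
]

# Count how each group is represented in the sample.
counts = Counter(row["gender"] for row in training_set)
total = sum(counts.values())
shares = {group: n / total for group, n in counts.items()}
print(shares)  # {'female': 0.25, 'male': 0.75} -- a skewed sample

# Flag any group far below parity so it can be re-sampled or re-weighted
# before training.
flagged = [group for group, share in shares.items() if share < 0.4]
print(flagged)  # ['female']
```

The same counting can be repeated per label ("is one group disproportionately rejected?"), which is often where sampling bias actually shows up.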
A recent study on mortgage loans revealed that the predictive models used for granting or rejecting loans are not accurate for minorities. Scott Nelson, a researcher at the University of Chicago, and Laura Blattner, a Stanford University economist, found that the variance in mortgage approval between majority and minority groups arises because low-income and minority groups have less data documented in their credit histories. Without strong analytical study of the data, the cause of the bias will remain undetected and unknown.
What if the environment in which you trained the model is not representative of the wider population? Expose your model to varying environments and contexts for new insights. You want to be sure that your model can generalize to a wider set of scenarios.
A review of a healthcare-based risk-prediction algorithm used on about 200 million American citizens revealed racial bias. The algorithm predicts which patients should be given extra medical care, and it was found to favour white patients over black patients. The problem with the algorithm's development is that it wasn't properly tested across all major races before deployment.
Inclusive design emphasizes inclusion in the design process. The AI product should be designed with consideration for diverse groups such as gender, race, class, and culture. Foreseeability is about predicting the impact the AI system will have right now and over time.
Recent research published by the Journal of the American Medical Association (JAMA) reviewed more than 70 academic publications comparing the diagnostic prowess of doctors with their digital doppelgangers across several areas of clinical medicine. Much of the data used in training the algorithms came from only three states: Massachusetts, California and New York. Will the algorithms generalize well to a wider population?
Many researchers are worried about skin-cancer detection algorithms. Most do not perform well on darker skin because they were trained primarily on images of light-skinned individuals. The developers of these skin-cancer detection models didn't apply the principles of inclusive design.
Testing is an important part of building a new product or service. User testing in this case refers to getting representatives from the diverse groups that will be using your AI product to test it before it is released.
This is a method of performing strategic analysis of external environments. The acronym stands for social (societal attitudes, culture and demographics), technological, economic (interest, growth and inflation rates), environmental, political and values. Performing a STEEPV analysis will help you detect fairness and non-discrimination risks in practice.
The COVID-19 pandemic and recent social and political unrest have created a profound sense of urgency for companies to actively work to tackle inequity.
The Forum's work on Diversity, Equality, Inclusion and Social Justice is driven by the New Economy and Society Platform, which is focused on building prosperous, inclusive and just economies and societies. In addition to its work on economic growth, revival and transformation, work, wages and job creation, and education, skills and learning, the Platform takes an integrated and holistic approach to diversity, equity, inclusion and social justice, and aims to tackle exclusion, bias and discrimination related to race, gender, ability, sexual orientation and all other forms of human diversity.
The Platform produces data, standards and insights, such as the Global Gender Gap Report and the Diversity, Equity and Inclusion 4.0 Toolkit, and drives or supports action initiatives, such as Partnering for Racial Justice in Business, The Valuable 500 Closing the Disability Inclusion Gap, Hardwiring Gender Parity in the Future of Work, Closing the Gender Gap Country Accelerators, the Partnership for Global LGBTI Equality, the Community of Chief Diversity and Inclusion Officers and the Global Future Council on Equity and Social Justice.
It is very easy for the existing bias in our society to be transferred to algorithms. We see discrimination against race and gender easily perpetuated in machine learning. There is an urgent need for corporate organizations to be more proactive in ensuring fairness and non-discrimination as they leverage AI to improve productivity and performance. One possible solution is to have an AI ethicist on your development team to detect and mitigate ethical risks early in your project, before investing lots of time and money.
The views expressed in this article are those of the author alone and not the World Economic Forum.
AI analysis of 800 companies finds rampant greenwashing – Fast Company
Posted: at 12:25 am
In 2016, Tide launched Purclean, a new brand of detergent that claimed it was 100% plant-based. However, the National Advertising Division of BBB National Programs analyzed the claim four years later and found that Purclean was only 75% plant-based. While not great, 25% non-plant composition doesn't sound too bad until you learn that some of the materials are petroleum-based. This was completely counter to Tide's marketing message, and extremely misleading for consumers.
And it's a classic example of greenwashing, which by definition refers to misleading communication about a company's environmental practices and impact so as to present an environmentally responsible public image. In a time when marketers have roughly three seconds to grab someone's attention, it's a lot easier to spin the truth, especially when it comes to lauding sustainability and eco-friendly endeavors. While there are companies committed to making a real difference for people and the planet (like Patagonia or Cree), there are many enterprises that espouse being green more in marketing than in actual practice. But how do we differentiate between greenwashing spin and true green initiatives when it is incredibly difficult to hold companies accountable for their actions? Thankfully, we have a friend in artificial intelligence.
Meet ClimateBert, an AI tool that deconstructs corporate statements, annual reports, claims, and other materials to assess climate-related disclosures and measure actual performance. It was created by the Task Force on Climate-Related Financial Disclosures (TCFD), which provides a framework for public organizations to more effectively disclose climate-related performance. Because extracting salient information from companies on their climate disclosures is complex and time-consuming, TCFD turned to natural language processing and existing deep neural networks for help. The sheer volume of data, often using subtle words, presents a major challenge to analyze in a timely fashion. Thanks to AI tools like ClimateBert, we can now shrink weeks of analysis into just days.
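ClimateBert itself is a transformer fine-tuned on climate disclosures; as a rough, self-contained sketch of the kind of surface signal such NLP automates, one might contrast vague green buzzwords with quantified claims. The keyword list, regex, and scoring here are invented purely for illustration and are not part of ClimateBert.

```python
import re

# Toy stand-in for what a trained disclosure classifier learns:
# score vague green language against concrete, quantified commitments.
VAGUE = {"sustainable", "eco-friendly", "green", "responsible", "committed"}
CONCRETE = re.compile(r"\b\d[\d,]*(\.\d+)?\s*(%|tonnes?|tons?|mwh|kwh)", re.IGNORECASE)

def vagueness_score(text):
    """Ratio of vague buzzwords to quantified claims (higher = more suspect)."""
    words = re.findall(r"[a-z'-]+", text.lower())
    vague_hits = sum(w in VAGUE for w in words)
    concrete_hits = len(CONCRETE.findall(text))
    return vague_hits / (concrete_hits + 1)

print(vagueness_score("We are committed to a sustainable, green future."))  # 3.0
print(vagueness_score("We cut emissions 12% (4,100 tons CO2) in 2021."))    # 0.0
```

A learned model replaces these hand-picked keywords with patterns induced from labeled examples, which is what lets it generalize across the subtle wording the article mentions.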
What did ClimateBert discover? Regrettably, after assessing more than 800 companies, ClimateBert has determined that corporations are talking a good game, but actual performance is lacking. Why? In TCFD's assessment, there are three major contributing factors. First, greenwashing has largely escaped scrutiny so far, so there's no incentive for companies to change. Second, the Paris accords have, ironically, let companies be more selective in what they want to disclose to limit brand risk. Third, with the exception of France, corporate climate reporting is voluntary, giving companies a lot of latitude over what they would like to share. That's why TCFD has been pushing to make reporting standardized and mandatory.
Other organizations are also tapping into the power of AI to discover greenwashing. For example, Ping An, an insurance and finance company located in China, is leveraging its Digital Economic Research Center to use AI to assess corporate climate disclosure and detect greenwashing. Using natural language processing algorithms, the Digital Economic Research Center developed AI-driven indicators to determine climate risk exposure that was more granular than traditional environmental, social, and corporate governance (ESG) metrics. In effect, this AI found a more efficient way to determine if an enterprise was truly being eco-friendly or just greenwashing. Moreover, the AI can dynamically assess, in real time, the actual sustainability practices of a company as it keeps sharing more information.
While these examples sound promising in holding companies accountable to their environmental promises, challenges remain. Our first problem is meaningful, robust data, which provides the fuel for any AI system to learn what greenwashing looks like. We need good data to train our AI systems as well as to give the machine something to analyze and review. While corporate social responsibility goals have been around for a couple of decades, collecting data on performance has lagged, in part because of nebulous or subjective metrics. However, thanks to other emerging technology like IoT sensors (to collect ESG data) and blockchain (to track transactions), we have the infrastructure to collect more data, particularly for machine consumption. By measuring real-time energy usage, transportation routes, manufacturing waste, and so forth, we have more quantifiable ways to track corporations' environmental performance without relying purely on what they say.
The second problem is applying macro benefits to micro solutions. It is not sufficient or accurate to evaluate corporations' environmental progress on popular initiatives like tree planting. Companies like Microsoft, Alibaba, American Express, and others are all engaged in programs to plant millions of trees, which sounds like a great idea until you start to consider how much impact it really has. The average mature tree can offset about 48 pounds of carbon per year, but most companies don't factor in how much time it takes for a tree to grow. Moreover, the species of a tree also dictates how much carbon sequestration occurs. A mature silver maple can offset around 500 pounds of carbon per year, while palm trees average around 15 pounds per year. Companies need to understand how many trees, which type of trees, the location of the trees, and so forth to accurately count carbon sequestration. This suddenly becomes a more arduous and taxing process that costs enterprises more money, resources, and time, which tends to de-incentivize them from accurately measuring the impact of their so-called eco-friendly initiatives.
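The species-and-maturity arithmetic above is easy to make explicit. This sketch uses the per-year figures quoted in the article (48 lb for an average mature tree, roughly 500 lb for a silver maple, about 15 lb for a palm); the years-to-maturity values are invented placeholders, since the article only notes that maturity time is usually ignored.

```python
# Per-tree, per-year offsets (pounds of carbon) from the article;
# years_to_maturity values are illustrative assumptions only.
SPECIES = {
    "average":      {"lb_per_year": 48,  "years_to_maturity": 10},
    "silver_maple": {"lb_per_year": 500, "years_to_maturity": 25},
    "palm":         {"lb_per_year": 15,  "years_to_maturity": 5},
}

def offset_over_horizon(species, count, horizon_years):
    """Pounds of carbon offset by `count` trees over `horizon_years`,
    crediting nothing until the trees reach maturity."""
    s = SPECIES[species]
    productive_years = max(0, horizon_years - s["years_to_maturity"])
    return count * s["lb_per_year"] * productive_years

# 1,000 palms over 20 years offset far less than the naive
# count * rate * years arithmetic would suggest:
print(offset_over_horizon("palm", 1000, 20))  # 225000 lb (15 productive years)
print(1000 * 15 * 20)                         # 300000 lb naive estimate
```

Even this toy version shows why honest accounting is more work: the answer depends on species, count, and planting date, not just a headline number of trees.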
Thankfully, AI technology is ideally suited to handling these tasks. With tools like Pachama and ML CO2 Impact, we have AI to assist organizations in accurately measuring and communicating their carbon impacts and offsets at a more granular level. In addition, organizations like Planet Home are using machine learning to develop personalized calculators to measure individual or organizational sustainable behavior to simplify data collection, measurement, and reporting. Moreover, they are helping people identify small steps that they are willing to take to be more sustainable, attempting to go beyond reactive measurement to proactive behavior.
This is the real value we can tap into through AI. Through greenwashing detection, AI helps us build truth and trust in corporate communication. As we shift to a fully integrated, sustainable corporate culture, AI can help organizations find more environmentally friendly opportunities to improve their carbon footprint. Ultimately, using AI to hold companies accountable for their environmental impact and to help them find ways to actually be green will lead to a more sustainable world for everyone.
Neil Sahota is the author of Own the A.I. Revolution: Unlock Your Artificial Intelligence Strategy to Disrupt Your Competition and works with the United Nations on the AI for Good Global Summit initiative. Sahota is also an IBM Master Inventor, former leader of the IBM Watson Group, and professor at the University of California, Irvine.
Colonoscopy AI passed meaningful milestones but has miles to go before it sweeps – AI in Healthcare
In one of the studies, the better adenoma detection rate reflected the AI's detection of very small adenomas.
However, AI CADe was no sharper than the operator's eye at detecting polyps of more than 10 mm. In addition, AI CADe flagged more low-risk hyperplastic polyps and did no better at flagging advanced adenomas or sessile serrated lesions (flat polyps with elevated risk of becoming cancerous).
When it came to using AI to help characterize colorectal polyps (CADx), the results were more uniformly favorable toward the emerging technology.
For example, one study saw the addition of CADx as a support tool yield significant improvement in trainee physicians' diagnostic accuracy.
A separate study found augmenting physician expertise with CADx lifted endoscopists' diagnostic accuracy from 83% to 89%.
"The greatest improvement was noted in novice endoscopists (73.8% to 85.6%), almost reaching the accuracy of experts (89.0%)," Parsa and Byrne report.
These and other demonstrations of AI-aided colonoscopy represent "promising steps toward standardization and improvement of colonoscopy quality, and implementation of resect-and-discard and diagnose-and-leave strategies," they comment. Yet "issues such as real-world applications and regulatory approval need to be addressed before artificial intelligence models can be successfully implemented in clinical practice."
The study is posted in full for free.
Previous recent coverage of colonoscopy AI:
AI impressive as a second set of eyes in colonoscopy
FDA approves first-of-its-kind colonoscopy AI
3 reasons humans are irreplaceable by colonoscopy AI