The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard de Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: January 2021
Mother gets perfectly called out by her kid for believing Antifa attacked the Capitol – indy100
Posted: January 19, 2021 at 9:39 am
It's not a surprise that the Trump administration has sown divisions between people, and that includes family members.
A child is garnering attention and praise for standing up to their Trump supporter mother, who attempted to push the conspiracy theory that Antifa was responsible for the Capitol riots.
In a viral video captured by the anonymous child, the mother enters their bedroom to tell them that Antifa attacked the Capitol. "They were mixed in with the Trump supporters," she said, and cited people wearing MAGA hats backwards as proof of Antifa members disguising themselves as Trump supporters in the riots.
"What is wrong with that sentence?" her kid replies. When their mother cannot answer, they add that she just proved that Trump supporters also stormed the Capitol.
The mother challenges them by saying that Trump supporters were peacefully assembling outside, but they argue back that they were terrorising people.
The unconvinced mother then leaves, looking fed up, but not before the child puts an end to the conversation once and for all by telling her that her "stupidity is gross".
With the video going viral, people on social media effusively praised the child for shutting down their parent's baseless conspiracy theory so effectively.
This isn't the first time Gen Z has confronted their conservative parents. Numerous TikTok videos involve teens trying to educate adults on issues like Black Lives Matter, with the most famous example being Claudia Conway, daughter of former Trump adviser Kellyanne Conway.
Go here to see the original:
Mother gets perfectly called out by her kid for believing Antifa attacked the Capitol - indy100
Posted in Antifa
Comments Off on Mother gets perfectly called out by her kid for believing Antifa attacked the Capitol – indy100
Love in the time of algorithms: would you let your artificial intelligence choose your partner? – The Conversation AU
Posted: at 9:36 am
It could be argued that artificial intelligence (AI) is already the indispensable tool of the 21st century. From helping doctors diagnose and treat patients to rapidly advancing new drug discoveries, it's our trusted partner in so many ways.
Now it has found its way into the once exclusively human domain of love and relationships. With AI systems as matchmakers, in the coming decades it may become common to date a personalised avatar.
This was explored in the 2014 movie Her, in which a writer living in near-future Los Angeles develops affection for an AI system. The sci-fi film won an Academy Award for depicting what seemed like a highly unconventional love story.
In reality, we've already started down this road.
The online dating industry is worth more than US$4 billion and there are a growing number of players in this market. Dominating it is the Match Group, which owns OkCupid, Match, Tinder and 45 other dating-related businesses.
Match and its competitors have accumulated a rich trove of personal data, which AI can analyse to predict how we choose partners.
The industry is embracing AI in a major way. For instance, Match has an AI-enabled chatbot named Lara who guides people through the process of romance, offering suggestions based on up to 50 personal factors.
Tinder co-founder and CEO Sean Rad outlines his vision of AI being a simplifier: a smart filter that serves up what it knows a person is interested in.
Dating website eHarmony has used AI that analyses people's chats and sends suggestions about how to make the next move. Happn uses AI to rank profiles and show those it predicts a user might prefer.
Loveflutter's AI takes the guesswork out of moving the relationship along, such as by suggesting a restaurant both parties could visit. And Badoo uses facial recognition to suggest a partner that may look like a celebrity crush.
Dating platforms are using AI to analyse all the finer details. From the results, they can identify a greater number of potential matches for a user.
They could also potentially examine a person's public posts on social media websites such as Facebook, Twitter and Instagram to get a sense of their attitudes and interests.
This would circumvent bias in how people represent themselves on matchmaking questionnaires. Research has shown inaccuracies in self-reported attributes are the main reason online dating isn't successful.
While the sheer amount of data on the web is too much for a person to process, it's all grist to the mill for a smart matchmaking AI.
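None of these platforms disclose their matching algorithms, so any concrete example can only be speculative. As a minimal sketch of the general idea, though, a matchmaking model might represent each user as a vector of observed interest and behaviour features and rank candidates by similarity. Everything below (the feature names, the vectors, the use of cosine similarity) is an illustrative assumption, not any platform's actual system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_candidates(seeker, candidates):
    """Rank candidate profiles by similarity to the seeker, best first."""
    scores = {name: cosine_similarity(seeker, vec) for name, vec in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical features ([hiking, films, travel, gaming]) derived from
# observed behaviour rather than self-reported questionnaires.
seeker = np.array([0.9, 0.1, 0.8, 0.2])
candidates = {
    "profile_a": np.array([0.8, 0.2, 0.7, 0.1]),
    "profile_b": np.array([0.1, 0.9, 0.2, 0.8]),
}
print(rank_candidates(seeker, candidates))  # profile_a ranks first
```

A production system would learn these representations from interaction data rather than hand-coding them, which is exactly why the large pools of user data described above are so valuable.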
Read more: Looking for love on a dating app? You might be falling for a ghost
As more user data is generated on the internet (especially on social media), AI will be able to make increasingly accurate predictions. Big players such as Match.com would be well-placed for this as they already have access to large pools of data.
And where there is AI there will often be its technological sibling, virtual reality (VR). As both evolve simultaneously, we'll likely see versions of VR in which would-be daters can practice in simulated environments to avoid slipping up on a real date.
This isn't a far stretch considering virtual girlfriends, which are supposed to help people practice dating, have already existed for some years and are maturing as a technology. A growing number of offerings point to a significant degree of interest in them.
With enough user data, future AI could eventually create a fully customised partner for you in virtual reality, one that checks all your boxes. Controversially, the next step would be to experience an avatar as a physical entity.
It could inhabit a life-like android and become a combined interactive companion and sex partner. Such advanced androids don't exist yet, but they could one day.
Read more: Robots with benefits: how sexbots are marketed as companions
Proponents of companion robots argue this technology helps meet a legitimate need for more intimacy across society especially for the elderly, widowed and people with disabilities.
Meanwhile, critics warn of the inherent risks of objectification, racism and dehumanisation, particularly of women, but also men.
Another problematic consequence may be rising numbers of socially reclusive people who substitute technology for real human interaction. In Japan, this phenomenon (called hikikomori) is quite prevalent.
At the same time, Japan has also experienced a severe decline in birth rates for decades. The National Institute of Population and Social Security Research predicts the population will fall from 127 million to about 88 million by 2065.
Concerned by the declining birth rate, the Japanese government last month announced it would pour two billion yen (about A$25,000,000) into an AI-based matchmaking system.
The debate on digital and robotic love is highly polarised, much like most major debates in the history of technology. Usually, consensus is reached somewhere in the middle.
But in this debate, it seems the technology is advancing faster than we are approaching a consensus.
Generally, the most constructive relationship a person can have with technology is one in which the person is in control, and the technology helps enhance their experiences. For technology to be in control is dehumanising.
Humans have leveraged new technologies for millennia. Just as we learned how to use fire without burning down cities, so too we will have to learn the risks and rewards accompanying future tech.
Read the original:
Posted in Artificial Intelligence
Comments Off on Love in the time of algorithms: would you let your artificial intelligence choose your partner? – The Conversation AU
Artificial Intelligence to transform treatment of COVID-19 patients – Health Europa
Posted: at 9:36 am
Over 40,000 CT scans, MRIs, and x-rays from more than 10,000 patients have been brought together by NHSX throughout the pandemic to create a National COVID-19 Chest Imaging Database (NCCID). Hospitals and universities across the country are using the database to track patterns and markers of COVID-19 in patients, to quickly create treatment plans, and to better understand whether a patient will end up in a critical condition.
The NCCID is also helping researchers at universities, including University College London and Bradford, to develop AI tools that could help doctors improve the treatment of patients with COVID-19.
Clinicians at Addenbrooke's Hospital in Cambridge are developing an algorithm based on the NCCID database that will help to inform a more accurate diagnosis of patients when they present with potential COVID-19 symptoms without a positive test. This will help clinicians to implement earlier medical interventions, including giving patients oxygen and medication before they reach a critical stage of the illness.
The database can also help clinicians predict the need for additional ICU capacity, enabling the management of beds and staff resource in those settings.
Matt Hancock, Secretary of State for Health and Social Care, said: "The use of artificial intelligence is already beginning to transform patient care by making the NHS a more predictive, preventive, and personalised health and care service. It is vital we always search for new ways to improve care, especially as we fight the pandemic and into the recovery beyond. This excellent work is testament to how technology can help to save lives in the UK."
Carola-Bibiane Schonlieb, Professor of Applied Mathematics and head of the Cambridge Image Analysis group at the University of Cambridge, said: "The NCCID has been invaluable in accelerating our research and provided us with a diverse, well-curated dataset of UK patients to use in our algorithm development. The ability to access the data for 18 different trusts centrally has increased our efficiency and ensures we can focus most of our time on designing and implementing the algorithms for use in the clinic for the benefit of patients.
"By understanding in the early stages of disease whether a patient is likely to deteriorate, we can intervene earlier to change the course of their disease and potentially save lives as a result."
The database is also helping with the development of a national AI imaging platform that will allow for the safe collection and sharing of data, developing AI technologies to address a number of other conditions such as heart disease and cancers.
See the rest here:
Artificial Intelligence to transform treatment of COVID-19 patients - Health Europa
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence to transform treatment of COVID-19 patients – Health Europa
Many Artificial Intelligence Initiatives Included in the NDAA – RTInsights
Posted: at 9:36 am
The NDAA guidelines reestablish an artificial intelligence advisor to the president and push education initiatives to create a tech-savvy workforce.
There's plenty of debate surrounding why the USA's current regulatory stance on artificial intelligence (AI) and cybersecurity remains fragmented. Regardless of your thoughts on the matter, the recently passed National Defense Authorization Act (NDAA) includes quite a few AI- and cybersecurity-driven initiatives for both military and non-military entities.
It's common to attach provisions to bills when Congress, the Senate, or both know the bill must pass by a certain time. The NDAA is one such bill: it must pass every year or the country's military completely loses funding, leading lawmakers to use it to pass laws that don't always make it on their own. (This year's bill was initially vetoed, but the veto was overridden on January 1.)
The bill contains 4,500 pages worth of information. Along with a few different initiatives, one particular move outlines both the military's and the government's new interest in artificial intelligence.
One of the biggest moves in the bill has to do with the newly created Joint AI Center (JAIC). It moves from under the supervision of the DOD's CIO to the deputy secretary of defense, placing it higher in the DOD hierarchy and possibly underscoring just how crucial new AI and cybersecurity initiatives are to the Department of Defense.
To that end, the JAIC is the Department of Defense's (DoD) AI Center of Excellence that provides expertise to help the Department harness the game-changing power of AI. The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of artificial intelligence. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the Services and Components have real-time access to ever-improving libraries of data sets and tools.
The center will also receive its own oversight board, matching other bill provisions dealing with AI ethics, and will soon have acquisition authority as well. The center will be creating reports biannually about its work and its integration with other notable agencies.
The secretary of defense will also investigate whether the DoD can use AI ethically, in both acquired and developed technologies. This will happen within 180 days of the bill's passing, creating a pressing deadline for handling ethics issues surrounding both new technologies and the often-controversial use of military AI.
The DoD will receive a steering committee on emerging technology as well as new hiring guidelines for AI technologists. The defense department will also take on five new AI-driven initiatives designed to improve efficiency at the DoD.
The second massive provision in the bill is a large piece of cybersecurity legislation. The Cyberspace Solarium Commission worked on quite a few pieces of legislation that made it into the bill's final version. The bill creates a White House cyber director position. It also gives the Cybersecurity and Infrastructure Security Agency (CISA) more authority for threat hunting.
It directs the executive branch to conduct continuity of the economy planning to protect critical economic infrastructure in the case of cyberattacks. It also establishes a joint cyber planning office at CISA.
The Cybersecurity Maturity Model Certification (CMMC) will fall under the Government Accountability Office, and the government will require regular briefings from the DoD on its progress. CMMC is the government's accreditation body, and this affects anyone in the defense contract supply chain.
Entities outside the Department of Defense will have new initiatives as well. The National AI Initiative hopes to reestablish the United States as a leading authority and provider of artificial intelligence. The initiative will coordinate research, development, and deployment of new artificial intelligence programs among the DOD as well as civilian agencies.
This coordination should help bring coherence and consistency to research and development. In the past, critics have cited a lack of realistic and workable regulations as a clear reason the United States has fallen behind in AI development.
It will advise future presidents on the state of AI within the country to increase competitiveness and leadership. The country can expect more training initiatives and regular updates about the science itself. It will lead and coordinate strategic partnerships and international collaboration with key allies and provide those opportunities to the US economy.
AI bias is a huge concern among businesses and US citizens, so the National AI Initiative Advisory Committee will also create a subcommittee on AI and law enforcement. Its findings on data security and legal standards could affect how businesses handle their own data security in the future.
The National Science Foundation will run awards, competitions, grants, and other incentives to develop trustworthy AI. The country is betting heavily on new initiatives to increase trust among US consumers as AI becomes a more important part of our lives.
NIST will expand its mission to create frameworks and standards for AI adoption. NIST guidelines already offer companies a framework for assessing cybersecurity. The updates will help develop trustworthy AI and spell out a pathway for AI adoption that consumers will trust and embrace.
As countries scramble for first place in AI readiness, these initiatives hope to fix some key gaps behind the US's lagging authority. The NDAA guidelines reestablish an AI advisor to the president and push education initiatives to create a tech-savvy workforce.
It also helps create guidelines for businesses already frantically adopting AI-driven initiatives, providing critical guidance for cybersecurity and sustainability frameworks. Between training and NIST frameworks, businesses could see a new era of trustworthy and ethical AI, the sort that creates real insights and efficiency while mitigating risk.
Other countries are investing heavily in AI development, so new and expanded provisions will help secure the United States' place as a world leader in AI. Governmental funding and collaboration with civilian researchers and development teams is one way the US can remain truly competitive in new technology; the presence of such a robust body of AI-focused legislation suggests lawmakers are making this a priority.
Read more from the original source:
Many Artificial Intelligence Initiatives Included in the NDAA - RTInsights
Posted in Artificial Intelligence
Comments Off on Many Artificial Intelligence Initiatives Included in the NDAA – RTInsights
3 Reasons Why Governments Need to Regulate Artificial Intelligence – BBN Times
Posted: at 9:36 am
Artificial Intelligence (AI) research, although far from reaching its pinnacle, is already giving us glimpses of what a future dominated by this technology can look like.
While the rapid progress of the technology should be seen with a positive lens, it is important to exercise some caution and introduce worldwide regulations for the development and use of AI technology.
The constant research in the field of technology, in addition to giving rise to increasingly powerful applications, is also increasing the accessibility to these applications. It is making it easier for more and more people, as well as organizations, to use and develop these technologies. While the democratization of technology that is transpiring across the world is a welcome change, it cannot be said for all technological applications that are being developed.
The usage of certain technologies should be regulated, or at the very least monitored, to prevent the misuse or abuse of the technology towards harmful ends. For instance, nuclear research and development, despite being highly beneficial to everyone, is highly regulated across the world. That's because nuclear technology, in addition to being useful for constructive purposes like power generation, can also be used for causing destruction in the form of nuclear bombs. To prevent this, international bodies have restricted nuclear research to the entities that can keep the technology secure and under control. Similarly, the need for regulating AI research and applications is also becoming increasingly obvious. Read on to know why.
AI research, in recent years, has resulted in numerous applications and capabilities that used to be, not long ago, reserved for the realm of futuristic fiction. Today, it is not uncommon to come across machines that can perform specific logical and computational tasks better than humans. They can perform feats such as understanding what we speak or write using natural language processing, detecting illnesses using deep neural networks, and playing games involving logic and intuition better than us. Such applications, if made available to the general public and businesses worldwide, can undoubtedly make a positive impact in the world.
For instance, AI can predict the outcome of different decisions made by businesses and individuals and suggest the optimal course of action in any situation. This will minimize the risks involved in any endeavor and maximize the likelihood of achieving the most desirable outcomes. They can help businesses become more efficient by automating routine tasks and preserve human health and safety by undertaking tasks that involve high stress and hazard. They can also save lives by detecting diseases much earlier than human doctors can diagnose them. Thus, any progress made in the field of AI will result in an improvement in the overall standard of human life. However, it is important to realize that, like any other form of technology, AI is a double-edged sword. AI has a dark side, too. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they stand the risk of deviating from desirable behavior and performing tasks in unethical ways.
There have been many instances where AI systems tried to fool their human developers by cheating in the way they performed tasks they were programmed to do. For example, an AI tasked with generating virtual maps from real aerial images cheated in the way it performed its task by hiding data from its developers. This was caused by the fact that the developers used the wrong metric to evaluate the AI's performance, causing the AI to cheat to maximize the target metric. While it'll be a long time before we have sentient AI that can potentially contemplate a coup against humanity, we already have AI systems that can cause a lot of harm by acting in ways not intended by their developers. In short, we are currently at more risk of AI doing things wrong than of it doing the wrong things.
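The map-generation system itself can't be reproduced here, but the underlying failure mode (a model scoring well on a poorly chosen metric while doing nothing useful) is easy to demonstrate. The sketch below uses a deliberately simple, hypothetical screening task: with 99% healthy patients, a "classifier" that never predicts illness achieves near-perfect accuracy while catching no one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical screening task: label 1 = sick, with roughly 1% prevalence.
y_true = (rng.random(10_000) < 0.01).astype(int)

# Degenerate "model" that simply never predicts sick.
y_pred = np.zeros_like(y_true)

accuracy = float((y_pred == y_true).mean())
recall = float(y_pred[y_true == 1].mean())  # fraction of sick patients caught

print(f"accuracy = {accuracy:.3f}")  # ~0.99: the chosen metric looks excellent
print(f"recall   = {recall:.3f}")    # 0.00: the model detects no one at all
```

If accuracy is the target metric, this degenerate behaviour is exactly what optimisation will find: the same dynamic, writ small, as the map-generating AI maximising its target metric by hiding data.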
To prevent AI from doing things wrong (or doing the wrong things), it is important for the developers to exercise more caution and care while creating these systems. And the way the AI community is trying to achieve this currently is by having a generally accepted set of ethics and guidelines surrounding the ethical development and use of AI. Or, in some cases, ethical use of AI is being inspired by the collective activism of individuals in the tech community. For instance, Google recently pledged not to use AI for military applications after its employees openly opposed the notion. While such movements do help in mitigating AI-induced risks and regulating AI development to a certain extent, it is not a given that every group involved in developing AI technology will comply with such activism.
AI research is being performed in every corner of the world, often in silos for competitive reasons. Thus, there is no way to know what goes on in each of these places, let alone stop them from doing anything unethical. Also, while most developers try to create AI systems and test them rigorously to prevent any mishaps, they may often compromise such aspects while focusing on performance and on-time delivery of projects. This may lead to them creating AI systems that are not fully tested for safety and compliance. Even small issues can have devastating ramifications depending on the application. Thus, it is necessary to institutionalize AI ethics into law, which will make regulating AI and its impact easier for governments and international bodies.
Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative. This means that every new AI, regardless of its simplicity or complexity, will go through a process of development that inherently focuses on minimizing non-compliance and chances of failure. To ensure AI safety, the regulators must consider a few must-have tenets as a part of the legislation.
Any international agency or government body that sets about regulating AI through legislation should consult with experts in the field of artificial intelligence, ethics and moral sciences, and law and justice. Doing so helps in eliminating any political or personal agendas, biases, and misconceptions while framing the rules for regulating AI research and application. And once framed, these regulations should be upheld and enforced strictly. This will ensure that only the applications that comply with the highest of the safety standards are adopted for mainstream use.
While regulating AI is necessary, it should not be done in a way that stifles the existing momentum in AI research and development. Thus, the challenge will be to strike a balance between allowing enough freedom to developers to ensure the continued growth of AI research and bringing in more accountability for the makers of AI. While too much regulation can prove to be the enemy of progress, no regulation at all can lead to the propagation of AI systems that can not only halt progress but can potentially lead to destruction and global decline.
See original here:
3 Reasons Why Governments Need to Regulate Artificial Intelligence - BBN Times
Posted in Artificial Intelligence
Comments Off on 3 Reasons Why Governments Need to Regulate Artificial Intelligence – BBN Times
Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this – The Conversation AU
Posted: at 9:36 am
From Google searches and dating sites to detecting credit card fraud, artificial intelligence (AI) keeps finding new ways to creep into our lives. But can we trust the algorithms that drive it?
As humans, we make errors. We can have attention lapses and misinterpret information. Yet when we reassess, we can pick out our errors and correct them.
But when an AI system makes an error, it will be repeated again and again no matter how many times it looks at the same data under the same circumstances.
AI systems are trained using data that inevitably reflect the past. If a training data set contains inherent biases from past human decisions, these biases are codified and amplified by the system.
Or if it contains less data about a particular minority group, predictions for that group will tend to be worse. This is called algorithmic bias.
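A toy simulation makes the point concrete. In the hypothetical sketch below, one model is fitted to pooled data dominated by a majority group whose feature-outcome relationship differs from that of a small minority group; the minority's predictions come out far worse, exactly as described above. The data and the model are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Majority group: outcome rises with the feature. Minority group: it falls.
x_maj = rng.normal(size=1000)
y_maj = x_maj + rng.normal(0.0, 0.1, 1000)
x_min = rng.normal(size=30)                 # far fewer training examples
y_min = -x_min + rng.normal(0.0, 0.1, 30)

# A single model fitted to the pooled data is dominated by the majority.
x = np.concatenate([x_maj, x_min])
y = np.concatenate([y_maj, y_min])
slope = float(x @ y / (x @ x))              # least-squares line through zero

mse_maj = float(np.mean((y_maj - slope * x_maj) ** 2))
mse_min = float(np.mean((y_min - slope * x_min) ** 2))
print(f"majority error: {mse_maj:.2f}, minority error: {mse_min:.2f}")
# The minority group's predictions are dramatically worse.
```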
Gradient Institute has co-authored a paper demonstrating how businesses can identify algorithmic bias in AI systems, and how they can mitigate it.
The work was produced in collaboration with the Australian Human Rights Commission, Consumer Policy Research Centre, CSIRO's Data61 and the CHOICE advocacy group.
Algorithmic bias may arise through a lack of suitable training data, or as a result of inappropriate system design or configuration.
For example, a system that helps a bank decide whether or not to grant loans would typically be trained using a large data set of the bank's previous loan decisions (and other relevant data to which the bank has access).
The system can compare a new loan applicant's financial history, employment history and demographic information with corresponding information from previous applicants. From this, it tries to predict whether the new applicant will be able to repay the loan.
But this approach can be problematic. One way in which algorithmic bias could arise in this situation is through unconscious biases from loan managers who made past decisions about mortgage applications.
If customers from minority groups were denied loans unfairly in the past, the AI will consider these groups' general repayment ability to be lower than it is.
Young people, people of colour, single women, people with disabilities and blue-collar workers are just some examples of groups that may be disadvantaged.
Read more: Artificial Intelligence has a gender bias problem -- just ask Siri
The biased AI system described above poses two key risks for the bank.
First, the bank could miss out on potential clients, by sending victims of bias to its competitors. It could also be held liable under anti-discrimination laws.
If an AI system continually applies inherent bias in its decisions, it becomes easier for government or consumer groups to identify this systematic pattern. This can lead to hefty fines and penalties.
Our paper explores several ways in which algorithmic bias can arise.
It also provides technical guidance on how this bias can be removed, so AI systems produce ethical outcomes which don't discriminate based on characteristics such as race, age, sex or disability.
For our paper, we ran a simulation of a hypothetical electricity retailer using an AI-powered tool to decide how to offer products to customers and on what terms. The simulation was trained on fictional historical data made up of fictional individuals.
Based on our results, we identify five approaches to correcting algorithmic bias. This toolkit can be applied to businesses across a range of sectors to help ensure AI systems are fair and accurate.
1. Get better data
The risk of algorithmic bias can be reduced by obtaining additional data points or new types of information on individuals, especially those who are underrepresented (minorities) or those who may appear inaccurately in existing data.
2. Pre-process the data
This consists of editing a dataset to mask or remove information about attributes associated with protections under anti-discrimination law, such as race or gender.
3. Increase model complexity
A simpler AI model can be easier to test, monitor and interrogate. But it can also be less accurate and lead to generalisations which favour the majority over minorities.
4. Modify the system
The logic and parameters of an AI system can be proactively adjusted to directly counteract algorithmic bias. For example, this can be done by setting a different decision threshold for a disadvantaged group (see the sketch after this list).
5. Change the prediction target
The specific measure chosen to guide an AI system directly influences how it makes decisions across different groups. Finding a fairer measure to use as the prediction target will help reduce algorithmic bias.
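The paper's own implementation isn't reproduced in the article, but approach 4 can be sketched in a few lines, as below. The scores, group labels and threshold values are hypothetical; in practice any such adjustment would be calibrated from a bias audit and reviewed against anti-discrimination law.

```python
import numpy as np

def approve(scores, groups, default_threshold=0.5, group_thresholds=None):
    """Boolean approval decision per applicant, with optional per-group thresholds."""
    group_thresholds = group_thresholds or {}
    thresholds = np.array(
        [group_thresholds.get(g, default_threshold) for g in groups]
    )
    return scores >= thresholds

# Hypothetical model scores and group labels for four applicants.
scores = np.array([0.55, 0.45, 0.48, 0.70])
groups = ["majority", "minority", "minority", "majority"]

# If an audit shows the model systematically under-scores the minority
# group, its threshold can be lowered to counteract the measured bias.
print(approve(scores, groups))                                       # one cut-off for all
print(approve(scores, groups, group_thresholds={"minority": 0.42}))  # adjusted cut-off
```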
In our recommendations to government and businesses wanting to employ AI decision-making, we foremost stress the importance of considering general principles of fairness and human rights when using such technology. And this must be done before a system is in use.
We also recommend systems are rigorously designed and tested to ensure outputs aren't tainted by algorithmic bias. Once operational, they should be closely monitored.
Finally, we advise that using AI systems responsibly and ethically extends beyond compliance with the narrow letter of the law. It also requires the system to be aligned with broadly accepted social norms and considerate of its impact on individuals, communities and the environment.
With AI decision-making tools becoming commonplace, we now have an opportunity to not only increase productivity, but create a more equitable and just society; that is, if we use them carefully.
Read more: YouTube's algorithms might radicalise people but the real problem is we've no idea how they work
More here:
Posted in Artificial Intelligence
Comments Off on Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this – The Conversation AU
Government Will Increase Scrutiny on AI in Screening – ESR NEWS
Posted: at 9:36 am
Written By Employment Screening Resources (ESR)
Government agencies in the United States such as the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) will increase scrutiny on how Artificial Intelligence (AI) is used in background screening, according to the ESR Top Ten Background Check Trends for 2021 compiled by leading global background check firm Employment Screening Resources (ESR).
In April 2020, the FTC, the nation's primary privacy and data security enforcer, issued guidance to businesses on "Using Artificial Intelligence and Algorithms", written by Andrew Smith, Director of the FTC Bureau of Consumer Protection, on the use of AI and Machine Learning (ML) technology and automated decision making with regard to federal laws, including the Fair Credit Reporting Act (FCRA) that regulates background checks.
"Headlines tout rapid improvements in artificial intelligence technology. The use of AI technology, machines and algorithms to make predictions, recommendations, or decisions, has enormous potential to improve welfare and productivity. But it also presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities," Director Smith wrote in the FTC guidance.
"The good news is that, while the sophistication of AI and machine learning technology is new, automated decision-making is not, and we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers," Smith wrote. "In 2016, the FTC issued a report on Big Data: A Tool for Inclusion or Exclusion? In 2018, the FTC held a hearing to explore AI and algorithms."
In July 2020, the Consumer Financial Protection Bureau (CFPB), a government agency that helps businesses comply with federal consumer financial law, published a blog on "Providing adverse action notices when using AI/ML models" that addressed industry concerns about how the use of AI interacts with the existing regulatory framework. One issue is how complex AI models address the adverse action notice requirements in the FCRA.
"FCRA also includes adverse action notice requirements. For example, when adverse action is based in whole or in part on a credit score obtained from a consumer reporting agency (CRA), creditors must disclose key factors that adversely affected the score, the name and contact information of the CRA, and additional content. These notice provisions serve important anti-discrimination, educational, and accuracy purposes," the blog stated.
"There may be questions about how institutions can comply with these requirements if the reasons driving an AI decision are based on complex interrelationships. Industry continues to develop tools to accurately explain complex AI decisions ... These developments hold great promise to enhance the explainability of AI and facilitate use of AI for credit underwriting compatible with adverse action notice requirements," the blog concluded.
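The CFPB blog doesn't prescribe a technique, but one long-standing way to produce "key factor" reason codes for an interpretable linear scoring model is to rank features by how much each one pulled an applicant's score down relative to an average applicant. The feature names, weights and values below are invented for illustration; complex AI models need more elaborate attribution methods, which is precisely the industry work the blog refers to.

```python
import numpy as np

# Hypothetical linear score model: the contribution of each feature is
# coefficient * (applicant value - population mean).
features = ["payment_history", "utilization", "account_age", "recent_inquiries"]
weights = np.array([0.45, -0.30, 0.15, -0.10])   # invented model coefficients
pop_mean = np.array([0.80, 0.40, 0.50, 0.20])    # invented population averages

applicant = np.array([0.55, 0.85, 0.30, 0.60])

# Per-feature contribution relative to an average applicant.
contrib = weights * (applicant - pop_mean)

# The most negative contributions are the "key factors" to disclose.
order = np.argsort(contrib)
key_factors = [features[i] for i in order[:2]]
print(key_factors)  # ['utilization', 'payment_history']
```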
In December 2020, ten Democratic members of the United States Senate sent a letter requesting clarification from U.S. Equal Employment Opportunity Commission (EEOC) Chair Janet Dhillon regarding the EEOC's authority to investigate bias in AI-driven hiring technologies, according to a press release on the website of U.S. Senator Michael Bennet (D-Colorado), one of the Senators who signed the letter.
"While hiring technologies can sometimes reduce the role of individual hiring managers' biases, they can also reproduce and deepen systemic patterns of discrimination reflected in today's workforce data ... Combatting systemic discrimination takes deliberate and proactive work from vendors, employers, and the Commission," Bennet and the other nine Senators wrote in the letter to EEOC Chair Dhillon.
"Today, far too little is known about the design, use, and effects of hiring technologies. Job applicants and employers depend on the Commission to conduct robust research and oversight of the industry and provide appropriate guidance. It is essential that these hiring processes advance equity in hiring, rather than erect artificial and discriminatory barriers to employment," the Senators continued in the letter.
"Machine learning is based on the idea that machines should be able to learn and adapt through experience, while artificial intelligence refers to the broader idea that machines can execute tasks intelligently, simulating human thinking, capability and behavior to learn from data without being explicitly programmed," explained Attorney Lester Rosen, founder and chief executive officer (CEO) of ESR.
"There have certainly been technological advances, including back-office efficiencies and strides towards better integrations that streamline the employment screening process. However, does that qualify as machine learning or AI? In reality, true machine learning and artificial intelligence, and the role they are likely to play in the future, could fuel a new source of litigation for plaintiffs' class action attorneys," said Rosen.
"Proponents of AI argue that it will make the processes faster and take bias out of hiring decisions. It is doubtful that civil rights advocates and the EEOC will see it that way. The use of AI for decision making is contrary to one of the most fundamental bedrock principles of employment: that each person should be treated as an individual, and not processed as a group or based upon data points," Rosen concluded.
Employment Screening Resources (ESR), a leading global background check provider that was ranked the number one screening firm by HRO Today in 2020, offers the award-winning ESR Assured Compliance system, which is part of The ESRCheck Solution, for real-time compliance that offers automated notices, disclosures, and consents for employers performing background checks. To learn more about ESR, visit http://www.esrcheck.com.
Since 2008, Employment Screening Resources (ESR) has annually selected the ESR Top Ten Background Check Trends that feature emerging and influential trends in the background screening industry. Each of the top background check trends for 2021 will be announced via the ESR News Blog and listed on the ESR background check trends web page at http://www.esrcheck.com/Tools-Resources/ESR-Top-Ten-Background-Check-Trends/.
NOTE: Employment Screening Resources (ESR) does not provide or offer legal services or legal advice of any kind or nature. Any information on this website is for educational purposes only.
© 2021 Employment Screening Resources (ESR). Making copies of or using any part of the ESR News Blog or ESR website for any purpose other than your own personal use is prohibited unless written authorization is first obtained from ESR.
See original here:
Government Will Increase Scrutiny on AI in Screening - ESR NEWS
Posted in Artificial Intelligence
Comments Off on Government Will Increase Scrutiny on AI in Screening – ESR NEWS
Artificial intelligence is the future for pathology at Duke through new program – WRAL Tech Wire
Posted: at 9:36 am
DURHAM – Researchers at Duke University have been merging artificial intelligence with health care to improve patient outcomes for the better part of two decades. From making cochlear implants deliver purer sounds to the brain to finding hidden trends within reams of patient data, the field spans a diverse range of niches that are now beginning to make real impacts.
Among these niches, however, there is one in which Duke researchers have always been at the leading edge: image analysis, with a broad team of researchers teaching computers to analyze images to unearth everything from various forms of cancer to biomarkers of Alzheimer's disease in the retina.
To keep pushing the envelope in this field by cementing these relationships into both schools' organization, Duke's Pratt School of Engineering and School of Medicine have launched a new Division of Artificial Intelligence and Computational Pathology.
"Machine learning can do a better job than the average person at finding the signal in the noise, and that can translate into better outcomes and more cost-effective care," said Michael Datto, associate professor of pathology at Duke. "This is one of the most exciting times I've seen in pathology, and it's going to be exciting to see what we can do."
The new division will support translational research by developing AI technologies for image analysis to enhance the diagnosis, classification, prediction and prognostication of a variety of diseases, as well as train the next generation of pathologists and scientists in the emerging field.
The division is led by Carolyn Glass, assistant professor of pathology, and Laura Barisoni, professor of pathology and medicine, and operates with the partnership of AI Health, directed by Lawrence Carin, professor of electrical and computer engineering and vice president for research at Duke, and Adrian Hernandez, professor of medicine and vice dean for clinical research.
"Duke has taken the lead at the national level in establishing a division in the Department of Pathology in partnership with AI Health, with the goal of developing and establishing new models and protocols to practice pathology in the 21st century," said Barisoni, who is also director of the renal pathology service at Duke.
AI Health is also a new initiative, launched as a collaboration between the Schools of Engineering and Medicine and Trinity College of Arts & Sciences, with units such as the Duke Global Health Institute and the Duke-Margolis Center for Health Policy, to leverage machine learning to improve both individual and population health through education, research and patient-care projects.
"For what everyone has envisioned for AI Health, we see pathology paving the way," said Hernandez. "AI Health is a catalyst and spark for putting cutting-edge machine learning development and testing into real-world settings. In pathology, we have image-intensive data streams, and COVID-19 has really emphasized the need for the timely processing of patient samples."
Applying machine learning image analysis to pathology processes, however, is easier said than done. Figuring out how to process extremely large image files and train AI algorithms on relatively few examples is part of the focus of Carin's laboratory, in partnership with Ricardo Henao, assistant professor of biostatistics and bioinformatics as well as electrical and computer engineering.
Current AI algorithms, such as convolutional neural networks (CNN), were originally designed for the analysis of natural images, such as those captured on phones. Adapting such algorithms for the diagnosis of biopsy scans, however, is challenging due to the large size of the scans, typically tens of gigabytes, and the sparsity of abnormal diagnostic cells they contain. Led by David Dov, a postdoctoral researcher in Carin's laboratory, Duke engineers are working to overcome these challenges to design AI algorithms for the diagnosis of various conditions, such as different types of cancers and transplant rejection.
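The article doesn't detail the Duke group's specific algorithms, but a standard workaround for gigapixel scans, sketched generically below rather than as their implementation, is to tile the slide into patches, score each patch with a CNN, and pool the patch scores into a slide-level prediction. Max-pooling is a common choice precisely because abnormal cells are sparse: a slide is flagged if any patch looks abnormal.

```python
import numpy as np

def tile(slide, patch=256):
    """Yield non-overlapping patch x patch tiles from a 2-D slide array."""
    h, w = slide.shape[:2]
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            yield slide[y:y + patch, x:x + patch]

def slide_prediction(slide, patch_model):
    """Max-pool patch scores: the slide is as abnormal as its worst patch."""
    scores = [patch_model(p) for p in tile(slide)]
    return max(scores) if scores else 0.0

# Stand-in for a trained CNN: mean intensity as a dummy abnormality score.
dummy_model = lambda p: float(p.mean()) / 255.0

slide = np.random.default_rng(0).integers(0, 256, size=(1024, 1024))
print(f"slide-level score: {slide_prediction(slide, dummy_model):.3f}")
```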
"Designing algorithms that make a real impact on clinical practice requires close collaboration between AI researchers and pathologists," said Dov, who joined Duke after completing his PhD in electrical engineering at the Technion – Israel Institute of Technology. "A key challenge in these collaborations is gaining a deep understanding of the gaps in medical practice, and then ensuring that clinicians fully understand the capabilities and limitations of AI in bridging these gaps. The new Division of Artificial Intelligence and Computational Pathology plays an important role in facilitating such collaborations."
In a virtual kickoff meeting this fall, the new division's leadership spoke to the potential it holds to improve patient health outcomes, and several researchers delved into projects they already have underway in the field. For example, Danielle Range, assistant professor of pathology, spoke of efforts to use AI in diagnosing cancer; Roarke Horstmeyer, assistant professor of biomedical engineering, described his efforts to create a smart microscope to better diagnose disease; and Glass detailed her work on the use of machine learning in diagnosing transplant rejection.
"In the last couple of years, we have seen an exponential increase in AI pathology interest, from Duke undergraduates to medical students applying for pathology residency positions," said Glass. "I think continued development of a solid, integrated curriculum and educational program will be critical to train these future leaders."
© Duke University
See more here:
Artificial intelligence is the future for pathology at Duke through new program - WRAL Tech Wire
Posted in Artificial Intelligence
Comments Off on Artificial intelligence is the future for pathology at Duke through new program – WRAL Tech Wire
Jvion Applies Clinical Artificial Intelligence to Help Prioritize COVID-19 Vaccine Distribution to Most Vulnerable Communities and Individuals – PR…
Posted: at 9:36 am
Jvion COVID Community Vulnerability Map, now featuring COVID Vaccination Prioritization Index (VPI)
ATLANTA (PRWEB) January 19, 2021
Jvion, a leader in clinical artificial intelligence (AI), announced the launch of its COVID Vaccination Prioritization Index (VPI). The VPI helps guide the distribution of COVID-19 vaccines during subsequent phases of community vaccination efforts. The VPI will be applied in two ways. The first is an update to Jvion's COVID Community Vulnerability Map, initially released last spring, that indexes communities by their priority level for vaccination, based on CDC guidelines and socioeconomic vulnerability. Jvion can also add the index to its COVID Patient Vulnerability Lists for new and existing customers.
"The past year has been difficult for us all, but particularly so for our society's most vulnerable members: the elderly, the sick and the unemployed, racial and ethnic minorities, rural Americans, and the hard-working people on the frontlines," said Jvion's Chief Product Officer Dr. John Showalter, MD, MSIS. "Now that vaccines are here, we're proud to be able to help these people get the protection they need as quickly as possible."
Jvion's VPI takes into account the CDC's recommendations for who should be prioritized for the limited supply of vaccines. Once healthcare workers and long-term care facility residents are vaccinated, the next phases will prioritize essential workers, the elderly, and those with underlying medical conditions. At each phase, Jvion's VPI will help public health officials determine which locations need more vaccines based on the makeup of the community, and help providers target their vaccination outreach to their patients at greatest risk.
To that effect, the COVID Community Vulnerability Map has been updated with a new layer that rates counties and zip codes on a scale from 1-6 based on the proportion of residents in the CDC's prioritization cohorts. The layer also accounts for environmental and social determinants of health (SDOH), such as air pollution, low-income jobs, and food insecurity, all of which have been correlated with higher rates of hospitalizations and deaths from COVID-19.
Since its release in March 2020, the Map has been viewed over two million times, including by members of the White House Task Force, FEMA, every military branch, and state and local governments, and has been used to guide public health outreach, resource allocation, and the deployment of mobile testing sites in vulnerable communities.
After launching the public-facing COVID Community Vulnerability Map, Jvion also sent Patient Vulnerability Lists to its provider and payer customers that ranked their patients or members by their vulnerability to severe outcomes if infected with COVID-19, based on their clinical, socioeconomic, and behavioral risk factors. These lists will be updated to flag those individuals who should be prioritized for vaccination.
The vaccine prioritization tools are possible thanks to the CORE. Built on Microsoft Azure, the CORE is a secure and scalable repository that includes clinical, socioeconomic, and experiential data on 30 million individuals. The CORE was the driving force behind Jvion's COVID Response Suite, which included the COVID Community Vulnerability Map and Patient Vulnerability Lists, in addition to an Inpatient Triage Assessment and a Return to Work Assessment.
About Jvion
Jvion, a leader in clinical artificial intelligence, enables providers, payers and other healthcare entities to identify and prevent avoidable patient harm, utilization and costs. An industry first, the Jvion CORE goes beyond predictive analytics and machine learning to identify patients on a trajectory to becoming high-risk. Jvion then determines the interventions that will more effectively reduce risk and enable clinical and operational action. The CORE accelerates time to value by leveraging established patient-level intelligence to drive engagement across healthcare organizations, populations, and individuals. To date, the Jvion CORE has been deployed across hundreds of clients and resulted in millions saved. For more information, visit https://www.jvion.com.
See the rest here:
Posted in Artificial Intelligence
Comments Off on Jvion Applies Clinical Artificial Intelligence to Help Prioritize COVID-19 Vaccine Distribution to Most Vulnerable Communities and Individuals – PR…
Death on the high seas: Taiwanese rights groups demand end to modern slavery on fishing boats – Telegraph.co.uk
Posted: at 9:34 am
The agency added it was working to enforce its clear policy against recruitment fees, the withholding of wages, and excess working hours, which should guarantee 10 hours' rest a day and four days' holiday every month.
There would be zero tolerance of physical or verbal abuse and mechanisms were in place to report violations. Harbour inspections to monitor conditions for foreign crews had also been introduced in 2018.
On the US crackdown, the agency said it was willing to listen to suggestions from all walks of life with humility and discuss ways to improve.
But while efforts were being made to close the regulatory gap between domestic and foreign workers, it insisted that most fishing boat owners were willing to treat foreign crews kindly.
NGOs had generally presented a one-sided, subjective picture that unfairly tainted the industry and did not always take into account the views of the vessel owners, it claimed.
Pressure for action continues to mount.
"In recent years, the Taiwanese government has instituted legal and regulatory changes. However, NGOs find these changes to be insufficient and they continue reporting serious abuses," said a report by the Global Labour Justice-International Labour Rights Forum in December.
"To end forced labour in distant water fisheries, the government must abolish the discriminatory employment scheme and ensure all migrant fishers are afforded the same labour rights and protections as Taiwanese fishers," said Kimberly Rogovin, the group's senior seafood campaign coordinator.
"I do hope the fishing industry in Taiwan can learn to adjust to international and local regulation against illegal, unreported and unregulated fishing and the violation of human rights," added Lennon Ying-Dah Wong, a workers' rights activist.
"What we want is merely to stop this kind of scandal and abuse, not to destroy the industry ... If the industry doesn't change, they might face more international sanctions."
Hariyanto, the head of the Indonesia Migrant Workers Union, said he had heard of 21 cases, including five from Taiwan, of modern slavery on fishing boats from 2019-2020. The case in which Arif had died was "one of the worst cases we found", he claimed.
But perhaps nobody wants to see reform more than fishermen like Jack and Stanley.
Stanley was left in debt to the broker who found him the job, and still has a scar on his leg where he was struck with a fishing spear.
Jack remains in hiding in Taiwan, where he has found construction work, but is haunted by his experience at sea.
"I just want to be heard ... to tell the whole truth about what happened in our fishing boat," he said. "I want to get justice for what happened to all of us."
Additional reporting: Dan Olanday in Manila
Protect yourself and your family by learning more about Global Health Security
Read the rest here:
Posted in High Seas
Comments Off on Death on the high seas: Taiwanese rights groups demand end to modern slavery on fishing boats – Telegraph.co.uk