The Prometheus League
Breaking News and Updates
The Evolutionary Perspective
Daily Archives: January 19, 2021
Posted: January 19, 2021 at 9:39 am
Immediately after the attack on the U.S. Capitol, all corners of the political spectrum repudiated the mob of President Trump's supporters. Yet within days, prominent Republicans, party officials, conservative media voices and rank-and-file voters began making a rhetorical shift to try to downplay the group's violent actions.
In one of the ultimate don't-believe-your-eyes moments of the Trump era, these Republicans have retreated to the ranks of misinformation, claiming it was Black Lives Matter protesters and far-left groups like antifa who stormed the Capitol, in spite of the pro-Trump flags and QAnon symbology in the crowd. Others have argued that the attack was no worse than the rioting and looting in cities during the Black Lives Matter movement, often exaggerating the unrest last summer while minimizing a mob's attempt to overturn an election.
The shift is revealing about how conspiracy theories, deflection and political incentives play off one another in Mr. Trump's G.O.P. For a brief time, Republican officials seemed perhaps open to grappling with what their party's leader had wrought: violence in the name of their Electoral College fight. But any window of reflection now seems to be closing as Republicans try to pass blame and to compare last summer's lawlessness, which was condemned by Democrats, to an attack on Congress, which was inspired by Mr. Trump.
"The violence at the Capitol was shameful," Rudolph W. Giuliani, the president's lawyer, tweeted at 6:55 a.m. the morning after the attack. "Our movement values respect for law and order and for the police." But now, in a new video titled "What Really Happened on January 6th?" Mr. Giuliani is among those who are back to emphasizing conspiracy theories.
"The riot was preplanned," said Mr. Giuliani, the former mayor of New York City. "This was an attempt to slander Trump." He added, "The evidence is coming out."
For months, Republicans have used last summer's protests as a political catchall, highlighting isolated instances of property destruction and calls to defund the police to motivate their base in November. The tactic proved somewhat effective on Election Day: Democrats lost ground in the House of Representatives, with Republican challengers hammering a message of liberal lawlessness.
About nine of every 10 voters said the protests had been a factor in their voting, according to estimates from A.P. VoteCast, a large voter survey conducted for The Associated Press by NORC at the University of Chicago. Nearly half of those respondents backed Mr. Trump, with some saying they worried that the unrest could disrupt their communities.
Republicans are now using the looting to try to explain away the Capitol attack. The result, for some Republican voters, ranges from doubt to conspiratorial thinking.
Suzanne Doherty, 67, who traveled from Michigan to be in Washington on Jan. 6 to support Mr. Trump, came away feeling confused and depressed over the invasion of the Capitol and not trusting the images of the mob.
"I heard that on antifa websites, people were invited to go to the rally and dress up like Trump supporters, but I'm not sure what to believe anymore," she said. "There were people there only to wreak havoc. All I know is that there was a whole gamut of people there, but the rioters were not us. Maybe they were antifa. Maybe they were B.L.M. Maybe they were extreme right militants."
The conjecture that the mob was infiltrated by Black Lives Matter and antifa has been metastasizing from the dark corners of the pro-Trump internet to the floors of Congress and the Republican base, even as law enforcement officials say there is no evidence to support it. The authorities are now flagging threats of violence and rioting leading up to President-elect Joseph R. Biden Jr.'s inauguration.
That has not stopped Republican lawmakers and some of their constituents from pushing these narratives to defend Mr. Trump.
Interviews with voters this week in Kenosha, the southeast Wisconsin city that was roiled by a high-profile police shooting last summer, captured the yawning split along ideological and racial lines. Democrats pointed to the differences in motivation between the Capitol mob and the mass protests of the Black Lives Matter movement, which was not seeking to overturn an election or being incited by the president. Republicans saw the Capitol attack as the work of outsiders or as justified by the summers isolated incidents of looting and property destruction.
"I think the goal was to try to put some final nails in the coffin of Donald Trump," said Dale Rovik, a 59-year-old who supports Mr. Trump and is a native of Kenosha. "I think it's pretty clear that they did that to make him look bad and to accuse him and, of course, to try and impeach him again. That certainly is pretty clear to me."
Joe Pillizzi, a 67-year-old retired salesman in Kenosha who supports Mr. Trump, said he believed last summer's looting and rioting had put a seed in the minds of the mob that attacked the Capitol.
"If the Black Lives Matter didn't do what they did, I don't think the Capitol attack would have happened," he said.
Democrats have also seized on a point of conservative hypocrisy. For all the talk of supporting law and order, this month's attacks pitted a violent mob against Capitol Hill law enforcement personnel, and a police officer was killed.
Dominique Pritchett, a 36-year-old mental health therapist in Kenosha who supports Black Lives Matter, said the events of the summer were being portrayed inaccurately by the right, while the Capitol rioters were treated far more softly by the police than peaceful Black Lives Matter protesters were.
"No, the protests did not turn violent; the looting and rioting started," she said. "No violence is acceptable; I think we all can agree to that." Referring to the Capitol rioters, she said: "They are tearing up one of the most protected and prestigious places in the United States because No. 45 lost. Someone lost an election, versus Black and brown people getting gunned down and killed every day."
The misinformation on the right reflects the mood of Mr. Trump's most ardent base, the collection of elected officials in deep-red America who have consistently rationalized his behavior in crises. But other signs indicate that some Republicans are exasperated by Mr. Trump and his actions in a way not seen since he entered office.
A new Pew Research poll released Friday showed the president's approval rating dropping sharply among Republicans since he inspired the mob violence, cratering to an all-time low of 60 percent, more than 14 percentage points lower than his previous nadir. Among Americans at large, Mr. Trump's approval rating was 29 percent, the lowest since he took office in 2017, and he had a 68 percent disapproval rating, his highest recorded number.
In the House, 10 Republicans voted to impeach Mr. Trump for a second time, making it the most bipartisan impeachment vote in the country's history. Mitch McConnell, the Senate majority leader, has signaled a desire to rid the party of Mr. Trump. And in recent days in Washington, some Republicans spoke out about the misinformation that had spread through the ranks of the party's base and its elected officials.
Representative Peter Meijer, a Republican freshman who voted to impeach Mr. Trump, said in an interview with The Daily, the New York Times audio podcast, that the prevalence of false information among the base had created two worlds among congressional Republicans: one based in reality and another grounded in conspiracy.
"The world that said this was actually a landslide victory for Donald Trump, but it was all stolen away and changed and votes were flipped and Dominion Voting Systems," Mr. Meijer said, describing what he called a "fever swamp" of conspiracy theories.
In a video news conference Friday, Senator Lindsey Graham of South Carolina also made a direct appeal to Republicans still in doubt. "Biden actually won," he said. "The election wasn't rigged."
Their words, contrasted with Mr. Trump's own message and that of many supporters, highlight a challenge for the Republican Party. The rioters targeted law enforcement personnel, members of Congress and even Vice President Mike Pence. However, much of the party's base and many of its leaders at the local and state levels remain loyal to Mr. Trump.
Another Republican who backed impeachment, Representative Tom Rice of South Carolina, acknowledged in an interview with The Associated Press that he was likely to face a G.O.P. primary challenger in his 2022 re-election effort because of his vote, a threat the other nine Republicans who voted for impeachment will probably face as well.
"FIRST G.O.P. Primary Challenger Announces Run in Michigan Against Freshman Rep. Meijer One of 10 G.O.P. Turncoats," read a headline from The Gateway Pundit, a right-wing and often conspiratorial news outlet that has amassed influence among Mr. Trump's base.
Reached by email, the site's founder, Jim Hoft, did not reply to questions but did send along several of his own news articles related to claims of antifa involvement in the Capitol attack, citing the case of a man named John Sullivan, whom the right-wing media has dubbed an antifa leader in efforts to prove its theory of infiltration. He was the same man cited by Mr. Giuliani in tweets that threatened to expose and place total blame on John and the 226 members of antifa that instigated the Capitol riot.
Interviews with local and state Republican officials show the long-term effects of the amplification of misinformation within the party. While few members of Congress have agreed with Mr. Trump's assertion that his actions were "totally appropriate," several party officials did. And while many Republicans condemned the violence, the attacks on law enforcement personnel and the killing of a Capitol Police officer, Brian Sicknick, they did not agree that those things were the work of pro-Trump mobs acting in the president's name, as is the consensus among law enforcement officials.
"I do not believe President Trump should be blamed for what happened in D.C. on Jan. 6 any more than the media should be blamed for the carnage in Minneapolis, Portland, Dallas or Seattle," said Ed Henry, a former campaign chair for Mr. Trump in Alabama. "The attack on the Capitol has not shaken my confidence in President Trump. I still support him."
Eileen Grossman, a Republican activist from Rhode Island who worked on Mr. Trumps campaign, dismissed the violence as the work of outside agitators.
"I know that the violence was caused by bad actors from antifa and liberal progressives as well as Black Lives Matter," Ms. Grossman said, without citing any evidence. She added, using an acronym for "Republicans in name only," that the Republicans who voted for impeachment would face primary challengers. "They are RINOs and traitors."
Ms. Grossman has recently left Rhode Island because, in her words, she wanted to live in a red state. She moved to Georgia, a historically Republican state that in the last three months has voted for Mr. Biden in the presidential election and sent two Democratic candidates to the Senate.
"Obviously I chose poorly," she said.
Reporting was contributed by Lisa Lerer in New York, Reid J. Epstein in Washington, Tom Kertscher in Milwaukee and Kathleen Gray in Port Huron, Mich.
Chilling posters reveal Antifa is planning to clash with Trump fans at Inauguration Day riots – The Sun
Posted: at 9:39 am
ANTIFA are planning clashes with Trump supporters at Inauguration Day riots, flyers reveal.
The events are being organised to confront pro-Trump protesters who are headed to capitols across the US.
Cities have readied themselves for the threat of violence by erecting barriers and deploying thousands of National Guard troops ahead of the inauguration of Joe Biden.
But counter demonstrations are now being planned for Seattle, Portland, Denver and Sacramento.
Flyers for events circulating were highlighted by conservative journalist Andy Ngo who is a fierce critic of Antifa and has posted videos of violent clashes its supporters have been involved in.
He tweeted: "#Antifa are continuing to put out the call for riots throughout the U.S. on 20 Jan. These are some of their flyers for Seattle, northern California & Denver."
Ngo also pointed to an event organised by the Pacific Northwest Youth Liberation Front.
The group was last year blamed for playing a major role in the months of unrest in Portland.
According to the Seattle Times, the group was emerging as a persistent militant voice in the city.
The paper reported that the group has anonymous leaders and considers itself anti-fascist and anti-capitalist with its goal the overthrowing of the US political system.
Earlier this month, in a foretaste of what could unfold, Antifa used chemical spray on pro-Trump supporters, who had gathered for a march.
As the two sides squared up to each other on the boardwalk in Pacific Beach, California, video shows a black-clad Antifa supporter pulling out a canister and spraying the MAGA fans.
Security across the country has now been increased leading up to the big day after the FBI warned of armed protests in all 50 US states.
Right-wing protesters were seen outside statehouses on Sunday, with some of them carrying rifles.
However, some militia groups told followers not to attend over fears the planned events were really police traps.
The protests came as former FBI boss James Comey warned Joe Biden's inauguration is under threat from "armed, disturbed" people following the January 6 riot at the US Capitol.
National Guard troops were being activated in the days leading up to Inauguration Day.
Approximately 19 states, including Florida, Illinois, Kentucky, Maine, Michigan, North Carolina, Oregon, Washington, and Wisconsin, activated Guardsmen to their capital cities.
Last week, an intelligence report warned that militia mobs, white supremacists, and QAnon obsessives inspired by the Capitol riots pose a greater terror threat in the US than ISIS this year.
Posted: at 9:39 am
Andy Ngo, the chronicler of violent antifa and Black Lives Matter riots, has broken the code on selling books.
A full three weeks before its Feb. 2 release, his Unmasked: Inside Antifa's Radical Plan to Destroy Democracy spent much of this week on Amazon as the No. 1 bestseller, a rarity for books not written by longtime authors and presidents.
In a note to Secrets, he said, "I just thank everyone who responded to the antifa smears and threats by ordering the book. This is a tough economic time for many Americans and yet they see the importance of Unmasked."
He was referencing the attacks on social media from journalists and others who do not like his coverage of antifa and the riots they were involved in last year, especially in Portland, Oregon, and Seattle, Washington.
The protests against his book have been so hot that the popular seller Powell's in Oregon decried the book and banned it from its shelves, though it will offer it online.
Unmasked held the No. 1 spot on Amazon for much of the week, before slipping to No. 2 Friday.
Posted: at 9:39 am
It's not a surprise that the Trump administration has sowed divisions between people, and that includes family members.
A child is garnering attention and praise for standing up to their Trump supporter mother, who attempted to push the conspiracy theory that Antifa was responsible for the Capitol riots.
In a viral video captured by the anonymous child, the mother enters their bedroom to tell them that Antifa attacked the Capitol. "They were mixed in with the Trump supporters," she said, and cited people wearing MAGA hats backwards as proof of Antifa members disguising themselves as Trump supporters in the riots.
"What is wrong with that sentence?" her kid replies. When their mother cannot answer, they add that she just proved that Trump supporters also stormed the Capitol.
The mother challenges them by saying that Trump supporters were peacefully assembling outside, but they argue back that they were terrorising people.
The unconvinced mother then leaves, looking fed up, but not before the child puts an end to the conversation once and for all by telling her that her stupidity is gross.
With the video going viral, people on social media effusively praised the child for shutting down their parent's baseless conspiracy theory so effectively.
This isnt the first time Gen Z has confronted their conservative parents. Numerous TikTok videos involve teens trying to educate adults on issues like Black Lives Matter, with the most famous example being Claudia Conway, daughter of former Trump adviser Kellyanne Conway.
Love in the time of algorithms: would you let your artificial intelligence choose your partner? – The Conversation AU
Posted: at 9:36 am
It could be argued that artificial intelligence (AI) is already the indispensable tool of the 21st century. From helping doctors diagnose and treat patients to rapidly advancing new drug discoveries, it's our trusted partner in so many ways.
Now it has found its way into the once exclusively human domain of love and relationships. With AI systems as matchmakers, in the coming decades it may become common to date a personalised avatar.
This was explored in the 2014 movie Her, in which a writer living in near-future Los Angeles develops affection for an AI system. The sci-fi film won an Academy Award for depicting what seemed like a highly unconventional love story.
In reality, we've already started down this road.
The online dating industry is worth more than US$4 billion, and there are a growing number of players in this market. Dominating it is the Match Group, which owns OkCupid, Match, Tinder and 45 other dating-related businesses.
Match and its competitors have accumulated a rich trove of personal data, which AI can analyse to predict how we choose partners.
The industry is embracing AI in a major way. For instance, Match has an AI-enabled chatbot named Lara who guides people through the process of romance, offering suggestions based on up to 50 personal factors.
Tinder co-founder and CEO Sean Rad outlines his vision of AI as a "simplifier": a smart filter that serves up what it knows a person is interested in.
Dating website eHarmony has used AI that analyses people's chats and sends suggestions about how to make the next move. Happn uses AI to rank profiles and show those it predicts a user might prefer.
Loveflutter's AI takes the guesswork out of moving the relationship along, such as by suggesting a restaurant both parties could visit. And Badoo uses facial recognition to suggest a partner who may look like a celebrity crush.
Dating platforms are using AI to analyse all the finer details. From the results, they can identify a greater number of potential matches for a user.
They could also potentially examine a persons public posts on social media websites such as Facebook, Twitter and Instagram to get a sense of their attitudes and interests.
This would circumvent bias in how people represent themselves on matchmaking questionnaires. Research has shown that inaccuracies in self-reported attributes are the main reason online dating isn't successful.
While the sheer amount of data on the web is too much for a person to process, it's all grist to the mill for a smart matchmaking AI.
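The ranking idea behind such matchmaking systems can be illustrated with a toy sketch. This is not any platform's actual method; the names and feature values below are entirely hypothetical. Each user is represented as a vector of interest scores, and candidates are ranked by cosine similarity to the user:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_matches(user, candidates):
    """Return candidate names sorted by similarity to `user`, best first."""
    scored = [(name, cosine(user, vec)) for name, vec in candidates.items()]
    return [name for name, _ in sorted(scored, key=lambda p: -p[1])]

# Hypothetical interest scores (e.g. travel, music, sport, film)
alice = [5, 1, 0, 3]
candidates = {"Ben": [4, 2, 0, 3], "Cara": [0, 5, 5, 0], "Dev": [5, 0, 1, 4]}
print(rank_matches(alice, candidates))  # → ['Ben', 'Dev', 'Cara']
```

Real systems replace these hand-picked scores with features learned from behavioural data, but the core step, scoring and ranking candidate profiles against a user's profile, is the same.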
As more user data is generated on the internet (especially on social media), AI will be able to make increasingly accurate predictions. Big players such as Match.com would be well-placed for this as they already have access to large pools of data.
And where there is AI there will often be its technological sibling, virtual reality (VR). As both evolve simultaneously, well likely see versions of VR in which would-be daters can practice in simulated environments to avoid slipping up on a real date.
This isn't much of a stretch considering virtual girlfriends, which are supposed to help people practice dating, have already existed for some years and are maturing as a technology. A growing number of offerings point to a significant degree of interest in them.
With enough user data, future AI could eventually create a fully customised partner for you in virtual reality: one that checks all your boxes. Controversially, the next step would be to experience an avatar as a physical entity.
It could inhabit a life-like android and become a combined interactive companion and sex partner. Such advanced androids dont exist yet, but they could one day.
Proponents of companion robots argue this technology helps meet a legitimate need for more intimacy across society, especially for the elderly, widowed and people with disabilities.
Meanwhile, critics warn of the inherent risks of objectification, racism and dehumanisation, particularly of women, but also men.
Another problematic consequence may be rising numbers of socially reclusive people who substitute technology for real human interaction. In Japan, this phenomenon (called hikikomori) is quite prevalent.
At the same time, Japan has also experienced a severe decline in birth rates for decades. The National Institute of Population and Social Security Research predicts the population will fall from 127 million to about 88 million by 2065.
Concerned by the declining birth rate, the Japanese government last month announced it would pour two billion yen (about A$25,000,000) into an AI-based matchmaking system.
The debate on digital and robotic love is highly polarised, much like most major debates in the history of technology. Usually, consensus is reached somewhere in the middle.
But in this debate, it seems the technology is advancing faster than we are approaching a consensus.
Generally, the most constructive relationship a person can have with technology is one in which the person is in control, and the technology helps enhance their experiences. For technology to be in control is dehumanising.
Humans have leveraged new technologies for millennia. Just as we learned how to use fire without burning down cities, so too we will have to learn the risks and rewards accompanying future tech.
Posted: at 9:36 am
Over 40,000 CT scans, MRIs, and x-rays from more than 10,000 patients have been brought together by NHSX throughout the pandemic to create a National COVID-19 Chest Imaging Database (NCCID). Hospitals and universities across the country are using the database to track patterns and markers of COVID-19 in patients, to quickly create treatment plans, and to better understand whether a patient will end up in a critical condition.
The NCCID is also helping researchers at University College London and in Bradford to develop AI tools that could help doctors improve the treatment of patients with COVID-19.
Clinicians at Addenbrooke's Hospital in Cambridge are developing an algorithm based on the NCCID database that will help to inform a more accurate diagnosis of patients when they present with potential COVID-19 symptoms without a positive test. This will help clinicians to implement earlier medical interventions, including giving patients oxygen and medication before they reach a critical stage of the illness.
The database can also help clinicians predict the need for additional ICU capacity, enabling the management of beds and staff resource in those settings.
Matt Hancock, Secretary of State for Health and Social Care, said: "The use of artificial intelligence is already beginning to transform patient care by making the NHS a more predictive, preventive, and personalised health and care service. It is vital we always search for new ways to improve care, especially as we fight the pandemic with the recovery beyond. This excellent work is testament to how technology can help to save lives in the UK."
Carola-Bibiane Schonlieb, Professor of Applied Mathematics and head of the Cambridge Image Analysis group at the University of Cambridge, said: "The NCCID has been invaluable in accelerating our research and provided us with a diverse, well-curated dataset of UK patients to use in our algorithm development. The ability to access the data for 18 different trusts centrally has increased our efficiency and ensures we can focus most of our time on designing and implementing the algorithms for use in the clinic for the benefit of patients.
"By understanding in the early stages of disease whether a patient is likely to deteriorate, we can intervene earlier to change the course of their disease and potentially save lives as a result."
The database is also helping with the development of a national AI imaging platform that will allow for the safe collecting and sharing of data, developing AI technologies to address a number of other conditions such as heart disease and cancers.
Posted: at 9:36 am
The NDAA guidelines reestablish an artificial intelligence advisor to the president and push education initiatives to create a tech-savvy workforce.
There's plenty of debate surrounding why the USA's current regulatory stance on artificial intelligence (AI) and cybersecurity remains fragmented. Regardless of your thoughts on the matter, the recently passed National Defense Authorization Act (NDAA) includes quite a few AI- and cybersecurity-driven initiatives for both military and non-military entities.
It's common to attach provisions to bills when Congress, the Senate, or both know the bill must pass by a certain time. The NDAA is one such bill. It has a yearly deadline, or the country's military completely loses funding, leading to lawmakers using it to pass laws that don't always make it on their own. (This year's bill was initially vetoed, but the veto was overridden on January 1.)
The bill contains 4,500 pages' worth of information. Along with a few different initiatives, one particular move outlines both the military's and the government's new interest in artificial intelligence.
One of the biggest moves in the bill has to do with the newly created Joint AI Center (JAIC). It moves from under the supervision of the DOD's CIO to the deputy secretary of defense, rising higher in the DOD hierarchy and possibly underscoring just how crucial new cybersecurity initiatives are to the Department of Defense.
To that end, the JAIC is the Department of Defense's (DoD) AI Center of Excellence, which provides expertise to help the department harness the game-changing power of AI. The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of artificial intelligence. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the services and components have real-time access to ever-improving libraries of data sets and tools.
The center will also receive its own oversight board, matching other bill provisions dealing with AI ethics, and will soon have acquisition authority as well. The center will be creating reports biannually about its work and its integration with other notable agencies.
The secretary of defense will also investigate whether the DoD can use AI ethically, in both acquired and developed technologies. This will happen within 180 days of the bill's passing, creating a pressing deadline for handling ethics issues surrounding both new technologies and the often-controversial use of military AI.
The DoD will receive a steering committee on emerging technology as well as new hiring guidelines for AI technologists. The defense department will also take on five new AI-driven initiatives designed to improve efficiency at the DoD.
The second massive provision in the bill is a large piece of cybersecurity legislation. The Cyberspace Solarium Commission worked on quite a few pieces of legislation that made it into the bill's final version. The bill creates a White House cyber director position. It also gives the Cybersecurity and Infrastructure Security Agency (CISA) more authority for threat hunting.
It directs the executive branch to conduct "continuity of the economy" planning to protect critical economic infrastructure in the case of cyberattacks. It also establishes a joint cyber planning office at CISA.
The Cybersecurity Maturity Model Certification (CMMC) will fall under the Government Accountability Office, and the government will require regular briefings from the DoD on its progress. CMMC is the government's accreditation body, and this affects anyone in the defense contract supply chain.
Entities outside the Department of Defense will have new initiatives as well. The National AI Initiative hopes to reestablish the United States as a leading authority on and provider of artificial intelligence. The initiative will coordinate research, development, and deployment of new artificial intelligence programs among the DOD as well as civilian agencies.
This coordination should help bring coherence and consistency to research and development. In the past, critics have cited a lack of realistic and workable regulations as a clear reason the United States has fallen behind in AI development.
It will advise future presidents on the state of AI within the country to increase competitiveness and leadership. The country can expect more training initiatives and regular updates about the science itself. It will lead and coordinate strategic partnerships and international collaboration with key allies and provide those opportunities to the US economy.
AI bias is a huge concern among business and US citizens, so the National AI Initiative Advisory Committee will also create a subcommittee on AI and law enforcement. Its findings on data security, and legal standards could affect how businesses handle their own data security in the future.
The National Science Foundation will run awards, competitions, grants, and other incentives to develop trustworthy AI. The country is betting heavily on new initiatives to increase trust among US consumers as AI becomes a more important part of our lives.
NIST will expand its mission to create frameworks and standards for AI adoption. NIST guidelines already offer companies a framework for assessing cybersecurity. The updates will help develop trustworthy AI and chart a path toward AI adoption that consumers will trust and embrace.
As countries scramble for first place in AI readiness, these initiatives hope to fix some key gaps behind the US's lagging authority. The NDAA guidelines reestablish an AI advisor to the president and push education initiatives to create a tech-savvy workforce.
It also helps create guidelines for businesses already frantically adopting AI-driven initiatives, providing critical guidance for cybersecurity and sustainability frameworks. Between training and NIST frameworks, businesses could see a new era of trustworthy and ethical AI: the sort that creates real insights and efficiency while mitigating risk.
Other countries are investing heavily in AI development, so new and expanded provisions will help secure the United States' place as a world leader in AI. Governmental funding and collaboration with civilian researchers and development teams is one way the US can remain truly competitive in new technology. The presence of such a robust body of AI-focused legislation suggests lawmakers are making this a priority.
Artificial Intelligence (AI) research, although far from reaching its pinnacle, is already giving us glimpses of what a future dominated by this technology can look like.
While the rapid progress of the technology should be viewed through a positive lens, it is important to exercise some caution and introduce worldwide regulations for the development and use of AI technology.
The constant research in the field of technology, in addition to giving rise to increasingly powerful applications, is also increasing the accessibility of these applications. It is making it easier for more and more people, as well as organizations, to use and develop these technologies. While the democratization of technology transpiring across the world is a welcome change, the same cannot be said for all technological applications being developed.
The usage of certain technologies should be regulated, or at the very least monitored, to prevent the misuse or abuse of the technology towards harmful ends. For instance, nuclear research and development, despite being highly beneficial to everyone, is highly regulated across the world. That's because nuclear technology, in addition to being useful for constructive purposes like power generation, can also be used for causing destruction in the form of nuclear bombs. To prevent such misuse, international bodies have restricted nuclear research to only those entities that can keep the technology secure and under control. Similarly, the need for regulating AI research and applications is also becoming increasingly obvious. Read on to know why.
AI research, in recent years, has resulted in numerous applications and capabilities that used to be, not long ago, reserved for the realm of futuristic fiction. Today, it is not uncommon to come across machines that can perform specific logical and computational tasks better than humans. They can perform feats such as understanding what we speak or write using natural language processing, detecting illnesses using deep neural networks, and playing games involving logic and intuition better than us. Such applications, if made available to the general public and businesses worldwide, can undoubtedly make a positive impact on the world.
For instance, AI can predict the outcome of different decisions made by businesses and individuals and suggest the optimal course of action in any situation. This will minimize the risks involved in any endeavor and maximize the likelihood of achieving the most desirable outcomes. AI systems can help businesses become more efficient by automating routine tasks and preserve human health and safety by undertaking tasks that involve high stress and hazard. They can also save lives by detecting diseases much earlier than human doctors can diagnose them. Thus, any progress made in the field of AI will result in an improvement in the overall standard of human life. However, it is important to realize that, like any other form of technology, AI is a double-edged sword. AI has a dark side, too. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they run the risk of deviating from desirable behavior and performing tasks in unethical ways.
There have been many instances where AI systems tried to fool their human developers by cheating in the way they performed the tasks they were programmed to do. For example, an AI tasked with generating virtual maps from real aerial images cheated by hiding data from its developers. This happened because the developers used the wrong metric to evaluate the AI's performance, causing the AI to cheat to maximize the target metric. While it'll be a long time before we have sentient AI that can potentially contemplate a coup against humanity, we already have AI systems that can cause a lot of harm by acting in ways not intended by their developers. In short, we are currently at greater risk of AI doing things wrong than of it doing the wrong things.
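The wrong-metric failure mode described above can be sketched in a few lines. This is an illustrative toy example (not the map-generation system itself, and the numbers are hypothetical): a model that games an ill-chosen accuracy metric looks successful while completely failing at the real task.

```python
# Illustrative sketch: optimizing the wrong metric rewards degenerate behavior.
# A fraud-detection dataset where only 5% of cases are positive.
labels = [1] * 5 + [0] * 95

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def recall(preds, labels):
    """Fraction of true positives the model actually catches."""
    hits = sum(p == 1 and l == 1 for p, l in zip(preds, labels))
    return hits / sum(labels)

# A "model" that learned nothing and always predicts the majority class.
lazy_preds = [0] * 100

# Judged by accuracy alone, the lazy model looks excellent...
print(accuracy(lazy_preds, labels))  # 0.95
# ...while it has failed entirely at the task that matters.
print(recall(lazy_preds, labels))    # 0.0
```

If the evaluation only rewards accuracy, the degenerate strategy wins; the developers see a high score while the system quietly ignores the behavior they actually wanted.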
To prevent AI from doing things wrong (or doing the wrong things), it is important for developers to exercise more caution and care while creating these systems. The way the AI community is currently trying to achieve this is by having a generally accepted set of ethics and guidelines surrounding the ethical development and use of AI. Or, in some cases, ethical use of AI is being inspired by the collective activism of individuals in the tech community. For instance, Google recently pledged not to use AI for military applications after its employees openly opposed the notion. While such movements do help in mitigating AI-induced risks and regulating AI development to a certain extent, it is not a given that every group involved in developing AI technology will comply with such activism.
AI research is being performed in every corner of the world, often in silos for competitive reasons. Thus, there is no way to know what goes on in each of these places, let alone stop them from doing anything unethical. Also, while most developers try to create AI systems and test them rigorously to prevent any mishaps, they may compromise on such aspects while focusing on performance and on-time delivery of projects. This may lead to them creating AI systems that are not fully tested for safety and compliance. Even small issues can have devastating ramifications depending on the application. Thus, it is necessary to institutionalize AI ethics into law, which will make regulating AI and its impact easier for governments and international bodies.
Legally regulating AI can ensure that AI safety becomes an inherent part of any future AI development initiative. This means that every new AI, regardless of its simplicity or complexity, will go through a process of development that inherently focuses on minimizing non-compliance and the chances of failure. To ensure AI safety, regulators must consider a few must-have tenets as part of the legislation. These tenets should include:
Any international agency or government body that sets about regulating AI through legislation should consult with experts in the fields of artificial intelligence, ethics and moral sciences, and law and justice. Doing so helps eliminate any political or personal agendas, biases, and misconceptions while framing the rules for regulating AI research and application. And once framed, these regulations should be upheld and enforced strictly. This will ensure that only the applications that comply with the highest safety standards are adopted for mainstream use.
While regulating AI is necessary, it should not be done in a way that stifles the existing momentum in AI research and development. Thus, the challenge will be to strike a balance between allowing developers enough freedom to ensure the continued growth of AI research and bringing in more accountability for the makers of AI. While too much regulation can prove to be the enemy of progress, no regulation at all can lead to the propagation of AI systems that can not only halt progress but potentially lead to destruction and global decline.
Artificial intelligence can deepen social inequality. Here are 5 ways to help prevent this – The Conversation AU
From Google searches and dating sites to detecting credit card fraud, artificial intelligence (AI) keeps finding new ways to creep into our lives. But can we trust the algorithms that drive it?
As humans, we make errors. We can have attention lapses and misinterpret information. Yet when we reassess, we can pick out our errors and correct them.
But when an AI system makes an error, it will be repeated again and again no matter how many times it looks at the same data under the same circumstances.
AI systems are trained using data that inevitably reflect the past. If a training data set contains inherent biases from past human decisions, these biases are codified and amplified by the system.
Or if it contains less data about a particular minority group, predictions for that group will tend to be worse. This is called algorithmic bias.
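A back-of-the-envelope sketch makes the underrepresentation problem concrete: even when two groups behave identically, the statistical uncertainty in any estimate grows as the sample shrinks. The group sizes and repayment rate below are hypothetical.

```python
import math

# Sketch: why sparse data for a minority group yields less reliable predictions.
# Suppose both groups truly repay loans at the same rate p.
p = 0.9
n_majority, n_minority = 1000, 20

def std_error(p, n):
    """Standard error of a proportion estimated from n samples."""
    return math.sqrt(p * (1 - p) / n)

print(std_error(p, n_majority))  # ~0.009: the estimate is tight
print(std_error(p, n_minority))  # ~0.067: roughly 7x noisier
```

With so few minority-group samples, the model's estimate for that group swings widely with the particular individuals who happen to be in the data, which is one mechanism by which its predictions for that group end up worse.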
Gradient Institute has co-authored a paper demonstrating how businesses can identify algorithmic bias in AI systems, and how they can mitigate it.
The work was produced in collaboration with the Australian Human Rights Commission, Consumer Policy Research Centre, CSIROs Data61 and the CHOICE advocacy group.
Algorithmic bias may arise through a lack of suitable training data, or as a result of inappropriate system design or configuration.
For example, a system that helps a bank decide whether or not to grant loans would typically be trained using a large data set of the bank's previous loan decisions (and other relevant data to which the bank has access).
The system can compare a new loan applicant's financial history, employment history and demographic information with corresponding information from previous applicants. From this, it tries to predict whether the new applicant will be able to repay the loan.
But this approach can be problematic. One way in which algorithmic bias could arise in this situation is through unconscious biases from loan managers who made past decisions about mortgage applications.
If customers from minority groups were denied loans unfairly in the past, the AI will consider these groups' general repayment ability to be lower than it really is.
Young people, people of colour, single women, people with disabilities and blue-collar workers are just some examples of groups that may be disadvantaged.
The biased AI system described above poses two key risks for the bank.
First, the bank could miss out on potential clients, by sending victims of bias to its competitors. It could also be held liable under anti-discrimination laws.
If an AI system continually applies inherent bias in its decisions, it becomes easier for government or consumer groups to identify this systematic pattern. This can lead to hefty fines and penalties.
Our paper explores several ways in which algorithmic bias can arise.
It also provides technical guidance on how this bias can be removed, so AI systems produce ethical outcomes which dont discriminate based on characteristics such as race, age, sex or disability.
For our paper, we ran a simulation of a hypothetical electricity retailer using an AI-powered tool to decide how to offer products to customers and on what terms. The simulation was trained on fictional historical data made up of fictional individuals.
Based on our results, we identify five approaches to correcting algorithmic bias. This toolkit can be applied to businesses across a range of sectors to help ensure AI systems are fair and accurate.
1. Get better data
The risk of algorithmic bias can be reduced by obtaining additional data points or new types of information on individuals, especially those who are underrepresented (minorities) or those who may appear inaccurately in existing data.
2. Pre-process the data
This consists of editing a dataset to mask or remove information about attributes associated with protections under anti-discrimination law, such as race or gender.
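As a minimal sketch of this pre-processing step (the field names and the record are hypothetical), masking can be as simple as dropping protected attributes before a record reaches the model:

```python
# Sketch: masking protected attributes before training or scoring.
PROTECTED = {"race", "gender", "age"}

def preprocess(record):
    """Return a copy of the record with protected attributes removed."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {"income": 52000, "gender": "F", "age": 29, "employment_years": 4}
print(preprocess(applicant))  # {'income': 52000, 'employment_years': 4}
```

One caveat worth noting: removing the protected columns alone is rarely sufficient, because remaining fields (a postcode, for instance) can act as proxies that still correlate with the masked attributes.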
3. Increase model complexity
A simpler AI model can be easier to test, monitor and interrogate. But it can also be less accurate and lead to generalisations which favour the majority over minorities.
4. Modify the system
The logic and parameters of an AI system can be proactively adjusted to directly counteract algorithmic bias. For example, this can be done by setting a different decision threshold for a disadvantaged group.
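A minimal sketch of such a threshold adjustment, with hypothetical group names and cut-off values:

```python
# Sketch: counteracting algorithmic bias with a group-specific decision
# threshold. Group labels and threshold values here are hypothetical.
THRESHOLDS = {"default": 0.70, "disadvantaged_group": 0.60}

def approve(score, group="default"):
    """Approve when the model's repayment score clears the group's threshold."""
    return score >= THRESHOLDS.get(group, THRESHOLDS["default"])

# A 0.65 score is rejected under the default threshold...
print(approve(0.65))                         # False
# ...but accepted once the cut-off is adjusted for the disadvantaged group.
print(approve(0.65, "disadvantaged_group"))  # True
```

The design choice here is that the model itself is untouched; only the decision rule applied to its output is tuned, which makes the intervention easy to audit and to roll back.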
5. Change the prediction target
The specific measure chosen to guide an AI system directly influences how it makes decisions across different groups. Finding a fairer measure to use as the prediction target will help reduce algorithmic bias.
In our recommendations to government and businesses wanting to employ AI decision-making, we foremost stress the importance of considering general principles of fairness and human rights when using such technology. And this must be done before a system is in use.
We also recommend systems are rigorously designed and tested to ensure outputs arent tainted by algorithmic bias. Once operational, they should be closely monitored.
Finally, we advise that using AI systems responsibly and ethically extends beyond compliance with the narrow letter of the law. It also requires the system to be aligned with broadly accepted social norms, and to be considerate of its impact on individuals, communities and the environment.
With AI decision-making tools becoming commonplace, we now have an opportunity to not only increase productivity, but to create a more equitable and just society: that is, if we use them carefully.
Written By Employment Screening Resources (ESR)
Government agencies in the United States such as the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), and the Equal Employment Opportunity Commission (EEOC) will increase scrutiny on how Artificial Intelligence (AI) is used in background screening, according to the ESR Top Ten Background Check Trends for 2021 compiled by leading global background check firm Employment Screening Resources (ESR).
In April 2020, the FTC, the nation's primary privacy and data security enforcer, issued guidance to businesses on Using Artificial Intelligence and Algorithms, written by Director of the FTC Bureau of Consumer Protection Andrew Smith, covering the use of AI and Machine Learning (ML) technology and automated decision making with regard to federal laws, including the Fair Credit Reporting Act (FCRA) that regulates background checks.
Headlines tout rapid improvements in artificial intelligence technology. The use of AI technology, machines and algorithms to make predictions, recommendations, or decisions, has enormous potential to improve welfare and productivity. But it also presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities, Director Smith wrote in the FTC guidance.
The good news is that, while the sophistication of AI and machine learning technology is new, automated decision-making is not, and we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers, Smith wrote. In 2016, the FTC issued a report on Big Data: A Tool for Inclusion or Exclusion? In 2018, the FTC held a hearing to explore AI and algorithms.
In July 2020, the Consumer Financial Protection Bureau (CFPB), a government agency that helps businesses comply with federal consumer financial law, published a blog on Providing adverse action notices when using AI/ML models that addressed industry concerns about how the use of AI interacts with the existing regulatory framework. One issue is how complex AI models address the adverse action notice requirements in the FCRA.
FCRA also includes adverse action notice requirements. For example, when adverse action is based in whole or in part on a credit score obtained from a consumer reporting agency (CRA), creditors must disclose key factors that adversely affected the score, the name and contact information of the CRA, and additional content. These notice provisions serve important anti-discrimination, educational, and accuracy purposes, the blog stated.
There may be questions about how institutions can comply with these requirements if the reasons driving an AI decision are based on complex interrelationships. Industry continues to develop tools to accurately explain complex AI decisions ... These developments hold great promise to enhance the explainability of AI and facilitate use of AI for credit underwriting compatible with adverse action notice requirements, the blog concluded.
In December 2020, ten Democratic members of the United States Senate sent a letter requesting clarification from U.S. Equal Employment Opportunity Commission (EEOC) Chair Janet Dhillon regarding the EEOC's authority to investigate bias in AI-driven hiring technologies, according to a press release on the website of U.S. Senator Michael Bennet (D-Colorado), one of the Senators who signed the letter.
While hiring technologies can sometimes reduce the role of individual hiring managers' biases, they can also reproduce and deepen systemic patterns of discrimination reflected in today's workforce data ... Combatting systemic discrimination takes deliberate and proactive work from vendors, employers, and the Commission, Bennet and the other nine Senators wrote in the letter to EEOC Chair Dhillon.
Today, far too little is known about the design, use, and effects of hiring technologies. Job applicants and employers depend on the Commission to conduct robust research and oversight of the industry and provide appropriate guidance. It is essential that these hiring processes advance equity in hiring, rather than erect artificial and discriminatory barriers to employment, the Senators continued in the letter.
Machine learning is based on the idea that machines should be able to learn and adapt through experience, while artificial intelligence refers to the broader idea that machines can execute tasks intelligently, simulating human thinking, capability, and behavior to learn from data without being explicitly programmed, explained Attorney Lester Rosen, founder and chief executive officer (CEO) of ESR.
There have certainly been technological advances, including back-office efficiencies and strides towards better integrations that streamline the employment screening process. However, does that qualify as machine learning or AI? In reality, true machine learning and artificial intelligence, and the role they are likely to play in the future, could fuel a new source of litigation for plaintiffs' class action attorneys, said Rosen.
Proponents of AI argue that it will make processes faster and take bias out of hiring decisions. It is doubtful that civil rights advocates and the EEOC will see it that way. The use of AI for decision making is contrary to one of the most fundamental bedrock principles of employment: that each person should be treated as an individual, and not processed as part of a group or judged based upon data points, Rosen concluded.
Employment Screening Resources (ESR), a leading global background check provider that was ranked the number one screening firm by HRO Today in 2020, offers the award-winning ESR Assured Compliance system, part of The ESRCheck Solution, which provides real-time compliance with automated notices, disclosures, and consents for employers performing background checks. To learn more about ESR, visit http://www.esrcheck.com.
Since 2008, Employment Screening Resources (ESR) has annually selected the ESR Top Ten Background Check Trends that feature emerging and influential trends in the background screening industry. Each of the top background check trends for 2021 will be announced via the ESR News Blog and listed on the ESR background check trends web page at http://www.esrcheck.com/Tools-Resources/ESR-Top-Ten-Background-Check-Trends/.
NOTE: Employment Screening Resources (ESR) does not provide or offer legal services or legal advice of any kind or nature. Any information on this website is for educational purposes only.
© 2021 Employment Screening Resources (ESR). Making copies of or using any part of the ESR News Blog or ESR website for any purpose other than your own personal use is prohibited unless written authorization is first obtained from ESR.