The Prometheus League
Breaking News and Updates
Monthly Archives: September 2021
Urgent action needed over artificial intelligence risks to human rights – UN News
Posted: September 16, 2021 at 5:48 am
Urgent action is needed, as it can take time to assess and address the serious risks this technology poses to human rights, warned the High Commissioner: "The higher the risk for human rights, the stricter the legal requirements for the use of AI technology should be."
Ms. Bachelet also called for AI applications that cannot be used in compliance with international human rights law to be banned. "Artificial intelligence can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people's human rights."
On Tuesday, the UN rights chief expressed concern about the "unprecedented level of surveillance across the globe by state and private actors", which she insisted was "incompatible" with human rights.
She was speaking at a Council of Europe hearing on the implications stemming from July's controversy over Pegasus spyware.
"The Pegasus revelations were no surprise to many people," Ms. Bachelet told the Council of Europe's Committee on Legal Affairs and Human Rights, in reference to the widespread use of spyware commercialized by the NSO Group, which affected thousands of people in 45 countries across four continents.
The High Commissioner's call came as her office, OHCHR, published a report that analyses how AI affects people's right to privacy and other rights, including the rights to health, education, freedom of movement, freedom of peaceful assembly and association, and freedom of expression.
The document includes an assessment of profiling, automated decision-making and other machine-learning technologies.
"The situation is dire," said Tim Engelhardt, Human Rights Officer, Rule of Law and Democracy Section, who was speaking at the launch of the report in Geneva on Wednesday.
The situation "has not improved over the years but has become worse," he said.
While welcoming the European Union's agreement to strengthen the rules on control, and the growth of international voluntary commitments and accountability mechanisms, he warned that "we don't think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price."
OHCHR Director of Thematic Engagement, Peggy Hicks, added to Mr. Engelhardt's warning, stating that "it's not about the risks in future, but the reality today. Without far-reaching shifts, the harms will multiply with scale and speed and we won't know the extent of the problem."
According to the report, States and businesses have often rushed to incorporate AI applications, failing to carry out due diligence. It states that there have been numerous cases of people being treated unjustly due to AI misuse, such as being denied social security benefits because of faulty AI tools or arrested because of flawed facial recognition software.
The document details how AI systems rely on large data sets, with information about individuals collected, shared, merged and analysed in multiple and often opaque ways.
The data used to inform and guide AI systems can be faulty, discriminatory, out of date or irrelevant, it argues, adding that long-term storage of data also poses particular risks, as data could in the future be exploited in as yet unknown ways.
"Given the rapid and continuous growth of AI, filling the immense accountability gap in how data is collected, stored, shared and used is one of the most urgent human rights questions we face," Ms. Bachelet said.
The report also stated that serious questions should be raised about the inferences, predictions and monitoring by AI tools, including seeking insights into patterns of human behaviour.
It found that the biased datasets relied on by AI systems can lead to discriminatory decisions, which are acute risks for already marginalized groups. "This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks," she added.
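One way to make the "systematic assessment and monitoring" the report calls for concrete is to compute a simple fairness metric over an AI system's decisions, such as the demographic parity difference (the gap in positive-decision rates between groups). The sketch below is only an illustration; the function names and the data are invented for the example, not drawn from the report.

```python
# Hypothetical illustration of a basic bias-monitoring metric.

def positive_rate(decisions):
    """Fraction of decisions that are positive (e.g. a benefit granted)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in positive-decision rates between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Invented example: automated benefit decisions (1 = granted) for two groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% granted
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% granted
}
gap = demographic_parity_difference(decisions)
print(f"demographic parity difference: {gap:.2f}")  # 0.50
```

A large gap does not by itself prove discrimination, but tracking metrics like this over time is the kind of ongoing monitoring the report argues for.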
An increasingly go-to solution for States, international organizations and technology companies is biometric technology, which the report states is an area where more human rights guidance is urgently needed.
These technologies, which include facial recognition, are increasingly used to identify people in real-time and from a distance, potentially allowing unlimited tracking of individuals.
The report reiterates calls for a moratorium on their use in public spaces, at least until authorities can demonstrate that there are no significant issues with accuracy or discriminatory impacts and that these AI systems comply with robust privacy and data protection standards.
The document also highlights a need for much greater transparency by companies and States in how they are developing and using AI.
The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society, the report says.
"We cannot afford to continue playing catch-up regarding AI, allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact."
"The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us," Ms. Bachelet stressed.
Elon is Right, AI is Hard: Five Pitfalls to Avoid in Artificial Intelligence | eWEEK
During the recent Tesla AI Day event, Elon Musk said he discourages machine learning because it is really difficult: "Unless you have to use machine learning, don't do it."
Well, Musk may be right in his assessment, because machine learning is quite difficult to implement. Most companies desire the benefits of what artificial intelligence can achieve for their business, but most don't have what it takes to get it up and running. As a result, as many as 85% of ML projects currently fail.
The takeaway from Musk's startling statement is that organizations can't treat AI, of which machine learning is a subset, like a part-time project. Many businesses are making some important mistakes when trying to do AI. But it doesn't have to be this way. Below are five data points from Bin Zhao, Ph.D., Lead Data Scientist at Datatron, showing some common mistakes of AI implementation.
Don't treat AI/ML development like traditional software development. Developing AI/ML models is a much different process than software development, but many organizations try to apply the traditional software development lifecycle to manage AI/ML models.
The machine learning development lifecycle (MLLC) takes much more time because of additional factors, including translating AI algorithms into compatible software code, unique infrastructure requirements, the need for frequent model iterations, and more. Compared to traditional software development, it can take more than five times as long. This means today's typical application release processes are simply not applicable.
This type of tooling mistake introduces unnecessary delays and inefficiencies. In most IT situations, organizations can control the types of servers they buy, the software tools they use, the dependencies they build with and so on.
Not so with AI/ML; organizations must allow their data scientists to use their preferred tools based on what they think will get the job done in the best way. Otherwise, they're likely to see all their data scientists leave.
DevOps is the union of software development and operations with the goals of reducing solution delivery time and sustaining a good user experience through automation (e.g. CI/CD and monitoring). But DevOps experts don't know the nuances of working with ML models.
MLOps is a new term that expresses how to apply DevOps rules to automate the building, testing and deployment of ML systems. The goal of MLOps is to unite ML application development and the operation of ML applications, making it easier for groups to deploy finer models more often.
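A minimal sketch of the MLOps idea described above is an automated gate that only promotes a model to deployment if it passes a quality check, the ML analogue of a CI/CD test stage. Everything here (the toy model, the data, the threshold) is invented for illustration, not a real pipeline.

```python
# Hypothetical MLOps-style deployment gate: evaluate a candidate model and
# deploy only if it clears an agreed quality bar.

def evaluate(model, test_data):
    """Accuracy of `model` (a callable) on (features, label) pairs."""
    correct = sum(1 for features, label in test_data if model(features) == label)
    return correct / len(test_data)

def deployment_gate(model, test_data, min_accuracy=0.8):
    """Return True (deploy) only if the candidate model clears the bar."""
    accuracy = evaluate(model, test_data)
    print(f"candidate accuracy: {accuracy:.2f} (required: {min_accuracy:.2f})")
    return accuracy >= min_accuracy

# Toy "model": predicts 1 when the single feature is positive.
candidate = lambda x: 1 if x > 0 else 0
test_data = [(2, 1), (-1, 0), (3, 1), (-2, 0), (1, 0)]  # last case is a miss

if deployment_gate(candidate, test_data):
    print("deploy")
else:
    print("block deployment and iterate")
```

In a real MLOps pipeline this check would run automatically on every retrained model, alongside data-drift and fairness checks, before anything reaches production.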
Data scientists need the right raw data for modeling, and they excel in uncovering data to build the best models to solve business challenges. However, that does not mean they are experts in all the intricacies of deploying models to work with existing applications and infrastructure. This causes friction between them and the engineering team and business leaders, resulting in low job satisfaction for data scientists.
Though highly skilled and trained, they must rely on others for deployment and production, which also means that they can't iterate rapidly. And since the projects shift to the engineering team, who don't have the ML skill set, it's easy for them to miss details, especially if the model is not making accurate predictions.
Academic AI research has historically focused on developing models and algorithms. Limited efforts have been devoted towards iterating and improving data sets for a specific business problem, operationalizing a machine learning model or monitoring models in production.
Building and deploying a machine learning model for solving a real world problem is much more than developing the algorithm itself.
Operationalizing ML models is hard but not impossible. Using a new model development life cycle will streamline the process of model development and model production. It does this by helping data scientists, engineering and other involved teams make effective decisions in a timely manner. It will also help teams to reduce production risks. A successful model governance tool can also help by standardizing processes, simplifying governance and significantly reducing risks.
About the Author:
Bin Zhao, Ph.D., Lead Data Scientist at Datatron
New institute aims to unlock the secrets of corn using artificial intelligence – Agri-Pulse
Iowa State University researchers are growing two kinds of corn plants.
If you drive past the many fields near the university's campus in Ames, you can see row after row of the first. But the second exists in a location that hasn't been completely explored yet: cyberspace.
The researchers, part of the AI Institute for Resilient Agriculture, are using photos, sensor data and artificial intelligence to create digital twins of corn plants that, through analysis, can lead to a better understanding of their real-life counterparts. They hope the resulting software and techniques will lead to better management, improved breeding, and ultimately, smarter crops.
"We need to use lots of real-time, high-resolution data to make decisions," Patrick Schnable, an agronomy professor and director of Iowa State's Plant Sciences Institute, told Agri-Pulse. "Just collecting data for data's sake is not something that production ag wants. But data which is then linked to statistical models or other kinds of mathematical models that advise farmers on what to do has a lot of value."
The idea of machine learning systems that can improve or take over typical human tasks has been seeing increased attention over the past couple of years in many industries, including agriculture. In 2019, the National Science Foundation and several partner agencies, including the USDA, began establishing and funding AI institutes to research and advance artificial intelligence in fields like agriculture.
In their call for proposals, the organizations said AI could spur the next revolution in food and feed production.
"The Green Revolution of the 1960s greatly enhanced food production and resulted in positive impacts on food security, human health, employment, and overall quality of life for many," the solicitation said. "There were also unintended consequences on natural resource use, water and soil quality, and pest population expansion. An AI-based approach to agriculture can go much further by addressing whole food systems, inputs and outputs, internal and external consequences, and issues and challenges at micro, meso, and macro scales that include meeting policy requirements of ecosystem health."
Among the seven inaugural institutes established in 2020 were two focusing on agriculture: the AI Institute for Future Agricultural Resilience, Management and Sustainability at the University of Illinois at Urbana-Champaign, and the AI Institute for Next Generation Food Systems at the University of California, Davis. The 2021 lineup includes the AIIRA and the Institute for Agricultural AI for Transformation Workforce and Decision Support (AgAID) at Washington State University.
Lakshmi Attigala, a senior scientist and lab manager at Iowa State University, prepares a corn plant to be photographed.
The AIIRA, which received $20 million in funding from these governmental organizations, plans to pool the expertise of researchers at Iowa State, Carnegie Mellon University, the University of Arizona, New York University, George Mason University, the Iowa Soybean Association, the University of Nebraska-Lincoln and the University of Missouri to study the intersection of plant science, agronomics and AI.
The institute hopes to develop AI algorithms that can take all of the collectible data from a field whether by ground robots, drones, or satellites and analyze it to create tools farmers can use to improve production of crops for resilience to the pressures brought about by climate change.
"This is a game-changer," Baskar Ganapathysubramanian, the director of the institute, told Agri-Pulse as he walked toward a nondescript white shed tucked between crop fields on the Iowa State University campus.
"Scouting is based on the visual," he said. "By using multimodal things, you can actually go beyond the visual and do early detection and early mitigation. That's not only sustainable, because you're going to use less of the chemicals needed, but also amazingly profitable."
Ganapathysubramanian opened the door to reveal a flurry of activity. Directly inside, genetics graduate student Yawei Li held a protractor up to a corn plant in various positions, trying to measure the angles of its leaves.
Across the room, Lakshmi Attigala, a senior scientist and lab manager, grabbed a fully headed corn plant from a gray tote and walked it over to the lab's makeshift photography studio, where a sheet of blue cloth hanging from the ceiling served as a backdrop.
She placed the corn plant in a small, rotating green vase ringed by light stands and adjusted its leaves, preparing it for a photo shoot. She gave it a unique number, 21-3N3125-1, which was printed on a piece of paper she attached to the front of it.
As the vase rotated, she used two cameras, one hanging from the ceiling and the other sitting atop a tripod in front of the corn plant, to take shots of the plant.
On the north side of the building, two researchers, senior staff member Zaki Jubery and graduate student Koushik Nagasubramanian, placed eight more corn plants in a ring surrounding a terrestrial laser scanner. The scanner emits laser pulses to capture point clouds, determining the exact dimensions of the plants from the points where the lasers bounce off them.
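The dimension-from-point-cloud idea can be sketched very simply: once a scan yields a set of (x, y, z) points that bounced off a plant, its overall size can be estimated from the axis-aligned bounding box of those points. The point cloud below is invented for the sketch; a real scan would contain millions of points and need noise filtering first.

```python
# Hypothetical sketch: estimating a plant's dimensions from a laser-scanned
# point cloud via its axis-aligned bounding box.

def plant_dimensions(points):
    """Width, depth, and height (max - min per axis) of a point cloud."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Toy point cloud in metres: a plant about 0.4 m wide and 1.8 m tall.
cloud = [
    (0.00, 0.05, 0.0),   # base of the stalk
    (0.02, 0.00, 0.9),   # mid-stalk
    (-0.20, 0.10, 1.2),  # leaf tip, left
    (0.20, -0.05, 1.4),  # leaf tip, right
    (0.01, 0.02, 1.8),   # tassel
]
w, d, h = plant_dimensions(cloud)
print(f"width {w:.2f} m, depth {d:.2f} m, height {h:.2f} m")
```

Measurements like these, repeated across many scans, are the kind of structured features a digital-twin model can be trained on.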
All three of these activities, while happening separately and in different parts of the room, feed data from the 80 corn plants scanned that day into a machine learning program that can study their features to learn what the plants look like. If cameras, lasers and sensors can collect enough data on corn plants, the software, when fully developed, should be able to create near-identical models of them.
"The idea is that we perfect something from here and then we do that on a higher scale in the field," said Nagasubramanian. "That's a more complicated thing if you have plants in the background and you have changing light intensities and clouds."
The institute, which collaborates with the Genomes to Fields Initiative to phenotype corn hybrid varieties across 162 environments in North America, also monitors a corn field lined with cameras mounted on poles. The solar-powered cameras sit above the corn plants and take photos every 15 minutes to watch each one develop over time.
The resulting data can be fed to AI programs to get a better understanding of how these plants grow and what genetic traits they share.
"Certainly it is going to help us understand, for example with the photography, what is the genetic control of leaf angle. And then that would allow us to develop varieties with different leaf angles more readily," Schnable said.
Schnable said it's too soon for the developing technology to be widely deployed in fields or used for breeding purposes, and that for now, the research funding is limited. But he believes private companies will use AI technology to develop their own products.
"These things do have significant impacts out there in the world," he said.
NHS Artificial Intelligence provider reports 160% growth, promising to transform healthcare with better data – PR Newswire UK
RwHealth's Data Science Platform combines artificial intelligence (AI) and machine learning to give healthcare providers in-depth data to make better decisions and improve patient outcomes. By using RwHealth's analytical capabilities to make predictions, model treatment options, improve safety and increase efficiency, clinicians can deliver better, more widespread care.
One important capability driving the company's significant growth is its ability to anticipate hospital patient flows. Being able to model patient numbers has been vital during Covid, and the RwHealth platform has helped UK hospitals to anticipate demand, combat bed shortages and tackle worsening waiting list issues.
RwHealth's success mirrors the wider growth of AI in healthcare, as stakeholders across the health ecosystem find new ways to increase efficiency, save money and deliver optimised clinical outcomes. In September 2020, NHSX, the organisation driving digital transformation in health and social care, announced a £250m investment into AI in UK healthcare.
Orlando Agrippa, CEO and Founder of RwHealth, said: "We've grown at an extraordinary rate, as healthcare providers realise how AI can improve patient outcomes, while helping to ease the wider pressures that the healthcare industry faces. It's important to tackle backlogs and bed capacity issues so that healthcare remains safe and steady as we attempt to recover post-Covid."
RwHealth client, Chief Medical Officer (interim) and Responsible Officer Andrew McLaren, adds: "RwHealth's platform enables us to solve bottlenecks before they become a problem. Faster treatment leads to better outcomes, so every moment the solution helps us save, no matter how small, has a tangible impact on patient care."
The protection of patient data is at the heart of the RwHealth proposition. While its primary customers are NHS Trusts and private healthcare organisations, the company does not hold any private patient information, nor any personally identifiable hospital data. With a dedicated Data Protection Officer, RwHealth uses suitable safeguards to protect all information from unauthorised access.
Today, RwHealth works with more than 70 UK and international providers, its AI technology having processed data on more than 10m UK patients and 5.5 million across the Middle East and Australia. Founded in 2017, RwHealth is headquartered in London's Canary Wharf.
SOURCE RwHealth
US must not only lead in artificial intelligence, but also in its ethical application – The Hill
Artificial intelligence (AI) is sometimes referred to as a herald of the fourth industrial revolution. That revolution is already here. Whenever you say "Hey Siri" or glance at your phone in order to unlock it, you're using AI. Its current and potential applications are numerous, including medical diagnosis and predictive technologies that enhance user interactions.
As chairwoman of the U.S. House Committee on Science, Space, and Technology, I am particularly interested in the potential for AI to accelerate innovation and discovery across the science and engineering disciplines. Just last year, DeepMind announced that its AI system AlphaFold had solved a protein-folding challenge that had stumped biologists for half a century. It is clear that not only will AI technologies be integral to improving the lives of Americans, but they will also help determine America's standing in the world in the decades to come.
However, the vision of AI's role in humanity's future isn't all rosy. Increasingly autonomous devices and growing amounts of data will exacerbate traditional concerns, such as privacy and cybersecurity. Other potential dangers of AI have also arrived, appearing as patterns of algorithmic bias that often reflect our society's systemic racial and gender-based biases. We have seen discriminatory outcomes in AI systems that predict credit scores, health care risks, and recruitment potential. These are domains where we must mitigate the risk of bias in our decision-making, and in the tools we use to augment that decision-making.
Technological progress does not have to come at the expense of safety, security, fairness, or transparency. In fact, embedding our values into technological development is central to our economic competitiveness and national security. Our federal government has the responsibility to work with private industry to ensure that we are able to maximize the benefits of AI technology for society while simultaneously managing its emerging risks.
To this end, the Science Committee has engaged in efforts to promote trustworthy AI. Last year, one of our signature achievements was passing the bipartisan National Artificial Intelligence Initiative Act, which directs the Department of Commerce's National Institute of Standards and Technology (NIST) to develop a process for managing AI risks.
NIST may not be the most well-known government institution, but it has long conducted critical work on standard-setting and measurement research that is used by federal agencies and private industry. Over the past year, NIST has conducted a series of workshops examining topics like AI trustworthiness, bias, explainability, and evaluation. These workshops are geared at helping industry professionals understand how to detect, catalogue, and ultimately prevent the harmful outcomes that erode public trust in AI technology.
Most recently, NIST has been working to construct a voluntary Risk Management Framework that is intended to support the development and deployment of safe and trustworthy AI. This framework will be important for informing the work of both public and private sector AI researchers as they pursue their game-changing research. NIST is soliciting public comments until Sept. 15, 2021, and will develop the framework in several iterations, allowing for continued input. Interested stakeholders should submit comments and/or participate in the ongoing processes at NIST.
We know that AI has the potential to benefit society and make the world a better place. In order for the U.S. to be a true global leader in this technology, we have to ensure that the AI we create does just that.
Eddie Bernice Johnson represents the 30th District of Texas and is chairwoman of the House Committee on Science, Space, and Technology.
Who Are Antifa, and Are They a Threat? | Center for …
June 4, 2020
In response to the death of George Floyd, an unarmed African American who died after his neck was pinned under a police officer's knee for nearly nine minutes in May 2020, protests erupted in over 140 U.S. cities. While the vast majority of protesters were peaceful, some violence and pillaging occurred. In New York City, for example, looters tore off the plywood that covered Macy's iconic store in Herald Square on 34th Street, smashed windows, and stole whatever items they could grab before police chased them away. Others ransacked a nearby Nike store after shattering windows and walking off with armloads of athletic shirts, jeans, jackets, and sweatpants. In other cities, from Raleigh, North Carolina, to San Francisco, California, a small minority of individuals burned cars, attacked police officers, and looted businesses. In response, some U.S. officials fingered, without evidence, Antifa as the main culprits. On May 31, President Trump tweeted that he intended to designate Antifa as a terrorist organization. Attorney General William Barr similarly remarked that "the violence instigated and carried out by Antifa and other similar groups in connection with the rioting is domestic terrorism and will be treated accordingly."
Q1: Who are Antifa?
A1: "Antifa" is a contraction of the phrase "anti-fascist." It refers to a decentralized network of far-left militants that oppose what they believe are fascist, racist, or otherwise right-wing extremists. While some consider Antifa a sub-set of anarchists, adherents frequently blend anarchist and communist views. One of the most common symbols used by Antifa combines the red flag of the 1917 Russian Revolution and the black flag of 19th-century anarchists. Antifa groups frequently conduct counter-protests to disrupt far-right gatherings and rallies. They often organize in "black blocs" (ad hoc gatherings of individuals that wear black clothing, ski masks, scarves, sunglasses, and other material to conceal their faces), use improvised explosive devices and other homemade weapons, and resort to vandalism. In addition, Antifa members organize their activities through social media, encrypted peer-to-peer networks, and encrypted messaging services such as Signal.
Antifa groups have been increasingly active in protests and rallies over the past few years, especially ones that include far-right participants. In June 2016, for example, Antifa and other protestors confronted a neo-Nazi rally in Sacramento, California, with at least five people stabbed. In February, March, and April 2017, Antifa members attacked alt-right demonstrators at the University of California, Berkeley using bricks, pipes, hammers, and homemade incendiary devices. In July 2019, William Van Spronsen, a self-proclaimed Antifa member, attempted to bomb the U.S. Immigration and Customs Enforcement detention facility in Tacoma, Washington, using a propane tank but was killed by police.
Like some other types of domestic extremists in the United States, Antifa follow a decentralized organizational structure. In an influential article in the 1992 edition of the magazine Seditionist, anti-government activist Louis R. Beam advocated an organizational structure that he termed "leaderless resistance." As Beam noted, "Utilizing the Leaderless Resistance concept, all individuals and groups operate independently of each other, and never report to a central headquarters or single leader for direction or instruction, as would those who belong to a typical pyramid organization." Beam argued that the tactic was just as useful for left-wing as it was for right-wing extremists. "The New American Patriot," he wrote several years later, "will be neither left nor right, just a freeman fighting for liberty." Leaderless resistance became a useful model for many types of extremists, including far-left networks like Antifa.
Q2: What role have Antifa groups played in the protests?
A2: While it is difficult to assess with fidelity the identity or ideology of many of the looters, my conversations with law enforcement and intelligence officials in multiple U.S. cities suggest that Antifa played a minor role in violence. The vast majority of looting appeared to come from local opportunists with no affiliation and no political objectives. Most were common criminals.
Still, there was some evidence of organized activity by left-wing and right-wing extremists, including from individuals who traveled from other states. John Miller, the deputy commissioner of intelligence and counterterrorism at the New York Police Department, warned that a small, fringe network of extremists organized violence in New York City. "Before the protests began, organizers of certain anarchist groups set out to raise bail money and people who would be responsible to be raising bail money; they set out to recruit medics and medical teams with gear to deploy in anticipation of violent interactions with police," he said, based on intelligence collected by New York's Joint Terrorism Task Force. "They prepared to commit property damage and directed people who were following them that this should be done selectively and only in wealthier areas or at high-end stores run by corporate entities." There were also multiple reports of white supremacists infiltrating peaceful protests in cities like Boston, Denver, Tampa, and Dallas.
To add to the confusion, there was significant disinformation and a proliferation of fake accounts on social media platforms. For example, Twitter shut down several accounts that it said were operated by a white supremacist group called Identity Evropa, which was posing as Antifa. In one fake account with the Twitter handle @Antifa_US, Identity Evropa members allegedly called for violence in white suburban areas in the name of Black Lives Matter. "Tonight's the night, Comrades," one tweet noted with a brown raised fist emoji. "Tonight we say F--- The City and we move into the residential areas... the white hoods.... and we take what's ours." As Twitter explained, "This account violated our platform manipulation and spam policy, specifically the creation of fake accounts. We took action after the account sent a Tweet inciting violence and broke the Twitter Rules." More broadly, extremists flooded social media with disinformation, conspiracy theories, and incitements to violence, swamping Twitter, YouTube, Facebook, and other platforms.
Q3: What is the broader threat from Antifa and other types of extremists?
A3: The threat from Antifa and other far-left networks is relatively small in the United States. The far-left includes a decentralized mix of actors. Anarchists, for example, are fundamentally opposed to the government and capitalism, and they have organized plots and attacks against government, capitalist, and globalization targets. Environmental and animal rights groups, such as the Earth Liberation Front and Animal Liberation Front, have conducted small-scale attacks against businesses they perceive as exploiting the environment. Antifa followers have been responsible for only a tiny number of plots and attacks.
As with virtually every domestic extremist group in the United States, including such white supremacist organizations as the Base and the Atomwaffen Division, the U.S. government has not designated Antifa as a terrorist organization. Instead, the U.S. government has generally designated only international terrorist groups, such as al-Qaeda and the Islamic State. In April 2020, the Trump administration designated the Russian Imperial Movement, an ultra-nationalist white supremacist group based in Russia, as a terrorist organization. The designation allowed the U.S. Treasury Department's Office of Foreign Assets Control to block any U.S. property or assets belonging to the Russian Imperial Movement. It also barred Americans from financial dealings with the organization and made it easier to ban its members from traveling to the United States. While President Trump raised the possibility of designating Antifa as a terrorist organization, such a move would be problematic. It would trigger serious First Amendment challenges and raise numerous questions about what criteria should be used to designate far-right, far-left, and other extremist groups in the United States. In addition, Antifa is not a group per se, but rather a decentralized network of individuals. Consequently, it is unlikely that designating Antifa as a terrorist organization would have much of an impact.
Based on a CSIS data set of 893 terrorist incidents in the United States between January 1994 and May 2020, attacks from left-wing perpetrators like Antifa made up a tiny percentage of overall terrorist attacks and casualties. Right-wing terrorists perpetrated the majority (57 percent) of all attacks and plots during this period, particularly those who were white supremacists, anti-government extremists, and involuntary celibates (or incels). In comparison, left-wing extremists orchestrated 25 percent of the incidents during this period, followed by 15 percent from religious terrorists, 3 percent from ethno-nationalists, and 0.7 percent from terrorists with other motives. In analyzing fatalities from terrorist attacks, religious terrorism has killed the largest number of individuals (3,086 people), primarily due to the attacks on September 11, 2001, which caused 2,977 deaths. In comparison, right-wing terrorist attacks caused 335 fatalities, left-wing attacks caused 22 deaths, and ethno-nationalist terrorists caused 5 deaths.
Viewed in this context, the threat from Antifa-associated actors in the United States is relatively small.
Seth G. Jones holds the Harold Brown Chair and is director of the Transnational Threats Project at the Center for Strategic and International Studies in Washington, D.C. He is the author, most recently, of A Covert Action (W.W. Norton, 2019).
Critical Questions is produced by the Center for Strategic and International Studies (CSIS), a private, tax-exempt institution focusing on international public policy issues. Its research is nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all views, positions, and conclusions expressed in this publication should be understood to be solely those of the author(s).
© 2020 by the Center for Strategic and International Studies. All rights reserved.
Industry Voices: Not all automation is created equally for clinical documentation improvement – FierceHealthcare
Posted: at 5:48 am
Healthcare system survival pivots on many metrics, but the ability to generate revenue and to demonstrate high quality of care are two of the most essential.
At the center of both metrics is the clinical documentation process, where an accurate representation of every patient's clinical experience while in a provider's care must be recorded.
As simple as it may sound, achieving that accurate reflection of diagnoses, interventions and the clinical picture is anything but simple. Medicine is as much science as it is art, and complex definitions, required levels of specificity and dense medical terminology mean that most hospitals struggle to document everything properly, leading to significant lost revenue and under-reporting on quality metrics.
Health systems have answered this challenge by standing up clinical documentation integrity (CDI) programs, staffed with clinicians. As more healthcare revenue is tied to achieving specific quality metrics, the role of CDI has become even more critical.
However, ensuring the integrity and completeness of documentation would require health systems to staff CDI teams with an enormous number of highly trained clinicians to review and correct documentation on every record, every day. The cost and complexity of such an operation would be prohibitive, and no health system has the resources to employ that many people, or even a way to find a supply of such highly specialized staff.
As a result, many health systems are turning to software to support CDI with technology that scales clinical staff abilities and provides intelligent automation. Unfortunately, the challenge that many have run into is how to identify the right technology for their operation.
All the work CDI specialists perform requires clinical knowledge: the sort of knowledge that is gained only after decades of academic study and real-world experience. Automating that work means the technology must mirror the same level of clinical thinking that any one of these specialists employs every day.
The challenge is immense. Emulating clinical thinking with software is among the loftiest goals of artificial intelligence in healthcare and requires the most sophisticated, cutting-edge technologies available, not to mention years of training. Even with the most advanced technology, AI has sometimes failed to impress the critics, as we've seen multiple reports call out the stumbles of more ambitious (but similarly conceptualized) efforts like IBM Watson.
But, while there are still areas for improvement, the truth is that AI is making a significant impact across the healthcare landscape, and especially within CDI, where its success is well documented.
While CDI is an excellent and proven use case for AI in healthcare, providers should understand that not all AI is the same. In fact, many legacy systems that deploy the wrong type of AI to CDI are unable to see all the gains possible with the correct deployment.
The key to leveraging AI in CDI is to use technology that can truly emulate the way clinicians think. It must read, digest, understand, and make statistical predictions on the entirety of the clinical record, much as physicians look at all the evidence to assess and diagnose so they can appropriately care for patients.
That's where machine learning holds the key. Machine learning is, at its heart, a pattern-recognition engine that can digest a plethora of individual pieces of data, recognize patterns, and then use those patterns to make statistical predictions. Properly applied to clinical information, it is a very powerful technology. Fed over time with millions of patient encounters, machine learning begins to emulate the way clinicians think, automating numerous tasks or challenges that otherwise could only be solved by a human. While it does not replace clinicians, it does reduce clinical staff burden, freeing more time for patient care.
Additionally, by automatically reviewing every patient record in real time every day, cases can be prioritized so a CDI specialist knows which to look at, rather than wasting time on those with no documentation irregularities. This type of machine learning interprets the clinical evidence, compares it to the existing documentation, and automatically highlights and prioritizes the cases with discrepancies.
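As a concrete illustration of that prioritization idea, the sketch below scores hypothetical cases by how much clinical evidence points to conditions missing from the documentation, then sorts the worklist so the likeliest gaps surface first. The signal names, conditions, and weights are all invented for illustration; in a real system the weights would come from a model trained on millions of encounters, not a hand-written table.

```python
# Toy sketch of ML-style case prioritization for CDI review.
# All signal names, conditions, and weights below are made up;
# they stand in for what a trained model would learn.

EVIDENCE_WEIGHTS = {
    # (clinical signal, condition it suggests): learned "strength"
    ("low_hemoglobin", "acute blood loss anemia"): 0.8,
    ("elevated_creatinine", "acute kidney injury"): 0.7,
    ("elevated_lactate", "sepsis"): 0.9,
}

def discrepancy_score(case):
    """Sum the weight of every clinical signal whose suggested
    condition is absent from the documented diagnoses."""
    documented = set(case["documented"])
    score = 0.0
    for (signal, condition), weight in EVIDENCE_WEIGHTS.items():
        if signal in case["signals"] and condition not in documented:
            score += weight
    return score

def prioritize(cases):
    """Order cases so the likeliest documentation gaps come first
    in the CDI specialist's worklist."""
    return sorted(cases, key=discrepancy_score, reverse=True)

cases = [
    {"id": "A", "signals": ["elevated_lactate", "low_hemoglobin"],
     "documented": ["pneumonia"]},
    {"id": "B", "signals": ["elevated_creatinine"],
     "documented": ["acute kidney injury"]},  # already documented
]
ranked = prioritize(cases)
print([c["id"] for c in ranked])  # case A outranks case B
```

Case B scores zero because its evidence is already reflected in the documentation, which is exactly the kind of record a specialist should not spend time on.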
Many legacy applications attempt to use another AI technology, natural language processing (NLP), to automate complex clinical tasks. While NLP has some useful applications for tasks like clinical narration, where its dictionary-like look-up function can suggest a better or more accurate word, NLP is only a partial solution for CDI.
For example, NLP can translate the clinician's narrative documentation into text understood by a computer. However, unless it is paired with a machine learning solution that simultaneously reads and emulates clinical decision-making (thus enabling a comparison between what was written and what the clinical evidence says), it is an inadequate solution to the core challenges in CDI.
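To make that distinction concrete, here is a toy sketch (with an invented vocabulary and note text) of why extraction alone falls short: a crude stand-in for NLP pulls condition mentions out of the narrative, but only the comparison against evidence-supported conditions surfaces the documentation gap.

```python
# Sketch of why NLP extraction is only half the job: the extracted
# terms must still be compared against what the clinical evidence
# supports. The vocabulary and note text here are invented.

import re

VOCAB = {"pneumonia", "sepsis", "acute kidney injury"}

def extract_documented(note_text):
    """Crude stand-in for NLP: find known condition names in the note."""
    text = note_text.lower()
    return {term for term in VOCAB if re.search(re.escape(term), text)}

def undocumented_conditions(note_text, evidence_supported):
    """The CDI-relevant step: conditions the evidence supports but
    the narrative never mentions."""
    return set(evidence_supported) - extract_documented(note_text)

note = "Patient admitted with pneumonia; started on antibiotics."
gaps = undocumented_conditions(note, {"pneumonia", "sepsis"})
print(gaps)  # {'sepsis'}
```

Extraction alone would report that the note mentions pneumonia and stop there; only the second step reveals that the evidence-supported sepsis was never documented.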
Additionally, rules-based solutions that use fixed rules or markers to automate clinical tasks fail entirely to emulate the way clinicians think. As a result, they cannot capture the many permutations in how clinical conditions present.
Robotic process automation (RPA) is another healthcare buzzword, often cited as a tool for handling repeatable basic tasks. However, within the mid-revenue cycle (and thus CDI), nearly all tasks have a clinical element and require clinical understanding to complete. That means RPA is, by definition, not suited for more complex tasks that require higher-level thinking.
Instead, intelligent process automation (IPA) is the right fit: IPA applies machine learning to RPA to automate complex tasks that require human judgment (much like the work of CDI). Thus, to apply IPA in the revenue cycle, machine learning is not only critical; it is also the only technology available today that specifically emulates clinical thinking and judgment.
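A minimal sketch of that IPA pattern follows, with an invented confidence field standing in for a trained model's output: a repeatable RPA-style step completes automatically only when the model is confident, and everything else is escalated for human clinical judgment.

```python
# Minimal sketch of the IPA idea: wrap a repeatable workflow step
# (RPA-style) with a learned confidence score that decides whether
# to auto-complete it or escalate to a clinician. The confidence
# value is a placeholder for a real trained model's prediction.

def model_confidence(case):
    # Placeholder: a trained model would emit this probability.
    return case["predicted_confidence"]

def route(case, threshold=0.9):
    """Auto-complete routine cases; escalate anything the model
    is unsure about so a human applies clinical judgment."""
    if model_confidence(case) >= threshold:
        return "auto-complete"
    return "escalate-to-clinician"

print(route({"id": 1, "predicted_confidence": 0.97}))  # auto-complete
print(route({"id": 2, "predicted_confidence": 0.55}))  # escalate-to-clinician
```

The threshold is the design lever: lowering it automates more work but pushes more borderline judgments onto the machine, which is why the ML component, not the RPA wrapper, carries the clinical burden.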
As technology gets better at emulating a clinician's mind, increasingly powerful AI engines will soon be able to capture documentation and coding instantaneously. By accurately documenting clinical conditions directly into EMRs and identifying the final code set automatically, the process will become even more efficient, with fewer translation errors.
Ultimately, that means smaller teams will be able to support the entire documentation process, which reduces costs for providers and stress on clinicians.
There is no doubt that managing a health system has become increasingly complex, and that's especially true for CDI teams that must capture data accurately and efficiently. However, AI has become a critical tool that is truly making an impact in the mid-revenue cycle, and there is much more innovation to come in the next few years. But while we wait for that larger revolution, it's important that health systems implement a stable and efficient CDI program now, powered by the right technology.
William Chan is the co-founder and CEO of Iodine Software.
Olympia residents shaken by violence between groups allied with Antifa and Proud Boys – KING5.com
Olympia business owners and residents are concerned about the violence that took place last Saturday, and they're worried it could happen again.
OLYMPIA, Wash. – Drew Langer was shocked at the scene outside his window while working at Schwartz's Café in Olympia last Saturday.
"We saw a group of maybe 30, 40 people in military gear and various weapons and American flags and camo kind of stuff," Langer recalled.
As the group made its way down Washington Street, Langer said he was worried he would cross paths with them.
"My first thought was, 'I hope they don't come in,'" he said. "They ended up turning the corner, so that was kind of a relief."
Police released this video showing two groups, one allied with Antifa, the other with the Proud Boys, at the bus station on State Avenue.
Police say the Proud Boys were pursuing members of the group allied with Antifa, before clashing at the bus station.
The clash left multiple people assaulted, and police confirmed that shots were fired by someone from the Antifa-allied group, with a bullet striking a member of the Proud Boys.
While police investigate, Olympians are trying to make sense of the violence brought into their city. City residents are also worried this might not be the end.
"Knowing that they were hunting members of our community by name feels really unsafe and concerning. It doesn't feel good for the people that actually live here," said Alden Davis, owner of Underhill Plants.
"I've had to change my schedule so I don't have employees who feel unsafe scheduled when they know that they're supposed to be coming back," Davis said.
Olympia police encourage anyone with information on Saturday's shooting to contact Crime Stoppers at 1-800-222-8477.
Artificial Intelligence: A New Portal to Promote Global Cooperation Launched with 8 International Organisations – Council of Europe
On 14 September 2021, eight international organisations joined forces to launch a new portal promoting global co-operation on artificial intelligence (AI). The portal is a one-stop shop for data, research findings and good practices in AI policy.
The objective of the portal is to help policymakers and the wider public navigate the international AI governance landscape. It provides access to the necessary tools and information, such as projects, research and reports to promote trustworthy and responsible AI that is aligned with human rights at global, national and local level.
Key partners in this joint effort include the Council of Europe, the European Commission, the European Union Agency for Fundamental Rights, the Inter-American Development Bank, the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN), the United Nations Educational, Scientific and Cultural Organization (UNESCO), and the World Bank Group.
Access the website: https://globalpolicy.ai
Corbella: Anti-vaxxers and others should be banned from protesting in front of hospitals – Calgary Herald
'I wish those people who think COVID is a farce and that the vaccines are more dangerous than the disease could see what we all see,' said a nurse inside the building
I have a 91-year-old loved one who has recently been diagnosed with an extremely rare blood cancer. It's so rare that the name of his condition is 15 words long.
The thought of this elderly, mentally sharp but now quite medically frail man having to battle through throngs of anti-vaccine protesters to go to Vancouver General Hospital to undergo tests and get blood transfusions is infuriating. That there is footage of thousands of people in front of the same hospital slowing down ambulances is, frankly, disgusting.
On Monday, every main federal political leader spoke out against people protesting in front of hospitals slowing access to medical staff and patients, as did Premier Jason Kenney.
"Today's protests must in no way obstruct the important operations of our hospitals, including the arrival and departure of emergency vehicles and workers. Blocking an ambulance is most definitely not peaceful protest," Kenney said in a written statement Monday.
"In Alberta, local law enforcement is fully empowered to enforce the law in a timely fashion, including the potential use of the Critical Infrastructure Defence Act," said Kenney, referring to an act that came into effect June 17, 2020, makes it illegal to block railways, pipelines and highways, and has been interpreted to include roads heading into and out of hospitals.
And while Canadians are entitled to peaceful protest, one can still question the appalling judgment of those protesting across the country today. It is outrageous that a small minority feel it's appropriate to protest at hospitals during the pandemic while our health-care workers continue to tirelessly battle the global menace of COVID-19.
Liberal Leader Justin Trudeau vowed that, if returned to power, he will make it a criminal offence to block access to buildings that provide health care. "It is not OK that across the country hospitals are having to put up barricades today to manage the mobs coming their way," said Trudeau.
"And further, we're going to make it a criminal offence for anyone to threaten or intimidate any health-care practitioner on their way into work in the exercise of their duty, or a patient on their way to get medical services," added Trudeau.
Huh? It's, of course, already a criminal offence to threaten anybody anywhere, not just staff or patients entering hospitals. How does he get away with such inanity?
The anti-vaccine passport protest in front of Foothills hospital on Monday brought out dozens of Calgary police officers and many other peace officers, including officers armed with cameras to record what was taking place.
Protesters had to stay off hospital property, and if nearby residents, staff and patients found the protesters annoying, it was actually the counter-protesters (a group of eight Antifa members dressed in black, carting around powerful speakers blasting hideous death metal music at ear-splitting decibels) who were truly the annoying ones. Calgary police on scene confirmed that those in attendance were from the radical anti-fascist group.
Ask them for an interview and their only response is "F off!" screamed almost as loud as their music. But at least their sign read "We stand with AHS." With friends like this, who needs enemies? It was clear Antifa tried to incite a fight, which didn't materialize.
A nurse inside the building, reached via telephone, was at first incredulous when she learned it was people who supported them who were making that nauseating sound.
"It's really demoralizing all of us inside. We can hear it and we just assumed it was the anti-vaxxers," said the nurse, who asked to remain anonymous.
"I wish those people who think COVID is a farce and that the vaccines are more dangerous than the disease could see what we all see," she said. "People are dying of COVID or fighting for their lives. They regret not getting vaccinated."
Alberta's intensive-care units are at 90 per cent capacity; 88.5 per cent of hospitalized COVID patients fighting for their lives are unvaccinated, and 82.7 per cent of those who have died of COVID-19 since Jan. 1 were unvaccinated or were diagnosed within two weeks of their first dose.
Alberta's chief medical officer of health, Dr. Deena Hinshaw, tweeted that of the 198 patients in ICU on Monday (the number rose to 202 by noon), 90.4 per cent are unvaccinated or partially vaccinated.
So, just who are the 300 or so folk who turned out Monday to protest mandatory vaccination and/or vaccine passports?
One woman held a sign that read: "Say no to the Jab: Side effects = blood clots, hart attack & more." Pretty hard to take someone seriously who can't even spell the word heart correctly.
Pamphlets for Kevin J. Johnston, the Calgary mayoral candidate who has been repeatedly jailed for spreading hate and threatening people, were everywhere.
Johnston spent the equivalent of seven weeks in jail in May and June for harassing and threatening an AHS inspector as well as causing a disturbance at a downtown Calgary shopping mall when he berated staff who demanded he wear a mask.
Just last week, Johnston was sentenced to 40 days in jail for breaching three judges orders aimed at preventing the spread of COVID-19. He must also pay the $20,000 legal bills of Alberta Health Services.
With Antifa's death metal music blaring, one woman, who would only speak to me on the condition that I remove my mask and then wouldn't give her name, burned sweetgrass and sage and tried to push the smoke over the Antifa protesters to counter their "negative and evil energy."
The protest was organized by Canadian Frontline Nurses, which includes two former Ontario nurses who attended the Jan. 6 riots at the United States Capitol.
Canadian Frontline Nurses spearheaded a rally attended by an estimated 1,500 people in downtown Calgary on Sunday to denounce COVID-19 vaccine mandates.
Several people interviewed threw around names like the Rockefellers, George Soros and Bill Gates as people who want to control our minds using the vaccine and to sterilize us.
There were a lot of people who called reporters on scene "fake news" and charged that "you never tell the other side of the story," but who then refused to give their side of the story or provide their sources of information.
Trevor Simpson was one of the very few who gave his full name. He said he showed up because he's a proud Canadian who's fighting for freedom and our Constitution, and therefore against vaccine mandates and passports that would bar the unvaccinated from some non-essential public events.
When asked how protesting in front of a hospital makes sense, since doctors and nurses are not policy-makers, all he could say was that maybe we shouldn't be protesting there.
Whether people like them or not, vaccine passports are coming. Businesses are demanding them to prevent lockdowns.
More of these protests are expected to take place around hospitals across the country on Tuesday.
If they slow down traffic going into or out of the hospital, participants should be charged.
Licia Corbella is a Postmedia columnist in Calgary.
Twitter: @LiciaCorbella