The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: February 25, 2021
Opportunities for The Global Artificial Intelligence Market to Reach $70 Billion By 2025 – Yahoo Finance
Posted: February 25, 2021 at 1:57 am
DALLAS, TX / ACCESSWIRE / February 22, 2021 / According to a new market report published by Lucintel, the future of the global artificial intelligence market looks attractive with opportunities in the healthcare, security, retail, automotive, manufacturing, and financial technology (fintech) sectors. The global artificial intelligence market is expected to decline in 2020 due to the global economic recession triggered by COVID-19. However, the market is expected to recover in 2021 and to reach an estimated $70 billion by 2025, growing at a CAGR of 21% from 2020 to 2025. The major drivers for this market are increasing demand for virtual assistants that make services easier to access and the growing adoption of cloud-based technology.
To download the report brochure, please go to https://www.lucintel.com/artificial-intelligence-market.aspx and click the "report brochure" tab in the menu.
In this market, different types of artificial intelligence technologies, such as machine learning, natural language processing, and others, are used. On the basis of comprehensive research, Lucintel forecasts that machine learning will remain the largest segment and is also expected to witness the highest growth over the forecast period due to increasing adoption of this technology in autonomous applications and growing consumer preference for IoT-enabled devices.
Within the artificial intelligence market, media and advertising will remain the largest application due to the increasing adoption of customer-centric marketing strategies and the increasing use of social platforms for advertisements. The healthcare segment is expected to witness the highest growth over the forecast period due to advancements in clinical research and growing demand for electronics-based medical equipment and sensors in healthcare applications.
APAC will remain the largest region and is also expected to witness the highest growth over the forecast period due to the growing adoption of IoT (internet of things), increasing installation of smart home devices, and growing industrial automation in countries such as China, India, and Taiwan.
Emerging trends, which have a direct impact on the dynamics of the artificial intelligence industry, include growing adoption of artificial intelligence in IoT applications and increasing demand for AI-enabled processors. Intel, IBM, Amazon, Facebook, NVIDIA, Apple, Microsoft, General Electric, and NEC Corporation, among others, are the major artificial intelligence manufacturers.
Lucintel, a leading global strategic consulting and market research firm, has analyzed the global artificial intelligence market by end use industry, technology, product and service, and region, and has come up with a comprehensive research report entitled "Growth Opportunities in the Global Artificial Intelligence Market 2020-2025: Trends, Forecast, and Opportunity Analysis." The Lucintel report serves as a catalyst for growth strategy as it provides comprehensive data and analysis on trends, key drivers, and directions. The study includes a forecast for the global artificial intelligence market by end use industry, technology, product and service, and region as follows:
By End Use Industry [$B shipment analysis from 2014 to 2025]:
By Technology [$B shipment analysis from 2014 to 2025]:
By Product and Service [$B shipment analysis from 2014 to 2025]:
By Region [$B shipment analysis from 2014 to 2025]:
North America
United States
Canada
Mexico
Europe
United Kingdom
France
Germany
Asia Pacific
The Rest of the World
This 206-page research report will enable you to make confident business decisions in this globally competitive marketplace. For a detailed table of contents, contact Lucintel at +1-972-636-5056 or email helpdesk@lucintel.com.
About Lucintel
Lucintel, the premier global management consulting and market research firm, creates winning strategies for growth. It offers market assessments, competitive analysis, opportunity analysis, growth consulting, M&A, and due diligence services to executives and key decision-makers in a variety of industries. For further information, visit http://www.lucintel.com.
Brandon Fitzgerald
Lucintel
Dallas, Texas, USA
Email: brandon.fitzgerald@lucintel.com
Tel: 972.636.5056
Cell: 303.775.0751
Related reports
Power over Ethernet Solution Market:
For more details click here https://www.lucintel.com/power-over-ethernet-solution-market.aspx
GPS Tracking Device Market:
For more details click here https://www.lucintel.com/gps-tracking-device-market.aspx
Hybrid Memory Cube (HMC) and High-Bandwidth Memory (HBM) Market:
For more details click here https://www.lucintel.com/hybrid-memory-cube-and-high-bandwidth-memory-market.aspx
Micro LED Market:
For more details click here https://www.lucintel.com/micro-led-market.aspx
Metal Terminal MLCC Market:
For more details click here https://www.lucintel.com/metal-terminal-mlcc-market.aspx
Augmented Reality and Virtual Reality Market:
For more details click here https://www.lucintel.com/augmented-reality-and-virtual-reality-market.aspx
Compound Semiconductor Market:
For more details click here https://www.lucintel.com/compound-semiconductor-market.aspx
Copper Clad Laminates Market:
For more details click here https://www.lucintel.com/copper-clad-laminates-market.aspx
Germanium Market:
For more details click here https://www.lucintel.com/germanium-market.aspx
Bonding Wire Packaging Material Market:
For more details click here https://www.lucintel.com/bonding-wire-packaging-material-market.aspx
SOURCE: Lucintel
View source version on accesswire.com: https://www.accesswire.com/630968/Opportunities-for-The-Global-Artificial-Intelligence-Market-to-Reach-70-Billion-By-2025
Posted in Artificial Intelligence
Comments Off on Opportunities for The Global Artificial Intelligence Market to Reach $70 Billion By 2025 – Yahoo Finance
European Space Agency selects CGI to develop services combining Artificial Intelligence and Earth Observation for Wildfire Mapping – PRNewswire
Posted: at 1:57 am
Stock Market Symbols: GIB (NYSE), GIB.A (TSX)
www.cgi.com/newsroom
LONDON, Feb. 22, 2021 /PRNewswire/ - CGI (NYSE: GIB) (TSX: GIB.A) has been awarded a contract by the European Space Agency (ESA) to develop a new wildfire mapping service that combines recent advances in Earth Observation (EO), Artificial Intelligence (AI) and cloud computing to help better map and monitor the impact of wildfires.
CGI and its project partner, the University of Leicester, are working with nationally mandated user organisations from Australia (Geoscience Australia) and France (ONF France) to implement and demonstrate EO services based on their requirements for improved wildfire risk management. The consortium will evaluate a variety of AI algorithms that could help meet these requirements. It is expected that combining the machine learning capabilities of these AI algorithms with the increased availability of frequent, high-quality satellite observations will allow better burnt area mapping products to be generated where and when users want them.
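The announcement does not specify which algorithms the consortium will evaluate, but a common starting point for burnt-area mapping from EO data is to threshold the change in the Normalised Burn Ratio (dNBR) between pre-fire and post-fire imagery. The sketch below is a minimal illustration of that idea, using synthetic NumPy arrays in place of real satellite bands; the band values and the 0.27 threshold are assumptions for the example, not CGI's or ESA's implementation.

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalised Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / np.clip(nir + swir, 1e-6, None)

def burnt_area_mask(pre_nir, pre_swir, post_nir, post_swir, threshold=0.27):
    """Flag pixels whose NBR dropped sharply after the fire.

    The 0.27 dNBR cut-off is a commonly cited value for moderate burn
    severity; a production service would calibrate or learn it per region.
    """
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    return dnbr > threshold

# Toy example: random reflectance values standing in for satellite bands.
rng = np.random.default_rng(0)
shape = (512, 512)
pre_nir, pre_swir = rng.uniform(0.3, 0.6, shape), rng.uniform(0.1, 0.3, shape)
post_nir, post_swir = rng.uniform(0.1, 0.4, shape), rng.uniform(0.2, 0.5, shape)
mask = burnt_area_mask(pre_nir, pre_swir, post_nir, post_swir)
print(f"Estimated burnt fraction: {mask.mean():.1%}")
```

A machine learning service of the kind described above would typically replace the fixed threshold with a classifier trained on labelled burnt areas, which is where the evaluated AI algorithms come in.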
The resulting AI-enabled wildfire mapping service will be made available to Geoscience Australia and ONF France, as well as the wider environmental community, through the EO4SD Lab portal. This online data portal, which has been developed by CGI for ESA, utilises cloud computing to provide free access to a range of EO data, tools and services to the sustainable development and wider environmental community.
The recent extensive fire disasters in the USA, Southern Europe and Australia have shown both the environmental and human cost of wildfires. Climate change is contributing to more frequent wildfires, with studies finding a 19% increase in global mean fire weather season length between 1979 and 2013. Better monitoring and analysis of burnt areas is important to improve land management and help mitigate the impact.
Tara McGeehan, President of CGI in the UK & Australia, said: "We are excited to be part of this cutting-edge project that brings to bear the potential of AI to help the scientific and environmental community to better understand the extent and impact of damaging wildfires throughout the world. Our ongoing partnership with ESA for EO and Thematic Exploitation Platforms is enabling rapid progress in monitoring the Earth's environment to support scientific research and government policy."
Kevin Tansey, Professor of Remote Sensing and Principal Investigator at the University of Leicester, said: "After 20 years of research into the use of satellite data to measure burned area and severity from local to global scales, the opportunity to work with CGI and agency partners to develop new wildfire services is very exciting. I am further delighted that this project will be one of the first to be delivered out of Space Park Leicester, our new state-of-the-art, high-tech facility for research, development and manufacturing."
CGI has been delivering complex, mission-critical space software systems for clients across Europe, Australia, Asia and North America for over 40 years, supporting everything from satellite navigation, communications and operations to space-enabled applications. CGI's partner, the University of Leicester, has a wealth of experience in the EO domain and is one of the leading academic institutions in the UK. CGI is a partner in the Manufacturing, Engineering, Technology and Earth Observation Centre (METEOR) at Space Park Leicester.
About CGI: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 76,000 consultants and other professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2020 reported revenue is C$12.16 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.
SOURCE CGI Inc.
Posted in Artificial Intelligence
Comments Off on European Space Agency selects CGI to develop services combining Artificial Intelligence and Earth Observation for Wildfire Mapping – PRNewswire
How NSF and Amazon Are Collectively Tackling Artificial Intelligence-Based Bias – Nextgov
Posted: at 1:57 am
The National Science Foundation and Amazon teamed up to fund a second round of research projects aimed at promoting trustworthy artificial intelligence and mitigating bias in systems.
The latest cohort selected to participate in the Program on Fairness in AI includes multi-university projects to confront structural bias in hiring, algorithms to help ensure fair AI use in medicine, principles to guide how humans interact with AI systems, and others that focus on education, criminal justice and human services applications.
"With increasingly widespread deployments, AI has a huge impact on people's lives," Henry Kautz, NSF division director for Information and Intelligent Systems, said. "As such, it is important to ensure AI systems are designed to avoid adverse biases and make certain that all people are treated fairly and have equal opportunity to positively benefit from its power."
Kautz, whose division oversees the program, briefed Nextgov on the complexities that accompany addressing fairness in AI, and the joint initiative NSF and Amazon are backing to help contribute to the creation of more trustworthy technological systems.
What is fair?
AI is already an invisible variable that touches many crucial aspects of Americans' lives. Uses range from helping facial recognition unlock smartphones to making recommendations about punishments judges should impose for criminal convictions. But there's still no universal guarantee that the rapidly evolving technology won't be harmful to certain people.
"It is important to note that we are still trying to understand fairness," Kautz explained. "And once we have a better understanding of the many facets of fairness, the challenge is not just to design AI systems that are as fair as people are, but to actually be even more fair and unbiased, since we know people can make biased decisions, either implicitly or explicitly."
Mathematical definitions of fairness can home in on the algorithmic outcomes of different groups using a statistical approach, he noted, so methods in that realm might look to ensure various metrics are consistent across different groups. From a social perspective, on the other hand, officials might consider how AI could improve fairness and equality across society. "An AI system might be used to help determine a novel vaccination or food distribution method or the location of medical resources that users would not have thought of without the analysis from the system," Kautz noted. Or, in technical approaches to fairness, officials might consider the accountability of the users of an AI system and what information is needed to guarantee they feel confident that informed decisions can be made.
"Thus, there are many ways to look at fairness in AI, and that is what NSF and Amazon are trying to do through this joint effort," Kautz said. "We are making progress but are still in the early stages, where we need to understand the different aspects of fairness, in real-world settings, so that we can in turn understand how we can design our systems with fairness built into them."
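To make the statistical view of fairness concrete, two commonly used group metrics are the demographic parity gap (the difference in positive-outcome rates between groups) and the equal opportunity gap (the difference in true-positive rates). The sketch below uses made-up predictions purely for illustration; it is not a tool from NSF, Amazon or the funded projects.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Made-up hiring predictions for eight applicants from groups "A" and "B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])   # actually qualified?
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])   # model's decision
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print("Demographic parity gap:", demographic_parity_gap(y_pred, group))
print("Equal opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))
```

A value near zero on either metric suggests the groups are treated similarly under that particular definition, though, as Kautz points out, no single statistic captures every facet of fairness.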
Advancing Fair AI
NSF has been funding research to promote fairness in AI systems for some time, according to Kautz, while Amazon grasps the importance of building out systems designed with such approaches.
"Given our mutual interest in this space, it seemed natural for NSF and Amazon to partner to leverage the resources and expertise that each organization brings," Kautz said, adding that the two intend to provide approximately $10 million each, for a total of about $20 million, over the three-year life of the program they cooperatively steer.
The first cohort was selected last year, this announcement marks the second, and another is anticipated to roll out in 2022.
"Amazon does not play a role in the selection of proposals for the research grants; only NSF selects the awardees," Kautz confirmed.
"Through the partnership, the research community submits proposals to NSF, which in turn uses its standard peer review process to identify meritorious proposals," he explained. Agency officials complete NSF's standard award process and provide grants to those chosen while Amazon separately sends its funding contributions. The company additionally offers consultation to the researchers who receive awards.
"The response to the solicitations has grown, indicating growing importance and interest in the research community in addressing fairness in AI since the program's inception," Kautz added. The award topics have also broadened, and now include projects in natural language processing, computer vision, and applications to criminal justice.
But what hasn't changed is the effort's overall aim and potential to help scientists push forward toward new technical breakthroughs, accelerate the transition of their research results from laboratories to practice, and train the next generation of researchers and practitioners, which Kautz deemed another dimension that is really important to NSF.
"We all appreciate there is a real need for competencies in AI across all sectors of our economy. Providing students studying fairness in AI with exposure to industry, and the problems that they are facing, is one way to develop and nurture talent that our research ecosystem is going to need going forward," he said. "Finally, students participating in this program's projects may get exposure to future job opportunities as a result of Amazon's engagement."
More:
How NSF and Amazon Are Collectively Tackling Artificial Intelligence-Based Bias - Nextgov
Posted in Artificial Intelligence
Comments Off on How NSF and Amazon Are Collectively Tackling Artificial Intelligence-Based Bias – Nextgov
What were they thinking?: Where artificial intelligence meets family dysfunction – scarsdalenews.com
Posted: at 1:57 am
What if a groundbreaking technology designed to gather and channel contemporary insights from the world's most brilliant leaders (Churchill, Gandhi, Lincoln) took a sharp detour and exposed long-buried secrets of its creator and his family? In his debut novel, Scarsdale author Marc Sheinbaum explores that possibility.
In "Memories Live Here," Sheinbaum writes about what happens when an artificial intelligence project known as CHERL (Computerized Human Experienced as Real Life) and a cybersecurity breach lead to the near downfall of A.I. engineer Josh Brodsky and his two brothers. In their search for the source of CHERL's ransomware, they unearth decades-old revelations about their late parents.
Sheinbaum drew some of his knowledge of A.I. from his days as an IT consultant at PricewaterhouseCoopers, and his exposure to neural networks (algorithms that mimic the operations of a human brain) that identify fraud in finance while working in risk management at GE Capital.
He describes "Memories Live Here" as not so much a hard science fiction novel as a mystery thriller about a dysfunctional family.
"I did a lot of searching around the web and conversations with subject matter experts, but mostly on broad concepts rather than the detailed science," Sheinbaum said. "Luckily, I didn't have to do too much research on the dysfunctional family part. But we'll get to that later."
"There are those who would think our government would learn a great deal from history's greatest leaders," said Sheinbaum, "but I was drawn to the more personal story. What deceased person would you speak to from your own family? From your own past? And could A.I. uncover skeletons in the closet? Secrets we never knew, that we never realized had such a great impact on our lives and shaped who we became? That's the story I really wanted to explore, but with the pulse of a fast-paced mystery thriller."
As with many second act novelists, Sheinbaum put his early affinity for creative writing on hold while pursuing a 35-year career in finance at corporations like JPMorgan Chase and American Express. After retiring, he enrolled in a writers workshop.
"We were given simple prompts and 20 minutes, where we were told to write whatever comes to mind and we'll share your work. Talk about feeling exposed!" Sheinbaum said. "Before I knew it, I had two full pages in front of me." The concepts within "Memories Live Here" came from one of those early prompts, a riff from that one sentence that kept going.
Sheinbaum cites a combination of influences on his writing, like author Michael Crichton (Jurassic Park) and an array of mysteries that dig up family secrets.
Like the brothers in the book, Sheinbaum grew up in Sheepshead Bay, a neighborhood in Brooklyn, but said there are only limited similarities to his own life. "It's not autobiographical at all, but of course I mined my own experiences when creating the characters within this family," he said. "Luckily, I have a good relationship with my brother, but there was plenty of friction between my parents."
Sheinbaum admits that writing the story was somewhat cathartic. "In a way, I gave my parents a voice, to portray one possible explanation," he said. "Even though it wasn't their story, or my story, it could have been."
A father of two grown children, Sheinbaum graduated from State University of New York at Albany, where he majored in business administration, and NYU, where he earned an MBA. He and his wife are empty nesters who lived in Chappaqua and Ardsley before moving to Scarsdale in 2018.
In the book, Sheinbaum's dedication reads, "It's never too late," which, he says, carries a double meaning. "Obviously, it's never too late to pursue your dreams. But in terms of the story, it's also never too late to try to understand, or to let go of the things that can be destructive in our lives, and to forgive. I hope that's a piece of what all readers take away from the story."
See the article here:
What were they thinking?: Where artificial intelligence meets family dysfunction - scarsdalenews.com
Posted in Artificial Intelligence
Comments Off on What were they thinking?: Where artificial intelligence meets family dysfunction – scarsdalenews.com
Here’s what happened when AI and humans met in a strawberry-growing contest – Big Think
Posted: at 1:57 am
Strawberries can be easy to grow, especially, it seems, if you're an algorithm.
When farmers in China competed to grow the fruit with technology including machine learning and artificial intelligence, the machines won, by some margin.
Data scientists produced 196% more strawberries by weight on average compared with traditional farmers.
The technologists also outperformed farmers in terms of return on investment by an average of 75.5%.
The inaugural Smart Agriculture Competition was co-organized by Pinduoduo, China's largest agri-focused technology platform, and the China Agricultural University, with the Food and Agriculture Organization of the United Nations as a technical adviser.
Teams of data scientists competed over four months to grow strawberries remotely using Internet of Things technology coupled with artificial intelligence (AI) and machine learning-driven algorithms.
In the competition, the technology teams had the advantage of being able to control temperature and humidity through greenhouse automation, the organizers said. Using technology such as intelligent sensors, they were also more precise at controlling the use of water and nutrients. The traditional farmers had to achieve the same tasks by hand and experience.
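The organisers have not published the winning teams' control logic, but the heart of this kind of greenhouse automation is a sensor-to-actuator feedback loop. The sketch below is a simplified, hypothetical version: the sensor keys, target bands and actuator names are invented for illustration, and a competitive system would layer machine learning-driven yield optimisation on top of a loop like this.

```python
from dataclasses import dataclass

@dataclass
class Setpoints:
    temp_c: tuple = (18.0, 24.0)         # target temperature band, degrees C
    humidity_pct: tuple = (60.0, 75.0)   # target relative humidity band
    soil_moisture: tuple = (0.25, 0.40)  # target volumetric water content

def control_step(reading: dict, sp: Setpoints) -> dict:
    """Decide one round of actuator commands from a single sensor reading."""
    return {
        "heater_on": reading["temp_c"] < sp.temp_c[0],
        "vent_open": reading["temp_c"] > sp.temp_c[1],
        "mister_on": reading["humidity_pct"] < sp.humidity_pct[0],
        "irrigate": reading["soil_moisture"] < sp.soil_moisture[0],
    }

# Example reading from hypothetical greenhouse sensors.
print(control_step(
    {"temp_c": 26.1, "humidity_pct": 58.0, "soil_moisture": 0.22},
    Setpoints(),
))
```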
One of the teams, Zhi Duo Mei, set up a company to provide its technology to farming cooperatives after it generated a lot of interest during the competition.
The contest helped the traditional farmers and the data scientists better understand each other's work and how they could collaborate to everyone's advantage, the leader of the Zhi Duo Mei team, Cheng Biao, said.
Numerous studies show the potential for Fourth Industrial Revolution technologies like AI to boost economic growth and productivity.
By 2035, labour productivity in developed countries could rise by 40% due to the influence of AI, according to analysis from Accenture and Frontier Economics.
Sweden, the US and Japan are expected to see the highest productivity increases.
In its Future of Jobs Report 2020, the World Economic Forum estimates that by 2025, 85 million jobs may be displaced by a shift in the division of labour between humans and machines, while 97 million new roles may emerge that are more adapted to the new division of labour between humans, machines and algorithms.
Emerging technologies including AI and drones will also play a vital role in helping the world recover from COVID-19, according to a separate Forum report compiled with professional services firm Deloitte.
The Global Technology Governance Report 2021 considers some of the most important applications for these technologies and the governance challenges that should be addressed for these technologies to reach their full potential.
Reprinted with permission of the World Economic Forum. Read the original article.
Read this article:
Here's what happened when AI and humans met in a strawberry-growing contest - Big Think
Posted in Artificial Intelligence
Comments Off on Here’s what happened when AI and humans met in a strawberry-growing contest – Big Think
We can’t trust big tech or the government to weed out fake news, but a public-led approach just might work – The Conversation AU
Posted: at 1:56 am
The federal government's News Media and Digital Platforms Mandatory Bargaining Code, which passed the Senate today, makes strong points about the need to regulate misinformation.
In response, Google, Facebook, Microsoft, TikTok, Redbubble and Twitter have agreed to abide by a code of conduct targeting misinformation.
Suspiciously, however, the so-called Australian Code of Practice on Disinformation and Misinformation was developed by, well, these same companies. Behind it is the Digital Industries Group (DIGI), an association formed by them and some other companies.
In self-regulating, they hope to show the government they're addressing the proliferation of misinformation (false content spread without intent to deceive) and disinformation (content that intends to deceive) on their platforms.
But the only real commitment under the code would be to appear to be doing something. Since the code is voluntary, the platforms signed up can basically opt in to the measures at their own discretion.
The code suggests platforms might release data trends about known misinformation, or might label known false content or content spread by seemingly unreliable sources. They might identify and restrict paid political ads trying to deceive users, or they might reveal the sources of misinformation.
These are all great actions the platforms might take. But "might" is the operative word: they aren't bound by the code to take any of them. Rather, the code will likely encourage them to police misinformation around an issue of the day by taking visible action on one topic, without confronting the spread of other profitable false information on their platforms.
The consequences of this could be significant. False news can lead to dangerous conspiracies and armed attacks. It can even influence elections, which we saw in 2019 when Facebook hosted posts claiming the Labor party would introduce a "death tax" on inheritance. Things quickly spiralled.
The government has promised tougher regulation of misinformation if it feels the voluntary code isn't working, although we should be careful about allowing the powerful to regulate the powerful.
It's unclear, for instance, whether the Morrison government would view posts about a supposed Labor "death tax" as a real threat to democracy, even though this is misinformation.
Read more: How political parties legally harvest your data and use it to bombard you with election spam
Regulating speech on the internet is difficult. In particular, misinformation is hard to define because the distinction between genuinely dangerous misinformation and valued myth or opinion is often based on a community's values.
The latter is information that may not be accurate but which people still have a right to express. For instance:
Nickelback is the best band on the planet.
This is probably untrue. But the statement is relatively harmless. While its actual truthfulness is lacking, its subjective nature is clear. Considering this nuance, the solution is for misinformation to be policed by the community itself, not an elite body.
Reset Australia, an independent group that targets digital threats to democracy, recently proposed a project in which interested tech platforms and members of the public could be subscribed to a live list of the most popular misinformation content.
A citizen-run jury could monitor the list to help ensure public oversight. This would involve the whole public sphere in the debate about misinformation, not just the government and platforms.
Once fake news is in the open, it becomes easier for public figures, journalists and academics to expose.
Another effective strategy would be to create a national register of misinformation sources and content. Anyone could register what they think is misinformation to the Australian Communications and Media Authority, helping it quickly identify malicious sources and alert the platforms.
Digital platforms already do this internally, both through moderators and by allowing the public to report posts. But they don't show how posts are judged and don't release the data. By creating a public register, ACMA could monitor whether platforms are self-regulating effectively.
Such a register could also keep a record of legitimate and illegitimate information sources and give each one a reputation score. People who accurately reported misinformation could also receive high ratings, similar to Uber's ratings for drivers and passengers.
While this wouldn't restrict anyone's right to expression, it would be easier to point to the reliability of the source of information.
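As a minimal sketch of how such reputation scores might be tracked, the register below keeps a smoothed accuracy score per reporter; the schema and update rule are assumptions made for illustration, not a design from Reset Australia or ACMA.

```python
from collections import defaultdict

class ReputationRegister:
    """Track a running accuracy score for each person reporting misinformation."""

    def __init__(self):
        self.reports = defaultdict(lambda: {"correct": 0, "total": 0})

    def record(self, reporter_id: str, was_accurate: bool) -> None:
        entry = self.reports[reporter_id]
        entry["total"] += 1
        entry["correct"] += int(was_accurate)

    def score(self, reporter_id: str) -> float:
        """Laplace-smoothed accuracy, so new reporters start near 0.5."""
        entry = self.reports[reporter_id]
        return (entry["correct"] + 1) / (entry["total"] + 2)

register = ReputationRegister()
register.record("user42", True)
register.record("user42", True)
register.record("user42", False)
print(round(register.score("user42"), 2))  # 0.6 after two accurate reports of three
```

The same structure could score content sources rather than reporters; the smoothing simply stops a single report from swinging a score to 0 or 1.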
It's worth noting this type of community-based peer review system would be open to potential abuse. Movie review site Rotten Tomatoes has had serious problems with people trolling film reviews.
For example, Captain Marvel was awarded a low audience rating because toxic online communities decided they didn't like the idea of a female superhero, so they coordinated to rate the film poorly. But the platform was able to identify this pattern of behaviour.
The site ultimately protected the film's score by ensuring only people who had bought a ticket to see the movie could rate it. While any system is open to abuse, so is self-regulation, and communities have shown they can (and are willing to) solve such problems.
Wikipedia is another community-driven peer review resource and one which most people consider highly valuable. It works because there are enough people in the world who care about the truth.
Judging the accuracy of claims made in public allows for a consensus that is open to be challenged. On the other hand, leaving decisions about truth to private companies or political parties could actually exacerbate the misinformation problem.
The news media bargaining code has finally passed. Facebook is set to bring news back to Australia, as well as start making deals to pay local news publishers for content.
The agreement between the government and Facebook, which serves the interests of those parties, seems like just another echo of the past. Large media players will retain some revenue and Google and Facebook will continue to expand their immense control of the internet.
Meanwhile, users remain reliant on the benevolence of tech platforms to do just enough about misinformation to satisfy the government of the day. We should be careful about surrendering power to both platforms and governments.
This new code won't force significant change out of either, despite the pressing need for it.
Read more: Google is leading a vast, covert human experiment. You may be one of the guinea pigs
Posted in Fake News
Comments Off on We can’t trust big tech or the government to weed out fake news, but a public-led approach just might work – The Conversation AU
Totally Not Fake News: Official Polling Results from Our Completely Accurate and Unbiased Totally Not Fake Ne – Battle Red Blog
Posted: at 1:56 am
HOUSTON, TX - In the past few months, the concept of polling has come under considerable scrutiny. Whether it is in politics or sports, the idea of the poll, while having some role to play, is also sometimes thought to be a poor representation of reality and what people are really thinking. Still, for some reason, people still do enjoy their polls, and one could argue that there were some significant market corrections, at least on the political side for Jan 2021. As for the sporting side, well, at least for college football, there is still some work to do (like Cincinnati getting screwed out of the right to make the FBS Playoffs so that they could be annihilated by the best professional team in college football that is Alabama)... oh, sorry... anyway, back to the polls.
We at Totally Not Fake News understand that the raison d'être of polls is internet debate. However, we have supreme confidence in our polling system, from the questions to the tabulations on how the results are posted. Completely objective and (mostly) above repute. Our methods are completely legal (in our eyes) and we can assure the reading audience that we simply go with the facts... nothing else.
Therefore, we will now update you on the results of a couple of polls we recently put out into the field. The first one concerned finding the correct, parallel figure for current Texans Executive Vice President for Football Operations, Jack Easterby. Here is the recap. Based on the responses, here is the final result of that poll:
It is official. Any comparisons of Jack Easterby to figures, real or imaginary, must be with Rasputin. This is the certified resul... hold on, we are getting word that somehow, there may have been a flaw in the system. Yes... yes, it appears that we at Totally Not Fake News, in spite of our best efforts, got this one wrong. Apparently, Rasputin is not an acceptable response. Therefore, we will go with the next highest option, that Easterby is his own level of vileness. This is a legitimate revision and has absolutely nothing to do with some disturbing email correspondence we received from some dude named Vladimir, who threatened to use some group calling itself the FSB to mess with our underwear drawer and/or threaten to reveal our internet search history to our significant others if we did not correct the error of besmirching the good name of Rasputin with the Easterby comparisons.
Anyway, now that this poll is done, let us move on to the next (less contentious) polling question. In this one, we asked about what term/phrase best describes the current state of the Texans. Here is the recap. The results:
Apparently, the public cannot limit itself to just one descriptive adjective. Maybe that is a case in America that we want more than the rest of the world. Either that, or the situation for the Texans warrants multiple descriptors. This is quite the drunk internet topic for the future. Would have been interesting to see the results if we could have revealed that one answer, which was [Censored by the Order of the Rasputin... whoops, the Easterby. We said/meant EASTERBY!!! VLAD, IT IS EASTERBY, NOT RASPUTIN!!! TELL YOUR FSB DROOGS TO CHILL, OK!!!]
With those results out of the way, we can move on to other areas of public debate/concern. Our next great poll question for consideration:
[Editor's Note: Stepping out of character here, but while you ponder the poll results, consider that a lot of Texans fans are in some dire straits right now. The unseasonable weather is making life in Texas rough. Many need help now and many more are going to need some significant help in the near future. For those that can, here are some options for how to assist:
Any little bit you can provide will be of some help. Thank you].
Posted in Fake News
Comments Off on Totally Not Fake News: Official Polling Results from Our Completely Accurate and Unbiased Totally Not Fake Ne – Battle Red Blog
GCHQ takes on Russian fake news bots with plans to use AI to find troll farms, new paper reveals – Telegraph.co.uk
Posted: at 1:56 am
GCHQ is taking on Russian fake news bots with plans to use artificial intelligence (AI) to find troll farms, it has been announced.
Britain's cyber spy agency has outlined how it protects the country against state-backed disinformation campaigns and other cyber attacks using AI.
Shadowy organisations such as Russia's Internet Research Agency, the St Petersburg-based troll farm indicted in the US for meddling in the 2016 presidential election, will come under increased scrutiny.
In a paper released today titled "Ethics of AI: Pioneering a New National Security", GCHQ has explained how it counters malign foreign cyber attacks, whilst adhering to an ethical framework for using AI in operations.
Jeremy Fleming, the GCHQ Director, said AI "allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats, from protecting children to improving cyber security".
"While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ.
"Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI."
The paper describes how hostile states can use AI to mount disinformation attacks by automating the production of fake news and other content to undermine public debate.
This could include the production of deepfake videos with fictional but very convincing content. False audio material designed to mislead can also be produced with AI.
"A growing number of states are using AI-enabled tools and techniques to pursue political ends by spreading disinformation to shape public perceptions and undermine trust," the paper states.
GCHQ says it uses AI to assist in tackling this threat by fact-checking information against trusted sources, detecting when images and videos have been altered, and blocking botnets: networks of private computers infected with malicious software and controlled as a group without the owners' knowledge, which are used to send spam and other malicious material.
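The paper does not disclose GCHQ's detection models. Purely as an illustration, a naive heuristic for flagging bot-like accounts might look at posting rate and content repetition, as sketched below; the features and thresholds are invented for the example, and a real system would learn them from labelled accounts rather than hard-code them.

```python
from collections import Counter

def looks_like_bot(post_times_hours: list, posts: list,
                   max_rate_per_hour: float = 20.0,
                   max_duplicate_ratio: float = 0.5) -> bool:
    """Flag accounts that post implausibly fast or repeat themselves heavily."""
    span = max(post_times_hours) - min(post_times_hours) or 1.0  # avoid divide-by-zero
    rate = len(posts) / span
    most_repeated = Counter(posts).most_common(1)[0][1]
    duplicate_ratio = most_repeated / len(posts)
    return rate > max_rate_per_hour or duplicate_ratio > max_duplicate_ratio

# An account posting the same message four times in 20 minutes gets flagged.
print(looks_like_bot([0.0, 0.1, 0.2, 0.3], ["same message"] * 4))  # True
```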
So-called troll farms, organisations making deliberately offensive or provocative online posts to cause conflict or manipulate public opinion, are being targeted by GCHQ.
The report says AI will help identify such places so that online operations could be mounted to counteract these malicious accounts.
"Russia has, in recent years, made a number of attempts to disrupt our own and our allies' way of life, and GCHQ also responded to the release of the online WannaCry attack by North Korean cyber actors," the report says.
Posted in Fake News
Comments Off on GCHQ takes on Russian fake news bots with plans to use AI to find troll farms, new paper reveals – Telegraph.co.uk
Political ads, fake news targeted in newly formed misinformation code – Sydney Morning Herald
Posted: at 1:56 am
The code was created by industry group DIGI and has been signed by Google, Microsoft, TikTok, Twitter, Redbubble and Facebook, despite its decision last week to restrict news content on its platform. It will be overseen by the Australian media regulator and is being released at a time when vaccine conspiracies and misinformation about COVID-19 are rampant on the internet.
Google and Facebook are separately waiting on the legislation of new laws that will make them pay for use of news content on their platforms. The news media bargaining code was passed by the House of Representatives last week and is expected to be debated in the Senate this week.
Under the commitments to tackle the spread of harmful content, measures such as labelling false content or using trust indicators on articles, demoting content that exposes users to misinformation and disinformation, and suspending or disabling the accounts of users who engage in inauthentic behaviours could be introduced. Users may also be notified if they've been exposed to disinformation.
Misinformation is defined as false or misleading information that is likely to cause harm. Disinformation is false or misleading information that is distributed by users via spam and bots.
Prioritisation of credible news sources and providing funding for fact checkers are also among the measures that could be taken up. This could be a challenge for Facebook following its introduction of a blanket news ban last Thursday.
It is unclear how Facebook could meet all its commitments of the code in the absence of news on its platform, but it does have measures in place such as third party fact checkers to handle misleading content.
The social media giant said earlier this month it would ban vaccine conspiracy theories from its platforms, including claims they cause autism, in an attempt to reduce COVID-19 misinformation.
Political advertising is not considered misinformation or disinformation under the code, but the tech platforms will implement measures to provide users with transparency about the source of the ads. They could also ban ads that misrepresent or deceive users about the advertiser. Political ads which appear in news or editorial content will also be disclosed as paid-for communication.
Decisions on what measures need to be taken will be based on factors such as the severity of the post or article, who is involved in its creation, the speed at which it is disseminated, whether it is maliciously motivated and if the content is misleading or harmful.
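As an illustration only, a graduated, factor-based decision of the kind the code describes could be encoded as a simple scoring rule; the factor weights and action tiers below are assumptions for the sketch, not part of the DIGI code.

```python
def response_level(severity: int, reach_per_hour: int,
                   malicious: bool, misleading: bool) -> str:
    """Map risk factors to a graduated action, from no action up to removal."""
    score = severity                              # 0-3 judgement of potential harm
    score += 2 if reach_per_hour > 10_000 else 0  # spreading quickly
    score += 2 if malicious else 0                # maliciously motivated
    score += 1 if misleading else 0               # misleading or harmful content
    if score >= 6:
        return "remove content and suspend account"
    if score >= 4:
        return "demote in feeds and notify exposed users"
    if score >= 2:
        return "label with trust indicators"
    return "no action"

print(response_level(severity=3, reach_per_hour=50_000, malicious=True, misleading=True))
```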
All signatories are required to publish information for users on what measures they have in place and will release an annual report on their efforts. They will also establish a way to address non-compliance.
The Australian Communications and Media Authority chair, Nerida O'Loughlin, welcomed the code and encouraged all tech platforms to opt in.
"The code anticipates platforms' actions will be graduated and proportionate to the risk of harm. This will assist them to strike an appropriate balance between dealing with troublesome content and the right to freedom of speech and expression," Ms O'Loughlin said.
Communications Minister Paul Fletcher said the government would carefully assess if the new code was effective.
"We've all seen the damage that online disinformation can cause, particularly among vulnerable groups," Mr Fletcher said. "This has been especially apparent during the COVID-19 pandemic.
"The Morrison government will be watching carefully to see whether this voluntary code is effective in providing safeguards against the serious harms that arise from the spread of disinformation and misinformation on digital platforms."
Zoe Samios is a media and telecommunications reporter at The Sydney Morning Herald and The Age.
Read this article:
Political ads, fake news targeted in newly formed misinformation code - Sydney Morning Herald
Posted in Fake News
Comments Off on Political ads, fake news targeted in newly formed misinformation code – Sydney Morning Herald
Do more to battle fake news on vaccines, say experts – Free Malaysia Today
Posted: at 1:56 am
Medical experts say efforts to counter falsehood on the Covid-19 vaccines are insufficient. (AP pic)
PETALING JAYA: Several medical experts have attributed some Malaysians' concerns over vaccine safety to falsehoods spread through social media.
They commended the government for its efforts to inspire confidence in Covid-19 vaccines, but said conspiracy theories and fake news had somewhat dampened the effect of those efforts.
Dr M Murallitharan, the director of the National Cancer Society, said one of the most common misconceptions was that there were bound to be side effects from vaccines produced in just a few months.
Those holding this belief would wait for large numbers of other people to get inoculated before getting themselves immunised, he told FMT.
He said the government had done quite a bit to counter the myth, but added that there had been an underestimation of the power of social media to push false narratives.
Murallitharan said efforts to counter myths about vaccines were important because "the more we inoculate, the safer it will be".
Dr Sharifa Ezat Wan Puteh, a professor of health economics and public health at Universiti Kebangsaan Malaysia, said the government's efforts against the spread of falsehood were apparently not sufficient.
She called for the enactment of laws to prevent people from promoting false claims about vaccines, such as claims that they could lead to autism or death.
Previously, Malaysia saw a rise in polio and diphtheria because people were hesitant to get vaccinated, she said.
This was why it was important for the government to explain potential side effects, how to resolve them and where to go for questions and post-vaccination complaints.
But the effects of not getting vaccinated also need to be explained, she said.
Dr Muhammad Yusri Musa, the president of the Islamic Medical Association of Malaysia, said Putrajaya had not done enough to counter the myths.
The common myths associated with vaccines, he said, included genetic modification, theories relating to an Israeli agenda and allegations that vaccines contain non-halal and dangerous substances.
"Vaccine myths are quite prevalent in society, but the majority of the population are not affected by them," he said.
But he said it was vital to address these issues to ensure the take-up rate of vaccines was good.
Dr Sibrandes Poppema, the president of Sunway University, said some people were underestimating the danger of the virus, some were overestimating the risks of vaccination and some did not understand that vaccination was to protect society and not just the individual.
Some, he said, believed that the Russian and Chinese vaccines might not be sufficiently effective and others believed in fake news about hidden intentions of the vaccination effort.
"The truth of the matter is that all approved vaccines have been shown to prevent serious disease and have been demonstrated to be safe through rigorous clinical trials in different countries," he said.
Poppema, a specialist in immunopathology, noted that Putrajaya and various experts had repeatedly debunked these myths.
But, he said, the onslaught of fake news, including statements by those claiming to be physicians and experts, "makes it necessary to redouble our efforts to not only fight the Covid-19 pandemic but the fake news pandemic as well".
He said it was essential to keep providing reliable information with the first batch of vaccines set to roll out.
"The vaccination strategy will work only when more than 75% of the population are vaccinated," he said. "Otherwise, the virus will smoulder and new variants may develop."
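Poppema's 75% figure is consistent with the standard herd immunity calculation, in which the required vaccination coverage is roughly (1 - 1/R0) divided by vaccine effectiveness. The values in the short sketch below (an R0 of 3 and 90% effectiveness) are illustrative assumptions, not figures from the article.

```python
def coverage_needed(r0: float, effectiveness: float) -> float:
    """Fraction of the population to vaccinate: (1 - 1/R0) / effectiveness."""
    return (1 - 1 / r0) / effectiveness

# With an illustrative R0 of 3 and a 90% effective vaccine: about 74% coverage.
print(f"{coverage_needed(3.0, 0.90):.0%}")
```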
Read more:
Do more to battle fake news on vaccines, say experts - Free Malaysia Today
Posted in Fake News
Comments Off on Do more to battle fake news on vaccines, say experts – Free Malaysia Today