The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: September 2021
NY redistricting commission's obscenely partisan maps defy will of voters (Guest Opinion by Mark Braiman) – syracuse.com
Posted: September 20, 2021 at 9:32 am
Mark Braiman, of Cazenovia, is treasurer of Madison County Libertarians.
Here is a sure-fire recipe for short-circuiting the open redistricting process New York voters demanded with a 2014 constitutional amendment. Start with a shift of New York's primaries from September to June; add a pandemic that delayed 2020 Census results by several months; and toss into the mix a partisan deadlock on the Independent Redistricting Commission. With the process now under extreme time pressure, the unforeseen consequence will likely be the state's three top politicians sitting in a room somewhere to doodle out the final maps a few days before legislative approval; or else a federal court doing it all on its own, without any input from the state's politicians or voters.
New York Democrats feel an urgent need to engage in pre-emptive gerrymandering to counter what will happen in red states. This is unappealing behavior but seems inevitable. I am nevertheless greatly irritated that this national gerrymandering war impacts me directly, in the form of the Democratic IRC members' proposed sea-serpent-shaped Central New York district extending from Tompkins County to Utica, with a neck through northern Madison County. This would be the first time in its 215-year history that Madison County has been divided between Congressional districts. My home is so close to the obnoxiously arbitrary boundary that it will take a lot of scrutiny before I can discern which side I live on.
Forcing incumbent Republican Reps. John Katko and Claudia Tenney into the same district could be accomplished without dividing my county or any county at all. The combined 2020 population of Onondaga, Madison and Oneida counties is 776,657. This is almost exactly the ideal district size of 776,971 (1/26 of the state population). Drawing a new congressional district from just these three counties would satisfy the Democrats' urge to force Tenney (Oneida County) and Katko (Onondaga County) to compete against each other, without dismembering Madison or other counties. This can furthermore be done without forcing any other anomalies in the surrounding districts, as can be mathematically proven. (See the map for Upstate Congressional Districts that I have just proposed to the IRC at MarkBraiman.com.) This map keeps every NY county undivided between Congressional districts, excepting of course the nine over 776,971 in size.
The Democrats on the IRC have also proposed obscenely gerrymandered New York Senate districts for Madison and Onondaga Counties. In their map, Madison is one of the few lucky small Upstate counties that escapes being divided into multiple Senate districts. However, it is once again thrown in with a motley collection of barely contiguous Onondaga County towns, henceforth to bear the appearance of a grotesque bobcat, curled almost all the way around the city of Syracuse in an act of animalistic self-grooming.
Speaking of animalistic behavior, the Republican members of the IRC have responded with a map that is just as obnoxiously partisan, despite featuring much simpler-shaped state Senate districts for Madison and Onondaga Counties. Their map takes Madison County entirely out of Sen. Rachel May's Syracuse district (entirely reasonable) but puts her and fellow incumbent Democratic Sen. John Mannion into a single elongated district (not so reasonable). In the process, the Republicans propose to split Onondaga County into four distinct Senate districts. None of these are contained entirely within Onondaga County, despite it having a population 1.5 times the ideal State Senate district size of 320,655. Could the two nonpartisan members of the IRC have the integrity to stand up and say, "A plague on both your houses!"?
In sum, both Democratic and Republican wings of the IRC have put raw partisan self-interest over the reasonable and constitutionally mandated goal of keeping small counties intact wherever possible. The New York Constitution, Article IV, section 4, paragraph (c)(6) states clearly: "The requirements that senate districts not divide counties or towns ... shall remain in effect." These requirements have been part of our state Constitution for nearly 250 years, but over the past half-century have increasingly been breached for partisan purposes.
Dividing smaller Upstate counties between multiple congressional and legislative districts puts unnecessary burdens on voters, who must figure out which races they are voting in. It thereby alienates us further from the electoral process. It also burdens these small counties' Boards of Elections by unnecessarily increasing the number of races they have to count.
More important, the ongoing violation of constitutional districting provisions since the 1970s has weakened the voices of local leaders in state government. It has likely contributed to the growth of state mandates on counties and other local governments, for example the requirement for counties to fund Medicaid using property taxes.
Whatever the need may be to divide large Downstate New York counties and cities among multiple districts in order to keep those districts nearly equal in size, this need is not present for smaller Upstate jurisdictions, as my math shows. My proposed map keeps every Upstate city, village and town undivided, as well as all 49 of the counties with a 2020 population under 320,655. It also follows another key precept of fairness to all counties, by guaranteeing each of the other six larger counties north of New York City (including Onondaga) at least one core Senate district entirely within the county. It even manages to do all this without forcing Sens. May and Mannion, who live barely five miles apart, into the same Senate district.
Also in Opinion: Editorial cartoons for Sept. 19, 2021: Gen. Milley's back channel, Biden's Covid mandate, California recall
Read the original post:
NY redistricting commission's obscenely partisan maps defy will of voters (Guest Opinion by Mark Braiman) - syracuse.com
Posted in Libertarianism
Comments Off on NY redistricting commission's obscenely partisan maps defy will of voters (Guest Opinion by Mark Braiman) – syracuse.com
WWE Mayor Kane Defies Authority, Will Not Comply with Vaccine Mandate – Bleeding Cool News
Posted: at 9:32 am
Former WWE Superstar turned Mayor of Knox County, Tennessee, Kane, may have once been a stooge for The Authority of Triple H and Stephanie McMahon, but when it comes to a Democratic president, it's another story. Mayor Kane unleashed hellfire and brimstone on President Joe Biden, rival of Mayor Kane's fellow WWE Hall of Famer, former president Donald Trump, over Biden's COVID-19 vaccine mandates. According to The Big Red Machine, Knox County, Tennessee, will not comply with the federal rules.
Mayor Kane announced the decision in a pair of tweets that included a letter addressed to the president.
In the letter, Mayor Kane accuses Biden of violating the Constitution with the order. "Mr. President, if we as elected officials ignore, disregard, and contravene the laws which bind us, how can we expect our fellow citizens to respect and follow the laws which bind all of us as a society?" asked The Devil's Favorite Demon, while vowing to ignore, disregard, and contravene Biden's executive order. Mayor Kane also went on to take President Biden to task for the war in Afghanistan, which makes sense, since the only time Kane thinks Americans should travel to the Middle East is when they're teaming with The Undertaker to battle Triple H and Shawn Michaels in front of the Saudi Royal Family.
Under the leadership of Mayor Kane, the only Libertarian political figure to receive the endorsements of both Senator Rand Paul and Bryan Danielson, Knox County is currently experiencing a coronavirus infection spike higher than at any other time during the pandemic, which is no surprise, considering Mayor Kane opposes pretty much every effort to stem the disease's spread. Kane has previously complained about bans on large gatherings after they prevented him from speaking at an event known as the Juggalo Gathering for Libertarians. Kane was later forced to apologize to Knox County's own Board of Health after cutting a shoot promo on them over coronavirus safety protocols. Later, it was reported that 975 COVID-19 vaccines went missing under Mayor Kane's regime, though it was eventually found that the vaccines had been accidentally thrown in the trash and not, as originally reported, stolen.
View original post here:
WWE Mayor Kane Defies Authority, Will Not Comply with Vaccine Mandate - Bleeding Cool News
Posted in Libertarianism
Comments Off on WWE Mayor Kane Defies Authority, Will Not Comply with Vaccine Mandate – Bleeding Cool News
Proud Boy Afghan refugee hunting permit stickers found on University of Michigan campus – MLive.com
Posted: at 9:31 am
ANN ARBOR, MI -- Several Proud Boy stickers marked as "Afghan refugee hunting permits" were discovered on the University of Michigan campus by a student recently.
A student spotted the insensitive stickers and reported them to police on Sunday, Sept. 12, according to Rick Fitzgerald, University of Michigan spokesman.
The stickers, which each had "Proud Boy" and "Afghan Refugee Hunting Permit" written on them, were found on various properties near the university's West Hall. They were removed by the student who found them, Fitzgerald said.
The stickers had a permit number of 09*11*01 with no bag limit and no expiration to hunt and kill Afghan refugees nationwide.
It is unknown where the stickers came from or who placed them. The matter remains under investigation, according to officials.
"Bigotry has no place on this campus," Fitzgerald said.
The stickers were discovered a day after the 20-year anniversary of the Sept. 11, 2001, terrorist attacks, which led to the U.S. invasion of Afghanistan. After 20 years in Afghanistan, the U.S. withdrew from the country in August in what was described as a chaotic evacuation in which thousands of refugees were left in limbo, struggling to get out of the country.
Several private organizations in Michigan are taking in refugees, including the Jewish Family Services in Ann Arbor. Grand Rapids is expected to take about 500 refugees by the end of the month.
As Michigan prepares to receive Afghan refugees, Grand Rapids vigil honors their struggle
The Proud Boys is described as a far-right organization that uses intimidation to instigate conflict while regularly spouting white nationalist memes and maintaining affiliations with known extremists, according to the Southern Poverty Law Center.
The organization marched in Kalamazoo in September 2020, an event that ended in violence between the group and counterprotesters.
Why the Proud Boys visited Kalamazoo
Anyone with information about the incident is asked to contact University of Michigan Division of Public Safety and Security at 734-763-1131.
More from MLive:
Man tells police he was robbed by friend at gunpoint after showing him $1,700 engagement ring
Michigan State University, Henry Ford Health join forces to bolster cancer, health care research
Ypsilanti advances proposal to allow accessory apartments on 3K more properties
Originally posted here:
Proud Boy Afghan refugee hunting permit stickers found on University of Michigan campus - MLive.com
Posted in Proud Boys
Comments Off on Proud Boy Afghan refugee hunting permit stickers found on University of Michigan campus – MLive.com
Improved algorithms may be more important for AI performance than faster hardware – VentureBeat
Posted: at 9:30 am
When it comes to AI, algorithmic innovations are substantially more important than hardware, at least where the problems involve billions to trillions of data points. That's the conclusion of a team of scientists at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), who conducted what they claim is the first study of how fast algorithms are improving across a broad range of examples.
Algorithms tell software how to make sense of text, visual, and audio data so that it can, in turn, draw inferences from it. For example, OpenAI's GPT-3 was trained on webpages, ebooks, and other documents to learn how to write papers in a humanlike way. The more efficient the algorithm, the less work the software has to do. And as algorithms are enhanced, less computing power should be needed, in theory. But this isn't settled science. AI research and infrastructure startups like OpenAI and Cerebras are betting that models will have to increase in size substantially to reach higher levels of sophistication.
The CSAIL team, led by MIT research scientist Neil Thompson, who previously coauthored a paper showing that algorithms were approaching the limits of modern computing hardware, analyzed data from 57 computer science textbooks and more than 1,110 research papers to trace the history of where algorithms improved. In total, they looked at 113 algorithm families, or sets of algorithms that solved the same problem, that had been highlighted as most important by the textbooks.
The team reconstructed the history of the 113, tracking each time a new algorithm was proposed for a problem and making special note of those that were more efficient. Starting from the 1940s to now, the team found an average of eight algorithms per family, of which a couple improved in efficiency.
For large computing problems, 43% of algorithm families had year-on-year improvements that were equal to or larger than the gains from Moore's law, the principle that the speed of computers roughly doubles every two years. In 14% of problems, the performance improvements vastly outpaced those that came from improved hardware, with the gains from better algorithms being particularly meaningful for big-data problems.
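To make the comparison concrete, here is a minimal, purely illustrative sketch (not taken from the MIT study) of why an algorithmic improvement can outpace a hardware doubling: replacing a quadratic-time approach with an n-log-n one for the same problem.

```python
# Closest-pair-of-numbers problem solved two ways: a naive O(n^2) scan versus
# an O(n log n) sort-then-compare. The speedup from the better algorithm grows
# with input size, while a Moore's-law style hardware gain is a fixed factor.
import random
import time

def closest_gap_naive(xs):
    """Compare every pair of values: O(n^2)."""
    best = float("inf")
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            best = min(best, abs(xs[i] - xs[j]))
    return best

def closest_gap_sorted(xs):
    """Sort once, then only compare neighbours: O(n log n)."""
    s = sorted(xs)
    return min(b - a for a, b in zip(s, s[1:]))

xs = [random.random() for _ in range(5_000)]

t0 = time.perf_counter(); slow = closest_gap_naive(xs)
t1 = time.perf_counter(); fast = closest_gap_sorted(xs)
t2 = time.perf_counter()

assert abs(slow - fast) < 1e-12
print(f"naive:  {t1 - t0:.2f} s")
print(f"sorted: {t2 - t1:.4f} s")
```

Doubling the input size roughly quadruples the naive running time but barely moves the sorted version, so the algorithmic gain compounds with problem size in a way that a one-off hardware speedup does not.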
The new MIT study adds to a growing body of evidence that the size of algorithms matters less than their architectural complexity. For example, earlier this month, a team of Google researchers published a study claiming that a model much smaller than GPT-3, fine-tuned language net (FLAN), bests GPT-3 by a large margin on a number of challenging benchmarks. And in a 2020 survey, OpenAI found that since 2012, the amount of compute needed to train an AI model to the same performance on classifying images in a popular benchmark, ImageNet, has been decreasing by a factor of two every 16 months.
There are findings to the contrary. In 2018, OpenAI researchers released a separate analysis showing that from 2012 to 2018, the amount of compute used in the largest AI training runs grew more than 300,000 times, with a 3.5-month doubling time, exceeding the pace of Moore's law. But assuming algorithmic improvements receive greater attention in the years to come, they could solve some of the other problems associated with large language models, like environmental impact and cost.
In June 2020, researchers at the University of Massachusetts at Amherst released a report estimating that the amount of power required for training and searching a certain model involves the emissions of roughly 626,000 pounds of carbon dioxide, equivalent to nearly 5 times the lifetime emissions of the average U.S. car. GPT-3 alone used 1,287 megawatt-hours of electricity during training and produced 552 metric tons of carbon dioxide emissions, a Google study found, the same amount emitted by 100 average homes' electricity usage over a year.
On the expenses side, a Synced report estimated that the University of Washington's Grover fake news detection model cost $25,000 to train; OpenAI reportedly racked up $12 million training GPT-3; and Google spent around $6,912 to train BERT. While AI training costs dropped 100-fold between 2017 and 2019, according to one source, these amounts far exceed the computing budgets of most startups and institutions, let alone independent researchers.
"Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved," Thompson said in a press release. "In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside."
Read more from the original source:
Improved algorithms may be more important for AI performance than faster hardware - VentureBeat
Posted in Ai
Comments Off on Improved algorithms may be more important for AI performance than faster hardware – VentureBeat
Abductive inference: The blind spot of artificial intelligence – TechTalks
Posted: at 9:30 am
Welcome to AI book reviews, a series of posts that explore the latest literature on artificial intelligence.
Recent advances in deep learning have rekindled interest in the imminence of machines that can think and act like humans, or artificial general intelligence. By following the path of building bigger and better neural networks, the thinking goes, we will be able to get closer and closer to creating a digital version of the human brain.
But this is a myth, argues computer scientist Erik Larson, and all evidence suggests that human and machine intelligence are radically different. Larson's new book, The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do, discusses how widely publicized misconceptions about intelligence and inference have led AI research down narrow paths that are limiting innovation and scientific discoveries.
And unless scientists, researchers, and the organizations that support their work change course, Larson warns, they will be doomed to "resignation to the creep of a machine-land, where genuine invention is sidelined in favor of futuristic talk advocating current approaches, often from entrenched interests."
From a scientific standpoint, the myth of AI assumes that we will achieve artificial general intelligence (AGI) by making progress on narrow applications, such as classifying images, understanding voice commands, or playing games. But the technologies underlying these narrow AI systems do not address the broader challenges that must be solved for general intelligence capabilities, such as holding basic conversations, accomplishing simple chores in a house, or other tasks that require common sense.
"As we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress, but rather picking the low-hanging fruit," Larson writes.
The cultural consequence of the myth of AI is ignoring the scientific mystery of intelligence and endlessly talking about ongoing progress on deep learning and other contemporary technologies. This myth discourages scientists from thinking about new ways to tackle the challenge of intelligence.
"We are unlikely to get innovation if we choose to ignore a core mystery rather than face it up," Larson writes. "A healthy culture for innovation emphasizes exploring unknowns, not hyping extensions of existing methods ... Mythology about inevitable success in AI tends to extinguish the very culture of invention necessary for real progress."
You step out of your home and notice that the street is wet. Your first thought is that it must have been raining. But its sunny and the sidewalk is dry, so you immediately cross out the possibility of rain. As you look to the side, you see a road wash tanker parked down the street. You conclude that the road is wet because the tanker washed it.
This is an example of inference, the act of going from observations to conclusions, and it is the basic function of intelligent beings. We're constantly inferring things based on what we know and what we perceive. Most of it happens subconsciously, in the background of our minds, without focus and direct attention.
"Any system that infers must have some basic intelligence, because the very act of using what is known and what is observed to update beliefs is inescapably tied up with what we mean by intelligence," Larson writes.
AI researchers base their systems on two types of inference machines: deductive and inductive. Deductive inference uses prior knowledge to reason about the world. This is the basis of symbolic artificial intelligence, the main focus of researchers in the early decades of AI. Engineers create symbolic systems by endowing them with a predefined set of rules and facts, and the AI uses this knowledge to reason about the data it receives.
Inductive inference, which has gained more traction among AI researchers and tech companies in the past decade, is the acquisition of knowledge through experience. Machine learning algorithms are inductive inference engines. An ML model trained on relevant examples will find patterns that map inputs to outputs. In recent years, AI researchers have used machine learning, big data, and advanced processors to train models on tasks that were beyond the capacity of symbolic systems.
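As a concrete contrast with the rule-based, deductive picture above, here is a minimal sketch of inductive inference in the machine-learning sense: a model infers a mapping from example input-output pairs and applies it to unseen input. The data and the linear model are invented for illustration.

```python
# Induction as pattern-fitting: learn y ~ a*x + b from examples, then predict.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])   # roughly y = 2x

# least-squares fit of a line (the "training" step)
a, b = np.polyfit(x, y, deg=1)

# the learned rule is then applied to an input it has never seen
print(f"learned rule: y = {a:.2f}x + {b:.2f}")
print(f"prediction for x = 6: {a * 6 + b:.2f}")
```

The point Larson presses is that nothing in this procedure generates a new hypothesis about why the pattern holds; it only extrapolates regularities already present in the data.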
A third type of reasoning, abductive inference, was first introduced by American scientist Charles Sanders Peirce in the 19th century. Abductive inference is the cognitive ability to come up with intuitions and hypotheses, to make guesses that are better than random stabs at the truth.
For example, there can be numerous reasons for the street to be wet (including some that we haven't directly experienced before), but abductive inference enables us to select the most promising hypotheses, quickly eliminate the wrong ones, look for new ones and reach a reliable conclusion. As Larson puts it in The Myth of Artificial Intelligence, "We guess, out of a background of effectively infinite possibilities, which hypotheses seem likely or plausible."
Abductive inference is what many refer to as common sense. It is the conceptual framework within which we view facts or data and the glue that brings the other types of inference together. It enables us to focus at any moment on what's relevant among the ton of information that exists in our mind and the ton of data we're receiving through our senses.
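To make the idea of abduction as hypothesis selection concrete, here is a toy sketch of the wet-street example above. It is my own illustration rather than Larson's formalism or any real AI system, and the scoring rule is deliberately simplistic.

```python
# Abduction as picking the hypothesis that best explains the observations.
observations = {"street_wet": True, "sky_clear": True,
                "sidewalk_dry": True, "tanker_nearby": True}

# Each candidate hypothesis lists the observations it would lead us to expect.
hypotheses = {
    "it rained":                {"street_wet": True, "sky_clear": False, "sidewalk_dry": False},
    "a tanker washed the road": {"street_wet": True, "tanker_nearby": True},
    "a pipe burst":             {"street_wet": True},
}

def explanatory_score(expected, observed):
    """Reward matched expectations, penalise contradicted ones."""
    return sum(1 if observed.get(k) == v else -1 for k, v in expected.items())

best = max(hypotheses, key=lambda h: explanatory_score(hypotheses[h], observations))
print(best)  # -> "a tanker washed the road"
```

A real abductive reasoner would also have to generate new hypotheses on the fly and weigh their prior plausibility, which no fixed dictionary of candidates can capture.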
The problem is that the AI community hasn't paid enough attention to abductive inference.
Abduction entered the AI discussion with attempts at Abductive Logic Programming in the 1980s and 1990s, but "those efforts were flawed and later abandoned. They were reformulations of logic programming, which is a variant of deduction," Larson told TechTalks.
Abduction got another chance in the 2010s as Bayesian networks, inference engines that try to compute causality. But like the earlier approaches, the newer approaches shared the flaw of not capturing true abduction, Larson said, adding that Bayesian and other graphical models "are variants of induction." In The Myth of Artificial Intelligence, he refers to them as "abduction in name only."
For the most part, the history of AI has been dominated by deduction and induction.
"When the early AI pioneers like [Alan] Newell, [Herbert] Simon, [John] McCarthy, and [Marvin] Minsky took up the question of artificial inference (the core of AI), they assumed that writing deductive-style rules would suffice to generate intelligent thought and action," Larson said. "That was never the case, really, as should have been earlier acknowledged in discussions about how we do science."
For decades, researchers tried to expand the powers of symbolic AI systems by providing them with manually written rules and facts. The premise was that if you endow an AI system with all the knowledge that humans know, it will be able to act as smartly as humans. But pure symbolic AI has failed for various reasons. Symbolic systems can't acquire and add new knowledge, which makes them rigid. Creating symbolic AI becomes an endless chase of adding new facts and rules, only to find the system making new mistakes that it can't fix. And much of our knowledge is implicit and cannot be expressed in rules and facts and fed to symbolic systems.
"It's curious here that no one really explicitly stopped and said 'Wait. This is not going to work!'" Larson said. "That would have shifted research directly towards abduction or hypothesis generation or, say, context-sensitive inference."
In the past two decades, with the growing availability of data and compute resources, machine learning algorithms, especially deep neural networks, have become the focus of attention in the AI community. Deep learning technology has unlocked many applications that were previously beyond the limits of computers. And it has attracted interest and money from some of the wealthiest companies in the world.
"I think with the advent of the World Wide Web, the empirical or inductive (data-centric) approaches took over, and abduction, as with deduction, was largely forgotten," Larson said.
But machine learning systems also suffer from severe limits, including the lack of causality, poor handling of edge cases, and the need for too much data. And these limits are becoming more evident and problematic as researchers try to apply ML to sensitive fields such as healthcare and finance.
Some scientists, including reinforcement learning pioneer Richard Sutton, believe that we should stick to methods that can scale with the availability of data and computation, namely learning and search. For example, as neural networks grow bigger and are trained on more data, they will eventually overcome their limits and lead to new breakthroughs.
Larson dismisses the scaling up of data-driven AI as fundamentally flawed as a model for intelligence. While both search and learning can provide useful applications, they are based on non-abductive inference, he reiterates.
"Search won't scale into commonsense or abductive inference without a revolution in thinking about inference, which hasn't happened yet. Similarly with machine learning, the data-driven nature of learning approaches means essentially that the inferences have to be in the data, so to speak, and that's demonstrably not true of many intelligent inferences that people routinely perform," Larson said. "We don't just look to the past, captured, say, in a large dataset, to figure out what to conclude or think or infer about the future."
Other scientists believe that hybrid AI that brings together symbolic systems and neural networks will have a bigger promise of dealing with the shortcomings of deep learning. One example is IBM Watson, which became famous when it beat world champions at Jeopardy! More recent proof-of-concept hybrid models have shown promising results in applications where symbolic AI and deep learning alone perform poorly.
Larson believes that hybrid systems can fill in the gaps in machine learning-only or rules-based-only approaches. As a researcher in the field of natural language processing, he is currently working on combining large pre-trained language models like GPT-3 with older work on the semantic web, in the form of knowledge graphs, to create better applications in search, question answering, and other tasks.
"But deduction-induction combos don't get us to abduction, because the three types of inference are formally distinct, so they don't reduce to each other and can't be combined to get a third," he said.
In The Myth of Artificial Intelligence, Larson describes attempts to circumvent abduction as the inference trap.
"Purely inductively inspired techniques like machine learning remain inadequate, no matter how fast computers get, and hybrid systems like Watson fall short of general understanding as well," he writes. "In open-ended scenarios requiring knowledge about the world, like language understanding, abduction is central and irreplaceable. Because of this, attempts at combining deductive and inductive strategies are always doomed to fail ... The field needs a fundamental theory of abduction. In the meantime, we are stuck in traps."
The AI community's narrow focus on data-driven approaches has centralized research and innovation in a few organizations that have vast stores of data and deep pockets. With deep learning becoming a useful way to turn data into profitable products, big tech companies are now locked in a tight race to hire AI talent, driving researchers away from academia by offering them lucrative salaries.
This shift has made it very difficult for non-profit labs and small companies to become involved in AI research.
"When you tie research and development in AI to the ownership and control of very large datasets, you get a barrier to entry for start-ups, who don't own the data," Larson said, adding that data-driven AI intrinsically creates winner-take-all scenarios in the commercial sector.
The monopolization of AI is in turn hampering scientific research. With big tech companies focusing on creating applications in which they can leverage their vast data resources to maintain the edge over their competitors, there's little incentive to explore alternative approaches to AI. Work in the field starts to skew toward narrow and profitable applications at the expense of efforts that can lead to new inventions.
"No one at present knows how AI would look in the absence of such gargantuan centralized datasets, so there's nothing really on offer for entrepreneurs looking to compete by designing different and more powerful AI," Larson said.
In his book, Larson warns about the current culture of AI, which is squeezing profits out of low-hanging fruit while continuing to spin AI mythology. The illusion of progress on artificial general intelligence can lead to another AI winter, he writes.
But while an AI winter might dampen interest in deep learning and data-driven AI, it can open the way for a new generation of thinkers to explore new pathways. Larson hopes scientists start looking beyond existing methods.
In The Myth of Artificial Intelligence, Larson provides an inference framework that sheds light on the challenges that the field faces today and helps readers to see through the overblown claims about progress toward AGI or singularity.
"My hope is that non-specialists have some tools to combat this kind of inevitability thinking, which isn't scientific, and that my colleagues and other AI scientists can view it as a wake-up call to get to work on the very real problems the field faces," Larson said.
View original post here:
Abductive inference: The blind spot of artificial intelligence - TechTalks
Posted in Ai
Comments Off on Abductive inference: The blind spot of artificial intelligence – TechTalks
This AI could predict 10 years of scientific priorities, if we let it – MIT Technology Review
Posted: at 9:30 am
The survey committee, which receives input from a host of smaller panels, takes into account a gargantuan amount of information to create research strategies. Although the Academies won't release the committee's final recommendation to NASA for a few more weeks, scientists are itching to know which of their questions will make it in, and which will be left out.
"The Decadal Survey really helps NASA decide how they're going to lead the future of human discovery in space, so it's really important that they're well informed," says Brant Robertson, a professor of astronomy and astrophysics at UC Santa Cruz.
One team of researchers wants to use artificial intelligence to make this process easier. Their proposal isn't for a specific mission or line of questioning; rather, they say, their AI can help scientists make tough decisions about which other proposals to prioritize.
The idea is that by training an AI to spot research areas that are either growing or declining rapidly, the tool could make it easier for survey committees and panels to decide what should make the list.
"What we wanted was to have a system that would do a lot of the work that the Decadal Survey does, and let the scientists working on the Decadal Survey do what they will do best," says Harley Thronson, a retired senior scientist at NASA's Goddard Space Flight Center and lead author of the proposal.
Although members of each committee are chosen for their expertise in their respective fields, it's impossible for every member to grasp the nuance of every scientific theme. The number of astrophysics publications increases by 5% every year, according to the authors. That's a lot for anyone to process.
That's where Thronson's AI comes in.
It took just over a year to build, but eventually, Thronson's team was able to train it on more than 400,000 pieces of research published in the decade leading up to the Astro2010 survey. They were also able to teach the AI to sift through thousands of abstracts to identify both low- and high-impact areas from two- and three-word topic phrases like "planetary system" or "extrasolar planet."
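The proposal does not spell out the model's internals, but the general recipe described here, tracking how often short topic phrases appear in abstracts over time and flagging the fastest risers, can be sketched in a few lines. The corpus, phrases, and growth score below are invented for illustration.

```python
# Toy trend detection over (year, abstract) pairs: count topic phrases per year
# and rank phrases by how much their usage grew in the later half of the period.
from collections import Counter

abstracts = [
    (2001, "a survey of galaxy formation at high redshift"),
    (2004, "extrasolar planet detection via radial velocity"),
    (2007, "extrasolar planet atmospheres and planetary system architecture"),
    (2009, "planetary system dynamics around an extrasolar planet host"),
]
phrases = ["extrasolar planet", "planetary system", "galaxy formation"]

def counts_by_year(docs, phrases):
    counts = {p: Counter() for p in phrases}
    for year, text in docs:
        for p in phrases:
            if p in text:
                counts[p][year] += 1
    return counts

def growth(year_counts, split_year=2005):
    late = sum(n for y, n in year_counts.items() if y >= split_year)
    early = sum(n for y, n in year_counts.items() if y < split_year)
    return late - early

counts = counts_by_year(abstracts, phrases)
for p in sorted(phrases, key=lambda p: growth(counts[p]), reverse=True):
    print(f"{p}: growth score {growth(counts[p])}")
```

The actual system would replace this handful of abstracts with hundreds of thousands of papers and a far more careful impact measure, but the shape of the computation is the same.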
According to the researchers' whitepaper, the AI successfully backcasted six popular research themes of the last 10 years, including a meteoric rise in exoplanet research and observation of galaxies.
"One of the challenging aspects of artificial intelligence is that they sometimes will predict, or come up with, or analyze things that are completely surprising to the humans," says Thronson. "And we saw this a lot."
Thronson and his collaborators think the steering committee should use their AI to help review and summarize the vast amounts of text the panel must sift through, leaving human experts to make the final call.
Their research isn't the first to try to use AI to analyze and shape scientific literature. Other AIs have already been used to help scientists peer-review their colleagues' work.
But could it be trusted with a task as important and influential as the Decadal Survey?
Read the rest here:
This AI could predict 10 years of scientific priorities, if we let it - MIT Technology Review
Posted in Ai
Comments Off on This AI could predict 10 years of scientific priorities, if we let it – MIT Technology Review
AI Disruption: What VCs Are Betting On – Forbes
Posted: at 9:30 am
According to data from PitchBook, the funding for AI deals has continued its furious pace. In the latest quarter, the amount invested came to a record $31.6 billion. Note that there were 11 deals that closed at more than $500 million.
Granted, plenty of these startups will fade away or even go bust.But of course, some will ultimately disrupt industries and change the landscape of the global economy.
"To be disrupted, you have to believe the AI is going to make 10x better recommendations than what's available today," said Eric Vishria, who is a General Partner at Benchmark. "I think that is likely to happen in really complex, high dimensional spaces, where there are so many intermingled factors at play that finding correlations via standard analytical techniques is really difficult."
So then what are some of the industries that are vulnerable to AI disruption? Well, let's see where some of the top VCs are investing today:
Software Development: There have been advances in DevOps and IDEs. Yet software development remains labor intensive. And it does not help that it's extremely difficult to recruit qualified developers.
But AI can make a big difference. "Advancements in state-of-the-art natural language processing algorithms could revolutionize software development, initially by significantly reducing the boilerplate code that software developers write today and in the long run by writing entire applications with little assistance from humans," said Nnamdi Iregbulem, who is a Partner at Lightspeed Venture Partners.
Consider the use of GPT-3, which is a neural network that trains models to create content. "Products like GitHub Copilot, which are also based on GPT-3, will also disrupt software development," said Jai Das, who is the President and Partner at Sapphire Ventures.
Cybersecurity: This is one of the biggest software markets. But the technologies really need retooling. After all, there continue to be more and more breaches and hacks.
"Cybersecurity is likely to turn into an AI-vs-AI game very soon," said Deepak Jeevankumar, who is a Managing Director at Dell Technologies Capital. "Sophisticated attackers are already using AI and bots to get over defenses."
Construction: This is a massive industry and will continue to grow, as the global population continues to increase. Yet construction has seen relatively small amounts of IT investment. But AI could be a game changer.
"An incremental 1% increase in efficiency can mean millions of dollars in cost savings," said Shawn Carolan, who is a Managing Partner at Menlo Ventures. "There are many companies, like Openspace.ai, doing transformative work using AI in the construction space. Openspace leverages AI and machine vision to essentially become a photographic memory for job sites. It automatically uploads and stitches together images of a job site so that customers can do a virtual walk-through and monitor the project at any time."
Talent Management: HR has generally lagged with innovation. The fact is that many of the processes are manual and inefficient.
But AI can certainly be a solution. In fact, AI startups like Eightfold.ai have been able to post substantial growth in the HR category. In June, the company announced funding of $220 million, which was led by the SoftBank Vision Fund 2.
"Every single company is talking about talent as a key priority, and the companies that embrace AI to find better candidates faster, cheaper, at scale, they have a true competitive advantage," said Kirthiga Reddy, who is a Partner at SoftBank. "Understanding how to use AI to amplify the interactions in the talent lifecycle is a differentiator and advantage for these businesses."
Drug Discovery: The development of the Covid-19 vaccines, from companies like Pfizer, Moderna and BioNTech, has highlighted the power of innovation in the healthcare industry. But despite this, there is still much to be done. The fact is that drug development is costly and time-consuming.
"It's becoming impossible to process these large datasets without using the latest AI/ML technologies," said Dusan Perovic, who is a partner at Two Sigma Ventures. "Companies that are early adopters of these data science tools and thereby are able to analyze larger datasets are going to make faster progress than companies that rely on older data analytics tools."
Tom (@ttaulli) is an advisor/board member to startups and the author of Artificial Intelligence Basics: A Non-Technical Introduction, The Robotic Process Automation Handbook: A Guide to Implementing RPA Systems and Implementing AI Systems: Transform Your Business in 6 Steps. He also has developed various online courses, such as for the COBOL.
See the rest here:
Posted in Ai
Comments Off on AI Disruption: What VCs Are Betting On – Forbes
AI in Pro Sport: Laying the Groundwork – SportTechie
Posted: at 9:30 am
Even advocates of artificial intelligence (AI) will acknowledge that the concept has endured some false starts over the years. However, the past decade has brought a transformation in how AI is perceived in sport -- with the clubs, leagues, organizations and businesses that underpin the industry discovering the innovations that can emerge through the simulation of human intelligence in machines.
"Members of the public rely on this technology every day, and we take it for granted," says Dr. Patrick Lucey, chief scientist at sports data and analytics provider Stats Perform.
"The availability of data these days is one big difference that has driven AI adoption. There is also a greater appreciation of AI, and the return on investment is there to see via objective measures. It also helps that AI can be applied across all business segments."
Barriers to Adoption
AI is fueled by crunching swathes of data via iterative processing and algorithms that allow software platforms to identify patterns and predict future outcomes. So, it follows that the increasing volumes of data being collected and analyzed in the sports industry in recent years have refined such processes, generating more accurate results and bottom-line benefits.
However, given its definition, it is hardly surprising that AI has also been a difficult notion to grasp for many, especially given how it is often used interchangeably with machine learning -- a strand of AI that focuses on how computers can imitate the way that humans learn.
This barrier to adoption, though, has slowly evaporated as clubs and franchises have gradually learned to gauge the real-life results from an idea that many initially considered to be abstract.
"When terms like AI and data science were first being bandied around, I was one of those who didn't understand the value of it," says Ben Mackriell, VP data, AI and pro products at Stats Perform.
"But now there is a greater level of understanding in the market in general that AI is simply a mechanism that enables better experiences, with the core ingredient being data. The challenge is to make AI consumable and break down some of the myths. The process is complex, but the output doesn't have to be complex."
Journey of Understanding
Sports clubs have been on this journey of understanding how deploying AI can ultimately improve results, and there is certainly no turning back now. From a performance perspective, more than 350 clubs across various sports rely on Stats Perform's data and technology services, of which AI is a central component.
Stats Perform was the first company to offer player tracking technology in basketball more than a decade ago. It is now unthinkable for a team in the NBA, as well as any other leading league, not to have analysts on the payroll. "It is an area that has grown exponentially over the past 10 years," Mackriell adds. "Most Premier League clubs had one or two analysts a decade ago. Now, it is common for them to have more than 10 people working across multiple aspects of data analytics.
"Clubs are hiring data engineers now and you would not have seen that even just three years ago."
Vivid Illustration
During this summer's delayed UEFA Euro 2020 soccer tournament, Stats Perform presented a vivid illustration of how consumable its AI capabilities can be for fans across Europe and beyond with its Euros Prediction model. Through Stats Perform's public-facing digital platform, The Analyst, the model estimated the probability of each match outcome by using a series of inputs that ranged from historical team and player performances to betting market odds and team rankings.
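Stats Perform has not published the model's internals, but the basic shape of such a predictor can be sketched as a multinomial logistic ("softmax") model over match features. The features and weights below are invented purely for illustration.

```python
# Toy match-outcome model: a linear score per outcome, turned into
# probabilities with a softmax. Real systems learn the weights from
# historical results rather than hand-picking them.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# features for one fixture: rating gap, home advantage, recent form difference
features = {"rating_gap": 0.8, "home": 1.0, "form_diff": 0.3}

weights = {  # illustrative weights per outcome
    "home win": {"rating_gap": 1.2, "home": 0.4, "form_diff": 0.6},
    "draw":     {"rating_gap": 0.0, "home": 0.1, "form_diff": 0.0},
    "away win": {"rating_gap": -1.2, "home": -0.3, "form_diff": -0.6},
}

scores = [sum(w[k] * features[k] for k in features) for w in weights.values()]
for outcome, p in zip(weights, softmax(scores)):
    print(f"{outcome}: {p:.1%}")
```

Re-running such a model every time a goal goes in, as described next, simply means updating the features (score state, time remaining) and recomputing the probabilities.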
"Hundreds of thousands of scenarios were being crunched every time a goal went in," Mackriell says. For clubs, though, AI-driven predictive modelling can provide insights that delve even deeper. Stats Perform's Playing Styles framework, for example, takes into consideration numerous events and factors to determine a team's tendencies. Eight playing styles are put under the microscope, from build-up play to counter attacks.
Such data-based insights can then be used to identify the roles of individual players within each style and also analyze, crucially, in an age of sky-high salaries and transfer fees, how a possible new signing would slot into the existing team system. "Every action and phase on the field is broken down," Mackriell adds. "Every action on the pitch can be quantified in terms of how likely it is to lead to a goal, and you can see how individuals contribute towards a goal-scoring opportunity.
"This supports decision-making and assists in terms of scouting and investing in the team. One of the most common questions we are asked by a club is: How will this player's skills translate into our team and league? That is where teams are seeing a return on investment with AI."
Moneyball
For sports clubs and franchises, AI is Moneyball 2.0 -- using data to introduce layers of predictive insights that can help them make sound business decisions. Most importantly, it is about focusing on solving a problem at the outset. "We spend time with clubs across multiple sports to identify the problems they are trying to solve," Mackriell says. "This problem-solving approach is how we deploy AI as a company, rather than just trying to bring together AI tech and data."
Given increasing levels of data coverage, the results for clubs and franchises worldwide will become increasingly sophisticated, according to Lucey. "Sport has been a slow adopter as clubs are understandably private about how they operate," he says. "Like anything in sport, though, once there is success, there is a snowball effect."
See the original post:
Posted in Ai
Comments Off on AI in Pro Sport: Laying the Groundwork – SportTechie
Healthily and Best Practice AI publish world’s first AI Explainability Statement reviewed by the ICO – Yahoo Finance
Posted: at 9:30 am
The team included Simmons & Simmons and Jacob Turner of Fountain Court Chambers to bring a 360 degree legal, regulatory, technical and commercial AI perspective
LONDON, Sept. 20, 2021 /PRNewswire/ -- One of the world's leading AI smart symptom checkers has taken the groundbreaking decision to publish a statement explaining how it works.
Healthily, supported by Best Practice AI together with Simmons & Simmons and Jacob Turner of Fountain Court Chambers, today publishes the first AI Explainability Statement to have been reviewed by the UK Information Commissioner's Office (ICO).
The Healthily AI Explainability Statement explains how Healthily uses AI in its app including why AI is being used, how the AI system was designed and how it operates.
The statement, which can be viewed here, provides a non-technical explanation of the Healthily AI to its customers, regulators and the wider public.
Around the world, there is a growing regulatory focus and consensus around the need for transparent and understandable AI. AI Explainability Statements are public-facing documents intended to provide transparency, particularly so as to comply with global best practices and AI ethical principles, as well as binding legislation. AI Explainability Statements such as this are intended to facilitate compliance with Articles 13, 14, 15 and 22 of the GDPR for organisations using AI to process personal data. The lack of such transparency has been at the heart of recent EU court cases and regulatory decisions, involving Uber and Ola in the Netherlands and Foodinho in Italy.
Healthily, a leading consumer digital healthcare company, worked with a team from the AI advisory firm, Best Practice AI, the international law firm Simmons & Simmons, and Jacob Turner from Fountain Court Chambers to create the first AI Explainability Statement in the sector.
They also engaged with the ICO. A spokesperson for the ICO confirmed:
"In preparing its Explainability Statement, Healthily received feedback from the UK's data protection regulator, the Information Commissioner's Office (ICO) and the published Statement reflects that input.
It is the first AI Explainability Statement which has had consideration from a regulator.
The ICO has welcomed the Healthily publication of its Explainability Statement as an example of how organisations can practically apply the guidance on Explaining Decisions Made With AI".
Matteo Berlucchi, CEO of Healthily said:
"We are proud to continue our effort to be at the forefront of transparency and ethical AI use for our global consumer base. It was great to work with Best Practice AI on this valuable exercise."
Simon Greenman, Partner at Best Practice AI, said:
"Businesses need to understand that AI Explainability Statements will be a critical part of rolling out AI systems that retain the necessary levels of public trust. We are proud to have worked with Healthily and the ICO to have started this journey."
To learn more about how Best Practice AI, Simmons & Simmons LLP, and Jacob Turner from Fountain Court Chambers built the AI Explainability Statement, please contact us below.
Notes for Editors
About Healthily
Healthily is the first AI healthcare platform to put self-care at the heart of healthcare, with a mix of user-friendly health tools, an award-winning app and a Smart Symptom Checker, one of the most accurate and advanced symptom checkers in the world, coupled with medical-grade information all approved by the Healthily Clinical Advisory Board. The first self-care platform registered as a Class 1 Medical Device, Healthily helps anyone, anywhere decide when to see a doctor and how to manage wellbeing safely at home. The Healthily AI platform is also licensed to telemedicine companies, health insurers, national health services and big pharma to help them scale their services more cost-effectively. All part of the Healthily mission to help one billion people find their health through informed self-care. For more information visit https://www.livehealthily.com
About Best Practice AI Ltd
Best Practice AI is a London-based AI management consultancy that advises corporates, start-ups and investors on AI strategy, implementation, risk and governance. The firm is a member of the World Economic Forum's Centre for the Fourth Industrial Revolution and has worked on the WEF's Empowering AI Leadership Board Toolkit and AI Governance Frameworks. They are on the WEF's Global AI Council and the UK All Party Parliamentary Group on AI's Enterprise Adoption Task Force. The firm publishes the world's largest library of AI case studies and use cases at https://www.bestpractice.ai/
About Simmons & Simmons
Simmons & Simmons is an international law firm with a dedicated AI Group and extensive data protection compliance experience. The firm has around 280 partners and 1300 staff working in Asia, Europe and the Middle East across 21 offices in 19 countries. They work across Asset Management & Investment Funds, Financial Institutions, Healthcare & Life Sciences and Telecoms, Media & Technology (TMT). For more information visit https://www.simmons-simmons.com
About Jacob Turner and Fountain Court Chambers
Jacob Turner is a barrister at Fountain Court Chambers with AI and data protection experience. He is the author of Robot Rules: Regulating Artificial Intelligence. He advises governments, regulators and businesses on AI regulation.
Fountain Court Chambers is a leading commercial chambers with expertise across financial and commercial disputes, regulatory proceedings and commercial crime.
Contacts
Tim Gordon press@bestpractice.ai
press@livehealthily.com or Matteo Berlucchi, CEO, matteo@livehealthily.com
Carl Philip Brandgard CarlPhilip.Brandgard@simmons-simmons.com
Helen Griffiths Helen@fountaincourt.co.uk
ICO Information
For more information on ICO guidelines for explaining decisions made with AI visit
Read the original:
Posted in Ai
Comments Off on Healthily and Best Practice AI publish world’s first AI Explainability Statement reviewed by the ICO – Yahoo Finance
Artificial intelligence success is tied to ability to augment, not just automate – ZDNet
Posted: at 9:30 am
Artificial intelligence is only a tool, but what a tool it is. It may be elevating our world into an era of enlightenment and productivity, or plunging us into a dark pit. To help achieve the former, and not the latter, it must be handled with a great deal of care and forethought. This is where technology leaders and practitioners need to step up and help pave the way, encouraging the use of AI to augment and amplify human capabilities.
Those are some of the observations drawn from Stanford University's recently released report, the next installment out of its One-Hundred-Year Study on Artificial Intelligence, an extremely long-term effort to track and monitor AI as it progresses over the coming century. The report, part of a study first launched in 2016, was prepared by a standing committee that includes a panel of 17 experts, and urges that AI be employed as a tool to augment and amplify human skills. "All stakeholders need to be involved in the design of AI assistants to produce a human-AI team that outperforms either alone. Human users must understand the AI system and its limitations to trust and use it appropriately, and AI system designers must understand the context in which the system will be used."
AI has the greatest potential when it augments human capabilities, and this is where it can be most productive, the report's authors argue. "Whether it's finding patterns in chemical interactions that lead to a new drug discovery or helping public defenders identify the most appropriate strategies to pursue, there are many ways in which AI can augment the capabilities of people. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data -- say if missing data fields are actually a signal for important, unmeasured information for some subgroup represented in the data -- working with difficult-to-fully quantify objectives, and identifying creative actions beyond what the AI may be programmed to consider."
Complete autonomy "is not the eventual goal for AI systems," the co-authors state. There needs to be "clear lines of communication between human and automated decision makers. At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help."
The report examines key areas where AI is developing and making a difference in work and lives:
Discovery:"New developments in interpretable AI and visualization of AI are making it much easier for humans to inspect AI programs more deeply and use them to explicitly organize information in a way that facilitates a human expert putting the pieces together and drawing insights," the report notes.
Decision-making: AI helps summarize data too complex for a person to easily absorb. "Summarization is now being used or actively considered in fields where large amounts of text must be read and analyzed -- whether it is following news media, doing financial research, conducting search engine optimization, or analyzing contracts, patents, or legal documents. Nascent progress in highly realistic (but currently not reliable or accurate) text generation, such as GPT-3, may also make these interactions more natural."
AI as assistant:"We are already starting to see AI programs that can process and translate text from a photograph, allowing travelers to read signage and menus. Improved translation tools will facilitate human interactions across cultures. Projects that once required a person to have highly specialized knowledge or copious amounts of time may become accessible to more people by allowing them to search for task and context-specific expertise."
Language processing: Language processing technology advances have been supported by neural network language models, including ELMo, GPT, mT5, and BERT, that "learn about how words are used in context -- including elements of grammar, meaning, and basic facts about the world -- from sifting through the patterns in naturally occurring text. These models' facility with language is already supporting applications such as machine translation, text classification, speech recognition, writing aids, and chatbots. Future applications could include improving human-AI interactions across diverse languages and situations."
Computer vision and image processing:"Many image-processing approaches use deep learning for recognition, classification, conversion, and other tasks. Training time for image processing has been substantially reduced. Programs running on ImageNet, a massive standardized collection of over 14 million photographs used to train and test visual identification programs, complete their work 100 times faster than just three years ago." The report's authors caution, however, that such technology could be subject to abuse.
Robotics: "The last five years have seen consistent progress in intelligent robotics driven by machine learning, powerful computing and communication capabilities, and increased availability of sophisticated sensor systems. Although these systems are not fully able to take advantage of all the advances in AI, primarily due to the physical constraints of the environments, highly agile and dynamic robotics systems are now available for home and industrial use."
Mobility: "The optimistic predictions from five years ago of rapid progress in fully autonomous driving have failed to materialize. The reasons may be complicated, but the need for exceptional levels of safety in complex physical environments makes the problem more challenging, and more expensive, to solve than had been anticipated. The design of self-driving cars requires integration of a range of technologies including sensor fusion, AI planning and decision-making, vehicle dynamics prediction, on-the-fly rerouting, inter-vehicle communication, and more."
Recommender systems: The AI technologies powering recommender systems have changed considerably in the past five years, the report states. "One shift is the near-universal incorporation of deep neural networks to better predict user responses to recommendations. There has also been increased usage of sophisticated machine-learning techniques for analyzing the content of recommended items, rather than using only metadata and user click or consumption behavior."
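As a rough picture of the neural-embedding flavour of recommendation the report describes, here is a minimal sketch in which items and a user are represented as vectors and items are ranked by dot-product score. The vectors are invented; production systems learn them from interaction logs and content features.

```python
# Rank catalogue items for one user by the dot product of embedding vectors.
import numpy as np

item_vectors = {
    "sci-fi film":       np.array([0.9, 0.1, 0.0]),
    "cooking show":      np.array([0.0, 0.8, 0.3]),
    "space documentary": np.array([0.7, 0.0, 0.5]),
}
user_vector = np.array([0.8, 0.05, 0.3])  # a user who mostly watches sci-fi

ranked = sorted(item_vectors.items(),
                key=lambda kv: float(user_vector @ kv[1]),
                reverse=True)
for title, vec in ranked:
    print(f"{title}: {user_vector @ vec:.2f}")
```

The fairness and filter-bubble concerns the authors raise next enter exactly at this ranking step: whatever objective the score optimizes determines what users end up seeing.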
The report's authors caution that "the use of ever-more-sophisticated machine-learned models for recommending products, services, and content has raised significant concerns about the issues of fairness, diversity, polarization, and the emergence of filter bubbles, where the recommender system suggests. While these problems require more than just technical solutions, increasing attention is paid to technologies that can at least partly address such issues."
View original post here:
Artificial intelligence success is tied to ability to augment, not just automate - ZDNet
Posted in Ai
Comments Off on Artificial intelligence success is tied to ability to augment, not just automate – ZDNet