The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: April 2020
How AI will earn your trust – JAXenter
Posted: April 9, 2020 at 6:27 pm
In the world of applying AI to IT Operations one of the major enterprise concerns is a lack of trust in the technology. This tends to be an emotional rather than intellectual response. When I evaluate the sources of distrust in relation to IT Ops, I can narrow it down to four specific causes.
The algorithms used in AIOps are fairly complex, even for an audience with a background in computer science. The way in which these algorithms are constructed and deployed is not covered in academia. Modern AI is mathematically intensive, and many IT practitioners haven't even seen this kind of mathematics before. The algorithms are outside the knowledge base of today's professional developers and IT operators.
SEE ALSO: 3 global manufacturing brands at the forefront of AI and ML
When you analyse the specific types of mathematics used in popular AI-based algorithms, deployed in an IT operations context, the maths is basically intractable. What is going on inside the algorithms cannot be teased out or reverse engineered. The mathematics generates patterns whose sources cannot be determined due to the very nature of the algorithm itself.
For example, an algorithm might tell you that a number of CPUs have passed a usage threshold of 90%, which will result in end-user response time degrading. Consequently, the implicit instruction is to offload the usage of some servers. In this situation, executive decision makers will want to know why the algorithm indicates there is an issue. If you were using an expert system, it could go back and show you all the invoked rules until you arrived back at the original premise. It's almost like doing a logical inference in reverse. The fact that you can trace it backwards lends credibility and validates the conclusion.
What happens in the case of AI is that things get mixed up and switched around, which means the links are broken from the conclusion back to the original premise. Even if you have enormous computing power it doesn't help, as the algorithm loses track of its previous steps. You're left with a general description of the algorithm and the start and end data, but no way to link these things together. You can't run it in reverse. It's intractable. This generates further distrust, which lives on a deeper level. It's not just about being unfamiliar with the mathematical logic.
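To make the contrast concrete, here is a minimal sketch of the kind of traceability an expert system offers: a toy forward-chaining rule engine (the rules and facts are invented for illustration) in which every conclusion can be walked back through the rules that produced it, which is exactly the reverse trace the opaque statistical models described above cannot provide.

```python
# Hypothetical forward-chaining rule engine that records which rules fired,
# so a conclusion can be traced back to its original premises.
rules = [
    ("high_cpu", "response_time_degrading"),         # IF high_cpu THEN response time degrades
    ("response_time_degrading", "offload_servers"),  # IF response time degrades THEN offload servers
]

def infer(facts):
    """Apply rules until no new facts appear; remember the justification of each derived fact."""
    justification = {}
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                justification[conclusion] = premise
                changed = True
    return facts, justification

def trace(conclusion, justification):
    """Walk the inference chain backwards, i.e. 'logical inference in reverse'."""
    chain = [conclusion]
    while chain[-1] in justification:
        chain.append(justification[chain[-1]])
    return list(reversed(chain))

facts, why = infer({"high_cpu"})       # observed: CPU usage over the 90% threshold
print(trace("offload_servers", why))   # ['high_cpu', 'response_time_degrading', 'offload_servers']
```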
Let's look at the way AI has been marketed since its inception in the late 1950s. The general marketing theme has been that AI is trying to create a human mind; when this is translated into a professional context, people view it as a threat to their jobs. This notion has been resented for a long time. Scepticism is rife, but it is often a tactic used to preserve livelihoods.
The way AI has been marketed, as an intellectual goal and a meaningful business endeavour, lends credibility to that concern. This is when scepticism starts to shade into genuine distrust: not only is this a technology that may not work, it is also my personal enemy.
IT Operations, among all the various enterprise disciplines, is always being threatened with cost cutting and role reduction. Therefore, this isn't just paranoia; there's a lot of justification behind the fear.
IT Operations has had a number of bouts with commercialized AI, which first emerged in the final days of the cold war when a lot of code was repackaged and sold to IT Ops as a plausible use case. Many of the people who are now in senior enterprise positions were among the first wave of people who were excited about AI and what it could achieve. Unfortunately, AI didn't initially deliver on expectations. So for these people, AI is not something new; it's a false promise. Therefore, in many IT Operations circles there is a bad memory of previous hype, a historical reason for scepticism which is unique to the IT Ops world.
These are my four reasons why enterprises don't trust AIOps and AI in general. Yet despite these four concerns, the use of AI-based algorithms in an IT Operations context is inevitable.
Take your mind back to a very influential Gartner definition of big data in 2001. Gartner came up with the idea of the 3Vs. The 3Vs (volume, variety and velocity) are three defining properties or dimensions of big data. Volume refers to the amount of data, variety refers to the number of types of data and velocity refers to the speed of data processing. At the time the definition was very valuable and made a lot of sense.
The one thing Gartner missed is the issue of dimensionality, i.e. how many attributes a dataset has. A traditional data item has maybe four or five attributes. If you have millions of these items, each with a few attributes, you can store them in a database, and it is fairly straightforward to search on key values and conduct analytics to obtain answers from the data.
However, when you're dealing with high dimensions and a data item that has a thousand or a million attributes, suddenly your traditional statistical techniques don't work. Your traditional search methods become ungainly. It becomes impossible to formulate a query.
As our systems become more volatile and dynamic, we are unintentionally multiplying data items and attributes, which leads me on to AI. Almost all of the AI techniques developed to date are attempts to handle high-dimensional data structures and collapse them into a smaller number of manageable attributes.
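As a rough illustration of that collapsing step, and not anything from the article itself, the sketch below uses principal component analysis to reduce an invented 1,000-attribute dataset to ten attributes that conventional search and analytics can then handle.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical monitoring data: 10,000 observations, each with 1,000 attributes
# (metrics per host, per service, per time window, and so on).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 1_000))

# Collapse the 1,000 attributes into 10 components that capture the most variance.
pca = PCA(n_components=10)
X_low = pca.fit_transform(X)

print(X_low.shape)                          # (10000, 10): now tractable for queries and statistics
print(pca.explained_variance_ratio_.sum())  # fraction of the variance retained by the 10 components
```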
When you go to the leading universities, you're seeing fewer standalone courses on machine learning and more that embed machine learning topics into courses on high-dimensional probability and statistics. What's happening is that machine learning per se is starting to resemble practice-oriented bootcamps, while the study of AI is now more focussed on understanding probability, geometry and statistics in relation to high dimensions.
How did we end up here? The brain uses algorithms to process high-dimensional data and reduce it to low-dimensional attributes; it then processes those attributes and ends up with a conclusion. This is the path AI has taken. Let's codify what the brain is doing, and you end up realizing that what you're actually doing is high-dimensional probability and statistics.
SEE ALSO: Facebook AIs Demucs teaches AI to hear in a more human-like way
I can see discussions about AI being repositioned around high-dimensional data, which will provide a much clearer vision of what is trying to be achieved. In terms of IT operations, there will soon be an acknowledgement that modern IT systems contain not only high volume, high velocity and high variety data, but also high-dimensional datasets. In order to cope with this we're going to need high-dimensional probability and statistics, and to model it in high-dimensional geometry. This is why AIOps is inevitable.
Surge in Remote Working Leads iManage to Launch Virtual AI University for Companies that Want to Harness the Power of the RAVN AI Engine -…
Posted: at 6:27 pm
CHICAGO, April 09, 2020 (GLOBE NEWSWIRE) -- iManage, the company dedicated to transforming how professionals work, today announced that it has rolled out a virtual Artificial Intelligence University (AIU), as an adjunct to its customer on-site model. With the virtual offering, legal and financial services professionals can actively participate in project-driven, best-practice remote AI workshops that use their own, real-world data to address specific business issues even amidst the disruption caused by the COVID-19 outbreak.
AIU helps clients to quickly and efficiently learn to apply machine learning and rules-based modeling to classify, find, extract and analyze data within contracts and other legal documents for further action, often automating time-consuming manual processes. In addition to delivering increases in speed and accuracy of data search results, AI frees practitioners to focus on other high-value work. Driven both by the need of organizations to reduce operational costs and to adapt to fundamental shifts toward remote work practices, virtual AIU is playing an important role in helping iManage clients continue to work and collaborate productively. The curriculum empowers end users with all the skills they need to quickly ramp up the efficiency and breadth of their AI projects using the iManage RAVN AI engine.
"Participating in AIU was a huge win for us. We immediately saw the impact AI would have in surfacing information we need and allowing us to action it to save time, money and frustration," said Nikki Shaver, Managing Director, Innovation and Knowledge, Paul Hastings. "The workshop gave us deep insight into how to train the algorithm effectively for the best possible effect. And, very quickly, more opportunities came to light as to how AI could augment our business in the longer term," continued Shaver.
"AI is a transformation technology that's continuing to gain momentum in the legal, financial and professional services sectors. But many firms don't yet have the internal knowledge or training to deliver on its promise. iManage is committed to helping firms establish AI Centers of Excellence, not just sell them a kit and walk away," said Nick Thomson, General Manager, iManage RAVN. "We've found the best way to ensure client success is to educate and build up experience inside the firm about how AI works and how to apply it to a broad spectrum of business problems."
Deep Training Delivers Powerful Results
iManage AIU's targeted, hands-on training starts with the fundamentals but delves much deeper, enabling organizations to put the flexibility and speed of the technology to work across myriad scenarios. RAVN easily helps facilitate actions like due diligence, compliance reviews or contract repapering, as well as more sophisticated modeling that taps customized rule development to address more unique use cases.
The advanced combination of machine learning and rules-based extraction capabilities in RAVN make it the most trainable platform on the market. Users can teach the software what to look for, where to find it and then how to analyze it using the RAVN AI engine.
Armed with the tools and training to put AI to work across their data stores and documents, AIU graduates can help their organizations unlock critical knowledge and insights in a repeatable way across the enterprise.
Interactive Curriculum Builds Strong Skillsets
The personalized, interactive course is delivered over three half-day sessions, via video conferencing, to a small team of customer stakeholders. Such teams may include data scientists, knowledge managers, lawyers, partners, contract specialists, and trained legal staff. AIU is also available to firms that are considering integrating the RAVN engine and would like to see AI in action as they assess the potential impact of the solution on their businesses.
Expert iManage AI instructors, with deep technology and legal expertise, work with clients in advance to help identify use cases for the virtual AIU. The iManage team fully explores client use cases prior to the training to facilitate the most effective approach to extraction techniques for client projects.
The daily curriculum includes demonstrations with user data and individual and group exercises to evaluate and deepen user skills. Virtual breakout rooms for project drill down and feedback mechanisms, such as polls and surveys, help solidify learning and make the sessions more interactive. Recordings and transcripts allow customers to revisit AIU sessions at any time.
For more information on iManage virtual AIU or on-site training read our AI blog post or contact us at AIU@imanage.com.
Follow iManage via: Twitter: https://twitter.com/imanageinc LinkedIn: https://www.linkedin.com/company/imanage
About iManage: iManage transforms how professionals in legal, accounting and financial services get work done by combining artificial intelligence, security and risk mitigation with market-leading document and email management. iManage automates routine cognitive tasks, provides powerful insights and streamlines how professionals work, while maintaining the highest level of security and governance over critical client and corporate data. Over one million professionals at over 3,500 organizations in over 65 countries, including more than 2,500 law firms and 1,200 corporate legal departments and professional services firms, rely on iManage to deliver great client work securely.
Press Contact: Anastasia Bullinger, iManage, +1.312.868.8411, press@imanage.com
How AI is helping scientists in the fight against COVID-19, from robots to predicting the future – GeekWire
Posted: at 6:27 pm
Artificial intelligence is helping researchers through different stages of the COVID-19 pandemic. (NIST Illustration / N. Hanacek)
Artificial intelligence is playing a part in each stage of the COVID-19 pandemic, from predicting the spread of the novel coronavirus to powering robots that can replace humans in hospital wards.
That's according to Oren Etzioni, CEO of Seattle's Allen Institute for Artificial Intelligence (AI2) and a University of Washington computer science professor. Etzioni and AI2 senior assistant Nicole DeCario have boiled down AI's role in the current crisis to three immediate applications: processing large amounts of data to find treatments, reducing spread, and treating ill patients.
"AI is playing numerous roles, all of which are important based on where we are in the pandemic cycle," the two told GeekWire in an email. But what if the virus could have been contained?
Canadian health surveillance startup BlueDot was among the first in the world to accurately identify the spread of COVID-19 and its risk, according to CNBC. In late December, the startup's AI software discovered a cluster of unusual pneumonia cases in Wuhan, China, and predicted where the virus might go next.
"Imagine the number of lives that would have been saved if the virus spread was mitigated and the global response was triggered sooner," Etzioni and DeCario said.
Can AI bring researchers closer to a cure?
One of the best things artificial intelligence can do now is help researchers scour through the data to find potential treatments, the two added.
The COVID-19 Open Research Dataset (CORD-19), an initiative building on the Semantic Scholar project at Seattle's Allen Institute for Artificial Intelligence (AI2), uses natural language processing to analyze tens of thousands of scientific research papers at an unprecedented pace.
Semantic Scholar, the team behind the CORD-19 dataset at AI2, was created on the hypothesis that cures for many ills live buried in scientific literature, Etzioni and DeCario said. "Literature-based discovery has tremendous potential to inform vaccine and treatment development, which is a critical next step in the COVID-19 pandemic."
The White House announced the initiative along with a coalition that includes the Chan Zuckerberg Initiative, Georgetown Universitys Center for Security and Emerging Technology, Microsoft Research, the National Library of Medicine, and Kaggle, the machine learning and data science community owned by Google.
Within four days of the dataset's release on March 16, it received more than 594,000 views and 183 analyses.
Computer models map out infected cells
Coronaviruses invade cells through spike proteins, but these proteins take on different shapes in different coronaviruses. Understanding the shape of the spike protein in SARS-CoV-2, the virus that causes COVID-19, is crucial to figuring out how to target the virus and develop therapies.
Dozens of research papers related to spike proteins are in the CORD-19 Explorer to better help people understand existing research efforts.
The University of Washington's Institute for Protein Design mapped out 3D atomic-scale models of the SARS-CoV-2 spike protein that mirror those first discovered in a University of Texas at Austin lab.
The team is now working to create new proteins to neutralize the coronavirus, according to David Baker, director of the Institute for Protein Design. These proteins would have to bind to the spike protein to prevent healthy cells from being infected.
Baker suggests that there is "a pretty small chance" that artificial intelligence approaches will be used for vaccines.
However, he said, "As far as drugs, I think there's more of a chance there."
It has been a few months since COVID-19 first appeared in a seafood-and-live-animal market in Wuhan, China. Now the virus has crossed borders, infecting more than one million people worldwide, and scientists are scrambling to find a vaccine.
"This is one of those times where I wish I had a crystal ball to see the future," Etzioni said of the likelihood of AI bringing researchers closer to a vaccine. "I imagine the vaccine developers are using all tools available to move as quickly as possible. This is, indeed, a race to save lives."
More than 40 organizations are developing a COVID-19 vaccine, including three that have made it to human testing.
Apart from vaccines, several scientists and pharmaceutical companies are partnering to develop therapies to combat the virus. Some treatments under investigation include the antiviral remdesivir, developed by Gilead Sciences, and the anti-malaria drug hydroxychloroquine.
AI's quest to limit human interaction
Limiting human interaction, in tandem with Washington Gov. Jay Inslee's mandatory stay-at-home order, is one way AI can help fight the pandemic, according to Etzioni and DeCario.
People can order groceries through Alexa without stepping foot inside a store. Robots are replacing clinicians in hospitals, helping disinfect rooms, provide telehealth services, and process and analyze COVID-19 test samples.
Doctors even used a robot to treat the first person diagnosed with COVID-19 in Everett, Wash., according to the Guardian. Dr. George Diaz, the section chief of infectious diseases at Providence Regional Medical Center, told the Guardian he operated the robot while sitting outside the patient's room.
The robot was equipped with a stethoscope to take the patient's vitals and a camera for doctors to communicate with the patient through a large video screen.
Robots are one of many ways hospitals around the world continue to reduce risk of the virus spreading. AI systems are helping doctors identify COVID-19 cases through CT scans or x-rays at a rapid rate with high accuracy.
Bright.md is one of many startups in the Pacific Northwest using AI-powered virtual healthcare software to help physicians treat patients more quickly and efficiently without having them actually step foot inside an office.
Two Seattle startups, MDmetrix and TransformativeMed, are using their technologies to help hospitals across the nation, including University of Washington Medicine and Harborview Medical Center in Seattle. The companies software helps clinicians better understand how patients ages 20 to 45 respond to certain treatments versus older adults. It also gauges the average time period between person-to-person vs. community spread of the disease.
The Centers for Disease Control and Prevention uses Microsoft's Healthcare Bot Service as a self-screening tool for people wondering whether they need treatment for COVID-19.
AI raises privacy and ethics concerns amid pandemic
Despite AI's positive role in fighting the pandemic, the privacy and ethical questions it raises cannot be overlooked, according to Etzioni and DeCario.
Bellevue, Wash., residents are asked to report those in violation of Inslee's stay-home order to help clear up 911 lines for emergencies, GeekWire reported last month. Bellevue police then track suspected violations on the MyBellevue app, which shows hot spots of activity.
Bellevue is not the first. The U.S. government is using location data from smartphones to help track the spread of COVID-19. However, privacy advocates, like Jennifer Lee of Washington's ACLU, are concerned about the long-term implications of Bellevue's new tool.
Etzioni and DeCario also want people to consider the implications AI has on hospitals. Even though deploying robots to take over hospital wards helps reduce spread, it also displaces staff. Job loss because of automation is already at the forefront of many discussions.
Hear more from Oren Etzioni on this recent episode of the GeekWire Health Tech podcast.
Google expands AI calling service Duplex to Australia, Canada, and the UK – The Verge
Posted: at 6:27 pm
Google's automated, artificial intelligence-powered calling service Duplex is now available in more countries, according to a support page updated today. In addition to the US and New Zealand, Duplex is now available in Australia, Canada, and the UK, reports VentureBeat, which discovered newly added phone numbers on the support page that Google says it will use when calling via Duplex from a given country.
It isn't a full rollout of the service, however, as Google clarified to The Verge that it's using Duplex mainly to reach businesses in those new countries to update business hours for Google Maps and Search.
And indeed, CEO Sundar Pichai did in fact outline this use of Duplex last month, writing in a blog post: "In the coming days, we'll make it possible for businesses to easily mark themselves as temporarily closed using Google My Business. We're also using our artificial intelligence (AI) technology Duplex where possible to contact businesses to confirm their updated business hours, so we can reflect them accurately when people are looking on Search and Maps." It's not clear if a consumer version of the service will be made available at a later date in those countries.
Duplex launched as an early beta in the US via the Google Assistant back in late 2018, after a splashy yet controversial debut at that year's Google I/O developer conference. There were concerns about the use of Duplex without a restaurant or other small business's express consent, and without proper disclosure that the automated call was being handled by a digital voice assistant and not a human being.
Google has since tried to address those concerns, with limited success, by adding disclosures at the beginning of calls and giving businesses the option to opt out of being recorded and speak with a human instead. Duplex now has human listeners who annotate the phone calls to improve Duplex's underlying machine learning algorithms and to take over in the event the call either goes awry or the person on the other end chooses not to talk with the AI.
Google has also expanded the service in waves, from starting on just Pixel phones to iOS devices and then more Android devices. The service's first international expansion was New Zealand in October 2019.
Update April 9th, 2:15PM ET: Clarified that the Duplex rollout is to help Google update business hours for Google Maps and Search.
Google releases SimCLR, an AI framework that can classify images with limited labeled data – VentureBeat
Posted: at 6:27 pm
A team of Google researchers recently detailed a framework called SimCLR, which improves previous approaches to self-supervised learning, a family of techniques for converting an unsupervised learning problem (i.e., a problem in which AI models train on unlabeled data) into a supervised one by creating labels from unlabeled data sets. In a preprint paper and accompanying blog post, they say that SimCLR achieved a new record for image classification with a limited amount of annotated data and that it's simple enough to be incorporated into existing supervised learning pipelines.
That could spell good news for enterprises applying computer vision to domains with limited labeled data.
SimCLR learns basic image representations on an unlabeled corpus and can be fine-tuned with a small set of labeled images for a classification task. The representations are learned through a method called contrastive learning, where the model simultaneously maximizes agreement between differently transformed views of the same image and minimizes agreement between transformed views of different images.
[Figure: An illustration of the SimCLR architecture. Image credit: Google]
SimCLR first randomly draws examples from the original data set, transforming each sample twice by cropping, color-distorting, and blurring it to create two sets of corresponding views. It then computes the image representation using a machine learning model, after which it generates a projection of the image representation using a module that maximizes SimCLR's ability to identify different transformations of the same image. Finally, following the pretraining stage, SimCLR's output can be used as the representation of an image or tailored with labeled images to achieve good performance for specific tasks.
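The blog post describes this pipeline only at a high level; the following is a simplified PyTorch rendering of the contrastive objective it relies on (an NT-Xent-style loss), with a toy encoder, projection head, and made-up tensor sizes standing in for the actual SimCLR code.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views of the same batch of images.
    For each example, its counterpart in the other view is the positive;
    every other example in the combined 2*batch set is a negative."""
    batch = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)             # [2B, dim], unit-norm projections
    sim = z @ z.t() / temperature                                   # cosine similarities
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                      # never match an example with itself
    # Positive for row i is row i + B (and vice versa).
    targets = torch.cat([torch.arange(batch) + batch, torch.arange(batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Toy usage with a stand-in encoder and projection head (invented sizes, random "views").
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
projection = torch.nn.Sequential(torch.nn.ReLU(), torch.nn.Linear(128, 64))

view1 = torch.rand(8, 3, 32, 32)   # stand-ins for the cropped / color-distorted / blurred views
view2 = torch.rand(8, 3, 32, 32)
loss = nt_xent_loss(projection(encoder(view1)), projection(encoder(view2)))
loss.backward()                     # gradients flow into both the encoder and the projection head
print(float(loss))
```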
Google says that in experiments SimCLR achieved 85.8% top-5 accuracy on a test data set (ImageNet) when fine-tuned on only 1% of the labels, compared with the previous best approach's 77.9%.
"[Our results show that] pretraining on large unlabeled image data sets has the potential to improve performance on computer vision tasks," wrote research scientist Ting Chen and Google Research VP, engineering fellow, and Turing Award winner Geoffrey Hinton in a blog post. "Despite its simplicity, SimCLR greatly advances the state of the art in self-supervised and semi-supervised learning."
Both the code and pretrained models of SimCLR are available on GitHub.
Self-supervised learning is the future of AI – The Next Web
Posted: at 6:27 pm
Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.
Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.
In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap to solve deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.
Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or if we'll end up adopting a totally different strategy). But here's what we know about LeCun's masterplan.
First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.
"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."
Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, as well as unsupervised or self-supervised learning.
But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.
Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.
Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as, with some caveats, reviewing the huge amount of content being posted on social media every day.
"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."
But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.
ImageNet vs reality: In ImageNet (left column) objects are neatly positioned, in ideal background and lighting conditions. In the real world, things are messier (source: objectnet.dev)
Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota, and the ancient Chinese board game Go.
But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn through trial-and-error how to generate the most rewards (e.g., win more games).
This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as possible. In most cases, reinforcement learning agents take an insane number of sessions to master games. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.
Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, much more than humans can play in a lifetime (source: Yann LeCun).
Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it wants to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another game.
Reinforcement learning really shows its limits when it has to solve real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? And it's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.
LeCun breaks down the challenges of deep learning into three areas.
First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."
Babies develop concepts of gravity, dimensions, and object persistence in the first few months after their birth. While there's debate on how much of these capabilities are hardwired into the brain and how much of them are learned, what is for sure is that we develop many of our abilities simply by observing the world around us.
The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.
"The question is, how do we go beyond feed-forward computation and system 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.
System 1 covers the kind of tasks that don't require active thinking, such as navigating a known area or making small calculations. System 2 is the more active kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.
But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and further discussed it at AAAI 2020. LeCun, however, did admit that nobody has a completely good answer to which approach will enable deep learning systems to reason.
The third challenge is to create deep learning systems that can learn and plan complex action sequences, and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.
But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.
The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.
"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class or model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
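As a minimal sketch of that fill-in-the-blanks idea, the toy example below masks one token of a sentence and trains a tiny model to predict it; the vocabulary, model, and training loop are invented for illustration and are not LeCun's or any production system's code.

```python
import torch
import torch.nn as nn

# Toy vocabulary; index 0 is the [MASK] token.
vocab = ["[MASK]", "the", "cat", "sat", "on", "mat"]
MASK = 0

# Self-supervision: take an unlabeled sentence, hide one token, and train the model
# to predict the hidden token from the rest. The label comes from the data itself.
sentence = torch.tensor([1, 2, 3, 4, 1, 5])      # "the cat sat on the mat"
masked = sentence.clone()
position = 2                                      # hide "sat"
masked[position] = MASK

class TinyFillIn(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(dim, vocab_size)
    def forward(self, tokens):
        context = self.embed(tokens).mean(dim=0)  # crude summary of the visible context
        return self.out(context)                  # scores over the whole vocabulary

model = TinyFillIn(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(50):
    logits = model(masked)
    loss = nn.functional.cross_entropy(logits.unsqueeze(0), sentence[position].unsqueeze(0))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(vocab[int(model(masked).argmax())])         # ideally recovers "sat"
```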
The closest we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)
Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.
More recently, AI researchers have shown that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers will enable neural networks to move beyond pattern recognition and statistical approximation tasks.
So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.
But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.
For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.
"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.
LeCun's favored method to approach self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z which computes the compatibility between a variable X (the current frame in a video) and a prediction Y (the future of the video), and to select the outcome with the best compatibility score. In his speech, LeCun further elaborates on energy-based models and other approaches to self-supervised learning.
Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
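Read as pseudocode, the selection step works roughly as follows; the energy function here is an invented stand-in for a learned one, used only to show how each candidate prediction is scored by the best compatibility achievable over the latent variable.

```python
import numpy as np

def energy(x, y, z):
    """Invented stand-in for a learned energy function E(x, y, z):
    low energy means the prediction y is compatible with the observation x,
    with z capturing unobserved factors of variation."""
    return (y - (x + z)) ** 2 + 0.1 * z ** 2

x = 1.0                                  # the "current frame"
candidates_y = np.linspace(-2, 4, 61)    # possible "futures"
candidates_z = np.linspace(-2, 2, 41)    # latent variable values to search over

# For each candidate future, its score is the best (lowest) energy over z ...
scores = [min(energy(x, y, z) for z in candidates_z) for y in candidates_y]
# ... and the selected prediction is the future with the best compatibility score.
best_y = candidates_y[int(np.argmin(scores))]
print(best_y)
```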
"I think self-supervised learning is the future. This is what's going to allow our AI systems, our deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.
One of the key benefits of self-supervised learning is the immense gain in the amount of information outputted by the AI. In reinforcement learning, training the AI system is performed at scalar level; the model receives a single numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.
In self-supervised learning, the output improves to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.
We must still figure out how the uncertainty problem works, but when the solution emerges, we will have unlocked a key component of the future of AI.
"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."
This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.
Published April 5, 2020 05:00 UTC
Storytelling & Diversity: The AI Edge In LA – Forbes
Posted: at 6:27 pm
LA is known as the land of storytellers, but when it comes to its own story, the entertainment business is still front and center. In fact, LA has been at the core of a flourishing AI scene for decades. Since the 1920s through today, elite mathematicians and engineers have been putting their knowledge to work for a multitude of industries such as health, aerospace, and media with relatively little visibility in the tech limelight.
Now, these industries are poised to bring together a convergence of knowledge across cutting edge technologies and LA may finally have its day in the spotlight as a focal point for cross-disciplinary innovation.
LA's history in technology has its roots in the aerospace world: because of its perfect weather and vast open spaces, it became an ideal setting for the aerospace industry to plant its roots in the early 1900s. Companies like Douglas Aircraft and JPL were able to find multi-acre properties to test rockets and build large airfields.
The engineering know-how and nature of aviation work fueled the manufacturing sector in Southern California during WWII, and the region eventually became the birthplace of the internet as we know it, when UCLA, funded by the Department of Defense, sent the first message via ARPANET in the same year we first landed a man on the moon.
[Photo: Black Girls Code]
Through busts and booms, engineering talent was both attracted to the area and nurtured at many well known and respected educational institutions such as Caltech, USC, and UCLA, helping to augment the labor pool as well as becoming important sources of R&D.
This engineering talent continued to extend its branches out into other industries, such as health and wellness which are natural extensions for a population already obsessed with youth, fitness and body perfection.
Today, LA sits as a unifying center for life sciences, entertainment, media, and aerospace with frontier technologies such as AI pushing innovation across these core industries and providing a platform for new discoveries, cures, and social interactions.
Dave Whelan, Chief Executive Officer of BioscienceLA, believes diversity is LA's secret weapon when it comes to its potential to become the global epicenter for AI innovation. He notes LA's widely diverse diaspora, which makes it a perfect place to train AI.
"The entire world's global population resides in LA. If you look at AI for healthcare, you have the raw materials in patient and health data that provide the widest range of possibilities. Combine that with the mix of the creative workforce, diversity of economies, and SoCal mindset, all together making LA a prime center for innovation that has yet to rightly take its place in the sun when compared to the attention that Silicon Valley receives."
The AI opportunity to save lives is particularly meaningful, especially in today's pandemic times. How do we apply AI in a way that can help with early detection, identify clusters, sequence DNA, or source the right treatments? Many aspects of life sciences are computational, and mathematical biologists have been entrenched in LA for some time, providing services such as computational epidemiology, a multidisciplinary field that leverages computer science and mathematics to understand the spread of diseases and other public health issues.
Brian Dolan, CEO and founder of VerdantAI, who has his roots in statistical genetics and biomathematics, has seen the converging evolution of the tech scene in LA and is actively committed to building out the AI ecosystem. His startup studio is focused on launching new AI companies into market and partnering with large enterprises to help them turn their data into products.
"It's not hard to argue that now is the time to focus on the big problems, like COVID and climate change. We need curious, dedicated, intelligent people to take these things on, and Los Angeles certainly offers that kind of talent. Our innovation diversity goes beyond demographics and into industries, geographies, and even ecologies. No other city can really offer that."
Brian's previous company, Deep 6 AI, applies artificial intelligence to the clinical trial process by finding patients for medical trials and getting life-saving cures to people more quickly. Today, Brian and his team at Verdant are incubating technologies to optimize carbon-neutral supply chain networks, leveraging advanced medical NLP technology to read medical texts to create precision digital health experiences, and working on a mental health solution aimed at addiction and recovery.
Building a thriving ecosystem takes time and imagination. AI is both a disruptive force and a major opportunity, but dispelling the myths around AI is important in order to map out its impact and full potential.
Ronni Kimm, founder of Collective Future, uses future visioning to help bring outside perspectives into organizations. Future visioning is important for accelerating innovation, as it provides the ability to respond to and proactively be part of the stories of change. Her design and innovation studio helps bring strategic transformation to companies from a top-down and bottom-up perspective.
[Photo: Ronni Kimm]
"Health sciences and life sciences have some of the most interesting challenges in the world, but there are not enough stories to help people understand how powerful approaches such as predictive analytics in health science can dramatically impact successful organ transplants or predict at-risk patient complications," says Ronni. "I see storytelling as one of the most important aspects of accelerating technology; creating more stories around these incredible innovations is where LA can excel in building resilient ecosystems and bringing more of these technologies to market."
Today LA sits at the center of multiple industries, where talent pools cross-pollinate and inspire new ideas. Its diverse and colorful population offers data not readily available in other geographies, making it ideal for big data applications that leverage AI. Its educational institutions feed and train new labor pools and its proximity to creative fields inspires new ways to leverage technology in traditional industries.
Ideas such as bringing the spatial web to life, holograms to offer new methods of care, and digital twins to create cross reality environments are just some of the ideas coming to life in LA.
As technology continues to advance, be sure to be on the lookout for more stories about the rise and influence of AI across these massive industries.
AI can overhaul patient experience, but knowing its limitations is key – MobiHealthNews
Posted: at 6:27 pm
Healthcare may be bracing for a major shortage of providers and services in the coming years, but even now the industry is straining to meet an ever-growing demand for personalized, patient-friendly care. Artificial intelligence has often been touted as the panacea for this challenge, with many pointing to finance, retail and other industries that have embraced automation.
But the consumerism adopted by other sectors doesn't always translate cleanly into healthcare, says Nagi Prabhu, chief product officer at Solutionreach. Whereas people may be ready to trust automation to handle their deliveries or even manage their finances, they still prefer the human touch when it comes to their personal health.
"That's what makes it challenging. There's an expectation that there's an interaction happening between the patient and provider, but the tools and services and resources that are available on the provider side are insufficient," Prabhu said during a HIMSS20 Virtual Webinar on AI and patient experience. "And that's what causing this big disconnect between what patients are seeing and wanting, compared to other industries where they have experienced it.
"You have got to be careful in terms of where you apply that AI, particularly in healthcare, because it must be in use cases that enrich human interaction. Human interaction is not replaceable," he said.
Despite the challenge, healthcare still has a number of "low-hanging fruit" use cases where automation can reduce the strain on healthcare staff without harming the overall patient experience, Prabhu said. Chief among these are patient communications, scheduling and patient feedback analysis, where the past decade's investments into natural language processing and machine learning have yielded tools that can handle straightforward requests at scale.
But even these implementations need to strike the balance between automation and a human touch, he warned. Take patient messaging, for example. AI can handle simple questions about appointment times or documentation. But if the patient asks a complex question about their symptoms or care plan, the tool should be able to gracefully hand off the conversation to a human staffer without major interruption.
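A minimal sketch of that balancing act, with an invented intent classifier, threshold, and canned answers (nothing here reflects Solutionreach's or any vendor's actual system): the bot responds only when it is confident a request is routine, and otherwise hands the thread to a human without breaking the conversation.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # hypothetical cutoff; tuned per deployment in practice

ROUTINE_ANSWERS = {
    "appointment_time": "Your next appointment is shown in the patient portal.",
    "paperwork": "You can download the intake forms from the portal's Documents tab.",
}

@dataclass
class Intent:
    label: str
    confidence: float

def classify(message: str) -> Intent:
    """Placeholder for a real NLP intent model; keyword rules stand in for it here."""
    text = message.lower()
    if "appointment" in text or "reschedule" in text:
        return Intent("appointment_time", 0.93)
    if "form" in text or "paperwork" in text:
        return Intent("paperwork", 0.88)
    return Intent("clinical_question", 0.35)   # symptoms, care plans, anything complex

def handle(message: str) -> str:
    intent = classify(message)
    if intent.confidence >= CONFIDENCE_THRESHOLD and intent.label in ROUTINE_ANSWERS:
        return ROUTINE_ANSWERS[intent.label]
    # Graceful handoff: the patient keeps the same thread, a staffer takes over.
    return "Connecting you with a member of our care team who can help with that."

print(handle("When is my appointment?"))
print(handle("Is this rash a side effect of my new medication?"))
```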
"If you push the automation too far, from zero automation ... to 100% automation, there's going to be a disconnect because these tools aren't perfect," he said. "There needs to be a good balancing ... even in those use cases."
These types of challenges and automation strategies are already being considered, if not implemented, among major provider organizations, noted Kevin Pawl, senior director of patient access at Boston Children's Hospital.
"We've analyzed why patients and families call Boston Children's over 2 million phone calls to our call centers each year and about half are for non-scheduling matters," Pawl said during the virtual session. "Could we take our most valuable resource, our staff, and have them work on those most critical tasks? And could we use AI and automation to improve that experience and really have the right people in the right place at the right time?"
Pawl described a handful of AI-based programs his organization has deployed in recent years, such as Amazon Alexa skills for recording personal health information and flu and coronavirus tracking models to estimate community disease burden. In the patient experience space, he highlighted self-serve kiosks placed in several Boston Children's locations that guide patients through the check-in process but that still encourage users to walk over to a live receptionist if they become confused or simply are more comfortable speaking to a human.
For these projects, Pawl said that Boston Children's needed to design their offerings around unavoidable hurdles like patients' fear of change, or even around broader system interoperability and security. For others looking to deploy similar AI tools for patient experience, he said that programs must keep in mind the need for iterative pilots, the value of walking providers and patients alike through each step of any new experience, and how the workflows and preferences of these individuals will shape their adoption of the new tools.
"These are the critical things that we think about as we are evaluating what we are going to use," he said. "Err on the side of caution."
Prabhu punctuated these warnings with his own emphasis on the data-driven design of the models themselves. These systems need to have enough historical information available to understand and answer the patient's questions, as well as the intelligence to know when a human is necessary.
"And, when it is not confident, how do you get a human being involved to respond but at the same time from the patient perspective [the interaction appears] to continue?" he asked. "I think that is the key."
AI And Account Based Marketing In A Time Of Disruption – Forbes
Posted: at 6:27 pm
We don't know how the massive shifts in consumer behavior brought on by the COVID-19 pandemic will evolve or endure. But we do know that as our lives change, marketers' data change. Both the current impact and the future implications may be significant.
I asked Alex Atzberger, CEO of Episerver, a digital experience company, to put the issues in perspective.
Paul Talbot: How is AI holding up? Has the pandemic impacted the quality of data used to feed analytic tools that help marketers create both strategic and tactical scenarios and insights?
Alex Atzberger: There is more data and more need for automation and AI now than ever. Website traffic is up, and digital engagement is way up due to COVID-19.
Business leaders and marketers now need automation and AI to free up headspace as they have to deal with so many fires.
Many marketers rely on personalization from AI engines that run in the background so that they can adjust their messaging to our times. AI is a good thing for them right now. They're able to get data faster, analyze faster and make better decisions.
However, they need to be aware of what has changed. For example, some of the data inputs may not be as good as before as people work from home and IP addresses are no longer identifying the company someone is with.
Talbot: Given the unknowns we all face, how can marketing strategy be adjusted thoughtfully?
Atzberger: A practitioner's time horizon for strategy shortens dramatically in a crisis, and you need to spend more time on it. Planning is done in weeks and months, and you need to be ready to re-plan, especially since you have limited visibility into demand.
It can still be done thoughtfully but needs to adapt to the new situation and requires input from sales, partners and others on what channels and activities are working. The more real-time you can assess what is working, the better you can adjust and plan for the future.
Talbot: On a similar note, how have coronavirus disruptions altered the landscape of account-based marketing?
Atzberger: It has created massive disruptions. ABM depends on being able to map visitors to accounts. We see companies where that mapping ability has dropped 50% since working from home started. This is a big challenge.
A lot of the gains in ABM in recent years rest on our ability to target ads and content, direct sales team efforts and look at third-party intent signals. Without a fundamental piece of data, the picture is fuzzy again. It's like being fitted with a worse prescription of glasses; you just can't see as clearly.
Talbot: With the soaring numbers of people working from home, how does this impact marketing strategy for the B2B organization?
Atzberger: In a big way. Anything based on accounts is going to be affected because it's now more difficult to identify these buyers, who are at home and look the same.
Direct mail programs are a big challenge because you can't really send stuff to their homes; that's a little creepy. Events are severely impacted too, and sponsoring or attending an online version of a big industry trade show just isn't quite the same thing.
The marketing mix has to shift: your website has to work harder, your emails have to work harder, webinars have to work harder. All these digital channels will need to deliver much more to make up for systemic softness in other areas.
Talbot: Any other insights you'd like to share?
Atzberger: We like to say, "you are what you read." Rather than relying on IP addresses, you can personalize content 1:1 based on a visitor's actual site activity.
This is what ABM is all about: figuring out what's more relevant for a person based on their industry. Now leapfrog that and go to the individual to act on what she's interested in at that moment. The current crisis might give you the best reason for change.
Automation May Take Jobsbut AI Will Create Them – WIRED
Posted: at 6:27 pm
Chances are you've already encountered, more than a few times, truly frightening predictions about artificial intelligence and its implications for the future of humankind. The machines are coming and they want your job, at a minimum. Scary stories are easy to find in all the erudite places where the tech visionaries of Silicon Valley and Seattle, the cosmopolitan elite of New York City, and the policy wonks of Washington, DC, converge: TED talks, Davos, ideas festivals, Vanity Fair, the New Yorker, The New York Times, Hollywood films, South by Southwest, Burning Man. The brilliant innovator Elon Musk and the genius theoretical physicist Stephen Hawking have been two of the most quotable and influential purveyors of these AI predictions. AI poses an existential threat to civilization, Musk warned a gathering of governors in Rhode Island one summer's day.
Musk's words are very much on my mind as the car I drive (it's not autonomous, not yet) crests a hill in the rural southern Piedmont region of Virginia, where I was born and raised. From here I can almost see home, the fields once carpeted by lush green tobacco leaves and the roads long ago bustling with workers commuting from profitable textile mills and furniture plants. But that economy is no more. Poverty, unemployment, and frustration are high, as they are with our neighbors across the Blue Ridge Mountains in Appalachia and to the north in the Rust Belt. I am driving between Rustburg, the county seat, and Gladys, an unincorporated farming community where my mom and brother still live.
I left this community, located down the road from where Lee surrendered to Grant at Appomattox Court House, because even as a kid I could see the bitter end of an economy that used to hum along, and I couldn't wait to chase my own dreams of building computers and software. But these are still my people, and I love them. Today, as one of the many tech entrepreneurs on the West Coast, my feet are firmly planted in both urban California and rural Southern soil. I've come home to talk with my classmates; to reconcile those bafflingly confident, anxiety-producing warnings about the future of jobs and artificial intelligence that I frequently hear among thought leaders in Silicon Valley, New York City, and DC; to see for myself whether there might be a different story to tell.
If I can better understand how the friends and family I grew up with in Campbell County are faring today, a generation after one economic tidal wave swept through, and in the midst of another, perhaps I can better influence the development of advanced technologies that will soon visit their lives and livelihoods. In addition to serving as Microsoft's CTO, I also am the executive vice president of AI and research. It's important for those of us building these technologies to meet people where they are: on factory floors, in the rooms and hallways of health care facilities, in the classrooms and the agricultural fields.
I pull off Brookneal Highway, the two-lane main road, into a wide gravel parking lot that's next to the old house my friends W. B. and Allan Bass lived in when we were in high school. A sign out front proclaims that I've arrived at Bass Sod Farm. The house is now headquarters for their sprawling agricultural operation. It's just around the corner from my mom's house and, in a sign of the times, near a nondescript cinder-block building that houses a CenturyLink hub for high-speed internet access. Prized deer antlers, a black bear skin, and a stuffed bobcat adorn its conference room, which used to be the family kitchen.
W.B. and Allan were popular back in the day. They always had a nice truck with a gun rack, and were known for their hunting and fishing skills. The Bass family has worked the same plots of Campbell County tobacco land for five generations, dating back to the Civil War. Within my lifetime, Barksdale the grandfather, Walter the father, and now W.B. (Walter Barksdale) and brother Allan have worked the land alongside a small team of seasonal workers, mostly immigrants from Mexico.







