The Evolutionary Perspective
Category Archives: Ai
Tesla’s Head of AI Says The Firm Uses a Harder Approach to Self-Driving for Scalability Reasons – Interesting Engineering
Posted: June 20, 2020 at 10:41 am
Earlier this week, Tesla's head of Artificial Intelligence (AI), Andrej Karpathy, took part in a CVPR20 workshop on Scalability in Autonomous Driving, during which he discussed the firm's approach to self-driving. In the talk, he confessed that Tesla is using a harder approach to autonomous driving, but one that is more likely to scale properly.
RELATED:NEW VIDEO SHOWS TESLA'S FULL SELF-DRIVING TECHNOLOGY AT WORK
The executive gave a presentation in which he shared two videos: one of Tesla's self-driving car doing a turn and one of Waymo's doing the same. He explained that while both turns looked identical, the decision-making behind them was very different.
"Waymo and many others in the industry use high-definition maps. You have to first drive some car that pre-maps the environment, you have to have lidar with centimeter-level accuracy, and you are on rails. You know exactly how you are going to turn in an intersection, you know exactly which traffic lights are relevant to you, you know where they are positioned and everything. We do not make these assumptions. For us, every single intersection we come up to, we see it for the first time. Everything has to be solved, just like what a human would do in the same situation," said Karpathy.
Karpathy went on to say that Tesla is working on a scalable self-driving system deployable in millions of cars, which is why the firm is using a vision-based approach: it is easier to scale.
"Speaking of scalability, this is a much harder problem to solve, but when we do essentially solve this problem, there's a possibility to beam this down to, again, millions of cars on the road. Whereas building out these lidar maps on the scale that we operate in, with the sensing that it does require, would be extremely expensive. And you can't just build it, you have to maintain it, and the change detection of this is extremely difficult," added Karpathy.
AI in Healthcare Market projected at a CAGR of 52.3% during the forecast period, 2020-2026 – 3rd Watch News
Posted: at 10:41 am
According to BlueWeave Consulting, the global AI in Healthcare market is estimated to reach US$ 37.9 Billion by 2026, growing at a CAGR of 52.3% during the forecast period 2020-2026. Several factors driving growth are the increasing need to reduce healthcare costs, the rising importance of big data in healthcare, increased acceptance of precision medicine and rising hardware costs. The increasing applicability of AI-based software in medical care and growing venture capital investment can also be attributed to the surge in demand for this technology. For example, CarePredict, Inc. is using AI technology to track changes in behavioral patterns and activity to predict health issues early.
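A CAGR projection like this is just compound growth, so it can be sanity-checked with simple arithmetic. The sketch below back-solves the implied 2020 base value from the report's 2026 figure; the base value is our inference, not a number stated in the report.

```python
def future_value(present, cagr, years):
    """Compound a present value at a constant annual growth rate."""
    return present * (1 + cagr) ** years

# Back-solve the implied 2020 market size from the 2026 projection.
target_2026 = 37.9  # US$ billion, from the report
cagr = 0.523        # 52.3% CAGR
years = 6           # 2020 through 2026
implied_2020 = target_2026 / (1 + cagr) ** years
print(round(implied_2020, 2))  # -> 3.04 (implied US$ billion base)
```

In other words, a roughly $3 billion market compounding at 52.3% a year reaches the projected $37.9 billion in six years.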
Request to get the report sample pages at :https://www.blueweaveconsulting.com/ai-in-healthcare-market-bwc19396/report-sample
An increasing number of cross-industry partnerships is expected to boost the healthcare sector's adoption of AI, which is further responsible for its lucrative growth rate. GNS Healthcare entered into a cross-industry partnership with Alliance and Amgen in September 2018 to conduct oncology clinical trials. The goal of the collaboration was to use data from clinical trials and Artificial Intelligence (AI) to identify factors that improve treatment responses in patients with metastatic colorectal cancer (CRC).
AI adoption in healthcare is increasing, with an increased focus on improving patient care quality through the use of artificial intelligence in various aspects of healthcare services, such as virtual assistants & surgeries. AI-based technologies, such as clinical decision support systems & voice recognition software, help streamline hospital workflow procedures and optimize medical care, thus improving the patient experience. Incorporating AI into healthcare has multiple advantages for both patients and healthcare providers. AI, for example, allows personalized treatment based on health conditions and past medical history. In addition, AI-based software can be used for continuous health monitoring, which in effect can ensure prompt care & treatment and may ultimately decrease hospital stays. On the other side, medical practitioners' unwillingness to adopt new technology, a drastic lack of predetermined and uniform regulatory guidelines, a shortage of curated healthcare data and data privacy issues impede the market's potential to attain higher grounds.
AI-enabled bots are AI programs that patients can communicate with on a website or by telephone via a chat window. Applications include scheduling appointments; reviewing insurance coverage parameters; providing quick access to information on drug interactions and side effects; collecting up-to-date information on patient medications, healthcare staff and recent procedures; designing special diet strategies for nutritionally limited patients; and contacting discharged patients to follow up on treatment plans. Such technologies are expected to lead the growth of hospital and inpatient care systems. Furthermore, the growing need for accurate & early diagnosis of chronic diseases and disorders further supports this market's growth. Nevertheless, the reluctance to implement AI technologies among end-users, lack of trust and potential risks associated with AI in the healthcare sector somewhat restrict the growth of this market.
Patient management applications are expected to see significant growth in the market in coming years, as successful patient management is one of the most important needs for hospital facilities. Several studies have shown how important patient participation is in improving health outcomes; lack of such participation has contributed greatly to preventable deaths. Smart wearables also play a crucial role in transforming the current healthcare sector. Consumers are becoming more aware of wearables, and many consumers today believe that wearing a smart device that monitors their vitals will lead to increased average life expectancy.
Request to get the report description pages at :https://www.blueweaveconsulting.com/ai-in-healthcare-market-bwc19396/
The artificial intelligence in healthcare market is fragmented owing to the presence of a number of large-sized companies, mid-sized & small-sized companies, and many start-ups that provide artificial intelligence in the healthcare industry. However, the companies that hold the majority share of the artificial intelligence in healthcare market are NVIDIA, Intel, IBM, Microsoft, Google, Siemens Healthineers, General Electric (GE) Company, Medtronic, Amazon Web Services (AWS), Koninklijke Philips, Johnson & Johnson Services, Butterfly Network, Welltok, Inc., Micron Technology and other prominent players.
BlueWeave Consulting is a one-stop solution for market intelligence regarding various products and services, online & offline. We offer worldwide market research reports by analyzing both qualitative and quantitative data to boost the performance of your business solution. BWC has built its reputation from scratch by delivering quality performance and nurturing long-lasting relationships with its clients. We are a promising digital market intelligence company delivering unique solutions to help your business grow.
Global Contact: +1 866 658 6826,+1 425 320 4776
'It's fundamental': Graphcore CEO believes new kinds of AI will prove the worth of a new kind of computer – ZDNet
Posted: June 1, 2020 at 3:09 am
Most computers in the world tend to do one thing and then move on to the next thing, a series of sequential tasks. For decades, computer scientists have struggled to get machines to do multiple things in parallel.
With the boom in artificial intelligence in recent years, an ideal workload has arrived, a kind of software programming that naturally gets better as its mathematical operations are spread across either many chips, or across circuits inside of a chip that work in parallel.
For upstart chip technology vendors, the surge in popularity of AI means, they are convinced, that their time has come, the chance to sell new kinds of parallel processing computers.
"It's fundamental," Nigel Toon, co-founder and chief executive of computer startup Graphcore, told ZDNet in a video interview last week from his home in England.
"We've got a very different approach and a very different architecture" from conventional computer chips, said Toon. "The conversations we have with customers are, here's a new tool for your toolbox that allows you to do different things, and to solve different problems."
Graphcore, founded in 2016 and based in the quaint medieval town of Bristol, a couple hours west of London, has spent the last several years amassing an amazing war chest of venture money in a bid to be one of the companies that can make the dream of parallel computing a reality.
Last week, Toon had a nice proof of concept to offer of where things might be going.
Microsoft machine learning scientist Sujeeth Bharadwaj gave a demonstration of work he's done on the Graphcore chip to recognize COVID-19 in chest X-rays, during a virtual conference about AI in healthcare. Bharadwaj's work showed, he said, that the Graphcore chip could do in 30 minutes what would take five hours on a conventional chip from Nvidia, the Silicon Valley company that dominates the running of neural networks.
Why should that be? Bharadwaj made the case that his program, called SONIC, needs a different kind of machine, one where more things can run in parallel.
Also: 'We are doing in a few months what would normally take a drug development process years to do': DoE's Argonne Labs battles COVID-19 with AI
"There's a very strong synergy," he asserted, between the SONIC program, and the Graphcore chip.
If Bharadwaj's point is broadly right, it means tomorrow's top-performing neural networks, generally referred to as state of the art, would open a big market opportunity for Graphcore, and for competitors who have novel computers of various sorts, presenting a big threat to Nvidia.
Graphcore has raised over $450 million, including a $150 million D round in February. "Timing turned out to be absolutely perfect" for raising new money, he said. The latest infusion gives Graphcore a post-money valuation "just shy of two billion dollars." The company had $300 million in the bank as of February, he noted.
Investors include "some of the biggest public-market investors in tech," such as U.K. investment manager Baillie Gifford. Other giant backers include Microsoft, Bosch, BMW, and Demis Hassabis, a co-founder of Google's DeepMind AI unit.
A firm such as Baillie Gifford is "investing here in a private company obviously anticipating that we might at some point in the future go public," Toon remarked.
As for when Graphcore might go public, "I've no idea," he said with a laugh.
A big part of why SONIC, and programs like it, are able to achieve parallel execution of tasks is computer memory. Memory may be the single most important aspect that's changing in chip design as a result of AI. In order for many tasks to work in parallel, the need for memory capacity to store data rises rapidly.
Memory on chips such as Nvidia's or Intel's is traditionally limited to tens of millions of bytes. Newer chips such as Graphcore's intelligence processing unit, or IPU, beef up the memory count to 300 million bytes. The IPU, like other modern chips, spreads that memory throughout the silicon die, so that memory is close to each of the over 1,000 individual computing units.
The result is that memory can be accessed much more quickly than going off the chip to a computer's main memory, which is still the approach of Nvidia's latest GPUs. Nvidia has ameliorated the situation by amplifying the conduit that leads from the GPU to that external memory, in part through the acquisition of communications technology vendor Mellanox last year.
But the movement from GPU to main memory is still no match for the speed of on-chip memory, which can be up to 45 billion bytes per second. That access to memory is a big reason why Bharadwaj's SONIC neural network was able to see a dramatic speed-up in training compared to how long it took to run on an Nvidia GPU.
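The bandwidth gap described above translates directly into transfer time. A rough back-of-the-envelope sketch, using illustrative bandwidth figures that are our assumptions rather than vendor specifications:

```python
def transfer_time_ms(num_bytes, bandwidth_bytes_per_s):
    """Time to move num_bytes at a given sustained bandwidth, in milliseconds."""
    return num_bytes / bandwidth_bytes_per_s * 1e3

# Illustrative figures only, not vendor specs: moving 300 MB of model state.
on_chip = transfer_time_ms(300e6, 45e12)   # hypothetical 45 TB/s on-chip path
off_chip = transfer_time_ms(300e6, 900e9)  # hypothetical 900 GB/s external DRAM
print(round(off_chip / on_chip))  # -> 50 (relative advantage of staying on chip)
```

The absolute numbers matter less than the ratio: for any given payload, the speedup from keeping data on-chip is simply the ratio of the two bandwidths.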
The Graphcore "Intelligence Processing Unit," or IPU, is composed of over 1,000 computers operating in parallel, each with its own batch of memory, to parallelize tasks that would usually have to run sequentially on conventional chips.
SONIC is an example to Toon of the new kinds of emerging neural nets that he argues will increasingly make the IPU a must for doing cutting-edge AI development.
"I think one of the things that the IPU is able to help innovators do is to create these next generation image perception models, make them much more accurate, much more efficiently implemented," said Toon.
An important question is whether SONIC's results are a fluke, or whether the IPU can speed up many different kinds of AI programs by doing things in parallel.
To hear Bharadwaj describe it, the union of his program and the Graphcore chip is somewhat specific. "SONIC was designed to leverage the IPU's capabilities," said Bharadwaj in his talk.
Toon, however, downplayed the custom aspect of the program. "There was no tweaking backwards and forwards in this case," he said of SONIC's development. "This was just an amazing output that they found from using the technology and the standard tools."
The work happened independent of Graphcore, Toon said. "The way this came about was, Microsoft called us up one day and they said, Wow, look what we were able to do."
Although the IPU was "designed so that it will support these types of more complex algorithms," said Toon, it is built to be much broader than a single model, he indicated. "Equally it will apply in other kinds of models." He cited, for example, natural language processing systems, "where you want to use sparse processing in those networks."
Microsoft AI scientist Sujeeth Bharadwaj told a healthcare technology conference about how his SONIC neural network had been constructed to take advantage of the Graphcore IPU chip.
The market for chips for both training, and, especially, for inference, has become a very crowded one. Nvidia is the dominant force in training, while Intel commands the most market share in inference. Along with Graphcore, Cerebras Systems of Los Altos, in Silicon Valley, is shipping systems and getting work from major research labs such as Argonne National Laboratory in the U.S. Department of Energy. Other major names have gotten funding and are in the development stage, such as SambaNova Systems, with a Stanford University pedigree.
Toon nevertheless depicted the market as a two-horse race. "Every time we go and talk to customers it's kind of us and Nvidia," he said. The competition has made little progress, he told ZDNet. In the case of Cerebras, the company "have shipped a few systems to a few customers," he said, adding, "I don't know what traction they're getting."
In the case of Intel, which last year acquired the Israeli startup Habana, "They still have a lot to prove," said Toon. "They haven't really delivered a huge amount, they've got some inference products out there, but nothing for training that customers can use," he said.
Some industry observers view the burden of proof lying more heavily on Graphcore's shoulders.
"Intel's acquisition of Habana makes it the top challenger to Nvidia in both AI inference and training," Linley Gwennap, editor of the prestigious chip newsletter Microprocessor Report, told ZDNet. Habana's benchmark results for its chips are better than the numbers for either Nvidia's V100, its current best chip, or Graphcore's part, contended Gwennap. "Once Intel ports its extensive AI software stack to the Habana hardware, the combination will be well ahead of any startup's platform."
Also: 'It's not just AI, this is a change in the entire computing industry,' says SambaNova CEO
Nvidia two weeks ago announced its newest chip for AI, called the "A100." Graphcore expects to leapfrog the A100 when Graphcore ships its second-generation processor, sometime later this year, said Toon. "When our next generation products come, we should continue to stay ahead."
Gwennap is skeptical. The Nvidia part, he said, "raises the performance bar well above every existing product," and that, he says, leaves all competitors "in the same position: claiming that their unannounced next-generation chip will leapfrog the A100's performance while trying to meet customers' software needs with a far smaller team than either Intel or Nvidia can deploy."
Technology executives tend to over-use the tale of David and Goliath as a metaphor for their challenge to an incumbent in a given market. With a viral pandemic spreading around the world, Toon chose a different image, that of Graphcore's technology spreading like a contagion.
"We've all learned about R0 and exponential growth," he said, referring to the propagation rate of COVID-19, known as the R-naught. "What we've got to do is to keep our R0 above 1."
Posted: at 3:09 am
Microsoft is laying off dozens of journalists and editorial workers at its Microsoft News and MSN organizations. The layoffs are part of a bigger push by Microsoft to rely on artificial intelligence to pick news and content that's presented on MSN.com, inside Microsoft's Edge browser, and in the company's various Microsoft News apps. Many of the affected workers are part of Microsoft's SANE (search, ads, News, Edge) division, and are contracted as human editors to help pick stories.
"Like all companies, we evaluate our business on a regular basis," says a Microsoft spokesperson in a statement. "This can result in increased investment in some places and, from time to time, re-deployment in others. These decisions are not the result of the current pandemic."
While Microsoft says the layoffs aren't directly related to the ongoing coronavirus pandemic, media businesses across the world have been hit hard by advertising revenues plummeting across TV, newspapers, online, and more.
Business Insider first reported the layoffs on Friday, and says that around 50 jobs are affected in the US. The Microsoft News job losses are also affecting international teams, and The Guardian reports that around 27 are being let go in the UK after Microsoft decided to stop employing humans to curate articles on its homepages.
Microsoft has been in the news business for more than 25 years, after launching MSN all the way back in 1995. At the launch of Microsoft News nearly two years ago, Microsoft revealed it had more than 800 editors working from 50 locations around the world.
Microsoft has gradually been moving towards AI for its Microsoft News work in recent months, and has been encouraging publishers and journalists to make use of AI, too. Microsoft has been using AI to scan for content and then process and filter it and even suggest photos for human editors to pair it with. Microsoft had been using human editors to curate top stories from a variety of sources to display on Microsoft News, MSN, and Microsoft Edge.
Posted: at 3:09 am
If you've ever been driven to insanity by not being able to find Wally (Waldo in the US), or if you love to ruin the fun of the game, there's an AI-powered robot for you. Enter the aptly named "There's Wally".
The creative design agency Redpepper has made an AI-powered robot that can find Wally in 4.45 seconds, which is especially useful, according to the agency itself, if you want to beat a five-year-old in a Where's Wally contest.
The robot consists of a Raspberry Pi 3B connected to a Vision camera kit for facial recognition and a metal robotic arm (the enthrallingly named uArm Swift Pro). This arm is connected to a novelty silicone rubber hand which points out Wally on the page, thereby also pointing out AI's triumphant victory over humanity in the key battleground of Where's Wally competitions.
But how does it work?
First, the creator, Matt Reed, built a database of around 130 pictures of Wally using Google Images search. Then, he used Google's AutoML Vision service to train AI on photos of Wally (you can train AI using AutoML too, as its drag-and-drop functionality means you don't need prior coding knowledge). This technology is the same as what enables Google Photos to recognise faces from photographs. Next, showtime: the camera takes dozens of high-resolution pictures of each target page in the Where's Wally book and these images are fed into an AI algorithm. The algorithm analyses the photos and, when it finds a face it is 95% or more confident is Wally's, the robotic arm (controlled via Python's uArm library) moves the silicone hand to reveal Wally, et voilà!
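The confidence-threshold step of that loop can be sketched in a few lines. The detection format and helper below are hypothetical stand-ins for the AutoML Vision prediction call and the uArm driver, not the project's actual code:

```python
CONFIDENCE_THRESHOLD = 0.95  # only point at faces the model is >=95% sure are Wally

def find_wally(detections, threshold=CONFIDENCE_THRESHOLD):
    """Return (x, y) of the highest-confidence Wally match above the threshold.

    `detections` is a list of dicts like {"x": ..., "y": ..., "score": ...},
    a plausible (assumed) shape for a vision API's face predictions.
    """
    candidates = [d for d in detections if d["score"] >= threshold]
    if not candidates:
        return None  # nothing confident enough; take more photos
    best = max(candidates, key=lambda d: d["score"])
    return (best["x"], best["y"])  # coordinates the robotic arm would point at

# With mock detections, only the 0.97-confidence hit qualifies:
mock = [{"x": 10, "y": 20, "score": 0.80}, {"x": 55, "y": 42, "score": 0.97}]
print(find_wally(mock))  # -> (55, 42)
```

The 95% cutoff is the trade-off knob: lower it and the arm points sooner but risks fingering an innocent bystander in a red-striped hat.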
Admittedly, an AI-powered robot that serves the need of finding Wally is a bit of a niche, but the technology used to create this robot does have wider implications, apart from possibly ostracising you from game night for ruining the game. There's Wally acts as a testament to the capability of AI facial recognition software: it's evident how quickly AI algorithms can pick a face out from a crowd. Reed actually used this technology in a more consequential creation named FaceDeals. FaceDeals is a facial recognition system that essentially uses one's face as a barcode, scanning you in when you enter a location like a supermarket. FaceDeals then checks you into that location on Facebook before sending you a unique discount code via text based on your individual preferences (as judged by Facebook). For example, if your Facebook preferences were geared towards your undying love for mozzarella, then when entering a cheese shop, you'd likely receive a discount code for mozzarella cheese.
Thus, although the technology in There's Wally is quite novel and, at first glance, inconsequential, the facial recognition software powering the robot has serious implications for the future, both in terms of safety and ethics.
Thumbnail GIF by Redpepper
Posted: at 3:09 am
As AI has grown from niche to mission-critical technology, the companies that enable it have multiplied and in many cases prospered. A good example of that success is DefinedCrowd, which has gone from the Disrupt stage to globe-spanning AI toolkit to the Fortune 500 in just a couple of years. The company just raised a new $50.5 million B round to further fuel its expansion.
DefinedCrowd doesn't make AI, but rather supplies the data used to create it, specializing in natural language processing. After all, someone has to vet the 500 different ways you could ask for the weather; otherwise it would be much more difficult for machine learning systems to tell what users mean. The same goes for computer vision, sentiment recognition and other domains for which the company creates and sorts data. DefinedCrowd has a paid community hundreds of thousands strong doing this highly necessary but voluminous work.
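Vetting "500 ways to ask for the weather" amounts to producing labeled utterance data. A minimal sketch of what such records might look like and how a trainer might consume them; the schema is illustrative, not DefinedCrowd's actual format:

```python
# Illustrative utterance-intent records of the kind crowd workers vet;
# the schema is a made-up example, not DefinedCrowd's real format.
records = [
    {"utterance": "what's the weather like today", "intent": "get_weather"},
    {"utterance": "do I need an umbrella tomorrow", "intent": "get_weather"},
    {"utterance": "set an alarm for 7am", "intent": "set_alarm"},
]

# Group utterances by intent, as an intent-classifier trainer might.
by_intent = {}
for r in records:
    by_intent.setdefault(r["intent"], []).append(r["utterance"])

print(sorted(by_intent))              # -> ['get_weather', 'set_alarm']
print(len(by_intent["get_weather"]))  # -> 2
```

The value of the human work is in the labels: many phrasings mapped to one intent is exactly what lets a model generalize beyond the phrasings it has seen.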
As AI has worked its way into everything from creating and editing media to enterprise software, there's been no shortage of companies in search of training data.
"The demand for data has consistently been growing over the last couple of years; companies are more and more aware of the impact that data has on their systems, and have been looking for more languages and domains that weren't considered five years ago," co-founder and CEO Daniela Braga told TechCrunch.
She emphasized inclusivity, the potential for bias and more multilingual deployments as drivers of that demand. New markets and applications are opening up constantly and entrants need high-quality data to develop consumer-ready products.
"This puts us in a very good position, as our data is agnostic and we can work pretty much across all verticals," Braga said.
As evidence this is not simply wishful thinking, the company reported a tremendous 656% increase in revenue year-over-year. They've also nearly tripled the size of their workforce in that time to more than 250 people.
It's toward hiring that Braga expects a great deal of the $50 million round to go: the company has to have the developers to make the products to follow the road map. That means doubling the employee count again.
I asked whether the present pandemic has had a major effect on DefinedCrowd's operations or business. Braga noted that she hasn't noticed a significant downturn in the industry, presumably because product development has continued in anticipation of consumer and enterprise needs returning to normal.
"We decided to make our business fully remote before lockdown measures were implemented," she explained. "Transferring every employee to remote working in a short space of time was challenging; however, considering we were already a global company with four offices in three different countries, the adaptation phase was fairly smooth, and we were able to maintain full speed during the process."
Semapa Next and Hermes GPE were added this round to the increasingly long list of investors, which now includes Evolution Equity Partners, Kibo Ventures, Portugal Ventures, Bynd Venture Capital, EDP Ventures, IronFire Ventures, Amazon Alexa Fund, Sony Innovation Fund and Mastercard.
Anoto’s subsidiary Knowledge AI Inc expands the customer base for its education solution KAIT by signing a fully paid subscription agreement with a…
Posted: at 3:09 am
Stockholm, June 1, 2020: Anoto Group AB ("Anoto" or "the Company") today announces that Anoto's education subsidiary Knowledge AI Inc has added Kyunghee University in South Korea as a fully paid subscription customer for its education solution KAIT.
The contract was entered into between Anoto's subsidiary Knowledge AI Inc, through its Korean distribution partner Soltworks Co. Ltd., and the Business School of Kyunghee University, a leading university in Seoul, South Korea. Kyunghee Business School will be using KAIT's proprietary assessment and testing platform for its 3,000 business school students starting in the second half of 2020.
"This deal shows that our product can be used for the higher education segment. In fact, we saw substantial excitement and demand for this type of platform, which expands customer diversification. We believe this university platform could become a major revenue contributor in the near future," says Joonhee Won, CEO of Knowledge AI Inc.
For further information, please contact:
Johannes Haglund, Chief of Staff
This information is information that Anoto Group AB (publ) is obliged to make public pursuant to the EU Market Abuse Regulation. The information was submitted for publication, through the agency of the contact person set out above, on June 1, 2020 at 08:00 CET.
About Anoto Group
Anoto is a publicly held Swedish technology company known globally for innovation in the area of information-rich patterns and the optical recognition of those patterns. It is a leader in digital writing and drawing solutions, having historically used its proprietary technology to develop smartpens and the related software. These smartpens enrich the daily lives of millions of people around the world. Now Anoto is a cloud-based software solution provider built on its patented dot pattern technology, which provides a methodology for accumulating digital big data from analogue inputs. Anoto Cloud includes Anoto's four solutions: KAIT, the world's first AI solution for offline education; ACE, Anoto's new and improved enterprise forms solution; aDNA, Anoto's secure interactive marketing solution; and Dr. Watson, Anoto's biometric authentication and security solution. Anoto is traded on the Small Cap list of Nasdaq Stockholm under ANOT.
Posted: at 3:09 am
Albert Hsiao, M.D., Ph.D., and his colleagues at the University of California San Diego (UCSD) health system had been working for 18 months on an artificial intelligence program designed to help doctors identify pneumonia on a chest X-ray.
When the coronavirus hit the U.S., they decided to see what it could do.
The researchers quickly deployed the application, which dots X-ray images with spots of color where there may be lung damage or other signs of pneumonia. It has now been applied to more than 6,000 chest X-rays, and it's providing some value in diagnosis, said Hsiao, director of UCSD's augmented imaging and artificial intelligence data analytics laboratory.
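The kind of overlay described, coloring regions where a model's per-pixel score is high, can be sketched in a few lines of NumPy. The score map here is synthetic, standing in for a real model's pneumonia scores:

```python
import numpy as np

def overlay_mask(image, scores, threshold=0.5):
    """Return an RGB image with red marking pixels whose score exceeds threshold.

    `image` is a 2-D grayscale array in [0, 1]; `scores` is a same-shaped
    array of model confidences (synthetic here, not a real model's output).
    """
    rgb = np.stack([image, image, image], axis=-1)  # grayscale -> RGB
    hot = scores > threshold                        # boolean mask of flagged pixels
    rgb[hot] = [1.0, 0.0, 0.0]                      # paint suspicious regions red
    return rgb

# Tiny synthetic example: a 2x2 "X-ray" with one high-scoring pixel.
img = np.array([[0.2, 0.4], [0.6, 0.8]])
scr = np.array([[0.1, 0.9], [0.3, 0.2]])
out = overlay_mask(img, scr)
print(out[0, 1])  # -> [1. 0. 0.]  (the flagged pixel is now pure red)
```

Real systems blend a continuous heatmap rather than hard-thresholding, but the principle is the same: the model's score map is registered to the image and rendered on top of it for the radiologist.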
His team is one of several around the country that has pushed AI programs developed in a calmer time into the COVID-19 crisis to perform tasks like deciding which patients face the greatest risk of complications and which can be safely channeled into lower-intensity care.
The machine-learning programs scroll through millions of pieces of data to detect patterns that may be hard for clinicians to discern. Yet few of the algorithms have been rigorously tested against standard procedures. So while they often appear helpful, rolling out the programs in the midst of a pandemic could be confusing to doctors or even dangerous for patients, some AI experts warn.
"AI is being used for things that are questionable right now," said Eric Topol, M.D., director of the Scripps Research Translational Institute and author of several books on health IT.
Topol singled out a system created by Epic, a major vendor of electronic health records software, that predicts which coronavirus patients may become critically ill. Using the tool before it has been validated is "pandemic exceptionalism," he said.
RELATED:Boston startup using AI, remote monitoring to fight coronavirus
Epic said the company's model had been validated with data from more than 16,000 hospitalized COVID-19 patients in 21 healthcare organizations. No research on the tool has been published, but, in any case, it was developed to help clinicians make treatment decisions and is not a substitute for their judgment, said James Hickman, a software developer on Epic's cognitive computing team.
Others see the COVID-19 crisis as an opportunity to learn about the value of AI tools.
"My intuition is it's a little bit of the good, bad and ugly," said Eric Perakslis, Ph.D., a data science fellow at Duke University and former chief information officer at the Food and Drug Administration. "Research in this setting is important."
Nearly $2 billion poured into companies touting advancements in healthcare AI in 2019. Investments in the first quarter of 2020 totaled $635 million, up from $155 million in the first quarter of 2019, according to digital health technology funder Rock Health.
At least three healthcare AI technology companies have made funding deals specific to the COVID-19 crisis, including Vida Diagnostics, an AI-powered lung-imaging analysis company, according to Rock Health.
Overall, AI's implementation in everyday clinical care is less common than hype over the technology would suggest. Yet the coronavirus crisis has inspired some hospital systems to accelerate promising applications.
UCSD sped up its AI imaging project, rolling it out in only two weeks.
Hsiao's project, with research funding from Amazon Web Services, the University of California and the National Science Foundation, runs every chest X-ray taken at its hospital through an AI algorithm. While no data on the implementation has been published yet, doctors report that the tool influences their clinical decision-making about a third of the time, said Christopher Longhurst, M.D., UCSD Health's chief information officer.
"The results to date are very encouraging, and we're not seeing any unintended consequences," he said. "Anecdotally, we're feeling like it's helpful, not hurtful."
RELATED:Headlines have touted AI over docs in reading medical images. New review finds evidence is limited
AI has advanced further in imaging than other areas of clinical medicine because radiological images have tons of data for algorithms to process, and more data makes the programs more effective, said Longhurst.
But while AI specialists have tried to get AI to do things like predict sepsis and acute respiratory distress (researchers at Johns Hopkins University recently won a National Science Foundation grant to use it to predict heart damage in COVID-19 patients), it has been easier to plug it into less risky areas such as hospital logistics.
In New York City, two major hospital systems are using AI-enabled algorithms to help them decide when and how patients should move into another phase of care or be sent home.
At Mount Sinai Health System, an artificial intelligence algorithm pinpoints which patients might be ready to be discharged from the hospital within 72 hours, said Robbie Freeman, vice president of clinical innovation at Mount Sinai. Freeman described the AI's suggestion as a "conversation starter," meant to help clinicians working on patient cases decide what to do. AI isn't making the decisions.
NYU Langone Health has developed a similar AI model. It predicts whether a COVID-19 patient entering the hospital will suffer adverse events within the next four days, said Yindalon Aphinyanaphongs, M.D., Ph.D., who leads NYU Langone's predictive analytics team.
The model will be run in a four- to six-week trial with patients randomized into two groups: one whose doctors will receive the alerts, and another whose doctors will not. The algorithm should help doctors generate a list of things that may predict whether patients are at risk for complications after they're admitted to the hospital, Aphinyanaphongs said.
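The two-arm design described above — doctors in one arm see the model's alerts, doctors in the other do not — can be sketched in a few lines. Everything below is an illustrative stand-in, not NYU Langone's actual trial code:

```python
import random

def assign_trial_arms(patient_ids, seed=42):
    """Randomly split patients into an 'alerts' arm (doctors receive the
    model's predictions) and a 'control' arm (doctors do not)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = patient_ids[:]
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "alerts": set(shuffled[:midpoint]),
        "control": set(shuffled[midpoint:]),
    }

arms = assign_trial_arms([f"patient-{i}" for i in range(100)])
print(len(arms["alerts"]), len(arms["control"]))  # 50 50
```

Comparing complication rates between the two arms is what lets researchers measure whether the alerts actually change outcomes, rather than just assuming they help.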
RELATED:Microsoft launches $40M AI for Health program to accelerate medical research
Some health systems are leery of rolling out a technology that requires clinical validation in the middle of a pandemic. Others say they didnt need AI to deal with the coronavirus.
Stanford Health Care is not using AI to manage hospitalized patients with COVID-19, said Ron Li, M.D., the center's medical informatics director for AI clinical integration. The San Francisco Bay Area hasn't seen the expected surge of patients who would have provided the mass of data needed to make sure AI works on a population, he said.
Outside the hospital, AI-enabled risk factor modeling is being used to help health systems track patients who aren't infected with the coronavirus but might be susceptible to complications if they contract COVID-19.
At Scripps Health in San Diego, clinicians are stratifying patients to assess their risk of getting COVID-19 and experiencing severe symptoms using a risk-scoring model that considers factors like age, chronic conditions and recent hospital visits. When a patient scores 7 or higher, a triage nurse reaches out with information about the coronavirus and may schedule an appointment.
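As a rough illustration of how such a threshold-based score works, here is a toy additive model. The weights and cutoffs below are invented for this sketch; only the score-of-7 trigger comes from the article, and this is not Scripps Health's actual algorithm:

```python
def covid_risk_score(age, chronic_conditions, recent_hospital_visits):
    """Toy additive risk score built from the factors named in the article:
    age, chronic conditions, and recent hospital visits. Weights are invented."""
    score = 0
    if age >= 65:
        score += 3
    elif age >= 50:
        score += 2
    score += 2 * min(chronic_conditions, 3)   # cap the contribution of comorbidities
    score += min(recent_hospital_visits, 2)
    return score

def needs_triage_outreach(score, threshold=7):
    # Per the article, a triage nurse reaches out at a score of 7 or higher.
    return score >= threshold

s = covid_risk_score(age=72, chronic_conditions=2, recent_hospital_visits=1)
print(s, needs_triage_outreach(s))  # 8 True
```

The design choice worth noting is the explicit threshold: it turns a continuous score into a simple operational rule a triage nurse can act on, at the cost of treating a patient scoring 7 very differently from one scoring 6.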
Though emergencies provide unique opportunities to try out advanced tools, it's essential for health systems to ensure doctors are comfortable with them, and to use the tools cautiously, with extensive testing and validation, Topol said.
"When people are in the heat of battle and overstretched, it would be great to have an algorithm to support them," he said. "We just have to make sure the algorithm and the AI tool isn't misleading, because lives are at stake here."
This KHN story first published on California Healthline, a service of the California Health Care Foundation. Kaiser Health News is a nonprofit news service covering health issues. It is an editorially independent program of the Kaiser Family Foundation, which is not affiliated with Kaiser Permanente.
Posted: at 3:09 am
In January, my coworker received a peculiar email. The message, which she forwarded to me, was from a handful of corporate Walmart employees calling themselves the Concerned Home Office Associates. (Walmart's headquarters in Bentonville, Arkansas, is often referred to as the Home Office.) While it's not unusual for journalists to receive anonymous tips, they don't usually come with their own slickly produced videos.
The employees said they were past their breaking point with Everseen, a small artificial intelligence firm based in Cork, Ireland, whose technology Walmart began using in 2017. Walmart uses Everseen in thousands of stores to prevent shoplifting at registers and self-checkout kiosks. But the workers claimed it misidentified innocuous behavior as theft, and often failed to stop actual instances of stealing.
They told WIRED they were dismayed that their employer, one of the largest retailers in the world, was relying on AI they believed was flawed. One worker said that the technology was sometimes even referred to internally as "NeverSeen" because of its frequent mistakes. WIRED granted the employees anonymity because they are not authorized to speak to the press.
The workers said they had been upset about Walmart's use of Everseen for years, and claimed colleagues had raised concerns about the technology to managers, but were rebuked. They decided to speak to the press, they said, after a June 2019 Business Insider article reported Walmart's partnership with Everseen publicly for the first time. The story described how Everseen uses AI to analyze footage from surveillance cameras installed in the ceiling, and can detect issues in real time, such as when a customer places an item in their bag without scanning it. When the system spots something, it automatically alerts store associates.
"Everseen overcomes human limitations. By using state-of-the-art artificial intelligence, computer vision systems, and big data, we can detect abnormal activity and other threats," a promotional video referenced in the story explains. "Our digital eye has perfect vision and it never needs a day off."
In an effort to refute the claims made in the Business Insider piece, the Concerned Home Office Associates created a video, which purports to show Everseen's technology failing to flag items not being scanned in three different Walmart stores. Set to cheery elevator music, it begins with a person using self-checkout to buy two jumbo packages of Reese's White Peanut Butter Cups. Because they're stacked on top of each other, only one is scanned, but both are successfully placed in the bagging area without issue.
The same person then grabs two gallons of milk by their handles, and moves them across the scanner with one hand. Only one is rung up, but both are put in the bagging area. They then put their own cell phone on top of the machine, and an alert pops up saying they need to wait for assistance, a false positive. "Everseen finally alerts! But does so mistakenly. Oops again," a caption reads. The filmmaker repeats the same process at two more stores, where they fail to scan a heart-shaped Valentine's Day chocolate box with a puppy on the front and a Philips Sonicare electric toothbrush. At the end, a caption explains that Everseen failed to stop more than $100 of would-be theft.
The video isn't definitive proof that Everseen's technology doesn't work as well as advertised, but its existence speaks to the level of frustration felt by the group of anonymous Walmart employees, and the lengths to which they went to prove their objections had merit.
In interviews, the workers, whose jobs include knowledge of Walmart's loss prevention programs, said their top concern with Everseen was false positives at self-checkout. The employees believe that the tech frequently misinterprets innocent behavior as potential shoplifting, which frustrates customers and store associates, and leads to longer lines. "It's like a noisy tech, a fake AI that just pretends to safeguard," said one worker.
Posted: at 3:09 am
Jheronimus Academy of Data Science (JADS), Eindhoven University of Technology, and Tilburg University, together with Utrecht University of Applied Sciences, are creating an Artificial Intelligence Toolkit intended to be directly applicable in industry. To this end, the public-private partnership CERTIF-AI has been set up, consisting of companies from the high-tech manufacturing industry alongside the knowledge partners. A four-year grant from NWO makes this possible. The toolkit consists of algorithms and methods for industrial applications.
The CERTIF-AI project (Certification of production process quality through Artificial Intelligence) will start on 1 September 2020. The aim is to use the large amounts of data generated by machines and production processes to certify processes, improve product quality, and identify problems. For industrial end-users, this means fewer errors in production, and therefore lower costs, or greater process reliability, and therefore a better product or service, the consortium says in an explanatory note. The project is not only academically innovative in applying AI techniques to real-time sensor data; JADS and the consortium partners also contribute to the deployment of AI for concrete industrial applications.
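Details of the CERTIF-AI methods are not public, but the kind of problem identification described above — spotting faults in real-time machine sensor data — can be illustrated with a simple trailing-window z-score check. Everything below is a generic sketch, not the project's toolkit:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Flag the indices of sensor readings that deviate strongly from the
    trailing window; a generic stand-in for more sophisticated AI methods."""
    flagged = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Skip constant windows (sigma == 0) to avoid division by zero.
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# A stable process with one injected fault at index 30.
data = [10.0 + 0.1 * (i % 5) for i in range(60)]
data[30] = 25.0
print(flag_anomalies(data))  # [30]
```

In a production setting the same idea runs as a streaming check, and the flagged readings feed the downstream goals the consortium names: certifying that a process stayed within bounds and catching quality problems early.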
The AI Toolkit is developed and implemented by a research team from JADS, Eindhoven University of Technology, Tilburg University, and Utrecht University of Applied Sciences. Industrial partners Damen Shipyards, Omron, Additive Industries, and VTEC have defined four concrete applications. Sioux Mathware, BrightCape, and UNIT040 are assisting in the implementation of the toolkit together with Hogeschool Utrecht and practical researchers from JADS.
Tim Foreman (Omron) expects the toolkit to deliver efficiency benefits: "The Omron IPC is a Build-to-Order product and has thousands of variants. With CERTIF-AI we are automating and integrating the validation process of new variants into the normal production process in order to reduce the delivery time of new variants." Damen Shipyards expects to acquire new knowledge with CERTIF-AI: "A more data-driven approach in operational processes is a strategic spearhead for Damen and makes it possible for us to create insights that were not previously possible," says Jasper Schuringa. "These insights offer optimization opportunities but also generate additional value for Damen, the customer, and the customer's customer."
According to the initiators, the CERTIF-AI project is a good example of the connecting role that JADS has in the collaboration between the parent universities TU Eindhoven and Tilburg University and the affiliated research institutes such as the Eindhoven AI Systems Institute (EAISI) that focuses, among other things, on the use of AI for high-tech systems.