The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: March 2017
The Legal Reality Of Virtual Reality – Forbes
Posted: March 11, 2017 at 8:15 am
Many of the legal issues that arise (or will arise) in connection with virtual reality are tried-and-true intellectual property issues that are not unique to VR, but many have a unique spin given the unexplored U.S. legal terrain of the technology. ...
Visit link:
Posted in Virtual Reality
Comments Off on The Legal Reality Of Virtual Reality – Forbes
Ready to get social in virtual reality? Here’s what’s coming to Gear VR and Oculus Rift – TechRadar
Posted: at 8:15 am
Ready to get social in virtual reality? Then hold onto your headset.
Oculus has announced a number of new features today that have everything to do with social interaction, plus added voice controls for even more convenient VR usage.
First up is Facebook Livestreaming for Gear VR. You read that right: users of the Samsung headset can stream what they're experiencing to a viewing audience.
To get streaming, look for the "Livestream to Facebook" option located in the Universal Menu. Click on this, and your livestream will launch right away, beaming out to all your pals on the world's largest social media network.
There is one small catch: only Samsung Gear VR users outside the US can livestream right now. However, Oculus says Facebook Livestreaming will arrive for every Samsung phone running the latest version of Android in the coming weeks.
Among Oculus' other announcements are new ways to be more social within a virtual reality experience and use your voice for search.
Social interaction is one of the sticking points of VR - it is an inherently isolating experience - so introducing more ways for users to interact with one another is key to headsets like the Oculus Rift's long-term success.
To that end, Oculus Rooms 1.2 is an update to the Rooms feature that lets you meet up with others in a virtual room to hang out and do things like watch Vimeo music videos.
With Rooms 1.2, users can also watch 360 videos, or videos shot to provide more immersion than a standard 2D clip. The Rooms update also includes voice search, letting you vocalize a request to find specific content.
To use voice search in Rooms, select Search in the TV zone and hit the microphone button.
Speaking of voice search (heh), a new addition by the name of Oculus Voice is now available to English speakers of both the Oculus Rift and Gear VR. You can now conduct searches from Oculus Home to find games, apps and other VR experiences with voice commands.
Depending on how well the voice recognition really works, this could be a convenient way to navigate around a VR world without needing to employ your hands. Oculus says more voice-enabled functions are on the way, such as looking for online friends and sending out game invites.
Finally, Oculus is breaking the champagne on a new feature called Oculus Events. Akin to Facebook Events, these let you locate (who else) friends and join a virtual reality experience as a group.
These events are public, and highlighted happenings will be featured in the Oculus Home. A new Events tab will list all Oculus Events for you to peruse and choose to join.
What kind of events are we talking here? Oculus says they entail everything from multiplayer games, tournaments, tech talks and trivia.
Will these new features entice more people to buy a Samsung Gear VR or Oculus Rift? Perhaps not on their own, but they definitely provide some fun new uses for the VR headsets, serving to bolster the ecosystem as competitors continue to crop up.
Follow this link:
Ready to get social in virtual reality? Here's what's coming to Gear VR and Oculus Rift - TechRadar
Posted in Virtual Reality
Comments Off on Ready to get social in virtual reality? Here’s what’s coming to Gear VR and Oculus Rift – TechRadar
Virtual Reality Filmmakers Tackle Smuttynose Island Murders – New Hampshire Public Radio
Posted: at 8:15 am
Imagine if you could be transported to a different place and time. Where would you go? For Daniel Gaucher and his film crew, that place is Smuttynose Island, off the New Hampshire coast. And the time? 1873, the year of the infamous Smuttynose Island murders. And they want you to be there, too, through the power of virtual reality. But filmmakers have a lot to learn when it comes to using this technology.
It's a frigid winter day. The sky is a brilliant blue. It's gusty, and the ocean looks choppy and cold. And in the distance, a lighthouse shines bright white on the rocky coast.
This is exactly the kind of place Daniel Gaucher was looking for. "I was looking for something that said New England, and had a sense of place," he says.
Gaucher is the director and co-creator of a film called "Maren's Rock." It's based on the true story of Maren Hontvet, who in 1873 was able to hide from a man who had already murdered two people in an incident known as the Smuttynose Island murders.
"Maren, in her night clothes in March with her tiny little dog, was able to hide in the crevice of a rock and elude this murderer all night long," Gaucher says.
But "Maren's Rock" isn't just a historical New England horror story. It's a 360-degree immersive virtual reality (VR) film. It puts you right in the time and place of the story, and there's no turning away.
"It's great in VR to have that sense of fear, that sense of what's behind you and things you don't know. A sense of dark spaces. And VR is the kind of medium that will put you right in there and tap right into those basic emotions."
Gaucher explains that during a traditional film, horror or otherwise, you can escape. If you're scared or upset, you can look away or grab onto the person next to you.
"But when you're immersed in VR, you do have to be a little bit aware of the audience's level of sensitivity, because there is no escape."
And if you're not careful, a really horrifying film might have the potential to become really, truly horrific in VR.
"We're not sure that it's potentially more traumatizing than in other media, but I think if we look at the results so far and if we look at these strong illusions, there are good reasons to think that it could be traumatizing."
That's Dr. Michael Madary. He is a post-doctoral researcher at the Johannes Gutenberg University in Mainz, Germany, and is co-author of the first code of ethical conduct for using and consuming VR technologies.
While he emphasizes that they don't know for sure if VR has more potential to traumatize than traditional media, he says creators who do use VR have a lot of responsibility.
"I guess what filmmakers might want to keep in mind is that they're using a new technology, and in effect what they're doing is running experiments."
And "Maren's Rock" certainly is an experiment. Gaucher has 20 years of professional experience under his belt, but he says just about every step in the VR filmmaking process has been like a blank slate, whether it's finding the line between what's scary and what's potentially traumatizing, or trying to direct a scene without getting in the 360-degree shot.
"The rules for the medium haven't been written yet. This is 100 years of film/AV language, and this is a whole new chapter. We're talking about having to completely re-address everything we've been taught. Everything I've learned for 20 years is going to be different now."
But he says the uncertainty, as well as the creative and intellectual challenges that come with this new technology, is what's driven him to really delve into the medium.
"I just realized the impact that shooting in VR was going to have, specifically on the post-production industry. And that as editors, we were going to have to learn a whole new language of what was acceptable and effective, and what was just too much in VR."
Gaucher is currently teaching a course on VR film production at Emerson College in Boston.
And as "Maren's Rock" makes its way through post-production, Gaucher says he and his collaborators aren't even close to finished with virtual reality.
"There's lots of other things that are begging to be experienced, and I'm dying to keep pushing this thing forward."
"Maren's Rock" is on track to be released around mid-May, possibly on Samsung Gear.
Excerpt from:
Virtual Reality Filmmakers Tackle Smuttynose Island Murders - New Hampshire Public Radio
Posted in Virtual Reality
Comments Off on Virtual Reality Filmmakers Tackle Smuttynose Island Murders – New Hampshire Public Radio
For an AI to understand thirst, you must first teach it about Drake – Quartz
Posted: at 8:14 am
What if Messenger Day automatically decided which groups received your photos based on an AI's understanding of what thirst looks like, combined with its knowledge of who should receive thirsty posts? More ambitious still: Could AI power a bot that ...
Read more from the original source:
For an AI to understand thirst, you must first teach it about Drake - Quartz
Posted in Ai
Comments Off on For an AI to understand thirst, you must first teach it about Drake – Quartz
Facebook AI chief: We can give machines common sense – ZDNet
Posted: at 8:14 am
Charlie Osborne | ZDNet
Neural networking could pave the way for AI systems to be given a capability which we have, until now, considered a human trait: the possession of common sense.
While some of us may have less of it than others, "common sense" -- albeit a vague concept -- broadly means making fair and good decisions in a complex environment, drawing on one's own experience and an understanding of the world rather than relying on structured information -- something artificial intelligence has trouble with.
This kind of intuition is a human concept, but according to Facebook AI research group director Yann LeCun, leaps forward in neural networking and machine vision could one day lead to software with common sense.
Speaking to MIT's Technology Review, LeCun said there is still "progress to be made" when it comes to neural networking which is required for machine vision.
Neural networks are artificial systems that mimic the structure of the human brain. By combining them with more advanced machine vision -- techniques for pulling data from imagery for use in tasks and decision-making -- LeCun says common sense will be the result.
For example, if you have a dominant object in an image, and enough data in object categories, machines can recognize specific objects like dogs, plants, or cars. However, some AI systems can now also recognize more abstract groupings, such as weddings, sunsets, and landscapes.
LeCun says that just five years ago, this wasn't possible, but as machines are granted vision, machine expertise is growing.
AI systems are still limited to the specific areas that humans train them in. You could show an AI system an image of a dog at a wedding, but unless the AI has seen one before and understands the context of the image, the response is likely to be what the executive calls "garbage." As such, they lack common sense.
Facebook wants to change this. LeCun says that while you can interact with an intelligent system through language to recognize objects, "language is a very low-bandwidth channel" -- and humans have a wealth of background knowledge which helps them interpret language, something machines do not currently have the capability to draw on in real-time to make contextual connections in a way which mimics common sense.
One way to solve this problem could be through visual learning and media such as streamed images and video.
"If you tell a machine 'This is a smartphone,' 'This is a steamroller,' 'There are certain things you can move by pushing and others you cannot,' perhaps the machine will learn basic knowledge about how the world works," LeCun told the publication. "Kind of like how babies learn."
"One of the things we really want to do is get machines to acquire the very large number of facts that represent the constraints of the real world just by observing it through video or other channels," the executive added. "That's what would allow them to acquire common sense, in the end."
By giving intelligent machines the power to observe the world, contextual gaps will be filled and it may be that AI could make a serious leap from programmed algorithms and set answers. One area, for example, Facebook wants to explore is the idea of AI systems being able to predict future events by showing them a few frames.
"If we can train a system to do this we think we'll have developed techniques at the root of an unsupervised learning system," LeCun says. "That is where, in my opinion, a lot of interesting things are likely to happen. The applications for this are not necessarily in vision -- it's a big part of our effort in making progress in AI."
Continue reading here:
Facebook AI chief: We can give machines common sense - ZDNet
Posted in Ai
Comments Off on Facebook AI chief: We can give machines common sense – ZDNet
Flippy the robot uses AI to cook burgers – ZDNet
Posted: at 8:14 am
Flippy the robot is starting its culinary career with one simple task, but just like any rookie, it is learning on the job. With some practice and training, Flippy will be able to do everything from chopping vegetables to plating meals like a pro. Miso Robotics created the robot, which debuted in a kitchen at the restaurant chain CaliBurger in Pasadena, Calif., this week.
"Flippy will initially only focus on flipping burgers and placing them on buns," David Zito, CEO of Miso Robotics tells ZDNet. He adds, "But since Flippy is powered by our own cooking AI software, it will continuously learn from its experiences to improve and adapt over time. This means Flippy will learn to take on additional tasks including grilling chicken, bacon, onions, and buns in addition to frying, prepping, and finishing plates. Eventually, Flippy will support CaliBurger's entire menu."
The robot can be installed in kitchens in less than five minutes, and it's designed to work alongside restaurant staff. Flippy will even politely move aside if it gets in someone's way. Computer vision and deep learning software make it much smarter than your average kitchen appliance.
"Flippy features a Sensor Bar allowing it to see in 3D, thermal, and regular vision for detecting the exact temperatures of the grill as well as readiness of each burger, which will expand to other menu items as Flippy continues to learn and adapt," says Zito.
Flippy uses computer vision and AI to cook burgers. (Image: Miso Robotics)
Flippy will be installed in more than 50 CaliBurger restaurants worldwide by the end of 2019.
"The application of artificial intelligence to robotic systems that work next to our employees in CaliBurger restaurants will allow us to make food faster, safer and with fewer errors," said John Miller, chairman of Cali Group, in a statement. "Our investment in Miso Robotics is part of our broader vision for creating a unified operating system that will control all aspects of a restaurant from in-store interactive gaming entertainment to automated ordering and cooking processes, 'intelligent' food delivery and real-time detection of operating errors and pathogens."
Automation is creeping into kitchens in many forms. This week, Chowbotics (formerly Casabots) announced that it raised $5 million of Series A funding for food service robots. Then there's also Grillbot Pro, which is like a Roomba for your grill. Moley Robotics is developing a fully automated and intelligent robotic chef. Various robots sell pizza, cook it, deliver it, and can even print it in outer space.
More:
Posted in Ai
Comments Off on Flippy the robot uses AI to cook burgers – ZDNet
Google’s AI subsidiary turns to blockchain technology to track UK … – The Verge
Posted: at 8:14 am
Forays by Google subsidiary DeepMind Health into the UK's medical institutions have been characterized by two major themes: first, amazing results powered by cutting-edge AI; and second, a lack of transparency over the handling of the UK's publicly funded data. With the science going swimmingly, DeepMind Health is focusing more than ever on reassuring UK citizens that their medical records are in safe hands. Its latest plan is a public ledger that shows which bits of data it's using, when, and for what purposes.
The initiative is called the Verifiable Data Audit, and was announced this week in a blog post written by DeepMind co-founder Mustafa Suleyman and the company's head of security and transparency, Ben Laurie. The audit technology is not yet in place, but would keep a publicly accessible record of every time DeepMind accesses hospital data, using technology related to the blockchain.
"Each time there's any interaction with data, we'll begin to add an entry to a special digital ledger," write Suleyman and Laurie. "That entry will record the fact that a particular piece of data has been used, and also the reason why -- for example, that blood test data was checked against the NHS national algorithm to detect possible acute kidney injury."
Like blockchain technologies, this information will be write-only: it can't be edited after the fact or deleted. It will also make use of cryptographic proofs that will allow experts to verify the integrity of the data. Unlike most blockchain systems, though, the ledger won't be distributed among members of the public, but stored by a number of entities including data processors like DeepMind Health and health care providers. The company says this won't impede the verification process, and that the choice was made to make the ledger more efficient. Blockchain ledgers like Bitcoin's, which are distributed among lots of different players, require a lot of power (computing and electrical) to compile and check -- as much as a small country consumes, according to some estimates.
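The audit technology itself isn't built yet, but the mechanism described here -- an append-only ledger in which each entry is cryptographically chained to its predecessor -- can be sketched in a few lines of Python. Everything below is illustrative: the class, the entry fields, and the "genesis" seed are invented for the example, not taken from DeepMind's actual design.

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    """Hash an entry together with the previous entry's hash, chaining them."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditLedger:
    """Append-only ledger: each record carries the hash of its predecessor,
    so editing or deleting any past entry breaks every hash that follows."""
    def __init__(self):
        self.records = []  # list of (entry, hash) pairs

    def append(self, entry: dict) -> str:
        prev = self.records[-1][1] if self.records else "genesis"
        h = entry_hash(entry, prev)
        self.records.append((entry, h))
        return h

    def verify(self) -> bool:
        prev = "genesis"
        for entry, h in self.records:
            if entry_hash(entry, prev) != h:
                return False
            prev = h
        return True

ledger = AuditLedger()
ledger.append({"data": "blood test", "reason": "acute kidney injury check"})
ledger.append({"data": "scan", "reason": "model evaluation"})
assert ledger.verify()
ledger.records[0][0]["reason"] = "tampered"  # any edit invalidates the chain
assert not ledger.verify()
```

Because each hash folds in the previous one, altering any historical entry invalidates every hash after it -- which is what makes such a ledger tamper-evident even without a publicly distributed copy.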
Speaking to The Guardian, Nicola Perrin of the Wellcome Trust said the technology should create a robust audit trail for public health data managed by DeepMind. "One of the main criticisms about DeepMind's collaboration with the Royal Free [Hospital Trust] was the difficulty of distinguishing between uses of data for care and for research," said Perrin. "This type of approach could help address that challenge, and suggests they are trying to respond to the concerns." DeepMind Health says it wants to implement the first pieces of the audit later this year.
More:
Google's AI subsidiary turns to blockchain technology to track UK ... - The Verge
Posted in Ai
Comments Off on Google’s AI subsidiary turns to blockchain technology to track UK … – The Verge
Satellites and AI will bring real-time, real-world data to your phone – TNW
Posted: at 8:14 am
The line for the SXSW panel "Eyes in the Sky: The Future of AI and Satellites" snaked around many corners in Austin's JW Marriott Hotel. Understandably: AI coupled with space shit, bring it on.
Spaceknow Inc.'s CEO Pavel Machalek did most of the talking during this session. Spaceknow is a San Francisco-based company building an AI system that can process the petabytes of data from the hundreds of commercial satellites circling us up above.
"We are digitizing the physical world, so we can build apps on top of it," Machalek stated. According to the Czech CEO, we're currently going through a sea change in how we use satellite data.
Everything from camera technology to actual satellites to launching those satellites into space is getting cheaper. Couple that with the abundance of computing power and the development of more robust machine learning systems, and it follows that "we can start extracting actionable information about the world," Machalek says.
His company works for lots of industrial clients, who want to know how many ships visit a certain harbor, or how many trucks pull up to a refinery to move oil. But some of the information they're extracting is also coupled to the Bloomberg terminal, informing investors about the growth of industrial areas in China.
"By counting and classifying things you get as objective a grip on reality as possible," he says, after telling a story about how the information they collect contradicted the official numbers the Chinese government put out. "In a world like this, in which people make up statistics, our numbers offer an objective look."
In a similar way, Spaceknow also distributes the Africa Night Lights Index, an index that is based on the light intensity measured by satellites and then aggregated according to individual countries as a more reliable economic indicator for developing countries in Africa.
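Aggregating satellite measurements into a per-country index like this is, at its core, a group-and-average operation. The sketch below uses invented readings standing in for real pixel-level light-intensity data; a production pipeline would also need calibration, cloud masking, and area weighting.

```python
from collections import defaultdict

# Hypothetical per-pixel readings: (country, light_intensity).
# The values are invented; real inputs would come from satellite imagery.
readings = [
    ("Kenya", 0.42), ("Kenya", 0.51), ("Nigeria", 0.73),
    ("Nigeria", 0.69), ("Ghana", 0.55),
]

def night_lights_index(readings):
    """Aggregate pixel-level light intensity into one mean value per country."""
    totals = defaultdict(lambda: [0.0, 0])
    for country, value in readings:
        totals[country][0] += value
        totals[country][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}

print(night_lights_index(readings))
```

This only shows the aggregation step; turning the result into a credible economic indicator is the hard part Spaceknow is selling.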
In the end, Machalek says that he'd like to cover the whole world with Spaceknow's system, allowing anyone with a smartphone to do real-time queries about real-world data -- meaning you could check from space how long the line is for the bar you want to go to, I guess.
The rest is here:
Satellites and AI will bring real-time, real-world data to your phone - TNW
Posted in Ai
Comments Off on Satellites and AI will bring real-time, real-world data to your phone – TNW
What’s AI, and what’s not – GCN.com
Posted: at 8:14 am
Artificial intelligence has become as meaningless a description of technology as "all natural" is when it refers to fresh eggs. At least, that's the conclusion reached by Devin Coldewey, a TechCrunch contributor.
AI is also often mentioned as a potential cybersecurity technology. At the recent RSA conference in San Francisco, RSA CTO Zulfikar Ramzan advised potential users to consider AI-based solutions carefully, in particular machine learning-based solutions, according to an article on CIO.
AI-based tools are not as new or productive as some vendors claim, he cautioned, explaining that machine learning-based cybersecurity has been available for over a decade via spam filters, antivirus software and online fraud detection systems. Plus, such tools suffer from marketing hype, he added.
Even so, AI tools can still benefit those with cybersecurity challenges, according to the article, which noted that IBM had announced its Watson supercomputer can now also help organizations enhance their cybersecurity defenses.
AI has become a popular buzzword, he said, precisely because it's so poorly defined. Marketers use it to create an impression of competence and to more easily promote "intelligent" capabilities as trends change.
The popularity of the AI buzzword, however, has to do at least partly with the conflation of neural networks with artificial intelligence, he said. Without getting too into the weeds, the two are not interchangeable -- but marketers treat them as if they are.
AI vs. neural networks
By using the human brain and large digital databases as metaphors, developers have been able to show ways AI has at least mimicked, if not substituted for, human cognition.
"The neural networks we hear so much about these days are a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis," Coldewey wrote.
"The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos -- but in another, more important way it's misleading," he added. While these pieces of software are interesting, versatile, and use human thought processes as inspiration in their creation, they're not intelligent.
AI analyst Maureen Caudill, meanwhile, described artificial neural networks (ANNs) as algorithms or actual hardware loosely modeled after the structure of the mammalian cerebral cortex but on much smaller scales.
A large neural network might have hundreds or thousands of processor units, whereas a brain has billions of neurons.
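The "repeated, structured mathematical analysis" Coldewey describes can be made concrete with a toy example: a single artificial neuron -- a weighted sum of inputs squashed through a sigmoid -- trained by gradient descent to reproduce logical OR. The learning rate and iteration count below are arbitrary choices for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a squashing function."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Tiny training set: logical OR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5

# The "repeated" part: many small passes over the data, each nudging
# the weights to shrink the error between output and target.
for _ in range(2000):
    for x, target in data:
        out = neuron(x, w, b)
        grad = (out - target) * out * (1 - out)  # sigmoid derivative
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

for x, target in data:
    print(x, round(neuron(x, w, b)))
```

Real networks stack thousands of such units into layers, but the training loop is the same in spirit: compute an output, measure the error, nudge the weights, repeat.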
Caudill, the author of Naturally Intelligent Systems, said that while researchers have generally not been concerned with whether their ANNs resemble actual neurological systems, they have built systems that have accurately simulated the function of the retina and modeled the eye rather well.
So what is AI?
There are about as many definitions of AI as there are researchers developing the technology.
The late MIT professor Marvin Minsky, often called the father of artificial intelligence, defined AI as "the science of making machines do those things that would be considered intelligent if they were done by people."
Infosys CEO Vishal Sikka sums up AI as any activity that used to only be done via human intelligence that now can be executed by a computer, including speech recognition, machine learning and natural language processing.
"When someone talks about AI, or machine learning, or deep convolutional networks, what they're really talking about is a lot of carefully manicured math," Coldewey recently wrote.
In fact, he said, the cost of a bit of fancy supercomputing is mainly what stands in the way of using AI in devices like phones or sensors that now boast comparatively little brain power.
If the cost could be cut by a couple orders of magnitude, he said, AI would be unfettered from its banks of parallel processors and free to inhabit practically any device.
The federal government sketched out its own definition of AI last October. In a paper on "Preparing for the future of AI," the National Science and Technology Council surveyed the current state of AI and its existing and potential applications.
The panel reported progress made on "narrow AI," which addresses single-task applications, including playing strategic games, language translation, self-driving vehicles, and image recognition.
Narrow AI now underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, according to the paper.
The opposite end of the spectrum, sometimes called artificial general intelligence (AGI), refers to a future AI system that exhibits apparently intelligent behavior at least as advanced as a person across the full range of cognitive tasks. NSTC said those capabilities will not be achieved for a decade or more.
In the meantime, the panel recommended the federal government explore ways for agencies to apply AI to their missions by creating organizations to support high-risk, high-reward AI research. Models for such an organization include the Defense Advanced Research Projects Agency and what the Department of Education has done with its proposal to create an ARPA-ED, which was designed to support research on whether AI could help significantly improve student learning.
See the rest here:
Posted in Artificial Intelligence
Comments Off on What’s AI, and what’s not – GCN.com
Artificial intelligence virtual consultant helps deliver better patient care – Science Daily
Posted: at 8:14 am
Interventional radiologists at the University of California at Los Angeles (UCLA) are using technology found in self-driving cars to power a machine learning application that helps guide patients' interventional radiology care, according to research presented today at the Society of Interventional Radiology's 2017 Annual Scientific Meeting.
The researchers used cutting-edge artificial intelligence to create a "chatbot" interventional radiologist that can automatically communicate with referring clinicians and quickly provide evidence-based answers to frequently asked questions. This allows the referring physician to provide real-time information to the patient about the next phase of treatment, or basic information about an interventional radiology treatment.
"We theorized that artificial intelligence could be used in a low-cost, automated way in interventional radiology as a way to improve patient care," said Edward W. Lee, M.D., Ph.D., assistant professor of radiology at UCLA's David Geffen School of Medicine and one of the authors of the study. "Because artificial intelligence has already begun transforming many industries, it has great potential to also transform health care."
In this research, deep learning was used to understand a wide range of clinical questions and respond appropriately in a conversational manner similar to text messaging. Deep learning is a technology inspired by the workings of the human brain, where networks of artificial neurons analyze large datasets to automatically discover patterns and "learn" without human intervention. Deep learning networks can analyze complex datasets and provide rich insights in areas such as early detection, treatment planning, and disease monitoring.
"This research will benefit many groups within the hospital setting. Patient care team members get faster, more convenient access to evidence-based information; interventional radiologists spend less time on the phone and more time caring for their patients; and, most importantly, patients have better-informed providers able to deliver higher-quality care," said co-author Kevin Seals, MD, resident physician in radiology at UCLA and the programmer of the application.
The UCLA team enabled the application, which resembles online customer service chats, to develop a foundation of knowledge by feeding it more than 2,000 example data points simulating common inquiries interventional radiologists receive during a consultation. Through this type of learning, the application can instantly provide the best answer to the referring clinician's question. The responses can include information in various forms, including websites, infographics, and custom programs. If the tool determines that an answer requires a human response, the program provides the contact information for a human interventional radiologist. As clinicians use the application, it learns from each scenario and progressively becomes smarter and more powerful.
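UCLA's application is built on far more capable technology than this (IBM's Watson, per the researchers), so the following is only a toy sketch of the general pattern: score an incoming question against example inquiries, return the associated answer, and hand off to a human when nothing matches well. The knowledge-base entries and similarity threshold are invented for illustration.

```python
from collections import Counter
import math

# Toy knowledge base: example inquiries paired with canned answers.
# Both are invented for illustration, not UCLA's actual data.
KB = [
    ("what should the patient do before the biopsy", "Fasting guidance: ..."),
    ("how long does recovery take after ablation", "Recovery info: ..."),
    ("who do I call to schedule a consultation", "CONTACT_HUMAN"),
]

def vectorize(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, threshold=0.3):
    """Return the answer whose example inquiry best matches the question;
    defer to a human when no example is similar enough."""
    q = vectorize(question)
    score, best = max(((cosine(q, vectorize(ex)), ans) for ex, ans in KB),
                      key=lambda pair: pair[0])
    return best if score >= threshold else "CONTACT_HUMAN"

print(answer("how long is recovery after an ablation"))
```

Swapping word-overlap similarity for a learned language model is what separates this sketch from a system that genuinely "learns from each scenario," but the fallback-to-a-human behavior mirrors what the researchers describe.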
The researchers used a technology called Natural Language Processing, implemented using IBM's Watson artificial intelligence computer, which can answer questions posed in natural language and perform other machine learning functions. This prototype is currently being tested by a small team of hospitalists, radiation oncologists and interventional radiologists at UCLA.
"I believe this application will have phenomenal potential to change how physicians interact with each other to provide more efficient care," said John Hegde, MD, resident physician in radiation oncology at UCLA. "A key point for me is that I think it will eventually be the most seamless way to share medical information. Although it feels as easy as chatting with a friend via text message, it is a really powerful tool for quickly obtaining the data you need to make better-informed decisions."
As the application continues to improve, researchers aim to expand the work to assist general physicians in interfacing with other specialists, such as cardiologists and neurosurgeons. Implementing this tool across the health care spectrum, said Lee, has great potential in the quest to deliver the highest-quality patient care.
Abstract 354: "Utilization of Deep Learning Techniques to Assist Clinicians in Diagnostic and Interventional Radiology: Development of a Virtual Radiology Assistant." K. Seals; D. Dubin; L. Leonards; E. Lee; J. McWilliams; S. Kee; R. Suh; David Geffen School of Medicine at UCLA, Los Angeles, CA. SIR Annual Scientific Meeting, March 4-9, 2017. This abstract can be found at sirmeeting.org.
Story Source:
Materials provided by Society of Interventional Radiology. Note: Content may be edited for style and length.
Follow this link:
Artificial intelligence virtual consultant helps deliver better patient care - Science Daily
Posted in Artificial Intelligence
Comments Off on Artificial intelligence virtual consultant helps deliver better patient care – Science Daily







