The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: August 8, 2017
Richard Arnold tried out a terrifying virtual reality slide on GMB and he was hysterical – Metro
Posted: August 8, 2017 at 4:12 am
Richard tried out the Shard VR experiences (Picture: ITV)
Poor Richard Arnold is having an absolute mare this week.
On yesterday's Good Morning Britain, the showbiz correspondent was forced to dress as a shark to celebrate the GMB hosts' roles in Sharknado 5.
And today, he was made to walk the plank 800 feet in the air and essentially throw himself off the Shard.
Richard volunteered to try out new virtual reality experiences at the top of London's Shard, but that may have been a terrible idea, considering he is afraid of heights.
Firstly, he tried out a VR experience which put you in the shoes of the builders working on the tallest building in the United Kingdom.
As Richard put on his glasses, he was transported to a plank of scaffolding 95 storeys up, causing him to scream, which was fairly amusing, considering we were just watching him stand still.
He shouted: "Oh my god, it's horrible. That's horrendous. I can't look down."
Back in the studio, Kate Garraway joked: "It's torture Richard week on Good Morning Britain, welcome."
But things got worse when Richard had to try out a virtual reality slide that made you think you were plunging 800 feet down and across the London skyline.
When the reporter strapped himself into the slide and saw himself whizzing through the air at speeds up to 100mph, Richard screamed and held onto the sides of his slide, leaving Kate and guest host Jeremy Kyle in stitches.
Better you than us, Rich.
If you're brave and/or stupid, you can book your own Shard VR experiences here.
Although we doubt you could beat Richard's screams.
Posted in Virtual Reality
Comments Off on Richard Arnold tried out a terrifying virtual reality slide on GMB and he was hysterical – Metro
Can ‘Star Wars’ Ignite Cinematic Virtual Reality? – MediaPost Communications
In the past few years, I've been on the lookout for virtual reality experiences that cross the line into believable experiences. I've demoed Microsoft HoloLens and explored Vive, Oculus, and Samsung Gear.
They all have their place, but none of them took me out of this world, and into another -- except one.
Two years ago, I was one of the first people to demo a new technology platform called The Void at the TED conference in Vancouver.
The Void describes itself as "hyper-reality": a whole-body, fully immersive VR experience.
I wore a haptic vest that uses sound and vibration to ramp up the sense of realism for explorers. I was transported to an ancient temple. From there, I walked down the stone-lined pathways, solving puzzles to open a door into the next chamber. On the wall, a torch was burning, and a voice in my headset suggested I take it along with me.
The plot was carefully choreographed to play out from room to room, with actual walls and stone chairs that drove the sense of reality. The floors shook and the walls felt cold to the touch.
Then a floor dropped away and a lake emerged with a rumble, and a massive serpent rose up and moved in for the kill. Thankfully, I had my torch to keep the serpent at bay.
The Utah-based startup has developed a proprietary head-mounted display, the haptic vest, a tracking system and software called Rapture.
TED's Katherine McCartney said The Void "is pioneering a new form of cinematic virtual reality."
Because The Void is both digital and physical, it takes your mind places that images and sound alone cannot. The images of the waves crashing on the shore are combined with a mist of water, and that little physical clue takes you there. It's not fake, it's real. And it felt to me then as if I'd seen a glimpse into the future.
"At The VOID, we combine the magic of illusion, advanced technology and virtual reality to create fully immersive social experiences that take guests to new worlds," said Curtis Hickman, co-founder and chief creative officer at The VOID. "A truly transformative experience is so much more than what you see with your eyes; it's what you hear, feel, touch, and even smell."
"Help me, Obi-Wan Kenobi. You're my only hope." – Princess Leia Organa
Smell? Yes, the idea is to engage all your senses and turn audience members into active participants. How many of us have imagined having a light saber in our hands, hearing the sound as it cuts through the air, and our hands tingling when our saber connects with a combatant's weapon? I'm SO THERE!
"The Force will be with you. Always." – Obi-Wan Kenobi
The executive in charge of ILMxLab, Vicki Dobbs Beck, said, "By combining Lucasfilm's storytelling expertise with cutting-edge imagery and immersive sound from the team at Skywalker Sound, while invoking all the senses, we hope to truly transport all those who experience 'Star Wars: Secrets of the Empire' to a galaxy far, far away."
For a generation raised on "Star Wars," this is a journey we've been waiting for.
"Do. Or do not. There is no try." – Yoda
If you want to see what it feels like to be inside The Void, this was my experience at TED:
Firefox soon will help you lose yourself in the VR web – CNET
A demonstration shows Mozilla's Firefox catching up to Google Chrome and Microsoft Edge with WebVR support.
Mozilla plans to release a version of its Firefox browser Tuesday that embraces a version of virtual reality for the web.
Back in 2014, Mozilla developers including Vladimir Vukicevic put together a concept called WebVR. The idea was to let web browsers navigate virtual realms, and to let people create a VR world once and have it work on all sorts of devices.
But Vukicevic headed off to game engine maker Unity, and Google's Chrome browser beat Mozilla with WebVR support. Microsoft's Edge also edged out Firefox, adding WebVR support in April. Microsoft and Google, which both sell devices to experience virtual reality and its augmented reality cousin, have a big incentive to make virtual reality real.
"WebVR is the major platform feature shipping in Firefox 55," the latest Firefox release calendar update says. "Firefox users with an HTC Vive or Oculus Rift headset will be able to experience VR content on the web and can explore some exciting demos."
There's plenty to do on the web with a PC, and plenty of apps to run on a phone. But for VR to thrive, there has to be plenty of stuff for us to do online virtually, too. WebVR is an important part of keeping us supplied with games, tourist attractions, educational lessons and other interesting things to do in virtual realms.
There are caveats to using WebVR today. Chrome's support is only on Android-powered devices right now, and WebVR on Edge requires you to put the browser in a developer mode.
WebVR is also important for Mozilla. The nonprofit organization is fighting to reclaim its relevance and restore its reputation after Firefox slid into Chrome's shadow in recent years. The work to get Firefox back into fighting trim will culminate with Firefox 57, due to arrive Nov. 14.
There's plenty of VR hardware available, from high-end headsets like Facebook's Oculus Rift and HTC's Vive to basic models like Google's inexpensive Cardboard, which relies on your phone to show VR views. With WebVR, it's in principle easier to build those VR destinations, because developers don't have to re-create them for each device.
WebVR isn't the only way to bridge the divide, though: Unity also offers tools to span multiple headsets.
And WebVR is no universal cure. Some VR headsets don't support WebVR, and some browsers don't support all devices.
Mozilla has high hopes for VR. Its senior vice president of emerging technologies, Sean White, has been working with VR for more than two decades.
"In the 1990s, unless you had $5 million or $10 million, you couldn't do it," he said in a recent interview. "Now if there's somebody with Parkinson's disease who can't move or travel, I could take them to Angkor Wat."
In the long term, he and his boss, Mozilla Chief Executive Chris Beard, think VR could be eclipsed by augmented reality. VR immerses you in fully computerized worlds, while AR overlays computer-generated imagery atop the real world.
"VR will beget AR pretty quickly as a mass-market opportunity," Beard said. "Browsers play a very meaningful role."
First published Aug. 8, 5 a.m. PT. Update, 10:55 a.m.: Adds detail about Microsoft and Chrome support for WebVR.
4-D camera could improve robot vision, virtual reality and self-driving cars – Phys.Org
August 7, 2017. Image caption: Two 138-degree light field panoramas (top and center) and a depth estimate of the second panorama (bottom). Credit: Stanford Computational Imaging Lab and Photonic Systems Integration Laboratory at UC San Diego
Engineers at Stanford University and the University of California San Diego have developed a camera that generates four-dimensional images and can capture 138 degrees of information. The new camera, the first-ever single-lens, wide-field-of-view light field camera, could generate information-rich images and video frames that will enable robots to better navigate the world and understand certain aspects of their environment, such as object distance and surface texture.
The researchers also see this technology being used in autonomous vehicles and augmented and virtual reality technologies. Researchers presented their new technology at the computer vision conference CVPR 2017 in July.
"We want to consider what would be the right camera for a robot that drives or delivers packages by air. We're great at making cameras for humans but do robots need to see the way humans do? Probably not," said Donald Dansereau, a postdoctoral fellow in electrical engineering at Stanford and the first author of the paper.
The project is a collaboration between the labs of electrical engineering professors Gordon Wetzstein at Stanford and Joseph Ford at UC San Diego.
UC San Diego researchers designed a spherical lens that provides the camera with an extremely wide field of view, encompassing nearly a third of the circle around the camera. Ford's group had previously developed the spherical lenses under the DARPA "SCENICC" (Soldier CENtric Imaging with Computational Cameras) program to build a compact video camera that captures 360-degree images in high resolution, with 125 megapixels in each video frame. In that project, the video camera used fiber optic bundles to couple the spherical images to conventional flat focal planes, providing high-performance but at high cost.
The new camera uses a version of the spherical lenses that eliminates the fiber bundles through a combination of lenslets and digital signal processing. Combining the optics design and system integration hardware expertise of Ford's lab and the signal processing and algorithmic expertise of Wetzstein's lab resulted in a digital solution that not only leads to the creation of these extra-wide images but enhances them.
The new camera also relies on a technology developed at Stanford called light field photography, which is what adds a fourth dimension to this camera: it captures the two-axis direction of the light hitting the lens and combines that information with the 2-D image. Another noteworthy feature of light field photography is that it allows users to refocus images after they are taken, because the images include information about the light position and direction. Robots could use this technology to see through rain and other things that could obscure their vision.
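The after-the-fact refocusing that light field photography enables can be illustrated numerically: shifting each angular view of a 4-D light field before averaging selects which depth ends up sharp. This is a minimal shift-and-add sketch, not the researchers' actual pipeline; the array shapes and the `refocus` helper are illustrative assumptions.

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetic refocus by shift-and-add.

    light_field: array of shape (U, V, H, W), where U and V index the
    two angular axes and H, W are the spatial image axes.
    shift: pixels of parallax compensated per unit of angle; varying
    this refocuses the image to different depths after capture.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round((u - U // 2) * shift))
            dv = int(round((v - V // 2) * shift))
            # Shift this angular view to align a chosen depth, then sum.
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy 3x3 angular grid of 8x8 views.
lf = np.random.rand(3, 3, 8, 8)
img = refocus(lf, shift=1.0)
```

With `shift=0` the function simply averages all views, which corresponds to focusing at infinity; nonzero shifts move the focal plane closer.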
"One of the things you realize when you work with an omnidirectional camera is that it's impossible to focus in every direction at once; something is always close to the camera, while other things are far away," Ford said. "Light field imaging allows the captured video to be refocused during replay, as well as single-aperture depth mapping of the scene. These capabilities open up all kinds of applications in VR and robotics."
"It could enable various types of artificially intelligent technology to understand how far away objects are, whether they're moving and what they're made of," Wetzstein said. "This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it."
And while this camera can work like a conventional camera at far distances, it is also designed to improve close-up images. Examples where it would be particularly useful include robots that have to navigate through small areas, landing drones and self-driving cars. As part of an augmented or virtual reality system, its depth information could result in more seamless renderings of real scenes and support better integration between those scenes and virtual components.
The camera is currently at the proof-of-concept stage and the team is planning to create a compact prototype to test on a robot.
More information: Technical paper: http://www.computationalimaging.org/w 04/LFMonocentric.pdf
Virtual reality video showcases UCLA campus as 2028 Olympic … – Daily Bruin
LA 2024 knew it couldn't bring every member of the International Olympic Committee to UCLA.
So it brought UCLA to the IOC.
LA 2024, which, following a deal with the committee, is now LA 2028, gave a presentation at IOC headquarters in Lausanne, Switzerland, last month and brought a virtual reality video with it. The goal was to convince the committee that Los Angeles was games-ready because the facilities at UCLA will serve as the Olympic Village.
The video takes the viewer around UCLA's sports and living facilities in 360 degrees, seamlessly transitioning from spots like the basketball courts at the John Wooden Center to the dining room at Bruin Plate.
"We wanted to be able to showcase this and really put people on UCLA's campus and in the middle of the village, even if they couldn't be here," said LA 2028's director of marketing, Matt Rohmer. "With the latest developments in (virtual reality), we were able to develop a VR film that literally puts you in the middle of campus."
The video came out of a joint effort between LA 2028's team and two other companies: advertising agency 72andSunny and virtual reality team Jaunt.
The partnership between 72andSunny and LA 2028 has lasted for the last three years, with the two working together to develop a brand for the movement. 72andSunny was the first team to come up with the idea to use VR.
Sean Matthews, one of two creative directors at 72andSunny, said developing an athletes' village is one of the main challenges of creating an Olympic bid, so a key message in the video was that Los Angeles already has a fully equipped facility. With the village ready to go, there will be no need to invest billions of dollars in building one.
"Paris has plans to build a city, or to build this Olympic Village," Matthews said. "You can put on this headset and we'll actually show you how an athlete will train, will live and will dine. Instead of showing you renders and blueprints, let's just show you the real thing."
From there, LA 2028 and 72andSunny reached out to Jaunt, which started as a Silicon Valley technology company four years ago, but has since started Jaunt Studios, a content-driven, cinematic VR producer located in Santa Monica.
"When 72andSunny approached us, (the company) had a very tight deadline to create a very high-end piece of immersive content to help seal the deal for (its) bid," said Jaunt Studios creative director Patrick Meegan.
Jaunt had less than six weeks to shrink UCLA's campus down to the size of a VR headset. The approaching summer break accelerated its timeline even more, since it was crucial that the campus be populated with students.
"On the timeline we were doing, this would have been very difficult a year ago," Meegan said. "You could have done it potentially a year ago, but definitely not five years ago or even two years ago."
Meegan said technologies that Jaunt developed in the past year allowed them to meet the deadline. Jaunt used hardware like waterproof cameras, drones, remote control cameras and cable cams in addition to recently developed software to help ease the transitions between scenes.
Though the IOC was LA 2028's initial target audience, the video has been shared on Facebook, amassing more than 340,000 views, 7,000 reactions and 1,000 shares, many of which came from within the UCLA community.
Meegan added that she thinks VR's ability to make an empathetic connection, and the way the new medium lends itself to a certain type of honesty and authenticity, allows the campus to speak for itself.
"With a 360 view, you can't hide anything," Meegan said. "I think that part of why it resonates with UCLA's current students and alumni is that you're very much put back there; it's very familiar."
Even with all the outside people LA 2028 had to bring in to make the video, it didn't have to look far to find athletes. UCLA swimmers, divers and track and field runners participated in the production of the video and even made the final cut.
"UCLA has such an amazing athletics program; we could really get the highest caliber of athletes to do set pieces with us," Meegan said.
One of those athletes was diver Annika Lenz, who holds the UCLA record with a platform score of 323.15. At one point Lenz was atop Spieker Aquatics Center's 10-meter platform, eye-to-eye with a drone camera hovering above the pool.
"I've always loved the Olympics," Lenz said. "I've wanted to go to the Olympics. I mean, I've been to Olympic trials, but I didn't qualify, so I think it's great to be part of the Olympic spirit that brings us all together."
The best part about the video, though, is that it worked. The IOC was convinced.
Los Angeles was officially named the host of the 2028 games July 31. In just 11 short years, UCLA will be the site of the Olympic Village in more than just virtual reality.
AI can now detect anthrax which could help the fight against … – The Verge
In an effort to combat bioterrorism, scientists in South Korea have trained artificial intelligence to speedily spot anthrax. The new technique is not 100 percent accurate yet, but it's orders of magnitude faster than our current testing methods. And it could revolutionize how we screen mysterious white powders for the deadly bioweapon.
Researchers at the Korea Advanced Institute of Science and Technology combined a detailed imaging technique called holographic microscopy with artificial intelligence. The algorithm they created can analyze images of bacterial spores to identify whether they're anthrax in less than a second. It's accurate about 96 percent of the time, according to a paper published last week in the journal Science Advances.
Anthrax can kill quickly, if left untreated
Anthrax is an infection caused by the bacteria Bacillus anthracis, which lives in soil. (Both the infection and the bacteria are often referred to as anthrax.) People can accidentally get anthrax infections when they handle the skin or meat of infected animals. But anthrax can also be a dangerous bioweapon: in 2001, anthrax spores sent in the mail infected 22 people and killed five of them.
Once the spores enter the body, they germinate and multiply, causing a flu-like illness that poisons the blood. At least 85 percent of people infected by inhaling the spores die if left untreated, sometimes within just one to two days after symptoms appear. (Anthrax infections of the skin, by contrast, tend to be less fatal.) For people especially at risk of contracting anthrax, like lab workers or people who work with animal hair, there's a vaccine. For the rest of us, there are antibiotics, but these work best when they're started as soon as possible after exposure.
It's important to detect anthrax fast
So it's important to detect anthrax fast. Right now, one of the most common methods is to analyze the genetic material of the spores or, once someone is infected, of the bacteria found in infected tissue. But that typically requires giving the spores a little time to multiply in order to yield enough genetic material to analyze. "It's still going to take the better part of a day with the most rapid approaches to get a result," says bacteriologist George Stewart at the University of Missouri, who has also developed an anthrax detector and was not involved in this study.
In search of a quicker screening technique, the study's lead author, physicist YongKeun Park, teamed up with South Korea's Agency for Defense Development. "The goal is to be prepared in case North Korea is developing anthrax as a bioweapon," he says.
Park turned to an imaging technique called holographic microscopy: unlike conventional microscopes, which can only capture the intensity of the light scattering off an object, a holographic microscope can also capture the direction that light is traveling. Since the structure and makeup of a cell can change how light bounces off of it, the researchers suspected that the holographic microscope might capture key, but subtle, differences between spores produced by anthrax and those produced by closely related, but less toxic species.
The AI could ID the anthrax spores within seconds
Park and his team then trained a deep learning algorithm to spot these key differences in more than 400 individual spores from five different species of bacteria. One species was Bacillus anthracis, which causes anthrax, and four were closely related doppelgängers. The researchers didn't tell the neural network exactly how to spot the different species; the AI figured that out on its own. After some training, it could distinguish the anthrax spores from the non-anthrax doppelgänger species about 96 percent of the time.
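The classify-spores-by-species setup can be sketched in miniature: feature vectors for five classes go in, a classifier learns to separate them, and accuracy is measured on the labeled examples. Everything below is an invented stand-in (synthetic Gaussian clusters and a plain softmax classifier) for the study's holographic images and deep network; only the five-class structure mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 5 bacterial "species," each spore a 16-dim feature vector.
# In the study these features come from holographic microscopy images;
# here they are synthetic clusters around random class centers.
n_classes, n_features, n_per_class = 5, 16, 80
centers = rng.normal(0.0, 1.0, (n_classes, n_features))
X = np.vstack([c + 0.3 * rng.normal(0.0, 1.0, (n_per_class, n_features))
               for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# Minimal softmax classifier trained by gradient descent -- a
# stand-in for the deep network used in the paper.
W = np.zeros((n_features, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(200):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # class probabilities
    W -= 0.1 * X.T @ (p - onehot) / len(X)     # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

On these well-separated synthetic clusters the classifier reaches near-perfect training accuracy; the hard part in the real study is that the five species look almost identical, which is why a deep network and thousands of holographic measurements were needed.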
The technique isn't perfect, and as a tool intended to detect bioweapons, it has to be. "The drawback is that the accuracy is lower than conventional methods," Park says. There are also multiple strains of each of the bacteria species analyzed, but the machine was trained on only one strain per species. Subtle differences between the strains might be able to throw off the algorithm, Stewart says. Still, the new technique is so rapid that it could come in handy. "It doesn't require culturing organisms, it doesn't require extracting DNA, it doesn't require much of anything other than being able to visualize the spores themselves," Stewart says.
It could enhance our preparation for this kind of biological threat.
Next, Park wants to feed the neural network more spore images, in order to boost accuracy. In the meantime, the method could be used as a pre-screening tool to rapidly determine whether a white powder that people have been exposed to is anthrax, and if they should start antibiotics. A slower, more accurate method could then confirm the results.
This paper "will not change everything," Park says, but it's one step toward a method that can quickly detect anthrax. "It could enhance our preparation for this kind of biological threat."
An artificial intelligence researcher reveals his greatest fears about the future of AI – Quartz
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles, and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm, and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master; these are not world-changing consequences. Indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
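The loop described here (evaluate every creature, keep the fittest as parents, build the next generation from mutated copies) can be sketched in a few lines. Everything below is illustrative: the genome encoding, the fitness task, and all parameters are hypothetical stand-ins, not the author's actual research code.

```python
import random

# Toy neuroevolution-style loop. A "creature" is just a list of weights,
# and fitness rewards matching a fixed target vector -- a stand-in for a
# navigation or memory task in a real virtual environment.

TARGET = [0.5, -0.2, 0.9, 0.1]

def fitness(genome):
    # Higher is better: negative squared distance to the target behavior.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Small Gaussian perturbations introduce variation each generation.
    return [g + random.gauss(0, rate) for g in genome]

def evolve(pop_size=50, generations=100):
    population = [[random.uniform(-1, 1) for _ in range(4)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every creature; the best quarter become parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 4]
        # The next generation is mutated copies of randomly chosen parents.
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
```

Selection pressure steadily weeds out genomes that handle the task badly, which is the mechanism behind the claim that errors get found and eliminated generation by generation.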
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution, and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty, and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.
Being a scientist doesnt absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected, and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.
There is one last fear, embodied by HAL 9000, the Terminator, and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligent system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all its own, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge, or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.
This article was originally published on The Conversation. Read the original article.
Read more:
An artificial intelligence researcher reveals his greatest fears about the future of AI - Quartz
Legal Services Innovator Axiom Unveils Its First AI Offering – Above the Law
Since its founding in 2000 as an alternative provider of legal services to large corporations, Axiom has grown to have more than 1,500 lawyers and 2,000 employees across three continents, serving half the Fortune 100. With a reputation for innovation, it describes itself as a provider of tech-enabled legal services.
Given that description, it would seem inevitable that Axiom would bring artificial intelligence into the mix of the services it offers. Now it has. This week, it announced the launch of AxiomAI, a program that aims to leverage AI to improve the efficiency and quality of Axiom's contract work.
AxiomAI has two components, Paul Carr, the senior executive overseeing Axiom's AI efforts, told me during an interview Friday. One is research, development and testing of AI tools for legal services. The other is deploying AI tools within Axiom's standard workflows as testing proves them ready.
The first such deployment will come later this month, as Axiom embeds Kira Systems' machine-learning contract analysis technology into its M&A diligence and integration offering. In an M&A deal, which can require review of thousands of corporate contracts, Kira automates the process of identifying key provisions such as change of control and assignment.
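Kira's machine-learning models are proprietary and not described in this article, but the underlying task (flagging contract paragraphs that may contain a given provision) can be illustrated with a deliberately naive keyword pass. The provision names and patterns below are assumptions for the sketch, not Kira's actual approach:

```python
import re

# Toy provision flagger: scan contract paragraphs and report which ones
# may contain change-of-control or assignment language. A real ML system
# learns these patterns from labeled examples instead of hand-written regexes.

PROVISION_PATTERNS = {
    "change_of_control": re.compile(r"change\s+of\s+control", re.I),
    "assignment": re.compile(r"\bassign(s|ed|ment)?\b", re.I),
}

def flag_provisions(paragraphs):
    """Return (paragraph_index, provision_name) pairs worth human review."""
    hits = []
    for i, text in enumerate(paragraphs):
        for name, pattern in PROVISION_PATTERNS.items():
            if pattern.search(text):
                hits.append((i, name))
    return hits

contract = [
    "This Agreement may not be assigned without prior written consent.",
    "Payment is due within 30 days of invoice.",
    "Upon a change of control of either party, the other may terminate.",
]
flags = flag_provisions(contract)
```

The point of the machine-learning version is precisely what the keyword version cannot do: catch provisions phrased in ways no fixed pattern anticipates, across thousands of documents.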
"In the context of M&As, the AI will be invisible to our clients," Carr said. "They know they have to understand the risks that may be in those agreements. They need someone to sort that out: which agreements apply, what's in them, in a very accurate way. And they need actionable recommendations, very specific recommendations. That's what we deliver today, but we'll deliver it better and faster using AI behind the scenes."
Beyond this immediate deployment, AxiomAI will encompass a program of ongoing research and testing of AI's applicability to the delivery of legal services. In fact, it turns out that Axiom has quietly been performing this research for four years, including partnering with leading experts and vendors in the field of machine learning.
"We've been watching this space for a while," Carr said. "We've been testing really actively, running proofs of concept of various AI tools over the last four years. At a fundamental level, we do believe that for a lot of legal work, AI will have really important applications and will change legal workflows into the future."
The focus of Axiom's AI research is, as Carr put it, "all things contracting," from creating single contracts to applying analytics to collections of contracts. And the type of AI on which it is focused is machine learning. "We think the area that is most interesting is machine learning and, specifically, the whole area of deep learning within machine learning."
In the case of Kira, Axiom's testing had demonstrated that the product was ready for deployment. "We felt that the maturity of the technology (which is really code for the ability of the technology to perform at a level that makes economic sense) was such that it makes sense to move it, in a sense, from the lab to production, in a business-as-usual context."
Going forward, Axiom plans to keep testing other AI tools in partnership with leading practitioners in the field. A key benefit Axiom brings to the equation is an enormous collection of contractual data that can be used to train the AI technology.
"We analyze over 10 million pieces of contractual information every year," Carr said. "We have a very powerful data set that we plan to use to train AI technology. What we will certainly do is train and improve that technology with our training data."
The training that is performed using Axiom's data will remain proprietary, and Carr believes that will add greater value for Axiom's customers in the use of these AI tools.
The roadmap for Axiom's research has two tracks, Carr said. One is to explore how to go deeper and further into the M&A offering it's launching this month, in order to train AI tools to do even more of the work. The second is to consider the other use cases to focus on next.
One use case under consideration involves regulatory remediation for banks. Another would assist pharmaceutical companies in the negotiation and execution of clinical trial agreements.
Carr came to Axiom in 2008 from American Express, where he had run its International Insurance Services division and was its global head of strategy. He started his career working on systems integration design. He believes that technological integration takes much longer to achieve than technological innovation.
"You need to put in place the surrounding capabilities that allow you to take advantage of that technology and, not immaterially, you need to go through the process of change management and behavioral change," he said. "In the legal industry, that's a big deal. There's a lot that has to happen for technical innovations to be consumed."
Driving that adoption curve is the heart of Axioms business, Carr suggests. The best way to do that, the company believes, is to combine people, process and technology in ways that allow the value of the technology to be realized. That is what Axiom now plans to do for AI.
"AI today is like the internet in the late '90s," Carr said. "I have no doubt that in a couple of decades, AI will be embedded in everything that impacts corporate America. But how it unfolds and takes shape is the stage we're in now."
Robert Ambrogi is a Massachusetts lawyer and journalist who has been covering legal technology and the web for more than 20 years, primarily through his blog LawSites.com. Former editor-in-chief of several legal newspapers, he is a fellow of the College of Law Practice Management and an inaugural Fastcase 50 honoree. He can be reached by email at ambrogi@gmail.com, and you can follow him on Twitter (@BobAmbrogi).
Link:
Legal Services Innovator Axiom Unveils Its First AI Offering - Above the Law
SK Telecom launches portable version of AI speaker – ZDNet
SK Telecom has launched an outdoor version of its AI speaker in South Korea.
The NUGU Mini is a portable version of NUGU, the mobile carrier's AI speaker for the home that has sold over 150,000 units since launching last year.
NUGU, the name of both the speaker and the AI platform, uses a cloud-based deep-learning framework that increases its speech recognition accuracy over time.
The mobile carrier said over 130 million conversations have been stored in its cloud server. Voice recognition rates for both children's speech and regional Korean dialects have significantly improved since launch, it said.
NUGU Mini weighs 219 grams -- almost a fifth of NUGU's 1,030 grams -- and can connect to external speakers and a battery that lasts over four hours.
Users can turn on Melon, Korea's number one music streamer, use smart home features offered by SK Telecom, and control their set-top boxes.
They can also order food, manage schedules, check weather and traffic information, enquire about currency exchange rates, and register to visit local bank branches.
NUGU Mini will cost 99,000 won ($87) but SK Telecom will offer it for 49,900 won for three months when sales begin August 11.
AI speakers and speech recognition have been a big trend in the Korean tech scene this year. Kakao, which owns the country's biggest chat app KakaoTalk, launched the Kakao Mini, its own AI speaker, and is also cooperating with compatriot car maker Hyundai for in-car speech recognition.
South Korea's largest search giant Naver and its chat app subsidiary Line also partnered with Qualcomm to make AI-based Internet of Things devices.
Demystifying AI: Understanding the human-machine relationship – MarTech Today
The artificial intelligence of today has almost nothing in common with the AI of science fiction. In Star Wars, Star Trek and Battlestar Galactica, we're introduced to robots who behave like we do: they are aware of their surroundings, understand the context of those surroundings and can move around and interact with people just as I can with you. These characters and scenarios are postulated by writers and filmmakers as entertainment, and while one day humanity will inevitably develop an AI like this, it won't happen in the lifetime of anyone reading this article.
Because we can rapidly feed vast amounts of data to them, machines appear to be learning and mimicking us, but in fact they are still at the mercy of the algorithms we provide. The way for us to think of modern artificial intelligence is to understand two concepts: the data a system ingests, and the rules it is given for acting on that data.
To illustrate this in grossly simplified terms, imagine a computer system in an autonomous car. Data comes from cameras placed around the vehicle, from road signs, from pictures that can be identified as hazards and so on. Rules are then written for the computer system to learn about all the data points and make calculations based on the rules of the road. The successful result is the vehicle driving from point A to B without making mistakes (hopefully).
The important thing to understand is that these systems don't think like you and me. People are ridiculously good at pattern recognition, even to the point where we force ourselves to see patterns when there are none. We use this skill to ingest less information and make quick decisions about what to do.
Computers have no such luxury; they have to ingest everything, and if you'll forgive the pun, they can't think outside the box. If a modern AI were to be programmed to understand a room (or any other volume), it would have to measure all of it.
Think of the little Roomba robot that can automatically vacuum your house. It runs around randomly until it hits every part of your room. An AI would do this (very fast) and then would know how big the room is. A person could just open the door, glance at the room and say, based on prior experience, "Oh, it's about 20 ft. long and 12 ft. wide." They'd be wrong, but it would be close enough.
Over the past two decades, we've delved into data science and developed vast analytical capabilities. Data is put into systems; people look at it, manipulate it, identify trends and make decisions based on it.
Broadly speaking, any job like this can be automated. Computer systems are programmed with machine learning algorithms and continuously learn to look at more data more quickly than any human would be able to. Any rule or pattern that a person is looking for, a computer can be programmed to understand, and it will be more effective than a person at executing.
We see examples of this while running digital advertising campaigns. Before, a person would log into a system, choose which data provider to use, choose which segments to run (auto intenders, fashionistas, moms and so on), run the campaign, and then check in on it periodically to optimize.
Now, all the data is available to an AI: the computer system decides how to run the campaign based on given goals (CTR, CPA, site visits and so on) and tells you during and after the campaign about the decisions it made and why. Put this AI up against the best human opponent, and the computer should win unless a new and hitherto unknown variable is introduced or required data is unavailable.
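A minimal sketch of such a goal-driven decision loop, assuming made-up audience segments and click-through rates; this is a classic epsilon-greedy bandit, not any particular ad platform's algorithm:

```python
import random

# Toy campaign optimizer: allocate ad impressions across audience segments
# to maximize CTR. Segment names and "true" click rates are invented.
# Mostly exploit the best-observed segment, occasionally explore others.

TRUE_CTR = {"auto_intenders": 0.020, "fashionistas": 0.035, "moms": 0.025}

def run_campaign(impressions=20000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    clicks = {s: 0 for s in TRUE_CTR}
    shown = {s: 0 for s in TRUE_CTR}
    for _ in range(impressions):
        if rng.random() < epsilon or not any(shown.values()):
            segment = rng.choice(list(TRUE_CTR))  # explore a random segment
        else:
            # exploit: the segment with the best observed CTR so far
            segment = max(shown, key=lambda s: clicks[s] / max(shown[s], 1))
        shown[segment] += 1
        if rng.random() < TRUE_CTR[segment]:  # simulate a user click
            clicks[segment] += 1
    observed = {s: clicks[s] / max(shown[s], 1) for s in TRUE_CTR}
    return shown, observed

shown, observed = run_campaign()
```

The loop also illustrates the article's caveat: the optimizer only sees the numbers it is given, so it will happily keep "optimizing" toward a goal even when an outside event has made the goal meaningless.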
There are still lots of things computers cannot do for us. For example, look at the United Airlines fiasco last April, when a man was dragged out of his seat after the flight was overbooked. United's tagline is "Fly the friendly skies." The incident was anything but friendly, and any current ad campaign touting that line would be met with derision.
To a human, the negative sentiment is obvious. The ad campaign would be pulled and a different strategy would be attempted; in this case, a major PR push. But a computer would just notice that the ads aren't performing as they once were and would continue to look for ways to optimize the campaign. It might even notice lots of interactions when "Fly the friendly skies" ads are placed next to images of a person being brutally pulled off the plane, and place more ads there!
The way that artificial intelligence will affect us as consumers is more subtle than we think. We're unlikely to have a relationship with Siri or Alexa (see the movie Her), and although self-driving cars will become real in our lifetime, it's unlikely that traffic will improve dramatically, since not everyone will use them, and ride-sharing or service-oriented vehicles will still fill our roads, contributing to traffic.
The difference will be that cars, roads and signals may all be connected with an AI running the system based on our rules. We could expect the same amount of traffic, but the flow of traffic will be much better because AI will follow the rules, meaning no slow drivers in the fast lane! And we can do whatever we want while stuck in traffic rather than being wedded to the steering wheel.
Artificial intelligence, machine learning and self-aware systems are real. They will affect us and the way we do our jobs. All of us have opportunities in our current work to embrace these new tools and effect change in our lives that will make us more efficient.
While these systems may not be R2-D2, they are still revolutionary. If you invest in and take advantage of what AI can do for your business, good things are likely to happen to you. And if you don't, you'll still discover that the revolution is real, but you might not be on the right side of history.
Some opinions expressed in this article may be those of a guest author and not necessarily MarTech Today.
Go here to read the rest:
Demystifying AI: Understanding the human-machine relationship - MarTech Today