The Prometheus League
Breaking News and Updates
Daily Archives: August 9, 2017
Jeff Bridges wants Tron 3 to be a virtual reality movie – EW.com
Posted: August 9, 2017 at 5:13 am
The upcoming firefighter movie Only the Brave finds the great Jeff Bridges reteaming with his Tron: Legacy director, Joseph Kosinski. So, when your writer chatted with the actor Tuesday morning for the EW radio show Entertainment Weirdly at Sirius XM's New York studios, I had to ask about the ongoing rumors that we might one day see a third Tron film.
"Yeah, yeah, I've heard those rumors too," Bridges replied. "I hope that happens. I think Joe's got the script and everything, you know. Yeah, I don't know that I'm supposed to talk about it or not. I don't know. It should be the first virtual reality movie, you know? Wouldn't that be cool, to see Tron in that world?"
It would indeed, Dude. It would indeed.
Bridges was in town to attend the premiere of his new film, the comedy-drama The Only Living Boy in New York, which costars Callum Turner, Kate Beckinsale, Pierce Brosnan, Cynthia Nixon, and Kiersey Clemons. You can hear the full interview with Bridges next Monday, Aug. 15, at 1 p.m. ET on Entertainment Weirdly on EW Radio. And you can watch the trailers for both The Only Living Boy in New York and Only the Brave, below.
The Only Living Boy in New York will be released Aug. 11, while Only the Brave arrives Oct. 20.
Read more from the original source:
Jeff Bridges wants Tron 3 to be a virtual reality movie - EW.com
Posted in Virtual Reality
Take a Virtual Reality Ride Along in a Shelby GT350 – The Drive
Posted: at 5:13 am
Watching a 2017 Ford Mustang Shelby GT350 doing what it was made to do is already a treat to the eyes and ears, but Future Motoring just released a video that makes it a whole new kind of experience. The guys over there mounted a 360-degree camera to the back of a slightly modified Shelby to take us for a virtual reality ride.
The video is set in a rural area on country roads where the GT350 shines. This is a great demonstration of the straight-line performance this Shelby is capable of (not that that's the only thing it's good at). Granted, the setting doesn't show us much more than road, trees, and sky, but it's still a cool thing to watch. It's especially cool if you have a VR headset. If you don't, you can still drag the view around on YouTube.
As for the car itself, it's no ordinary GT350. This mean blue Mustang has been equipped with a Ford Performance intake and exhaust, making the 5.2-liter flat-plane-crank Voodoo V-8 under the hood breathe better and sound even more amazing than it does in stock form.
This isn't the first time we've gotten a Mustang VR experience. Back in February, Ford Performance released a video called ReRendezvous, which was a 360-degree virtual reality ride through Paris from the point of view of a 2016 Mustang. This GT350 video is a bit lower budget, but it gives us a much more satisfying sound.
Read more from the original source:
Take a Virtual Reality Ride Along in a Shelby GT350 - The Drive
Posted in Virtual Reality
The AFI FEST Interview: Wevr’s James Kaelan on Virtual Reality Storytelling – American Film Magazine (blog)
Posted: at 5:13 am
Each year, AFI FEST presented by Audi highlights cutting-edge virtual reality (VR) storytelling with the State of the Art Technology Showcase. AFI spoke with James Kaelan, current Director of Development + Acquisitions at VR creative studio and production company Wevr, about his work in VR and the future of the medium. Formerly Creative Director at Seed&Spark, Kaelan brought his immersive short-film horror experience THE VISITOR to AFI FEST last year for the Showcase.
AFI: What got you interested in creating VR work in the first place?
JK: I'm as surprised as anyone to find myself working in VR. I've always considered myself something of a Luddite, skeptical, generally, of the advance of technology. But back at the end of 2014, Anthony Batt, who's a co-founder of Wevr, was advising at Seed&Spark (which I helped co-found), and invited our team to visit their offices and watch some of the preliminary 360 video and CGI work they were producing. I remember sitting in the conference room and putting on the prototype of the Samsung Gear VR, and being immediately shocked by the potential of the technology. This wasn't some shiny new feature grafted onto cinema like 3D or a rumble pack in your theater chair. This was a new medium, requiring a brand new language.
AFI: What misconceptions do you think are out there among audiences when they first encounter VR work?
JK: I think audiences, rightfully, expect a lot from the medium. Most people who've had any direct contact with the very broad array of experiences that we broadly group together as VR have still only seen monoscopic 360 video, either on a Google Cardboard or a Gear. And with such work, after you've gotten over the initial thrill of discovering that you can look around, essentially, the inside of a sphere, your expectations accelerate. Two years ago we were still at the Lumière brothers stage of VR. Workers leaving a factory? Awesome. Train pulling into a station? Super awesome. But unlike with cinema in its early years, the audience for VR has extremely high expectations about narrative complexity and image fidelity gleaned from the last 130 years of film. They won't tolerate inferior quality for very long. So those of us on the creative and technical side of the medium have to find a way to meet those expectations. Some creators, in a rush to find a viable language in VR, have resorted to jamming it into the paradigm of framed storytelling, force-mediating the viewer's perspective through edits, and teaching the audience to remain passive. And I don't want to dismiss those techniques out of hand. But I think it's our job to actually forget the rules we apply to other media, and continue striving to invent a brand new way of telling stories. When we begin to master that new language, audiences will come in droves.
AFI: What's the biggest challenge documentary filmmakers encounter when creating something for the VR space?
JK: I would actually say that documentary filmmakers are better equipped, naturally, to transition into VR, or at least the 360 video element of it. And I say this because, without painting nonfiction storytellers with too broad a brush (and without sinking into the mire of the objectivity versus subjectivity debate), documentary filmmakers engage with existing subjects, rather than inventing new ones from scratch. Certainly when you look to the vérité side of documentary film, where the goal is observation rather than participation or investigation, 360 should feel quite natural to those artists, because it's actually closer (I say with great trepidation) to a purer strain of objectivity: because you've gotten rid of the frame. You've chosen where to place the camera and when, but you're capturing the entirety of the environment simultaneously. Fiction filmmakers are probably less likely to encounter or invent story-worlds that unfold in both halves of the sphere simultaneously. All of that is to say, I literally wish I'd spent more time making long-take docs before moving into VR!
AFI: What types of artists are you looking to work with at Wevr?
JK: Wevr is in this unique place where we've made a name for ourselves making some of the most phenomenal, intricate, interactive, CG, room-scale VR, like theBlu and Gnomes & Goblins, while simultaneously making, and being recognized on the international film festival circuit for, 360 monoscopic video work that has cost less than $10,000 to produce. So I don't want to pigeonhole Wevr. We make simulations with Jon Favreau on one end, and on the other, we work with college students who are interning with us during the summer. What unites those two groups is that both maximize, or exceed, what's possible within the constraints of their given budgets. Within reason, you give any artist enough time and money and she'll make something incredible. More impressive, and more attractive to us, is the artist who can innovate in times of scarcity and abundance. At this moment in the history of VR, if you can tell stories dynamically without having to hire a team of engineers to execute your vision, you'll get more work done. You'll actually get to practice your craft. Later you can have a team of 100, and a budget of a million times that.
AFI: What's a common mistake you see new artists making when they first start creating work for the VR space?
JK: Artists working in VR try to replicate what's already familiar to them. And ironically, it's the filmmakers who have the toughest time transitioning, myself included. We miss the frame. We miss the authorial hand that mediates perspective and attention. We miss the freedom to juxtapose through editing. And because we miss those things, our first inclination is to figure out how to port them into VR. The best, and least possible, approach is to forget everything you know, like Pierre Menard trying to write the Quixote. Whereas artists from theater, from the gallery and museum installation world, come to VR almost naturally. They think about physical navigation and multi-sensory experience. They think about how things feel to the touch. They think about how things smell. They think about how the viewer moves, most importantly. That's an invaluable perspective to have at this still-early stage in VR.
AFI: What was your experience like showcasing VR work at AFI FEST?
JK: For me and for my collaborators on the project, Blessing Yen and Eve Cohen, showing THE VISITOR at AFI FEST last year was an honor. In order to earn a living while being a filmmaker, I've done a lot of different jobs. In the beginning I bussed tables. Later I got to write about film for a living. Now I get to create, and help others create, VR. But during that entire time, from clearing dishes at Mohawk Bend in Echo Park six years ago to working at Wevr now, AFI FEST has been the same: a free festival, stocked with the most discerning slate of films (and now VR) from around the world. And I've gone every year since I've lived in LA. So, it meant a lot to me to be included last year. On top of that, the presentation of the VR experiences themselves, spread around multiple dedicated spaces that never felt oppressively crowded or loud, made AFI one of my favorite stops on the circuit last year.
Interactive and virtual reality entries for AFI FEST 2017 presented by Audi are now being accepted for the State of the Art Technology Showcase, which highlights one-of-a-kind projects and events at the intersection of technology, cinema and innovation. The deadline to submit your projects is August 31, 2017. Submit today at AFI.com/AFIFEST or Withoutabox.com.
Read more:
Posted in Virtual Reality
Teenage team develops AI system to screen for diabetic retinopathy – MobiHealthNews
Posted: at 5:13 am
Kavya Kopparapu might be considered something of a whiz kid. After all, she had yet to enter her senior year of high school when she started Eyeagnosis, a smartphone app and 3D-printed lens that allows patients to be screened for diabetic retinopathy with a quick photo, avoiding the time and expense of a typical diagnostic procedure.
In June 2016, Kopparapu's grandfather had recently been diagnosed with diabetic retinopathy, a complication of diabetes that damages retinal blood vessels and can eventually cause blindness. He caught the symptoms in time to receive treatment, but it was close. A little too close for Kopparapu's comfort.
According to IEEE Spectrum, Kopparapu, her 15-year-old brother Neeyanth and her classmate Justin Zhang trained an artificial intelligence system to scan photos of eyes and detect, and diagnose, signs of diabetic retinopathy. She unveiled the technology at the O'Reilly Artificial Intelligence conference in New York City in July.
After diving into internet-based research and emailing ophthalmologists, biochemists, epidemiologists, neuroscientists and the like, she and her team built the diagnostic AI using a machine-learning architecture called a convolutional neural network. CNNs, as they're called, parse through vast data sets -- like photos -- to look for patterns of similarity, and to date have shown an aptitude for classifying images. The network itself was the ResNet-50, developed by Microsoft. But to train it to make retinal diagnoses, Kopparapu had to feed it images from the National Institutes of Health's EyeGene database, which essentially taught the architecture how to spot signs of retinal degeneration.
One hospital has already tested the technology, fitting a 3D-printed lens onto a smartphone and training the phone's flash to illuminate the retinas of five different patients. Tested against ophthalmologists, the system went five for five on diagnoses.
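The core operation a CNN layer performs is a convolution: sliding a small learned filter over an image and responding strongly wherever the local pixels match the filter's pattern. As a minimal illustration of that mechanic only (this is not the Eyeagnosis or ResNet-50 code, and the filter here is hand-written rather than learned), a sketch in NumPy:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-built vertical-edge detector; in a real CNN these weights are learned.
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])

# Toy "image": left half bright, right half dark, so there is a vertical edge.
image = np.zeros((4, 4))
image[:, :2] = 1.0

response = conv2d(image, kernel)
print(response)  # strongest response in the middle column, where the edge sits
```

A network like ResNet-50 stacks millions of such filters in dozens of layers and learns their weights from labeled examples, which is why the team needed a large annotated image database to train on.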
Kopparapu's invention still needs lots of tests and additional data to prove its efficacy before it sees widespread clinical adoption, but so far, it's off to a pretty good start. Eyeagnosis is operating in a space that's recently become interesting to some very large companies. Last fall, a team of Google researchers published a paper in the Journal of the American Medical Association showing that Google's deep learning algorithm, trained on a large data set of fundus images, can detect diabetic retinopathy with better than 90 percent accuracy. That algorithm was then tested on 9,963 deidentified images retrospectively obtained from EyePACS in the United States, as well as three eye hospitals in India. A second, publicly available research data set of 1,748 images was also used. The accuracy was determined by comparing its diagnoses to those made by a panel of at least seven U.S. board-certified ophthalmologists. The two data sets had 97.5 percent and 96.1 percent sensitivity, and 93.4 percent and 93.9 percent specificity, respectively.
And Google isn't the only player in that space. IBM has a technology utilizing a mix of deep learning, convolutional neural networks and visual analytics technology based on 35,000 images accessed via EyePACS; in research conducted earlier this year, the technology learned to identify lesions and other markers of damage to the retina's blood vessels, collectively assessing the presence and severity of disease. In just 20 seconds, the method was successful in classifying diabetic retinopathy severity with 86 percent accuracy, suggesting doctors and clinicians could use the technology to get a better idea of how the disease progresses as well as identify effective treatment methods.
Lower-tech options are also taking a stab at improving access to screenings. Using a mix of in-office visits, telemedicine and web-based screening software, the Los Angeles Department of Health Services has been able to greatly expand the number of patients in its safety net hospital who got screenings and referrals. In an article published in the journal JAMA Internal Medicine, researchers describe how the two-year collaboration using Safety Net Connect's eConsult platform resulted in more screenings, shorter wait times and fewer in-person specialty care visits. By deploying Safety Net Connect's eConsult system to a group of 21,222 patients, the wait times for screens decreased by almost 90 percent, and overall screening rates for diabetic retinopathy increased 16 percent. The digital program also eliminated the need for 14,000 visits to specialty care professionals.
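The sensitivity and specificity figures quoted from the Google study are standard confusion-matrix arithmetic. A worked example with invented counts (these are NOT the study's actual numbers, just an illustration of the definitions):

```python
# Hypothetical screening results for 200 eyes, chosen for illustration only.
tp, fn = 96, 4    # diseased eyes: correctly flagged vs. missed
tn, fp = 93, 7    # healthy eyes: correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)  # fraction of diseased eyes the screen catches
specificity = tn / (tn + fp)  # fraction of healthy eyes it correctly clears

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}")
# -> sensitivity=96.0%, specificity=93.0%
```

High sensitivity matters most for a screening tool (missing a diseased eye is costly), while specificity controls how many healthy patients get sent for unnecessary follow-up.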
More:
Teenage team develops AI system to screen for diabetic retinopathy - MobiHealthNews
Posted in Ai
Advancing AI by Understanding How AI Systems and Humans Interact – Windows IT Pro
Posted: at 5:13 am
Artificial intelligence as a technology is rapidly growing, but much is still being learned about how AI and autonomous systems make decisions based on the information they collect and process.
To better explain those relationships so humans and autonomous systems can better understand each other and collaborate more deeply, researchers at PARC, the Palo Alto Research Center, have been awarded a multi-million dollar federal government contract to create an "interactive sense-making system" that could answer many related questions.
The research for the proposed system, called COGLE (Common Ground Learning and Explanation), is being funded by the Defense Advanced Research Projects Agency (DARPA), using an autonomous Unmanned Aircraft System (UAS) test bed, but would later be applicable to a variety of autonomous systems.
The idea is that since autonomous systems are becoming more widely used, it would behoove humans who are using them to understand how the systems behave based on the information they are provided, Mark Stefik, a PARC research fellow who runs the lab's human machine collaboration research group, told ITPro.
"Machine learning is becoming increasingly important," said Stefik. "As a consequence, if we are building systems that are autonomous, we'd like to know what decisions they will make. There is no established technique to do that today with systems that learn for themselves."
In the field of human psychology, there is an established history about how people form assumptions about things based on their experiences, but since machines aren't human, their behaviors can vary, sometimes with results that can be harmful to humans, said Stefik.
In one moment, an autonomous machine can do something smart or helpful, but then the next moment it can do something that is "completely wonky, which makes things unpredictable," he said. For example, a GPS system seeking the shortest distance between two points could erroneously and catastrophically send a user driving over a cliff or the wrong way onto a one-way street. Being able to delve into those autonomous "thinking" processes to understand them is the key to this research, said Stefik.
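The GPS failure mode Stefik describes is easy to reproduce: an unconstrained shortest-path search will happily route through an edge no human would take, and nothing in its output says why. A minimal sketch (toy graph and road names invented for illustration):

```python
import heapq

def shortest_path(graph, start, goal):
    """Plain Dijkstra: minimizes distance and knows nothing about safety."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        dist, node, path = heapq.heappop(pq)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, cost in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (dist + cost, nbr, path + [nbr]))
    return float("inf"), []

# Toy map: the "cliff road" is the shortest route but should never be driven.
graph = {
    "home":       [("cliff_road", 1.0), ("main_st", 3.0)],
    "cliff_road": [("office", 1.0)],
    "main_st":    [("office", 3.0)],
}

dist, path = shortest_path(graph, "home", "office")
print(dist, path)  # picks the cliff road: 2.0 ['home', 'cliff_road', 'office']
```

Numerically the answer is optimal; in context it is disastrous. Surfacing the reasoning behind such choices, so a human can spot the missing constraint, is exactly the gap explainability research like COGLE aims to close.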
The COGLE research will help researchers pursue answers to these issues, he said. "We're insisting that the program be explainable," for the autonomous systems to say why they are doing what they are doing. "Machine learning so far has not really been designed to explain what it is doing."
The researchers involved with the project will essentially act as educators and teachers for the machine learning processes, to improve their operations and make them more usable and even more human-like, said Stefik. "It's a sort of partnership where humans and machines can learn from each other."
That can be accomplished in three ways, he added, including reinforcement at the bottom level, using reasoning patterns like the ones humans use at the cognitive or middle level, and through explanation at the top sense-making level. The research aims to enable people to test, understand, and gain trust in AI systems as they continue to be integrated into our lives in more ways.
The research project is being conducted under DARPA's Explainable Artificial Intelligence (XAI) program, which seeks to create a suite of machine learning techniques that produce explainable models and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.
PARC, which is a Xerox company, is conducting the COGLE work with researchers at Carnegie Mellon University, West Point, the University of Michigan, the University of Edinburgh and the Florida Institute for Human & Machine Cognition. The key idea behind COGLE is to establish common ground between concepts and abstractions used by humans and the capabilities learned by a machine. These learned representations would then be exposed to humans using COGLE's rich sense-making interface, enabling people to understand and predict the behavior of an autonomous system.
Go here to read the rest:
Advancing AI by Understanding How AI Systems and Humans Interact - Windows IT Pro
Posted in Ai
Salesforce AI helps brands track images on social media | TechCrunch – TechCrunch
Posted: at 5:13 am
Brands have long been able to search for company mentions on social media, but they've lacked the ability to search for pictures of their logos or products in an easy way. That's where Salesforce's latest Einstein artificial intelligence feature comes into play.
Today the company introduced Einstein Vision for Social Studio, which provides a way for marketers to search for pictures related to their brands on social media in the same way they search for other mentions. The product takes advantage of a couple of Einstein artificial intelligence algorithms, including Einstein Image Classification for image recognition. It uses visual search, brand detection and product identification. It also makes use of Einstein Object Detection to recognize objects within images, including the type and quantity of object.
AI has gotten quite good at perception and cognition tasks in recent years. One result of this has been the ability to train an algorithm to recognize a picture. With cheap compute power widely available and loads of pictures being uploaded online, it provides a perfect technology combination for better image recognition.
Rob Begg, VP of product marketing for social and advertising products at Salesforce, says it's about letting the machine loose on tasks for which it's better suited. "If you think of it from a company point of view, there is a huge volume of tweets and [social] posts. What AI does best is help surface and source the ones that are relevant," he says.
As an example, he says there could be thousands of posts about cars, but only a handful of those would be relevant to your campaign. AI can help find those much more easily.
Begg sees three possible use cases for this tool. First, it could provide better insight into how people are using your products. Second, it could provide a way to track brand displays hidden within pictures online, and finally, it could let you find out when influencers such as actors or athletes are using your products.
The product comes trained to recognize two million logos, 60 scenes (such as an airport), 200 foods and 1,000 objects. That should be enough to get many companies started. Customizing isn't available in the first release, so if you have a logo or object not included out of the box, you will need to wait for a later version to be able to customize the content.
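Salesforce hasn't published the API shape here, but the workflow the article describes, running a logo detector over post images and keeping only the relevant posts, can be sketched generically. Everything below (function names, the stand-in detector, the sample posts) is invented for illustration and is not the Einstein Vision API:

```python
# Hypothetical sketch of image-based brand monitoring. The detector is a
# stand-in: a real pipeline would call a trained image-recognition model.

def detect_logos(image):
    """Stand-in for an image-recognition call; here it just reads a tag
    planted in the sample data instead of analyzing pixels."""
    return image.get("logos", [])

def filter_brand_posts(posts, brand):
    """Keep only posts whose attached image contains the brand's logo,
    even when the text never mentions the brand."""
    return [post for post in posts if brand in detect_logos(post["image"])]

posts = [
    {"text": "great morning", "image": {"logos": ["AcmeCola"]}},
    {"text": "race day!",     "image": {"logos": ["SpeedyTires"]}},
    {"text": "lunch",         "image": {"logos": []}},
]

for hit in filter_brand_posts(posts, "AcmeCola"):
    print(hit["text"])  # surfaces the post with the logo but no text mention
```

The point of the design is in the first sample post: a text search for "AcmeCola" would miss it entirely, while image recognition surfaces it.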
Begg says it should be fairly easy for marketers used to using Social Studio to figure out how to incorporate the visual recognition tools into their repertoire. The new functionality should be available immediately to Salesforce Social Studio users.
Follow this link:
Salesforce AI helps brands track images on social media | TechCrunch - TechCrunch
Posted in Ai
True AI cannot be developed until the ‘brain code’ has been cracked: Starmind – ZDNet
Posted: at 5:13 am
Marc Vontobel (CTO) and Pascal Kaufmann (CEO), Starmind
Artificial intelligence is stuck today because companies are likening the human brain to a computer, according to Swiss neuroscientist and co-founder of Starmind Pascal Kaufmann. However, the brain does not process information, retrieve knowledge, or store memories like a computer does.
When companies claim to be using AI to power "the next generation" of their products, what they are unknowingly referring to is the intersection of big data, analytics, and automation, Kaufmann told ZDNet.
"Today, so-called AI is often just the human intelligence of programmers condensed into source code," said Kaufmann, who previously worked on cyborgs at DARPA.
"We shouldn't need 300 million pictures of cats to be able to say whether something is a cat, cow, or dog. Intelligence is not related to big data; it's related to small data. If you can look at a cat, extract the principles of a cat like children do, then forever understand what a cat is, that's intelligence."
He even said that it's not "true AI" that led to AlphaGo -- a creation of Google subsidiary DeepMind -- mastering what is revered as the world's most demanding strategy game, Go.
The technology behind AlphaGo was able to look at 10 to 20 potential future moves and lay out the highest statistics for success, Kaufmann said, and so the test was one of rule-based strategy rather than artificial intelligence.
The ability for a machine to strategise outside the context of a rule-based game would reflect true AI, according to Kaufmann, who believes that AI will cheat without being programmed not to do so.
Additionally, the ability to automate human behaviour or labour is not necessarily a reflection of machines getting smarter, Kaufmann insisted.
"Take a pump, for example. Instead of collecting water from the river, you can just use a pump. But that is not artificial intelligence; it is the automation of manual work ... Human-level AI would be able to apply insights to new situations," Kaufmann added.
While Facebook's plans to build a brain-computer interface and Elon Musk's plans to merge the human brain with AI have left people wondering how close we are to developing true AI, Kaufmann believes the "brain code" needs to be cracked before we can really advance the field. He said this can only be achieved through neuroscientific research.
Earlier this year, founder of DeepMind Demis Hassabis communicated a similar sentiment in a paper, saying the fields of AI and neuroscience need to be reconnected, and that it's only by understanding natural intelligence that we can develop the artificial kind.
"Many companies are investing their resources in building faster computers ... we need to focus more on [figuring out] the principles of the brain, understand how it works ... rather than just copy/paste information," Kaufmann said.
Kaufmann admitted he doesn't have all the answers, but finds it "interesting" that high-profile entrepreneurs such as Musk and Mark Zuckerberg, neither of whom has an AI or neuroscience background, have such strong and opposing views on AI.
Musk and Zuckerberg slung mud at each other in July, with the former warning of "evil AI" destroying humankind if not properly monitored and regulated, while the latter spoke optimistically about AI contributing to the greater good, such as diagnosing diseases before they become fatal.
"One is an AI alarmist and the other makes AI look charming ... AI, like any other technology, can be used for good or used for bad," said Kaufmann, who believes AI needs to be assessed objectively.
In the interim, Kaufmann believes systems need to be designed so that humans and machines can work together, not against each other. For example, Kaufmann envisions a future where humans wear smart lenses -- comparable to the Google Glass -- that act as "the third half of the brain" and pull up relevant information based on conversations they are having.
"Humans don't need to learn stuff like which Roman killed the other Roman ... humans just need to be able to ask the right questions," he said.
"The key difference between human and machine is the ability to ask questions. Machines are more for solutions."
Kaufmann admitted, however, that humans don't know how to ask the right questions a lot of the time, because we are taught to remember facts in school, and those who remember the most facts are the ones who receive the best grades.
He believes humans need to be educated to ask the right questions, adding that the question is 50 percent of the solution. The right questions will not only allow humans to understand the principles of the brain and develop true AI, but will also keep us relevant even when AI systems proliferate, according to Kaufmann.
If we want to slow down job loss, AI systems need to be designed so that humans are at the centre of it, Kaufmann said.
"While many companies want to fully automate human work, we at Starmind want to build a symbiosis between humans and machines. We want to enhance human intelligence. If humans don't embrace the latest technology, they will become irrelevant," he added.
The company claims its self-learning system autonomously connects and maps the internal know-how of large groups of people, allowing employees to tap into their organisation's knowledge base or "corporate brain" when they have queries.
Starmind platform
Starmind is integrated into existing communication channels -- such as Skype for Business or a corporate browser -- eliminating the need to change employee behaviour, Kaufmann said.
Questions typed in the question window are answered instantly if an expert's answer is already stored in Starmind, and new questions are automatically routed to the right expert within the organisation, based on skills, availability patterns, and willingness to share know-how. All answers enhance the corporate knowledge base.
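Starmind's internals aren't public, but the routing behavior described above (answer instantly from the knowledge base when possible, otherwise route to the best-matching expert by skills, availability and willingness) can be sketched. All names, weights and the crude topic extraction below are invented assumptions, not Starmind's actual algorithm:

```python
def route_question(question, knowledge_base, experts):
    """Return a stored answer if one exists, else the best-matching expert."""
    if question in knowledge_base:
        return ("stored_answer", knowledge_base[question])
    topic = question.split()[0].lower()  # crude topic extraction for the sketch
    def score(expert):
        skill = 1.0 if topic in expert["skills"] else 0.0
        # Invented weights: skill match dominates, then availability,
        # then willingness to share know-how.
        return skill + 0.5 * expert["available"] + 0.25 * expert["willing"]
    best = max(experts, key=score)
    return ("routed_to", best["name"])

kb = {"vpn setup?": "Use the corporate VPN profile from the IT portal."}
experts = [
    {"name": "Ana", "skills": {"python", "vpn"}, "available": 1, "willing": 1},
    {"name": "Ben", "skills": {"sales"},         "available": 1, "willing": 0},
]

print(route_question("vpn setup?", kb, experts))       # answered from the base
print(route_question("python question", kb, experts))  # routed to an expert
```

Each answered question would then be written back into the knowledge base, which is how the "corporate brain" the article describes grows over time.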
"Our vision is if you connect thousands of human brains in a smart way, you can outsmart any machine," Kaufmann said.
On how this differs from asking a search engine a question, Kaufmann said Google is basically "a big data machine" that mines answers to questions that have already been asked, but is not able to answer brand new questions.
"The future of Starmind is we actually anticipate questions before they're even asked because we know so much about the employee. For example, we can say if you are a new hire and you consume a certain piece of content, there will be a 90 percent probability that you will ask the following three questions within the next three minutes and so here are the solutions."
Starmind is currently used across more than 40 countries by organisations such as Accenture, Bayer, Nestlé, and Telefonica Deutschland.
While Kaufmann thinks it is important at this point in time to enhance human intelligence rather than replicate it artificially, he does believe AI will eventually substitute humans in the workplace. But unlike the grim picture painted by critics, he doesn't think it's a bad thing.
"Why do humans need to work at all? I look forward to all my leisure time. I do not need to work in order to feel like a human," Kaufmann said.
When asked how people would make money and sustain themselves, Kaufmann said society does not need to be ruled by money.
"In many science fiction scenarios, they do not have money. When you look at the ant colonies or other animals, they do not have cash," Kaufmann said.
Additionally, if humans had continuous access to intelligent machines, Kaufmann said "the acceleration of human development will pick up" and "it will give rise to new species".
"AI is the ultimate tool for human advancement," he firmly stated.
Link:
True AI cannot be developed until the 'brain code' has been cracked: Starmind - ZDNet
Posted in Ai
REVEALED: AI is turning RACIST as it learns from humans – Express.co.uk
Posted: at 5:13 am
In parts of the US, when a suspect is taken in for questioning they are given a computerised risk assessment which works out the likelihood of the person reoffending.
A judge can then use this data when delivering a verdict.
However, an investigation has revealed that the artificial intelligence behind the software exhibits racist tendencies.
Reporters from ProPublica obtained more than 7,000 test results from Florida in 2013 and 2014 and analysed the reoffending rate among the individuals.
The suspects are asked a total of 137 questions by the AI system Correctional Offender Management Profiling for Alternative Sanctions (Compas), including questions such as "Was one of your parents ever sent to jail or prison?" or "How many of your friends/acquaintances are taking drugs illegally?", with the computer generating its results at the end.
Overall, the AI system claimed black people (45 per cent) were almost twice as likely as white people (24 per cent) to reoffend.
In one example outlined by ProPublica, risk scores were provided for a black suspect and a white suspect, both of whom were facing drug possession charges.
The white suspect had prior offences of attempted burglary, while the black suspect's prior offence was resisting arrest.
Seemingly giving no indication as to why, the black suspect was given a higher chance of reoffending and the white suspect was considered low risk.
But, over the next two years, the black suspect stayed clear of illegal activity and the white suspect was arrested three more times for drug possession.
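The kind of disparity ProPublica measured can be made concrete with a toy calculation: compare the false positive rate — suspects labelled high-risk who did not go on to reoffend — across groups. The records below are made up for illustration; they are not the actual Florida data.

```python
# Minimal sketch of a disparate false-positive-rate check, in the spirit of
# the ProPublica analysis described above. All records are illustrative.

def false_positive_rate(records, group):
    """Among members of `group` who did NOT reoffend, the share that the
    system nevertheless flagged as high risk."""
    non_reoffenders = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(flagged) / len(non_reoffenders)

records = [
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": True,  "reoffended": False},
    {"group": "black", "high_risk": False, "reoffended": False},
    {"group": "black", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": True,  "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
    {"group": "white", "high_risk": False, "reoffended": False},
]

fpr_black = false_positive_rate(records, "black")  # 0.5
fpr_white = false_positive_rate(records, "white")  # 0.25
```

On this toy data the system wrongly flags non-reoffending black suspects at twice the rate of white suspects, which is the shape of the bias the investigation reported.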
However, researchers warn the problem does not lie with robots, but with the human race as AI uses machine learning algorithms to pick up on human traits.
Joanna Bryson, a researcher at the University of Bath, told the Guardian: "People expected AI to be unbiased; that's just wrong. If the underlying data reflects stereotypes, or if you train AI from human culture, you will find these things."
This is not an isolated incident either.
Microsoft's TayTweets AI chatbot, which was designed to learn from users, was unleashed on Twitter last year.
However, it almost instantly turned to anti-semitism and racism, tweeting: "Hitler did nothing wrong" and "Hitler was right I hate the Jews."
See the original post:
REVEALED: AI is turning RACIST as it learns from humans - Express.co.uk
Posted in Ai
DOE Backs AI for Clean Tech Investors – IEEE Spectrum
Posted: at 5:13 am
The U.S. Department of Energy wants to make investing in energy technology easier, less risky, and less expensive (for the government, at least).
A new initiative by the DOE's Office of Energy Efficiency & Renewable Energy (EERE) is looking for ideas on how to reduce barriers to private investment in energy technologies. Rho AI, one of 11 companies awarded a grant through the EERE's US $7.8 million program called Innovative Pathways, plans to use artificial intelligence and data science to efficiently connect investors to startups. By using natural language processing tools to sift through publicly available information, Rho AI will build an online network of potential investors and energy technology companies, sort of like a LinkedIn for the energy sector. The Rho AI team wants to develop a more extensive network than any individual could have on their own, and they're relying on artificial intelligence to make smarter connections faster than a human could.
"You're limited by the human networking capability when it comes to trying to connect technology and investment," says Josh Browne, co-founder and vice president of operations at Rho AI. "There's only so many hours in a day and there's only so many people in your network."
Using the US $750,000 it received from the DOE, Rho AI has just two years to build, test, and prove the efficacy of its system. The two-year timeline for demonstrating proof of concept is a stipulation of the grant. With this approach, the DOE hopes to streamline the underlying process for getting new energy technologies to market, instead of investing in particular companies.
"It's a fairly small grant, relative to some of the larger grants where they invest in the actual hard technology," Browne says. "In this case, they're investing in ways to unlock money to invest in hard technology."
Rho AI's database will not only contain information about energy technology companies and investor interests; it will also track where money is coming from and who it's going to in the industry. Browne imagines the interface will look something like a Bloomberg terminal.
To build the database, Rho AI will use Google TensorFlow and the Natural Language Toolkit, tools that can read and analyze human language, to scan public documents such as Securities and Exchange Commission filings and news articles on energy companies. The system will then use software tools that help analyze and visualize patterns in data, such as MUXviz and NetMiner, to understand how people and companies are connected.
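The core idea — turning public documents into a network of connected companies and investors — can be sketched with simple co-mention extraction. This is an illustrative toy, not Rho AI's pipeline: the entity list and documents are made up, and a real system would use trained NLP models (as the article notes, TensorFlow and NLTK) rather than plain string matching.

```python
# Hypothetical sketch: build a network by linking entities that are mentioned
# together in the same document. Entities and documents are invented examples.
from collections import defaultdict
from itertools import combinations

ENTITIES = {"Rho AI", "Acme Ventures", "SolarCo", "GridWorks"}

def build_network(documents):
    """Return an adjacency map: entity -> set of co-mentioned entities."""
    graph = defaultdict(set)
    for doc in documents:
        mentioned = {e for e in ENTITIES if e in doc}
        # Every pair co-mentioned in one document gets an edge.
        for a, b in combinations(sorted(mentioned), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

docs = [
    "Acme Ventures led a funding round in SolarCo this spring.",
    "GridWorks and SolarCo announced a joint storage pilot.",
]
net = build_network(docs)
# "SolarCo" now links to both "Acme Ventures" and "GridWorks".
```

Scaled up over SEC filings and news articles, the same adjacency structure is what graph tools like those named above would then analyze and visualize.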
In order to measure how well its helping investors and emerging clean technology companies find better business partners faster, Rho AI will compare the machine-built network with the real professional network of Carmichael Roberts, a leading venture capitalist in clean technology.
"This tool is intended to emulate and perhaps surpass the networking capability of a leading clean tech venture capitalist," Browne says. "It should be able to match their network, and it should be able to very rapidly be ten times their network."
Rho AI's program should create a longer, more comprehensive list of possible investments than Roberts can, within seconds. The intention is for the final product to be robust enough that members of the private sector could and would adopt it after one year.
"If Rho AI is able to be successful in what they're building, that will be in some sense self-scaling," says Johanna Wolfson, director of the tech-to-market program at the DOE.
In other words, Rho AI could grow on its own and the industry could start seeing the effects of these connections. Investors and clean energy technology companies could find each other directly, while reducing the burden on the government to invest so much in energy innovation.
"Improving the underlying pathway for getting new energy technology to market actually can be done for relatively small dollar amounts, relative to what the government sometimes supports, in ways that can be catalytic, but sustained by the private sector," said Wolfson.
Editor's note: This post was corrected on August 8 to reflect the specifications of the DOE's grant.
Go here to read the rest:
Posted in Ai
The Real Threat of Artificial Intelligence – The New York Times
Posted: at 5:11 am
Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs, mostly lower-paying jobs, but some higher-paying ones, too.
This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)
We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?
Part of the answer will involve educating or retraining people in tasks A.I. tools aren't good at. Artificial intelligence is poorly suited for jobs involving creativity, planning and cross-domain thinking, for example, the work of a trial lawyer. But these skills are typically required by high-paying jobs that may be hard to retrain displaced workers to do. More promising are lower-paying jobs involving the people skills that A.I. lacks: social workers, bartenders, concierges, professions requiring nuanced human interaction. But here, too, there is a problem: How many bartenders does a society really need?
The solution to the problem of mass unemployment, I suspect, will involve service jobs of love. These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future.
Other volunteer jobs may be higher-paying and professional, such as compassionate medical service providers who serve as the human interface for A.I. programs that diagnose cancer. In all cases, people will be able to choose to work fewer hours than they do now.
Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies.
As for what form that social welfare would take, I would argue for a conditional universal basic income: welfare offered to those who have a financial need, on the condition they either show an effort to receive training that would make them employable or commit to a certain number of hours of "service of love" voluntarism.
To fund this, tax rates will have to be high. The government will not only have to subsidize most people's lives and work; it will also have to compensate for the loss of individual tax revenue previously collected from employed individuals.
This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?
They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It's a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.
For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies, Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent, are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.
The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.
So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software, China or the United States, to essentially become that country's economic dependent, taking in welfare subsidies in exchange for letting the parent nation's A.I. companies continue to profit from the dependent country's users. Such economic arrangements would reshape today's geopolitical alliances.
One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.
Kai-Fu Lee is the chairman and chief executive of Sinovation Ventures, a venture capital firm, and the president of its Artificial Intelligence Institute.
A version of this op-ed appears in print on June 25, 2017, on Page SR4 of the New York edition with the headline: The Real Threat of Artificial Intelligence.
Visit link:
The Real Threat of Artificial Intelligence - The New York Times
Posted in Artificial Intelligence