Monthly Archives: June 2017

Study: This virtual reality simulation could reduce fear of death – TNW

Posted: June 7, 2017 at 5:17 pm

If you've ever played a virtual reality game, you're probably used to dying, at least digitally. But not like this.

Scientists are using VR headsets to create out-of-body experiences that may be able to reduce the fear of death, according to a recently published study. Mel Slater, one of the study's authors and a research professor at the University of Barcelona, explains:

My lab has been working for many years on the influence of changing someone's body in virtual reality on their attitudes, perceptions, behavior and cognition. For example, placing White people in a Black virtual body reduces their implicit racial bias, while putting adults into a child body changes their perceptions and self-identification.

Here we wanted to see what the effects were of establishing a strong feeling of ownership over a virtual body, and then moving people out of it, so simulating an out-of-body experience. According to the literature, out-of-body experiences are typically associated with changes of attitudes about death, so we wanted to see if this would happen with a virtual out-of-body experience.

The study, published in PLOS One, uses an Oculus Rift headset and a virtual reality simulation known as the full body ownership illusion. In it, researchers created a virtual human body designed to be perceived as the participant's own. Once the participant assimilated to the illusion, the view shifted from first-person to third-person, creating an experience similar to how some describe out-of-body incidents.

So far, the study has only attempted the simulation on 32 women: 16 who experienced the out-of-body incident, and 16 in a control group who didn't experience this phenomenon.

After the study, participants in the main group reported lower anxiety about death than the control group, although researchers admit the study is still in the preliminary stages. Limited as it may be, it should surprise no one that a virtual reality simulation could help overcome fears, even the fear of death. It is, after all, being studied in multiple other scientific disciplines as a way to do just that.

A Virtual Out-of-Body Experience Reduces Fear of Death on PLOS

Read next: Alien mystery solved: It was just gas

See the article here:

Study: This virtual reality simulation could reduce fear of death - TNW

Posted in Virtual Reality | Comments Off on Study: This virtual reality simulation could reduce fear of death – TNW

Can virtual reality help drug addicts recover? Researchers from SFU aim to find out – CBC.ca

Posted: at 5:17 pm

A recovering cocaine addict walks into a room where the party is in full swing: drinks are flowing, music is pulsing and drugs are being passed around. A person approaches, offering coke.

It's a situation loaded with triggers for the addict, which is exactly the point.

It's also a situation that this time doesn't exist in any real way.

The room, the party and the cocaine are all simulated, and the person offering the drugs an avatar. All have been created by a team of virtual reality specialists tasked with building a worst-case scenario for the addict as a way to gauge whether treatment is in fact working.

Virtual reality is a computer-generated, three-dimensional environment that is projected inside a headset. It's supposed to be an immersive experience that mimics reality. (Shutterstock / Wayne0216)

In the next two months, 60 students enrolled at Surrey's John Volken Academy, a long-term residential addictions treatment centre, will be strapping on VR headsets and immersing themselves in virtual situations that have been tailor-made for their personal experiences and addiction issues.

The cutting-edge program is being led by SFU professor Faranak Farzan, chair in technology innovations for youth addiction recovery.

"The clients come to the [John Volken Academy] to recover from their bad habits, but after two years they have to go back and live their lives," said Farzan.

"We're hoping to use virtual reality to slowly introduce environmental cues that they were exposed to back home, but in a very safe environment, to assess where they are in terms of relapse or giving in to their impulses."

The project is still in the start-up phase, with researchers interviewing the students to gather information to create a personalized VR environment.

"If someone is taking opioids for pain management, for instance, my guess is the environment they're using in is much different than someone who is using cocaine. We don't want to put them in the same context; it wouldn't make sense," said Farzan.

"We need to ... find out what they are prone to. It could be a party for someone, but it could be a school yard for someone else. And it could be at home in the back yard for another individual."

Virtual reality has long been talked about as potentially useful in addictions therapy, but the technology has only recently become "real feeling" enough to be considered a serious tool.

And because the area of study is so new, researchers still need to answer what Farzan describes as the "million dollar question" in VR application: will an addict's behaviour in the virtual world transfer to real life?

"This is what we are trying to understand," she said. "Right now we're designing what makes intuitive sense and rolling that out. But at the end of the day we need also to run randomized controlled trials."

Sixty recovering drug and alcohol addicts from the John Volken Academy in Surrey will participate in the virtual reality project. (CBC)

John Volken has made a five-year commitment to the research, and hopes to expand the VR program to his two addiction treatment centres in the United States.

"The students are excited about it because they feel they're getting some real professional help," he said.

"The key part here is that we're working with people who have lived through addiction," said Farzan. "We're not sitting in our research labs trying to design something based just on what we think."

Read the original post:

Can virtual reality help drug addicts recover? Researchers from SFU aim to find out - CBC.ca

Posted in Virtual Reality | Comments Off on Can virtual reality help drug addicts recover? Researchers from SFU aim to find out – CBC.ca

How AI And Machine Learning Are Helping Drive The GE Digital Transformation – Forbes

Posted: at 5:17 pm


Forbes
This is the story of how GE has accomplished this digital transformation by leveraging AI and machine learning fueled by the power of Big Data. The GE transformation is an effort that is still in progress, but ...

Read the original here:

How AI And Machine Learning Are Helping Drive The GE Digital Transformation - Forbes

Posted in Ai | Comments Off on How AI And Machine Learning Are Helping Drive The GE Digital Transformation – Forbes

How Apple reinvigorated its AI aspirations in under a year – Engadget

Posted: at 5:17 pm

Well, technically, it's been three years of R&D, but Apple had a bit of trouble getting out of its own way for the first two. See, back in 2010, when Apple released the first version of Siri, the tech world promptly lost its mind. "Siri is as revolutionary as the Mac," the Harvard Business Review crowed, though CNN found that many people feared the company had unwittingly invented Skynet v1.0. But for as revolutionary as Siri appeared to be at first, its luster quickly wore off once the general public got ahold of it and recognized the system's numerous shortcomings.

Fast forward to 2014. Apple is at the end of its rope with Siri's listening and comprehension issues. The company realizes that minor tweaks to Siri's processes can't fix its underlying problems and a full reboot is required. So that's exactly what it did. The original Siri relied on hidden Markov models -- a statistical tool used to model time series data (essentially reconstructing the sequence of states in a system based only on the output data) -- to recognize temporal patterns in handwriting and speech recognition.
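The hidden Markov model idea above can be made concrete. The toy Viterbi decoder below is my own illustration, not Apple's code; the two states, the symbols, and all probabilities are invented. It shows how an HMM reconstructs the most likely hidden state sequence from output data alone:

```python
# Toy hidden Markov model decoded with the Viterbi algorithm.
# All states, symbols, and probabilities here are invented for illustration.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable hidden state path for a sequence of observations."""
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        best.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (best[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            best[t][s] = prob
            back[t][s] = prev
    # Trace the best path backwards from the most probable final state
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))

# Hypothetical speech-like setup: two hidden states emitting two symbols
states = ["vowel", "consonant"]
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {"vowel": {"vowel": 0.3, "consonant": 0.7},
           "consonant": {"vowel": 0.7, "consonant": 0.3}}
emit_p = {"vowel": {"a": 0.9, "t": 0.1},
          "consonant": {"a": 0.1, "t": 0.9}}

print(viterbi(["t", "a", "t"], states, start_p, trans_p, emit_p))
# → ['consonant', 'vowel', 'consonant']
```

Note the limitation the article hints at: each step depends only on the previous state, so longer-range context is invisible to the model.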

The company replaced and supplemented these models with a variety of machine learning techniques, including deep neural networks and "long short-term memory" networks (LSTMs). These neural networks are effectively more generalized versions of the Markov model. However, because they possess memory and can track context -- as opposed to simply learning patterns, as Markov models do -- they're better equipped to understand nuances like grammar and punctuation and to return a result closer to what the user really intended.
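The "memory" described above comes from the LSTM's gated cell state. As a minimal sketch (not Apple's implementation; the dimensions and random weights are arbitrary), a single LSTM step in NumPy looks like this:

```python
import numpy as np

# Minimal LSTM cell forward step (an illustrative sketch, not Apple's code).
# The forget/input gates let the cell carry context across time steps,
# which is the "memory" advantage over a Markov model.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b each stack the four gates (i, f, o, g)."""
    z = W @ x + U @ h_prev + b          # pre-activations for all four gates
    n = h_prev.shape[0]
    i = sigmoid(z[0:n])                  # input gate
    f = sigmoid(z[n:2*n])                # forget gate: how much old context to keep
    o = sigmoid(z[2*n:3*n])              # output gate
    g = np.tanh(z[3*n:4*n])              # candidate cell update
    c = f * c_prev + i * g               # new cell state mixes memory and input
    h = o * np.tanh(c)                   # new hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.normal(size=(4 * n_hidden, n_in))
U = rng.normal(size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):     # run a short sequence through the cell
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

Because `c` is carried forward and only partially overwritten each step, information from early inputs can influence outputs many steps later, something a fixed-order Markov model cannot do.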

The new system quickly spread beyond Siri. As Steven Levy points out, "You see it when the phone identifies a caller who isn't in your contact list (but who did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar."

By the WWDC 2016 keynote, Apple had made some solid advancements in its AI research. "We can tell the difference between the Orioles who are playing in the playoffs and the children who are playing in the park, automatically," Apple senior vice president Craig Federighi told the assembled crowd.

During WWDC 2016 the company also released its neural network API, Basic Neural Network Subroutines, an array of functions enabling third-party developers to construct neural networks for use on devices across the Apple ecosystem.

However, Apple had yet to catch up with the likes of Google and Amazon, both of whom had either already released an AI-powered smart home companion (looking at you, Alexa) or were just about to (Home would be released that November). This is due in part to the fact that Apple faced severe difficulties recruiting and retaining top AI engineering talent because it steadfastly refused to allow its researchers to publish their findings. That's not so surprising coming from a company so famous for its tight-lipped R&D efforts that it once sued a news outlet because a drunk engineer left a prototype phone in a Palo Alto bar.

"Apple is off the scale in terms of secrecy," Richard Zemel, a professor in the computer science department at the University of Toronto, told Bloomberg in 2015. "They're completely out of the loop." The level of secrecy was so severe that new hires to the AI teams were reportedly directed not to announce their new positions on social media.

"There's no way they can just observe and not be part of the community and take advantage of what is going on," Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg. "I believe if they don't change their attitude, they will stay behind."

Luckily for Apple, those attitudes did change and quickly. After buying Seattle-based machine learning AI startup Turi for around $200 million in August 2016, Apple hired AI expert Russ Salakhutdinov away from Carnegie Mellon University that October. It was his influence that finally pushed Apple's AI out of the shadows and into the light of peer review.

In December 2016, while speaking at the Neural Information Processing Systems conference in Barcelona, Salakhutdinov stunned his audience when he announced that Apple would begin publishing its work, going so far as to display an overhead slide reading, "Can we publish? Yes. Do we engage with academia? Yes."

Later that month Apple made good on Salakhutdinov's promise, publishing "Learning from Simulated and Unsupervised Images through Adversarial Training". The paper looked at the shortcomings of using simulated objects to train machine vision systems. It showed that while simulated images are easier to train with than photographs, the results don't work particularly well in the real world. Apple's solution employed a deep-learning system known as a Generative Adversarial Network (GAN), which pitted a pair of neural networks against one another in a race to generate images close enough to photo-realistic to fool a "discriminator" network. This way, researchers can exploit the ease of training networks on simulated images without the drop in performance once those systems are out of the lab.
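The adversarial setup can be illustrated with a toy one-dimensional example (my sketch, not the paper's networks; the distributions and parameters are invented): a generator G maps noise to fake samples, a discriminator D scores samples as real or fake, and each network's loss improves as the other's worsens:

```python
import numpy as np

# Sketch of the adversarial objective behind a GAN (illustrative only;
# real generator/discriminator networks are far larger than these toys).

rng = np.random.default_rng(42)

def G(z, w):                 # toy generator: scale-and-shift of input noise
    return w[0] * z + w[1]

def D(x, v):                 # toy discriminator: logistic score in (0, 1)
    return 1.0 / (1.0 + np.exp(-(v[0] * x + v[1])))

real = rng.normal(loc=4.0, scale=0.5, size=64)   # "real" data distribution
z = rng.normal(size=64)                          # noise fed to the generator
fake = G(z, w=np.array([1.0, 0.0]))

v = np.array([1.0, -2.0])                        # fixed discriminator params
# Discriminator wants real samples scored high and fakes scored low...
d_loss = -np.mean(np.log(D(real, v)) + np.log(1.0 - D(fake, v)))
# ...while the generator wants its fakes scored high.
g_loss = -np.mean(np.log(D(fake, v)))
print(d_loss > 0 and g_loss > 0)  # both cross-entropy losses are positive
```

Training alternates gradient steps on these two opposing losses; the "race" the article describes is each network lowering its own loss at the other's expense until the fakes become hard to tell apart from real data.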

In January 2017, Apple further signaled its seriousness by joining Amazon, Facebook, Google, IBM and Microsoft in the Partnership on AI. This industry group seeks to establish ethical, transparency and privacy guidelines in the field of AI research while promoting research and cooperation between its members. The following month, Apple drastically expanded its Seattle AI offices, renting a full two floors at Two Union Square and hiring more staff.

"We're trying to find the best people who are excited about AI and machine learning, excited about research and thinking long term, but also bringing those ideas into products that impact and delight our customers," Apple's director of machine learning Carlos Guestrin told GeekWire.

By March 2017, Apple had hit its stride. Speaking at the EmTech Digital conference in San Francisco, Salakhutdinov laid out the state of AI research, discussing topics ranging from using "attention mechanisms" to better describe the content of photographs to combining curated knowledge sources like Freebase and WordNet with deep-learning algorithms to make AI smarter and more efficient. "How can we incorporate all that prior knowledge into deep-learning?" Salakhutdinov said. "That's a big challenge."

That challenge could soon be a bit easier once Apple finishes developing the Neural Engine chip that it announced this May. Unlike Google devices, which shunt the heavy computational lifting required by AI processes up to the cloud where it is processed on the company's Tensor Processing Units, Apple devices have traditionally split that load between the onboard CPU and GPU.

This Neural Engine will instead handle AI processes as a dedicated standalone component, freeing up valuable processing power for the other two chips. This would not only save battery life by diverting load from the power-hungry GPU, it would also boost the device's onboard AR capabilities and help further advance Siri's intelligence -- potentially exceeding the capabilities of Google's Assistant and Amazon's Alexa.

But even without the added power that a dedicated AI chip can provide, Apple's recent advancements in the field have been impressive to say the least. In the span between two WWDCs, the company managed to release a neural network API, drastically expand its research efforts, poach one of the country's top minds in AI from one of the nation's foremost universities, reverse two years of backwards policy, join the industry's working group as a charter member and finally -- finally -- deliver a Siri assistant that's smarter than a box of rocks. Next year's WWDC is sure to be even more wild.

Image: AFP/Getty (Federighi on stage / network of photos)

See original here:

How Apple reinvigorated its AI aspirations in under a year - Engadget

Posted in Ai | Comments Off on How Apple reinvigorated its AI aspirations in under a year – Engadget

How AI is transforming customer service – TNW

Posted: at 5:17 pm

There will always be a need for a real human presence in customer service, but with the rise of AI comes the glaring reality that many things can be accomplished through the implementation of an AI-powered customer service virtual assistant. As our technology and understanding of machine learning grow, so do the possibilities for services that could benefit from a knowledgeable chatbot. What does this mean for the consumer, and how will this affect the job market in the years to come?

How many times have you been placed on hold, on the phone or through a live chat option, when all you wanted to do was ask a simple question about your account? Now, how many times has that wait taken longer than the simple question you had? While chatbots may never be able to completely replace the human customer service agent, they most certainly are already helping answer simple questions and pointing users in the right direction when needed.

Credit: Unsplash

As virtual assistants become more knowledgeable and easier to implement, more businesses will begin to use them to assist with more advanced questions a customer or interested party may have, meaning (hopefully) quicker answers for the consumer. But just how much of customer service will be taken over by virtual assistants? According to one report from Gartner, by the year 2020, 85% of customer relationships will be handled through AI-powered services.

That's a pretty staggering number, but I talked with Diego Ventura of NoHold, a company that provides virtual agents for enterprise-level businesses, and he believes those numbers need to be looked at a bit more closely.

The statement could end up being true, but with two important provisos: for one, we must consider all aspects of AI, not just virtual assistants, and two, we apply the statement to specific sectors and verticals.

AI is a vast field that includes multiple disciplines, like predictive analytics, suggestion engines, etc. In this sense you have to just think about companies like Amazon to see how most customer interactions are already handled automatically through some form of AI. Having said this, there are certain sectors of the industry that will always require, at least for the foreseeable future, human intervention. Think of medical, for example, or any company that provides very high-end B2B products or services.

Basically, what Diego is saying is that many aspects of customer service are already handled by AI without our even realizing it, so that 85% figure can't be read as "85% of customer service jobs will be replaced by AI." Still, even if we're not talking about 85% of the jobs involved in customer service, surely some jobs will be completely eliminated by the use of chatbots. So where does that leave us?

It's unfair to look at virtual assistants as the enemy that is taking our precious jobs. Throughout history, technology has made certain jobs obsolete as smarter, more efficient methods are implemented. Look at our manufacturing sector and it will not take long to see that many of the jobs our grandparents and great-grandparents had have been completely eliminated through advancements in machinery and other technologies. The rise of AI is simply another example of us growing as humans.

Credit: Unsplash

While it may take some jobs away, it also opens up the possibility for completely new jobs that have not existed before, chatbot technicians and specialists being but two examples. Couple that with the fact that many of these virtual assistants actually work with the customer service reps to make their jobs easier, and we start seeing that virtual assistant implementation is not as scary as it might seem. Ventura seems to agree:

I see Virtual Assistants (VAs), for one, as a way to primarily improve the customer experience and, two, to augment the capabilities of existing employees rather than simply taking their jobs. VAs help users find information more easily. Most VA users are people who were going to the Web to self-serve anyway; we are just making it easier for them to find what they are looking for and, yes, preventing escalations to the call center.

VAs are also used at the call center to help agents be more successful in answering questions, therefore augmenting their capabilities. Having said all this, there are jobs that will be replaced by automation, but I think it is just part of progress and hopefully people will see it as an opportunity to find more rewarding opportunities.

I think back to my time at a startup that was located in an old Masonic Temple. We were on the 6th floor, and every morning the lobby clerk, James, would put down the crumpled paper he was reading, hobble out from behind his small desk in the middle of the lobby, and take us up to our floor on one of those old elevators that required someone to manually push and pull a lever to get their guests to a certain floor. James was a professional at it; he reminded me of an airplane pilot, the way he twisted certain knobs and manipulated the lever to get us to our destination, missing our floor only once in the entire two years I was there.

While James might have been an expert at his craft, technology has all but eliminated that position. When was the last time you had someone manually cart you to a floor in a hotel? When was the last time you thought about it? Were you mad at technology for taking away someone's job?

As humans, we advance; that's what we do. And the rise of AI in the customer service field is just another step in our advancement and should be looked at as such. There might be some growing pains during the process, but we shouldn't let that stop us from growing and extending our knowledge. When we look at the benefits these chatbots can provide to the consumer and the business, it becomes clear that we are moving in the right direction.

Read next: How Marketing Will Change in 2017

Follow this link:

How AI is transforming customer service - TNW

Posted in Ai | Comments Off on How AI is transforming customer service – TNW

Sesame Workshop and IBM team up to test a new AI-powered teaching method – TechCrunch

Posted: at 5:17 pm


TechCrunch
Can A.I. help build better educational apps for kids? That's a question Sesame Workshop, the nonprofit organization behind the popular children's TV program Sesame Street and others, aims to answer. The company has teamed up with IBM to create the ...
IBM Brings AI to Kindergartners with New Sesame Workshop App (TheStreet.com)

Original post:

Sesame Workshop and IBM team up to test a new AI-powered teaching method - TechCrunch

Posted in Ai | Comments Off on Sesame Workshop and IBM team up to test a new AI-powered teaching method – TechCrunch

The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks – Council on Foreign Relations (blog)

Posted: at 5:17 pm

The following is a guest post by Kyle Evanoff, research associate for International Economics and U.S. Foreign Policy.

Today through Friday, artificial intelligence (AI) experts are meeting with international leaders in Geneva, Switzerland, for the inaugural AI for Good Global Summit. Organized by the International Telecommunication Union (ITU), a UN agency that specializes in information and communication technologies, and the XPRIZE Foundation, a Silicon Valley nonprofit that awards competitive prizes for solutions addressing some of the world's most difficult problems, the gathering will discuss AI-related issues and promote international dialogue and cooperation on AI innovation.

The summit comes at a critical time and should help increase policymakers' awareness of the possibilities and challenges associated with AI. The downside is that it may encourage undue optimism by giving short shrift to the significant risks that AI poses to international security.

Although many policymakers and citizens are unaware of it, narrow forms of AI are already here. Software programs have long been able to defeat the world's best chess players, and newer ones are succeeding at less-defined tasks, such as composing music, writing news articles, and diagnosing medical conditions. The rate of progress is surprising even tech leaders, and future developments could bring massive increases in economic growth and human well-being, as well as cause widespread socioeconomic upheaval.

This week's forum provides a much-needed opportunity to discuss how AI should be governed at the global level, a topic that has garnered little attention from multilateral institutions like the United Nations. The draft program promises to educate policymakers on multiple AI issues, from sessions on moonshots to ethics, sustainable living, and poverty reduction, among other topics. Participants will include prominent individuals drawn from multilateral institutions, nongovernmental organizations (NGOs), the private sector, and academia.

This inclusivity is typical of the complex governance models that increasingly define and shape global policymaking, with internet governance a case in point. Increasingly, NGOs, public-private partnerships, industry codes of conduct, and other flexible arrangements have assumed many of the global governance functions once reserved for intergovernmental organizations. The new partnership between the ITU and the XPRIZE Foundation suggests that global governance of AI, although in its infancy, is poised to follow this same model.

For all its strengths, however, this multistakeholder approach could afford private sector organizers excessive agenda-setting power. The XPRIZE Foundation, founded by outspoken techno-optimist Peter Diamandis, promotes technological innovation as a means of creating a more abundant future. The summit's mission and agenda hew to this attitude, placing disproportionate emphasis on how AI technologies can overcome problems and paying too little attention to mitigating risks from those same technologies.

This is worrisome, since the risks of AI are numerous and non-trivial. Unrestrained AI innovation could threaten international stability, global security, and possibly even humanity's survival. And, because many of the pertinent technologies have yet to reach maturity, the risks associated with them have received scant attention on the international stage.

One area in which the risk of AI is obvious is electioneering. Since the epochal June 2016 Brexit referendum, state and nonstate actors with varying motivations have used AI to create and/or distribute propaganda via the internet. An Oxford study found that during the recent French presidential election, the proportion of traffic originating from highly automated Twitter accounts doubled between the first and second rounds of voting. Some even attribute Donald J. Trump's victory over Hillary Clinton in the U.S. presidential election to weaponized artificial intelligence spreading misinformation. Automated propaganda may well call the integrity of future elections into question.

Another major AI risk lies in the development and use of lethal autonomous weapons systems (LAWS). After the release of a 2012 Human Rights Watch report, Losing Humanity: The Case Against Killer Robots, the United Nations began considering including restrictions on LAWS in the Convention on Certain Conventional Weapons (CCW). Meanwhile, both China and the United States have made significant headway with their autonomous weapons programs, in what is quickly escalating into an international arms race. Since autonomous weapons might lower the political cost of conflict, they could make war more commonplace and increase death tolls.

A more distant but possibly greater risk is that of artificial general intelligence (AGI). While current AI programs are designed for specific, narrow purposes, future programs may be able to apply their intelligence to a far broader range of applications, much as humans do. An AGI-capable entity, through recursive self-improvement, could give rise to a superintelligence more capable than any human, one that might prove impossible to control and pose an existential threat to humanity, regardless of the intent of its initial programming. Although the AI doomsday scenario is a common science fiction trope, experts consider it to be a legitimate concern.

Given rapid recent advances in AI and the magnitude of potential risks, the time to begin multilateral discussions on international rules is now. AGI may seem far off, but many experts believe that it could become a reality by 2050. This makes the timeline for AGI similar to that of climate change. The stakes, though, could be much higher. Waiting until a crisis has occurred to act could preclude the possibility of action altogether.

Rather than allocating their limited resources to summits promoting AI innovation (a task for which national governments and the private sector are better suited), multilateral institutions should recognize AIs risks and work to mitigate them. Finalizing the inclusion of LAWS in the CCW would constitute an important milestone in this regard. So too would the formal adoption of AI safety principles such as those established at the Beneficial AI 2017 conference, one of the many artificial intelligence summits occurring outside of traditional global governance channels.

Multilateral institutions should also continue working with nontraditional actors to ensure that AI's benefits outweigh its costs. Complex governance arrangements can provide much-needed resources and serve as stopgaps when necessary. But intergovernmental organizations, as well as the national governments that govern them, should be careful about ceding too much agenda-setting power to private organizations. The primary danger of the AI for Good Global Summit is not that it distorts perceptions of AI risk; it is that Silicon Valley will wield greater influence over AI governance with each successive summit. Since technologists often prioritize innovation over risk mitigation, this could undermine global security.

More important still, policymakers should recognize AIs unprecedented transformative power and take a more proactive approach to addressing new technologies. The greatest risk of all is inaction.

See the article here:

The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks - Council on Foreign Relations (blog)

Posted in Ai | Comments Off on The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks – Council on Foreign Relations (blog)

Want to Understand Creativity? Enlist an AI Collaborator – WIRED

Posted: at 5:17 pm

Image: Stephanie Berger

A metronome ticks time. Not for the student, but for the teacher, who plays a short piano melody. Without missing a measure, the student follows with an improvised, yet derivative, cello run. The student plays the same run again, and then again. "I have it looping, actually, so you can hear the response over and over again," says the teacher, Jesse Engel, a computer scientist with Google Brain. "And you can hear some similarities with what I played, but it's not doing the job of trying to replicate what I played. It's trying to continue it in a meaningful way."

The student here is an artificial intelligence algorithm; the instrument, a synthesizer. And the real lesson is teaching an audience of hundreds how computers might someday become capable of producing real works of art. Engel is onstage at NYU's Skirball Center for the Performing Arts as part of the 2017 World Science Festival, along with three like-minded experts. Each of them is there to showcase how they nurture creativity in computers.

Which raises the question: What is creativity? The broadest definition is any nonlinear solution to a problem. Music is a creative way of making noises that sound pleasant. Language is creative communication. Airplanes are a creative solution to the problem of flight. "But the fact that we can build airplanes that fly faster and higher than birds does not necessarily explain how birds fly, or how they evolved to fly," says Peter Ulric Tse, a neuroscientist at Dartmouth College. Tse is onstage with Engel, but rather than using AI to tackle a creative endeavor, such as music, he believes AI is a vehicle for understanding the nature of creativity itself.

In humans, creativity evolved mysteriously. Homo sapiens became a distinct species around 200,000 years ago. Our ancestors' characteristic (or sapient, if you will) feature was their huge foreheads: the site of the frontal cortex, where high-level reasoning occurs. But the earliest indications of creativity in humans didn't appear until relatively recently. A sculpture of a human with a lion's head, one of the earliest examples, dates to around 40,000 years ago. That, and other archaeological evidence from the same time period, means we Homo sapiens likely spent most of our evolutionary history with unrealized creative potential. However, no physical evidence exists to explain what flipped the switch. "Thoughts don't leave fossils, neurocircuits don't leave fossils," says Tse. "All we have are bones and skulls and artifacts."

Artificial intelligence's path toward creativity probably won't ever fully explain how it evolved in humans. At most, it will give neuroscientists like Tse ways to examine the problem laterally. But it could help scientists understand creativity's theoretical limits. Lav Varshney, another member of the onstage panel, is working on a mathematical theory of creativity. "The way I've been defining it is things that are both novel, and of high quality in their domain," says Varshney, an engineering theorist at the University of Illinois Urbana-Champaign. For example, a new kind of food.

In the case of cuisine, Varshney says he trains his AI to measure goodness based on things like hedonic psychophysics, a branch of research that studies the molecular properties of human flavor perception. He does similar work in fashion, feeding his algorithm information on color matching, and so on. And according to his research, creativity has theoretical limits. Varshney says that as you increase the value of both quality and novelty, you get more and more noise. That is, it becomes harder and harder to distinguish the newness, and the goodness, of a thing. This probably explains why the avant garde is so, well, avant garde.
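Varshney's framing can be caricatured with a toy model. Everything below is invented for illustration, not his actual formulation: score each candidate by the product of a novelty term and a quality term, and notice how small the score gaps become at the top of the ranking, which is one way to picture the "noise" he describes.

```python
import random

random.seed(0)

def novelty(candidate, corpus):
    # Toy novelty: average distance from everything already in the corpus.
    return sum(abs(candidate - x) for x in corpus) / len(corpus)

def quality(candidate, ideal=0.5):
    # Toy quality: closeness to an assumed domain "sweet spot" at 0.5.
    return 1.0 - abs(candidate - ideal)

corpus = [random.random() for _ in range(100)]
candidates = [random.random() for _ in range(100)]

def score(c):
    return novelty(c, corpus) * quality(c)

# Rank by combined score; the tiny gap between the top-ranked candidates
# is where high-novelty, high-quality items become hard to tell apart.
ranked = sorted(candidates, key=score, reverse=True)
top_gap = score(ranked[0]) - score(ranked[1])
print(f"score gap between the top two candidates: {top_gap:.4f}")
```

In any real domain, the novelty and quality measures would come from data (flavor chemistry, color-matching rules), not from one-dimensional toys like these.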

Like Engel, Varshney is also teaching algorithms to compose music. On stage, he demonstrates one that is learning to compose in the style of Bach. But, he points out, this is not pure creativity. The computer learns by having another algorithm, a teacher, progressively introduce constraints: here are the different available instruments, these are chords, this is what it means to sing in soprano. In essence, the algorithm is replicating Bach's creativity, not evolving its own creative genius. As such, AI algorithms are best suited to be creative collaborators.

Which is exactly what Sougwen Chung displays next. Chung is a visual artist, currently an artist in residence at Bell Labs, who draws with a robotic arm assistant. "I've had a lot of human collaborators, and thought it was time to switch it up a little bit," she says. Watching the pair, woman and machine, work together is mesmerizing. At first it looks like the arm is mirroring her strokes. But as a piece progresses, you see that the arm has its own style. Yes, a style that is derivative of Chung's, but still not the same.

When Chung first started using the robotic arm, called DOUG, she thought the collaboration itself might be part of the artistic performance. However, she now believes the arm is pushing her to consider new creative frontiers. "When I collaborate with this algorithm, there's a real randomness and sense of unpredictability to it, and a lack of understanding that's kind of exciting," she says.

If that kind of freedom is at the heart of creativity, the next logical question is whether algorithms could ever eclipse human creativity. Engel, who has settled back into his seat after his performance, seems to think the answer is no. "The intentionality is human on both ends of the spectrum," he says. That is, humans are both the input and the consumer for anything a computer creates. "You can treat it more like a garden," he says. "You control the garden at a high level: planting seeds, watering it, pruning as necessary. But the garden grows on its own."

Original post:

Want to Understand Creativity? Enlist an AI Collaborator - WIRED


An AI Can Now Predict How Much Longer You’ll Live For – Futurism

Posted: at 5:17 pm

In Brief: Researchers at the University of Adelaide have developed an AI that can analyze CT scans to predict if a patient will die within five years with 69 percent accuracy. This system could eventually be used to save lives by providing doctors with a way to detect illnesses sooner.

Predicting the Future

While many researchers are looking for ways to use artificial intelligence (AI) to extend human life, scientists at the University of Adelaide created an AI that could help them better understand death. The system they created predicts if a person will die within five years after analyzing CT scans of their organs, and it was able to do so with 69 percent accuracy, a rate comparable to that of trained medical professionals.

The system makes use of the technique of deep learning, and it was tested using images taken from 48 patients, all over the age of 60. The researchers describe it as the first study of its kind to combine medical imaging and artificial intelligence in this way, and the results have been published in Scientific Reports.
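At its core, the task is supervised binary classification from imaging data. A minimal sketch of that idea, with entirely synthetic numbers (48 "patients", 16 made-up features standing in for what a deep network would extract from CT scans; the actual study used a deep convolutional network on the images, not logistic regression):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for features derived from 48 patients' CT scans
# (16 features each). Labels: 1 = died within five years, 0 = survived.
X = rng.normal(size=(48, 16))
true_w = rng.normal(size=16)
y = (X @ true_w + rng.normal(scale=0.5, size=48) > 0).astype(float)

# Logistic regression trained by gradient descent: the simplest version of
# "learn a mapping from imaging data to a five-year mortality probability".
w = np.zeros(16)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))          # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / len(y)     # gradient step on log-loss

pred = 1 / (1 + np.exp(-(X @ w))) > 0.5
accuracy = float((pred == y).mean())
print(f"training accuracy: {accuracy:.2f}")
```

With only 48 patients, as in the study, such numbers are preliminary almost by definition; measuring accuracy on the training data, as this sketch does, also overstates real performance, which is why the next stage of the research moves to tens of thousands of cases.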

"Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns," explained lead author Luke Oakden-Rayner in a university press release. This method of analysis can explore the combination of genetic and environmental risks better than genome testing alone, according to the researchers.

While the findings are only preliminary given the small sample size, the next stage will apply the AI to tens of thousands of cases.

While this study does focus on death, the most obvious and exciting consequence of it is how it could help preserve life. "Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions," said Oakden-Rayner. Because it encourages more precise treatment using firmer foundational data, the system has the potential to save many lives and provide patients with less intrusive healthcare.

An added benefit of this AI is its wide array of potential uses. Because medical imaging of internal organs is a fairly routine part of modern healthcare, the data is already plentiful. The system could be used to predict medical outcomes beyond just death, such as the potential for treatment complications, and it could work with any number of image types, such as MRIs or X-rays, not just CT scans. Researchers will just need to adjust the AI to their specifications, and they'll be able to obtain predictions quickly and cheaply.

AI systems are becoming more and more prevalent in the healthcare industry. DeepMind is being used to fight blindness in the United Kingdom, and IBM Watson is already as competent as human doctors at detecting cancer. It is in medicine, perhaps more than any other field, that we see AI's huge potential to help the human race.

Go here to see the original:

An AI Can Now Predict How Much Longer You'll Live For - Futurism


Nvidia Steps Up AI Data Center Push – Forbes

Posted: at 5:17 pm

Recently, Nvidia unveiled Volta, the most advanced data-center graphics-processing unit ever built. With 21.1 billion transistors and a massive 815 mm² footprint, it will facilitate the next generation of artificial intelligence. Nvidia is still ...

Go here to read the rest:

Nvidia Steps Up AI Data Center Push - Forbes
