Robot hijacking threat in homes, cars could paralyse robotics industry, cyber expert warns – ABC Online

Updated August 15, 2017 09:50:12

Imagine having a robot in your kitchen which is capable of cooking you dinner.

Well, for some it will soon be a reality. Now imagine what happens if your cooking robot is hijacked.

Dr Nicholas Patterson, a cyber security lecturer at Deakin University, has to take more than just the average laptop or smartphone into account nowadays; he also has to plan for if or when a robo chef is hacked.

"Think about if someone does hack that, how powerful it could be it's wielding knives and God knows what else," he said.

"Cyber security for robots is still a really new area, but I've spotted the holes quite early so I can see it's going to be a big problem.

"Someone in a certain country overseas can hack a robot in Australia and take control of that, spy on you, or attack you.

"You don't have to be in the next street or next house; you can be in another country."

Dr Patterson said robotic hacking had the potential to bring the robotics industry to a halt.

With things such as robotic vacuum cleaners and drones becoming more common household items, he said other consumer robotics would be introduced a lot sooner than people thought.

Dr Patterson said that by 2019 we could see up to 1.4 million new industrial robots installed in factories globally, and that more would begin entering our homes as technology advanced at an alarming rate.

According to Dr Patterson, smaller robots might not pose much of a physical threat; however, their speakers and microphones could be used to listen in on people's conversations.

"The larger ones are probably more the physical threat, like your robotic chef or the industrial type of robots," he said.

"The industrial ones are upwards of 200 pounds and they have things like lasers, welding devices and the clamping devices."

An SUV was hacked in the United States just last year.

"They could take over control of the car while it was mid-driving," Dr Patterson said.

He said in the past a person had also been able to hack into an airplane mid-flight.

"I think we're too much focused on laptops and phones, but there's these new avenues which are not looked at as much in terms of robots and passenger planes."

To prevent robotic hacking, Dr Patterson suggests keeping anti-malware software updated and turning off Bluetooth and wi-fi on robotic devices when they are not required.

He also recommends regularly changing the password you use to access the robot.

"Any remote doorways into the robot you want to switch off as best you can.

"Do we really need internet on a fridge or a TV? Probably not.

"Do we need it in a car? Yes, it helps download the GPS maps much more easily, but do we need that really?"

He said robotic hacking not only had the potential to cause privacy problems, but could put people's lives at risk as well.

Topics: robots-and-artificial-intelligence, hacking, computers-and-technology, internet-culture, internet-technology, canberra-2600, australia, united-states

First posted August 15, 2017 07:30:00

More:

Robot hijacking threat in homes, cars could paralyse robotics industry, cyber expert warns - ABC Online

Robotics ETF Races To $1 Billion in AUM – ETF Trends

The ROBO Global Robotics & Automation Index ETF (NASDAQ: ROBO), the original exchange traded fund dedicated to robotics investing, has a new milestone to celebrate. Last week, the ETF topped the $1 billion mark in assets under management.

The once far-flung concept of robotics is gaining momentum. For example, the International Federation of Robotics expected worldwide sales of robots to rise by 6 percent between 2014 and 2016, with over 190,000 industrial robots supplied to companies around the globe in 2016, ROBO Global said in a statement. ROBO debuted in late 2013.

The ETF tracks the ROBO Global Robotics & Automation Index, the brainchild of a team deeply entrenched in the robotics industry that created the innovative methodology, according to Dallas-based ROBO Global. The index and subsequent ETF offer investors access to the entire value chain of robotics, automation and artificial intelligence. The ROBO Global Robotics & Automation Index comprises 83 global companies from 14 countries in North America, Europe, Asia and the Middle East, and offers almost no overlap with traditional equity indices.

ROBO follows a two-tiered, equal-weighted system that ensures the strategy provides diversified exposure to a broad global ecosystem of new and enabling technologies as well as established automation/robotic providers. The ETF holds 92 stocks.

The robotics ETF's portfolio may also provide exposure to companies with sustainable growth opportunities, as the underlying ROBO Global Robotics & Automation Index has exhibited attractive sales growth, EBITDA growth and earnings-per-share growth. The underlying index has even outperformed the broader technology sector and the S&P 500 index since the 2008 financial downturn.

ROBO Global said there is now $1.6 billion allocated, on an international basis, to its robotics index. That includes products in Europe and Asia tracking the benchmark.

Year-to-date, investors have added $828.6 million to ROBO. The ETF charges 0.95% per year, or $95 on a $10,000 investment.
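The fee arithmetic in that last sentence is easy to verify; a quick sketch (the function name is ours, not from ETF Trends):

```python
def annual_fee(investment: float, expense_ratio: float) -> float:
    """Annual cost of holding a fund position at a given expense ratio."""
    return investment * expense_ratio

# ROBO's 0.95% expense ratio on a $10,000 position:
print(round(annual_fee(10_000, 0.0095), 2))  # 95.0
```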

For more information on the tech sector, visit our technology category.

More here:

Robotics ETF Races To $1 Billion in AUM - ETF Trends

Why 802.11ax is the next big thing in Wi-Fi – Network World

Zeus Kerravala is the founder and principal analyst with ZK Research, and provides a mix of tactical advice to help his clients in the current business climate.

I know, I know, I've heard it before. A new technology comes along, and it promises to be the next big thing. Consumers and businesses buy it, and what happens? It fails to live up to the hype. In my opinion, almost every iPhone release over the past five years has been that way. Sure, there were some cool new features, but overall it's not something I'd say was game changing.

One technology that does promise to live up to the hype is 802.11ax, the next standard for wireless LANs. I say that because this next generation of Wi-Fi was engineered for the world we live in, where everything is connected and there's an assumption that upload and download traffic will be equivalent. Previous generations of Wi-Fi assumed more casual use and that there would be far more downloading of information than uploading.

I agree that 802.11ac made things somewhat faster, but it was a faster version of something designed with old-school assumptions in mind. I'm sure everyone reading this has been in a situation where you've been at a conference center, stadium or another public space, and everything is great. Then the keynote or concert starts, or something else happens to get tens of thousands of people SnapChatting, tweeting and Facebooking (or SnapFacing, if you're New England Patriots coach Bill Belichick), and things come to a crawl.

The problem with Wi-Fi isn't always the speed of the system. 802.11ac wave 2 gets us to or over the gigabit barrier, which should be plenty of bandwidth for most people. The bigger problem with Wi-Fi is congestion: how current Wi-Fi handles lots of people trying to do wireless things and overcrowding the network. The ax standard solves these problems and others by completely redesigning how Wi-Fi works and borrowing some best practices from LTE.

Ax will be anywhere from 4x to 10x faster than existing Wi-Fi, and the wider, multiple channels greatly increase throughput. For example, if one assumes the speed is increased by 4x with 160 MHz channels, the speed of a single 802.11ax stream will be about 3.5 Gbps; the equivalent 802.11ac connection is 866 Mbps. A 4x4 MIMO environment would result in a total capacity of about 14 Gbps, and a client device that supported two or three streams would easily top 1 Gbps or much more.

If one knocked the channel width down to 40 MHz, which could happen in crowded areas like stadiums or college dorms, a single .11ax stream would be about 800 Mbps for a total capacity of 3.2 Gbps. Regardless of the channel size, 802.11ax will provide a huge boost in speed and total capacity.
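The arithmetic behind the figures in the last two paragraphs is straightforward to reproduce. A minimal sketch, where the 4x-per-stream speedup and the 866 Mbps baseline are the article's assumptions (not spec values) and the variable names are ours:

```python
# Back-of-the-envelope 802.11ax throughput, using the article's assumptions.
AC_STREAM_MBPS = 866  # single 802.11ac stream on a 160 MHz channel

def ax_stream_mbps(speedup: float = 4.0) -> float:
    """Estimated single 802.11ax stream speed at the assumed 4x speedup."""
    return AC_STREAM_MBPS * speedup

single_stream = ax_stream_mbps()  # 3464.0 Mbps, i.e. the ~3.5 Gbps quoted
mimo_4x4 = 4 * single_stream      # 13856.0 Mbps, roughly the 14 Gbps quoted
narrow_4x4 = 4 * 800              # 3200 Mbps at 40 MHz, the 3.2 Gbps quoted
print(single_stream, mimo_4x4, narrow_4x4)  # 3464.0 13856.0 3200
```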

One of the big advancements in LTE is something called orthogonal frequency division multiple access (OFDMA), which is an alphabet-soup way of saying it does frequency division multiplexing. With previous versions of Wi-Fi, channels were held open until the data transmission had finished. Think of a line at a bank with only one teller, where people have to queue up. MU-MIMO means there can be four tellers and four lines, but people still need to wait until the transaction ahead of them is complete.

With OFDMA, each channel is chopped up into hundreds of smaller sub-channels, each with a different frequency. The signals are then turned orthogonally (at right angles) so they can be stacked on top of each other and de-multiplexed. With the bank analogy, imagine a teller being able to handle multiple customers when they are free. So customer one hands the teller a check and while that person is signing the check, the teller deals with the next customer, etc. The use of OFDMA means up to 30 clients can share each channel instead of having to take turns broadcasting and listening on each.
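The bank-teller analogy can be turned into a toy airtime model. This sketch is ours, not from the standard: it assumes a fixed per-transmission contention overhead and nine clients sharing each transmission opportunity, and only illustrates why amortizing that overhead helps.

```python
# Toy airtime model of the teller analogy (illustrative numbers only; real
# 802.11 timing and resource-unit allocation are far more complex).

def serial_airtime(n_frames: int, frame_time: int, overhead: int) -> int:
    """Pre-OFDMA style: one client transmits at a time, and every frame
    pays the full contention/preamble overhead before its payload."""
    return n_frames * (overhead + frame_time)

def ofdma_airtime(n_frames: int, frame_time: int, overhead: int,
                  clients: int = 9) -> int:
    """OFDMA style: up to `clients` stations share one transmission. The
    overhead is paid once per group, while each payload rides a narrower
    sub-channel (taking `clients` times longer) in parallel with the rest."""
    groups = -(-n_frames // clients)  # ceiling division
    return groups * (overhead + frame_time * clients)

# 90 small frames, 1 time unit of payload each, 5 units of overhead each.
print(serial_airtime(90, 1, 5))  # 540: overhead dominates the airtime
print(ofdma_airtime(90, 1, 5))   # 140: overhead amortized across 9 clients
```

The win comes entirely from paying the contention overhead once per group instead of once per frame, which is exactly the congestion problem the article describes in crowded venues.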

From a user perspective, the network will seem much less congested than with 802.11ac. Another benefit is that the 2.4 GHz and 5 GHz bands can be combined, creating even more channels for data. The ax specification also includes higher-order QAM (quadrature amplitude modulation) encoding, which allows more data to be transmitted per packet.

Any new Wi-Fi standard will improve battery life, since range is typically greater and data is transmitted faster, so the client does not need to work as hard. However, ax has a new feature called wake time scheduling, which enables APs to tell clients when to go to sleep and provides a schedule of when to wake. These are very short periods of time, but being able to sleep many short intervals will make a big difference to battery life.
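One rough way to picture that benefit is as a duty cycle: a scheduled client spends most of its time in a low-power state instead of idling awake. All power figures below are made up for illustration; nothing here comes from the 802.11ax specification.

```python
def avg_power(active_mw: float, sleep_mw: float, duty_cycle: float) -> float:
    """Average radio power draw given the fraction of time spent awake."""
    return active_mw * duty_cycle + sleep_mw * (1 - duty_cycle)

# Illustrative only: a radio that idles awake most of the time versus one
# the AP wakes briefly on a schedule (wake time scheduling).
always_listening = avg_power(active_mw=200, sleep_mw=5, duty_cycle=0.9)
scheduled_wakes = avg_power(active_mw=200, sleep_mw=5, duty_cycle=0.1)
print(round(always_listening, 1), round(scheduled_wakes, 1))  # 180.5 24.5
```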

I've talked with chip, AP and client device manufacturers about when to expect 802.11ax products, and we should see the first consumer Wi-Fi routers in the early part of 2018, with an outside shot of late 2017. After that, business-grade APs and clients will follow. We are certainly close enough that network managers should be starting the educational process and planning now.

If you're not sure what this means for your business, talk to your Wi-Fi vendor, as all the major wireless LAN suppliers are planning to support 802.11ax. One final point: if you need to upgrade now, I certainly wouldn't put it off and wait for ax. Wi-Fi is extremely important to businesses of all sizes and will become more important as the Internet of Things (IoT) becomes more widely adopted.

The evolution of client devices has been game changing, as there's almost nothing we do that doesn't involve them. The 802.11ax specification finally brings a Wi-Fi standard to the network that can support all of the things we want to do with our wireless LANs.

The rest is here:

Why 802.11ax is the next big thing in Wi-Fi - Network World

How women are gaining ground in virtual reality – The Guardian

Prof Anneke Smelik says female artists need to start appropriating new technologies for their own storytelling. Photograph: Alamy

Virtual reality may be an industry in its infancy, but it is expected to generate $7.2bn (£5.6bn) globally by the end of this year and be worth $150bn by 2020. Given that the technology is new and unlike much else in Silicon Valley, can it offer female creators the chance to start from and maintain a level playing field? Prof Anneke Smelik, an expert in visual culture at Radboud University in the Netherlands, believes the moment is ripe. "Gaming, and VR generally, is considered very much a male genre, but female artists and filmmakers need to start appropriating new genres and technologies for their own storytelling," she says.

Why? Well, for one, the industry's biggest investments are being made in adrenaline-fuelled gaming experiences and pornography, meaning that much of the content is dominated by men. In February, an extensive survey in the UK found that men are more likely than women to use VR; 20% said they had already, compared with 13% of women. Another study showed that two-thirds of women are not enthusiastic about trying VR.

It is not hard to see why: the tech world has a well-documented problem with sexism, and virtual reality has yet to prove itself an inclusive space. Last year, gamer Jordan Belamire went viral after writing about being sexually assaulted online, highlighting questions of ethics, behaviour and consent in the virtual world, while Silicon Valley startup UploadVR faced a lawsuit over myriad claims, including gender discrimination and sexual assault, suggesting that sexism in the industry has begun to infiltrate its content.

However, a number of female producers are determined to ensure that virtual reality will not share the same fate as other entertainment and tech sectors and are helping women reclaim the space by making content for and about women.

Independent filmmaker Jayisha Patel is one woman trying to exploit VR's potential. Her film Notes to My Father is a short documentary that explores the story of a human-trafficking survivor, an Indian woman named Ramadevi. When viewed through a headset, the perspective is chilling. One of the most harrowing scenes positions the viewer inside a train carriage full of men. In virtual reality, it is a vivid and uncomfortable depiction of what it is like to be the subject of the male gaze. "I was trying to get the viewer to feel what it's like being the only woman in the carriage and having all these men staring at you, hearing them adjust their belts, breathing heavily. You start to understand what it's really like to be objectified," says Patel.

"What I wanted to do with this film was not just use the female gaze in a story about sexual abuse, which is typically a women's issue, but use it to address the fact that men are often complicit in it and are instigators of it," she says. "Doing stories about women is not just about showing empowered women on screen for a female audience, it's also about showing vulnerability, so it can be a piece not just for a female audience, but for everyone." Here, the female gaze in virtual reality puts the viewer in the shoes of a character, offering an empathetic, sensory exploration of the female experience.

Another example of virtual reality that positions the viewer in a female space comes from producer and curator Catherine Allen. She runs a virtual reality diversity initiative that tries to get more women to create virtual reality. "We've got this golden opportunity to make the VR space as inclusive and diverse as possible, but right now it is so male-dominated and the content reflects that. When I go on the Oculus store, I'm hit by so many pieces that feel like they're made by men, for men," she says.

Allen wanted to rectify this. Last year, she created No Small Talk, a VR talk show aimed at millennial women. Filmed in 360 degrees, it features presenter Cherry Healey and blogger Emma Gannon in a coffee shop chatting about everything from how to take photos with your smartphone to how it feels to suffer from anxiety. It feels like a visual podcast and is designed to make the viewer feel as if they are the third person at the table. "We wanted to make it feel as though you're the quiet friend who's just sitting there and listening," says Allen.

The show was a step forward in creating virtual reality content that is accessible for female audiences, but it was not popular with everyone. "Some of the male viewers we tested with just didn't get it. When women are having a conversation, men often describe it as gossip or chit-chat; it all sounds quite frivolous and unproductive. But when men are having a conversation, it's described as discussion or deliberation or debate. We used this piece to really try to change that, by showing how women talk about big topics through everyday things," she says. It moves away from the thrill-seeking gimmick that makes up so much virtual reality content these days.

Finding ways to amplify women's voices, stories and narratives is no mean feat, but virtual reality is starting to look like a positive space in which to tell those stories. "We're still working out what virtual reality even is, how it fits into society and who experiences it," Allen says. "I don't think it has more opportunity to expose people to women's stories than any other medium, but because, as an industry, it is newer, we have a responsibility to help make it the most diverse form of entertainment it can be and one that can be reflective of society."

Read this article:

How women are gaining ground in virtual reality - The Guardian

Virtual Reality: Cost of viewing headsets goes down, number of experiences goes up – KATU

by Stuart Tomlinson, KATU News


As the cost of virtual reality headsets continues to drop, the scope and magnitude of the available experiences is going up.

Whether it's climbing Mount Everest, floating around the International Space Station or taking a virtual tour of the White House, virtual reality expert Brandon Boone says there's never been a better time to jump into the virtual world.

Consider: the Oculus Rift VR headset dropped from about $700 last year to $400 right now.

"In virtual reality I can go climb Mount Everest and have the feeling of being high but at the same time knowing deep down in my mind that I'm not actually on Mount Everest," Boone said.

In addition to heart-pounding experiences, there is virtual reality software for meditation, sightseeing or just hanging out at the beach.

"People want to come right back into it as soon as they get out," Boone said.

Boone says virtual reality is not all fun and games. The devices are being used to train doctors, police officers and even salespeople.

More here:

Virtual Reality: Cost of viewing headsets goes down, number of experiences goes up - KATU

When whales attack… in Virtual Reality – WCSH-TV

NOW: VR in Rockport Library

Amanda Hill, WCSH 7:16 PM. EDT August 14, 2017

Rockport Virtual Reality

ROCKPORT, Maine (NEWS CENTER) -- While the internet and e-readers have taken away the necessity of a library, they have forced library directors to get creative, offering experiences you can't get from a smartphone.

"We've been sort of applying ourselves to different technologies and different things to make a library better for Rockport," said Ben Blackmon, the director of the Rockport Public Library.

Three weeks ago, the library installed a Virtual Reality system for patrons of any age.

"We've got a guided tour through the human vascular system, we've got some demos that let you walk around the Titanic," said Blackmon.

"It can facilitate experiences you wouldn't be able to have it any other way, specifically in STEM fields. It can take you to places you could never go, which is really neat, and we're going to hopefully use it to engage younger kids and the teen population, which is a little harder to grab."

© 2017 WCSH-TV

Read the original:

When whales attack... in Virtual Reality - WCSH-TV

CNN and Volvo Present the Solar Eclipse in an Unprecedented 360 Virtual Reality Live-Stream – CNN (blog)

CNN and Volvo Cars USA will present the solar eclipse from multiple locations, coast to coast, in an immersive two-hour 360 live-stream experience starting at 1 p.m. ET on August 21, 2017.

The astronomical and historic virtual reality event will be available all around the world in 4K resolution at CNN.com/eclipse, on CNN's mobile apps, on Samsung Gear VR powered by Oculus via Samsung VR, on Oculus Rift via Oculus Video and through CNN's Facebook page via Facebook Live 360.

"CNN'sEclipse of the Century"will allow users to witness the first total solar eclipse totraverse the United States for the first time in nearly 40 years. The live show, hosted byCNNs Space and Science CorrespondentRachel Craneand former NASA AstronautMark Kelly, will harness stunningimagery from specially-designed 4K 360 cameras, optimized for low-light, that will capture seven 'total eclipse'moments stretching from Oregon to South Carolina.

While only a fraction of the country will be able to witness the total eclipse in person, CNN's immersive livestream will enable viewers nationwide to "go there" virtually and experience a moment in history, seven times over. The livestream will be enhanced by real-time graphics, close-up views of the sun, and experts from the science community joining along the way to explain the significance of this phenomenon.

As part of Volvo's partnership with CNN, four of the seven live-streams will feature branded content produced by CNN's brand studio Courageous for Volvo and integrate 2018 Volvo XC60s specially outfitted with advanced 360 cameras. The groundbreaking live 360 content by Volvo will spotlight four influencers in different locations, sharing their unique perspective and excitement for the future as they witness the solar eclipse from helicopters and road tours along the narrow path of totality. For more on Volvo's partnership with CNN centered on the 2017 total solar eclipse, visit www.RacingTheSun.com.

Additionally, on television, CNN meteorologist Chad Myers will explain the science behind the solar eclipse, its course and timing; and CNN correspondent Alex Marquardt will profile the excitement around the historic event. CNN correspondents will report live from locations across the path of the solar eclipse, with Marquardt in Oregon for the start, Stephanie Elam in Missouri, Martin Savidge in Tennessee, and Kaylee Hartung in South Carolina.

For more information visit CNN.com/eclipse, and tune in to experience the event on August 21, 2017 at 1 p.m. ET.

See original here:

CNN and Volvo Present the Solar Eclipse in an Unprecedented 360 Virtual Reality Live-Stream - CNN (blog)

Virtual Reality a real venture in Vernon – FOX 61


VERNON -- It is "game on" in Vernon where two recent college grads have taken their idea to another dimension.

Matt McGivern and Joe Eilert lived across the hall from each other at Rensselaer Polytechnic Institute and were always avid gamers. These days the games they play are in virtual reality -- their new venture, Spark VR, opened in Vernon in May.

Spark showcases four bays (think indoor golf) where players can put on elaborate headsets and immerse themselves in about 15 different games. McGivern, who works at Pratt and Whitney, said "the technology has come such a long way in such a short period of time." Eilert, who works as an engineer by day at Electric Boat, added, "you can fight zombies, swim with fish, or defend the castle, anything you want."

Both McGivern and Eilert bill Spark VR as the first virtual reality arcade in Connecticut and say the experience is a memorable one. "We love when people come in after using their little glasses at home, because this blows their minds every time," McGivern said.

After demonstrating a zombie game called the Brookhaven Experiment, Eilert said, "the bigger picture is to use social virtual reality and see how far we can take it."

To learn more click here.


Here is the original post:

Virtual Reality a real venture in Vernon - FOX 61

Virtual reality experience offered at Chico library – Chico Enterprise-Record

Chico >> Library patrons looking to escape reality now have more than just books to turn to for an adventure.

The Chico library now offers the opportunity to try the Oculus virtual reality headset, which the Butte County Library acquired through a virtual reality grant.

On Tuesday, about six people got acquainted with the headset during the second of three planned sessions this month to provide the community with a virtual reality experience.

During the program, participants played games such as Oculus First Contact and Blocks by Google. The games are meant to serve as an introduction on how to use the equipment and let people become acquainted to interacting with the system.

Library assistant Alex Chen said he wants people to keep an open mind about future possibilities on how the headset can be used.

Virtual reality can be an extremely interactive and immersive learning environment. Imagine traveling through your anatomy or stepping onto the International Space Station to learn how things operate. Ways to treat patients with post-traumatic stress disorder and anxiety disorders are also being explored.

Most of the people at the library Tuesday said they couldn't wait for the Google Earth virtual reality game to be set up. The idea of being able to travel anywhere in the world, to places they otherwise would not have been able to go, thrilled them.

The library will be hosting another virtual reality session from 12:30-2 p.m. Aug. 29 for people 13 years or older. From there, the hope is to bring the headset out once a week and incorporate it into the new makerspace that should be completed later this year.

For more information visit buttecounty.net/bclibrary.


Follow this link:

Virtual reality experience offered at Chico library - Chico Enterprise-Record

Stanford Hosting Innovations In Psychiatry And Behavioral Health: Virtual Reality And Behavior Change Conference – UploadVR

The Stanford Department of Psychiatry and Behavioral Sciences is hosting its third annual Innovations in Psychiatry and Behavioral Health conference at the Li Ka Shing Center for Learning and Knowledge on the Stanford campus in Stanford, CA, on October 6 and 7.

The main focus of the conference will be the use of virtual and augmented reality in treating anxiety, addiction, psychosis, pain, depression, PTSD, psychosomatic illness and other psychological disorders.

Speakers this year include Walter Greenleaf, Giuseppe Riva, Skip Rizzo, Pat Bordnick, JoAnn Difede, Diane Gromala, Hunter Hoffman, David Thomas, Jacob Ballon, Kim Bullock, Tom Caruso, Anne Dubin, Kate Hardy, Hadi Hosseini, Alan Louie, Sean Mackey, Elizabeth McMahon, Laura Roberts, Sam Rodriguez, Nina Vasan and Leanne Williams, among others.

Stanford is also putting out a call for VR Poster Abstracts due August 21, 2017 and Brainstorm VR Innovation Lab Entries, which are due September 1, 2017. You can submit your abstracts and ideas on the Stanford Medicine website.

For more information about the conference and to register, please visit the official Innovations in Psychiatry and Behavioral Health: Virtual Reality and Behavior Change site.

Read more:

Stanford Hosting Innovations In Psychiatry And Behavioral Health: Virtual Reality And Behavior Change Conference - UploadVR

Picture of the Day: Summer Bytes presents Colossus in virtual reality – Electronics Weekly (blog)

Running until Sunday, 27 August, the idea is to use virtual reality (VR) technologies to bring to life the history of Colossus, the first electronic computer.

The web and mobile company Entropy Reality, which specialises in advanced content management using Ruby on Rails, iOS, Android, Windows, HTML5 and more, is based in the Bletchley Park Science and Innovation Centre.

It has worked with the management at the Bletchley Park museum to create a virtual reality experience in the Colossus and Tunny galleries, where users can walk around the galleries and immerse themselves in the story of how code breakers shortened the Second World War by unravelling Lorenz, the most complex enemy cipher, used in communications by the German High Command.

Margaret Sale, a trustee at the NMC, said the VR experience is "astonishingly good" and "pushes the boundaries of current technology in homage to the world's first computer". "It brings a whole new dimension to the possibilities of computer conservation and for the outreach display of museum artefacts," she said.

Eddie Vassallo, CEO of Entropy Reality, described the challenge of creating the VR representation of Colossus. "Its size and detail are mind-blowing in real life," he said.

"For the virtual world, we required massive servers to process its 65 million points of data. Each shot took 31 hours to process and export. Then we had the huge post-production task of stitching together all our images and deploying various tricks of the trade, just like a magician, to make sure the viewer looks where we want them to."

The National Museum of Computing's Summer Bytes special opening times are Thursdays to Sundays, from 12 noon to 5pm, until 27 August.

You can view the full details of opening times and guided tours over the next few weeks.

Read more here:

Picture of the Day: Summer Bytes presents Colossus in virtual reality - Electronics Weekly (blog)

Ash Koosha x TheWaveVR Host Live Virtual Reality Concert Beyond Political Borders – EARMILK (blog)

In a world where borders seem to be valued more than pathways, we are hindered in the pursuit of not just physical liberties, but mental ones as well. Forming walls around creative expression itself, Donald Trump and his travel bans have constricted the freedoms of many individuals, including Ash Koosha, the Iranian-born, London-based music producer and virtual reality advocate who scheduled a United States tour long before the travel bans went into action. When one door was shut, a whole world of possibilities opened. The minds behind TheWaveVR set out to collaborate with Ash Koosha, creating a virtual concert where boundaries dissolve into a fusion of psychedelically inspired audio and visual synesthesia for audiences currently restricted from seeing the performance. Watch the trailer below for a sneak peek of the groundbreaking show.

TheWaveVR CCO and co-founder Aaron Lemke stated, "There are no borders inside VR. We've been working hard to develop a community that's both positive and inclusive, where all are welcome. Soon artists will be able to use our platform to reach all their fans at once, and these physical, man-made boundaries won't have so much power." Ash Koosha, a rebellious political activist by nature, was imprisoned and blacklisted by his home country for creating rock music and films. For an artist wanting to push beyond the boundaries of governmental legislation, VR allows an inimitable gateway to deeper dimensions of Ash's vision, interweaving art and sound with infinite frontiers. Virtual reality has pioneered an era in which we can connect to each other in an authentically impactful way, regardless of location.

The show, titled AKTUAL, debuts on TheWaveVR concert platform on August 16 at 7 p.m. Pacific time, when fans with an HTC Vive or Oculus Rift can be transported to the show to interact as avatars in a shared virtual space in real time. Ash Koosha will be manipulating both the music and visuals from London, all done in his HTC Vive virtual reality headset, transporting each viewer into TheWaveVR's live virtual realm. TheWaveVR platform aims to redefine what live concerts can be, pushing the political, artistic, and musical agenda of our current reality.

The show will be streamed live on TheWaveVR's Facebook and Twitch pages.

TheWaveVR is the world's first interactive music concert platform in VR and is currently in beta on Steam Early Access.

Connect with Ash Koosha: Instagram | Twitter | Soundcloud

Connect with TheWaveVR: Instagram | Twitter| Facebook

See more here:

Ash Koosha x TheWaveVR Host Live Virtual Reality Concert Beyond Political Borders - EARMILK (blog)

Peter Funt: Is Football’s Future a Virtual Reality? – Noozhawk

By Peter Funt | August 14, 2017 | 3:30 p.m.

Will football someday become the world's first virtual professional sport?

With the NFL's preseason underway, high school and college players back on the practice fields, and tens of thousands of fantasy leagues conducting their annual drafts, let's put the question another way:

Which will happen first: the collapse of the NFL due to a shortage of players willing to risk injury? Or the development of computer-based football so compelling and unpredictable that it actually replaces the pro game loved by millions of fans?

For now, both scenarios seem far-fetched, but something's gotta give. Football is being jolted as never before by both scientific and anecdotal evidence about the effects of repeated blows to the head.

What could the long-term future possibly be for a sport in which, for example, 40 former pros conduct a charity golf tournament (in California this summer) to raise money for research on traumatic brain injuries?

For a game in which more than 2,000 women turn to a Facebook page devoted to the health consequences faced by their loved ones employed as pro players?

The Federation of State High School Associations tabulates that participation in football has fallen for the fourth straight year with the latest seasonal drop totaling roughly 26,000 players. If the pipeline of human pro players eventually dries up, perhaps replacements will emerge from computer labs.

In fact, pro football has been inching toward virtual status for more than three decades. The crude computer efforts of the early 1980s, developed by companies such as Nintendo, have evolved into modern, high-definition versions so life-like that they are played by many NFL pros in their spare time.

The NFL has enthusiastically supported this in large part because of the license fees, but also, I believe, with an eye toward the future. The league also backs fantasy football, which continues to grow in popularity as more and more fans create and manage their own teams in computer-based leagues.

The problem, of course, is that computer games and fantasy leagues depend, at least for now, on real players and real on-field results. But that might someday change.

Consider what two of my acquaintances, one a former pro player, the other an armchair fanatic, say when asked about the state of football today.

The fan explains that he never goes to games anymore: they're too expensive, too rowdy and, moreover, not as enjoyable as watching on a large-screen, high-def television. He prefers a comfy chair, with reasonably priced snacks at hand and a computer propped on his lap to track multiple fantasy squads.

The former pro explains that if the average fan were ever to stand on the field during an NFL game he would be so sickened by the sounds of collisions and screams of pain that he would cease loving the sport. What you see on TV, he adds, are guys in helmets and pads looking very much like avatars in a video game.

Football is the only sport in which you can watch a player for several seasons yet very possibly have no clue whatsoever about what he looks like in person.

To my mind, these two insightful fellows are describing the foundation for totally virtual football. The NFL could control it, the networks would cover it, and gamblers might even support it.

Given the pace at which computer science is advancing, a truly equivalent virtual game could likely be crafted in a decade's time.

Personally, I'm finding it increasingly difficult to rationalize my passion for a sport that is so clearly proving to cause lifelong suffering for its participants. I'm tired of all the dirty looks from my wife as she wonders why I so stubbornly support this game.

I've grown used to getting the scores and stats from Siri and Alexa. I suppose I'd be willing to have their colleagues play the game as well.

Peter Funt is a writer, speaker and author of the book, Cautiously Optimistic. He is syndicated by Cagle Cartoons and can be contacted at http://www.candidcamera.com. Click here for previous columns. The opinions expressed are his own.


How AI Is Creating Building Blocks to Reshape Music and Art – New York Times

As Mr. Eck says, these systems are at least approaching the point, still many, many years away, when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different. But that end game, as much a way of undermining art as creating it, is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

In the 1990s, at that juke joint in New Mexico, Mr. Eck combined Johnny Rotten and Johnny Cash. Now, he is building software that does much the same thing. Using neural networks, he and his team are crossbreeding sounds from very different instruments, say, a bassoon and a clavichord, creating instruments capable of producing sounds no one has ever heard.

Much as a neural network can learn to identify a cat by analyzing hundreds of cat photos, it can learn the musical characteristics of a bassoon by analyzing hundreds of notes. It creates a mathematical representation, or vector, that identifies a bassoon. So, Mr. Eck and his team have fed notes from hundreds of instruments into a neural network, building a vector for each one. Now, simply by moving a button across a screen, they can combine these vectors to create new instruments. One may be 47 percent bassoon and 53 percent clavichord. Another might switch the percentages. And so on.
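The blending described here is, at heart, simple vector arithmetic. A minimal sketch of the idea in Python, using random vectors as hypothetical stand-ins for the learned instrument embeddings (NSynth's real vectors come from a trained neural network, not from this code):

```python
import numpy as np

# Toy stand-ins for learned instrument embeddings. In NSynth these
# vectors are produced by a neural network trained on instrument notes;
# random vectors suffice to illustrate the blending arithmetic.
rng = np.random.default_rng(0)
bassoon = rng.normal(size=16)     # hypothetical "bassoon" vector
clavichord = rng.normal(size=16)  # hypothetical "clavichord" vector

def blend(v1, v2, weight):
    """Linearly interpolate two instrument vectors.
    weight=0.47 yields a 47% v1 / 53% v2 mix."""
    return weight * v1 + (1 - weight) * v2

# 47 percent bassoon, 53 percent clavichord, as in the example above.
hybrid = blend(bassoon, clavichord, 0.47)
```

Moving the on-screen button is, in this picture, just sliding `weight` between 0 and 1; a decoder network would then turn the blended vector back into sound.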

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Mr. Eck and his team are combining them to form something that didn't exist before, creating new ways that artists can work. "We're making the next film camera," Mr. Eck said. "We're making the next electric guitar."

Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other A.I. techniques. "This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.

At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways. In January, Mr. Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. "Much as Jackson Pollock figured out a new way to paint by just opening the paint can and splashing it on the canvas beneath him," she said, these new computational techniques create a broader palette for artists.

A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.

Soon, Mr. Eck and other Googlers spotted the blog, and now Mr. Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw. By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They don't copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like.

Then, you ask them to, say, draw a pig with a cat's head, or to visually subtract a foot from a horse, or sketch a truck that looks like a dog, or build a boat from a few random squiggly lines. Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Mr. Ha. A.I. isn't just creating new kinds of art; it's creating new kinds of artists.


Did Elon Musk’s AI champ destroy humans at video games? It’s complicated – The Verge

You might not have noticed, but over the weekend a little coup took place. On Friday night, in front of a crowd of thousands, an AI bot beat a professional human player at Dota 2, one of the world's most popular video games. The human champ, the affable Danil "Dendi" Ishutin, threw in the towel after being killed three times, saying he couldn't beat the unstoppable bot. "It feels a little bit like human," said Dendi. "But at the same time, it's something else."

The bot's patron was none other than tech billionaire Elon Musk, who helped found and fund the institution that designed it, OpenAI. Musk wasn't present, but made his feelings known on Twitter, saying: "OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go." Even more exciting, said OpenAI, was that the AI had taught itself everything it knew. It learned purely by playing successive versions of itself, amassing lifetimes of in-game experience over the course of just two weeks.

But how big a deal is all this? Was Friday night's showdown really more impressive than Google's AI victories at the board game Go? The short answer is probably not, but it still represents a significant step forward both for the world of e-sports and the world of artificial intelligence.

First, we need to look at Musk's claim that Dota is "vastly more complex than traditional board games like chess & Go." This is completely true. Real-time battle and strategy games like Dota and Starcraft II pose major challenges that computers just can't handle yet. Not only do these games demand long-term strategic thinking, but unlike board games they keep vital information hidden from players. You can see everything that's happening on a chess board, but you can't in a video game. This means you have to predict and preempt what your opponent will do. It takes imagination and intuition.

In Dota, this complexity is increased as human players are asked to work together in teams of five, coordinating strategies that change on the fly based on which characters players choose. To make things even more complex, there are more than 100 different characters in-game, each with its own unique skill set, and characters can be equipped with a number of unique items, each of which can be game-winning if deployed at the right moment. All this means it's basically impossible to comprehensively program winning strategies into a Dota bot.

But the game that OpenAI's bot played was nowhere near as complex as all this. Instead of 5v5, it took on humans at 1v1; and instead of choosing a character, both human and computer were limited to the same hero, a fellow named the Shadow Fiend, who has a pretty straightforward set of attacks. My colleague Vlad Savov, a confirmed Dota addict who also wrote up his thoughts on Friday's match, said the 1v1 match represents only a fraction of the complexity of the full team contest. So: probably not as complex as Go.

The second major caveat is knowing what advantages OpenAI's agent had over its human opponents. One of the major points of discussion in the AI community was whether or not the bot had access to Dota's bot API, which would let it tap directly into streams of information from the game, like the distances between players. OpenAI's Greg Brockman confirmed to The Verge that the AI did indeed use the API, and that certain techniques were hardcoded in the agent, including the items it should use in the game. It was also taught certain strategies (like one called creep block) using a trial-and-error technique known as reinforcement learning. Basically, it did get a little coaching.

Andreas Theodorou, a games AI researcher at the University of Bath and an experienced Dota player, explains why this makes a difference. "One of the main things in Dota is that you need to calculate distances to know how far some [attacks] travel," he says. "The API allows bots to have specific indications of range. So you can say, 'If someone is in 500 meters range, do that,' but the human player has to calculate it themselves, learning through trial and error. It really gives them an advantage if they have access to information that a human player does not." This is particularly true in a 1v1 setting with a hero like Shadow Fiend, where players have to focus on timing their attacks correctly, rather than overall strategy.
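The asymmetry Theodorou describes can be sketched as a toy illustration (the positions, units, and function names here are invented for the example, not the actual Dota bot API): the bot reads an exact distance, while a human effectively works from a noisy eyeball estimate.

```python
import math
import random

ATTACK_RANGE = 500  # hypothetical attack range, per the "500 meters" example

def api_distance(p1, p2):
    # A bot with API access gets exact positions, so a range
    # check reduces to one exact comparison.
    return math.dist(p1, p2)

def human_estimate(p1, p2, noise=0.15):
    # A human judges range by eye; model that as the true distance
    # perturbed by up to +/-15 percent.
    return api_distance(p1, p2) * (1 + random.uniform(-noise, noise))

bot_pos, enemy_pos = (0, 0), (300, 350)

exact = api_distance(bot_pos, enemy_pos)   # sqrt(212500), about 461
bot_attacks = exact <= ATTACK_RANGE        # always correct
human_attacks = human_estimate(bot_pos, enemy_pos) <= ATTACK_RANGE
```

Near the edge of the range, the bot's decision is always right, while the noisy human estimate will sometimes misjudge, which is exactly the edge that matters in a timing-focused 1v1.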

Brockman's response is that this sort of skill is trivial for an AI to learn, and was never the focus of OpenAI's research. He says the institute's bot could have done without information from the API, but "you'd just be spending a lot more of your time learning to do vision, which we already know works, so what's the benefit?"

So, knowing all this, should we dismiss OpenAI's victory? Not at all, says Brockman. He points out that, perhaps more important than the bot's victory, was how it taught itself in the first place. While previous AI champions like AlphaGo have learned how to play games by soaking up past matches by human champions, OpenAI's bot taught itself (nearly) everything it knows.

"You have this system that has just played against itself, and it has learned robust enough strategies to beat the top pros. That's not something you should take for granted," says Brockman. "And it's a big question for any machine learning system: how does complexity get into the model? Where does it come from?"

As OpenAI's Dota bot shows, he says, we don't have to teach computers complexity: they can learn it themselves. And although some of the bot's behavior was preprogrammed, it did develop some strategies by itself. For example, it learned how to fake out its opponents by pretending to trigger an attack, only to cancel at the last second, leaving the human player to dodge an attack that never comes, exactly like a feint in boxing.

Others, though, are still a little skeptical. AI researcher Denny Britz, who wrote a popular blog post that put the victory in context, tells The Verge that it's difficult to judge the scale of this achievement without knowing more technical details. (Brockman says these are forthcoming, but couldn't give an exact time frame.) "It's not clear what the technical contribution is at this point before the paper comes out," says Britz.

Theodorou points out that although OpenAI's bot beat Dendi onstage, once players got a good look at its tactics, they were able to outwit it. "If you look at the strategies they used, they played outside the box a bit and they won," he says. The players used offbeat strategies, the sort that wouldn't faze a human opponent, but which the AI had never seen before. "It didn't look like the bot was flexible enough," says Theodorou. (Brockman counters that once the bot learned these strategies, it wouldn't fall for them twice.)

All the experts agree that this was a major achievement, but that the real challenge is yet to come. That will be a 5v5 match, where OpenAI's agents have to manage not just a duel in the middle of the map, but a sprawling, chaotic battlefield, with multiple heroes, dozens of support units, and unexpected twists. Brockman says that OpenAI is targeting next year's grand Dota tournament, 12 months from now, to pull this off. Between now and then, there's much more training to be done.


MIT’s AI streaming software aims to stop those video stutters – TechCrunch

MIT's Computer Science and Artificial Intelligence Lab (CSAIL) wants to ensure your streaming video experience stays smooth. A research team led by MIT professor Mohammad Alizadeh has developed an artificial intelligence (dubbed Pensieve) that can select the best algorithms for ensuring video streams both without interruption and at the best possible playback quality.

The method improves upon existing tech, including the adaptive bitrate (ABR) method used by YouTube that throttles back quality to keep videos playing, albeit with pixelation and other artifacts. The AI can select different algorithms depending on what kind of network conditions a device is experiencing, cutting down on the downsides associated with any one method.

During experimentation, the CSAIL research team found that video streamed with between 10 and 30 percent less rebuffering, and with 10 to 25 percent better quality. Those gains would add up to a significantly improved experience for most video viewers, especially over a long period.

The difference between CSAIL's Pensieve approach and traditional methods is mainly its use of a neural network instead of a strictly rule-based algorithm. Rather than following predefined rules about which algorithmic techniques to use when buffering video, the neural net learns to optimize through a reward system that incentivizes smoother playback.

Researchers say the system is also potentially tweakable on the user end, depending on what they want to prioritize in playback: you could, for instance, set Pensieve to optimize for playback quality, or conversely for playback speed, or even for conservation of data.
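One way to picture this reward-driven setup: each video chunk earns the agent a score that trades off quality against stalls and abrupt quality switches, and retuning the weights changes what the system prioritizes. A minimal sketch; the function shape and the weights are illustrative assumptions, not Pensieve's published reward:

```python
def chunk_reward(bitrate, rebuffer_s, prev_bitrate,
                 quality_w=1.0, rebuffer_w=4.0, smooth_w=1.0):
    """Score one video chunk: reward high bitrate (Mbps), penalise
    seconds spent rebuffering and jumps in quality between chunks.
    The weights are made-up knobs, not Pensieve's actual constants."""
    return (quality_w * bitrate
            - rebuffer_w * rebuffer_s
            - smooth_w * abs(bitrate - prev_bitrate))

# Same quality either way, but a 1-second stall flips the score negative,
# so an agent trained on this reward learns to avoid rebuffering.
steady = chunk_reward(3.0, 0.0, 3.0)   # smooth playback
stalled = chunk_reward(3.0, 1.0, 3.0)  # one second of rebuffering
```

Tilting toward data conservation would mean penalising `bitrate` rather than rewarding it; tilting toward uninterrupted playback means raising `rebuffer_w`, which is roughly the user-end tweakability described above.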

The team will present Pensieve and release its code as open source at SIGCOMM next week in LA, and they expect that when trained on a larger data set, it could provide even greater improvements in performance and quality. They're also now going to test applying it to VR video, since the high bitrates required for a quality experience there are well suited to the kinds of improvements Pensieve can offer.


China’s Plan for World Domination in AI Isn’t So Crazy After All – Bloomberg

Xu Li's software scans more faces than maybe any on earth. He has the Chinese police to thank.

Xu runs SenseTime Group Ltd., which makes artificial intelligence software that recognizes objects and faces, and counts China's biggest smartphone brands as customers. In July, SenseTime raised $410 million, a sum it said was the largest single round for an AI company to date. That feat may soon be topped, probably by another startup in China.

The nation is betting heavily on AI. Money is pouring in from China's investors, big internet companies and its government, driven by a belief that the technology can remake entire sectors of the economy, as well as national security. A similar effort is underway in the U.S., but in this new global arms race, China has three advantages: a vast pool of engineers to write the software, a massive base of 751 million internet users to test it on, and, most importantly, staunch government support that includes handing over gobs of citizens' data, something that makes Western officials squirm.

Data is key because that's how AI engineers train and test algorithms to adapt and learn new skills without human programmers intervening. SenseTime built its video analysis software using footage from the police force in Guangzhou, a southern city of 14 million. Most Chinese mega-cities have set up institutes for AI that include some data-sharing arrangements, according to Xu. "In China, the population is huge, so it's much easier to collect the data for whatever use-scenarios you need," he said. "When we talk about data resources, really the largest data source is the government."

This flood of data will only rise. China just enshrined the pursuit of AI into a kind of national technology constitution. A state plan, issued in July, calls for the nation to become the leader in the industry by 2030. Five years from then, the government claims, the AI industry will create 400 billion yuan ($59 billion) in economic activity. China's tech titans, particularly Tencent Holdings Ltd. and Baidu Inc., are getting on board. And the science is showing up in unexpected places: Shanghai's courts are testing an AI system that scours criminal cases to judge the validity of evidence used by all sides, ostensibly to prevent wrongful prosecutions.

"Data access has always been easier in China, but now people in government, organizations and companies have recognized the value of data," said Jiebo Luo, a computer science professor at the University of Rochester who has researched China. "As long as they can find someone they trust, they are willing to share it."

The AI-MATHS machine took the math portion of China's annual university entrance exam in Chengdu.

Photographer: AFP via Getty Images

Every major U.S. tech company is investing deeply as well. Machine learning, a type of AI that lets driverless cars see, chatbots speak and machines parse scores of financial information, demands that computers learn from raw data instead of hand-cranked programming. Getting access to that data is a permanent slog. China's command-and-control economy, and its thinner privacy concerns, mean the country can dispense video footage, medical records, banking information and other wells of data almost whenever it pleases.

Xu argued this is a global phenomenon. "There's a trend toward making data more public. For example, NHS and Google recently shared some medical image data," he said. But that example does more to illustrate China's edge.

DeepMind, the AI lab of Google parent Alphabet Inc., has labored for nearly two years to access medical records from the U.K.'s National Health Service for a diagnostics app. The agency began a trial with the company using 1.6 million patient records. Last month, the top U.K. privacy watchdog declared the trial violates British data-protection laws, throwing its future into question.

Go player Lee Se-Dol, right, in a match against Google's AlphaGo, during the DeepMind Challenge Match in March 2016.

Photographer: Google via Getty Images

Contrast that with how officials handled a project in Fuzhou. Government leaders from that southeastern Chinese city of more than seven million people held an event on June 26. Venture capital firm Sequoia Capital helped organize the event, which included representatives from Dell Inc., International Business Machines Corp. and Lenovo Group Ltd. A spokeswoman for Dell characterized the event as the nation's first "Healthcare and Medical Big Data Ecology Summit."

The summit involved a vast handover of data. At the press conference, city officials shared 80 exabytes' worth of heart ultrasound videos, according to one company that participated. With the massive data set, some of the companies were tasked with building an AI tool that could identify heart disease, ideally at rates above medical experts. They were asked to turn it around by the fall.

"The Chinese AI market is moving fast because people are willing to take risks and adopt new technology more quickly in a fast-growing economy," said Chris Nicholson, co-founder of Skymind Inc., one of the companies involved in the event. "AI needs big data, and Chinese regulators are now on the side of making data accessible to accelerate AI."

Representatives from IBM and Lenovo declined to comment. Last month, Lenovo Chief Executive Officer Yang Yuanqing said he will invest $1 billion into AI research over the next three to four years.

Along with health, finance can be a lucrative business in China. In part, that's because the country has far less stringent privacy regulations and concerns than the West. For decades the government has kept a secret file on nearly everyone in China, called a dangan. The records run the gamut from health reports and school marks to personality assessments and club records. This dossier can often decide a citizen's future: whether they can score a promotion or be allowed to reside in the city where they work.

U.S. companies that partner in China stress that AI efforts, like those in Fuzhou, are for non-military purposes. Luo, the computer science professor, said most national security research efforts are relegated to select university partners. However, one stated goal of the government's national plan is a greater integration of civilian, academic and military development of AI.

The government also revealed in 2015 that it was building a nationwide database that would score citizens on their trustworthiness, which in turn would feed into their credit ratings. Last year, China Premier Li Keqiang said 80 percent of the nations data was in public hands and would be opened to the public, with an unspecific pledge to protect privacy. The raging popularity of live video feeds -- where Chinese internet users spend hours watching daily footage caught by surveillance video -- shows the gulf in privacy concerns between the country and the West. Embraced in China, the security cameras also reel in mountains of valuable data.

Some machine-learning researchers dispute the idea that data is a panacea. Advanced AI operations, like DeepMind, often rely on "simulated" data, co-founder Demis Hassabis explained during a trip to China in May. DeepMind has used Atari video games to train its systems. Engineers building self-driving car software frequently test it this way, simulating stretches of highway or crashes virtually.

"Sure, there might be data sets you could get access to in China that you couldn't in the U.S.," said Oren Etzioni, director of the Allen Institute for Artificial Intelligence. "But that does not put them in a terrific position vis-a-vis AI. It's still a question of the algorithm, the insights and the research."

Historically, the country has been a lightweight in those regards. It has suffered through a "brain drain," a flight of academics and specialists out of the country. "China currently has a talent shortage when it comes to top tier AI experts," said Connie Chan, a partner at venture capital firm Andreessen Horowitz. "While there have been more deep learning papers published in China than the U.S. since 2016, those papers have not been as influential as those from the U.S. and U.K."


But China is gaining ground. The country is producing more top engineers, who craft AI algorithms for U.S. companies and, increasingly, Chinese ones. Chinese universities and private firms are actively wooing AI researchers from across the globe. Luo, the University of Rochester professor, said top researchers can get offers of $500,000 or more in annual compensation from U.S. tech companies, while Chinese companies will often double that.

Meanwhile, China's homegrown talent is starting to shine. A popular benchmark in AI research is the ImageNet competition, an annual challenge to devise a visual recognition system with the lowest error rate. Like last year, this year's top winners were dominated by researchers from China, including a team from the Ministry of Public Security's Third Research Institute.

Relentless pollution in metropolises like Beijing and Shanghai has hurt Chinese companies' ability to nab top tech talent. In response, some are opening shop in Silicon Valley. Tencent recently set up an AI research lab in Seattle.

Photographer: David Paul Morris/Bloomberg

Baidu managed to pull a marquee name from that city. The firm recruited Qi Lu, one of Microsoft's top executives, to return to China to lead the search giant's push into AI. He touted the technology's potential for enhancing China's "national strength" and cited a figure that nearly half of the bountiful academic research on the subject globally has ethnically Chinese authors, using the Mandarin term "huaren," a term for ethnic Chinese that echoes government rhetoric.

"China has structural advantages, because China can acquire more and better data to power AI development," Lu told the cheering crowd of Chinese developers. "We must have the chance to lead the world!"


AI artist conjures up convincing fake worlds from memories – New Scientist

Out of this world

Stanford University and Intel

By Matt Reynolds

Take a look at the above image of a German street. At a glance it could be a blurry dashcam photo, or a snap that's gone through one of those apps that turns photos into paintings.

But you won't find this street anywhere on Google Maps. That's because it was generated by an imaginative neural network, stitching together its memories of real streets it was trained on.

"Nothing in the image actually exists," says Qifeng Chen at Stanford University, California, and Intel. Instead, his AI works from rough layouts that tell it what should be in each part of the image. The centre of the image might be labelled "road" while other sections are labelled "trees" or "cars": it's painting by numbers for an AI artist.

Chen says the technique could eventually create game worlds that truly resemble the real world. "Using deep learning to render video games could be the future," he says. He has already experimented with using the algorithm to replace the game world in Grand Theft Auto V.

Noah Snavely at Cornell University, New York, is impressed. Generating realistic-looking artificial scenes is a tricky problem, he says, and even the best existing approaches can't do it. Chen's system creates the largest and most detailed examples of their kind he has seen.

Snavely says that the technology could allow people to describe a world, and then have an AI build it in virtual reality. "It'd be great if you could conjure up a photorealistic scene just by describing it aloud," he says.

Chen's system starts by processing a photo of a real street it hasn't seen before, but that has been labelled so the AI knows which bits are supposed to be cars, people, roads and so on. The AI then uses this layout as a guide to generate a completely new image.

The AI was trained on 3000 images of German streets, so when it comes across a part of the photo labelled "car" it draws on its existing knowledge to generate a car there in its own creation. "We want the network to memorise what it's seen in the data," Chen says.
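The "painting by numbers" input can be pictured as a grid of labels. A toy stand-in in Python: here each label simply maps to a flat colour, whereas the real network synthesises photorealistic texture for each labelled region; the grid, labels, and palette are all invented for illustration.

```python
import numpy as np

# A semantic layout: each cell names what should appear there,
# like the rough layout guide the network receives.
layout = np.array([
    ["sky",   "sky",  "sky"],
    ["trees", "road", "cars"],
    ["trees", "road", "road"],
])

# Toy "renderer": one flat RGB colour per label. The actual system
# generates realistic imagery for each region instead.
palette = {"sky": (135, 206, 235), "trees": (34, 139, 34),
           "road": (105, 105, 105), "cars": (178, 34, 34)}

image = np.array([[palette[label] for label in row] for row in layout],
                 dtype=np.uint8)  # a 3x3 RGB image, shape (3, 3, 3)
```

The generation task is then: given only `layout`, produce an `image` that looks like a real photograph, which is what makes the problem hard.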

Intel researchers will present the work at this years International Conference on Computer Vision, which takes place in Venice, Italy, in late October.

The algorithm was also trained and tested on a smaller database of photos of domestic interiors, but Snavely says that to realise its potential it needs a data set that captures the true diversity of the world. That's easier said than done, however, as each component in the training images needs to be labelled by hand, and creating a data set with that level of detail is extremely labour-intensive.

Chen says his system still has a long way to go before it can build truly photorealistic worlds. The images it produces right now have a blurry, dreamlike quality, as the network isn't able to fill in all the details we expect in photos. He is already working on a larger version of the system that he hopes will be much more capable.

But when it comes to building worlds in virtual reality, that dreamlike nature might not be such a bad thing, says Snavely. We're used to seeing super-slick and realistic worlds on film and in video games, but there's not quite that level of expectation when it comes to VR. "You don't need total photorealism," he says.

Reference: arxiv.org/abs/1707.09405


Google Hires Former Star Apple Engineer for Its AI Team – Bloomberg

August 14, 2017, 1:44 PM EDT

Chris Lattner, a legend in the world of Apple software, has joined another rival of the iPhone maker: Alphabet Inc.'s Google, where he will work on artificial intelligence.

Lattner announced the news on Twitter on Monday, saying he will start next week. His arrival at Mountain View, California-based Google comes after a brief stint as head of the automated driving program at Tesla Inc., which he left in June. Lattner made a name for himself during a decade-plus career at Apple Inc., where he created the popular programming language Swift.


Lattner said he is joining Google Brain, the search giant's research unit. There he will work on a different piece of software: TensorFlow, Google's system designed to simplify the programming steps for AI, according to a person with knowledge of the matter. Since Google released the software for free last year, it has become a key part of its strategy to spread and make money from AI. Last May, Google introduced a specialized chip tailored for the software, called a TPU, which it rents out through its cloud-computing service.

A Google spokesman didn't immediately respond to a request for comment.

After leaving Apple in January, Lattner went to Tesla, a recruiting coup for Chief Executive Officer Elon Musk. Lattner left after six months. "In the end, Elon and I agreed that he and I did not work well together and that I should leave, so I did," he wrote in an update to his resume.


Why AI is now at the heart of our innovation economy | TechCrunch – TechCrunch

Andrew Keen is the author of three books: Cult of the Amateur, Digital Vertigo and The Internet Is Not The Answer. He produces Futurecast, and is the host of Keen On.

There are few more credible authorities on artificial intelligence (AI) than Hilary Mason, the New York-based founder and chief executive of the data science and machine learning consultancy Fast Forward Labs.

So, I asked Mason, who is also the Data Scientist in Residence at Accel Partners and the former Chief Scientist at Bitly, whether today's AI revolution is for real. Or is it, I wondered, just another catch-all phrase used by entrepreneurs and investors to describe the latest Silicon Valley mania?

Mason, who sees AI as the umbrella term to describe machine learning and big data, acknowledges that it has become a very trendy area of start-up activity. That said, she says, there has been such rapid technological progress in machine learning over the last five years as to make the field legitimately exciting. This progress has been so profound, Mason insists, that it is bringing AI close to the heart of our new innovation economy.

But in contrast with the fears of prominent technologists like Elon Musk, Mason doesn't worry about the threat to the human species from super-intelligent machines. We humans, she says, use machines as tools, and the advent of AI doesn't change this. "Machines aren't rational," she thus argues, implying that there are many more important things for us to worry about than an imminent singularity.

What does concern Mason, however, are questions about the role of women in tech. That's a question interviewers like myself should be asking men rather than women, she insists. It just creates an extra burden for female technologists, and thus isn't something that she wants to discuss publicly.

Many thanks to the folks at the Greater Providence Chamber of Commerce for their help in producing this interview.
