Daily Archives: May 30, 2017

Virtual Reality, the Future of TV as Cinema, and the Final Takeaways from the 2017 Cannes Film Festival – W Magazine

Posted: May 30, 2017 at 2:31 pm

The best part of the Cannes Film Festival is guessing the drama behind the scenes. "Was it a love fest, or was blood splattered on the walls and the carpets?" a journalist asked the jury about their selection process for the awards, announced Sunday night.

Jury president Pedro Almodóvar was clear that the bonhomie exhibited by this jury (including Jessica Chastain and Will Smith) on the red carpet was real, and said they were very democratic in making their choices. That showed in their selection of The Square and 120 Beats Per Minute, respectively, for the Palme d'Or top prize and the runner-up Grand Prix; these were films liked by many and hated by few. Another film, Loveless, was more divisive, with some naming it the front-runner for the Palme while others (including, ahem, this critic) detested the overly controlled filmmaking and mild misogyny.

Could this have been one of the movies that Chastain took a jab at in the post-awards press conference? She had a fascinating response to a question about the female filmmakers awarded (including Sofia Coppola, named best director for The Beguiled, only the second woman to win and the first in 56 years). Chastain said:

"The one thing I really took away from this experience is how the world views women, from the female characters that I saw represented on screen. And it was quite disturbing to me, to be honest. There are some exceptions. But for the most part, I was surprised with the representation of female characters on screen in these films. And I do hope when we include more female storytellers, we will have more of the women that I recognize in my day-to-day life."

"A couple of black folks wouldn't hurt none for next year, either," interjected Will Smith.


Another female filmmaker honored with an award was Lynne Ramsay, who won best screenplay for You Were Never Really Here, the internal noir about trauma and systematic abuses of power by men, with Joaquin Phoenix (also awarded Best Actor) as an off-the-books detective. This was the last competition film and the first to be divisive in a way that indicates its quality. I was stunned after the last credits and remained in my seat, processing the intensity. But there was an isolated boo in my screening. A few boos are often the mark of Cannes films that become classics, maybe even more consistently than the winner of the Palme d'Or.

Another female filmmaker honored with an award was Léonor Serraille for Jeune Femme, which was described to me as the French Frances Ha. It is similarly about a young (but not so young) woman in a city who suddenly becomes homeless after a break-up, in Paris instead of New York. But Serraille's film pays attention to economics in a way that Frances Ha ignored in favor of fantasy.

Frances had parents she could return to, while Paula, the red-headed effervescent mess in Jeune Femme, is not so lucky. Frances Ha resolved itself when Frances simply decided to get a full-time office job, even though the film was made during the recession, when full-time office jobs were rarer than unicorns. Paula's resolution is less easy and so more meaningful, more political. The movie is also a rarity in that it shows a Paris that's not all white.


Also making an impressive debut was Zambian-British filmmaker Rungano Nyoni with I Am Not a Witch, about 9-year-old Shula, who is exiled to a traveling witch camp where women are held in place by long spools of ribbon. She's told that if she cuts the ribbon she'll become a goat, but also that she can escape her fate (sort of) by marrying and following the route of respectability. It is a surprisingly hilarious film, with gorgeous imagery and incredible use of sound, from pop songs in headphones to cell phone ringtones.

This seemed the film that could be the biggest sleeper hit of the festival, a true crowd-pleaser. It was incredibly moving to see it premiere to a long standing ovation and tears in Nyoni's eyes, framed by her black and blue braids.

The best things I saw in all of Cannes, though, were two things on TV. One was the 1954 Jean Renoir movie French Cancan, playing soundless on French TV while I waited to be taken off-site by black car to see Alejandro González Iñárritu's virtual reality exhibit, Carne y Arena. "Is this French Cancan?" I asked the woman behind the desk in the wait area outside the screening room. She looked at Jean Gabin on screen and nodded. It was in moments like that you're reminded that you're in the center of worship for cinema, in France and especially in Cannes these ten days.

Will Smith also commented on the rich cinematic history of France: "I watch movies everywhere in the world, and the French film-going audience is an evolved audience. Because of the way it's been ingrained in the culture, there will always be a discerning sometimes harsh eye that the world will always look to for a higher perspective on cinema."


Watching Carne y Arena, which was co-produced by Legendary Entertainment and Fondazione Prada (where it will be exhibited in full from June through December), felt simultaneously like cinema's future and past. As I watched a group of migrants trying to cross from Mexico to the U.S., spotlights flashed on me and I felt fear, no longer an observer but a mistaken victim. When the guard dog chased after me within the VR experience, I screamed and dived into the sand, feeling both moved and like a sucker, like those audiences who reportedly screamed as the train came towards them at the earliest film screenings in the 1890s.

Even more remarkable than the VR exhibit was the process of getting there, a mysterious affair full of secrecy that felt like something out of Twin Peaks. And interestingly, it was watching the first two episodes of Twin Peaks, a few days after they aired on American television but projected on a screen and edited as one movie, that was hands down the most challenging movie I saw at the Cannes Film Festival.

What role will TV play at Cannes in the future? What role will cinematic history play in TV going forward? It's always a privilege to be at Cannes, to see the first public screenings of interesting films in a red-carpeted seaside town where they're worshiped. But this year it felt like a special privilege to see a festival in transition. The 70th anniversary felt like the last of what it was and the first of what it will become, no longer a place solely of white male auteurs as priests and the movie theater as the only place of worship.


Read the original post:

Virtual Reality, the Future of TV as Cinema, and the Final Takeaways from the 2017 Cannes Film Festival - W Magazine

Posted in Virtual Reality | Comments Off on Virtual Reality, the Future of TV as Cinema, and the Final Takeaways from the 2017 Cannes Film Festival – W Magazine

AI, the humanity! – The Verge

Posted: at 2:30 pm

A loss for humanity! Man succumbs to machine!

If you heard about AlphaGo's latest exploits last week (crushing the world's best Go player and confirming that artificial intelligence had mastered the ancient Chinese board game) you may have heard the news delivered in doomsday terms.

There was a certain melancholy to Ke Jie's capitulation, to be sure. The 19-year-old Chinese prodigy had declared he would never lose to an AI following AlphaGo's earthshaking victory over Lee Se-dol last year. To see him onstage last week, nearly bent double over the Go board and fidgeting with his hair, was to see a man comprehensively put in his place.

But focusing on that would miss the point. DeepMind, the Google-owned company that developed AlphaGo, isn't attempting to crush humanity; after all, the company is made up of humans itself. AlphaGo represents a major human achievement, and the takeaway shouldn't be that AI is surpassing our abilities, but that AI will enhance our abilities.

When speaking to DeepMind and Google developers at the Future of Go Summit in Wuzhen, China last week, I didn't hear much about the four games AlphaGo won over Lee Se-dol last year. Instead, I heard a lot about the one that it lost.

"We were interested to see if we could fix the problems, the knowledge gaps as we call them, that Lee Se-dol brilliantly exposed in game four with his incredible win, showing that there was a weakness in AlphaGo's knowledge," DeepMind co-founder and CEO Demis Hassabis said on the first day of the event. "We worked hard to see if we could fix that knowledge gap and actually teach, or have AlphaGo learn itself, how to deal with those kinds of positions. We're confident now that AlphaGo is better in those situations, but again we don't know for sure until we play against an amazing master like Ke Jie."

AlphaGo Master has become its own teacher.

As it happened, AlphaGo steamrolled Ke into a 3-0 defeat, suggesting that those knowledge gaps have been closed. It's worth noting, however, that DeepMind had to learn from AlphaGo's past mistakes to reach this level. If the AI had stood still for the past year, it's entirely possible that Ke would have won; he's a far stronger player than Lee. But AlphaGo did not stand still.

The version of AlphaGo that played Ke has been completely rearchitected; DeepMind calls it AlphaGo Master. "The main innovation in AlphaGo Master is that it's become its own teacher," says Dave Silver, DeepMind's lead researcher on AlphaGo. "So [now] AlphaGo actually learns from its own searches to improve its neural networks, both the policy network and the value network, and this makes it learn in a much more general way. One of the things we're most excited about is not just that it can play Go better, but we hope that this'll actually lead to technologies that are more generally applicable to other challenging domains."

AlphaGo is composed of two networks: a policy network that selects the next move to play, and a value network that analyzes the probability of winning. The policy network was initially based on millions of historical moves from actual games played by Go professionals. But AlphaGo Master goes much further by searching through the possible moves that could occur if a particular move is played, increasing its understanding of the potential fallout.

"The original system played against itself millions of times, but it didn't have this component of using the search," Hassabis tells The Verge. "[AlphaGo Master is] using its own strength to improve its own predictions. So whereas in the previous version it was mostly about generating data, in this version it's actually using the power of its own search function and its own abilities to improve one part of itself, the policy net." Essentially, AlphaGo is now better at assessing why a particular move would be the strongest possible option.
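As a rough illustration of that loop (not DeepMind's actual code; the network stubs, function names, and toy data below are all invented), one self-play game pairs each position with the search's improved move probabilities and the eventual result, and those pairs become the training targets for the policy and value networks:

```python
# Minimal sketch of search-driven self-play training, assuming stub networks.
import random

def policy_net(state):
    # Stub network: a uniform prior over nine candidate moves.
    return [1 / 9] * 9

def value_net(state):
    # Stub network: a neutral evaluation; training would regress
    # this toward the final game outcomes collected below.
    return 0.0

def search(state, prior):
    # Stand-in for the tree search: sharpen the prior toward one move.
    probs = list(prior)
    probs[random.randrange(len(probs))] += 1.0
    total = sum(probs)
    return [p / total for p in probs]

def self_play_episode(num_moves=10):
    # Play one game against itself, recording search-improved targets.
    records, state = [], "empty-board"
    for _ in range(num_moves):
        improved = search(state, policy_net(state))
        records.append((state, improved))
        state += "+move"
    outcome = random.choice([-1, 1])  # final game result
    # Policy target: the search probabilities. Value target: the outcome.
    return [(s, probs, outcome) for s, probs in records]

if __name__ == "__main__":
    examples = self_play_episode()
    print(f"collected {len(examples)} (position, policy, value) examples")
```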

The whole idea is to reduce your reliance on that human bootstrapping step.

I asked Hassabis whether he thought this system could work without the initial dataset taken from historical games of Go. "We're running those tests at the moment and we're pretty confident, actually," he said. "The initial results have been that it's looking pretty good. That'll be part of this future paper that we're going to publish, so we're not talking about that at the moment, but it's looking promising. The whole idea is to reduce your reliance on that human bootstrapping step."

But in order to defeat Ke, DeepMind needed to fix the weaknesses in the original AlphaGo that Lee exposed. Although the AI gets ever stronger by playing against itself, DeepMind couldn't rely on that baseline training to cover the knowledge gaps, nor could it hand-code a solution. "It's not like a traditional program where you just fix a bug," says Hassabis, who believes that similar knowledge gaps are likely to be a problem faced by all kinds of learning systems in the future. "You have to kind of coax it to learn new knowledge or explore that new area of the domain, and there are various strategies to do that. You can use adversarial opponents that push you into exploring those spaces, and you can keep different varieties of the AlphaGo versions to play each other so there's more variety in the player pool."

"Another thing we did is, when we assessed what kinds of positions we thought AlphaGo had a problem with, we looked at the self-play games and we identified games algorithmically; we wrote another algorithm to look at all those games and identify places where AlphaGo seemed to have this kind of problem. So we have a library of those sorts of positions, and we can test our new systems not only against each other in the self-play but against this database of known problematic positions, so then we could quantify the improvement against that."
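In outline, that amounts to a regression suite of board positions. A loose sketch of the idea, with an invented position format and scoring rule (nothing here reflects DeepMind's internal tooling):

```python
def solves(system, position):
    # True if the candidate system reproduces the known-good move.
    return system(position) == position["best_move"]

def regression_score(system, problem_library):
    # Fraction of known problematic positions the system now handles.
    return sum(solves(system, p) for p in problem_library) / len(problem_library)

problem_library = [
    {"board": "position-1", "best_move": "D4"},
    {"board": "position-2", "best_move": "Q16"},
]

candidate = lambda position: "D4"  # stub system that always plays D4
print(f"solved {regression_score(candidate, problem_library):.0%} of known problem positions")
```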

None of this increase in performance has required an increase in power. In fact, AlphaGo Master uses much less power than the version of AlphaGo that beat Lee Se-dol; it runs on a single second-gen Tensor Processing Unit machine in the Google Cloud, whereas the previous version used 50 TPUs at once. "You shouldn't think of this as running on compute power that's beyond the access of normal people," says Silver. "The special thing about it is the algorithm that's being used as opposed to the amount of compute."

AlphaGo learned from humans, and humans are learning from AlphaGo

AlphaGo is learning from humans, then, even if it may not need to in the future. And in turn, humans have learned from AlphaGo. The simplest demonstration of this came in Ke Jie's first match against the AI, where he used a 3-3 point as part of his opening strategy. That's a move that fell out of favor over the past several decades, but it's seen a resurgence in popularity after AlphaGo employed it to some success. And Ke pushed AlphaGo to its limits in the second game; the AI determined that his first 50 moves were perfect, and his first 100 were better than anyone had ever played against the Master version.

Although the Go community might not necessarily understand why a given AlphaGo move works in the moment, the AI provides a whole new way to approach the game. Go has been around for thousands of years, and AlphaGo has sparked one of the most profound shifts yet in how the game is played and studied.

But if you're reading this in the West, you probably don't play Go. What can AlphaGo do for you?

Say you're a data center architect working at Google. It's your job to make sure everything runs efficiently and coolly. To date, you've achieved that by designing the system so that you're running as few pieces of cooling equipment at once as possible: you turn on the second piece only after the first is maxed out, and so on. This makes sense, right? Well, a variant of AlphaGo named Dr. Data disagreed.

"What Dr. Data decided to do was actually turn on as many units as possible and run them at a very low level," Hassabis says. "Because of the switching and the pumps and the other things, that turned out to be better, and I think they're now taking that into new data center designs, potentially. They're taking some of those ideas and reincorporating them into the new designs, which obviously the AI system can't do. So the human designers are looking at what the AlphaGo variant was doing, and then that's informing their next decisions." Dr. Data is at work right now in Google's data centers, saving the company 40 percent of the electricity required for cooling and resulting in 15 percent less energy usage overall.
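There is a plausible physical intuition behind that counterintuitive result: fan and pump power grows roughly with the cube of speed (the affinity laws), so spreading the same cooling duty across many slow units can cost far less than running a few flat out. A toy model of that arithmetic, not Google's actual controller, which also has to account for per-unit overheads this sketch ignores:

```python
def cooling_power_kw(load_fraction, rated_kw=10.0):
    # Affinity-law approximation: fan/pump power scales ~ speed cubed.
    return rated_kw * load_fraction ** 3

total_duty = 2.0  # total cooling demand, in "units at full load"

few_units = 2 * cooling_power_kw(1.0)              # 2 units flat out
many_units = 8 * cooling_power_kw(total_duty / 8)  # 8 units at 25% each
print(f"2 units maxed out: {few_units:.2f} kW")
print(f"8 units at low load: {many_units:.2f} kW")
```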

DeepMind believes that the same principle will apply to science and health care, with deep-learning techniques helping to improve the accuracy and efficiency of everything from protein-folding to radiography. Perhaps less ambitiously but no less importantly, it may also lead to more sensible workflows. "You can imagine across a hospital or many hospitals you might be able to figure out that there's this process one hospital is using, or one nurse is using, that's super effective over time," says Hassabis. "Maybe they're doing something slightly different to this other hospital, and perhaps the other hospital can learn from that. I think at the moment you'd never know that was happening, but you can imagine that an AI system might be able to pick up on that and share that knowledge effectively between different doctors and hospitals so they all end up with the best practice."

These are areas particularly fraught with roadblocks and worries for many, of course. And it's natural for people to be suspicious of AI; I experienced it myself somewhat last week. My hotel was part of the same compound as the Future of Go Summit, and access to certain areas was gated by Baidu's machine-learning-powered facial recognition tech. It worked instantly, every time, often without me even knowing where the camera was; I'd just go through the gate and see my Verge profile photo flash up on a screen. I never saw it fail for the thousands of other people at the event, either. And all of this worked based on nothing more than a picture of me taken on an iPad at check-in.

I know that Facebook and Google and probably tons of other companies also know what I look like. But the weird feeling I got from seeing my face flawlessly recognized multiple times a day for a week shows that companies ought to be sensitive about the way they roll out AI technologies. It also, to some extent, probably explains why so many people seem unsettled by AlphaGos success.

But again, that success is a success built by humans. AlphaGo is already demonstrating the power of what can happen not only when AI learns from us, but when we learn from AI. At this stage, it's technology worth being optimistic about.

Photography by Sam Byford / The Verge

Link:

AI, the humanity! - The Verge

Posted in Ai | Comments Off on AI, the humanity! – The Verge

People.ai raises $7M to automate sales ops for the enterprise … – TechCrunch

Posted: at 2:30 pm

People.ai is a startup using AI to give sales managers a predictive playbook for the best way to close a deal. The company is announcing it has raised $7 million in Series A funding led by Lightspeed Venture Partners. Index Ventures and Shasta Ventures also participated in the round, alongside existing investors Y Combinator and SV Angel. Nakul Mandan, partner at Lightspeed, is also joining People.ai's board of directors.

The problem the sales management platform is trying to solve is that managers coach teams based on intuition rather than data. People.ai wants to change this by providing a holistic view of every outreach and action reps take to close deals. The software lets you see where in the pipeline sales reps are spending the most time and identify the metrics tied to success. Work smarter, not harder, right?

The goal is to have full visibility into salespeople's processes, with a visualization that shows how much time top performers are spending at each phase of a deal, and where struggling reps may be deviating from typically successful methodology. Are salespeople too zoned in on one phase of a deal? Not spending enough time talking to product managers, executives or other decision makers? Are they even focusing on the right leads? Those are the questions People.ai's algorithms seek to answer.

The solution tracks activity across different communication touch points between salespeople and clients. The tech scans email, phone calls, and calendar meetings and produces a dashboard showing how much time is spent in each phase of a deal, who was contacted and what the outcome was.
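The core of such a dashboard is a simple roll-up of logged touchpoints into time per phase per rep. A minimal sketch under assumed field names (People.ai's actual schema isn't described in the article):

```python
from collections import defaultdict

# Logged touchpoints; the field names and values are illustrative.
events = [
    {"rep": "ana", "phase": "prospecting", "hours": 1.5},
    {"rep": "ana", "phase": "negotiation", "hours": 0.5},
    {"rep": "bo",  "phase": "negotiation", "hours": 2.0},
]

# Aggregate hours by (rep, phase), the axis the dashboard displays.
hours_by_rep_phase = defaultdict(float)
for e in events:
    hours_by_rep_phase[(e["rep"], e["phase"])] += e["hours"]

for (rep, phase), hours in sorted(hours_by_rep_phase.items()):
    print(f"{rep} spent {hours:.1f}h in {phase}")
```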

When People.ai launched last year, CEO and machine learning veteran Oleg Rogynskyy wanted to build an AI that would help automate sales ops as a function. Since then, the company realized they wanted to refocus on solving this same problem but with an eye on the enterprise.

There are many companies building out features related to conversational AI, like Chorus.ai and VoiceOps. People.ai sees these companies as data sources, and its own solution as the backbone that reads all types of sales activity.

Rogynskyy tells me that recently the company has been seeing strong interest from the enterprise and Fortune 500 companies. People.ai will use the funding to scale out its product and sales teams, and to engage in more enterprise-focused R&D.

Original post:

People.ai raises $7M to automate sales ops for the enterprise ... - TechCrunch

Posted in Ai | Comments Off on People.ai raises $7M to automate sales ops for the enterprise … – TechCrunch

Judah vs. the Machines: Kairos face recognition AI can tell how you … – TechCrunch

Posted: at 2:30 pm

Sometimes the lips don't say what the heart feels, but instead of humanity working together on our own collective sense of caring and empathy, we have made the brave decision to build computers that can interpret emotions for us.

In this installment of Judah vs. the Machines, actor Judah Friedlander touches down in Miami to discover the wits of Kairos, a computer vision startup that claims to understand people with face recognition technology.

The startup claims that its technology can detect emotions like anger, fear, disgust, sadness and joy (as well as a lack of emotion). Friedlander seemed most concerned about whether the machine would be able to detect the emotion of victory that he soon planned to be feeling.

Friedlander sat down with Kairos CEO Brian Brackeen to figure out how the machine worked and see if he could get the upper hand in beating it head-to-head.

"There are 85 points on your face, and the distance between those points is like a fingerprint or a faceprint," Brackeen told Friedlander. "We feed the algorithm millions and millions of faces, and we say to the algorithm, 'This is a male,' 'This is a female,' or 'This is Brian,' and it learns over time who these people are or what they are."
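Taken literally, that faceprint is just the vector of pairwise distances between detected landmarks. A toy sketch with made-up coordinates (a real pipeline would first extract the 85 landmarks from an image; nothing here is Kairos' code):

```python
from itertools import combinations
from math import dist

# Four made-up landmarks standing in for the 85 facial points.
landmarks = [(0.0, 0.0), (1.0, 0.2), (0.5, 1.0), (0.4, 0.6)]

# The "faceprint": every pairwise distance between landmarks.
faceprint = [dist(a, b) for a, b in combinations(landmarks, 2)]
print(len(faceprint), "distances:", [round(d, 2) for d in faceprint])
```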

Things are a bit more scrappy at Kairos' machine learning team than when Friedlander visited the sprawling offices of Facebook. Kairos AR was founded in 2012 and has received just over $4.2 million in funding. Its primary customers appear to be companies looking to gauge brand perception or organize data through facial recognition, though it also touts its AI's ability to serve as an authentication tool.

In his head-to-head challenge with Kairos, Friedlander was forced to guess the emotions of strangers watching videos designed to elicit reactions ranging from surprise to dislike to delight. Check out the video above to see how Friedlander fares.

Read this article:

Judah vs. the Machines: Kairos face recognition AI can tell how you ... - TechCrunch

Posted in Ai | Comments Off on Judah vs. the Machines: Kairos face recognition AI can tell how you … – TechCrunch

This dystopian device warns you when AI is trying to impersonate … – ScienceAlert

Posted: at 2:30 pm

Scared of a future where you can no longer discern if you're dealing with a human or a computer? A team of Australian researchers has come up with what they call the Anti-AI AI.

The wearable prototype device is designed to identify synthetic speech and alert the user that the voice they're listening to doesn't belong to a flesh-and-blood individual. Developed as a proof of concept in just five days, the prototype makes use of a neural network powered by Google's TensorFlow machine learning software.

As artificial intelligence (AI) and robotic technology rapidly evolve, we're facing an uncertain future where machines can seemingly do all sorts of things better than people can, from mastering games to working our jobs, and even making new, more powerful forms of AI.

While the gravest concerns envision a future dystopia where unregulated, super-powerful AIs threaten humanity's very existence, the truth is we're already entering a new, unsettling era in which machines can deceive humans by impersonating the ways we speak and look.


As this technology gets even more sophisticated, it's becoming easier to imagine a world where soon it may be difficult or even impossible to tell when a 'person' you're talking to on the phone or watching on TV is or isn't a real human being.

But while AI is what empowers this nightmare scenario, it could also be what helps us reveal these synthetic impostors for what they are.

A team at Australian creative technology agency DT trained its AI on a database of synthetic voices, teaching the offline network to recognise artificial speech patterns.

When the wearable prototype operates, it captures audio spoken in the device's presence and sends it to this neural network in the cloud. If the AI detects an actual human voice (code green), all is fine.

But if the system picks up on synthetic speech, it has a unique way of subtly letting the human know that they're talking to a digital clone.

Rather than using light, sound, or vibration to alert the user, the prototype includes a miniature thermoelectric cooling element to reinforce that the voice they're hearing is coming from "a cold, lifeless machine".

"We wanted the device to give the wearer a unique sensation that matched what they were experiencing when a synthetic voice is detected," the team explains on DT's R&D blog.

"By using a 4x4 mm thermoelectric Peltier plate, we were able to create a noticeable chill on the skin near the back of the neck without drawing too much current."


That's right, guys, this device literally sends a chill down your spine when you're talking to a digital doppelgänger made up of 0s and 1s, and we can't think of a more fitting example of UI feedback.

Of course, because the Anti-AI AI is just a work-in-progress concept piece for now, it's unlikely the device will actually be released any time soon.

But the researchers behind it say that they're still refining their prototype and intend to improve the neural net with more synthetic content in the future.

Is this something you and I might need in the future? It's possible.

After all, in a post-truth world dominated by fake news and misinformation, where world leaders can so easily be manipulated to say things they never actually said, nothing's for certain.

More here:

This dystopian device warns you when AI is trying to impersonate ... - ScienceAlert

Posted in Ai | Comments Off on This dystopian device warns you when AI is trying to impersonate … – ScienceAlert

Nvidia wants to drive the future of AI (with ice hockey) – CNET

Posted: at 2:30 pm

Nvidia founder and CEO Jensen Huang shows off the company's vision for the future -- self-training AI.

According to Nvidia, the age of Moore's Law is coming to an end.

The solution? We don't just need to get smaller, we need to get smarter.

Nvidia took to the stage at Computex in Taipei today, talking up the future of artificial intelligence and machine learning, all powered by its GPU computing technology and what it's calling the Isaac Initiative.

The name might look back to Asimov, but the Isaac Initiative is all about building an AI future on four key pillars: smart processors (Nvidia certainly has a legacy on this front), smart software, reference designs for robots (created by partners like Ford) and something called Isaac's Lab.

That last part is where we get futuristic. Nvidia wants to create a virtual world -- what it's calling a Holodeck -- where machine learning can be developed and artificial intelligence can self-train. Think of it like the Matrix, but for AI.

Nvidia demo'd a version of Isaac's Lab on screen at its keynote, where row upon row of 3D-rendered robots were practising hitting a 3D hockey puck into a goal.

In the words of Nvidia CEO Jensen Huang, "We train for a while ... we replicate the smartest brain, and then we continue... Imagine if we could teach children like that!"
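Huang's description maps onto a simple population-based training loop: train many copies in parallel, copy the best performer over the rest, and repeat. A toy version of that loop (purely illustrative; the article doesn't describe Isaac's internals):

```python
import random

def train(skill):
    # Stub: a round of simulated practice nudges skill upward.
    return skill + random.random()

population = [0.0] * 8  # eight simulated hockey-playing robots

for generation in range(5):
    population = [train(skill) for skill in population]
    best = max(population)
    population = [best] * len(population)  # replicate the smartest brain
    print(f"generation {generation}: best skill {best:.2f}")
```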

If you want a vision of the future, imagine a robot playing hockey, forever.

Nvidia's Jensen Huang explains the company's vision for the future of AI -- the Isaac Initiative.

Nvidia is no stranger to machine learning. At CES this year, the company showed off a supercomputer designed for self-driving cars.

The company's driverless concept car, BB8, also got a showing today, but Nvidia isn't stopping at driverless cars.

"If we can solve this technology [machine-learning] for self-driving cars, it's the beginning of the road for solving it for all kinds of machines," said Huang.

That means you won't just see driverless cars, you'll see smart drones that can intelligently map their surroundings, robots that can learn to mimic famous artists to paint their own works and technology that can identify diseases.

And plenty of ice hockey, too.

Be sure to check out CNET's full coverage from the Computex 2017 show floor right here.

Link:

Nvidia wants to drive the future of AI (with ice hockey) - CNET

Posted in Ai | Comments Off on Nvidia wants to drive the future of AI (with ice hockey) – CNET

An AI Robot Learned How to Pick up Objects After Training Only in the Virtual World – Futurism

Posted: at 2:30 pm

In Brief: Researchers at the University of California, Berkeley, used a data set of information on more than a thousand objects to successfully train a deep learning system to pick up unfamiliar objects in the "real world."

While some researchers attempt to build artificial intelligences (AIs) that can solve problems humans might not have even thought of yet, others are focused on creating ones that do something most of us take for granted: pick things up.

For a robot, knowing how to properly grasp and lift an object is no easy task. To address this issue, researchers at the University of California, Berkeley, trained a deep learning system on a cloud-based data set of more than a thousand objects, exposing it to each one's 3D shape and appearance, as well as the physics of grasping it.

Afterward, they tested their system using physical objects that weren't included in its digital training set. When the system thought it had a better than 50 percent chance of successfully picking up a new object, it was actually able to do it 98 percent of the time, all without having trained on any objects outside of the virtual world.
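That decision rule (only attempt a grasp whose predicted success probability clears 50 percent) can be sketched in a few lines. The scoring model below is a stub standing in for the trained network; none of this is the Berkeley team's code:

```python
import random

def predicted_success(grasp):
    # Stand-in for the trained network's success-probability estimate.
    return random.random()

candidates = [f"grasp-{i}" for i in range(10)]
prob, best = max((predicted_success(g), g) for g in candidates)

if prob > 0.5:
    print(f"attempting {best} with predicted success {prob:.2f}")
else:
    print("no grasp clears the 50 percent bar; gather more data")
```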

The researchers have submitted their work for publication. They plan to publicly release their data set, which should help others create their own dexterous robots and perhaps even inspire a few innovators to think of other ways to use the virtual world for training AI systems.

"It's hard to collect large data sets of robotic data," Stefanie Tellex, an assistant professor specializing in robot learning at Brown University, explained to MIT Technology Review. "This paper is exciting because it shows that a simulated data set can be used to train a model for grasping. And this model translates to real successes on a physical robot."

See original here:

An AI Robot Learned How to Pick up Objects After Training Only in the Virtual World - Futurism

Posted in Ai | Comments Off on An AI Robot Learned How to Pick up Objects After Training Only in the Virtual World – Futurism

Artificial Intelligence In Digital Storage – Forbes

Posted: at 2:30 pm


Unstructured data, that is, ordinary information, videos and sensor measurements not held in a formal structure such as a database, is growing by leaps and bounds. Factors such as higher resolution, higher frame rate, multi-camera video projects and the ...

Continue reading here:

Artificial Intelligence In Digital Storage - Forbes

Posted in Artificial Intelligence | Comments Off on Artificial Intelligence In Digital Storage – Forbes

Apple iPhones could soon be fitted with artificial intelligence thanks to new ‘neural engine’ chip – The Sun

Posted: at 2:30 pm

Latest tech gossip indicates Apple's new smartphone will be very clever indeed

APPLE is reportedly planning to install an artificial intelligence chip in upcoming iPhones.

The tech giant is said to be working on a chip called the Apple Neural Engine which would be dedicated to carrying out artificial intelligence (AI) processing, news.com.au reports.


Although artificial intelligence is already being used to power digital assistants like Siri and Google Assistant, these technologies rely on computer servers to process data sent to them, rather than the processing happening on the mobile device itself.

The technology will bring new types of capabilities to mobile devices and should reduce or even eliminate the need for an internet connection.

The uses are potentially limitless and will bring about a new phase in how we rely on applications and our mobile devices in everyday life.

For example, health applications could use AI to tell when body readings from sensors on the phone or wearable devices are abnormal and need addressing.

Apple is one of many companies working to develop AI tech.

Google's AI hardware, called the Tensor Processing Unit, is 15 to 30 times faster than the fastest computer processors (CPUs) and graphics processors (GPUs) that power computers today.

These TPUs were what gave Google's DeepMind its ability to beat the world champions of the Chinese game of Go.

They have also vastly improved Google's automated language translation software, Google Translate.

The inclusion of AI in mobile software is going to massively increase the potential usefulness of software.

Our state of health, for example, is really about how we are doing relative to how we normally feel.

Changes in behaviour can signal changes in mental health, including conditions like dementia and Parkinson's, as well as revealing precursors of illnesses such as diabetes, and respiratory and cardiovascular diseases.

Our phones could monitor patterns of activity and even how we walk to assess our health.

This ability would involve the software learning our normal patterns and flagging up any changes it detects.
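The "learn the normal pattern, flag deviations" idea can be illustrated with a few lines of statistics: model a baseline from past readings and flag values far outside it. The data, threshold, and choice of a simple mean-and-spread model are all illustrative assumptions, not how any shipping health app works:

```python
from statistics import mean, stdev

history = [72, 70, 75, 71, 69, 73, 74, 70]  # e.g. resting heart rate
mu, sigma = mean(history), stdev(history)

def is_abnormal(reading, k=3.0):
    # Flag readings more than k standard deviations from the baseline.
    return abs(reading - mu) > k * sigma

for reading in (71, 95):
    print(reading, "abnormal" if is_abnormal(reading) else "normal")
```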

Eventually, smartphones could also become part of a self-directed ecosystem of intelligent and autonomous machines including cars.

It is likely that people will eventually share the use of these cars when needed rather than own one themselves, meaning AI will again be essential for managing how this sharing functions so that cars are distributed as efficiently as possible.

To do this, the scheduling AI service would need to liaise with software on everyone's phones to determine where and when they will be at a given location and where they need to get to.

AI on a mobile device will also be used to protect the device by checking if applications and communications are secure or likely to be a threat.

This technology is already being implemented in smart home appliances, but only in software. The addition of special AI chips will allow them to be much faster and to do more.

However, it's feared that artificial intelligence could erode humans' smartness.

As we come to rely on devices to do things, we may lose the ability to maintain certain skills and become too dependent on machines.

The iPhone 8 is expected to come out later this year and feature all manner of exciting innovations including a cool new "infinity screen".

This article originally appeared on news.com.au.


Original post:

Apple iPhones could soon be fitted with artificial intelligence thanks to new 'neural engine' chip - The Sun

Posted in Artificial Intelligence | Comments Off on Apple iPhones could soon be fitted with artificial intelligence thanks to new ‘neural engine’ chip – The Sun

Banks Eager For Artificial Intelligence, But Slow To Adopt – Forbes

Posted: at 2:30 pm


Facebook, Google, Microsoft and Baidu spent at least $8.5 billion beefing up their AI talent. Amazon spends $228 million a year just to find people to run Alexa and related machine learning initiatives. Even small to medium businesses in every sector ...

View original post here:

Banks Eager For Artificial Intelligence, But Slow To Adopt - Forbes

Posted in Artificial Intelligence | Comments Off on Banks Eager For Artificial Intelligence, But Slow To Adopt – Forbes