The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Monthly Archives: July 2017
Quad-City Times to present Bix 7 in virtual reality – Quad City Times
Posted: July 21, 2017 at 12:16 pm
For the record, this also is new to us.
That's not a set up for lower expectations. It's just that plenty of us in news don't have technical brains, but we're getting our geek on for future's sake.
Let me explain: Early this year, our executive editor, Autumn Phillips, got wind of a partnership between Eastern Iowa Community Colleges and a California-based company called EON Reality. Dubbed the Innovation Academy, the digital-tech experts at EON came to Davenport to teach college students how to develop content and tools for the up-and-coming world of virtual reality and augmented reality.
Phillips contacted EICC Chancellor Don Doucette to see if there was room in the partnership for us.
"I've been interested in VR (virtual reality) for a long time," she said. "I wondered how to use this local partnership as a learning experience for the newsroom.
"Don enthusiastically made things happen. Lee Enterprises put up the R&D money we needed for the partnership."
Phillips then asked for newsroom volunteers, people who wanted to learn something about virtual reality. Why not?
I had one experience with virtual reality, and I loved it. About a year ago, the Baseball Hall of Fame ("We Are Baseball") trailers showed up in the parking lot at Modern Woodmen Park, and I went down and plopped into a swivel chair and strapped the virtual-reality gear to my head. It was a cool experience, even for someone who can take or leave baseball.
The virtual technology put you right there in the dugout, on the field, behind the plate.
The only downer was that the turning in my seat, combined with the subtly unstable camera shots, made me feel woozy. I since have learned that 360-degree viewing makes many people nauseous.
But technology has come a long way.
"If you look at the iPhone 5 and the iPhone 7S, you can see there is much better stability," said Aubrey Jimenez, training coordinator at the Innovation Academy. "Smartphone makers are infinitely aware of what's coming for this technology."
So, what's in it for Quad-City Times readers?
In March, at the first meeting of our little volunteer group -- publisher Deb Anselm, photographer Andy Abeyta, assistant city editor Liz Boardman, Phillips and myself -- we came up with a plan. We asked ourselves: What story could we tell that would best benefit from this 360-degree technology that virtually brings you all along with us?
My mind instantly went to the starting line of the Bix. In the moments leading up to the firing of the starter pistol, the air on Brady Street feels like the air during a lightning storm. As thousands of voices turn into a white-noise hum, goosebumps pop onto your skin in fleshy anticipation.
We decided the Quad-City Times Bix 7 would be the perfect launching pad for our first virtual-reality project. But we wanted to do more than shoot immersive images; we wanted to tell a story. So, we agreed we would find a runner and tell the runner's story.
The runner we found is Nolte. A standout sprinter at Sherrard High School, she went to Western Illinois University on a track scholarship. Now 31, Nolte is married and the mother of two young girls, working full-time. She started training months ago to run the entirety of the Bix for the first time.
She regards running as a treat, a way to do something for herself. For those of us who regard running as something to do in an emergency, Nolte's drive is impressive, especially since distance isn't her thing.
The Innovation Academy students followed Nolte on her last Bix at 6 training run.
"We're going to be violating your personal space," Jimenez warned as several students aimed their cell phones and a video camera at Nolte. "We need some close shots."
On Bix 7 race day, we'll have VR professionals from EON Reality Sports filming the event. Nolte will remain in the spotlight from the starting line to the finish line of the Bix, and the thousands of runners and Bix spectators will serve as extras in the 360-degree virtual story that follows.
Our efforts will culminate in a virtual-reality app, called QCT VR. Once the VR experience is ready, we'll provide a link in the weeks following the Bix, so everyone can download the free app and follow Nolte's story while reliving the 2017 race. All you need is a smartphone and a set of Google Cardboard glasses. If you want to make sure you don't miss it, sign up for our Bix 7 e-newsletter at qctimes.com/email/. We'll send a link to your email.
If you want to learn more about virtual reality or this project, stop by the Quad-City Times booth at the Bix 7 packet pickup on Thursday evening from 5-9 p.m. or Friday from 9 a.m. to 9 p.m. at RiverCenter South Hall, 136 East Third Street, Davenport. Or visit the Quad-City Times tent in the newspaper parking lot during the race after-party on Saturday.
See the article here:
Quad-City Times to present Bix 7 in virtual reality - Quad City Times
Posted in Virtual Reality
How AI Is Already Changing Business – Harvard Business Review
Posted: at 12:16 pm
Erik Brynjolfsson, MIT Sloan School professor, explains how rapid advances in machine learning are presenting new opportunities for businesses. He breaks down how the technology works and what it can and can't do (yet). He also discusses the potential impact of AI on the economy, how workforces will interact with it in the future, and suggests managers start experimenting now. Brynjolfsson is the co-author, with Andrew McAfee, of the HBR Big Idea article, The Business of Artificial Intelligence. They're also the co-authors of the new book, Machine, Platform, Crowd: Harnessing Our Digital Future.
Download this podcast
SARAH GREEN CARMICHAEL: Welcome to the HBR IdeaCast from Harvard Business Review. I'm Sarah Green Carmichael.
It's a pretty sad photo when you look at it. A robot, just over a meter tall and shaped kind of like a pudgy rocket ship, lying on its side in a shallow pool in the courtyard of a Washington, D.C. office building. Workers, human ones, stand around, trying to figure out how to rescue it.
The security robot had just been on the job for a few days when the mishap occurred. One entrepreneur who works in the office complex wrote: "We were promised flying cars. Instead we got suicidal robots."
For many people online, the snapshot symbolized something about the autonomous future that awaits. Robots are coming, and computers can do all kinds of new work for us. Cars can drive themselves. For some people this is exciting, but there is also clearly fear out there about dystopia. Tesla CEO Elon Musk calls artificial intelligence an existential threat.
But our guest on the show today is cautiously optimistic. He's been watching how businesses are using artificial intelligence and how advances in machine learning will change how we work. Erik Brynjolfsson teaches at MIT Sloan School and runs the MIT Initiative on the Digital Economy. And he's the co-author with Andrew McAfee of the new HBR article, The Business of Artificial Intelligence.
Erik, thanks for talking with the HBR IdeaCast.
ERIK BRYNJOLFSSON: It's a pleasure.
SARAH GREEN CARMICHAEL: Why are you cautiously optimistic about the future of AI?
ERIK BRYNJOLFSSON: Well, actually that story you told about the robot that had trouble was a great lead-in, because in many ways it epitomizes some of the strengths and weaknesses of robots today. Machines are quite powerful and in many ways they're superhuman, you know, just as a calculator can do arithmetic a lot better than me. We're having artificial intelligence that's able to do all sorts of functions in terms of recognizing different kinds of cancer images, or now getting superhuman even in speech recognition in some applications, but they're also quite narrow. They don't have general intelligence the way people do. And that's why partnerships of humans and machines are often going to be the most successful in business.
SARAH GREEN CARMICHAEL: You know it's funny, 'cause when you talk about image recognition I think about a fantastic image in your article that is called Puppy or Muffin. I was amazed at how much puppies and muffins look alike, and sort of even more amazed that robots can tell them apart.
ERIK BRYNJOLFSSON: Yeah, it's a funny image. It always gets a laugh, and I encourage people to go take a look at it. Distinguishing different kinds of images is one of those things that humans are pretty good at. And for a long time, machines were nowhere near as good. As recently as seven, eight years ago, machines made about a 30 percent error rate on ImageNet, this big database that Fei-Fei Li created of over 10 million images. Now machines are down to less, you know, less than 5%, 3-4% depending on how it's set up. Humans still have about a 5% error rate. Sometimes they get those puppies and muffins wrong. Be careful what you reach for next time you're at that breakfast bar. But that's a good example.
The reason it's improved so much in the past few years is because of this new approach using deep neural nets that's gotten much more powerful for image recognition and really all sorts of different applications. I think that's a big reason why there's so much excitement these days.
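For readers who want to see how low those error rates have become in practice, here is a minimal Python sketch of ImageNet-style classification with an off-the-shelf pretrained network. The model choice, input file name, and output handling are illustrative assumptions of mine, not anything specified in the interview.

```python
# A minimal sketch of modern image classification with a pretrained network.
# Model, file name, and preprocessing values are assumptions for illustration.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("puppy_or_muffin.jpg")      # hypothetical input image
batch = preprocess(img).unsqueeze(0)         # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)
top_prob, top_class = probs.max(dim=0)
print(f"predicted ImageNet class {top_class.item()} "
      f"with probability {top_prob.item():.2f}")
```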
SARAH GREEN CARMICHAEL: Yeah, it's one of those things where we all kind of like to make fun of machines that get it wrong, but also it's sort of terrifying when they get it right.
ERIK BRYNJOLFSSON: Yeah. Machines are not going to be perfect drivers, they're not going to be perfect at making credit decisions, they're not going to be perfect at distinguishing, you know, muffins and puppies. And so, we have to make sure we build systems that are robust to those imperfections. But the point we make in the article, Andy and I point out, is that, you know, humans aren't perfect at any of those tasks either. And so, the benchmark for most entrepreneurs and managers is: who's going to be better for solving this particular task, or better yet, can we create a system that combines the strengths of both humans and machines and does something better than either of them would do individually.
SARAH GREEN CARMICHAEL: With photo recognition and facial recognition, I know that Facebook's facial recognition software can't tell the difference between me wearing makeup and me not wearing makeup, which is also sort of funny and horrifying, right? But at the same time, you know, I think a lot of us struggle to recognize people out of context; we see someone at the grocery store and we think, you know, I know that person from somewhere. So, it's something that humans don't always get right either.
ERIK BRYNJOLFSSON: Oh yeah. I'm the world's worst. You know, at conferences I would love it if there was a little machine whispering in my ear who this person is and how I met them before. So there, you know, there are those kinds of tradeoffs. But it can lead to some risks. For instance, you know, if machines are making bad decisions on important things, like who should get parole or who gets credit or not. That could be really problematic. Worse yet, sometimes they have biases that are built in from the data sets they use. If the people you hired in the past all had a certain kind of ethnic or gender tilt to them, then if you use that as a training set and teach the machine how to hire people, it will learn the same biases that the humans had previously. And, of course, that can be perpetuated and scaled up in ways that we wouldn't like to see.
SARAH GREEN CARMICHAEL: There is a lot of hype right now around AI or artificial intelligence. Some people say machine learning; other people come along and say: hold on, hold on, hold on, like a lot of this is just software and we've been using it for a long time. So how do you kind of think through the different terms and what they really mean?
ERIK BRYNJOLFSSON: Well, there's a really important difference between the way the machines are working now versus previously. You know, Andy McAfee and I wrote this book The Second Machine Age, where we talked about having machines do more and more cognitive tasks. And for most of the past 30 or 40 years that's been done by us painstakingly programming, writing code for exactly what we want the machine to do. You know, if it's doing tax preparation, add up this number and multiply it by that number, and of course we had to understand exactly what the task was in order to specify it.
But now the new machine learning approaches literally have the machines learn on their own things that we don't know how to explain. Face recognition is a perfect example. It would be really hard for me to describe, you know, my mother's face, you know, how far apart her eyes are or what her ear looks like.
ERIK BRYNJOLFSSON: I can recognize it but I couldn't really write code to do it. And the way the machines are working now is, instead of having us write the code, we give them lots and lots of examples. You know, here are pictures of my mom from different perspectives, or here are pictures of cats and dogs, or here's a piece of speech, you know, with the word yes and the word no. And if you give them enough examples the machine learning algorithms figure out the rules on their own.
That's a real breakthrough. It overcomes what we call Polanyi's paradox. Michael Polanyi, the polymath and philosopher, famously said in the 1960s, "We all know more than we can tell," but with machine learning we don't have to be able to tell or explain what to do. We just have to show examples. That change is what's opening up so many new applications for machines and allowing them to do a whole set of things that previously only humans could do.
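To make that distinction concrete, here is a minimal Python sketch of supervised learning with scikit-learn: no hand-written rules, just labeled examples the model generalizes from. The synthetic data and choice of classifier are my own illustrative assumptions, not anything from the interview.

```python
# A minimal supervised-learning sketch: the "rules" are induced from labeled
# examples rather than programmed by hand. Data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Labeled examples stand in for "lots and lots of examples" (e.g. tagged images).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)                    # rules are learned, not written
print("held-out accuracy:", clf.score(X_test, y_test))
```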
SARAH GREEN CARMICHAEL: So, it's interesting to think about kind of the human work that has to go into training the machines, like someone who would sit there literally looking at pictures of blueberry muffins and tagging them muffin, muffin, muffin, so the machine, you know, learns that's not a Chihuahua, that's a blueberry muffin. Is that the kind of thing where, in the future, you could see that kind of rote, algorithmic machine-training work being kind of a low-paid dead-end job, whereas maybe that person once would have had a more interesting job but now the machine has the more interesting job?
ERIK BRYNJOLFSSON: I don't think that's going to be a big source of employment, but it is true there are places like Amazon's Mechanical Turk where thousands of people do exactly what you said, they tag images and label them. That's how ImageNet, the database of millions of images, got labeled. And so, there are people being hired to do that. Companies sometimes find that training machines by having humans tag the data is one way to proceed.
But often they can find ways of having data that's already tagged in some way, that's generated from their enterprise resource planning system or from their call center. And if they're clever, that will lead to the creation of this tagged data. And I should back up a bit and say that one of machines' big weaknesses is that they really do need tagged data. That's the most powerful kind of algorithm, sometimes called supervised learning, where humans have tagged the data in advance and explained what it means.
And then the machine learns from those examples and eventually can extrapolate to other kinds of examples. But unlike humans, they often need thousands or even millions of examples to do a good job, whereas, you know, a two-year-old probably would learn after one or two times what a cat was versus a dog; you wouldn't have to show, you know, 10,000 pictures of a cat before they got it.
SARAH GREEN CARMICHAEL: Right. Given where we are with AI and machine learning right now, on balance, do you feel like this is something that is overhyped, with people talking about it in sort of too science-fiction terms, or is it something that's not quite hyped enough, and actually people are underestimating what it could do in the relatively near future?
ERIK BRYNJOLFSSON: Well, it's actually both at the same time, if you can believe it. I think that people have unrealistic expectations about machines having all these general capabilities, kind of from watching science fiction like the Terminator. If a machine can understand Chinese characters you might think it also could understand Chinese speech, and it could recommend a good Chinese restaurant, know a little bit about the Xing dynasty, and none of that would be true. A machine that can play expert chess can't even play checkers or Go or other games. So, in a way they're very narrow and fragile.
But on the other hand, I think the set of applications for those narrow capabilities is quite large. Using those supervised learning algorithms, I think there are a lot more specific tasks that could be done that we've only scratched the surface of, and because they've improved so much in the past five or 10 years, most of those opportunities have not really been explored or even discovered yet. There are a few places where the big giants like Google and Microsoft and Facebook have made rapid progress, but I think that there are literally tens of thousands of more narrow applications that small and medium businesses could start using machine learning for in their own areas.
SARAH GREEN CARMICHAEL: What are some examples of ways that companies are using this technology right now?
ERIK BRYNJOLFSSON: Well, one of my favorite ones I learned from my friend Sebastian Thrun. He's the founder of Udacity, the online learning company, which by the way is a good way to learn more about these technologies. But he found that when people were coming to his site and asking questions in the chat room, some of the salespeople were doing a really good job of getting them to the right course and closing the sale, and others, well, not so much. This created a set of training data.
He and his grad student realized that if they took the transcripts they would see that certain sets of words in certain dialogues led to success and sales, and others didn't. And he fed that information into a machine learning algorithm and it started identifying which patterns of phrases and answers were the most successful.
But what happened next was, I think, especially interesting: instead of just trying to build a bot that would answer all the questions, they built a bot that would advise the human salespeople. So now when people go to the site, the bot kind of looks over the shoulder of the human, and when it sees some of those key words it whispers into his or her ear: hey, you know, you might want to try this phrase, or you might want to point them to this particular course.
And that works well for the most common kinds of queries, but the more obscure ones, the ones the bot has never seen before, the human is much better at. And this kind of partnership is a great example of an effective use of AI, and also of how you can turn existing data into a tagged data set that the supervised learning system benefits from.
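The pattern Brynjolfsson describes, a bot that coaches the human rather than replacing them, can be sketched very simply. The trigger phrases and suggestions below are invented for illustration; the actual Udacity system is not public, and a production version would use a trained model rather than a hand-written keyword table.

```python
# A toy "advisor bot": phrases associated with successful past chats trigger
# suggestions that a human salesperson can use or ignore. All keywords and
# tips here are hypothetical.
SUGGESTIONS = {
    "price": "Mention the current discount and the money-back guarantee.",
    "beginner": "Point them to the intro course before the full program.",
    "job": "Share placement statistics and the career-services page.",
}

def advise(customer_message: str) -> list[str]:
    """Return coaching tips for the human agent based on trigger keywords."""
    text = customer_message.lower()
    return [tip for keyword, tip in SUGGESTIONS.items() if keyword in text]

print(advise("I'm a beginner and worried about the price"))
# -> tips for both 'beginner' and 'price'; unusual queries return [] and are
#    left entirely to the human, mirroring the division of labor described above.
```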
SARAH GREEN CARMICHAEL: So how did these people feel about being coached by a bot?
ERIK BRYNJOLFSSON: Well, it's helped them close their sales, so it's made them more productive. Sebastian says it's about 50% more successful when they're using the bot. So I think it's been beneficial in helping them learn more rapidly than they would have if they just kind of stumbled along.
Going forward, I think this is an example of how the bots are often good at the more routine, repetitive kinds of tasks; the machines can do the ones that they have lots of data for. And the humans tend to excel at the more unusual tasks. For most of us, I think that's kind of a good trade-off. Most of us would prefer having kind of more interesting and varied work lives rather than doing the same thing over and over.
SARAH GREEN CARMICHAEL: So, sales is a form of knowledge work, right, and you sort of gave an example there. One of the big challenges in that kind of work is that you can't, it's really hard to, scale up one person's productivity; if you are a law firm, for example, and you want to serve more clients, you have to hire more lawyers. It sounds like AI could be one way to finally get around that conundrum.
ERIK BRYNJOLFSSON: Yeah, AI certainly can be a big force multiplier. It's a great way of taking some of your best, you know, lawyers or doctors and having them explain how they go about doing things and give examples of successes, and the machine can learn from those and replicate it, or be combined with people who are already doing the jobs and, in a way, help coach them or handle some of the cases that are most common.
SARAH GREEN CARMICHAEL: So, is it just about being more productive, or did you see other examples of human-machine collaboration that tackled different types of business challenges?
ERIK BRYNJOLFSSON: Well, in some cases it's a matter of being more productive; in many cases, it's a matter of doing the job better than you could before. So there are systems now that can help read medical images and diagnose cancer quite well. The best ones often are still combined with humans, because the machines make different kinds of mistakes than the humans, so the machine often will create what are called false positives, where it thinks there's cancer but there's really not, and the humans are better at ruling those out. You know, maybe there was an eyelash on the image or something that was getting in the way.
And so, by having the machine first filter through all the images and say, hey, here are the ones that look really troubling, and then having a human look at those and focus more closely on the ones that are problematic, you end up getting much better outcomes than if that person had to look at all the images herself or himself and maybe overlook some potentially troubling cases.
SARAH GREEN CARMICHAEL: Why now? Because people predicted for a long time that AI was just around the corner, and it sounds like it's finally starting to happen and really make its way into businesses. Why are we seeing this finally start to happen right now?
ERIK BRYNJOLFSSON: Yes, that's a great question. It's really the combination of three forces that have come together. The first one is simply that we have much better computer power than we did before. So, Moore's Law, the doubling of computer power, is part of it. There are also specialized chips called GPUs and TPUs that are another tenfold or even a hundredfold faster than ordinary chips. As a result, training a system that might have taken a century or more if you'd done it with 1990s computers can be done in a few days today.
And so obviously that opens up a whole new set of possibilities that just wouldn't have been practical before. The second big force is the explosion of digital data. Data is the lifeblood of these systems; you need it to train them. And now we have so many more digital images, digital transcripts, digital data from factory gauges and keeping track of information, and that all can be fed into these systems to train them.
And as I said earlier, they need lots and lots of examples. And now we have digital examples in a way we didn't previously, and with the Internet of Things you can imagine there's going to be a lot more digital data going forward. And last but not least, there have been some significant improvements in the algorithms; the men and women working in these fields have improved on the basic algorithms. Some of them were first developed literally 30 years ago, but they've now been tweaked and improved, and by having faster computers and more data you can learn more rapidly what works and what doesn't work. When you put these three things together, computer power, more data, and better algorithms, you get sometimes as much as a millionfold improvement on some applications, for instance recognizing pedestrians as they cross the street, which of course is really important for applications like self-driving cars.
SARAH GREEN CARMICHAEL: If those are sort of the factors that are pushing us forward, what are some of the factors that might be inhibiting progress?
ERIK BRYNJOLFSSON: What's not holding us back is the technology; what is holding us back is the imagination of business executives to use these new tools in their businesses. You know, with every general-purpose technology, whether it's electricity or the internal combustion engine, the real power comes from thinking of new ways of organizing your factory, new ways of connecting to your customers, new business models. That's where the real value comes from. And one of the reasons we were so happy to write for Harvard Business Review was to reach out to people and help them be more creative about using these tools to change the way they do business. That's where the real value is.
SARAH GREEN CARMICHAEL: I feel like so much of the broader conversation about AI is about whether this will create jobs or destroy jobs. And I'm just wondering, is that a question that you get asked a lot, and are you sick of answering it?
ERIK BRYNJOLFSSON: Well, of course it gets asked a lot. And I'm not sick of answering it, because it's really important. I think the biggest challenge for our society over the next 10 years is going to be how we are going to handle the economic implications of these new technologies. And you introduced me in the beginning as a cautious optimist, I think you said, and I think that's about right. I think that if we handle this well, this can and should be the best thing that ever happened to humanity.
But I don't think it's automatic. I'm cautious about that. It's entirely possible for us not to invest in the kind of education and retraining of people, not to adopt the kinds of new policies that encourage business formation and new business models even. Income distribution has to be rethought, and tax policy too, things like the earned income tax credit in the United States and similar wage subsidies in other countries.
ERIK BRYNJOLFSSON: We need to make a bunch of changes across the board at the policy level. Businesses need to rethink how they work. Individuals need to take personal responsibility for learning the new skills that are going to be needed going forward. If we do all those things, I'm pretty optimistic.
But I wouldn't want people to become complacent, because already over the past 10 years a lot of people have been left behind by the digital revolution that we've had so far. And looking forward, I'd say we ain't seen nothing yet. We have incredibly powerful technologies, especially in artificial intelligence, that are opening up new possibilities. But I want us to think about how we can use technology to create shared prosperity for the many, not just the few.
SARAH GREEN CARMICHAEL: Are there tasks or jobs that machine learning, in your opinion, can't do or won't do?
ERIK BRYNJOLFSSON: Oh, there are so many. Just to be totally clear, most things machine learning can't do. It's able to do a few narrow areas really, really well, just like a calculator can do a few things really, really well. But humans are much more general, with a much broader set of skills, and the set of skills that humans can do is being encroached on.
Machines are taking over more and more tasks, and combining, teaming up with humans on more and more tasks, but in particular, machines are not very good at very broad-scale creativity, you know. Being an entrepreneur or writing a novel or developing a new scientific theory or approach, those kinds of creativity are beyond what machines can do today, by and large.
Secondly, and perhaps with an even broader impact, is interpersonal skills, connecting with humans. You know, we're wired to trust and care for and be interested in other humans in a way that we aren't with machines.
So, whether it's coaching or sales or negotiation or caring for people, persuading people, those are all areas where humans have an edge. And I think there will be an explosion of new jobs, whether it's for personal coaches or trainers or team-oriented activities. I would love to see more people learning those kinds of softer skills that machines are not good at. That's where there will be a lot of jobs in the future.
SARAH GREEN CARMICHAEL: I was surprised to see in the article though, that some of these AI programs are actually surprisingly good at recognizing human emotions. I was really startled by that.
ERIK BRYNJOLFSSON: I have to be careful. One of the main things I learned working with Andy and going to visit all these places is never say never; any particular thing that one of us said, oh, this will never happen, you know, we find out that someone is working on it in a lab.
So my advice is that there are relative strengths and relative weaknesses, and emotional intelligence, I still think, is a relative strength of humans, but there are particular narrow applications where machines are improving quite rapidly. Affectiva, a company here in Boston, has gotten very good at reading emotions. That is part of what you need to do to be a good coach or a caring person; it is not the whole picture, but it is one piece of the interpersonal skills that machines are helping with.
SARAH GREEN CARMICHAEL: What do you see as the biggest risks with AI?
ERIK BRYNJOLFSSON: I think there are a few. One of the big risks is that these machine learning algorithms can have implicit biases, and they can be very hard to detect or correct. If the training data is biased, has some kind of racial or ethnic or other biases in it, then those can be perpetuated in the system. And so, we need to be very careful about how we train the systems and what data we give them.
And it's especially important because they don't have the kind of explicit rules that earlier waves of technology had. So, it's hard to even know. It's unlikely to have a rule that says, you know, don't give loans to black people or whatever, but it may implicitly have its thumb on the scale in one way or the other if the training data were biased.
SARAH GREEN CARMICHAEL: Right. Because it might notice, for instance, that, statistically speaking, black people get turned down more for loans, that kind of thing.
ERIK BRYNJOLFSSON: Yeah, if the people who had made those decisions before were biased and you use that for the training data, that could end up creating a biased training set. And you know, maybe nobody explicitly says that they were biased, but it sort of shows up in other subtle ways, based on, you know, the zip code that someone's coming from or their last name or their first name or whatever. So those would be subtle things that you need to be careful of.
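Here is a small, self-contained illustration of the mechanism being described: a model trained on biased historical decisions reproduces the bias through a proxy feature, even though no explicit rule mentions the protected attribute. The data is synthetic and the setup is my own assumption, intended only to show the effect.

```python
# Synthetic demonstration of bias leaking in through a proxy feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
zip_group = rng.integers(0, 2, n)                 # proxy correlated with a protected group
income = rng.normal(50 + 5 * zip_group, 10, n)

# Historical decisions: group 0 applicants were approved less often at the
# same income level (the biased "training set" the interview warns about).
approved = ((income > 50) & ((zip_group == 1) | (rng.random(n) > 0.5))).astype(int)

model = LogisticRegression().fit(np.column_stack([income, zip_group]), approved)

same_income = np.array([[55.0, 0], [55.0, 1]])    # identical income, different zip group
print(model.predict_proba(same_income)[:, 1])     # approval probabilities differ
```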
The other thing is what we touched on earlier, just the whole question of what's happening with income inequality and opportunity as the machines get better at many kinds of tasks, you know, driving a truck or handling a call center. The people who had been doing those jobs need to find new things to do. And often those new jobs won't be paying as well if we aren't careful. So that could be a real income hit. Already we see growing income inequality.
We have to be aggressive about thinking how we can create broadly shared prosperity. One of the things we did at MIT is we launched something called the Inclusive Innovation Challenge, which recognizes and rewards organizations that are using technology to create shared prosperity; they're innovating in ways that do that. I'd love to see more and more entrepreneurs think in that way, not just about how they can create concentrated wealth, but how they can create broadly shared prosperity.
SARAH GREEN CARMICHAEL: Elon Musk has been out there saying artificial intelligence could be an existential threat to human beings. Other people have talked about fears that the machines could take over and turn against us. How do you feel about those kinds of concerns?
ERIK BRYNJOLFSSON: Well, like I said earlier, you can never say never, and, you know, as machines keep getting more and more powerful I can imagine them having enormous powers, especially as we delegate more of the operations of our critical infrastructure, our electricity and our water system and our air traffic control and even our military operations, to them. But the reason I didn't list it is I don't see it as the most immediate risk right now. The technologies that are being rolled out right now have effects on bias and decision making, and effects on jobs and income. But by and large they don't have those kinds of existential risks.
I think it's important that we have researchers working in those areas and thinking about them, but I wouldn't want to panic Congress or the people right now into doing something that would probably be counterproductive if we overreacted.
I think it's an area for research, but in terms of devoting billions of dollars of effort, I would put that towards education and retraining and handling bias, the things that are facing us right now and will be facing us for the next five to 10 years.
SARAH GREEN CARMICHAEL: What do you feel is the appropriate role of regulation as AI develops?
ERIK BRYNJOLFSSON: I think we need to be watchful, because there's the potential for AI to lead to more concentration of power and more concentration of wealth. The best antidote to that is competition.
And what we've seen in the tech industries, for most of the past 10, 20, 30 years, is that as one monopolist, whether it's IBM or Microsoft, gets a lot of power, another company comes along and knocks it off its perch. I remember teaching a class about 15 years ago where a speaker said, you know, Yahoo has search locked up, no one's ever going to displace Yahoo. So, you know, we need to be humble and realize that the giants of today face threats and could be overturned.
That said, if there becomes a sort of a stagnant loss of innovation and these companies have a stranglehold on markets and maybe have other adverse effects in areas like privacy, then it would be right for government to step in. My instinct right now would be sort of watchful waiting, keeping an eye on these companies and doing what we could to foster innovation and competition as the best way to protect consumers.
SARAH GREEN CARMICHAEL: So, if all of this still sounds quite futuristic to the average manager, if they're kind of like: OK, you know, this is sort of way outside of what I'm working on in my role, what are the sort of things that you'd advise people to keep in mind or think about?
ERIK BRYNJOLFSSON: Well, it starts with realizing this is not futuristic and way out there. There are lots of small and medium-sized companies that are learning how to apply this right now, whether it's, you know, sorting cucumbers to be more effective, somebody wrote an application that did that, to helping with recommendations online. There's a company I'm advising called Infinite Analytics that is giving customers better recommendations about what products they should be choosing, to helping with, you know, credit decisions.
There are so many areas where you can apply these technologies right now you can take courses or you can have people in your organization take courses or you can hire people at places like Udacity or fast.ai, my friend Jeremy Howard runs a great course in that area, and put it to work right away and start with something small and simple.
But definitely don't think of this as futuristic. Don't be put off by the science fiction movies, whether, you know, the Terminator or other AI shows. That's not what's going on. It's a bunch of very specific practical applications that are completely feasible in 2017.
SARAH GREEN CARMICHAEL: Erik, thanks so much for talking with us today about all of this.
ERIK BRYNJOLFSSON: It's been a real pleasure.
SARAH GREEN CARMICHAEL: That's Erik Brynjolfsson. He's the director of the MIT Initiative on the Digital Economy. And he's the co-author with Andrew McAfee of the new HBR article, The Business of Artificial Intelligence.
You can read their HBR article, and also read about how Facebook uses AI and machine learning in almost everything you see, and you can watch a video (shot in my own kitchen!) about how IBM's Watson uses AI to create new recipes. That's all at hbr.org/AI.
Thanks for listening to the HBR IdeaCast. I'm Sarah Green Carmichael.
Here is the original post:
How AI Is Already Changing Business - Harvard Business Review
Posted in Ai
Graphcore’s AI chips now backed by Atomico, DeepMind’s Hassabis – TechCrunch
Posted: at 12:16 pm
Is AI chipmaker Graphcore out to eat Nvidia's lunch? Co-founder and CEO Nigel Toon laughs at that interview opener, perhaps because he sold his previous company to the chipmaker back in 2011.
"I'm sure Nvidia will be successful as well," he ventures. "They're already being very successful in this market. And being a viable competitor and standing alongside them, I think that would be a worthy aim for ourselves."
Toon also flags what he couches as an interesting absence in the competitive landscape vis-a-vis other major players that you'd expect to be there, e.g. Intel. (Though clearly Intel is spending to plug the gap.)
A recent report by the analyst firm Gartner suggests AI technologies will be in almost every software product by 2020. The race for more powerful hardware engines to underpin the machine-learning software tsunami is, very clearly, on.
"We started on this journey rather earlier than many other companies," says Toon. "We're probably two years ahead, so we've definitely got an opportunity to be one of the first people out with a solution that is really designed for this application. And because we're ahead we've been able to get the excitement and interest from some of these key innovators who are giving us the right feedback."
Bristol, UK-based Graphcore has just closed a $30 million Series B round, led by Atomico, fast-following a $32M Series A in October 2016. It's building dedicated processing hardware plus a software framework for machine learning developers to accelerate building their own AI applications, with the stated aim of becoming the leader in the market for machine intelligence processors.
In a supporting statement, Atomico Partner Siraj Khaliq, who is joining the Graphcore board, talks up its potential as being to accelerate the pace of innovation itself. "Graphcore's first IPU delivers one to two orders of magnitude more performance over the latest industry offerings, making it possible to develop new models with far less time waiting around for algorithms to finish running," he adds.
Toon says the company saw a lot of investor interest after uncloaking at the time of its Series A last October, hence it decided to do an earlier-than-planned Series B. "That would allow us to scale the company more quickly, support more customers, and just grow more quickly," he tells TechCrunch. "And it still gives us the option to raise more money next year to then really accelerate that ramp after we've got our product out."
The new funding brings on board some new high-profile angel investors, including DeepMind co-founder Demis Hassabis and Uber chief scientist Zoubin Ghahramani. So you can hazard a pretty educated guess as to which tech giants Graphcore might be working closely with during the development phase of its AI processing system (albeit Toon is quick to emphasize that angels such as Hassabis are investing in a personal capacity).
"We can't really make any statements about what Google might be doing," he adds. "We haven't announced any customers yet but we're obviously working with a number of leading players here and we've got the support from these individuals, from which you can infer there's quite a lot of interest in what we're doing."
Other angels joining the Series B include OpenAI's Greg Brockman, Ilya Sutskever, Pieter Abbeel and Scott Gray, while existing Graphcore investors Amadeus Capital Partners, Robert Bosch Venture Capital, C4 Ventures, Dell Technologies Capital, Draper Esprit, Foundation Capital, Pitango and Samsung Catalyst Fund also participated in the round.
Commenting in a statement, Uber's Ghahramani argues that current processing hardware is holding back the development of alternative machine learning approaches that he suggests could contribute to radical leaps forward in machine intelligence.
"Deep neural networks have allowed us to make massive progress over the last few years, but there are also many other machine learning approaches," he says. "A new type of hardware that can support and combine alternative techniques, together with deep neural networks, will have a massive impact."
Graphcore has raised around $60M to date, with Toon saying its now 60-strong team has been working in earnest on the business for a full three years, though the company's origins stretch back as far as 2013.
Co-founders Nigel Toon (CEO, left) and Simon Knowles (CTO, right)
In 2011 the co-founders sold their previous company, Icera, which did baseband processing for 2G, 3G and 4G cellular technology for mobile comms, to Nvidia. "After selling that company we started thinking about this problem and this opportunity. We started talking to some of the leading innovators in the space and started to put a team together around about 2013," he explains.
Graphcore is building what it calls an IPU, aka an intelligence processing unit, offering dedicated processing hardware designed for machine learning tasks, vs the serendipity of repurposed GPUs, which have been helping to drive the AI boom thus far. Or indeed the vast clusters of CPUs needed (but not well suited) for such intensive processing.
It's also building graph-framework software for interfacing with the hardware, called Poplar, designed to mesh with different machine learning frameworks to enable developers to easily tap into a system that it claims will increase the performance of both machine learning training and inference by 10x to 100x vs the fastest systems today.
Toon says it's hoping to get the IPU in the hands of early access customers by the end of the year. "That will be in a system form," he adds.
"Although at the heart of what we're doing is we're building a processor, we're building our own chip, leading edge process, 16 nanometer, we're actually going to deliver that as a system solution, so we'll deliver PCI express cards and we'll actually put that into a chassis so that you can put clusters of these IPUs all working together to make it easy for people to use."
"Through next year we'll be rolling out to a broader number of customers. And hoping to get our technology into some of the larger cloud environments as well so it's available to a broad number of developers."
Discussing the difference between the design of its IPU vs GPUs that are also being used to power machine learning, he sums it up thus: "GPUs are kind of rigid, locked together, everything doing the same thing all at the same time, whereas we have thousands of processors all doing separate things, all working together across the machine learning task."
"The challenge that [processing via IPUs] throws up is to actually get those processors to work together, to be able to share the information that they need to share between them, to schedule the exchange of information between the processors and also to create a software environment that's easy for people to program. That's really where the complexity lies and that's really what we have set out to solve."
"I think we've got some fairly elegant solutions to those problems," he adds. "And that's really what's causing the interest around what we're doing."
Graphcore's team is aiming for a completely seamless interface between its hardware via its graph-framework and widely used high-level machine learning frameworks, including TensorFlow, Caffe2, MxNet and PyTorch.
"You use the same environments, you write exactly the same model, and you feed it through what we call Poplar [a C++ framework]," he notes. "In most cases that will be completely seamless."
Although he confirms that developers working more outside the current AI mainstream, say by trying to create new neural network structures, or working with other machine learning techniques such as decision trees or Markov fields, may need to make some manual modifications to make use of its IPUs.
"In those cases there might be some primitives or some library elements that they need to modify," he notes. "The libraries we provide are all open so they can just modify something, change it for their own purposes."
The apparently insatiable demand for machine learning within the tech industry is being driven at least in part by a major shift in the type of data that needs to be understood, from text to pictures and video, says Toon. Which means there are increasing numbers of companies that really need machine learning. "It's the only way they can get their head around and understand what this sort of unstructured data is that's sitting on their website," he argues.
Beyond that, he points to various emerging technologies and complex scientific challenges it's hoped could also benefit from accelerated development of AI, from autonomous cars to drug discovery with better medical outcomes.
"A lot of cancer drugs are very invasive and have terrible side effects, so there's all kinds of areas where this technology can have a real impact," he suggests. People look at this and think it's going to take 20 years [for AI-powered technologies to work] but if you've got the right hardware available [development could be sped up].
"Look at how quickly Google Translate has got better using machine learning and that same acceleration I think can apply to some of these very interesting and important areas as well."
In a supporting statement, DeepMind's Hassabis goes so far as to suggest that dedicated AI processing hardware might also offer a leg up to the sci-fi holy grail goal of developing artificial general intelligence (vs the more narrow AIs that comprise the current cutting edge).
"Building systems capable of general artificial intelligence means developing algorithms that can learn from raw data and generalize this learning across a wide range of tasks. This requires a lot of processing power, and the innovative architecture underpinning Graphcore's processors holds a huge amount of promise," he says.
Read more from the original source:
Graphcore's AI chips now backed by Atomico, DeepMind's Hassabis - TechCrunch
Posted in Ai
Google’s Newest AI is Turning Street View Images into Landscape Art – Futurism
Posted: at 12:16 pm
In Brief: Google engineers have created an artificial intelligence (AI) that is capable of turning Google Street View images into professional-quality artistic portraits. The AI chooses and crops the image, alters both light and coloration, and then applies an appropriate filter.
Google Art
Most of us are probably familiar with Google Street View, a feature of Google Maps that allows users to see actual images of the areas they're looking up. It's both a useful navigational feature and one that allows people to explore far-off regions just for fun. Engineers at Google are taking these images from Street View one step further with the help of artificial intelligence (AI).
Hui Feng is one of several software engineers who are using machine learning techniques to teach a neural network how to scan Street View in search of exceptionally beautiful images. This AI then, on its own, mimics the workflow of a professional photographer.
This AI system will act as an artist and photo editor, recognizing beauty and specific aspects that make for a good photograph. Despite being a subjective matter, the AI proved to be successful, creating professional-quality imagery from Street View images that the system itself located.
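As a rough idea of what that photographer-style workflow involves, here is a toy Python sketch using Pillow: crop a composition from a panorama, adjust light and color, and apply a finishing filter. This is not Google's method; the file names, crop, and enhancement values are arbitrary assumptions for illustration, and the real system learns these choices rather than hard-coding them.

```python
# A toy crop/enhance/filter pipeline, loosely mirroring the steps described
# above. All parameters are arbitrary illustrative values.
from PIL import Image, ImageEnhance, ImageFilter

pano = Image.open("street_view_panorama.jpg")      # hypothetical input

# 1. Choose and crop a composition (here: a fixed central crop).
w, h = pano.size
frame = pano.crop((w // 4, h // 4, 3 * w // 4, 3 * h // 4))

# 2. Alter light and coloration.
frame = ImageEnhance.Brightness(frame).enhance(1.10)
frame = ImageEnhance.Color(frame).enhance(1.20)
frame = ImageEnhance.Contrast(frame).enhance(1.05)

# 3. Apply a finishing filter and save the result.
frame = frame.filter(ImageFilter.SMOOTH_MORE)
frame.save("landscape_art.jpg")
```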
Google's many different AI programs have been exploring a wide variety of potential applications for the technology. From recent dabbling in online Go playing to improving job hunting and even creating its own AI better than Google engineers can, Google's AI work has been at the forefront of its field.
But AI technologies are progressing faster and further than many have expected, so much so that some AIs, like the one mentioned here, are capable of creating art. So, while robots will never make humans completely obsolete in artistic endeavors, this step forward marks a new era of technology.
Read the original:
Google's Newest AI is Turning Street View Images into Landscape Art - Futurism
Posted in Ai
Have We Reached Peak AI Hysteria? – Niskanen Center (press release) (blog)
Posted: at 12:16 pm
July 21, 2017 by Ryan Hagemann
At the recent annual meeting of the National Governors Association, Elon Musk spoke with his usual cavalier optimism on the future of technology and innovation. From solar power to our place among the stars, humanity's future looks pretty bright, according to Musk. But he was particularly dour on one emerging technology that supposedly poses an existential threat to humankind: artificial intelligence.
Musk called for strict, preemptive regulations on developments in AI, referencing numerous hypothetical doomsaying scenarios that might emerge if we go too far too fast. It's not the first time Musk has said that AI could portend a Terminator-style future, but it does seem to be the first time he's called for such stringent controls on the technology. And he's not alone.
In the preface to his book Superintelligence, Nick Bostrom contends that developing AI is quite possibly the most important and most daunting challenge humanity has ever faced. And, whether we succeed or fail, it is probably the last challenge we will ever face. Even Stephen Hawking has jumped on the panic wagon.
These concerns aren't uniquely held by innovators, scientists, and academics. A Morning Consult poll found that a significant majority of Americans supported both domestic and international regulations on AI.
All of this suggests that we are in the midst of a full-blown AI techno-panic. Fear of mass unemployment from automation and public safety concerns over autonomous vehicles have only exacerbated the growing tensions between man and machine.
Luckily, if history is any guide, the height of this hysteria means we're probably on the cusp of a period of deflating dread. New emerging technologies often stoke frenzied fears over worst-case scenarios, at least at the beginning. These concerns eventually rise to the point of peak alarm, followed by a gradual hollowing out of panic. Eventually, the technologies that were once seen as harbingers of the end times become mundane, common, and indispensable parts of our daily lives. Look no further than the early days of the automobile, RFID chips, and the Internet; so too will it be with AI.
Of course detractors will argue that we should hedge against worst-possible outcomes, especially if the costs are potentially civilization-ending. After all, if there's something the government could do to minimize the costs while maximizing the benefits of AI, then policymakers should be all over that. So what's the solution?
Gov. Doug Ducey (R-AZ) asked that very question: "You've given some of these examples of how AI can be an existential threat, but I still don't understand, as policymakers, what type of regulations, beyond slow down, which typically policymakers don't get in front of entrepreneurs or innovators, should be enacted." Musk's response? First, government needs to gain insight by standing up an agency to make sure the situation is understood. Then put in place regulations to protect public safety. That's it. Well, not quite.
The government has, in fact, already taken a stab at whether or not such an approach would be an ideal treatment of this technology. Last year, the Obama administration's Office of Science and Technology Policy released a report on the future of AI, derived from hundreds of comments from industry, civil society, technical experts, academics, and researchers.
While the report recognized the need for government to be privy to ongoing developments, its recommendations were largely benign, and it certainly didn't call for preemptive bans and regulatory approvals for AI. In fact, it concluded that it was very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years.
In short, put off those end-of-the-world parties, because AI isn't going to snuff out civilization any time soon. Instead, embracing preemptive regulations could just smother domestic innovation in this field.
Despite Musk's claims, firms will actually outsource research and development elsewhere. Global innovation arbitrage is a very real phenomenon in an age of abundant interconnectivity and capital that can move like quicksilver across national boundaries. AI research is even less constrained by those artificial barriers than most technologies, especially in an era of cloud computing and diminishing costs to computer processing speeds, to say nothing of the rise of quantum computing.
Musk's solution to AI is uncharacteristically underwhelming. New federal agencies that impose precautionary regulations on AI aren't going to chart a better course to the future, any more than preemptive regulations for Google would have paved the way to our current age of information abundance.
Musk of all people should know the future is always rife with uncertainty; after all, he helps construct it with each new revolutionary undertaking. Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome. Would the world have been a better place if the public good demanded even more stringent regulations for commercial space launch or autopilot features? That's unlikely, and, notwithstanding Musk's apprehensions, the same is probably true for AI.
Excerpt from:
Have We Reached Peak AI Hysteria? - Niskanen Center (press release) (blog)
Posted in Ai
China announces goal of leadership in artificial intelligence by 2030 – CBS News
Posted: at 12:16 pm
Photo: A computer mouse illuminated by a projection of a Chinese flag, in a photo illustration from October 1, 2013. (Reuters/Tim Wimborne)
BEIJING -- China's government has announced a goal of becoming a global leader in artificial intelligence in just over a decade, putting political muscle behind growing investment by Chinese companies in developing self-driving cars and other advances.
Communist leaders see AI as key to making China an "economic power," a Cabinet statement said Thursday. It calls for developing skills, research and educational resources to achieve "major breakthroughs" by 2025 and make China a world leader by 2030.
Artificial intelligence is one of the emerging fields along with renewable energy, robotics and electric cars where communist leaders hope to take an early lead and help transform China from a nation of factory workers and farmers into a technology pioneer.
They have issued a series of development plans over the past decade, some of which have prompted complaints Beijing improperly subsidizes its technology developers and shields them from competition in violation of its free-trade commitments.
Already, Chinese companies including Tencent Ltd., Baidu Inc. and Alibaba Group are spending heavily to develop artificial intelligence for consumer finance, e-commerce, self-driving cars and other applications.
Manufacturers also are installing robots and other automation to cope with rising labor costs and improve efficiency.
Thursday's statement gives no details of financial commitments or legal changes. But previous initiatives to develop Chinese capabilities in solar power and other technologies have included research grants and regulations to encourage sales and exports.
"By 2030, our country will reach a world leading level in artificial intelligence theory, technology and application and become a principal world center for artificial intelligence innovation," the statement said.
That will help to make China "in the forefront of innovative countries and an economic power," it said.
The announcement follows a sweeping plan issued in 2015, dubbed "Made in China 2025," that calls for this country to supply its own high-tech components and materials in 10 industries from information technology and aerospace to pharmaceuticals.
That prompted complaints Beijing might block access to promising industries to support its fledgling suppliers. The Chinese industry minister defended the plan in March, saying all competitors would be treated equally. He rejected complaints that foreign companies might be required to hand over technology in exchange for market access.
China has had mixed success with previous strategic plans to develop technology industries including renewable energy and electric cars.
Beijing announced plans in 2009 to become a leader in electric cars with annual sales of 5 million by 2020. With the help of generous subsidies, China passed the United States last year as the biggest market, but sales totaled just over 300,000.
2017 The Associated Press. All Rights Reserved. This material may not be published, broadcast, rewritten, or redistributed.
Continued here:
China announces goal of leadership in artificial intelligence by 2030 - CBS News
Posted in Artificial Intelligence
Comments Off on China announces goal of leadership in artificial intelligence by 2030 – CBS News
Despite Musk’s dark warning, artificial intelligence is more benefit than threat – STLtoday.com
Posted: at 12:16 pm
We expect scary predictions about the technological future from philosophers and science fiction writers, not famous technologists.
Elon Musk, though, turns out to have an imagination just as dark as that of Arthur C. Clarke and Stanley Kubrick, who created the sentient and ultimately homicidal computer HAL 9000 in 2001: A Space Odyssey.
Musk, the founder of Tesla, SpaceX, HyperLoop, Solar City and other companies, spoke to the National Governors Association last week on a variety of technology topics. When he got to artificial intelligence, the field of programming computers to replace humans in tasks such as decision making and speech recognition, his words turned apocalyptic.
He called artificial intelligence, or AI, "a fundamental risk to the existence of human civilization." For example, Musk said, an unprincipled user of AI could start a war by spoofing email accounts and creating fake news to whip up tension.
Then Musk did something unusual for a businessman who has described himself as somewhat libertarian: He urged the governors to be proactive in regulating AI. If we wait for the technology to develop and then try to rein it in, he said, we might be too late.
Are scientists that close to creating an uncontrollable, HAL-like intelligence? Sanmay Das, associate professor of computer science and engineering at Washington University, doesn't think so.
"This idea of AI being some kind of super-intelligence, becoming smarter than humans, I don't think anybody would subscribe to that happening in the next 100 years," Das said.
Society does have to face some regulatory questions about AI, he added, but they're not the sort of civilization-ending threat Musk was talking about.
The pressing issues are more like the one ProPublica raised last year in its "Machine Bias" investigation. States are using algorithms to tell them which convicts are likely to become repeat offenders, and the software may be biased against African-Americans.
Algorithms that make credit decisions or calculate insurance risks raise similar issues. In a process called machine learning, computers figure out which pieces of information have the most predictive value. What if these calculations have a discriminatory result, or perpetuate inequalities that already exist in society?
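To make that concern concrete, here is a minimal sketch, not drawn from the article or from ProPublica's methodology, of one way an auditor might check a scoring model's decisions for disparate impact across groups. The record fields, the example data, and the disparate-impact ratio used here are illustrative assumptions.

```python
# Minimal disparate-impact check on a scoring model's decisions.
# Hypothetical data: each record has a model decision (1 = approved)
# and a group label; field names and values are invented.

from collections import defaultdict

def approval_rates(records):
    """Return the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for rec in records:
        totals[rec["group"]] += 1
        approved[rec["group"]] += rec["approved"]
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(records, reference_group):
    """Ratio of each group's approval rate to the reference group's.
    Values well below 1.0 suggest the model may disadvantage that group."""
    rates = approval_rates(records)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
        {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
        {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
    ]
    print(disparate_impact(sample, reference_group="A"))
    # {'A': 1.0, 'B': 0.5} -- group B is approved at half the rate of A
```

A ratio well below 1.0 for some group would be the cue to examine which inputs the model has learned to lean on, which is exactly the kind of transparency question regulators are being asked to consider.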
Self-driving cars raise some questions, too. How will traffic laws and insurance companies deal with the inevitable collisions between human- and machine-steered vehicles?
Regulators are better equipped to deal with these problems than with a mandate to prevent the end of civilization. If we write sweeping laws to police AI, we risk sacrificing the benefits of the technology, including safer roads and cheaper car insurance.
"What's going to be important is to have a societal discussion about what we want and what our definitions of fairness are, and to ensure there is some kind of transparency in the way these systems get used," Das says.
Every technology, from the automobile to the internet, has both benefits and costs, and we don't always know the costs at the outset. At this stage in the development of artificial intelligence, regulations targeting super-intelligent computers would be almost impossible to write.
"I don't frankly see how you put the toothpaste back in the tube at this point," said James Fisher, a professor of marketing at St. Louis University. "You need to have a better sense of what you are regulating against or for."
A good starting point is to recognize that HAL is still science fiction. Instead of worrying about the distant future, Das says, "We should be asking about what's on the horizon and what we can do about it."
Read the original here:
Despite Musk's dark warning, artificial intelligence is more benefit than threat - STLtoday.com
Posted in Artificial Intelligence
Comments Off on Despite Musk’s dark warning, artificial intelligence is more benefit than threat – STLtoday.com
These Non-Tech Firms Are Making Big Bets On Artificial Intelligence … – Investor’s Business Daily
Posted: at 12:16 pm
While much has been written about information technology companies investing in artificial intelligence, Loup Ventures managing partner Doug Clinton notes that many non-tech companies are capitalizing on AI technology as well.
Clinton has put together a portfolio of 17 publicly traded non-tech companies that are making investments in AI to improve their businesses. In a recent blog post, Clinton notes that he assembled the portfolio as a "fun exercise" and a way to draw attention to the sweeping nature of AI advancements. Loup Ventures is an early-stage venture capital firm.
Clinton selected the companies from a range of industries including health care, retail, logistics, professional services, finance, transportation, energy, construction and food/agriculture.
"In 10 years, every company will have to be an artificial intelligence company or they won't be competitive," Clinton said.
Among the companies included is IBD 50 stock Idexx Laboratories (IDXX). Idexx makes products for the animal health-care sector. On its last earnings call, the company said that its latest diagnostic products are using machine learning so the instruments always have the ability to learn and train on new data. One such product that leverages AI is its SediVue Dx analyzer, Clinton said.
The other companies on the Loup Ventures list are: Accenture (ACN), Avis Budget Group (CAR), Boeing (BA), Caterpillar (CAT), Deere (DE), Domino's Pizza (DPZ), FedEx (FDX) and GlaxoSmithKline (GSK).
There's also Halliburton (HAL), Interpublic Group (IPG), Macy's (M), Monsanto (MON), Nasdaq (NDAQ), Northern Trust (NTRS), Pioneer Natural Resources (PXD) and Under Armour (UA).
IBD'S TAKE: Cloud-computing leaders Amazon.com, Microsoft and Google, along with internet giants, have the inside track in monetizing artificial intelligence technology, Mizuho Securities said in a report earlier this month.
Among those venturing into the space, Clinton says:
Continued here:
These Non-Tech Firms Are Making Big Bets On Artificial Intelligence ... - Investor's Business Daily
Posted in Artificial Intelligence
Comments Off on These Non-Tech Firms Are Making Big Bets On Artificial Intelligence … – Investor’s Business Daily
Artificial intelligence boosts wine’s bottom line – Phys.Org
Posted: at 12:16 pm
July 21, 2017 by Caleb Radford
The Australian wine industry is turning to artificial intelligence to streamline its manufacturing.
South Australian tech firm Ailytic has developed an artificial intelligence (AI) program to significantly increase production efficiency by optimising machine use.
It uses an AI technique called 'prescriptive analytics' to account for all the variables that go into mass-producing wine, such as temperature, wine changeover and inventory.
The program then creates the best possible operation schedule, allowing companies to save considerable time and money.
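The article does not describe Ailytic's algorithm, but a toy sketch of the general idea behind such a scheduler, choosing an order for bottling runs that minimises total changeover time, might look like the following. The wine styles and changeover costs are invented for illustration.

```python
# Toy "prescriptive analytics" pass: order bottling runs to minimise
# total changeover time. Styles and costs are invented; a real
# scheduler would also handle temperature, inventory and due dates.

from itertools import permutations

# Hypothetical changeover cost (hours) from the style in the row
# to the style in the column.
CHANGEOVER = {
    "dry white":   {"dry white": 0.0, "sweet white": 0.5, "dry red": 2.0, "sweet red": 2.5},
    "sweet white": {"dry white": 1.0, "sweet white": 0.0, "dry red": 2.0, "sweet red": 2.0},
    "dry red":     {"dry white": 3.0, "sweet white": 3.0, "dry red": 0.0, "sweet red": 0.5},
    "sweet red":   {"dry white": 3.5, "sweet white": 3.0, "dry red": 1.0, "sweet red": 0.0},
}

def total_changeover(order):
    """Sum the changeover cost between consecutive runs."""
    return sum(CHANGEOVER[a][b] for a, b in zip(order, order[1:]))

def best_schedule(runs):
    """Brute-force the cheapest ordering; fine for a handful of runs."""
    return min(permutations(runs), key=total_changeover)

if __name__ == "__main__":
    runs = ["dry red", "dry white", "sweet red", "sweet white"]
    order = best_schedule(runs)
    print(order, total_changeover(order))
```

A production system would use a proper optimisation solver rather than brute force, but the objective, sequencing runs so that expensive changeovers are avoided, is the same.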
Ailytic's list of clients includes world-renowned wine companies such as Pernod Ricard, Accolade Wines and Treasury Wine Estates.
It has now added South Australian company Angove Family Winemakers as well.
Pernod Ricard Global Business Solutions Manager Pauline Paterson said AI was highly beneficial for the wine industry and helped to increase the bottom line.
"We use it mainly around production line and use it to derive the most efficient way to produce our product," she said.
"It is definitely helpful with changeover, how many bottles we need, how much wine and what order to do everything in."
Ailytic's system is able to obtain essential information from wineries using remote sensors, which are placed around machines and vineyards.
These sensors track a number of key procedures including the changeover from red to white bottling.
This includes the sub-classification of each colour such as sweet red, dry red, aromatic white and fortified wines.
Ailytic's program ensures that wine is changed quickly, without contamination, bottled using appropriate glassware, labelled and then packaged appropriately.
The sensors then transmit the data to a computer in real time using Wi-Fi.
A single pass can take anywhere from three to six hours, but Ailytic's system reduces this by up to 30 per cent.
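As a rough illustration of how such sensor data might be used, the sketch below turns timestamped line events into per-activity durations so planned and actual changeover times can be compared. The event format is an assumption for illustration, not Ailytic's actual schema.

```python
# Turn timestamped line-sensor events into per-activity durations.
# The event format below is assumed; real sensors would report over
# Wi-Fi in whatever schema the vendor defines.

from datetime import datetime

events = [  # (timestamp, event) pairs, assumed to arrive in order
    ("2017-07-21T08:00:00", "bottling dry white"),
    ("2017-07-21T11:10:00", "changeover"),
    ("2017-07-21T11:40:00", "bottling dry red"),
    ("2017-07-21T16:05:00", "stop"),
]

def activity_durations(events):
    """Yield (activity, hours) for each interval between events."""
    parsed = [(datetime.fromisoformat(ts), label) for ts, label in events]
    for (t0, label), (t1, _) in zip(parsed, parsed[1:]):
        yield label, (t1 - t0).total_seconds() / 3600

if __name__ == "__main__":
    for activity, hours in activity_durations(events):
        print(f"{activity}: {hours:.2f} h")
```

Comparing these measured durations against the optimiser's planned schedule is one way a system like this could demonstrate the claimed 30 per cent reduction.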
Pernod Ricard is the world's second-largest wine and spirits company, with a network of wineries across six countries and €8.68 billion in sales in 2015.
Its brands include Jacobs Creek, Campo Viejo, Brancott Estate, Kenwood Vineyards and Wyndham Estate.
Ailytic co-founder and CEO James Balzary said the company's AI program was perfect for the wine industry because it thrived in complex environments.
"Our algorithms work well for things like packaging, bottling, general manufacturing and sink manufacturing. The wine industry is where we are seeing a lot of appetite and the most uptake," he said.
"People think of wine as a romantic artisan type of process, and it is, when you are producing small batch, but the majority of wines we drink are mass manufactured in big operations. That's where we come in. The more complex the business, the bigger the benefit."
Ailytic's involvement in wine manufacturing has seen it nominated at the 2017 Wine Industry IMPACT Awards in Adelaide.
Ailytic's other clients are also based out of South Australia and include Australia's lone sink manufacturer Tasman Sinkware.
However, it does plan to expand its clientele and has already garnered international interest in its product.
"Even though the bigger wineries would find this more useful, even smaller operations will benefit from this," Balzary said.
"It's an affordable solution that used to only be accessible to bigger companies but we try to focus on bringing advanced capabilities to T2 and T3 manufacturers."
Continue reading here:
Artificial intelligence boosts wine's bottom line - Phys.Org
Posted in Artificial Intelligence
Comments Off on Artificial intelligence boosts wine’s bottom line – Phys.Org
Artificial intelligence suggests recipes based on food photos – MIT News
Posted: at 12:16 pm
There are few things social media users love more than flooding their feeds with photos of food. Yet we seldom use these images for much more than a quick scroll on our cellphones.
Researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos like these could help us learn recipes and better understand people's eating habits. In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an artificial intelligence system called Pic2Recipe to look at a photo of food, predict the ingredients, and suggest similar recipes.
"In computer vision, food is mostly neglected because we don't have the large-scale datasets needed to make predictions," says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. "But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences."
The paper will be presented later this month at the Computer Vision and Pattern Recognition conference in Honolulu. CSAIL graduate student Nick Hynes was lead author alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain. Co-authors include CSAIL postdoc Javier Marin, as well as scientist Ferda Ofli and research director Ingmar Weber of QCRI.
How it works
The web has spurred a huge growth of research in the area of classifying food data, but the majority of it has used much smaller datasets, which often leads to major gaps in labeling foods.
In 2014 Swiss researchers created the Food-101 dataset and used it to develop an algorithm that could recognize images of food with 50 percent accuracy. Future iterations only improved accuracy to about 80 percent, suggesting that the size of the dataset may be a limiting factor.
Even the larger datasets have often been somewhat limited in how well they generalize across populations. A database from the City University in Hong Kong has over 110,000 images and 65,000 recipes, each with ingredient lists and instructions, but only contains Chinese cuisine.
The CSAIL team's project aims to build on this work and dramatically expand its scope. Researchers combed websites like All Recipes and Food.com to develop Recipe1M, a database of over 1 million recipes annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns and make connections between the food images and the corresponding ingredients and recipes.
Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs, and butter, and then suggest several recipes that it determined to be similar to images from the database. (The team has an online demo where people can upload their own food photos to test it out.)
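The team's actual model is described in the paper; the snippet below is only a schematic of the general retrieval step it describes, embed the photo and return the most similar recipes, with a stand-in embedding function in place of a trained network. The function names and vector size are assumptions for illustration.

```python
# Schematic image-to-recipe retrieval: embed a photo, then return the
# recipes whose (precomputed) embeddings are most similar.
# embed_image() is a stand-in for a trained image encoder.

import numpy as np

def embed_image(photo) -> np.ndarray:
    """Stand-in encoder: deterministic pseudo-random feature vector."""
    rng = np.random.default_rng(abs(hash(photo)) % (2**32))
    return rng.standard_normal(128)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def suggest_recipes(photo, recipe_embeddings, top_k=3):
    """recipe_embeddings: dict of recipe name -> embedding vector."""
    q = embed_image(photo)
    ranked = sorted(recipe_embeddings,
                    key=lambda r: cosine(q, recipe_embeddings[r]),
                    reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    recipes = {name: embed_image(name) for name in ["lasagna", "muffins", "sushi roll"]}
    print(suggest_recipes("photo_of_muffins.jpg", recipes))
```

The real system learns to connect images with their recipes from the Recipe1M data, so the embeddings encode ingredients and instructions rather than random features; the retrieval step, however, works in essentially this way.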
"You can imagine people using this to track their daily nutrition, or to photograph their meal at a restaurant and know what's needed to cook it at home later," says Christoph Trattner, an assistant professor in the New Media Technology Department at MODUL University Vienna who was not involved in the paper. "The team's approach works at a similar level to human judgement, which is remarkable."
The system did particularly well with desserts like cookies or muffins, since that was a main theme in the database. However, it had difficulty determining ingredients for more ambiguous foods, like sushi rolls and smoothies.
It was also often stumped when there were similar recipes for the same dishes. For example, there are dozens of ways to make lasagna, so the team needed to make sure the system wouldn't penalize recipes that are similar when trying to separate those that are different. (One way to solve this was by seeing if the ingredients in each are generally similar before comparing the recipes themselves, as sketched below.)
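One simple way to measure that kind of similarity, though not necessarily the measure the paper uses, is the Jaccard overlap of two recipes' ingredient sets; the recipes and ingredients below are invented examples.

```python
# Simple near-duplicate check for recipes: Jaccard overlap of their
# ingredient sets. Illustrative only; not necessarily the paper's measure.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a or b else 1.0

lasagna_a = {"pasta", "tomato", "beef", "ricotta", "mozzarella"}
lasagna_b = {"pasta", "tomato", "beef", "bechamel", "mozzarella"}
smoothie  = {"banana", "yogurt", "berries"}

print(jaccard(lasagna_a, lasagna_b))  # high overlap -> treat as variants of one dish
print(jaccard(lasagna_a, smoothie))   # no overlap -> clearly different dishes
```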
In the future, the team hopes to be able to improve the system so that it can understand food in even more detail. This could mean being able to infer how a food is prepared (i.e. stewed versus diced) or distinguish different variations of foods, like mushrooms or onions.
The researchers are also interested in potentially developing the system into a dinner aide that could figure out what to cook given a dietary preference and a list of items in the fridge.
"This could potentially help people figure out what's in their food when they don't have explicit nutritional information," says Hynes. "For example, if you know what ingredients went into a dish but not the amount, you can take a photo, enter the ingredients, and run the model to find a similar recipe with known quantities, and then use that information to approximate your own meal."
The project was funded, in part, by QCRI, as well as the European Regional Development Fund (ERDF) and the Spanish Ministry of Economy, Industry, and Competitiveness.
Read this article:
Artificial intelligence suggests recipes based on food photos - MIT News
Posted in Artificial Intelligence
Comments Off on Artificial intelligence suggests recipes based on food photos – MIT News







