The Prometheus League
Breaking News and Updates
Daily Archives: June 3, 2017
Rumbling seats. Virtual reality. Booze. How cinemas are adapting to uncertain future – Los Angeles Times
Posted: June 3, 2017 at 12:30 pm
Like many people, I got one of my first jobs growing up at a movie theater. I spent summer 2005 sweeping up popcorn and sneaking into midday screenings of "Wedding Crashers" at an UltraStar Cinemas in San Diego. At the time, cup holders were considered fairly innovative and stadium seating was the height of luxury. Everyone still bought paper tickets at the box office, and the food menu was limited to popcorn, bad hot dogs and Junior Mints.
Today, moviegoers pay for tickets online and get their phones scanned at the door. They eat restaurant-style food and sip movie-themed cocktails in theater lounges before the films. They can even order food and wine while relaxing in their leather recliner seats.
Moviegoers have increasingly innovative and expensive options, especially in Los Angeles, a laboratory of multiplex innovation. The cinema industry is trying everything it can (motion seats, virtual reality and even competitive video gaming) to see what takes hold.
It's a matter of survival. Cinemas need to reinvent themselves for younger audiences who aren't going to the multiplex as much. Movie theaters sold 1.3 billion tickets in the U.S. and Canada last year, down from the recent peak of 1.6 billion in 2002, according to data from the Motion Picture Assn. of America.
"What you can get at a theater now is vastly different from five years ago," says Eric Handler, a media analyst with MKM Partners who follows the theatrical exhibition industry. "The exhibitors finally realized people were willing to pay a premium for a higher-quality viewing experience."
The 3-year-old iPic Theaters location in Westwood revels in luxury. Going to the venue, which has a concierge-like front desk and full bar and restaurant, is more like checking into a hotel than a movie theater.
The premium section of the auditorium only fits six rows of seats, but that's the trade-off for full recliners equipped with pillows and blankets, plus wide aisles for the wait staff. Each pair of seats ($58 for two) comes with a menu created by Sherry Yard, who was Wolfgang Puck's longtime pastry chef, and a blue-light button to summon a server for wine and snacks.
Introducing food and wait service to the theatrical experience has forced companies to get creative. Smelly and crunchy dishes aren't ideal, so instead they serve gourmet finger foods like green goddess turkey sliders, meatza pizza and tandoori chicken skewers.
Afterward, couples can venture to the darkly lit Tuck Room Tavern, the restaurant Yard opened a year ago. The bar features a glass tower that uses liquid nitrogen to create special cocktail flavoring.
Why the pampering?
"We're competing with your home," says Hamid Hashemi, CEO of Florida-based iPic Entertainment, which also operates a theater in Pasadena. "It's really simple. If there's a way to watch a movie and improve the experience, why not do it?"
Rivals have taken note and are also attempting to turn a trip to the movies into a more plush and boozy date night. AMC Theatres, the world's largest cinema chain, has been rapidly adding recliner chairs and dine-in options, and recently completed renovations of two Burbank locations. The exhibition giant has opened 250 of its MacGuffins bars at its theaters, with movie-themed cocktail tie-ins, including a "Baywatch" Banana Hammock and a "Guardians of the Galaxy Vol. 2" Awesome Mix.
The U.S. division of the Mexican cinema chain Cinepolis has its own luxury theater in Westlake Village, featuring waiter service and a full bar (some Cinepolis locations also have auditoriums with play areas for kids). And Cinemark opened its Playa Vista and XD location in 2015 with a reserve level VIP experience for patrons to order food and drinks.
As the competition heats up, iPic is looking for ways to make its offerings even fancier. The company is introducing a new seating pod that creates a private cocoon around pairs of moviegoers.
The Vine Theatre on Hollywood Boulevard, one of Los Angeles' many single-screen theaters, dating back to 1940, doesn't look like much from the sidewalk. No movies are advertised on its marquee. But inside is a center of advanced technology and cinema innovation. Cinema tech company Dolby Laboratories gutted and remodeled the space several years ago and now uses it to show its projection and surround-sound advancements to filmmakers such as Ang Lee.
There are 72 Dolby Cinema theaters in the United States with partner AMC (L.A. locations include the AMC Burbank 16 and AMC Century City 15), complete with laser projection and an advanced 360-degree ring of speakers wrapped around the audience.
The Vine theater showcases the latest technologies. The tour starts with Dolby's signature audio-visual pathway from the lobby to the auditorium, a curved screen with projected images related to the movie the guests are about to see. As people walk into a screening of "The Lego Batman Movie," for instance, they see animated graphics of the characters on the wall.
Once inside, Dolby executive Stuart Bowling uses a before-and-after shot of a white dot on a black screen to show how the company's laser projectors can create a true inky black color, instead of the milky gray people are used to seeing on the silver screen.
"It really delivers true black level for the filmmaker to deliver a more compelling image," Bowling says. "They all have gasps, whoas, occasionally an expletive from a filmmaker."
The Dolby Atmos surround-sound technology uses dozens of speakers on the ceilings and walls around the auditorium to simulate sounds coming from different directions.
San Francisco-based Dolby is just one of the companies using better screening technology to get people off the couch and into the theaters. If it's size you're looking for, go to the TCL Chinese Theatre in Hollywood.
The Canadian cinema technology company Imax Corp. put its stamp on the legendary theater in 2013, installing a 94-foot-wide screen (among the largest Imax screens in North America). Later, Imax added a 4K laser projection system in what it called a "giant leap forward" for cinema technology.
Not to be outdone, the biggest theater chains, including AMC, Regal and Cinemark, are rolling out their own premium, large-format auditoriums for a more grandiose experience. Cinemark two years ago unveiled its revamped Playa Vista location, which includes a 450-seat auditorium known as XD with a giant 70-foot-wide screen and a sound system that has more than 60 speakers.
Meanwhile, Belgian projector company Barco has been trying to promote its Barco Escape, an immersive three-screen format that surrounds the audience, though few movies have been designed for the experience. Regal L.A. Live, recently branded as a Barco Innovation Center, includes a Barco Escape auditorium, as does the Cinemark multiplex at Howard Hughes Center.
Another technological innovation that could change the movie business is the much-hyped virtual reality. Filmmakers and executives have talked up the grand possibilities of storytelling for the pricey headsets that promise an intense, immersive experience.
Many hurdles have prevented VR from going mainstream, including the high cost of the headsets, which can cost thousands of dollars each, and the lack of compelling content.
Still, Hollywood is adapting films to virtual-reality video games and designing promotional tie-ins for movies to supplement marketing efforts. Some major filmmakers are making VR a part of their toolkit. Oscar-winner Alejandro G. Iñárritu recently displayed his VR project "Carne y Arena" at the Cannes Film Festival.
And theaters have become testing grounds for VR experiments. At the Regal L.A. Live entertainment complex, a marketing team for 20th Century Fox recently roped off part of the cinema lobby and set up a row of chairs and Oculus Rift rigs. The team persuaded moviegoers wandering the lobby to strap on headsets and watch the free promotional tool "Alien: Covenant In Utero," a two-minute, 360-degree video that lets users experience what it's like for an alien to burst out of someone's chest.
Universal Pictures took a different approach with its own VR tie-in for "The Mummy." The studio teamed with Glendale-based VR seating company Positron to create virtual-reality theaters with rows of swiveling seats equipped with headsets. The studio's free 10-minute VR video simulates a scene in which Tom Cruise weightlessly tries to survive in a plummeting airplane.
"This VR technology really allowed us to create content that would immerse audiences in a way that wouldn't have been possible before," said Austin Barker, head of creative content for Universal Pictures. "You can't ignore its potential."
Indeed, Imax is betting its new VR centers will become a part of the theatrical experience. During Memorial Day weekend, the company opened a virtual-reality hub in the lobby at the AMC Kips Bay in New York and has others in the works.
In L.A., Imax opened its new virtual-reality center, modeled after a video-game arcade, near the Grove shopping complex in January. Customers pay $7 to $10 for a virtual-reality experience, including games based on movies such as the shoot-'em-up action flick "John Wick."
Down the hall from the "Alien" VR setup at Regal L.A. Live, moviegoers trickle into a 3:15 p.m. screening of "Guardians of the Galaxy Vol. 2" in a 4DX auditorium promising an "absolute cinema experience." Translated, that means audience members pay $24.50 a ticket for a theater that uses moving seats, plus wind, water and odor effects, to simulate what's happening on the screen.
The seats pull back and rumble as Drax the Destroyer takes a flying leap at an alien foe. When something explodes, simulated smoke fills the theater. The idea of 4DX, created by South Korean company CJ 4Dplex Co., is to make people feel as if they're part of the action. It's a bit like a Universal Studios ride.
About 18 miles south of L.A. Live, a Torrance-based company called MediaMation makes its own competing version of the motion-seat technology, called MX4D. MediaMation workers in protective goggles assemble rows of seats that they will ship across the country, and crash dummies wait to test the seats' safety.
The company uses an on-site miniature theater to demonstrate for studio executives how their movies will be seen with its motion seating and other effects. In its version of "Mad Max: Fury Road," moviegoers' faces are blasted with air during a scene where a character's face is sprayed with chrome paint.
While popular in Asia, the technology has spread slowly in the U.S., partly because of the cost. Still, MediaMation CEO Daniel Jamele says the idea is catching on with American moviegoers who want an experience that they can't replicate in their living rooms.
"We think there's a real market here," Jamele said.
Some exhibitors are even turning their theaters into video game centers. MediaMation is working with the TCL Chinese Theatre to retrofit one of its auditoriums for e-sports, competitive video-game tournaments where people play on the big screen.
Cinephiles may balk at the apparent sacrilege of the cinematic space, but theaters have been experimenting for years with this kind of alternative content, especially during weekday business hours when auditoriums are empty.
iPic and other exhibitors have been getting into the in-theater gaming business too. The company has teamed with video-gaming league Super League Gaming to host one-week Minecraft tournaments at its locations. Five-day passes for its upcoming July event cost $100.
"That's the dream of every theater," Jamele said. "It gives them an alternate source of income, which is what they need."
Twitter: @rfaughnder
Experts think this is how long we have before AI takes all of our jobs – ScienceAlert
Posted: at 12:29 pm
According to a survey of artificial intelligence experts, AI will probably be good enough to take on pretty much all of our jobs within half a century.
While there's plenty of room for debate on the details, the predicted applications of AI could serve as an alarm bell for us to consider how our economy and job market will adapt to ever smarter technology.
A team of researchers from the University of Oxford and Yale University received 352 responses to a survey they'd sent out to over 1,600 academics who had presented at conferences on machine learning and neural information processing in 2015.
The survey asked the experts to assign probabilities to dates in the future that AI might be capable of performing specific tasks, from folding laundry to translating languages.
They also asked for predictions on when machines would be superior to humans in fulfilling certain occupations, such as surgery or truck-driving; when they thought AI would be better than us at all tasks; and what they thought the social impacts could be.
The researchers then combined the results to determine, for each milestone, a range of dates stretching from 25 percent confidence to 75 percent confidence, along with a median point at which half of the experts expected the milestone sooner and half expected it later.
You can check out the results in the table below.
(Table credit: Grace, Salvatier, Dafoe, Zhang & Evans)
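To make that aggregation concrete, here is a minimal Python sketch of the median-and-quartiles summary described above. The prediction values are invented for illustration; the real survey data is far richer.

```python
import statistics

# Hypothetical expert answers: each value is one expert's predicted year
# for a single milestone. The numbers are invented purely to illustrate
# the 25th/50th/75th-percentile aggregation the researchers describe.
predictions = [2024, 2026, 2030, 2032, 2035, 2040, 2048, 2055, 2070, 2090]

low, median, high = statistics.quantiles(predictions, n=4)
print(f"25% confident by {low:.0f}, median {median:.0f}, 75% confident by {high:.0f}")
```

The quartile range is what the intervals in the table represent, with the median as the single headline date.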
Most of the academics seem fairly confident that we'll have an AI be better than all humans at playing Angry Birds within the next seven years, and that we can start to place bets on AI winning the World Series of Poker within a decade.
We can bet there's a 50 percent chance robots will be better than us at folding laundry in about six years, followed very soon by an AI winning the strategy computer game StarCraft.
If you drive a truck for a living, there's a slim chance you'll be competing with automated drivers in just over five years, but you can be fairly sure you'll be giving up the road to driverless trucks in just over 20 years.
There's a good chance we'll see a book written by an AI in the New York Times bestseller list in 26 years and a top 40 pop song in maybe 12 years.
And just in case you think you'll play it smart and develop the AI that is going to take over the world, the experts think there's a slim chance that machines will be the ones developing AI within half a century, and odds-on they'll be running the show in about 80 years.
The researchers set the 50 percent chance line for artificial intelligence being better at just about everything (a point they describe as "High-Level Machine Intelligence") at just under 50 years, and they put a similar likelihood on AI being capable of doing just about any job you can imagine by 2140.
Just how much should we trust the experts on this one?
It might help to know that when they completed the survey back in 2015, on average they estimated there was only a 50 percent chance AI would beat the world champion at the game of Go within about 12 years.
That's a skill we can now pretty much set to 100 percent accomplished, with the recent news of Google's DeepMind AlphaGo AI beating world champion Ke Jie in the first of a three-match series.
So it's possible the academics might simply be a little conservative in their estimates. In addition, experts in Asia saw AI progress occurring on average much sooner than those in North America, suggesting culture will affect our ability to predict.
While some jobs and skills are clearly on the horizon, it's a big call to say AI will probably be good at just about everything in 45 years.
An article at MIT Technology Review makes a compelling argument for why any prediction of "40 years in the future" should ring alarm bells: 40 years is roughly the length of a full career, so a forecast that far out conveniently places the breakthrough beyond the span of the forecaster's own working life.
In other words, we tend to optimistically imagine the technological advances required for some things occurring after our time.
The research is available on the preprint website arXiv.org, so it hasn't yet been peer reviewed.
Even if the details are still up for discussion, we can be fairly confident that as time goes on, technology will at least be capable of doing a better job than your average human.
"While individual breakthroughs are unpredictable, longer term progress in R&D [research and development] for many domains (including computer hardware, genomics, solar energy) has been impressively regular," the researchers write.
As determined recently by a pair of US economists, we should expect significant impacts to particular fields in industry.
That isn't to say we can expect economic collapse and universal joblessness, either; going by history, technology creates more jobs than it destroys.
Give it a few decades and we'll be able to ask AI what they think we should do.
Google Is Already Late to China’s AI Revolution – WIRED
Posted: at 12:29 pm
How AI Is Changing the World of Advertising Forever – TNW
Posted: at 12:29 pm
For how much Hollywood loves remakes, I'm curious to see what a futuristic Mad Men is going to look like. Don't get me wrong; I'm not expecting to see a robotic Don Draper who writes poignant lines of copy aggregated from data points all over the world (that'd be cheesy and boring). Rather, I'd be more excited to see how technology is going to change the world of advertising for good.
A lot of people might think that under the reign of artificial intelligence, every job will suddenly be replaced by a robot. However, the core component of advertising is storytelling, which is something that requires a human touch. Even more, AI isn't going to replace storytellers, but rather empower them. Yes, the world of artificial intelligence is about to make advertising more human. Here's why:
It's no secret that the advertising world goes giddy over any innovation in the tech realm. After all, a big portion of how firms gain an edge in their industry is by being up on the latest and greatest, as well as demonstrating a capacity to look at how new practices can be applied to client campaigns. And when it comes to AI, a lot of major agencies have already situated themselves ahead of the curve.
The interesting thing to note here isn't necessarily that these agencies are using AI in general, but rather how they're using it. For example, the link above notes how a few firms have teamed up with AI companies to work on targeting and audience discovery. While these practices have been implemented long before, artificial intelligence has been accelerating the process. However, even with major players teaming up with the likes of IBM Watson, smaller agencies and startups have been on this trend as well.
An excellent example of this is the company Frank, an AI-based advertising firm for startups. Frank's goal is to use AI in the same manner of targeting mentioned above, only offering it to those businesses that could really use the savings. The platform allows you to set the goals of your campaign and homes in on targeting and bidding efficiently. This saves the time and money often devoted to outsourcing digital advertising efforts, and it gives an accurate depiction of how ads are performing in real time. Expect players like Frank to make a significant change in how small businesses and startups approach using AI in their marketing.
One of the biggest news stories to hit about AI and advertising was Goldman Sachs' $30 million investment in Persado. If you haven't heard about it yet, Persado essentially aggregates and compiles cognitive content, which is copy backed by data. It breaks down everything, from sentence structure, word choice, emotion and time of day, and can even bring in a more accurate call-to-action. And for those that hire digital marketers and advertisers, this sounds like a dream come true in saving time and money. However, when it comes to writing, AI can only go so far.
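As a rough illustration of what scoring "copy backed by data" might look like, here is a toy Python sketch that ranks ad-copy variants against simple signals. The word lists and weights are invented; a system like Persado learns such weights from large-scale response data, so treat this as a sketch of the idea, not the product.

```python
# Toy "cognitive content" scoring: rate ad copy on a few crude signals
# (emotion words, brevity, presence of a call-to-action). All word lists
# and weights below are invented for illustration only.
EMOTION_WORDS = {"amazing", "exclusive", "free", "hurry", "love", "new"}
CTA_WORDS = {"shop", "buy", "try", "join"}

def score(copy: str) -> float:
    words = copy.lower().replace("!", "").replace(",", "").split()
    emotion = sum(w in EMOTION_WORDS for w in words) / len(words)
    brevity = 1.0 if len(words) <= 8 else 8 / len(words)
    has_cta = any(w in CTA_WORDS for w in words)
    return 0.5 * emotion + 0.3 * brevity + 0.2 * has_cta

variants = ["Hurry! Exclusive deal, shop now", "Our products are available online"]
for v in sorted(variants, key=score, reverse=True):
    print(f"{score(v):.2f}  {v}")
```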
While some content creators and digital copywriters might be a little nervous that AI will eventually take their jobs, that's simply not the case. Writing involves a certain sense of emotional intelligence and response that no computer can feel. Moreover, the type of content that AI can create is limited to short-term messages. I'm not sure about you, but I'll safely bet that no major marketing director is willing to put their Super Bowl ad in the hands of a computer. Overall, while Wall Street recognizes artificial intelligence's potential impact in the creative world, it's safe to say that when it comes to telling a story, that human touch will never go away.
Perhaps one of the most underrated things about AI is its potential to eliminate practices altogether. While we mentioned above that, yes, certain jobs in the creative field will never go away, there's a possibility that certain processes in the marketing channel might change drastically.
For example, companies like Leadcrunch are using AI to build up B2B sales leads. While before B2B sales could rely on either targeted ads or sales teams to bring clients in, software like Leadcrunch's is eliminating those processes altogether. Granted, this isn't exactly a bad thing, as a lot of B2B communication relies heavily on educating consumers, something a banner ad can't do as accurately as a person. Overall, companies like this are going to drastically change how our pipelines work, potentially changing how advertising and AI work hand-in-hand for a long time.
Researchers Have Created an AI That Could Read and React to Emotions – Futurism
Posted: at 12:29 pm
In Brief: University of Cambridge researchers have developed an AI algorithm that can assess how much pain a sheep is in by reading its facial expressions. This system can facilitate the early detection of painful conditions in livestock, and eventually, it could be used as the basis for AIs that read emotions on human faces.
Reading Sheep
One of today's more popular artificially intelligent (AI) androids comes from the TV series Marvel's Agents of S.H.I.E.L.D. Those of you who followed the latest season's story (no spoilers here!) probably love or hate AIDA by now. One of the most interesting things about this fictional AI character is that it can read people's emotions. Thanks to researchers from the University of Cambridge, this AI ability might soon make the jump from sci-fi to reality.
The first step in creating such a system is training an algorithm on simpler facial expressions and just one specific emotion or feeling. To that end, the Cambridge team focused on using a machine learning algorithm to figure out if a sheep is in pain, and this week, they presented their research at the IEEE International Conference on Automatic Face and Gesture Recognition in Washington, D.C.
The system they developed, the Sheep Pain Facial Expression Scale (SPFES), was trained using a dataset of 500 sheep photographs to learn how to identify five distinct features of a sheep's face when the animal is in pain. The algorithm then ranks the features on a scale of 1 to 10 to determine the severity of the pain. Early tests showed that the SPFES could estimate pain levels with 80 percent accuracy.
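As a rough sketch of the shape of such a scale (not the actual SPFES pipeline, which is a model trained on photographs), the final scoring step might look like the Python below. The five feature names are hypothetical labels, and the simple averaging rule is an assumption made purely for illustration.

```python
# Illustrative sketch only: five facial features, each scored 1-10,
# combined into one overall pain estimate. Feature names are hypothetical
# and the averaging rule is an assumption; SPFES itself is a trained model.
FEATURES = ["orbital_tightening", "ear_position", "cheek_tightening",
            "lip_and_jaw_profile", "nostril_shape"]

def pain_severity(scores):
    """Combine per-feature scores (each 1-10) into a single severity value."""
    assert set(scores) == set(FEATURES)
    return sum(scores.values()) / len(scores)

example = dict(zip(FEATURES, [7, 6, 8, 5, 6]))
print(f"Estimated pain severity: {pain_severity(example):.1f}/10")
```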
SPFES was a departure for Peter Robinson, the Cambridge professor leading the research, as he typically focuses on systems designed to read human facial expressions. "There's been much more study over the years with people," Robinson explained in a press release. "But a lot of the earlier work on the faces of animals was actually done by Darwin, who argued that all humans and many animals show emotion through remarkably similar behaviors, so we thought there would likely be crossover between animals and our work in human faces."
As co-author Marwa Mahmoud explained, "The interesting part is that you can see a clear analogy between these actions in the sheep's faces and similar facial actions in humans when they are in pain; there is a similarity in terms of the muscles in their faces and in our faces."
Next, the team hopes to teach SPFES how to read sheep facial expressions from moving images, as well as train the system to work when a sheep isn't looking directly at a camera. Even as is, though, the algorithm could improve the quality of life of livestock like sheep by facilitating the early detection of painful conditions that require quick treatment, adding it to the growing list of practical and humane applications for AI.
Additional developments could lead to systems that are able to accurately recognize and react to human emotions, further blurring the line between natural and artificial intelligences.
How to Integrate AI Into Your Digital Marketing Strategy – Search Engine Journal
Posted: at 12:29 pm
Artificial intelligence (AI) used to be pure science fiction. But now AI is science fact.
This once futuristic technology is now implemented in almost every aspect of our lives. We must adjust.
Luckily, we're used to adjusting, pivoting, and expecting change of any kind, and quickly. The use of AI in our digital marketing strategies is no different.
We have to think of AI as we thought of mobile years ago. If we don't learn about it and apply it, then we are destined to be out of a job.
AI encompasses a large scope of technologies. The basic concept of AI is that it is a machine that learns to mimic human behavior.
Google has built AI into pretty much every product it has, from paid and organic search, to Gmail, to YouTube. Facebook powers all its experiences using AI, whether it's posts that appear in the news feed or the advertising you see.
AI can be used for many things, but today we're only focusing on what it can do for your marketing strategy and how to implement it.
Here are four ways that businesses of any size can start using AI.
Content creation is expensive and time-consuming. But now we have access to AI tools that allow users to input data and output content.
Companies like Wordsmith allow us to connect data and write short outlines; they'll then generate a story in seconds. This allows companies everywhere to scale their content production while improving the quality of the content they put out.
AI-generated content is built using an NLG (natural language generation) engine. These tools make it easy to format and translate content quickly.
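For a sense of the underlying idea, here is a toy template-based NLG function in Python. Commercial engines such as Wordsmith do far more (grammatical variation, tone, narrative structure); the data fields and figures below are invented for the example.

```python
# A toy template-based NLG sketch: structured data in, readable copy out.
# The company name and figures are invented; real NLG engines generate
# far richer and more varied language than a single template.
def render_story(data: dict) -> str:
    trend = "rose" if data["change_pct"] >= 0 else "fell"
    return (f"{data['company']} revenue {trend} {abs(data['change_pct']):.1f}% "
            f"in {data['quarter']}, reaching ${data['revenue_m']:.0f}M.")

print(render_story({"company": "Acme Corp", "quarter": "Q2 2017",
                    "revenue_m": 152, "change_pct": 4.3}))
```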
Not ready yet to produce content at scale with robots? No problem. You can still integrate AI into your talent sourcing for writers.
Companies like Scripted connect you to freelance writers by analyzing content to find the best writer for your job. This saves you hours of work reading through content.
Basically, AI generated content creation and content creator sourcing is going to improve the quality of content. It is important to jump on board and get your content process in order before your competition moves ahead of you.
Chatbots are copying human behavior just like any other sort of AI, but they have a specialty: chatbots interpret consumer questions and queries. They can even help customers complete orders.
There are many companies out there creating chatbots and virtual assistants to help companies keep up with the times. For instance, Apple's Siri, Google Assistant, Amazon Echo, and others are all forms of chatbots.
Chatbots are also being created specifically to help marketers with customer service.
Facebook is particularly interested in helping brands create chatbots to improve customer service. You can access the tools it has created through the wit.ai bot engine.
If you find this too developer-centered and would rather have another company build the bot for you, you can use tools like ChattyPeople.
Whichever tools you choose, it's important for any company that provides customer service to start implementing chatbots now.
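At its core, a customer-service chatbot maps a message to an intent and a reply. Here is a minimal keyword-matching sketch in Python; production platforms like the ones above use trained language-understanding models, and the intents, keywords, and replies below are invented.

```python
# Minimal keyword-based intent matching, to show what a customer-service
# chatbot does at its simplest. Real platforms use trained NLU models;
# every intent, keyword, and reply here is invented for the example.
INTENTS = {
    "order_status": (["order", "tracking", "shipped"],
                     "Let me look up your order. What's your order number?"),
    "returns":      (["return", "refund", "exchange"],
                     "You can start a return from your account page."),
    "hours":        (["hours", "open", "close"],
                     "We're open 9am-6pm, Monday through Saturday."),
}

def reply(message: str) -> str:
    words = message.lower().split()
    for keywords, answer in INTENTS.values():
        if any(k in words for k in keywords):
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("Has my order shipped yet?"))
```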
Anyone who has ever had to do an image audit on a large or commercial website should truly understand the value in a tool that can auto-recognize images.
When issues of image and video licensing, poor image and video tagging, and UX come into play, AI can solve our problems.
Next time you're hit with an image audit or a need to categorize your images and video for improved UX, you should look to AI tools like Dextro or Clarifai.
You can also use tools for smart tagging. Instead of bringing in tools to review current assets, you can use Adobe Experience Manager to maintain appropriate tagging with its smart tagging features.
All of these tools will save you so much time, energy, and (in some cases) money.
Voice search tools, such as Amazon Echo and Google Home, will change the face of marketing forever.
We must start looking at the ways voice search will change our content, websites, and customer service options.
We have to think about how a person would ask for something instead of how they would search for it using text.
Studies suggest that query length in voice search is much longer than in text search. So focus on long-tail keywords instead of short ones.
We also know that language is a much greater signifier of intent. This means that as we start to use more voice search, the conversions (in theory) should be higher and the quality of leads should be better.
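A crude way to see the difference in practice: voice queries tend to be longer and phrased as questions. The Python sketch below flags "voice-style" queries; the word-count threshold and the question-word list are assumptions for illustration only.

```python
# Rough heuristic for separating voice-style (long-tail, conversational)
# queries from short text-style ones. The threshold of 5 words and the
# question-word list are illustrative assumptions, not research findings.
QUESTION_WORDS = {"who", "what", "when", "where", "why", "how", "which"}

def looks_like_voice_query(query: str) -> bool:
    words = query.lower().split()
    return len(words) >= 5 or (bool(words) and words[0] in QUESTION_WORDS)

for q in ["best pizza nyc", "where can I get gluten-free pizza near me"]:
    print(f"{q!r}: voice-style={looks_like_voice_query(q)}")
```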
Voice search isn't on its way. It's here. Make sure your brand stays ahead of the game.
AI once sounded scary and futuristic. But it is neither.
If you study, test, and start implementing what you know about AI into your strategies now, then you'll be able to keep up.
Don't let yourself get left behind. It's only a matter of time before AI becomes the new normal and the next big thing hits.
AlphaGo AI stuns go community – The Japan Times
Posted: at 12:29 pm
Google's artificial intelligence program AlphaGo stunned players and fans of the ancient Chinese board game of go last year by defeating South Korean grandmaster Lee Sedol 4-1. Last month, an upgraded version of the program achieved a more astonishing feat by trouncing Ke Jie from China, the world's top player, 3-0 in a five-game contest. In the world of go, AI appears to have surpassed humans, ushering in an age in which human players will need to learn from AI. What happened in the game of go also poses a larger question of how humans can coexist with AI in other fields.
In a go match, two players alternately lay black and white stones on 361 points of intersection on a board with a 19-by-19 grid of lines, trying to seal off a larger territory than the opponent. It is said that the number of possible moves amounts to 10 to the power of 360. This huge variety of options compels even top-class players to differ on the question of which moves are the best. Such freedom to maneuver caused experts to believe it would take a while before AI would catch up with humans in the world of go. Against this background, AlphaGo's sweeping victory over the world's No. 1 player is a significant event that not only symbolizes the rapid development of computer science but is also encouraging for the application of AI in various fields.
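A quick back-of-envelope check (sketched in Python) shows where a figure like 10 to the power of 360 can come from: it roughly matches go's game-tree complexity under the conventional estimates of about 250 legal moves per position over a game of about 150 moves. Both inputs are rough approximations, not exact counts.

```python
import math

# Back-of-envelope check on the "10 to the power of 360" figure quoted
# above, using the standard rough estimates for go's game-tree complexity:
# ~250 legal moves per position (branching factor) over ~150 moves.
branching, game_length = 250, 150
print(f"250^150 ~= 10^{game_length * math.log10(branching):.0f}")  # ~10^360

# For comparison: filling the 361 points with black/white/empty
# (ignoring legality) gives 3^361 ~= 10^172 board configurations.
print(f"3^361  ~= 10^{361 * math.log10(3):.0f}")
```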
In part of the contest with Lee in Seoul in March 2016, AlphaGo made irrational moves, cornering itself into a disadvantageous position. But in the case of its contest with Ke in the eastern Chinese city of Wuzhen in late May, it made convincing moves throughout, subjecting the human to a horrible experience. He called AlphaGo "a go player like a god."
AlphaGo was built by DeepMind, a Google subsidiary. It takes advantage of technology known as deep learning, which utilizes neural networks similar to those of human brains to learn from a vast amount of data and enhance judging power. This is analogous to a baby learning a language by being exposed to a huge volume of utterances over a period of time. The program not only learns effective patterns of moves for go by studying enormous volumes of documented previous games but also hones its skills by playing millions of games against itself. In this manner, it has accomplished a remarkable evolution over the past year. Unlike humans, it is free of fatigue and emotional fluctuations. Because it grows stronger by playing games against itself, there is no knowing how good it will become in the future.
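The self-play half of that training pattern can be illustrated on a toy scale. The Python sketch below teaches itself the simple game of Nim purely by playing against itself and reinforcing the moves that led to wins. It is nothing like DeepMind's actual pipeline, just the same loop in miniature.

```python
import random

# Toy self-play learner for Nim (take 1-3 stones; whoever takes the last
# stone wins). Illustrates the loop described above: the program improves
# only by playing itself and reinforcing moves that led to wins.
values = {}  # (stones_left, move) -> estimated win rate for the mover

def choose(stones, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:  # occasional exploration
        return random.choice(moves)
    return max(moves, key=lambda m: values.get((stones, m), 0.5))

def self_play_game():
    stones, player, history = random.randint(4, 15), 0, []
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = 1 - player  # the player who just took the last stone
    for p, s, m in history:  # credit assignment after the game
        old = values.get((s, m), 0.5)
        target = 1.0 if p == winner else 0.0
        values[(s, m)] = old + 0.1 * (target - old)

for _ in range(50000):
    self_play_game()

# With 5 stones left, taking 1 (leaving a losing 4 for the opponent)
# should now score highest.
print({m: round(values.get((5, m), 0.5), 2) for m in (1, 2, 3)})
```

Run long enough, the learned values converge on the known optimal strategy for Nim (leave your opponent a multiple of four stones), mirroring in miniature how self-play keeps improving with no human input.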
Feeling intimidated by AI programs should not be the only reaction of human go players. They can receive inspiration from AlphaGo since it shows a superior grasp of the whole situation of a contest, instead of being obsessed with localized moves, and it often lays stones in and around the center of the board. Human players usually first try to seal off territory around the corners. Its playing records also prove that even some moves traditionally considered bad have advantages. By learning from AlphaGo, go players can acquire new skills and make their contests more interesting.
AlphaGo does have a weak point. It cannot explain its thinking behind the particular moves that it makes. When watching ordinary go contests, fans can enjoy listening to analyses by professional players. Also, ordinary go contests are interesting since psychology plays such an important part of the game, especially at critical points. This shows there are some elements of go that AI cannot take over.
DeepMind is thinking about how it can apply the know-how it has accumulated through the AlphaGo program to other areas, such as developing drugs and diagnosing patients through data analysis. But the fact that the program made irrational moves during its match with South Korea's Lee shows that the technology is not error-free, a problem that must be resolved before AI can be applied to such fields as medical services and self-driving vehicles. Many problems may have to be overcome to make AI safe enough for application in areas where human lives are at stake.
A report issued by Nomura Research Institute says that in 10 to 20 years, AI may be capable of taking over jobs now being done by 49 percent of Japan's workforce. At the same time, it says AI cannot intrude into fields where cooperation or harmony between people is needed or where people create abstract concepts like art, historical studies, philosophy and theology. It will be all the more important for both the public and private sectors to make serious efforts to cultivate people's ability to think and create while finding out what proper roles AI should play in society.
When Will AI Exceed Human Performance? Researchers Just Gave A Timeline – Fossbytes
Posted: at 12:29 pm
Short Bytes: Various leaders in the technology world have predicted that AI and computers will overpower us in the future. These assumptions have become clearer with new research published by Oxford and Yale University: there is a 50% chance that within the next 45 years AI will be able to automate almost all human tasks.
According to a survey conducted by the researchers, which drew 353 responses (out of 1,634 requests) from AI experts who published at the NIPS and ICML conferences in 2015, there is a 50% chance of AI achieving performance on par with humans within the next 45 years.
The participants were asked to estimate the timing of specific AI capabilities like language translation and folding laundry; superiority at specific occupations like surgeon and truck driver; superiority over humans at all tasks; and how these advancements would impact society.
The researchers calculated the median figure from the data collected from various participants and created an estimated timeline which shows the number of approximate years from 2016 for AI to excel in various activities.
The intervals in the figure represent the date range from 25% to 75% probability of the event occurring, with the black dot representing 50% probability.
So, what do the numbers say? Would we have AIs playing Angry Birds better than us in the next seven years? Would AIs be able to replace those following-you-forever salespersons in the supermarkets, maybe in the next 15 to 30 years? In fact, you will be surprised to know that KFC has even launched an AI-powered store in China where the bots know which item you would prefer.
Similarly, you could expect AI surgeons opening up your body parts by 2060 and AI researchers creating more advanced AI by 2100. The research predicts that by 2140, AI would be able to do almost everything that humans can do.
Clearly, these numbers induce a sense of insecurity amongst us. But still, it appears that the researchers have underrated the extent of the development currently going on in this field.
The report suggested a time span of around 12 years for AI to defeat humans in the game of Go. But the recent news about Google's AlphaGo defeating Chinese world champion Ke Jie tells a different story about the future.
One important thing to consider here is the speed of AI development. Experts based in Asia might have witnessed a faster growth rate than the ones in the United States. The researchers further noted that the age and know-how of the experts didn't affect the predictions, but their locations did.
North American researchers estimated around 74 years for AI to outperform humans while the number was only 30 in the case of Asian researchers.
Also, the 45-year prediction made for AI to outperform humans should be taken with a pinch of salt. It's a long timespan, often more than the complete professional life of a person. Thus, any of the predicted changes are less likely to happen with the technology currently accessible to us. This suggests that it is a number to be treated with caution.
The research which is yet to be peer-reviewed has been published on arxiv.org.
Got something to add? Drop your thoughts and feedback.
Timeline of artificial intelligence – Wikipedia
Posted: at 12:29 pm
Antiquity: Greek myths of Hephaestus and Pygmalion incorporated the idea of intelligent robots (such as Talos) and artificial beings (such as Galatea and Pandora).[1]
Antiquity: Yan Shi presented King Mu of Zhou with mechanical men.[2]
Antiquity: Sacred mechanical statues built in Egypt and Greece were believed to be capable of wisdom and emotion. Hermes Trismegistus would write "they have sensus and spiritus ... by discovering the true nature of the gods, man has been able to reproduce it." Mosaic law prohibits the use of automatons in religion.[3]
384 BC-322 BC: Aristotle described the syllogism, a method of formal, mechanical thought.
1st century: Heron of Alexandria created mechanical men and other automatons.[4]
260: Porphyry of Tyros wrote Isagoge, which categorized knowledge and logic.[5]
~800: Geber develops the Arabic alchemical theory of Takwin, the artificial creation of life in the laboratory, up to and including human life.[6]
1206: Al-Jazari created a programmable orchestra of mechanical human beings.[7]
1275: Ramon Llull, Spanish theologian, invents the Ars Magna, a tool for combining concepts mechanically, based on an Arabic astrological tool, the Zairja. The method would be developed further by Gottfried Leibniz in the 17th century.[8]
~1500: Paracelsus claimed to have created an artificial man out of magnetism, sperm and alchemy.[9]
~1580: Rabbi Judah Loew ben Bezalel of Prague is said to have invented the Golem, a clay man brought to life.[10]
Early 17th century: René Descartes proposed that bodies of animals are nothing more than complex machines (but that mental phenomena are of a different "substance").[11]
1623: Wilhelm Schickard drew a calculating clock on a letter to Kepler. This would be the first of five unsuccessful attempts at designing a direct-entry calculating clock in the 17th century (including the designs of Tito Burattini, Samuel Morland and René Grillet).[12]
1641: Thomas Hobbes published Leviathan and presented a mechanical, combinatorial theory of cognition. He wrote "...for reason is nothing but reckoning".[13][14]
1642: Blaise Pascal invented the mechanical calculator,[15] the first digital calculating machine.[16]
1672: Gottfried Leibniz improved the earlier machines, making the Stepped Reckoner to do multiplication and division. He also invented the binary numeral system and envisioned a universal calculus of reasoning (alphabet of human thought) by which arguments could be decided mechanically. Leibniz worked on assigning a specific number to each and every object in the world, as a prelude to an algebraic solution to all possible problems.[17]
1726: Jonathan Swift published Gulliver's Travels, which includes this description of the Engine, a machine on the island of Laputa: "a Project for improving speculative Knowledge by practical and mechanical Operations" by using this "Contrivance", "the most ignorant Person at a reasonable Charge, and with a little bodily Labour, may write Books in Philosophy, Poetry, Politicks, Law, Mathematicks, and Theology, with the least Assistance from Genius or study."[18] The machine is a parody of Ars Magna, one of the inspirations of Gottfried Leibniz's mechanism.
1750: Julien Offray de La Mettrie published L'Homme Machine, which argued that human thought is strictly mechanical.[19]
1769: Wolfgang von Kempelen built and toured with his chess-playing automaton, The Turk.[20] The Turk was later shown to be a hoax, involving a human chess player.
1818: Mary Shelley published the story of Frankenstein; or the Modern Prometheus, a fictional consideration of the ethics of creating sentient beings.[21]
1822-1859: Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines.[22]
1837: The mathematician Bernard Bolzano made the first modern attempt to formalize semantics.
1854: George Boole set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed, to give expression to them in the symbolic language of a calculus", inventing Boolean algebra.[23]
1863: Samuel Butler suggested that Darwinian evolution also applies to machines, and speculated that they will one day become conscious and eventually supplant humanity.[24]
1913: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which revolutionized formal logic.
1915: Leonardo Torres y Quevedo built a chess automaton, El Ajedrecista, and published speculation about thinking and automata.[25]
1923: Karel Čapek's play R.U.R. (Rossum's Universal Robots) opened in London. This is the first use of the word "robot" in English.[26]
1920s and 1930s: Ludwig Wittgenstein and Rudolf Carnap led philosophy into logical analysis of knowledge. Alonzo Church developed Lambda Calculus to investigate computability using recursive functional notation.
1931: Kurt Gödel showed that sufficiently powerful formal systems, if consistent, permit the formulation of true theorems that are unprovable by any theorem-proving machine deriving all possible theorems from the axioms. To do this he had to build a universal, integer-based programming language, which is the reason why he is sometimes called the "father of theoretical computer science".
1941: Konrad Zuse built the first working program-controlled computers.[27]
1943: Warren Sturgis McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity", laying foundations for artificial neural networks.[28]
1943: Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics". Wiener's popular book by that name was published in 1948.
1945: Game theory, which would prove invaluable in the progress of AI, was introduced with the 1944 paper Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945: Vannevar Bush published As We May Think (The Atlantic Monthly, July 1945), a prescient vision of the future in which computers assist humans in many activities.
1948: John von Neumann (quoted by E.T. Jaynes), in response to a comment at a lecture that it was impossible for a machine to think: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church-Turing thesis, which states that any effective procedure can be simulated by a (generalized) computer.
1950: Alan Turing proposes the Turing Test as a measure of machine intelligence.[29]
1950: Claude Shannon published a detailed analysis of chess playing as search.
1950: Isaac Asimov published his Three Laws of Robotics.
1951: The first working AI programs were written to run on the Ferranti Mark 1 machine of the University of Manchester: a checkers-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz.
1952-1962: Arthur Samuel (IBM) wrote the first game-playing program,[30] for checkers (draughts), to achieve sufficient skill to challenge a respectable amateur. His first checkers-playing program was written in 1952, and in 1955 he created a version that learned to play.[31]
1956: The first Dartmouth College summer AI conference is organized by John McCarthy, Marvin Minsky, Nathan Rochester of IBM and Claude Shannon.
1956: The name artificial intelligence is used for the first time as the topic of the second Dartmouth Conference, organized by John McCarthy.[32]
1956: The first demonstration of the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert A. Simon (Carnegie Institute of Technology, now Carnegie Mellon University or CMU). This is often called the first AI program, though Samuel's checkers program also has a strong claim.
1957: The General Problem Solver (GPS) demonstrated by Newell, Shaw and Simon while at CMU.
1958: John McCarthy (Massachusetts Institute of Technology or MIT) invented the Lisp programming language.
1958: Herbert Gelernter and Nathan Rochester (IBM) described a theorem prover in geometry that exploits a semantic model of the domain in the form of diagrams of "typical" cases.
1958: The Teddington Conference on the Mechanization of Thought Processes was held in the UK, and among the papers presented were John McCarthy's Programs with Common Sense, Oliver Selfridge's Pandemonium, and Marvin Minsky's Some Methods of Heuristic Programming and Artificial Intelligence.
1959: John McCarthy and Marvin Minsky founded the MIT AI Lab.
Late 1950s, early 1960s: Margaret Masterman and colleagues at University of Cambridge design semantic nets for machine translation.
1960s: Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960: Man-Computer Symbiosis by J.C.R. Licklider.
1961: James Slagle (PhD dissertation, MIT) wrote (in Lisp) the first symbolic integration program, SAINT, which solved calculus problems at the college freshman level.
1961: In Minds, Machines and Gödel, John Lucas[33] denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI deriving all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
1961: Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963: Thomas Evans' program ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
1963: Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
1963: Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of the simple perceptrons of Rosenblatt.
1964: Danny Bobrow's dissertation at MIT (technical report #1 from MIT's AI group, Project MAC) shows that computers can understand natural language well enough to solve algebra word problems correctly.
1964: Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965: J. Alan Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.
1965: Joseph Weizenbaum (MIT) built ELIZA, an interactive program that carries on a dialogue in English on any topic. It was a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
1965: Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds using scientific instrument data. It was the first expert system.
1966: Ross Quillian (PhD dissertation, Carnegie Inst. of Technology, now CMU) demonstrated semantic nets.
1966: Machine Intelligence workshop at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1966: A negative report on machine translation kills much work in Natural language processing (NLP) for many years.
1967: The Dendral program (Edward Feigenbaum, Joshua Lederberg, Bruce Buchanan, Georgia Sutherland at Stanford University) was demonstrated interpreting mass spectra of organic chemical compounds. First successful knowledge-based program for scientific reasoning.
1968: Joel Moses (PhD work at MIT) demonstrated the power of symbolic reasoning for integration problems in the Macsyma program. First successful knowledge-based program in mathematics.
1968: Richard Greenblatt (programmer) at MIT built a knowledge-based chess-playing program, MacHack, that was good enough to achieve a class-C rating in tournament play.
1968: Wallace and Boulton's program Snob (Comp.J. 11(2) 1968), for unsupervised classification (clustering), uses the Bayesian Minimum Message Length criterion, a mathematical realisation of Occam's razor.
1969: Stanford Research Institute (SRI): Shakey the Robot demonstrated combining animal locomotion, perception and problem solving.
1969: Roger Schank (Stanford) defined the conceptual dependency model for natural language understanding. Later developed (in PhD dissertations at Yale University) for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
1969: Yorick Wilks (Stanford) developed the semantic coherence view of language called Preference Semantics, embodied in the first semantics-driven machine translation program, and the basis of many PhD dissertations since, such as Bran Boguraev and David Carter at Cambridge.
1969: First International Joint Conference on Artificial Intelligence (IJCAI) held at Stanford.
1969: Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward two-layered structure. This book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. Nevertheless, significant progress in the field continued (see below).
1969: McCarthy and Hayes started the discussion about the frame problem with their essay "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
Early 1970s: Jane Robinson and Don Walker established an influential Natural Language Processing group at SRI.
1970: Jaime Carbonell (Sr.) developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge.
1970: Bill Woods described Augmented Transition Networks (ATNs) as a representation for natural language understanding.
1970: Patrick Winston's PhD program, ARCH, at MIT learned concepts from examples in the world of children's blocks.
1971: Terry Winograd's PhD thesis (MIT) demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, in a coupling of his language understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
1971: Work on the Boyer-Moore theorem prover started in Edinburgh.[34]
1972: Prolog programming language developed by Alain Colmerauer.
1972: Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973: The Assembly Robotics Group at University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models. (See Edinburgh Freddy Assembly Robot: a versatile computer-controlled assembly system.)
1973: The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the decision by the British government to discontinue support for AI research in all but two universities.
1974: Ted Shortliffe's PhD dissertation on the MYCIN program (Stanford) demonstrated a very practical rule-based approach to medical diagnoses, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert system development, especially commercial systems.
1975: Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
1975: Austin Tate developed the Nonlin hierarchical planning system, able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
1975: Marvin Minsky published his widely read and influential article on Frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
1975: The Meta-Dendral learning program produced new results in chemistry (some rules of mass spectrometry), the first scientific discoveries by a computer to be published in a refereed journal.
Mid-1970s: Barbara Grosz (SRI) established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing focus of discourse and anaphoric references in Natural language processing.
Mid-1970s: David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976: Douglas Lenat's AM program (Stanford PhD dissertation) demonstrated the discovery model (loosely guided search for interesting conjectures).
1976: Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1978: Tom Mitchell, at Stanford, invented the concept of Version Spaces for describing the search space of a concept formation program.
1978: Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, one of the cornerstones of AI known as "satisficing".
1978: The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented programming representation of knowledge can be used to plan gene-cloning experiments.
1979 Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert system "shells". 1979 Jack Myers and Harry Pople at University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge. 1979 Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming. 1979 The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab. 1979 BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion. 1979 Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford begin publishing work on non-monotonic logics and formal aspects of truth maintenance. Late 1970s Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration. Date Development 1980s Lisp machines developed and marketed. First expert system shells and commercial applications. 1980 First National Conference of the American Association for Artificial Intelligence (AAAI) held at Stanford. 1981 Danny Hillis designs the connection machine, which utilizes Parallel computing to bring new power to AI, and to computation in general. (Later founds Thinking Machines Corporation) 1982 The Fifth Generation Computer Systems project (FGCS), an initiative by Japan's Ministry of International Trade and Industry, begun in 1982, to create a "fifth generation computer" (see history of computing hardware) which was supposed to perform much calculation utilizing massive parallelism. 1983 John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar (program). 1983 James F. Allen invents the Interval Calculus, the first widely used formalization of temporal events. Mid-1980s Neural Networks become widely used with the Backpropagation algorithm (first described by Paul Werbos in 1974). 1985 The autonomous drawing program, AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference (based on more than a decade of work, and with subsequent work showing major developments). 1986 The team of Ernst Dickmanns at Bundeswehr University of Munich builds the first robot cars, driving up to 55mph on empty streets. 1986 Barbara Grosz and Candace Sidner create the first computation model of discourse, establishing the field of research.[35] 1987 Marvin Minsky published The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out (c.f. Doyle 1983).[36] 1987 Around the same time, Rodney Brooks introduced the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; Nouvelle AI. 1987 Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc. Toronto, the first commercial strategic and managerial advisory system. The system was based upon a forward-chaining, self-developed expert system with 3,000 rules about the evolution of markets and competitive strategies and co-authored by Alistair Davidson and Mary Chung, founders of the firm with the underlying engine developed by Paul Tarvydas. 
The Alacrity system also included a small financial expert system that interpreted financial statements and models.[37] 1989 Dean Pomerleau at CMU creates ALVINN (An Autonomous Land Vehicle in a Neural Network). Date Development Early 1990s TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement (learning) is powerful enough to create a championship-level game-playing program by competing favorably with world-class players. 1990s Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics. 1991 DART scheduling application deployed in the first Gulf War paid back DARPA's investment of 30 years in AI research.[38] 1993 Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds (1 meter/second). 1993 Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years. 1993 ISX corporation wins "DARPA contractor of the year"[39] for the Dynamic Analysis and Replanning Tool (DART) which reportedly repaid the US government's entire investment in AI research since the 1950s.[40] 1994 With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars. 1994 English draughts (checkers) world champion Tinsley resigned a match against computer program Chinook. Chinook defeated 2nd highest rated player, Lafferty. Chinook won the USA National Tournament by the widest margin ever. 1995 "No Hands Across America": A semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for 2,797 miles (4,501km) of the 2,849 miles (4,585km). Throttle and brakes were controlled by a human driver.[41][42] 1995 One of Ernst Dickmanns' robot cars (with robot-controlled throttle and brakes) drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120mph, occasionally executing maneuvers to pass other cars (only in a few critical situations a safety driver took over). Active vision was used to deal with rapidly changing street scenes. 1997 The Deep Blue chess machine (IBM) defeats the (then) world chess champion, Garry Kasparov. 1997 First official RoboCup football (soccer) match featuring table-top matches with 40 teams of interacting robots and over 5000 spectators. 1997 Computer Othello program Logistello defeated the world champion Takeshi Murakami with a score of 60. 1998 Tiger Electronics' Furby is released, and becomes the first successful attempt at producing a type of A.I to reach a domestic environment. 1998 Tim Berners-Lee published his Semantic Web Road map paper.[43] 1998 Leslie P. 
Kaelbling, Michael Littman, and Anthony Cassandra introduce the first method for solving POMDPs offline, jumpstarting widespread use in robotics and automated planning and scheduling[44] 1999 Sony introduces an improved domestic robot similar to a Furby, the AIBO becomes one of the first artificially intelligent "pets" that is also autonomous. Late 1990s Web crawlers and other AI-based information extraction programs become essential in widespread use of the World Wide Web. Late 1990s Demonstration of an Intelligent room and Emotional Agents at MIT's AI Lab. Late 1990s Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network. Date Development 2000 Interactive robopets ("smart toys") become commercially available, realizing the vision of the 18th century novelty toy makers. 2000 Cynthia Breazeal at MIT publishes her dissertation on Sociable machines, describing Kismet (robot), with a face that expresses emotions. 2000 The Nomad robot explores remote regions of Antarctica looking for meteorite samples. 2002 iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles. 2004 OWL Web Ontology Language W3C Recommendation (10 February 2004). 2004 DARPA introduces the DARPA Grand Challenge requiring competitors to produce autonomous vehicles for prize money. 2004 NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars. 2005 Honda's ASIMO robot, an artificially intelligent humanoid robot, is able to walk as fast as a human, delivering trays to customers in restaurant settings. 2005 Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions. 2005 Blue Brain is born, a project to simulate the brain at molecular detail.[1] 2006 The Dartmouth Artificial Intelligence Conference: The Next 50 Years (AI@50) AI@50 (1416 July 2006) 2007 Philosophical Transactions of the Royal Society, B Biology, one of the world's oldest scientific journals, puts out a special issue on using AI to understand biological intelligence, titled Models of Natural Action Selection[45] 2007 Checkers is solved by a team of researchers at the University of Alberta. 2007 DARPA launches the Urban Challenge for autonomous cars to obey traffic rules and operate in an urban environment. 2009 Google builds self driving car.[46] Date Development 2010 Microsoft launched Kinect for Xbox 360, the first gaming device to track human body movement, using just a 3D camera and infra-red detection, enabling users to play their Xbox 360 wirelessly. The award-winning machine learning for human motion capture technology for this device was developed by the Computer Vision group at Microsoft Research, Cambridge.[47][48] 2011 IBM's Watson computer defeated television game show Jeopardy! champions Rutter and Jennings. 2011 Apple's Siri, Google's Google Now and Microsoft's Cortana are smartphone apps that use natural language to answer questions, make recommendations and perform actions. 2013 Robot HRP-2 built by SCHAFT Inc of Japan, a subsidiary of Google, defeats 15 teams to win DARPAs Robotics Challenge Trials. HRP-2 scored 27 out of 32 points in 8 tasks needed in disaster response. 
Tasks are drive a vehicle, walk over debris, climb a ladder, remove debris, walk through doors, cut through a wall, close valves and connect a hose.[49] 2013 NEIL, the Never Ending Image Learner, is released at Carnegie Mellon University to constantly compare and analyze relationships between different images.[50] 2015 An open letter to ban development and use of autonomous weapons signed by Hawking, Musk, Wozniak and 3,000 researchers in AI and robotics.[51] 2015 Google DeepMind's AlphaGo defeated 3 time European Go champion 2 dan professional Fan Hui by 5 games to 0.[52] 2016 Google DeepMind's AlphaGo defeated Lee Sedol 4-1. Lee Sedol is a 9 dan professional Korean Go champion who won 27 major tournaments from 2002 to 2016.[53] Before the match with AlphaGo, Lee Sedol was confident in predicting an easy 5-0 or 4-1 victory.[54] 2017 Google DeepMind's AlphaGo won 60-0 rounds on two public Go websites including 3 wins against world Go champion Ke Jie. [55] 2017 Libratus, designed by Carnegie Mellon professor Tuomas Sandholm and his grad student Noam Brown won against four top players at no-limit Texas hold 'em, a very challenging version of poker. Unlike Go and Chess, Poker is a game in which some information is hidden (the cards of the other player) which makes it much harder to model. [56]
Read the rest here:
Posted in Artificial Intelligence
Comments Off on Timeline of artificial intelligence – Wikipedia
Artificial Intelligence: From The Cloud To Your Pocket – Seeking Alpha
Posted: at 12:29 pm
Artificial intelligence ('AI') is a runaway success, and we think it is going to be one of the biggest, if not the biggest, drivers of future economic growth. Breakthroughs at a fundamental level are leading to a host of groundbreaking applications in autonomous driving, medical diagnostics, automatic translation, speech recognition and more.
Consider, for instance, the acceleration in speech recognition accuracy over the last year or so.
We're only at the beginning of these developments, which are unfolding in several overlapping stages.
We have described the development of specialist AI chips in an earlier article, where we already touched on the new phase now emerging: the move of AI from the cloud to the device (usually the mobile phone).
This certainly isn't a universal movement: it involves inference (applying trained algorithms to answer queries) rather than the more compute-heavy training (where the algorithms are improved through many iterations over massive amounts of data).
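To make the distinction concrete, here is a minimal Python sketch of the two workloads, using a toy logistic regression; the data and model are made-up placeholders for illustration. Training loops over a large dataset many times, while inference is a single cheap computation per query.

```python
import numpy as np

# Toy illustration of the training/inference split. The data and model
# are made-up placeholders; real workloads differ in scale, not in kind.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 32))                  # "massive" training data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)    # labels to learn

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Training: many iterations over all the data -- the compute-heavy part
# that stays in the datacenter.
w = np.zeros(32)
for _ in range(100):
    grad = X.T @ (sigmoid(X @ w) - y) / len(y)
    w -= 0.1 * grad

# Inference: one dot product per query -- cheap enough to run on a phone
# once the trained weights are shipped to the device.
query = rng.normal(size=32)
print("prediction:", sigmoid(query @ w))
```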
GPUs weren't designed with AI in mind, so in principle it isn't much of a stretch to assume that specialist AI chips will take performance higher, even if Nvidia is now designing new architectures like Volta with AI at least partly in mind. From Medium:
Although Pascal has performed well in deep learning, Volta is far superior because it unifies CUDA Cores and Tensor Cores. Tensor Cores are a breakthrough technology designed to speed up AI workloads. The Volta Tensor Cores can generate 12 times more throughput than Pascal, allowing the Tesla V100 to deliver 120 teraflops (a measure of GPU power) of deep learning performance... The new Volta-powered DGX-1 leapfrogs its previous version with significant advances in TFLOPS (170 to 960), CUDA cores (28,672 to 40,960), Tensor Cores (0 to 5120), NVLink vs. PCIe speed-up (5X to 10X), and deep learning training speed (1X to 3X).
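Taking the quoted DGX-1 figures at face value (they are marketing numbers, not independent benchmarks), a quick bit of arithmetic shows where the jump comes from:

```python
# Sanity arithmetic on the DGX-1 specs quoted above (marketing figures,
# not like-for-like benchmarks).
print(f"TFLOPS gain: {960 / 170:.1f}x")            # ~5.6x overall
print(f"CUDA core gain: {40_960 / 28_672:.2f}x")   # only ~1.43x
# Most of the speed-up comes from the new Tensor Cores (0 -> 5120),
# i.e. from specialist AI hardware rather than from more general GPU cores.
```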
However, while the systems on a chip (SoCs) that drive mobile devices contain a GPU, these are pretty tiny compared to their desktop and server equivalents. There is room here too for adding intelligence locally (or, as the jargon has it, 'on the edge').
Advantages
Why would one want to put AI processing 'on the edge' (on the device rather than in the cloud)? There are a few reasons: privacy, latency, security, and energy efficiency.
The privacy issue was best explained by SA contributor Mark Hibben:
The motivation for this is customer privacy. Currently, AI assistants such as Siri, Cortana, Google Assistant, and Alexa are all hosted in the cloud and require Internet connections to access. The simple reason for this is that AI functionality requires a lot of processing horsepower that only datacenters could provide. But this constitutes a potential privacy issue for users, since cloud-hosted AIs are most effective when they are observing the actions of the user. That way they can learn the users' needs and be more "assistive". This means that virtually every user action, including voice and text messaging, could be subject to such observation. This has prompted Apple to look for ways to host some AI functionality on the mobile device, where it can be locked behind the protection of Apple's redoubtable Secure Enclave. The barrier to this is simply the magnitude of the processing task.
Lower latency matters, and connectivity cannot be guaranteed, where life-and-death decisions have to be taken instantly, for instance in autonomous driving.
Device security might benefit from AI-driven behavioural malware detection, which could run more efficiently on specialist chips locally rather than via the cloud.
Specialist AI chips might also provide an energy advantage over AI applications that run on the general-purpose local resources (CPU, GPU) or that constantly fetch data from the cloud (costly especially where no Wi-Fi is available). We understand that this is one motivation for Apple (NASDAQ:AAPL) to develop its own AI chips.
But here are some of the challenges, very well explained by Google (NASDAQ:GOOG) (NASDAQ:GOOGL):
These low-end phones can be about 50 times slower than a good laptop, and a good laptop is already much slower than the data centers that typically run our image recognition systems. So how do we get visual translation on these phones, with no connection to the cloud, translating in real-time as the camera moves around? We needed to develop a very small neural net, and put severe limits on how much we tried to teach it; in essence, put an upper bound on the density of information it handles. The challenge here was in creating the most effective training data. Since we're generating our own training data, we put a lot of effort into including just the right data and nothing more.
One route is what Google is doing: optimizing these very small neural nets and feeding them just the right data. However, if more resources were available locally on the device, these constraints would loosen. Hence the search for a mobile AI chip that is more efficient at handling these neural networks.
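Google's "upper bound on the density of information" has a concrete counterpart in standard model-shrinking techniques. Below is a minimal sketch of post-training 8-bit quantization, one common way to fit a network into a small memory budget; the weight matrix is a random stand-in, not Google's actual translation model.

```python
import numpy as np

# Minimal sketch of post-training 8-bit quantization, a standard way to
# shrink a net for on-device use. The float32 weights are random
# stand-ins, not any real model's parameters.
rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(256, 256)).astype(np.float32)

scale = np.abs(w).max() / 127.0             # map [-max, max] onto int8
w_q = np.round(w / scale).astype(np.int8)   # 4x smaller than float32

# At inference time, dequantize on the fly (or run int8 kernels directly).
w_hat = w_q.astype(np.float32) * scale
print("max abs error:", np.abs(w - w_hat).max())  # small vs. the weight scale
print("size ratio:", w.nbytes / w_q.nbytes)       # 4.0
```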
ARM
ARM, now part of Japan's SoftBank (OTCPK:SFTBY), is adapting its architecture to produce better results for AI, for instance with its DynamIQ architecture. From The Verge:
Dynamiq goes beyond offering just additional flexibility, and will also let chip makers optimize their silicon for tasks like machine learning. Companies will have the option of building AI accelerators directly into chips, helping systems manage data and memory more efficiently. These accelerators could mean that machine learning-powered software features (like Huawei's latest OS, which studies the apps users use most and allocates processing power accordingly) could be implemented more efficiently.
ARM claims that DynamIQ will deliver a 50-fold increase in "AI-related performance" over the next three to five years. That remains to be seen, but it is noteworthy that ARM is now designing chips with AI in mind.
Qualcomm (NASDAQ:QCOM)
Qualcomm, the major user of ARM designs, is also adding AI capabilities to its chips. It isn't adding hardware but software: a machine learning platform called Zeroth, or the Snapdragon Neural Processing Engine.
It's a software development kit that will make it easier to develop deep learning programs directly on mobile phones (and other devices running Snapdragon processors). Here is the selling point (The Verge):
This means that if companies want to build their own deep learning analytics, they won't have to rent servers to deliver their software to customers. And although running deep learning operations locally means limiting their complexity, the sort of programs you can run on your phone or any other portable device are still impressive. The real limitation will be Qualcomm's chips. The new SDK will only work with the latest Snapdragon 820 processors from the latter half of 2016, and the company isn't saying if it plans to expand its availability.
Snapdragons like the 820, the flagship 835 and some of the 600-tier chips incorporate some machine learning capabilities. And Qualcomm isn't doing this all by itself either; from Qualcomm:
An exciting development in this field is Facebook's stepped up investment in Caffe2, the evolution of the open source Caffe framework. At this year's F8 conference, Facebook and Qualcomm Technologies announced a collaboration to support the optimization of Caffe2, Facebook's open source deep learning framework, and the Qualcomm Snapdragon neural processing engine (NPE) framework. The NPE is designed to do the heavy lifting needed to run neural networks efficiently on Snapdragon, leaving developers with more time and resources to focus on creating their innovative user experiences.
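The division of labour that SDKs like the NPE automate can be sketched in plain Python: convert the trained network offline, ship only a compact weights file to the device, and run a small forward-pass-only engine there. Everything below (the file name, the two-layer net) is an illustrative toy, not Qualcomm's actual format or API.

```python
import numpy as np

# Offline (developer machine): export trained weights in a compact file.
# The two-layer net and file name are illustrative toys, not Qualcomm's
# actual model format.
rng = np.random.default_rng(2)
np.savez("model.npz",
         w1=rng.normal(size=(32, 16)).astype(np.float32),
         w2=rng.normal(size=(16, 4)).astype(np.float32))

# On device: a minimal "engine" that only runs the net forward. There is
# no training code here, and no user data needs to leave the device.
def run_on_device(x):
    m = np.load("model.npz")
    h = np.maximum(x @ m["w1"], 0.0)   # ReLU hidden layer
    return h @ m["w2"]                 # output scores

print(run_on_device(rng.normal(size=32).astype(np.float32)))
```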
IBM (NYSE:IBM)
IBM is developing its own specialist AI chip, called TrueNorth. It is a unique product that mirrors the design of biological neural networks. An array of these chips would be like a "brain on a phone" the size of a small rodent's, packing 48 million electronic nerve cells. From Wired:
Each chip mimics about a million neurons, and these can communicate with each other via something similar to a synapse, the connections between neurons in the brain.
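The "neurons communicating via synapses" idea is the spiking-network paradigm, which a textbook leaky integrate-and-fire neuron illustrates. This is a generic sketch of the concept only; TrueNorth's digital, event-driven circuits differ in the details.

```python
import numpy as np

# Textbook leaky integrate-and-fire neuron, the generic spiking model
# behind neuromorphic chips. A sketch of the concept; TrueNorth's
# digital, event-driven implementation differs in the details.
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0
v, spikes = 0.0, []

rng = np.random.default_rng(3)
for t in range(200):
    current = rng.uniform(0.0, 0.12)   # random input ("synaptic") current
    v += dt * (-v / tau + current)     # membrane leaks toward 0, integrates input
    if v >= v_thresh:                  # threshold crossed: emit a spike
        spikes.append(t)
        v = v_reset                    # reset the membrane potential

print(f"{len(spikes)} spikes, first at steps {spikes[:5]}")
```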
The chip won't be out for quite some time, but its main benefit is that it's exceptionally frugal. From Wired:
The upshot is a much simpler architecture that consumes less power. Though the chip contains 5.4 billion transistors, it draws about 70 milliwatts of power. A standard Intel computer processor, by comparison, includes 1.4 billion transistors and consumes about 35 to 140 watts. Even the ARM chips that drive smartphones consume several times more power than the TrueNorth.
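Taking those quoted figures at face value (they are headline numbers, not like-for-like benchmarks), the efficiency gap is three to four orders of magnitude:

```python
# Power arithmetic on the figures quoted above (rough headline numbers,
# not like-for-like benchmarks).
truenorth_w = 0.070                     # 70 milliwatts
intel_low_w, intel_high_w = 35.0, 140.0
print(f"{intel_low_w / truenorth_w:.0f}x to {intel_high_w / truenorth_w:.0f}x "
      "less power than a standard desktop CPU")   # 500x to 2000x
```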
For now, it will do the less computationally heavy stuff involved in inferencing, not the training part of machine learning (feeding algorithms massive amounts of data in order to improve them). From Wired:
But the promise is that IBM's chip can run these algorithms in smaller spaces with considerably less electrical power, letting us shoehorn more AI onto phones and other tiny devices, including hearing aids and, well, wristwatches.
Given its frugal energy needs, IBM's TrueNorth is perhaps the prime candidate to add local intelligence to devices, even tiny ones. This could ultimately revolutionize the internet of things (IoT), which is still in its infancy and built on simple processors and sensors.
Adding intelligence to IoT devices and interconnecting these opens up distributed computing on a staggering scale, but speculation about its possibilities is best left for another time.
Apple
Apple is also working on an AI chip for mobile devices, the Apple Neural Engine. Not much detail is known; its purpose is to offload tasks from the CPU and GPU, saving battery and speeding up workloads like face recognition, speech recognition and mixed reality.
Groq
Then there is the startup Groq, founded by some of the people who developed the Tensor Processing Unit (TPU) at Google. Unfortunately, at this stage very little is known about the company, apart from the fact that it is developing a TPU-like AI chip. Here is venture capitalist Chamath Palihapitiya (from CNBC):
There are no promotional materials or website. All that exists online are a couple SEC filings from October and December showing that the company raised $10.3 million, and an incorporation filing in the state of Delaware on Sept. 12. "We're really excited about Groq," Palihapitiya wrote in an e-mail. "It's too early to talk specifics, but we think what they're building could become a fundamental building block for the next generation of computing."
It's certainly a daring venture: the cost of building a chip company from scratch can be exorbitant, and Groq faces well-established competitors in Google, Apple and Nvidia (NASDAQ:NVDA).
What is also unknown is whether the chip is for datacenters or smaller devices providing local AI processing.
Nvidia
Nvidia is the current leader in datacenter "AI" chips (strictly speaking, these are not dedicated AI chips but GPUs, which do most of the massively parallel computation involved in training neural networks to improve the accuracy of the algorithms).
But it is also building its own solution for local AI computing in the form of the Xavier SoC, which integrates a CPU, a CUDA GPU based on the new Volta architecture, and deep learning accelerators. It is built for the forthcoming Drive PX3 autonomous driving platform.
Notably, Xavier will feature Nvidia's own form of TPU, which it calls a Tensor Core, built into the SoC.
The advantage of on-device computing in autonomous driving is clear: it reduces latency and removes dependence on an internet connection. Critical driving functions simply cannot rely on spotty connectivity or long latencies.
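A back-of-the-envelope calculation shows why. Using the 130 km/h highway speed cited earlier and assumed cellular round-trip times (our illustrative numbers, spanning good to degraded links), the car travels a long way while waiting for a cloud answer:

```python
# Distance a car travels while waiting for a cloud round trip.
# 130 km/h matches the highway speed cited earlier; the round-trip
# times are assumptions spanning good to degraded cellular links.
speed_m_per_s = 130.0 / 3.6            # ~36.1 m/s
for rtt_s in (0.05, 0.2, 1.0):
    print(f"RTT {rtt_s * 1000:4.0f} ms -> {speed_m_per_s * rtt_s:5.1f} m traveled blind")
```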
From what we understand, it's like a supercomputer in a box, but still too big (and, at 20W, too power hungry) for smartphones. Needless to say, autonomous driving is a big emerging market in its own right, and this kind of technology tends to miniaturize over time; the Tensor Core itself will become smaller and less energy hungry, so it might very well find use in other environments.
Conclusion
Before we get too excited, there are serious limitations to putting too much AI computing on small devices like smartphones. Here is Voicebot:
The third chip approach seems logical for on-device AI processing. However, few AI processes actually occur on-device today. Whether it is Amazon's Alexa or Apple's Siri, the language processing and understanding occurs in the cloud. It would be impressive if Apple could actually bring all of Siri's language understanding processing onto a mobile device, but that is unlikely in the near term. It's not just about analyzing the data, it's also about having access to information that helps you interpret and respond to requests. The cloud is well suited to these challenges.
Most AI requires massive amounts of computing power and massive amounts of data. While some of that can be shifted from the cloud to devices, especially where latency and secure coverage are essential (autonomous driving), there are still significant limitations on what can be done locally.
However, the development of specialist AI chips for local (rather than cloud) use is only just starting, and a new and exciting market is opening up, with big companies like Apple, Nvidia, STMicroelectronics (NYSE:STM), and IBM all at it. The companies developing cloud AI chips, like Google and Groq, might very well crack this market too, as Google's TPU seems particularly efficient in terms of energy use.
Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.
I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.
Original post:
Artificial Intelligence: From The Cloud To Your Pocket - Seeking Alpha
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence: From The Cloud To Your Pocket – Seeking Alpha