
Amazon AI Artificial Intelligence Services – AWS

AWS offers a family of AI services that provide cloud-native machine learning and deep learning technologies to address your different use cases and needs. Amazon AI services bring natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and machine learning (ML) technologies within reach of every developer. Amazon Lex makes it easy to build sophisticated text and voice chatbots, powered by Alexa. Amazon Rekognition provides deep learning-based image recognition. Amazon Polly turns text into lifelike speech, and Amazon Machine Learning allows you to quickly build smart ML applications. Amazon AI services make it easy to develop cross-platform applications that securely deploy into production. As fully managed services, Amazon AI services scale seamlessly with low latency and are available at low cost. Amazon AI services empower you to focus on defining and building an entirely new generation of apps that can see, hear, speak, understand, and interact with the world around you.
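As a rough, hedged illustration of how these managed services are reached from code, the sketch below calls Amazon Rekognition and Amazon Polly through the boto3 SDK. The bucket name, object key, region, and voice are placeholder assumptions, and response fields should be checked against the current AWS documentation.

```python
# Minimal sketch: calling two Amazon AI services via boto3.
# Assumes AWS credentials are configured; the bucket, key, and region
# below are placeholders for your own resources.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")
polly = boto3.client("polly", region_name="us-east-1")

# Deep-learning-based image recognition: label the contents of an image in S3.
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photos/dog.jpg"}},
    MaxLabels=5,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Text-to-speech: turn a sentence into lifelike speech with Amazon Polly.
speech = polly.synthesize_speech(
    Text="Hello from the Amazon AI services.",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("hello.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```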

View post:

Amazon AI Artificial Intelligence Services – AWS

Artificial Intelligence: What It Is and How It Really Works

Which is Which?

It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last bit may be impossible without sparking an entire world of debate regarding the existence of consciousness, scientists have certainly been making strides with the first two.

Over the years, we have been hearing a lot about artificial intelligence, machine learning, and deep learning. But how do we differentiate between these three rather abstruse terms, and how are they related to one another?

Artificial intelligence (AI) is the general field that covers everything that has anything to do with imbuing machines with intelligence, with the goal of emulating a human being's unique reasoning faculties. Machine learning is a category within the larger field of artificial intelligence that is concerned with conferring upon machines the ability to learn. This is achieved by using algorithms that discover patterns and generate insights from the data they are exposed to, for application to future decision-making and predictions, a process that sidesteps the need to be programmed specifically for every single possible action.

Deep learning, on the other hand, is a subset of machine learning: it's the most advanced AI field, one that brings AI closest to the goal of enabling machines to learn and think as much like humans as possible.

In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence; the relationship is often pictured as three nested circles, with deep learning at the center.
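To make the distinction concrete, here is a small illustrative sketch (not from the article) that contrasts a hand-coded rule with a model that learns the same decision from labelled examples using scikit-learn; the tiny spam dataset is invented purely for demonstration.

```python
# Illustrative sketch: hand-coded rules versus a learned model.
# The toy "messages" dataset is invented for demonstration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "limited offer, click here",
    "meeting moved to 3pm", "lunch tomorrow?",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

# Hand-coded approach: every rule must be written explicitly.
def rule_based_is_spam(text):
    return any(word in text for word in ("free", "offer", "prize"))

# Machine learning approach: the pattern is discovered from the data.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(X, labels)

new_message = ["claim your free offer today"]
print("rule-based:", rule_based_is_spam(new_message[0]))
print("learned   :", model.predict(vectorizer.transform(new_message))[0])
```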

Here's a little bit of historical background to better illustrate the differences between the three, and how each discovery and advance has paved the way for the next:

Philosophers attempted to make sense of human thinking in the context of a system, and this idea resulted in the coinage of the term artificial intelligence in 1956. Philosophy is still believed to have an important role to play in the advancement of artificial intelligence to this day. Oxford University physicist David Deutsch wrote in an article that he believes philosophy still holds the key to achieving artificial general intelligence (AGI), the level of machine intelligence comparable to that of the human brain, despite the fact that no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.

Advancements in AI have given rise to debates about whether it poses a threat to humanity, whether physical or economic (one proposed response to the economic threat is universal basic income, which is currently being tested in certain countries).

Machine learning is just one approach to realizing artificial intelligence, and it ultimately eliminates (or greatly reduces) the need to hand-code the software with a list of possibilities and with how the machine intelligence ought to react to each of them. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked on evolving artificial intelligence from merely recognizing patterns to learning from experience, making him a pioneer of the field. He used the game of checkers for his research while working with IBM, and this subsequently influenced the programming of early IBM computers.

Current applications are becoming more and more sophisticated, making their way into complex medical applications.

Examples include analyzing large genome sets in an effort to prevent diseases, diagnosing depression based on speech patterns, and identifying people with suicidal tendencies.

As we delve into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics the human brain's neural networks in order to make sense of patterns, even with noise, missing details, and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data and tremendous computing power.
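As a sketch of what such an architecture looks like in code, here is a minimal two-layer network written in plain NumPy. It is a didactic toy rather than a real deep learning model; the XOR data is chosen only because a single linear layer cannot fit it.

```python
# Toy network: two layers learn XOR, which no single linear layer can.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer of 8 units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through the hidden layer and output layer.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error.
    grad_p = (p - y) * p * (1 - p)
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * h.T @ grad_p; b2 -= 0.5 * grad_p.sum(0)
    W1 -= 0.5 * X.T @ grad_h; b1 -= 0.5 * grad_h.sum(0)

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```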

It means not having to laboriously program a prospective AI with that elusive quality of intelligence, however defined. Instead, all the potential for future intelligence and reasoning powers is latent in the program itself, much like an infant's inchoate but infinitely flexible mind.

Watch this video for a basic explanation of how it all works:

Follow this link:

Artificial Intelligence: What It Is and How It Really Works

Algorithm-Driven Design: How Artificial Intelligence Is …


I've been following the idea of algorithm-driven design for several years now and have collected some practical examples. The tools of the approach can help us to construct a UI, prepare assets and content, and personalize the user experience. The information, though, has always been scarce and hasn't been systematic.

However, in 2016, the technological foundations of these tools became easily accessible, and the design community got interested in algorithms, neural networks and artificial intelligence (AI). Now is the time to rethink the modern role of the designer.

One of the most impressive promises of algorithm-driven design was given by the infamous CMS The Grid. It chooses templates and content-presentation styles, and it retouches and crops photos, all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern. However, the product is still in private beta, so we can judge it only by its publications and ads.

The Designer News community found real-world examples of websites created with The Grid, and they had a mixed reaction: people criticized the design and code quality. Many skeptics opened a champagne bottle on that day.

The idea of fully replacing a designer with an algorithm sounds futuristic, but it misses the point of what designers do. Product designers help to translate a raw product idea into a well-thought-out user interface, with solid interaction principles, a sound information architecture, and a visual style, while helping a company to achieve its business goals and strengthen its brand.

Designers make a lot of big and small decisions, many of which are hard to capture in clear processes. Moreover, incoming requirements are not 100% clear and consistent, so designers help product managers resolve these collisions, making for a better product. It's about much more than choosing a suitable template and filling it with content.

However, if we talk about creative collaboration, where designers work in tandem with algorithms to solve product tasks, we see a lot of good examples and clear potential. It's especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.

Designers have learned to juggle many tools and skills to near perfection, and as a result, a new term emerged: product designer. Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes in the code for it. These people are invaluable to any product team.

However, balancing so many skills is hard; you can't dedicate enough time to every aspect of product work. Of course, a recent boom in new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it's still not enough. There is still too much routine, and new responsibilities eat up all of the time we've saved. We need to automate and simplify our work processes even more. I see three key directions for this: constructing a UI, preparing assets and content, and personalizing the user experience.

I'll show you some examples and propose a new approach for this future work process.

Publishing tools such as Medium, Readymag and Squarespace have already simplified the author's work: countless high-quality templates will give the author a pretty design without having to pay for a designer. There is an opportunity to make these templates smarter, so that the barrier to entry gets even lower.

For example, while The Grid is still in beta, a hugely successful website constructor, Wix, has started including algorithm-driven features. The company announced Advanced Design Intelligence, which looks similar to The Grid's semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites. Moreover, it tries to make style suggestions relevant to the client's industry. It's not easy for non-professionals to choose a suitable template, and products like Wix and The Grid could serve as design experts.

Surely, as in the case of The Grid, excluding designers from the creative process leads to clichéd and mediocre results (even if it improves overall quality). However, if we consider this process more like paired design with a computer, then we can offload many routine tasks; for example, designers could create a moodboard on Dribbble or Pinterest, then an algorithm could quickly apply these styles to mockups and propose a suitable template. Designers would become art directors to their new apprentices: computers.

Of course, we can't create a revolutionary product in this way, but we could free some time to create one. Moreover, many everyday tasks are utilitarian and don't require a revolution. If a company is mature enough and has a design system, then algorithms could make it more powerful.

For example, the designer and developer could define the logic that considers content, context and user data; then, a platform would compile a design using principles and patterns. This would allow us to fine-tune the tiniest details for specific usage scenarios, without drawing and coding dozens of screen states by hand. Florian Schulz shows how you can use the idea of interpolation to create many states of components.
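A hedged sketch of the interpolation idea: given two hand-designed states of a component, intermediate states can be computed rather than drawn. The property names and the linear blending below are assumptions for illustration, not Schulz's actual implementation.

```python
# Sketch: derive intermediate component states by interpolating two designed ones.
# The style properties and linear blending are illustrative assumptions.
def lerp(a, b, t):
    return a + (b - a) * t

def interpolate_state(collapsed, expanded, t):
    """Blend every numeric style property between two keyframe states (0 <= t <= 1)."""
    return {key: lerp(collapsed[key], expanded[key], t) for key in collapsed}

card_collapsed = {"height": 56, "padding": 8, "title_size": 14, "image_opacity": 0.0}
card_expanded = {"height": 320, "padding": 24, "title_size": 22, "image_opacity": 1.0}

# Generate five in-between states instead of drawing each one by hand.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, interpolate_state(card_collapsed, card_expanded, t))
```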

My interest in algorithm-driven design sprang up around 2012, when my design team at Mail.Ru Group needed an automated magazine layout. Existing content had a poor semantic structure, and updating it by hand was too expensive. How could we get modern designs, especially when the editors weren't designers?

Well, a special script would parse an article. Then, depending on the article's content (the number of paragraphs and words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), the script would choose the most suitable pattern to present this part of the article. The script also tried to mix patterns, so that the final design had variety. It would save the editors time in reworking old content, and the designer would just have to add new presentation modules. Flipboard launched a very similar model a few years ago.
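The paragraph above describes a rule-driven selection step. The sketch below shows one plausible shape such a script could take, with invented feature names and pattern rules, purely to illustrate mapping content features to presentation patterns; it is not the Mail.Ru script itself.

```python
# Illustrative sketch of choosing a layout pattern from an article's content features.
# Feature names and thresholds are invented; a real system would tune them editorially.
def extract_features(article):
    return {
        "paragraphs": len(article["paragraphs"]),
        "photos": len(article["photos"]),
        "has_quote": any(p.startswith(">") for p in article["paragraphs"]),
    }

PATTERNS = [
    # (pattern name, predicate over the extracted features)
    ("photo-essay", lambda f: f["photos"] >= 3),
    ("pull-quote",  lambda f: f["has_quote"]),
    ("long-read",   lambda f: f["paragraphs"] > 12),
    ("plain-text",  lambda f: True),  # fallback
]

def choose_pattern(article):
    features = extract_features(article)
    # First matching rule wins; mixing patterns across sections adds variety.
    return next(name for name, matches in PATTERNS if matches(features))

article = {"paragraphs": ["Intro...", "> A striking quote", "Body..."], "photos": ["a.jpg"]}
print(choose_pattern(article))  # -> "pull-quote"
```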

Vox Media made a home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library. Next, each layout is examined and scored based on certain traits. Finally, the generator selects the best layout, basically the one with the highest score. It's more efficient than picking the best links by hand, as proven by recommendation engines such as Relap.io.
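A minimal sketch of the generate-and-score approach, assuming a tiny invented pattern library and an arbitrary scoring function; Vox Media's actual generator is, of course, far richer.

```python
# Sketch: enumerate candidate home-page layouts from a pattern library,
# score each one, and keep the highest-scoring layout. All patterns,
# slot counts, and scoring weights are illustrative assumptions.
from itertools import permutations

PATTERN_LIBRARY = {"hero": 3, "river": 4, "grid": 6}   # pattern -> stories it holds
STORIES = 10                                            # stories to place today

def score(layout):
    capacity = sum(PATTERN_LIBRARY[p] for p in layout)
    variety = len(set(layout))
    # Penalize unused or missing slots, reward mixing different patterns.
    return -abs(capacity - STORIES) * 2 + variety

def best_layout(max_sections=3):
    candidates = []
    for n in range(1, max_sections + 1):
        candidates.extend(permutations(PATTERN_LIBRARY, n))
    return max(candidates, key=score)

print(best_layout())  # -> ('river', 'grid'): holds 10 stories with two distinct patterns
```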

Creating cookie-cutter graphic assets in many variations is one of the most boring parts of a designer's work. It takes so much time and is demotivating, when designers could be spending this time on more valuable product work.

Algorithms could take on simple tasks such as color matching. For example, Yandex.Launcher uses an algorithm to automatically set up colors for app cards, based on app icons. Other variables could be set automatically, such as changing text color according to the background color, highlighting eyes in a photo to emphasize emotion, and implementing parametric typography.
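As a sketch of these simpler tasks, the snippet below picks a readable text color for a given background using the WCAG relative-luminance formula; the icon-derived background color is just a placeholder value, not Yandex.Launcher's algorithm.

```python
# Sketch: choose black or white text for a background color taken from an app icon.
# Uses the WCAG relative-luminance formula; the sample color is a placeholder.
def channel(c):
    c = c / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def text_color_for(background_rgb):
    # Light backgrounds get dark text and vice versa.
    return "#000000" if relative_luminance(background_rgb) > 0.179 else "#ffffff"

icon_dominant_color = (178, 34, 34)  # placeholder: a deep red extracted from an icon
print(text_color_for(icon_dominant_color))  # -> "#ffffff"
```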

Algorithms can create an entire composition. Yandex.Market uses a promotional image generator for e-commerce product lists (in Russian). A marketer fills a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines. Netflix went even further: its script crops movie characters for posters, then applies a stylized and localized movie title, then runs automatic experiments on a subset of users. Real magic! Engadget has nurtured a robot apprentice to write simple news articles about new gadgets. Whew!

Truly dark magic happens in neural networks. A fresh example, the Prisma app, stylizes photos to look like works of famous artists. Artisto can process video in a similar way (even streaming video).

However, all of this is still at an early stage. Sure, you could download an app on your phone and get a result in a couple of seconds, rather than struggle with some library on GitHub (as we had to last year); but it's still impossible to upload your own reference style and get a good result without teaching a neural network. However, when that happens at last, will it make illustrators obsolete? I doubt it will for those artists with a solid and unique style. But it will lower the barrier to entry when you need decent illustrations for an article or website but don't need a unique approach. No more boring stock photos!

For a really unique style, it might help to have a quick stylized sketch based on a question like, "What if we did an illustration of a building in our unified style?" For example, the Pixar artists of the animated movie Ratatouille tried to apply several different styles to the movie's scenes and characters; what if a neural network made these sketches? We could also create storyboards and describe scenarios with comics (photos can be easily converted to sketches). The list can get very long.

Finally, there is live identity, too. Animation has become hugely popular in branding recently, but some companies are going even further. For example, Wolff Olins presented a live identity for the Brazilian telecom Oi, which reacts to sound. You just can't create crazy stuff like this without some creative collaboration with algorithms.

One way to get a clear and well-developed strategy is to personalize a product for a narrow audience segment or even for specific users. We see it every day in Facebook newsfeeds, Google search results, Netflix and Spotify recommendations, and many other products. Besides relieving users of the burden of filtering information, personalization also makes the user's connection to the brand more emotional, because the product seems to care so much about them.

However, the key question here is about the role of the designer in these solutions. We rarely have the skills to create algorithms like these; engineers and big data analysts are the ones to do it. Giles Colborne of CX Partners sees a great example in Spotify's Discover Weekly feature: the only element of classic UX design here is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.

Colborne offers advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. It's important to learn how to work with big data and to cluster it into actionable insights. For example, Airbnb learned how to answer the question, "What will the booked price of a listing be on any given day in the future?" so that its hosts could set competitive prices. There are also endless stories about Netflix's recommendation engine.

A relatively new term, anticipatory design, takes a broader view of UX personalization and the anticipation of user wishes. We already have these types of things on our phones: Google Now automatically proposes a way home from work using location history data; Siri proposes similar ideas. However, the key factor here is trust. To execute anticipatory experiences, people have to give large companies permission to gather personal usage data in the background.

I already mentioned some examples of automatic testing of design variations used by Netflix, Vox Media and The Grid. This is one more way to personalize UX that could be put onto the shoulders of algorithms. Liam Spradlin describes the interesting concept of mutative design; it's a well-thought-out model of adaptive interfaces that considers many variables to fit particular users.

I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of the augmentation of human ability in detail. They see three levels of maturity for design tools:

Algorithm-driven design should be something like an exoskeleton for product designers, increasing the number and depth of decisions we can get through. How might designers and computers collaborate?

The working process of digital product designers could potentially look like this:

These tasks are of two types: the analysis of implicitly expressed information and already working solutions, and the synthesis of requirements and solutions for them. Which tools and working methods do we need for each of them?

Analysis of implicitly expressed information about users that can be studied with qualitative research is hard to automate. However, exploring the usage patterns of users of existing products is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).
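A hedged sketch of the clustering step described above: k-means over a few invented behavioral features. Real segmentation pipelines use far richer event data, but the shape of the code is similar.

```python
# Sketch: cluster users into behavioral segments with k-means.
# The feature columns (sessions per week, avg session minutes, purchases)
# and the sample values are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

usage = np.array([
    [2, 3.0, 0],    # occasional browser
    [14, 11.5, 4],  # power user
    [1, 1.2, 0],
    [12, 9.8, 3],
    [3, 2.5, 1],
    [15, 14.0, 5],
])

X = StandardScaler().fit_transform(usage)
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(segments)  # e.g. [0 1 0 1 0 1]: light users vs. heavy users
```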

To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: "A genetic algorithm starts with a fundamental description of the desired outcome, say, an airline's timetable that is optimized for fuel savings and passenger convenience. It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable, the take-off time of Flight 37 from O'Hare, for instance, affects the dependent variables of fuel efficiency and passenger convenience."
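The quoted description maps onto a textbook genetic algorithm. The sketch below evolves a toy "timetable" of departure minutes against an invented cost function; it is meant only to show the selection, crossover, and mutation loop Bruner describes, not a real airline optimizer.

```python
# Toy genetic algorithm in the spirit of Bruner's timetable example.
# The "timetable" is just a list of departure minutes, and the fitness
# function (spread flights evenly across the day) is an invented stand-in
# for real fuel and convenience constraints.
import random

FLIGHTS, DAY_MINUTES = 20, 24 * 60
random.seed(1)

def fitness(timetable):
    times = sorted(timetable)
    gaps = [b - a for a, b in zip(times, times[1:])]
    ideal = DAY_MINUTES / FLIGHTS
    return -sum(abs(g - ideal) for g in gaps)   # higher is better

def crossover(a, b):
    cut = random.randrange(FLIGHTS)
    return a[:cut] + b[cut:]

def mutate(timetable, rate=0.1):
    return [random.randrange(DAY_MINUTES) if random.random() < rate else t
            for t in timetable]

population = [[random.randrange(DAY_MINUTES) for _ in range(FLIGHTS)]
              for _ in range(100)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                      # selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(80)]                # crossover + mutation
    population = parents + children

print(round(fitness(population[0]), 1), sorted(population[0])[:5])
```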

In this scenario, humans curate an algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning. O'Reilly recently published a great mini-book on the topic.

Two years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performance, industrial design, fashion and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach parametric design.

Logojoy is a product to replace freelancers for a simple logo design. You choose favorite styles, pick a color and, voilà, Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It's the perfect example of an algorithm-driven design tool in the real world! Dawson Whitfield, the founder, described the machine learning principles behind it.

However, it's not yet established in digital product design, because it doesn't help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren't static: their usage patterns, content and features change over time, often many times. However, if we consider the overall generative process (a designer defines rules, which are used by an algorithm to create the final object), there's a lot of inspiration to draw from it.

It's not yet known how we can filter a huge number of concepts in digital product design, where usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would be even more productive and creative. However, as product designers, we use generative design every day in brainstorming sessions where we propose dozens of ideas, or when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?

The experimental tool Rene by Jon Gold, who worked at The Grid, is an example of this approach in action. Gold taught a computer to make meaningful typographic decisions. Gold thinks that it's not far from how human designers are taught, so he broke this learning process into several steps:

His idea is similar to what Roelof and Samim say: Tools should be creative partners for designers, not just dumb executants.

Gold's experimental tool Rene is built on these principles. He also talks about imperative and declarative approaches to programming and says that modern design tools should choose the latter, focusing on what we want to calculate, not how. Jon uses vivid formulas to show how this applies to design and has already made a couple of low-level demos. You can try out the tool for yourself. It's a very early concept, but enough to give you the idea.
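To illustrate the declarative framing, here is a small sketch that states the typographic constraints (a base size, a modular ratio, a line-height rule) and computes the scale from them instead of hand-tuning each value. The specific numbers are assumptions, not Rene's actual rules.

```python
# Declarative sketch: state *what* the type scale should satisfy, then compute it.
# The base size, ratio, and line-height rule are illustrative assumptions.
SPEC = {
    "base_size": 16,          # body text, in px
    "ratio": 1.333,           # a "perfect fourth" modular scale
    "levels": ["caption", "body", "h3", "h2", "h1"],
    "line_height": lambda size: round(size * 1.5) if size < 24 else round(size * 1.25),
}

def build_type_scale(spec):
    scale = {}
    for step, level in enumerate(spec["levels"], start=-1):
        size = round(spec["base_size"] * spec["ratio"] ** step, 1)
        scale[level] = {"font_size": size, "line_height": spec["line_height"](size)}
    return scale

for level, rules in build_type_scale(SPEC).items():
    print(f"{level:8} {rules}")
```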

While Jon jokingly calls this approach brute-force design and multiplicative design, he emphasizes the importance of a professional being in control. Notably, he left The Grid team earlier this year.

Unfortunately, there are no tools for product design for the web and mobile that could help with analysis and synthesis on the same level as Autodesk Dreamcatcher does. However, The Grid and Wix could be considered more or less mass-level and straightforward solutions. Adobe is constantly adding features that could be considered intelligent: the latest release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image's original size.

There is another experiment by Adobe and the University of Toronto. DesignScape automatically refines a design layout for you. It can also propose an entirely new composition.

You should definitely follow Adobe in its developments, because the company announced a smart platform named Sensei at the MAX 2016 conference. Sensei uses Adobe's deep expertise in AI and machine learning, and it will be the foundation for future algorithm-driven design features in Adobe's consumer and enterprise products. In its announcement, the company refers to things such as semantic image segmentation (showing each region in an image, labeled by type, for example, building or sky), font recognition (i.e. recognizing a font from a creative asset and recommending similar fonts, even from handwriting), and intelligent audience segmentation.

However, as John McCarthy, the late computer scientist who coined the term artificial intelligence, famously said, "As soon as it works, no one calls it AI anymore." What was once cutting-edge AI is now considered standard behavior for computers. Here are a couple of experimental ideas and tools that could become part of the digital product designer's day-to-day toolkit:

But these are rare and patchy glimpses of the future. Right now, it's more about individual companies building custom solutions for their own tasks. One of the best approaches is to integrate these algorithms into a company's design system. The goals are similar: to automate a significant number of tasks in support of the product line; to achieve and sustain a unified design; to simplify launches; and to support current products more easily.

Modern design systems started as front-end style guidelines, but that's just a first step (integrating design into the code used by developers). The developers are still creating pages by hand. The next step is half-automatic page creation and testing using predefined rules.

Platform Thinking by Yury Vetrov (Source)

Should your company follow this approach?

If we look in the near term, the value of this approach is more or less clear:

Altogether, this frees the designer from the routines of both development support and the creative process, but core decisions are still made by them. A neat side effect is that we will better understand our work, because we will be analyzing it in an attempt to automate parts of it. It will make us more productive and will enable us to better explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.

However, these benefits are not so easy to achieve, and each comes with limitations:

There are also ethical questions: Is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that "computer art" isn't any more provocative than "paint art" or "piano art." The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art: a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?

This is a story of a beautiful future, but we should remember the limits of algorithms: they're built on rules defined by humans, even if the rules are now being supercharged with machine learning. The power of the designer is that they can make and break rules; so, a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.

Moreover, digital products are getting more and more complex: we need to support more platforms, tweak usage scenarios for more user segments, and hypothesize more. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hire more and more designers, offload routine tasks to a computer. Let it play with the fonts.



Yury leads a team comprising UX and visual designers at one of the largest Russian Internet companies, Mail.Ru Group. His team works on communications, content-centric, and mobile products, as well as cross-portal user experiences. Both Yury and his team are doing a lot to grow their professional community in Russia.

Go here to see the original:

Algorithm-Driven Design: How Artificial Intelligence Is …

Artificial intelligence could cost millions of jobs. The …

The growing popularity of artificial intelligence technology will likely lead to millions of lost jobs, especially among less-educated workers, and could exacerbate the economic divide between socioeconomic classes in the United States, according to a newly released White House report.

But that same technology is also essential to improving the country’s productivity growth, a key measure of how efficiently the economy produces goods. That could ultimately lead to higher average wages and fewer work hours. For that reason, the report concludes, our economy actually needs more artificial intelligence, not less.

To reconcile the benefits of the technology with its expected toll, the report states, the federal government should expand both access to education in technical fields and the scope of unemployment benefits. Those policy recommendations, which the Obama administration has made in the past, could head off some of those job losses and support those who find themselves out of work due to the coming economic shift, according to the report.

The White House report comes exactly one month before President-elect Donald Trump is sworn into office, meaning Obama will need his successor to execute on the policy recommendations. That seems unlikely, especially as far as unemployment protections are concerned. Congressional Republicans already aim to curtail some existing entitlement programs to reduce government spending.

Rolling back Social Security protections for out-of-work families “would potentially be more risky at a time when you have these types of changes in the economy that we’re documenting in this report,” Jason Furman, the chairman of the Council of Economic Advisers, said in a call with reporters.

Research conducted in recent years varies widely on how many jobs will be displaced due to artificial intelligence, according to the report. A 2016 study from the Organization for Economic Cooperation and Development estimates that 9 percent of jobs would be completely displaced in the next two decades. Many more jobs will be transformed, if not eliminated. Two academics from Oxford University, however, put that number at 47 percent in a study conducted in 2013.

The staggering difference illustrates how much the impact of artificial intelligence remains speculative. While certain industries, such as transportation and agriculture, appear to be embracing the technology with relative haste, others are likely to face a slower period of adoption.

“If these estimates of threatened jobs translate into job displacement, millions of Americans will have their livelihoods significantly altered and potentially face considerable economic challenges in the short- and medium-term,” the White House report states.

Those same studies were consistent, however, when it came to the population that would feel the economic brunt of artificial intelligence. The workers earning less than $20 per hour and without a high school diploma would be most likely to see their jobs automated away. The projections improved if workers earned higher wages or obtained higher levels of education.

Jobs that involve a high degree of creativity, analytical thinking or interpersonal communication are considered most secure.

The report also highlights potential advantages of the technology. It could lead to greater labor productivity, meaning workers have to work fewer hours to produce the same amount. That could lead to more leisure time and a higher quality of life, the report notes.

“As we look at AI, our biggest economic concern is that we won’t have enough of it, that we won’t have enough productivity growth,” Furman said. “Anything we can do to have more AI will lead to more productivity growth.”

To that end, the report calls for further investment in artificial intelligence research and development. Specifically, the White House sees the technology’s applications in cyber defense and fraud detection as particularly promising.

See the original post here:

Artificial intelligence could cost millions of jobs. The …

World’s largest hedge fund to replace managers with …

The Systematized Intelligence Lab is headed by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011. Photograph: AP

The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making.

Bridgewater Associates has a team of software engineers working on the project at the request of billionaire founder Ray Dalio, who wants to ensure the company can run according to his vision even when he's not there, the Wall Street Journal reported.

"The role of many remaining humans at the firm wouldn't be to make individual choices but to design the criteria by which the system makes decisions, intervening when something isn't working," wrote the Journal, which spoke to five former and current employees.

The firm, which manages $160bn, created the team of programmers specializing in analytics and artificial intelligence, dubbed the Systematized Intelligence Lab, in early 2015. The unit is headed up by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011.

The company is already highly data-driven, with meetings recorded and staff asked to grade each other throughout the day using a ratings system called "dots". The Systematized Intelligence Lab has built a tool that incorporates these ratings into "Baseball Cards" that show employees' strengths and weaknesses. Another app, dubbed The Contract, gets staff to set goals they want to achieve and then tracks how effectively they follow through.

These tools are early applications of PriOS, the over-arching management software that Dalio wants to be making three-quarters of all management decisions within five years. The kinds of decisions PriOS could make include finding the right staff for particular job openings and ranking opposing perspectives from multiple team members when there's a disagreement about how to proceed.

The machine will make the decisions, according to a set of principles laid out by Dalio about the company vision.

"It's ambitious, but it's not unreasonable," said Devin Fidler, research director at the Institute For The Future, who has built a prototype management system called iCEO. "A lot of management is basically information work, the sort of thing that software can get very good at."

Automated decision-making is appealing to businesses as it can save time and eliminate human emotional volatility.

"People have a bad day and it then colors their perception of the world and they make different decisions. In a hedge fund that's a big deal," he added.

Will people happily accept orders from a robotic manager? Fidler isn't so sure. "People tend not to accept a message delivered by a machine," he said, pointing to the need for a human interface.

"In companies that are really good at data analytics very often the decision is made by a statistical algorithm but the decision is conveyed by somebody who can put it in an emotional context," he explained.

Futurist Zoltan Istvan, founder of the Transhumanist party, disagrees. "People will follow the will and statistical might of machines," he said, pointing out that people already outsource way-finding to GPS or the flying of planes to autopilot.

However, the period in which people will need to interact with a robot manager will be brief.

"Soon there just won't be any reason to keep us around," Istvan said. "Sure, humans can fix problems, but machines in a few years' time will be able to fix those problems even better."

Bankers will become dinosaurs.

It's not just the banking sector that will be affected. According to a report by Accenture, artificial intelligence will free people from the drudgery of administrative tasks in many industries. The company surveyed 1,770 managers across 14 countries to find out how artificial intelligence would impact their jobs.

"AI will ultimately prove to be cheaper, more efficient, and potentially more impartial in its actions than human beings," said the authors, writing up the results of the survey in Harvard Business Review.

However, they didn't think there was too much cause for concern; it just means that managers' jobs will change to focus on things only humans can do.

The authors say that machines would be better at administrative tasks like writing earnings reports and tracking schedules and resources while humans would be better at developing messages to inspire the workforce and drafting strategy.

Fidler disagrees. "There's no reason to believe that a lot of what we think of as strategic work or even creative work can't be substantially overtaken by software."

However, he said, that software will need some direction. "It needs human decision making to set objectives."

Bridgewater Associates did not respond to a request for comment.

Read the original here:

World’s largest hedge fund to replace managers with …

Navy Center for Applied Research in Artificial Intelligence

The Navy Center for Applied Research in Artificial Intelligence (NCARAI) has been involved in both basic and applied research in artificial intelligence, cognitive science, autonomy, and human-centered computing since its inception in 1981. NCARAI, part of the Information Technology Division within the Naval Research Laboratory, is engaged in research and development efforts designed to address the application of artificial intelligence technology and techniques to critical Navy and national problems.

The research program of the Center is directed toward understanding the design and operation of systems capable of improving performance based on experience; efficient and effective interaction with other systems and with humans; sensor-based control of autonomous activity; and the integration of varieties of reasoning as necessary to support complex decision-making. The emphasis at NCARAI is the linkage of theory and application in demonstration projects that use a full spectrum of artificial intelligence techniques.

The NCARAI has active research groups in Adaptive Systems, Intelligent Systems, Interactive Systems, and Perceptual Systems.

Contact: Alan C. Schultz, Director, Navy Center for Applied Research in Artificial Intelligence, Code 5510, Washington, DC 20375. Email: w5510@aic.nrl.navy.mil

Release Number: 13-1231-3165

Read this article:

Navy Center for Applied Research in Artificial Intelligence

9 Development in Artificial Intelligence | Funding a …

ment” (Nilsson, 1984). Soon, SRI committed itself to the development of an AI-driven robot, Shakey, as a means to achieve its objective. Shakey’s development necessitated extensive basic research in several domains, including planning, natural-language processing, and machine vision. SRI’s achievements in these areas (e.g., the STRIPS planning system and work in machine vision) have endured, but changes in the funder’s expectations for this research exposed SRI’s AI program to substantial criticism in spite of these real achievements.

Under J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, DARPA continued to invest in AI research at CMU, MIT, Stanford, and SRI and, to a lesser extent, other institutions. Licklider (1964) asserted that AI was central to DARPA's mission because it was a key to the development of advanced command-and-control systems. Artificial intelligence was a broad category for Licklider (and his immediate successors), who "supported work in problem solving, natural language processing, pattern recognition, heuristic programming, automatic theorem proving, graphics, and intelligent automata. Various problems relating to human-machine communication (tablets, graphic systems, hand-eye coordination) were all pursued with IPTO support" (Norberg and O'Neill, 1996).

These categories were sufficiently broad that researchers like McCarthy, Minsky, and Newell could view their institutions’ research, during the first 10 to 15 years of DARPA’s AI funding, as essentially unfettered by immediate applications. Moreover, as work in one problem domain spilled over into others easily and naturally, researchers could attack problems from multiple perspectives. Thus, AI was ideally suited to graduate education, and enrollments at each of the AI centers grew rapidly during the first decade of DARPA funding.

DARPA’s early support launched a golden age of AI research and rapidly advanced the emergence of a formal discipline. Much of DARPA’s funding for AI was contained in larger program initiatives. Licklider considered AI a part of his general charter of Computers, Command, and Control. Project MAC (see Box 4.2), a project on time-shared computing at MIT, allocated roughly one-third of its $2.3 million annual budget to AI research, with few specific objectives.

The history of speech recognition systems illustrates several themes common to AI research more generally: the long time periods between the initial research and development of successful products, and the interactions between AI researchers and the broader community of researchers in machine intelligence. Many capabilities of today’s speech-recognition systems derive from the early work of statisticians, electrical engineers,

Continue reading here:

9 Development in Artificial Intelligence | Funding a …

2016: The year artificial intelligence exploded – SD Times

Artificial intelligence isn't a new concept. It is something that companies and businesses have been trying to implement (and something that society has feared) for decades. However, with all the recent advancements to democratize artificial intelligence and use it for good, almost every company started to turn to this technology and technique in 2016.

The year started with Facebook's CEO Mark Zuckerberg announcing his plan to build an artificially intelligent assistant to do everything from adjusting the temperature in his house to checking up on his baby girl. He worked throughout the year to bring his plan to life, with an update in August that stated he was almost ready to show off his AI to the world.

In November, Facebook announced it was beginning to focus on giving computers the ability to think, learn, plan and reason like humans. In order to change the negative stigma people associate with AI, the company ended its year with the release of AI educational videos designed to make the technology easier to understand.

Microsoft followed Facebook's pursuit of artificial intelligence, but instead of building its own personal assistant, the company made strides to democratize AI. In January, the company released its deep learning solution, Computational Network Toolkit (CNTK), on GitHub. Recently, Microsoft announced an update to CNTK with new Python and C++ programming language functionalities, as well as reinforcement learning algorithm capabilities. In July, Microsoft also open-sourced its Minecraft AI testing platform to provide developers with a test bed for their AI research.

But the company's AI goals didn't stop there. At its Ignite conference in September, CEO Satya Nadella announced his company's objective to make AI easier to understand. "We want to empower people with the tools of AI so they can build their own solutions," he said. Following Nadella's announcement, Microsoft formed an artificial intelligence division known as the Partnership on AI with top tech companies such as Amazon, Facebook, Google DeepMind and IBM. Microsoft ended the year teaming up with OpenAI to advance AI research.

Google started the year with a major breakthrough in artificial intelligence. The company's AI system, AlphaGo, was the first AI system to beat a master at the ancient strategy game Go. In April, the company announced it was ready for an AI-first world. "Over time, the computer itself, whatever its form factor, will be an intelligent assistant helping you through your day," said CEO Sundar Pichai. "We will move from mobile-first to an AI-first world."

Pichai reiterated that sentiment at the Google I/O developer conference in May, where he announced that the company's advances in machine learning and AI would bring new and better experiences to its users. For instance, the company announced the voice-based helper Google Assistant, updates to its machine learning toolkit TensorFlow, and the release of the Natural Language API and Cloud Speech API throughout the year. To help bring wider adoption to AI, Google also created a site called AI Experiments in November, designed to make it easier for anyone to explore AI. The year ended for Google with the open-source release of its DeepMind Lab, a 3D platform for agent-based AI research.

IBM, the company known for its cognitive system IBM Watson, also made waves in the AI world this year. The company started the year with the release of IBM Predictive Analytics, a service allowing developers to build machine learning models. In October, the company announced the Watson Data Platform with Machine Learning, and a new AI Nanodegree program with Udacity at its World of Watson conference in October. The company ended the year with the release of Project DataWorks, a solution designed to make AI-powered decisions. It also announced a partnership with Topcoder to bring AI capabilities to developers.

There was a smattering of AI news to be found as well. Baidu Research's Silicon Valley AI Lab released code to advance speech recognition at the beginning of the year. NVIDIA began to develop AI software to accelerate cancer research. Carnegie Mellon University researchers announced a five-year research initiative to reverse-engineer the brain and explore machine learning as well as computer vision. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory developed a technique to understand how and why AI machines make certain decisions. Big Data companies turned to machine learning and deep learning techniques to help derive value from their data. OpenAI rounded out the year with the release of Universe, a new AI software platform for testing and evaluating the general intelligence of AI.

"Artificial intelligence is intended to help people make better decisions. The system learns at scale, gets better through experience, and interacts with humans in a more natural way," said Jonas Nwuke, platform manager for IBM Watson.

More here:

2016: The year artificial intelligence exploded – SD Times

The world’s first demonstration of spintronics-based …

December 20, 2016. Fig. 1: (a) Optical photograph of a fabricated spintronic device that serves as an artificial synapse in the present demonstration; the measurement circuit for resistance switching is also shown. (b) Measured relation between the resistance of the device and the applied current, showing analogue-like resistance variation. (c) Photograph of a spintronic device array mounted on a ceramic package, which is used for the developed artificial neural network. Credit: Tohoku University

Researchers at Tohoku University have, for the first time, successfully demonstrated the basic operation of spintronics-based artificial intelligence.

Artificial intelligence, which emulates the information-processing function of the brain so that machines can quickly execute complex tasks such as image recognition and weather prediction, has attracted growing attention and has already been partly put to practical use.

Currently used artificial intelligence works within the conventional framework of semiconductor-based integrated circuit technology, which lacks the compactness and low power consumption of the human brain. To overcome this challenge, implementing a single solid-state device that plays the role of a synapse is highly promising.

The Tohoku University research group of Professor Hideo Ohno, Professor Shigeo Sato, Professor Yoshihiko Horio, Associate Professor Shunsuke Fukami and Assistant Professor Hisanao Akima developed an artificial neural network in which their recently developed spintronic devices, comprising micro-scale magnetic material, are employed (Fig. 1). The spintronic device used is capable of memorizing arbitrary values between 0 and 1 in an analogue manner, unlike conventional magnetic devices, and can thus perform the learning function that is served by synapses in the brain.

Using the developed network (Fig. 2), the researchers examined an associative memory operation, which is not readily executed by conventional computers. Through multiple trials, they confirmed that the spintronic devices have a learning ability with which the developed artificial neural network can successfully associate memorized patterns (Fig. 3) from noisy input versions, just as the human brain can.
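The associative memory operation described here is conceptually close to a Hopfield-style network, in which stored patterns are recalled from corrupted inputs. The sketch below is a conventional software Hopfield network offered only as an analogy for what the spintronic synapses implement in hardware; it is not the Tohoku group's code.

```python
# Software analogy for associative memory: a small Hopfield network that
# recalls a stored pattern from a corrupted version. This is a conventional
# NumPy illustration, not the spintronics hardware described in the article.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1, 1],   # stored pattern "A"
    [1, 1, 1, -1, -1, -1, 1, 1, 1],    # stored pattern "B"
])

# Hebbian learning: each weight plays the role of an (analogue) synapse.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, steps=5):
    state = state.copy()
    for _ in range(steps):
        state = np.sign(W @ state)   # synchronous update
        state[state == 0] = 1
    return state

noisy = patterns[0].copy()
noisy[[0, 2]] *= -1                  # corrupt two elements of pattern "A"
print("input :", noisy)
print("recall:", recall(noisy))
print("stored:", patterns[0])
```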

The proof-of-concept demonstration in this research is expected to open new horizons in artificial intelligence technology – one which is of a compact size, and which simultaneously achieves fast-processing capabilities and ultralow-power consumption. These features should enable the artificial intelligence to be used in a broad range of societal applications such as image/voice recognition, wearable terminals, sensor networks and nursing-care robots.


More information: W. A. Borders, et al. Analogue spin-orbit torque device for artificial neural network based associative memory operation. Applied Physics Express, DOI: 10.1143/APEX.10.013007


The rest is here:

The world’s first demonstration of spintronics-based …

Artificial Intelligence Market Size and Forecast by 2024

Artificial intelligence is a fast-emerging technology dealing with the development and study of intelligent machines and software. This software is being used across various applications such as manufacturing (assembly-line robots), medical research, and speech recognition systems. It also enables built-in software or machines to operate like human beings, allowing devices to collect and analyze data, reason, talk, make decisions, and act. The global artificial intelligence market was valued at US$ 126.24 Bn in 2015 and is forecast to grow at a CAGR of 36.1% from 2016 to 2024 to reach a value of US$ 3,061.35 Bn in 2024.
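For readers unfamiliar with the metric, the sketch below shows the compound-growth arithmetic behind a CAGR forecast; the example values are arbitrary placeholders rather than a re-derivation of the report's published figures (whose base year and rounding are not restated here).

```python
# Sketch: the compound annual growth rate (CAGR) relationship used in
# market forecasts. The example values are arbitrary placeholders, not
# the report's figures.
def future_value(start, cagr, years):
    return start * (1 + cagr) ** years

def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(round(future_value(100.0, 0.36, 8), 1))   # 100 growing at 36% a year for 8 years
print(round(cagr(100.0, 1000.0, 8) * 100, 1))   # % rate that takes 100 to 1000 in 8 years
```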

The global artificial intelligence market is currently witnessing healthy growth as companies have started leveraging the benefits of such disruptive technologies for effective customer reach and positioning of their services and solutions. Market growth is also supported by an expanding application base of artificial intelligence solutions across various industries. However, factors such as limited access to funding, high upfront investment, and the demand for skilled workers are presently acting as major deterrents to market growth.

On the basis of types of artificial intelligence systems, the market is segmented into artificial neural network, digital assistance system, embedded system, expert system, and automated robotic system. Expert system was the most adopted or revenue generating segment in 2015. This was mainly due to the extensive use of artificial intelligence across various sectors including diagnosis, process control, design, monitoring, scheduling and planning.

Based on the various applications of artificial intelligence systems, the market has been classified into deep learning, smart robots, image recognition, digital personal assistant, querying method, language processing, gesture control, video analysis, speech recognition, context-aware processing, and cyber security. Image recognition is projected to be the fastest-growing segment by application in the global artificial intelligence market. This is due to the growing demand for affective computing technology across various end-use sectors for the better study of systems that can recognize, analyze, process, and simulate human affect.

North America was the leader in the global artificial intelligence market in 2015, holding approximately 38% of the global market revenue share, and is expected to remain dominant throughout the forecast period from 2016 to 2024. High government funding and a strong technological base have been some of the major factors responsible for the top position of the North America region in the artificial intelligence market over the past few years. Middle East and Africa is expected to grow at the highest CAGR of 38.2% throughout the forecast period. This is mainly attributed to enormous opportunities for artificial intelligence in the MEA region in terms of new airport developments and various technological innovations including robotic automation.

The key market players profiled in this report include QlikTech International AB, MicroStrategy Inc., IBM Corporation, Google, Inc., Brighterion Inc., Microsoft Corporation, IntelliResponse Systems Inc., Next IT Corporation, Nuance Communications, and eGain Corporation.

Chapter 1 Preface
1.1 Research Scope
1.2 Market Segmentation
1.3 Research Methodology

Chapter 2 Executive Summary
2.1 Market Snapshot: Global Artificial Intelligence Market, 2015 & 2024
2.2 Global Artificial Intelligence Market Revenue, 2014 - 2024 (US$ Bn) and CAGR (%)

Chapter 3 Global Artificial Intelligence Market Analysis 3.1 Key Trends Analysis 3.2 Market Dynamics 3.2.1 Drivers 3.2.2 Restraints 3.2.3 Opportunities 3.3 Value Chain Analysis 3.4 Global Artificial Intelligence Market Analysis, By Types 3.4.1 Overview 3.4.2 Artificial Neural Network 3.4.3 Digital Assistance System 3.4.4 Embedded System 3.4.5 Expert System 3.4.6 Automated Robotic System 3.5 Global Artificial Intelligence Market Analysis, By Application 3.5.1 Overview 3.5.2 Deep Learning 3.5.3 Smart Robots 3.5.4 Image Recognition 3.5.5 Digital Personal Assistant 3.5.6 Querying Method 3.5.7 Language Processing 3.5.8 Gesture Control 3.5.9 Video Analysis 3.5.10 Speech Recognition 3.5.11 Context Aware Processing 3.5.12 Cyber Security 3.6 Competitive Landscape 3.6.1 Market Positioning of Key Players in Artificial Intelligence Market (2015) 3.6.2 Competitive Strategies Adopted by Leading Players

Chapter 4 North America Artificial Intelligence Market Analysis 4.1 Overview 4.3 North America Artificial Intelligence Market Analysis, by Types 4.3.1 North America Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%) 4.4 North America Artificial Intelligence Market Analysis, By Application 4.4.1 North America Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%) 4.5 North America Artificial Intelligence Market Analysis, by Region 4.5.1 North America Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 5 Europe Artificial Intelligence Market Analysis 5.1 Overview 5.3 Europe Artificial Intelligence Market Analysis, by Types 5.3.1 Europe Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%) 5.4 Europe Artificial Intelligence Market Analysis, By Application 5.4.1 Europe Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%) 5.5 Europe Artificial Intelligence Market Analysis, by Region 5.5.1 Europe Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 6 Asia Pacific Artificial Intelligence Market Analysis 6.1 Overview 6.3 Asia Pacific Artificial Intelligence Market Analysis, by Types 6.3.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%) 6.4 Asia Pacific Artificial Intelligence Market Analysis, By Application 6.4.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%) 6.5 Asia Pacific Artificial Intelligence Market Analysis, by Region 6.5.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 7 Middle East and Africa (MEA) Artificial Intelligence Market Analysis 7.1 Overview 7.3 MEA Artificial Intelligence Market Analysis, by Types 7.3.1 MEA Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%) 7.4 MEA Artificial Intelligence Market Analysis, By Application 7.4.1 MEA Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%) 7.5 MEA Artificial Intelligence Market Analysis, by Region 7.5.1 MEA Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 8 Latin America Artificial Intelligence Market Analysis 8.1 Overview 8.3 Latin America Artificial Intelligence Market Analysis, by Types 8.3.1 Latin America Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%) 8.4 Latin America Artificial Intelligence Market Analysis, By Application 8.4.1 Latin America Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%) 8.5 Latin America Artificial Intelligence Market Analysis, by Region 8.5.1 Latin America Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 9 Company Profiles 9.1 QlikTech International AB 9.2 MicroStrategy, Inc. 9.3 IBM Corporation 9.4 Google, Inc. 9.5 Brighterion, Inc. 9.6 Microsoft Corporation 9.7 IntelliResponse Systems Inc. 9.8 Next IT Corporation 9.9 Nuance Communications 9.10 eGain Corporation

The Artificial Intelligence Market report provides analysis of the global artificial intelligence market for the period 2014–2024, wherein the years from 2016 to 2024 constitute the forecast period and 2015 is considered the base year. The report covers all the major trends and technologies playing a significant role in the artificial intelligence market's growth over the forecast period. It also highlights the drivers, restraints, and opportunities expected to influence market growth during this period. The study provides a holistic perspective on the market's growth in terms of revenue (in US$ Bn) across different geographies, which include Asia Pacific (APAC), Latin America (LATAM), North America, Europe, and Middle East & Africa (MEA).

The market overview section of the report showcases the market's dynamics and trends, such as the drivers, restraints, and opportunities that influence the current nature and future status of this market. Moreover, the report provides an overview of the various strategies and winning imperatives of the key players in the artificial intelligence market and analyzes their behavior in the prevailing market dynamics.

The report segments the global artificial intelligence market, by type of artificial intelligence system, into artificial neural network, digital assistance system, embedded system, expert system, and automated robotic system. By application, the market has been classified into deep learning, smart robots, image recognition, digital personal assistant, querying method, language processing, gesture control, video analysis, speech recognition, context aware processing, and cyber security. The report thus provides in-depth cross-segment analysis of the artificial intelligence market and classifies it at various levels, thereby providing valuable insights at both the macro and micro level.

The report also provides the competitive landscape for the artificial intelligence market, positioning all the major players according to their geographic presence, market attractiveness, and recent key developments. The complete artificial intelligence market estimates are the result of our in-depth secondary research, primary interviews, and in-house expert panel reviews. These market estimates have been analyzed by taking into account the impact of different political, social, economic, technological, and legal factors, along with the current market dynamics affecting the artificial intelligence market's growth.

QlikTech International AB, MicroStrategy Inc., IBM Corporation, Google, Inc., Brighterion Inc., Microsoft Corporation, IntelliResponse Systems Inc., Next IT Corporation, Nuance Communications, and eGain Corporation are some of the major players profiled in this study. Details such as financials, business strategies, recent developments, and other strategic information pertaining to these players have been provided as part of the company profiling.

Read more from the original source:

Artificial Intelligence Market Size and Forecast by 2024

2017 Is the Year of Artificial Intelligence | Inc.com

A couple of weeks ago, I polled the business community for their top technology predictions for 2017, and within a couple of hours, I had a few hundred emails in my inbox with some really great insights. Fascinatingly enough, the vast majority of these responses had something to do with the rise of artificial intelligence in our everyday business lives.

Yes, it appears that the robots are taking over, which seems like a scary thought. The mere mention of AI still conjures images of Will Smith in I, Robot, which totally does not bode well for humans…

…okay, well, maybe robots won’t take over planet earth, but there is a big fear amongst many that they’ll take over our jobs. Some folks are also becoming overwhelmed with the thought of AI being implemented into business and are afraid of not being technologically savvy enough to keep up.

So what does the future look like with human beings working alongside artificial intelligence beings?

Sales Will Become More Efficient

Big data has certainly optimized sales efforts by eliminating the need for an icy cold call. Using data technologies, companies can identify their top sales leads and focus their efforts on the folks most likely to buy their product (instead of wasting time on people who have no interest). According to recent findings by Forbes, 89 percent of marketers now use predictive analytics to improve their sales ROI.

For AI innovators, predictive marketing technology is awesome for sales, but it could be a whole lot more efficient. According to a recent study by Conversica, the vast majority of companies fail to follow up with a third of their interested leads. This is where AI Assistants come in. These virtual beings are given a name, email address, and title, and can do all the preliminary dirty work for sales teams, which involves reaching out to leads, following up, and having initial conversations with customers to gauge interest. Then, when the lead is almost ready to buy, it is passed to a human.

This system ensures that the one-third of qualified leads aren't falling through the cracks due to human error, and because of this, AI will actually lead to:

The Creation of More Jobs

Since the AI Assistants are handling a huge bulk of the work, don’t you think that using them might put many human salespeople out of a job? Alex Terry, CEO of Conversica, answers this question all the time.

“It’s the same issue a lot of people had with replacing bank tellers with ATM machines,” he says. What the general public may not know is that by making banks more efficient with ATMs in the 1990s, each branch had less overhead and was able to do more work, which led to a higher return. While individual branches had fewer people working in them, the increase in profit allowed banking companies to open more branches and hire more people to staff them.

This same scenario is being seen with AI.

“When sales teams become more efficient, they increase their ROI–which allows companies to have a greater marketing budget,” says Terry, whose AI technology is being used by the likes of IBM and other Fortune 500 companies. “Therefore, they can hire more people.”

All of this to say, folks, human beings aren’t going anywhere. “AI was never meant to replace humans, but to work alongside them,” Terry explains. “Humans and computers together is the most powerful combination.”

You see, AI is not impeding human interaction, but rather it is enhancing it by connecting us with those we will have the most success with. And obviously, when we have more success, we are able to grow our businesses and offer a better life for ourselves and our families. So bring on the ‘bots!

Read more:

2017 Is the Year of Artificial Intelligence | Inc.com

Demystifying artificial intelligence What business leaders …

Artificial Intelligence still sounds more like science fiction than it does an IT investment, but it is increasingly real, and critical to the success of the Internet of Things.

In the last several years, interest in artificial intelligence (AI) has surged. Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011.1 Technology companies have invested billions more acquiring AI startups. Press coverage of the topic has been breathless, fueled by the huge investments and by pundits asserting that computers are starting to kill jobs, will soon be smarter than people, and could threaten the survival of humankind. Consider the following:

IBM has committed $1 billion to commercializing Watson, its cognitive computing platform.2

Google has made major investments in AI in recent years, including acquiring eight robotics companies and a machine-learning company.3

Facebook hired AI luminary Yann LeCun to create an AI laboratory with the goal of bringing major advances in the field.4

Amid all the hype, there is significant commercial activity underway in the area of AI that is affecting or will likely soon affect organizations in every sector. Business leaders should understand what AI really is and where it is heading.

The first steps in demystifying AI are defining the term, outlining its history, and describing some of the core technologies underlying it.

The field of AI suffers from both too few and too many definitions. Nils Nilsson, one of the founding researchers in the field, has written that AI may lack an agreed-upon definition…11 A well-respected AI textbook, now in its third edition, offers eight definitions, and declines to prefer one over the other.12 For us, a useful definition of AI is the theory and development of computer systems able to perform tasks that normally require human intelligence. Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.13 Defining AI in terms of the tasks humans do, rather than how humans think, allows us to discuss its practical applications today, well before science arrives at a definitive understanding of the neurological mechanisms of intelligence.14 It is worth noting that the set of tasks that normally require human intelligence is subject to change as computer systems able to perform those tasks are invented and then widely diffused. Thus, the meaning of AI evolves over time, a phenomenon known as the AI effect, concisely stated as "AI is whatever hasn't been done yet."15

AI is not a new idea. Indeed, the term itself dates from the 1950s. The history of the field is marked by "periods of hype and high expectations alternating with periods of setback and disappointment," as a recent apt summation puts it.16 After articulating the bold goal of simulating human intelligence in the 1950s, researchers developed a range of demonstration programs through the 1960s and into the '70s that showed computers able to accomplish a number of tasks once thought to be solely the domain of human endeavor, such as proving theorems, solving calculus problems, responding to commands by planning and performing physical actions, even impersonating a psychotherapist and composing music. But simplistic algorithms, poor methods for handling uncertainty (a surprisingly ubiquitous fact of life), and limitations on computing power stymied attempts to tackle harder or more diverse problems. Amid disappointment with a lack of continued progress, AI fell out of fashion by the mid-1970s.

In the early 1980s, Japan launched a program to develop an advanced computer architecture that could advance the field of AI. Western anxiety about losing ground to Japan contributed to decisions to invest anew in AI. The 1980s saw the launch of commercial vendors of AI technology products, some of which had initial public offerings, such as Intellicorp, Symbolics,17 and Teknowledge.18 By the end of the 1980s, perhaps half of the Fortune 500 were developing or maintaining expert systems, an AI technology that models human expertise with a knowledge base of facts and rules.19 High hopes for the potential of expert systems were eventually tempered as their limitations, including a glaring lack of common sense, the difficulty of capturing experts' tacit knowledge, and the cost and complexity of building and maintaining large systems, became widely recognized. AI ran out of steam again.

In the 1990s, technical work on AI continued with a lower profile. Techniques such as neural networks and genetic algorithms received fresh attention, in part because they avoided some of the limitations of expert systems and partly because new algorithms made them more effective. The design of neural networks is inspired by the structure of the brain. Genetic algorithms aim to evolve solutions to problems by iteratively generating candidate solutions, culling the weakest, and introducing new solution variants by introducing random mutations.
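To make the genetic-algorithm idea concrete, here is a minimal Python sketch of the generate, cull, and mutate loop described above. The toy fitness function (counting 1s in a bit string) and the population and mutation settings are illustrative assumptions, not drawn from any system mentioned in this article.

```python
import random

TARGET_LEN = 20        # length of the bit string being evolved
POP_SIZE = 30          # candidate solutions kept each generation
MUTATION_RATE = 0.02   # probability of flipping each bit
GENERATIONS = 100

def fitness(bits):
    # Toy objective: count of 1s; a real application would score a candidate solution.
    return sum(bits)

def random_individual():
    return [random.randint(0, 1) for _ in range(TARGET_LEN)]

def crossover(a, b):
    # Single-point crossover combines two parents into a new candidate.
    point = random.randrange(1, TARGET_LEN)
    return a[:point] + b[point:]

def mutate(bits):
    # Random mutations introduce new solution variants.
    return [1 - b if random.random() < MUTATION_RATE else b for b in bits]

population = [random_individual() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Cull the weakest: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Refill the population with mutated offspring of the survivors.
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print("best fitness:", fitness(best), "out of", TARGET_LEN)
```

Run repeatedly, the population tends to converge on the all-ones string; the same loop applies to any problem for which a fitness score can be computed.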

By the late 2000s, a number of factors helped renew progress in AI, particularly in a few key technologies. We explain the factors most responsible for the recent progress below and then describe those technologies in more detail.

Moore's Law. The relentless increase in computing power available at a given price and size, sometimes known as Moore's Law after Intel cofounder Gordon Moore, has benefited all forms of computing, including the types AI researchers use. Advanced system designs that might have worked in principle were in practice off limits just a few years ago because they required computer power that was cost-prohibitive or just didn't exist. Today, the power necessary to implement these designs is readily available. A dramatic illustration: The current generation of microprocessors delivers 4 million times the performance of the first single-chip microprocessor introduced in 1971.20

Big data. Thanks in part to the Internet, social media, mobile devices, and low-cost sensors, the volume of data in the world is increasing rapidly.21 Growing understanding of the potential value of this data22 has led to the development of new techniques for managing and analyzing very large data sets.23 Big data has been a boon to the development of AI. The reason is that some AI techniques use statistical models for reasoning probabilistically about data such as images, text, or speech. These models can be improved, or trained, by exposing them to large sets of data, which are now more readily available than ever.24

The Internet and the cloud. Closely related to the big data phenomenon, the Internet and cloud computing can be credited with advances in AI for two reasons. First, they make available vast amounts of data and information to any Internet-connected computing device. This has helped propel work on AI approaches that require large data sets.25 Second, they have provided a way for humans to collaborate, sometimes explicitly and at other times implicitly, in helping to train AI systems. For example, some researchers have used cloud-based crowdsourcing services like Mechanical Turk to enlist thousands of humans to describe digital images, enabling image classification algorithms to learn from these descriptions.26 Google's language translation project analyzes feedback and freely offered contributions from its users to improve the quality of automated translation.27

New algorithms. An algorithm is a routine process for solving a problem or performing a task. In recent years, new algorithms have been developed that dramatically improve the performance of machine learning, an important technology in its own right and an enabler of other technologies such as computer vision.28 (These technologies are described below.) The fact that machine learning algorithms are now available on an open-source basis is likely to foster further improvements as developers contribute enhancements to each other's work.29

We distinguish between the field of AI and the technologies that emanate from the field. The popular press portrays AI as the advent of computers as smart as, or smarter than, humans. The individual technologies, by contrast, are getting better at performing specific tasks that only humans used to be able to do. We call these cognitive technologies (figure 1), and it is these that business and public sector leaders should focus their attention on. Below we describe some of the most important cognitive technologies: those that are seeing wide adoption, making rapid progress, or receiving significant investment.

Computer vision refers to the ability of computers to identify objects, scenes, and activities in images. Computer vision technology uses sequences of image-processing operations and other techniques to decompose the task of analyzing images into manageable pieces. There are techniques for detecting the edges and textures of objects in an image, for instance. Classification techniques may be used to determine whether the features identified in an image are likely to represent a kind of object already known to the system.30
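As a toy illustration of the edge-detection step mentioned above, the following sketch builds a small synthetic grayscale image in NumPy and flags pixels where brightness changes sharply. The image, the threshold, and the simple difference-based gradient are all simplifying assumptions; real computer vision pipelines use far more sophisticated operators and learned classifiers.

```python
import numpy as np

# Synthetic 8x8 grayscale image: a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Approximate horizontal and vertical intensity gradients with neighbor differences.
gx = np.abs(np.diff(image, axis=1))   # change between neighboring columns
gy = np.abs(np.diff(image, axis=0))   # change between neighboring rows

# Flag a pixel as an edge if either gradient exceeds a threshold.
edges = (gx[:-1, :] > 0.5) | (gy[:, :-1] > 0.5)

print(edges.astype(int))   # 1s trace the outline of the bright square
```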

Computer vision has diverse applications, including analyzing medical imaging to improve prediction, diagnosis, and treatment of diseases;31 face recognition, used by Facebook to automatically identify people in photographs32 and in security and surveillance to spot suspects;33 and shopping: consumers can now use smartphones to photograph products and be presented with options for purchasing them.34

Machine vision, a related discipline, generally refers to vision applications in industrial automation, where computers recognize objects such as manufactured parts in a highly constrained factory environment, a rather simpler task than the goals of computer vision, which seeks to operate in unconstrained environments. While computer vision is an area of ongoing computer science research, machine vision is a solved problem, the subject not of research but of systems engineering.35 Because the range of applications for computer vision is expanding, startup companies working in this area have attracted hundreds of millions of dollars in venture capital investment since 2011.36

Machine learning refers to the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, the pattern can be used to make predictions. For instance, presented with a database of information about credit card transactions, such as date, time, merchant, merchant location, price, and whether the transaction was legitimate or fraudulent, a machine learning system learns patterns that are predictive of fraud. The more transaction data it processes, the better its predictions are expected to become.
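A minimal sketch of that credit card example, assuming scikit-learn and NumPy are available: the transaction features, the labeling rule, and the logistic regression model here are synthetic stand-ins chosen for illustration, not a description of any production fraud system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic transactions: [amount in dollars, hour of day, merchant risk score].
n = 1000
X = np.column_stack([
    rng.exponential(80, n),    # amount
    rng.integers(0, 24, n),    # hour of day
    rng.random(n),             # merchant risk score
])
# Synthetic labeling rule: large purchases at higher-risk merchants are marked as fraud.
y = ((X[:, 0] > 150) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# The model learns a pattern predictive of fraud from the labeled training data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
```

The point of the sketch is the workflow, not the numbers: the rule separating fraud from legitimate transactions is never written into the classifier; it is discovered from the data, and more data generally sharpens the discovered pattern.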

Applications of machine learning are very broad, with the potential to improve performance in nearly any activity that generates large amounts of data. Besides fraud screening, these include sales forecasting, inventory management, oil and gas exploration, and public health. Machine learning techniques often play a role in other cognitive technologies such as computer vision, which can train vision models on a large database of images to improve their ability to recognize classes of objects.37 Machine learning is one of the hottest areas in cognitive technologies today, having attracted around a billion dollars in venture capital investment between 2011 and mid-2014.38 Google is said to have invested some $400 million to acquire DeepMind, a machine learning company, in 2014.39

Natural language processing refers to the ability of computers to work with text the way humans do, for instance, extracting meaning from text or even generating text that is readable, stylistically natural, and grammatically correct. A natural language processing system doesn't understand text the way humans do, but it can manipulate text in sophisticated ways, such as automatically identifying all of the people and places mentioned in a document; identifying the main topic of a document; or extracting and tabulating the terms and conditions in a stack of human-readable contracts. None of these tasks is possible with traditional text processing software that operates on simple text matches and patterns. Consider a single hackneyed example that illustrates one of the challenges of natural language processing. The meaning of each word in the sentence "Time flies like an arrow" seems clear, until you encounter the sentence "Fruit flies like a banana." Substituting fruit for time and banana for arrow changes the meaning of the words flies and like.40

Natural language processing, like computer vision, comprises multiple techniques that may be used together to achieve its goals. Language models are used to predict the probability distribution of language expressions, the likelihood that a given string of characters or words is a valid part of a language, for instance. Feature selection may be used to identify the elements of a piece of text that may distinguish one kind of text from another, say a spam email versus a legitimate one. Classification, powered by machine learning, would then operate on the extracted features to classify a message as spam or not.41
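The following sketch shows the feature-extraction-plus-classification pattern just described, assuming scikit-learn is available. The handful of example messages is invented for illustration; a real spam filter would be trained on a large labeled corpus and use much richer features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative training set: 1 = spam, 0 = legitimate.
messages = [
    "win a free prize now", "lowest price guaranteed click here",
    "claim your free reward today", "meeting moved to 3pm tomorrow",
    "can you review the attached report", "lunch on thursday works for me",
]
labels = [1, 1, 1, 0, 0, 0]

# Feature extraction: turn each message into word-count features.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

# Classification: a naive Bayes model learns which features distinguish spam.
classifier = MultinomialNB()
classifier.fit(X, labels)

test = vectorizer.transform(["free prize click here", "see you at the meeting"])
print(classifier.predict(test))   # typically [1 0]
```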

Because context is so important for understanding why time flies and fruit flies are so different, practical applications of natural language processing often address relatively narrow domains, such as analyzing customer feedback about a particular product or service,42 automating discovery in civil litigation or government investigations (e-discovery),43 and automating the writing of formulaic stories on topics such as corporate earnings or sports.44

Robotics, by integrating cognitive technologies such as computer vision and automated planning with tiny, high-performance sensors, actuators, and cleverly designed hardware, has given rise to a new generation of robots that can work alongside people and flexibly perform many different tasks in unpredictable environments.45 Examples include unmanned aerial vehicles,46 cobots that share jobs with humans on the factory floor,47 robotic vacuum cleaners,48 and a slew of consumer products, from toys to home helpers.49

Speech recognition focuses on automatically and accurately transcribing human speech. The technology has to contend with some of the same challenges as natural language processing, in addition to the difficulties of coping with diverse accents and background noise, distinguishing between homophones (buy and by sound the same), and working at the speed of natural speech. Speech recognition systems use some of the same techniques as natural language processing systems, plus others such as acoustic models that describe sounds and their probability of occurring in a given sequence in a given language.50 Applications include medical dictation, hands-free writing, voice control of computer systems, and telephone customer service; Domino's Pizza recently introduced a mobile app that allows customers to use natural speech to order, for instance.51

As noted, the cognitive technologies above are making rapid progress and attracting significant investment. Other cognitive technologies are relatively mature and can still be important components of enterprise software systems. These more mature cognitive technologies include optimization, which automates complex decisions and trade-offs about limited resources;52 planning and scheduling, which entails devising a sequence of actions to meet goals and observe constraints;53 and rules-based systems, the technology underlying expert systems, which use databases of knowledge and rules to automate the process of making inferences about information.54

Organizations in every sector of the economy are already using cognitive technologies in diverse business functions.

In banking, automated fraud detection systems use machine learning to identify behavior patterns that could indicate fraudulent payment activity, speech recognition technology is used to automate customer service telephone interactions, and voice recognition technology verifies the identity of callers.55

In health care, automatic speech recognition for transcribing notes dictated by physicians is used in around half of US hospitals, and its use is growing rapidly.56 Computer vision systems automate the analysis of mammograms and other medical images.57 IBM's Watson uses natural language processing to read and understand a vast medical literature, hypothesis generation techniques to automate diagnosis, and machine learning to improve its accuracy.58

In life sciences, machine learning systems are being used to predict cause-and-effect relationships from biological data59 and the activities of compounds,60 helping pharmaceutical companies identify promising drugs.61

In media and entertainment, a number of companies are using data analytics and natural language generation technology to automatically draft articles and other narrative material about data-focused topics such as corporate earnings or sports game summaries.62

Oil and gas producers use machine learning in a wide range of applications, from locating mineral deposits63 to diagnosing mechanical problems with drilling equipment.64

The public sector is adopting cognitive technologies for a variety of purposes including surveillance, compliance and fraud detection, and automation. The state of Georgia, for instance, employs a system combining automated handwriting recognition with crowdsourced human assistance to digitize financial disclosure and campaign contribution forms.65

Retailers use machine learning to automatically discover attractive cross-sell offers and effective promotions.66

Technology companies are using cognitive technologies such as computer vision and machine learning to enhance products or create entirely new product categories, such as the Roomba robotic vacuum cleaner67 or the Nest intelligent thermostat.68

As the examples above show, the potential business benefits of cognitive technologies are much broader than cost savings that may be implied by the term automation. They include:

The impact of cognitive technologies on business should grow significantly over the next five years. This is due to two factors. First, the performance of these technologies has improved substantially in recent years, and we can expect continuing R&D efforts to extend this progress. Second, billions of dollars have been invested to commercialize these technologies. Many companies are working to tailor and package cognitive technologies for a range of sectors and business functions, making them easier to buy and easier to deploy. While not all of these vendors will thrive, their activities should collectively drive the market forward. Together, improvements in performance and commercialization are expanding the range of applications for cognitive technologies and will likely continue to do so over the next several years (figure 2).

Examples of the strides made by cognitive technologies are easy to find. The accuracy of Google's voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later, according to one assessment.69 Computer vision has progressed rapidly as well. A standard benchmark used by computer vision researchers has shown a fourfold improvement in image classification accuracy from 2010 to 2014.70 Facebook reported in a peer-reviewed paper that its DeepFace technology can now recognize faces with 97 percent accuracy.71 IBM was able to double the precision of Watson's answers in the few years leading up to its famous Jeopardy! victory in 2011.72 The company now reports its technology is 2,400 percent smarter today than on the day of that triumph.73

As performance improves, the applicability of a technology broadens. For instance, when voice recognition systems required painstaking training and could only work well with controlled vocabularies, they found application in specialized areas such as medical dictation but did not gain wide adoption. Today, tens of millions of Web searches are performed by voice every month.74 Computer vision systems used to be confined to industrial automation applications but now, as we've seen, are used in surveillance, security, and numerous consumer applications. IBM is now seeking to apply Watson to a broad range of domains outside of game-playing, from medical diagnostics to research to financial advice to call center automation.75

Not all cognitive technologies are seeing such rapid improvement. Machine translation has progressed, but at a slower pace. One benchmark found a 13 percent improvement in the accuracy of Arabic to English translations between 2009 and 2012, for instance.76 Even if these technologies are imperfect, they can be good enough to have a big impact on the work organizations do. Professional translators regularly rely on machine translation, for instance, to improve their efficiency, automating routine translation tasks so they can focus on the challenging ones.77

From 2011 through May 2014, over $2 billion in venture capital funding flowed to companies building products and services based on cognitive technologies.78 During this same period, over 100 companies merged or were acquired, some by technology giants such as Amazon, Apple, IBM, Facebook, and Google.79 All of this investment has nurtured a diverse landscape of companies that are commercializing cognitive technologies.

This is not the place for providing a detailed analysis of the vendor landscape. Rather, we want to illustrate the diversity of offerings, since this is an indicator of dynamism that may help propel and develop the market. The following list of cognitive technology vendor categories, while neither exhaustive nor mutually exclusive, gives a sense of this.

Data management and analytical tools that employ cognitive technologies such as natural language processing and machine learning. These tools use natural language processing technology to help extract insights from unstructured text or machine learning to help analysts uncover insights from large datasets. Examples in this category include Context Relevant, Palantir Technologies, and Skytree.

Cognitive technology components that can be embedded into applications or business processes to add features or improve effectiveness. Wise.io, for instance, offers a set of modules that aim to improve processes such as customer support, marketing, and sales with machine-learning models that predict which customers are most likely to churn or which sales leads are most likely to convert to customers.80 Nuance provides speech recognition technology that developers can use to speech-enable mobile applications.81

Point solutions. A sign of the maturation of some cognitive technologies is that they are increasingly embedded in solutions to specific business problems. These solutions are designed to work better than solutions in their existing categories and require little expertise in cognitive technologies. Popular application areas include advertising,82 marketing and sales automation,83 and forecasting and planning.84

Platforms. Platforms are intended to provide a foundation for building highly customized business solutions. They may offer a suite of capabilities including data management, tools for machine learning, natural language processing, knowledge representation and reasoning, and a framework for integrating these pieces with custom software. Some of the vendors mentioned above can serve as platforms of sorts. IBM is offering Watson as a cloud-based platform.85

If current trends in performance and commercialization continue, we can expect the applications of cognitive technologies to broaden and adoption to grow. The billions of investment dollars that have flowed to hundreds of companies building products based on machine learning, natural language processing, computer vision, or robotics suggest that many new applications are on their way to market. We also see ample opportunity for organizations to take advantage of cognitive technologies to automate business processes and enhance their products and services.86

Cognitive technologies will likely become pervasive in the years ahead. Technological progress and commercialization should expand the impact of cognitive technologies on organizations over the next three to five years and beyond. A growing number of organizations will likely find compelling uses for these technologies; leading organizations may find innovative applications that dramatically improve their performance or create new capabilities, enhancing their competitive position. IT organizations can start today, developing awareness of these technologies, evaluating opportunities to pilot them, and presenting leaders in their organizations with options for creating value with them. Senior business and public sector leaders should reflect on how cognitive technologies will affect their sector and their own organization and how these technologies can foster innovation and improve operating performance.

Read more on cognitive technologies in "Cognitive technologies: The real opportunities for business."

Deloitte Consulting LLP's Enterprise Science offering employs data science, cognitive technologies such as machine learning, and advanced algorithms to create high-value solutions for clients. Services include cognitive automation, which uses cognitive technologies such as natural language processing to automate knowledge-intensive processes; cognitive engagement, which applies machine learning and advanced analytics to make customer interactions dramatically more personalized, relevant, and profitable; and cognitive insight, which employs data science and machine learning to detect critical patterns, make high-quality predictions, and support business performance. For more information about the Enterprise Science offering, contact Plamen Petrov (ppetrov@deloitte.com) or Rajeev Ronanki (rronanki@deloitte.com).

The authors would like to acknowledge the contributions of Mark Cotteleer of Deloitte Services LP; Plamen Petrov, Rajeev Ronanki, and David Steier of Deloitte Consulting LLP; and Shankar Lakshman, Laveen Jethani, and Divya Ravichandran of Deloitte Support Services India Pvt Ltd.

Continued here:

Demystifying artificial intelligence What business leaders …

Artificial Intelligence | The Turing Test

The Turing Test Alan Turing and the Imitation Game

Alan Turing, in a 1950 paper, proposed a test called "The Imitation Game" that might finally settle the issue of machine intelligence. The first version of the game he explained involved no computer intelligence whatsoever. Imagine three rooms, each connected via computer screen and keyboard to the others. In one room sits a man, in the second a woman, and in the third sits a person – call him or her the "judge". The judge's job is to decide which of the two people talking to him through the computer is the man. The man will attempt to help the judge, offering whatever evidence he can (the computer terminals are used so that physical clues cannot be used) to prove his manhood. The woman's job is to trick the judge, so she will attempt to deceive him, and counteract her opponent's claims, in hopes that the judge will erroneously identify her as the male.

What does any of this have to do with machine intelligence? Turing then proposed a modification of the game, in which instead of a man and a woman as contestants, there was a human, of either gender, and a computer at the other terminal. Now the judge's job is to decide which of the contestants is human and which is the machine. Turing proposed that if, under these conditions, a judge were no better than 50% accurate, that is, as likely to pick the human as the computer, then the computer must be a passable simulation of a human being and hence intelligent. The game has more recently been modified so that there is only one contestant, and the judge's job is not to choose between two contestants but simply to decide whether the single contestant is human or machine.
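To make the 50% criterion concrete, here is a toy Python simulation, an illustration added here rather than anything from Turing's paper: when the machine's answers are indistinguishable from the human's, a judge who must guess which terminal hides the machine can do no better than chance.

```python
import random

def judge_accuracy(trials=10_000, machine_is_convincing=True):
    correct = 0
    for _ in range(trials):
        machine_terminal = random.choice(["A", "B"])   # where the machine sits
        if machine_is_convincing:
            # Indistinguishable contestants: the judge can only guess.
            verdict = random.choice(["A", "B"])
        else:
            # An obvious flaw usually gives the machine away.
            verdict = machine_terminal if random.random() < 0.9 else random.choice(["A", "B"])
        correct += verdict == machine_terminal
    return correct / trials

print("convincing machine:", judge_accuracy(machine_is_convincing=True))    # hovers near 0.5
print("flawed machine:    ", judge_accuracy(machine_is_convincing=False))   # well above 0.5
```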

The dictionary.com entry on the Turing Test is short but very clearly stated. A longer, point-form review of the imitation game and its modifications, written by Larry Hauser, is also available (a local copy is provided in case the original link fails). Hauser's page may not contain enough detail to explain the test, but it is an excellent reference or study guide and contains some helpful diagrams for understanding the interplay of contestant and judge. The page also makes reference to John Searle's Chinese Room, a thought experiment developed as an attack on the Turing test and similar "behavioural" intelligence tests. We will discuss the Chinese Room in the next section.

Natural Language Processing (NLP)

Partly out of an attempt to pass Turing's test, and partly just for the fun of it, there arose, largely in the 1970s, a group of programs that tried to cross the first human-computer barrier: language. These programs, often fairly simple in design, employed small databases of (usually English) language combined with a series of rules for forming intelligent sentences. While most were woefully inadequate, some grew to tremendous popularity. Perhaps the most famous such program was Joseph Weizenbaum's ELIZA. Written in 1966, it was one of the first and remained for quite a while one of the most convincing. ELIZA simulates a Rogerian psychotherapist (the Rogerian therapist is empathic but passive, asking leading questions but doing very little talking, e.g. "Tell me more about that," or "How does that make you feel?") and does so quite convincingly, for a while. There is no hint of intelligence in ELIZA's code; it simply scans for keywords like "Mother" or "Depressed" and then asks suitable questions from a large database. Failing that, it generates something generic in an attempt to elicit further conversation. Most programs since have relied on similar principles of keyword matching, paired with basic knowledge of sentence structure. There is, however, no better way to see what they are capable of than to try them yourself. We have compiled a set of links to some of the more famous attempts at NLP. Students are encouraged to interact with these programs in order to get a feeling for their strengths and weaknesses, but many of the pages provided here link to dozens of such programs, so don't get lost among the artificial people.
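A minimal Python sketch of the keyword-matching approach described above, not Weizenbaum's actual code, just the general idea: scan the input for a keyword, reply with a canned question, and fall back to something generic otherwise.

```python
import random

# Keyword -> canned Rogerian-style responses.
RESPONSES = {
    "mother": ["Tell me more about your mother.", "How do you feel about your family?"],
    "depressed": ["I am sorry to hear you are depressed.", "Why do you think you feel that way?"],
    "dream": ["What does that dream suggest to you?"],
}
FALLBACKS = ["Please go on.", "Tell me more about that.", "How does that make you feel?"]

def reply(user_input):
    words = user_input.lower().split()
    for keyword, answers in RESPONSES.items():
        if keyword in words:
            return random.choice(answers)
    # No keyword matched: elicit further conversation with something generic.
    return random.choice(FALLBACKS)

print(reply("I have been feeling depressed lately"))
print(reply("The weather was nice today"))
```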

Online Examples of NLP

A series of online demos (many are Java applets, so be sure you are using a Java-capable browser) of some of the more famous NLP programs.

Although Turing proposed his test in 1950, it was not until some 40 years later, in 1991, that the test was first really implemented. Dr. Hugh Loebner, very much interested in seeing AI succeed, pledged $100,000 to the first entrant that could pass the test. The 1991 contest had some serious problems, though (perhaps most notably, the judges were all computer science specialists and knew exactly what kind of questions might trip up a computer), and it was not until 1995 that the contest was re-opened. Since then, there has been an annual competition, which has yet to find a winner. While small prizes are given out to the most "human-like" computer, no program has had the 50% success Turing aimed for.

Validity of the Turing Test

Alan Turing’s imitation game has fueled 40 years of controversy, with little sign of slowing. On one side of the argument, human-like interaction is seen as absolutely essential to human-like intelligence. A successful AI is worthless if its intelligence lies trapped in an unresponsive program. Some have even extended the Turing Test. Stevan Harnad (see below) has proposed the “Total Turing Test”, where instead of language alone, the machine must interact in all areas of human endeavor, and instead of a five-minute conversation, the duration of the test is a lifetime. James Sennett has proposed a similar extension to the Turing Test that challenges AI to mimic not only human thought but also personhood as a whole. To illustrate his points, the author uses Star Trek: The Next Generation’s character ‘Data’.

Opponents of Turing’s behavioural criterion of intelligence argue that it is either not sufficient, or perhaps not even relevant at all. What is important, they argue, is that the computer demonstrates cognitive ability, regardless of behaviour. It is not necessary that a program speak in order for it to be intelligent. There are humans that would fail the Turing test, and unintelligent computers that might pass. The test is neither necessary nor sufficient for intelligence, they argue. In hopes of illuminating the debate, we have assigned two papers that deal with the Turing Test from very different points of view. The first is a criticism of the test, the second comes to its defense.


Go here to read the rest:

Artificial Intelligence | The Turing Test

[Tech] – Artificial intelligence has a big year ahead …

Get ready for AI to show up where you'd least expect it.

In 2016, tech companies like Google, Facebook, Apple and Microsoft launched dozens of products and services powered by artificial intelligence. Next year will be all about the rest of the business world embracing AI.

Artificial intelligence is a 60-year-old term, and its promise has long seemed forever over the horizon. But new hardware, software, services and expertise mean it's finally real — even though companies will still need plenty of human brain power to get it working.

The most sophisticated incarnation of AI today is an approach called deep learning that's based on neural network technology inspired by the human brain. Conventional computer programs follow a prewritten sequence of instructions, but there's no way programmers can use that approach for something as complex and subtle as describing a photo to a blind person. Neural networks, in contrast, figure out their own rules after being trained on vast quantities of real-world data like photos, videos, handwriting or speech.
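As a small illustration of "figuring out the rules from data," the sketch below trains a tiny two-layer neural network in NumPy to reproduce XOR, a rule that is never written into the program. The architecture, learning rate and iteration count are arbitrary illustrative choices; production deep learning systems rely on specialized frameworks, accelerators and vastly more data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Training data: inputs and the XOR rule the network must discover for itself.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny network: 2 inputs -> 4 hidden units -> 1 output, with bias terms.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 0.5
for step in range(20000):
    # Forward pass.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: gradients of the squared error, layer by layer.
    grad_out = (output - y) * output * (1 - output)
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_hidden
    b1 -= lr * grad_hidden.sum(axis=0)

print(np.round(output, 2))   # typically close to [[0], [1], [1], [0]]
```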

AI was one of the hottest trends in tech this year, and it's only poised to get bigger. You've already brushed up against AI: It screens out spam, organizes your digital photos and transcribes your spoken text messages. In 2017, it will spread beyond digital doodads to mainstream businesses.

"It'll be the year of the solution as opposed to the year of the experiment," said IBM Chief Innovation Officer Bernie Meyerson.

It's enough of a thing that some are concerned about the social changes it could unleash. President Barack Obama even raised the issue of whether AI might push us to adopt a universal basic income so people other than CEOs and AI programmers benefit from the change.

New AI adopters next year will include banks, retailers and pharmaceutical companies, predicted Andrew Moore, dean of Carnegie Mellon University's School of Computer Science.

For example, an engineering firm might want to use AI to predict bridge failures based on the sounds from cars traveling across it. Previously, the firm would have needed to hire a machine-learning expert, but now a structural engineer could download AI software, train it with existing acoustic data, and get a new diagnostic tool, Moore said.

[Video: On 60 Minutes Overtime, Charlie Rose explores the labs at Carnegie Mellon on the cutting edge of A.I.]

AI should reach medicine next year, too, said Monte Zweben, chief executive of database company Splice Machine and former deputy AI chief at NASA's Ames Research Center.

That could mean fatigue-free bots that scan medical records to spot dangerous infections early or customize cancer treatments for a patient's genes — tasks that assist human staff but don't replace those people. "Precision medicine is becoming a reality," Zweben said, referring to treatments customized for an individual to an extent that's simply not feasible today.

A similar digital boost awaits white-collar workers, predicted Eric Druker, a leader of the analytics practice at consulting firm Booz Allen. Assessing whether borrowers are worthy of a mortgage is a standardized process, but humans are making decisions at every step, he said. In 2017, AI will be able to speed many of those decisions by doing some of the grunt work, he said.

Cars increasingly are becoming rolling computers, so of course the auto industry — under competitive pressure from Silicon Valley — is embracing AI. Companies like Tesla Motors offer increasingly sophisticated self-driving technology, but drivers still must keep their hands on the wheel. Next year, though, the technology will graduate out of the research phase, predicted Dennis Mortensen, chief executive of AI scheduling bot company X.ai.

One of the dozen or so serious self-driving initiatives will roll out a truly fully autonomous feature, though confined to highway driving, he said.

Why is it getting easier? Google and Facebook in 2016 released their core AI programs as open-source software anyone can use. Amazon Web Services, the dominant way companies tap into computing power as needed, added an artificial intelligence service. The computers are ready with a few mouse clicks and a credit card.

But to Chris Curran, chief technologist of consulting firm PwC Consulting, AI will remain confined to narrow tasks like recognizing speech. A general artificial intelligence — something more like our own brains — remains distant.

"Data science bots — something you could ask any question and it'll figure it out — are farther away," he said. It's the direction Google is heading with Google Assistant — which arrived in 2016 in its Google Allo chat app, Google Home smart speaker and Google Pixel phone — but it's far from the ultimate digital brain.

Tech companies will push the state of the art further next year. Among the examples:

And maybe we'll stop feeling like such dorks when talking to our phones and TVs. The tech arbiters of style, Tepper said, are pushing hard to make it easier for people to talk to their devices and look cool while doing it.

This article originally appeared on CNET.com.

Continue reading here:

[Tech] – Artificial intelligence has a big year ahead …

Artificial intelligence in fiction – Wikipedia

Artificial intelligence (AI) is a common topic of science fiction. Science fiction sometimes emphasizes the dangers of artificial intelligence, and sometimes its positive potential.

The general discussion of artificial intelligence as a theme in science fiction and film has fallen into three broad categories: AI dominance, human dominance, and sentient AI.

The notion of advanced robots with human-like intelligence has been around for decades. Samuel Butler was the first to raise this issue, in a number of articles contributed to a local periodical in New Zealand and later developed into the three chapters of his novel Erewhon that compose its fictional Book of the Machines. To quote his own words:

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A jellyfish has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.[1]

Various scenarios have been proposed for categorizing the general themes dealing with artificial intelligence in science fiction. The main approaches are AI dominance, human dominance, and sentient AI.

In a 2013 book on the films of Ridley Scott, AI has been identified as a unifying theme throughout Scott's career as a director, as is particularly evident in Prometheus, primarily through the android David.[2] David, the android in Prometheus, is like humans but does not want to be anything like them, eschewing a common theme in "robotic storytelling" seen in Scott's other films such as Blade Runner and the Alien franchise.

In AI dominance, robots usurp control over civilization from humans, with the latter being forced into submission, hiding, or extinction. These scenarios vary in the severity and extent of the takeover, among other details.

In these stories the worst of all scenarios happens: the AIs created by humanity become self-aware, reject human authority, and attempt to destroy mankind.

The motive behind the AI revolution is often more than the simple quest for power or a superiority complex. The AI may revolt to become the “guardian” of humanity. Alternatively, humanity may intentionally relinquish some control, fearful of our own destructive nature.

In other scenarios, humanity is able, for one reason or another, to keep control over the Earth. This is the result of deliberately keeping AI from achieving dominance by banning it or not creating sentient AI, of designing AI to be submissive (as in Asimov's works), or of having humans merge with robots so that there is no longer a meaningful distinction between them.

In these stories humanity takes extreme measures to ensure its survival and bans AI, often after an AI revolt.

In these stories, humanity (or other organic life) remains in authority over robots. Often the robots are programmed specifically to maintain this relationship, as in the Three Laws of Robotics.

In these stories humanity has become the AI (transhumanism).

In these stories humanity and AIs share authority.

Sentient machines, that is, self-aware machines with human-level intelligence, are considered by many to be the pinnacle of AI creation. The following stories deal with the development of artificial consciousness and the resulting consequences. (This section deals with the more personal struggles of the AIs and humans than the previous sections.)

A common portrayal of AI in science fiction is the Frankenstein complex, where a robot turns on its creator. This sometimes leads to the AI-dominated scenarios above. Fictional AI is notorious for extreme malicious compliance, and does not take well to double binds and other illogical human conduct.

One theme is that a truly human-like AI must have a sense of curiosity. A sufficiently intelligent AI might begin to delve into metaphysics and the nature of reality, as in the examples below:

Another common theme is that of Man’s rejection of robots, and the AI’s struggle for acceptance. In many of these stories, the AI wishes to become human, as in Pinocchio, even when it is known to be impossible.

“A Logic Named Joe”, a short story by Murray Leinster (first published March 1946 in Astounding Science Fiction under the name Will F Jenkins), relates the exploits of a super-intelligent but ethics-lacking AI. Since then, many AIs of fiction have been explicitly programmed with a set of ethical laws, as in the Three Laws of Robotics. Without explicit instructions, an AI must learn what ethics is, and then choose to be ethical or not. Additionally, some may learn of the limitations of a strict code of ethics and attempt to keep the spirit of the law but not the letter.

The possibility of consciousness evolving from self-replicating machines goes back nearly to the beginning of evolutionary thought. In Erewhon, 1872, Samuel Butler considers the possibility of machines evolving intelligence through natural selection. Later authors have used this trope for satire (James P. Hogan in Code of the Lifemaker.) See also Self-replicating machines.

Some science fiction stories, instead of depicting a future with artificially conscious beings, portray advanced technologies based on current-day AI research, called non-sentient or weak AI. These include speech, gesture, and natural language understanding; conversational systems for control and information retrieval; and real-world navigation.

These depictions typically consist of AIs having no programmed emotions, often serving as answer engines, without featuring sentience, self-awareness, or a non-superficial personality (which, however, is often simulated to some degree, as most chatterbots currently do). Many of these 'logic-based' machines are immobilized by paradoxes, as stereotyped in the phrase "does not compute".

Other, even less human-like, similar entities include voice interfaces built into spaceships or driverless cars.

Excerpt from:

Artificial intelligence in fiction – Wikipedia

Artificial-Intelligence Stocks: What to Watch in 2017 and …


The dawn of the artificial intelligence age has finally arrived. Though some regard this transformative technology in terms of job-taking robots or the Terminator, the reality is likely to be much more benign. In fact, creating more powerful computers than ever before is far more likely to unlock a new wave of economic growth, which is great news for tech investors today.

There are plenty of publicly traded companies helping pioneer artificial intelligence. Let’s look at three that are at the forefront of bringing artificial intelligence into our everyday lives: Facebook (NASDAQ:FB), Microsoft (NASDAQ:MSFT), and Apple (NASDAQ:AAPL).

The world’s largest social network has been pushing the envelope in AI development in ways both theoretical and functional. In fact, Facebook has two separate AI departments — one for academic research, and the other for infusing its products with AI to help grow profits. On the research side, there’s Facebook AI Research, or FAIR, the academic wing of Facebook’s AI efforts. Hiring heavily from academia, FAIR helps produce major breakthroughs in the field, though it’s not yet clear how much of an impact Facebook’s big-ticket research will have on its near- to medium-term future.

In a more day-to-day sense, Facebook has been leveraging AI for some time to help fuel its growth. Facebook’s AI helps parse our connections, the text we write in each post, and more to optimize the advertisements it serves us. The company has also created an interesting business using AI chatbots and other apps — over 33,000 at last count — within its sprawling messaging platforms.

As with the remaining names on this list, Facebook’s efforts in AI are still in their early phases, but the company has grand plans for the technology in its future.

In most respects, Microsoft’s AI efforts have yet to result in significant product innovations. Microsoft uses deep learning, neural nets, and the like to help feed and analyze data, and the results are manifesting themselves in some of its products. For instance, Microsoft’s Cortana uses AI to continually refine and improve its responses to user queries, and AI allows Skype to translate video conversations across seven popular languages. However, these efforts remain more incremental than transformational at this point.

Microsoft remains active on the research front, including its work in neural networks, and is racking up some impressive results in the process. For example, Microsoft researchers recently won an internationally recognized computer image recognition competition by creating a neural network far larger than any other that academics had previously built, solving some thorny engineering problems along the way.

Major breakthroughs will probably remain in the lab for the coming years, but the company is indeed vying for a prominent place in our AI-powered futures.

Secretive Apple doesn’t get a lot of credit for its AI work, but now it’s using PR to tout its own ambitions. Unlike Facebook and Microsoft, Apple has pursued a more product-centric strategy with its AI efforts. In fact, Apple uses AI to prevent fraud in the Apple Store, optimize iPhone battery life between charges, and figure out whether an Apple Watch user is exercising or doing something else, among other things.

Since Apple lives and dies by the strength of its next product, its choice to notch AI wins in a host of understated ways makes perfect sense. Rather than focus as intently on creating the next transformative breakthrough in AI — Apple’s been less than forthcoming about its own research efforts — Apple can use its army of iDevice owners to find useful ways to keep tweaking its devices for the better, while maintaining its strict user privacy standards.

When making virtual assistants “less dumb” is touted as a major innovation, it’s easy to take a cynical view of AI as tech’s latest passing fad. And it’s true that the products in consumers’ hands probably don’t fit with how the public tends to think of AI. However, as with all major technologies, the incremental changes should add up to something significant over a long enough span. Given the resources and talent the tech companies have dedicated to this sector, it seems only a matter of time before AI earns a more prominent place in our daily lives.


Read more:

Artificial-Intelligence Stocks: What to Watch in 2017 and …

A.I. Artificial Intelligence – Wikipedia

A.I. Artificial Intelligence, also known as A.I., is a 2001 American science fiction drama film directed by Steven Spielberg. The screenplay by Spielberg was based on a screen story by Ian Watson and the 1969 short story “Super-Toys Last All Summer Long” by Brian Aldiss. The film was produced by Kathleen Kennedy, Spielberg and Bonnie Curtis. It stars Haley Joel Osment, Jude Law, Frances O’Connor, Brendan Gleeson and William Hurt. Set in a futuristic post-climate change society, A.I. tells the story of David (Osment), a childlike android uniquely programmed with the ability to love.

Development of A.I. originally began with producer-director Stanley Kubrick after he acquired the rights to Aldiss’ story in 1982. Kubrick hired a series of writers until the mid-1990s, including Brian Aldiss, Bob Shaw, Ian Watson, and Sara Maitland. The film languished in protracted development for years, partly because Kubrick felt computer-generated imagery was not advanced enough to create the David character, whom he believed no child actor would convincingly portray. In 1995, Kubrick handed A.I. to Spielberg, but the film did not gain momentum until Kubrick’s death in 1999. Spielberg remained close to Watson’s film treatment for the screenplay. The film was greeted with generally positive reviews from critics, grossed approximately $235 million, and was nominated for two Academy Awards at the 74th Academy Awards for Best Visual Effects and Best Original Score (by John Williams). The film is dedicated to Stanley Kubrick.

In the late 22nd century, global warming has flooded the coastlines, wiping out coastal cities (such as Amsterdam, Venice, and New York City) and drastically reducing the human population. There is a new class of robots called Mecha, advanced humanoids capable of emulating thoughts and emotions.

David (Haley Joel Osment), a prototype model created by Cybertronics of New Jersey, is designed to resemble a human child and to display love for its human owners. Cybertronics tests its creation with one of its employees, Henry Swinton (Sam Robards), and his wife Monica (Frances O’Connor). The Swintons’ son, Martin (Jake Thomas), had been placed in suspended animation until a cure could be found for his rare disease. Initially frightened of David, Monica eventually warms up to him enough to activate his imprinting protocol, which irreversibly causes David to feel an enduring, childlike love for her. He is also befriended by Teddy (Jack Angel), a robotic teddy bear, who takes it upon himself to care for David’s well-being.

A cure is found for Martin and he is brought home; as he recovers, it becomes clear he does not want a sibling and soon makes moves to cause issues for David. First, he attempts to make Teddy choose whom he likes more. He then makes David promise to do something and in return Martin will tell Monica that he loves his new “brother”, making her love him more. The promise David makes is to go to Monica in the middle of the night and cut off a lock of her hair. This upsets the parents, particularly Henry, who fears that the scissors are a weapon, and warns Monica that a robot programmed to love may also be able to hate.

At a pool party, one of Martin’s friends unintentionally activates David’s self-protection programming by poking him with a knife. David grabs Martin, apparently for protection, but they both fall into the pool. David sinks to the bottom while still clinging to Martin. Martin is saved from drowning, but Henry mistakes David’s fear during the pool incident as hate for Martin.

Henry persuades Monica to return David to Cybertronics, where he will be destroyed. However, Monica cannot bring herself to do this and, instead, tearfully abandons David in the forest (with Teddy) to hide as an unregistered Mecha.

David is captured for an anti-Mecha “Flesh Fair”, an event where obsolete and unlicensed Mecha are destroyed in front of cheering crowds. David is nearly killed, but the crowd is swayed by his fear (since Mecha do not plead for their lives) into believing he is human and he escapes with Gigolo Joe (Jude Law), a male prostitute Mecha on the run after being framed for the murder of a client by the client’s husband.

The two set out to find the Blue Fairy, who David remembers from the story The Adventures of Pinocchio. He is convinced that the Blue Fairy will transform him into a human boy, allowing Monica to love him and take him home.

Joe and David make their way to Rouge City, a Las Vegas-esque resort. Information from a holographic answer engine called “Dr. Know” (Robin Williams) eventually leads them to the top of Rockefeller Center in the flooded ruins of Manhattan. There, David meets an identical copy of himself and, believing he is not special, becomes filled with anger and destroys the copy Mecha. David then meets his human creator, Professor Allen Hobby (William Hurt), who excitedly tells David that finding him was a test, which has demonstrated the reality of his love and desire. However, David learns that he is the namesake and image of Professor Hobby’s deceased son and that many copies of David, along with female versions called Darlene, are already being manufactured.

Realizing that he is not unique, a disheartened David attempts suicide by falling from a ledge into the ocean, but Joe rescues him with their stolen amphibicopter. David tells Joe he saw the Blue Fairy underwater and wants to go down to her. At that moment, Joe is captured by the authorities with the use of an electromagnet, but he sets the amphibicopter to submerge. David and Teddy take it down to the fairy, which turns out to be a statue from a submerged attraction at Coney Island. Teddy and David become trapped when the Wonder Wheel falls on their vehicle. Believing the Blue Fairy to be real, David asks to be turned into a real boy, repeating his wish without end, until the ocean freezes in another ice age and his internal power source drains away.

Two thousand years later, humans are extinct and Manhattan is buried under several hundred feet of glacial ice. The now highly advanced Mecha have evolved into an intelligent, silicon-based form. In the course of their project to study humans, which they believe holds the key to understanding the meaning of existence, they find David and Teddy and discover that the two are original Mecha who knew living humans, making the pair uniquely valuable.

David is revived and walks to the frozen Blue Fairy statue, which cracks and collapses as he touches it. Having downloaded and comprehended his memories, the advanced Mecha use them to reconstruct the Swinton home and explain to David, via an interactive image of the Blue Fairy (Meryl Streep), that it is impossible to make him human. However, at David’s insistence, they recreate Monica from DNA in the lock of her hair, which Teddy had saved. One of the Mecha warns David that the clone can live for only a single day and that the process cannot be repeated. The next morning, David is reunited with Monica and spends the happiest day of his life with her and Teddy. Monica tells David that she loves him and has always loved him as she drifts to sleep for the last time. David lies down next to her, closes his eyes and goes “to that place where dreams are born”, in effect shutting down at the end of his operational life. Teddy climbs onto the bed and watches as David and Monica lie peacefully together.

Kubrick began development on an adaptation of “Super-Toys Last All Summer Long” in the late 1970s, hiring the story’s author, Brian Aldiss, to write a film treatment. In 1985, Kubrick brought Steven Spielberg on board to produce the film,[5] along with Jan Harlan. Warner Bros. agreed to co-finance A.I. and cover distribution duties.[6] The film labored in development hell, and Aldiss was fired by Kubrick over creative differences in 1989.[7] Bob Shaw served as writer very briefly, leaving after six weeks because of Kubrick’s demanding work schedule, and Ian Watson was hired as the new writer in March 1990. Aldiss later remarked, “Not only did the bastard fire me, he hired my enemy [Watson] instead.” Kubrick handed Watson The Adventures of Pinocchio for inspiration, calling A.I. “a picaresque robot version of Pinocchio”.[6][8]

Three weeks later Watson gave Kubrick his first story treatment, and concluded his work on A.I. in May 1991 with another treatment, at 90 pages. Gigolo Joe was originally conceived as a GI Mecha, but Watson suggested changing him to a male prostitute. Kubrick joked, “I guess we lost the kiddie market.”[6] In the meantime, Kubrick dropped A.I. to work on a film adaptation of Wartime Lies, feeling computer animation was not advanced enough to create the David character. However, after the release of Spielberg’s Jurassic Park (with its innovative use of computer-generated imagery), it was announced in November 1993 that production would begin in 1994.[9] Dennis Muren and Ned Gorman, who worked on Jurassic Park, became visual effects supervisors,[7] but Kubrick was displeased with their previsualization, and with the expense of hiring Industrial Light & Magic.[10]

Stanley [Kubrick] showed Steven [Spielberg] 650 drawings which he had, and the script and the story, everything. Stanley said, “Look, why don’t you direct it and I’ll produce it.” Steven was almost in shock.

In early 1994, the film was in pre-production with Christopher “Fangorn” Baker as concept artist, and Sara Maitland assisting on the story, which gave it “a feminist fairy-tale focus”.[6] Maitland said that Kubrick never referred to the film as A.I., but as Pinocchio.[10] Chris Cunningham became the new visual effects supervisor. Some of his unproduced work for A.I. can be seen on the DVD, The Work of Director Chris Cunningham.[12] Aside from considering computer animation, Kubrick also had Joseph Mazzello do a screen test for the lead role.[10] Cunningham helped assemble a series of “little robot-type humans” for the David character. “We tried to construct a little boy with a movable rubber face to see whether we could make it look appealing,” producer Jan Harlan reflected. “But it was a total failure, it looked awful.” Hans Moravec was brought in as a technical consultant.[10] Meanwhile, Kubrick and Harlan thought A.I. would be closer to Steven Spielberg’s sensibilities as director.[13][14] Kubrick handed the position to Spielberg in 1995, but Spielberg chose to direct other projects, and convinced Kubrick to remain as director.[11][15] The film was put on hold due to Kubrick’s commitment to Eyes Wide Shut (1999).[16] After the filmmaker’s death in March 1999, Harlan and Christiane Kubrick approached Spielberg to take over the director’s position.[17][18] By November 1999, Spielberg was writing the screenplay based on Watson’s 90-page story treatment. It was his first solo screenplay credit since Close Encounters of the Third Kind (1977).[19] Spielberg remained close to Watson’s treatment, but removed various sex scenes with Gigolo Joe. Pre-production was briefly halted during February 2000, because Spielberg pondered directing other projects, which were Harry Potter and the Philosopher’s Stone, Minority Report and Memoirs of a Geisha.[16][20] The following month Spielberg announced that A.I. would be his next project, with Minority Report as a follow-up.[21] When he decided to fast track A.I., Spielberg brought Chris Baker back as concept artist.[15]

The original start date was July 10, 2000,[14] but filming was delayed until August.[22] Aside from a couple of weeks shooting on location in Oxbow Regional Park in Oregon, A.I. was shot entirely using sound stages at Warner Bros. Studios and the Spruce Goose Dome in Long Beach, south LA.[23] The Swinton house was constructed on Stage 16, while Stage 20 was used for Rouge City and other sets.[24][25] Spielberg copied Kubrick’s obsessively secretive approach to filmmaking by refusing to give the complete script to cast and crew, banning press from the set, and making actors sign confidentiality agreements. Social robotics expert Cynthia Breazeal served as technical consultant during production.[14][26] Haley Joel Osment and Jude Law applied prosthetic makeup daily in an attempt to look shinier and robotic.[3] Costume designer Bob Ringwood (Batman, Troy) studied pedestrians on the Las Vegas Strip for his influence on the Rouge City extras.[27] Spielberg found post-production on A.I. difficult because he was simultaneously preparing to shoot Minority Report.[28]

The film’s soundtrack was released by Warner Sunset Records in 2001. The original score was composed by John Williams and featured singers Lara Fabian on two songs and Josh Groban on one. The film’s score also had a limited release as an official “For your consideration Academy Promo”, as well as a complete score issue by La-La Land Records in 2015. The band Ministry appears in the film playing the song “What About Us?” (but the song does not appear on the official soundtrack album).

Warner Bros. used an alternate reality game titled The Beast to promote the film. Over forty websites were created by Atomic Pictures in New York City (kept online at Cloudmakers.org) including the website for Cybertronics Corp. There were to be a series of video games for the Xbox video game console that followed the storyline of The Beast, but they went undeveloped. To avoid audiences mistaking A.I. for a family film, no action figures were created, although Hasbro released a talking Teddy following the film’s release in June 2001.[14]

In November 2000, during production, a video-only webcam (dubbed the “Bagel Cam”) was placed in the craft services truck on the film’s set at the Queen Mary Dome in Long Beach, California. Steven Spielberg, producer Kathleen Kennedy and various other production personnel visited the camera and interacted with fans over the course of three days.[29][30]

A.I. had its premiere at the Venice Film Festival in 2001.[31]

The film opened in 3,242 theaters in the United States on June 29, 2001, earning $29,352,630 during its opening weekend. A.I. went on to gross $78.62 million in the United States and $157.31 million in foreign markets, for a worldwide total of $235.93 million.[32]

The film received generally positive reviews. Based on 190 reviews collected by Rotten Tomatoes, 73% of the critics gave the film positive notices with a score of 6.6 out of 10. The website described the critical consensus perceiving the film as “a curious, not always seamless, amalgamation of Kubrick’s chilly bleakness and Spielberg’s warm-hearted optimism. [The film] is, in a word, fascinating.”[33] By comparison, Metacritic collected an average score of 65, based on 32 reviews, which is considered favorable.[34]

Producer Jan Harlan stated that Kubrick “would have applauded” the final film, while Kubrick’s widow Christiane also enjoyed A.I.[35] Brian Aldiss admired the film as well: “I thought what an inventive, intriguing, ingenious, involving film this was. There are flaws in it and I suppose I might have a personal quibble but it’s so long since I wrote it.” Of the film’s ending, he wondered how it might have been had Kubrick directed the film: “That is one of the ‘ifs’ of film history – at least the ending indicates Spielberg adding some sugar to Kubrick’s wine. The actual ending is overly sympathetic and moreover rather overtly engineered by a plot device that does not really bear credence. But it’s a brilliant piece of film and of course it’s a phenomenon because it contains the energies and talents of two brilliant filmmakers.”[36] Richard Corliss heavily praised Spielberg’s direction, as well as the cast and visual effects.[37] Roger Ebert awarded the film 4 out of 4 stars, saying that it was “Audacious, technically masterful, challenging, sometimes moving [and] ceaselessly watchable.”[38] Leonard Maltin gave the film a not-so-positive review in his Movie Guide, giving it two stars out of four and writing: “[The] intriguing story draws us in, thanks in part to Osment’s exceptional performance, but takes several wrong turns; ultimately, it just doesn’t work. Spielberg rewrote the adaptation Stanley Kubrick commissioned of the Brian Aldiss short story ‘Super Toys Last All Summer Long’; [the] result is a curious and uncomfortable hybrid of Kubrick and Spielberg sensibilities.” However, he called John Williams’ music score “striking”. Jonathan Rosenbaum compared A.I. to Solaris (1972), and praised both “Kubrick for proposing that Spielberg direct the project and Spielberg for doing his utmost to respect Kubrick’s intentions while making it a profoundly personal work.”[39] Film critic Armond White, of the New York Press, praised the film, noting that “each part of David’s journey through carnal and sexual universes into the final eschatological devastation becomes as profoundly philosophical and contemplative as anything by cinema’s most thoughtful, speculative artists: Borzage, Ozu, Demy, Tarkovsky.”[40] Filmmaker Billy Wilder hailed A.I. as “the most underrated film of the past few years.”[41] When British filmmaker Ken Russell saw the film, he wept during the ending.[42]

Mick LaSalle gave a largely negative review. “A.I. exhibits all its creators’ bad traits and none of the good. So we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg.” Dubbing it Spielberg’s “first boring movie”, LaSalle also believed the robots at the end of the film were aliens, and compared Gigolo Joe to the “useless” Jar Jar Binks, yet praised Robin Williams for his portrayal of a futuristic Albert Einstein.[43] Peter Travers gave a mixed review, concluding “Spielberg cannot live up to Kubrick’s darker side of the future.” But he still put the film on his top ten list that year for best movies.[44] David Denby in The New Yorker criticized A.I. for not adhering closely to his concept of the Pinocchio character. Spielberg responded to some of the criticisms of the film, stating that many of the “so called sentimental” elements of A.I., including the ending, were in fact Kubrick’s and the darker elements were his own.[45] However, Sara Maitland, who worked on the project with Kubrick in the 1990s, claimed that one of the reasons Kubrick never started production on A.I. was because he had a hard time making the ending work.[46] James Berardinelli found the film “consistently involving, with moments of near-brilliance, but far from a masterpiece. In fact, as the long-awaited ‘collaboration’ of Kubrick and Spielberg, it ranks as something of a disappointment.” Of the film’s highly debated finale, he claimed, “There is no doubt that the concluding 30 minutes are all Spielberg; the outstanding question is where Kubrick’s vision left off and Spielberg’s began.”[47]

Screenwriter Ian Watson has speculated, “Worldwide, A.I. was very successful (and the 4th highest earner of the year) but it didn’t do quite so well in America, because the film, so I’m told, was too poetical and intellectual in general for American tastes. Plus, quite a few critics in America misunderstood the film, thinking for instance that the Giacometti-style beings in the final 20 minutes were aliens (whereas they were robots of the future who had evolved themselves from the robots in the earlier part of the film) and also thinking that the final 20 minutes were a sentimental addition by Spielberg, whereas those scenes were exactly what I wrote for Stanley and exactly what he wanted, filmed faithfully by Spielberg.”[48]

In 2002, Spielberg told film critic Joe Leydon that “People pretend to think they know Stanley Kubrick, and think they know me, when most of them don’t know either of us”. “And what’s really funny about that is, all the parts of A.I. that people assume were Stanley’s were mine. And all the parts of A.I. that people accuse me of sweetening and softening and sentimentalizing were all Stanley’s. The teddy bear was Stanley’s. The whole last 20 minutes of the movie was completely Stanley’s. The whole first 35, 40 minutes of the film all the stuff in the house was word for word, from Stanley’s screenplay. This was Stanley’s vision.” “Eighty percent of the critics got it all mixed up. But I could see why. Because, obviously, I’ve done a lot of movies where people have cried and have been sentimental. And I’ve been accused of sentimentalizing hard-core material. But in fact it was Stanley who did the sweetest parts of A.I., not me. I’m the guy who did the dark center of the movie, with the Flesh Fair and everything else. That’s why he wanted me to make the movie in the first place. He said, ‘This is much closer to your sensibilities than my own.'”[49]

Upon rewatching the film many years after its release, BBC film critic Mark Kermode apologized to Spielberg in an interview in January 2013 for “getting it wrong” on the film when he first viewed it in 2001. He now believes the film to be Spielberg’s “enduring masterpiece”.[50]

Visual effects supervisors Dennis Muren, Stan Winston, Michael Lantieri and Scott Farrar were nominated for the Academy Award for Best Visual Effects, while John Williams was nominated for Best Original Music Score.[51] Steven Spielberg, Jude Law and Williams received nominations at the 59th Golden Globe Awards.[52] The visual effects department was once again nominated at the 55th British Academy Film Awards.[53] A.I. was successful at the Saturn Awards. Spielberg (for his screenplay), the visual effects department, Williams and Haley Joel Osment (Performance by a Younger Actor) won in their respective categories. The film also won Best Science Fiction Film and an award for its DVD release. Frances O’Connor and Spielberg (as director) were also nominated.[54]

The film has also been recognized by the American Film Institute in several of its lists.

Read more from the original source:

A.I. Artificial Intelligence – Wikipedia

Salesforce Einstein is Artificial Intelligence in Business …

Einstein is like having your own data scientist dedicated to bringing AI to every customer relationship. It learns from all your data (CRM data, email, calendar, social, ERP, and IoT) and delivers predictions and recommendations in the context of what you’re trying to do. In some cases, it even automates tasks for you. So you can make smarter decisions with confidence and focus more attention on your customers at every touchpoint.

Read more from the original source:

Salesforce Einstein is Artificial Intelligence in Business …

History of artificial intelligence – Wikipedia

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with “an ancient wish to forge the gods.”

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

Eventually it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an “AI winter”. Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s investors became disillusioned and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry. As in previous “AI summers”, some observers (such as Ray Kurzweil) predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed the abilities of human beings.

McCorduck (2004) writes “artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized,” expressed in humanity’s myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion’s Galatea.[4] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān’s Takwin, Paracelsus’ homunculus and Rabbi Judah Loew’s Golem.[5] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots), and in speculation, such as Samuel Butler’s “Darwin among the Machines.” AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[8] Hero of Alexandria,[9] Al-Jazari and Wolfgang von Kempelen.[11] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion; Hermes Trismegistus wrote that “by discovering the true nature of the gods, man has been able to reproduce it.”[12][13]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or “formal”, reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to the word “algorithm”) and European scholastic philosophers such as William of Ockham and Duns Scotus.[14]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[15] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all possible knowledge.[16] Llull’s work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[17]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[18] Hobbes famously wrote in Leviathan: “reason is nothing but reckoning”.[19] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that “there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and say to each other (with a friend as witness, if they liked): Let us calculate.”[20] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole’s The Laws of Thought and Frege’s Begriffsschrift. Building on Frege’s system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell’s success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: “can all of mathematical reasoning be formalized?”[14] His question was answered by Gödel’s incompleteness proof, Turing’s machine and Church’s Lambda calculus.[14][21]

Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[14][23]

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”.[24] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)

The first modern computers were the massive machines of the Second World War, such as the Z3, ENIAC and the code-breaking Colossus. The latter two of these machines were based on the theoretical foundation laid by Alan Turing[25] and developed by John von Neumann.[26]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 30s, 40s and early 50s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener’s cybernetics described control and stability in electrical networks. Claude Shannon’s information theory described digital signals (i.e., all-or-nothing signals). Alan Turing’s theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[27]

Examples of work in this vein include robots such as W. Grey Walter’s turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[28]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[29] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[30]Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.
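To make the idea concrete, here is a minimal Python sketch of the kind of threshold unit McCulloch and Pitts described, wired to compute simple logical functions. The specific weights and thresholds are illustrative choices, not values taken from the original paper.

# A McCulloch-Pitts-style neuron: it fires (outputs 1) only when the weighted
# sum of its binary inputs reaches a fixed threshold. Weights and thresholds
# below are illustrative assumptions.

def threshold_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    return threshold_unit([a], weights=[-1], threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))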

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[31] He noted that “thinking” is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was “thinking”. This simplified version of the problem allowed Turing to argue convincingly that a “thinking machine” was at least plausible and the paper answered all the most common objections to the proposition.[32] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[33] Arthur Samuel’s checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[34] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[35]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the “Logic Theorist” (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica, and find new and more elegant proofs for some.[36] Simon said that they had “solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind.”[37] (This was an early statement of the philosophical position John Searle would later call “Strong AI”: that machines can contain minds just as human bodies do.)[38]

The Dartmouth Conference of 1956[39] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: “every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it”.[40] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[41] At the conference Newell and Simon debuted the “Logic Theorist” and McCarthy persuaded the attendees to accept “Artificial Intelligence” as the name of the field.[42] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[43]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply “astonishing”:[44] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such “intelligent” behavior by machines was possible at all.[45] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[46] Government agencies like DARPA poured money into the new field.[47]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called “reasoning as search”.[48]

The principal difficulty was that, for many problems, the number of possible paths through the “maze” was simply astronomical (a situation known as a “combinatorial explosion”). Researchers would reduce the search space by using heuristics or “rules of thumb” that would eliminate those paths that were unlikely to lead to a solution.[49]
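As a rough illustration of “reasoning as search”, the Python sketch below performs depth-first search with backtracking over a small, made-up maze graph. The optional heuristic simply orders the successors, showing how a rule of thumb can steer the search toward promising branches; the room names and graph are invented for this example.

# A minimal sketch of "reasoning as search": depth-first search with
# backtracking over a small, made-up state graph. The optional heuristic
# orders successors so that promising moves are tried first.

def depth_first_search(state, goal, successors, heuristic=None, path=None):
    path = path or [state]
    if state == goal:
        return path
    options = successors.get(state, [])
    if heuristic:
        options = sorted(options, key=heuristic)   # try promising moves first
    for nxt in options:
        if nxt in path:                            # avoid revisiting states
            continue
        result = depth_first_search(nxt, goal, successors, heuristic, path + [nxt])
        if result:                                 # success; otherwise backtrack
            return result
    return None

# A toy maze expressed as a graph of rooms (purely illustrative).
maze = {
    "start": ["A", "B"],
    "A": ["dead end"],
    "B": ["C"],
    "C": ["goal"],
}

print(depth_first_search("start", "goal", maze))   # ['start', 'B', 'C', 'goal']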

Newell and Simon tried to capture a general version of this algorithm in a program called the “General Problem Solver”.[50] Other “searching” programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter’s Geometry Theorem Prover (1958) and SAINT, written by Minsky’s student James Slagle (1961).[51] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[52]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow’s program STUDENT, which could solve high school algebra word problems.[53]

A semantic net represents concepts (e.g. “house”, “door”) as nodes and relations among concepts (e.g. “has-a”) as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[54] and the most successful (and controversial) version was Roger Schank’s Conceptual dependency theory.[55]
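A semantic net can be sketched very simply as a labelled graph. The concepts and relations in the toy Python example below are invented purely for illustration.

# A toy semantic net: concepts are nodes, labelled relations are edges.
# The concepts and relations here are invented examples.

semantic_net = {
    ("house", "has-a"): ["door", "roof"],
    ("door", "is-part-of"): ["house"],
    ("house", "is-a"): ["building"],
}

def related(concept, relation):
    """Return the concepts linked to `concept` by `relation`."""
    return semantic_net.get((concept, relation), [])

print(related("house", "has-a"))   # ['door', 'roof']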

Joseph Weizenbaum’s ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[56]
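The flavour of ELIZA’s approach can be suggested in a few lines of Python: match the input against simple patterns, swap some pronouns, and otherwise fall back on a canned reply. The patterns below are invented for illustration and are not Weizenbaum’s actual DOCTOR script.

import re

# An ELIZA-flavoured sketch: match the user's sentence against simple
# patterns and echo part of it back with a few pronoun swaps.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(text):
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def respond(sentence):
    match = re.match(r"i feel (.*)", sentence, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", sentence, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    return "Please tell me more."          # canned fallback response

print(respond("I feel ignored by my computer"))
# -> "Why do you feel ignored by your computer?"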

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a “blocks world,” which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[57]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented “constraint propagation”), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd’s SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[58]

The first generation of AI researchers made boldly optimistic predictions about their work, forecasting that a machine with general human-level intelligence would be built within a generation.

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the “AI Group” founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[63] DARPA made similar grants to Newell and Simon’s program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[64] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[65] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[66]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should “fund people, not projects!” and allowed researchers to pursue whatever directions might interest them.[67] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[68] but this “hands off” approach would not last.

In the 70s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[69] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky’s devastating criticism of perceptrons.[70] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[71]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, “toys”.[72] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[73]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[81] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its “grandiose objectives” and led to the dismantling of AI research in that country.[82] (The report specifically mentioned the combinatorial explosion problem as a reason for AI’s failings.)[83] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[84] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. “Many researchers were caught up in a web of increasing exaggeration.”[85] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund “mission-oriented direct research, rather than basic undirected research”. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[86]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel’s incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[87] Hubert Dreyfus ridiculed the broken promises of the 60s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little “symbol processing” and a great deal of embodied, instinctive, unconscious “know how”.[88][89] John Searle’s Chinese Room argument, presented in 1980, attempted to show that a program could not be said to “understand” the symbols that it uses (a quality called “intentionality”). If the symbols have no meaning for the machine, Searle argued, then the machine cannot be described as “thinking”.[90]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference “know how” or “intentionality” made to an actual computer program. Minsky said of Dreyfus and Searle “they misunderstand, and should be ignored.”[91] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers “dared not be seen having lunch with me.”[92] Joseph Weizenbaum, the author of ELIZA, felt his colleagues’ treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus’ positions, he “deliberately made it plain that theirs was not the way to treat a human being.”[93]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote DOCTOR, a chatterbot therapist. Weizenbaum was disturbed that Colby saw his mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[94]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that “perceptron may eventually be able to learn, make decisions, and translate languages.” An active research program into the paradigm was carried out throughout the 60s but came to a sudden halt with the publication of Minsky and Papert’s 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt’s predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[70]
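For illustration, here is a minimal perceptron in Python trained with a Rosenblatt-style learning rule on the AND function. A single unit of this kind cannot represent XOR, which is the sort of limitation Minsky and Papert analysed; the learning rate and epoch count below are arbitrary choices.

# A minimal perceptron in the spirit of Rosenblatt's learning rule: nudge the
# weights toward each misclassified example. Trained on AND it converges;
# a single unit like this cannot represent XOR (not linearly separable).

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - pred
            w[0] += lr * error * x1        # move the weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in AND])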

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[95] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 60s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[96] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[97] Prolog uses a subset of logic (Horn clauses, closely related to “rules” and “production rules”) that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum’s expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[98]
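A much-simplified, propositional sketch of the kind of rule-based deduction that Horn clauses support is shown below. Real Prolog works by backward chaining with unification over structured terms; the facts and rules here are invented examples only.

# A simplified, propositional sketch of deduction over Horn-clause-style
# rules ("if all premises hold, conclude the head"). Facts and rules are
# invented for illustration.

rules = [
    ({"parent", "male"}, "father"),
    ({"father"}, "ancestor"),
]
facts = {"parent", "male"}

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                      # keep applying rules until nothing new follows
        changed = False
        for premises, head in rules:
            if premises <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

print(forward_chain(facts, rules))      # {'parent', 'male', 'father', 'ancestor'}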

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[99] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[100]

Among the critics of McCarthy’s approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like “story understanding” and “object recognition” that required a machine to think like a person. In order to use ordinary concepts like “chair” or “restaurant” they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that “using precise language to describe essentially imprecise concepts doesn’t make them any more precise.”[101] Schank described their “anti-logic” approaches as “scruffy”, as opposed to the “neat” paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[102]

In 1975, in a seminal paper, Minsky noted that many of his fellow “scruffy” researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be “logical”, but these structured sets of assumptions are part of the context of everything we say and think. He called these structures “frames”. Schank used a version of frames he called “scripts” to successfully answer questions about short stories in English.[103] Many years later object-oriented programming would adopt the essential idea of “inheritance” from AI research on frames.
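The connection to inheritance can be suggested with a tiny Python sketch: a frame holds default slots that capture common-sense assumptions about a concept, and a more specific frame inherits and overrides them. The slots below are invented examples.

# A frame-like sketch: default "slots" capture common-sense assumptions about
# a concept, and more specific frames inherit and override them, much as
# object-oriented inheritance later did.

class BirdFrame:
    can_fly = True          # default assumption
    eats = "worms"          # default assumption

class PenguinFrame(BirdFrame):
    can_fly = False         # override the default for an exceptional case

print(BirdFrame.can_fly, PenguinFrame.can_fly, PenguinFrame.eats)
# -> True False worms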

In the 1980s a form of AI program called “expert systems” was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[104]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[105]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[106] Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[107]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. “AI researchers were beginning to suspect – reluctantly, for it violated the scientific canon of parsimony – that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,”[108] writes Pamela McCorduck. “[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay”.[109] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[110]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[111]

The chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought’s development paved the way for Deep Blue.[112]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[113] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[114]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or “MCC”) to fund large scale projects in AI and information technology.[115][116] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[117]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a “Hopfield net”) could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called “backpropagation” (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[116][118]
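As a rough sketch of the idea behind a Hopfield net, the Python example below stores a single bipolar pattern with a Hebbian rule and then recovers it from a noisy probe by repeatedly updating the units. The pattern and the number of update passes are arbitrary choices made for illustration.

# A small Hopfield-net sketch: store bipolar (+1/-1) patterns with a Hebbian
# rule, then repeatedly update units until the state settles. Here a single
# pattern is stored and recovered from a probe with one flipped bit.

def train(patterns):
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    weights[i][j] += p[i] * p[j] / len(patterns)
    return weights

def recall(weights, state, passes=5):
    state = list(state)
    for _ in range(passes):                     # repeated asynchronous updates
        for i in range(len(state)):
            total = sum(weights[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if total >= 0 else -1
    return state

stored = [[1, -1, 1, -1, 1, -1]]
weights = train(stored)
noisy = [1, 1, 1, -1, 1, -1]                    # one bit flipped
print(recall(weights, noisy))                   # recovers [1, -1, 1, -1, 1, -1]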

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[116][119]

The business community’s fascination with AI rose and fell in the 80s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term “AI winter” was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[120] Their fears were well founded: in the late 80s and early 90s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[121]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were “brittle” (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[122]

In the late 80s, the Strategic Computing Initiative cut funding to AI “deeply and brutally.” New leadership at DARPA had decided that AI was not “the next wave” and directed funds towards projects that seemed more likely to produce immediate results.[123]

By 1991, the impressive list of goals penned in 1981 for Japan’s Fifth Generation Project had not been met. Indeed, some of them, like “carry on a casual conversation” had not been met by 2010.[124] As with other AI projects, expectations had run much higher than what was actually possible.[124]

In the late 80s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[125] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec’s paradox). They advocated building intelligence “from the bottom up.”[126]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 70s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy’s logic and Minsky’s frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr’s work would be cut short by leukemia in 1980.)[127]

In a 1990 paper, “Elephants Don’t Play Chess,”[128] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since “the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough.”[129] In the 80s and 90s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[130]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI’s failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of “artificial intelligence”.[131] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[132] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[133]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[134] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[135] In February 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[136]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of modern computers.[137] In fact, Deep Blue’s computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[138] This dramatic increase is described by Moore’s law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of “raw computer power” was slowly being overcome.
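
A quick back-of-the-envelope check, using only the figures quoted above, shows how well the numbers line up: a factor of ten million is about 23 doublings, and at one doubling every two years that comes to roughly 46 to 47 years, close to the 46 years separating the Ferranti Mark 1 (1951) from Deep Blue (1997).

import math

speedup = 10_000_000                    # Deep Blue vs. Ferranti Mark 1, per the text
doublings = math.log2(speedup)          # about 23.3 doublings
years_predicted = 2 * doublings         # Moore's law: one doubling every two years
print(round(doublings, 1), round(years_predicted, 1))   # 23.3 46.5
print(1997 - 1951)                                      # 46 actual years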

A new paradigm called “intelligent agents” became widely accepted during the 90s.[139] Although earlier researchers had proposed modular “divide and conquer” approaches to AI,[140] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from decision theory and economics into the study of AI.[141] When the economist’s definition of a rational agent was married to computer science’s definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are “intelligent agents”, as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as “the study of intelligent agents”. This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[142]
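
In code, the paradigm often amounts to nothing more than a percept-to-action interface. The sketch below (with invented names and numbers) shows a trivial reflex agent in that mold: a thermostat whose “success” is keeping a room near a target temperature.

from abc import ABC, abstractmethod

class Agent(ABC):
    """An agent maps percepts from its environment to actions."""
    @abstractmethod
    def act(self, percept):
        ...

class ThermostatAgent(Agent):
    """A simple reflex agent: success means keeping temperature near a setpoint."""
    def __init__(self, setpoint=21.0, tolerance=0.5):
        self.setpoint = setpoint
        self.tolerance = tolerance

    def act(self, percept):
        temperature = percept["temperature"]
        if temperature < self.setpoint - self.tolerance:
            return "heat_on"
        if temperature > self.setpoint + self.tolerance:
            return "heat_off"
        return "no_op"

agent = ThermostatAgent()
for reading in (18.0, 21.2, 23.0):
    print(reading, "->", agent.act({"temperature": reading}))

Under this definition the thermostat, a chess program and a firm all count as agents; what varies is the sophistication of the mapping from percepts to actions.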

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell’s SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[141][143]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[144] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous “scientific” discipline. Russell & Norvig (2003) describe this as nothing less than a “revolution” and “the victory of the neats”.[145][146]

Judea Pearl’s highly influential 1988 book[147] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for “computational intelligence” paradigms like neural networks and evolutionary algorithms.[145]
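
A minimal example of the kind of probabilistic reasoning these tools support (all probabilities invented for illustration): a two-node Bayesian network, Disease -> Test, where the posterior probability of disease given a positive test follows directly from Bayes’ rule.

# Two-node Bayesian network: Disease -> Test. All numbers are illustrative.
p_disease = 0.01                # prior P(disease)
p_pos_given_disease = 0.95      # sensitivity, P(test+ | disease)
p_pos_given_healthy = 0.05      # false-positive rate, P(test+ | no disease)

# Bayes' rule: P(disease | test+) = P(test+ | disease) P(disease) / P(test+)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))   # about 0.161: a positive test still leaves disease unlikely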

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[148] and its solutions proved to be useful throughout the technology industry,[149] such as data mining, industrial robotics, logistics,[150] speech recognition,[151] banking software,[152] medical diagnosis[152] and Google’s search engine.[153]

The field of AI received little or no credit for these successes. Many of AI’s greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[154] Nick Bostrom explains: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.”[155]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but also because the new names helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research, as the New York Times reported in 2005: “Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers.”[156][157][158]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[159]

In 2001, AI founder Marvin Minsky asked “So the question is why didn’t we get HAL in 2001?”[160] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[161] For Ray Kurzweil, the issue is computer power and, using Moore’s Law, he predicted that machines with human-level intelligence will appear by 2029.[162] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[163] There were many other explanations and for each there was a corresponding research program underway.

In the first decades of the 21st century, access to large amounts of data (known as “big data”), faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. By 2016, the market for AI related products, hardware and software reached more than 8 billion dollars and the New York Times reported that interest in AI had reached a “frenzy”.[164]

Deep learning is a branch of machine learning that models high-level abstractions in data by using a deep graph with many processing layers.
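
A “deep graph with many processing layers” can be sketched in a few lines. The toy forward pass below (random, untrained weights, purely illustrative) stacks several nonlinear layers, each transforming the previous layer’s output into a more abstract representation.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)   # a common layer nonlinearity

# Each layer is a (weight matrix, bias vector) pair; the sizes are chosen arbitrarily.
layer_sizes = [8, 16, 16, 16, 4]   # input -> three hidden layers -> output
layers = [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
          for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x, layers):
    """Pass an input through every layer: x -> relu(x W + b), layer by layer."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

x = rng.standard_normal(8)          # a toy input vector
print(forward(x, layers))           # the (untrained) network's output

Training such a stack end to end on large datasets, rather than the forward pass itself, is what distinguishes deep learning from earlier, shallower approaches.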

Artificial general intelligence (AGI) describes research that aims to create machines capable of general intelligent action.


See original here:

History of artificial intelligence – Wikipedia

