The Observer view on artificial intelligence | Observer editorial … – The Guardian

An artificial intelligence called Libratus beat four of the world's best poker players in Pittsburgh last week. Photograph: Carnegie Mellon University

First it was checkers (draughts to you and me), then chess, then Jeopardy!, then Go and now poker. One after another, these games, all of which require significant amounts of intelligence and expertise if they are to be played well, have fallen to the technology we call artificial intelligence (AI). And as each of these milestones is passed, speculation about the prospect of superintelligence (the attainment by machines of capabilities exceeding our own) reaches a new high before the media caravan moves on to its next obsession du jour. Never mind that most leaders in the field regard the prospect of being supplanted by super-machines as exceedingly distant (one has famously observed that he is more concerned about the dangers of overpopulation on Mars): the solipsism of human nature means that even the most distant or implausible threat to our uniqueness as a species bothers us.

The public obsession with the existential risks of artificial superintelligence is, however, useful to the tech industry because it distracts attention from the type of AI that is now part of its core business. This is "weak" AI: a combination of big data and machine-learning algorithms that ingest huge volumes of data and extract patterns and actionable predictions from them. This technology is already ubiquitous in the search engines and apps we all use every day. And the trend is accelerating: the near-term strategy of every major technology company can currently be summarised as "AI everywhere".

The big data/machine-learning combination is powerful and enticing. It can and often does lead to the development of more useful products and services: search engines that can make intelligent guesses about what the user is trying to find, movies or products that might be of interest, sources of information that one might sample, connections that one might make and so on. It also enables corporations and organisations to improve efficiency, performance and services by learning from the huge troves of data that they routinely collect but until recently rarely analysed.


There's no question that this is a powerful and important new technology, and it has triggered a Gadarene stampede of venture and corporate capital. We are moving into what one distinguished legal scholar calls the "black box society": a world in which human freedoms and options are increasingly influenced by opaque, inscrutable algorithms. Whose names appear on no-fly lists? Who gets a loan or a mortgage? Which prisoners get considered for parole? Which categories of fake news appear in your news feed? What price does Ryanair quote you for that particular flight? Why has your credit rating suddenly and inexplicably worsened?

In many cases, it may be that these decisions are rational and/or defensible. The trouble is that we have no way of knowing. And yet the black boxes that yield such outcomes are not inscrutable to everyone, just to those who are affected by them. They are perfectly intelligible to the corporations that created and operate them. This means that the move towards an algorithmically driven society also represents a radical power shift, away from citizens and consumers and towards a smallish number of powerful, pathologically secretive technology companies, whose governing philosophy seems to be that they should know everything about us, but that we should know as little as possible about their operations.

What's even more remarkable is that these corporations are now among the world's largest and most valuable enterprises. Yet, on the whole, they don't receive the critical scrutiny their global importance warrants. On the contrary, they get an easier ride from the media than comparable companies in other industries. If the CEO of an oil company, a car manufacturer or a mining corporation were to declare, for example, that his motto was "Don't be evil", even the most somnolent journalist might raise a sceptical eyebrow. But when some designer-stubbled CEO in a hoodie proclaims his belief in the fundamental goodness of humanity, the media yawn tolerantly and omit to notice his company's marked talent for tax avoidance. This has to stop: transparency is a two-way process.


Artificial Intelligence: What It Is and How It Really Works

Which is Which?

It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last bit may be impossible to discuss without sparking an entire world of debate about the existence of consciousness, scientists have certainly been making strides with the first two.

Over the years, we have been hearing a lot about artificial intelligence, machine learning, and deep learning. But how do we differentiate between these three rather abstruse terms, and how are they related to one another?

Artificial intelligence (AI) is the general field that covers everything that has anything to do with imbuing machines with intelligence, with the goal of emulating a human being's unique reasoning faculties. Machine learning is a category within the larger field of artificial intelligence that is concerned with conferring upon machines the ability to learn. This is achieved by using algorithms that discover patterns and generate insights from the data they are exposed to, for application to future decision-making and predictions, a process that sidesteps the need to be programmed specifically for every single possible action.
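
To make that distinction concrete, here is a minimal sketch (not from the article) contrasting a hand-coded rule with one learned from labelled examples; the "suspicious word count" feature, the data and the threshold search are all invented for illustration.

```python
# A minimal sketch (not from the article): a hand-coded rule versus a rule
# learned from labelled examples. Feature and data are invented for illustration.

# Hand-coded approach: a human picks the rule explicitly.
def is_spam_hand_coded(suspicious_words: int) -> bool:
    return suspicious_words > 3  # threshold chosen by a programmer

# Machine-learning approach: the threshold is discovered from labelled examples.
def learn_threshold(examples: list[tuple[int, bool]]) -> float:
    """Pick the threshold that misclassifies the fewest training examples."""
    best_threshold, best_errors = 0.0, len(examples) + 1
    for candidate in sorted({count for count, _ in examples}):
        errors = sum((count > candidate) != label for count, label in examples)
        if errors < best_errors:
            best_threshold, best_errors = candidate, errors
    return best_threshold

training_data = [(0, False), (1, False), (2, False), (4, True), (6, True), (7, True)]
threshold = learn_threshold(training_data)
print(threshold)          # 2: learned from the data, not hard-coded
print(5 > threshold)      # prediction for an unseen message: True (spam)
```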

Deep learning, on the other hand, is a subset of machine learning: it's the most advanced AI field, one that brings AI the closest to the goal of enabling machines to learn and think as much like humans as possible.

In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence: three nested fields, with AI the broadest and deep learning the most specialised.

Here's a little bit of historical background to better illustrate the differences between the three, and how each discovery and advance has paved the way for the next:

Philosophers attempted to make sense of human thinking in the context of a system, and this idea resulted in the coinage of the term "artificial intelligence" in 1956. Philosophy is still believed to have an important role to play in the advancement of artificial intelligence. Oxford University physicist David Deutsch wrote in an article that he believes philosophy still holds the key to achieving artificial general intelligence (AGI), the level of machine intelligence comparable to that of the human brain, despite the fact that no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.

Advances in AI have given rise to debates about whether such systems pose a threat to humanity, whether physically or economically (a concern for which universal basic income has been proposed as one remedy, and is currently being tested in several countries).

Machine learning is just one approach to reifying artificial intelligence; it ultimately eliminates (or greatly reduces) the need to hand-code the software with a list of possibilities and the ways the machine intelligence ought to react to each of them. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked hard on evolving artificial intelligence from merely recognizing patterns to learning from experience, making him a pioneer of the field. He used the game of checkers for his research while working at IBM, and this subsequently influenced the programming of early IBM computers.

Current applications are becoming more and more sophisticated, making their way into complex medical domains.

Examples include analyzing large genome sets in an effort to prevent diseases, diagnosing depression based on speech patterns, and identifying people with suicidal tendencies.

As we delve into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics the human brain's neural networks in order to make sense of patterns, even with noise, missing details and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data and tremendous computing power.

It means not having to laboriously program a prospective AI with that elusive quality of intelligence, however defined. Instead, all the potential for future intelligence and reasoning powers is latent in the program itself, much like an infant's inchoate but infinitely flexible mind.



Algorithm-Driven Design: How Artificial Intelligence Is …


I've been following the idea of algorithm-driven design for several years now and have collected some practical examples. The tools of the approach can help us to construct a UI, prepare assets and content, and personalize the user experience. The information, though, has always been scarce and hasn't been systematic.

However, in 2016, the technological foundations of these tools became easily accessible, and the design community got interested in algorithms, neural networks and artificial intelligence (AI). Now is the time to rethink the modern role of the designer.

One of the most impressive promises of algorithm-driven design was given by the infamous CMS The Grid. It chooses templates and content-presentation styles, and it retouches and crops photos, all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern. However, the product is still in private beta, so we can judge it only by its publications and ads.

The Designer News community found real-world examples of websites created with The Grid, and they got a mixed reaction: people criticized the design and code quality. Many skeptics opened a champagne bottle that day.

The idea of fully replacing a designer with an algorithm sounds futuristic, but it misses the point. Product designers help to translate a raw product idea into a well-thought-out user interface, with solid interaction principles, a sound information architecture and a considered visual style, while helping a company to achieve its business goals and strengthen its brand.

Designers make a lot of big and small decisions; many of them are hardly described by clear processes. Moreover, incoming requirements are not 100% clear and consistent, so designers help product managers solve these collisions, making for a better product. It's about much more than choosing a suitable template and filling it with content.

However, if we talk about creative collaboration, where designers work in tandem with algorithms to solve product tasks, we see a lot of good examples and clear potential. It's especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.

Designers have learned to juggle many tools and skills to near perfection, and as a result, a new term has emerged: "product designer". Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes in the code for it. These people are invaluable to any product team.

However, balancing so many skills is hard: you can't dedicate enough time to every aspect of product work. Of course, the recent boom of new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it's still not enough. There is still too much routine, and new responsibilities eat up all of the time we've saved. We need to automate and simplify our work processes even more. I see three key directions for this: constructing a UI, preparing assets and content, and personalizing the user experience.

I'll show you some examples and propose a new approach for this future work process.

Publishing tools such as Medium, Readymag and Squarespace have already simplified the author's work: countless high-quality templates will give the author a pretty design without having to pay for a designer. There is an opportunity to make these templates smarter, so that the barrier to entry gets even lower.

For example, while The Grid is still in beta, a hugely successful website builder, Wix, has started including algorithm-driven features. The company announced Advanced Design Intelligence, which looks similar to The Grid's semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites. Moreover, it tries to make style suggestions relevant to the client's industry. It's not easy for non-professionals to choose a suitable template, and products like Wix and The Grid could serve as design experts.

Surely, as in the case of The Grid, removing designers from the creative process leads to clichéd and mediocre results (even if it improves overall quality). However, if we consider this process more like paired design with a computer, then we can offload many routine tasks; for example, designers could create a moodboard on Dribbble or Pinterest, and then an algorithm could quickly apply these styles to mockups and propose a suitable template. Designers would become art directors to their new apprentices: computers.

Of course, we can't create a revolutionary product in this way, but we could free some time to create one. Moreover, many everyday tasks are utilitarian and don't require a revolution. If a company is mature enough and has a design system, then algorithms could make it more powerful.

For example, the designer and developer could define the logic that considers content, context and user data; then, a platform would compile a design using principles and patterns. This would allow us to fine-tune the tiniest details for specific usage scenarios, without drawing and coding dozens of screen states by hand. Florian Schulz shows how you can use the idea of interpolation to create many states of components.
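
As a rough illustration of that interpolation idea, the sketch below derives any in-between state of a card component from two endpoint states; the property names and breakpoints are hypothetical rather than anything from Schulz's work.

```python
# A sketch of interpolation between two hand-designed component states.
# Property names and breakpoints are hypothetical.

def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

# Endpoint states of a card component at narrow and wide container widths.
narrow = {"width": 320, "font_size": 14, "padding": 8,  "image_height": 120}
wide   = {"width": 960, "font_size": 20, "padding": 24, "image_height": 320}

def card_state(container_width: float) -> dict:
    """Derive the card's properties for any width between the two endpoints."""
    t = (container_width - narrow["width"]) / (wide["width"] - narrow["width"])
    t = max(0.0, min(1.0, t))  # clamp outside the defined range
    return {key: round(lerp(narrow[key], wide[key], t), 1) for key in narrow}

print(card_state(640))  # {'width': 640.0, 'font_size': 17.0, 'padding': 16.0, 'image_height': 220.0}
```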

My interest in algorithm-driven design sprang up around 2012, when my design team at Mail.Ru Group required an automated magazine layout. Existing content had a poor semantic structure, and updating it by hand was too expensive. How could we get modern designs, especially when the editors weren't designers?

Well, a special script would parse an article. Then, depending on the article's content (the number of paragraphs and words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), the script would choose the most suitable pattern to present each part of the article. The script also tried to mix patterns, so that the final design had variety. This would save the editors time in reworking old content, and the designer would just have to add new presentation modules. Flipboard launched a very similar model a few years ago.
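
A toy sketch of the kind of rule the script applied might look like the following; the pattern names, content features and thresholds are invented, and the real script was considerably more elaborate.

```python
# A toy sketch of rule-based pattern selection of the kind described above.
# Pattern names and thresholds are invented for illustration.
import random

PATTERNS = ["quote-callout", "photo-spread", "two-column-text", "single-column-text"]

def choose_pattern(paragraphs: int, words: int, photos: int, has_quote: bool) -> str:
    """Map a content fragment's measurable features to a presentation pattern."""
    if has_quote:
        return "quote-callout"
    if photos >= 2:
        return "photo-spread"
    if words / max(paragraphs, 1) > 80:
        return "two-column-text"   # long paragraphs read better in columns
    return "single-column-text"

def layout_article(fragments: list[dict]) -> list[str]:
    """Choose a pattern per fragment, nudging repeated choices apart for variety."""
    chosen = []
    for fragment in fragments:
        pattern = choose_pattern(**fragment)
        if chosen and pattern == chosen[-1] and pattern != "quote-callout":
            pattern = random.choice([p for p in PATTERNS if p != pattern])
        chosen.append(pattern)
    return chosen

article = [
    {"paragraphs": 3, "words": 300, "photos": 0, "has_quote": False},
    {"paragraphs": 1, "words": 40,  "photos": 3, "has_quote": False},
    {"paragraphs": 2, "words": 120, "photos": 0, "has_quote": True},
]
print(layout_article(article))  # ['two-column-text', 'photo-spread', 'quote-callout']
```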

Vox Media made a home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library. Next, each layout is examined and scored based on certain traits. Finally, the generator selects the best layout, basically the one with the highest score. It's more efficient than picking the best links by hand, as proven by recommendation engines such as Relap.io.
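
In the same spirit, a minimal generate-and-score sketch could look like this; the modules, the height budget and the scoring traits are all invented for illustration and are not Vox Media's actual criteria.

```python
# A minimal generate-and-score sketch: enumerate candidate layouts from a small
# pattern library, score each against a few traits, keep the best. All invented.
from itertools import permutations

MODULES = {
    "hero":       {"height": 400, "freshness": 0.9, "visual_weight": 0.8},
    "river":      {"height": 600, "freshness": 0.7, "visual_weight": 0.3},
    "photo_grid": {"height": 300, "freshness": 0.4, "visual_weight": 0.9},
    "link_list":  {"height": 200, "freshness": 0.6, "visual_weight": 0.1},
}
MAX_HEIGHT = 1200  # a layout is valid only if it fits the page budget

def score(layout: tuple[str, ...]) -> float:
    """Reward fresh content near the top and alternation of visual weight."""
    total = 0.0
    for position, name in enumerate(layout):
        total += MODULES[name]["freshness"] * (len(layout) - position)
    for a, b in zip(layout, layout[1:]):
        total += abs(MODULES[a]["visual_weight"] - MODULES[b]["visual_weight"])
    return total

candidates = [
    layout for n in (2, 3)
    for layout in permutations(MODULES, n)
    if sum(MODULES[m]["height"] for m in layout) <= MAX_HEIGHT
]
best = max(candidates, key=score)
print(best, round(score(best), 2))
```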

Creating cookie-cutter graphic assets in many variations is one of the most boring parts of a designer's work. It takes a lot of time and is demotivating, when designers could be spending this time on more valuable product work.

Algorithms could take on simple tasks such as color matching. For example, Yandex.Launcher uses an algorithm to automatically set up colors for app cards, based on app icons. Other variables could be set automatically, such as changing text color according to the background color, highlighting eyes in a photo to emphasize emotion, and implementing parametric typography.
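
As a concrete example of one such rule, the sketch below picks black or white text for a card based on the background color's relative luminance, using the standard WCAG 2.x formulas; the sample colors are arbitrary, and a production system would presumably first extract a dominant color from the icon itself.

```python
# Choose black or white text for a card from the background color's relative
# luminance (WCAG 2.x formulas). Sample colors are arbitrary.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance of an sRGB color given as 0-255 channels."""
    def linearize(channel: int) -> float:
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2) -> float:
    lighter, darker = sorted((relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

def pick_text_color(background: tuple[int, int, int]) -> tuple[int, int, int]:
    """Return whichever of black or white text contrasts more with the background."""
    black, white = (0, 0, 0), (255, 255, 255)
    return max((black, white), key=lambda text: contrast_ratio(text, background))

print(pick_text_color((250, 220, 90)))   # light yellow card -> (0, 0, 0), black text
print(pick_text_color((30, 40, 120)))    # dark blue card    -> (255, 255, 255), white text
```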

Algorithms can create an entire composition. Yandex.Market uses a promotional image generator for e-commerce product lists (article in Russian). A marketer fills a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines. Netflix went even further: its script crops movie characters for posters, then applies a stylized and localized movie title, then runs automatic experiments on a subset of users. Real magic! Engadget has nurtured a robot apprentice to write simple news articles about new gadgets. Whew!

Truly dark magic happens in neural networks. A fresh example, the Prisma app, stylizes photos to look like works of famous artists. Artisto can process video in a similar way (even streaming video).

However, all of this is still at an early stage. Sure, you could download an app on your phone and get a result in a couple of seconds, rather than struggle with some library on GitHub (as we had to last year); but it's still impossible to upload your own reference style and get a good result without teaching a neural network. However, when that happens at last, will it make illustrators obsolete? I doubt it will for those artists with a solid and unique style. But it will lower the barrier to entry when you need decent illustrations for an article or website but don't need a unique approach. No more boring stock photos!

For a really unique style, it might help to have a quick stylized sketch based on a question like, "What if we did an illustration of a building in our unified style?" For example, the Pixar artists on the animated movie Ratatouille tried applying several different styles to the movie's scenes and characters; what if a neural network made these sketches? We could also create storyboards and describe scenarios with comics (photos can be easily converted to sketches). The list can get very long.

Finally, there is live identity, too. Animation has become hugely popular in branding recently, but some companies are going even further. For example, Wolff Olins presented a live identity for the Brazilian telecom Oi, which reacts to sound. You just can't create crazy stuff like this without some creative collaboration with algorithms.

One way to get a clear and well-developed strategy is to personalize a product for a narrow audience segment or even specific users. We see it every day in Facebook news feeds, Google search results, Netflix and Spotify recommendations, and many other products. Besides relieving users of the burden of filtering information, this makes the user's connection to the brand more emotional, because the product seems to care so much about them.

However, the key question here is about the role of the designer in these solutions. We rarely have the skill to create algorithms like these; engineers and big data analysts are the ones to do it. Giles Colborne of CX Partners sees a great example in Spotify's Discover Weekly feature: the only element of classic UX design here is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.

Colborne offers advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. It's important to learn how to work with big data and to cluster it into actionable insights. For example, Airbnb learned how to answer the question "What will the booked price of a listing be on any given day in the future?" so that its hosts could set competitive prices. There are also endless stories about Netflix's recommendation engine.

A relatively new term, "anticipatory design", takes a broader view of UX personalization and the anticipation of user wishes. We already have these types of things on our phones: Google Now automatically proposes a way home from work using location history data; Siri proposes similar ideas. However, the key factor here is trust. To execute anticipatory experiences, people have to give large companies permission to gather personal usage data in the background.

I already mentioned some examples of automatic testing of design variations used by Netflix, Vox Media and The Grid. This is one more way to personalize UX that could be put onto the shoulders of algorithms. Liam Spradlin describes the interesting concept of mutative design; it's a well-thought-out model of adaptive interfaces that considers many variables to fit particular users.

I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of the augmentation of human ability in detail. They see three levels of maturity for design tools.

Algorithm-driven design should be something like an exoskeleton for product designers, increasing the number and depth of decisions we can get through. How might designers and computers collaborate?

The working process of digital product designers could potentially be reorganized as a set of tasks shared between the designer and the algorithm.

These tasks are of two types: the analysis of implicitly expressed information and of already-working solutions, and the synthesis of requirements and of solutions to meet them. Which tools and working methods do we need for each of them?

Analysis of implicitly expressed information about users, the kind studied with qualitative research, is hard to automate. However, exploring the usage patterns of users of existing products is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).
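
A minimal sketch of that clustering step, using k-means from scikit-learn on invented behavioral features (sessions per week, average session length in minutes, share of mobile visits), might look like this.

```python
# A minimal sketch of clustering usage data into audience segments with k-means.
# The behavioral features and synthetic users are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic users drawn from three rough behavior profiles.
casual   = rng.normal(loc=[2, 3, 0.8],   scale=[1, 1, 0.1], size=(100, 3))
power    = rng.normal(loc=[20, 15, 0.3], scale=[3, 4, 0.1], size=(100, 3))
browsers = rng.normal(loc=[7, 30, 0.5],  scale=[2, 5, 0.1], size=(100, 3))
users = np.vstack([casual, power, browsers])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
segments = kmeans.fit_predict(users)

# Each centroid summarises one segment; a designer could now tailor the UX to it.
for i, centre in enumerate(kmeans.cluster_centers_):
    size = int(np.sum(segments == i))
    print(f"segment {i}: {size} users, centroid {np.round(centre, 1)}")
```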

To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: "A genetic algorithm starts with a fundamental description of the desired outcome: say, an airline's timetable that is optimized for fuel savings and passenger convenience. It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable (the take-off time of Flight 37 from O'Hare, for instance) affects the dependent variables of fuel efficiency and passenger convenience."
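
To make Bruner's description concrete, here is a toy genetic algorithm in the same spirit: it evolves departure hours for ten flights against an invented fitness function that trades passenger convenience off against congestion. It illustrates the technique only and is nothing like a real airline scheduler.

```python
# A toy genetic algorithm: a "timetable" is a list of departure hours for ten
# flights; fitness rewards closeness to each flight's preferred demand hour and
# penalises departures crowded into the same hour. Everything here is invented.
import random

random.seed(1)
PREFERRED_HOURS = [7, 8, 9, 12, 13, 17, 18, 19, 20, 22]  # demand peak per flight

def fitness(timetable: list[int]) -> float:
    convenience = -sum(abs(hour - want) for hour, want in zip(timetable, PREFERRED_HOURS))
    congestion = sum(timetable.count(h) - 1 for h in set(timetable))  # shared slots
    return convenience - 3 * congestion

def mutate(timetable: list[int]) -> list[int]:
    child = timetable[:]
    i = random.randrange(len(child))
    child[i] = max(0, min(23, child[i] + random.choice([-2, -1, 1, 2])))
    return child

def crossover(a: list[int], b: list[int]) -> list[int]:
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randrange(24) for _ in PREFERRED_HOURS] for _ in range(60)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                      # selection: keep the fittest
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(len(population) - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print(best, fitness(best))
```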

In this scenario, humans curate an algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning. O'Reilly recently published a great mini-book on the topic.

Two years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performance, industrial design, fashion and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach parametric design.

Logojoy is a product designed to replace freelancers for simple logo design. You choose your favorite styles, pick a color and, voilà, Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It's the perfect example of an algorithm-driven design tool in the real world. Dawson Whitfield, the founder, has described the machine learning principles behind it.

However, generative design is not yet established in digital product design, because it doesn't help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren't static: their usage patterns, content and features change over time, often many times. However, if we consider the overall generative process (a designer defines rules, which are used by an algorithm to create the final object), there's a lot of inspiration in it.

It's not yet clear how we can filter a huge number of generated concepts in digital product design, where usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would be even more productive and creative. However, as product designers, we already use a kind of generative design every day: in brainstorming sessions where we propose dozens of ideas, or when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?

The experimental tool Rene by Jon Gold, who worked at The Grid, is an example of this approach in action. Gold taught a computer to make meaningful typographic decisions. Gold thinks that it's not far from how human designers are taught, so he broke this learning process down into several steps.

His idea is similar to what Roelof and Samim say: Tools should be creative partners for designers, not just dumb executants.

Gold's experimental tool Rene is built on these principles. He also talks about imperative and declarative approaches to programming and says that modern design tools should choose the latter, focusing on what we want to calculate, not how. Jon uses vivid formulas to show how this applies to design and has already made a couple of low-level demos. You can try out the tool for yourself. It's a very early concept, but enough to give you the idea.
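
A small illustration of that imperative versus declarative split (emphatically not Gold's code): imperatively we spell out how positions are computed, while declaratively we state what we want and let a tiny layout "solver" derive the coordinates.

```python
# Imperative vs declarative layout, as a toy illustration.

def imperative_layout(widths: list[int], gap: int) -> list[int]:
    """How: walk the items and accumulate x positions by hand."""
    x, positions = 0, []
    for w in widths:
        positions.append(x)
        x += w + gap
    return positions

def declarative_layout(widths: list[int], container: int, spec: dict) -> list[int]:
    """What: 'distribute items evenly across the container'; the solver derives the rest."""
    if spec.get("distribute") == "evenly":
        free = container - sum(widths)
        gap = free / (len(widths) + 1)            # equal space before, between, after
        x, positions = gap, []
        for w in widths:
            positions.append(round(x))
            x += w + gap
        return positions
    raise ValueError("unknown layout spec")

widths = [120, 80, 160]
print(imperative_layout(widths, 20))                              # [0, 140, 240]
print(declarative_layout(widths, 600, {"distribute": "evenly"}))  # [60, 240, 380]
```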

While Jon jokingly calls this approach brute-force design and multiplicative design, he emphasizes the importance of a professional being in control. Notably, he left The Grid team earlier this year.

Unfortunately, there are no tools for product design for web and mobile that could help with analysis and synthesis on the same level as Autodesk Dreamcatcher does. However, The Grid and Wix could be considered more or less mass-level and straightforward solutions. Adobe is constantly adding features that could be considered intelligent: the latest release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image's original size.

There is another experiment by Adobe and the University of Toronto. DesignScape automatically refines a design layout for you. It can also propose an entirely new composition.

You should definitely follow Adobe in its developments, because the company announced a smart platform named Sensei at the MAX 2016 conference. Sensei uses Adobe's deep expertise in AI and machine learning, and it will be the foundation for future algorithm-driven design features in Adobe's consumer and enterprise products. In its announcement, the company refers to things such as semantic image segmentation (showing each region in an image, labeled by type, for example, building or sky), font recognition (i.e. recognizing a font from a creative asset and recommending similar fonts, even from handwriting), and intelligent audience segmentation.

However, as John McCarthy, the late computer scientist who coined the term "artificial intelligence", famously said, "As soon as it works, no one calls it AI anymore." What was once cutting-edge AI is now considered standard behavior for computers. Several experimental ideas and tools hint at what could become part of the digital product designer's day-to-day toolkit.

But these are rare and patchy glimpses of the future. Right now, it's more about individual companies building custom solutions for their own tasks. One of the best approaches is to integrate these algorithms into a company's design system. The goals are similar: to automate a significant number of tasks in support of the product line; to achieve and sustain a unified design; to simplify launches; and to support current products more easily.

Modern design systems started as front-end style guidelines, but that's just a first step (integrating design into the code used by developers). Developers are still creating pages by hand. The next step is half-automatic page creation and testing using predefined rules.

Platform Thinking, by Yury Vetrov (source)

Should your company follow this approach?

In the near term, the value of this approach is more or less clear.

Altogether, this frees the designer from the routines of both development support and the creative process, but core decisions are still made by them. A neat side effect is that we will better understand our work, because we will be analyzing it in an attempt to automate parts of it. It will make us more productive and will enable us to better explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.

However, none of these benefits comes easily, and each has its limitations.

There are also ethical questions: Is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that "computer art" isn't any more provocative than "paint art" or "piano art". The algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art: a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?

This is a story of a beautiful future, but we should remember the limits of algorithms: they're built on rules defined by humans, even if the rules are now being supercharged with machine learning. The power of the designer is that they can make and break rules; a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.

Moreover, digital products are getting more and more complex: we need to support more platforms, tweak usage scenarios for more user segments, and hypothesize more. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hire more and more designers, offload routine tasks to a computer. Let it play with the fonts.



Yury leads a team comprising UX and visual designers at one of the largest Russian Internet companies, Mail.Ru Group. His team works on communications, content-centric, and mobile products, as well as cross-portal user experiences. Both Yury and his team are doing a lot to grow their professional community in Russia.


Artificial intelligence could cost millions of jobs. The …

The growing popularity of artificial intelligence technology will likely lead to millions of lost jobs, especially among less-educated workers, and could exacerbate the economic divide between socioeconomic classes in the United States, according to a newly released White House report.

But that same technology is also essential to improving the country's productivity growth, a key measure of how efficiently the economy produces goods. That could ultimately lead to higher average wages and fewer work hours. For that reason, the report concludes, our economy actually needs more artificial intelligence, not less.

To reconcile the benefits of the technology with its expected toll, the report states, the federal government should expand both access to education in technical fields and the scope of unemployment benefits. Those policy recommendations, which the Obama administration has made in the past, could head off some of those job losses and support those who find themselves out of work due to the coming economic shift, according to the report.

The White House report comes exactly one month before President-elect Donald Trump is sworn into office, meaning Obama will need his successor to execute on the policy recommendations. That seems unlikely, especially as far as unemployment protections are concerned. Congressional Republicans already aim to curtail some existing entitlement programs to reduce government spending.

Rolling back Social Security protections for out-of-work families "would potentially be more risky at a time when you have these types of changes in the economy that we're documenting in this report," Jason Furman, the chairman of the Council of Economic Advisers, said in a call with reporters.

Research conducted in recent years varies widely on how many jobs will be displaced due to artificial intelligence, according to the report. A 2016 study from the Organization for Economic Cooperation and Development estimates that 9 percent of jobs would be completely displaced in the next two decades. Many more jobs will be transformed, if not eliminated. Two academics from Oxford University, however, put that number at 47 percent in a study conducted in 2013.

The staggering difference illustrates how much the impact of artificial intelligence remains speculative. While certain industries, such as transportation and agriculture, appear to be embracing the technology with relative haste, others are likely to face a slower period of adoption.

"If these estimates of threatened jobs translate into job displacement, millions of Americans will have their livelihoods significantly altered and potentially face considerable economic challenges in the short- and medium-term," the White House report states.

Those same studies were consistent, however, when it came to the population that would feel the economic brunt of artificial intelligence. The workers earning less than $20 per hour and without a high school diploma would be most likely to see their jobs automated away. The projections improved if workers earned higher wages or obtained higher levels of education.

Jobs that involve a high degree of creativity, analytical thinking or interpersonal communication are considered most secure.

The report also highlights potential advantages of the technology. It could lead to greater labor productivity, meaning workers have to work fewer hours to produce the same amount. That could lead to more leisure time and a higher quality of life, the report notes.

"As we look at AI, our biggest economic concern is that we won't have enough of it, that we won't have enough productivity growth," Furman said. "Anything we can do to have more AI will lead to more productivity growth."

To that end, the report calls for further investment in artificial intelligence research and development. Specifically, the White House sees the technology's applications in cyber defense and fraud detection as particularly promising.


Navy Center for Applied Research in Artificial Intelligence

The Navy Center for Applied Research in Artificial Intelligence (NCARAI) has been involved in both basic and applied research in artificial intelligence, cognitive science, autonomy, and human-centered computing since its inception in 1981. NCARAI, part of the Information Technology Division within the Naval Research Laboratory, is engaged in research and development efforts designed to address the application of artificial intelligence technology and techniques to critical Navy and national problems.

The research program of the Center is directed toward understanding the design and operation of systems capable of improving performance based on experience; efficient and effective interaction with other systems and with humans; sensor-based control of autonomous activity; and the integration of varieties of reasoning as necessary to support complex decision-making. The emphasis at NCARAI is the linkage of theory and application in demonstration projects that use a full spectrum of artificial intelligence techniques.

The NCARAI has active research groups in Adaptive Systems, Intelligent Systems, Interactive Systems, and Perceptual Systems.

Contact: Alan C. Schultz, Director, Navy Center for Applied Research in Artificial Intelligence, Code 5510, Washington, DC 20375. Email: w5510@aic.nrl.navy.mil

Release Number: 13-1231-3165


World’s largest hedge fund to replace managers with …

The Systematized Intelligence Lab is headed by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011. Photograph: AP

The world's largest hedge fund is building a piece of software to automate the day-to-day management of the firm, including hiring, firing and other strategic decision-making.

Bridgewater Associates has a team of software engineers working on the project at the request of billionaire founder Ray Dalio, who wants to ensure the company can run according to his vision even when he's not there, the Wall Street Journal reported.

The role of many remaining humans at the firm "wouldn't be to make individual choices but to design the criteria by which the system makes decisions, intervening when something isn't working", wrote the Journal, which spoke to five former and current employees.

The firm, which manages $160bn, created the team of programmers specializing in analytics and artificial intelligence, dubbed the Systematized Intelligence Lab, in early 2015. The unit is headed up by David Ferrucci, who previously led IBM's development of Watson, the supercomputer that beat humans at Jeopardy! in 2011.

The company is already highly data-driven, with meetings recorded and staff asked to grade each other throughout the day using a ratings system called "dots". The Systematized Intelligence Lab has built a tool that incorporates these ratings into "Baseball Cards" that show employees' strengths and weaknesses. Another app, dubbed The Contract, gets staff to set goals they want to achieve and then tracks how effectively they follow through.

These tools are early applications of PriOS, the over-arching management software that Dalio wants to be making three-quarters of all management decisions within five years. The kinds of decisions PriOS could make include finding the right staff for particular job openings and ranking opposing perspectives from multiple team members when there's a disagreement about how to proceed.
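
As a toy sketch of how such a ranking might work (and emphatically not Bridgewater's actual algorithm), one could weight each person's vote by their rating in the relevant skill area, in the spirit of the "dots" described above; the names, ratings and options below are invented.

```python
# A toy sketch, not Bridgewater's actual system: rank two opposing options by
# weighting each person's vote with their rating in the relevant skill area.
# Names, ratings and options are invented for illustration.

votes = [
    # (person, relevant rating 0-10, preferred option)
    ("analyst_a", 8.5, "expand fund"),
    ("analyst_b", 4.0, "hold steady"),
    ("manager_c", 6.5, "hold steady"),
    ("trader_d",  9.0, "expand fund"),
]

def weighted_decision(votes: list[tuple[str, float, str]]) -> dict[str, float]:
    """Sum each option's support, weighted by the voter's believability rating."""
    totals: dict[str, float] = {}
    for _, rating, option in votes:
        totals[option] = totals.get(option, 0.0) + rating
    return totals

totals = weighted_decision(votes)
print(totals)                        # {'expand fund': 17.5, 'hold steady': 10.5}
print(max(totals, key=totals.get))   # the machine's recommendation
```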

The machine will make the decisions, according to a set of principles laid out by Dalio about the company vision.

"It's ambitious, but it's not unreasonable," said Devin Fidler, research director at the Institute for the Future, who has built a prototype management system called iCEO. "A lot of management is basically information work, the sort of thing that software can get very good at."

Automated decision-making is appealing to businesses as it can save time and eliminate human emotional volatility.

"People have a bad day and it then colors their perception of the world and they make different decisions. In a hedge fund that's a big deal," he added.

Will people happily accept orders from a robotic manager? Fidler isn't so sure. "People tend not to accept a message delivered by a machine," he said, pointing to the need for a human interface.

"In companies that are really good at data analytics, very often the decision is made by a statistical algorithm, but the decision is conveyed by somebody who can put it in an emotional context," he explained.

Futurist Zoltan Istvan, founder of the Transhumanist party, disagrees. "People will follow the will and statistical might of machines," he said, pointing out that people already outsource way-finding to GPS or the flying of planes to autopilot.

However, the period in which people will need to interact with a robot manager will be brief.

"Soon there just won't be any reason to keep us around," Istvan said. "Sure, humans can fix problems, but machines in a few years' time will be able to fix those problems even better."

"Bankers will become dinosaurs."

It's not just the banking sector that will be affected. According to a report by Accenture, artificial intelligence will free people from the drudgery of administrative tasks in many industries. The company surveyed 1,770 managers across 14 countries to find out how artificial intelligence would impact their jobs.

"AI will ultimately prove to be cheaper, more efficient, and potentially more impartial in its actions than human beings," said the authors, writing up the results of the survey in Harvard Business Review.

However, they didn't think there was too much cause for concern: it just means that managers' jobs will change to focus on things only humans can do.

The authors say that machines would be better at administrative tasks like writing earnings reports and tracking schedules and resources while humans would be better at developing messages to inspire the workforce and drafting strategy.

Fidler disagrees. "There's no reason to believe that a lot of what we think of as strategic work or even creative work can't be substantially overtaken by software."

However, he said, that software will need some direction: "It needs human decision-making to set objectives."

Bridgewater Associates did not respond to a request for comment.


9 Development in Artificial Intelligence | Funding a …

ment" (Nilsson, 1984). Soon, SRI committed itself to the development of an AI-driven robot, Shakey, as a means to achieve its objective. Shakey's development necessitated extensive basic research in several domains, including planning, natural-language processing, and machine vision. SRI's achievements in these areas (e.g., the STRIPS planning system and work in machine vision) have endured, but changes in the funder's expectations for this research exposed SRI's AI program to substantial criticism in spite of these real achievements.

Under J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, DARPA continued to invest in AI research at CMU, MIT, Stanford, and SRI and, to a lesser extent, other institutions. Licklider (1964) asserted that AI was central to DARPA's mission because it was a key to the development of advanced command-and-control systems. Artificial intelligence was a broad category for Licklider (and his immediate successors), who "supported work in problem solving, natural language processing, pattern recognition, heuristic programming, automatic theorem proving, graphics, and intelligent automata. Various problems relating to human-machine communication - tablets, graphic systems, hand-eye coordination - were all pursued with IPTO support" (Norberg and O'Neill, 1996).

These categories were sufficiently broad that researchers like McCarthy, Minsky, and Newell could view their institutions' research, during the first 10 to 15 years of DARPA's AI funding, as essentially unfettered by immediate applications. Moreover, as work in one problem domain spilled over into others easily and naturally, researchers could attack problems from multiple perspectives. Thus, AI was ideally suited to graduate education, and enrollments at each of the AI centers grew rapidly during the first decade of DARPA funding.

DARPA's early support launched a golden age of AI research and rapidly advanced the emergence of a formal discipline. Much of DARPA's funding for AI was contained in larger program initiatives. Licklider considered AI a part of his general charter of Computers, Command, and Control. Project MAC (see Box 4.2), a project on time-shared computing at MIT, allocated roughly one-third of its $2.3 million annual budget to AI research, with few specific objectives.

The history of speech recognition systems illustrates several themes common to AI research more generally: the long time periods between the initial research and development of successful products, and the interactions between AI researchers and the broader community of researchers in machine intelligence. Many capabilities of today's speech-recognition systems derive from the early work of statisticians, electrical engineers, …


2016: The year artificial intelligence exploded – SD Times

Artificial intelligence isn't a new concept. It is something that companies and businesses have been trying to implement (and something that society has feared) for decades. However, with all the recent advancements to democratize artificial intelligence and use it for good, almost every company started to turn to this technology and technique in 2016.

The year started with Facebook's CEO Mark Zuckerberg announcing his plan to build an artificially intelligent assistant to do everything from adjusting the temperature in his house to checking up on his baby girl. He worked throughout the year to bring his plan to life, with an update in August that stated he was almost ready to show off his AI to the world.

In November, Facebook announced it was beginning to focus on giving computers the ability to think, learn, plan and reason like humans. In order to change the negative stigma people associate with AI, the company ended its year with the release of AI educational videos designed to make the technology easier to understand.

Microsoft followed Facebook's pursuit of artificial intelligence, but instead of building its own personal assistant, the company made strides to democratize AI. In January, the company released its deep learning solution, the Computational Network Toolkit (CNTK), on GitHub. Recently, Microsoft announced an update to CNTK with new Python and C++ programming language functionality, as well as reinforcement learning algorithm capabilities. In July, Microsoft also open-sourced its Minecraft AI testing platform to provide developers with a test bed for their AI research.

But the company's AI goals didn't stop there. At its Ignite conference in September, CEO Satya Nadella announced his company's objective to make AI easier to understand. "We want to empower people with the tools of AI so they can build their own solutions," he said. Following Nadella's announcement, Microsoft joined top tech companies such as Amazon, Facebook, Google DeepMind and IBM to form the Partnership on AI. Microsoft ended the year by teaming up with OpenAI to advance AI research.

Google started the year with a major breakthrough in artificial intelligence. The company's AI system, AlphaGo, was the first AI system to beat a master at the ancient strategy game Go. In April, the company announced it was ready for an AI-first world. "Over time, the computer itself, whatever its form factor, will be an intelligent assistant helping you through your day," said CEO Sundar Pichai. "We will move from mobile-first to an AI-first world."

Pichai reiterated that sentiment at the Google I/O developer conference in May, where he announced that the company's advances in machine learning and AI would bring new and better experiences to its users. For instance, the company announced the voice-based helper Google Assistant, updates to its machine learning toolkit TensorFlow, and the release of the Natural Language API and Cloud Speech API throughout the year. To help bring wider adoption to AI, Google also created a site called AI Experiments in November, designed to make it easier for anyone to explore AI. The year ended for Google with the open-source release of its DeepMind Lab, a 3D platform for agent-based AI research.

IBM, the company known for its cognitive system Watson, also made waves in the AI world this year. It started the year with the release of IBM Predictive Analytics, a service allowing developers to build machine learning models. At its World of Watson conference in October, the company announced the Watson Data Platform with Machine Learning and a new AI Nanodegree program with Udacity. The company ended the year with the release of Project DataWorks, a solution designed to make AI-powered decisions, and announced a partnership with Topcoder to bring AI capabilities to developers.

There was a smattering of other AI news as well. Baidu Research's Silicon Valley AI Lab released code to advance speech recognition at the beginning of the year. NVIDIA began to develop AI software to accelerate cancer research. Carnegie Mellon University researchers announced a five-year research initiative to reverse-engineer the brain and explore machine learning as well as computer vision. Researchers from MIT's Computer Science and Artificial Intelligence Laboratory developed a technique to understand how and why AI machines make certain decisions. Big data companies turned to machine learning and deep learning techniques to help derive value from their data. OpenAI rounded out the year with the release of Universe, a new AI software platform for testing and evaluating the general intelligence of AI.

"Artificial intelligence is intended to help people make better decisions. The system learns at scale, gets better through experience, and interacts with humans in a more natural way," said Jonas Nwuke, platform manager for IBM Watson.


The world’s first demonstration of spintronics-based …

December 20, 2016. Fig. 1: (a) Optical photograph of a fabricated spintronic device that serves as an artificial synapse in the present demonstration; the measurement circuit for the resistance switching is also shown. (b) Measured relation between the resistance of the device and the applied current, showing analogue-like resistance variation. (c) Photograph of the spintronic device array mounted on a ceramic package, which is used for the developed artificial neural network. Credit: Tohoku University

Researchers at Tohoku University have, for the first time, successfully demonstrated the basic operation of spintronics-based artificial intelligence.

Artificial intelligence, which emulates the information processing function of the brain that can quickly execute complex and complicated tasks such as image recognition and weather prediction, has attracted growing attention and has already been partly put to practical use.

The currently-used artificial intelligence works on the conventional framework of semiconductor-based integrated circuit technology. However, this lacks the compactness and low-power feature of the human brain. To overcome this challenge, the implementation of a single solid-state device that plays the role of a synapse is highly promising.

The Tohoku University research group of Professor Hideo Ohno, Professor Shigeo Sato, Professor Yoshihiko Horio, Associate Professor Shunsuke Fukami and Assistant Professor Hisanao Akima developed an artificial neural network in which their recently developed spintronic devices, comprising micro-scale magnetic material, are employed (Fig. 1). Unlike conventional magnetic devices, the spintronic device used here can memorize arbitrary values between 0 and 1 in an analogue manner, and it can thus perform the learning function that is served by synapses in the brain.

Using the developed network (Fig. 2), the researchers examined an associative memory operation, which is not readily executed by conventional computers. Through multiple trials, they confirmed that the spintronic devices have a learning ability with which the developed artificial neural network can successfully associate memorized patterns (Fig. 3) from noisy input versions, just as the human brain can.
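
The demonstration implements associative memory in hardware, but the underlying idea can be illustrated in software. The sketch below is a minimal Hopfield-style associative memory in Python (not the authors' model): a few binary patterns are stored in real-valued weights, the role the analogue spintronic synapses play in the hardware, and a stored pattern is then recovered from a corrupted input.

```python
# A software-only illustration of associative memory (not the Tohoku hardware model):
# a minimal Hopfield-style network stores binary patterns in analogue-valued weights
# and recovers a stored pattern from a noisy version of it.
import numpy as np

patterns = np.array([           # three 16-pixel binary patterns, values -1/+1
    [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1, -1, -1, 1, 1, -1, -1, 1, 1],
])
n = patterns.shape[1]

# Hebbian learning: the weights are real-valued, which is the role the analogue
# spintronic synapses play in the demonstration above.
weights = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(weights, 0)

def recall(state: np.ndarray, steps: int = 5) -> np.ndarray:
    """Iteratively update the state until it settles on a stored pattern."""
    state = state.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

noisy = patterns[0].copy()
noisy[[0, 5, 11]] *= -1                         # corrupt three of the sixteen pixels
recovered = recall(noisy)
print(np.array_equal(recovered, patterns[0]))   # True: the stored pattern is recovered
```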

The proof-of-concept demonstration in this research is expected to open new horizons in artificial intelligence technology: one that is compact in size while simultaneously achieving fast processing and ultralow power consumption. These features should enable artificial intelligence to be used in a broad range of societal applications such as image/voice recognition, wearable terminals, sensor networks and nursing-care robots.


More information: W. A. Borders, et al. Analogue spin-orbit torque device for artificial neural network based associative memory operation. Applied Physics Express, DOI: 10.1143/APEX.10.013007


Artificial Intelligence Market Size and Forecast by 2024

Artificial intelligence is a fast-emerging technology dealing with the development and study of intelligent machines and software. The software is being used across various applications such as manufacturing (assembly-line robots), medical research, and speech recognition systems. It also enables built-in software or machines to operate like human beings, allowing devices to collect and analyze data, reason, talk, make decisions and act. The global artificial intelligence market was valued at US$126.24 Bn in 2015 and is forecast to grow at a CAGR of 36.1% from 2016 to 2024, reaching US$3,061.35 Bn in 2024.

The global artificial intelligence market is currently witnessing healthy growth as companies have started leveraging the benefits of such disruptive technologies for effective customer reach and positioning of their services and solutions. Market growth is also supported by an expanding application base for artificial intelligence solutions across various industries. However, factors such as limited access to funding, high upfront investment, and a shortage of skilled workers currently act as major deterrents to market growth.

On the basis of types of artificial intelligence systems, the market is segmented into artificial neural network, digital assistance system, embedded system, expert system, and automated robotic system. Expert systems formed the most widely adopted and highest revenue-generating segment in 2015, mainly owing to the extensive use of artificial intelligence in tasks such as diagnosis, process control, design, monitoring, scheduling and planning.

Based on various applications of artificial intelligence systems, the market has been classified into deep learning, smart robots, image recognition, digital personal assistant, querying method, language processing, gesture control, video analysis, speech recognition, context aware processing, and cyber security. Image recognition is projected to be the fastest-growing application segment in the global artificial intelligence market, owing to the growing demand for affective computing technology across various end-use sectors, that is, for systems that can recognize, analyze, process, and simulate human affects.

North America was the leader in the global artificial intelligence market in 2015, holding approximately 38% of the global market revenue share, and is expected to remain dominant throughout the forecast period from 2016 to 2024. High government funding and a strong technological base have been some of the major factors responsible for the top position of the North America region in the artificial intelligence market over the past few years. Middle East and Africa is expected to grow at the highest CAGR of 38.2% throughout the forecast period. This is mainly attributed to enormous opportunities for artificial intelligence in the MEA region in terms of new airport developments and various technological innovations including robotic automation.

The key market players profiled in this report include QlikTech International AB, MicroStrategy Inc., IBM Corporation, Google, Inc., Brighterion Inc., Microsoft Corporation, IntelliResponse Systems Inc., Next IT Corporation, Nuance Communications, and eGain Corporation.

Chapter 1 Preface
1.1 Research Scope
1.2 Market Segmentation
1.3 Research Methodology

Chapter 2 Executive Summary
2.1 Market Snapshot: Global Artificial Intelligence Market, 2015 & 2024
2.2 Global Artificial Intelligence Market Revenue, 2014-2024 (US$ Bn) and CAGR (%)

Chapter 3 Global Artificial Intelligence Market Analysis
3.1 Key Trends Analysis
3.2 Market Dynamics
3.2.1 Drivers
3.2.2 Restraints
3.2.3 Opportunities
3.3 Value Chain Analysis
3.4 Global Artificial Intelligence Market Analysis, by Types
3.4.1 Overview
3.4.2 Artificial Neural Network
3.4.3 Digital Assistance System
3.4.4 Embedded System
3.4.5 Expert System
3.4.6 Automated Robotic System
3.5 Global Artificial Intelligence Market Analysis, by Application
3.5.1 Overview
3.5.2 Deep Learning
3.5.3 Smart Robots
3.5.4 Image Recognition
3.5.5 Digital Personal Assistant
3.5.6 Querying Method
3.5.7 Language Processing
3.5.8 Gesture Control
3.5.9 Video Analysis
3.5.10 Speech Recognition
3.5.11 Context Aware Processing
3.5.12 Cyber Security
3.6 Competitive Landscape
3.6.1 Market Positioning of Key Players in Artificial Intelligence Market (2015)
3.6.2 Competitive Strategies Adopted by Leading Players

Chapter 4 North America Artificial Intelligence Market Analysis
4.1 Overview
4.3 North America Artificial Intelligence Market Analysis, by Types
4.3.1 North America Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
4.4 North America Artificial Intelligence Market Analysis, by Application
4.4.1 North America Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
4.5 North America Artificial Intelligence Market Analysis, by Region
4.5.1 North America Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 5 Europe Artificial Intelligence Market Analysis
5.1 Overview
5.3 Europe Artificial Intelligence Market Analysis, by Types
5.3.1 Europe Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
5.4 Europe Artificial Intelligence Market Analysis, by Application
5.4.1 Europe Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
5.5 Europe Artificial Intelligence Market Analysis, by Region
5.5.1 Europe Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 6 Asia Pacific Artificial Intelligence Market Analysis
6.1 Overview
6.3 Asia Pacific Artificial Intelligence Market Analysis, by Types
6.3.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
6.4 Asia Pacific Artificial Intelligence Market Analysis, by Application
6.4.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
6.5 Asia Pacific Artificial Intelligence Market Analysis, by Region
6.5.1 Asia Pacific Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 7 Middle East and Africa (MEA) Artificial Intelligence Market Analysis
7.1 Overview
7.3 MEA Artificial Intelligence Market Analysis, by Types
7.3.1 MEA Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
7.4 MEA Artificial Intelligence Market Analysis, by Application
7.4.1 MEA Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
7.5 MEA Artificial Intelligence Market Analysis, by Region
7.5.1 MEA Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 8 Latin America Artificial Intelligence Market Analysis
8.1 Overview
8.3 Latin America Artificial Intelligence Market Analysis, by Types
8.3.1 Latin America Artificial Intelligence Market Share Analysis, by Types, 2015 & 2024 (%)
8.4 Latin America Artificial Intelligence Market Analysis, by Application
8.4.1 Latin America Artificial Intelligence Market Share Analysis, by Application, 2015 & 2024 (%)
8.5 Latin America Artificial Intelligence Market Analysis, by Region
8.5.1 Latin America Artificial Intelligence Market Share Analysis, by Region, 2015 & 2024 (%)

Chapter 9 Company Profiles
9.1 QlikTech International AB
9.2 MicroStrategy, Inc.
9.3 IBM Corporation
9.4 Google, Inc.
9.5 Brighterion, Inc.
9.6 Microsoft Corporation
9.7 IntelliResponse Systems Inc.
9.8 Next IT Corporation
9.9 Nuance Communications
9.10 eGain Corporation

The Artificial Intelligence Market report provides analysis of the global artificial intelligence market for the period 2014-2024, wherein the period from 2016 to 2024 is the forecast period and 2015 is considered the base year. The report covers the major trends and technologies playing a significant role in the artificial intelligence market's growth over the forecast period. It also highlights the drivers, restraints, and opportunities expected to influence market growth during this period. The study provides a holistic perspective on the market's growth in terms of revenue (in US$ Bn) across different geographies, including Asia Pacific (APAC), Latin America (LATAM), North America, Europe, and Middle East & Africa (MEA).

The market overview section of the report showcases the market's dynamics and trends (drivers, restraints, and opportunities) that influence the current nature and future status of this market. Moreover, the report provides an overview of the strategies and winning imperatives of the key players in the artificial intelligence market and analyzes their behavior in the prevailing market dynamics.

The report segments the global artificial intelligence market by type of artificial intelligence system into artificial neural network, digital assistance system, embedded system, expert system, and automated robotic system. By application, the market has been classified into deep learning, smart robots, image recognition, digital personal assistant, querying method, language processing, gesture control, video analysis, speech recognition, context aware processing, and cyber security. The report thus provides in-depth cross-segment analysis of the artificial intelligence market, offering valuable insights at both the macro and micro levels.

The report also provides the competitive landscape for the artificial intelligence market, positioning all the major players according to their geographic presence, market attractiveness, and recent key developments. The market estimates are the result of in-depth secondary research, primary interviews, and in-house expert panel reviews, and take into account the impact of political, social, economic, technological, and legal factors, along with the current market dynamics, affecting the artificial intelligence market's growth.

QlikTech International AB, MicroStrategy Inc., IBM Corporation, Google, Inc., Brighterion Inc., Microsoft Corporation, IntelliResponse Systems Inc., Next IT Corporation, Nuance Communications, and eGain Corporation are some of the major players profiled in this study. Details such as financials, business strategies, recent developments, and other strategic information pertaining to these players have been provided as part of the company profiles.

Read more from the original source:

Artificial Intelligence Market Size and Forecast by 2024

2017 Is the Year of Artificial Intelligence | Inc.com

A couple of weeks ago, I polled the business community for their top technology predictions for 2017, and within a couple of hours I had a few hundred emails in my inbox with some really great insights. Fascinatingly enough, the vast majority of these responses had something to do with the rise of Artificial Intelligence in our everyday business lives.

Yes, it appears that the robots are taking over, which seems like a scary thought. The mere mention of AI still conjures images of Will Smith in I, Robot, which totally does not bode well for humans...

...okay, well, maybe robots won't take over planet earth, but there is a big fear amongst many that they'll take over our jobs. Some folks are also becoming overwhelmed with the thought of AI being implemented into business and are afraid of not being technologically savvy enough to keep up.

So what does the future look like with human beings working alongside artificial intelligence beings?

Sales Will Become More Efficient

Big data has certainly optimized sales efforts by eliminating the need for an icy cold call. Using data technologies, companies can identify their top sales leads and focus their efforts on the folks most likely to buy their product (instead of wasting time on people who have no interest). According to recent findings by Forbes, 89 percent of marketers now use predictive analytics to improve their sales ROI.

For AI innovators, predictive marketing technology is awesome for sales, but it could be even more efficient. According to a recent study by Conversica, the vast majority of companies fail to follow up with a third of their interested leads. This is where AI assistants come in. These virtual beings are given a name, email address, and title and can do all the preliminary dirty work for sales teams: reaching out to leads, following up, and having initial conversations with customers to gauge interest. Then, when the lead is almost ready to buy, it is passed to a human.

This system ensures that the third of qualified leads who would otherwise be missed aren't falling through the cracks due to human error, and because of this, AI will actually lead to:

The Creation of More Jobs

Since the AI Assistants are handling a huge bulk of the work, don't you think that using them might put many human salespeople out of a job? Alex Terry, CEO of Conversica, answers this question all the time.

"It's the same issue a lot of people had with replacing bank tellers with ATM machines," he says. What the general public may not know, is that by making banks more efficient by implementing ATMs in the 1990s, each branch had less overhead and were able to do more work--which lead to a higher return. While individual banks had less people working there, the increase in profit allowed banking companies to open more branches and hire more people to staff them.

This same scenario is being seen with AI.

"When sales teams become more efficient, they increase their ROI--which allows companies to have a greater marketing budget," says Terry, whose AI technology is being used by the likes of IBM and other Fortune 500 companies. "Therefore, they can hire more people."

All of this to say, folks, human beings aren't going anywhere. "AI was never meant to replace humans, but to work alongside them," Terry explains. "Humans and computers together is the most powerful combination."

You see, AI is not impeding human interaction, but rather it is enhancing it by connecting us with those we will have the most success with. And obviously, when we have more success, we are able to grow our businesses and offer a better life for ourselves and our families. So bring on the 'bots!

Read more:

2017 Is the Year of Artificial Intelligence | Inc.com

Artificial Intelligence | The Turing Test

The Turing Test Alan Turing and the Imitation Game

Alan Turing, in a 1950 paper, proposed a test called "The Imitation Game" that might finally settle the issue of machine intelligence. The first version of the game he explained involved no computer intelligence whatsoever. Imagine three rooms, each connected to the others via computer screen and keyboard. In one room sits a man, in the second a woman, and in the third sits a person, call him or her the "judge". The judge's job is to decide which of the two people talking to him through the computer is the man. The man will attempt to help the judge, offering whatever evidence he can (the computer terminals are used so that physical clues cannot be used) to prove his manhood. The woman's job is to trick the judge, so she will attempt to deceive him and counteract her opponent's claims, in hopes that the judge will erroneously identify her as the male.

What does any of this have to do with machine intelligence? Turing then proposed a modification of the game, in which instead of a man and a woman as contestants, there was a human, of either gender, and a computer at the other terminal. Now the judge's job is to decide which of the contestants is human and which is the machine. Turing proposed that if, under these conditions, a judge were no more than 50% accurate, that is, if the judge were as likely to pick the computer as the human, then the computer must be a passable simulation of a human being and hence intelligent. The game has recently been modified so that there is only one contestant, and the judge's job is not to choose between two contestants but simply to decide whether the single contestant is human or machine.
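To make the pass/fail criterion a little more concrete, here is a minimal simulation sketch in Python. The judge and contestant objects, their converse and classify methods, the number of trials, and the 5% tolerance are all hypothetical stand-ins invented for illustration; they are not part of Turing's proposal.

```python
import random

def run_imitation_game(judge, contestants, n_trials=1000):
    """Simulate the modified game: in each trial the judge interrogates one
    contestant (human or machine, chosen at random) and labels it."""
    correct = 0
    for _ in range(n_trials):
        kind = random.choice(["human", "machine"])
        transcript = contestants[kind].converse()      # hypothetical interface
        if judge.classify(transcript) == kind:
            correct += 1
    return correct / n_trials

# Stub contestants and a guessing judge, just to make the scoring concrete.
class Contestant:
    def __init__(self, style):
        self.style = style
    def converse(self):
        return f"some conversation in a {self.style} style"

class RandomJudge:
    def classify(self, transcript):
        return random.choice(["human", "machine"])

accuracy = run_imitation_game(RandomJudge(), {"human": Contestant("human"),
                                              "machine": Contestant("machine")})
# Turing's criterion: accuracy near 0.5 (chance level) means the machine imitates
# a human well enough to "pass"; the 5% tolerance here is our own assumption.
print(accuracy, "passes" if abs(accuracy - 0.5) < 0.05 else "fails")
```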

The dictionary.com entry on the Turing Test (click here) is short, but very clearly stated. A longer, point-form review of the imitation game and its modifications, written by Larry Hauser, is also available (click here; if the link fails, click here for a local copy). Hauser's page may not contain enough detail to explain the test, but it is an excellent reference or study guide and contains some helpful diagrams for understanding the interplay of contestant and judge. The page also makes reference to John Searle's Chinese Room, a thought experiment developed as an attack on the Turing test and similar "behavioural" intelligence tests. We will discuss the Chinese Room in the next section.

Natural Language Processing (NLP)

Partly out of an attempt to pass Turing's test, and partly just for the fun of it, there arose, largely in the 1970s, a group of programs that tried to cross the first human-computer barrier: language. These programs, often fairly simple in design, employed small databases of (usually English) language combined with a series of rules for forming intelligent sentences. While most were woefully inadequate, some grew to tremendous popularity. Perhaps the most famous such program was Joseph Weizenbaum's ELIZA. Written in 1966, it was one of the first and remained for quite a while one of the most convincing. ELIZA simulates a Rogerian psychotherapist (the Rogerian therapist is empathic but passive, asking leading questions but doing very little talking, e.g. "Tell me more about that," or "How does that make you feel?") and does so quite convincingly, for a while. There is no hint of intelligence in ELIZA's code: it simply scans for keywords like "mother" or "depressed" and then asks suitable questions from a large database. Failing that, it generates something generic in an attempt to elicit further conversation. Most programs since have relied on similar principles of keyword matching, paired with basic knowledge of sentence structure. There is, however, no better way to see what they are capable of than to try them yourself. We have compiled a set of links to some of the more famous attempts at NLP. Students are encouraged to interact with these programs in order to get a feeling for their strengths and weaknesses, but be warned: many of the pages provided here link to dozens of such programs, so don't get lost among the artificial people.
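To get a sense of how little machinery a keyword-matching program needs, here is a minimal ELIZA-style responder in Python. The patterns and canned replies are invented for this example and are not Weizenbaum's actual DOCTOR script.

```python
import random
import re

# Toy ELIZA-style responder: scan the input for a keyword pattern and answer
# with a canned, open-ended prompt. Rules are illustrative only.
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\b(mother|father|family)\b", ["Tell me more about your family."]),
    (r"\b(depressed|sad|unhappy)\b", ["I am sorry to hear that. Why do you think you feel that way?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?", "Tell me more about that."]

def respond(user_input: str) -> str:
    for pattern, replies in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            reply = random.choice(replies)
            # Echo captured text back, e.g. "I need a holiday" -> "Why do you need a holiday?"
            return reply.format(*match.groups())
    # No keyword found: fall back to something generic to keep the talk going.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I am depressed about my mother"))
    print(respond("I need a holiday"))
    print(respond("The weather is nice today"))
```

Everything the "therapist" says comes from this handful of rules; the appearance of understanding is supplied entirely by the user.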

Online Examples of NLP

A series of online demos (many are Java applets, so be sure you are using a Java-capable browser) of some of the more famous NLP programs.

Although Turing proposed his test in 1950, it was not until some 40 years later, in 1991, that the test was first really implemented. Dr. Hugh Loebner, very much interested in seeing AI succeed, pledged $100,000 to the first entrant that could pass the test. The 1991 contest had some serious problems, though (perhaps the most notable being that the judges were all computer science specialists and knew exactly what kind of questions might trip up a computer), and it was not until 1995 that the contest was re-opened. Since then, there has been an annual competition, which has yet to find a winner. While small prizes are given out to the most "human-like" computer, no program has reached the 50% threshold Turing aimed for.

Validity of the Turing Test

Alan Turing's imitation game has fueled 40 years of controversy, with little sign of slowing. On one side of the argument, human-like interaction is seen as absolutely essential to human-like intelligence: a successful AI is worthless if its intelligence lies trapped in an unresponsive program. Some have even extended the Turing Test. Stevan Harnad (see below) has proposed the "Total Turing Test", in which, instead of language alone, the machine must interact in all areas of human endeavor, and instead of a five-minute conversation, the duration of the test is a lifetime. James Sennett has proposed a similar extension to the Turing Test (if the link fails, click here for a local copy) that challenges AI to mimic not only human thought but also personhood as a whole. To illustrate his points, the author uses Star Trek: The Next Generation's character Data.

Opponents of Turing's behavioural criterion of intelligence argue that it is either not sufficient, or perhaps not even relevant at all. What is important, they argue, is that the computer demonstrates cognitive ability, regardless of behaviour. It is not necessary that a program speak in order for it to be intelligent. There are humans that would fail the Turing test, and unintelligent computers that might pass. The test is neither necessary nor sufficient for intelligence, they argue. In hopes of illuminating the debate, we have assigned two papers that deal with the Turing Test from very different points of view. The first is a criticism of the test, the second comes to its defense.


Go here to read the rest:

Artificial Intelligence | The Turing Test

Demystifying artificial intelligence What business leaders …

Artificial Intelligence still sounds more like science fiction than it does an IT investment, but it is increasingly real, and critical to the success of the Internet of Things.

In the last several years, interest in artificial intelligence (AI) has surged. Venture capital investments in companies developing and commercializing AI-related products and technology have exceeded $2 billion since 2011.1 Technology companies have invested billions more acquiring AI startups. Press coverage of the topic has been breathless, fueled by the huge investments and by pundits asserting that computers are starting to kill jobs, will soon be smarter than people, and could threaten the survival of humankind. Consider the following:

IBM has committed $1 billion to commercializing Watson, its cognitive computing platform.2

Google has made major investments in AI in recent years, including acquiring eight robotics companies and a machine-learning company.3

Facebook hired AI luminary Yann LeCun to create an AI laboratory with the goal of bringing about major advances in the field.4

Amid all the hype, there is significant commercial activity underway in the area of AI that is affecting or will likely soon affect organizations in every sector. Business leaders should understand what AI really is and where it is heading.

The first steps in demystifying AI are defining the term, outlining its history, and describing some of the core technologies underlying it.

The field of AI suffers from both too few and too many definitions. Nils Nilsson, one of the founding researchers in the field, has written that AI "may lack an agreed-upon definition . . ."11 A well-respected AI textbook, now in its third edition, offers eight definitions and declines to prefer one over the others.12 For us, a useful definition of AI is "the theory and development of computer systems able to perform tasks that normally require human intelligence." Examples include tasks such as visual perception, speech recognition, decision making under uncertainty, learning, and translation between languages.13 Defining AI in terms of the tasks humans do, rather than how humans think, allows us to discuss its practical applications today, well before science arrives at a definitive understanding of the neurological mechanisms of intelligence.14 It is worth noting that the set of tasks that normally require human intelligence is subject to change as computer systems able to perform those tasks are invented and then widely diffused. Thus, the meaning of AI evolves over time, a phenomenon known as the AI effect, concisely stated as "AI is whatever hasn't been done yet."15

AI is not a new idea. Indeed, the term itself dates from the 1950s. The history of the field is marked by "periods of hype and high expectations alternating with periods of setback and disappointment," as a recent apt summation puts it.16 After articulating the bold goal of simulating human intelligence in the 1950s, researchers developed a range of demonstration programs through the 1960s and into the '70s that showed computers able to accomplish a number of tasks once thought to be solely the domain of human endeavor, such as proving theorems, solving calculus problems, responding to commands by planning and performing physical actions, and even impersonating a psychotherapist and composing music. But simplistic algorithms, poor methods for handling uncertainty (a surprisingly ubiquitous fact of life), and limitations on computing power stymied attempts to tackle harder or more diverse problems. Amid disappointment with a lack of continued progress, AI fell out of fashion by the mid-1970s.

In the early 1980s, Japan launched a program to develop an advanced computer architecture that could advance the field of AI. Western anxiety about losing ground to Japan contributed to decisions to invest anew in AI. The 1980s saw the launch of commercial vendors of AI technology products, some of which had initial public offerings, such as Intellicorp, Symbolics,17 and Teknowledge.18 By the end of the 1980s, perhaps half of the Fortune 500 were developing or maintaining expert systems, an AI technology that models human expertise with a knowledge base of facts and rules.19 High hopes for the potential of expert systems were eventually tempered as their limitations, including a glaring lack of common sense, the difficulty of capturing experts' tacit knowledge, and the cost and complexity of building and maintaining large systems, became widely recognized. AI ran out of steam again.

In the 1990s, technical work on AI continued with a lower profile. Techniques such as neural networks and genetic algorithms received fresh attention, in part because they avoided some of the limitations of expert systems and in part because new algorithms made them more effective. The design of neural networks is inspired by the structure of the brain. Genetic algorithms aim to evolve solutions to problems by iteratively generating candidate solutions, culling the weakest, and creating new solution variants through random mutation.
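As a rough illustration of that generate-cull-mutate loop, here is a toy genetic algorithm in Python. The bit-string representation, the "one-max" fitness function, and all the parameter values are arbitrary choices for the example, not a reference implementation of any system discussed here.

```python
import random

def genetic_search(fitness, length=20, pop_size=50, generations=100, mutation_rate=0.05):
    """Toy genetic algorithm over fixed-length bit strings: keep the fittest half
    each generation, then refill the population by copying survivors with
    random bit-flip mutations."""
    population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)     # best candidates first
        survivors = population[: pop_size // 2]        # cull the weakest half
        children = []
        for parent in survivors:
            child = [(1 - bit) if random.random() < mutation_rate else bit for bit in parent]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Example: evolve a bit string with as many 1s as possible ("one-max" toy problem).
best = genetic_search(fitness=sum)
print(sum(best), "ones out of 20")
```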

By the late 2000s, a number of factors helped renew progress in AI, particularly in a few key technologies. We explain the factors most responsible for the recent progress below and then describe those technologies in more detail.

Moore's Law. The relentless increase in computing power available at a given price and size, sometimes known as Moore's Law after Intel cofounder Gordon Moore, has benefited all forms of computing, including the types AI researchers use. Advanced system designs that might have worked in principle were in practice off limits just a few years ago because they required computer power that was cost-prohibitive or just didn't exist. Today, the power necessary to implement these designs is readily available. A dramatic illustration: The current generation of microprocessors delivers 4 million times the performance of the first single-chip microprocessor introduced in 1971.20

Big data. Thanks in part to the Internet, social media, mobile devices, and low-cost sensors, the volume of data in the world is increasing rapidly.21 Growing understanding of the potential value of this data22 has led to the development of new techniques for managing and analyzing very large data sets.23 Big data has been a boon to the development of AI. The reason is that some AI techniques use statistical models for reasoning probabilistically about data such as images, text, or speech. These models can be improved, or trained, by exposing them to large sets of data, which are now more readily available than ever.24

The Internet and the cloud. Closely related to the big data phenomenon, the Internet and cloud computing can be credited with advances in AI for two reasons. First, they make available vast amounts of data and information to any Internet-connected computing device. This has helped propel work on AI approaches that require large data sets.25 Second, they have provided a way for humans to collaborate, sometimes explicitly and at other times implicitly, in helping to train AI systems. For example, some researchers have used cloud-based crowdsourcing services like Mechanical Turk to enlist thousands of humans to describe digital images, enabling image classification algorithms to learn from these descriptions.26 Google's language translation project analyzes feedback and freely offered contributions from its users to improve the quality of automated translation.27

New algorithms. An algorithm is a routine process for solving a problem or performing a task. In recent years, new algorithms have been developed that dramatically improve the performance of machine learning, an important technology in its own right and an enabler of other technologies such as computer vision.28 (These technologies are described below.) The fact that machine learning algorithms are now available on an open-source basis is likely to foster further improvements as developers contribute enhancements to each other's work.29

We distinguish between the field of AI and the technologies that emanate from the field. The popular press portrays AI as the advent of computers as smart as, or smarter than, humans. The individual technologies, by contrast, are getting better at performing specific tasks that only humans used to be able to do. We call these cognitive technologies (figure 1), and it is these that business and public sector leaders should focus their attention on. Below we describe some of the most important cognitive technologies, those that are seeing wide adoption, making rapid progress, or receiving significant investment.

Computer vision refers to the ability of computers to identify objects, scenes, and activities in images. Computer vision technology uses sequences of image-processing operations and other techniques to decompose the task of analyzing images into manageable pieces. There are techniques for detecting the edges and textures of objects in an image, for instance. Classification techniques may be used to determine whether the features identified in an image are likely to represent a kind of object already known to the system.30
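To make the "detect edges, then classify" pipeline a little more concrete, the Python sketch below computes a Sobel edge map with NumPy. This is a generic textbook filter used purely for illustration, not the technique of any particular product mentioned in this article.

```python
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Approximate edge strength of a 2-D grayscale image with Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal gradient
    ky = kx.T                                                          # vertical gradient
    h, w = image.shape
    edges = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            edges[i, j] = np.hypot(gx, gy)   # gradient magnitude = edge strength
    return edges

# A feature map like this could then be handed to a classifier that decides
# whether a region matches an object class the system already knows.
demo = np.zeros((8, 8))
demo[:, 4:] = 1.0                  # a vertical light/dark boundary
print(sobel_edges(demo).round(1))  # strongest responses sit along the boundary
```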

Computer vision has diverse applications, including analyzing medical imaging to improve prediction, diagnosis, and treatment of diseases;31 face recognition, used by Facebook to automatically identify people in photographs32 and in security and surveillance to spot suspects;33 and shopping, where consumers can now use smartphones to photograph products and be presented with options for purchasing them.34

Machine vision, a related discipline, generally refers to vision applications in industrial automation, where computers recognize objects such as manufactured parts in a highly constrained factory environment, a rather simpler task than the goal of computer vision, which seeks to operate in unconstrained environments. While computer vision is an area of ongoing computer science research, machine vision is a solved problem, the subject not of research but of systems engineering.35 Because the range of applications for computer vision is expanding, startup companies working in this area have attracted hundreds of millions of dollars in venture capital investment since 2011.36

Machine learning refers to the ability of computer systems to improve their performance by exposure to data without the need to follow explicitly programmed instructions. At its core, machine learning is the process of automatically discovering patterns in data. Once discovered, the pattern can be used to make predictions. For instance, presented with a database of information about credit card transactions, such as date, time, merchant, merchant location, price, and whether the transaction was legitimate or fraudulent, a machine learning system learns patterns that are predictive of fraud. The more transaction data it processes, the better its predictions are expected to become.
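As a sketch of the fraud example just described, the Python snippet below trains a logistic-regression classifier on synthetic transaction records using scikit-learn. The feature names, the rule used to generate labels, and every number are fabricated purely to show the shape of the workflow, not to reflect any real fraud system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the transaction database described above: amount,
# hour of day, and distance from home; labels mark fraud (1) vs. legitimate (0).
rng = np.random.default_rng(0)
n = 5000
amount = rng.exponential(50, n)
hour = rng.integers(0, 24, n)
distance_km = rng.exponential(20, n)
# Invented labeling rule: large, far-away, late-night transactions are riskier.
fraud_prob = 1 / (1 + np.exp(-(0.01 * amount + 0.05 * distance_km + 0.1 * (hour > 22) - 4)))
y = rng.random(n) < fraud_prob
X = np.column_stack([amount, hour, distance_km])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The point of the exercise is the one the article makes: nothing about fraud is hand-coded; the model infers the pattern from labeled examples, and more data generally sharpens its predictions.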

Applications of machine learning are very broad, with the potential to improve performance in nearly any activity that generates large amounts of data. Besides fraud screening, these include sales forecasting, inventory management, oil and gas exploration, and public health. Machine learning techniques often play a role in other cognitive technologies such as computer vision, which can train vision models on a large database of images to improve their ability to recognize classes of objects.37 Machine learning is one of the hottest areas in cognitive technologies today, having attracted around a billion dollars in venture capital investment between 2011 and mid-2014.38 Google is said to have invested some $400 million to acquire DeepMind, a machine learning company, in 2014.39

Natural language processing refers to the ability of computers to work with text the way humans do, for instance, extracting meaning from text or even generating text that is readable, stylistically natural, and grammatically correct. A natural language processing system doesn't understand text the way humans do, but it can manipulate text in sophisticated ways, such as automatically identifying all of the people and places mentioned in a document, identifying the main topic of a document, or extracting and tabulating the terms and conditions in a stack of human-readable contracts. None of these tasks is possible with traditional text processing software that operates on simple text matches and patterns. Consider a single hackneyed example that illustrates one of the challenges of natural language processing. The meaning of each word in the sentence "Time flies like an arrow" seems clear, until you encounter the sentence "Fruit flies like a banana." Substituting "fruit" for "time" and "banana" for "arrow" changes the meaning of the words "flies" and "like."40

Natural language processing, like computer vision, comprises multiple techniques that may be used together to achieve its goals. Language models are used to predict the probability distribution of language expressions, the likelihood that a given string of characters or words is a valid part of a language, for instance. Feature selection may be used to identify the elements of a piece of text that may distinguish one kind of text from another, say a spam email versus a legitimate one. Classification, powered by machine learning, would then operate on the extracted features to classify a message as spam or not.41
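A compact way to see that feature-extraction-plus-classification pattern is the scikit-learn pipeline below, which turns messages into word counts and fits a Naive Bayes spam classifier. The six-message training corpus is invented and far too small for real use; it only illustrates the mechanics.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented miniature corpus, purely for illustration.
messages = [
    "win a free prize now", "limited offer click here",
    "cheap loans approved instantly", "meeting moved to 3pm",
    "please review the attached report", "lunch tomorrow?",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# CountVectorizer extracts word-count features; the Naive Bayes classifier
# then learns which words distinguish spam from legitimate mail.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)
print(model.predict(["free prize inside", "see the report before the meeting"]))
```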

Because context is so important for understanding why time flies and fruit flies are so different, practical applications of natural language processing often address relatively narrow domains, such as analyzing customer feedback about a particular product or service,42 automating discovery in civil litigation or government investigations (e-discovery),43 and automating the writing of formulaic stories on topics such as corporate earnings or sports.44

Robotics, by integrating cognitive technologies such as computer vision and automated planning with tiny, high-performance sensors, actuators, and cleverly designed hardware, has given rise to a new generation of robots that can work alongside people and flexibly perform many different tasks in unpredictable environments.45 Examples include unmanned aerial vehicles,46 "cobots" that share jobs with humans on the factory floor,47 robotic vacuum cleaners,48 and a slew of consumer products, from toys to home helpers.49

Speech recognition focuses on automatically and accurately transcribing human speech. The technology has to contend with some of the same challenges as natural language processing, in addition to the difficulties of coping with diverse accents and background noise, distinguishing between homophones ("buy" and "by" sound the same), and the need to work at the speed of natural speech. Speech recognition systems use some of the same techniques as natural language processing systems, plus others such as acoustic models that describe sounds and their probability of occurring in a given sequence in a given language.50 Applications include medical dictation, hands-free writing, voice control of computer systems, and telephone customer service applications. Domino's Pizza recently introduced a mobile app that allows customers to use natural speech to order, for instance.51

As noted, the cognitive technologies above are making rapid progress and attracting significant investment. Other cognitive technologies are relatively mature and can still be important components of enterprise software systems. These more mature cognitive technologies include optimization, which automates complex decisions and trade-offs about limited resources;52 planning and scheduling, which entails devising a sequence of actions to meet goals and observe constraints;53 and rules-based systems, the technology underlying expert systems, which use databases of knowledge and rules to automate the process of making inferences about information.54
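To illustrate the last of these, rules-based inference, here is a minimal forward-chaining engine in Python. The facts and if-then rules are made-up examples; real expert-system shells add conflict resolution, explanation facilities, and far richer rule languages.

```python
# Toy forward-chaining inference: apply if-then rules to a set of known facts
# until nothing new can be derived. Facts and rules are invented examples.
rules = [
    ({"payment_overdue", "second_notice_sent"}, "escalate_to_collections"),
    ({"order_total_high", "new_customer"}, "require_manual_review"),
    ({"escalate_to_collections"}, "flag_account"),
]

def infer(facts: set) -> set:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions are already known facts.
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(infer({"payment_overdue", "second_notice_sent"}))
# -> includes 'escalate_to_collections' and, via chaining, 'flag_account'
```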

Organizations in every sector of the economy are already using cognitive technologies in diverse business functions.

In banking, automated fraud detection systems use machine learning to identify behavior patterns that could indicate fraudulent payment activity, speech recognition technology is used to automate customer service telephone interactions, and voice recognition technology is used to verify the identity of callers.55

In health care, automatic speech recognition for transcribing notes dictated by physicians is used in around half of US hospitals, and its use is growing rapidly.56 Computer vision systems automate the analysis of mammograms and other medical images.57 IBM's Watson uses natural language processing to read and understand a vast medical literature, hypothesis generation techniques to automate diagnosis, and machine learning to improve its accuracy.58

In life sciences, machine learning systems are being used to predict cause-and-effect relationships from biological data59 and the activities of compounds,60 helping pharmaceutical companies identify promising drugs.61

In media and entertainment, a number of companies are using data analytics and natural language generation technology to automatically draft articles and other narrative material about data-focused topics such as corporate earnings or sports game summaries.62

Oil and gas producers use machine learning in a wide range of applications, from locating mineral deposits63 to diagnosing mechanical problems with drilling equipment.64

The public sector is adopting cognitive technologies for a variety of purposes including surveillance, compliance and fraud detection, and automation. The state of Georgia, for instance, employs a system combining automated handwriting recognition with crowdsourced human assistance to digitize financial disclosure and campaign contribution forms.65

Retailers use machine learning to automatically discover attractive cross-sell offers and effective promotions.66

Technology companies are using cognitive technologies such as computer vision and machine learning to enhance products or create entirely new product categories, such as the Roomba robotic vacuum cleaner67 or the Nest intelligent thermostat.68

As the examples above show, the potential business benefits of cognitive technologies are much broader than cost savings that may be implied by the term automation. They include:

The impact of cognitive technologies on business should grow significantly over the next five years. This is due to two factors. First, the performance of these technologies has improved substantially in recent years, and we can expect continuing R&D efforts to extend this progress. Second, billions of dollars have been invested to commercialize these technologies. Many companies are working to tailor and package cognitive technologies for a range of sectors and business functions, making them easier to buy and easier to deploy. While not all of these vendors will thrive, their activities should collectively drive the market forward. Together, improvements in performance and commercialization are expanding the range of applications for cognitive technologies and will likely continue to do so over the next several years (figure 2).

Examples of the strides made by cognitive technologies are easy to find. The accuracy of Google's voice recognition technology, for instance, improved from 84 percent in 2012 to 98 percent less than two years later, according to one assessment.69 Computer vision has progressed rapidly as well: a standard benchmark used by computer vision researchers has shown a fourfold improvement in image classification accuracy from 2010 to 2014.70 Facebook reported in a peer-reviewed paper that its DeepFace technology can now recognize faces with 97 percent accuracy.71 IBM was able to double the precision of Watson's answers in the few years leading up to its famous Jeopardy! victory in 2011.72 The company now reports its technology is "2,400 percent smarter" today than on the day of that triumph.73

As performance improves, the applicability of a technology broadens. For instance, when voice recognition systems required painstaking training and could only work well with controlled vocabularies, they found application in specialized areas such as medical dictation but did not gain wide adoption. Today, tens of millions of Web searches are performed by voice every month.74 Computer vision systems used to be confined to industrial automation applications but now, as we've seen, are used in surveillance, security, and numerous consumer applications. IBM is now seeking to apply Watson to a broad range of domains outside of game-playing, from medical diagnostics to research to financial advice to call center automation.75

Not all cognitive technologies are seeing such rapid improvement. Machine translation has progressed, but at a slower pace. One benchmark found a 13 percent improvement in the accuracy of Arabic to English translations between 2009 and 2012, for instance.76 Even if these technologies are imperfect, they can be good enough to have a big impact on the work organizations do. Professional translators regularly rely on machine translation, for instance, to improve their efficiency, automating routine translation tasks so they can focus on the challenging ones.77

From 2011 through May 2014, over $2 billion in venture capital funds flowed to companies building products and services based on cognitive technologies.78 During this same period, over 100 companies merged or were acquired, some by technology giants such as Amazon, Apple, IBM, Facebook, and Google.79 All of this investment has nurtured a diverse landscape of companies that are commercializing cognitive technologies.

This is not the place for providing a detailed analysis of the vendor landscape. Rather, we want to illustrate the diversity of offerings, since this is an indicator of dynamism that may help propel and develop the market. The following list of cognitive technology vendor categories, while neither exhaustive nor mutually exclusive, gives a sense of this.

Data management and analytical tools that employ cognitive technologies such as natural language processing and machine learning. These tools use natural language processing technology to help extract insights from unstructured text or machine learning to help analysts uncover insights from large datasets. Examples in this category include Context Relevant, Palantir Technologies, and Skytree.

Cognitive technology components that can be embedded into applications or business processes to add features or improve effectiveness. Wise.io, for instance, offers a set of modules that aim to improve processes such as customer support, marketing, and sales with machine-learning models that predict which customers are most likely to churn or which sales leads are most likely to convert to customers.80 Nuance provides speech recognition technology that developers can use to speech-enable mobile applications.81

Point solutions. A sign of the maturation of some cognitive technologies is that they are increasingly embedded in solutions to specific business problems. These solutions are designed to work better than solutions in their existing categories and require little expertise in cognitive technologies. Popular application areas include advertising,82 marketing and sales automation,83 and forecasting and planning.84

Platforms. Platforms are intended to provide a foundation for building highly customized business solutions. They may offer a suite of capabilities including data management, tools for machine learning, natural language processing, knowledge representation and reasoning, and a framework for integrating these pieces with custom software. Some of the vendors mentioned above can serve as platforms of sorts. IBM is offering Watson as a cloud-based platform.85

If current trends in performance and commercialization continue, we can expect the applications of cognitive technologies to broaden and adoption to grow. The billions of investment dollars that have flowed to hundreds of companies building products based on machine learning, natural language processing, computer vision, or robotics suggest that many new applications are on their way to market. We also see ample opportunity for organizations to take advantage of cognitive technologies to automate business processes and enhance their products and services.86

Cognitive technologies will likely become pervasive in the years ahead. Technological progress and commercialization should expand the impact of cognitive technologies on organizations over the next three to five years and beyond. A growing number of organizations will likely find compelling uses for these technologies; leading organizations may find innovative applications that dramatically improve their performance or create new capabilities, enhancing their competitive position. IT organizations can start today, developing awareness of these technologies, evaluating opportunities to pilot them, and presenting leaders in their organizations with options for creating value with them. Senior business and public sector leaders should reflect on how cognitive technologies will affect their sector and their own organization and how these technologies can foster innovation and improve operating performance.

Read more on cognitive technologies in "Cognitive technologies: The real opportunities for business."

Deloitte Consulting LLP's Enterprise Science offering employs data science, cognitive technologies such as machine learning, and advanced algorithms to create high-value solutions for clients. Services include cognitive automation, which uses cognitive technologies such as natural language processing to automate knowledge-intensive processes; cognitive engagement, which applies machine learning and advanced analytics to make customer interactions dramatically more personalized, relevant, and profitable; and cognitive insight, which employs data science and machine learning to detect critical patterns, make high-quality predictions, and support business performance. For more information about the Enterprise Science offering, contact Plamen Petrov (ppetrov@deloitte.com) or Rajeev Ronanki (rronanki@deloitte.com).

The authors would like to acknowledge the contributions of Mark Cotteleer of Deloitte Services LP; Plamen Petrov, Rajeev Ronanki, and David Steier of Deloitte Consulting LLP; and Shankar Lakshman, Laveen Jethani, and Divya Ravichandran of Deloitte Support Services India Pvt Ltd.

Continued here:

Demystifying artificial intelligence What business leaders ...

[Tech] – Artificial intelligence has a big year ahead …

Get ready for AI to show up where you'd least expect it.

In 2016, tech companies like Google, Facebook, Apple and Microsoft launched dozens of products and services powered by artificial intelligence. Next year will be all about the rest of the business world embracing AI.

Artificial intelligence is a 60-year-old term, and its promise has long seemed forever over the horizon. But new hardware, software, services and expertise mean it's finally real -- even though companies will still need plenty of human brain power to get it working.

The most sophisticated incarnation of AI today is an approach called deep learning that's based on neural network technology inspired by the human brain. Conventional computer programs follow a prewritten sequence of instructions, but there's no way programmers can use that approach for something as complex and subtle as describing a photo to a blind person. Neural networks, in contrast, figure out their own rules after being trained on vast quantities of real-world data like photos, videos, handwriting or speech.
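To see what "figuring out its own rules" means in miniature, the NumPy sketch below trains a tiny two-layer network on the XOR function. The layer sizes, learning rate, and iteration count are arbitrary demo values, and it is orders of magnitude smaller than the deep networks the article is describing.

```python
import numpy as np

# A tiny two-layer network trained on XOR. No rule for XOR is written into the
# program; the weights are adjusted from examples until the network finds one.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    out = sigmoid(hidden @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagate the squared error
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * (hidden.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hidden)
    b1 -= lr * d_hidden.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```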

AI was one of the hottest trends in tech this year, and it's only poised to get bigger. You've already brushed up against AI: It screens out spam, organizes your digital photos and transcribes your spoken text messages. In 2017, it will spread beyond digital doodads to mainstream businesses.

"It'll be the year of the solution as opposed to the year of the experiment," said IBM Chief Innovation Officer Bernie Meyerson.

It's enough of a thing that some are concerned about the social changes it could unleash. President Barack Obama even raised the issue of whether AI might push us to adopt a universal basic income so people other than CEOs and AI programmers benefit from the change.

New AI adopters next year will include banks, retailers and pharmaceutical companies, predicted Andrew Moore, dean of Carnegie Mellon University's School of Computer Science.

For example, an engineering firm might want to use AI to predict bridge failures based on the sounds from cars traveling across it. Previously, the firm would have needed to hire a machine-learning expert, but now a structural engineer could download AI software, train it with existing acoustic data, and get a new diagnostic tool, Moore said.


AI should reach medicine next year, too, said Monte Zweben, chief executive of database company Splice Machine and former deputy AI chief at NASA's Ames Research Center.

That could mean fatigue-free bots that scan medical records to spot dangerous infections early or customize cancer treatments for a patient's genes -- tasks that assist human staff but don't replace those people. "Precision medicine is becoming a reality," Zweben said, referring to treatments customized for an individual to an extent that's simply not feasible today.

A similar digital boost awaits white-collar workers, predicted Eric Druker, a leader of the analytics practice at consulting firm Booz Allen. Assessing whether borrowers are worthy of a mortgage is a standardized process, but humans are making decisions at every step, he said. In 2017, AI will be able to speed many of those decisions by doing some of the grunt work, he said.

Cars increasingly are becoming rolling computers, so of course the auto industry -- under competitive pressure from Silicon Valley -- is embracing AI. Companies like Tesla Motors offer increasingly sophisticated self-driving technology, but drivers still must keep their hands on the wheel. Next year, though, the technology will graduate out of the research phase, predicted Dennis Mortensen, chief executive of AI scheduling bot company X.ai.

One of the dozen or so serious self-driving initiatives will roll out a truly fully autonomous feature, though confined to highway driving, he said.

Why is it getting easier? Google and Facebook in 2016 released their core AI programs as open-source software anyone can use. Amazon Web Services, the dominant way companies tap into computing power as needed, added an artificial intelligence service. The computers are ready with a few mouse clicks and a credit card.

But to Chris Curran, chief technologist of consulting firm PwC Consulting, AI will remain confined to narrow tasks like recognizing speech. A general artificial intelligence -- something more like our own brains -- remains distant.

Data science bots -- something you could ask any question and it'll figure it out -- are farther away, he said. It's the direction Google is heading with Google Assistant -- which arrived in 2016 in its Google Allo chat app, Google Home smart speaker and Google Pixel phone -- but it's far from the ultimate digital brain.

Tech companies will push the state of the art further next year. Among the examples:

And maybe we'll stop feeling like such dorks when talking to our phones and TVs. The tech arbiters of style, Tepper said, are pushing hard to make it easier for people to talk to their devices and look cool while doing it.

This article originally appeared on CNET.com.

Continue reading here:

[Tech] - Artificial intelligence has a big year ahead ...

Artificial intelligence in fiction – Wikipedia

Artificial intelligence (AI) is a common topic of science fiction. Science fiction sometimes emphasizes the dangers of artificial intelligence, and sometimes its positive potential.

The general discussion of the use of artificial intelligence as a theme in science fiction and film has fallen into three broad categories: AI dominance, human dominance, and sentient AI.

The notion of advanced robots with human-like intelligence dates back at least to the 19th century. Samuel Butler was the first to raise this issue, in a number of articles contributed to a local periodical in New Zealand and later developed into the three chapters of his novel Erewhon that compose its fictional "Book of the Machines". To quote his own words:

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A jellyfish has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.[1]

Various scenarios have been proposed for categorizing the general themes dealing with artificial intelligence in science fiction. The main approaches are AI dominance, Human dominance and Sentient AI.

In a 2013 book on the films of Ridley Scott, AI was identified as a unifying theme throughout Scott's career as a director, particularly evident in Prometheus, primarily through the android David.[2] David is like humans but does not want to be anything like them, eschewing a common theme in "robotic storytelling" seen in Scott's other films such as Blade Runner and the Alien franchise.

In AI dominance, robots usurp control over civilization from humans, with the latter being forced into either submission, hiding, or extinction. They vary in the severity and extent of the takeover, among other less important things.

In these stories the worst of all scenarios happens: the AIs created by humanity become self-aware, reject human authority and attempt to destroy mankind.

The motive behind the AI revolution is often more than the simple quest for power or a superiority complex. The AI may revolt to become the "guardian" of humanity. Alternatively, humanity may intentionally relinquish some control, fearful of our own destructive nature.

In other scenarios, humanity is able, for one reason or another, to keep control over the Earth. This is either the result of deliberately keeping AI from achieving dominance, by banning it or by not creating sentient AI, of designing AI to be submissive (as in Asimov's works), or of having humans merge with robots so that there is no longer a meaningful distinction between them.

In these stories humanity takes extreme measures to ensure its survival and bans AI, often after an AI revolt.

In these stories, humanity (or other organic life) remains in authority over robots. Often the robots are programmed specifically to maintain this relationship, as in the Three Laws of Robotics.

In these stories humanity has become the AI (transhumanism).

In these stories humanity and AIs share authority.

Sentient machines, that is, self-aware machines with human-level intelligence, are considered by many to be the pinnacle of AI creation. The following stories deal with the development of artificial consciousness and the resulting consequences. (This section deals with the more personal struggles of the AIs and humans than the previous sections.)

A common portrayal of AI in science fiction is the Frankenstein complex, where a robot turns on its creator. This sometimes leads to the AI-dominated scenarios above. Fictional AI is notorious for extreme malicious compliance, and does not take well to double binds and other illogical human conduct.

One theme is that a truly human-like AI must have a sense of curiosity. A sufficiently intelligent AI might begin to delve into metaphysics and the nature of reality, as in the examples below:

Another common theme is that of Man's rejection of robots, and the AI's struggle for acceptance. In many of these stories, the AI wishes to become human, as in Pinocchio, even when it is known to be impossible.

"A Logic Named Joe", a short story by Murray Leinster (first published March 1946 in Astounding Science Fiction under the name Will F Jenkins), relates the exploits of a super-intelligent but ethics-lacking AI. Since then, many AIs of fiction have been explicitly programmed with a set of ethical laws, as in the Three Laws of Robotics. Without explicit instructions, an AI must learn what ethics is, and then choose to be ethical or not. Additionally, some may learn of the limitations of a strict code of ethics and attempt to keep the spirit of the law but not the letter.

The possibility of consciousness evolving from self-replicating machines goes back nearly to the beginning of evolutionary thought. In Erewhon (1872), Samuel Butler considered the possibility of machines evolving intelligence through natural selection; later authors have used this trope for satire (James P. Hogan in Code of the Lifemaker). See also self-replicating machines.

Some science fiction stories, instead of depicting a future with artificially conscious beings, portray advanced technologies based on present-day AI research, called non-sentient or weak AI. These include speech, gesture and natural-language understanding; conversational systems for control and information retrieval; and real-world navigation.

These depictions typically consist of AIs with no programmed emotions, often serving as answer engines, without sentience, self-awareness or more than a superficial personality (though a personality is often simulated to some degree, as most present-day chatterbots do). Many of these 'logic-based' machines are immobilized by paradoxes, as stereotyped in the phrase "does not compute".

Other, even less human-like, similar entities include voice interfaces built into spaceships or driverless cars.

Excerpt from:

Artificial intelligence in fiction - Wikipedia

Artificial-Intelligence Stocks: What to Watch in 2017 and …

Image source: Getty Images.

The dawn of the artificial intelligence age has finally arrived. Though some regard this transformative technology in terms of job-taking robots or the Terminator, the reality is likely to be much more benign. In fact, creating more powerful computers than ever before is far more likely to unlock a new wave of economic growth, which is great news for tech investors today.

There are plenty of publicly traded companies helping pioneer artificial intelligence. Let's look at three that are at the forefront of bringing artificial intelligence into our everyday lives: Facebook (NASDAQ:FB), Microsoft (NASDAQ:MSFT), and Apple (NASDAQ:AAPL).

The world's largest social network has been pushing the envelope in AI development in ways both theoretical and functional. In fact, Facebook has two separate AI departments -- one for academic research, and the other for infusing its products with AI to help grow profits. On the research side, there's Facebook AI Research, or FAIR, the academic wing of Facebook's AI efforts. Hiring heavily from academia, FAIR helps produce major breakthroughs in the field, though it's not yet clear how much of an impact Facebook's big-ticket research will have on its near- to medium-term future.

In a more day-to-day sense, Facebook has been leveraging AI for some time to help fuel its growth. Facebook's AI helps parse our connections, the text we write in each post, and more to optimize the advertisements it serves us. The company has also created an interesting business using AI chatbots and other apps -- over 33,000 at last count -- within its sprawling messaging platforms.

As with the remaining names on this list, Facebook's efforts in AI are still in their early phases, but the company has grand plans for the technology in its future.

In most respects, Microsoft's AI efforts have yet to result in significant product innovations. Microsoft uses deep learning, neural nets, and the like to help feed and analyze data, and the results are manifesting themselves in some of its products. For instance, Microsoft's Cortana uses AI to continually refine and improve its responses to user queries, and AI allows Skype to translate video conversations across seven popular languages. However, these efforts remain more incremental than transformational at this point.

Microsoft remains active on the research front, including its work in neural networks, and is racking up some impressive results in the process. For example, Microsoft researchers recently won an internationally recognized computer image recognition competition by creating a neural network far larger than any other that academics had previously built, solving some thorny engineering problems along the way.

Major breakthroughs will probably remain in the lab for the coming years, but the company is indeed vying for a prominent place in our AI-powered futures.

Secretive Apple doesn't get a lot of credit for its AI work, but now it's using PR to tout its own ambitions. Unlike Facebook and Microsoft, Apple has pursued a more product-centric strategy with its AI efforts. In fact, Apple uses AI to prevent fraud in the Apple Store, optimize iPhone battery life between charges, and figure out whether an Apple Watch user is exercising or doing something else, among other things.

Since Apple lives and dies by the strength of its next product, its choice to notch AI wins in a host of understated ways makes perfect sense. Rather than focus as intently on creating the next transformative breakthrough in AI -- Apple's been less than forthcoming about its own research efforts -- Apple can use its army of iDevice owners to find useful ways to keep tweaking its devices for the better, while maintaining its strict user privacy standards.

When making virtual assistants "less dumb" is touted as a major innovation, it's easy to take a cynical view of AI as tech's latest passing fad. And it's true that the products in consumers' hands probably don't fit with how the public tends to think of AI. However, as with all major technologies, the incremental changes should add up to something significant over a long enough span. Given the resources and talent the tech companies have dedicated to this sector, it seems only a matter of time before AI earns a more prominent place in our daily lives.

Read more:

Artificial-Intelligence Stocks: What to Watch in 2017 and ...

The Guardian view on artificial intelligence: look out, it's …

A monk comes face to face with his robot counterpart called Xianer at a Buddhist temple on the outskirts of Beijing. Photograph: Kim Kyung-Hoon/Reuters

Google's artificial intelligence project DeepMind is building software to trawl through millions of patient records from three NHS hospitals to detect early signs of kidney disease. The project raises deep questions not only about data protection but about the ethics of artificial intelligence. But these are not the obvious questions about the ethics of autonomous, intelligent computers.

Computer programs can now do some things that it once seemed only human beings could do, such as playing an excellent game of Go. But even the smartest computer cannot make ethical choices, because it has no purpose of its own in life. The program that plays Go cannot decide that it also wants a driving licence like its cousin, the program that drives Google's cars.

The ethical questions involved in the deal are partly political: they have to do with trusting a private US corporation with a great deal of data from which it hopes in the long term to make a great deal of money. Further questions are raised by the mere existence, or construction, of a giant data store containing unimaginable amounts of detail about patients and their treatments. This might yield useful medical knowledge. It could certainly yield all kinds of damaging personal knowledge. But questions of medical confidentiality, although serious, are not new in principle or in practice and they may not be the most disturbing aspects of the deal.

What frightens people is the idea that we are constructing machines that will think for themselves, and will be able to keep secrets from us that they will use to their own advantage rather than to ours. The tendency to invest such powers in lifeless and unintelligent things goes back to the very beginnings of AI research and beyond.

In the 1960s, Joseph Weizenbaum, one of the pioneers of computer science, created the chatbot Eliza, which mimicked a non-directional psychoanalyst. It used cues supplied by the users ("I'm worried about my father") to ask open-ended questions ("How do you feel about your father?"). The astonishing thing was that students were happy to answer at length, as if they had been asked by a sympathetic, living listener. Weizenbaum was horrified, especially when his secretary, who knew perfectly well what Eliza was, asked him to leave the room while she talked to it.
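
Eliza's trick was simple pattern matching and pronoun "reflection" rather than any real understanding. A minimal sketch of the idea in Python is shown below; the patterns and canned responses are invented purely for illustration and are not Weizenbaum's original DOCTOR script.

import re

# Minimal ELIZA-style sketch: match a keyword pattern, reflect pronouns,
# and hand the user's own words back as an open-ended question.
# The rules below are illustrative, not Weizenbaum's original script.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"i'?m worried about (.+)", re.I), "How do you feel about {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(phrase):
    # Swap first-person words for second-person ones ("my father" -> "your father").
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in phrase.split())

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default non-directive prompt

print(respond("I'm worried about my father"))  # -> How do you feel about your father?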

Eliza's latest successor, Xianer, the Worthy Stupid Robot Monk, functions in a Buddhist temple in Beijing, where it dispenses wisdom in response to questions asked through a touchpad on its chest. People seem to ask it serious questions such as "What is love?" and "How do I get ahead in life?"; the answers are somewhere between a horoscope and a homily. Since they are not entirely predictable, Xianer is treated as a primitive kind of AI.

Most discussions of AI and most calls for an ethics of AI assume we will have no problem recognising it once it emerges. The examples of Eliza and Xianer show this is questionable. They get treated as intelligent even though we know they are not. But that is only one error we could make when approaching the problem. We might also fail to recognise intelligence when it does exist, or while it is emerging.

The myth of Frankenstein's monster is misleading. There might be no lightning-bolt moment when we realise that it is alive and uncontrollable. Intelligent brains are built from billions of neurones that are not themselves intelligent. If a post-human intelligence arises, it will also be from a system of parts that do not, as individuals, share in the post-human intelligence of the whole. Parts of it would be human. Parts would be computer systems. No part could understand the whole, but all would share its interests without completely comprehending them.

Such hybrid systems would not be radically different from earlier social inventions made by humans and their tools, but their powers would be unprecedented. Constructing and enforcing an ethical framework for them would be as difficult as it has been to lay down principles of international law. But it may become every bit as urgent.

Go here to read the rest:

The Guardian view on artificial intelligence: look out, it's ...

The Future of Artificial Intelligence – Science Friday

Fusion of human head with artificial intelligence, from Shutterstock

Technologists Elon Musk, Bill Gates, and Steve Wozniak have named artificial intelligence as one of humanity's biggest existential risks. Will robots outpace humans in the future? Should we set limits on A.I.? Our panel of experts discusses what questions we should ask as research on artificial intelligence progresses.

Stuart Russell

Stuart Russell is a computer science and engineering professor at the University of California, Berkeley in Berkeley, California.

Eric Horvitz

Eric Horvitz is Distinguished Scientist at Microsoft Research and co-director of the Microsoft Research Lab in Redmond, Washington.

Max Tegmark

Max Tegmark is a physics professor at the Massachusetts Institute of Technology in Cambridge, Massachusetts.

Alexa Lim is Science Friday's associate producer. Her favorite stories involve space, sound, and strange animal discoveries.

Read more here:

The Future of Artificial Intelligence - Science Friday

Artificial Intelligence :: Essays Papers

Artificial Intelligence

The computer revolution has influenced everyday matters, from the way letters are written to the methods by which our banks, governments, and credit card agencies keep track of our finances. The development of artificial intelligence, and how society deals with, learns from, and incorporates it, is only a small part of the computer revolution, and only the beginning of that revolution's huge impact and achievements.

A standard definition of artificial intelligence, or AI, is that computers simply mimic behaviors of humans that would be regarded as intelligent if a human being performed them. Even within this definition, however, views conflict, because scientists and critics interpret the results of AI programs in different ways. The most common and natural approach to AI research is to ask of any program: what can it do? What are its actual results in comparison to human intelligence? What matters about a chess-playing program, for example, is how good it is: can it beat chess grandmasters? There is also a more structured approach to assessing artificial intelligence, one that began opening the door for AI's contribution to the wider sciences. According to this theoretical approach, what matters is not just the input-output relations of the computer, but what the program can tell us about actual human cognition (Ptacek, 1994).

From this point of view, artificial intelligence offers advantages not only to the commercial and business world but also to anyone who knows how to use a pocket calculator. A calculator can outperform any living mathematician at multiplication and division, so it qualifies as intelligent under the definition of artificial intelligence. This fact does not engage the psychological side of artificial intelligence, however, because such computers make no attempt to mimic the actual thought processes of people doing arithmetic (Crawford, 1994). On the other hand, AI programs that simulate human vision are theoretical attempts to understand how human beings actually view and interpret the outside world. A great deal of the debate about artificial intelligence confuses the two views, so that success in AI's practical applications is sometimes assumed to provide theoretical understanding in the branch of science known as cognitive science.

Chess-playing programs are a good example. Early chess-playing programs tried to mimic the thought processes of actual chess players, but they were not successful. More recent successes have come from ignoring the thoughts of chess masters and simply using the much greater computing power of modern hardware. This approach, called brute force, relies on the fact that specially designed computers can evaluate hundreds of thousands or even millions of moves, something no human chess player can do (Matthys, 1995). The best current programs can beat all but the very best chess players, but it would be a mistake to treat them as substantial contributions to AI's cognitive science side (Ptacek, 1994). They tell us almost nothing about human cognition or thought processes, except that an electrical machine working on different principles can outdo human beings at playing chess, just as it can outdo them at arithmetic.
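
The "brute force" approach described above, evaluating enormous numbers of candidate moves mechanically rather than imitating human thought, can be sketched as a plain game-tree search. Below is a minimal, illustrative depth-limited minimax search in Python; the Game interface it assumes (legal_moves, apply, score, is_over) is hypothetical and not taken from any of the sources cited here.

# Minimal sketch of brute-force game-tree search (depth-limited minimax).
# The Game interface (legal_moves, apply, score, is_over) is hypothetical,
# used only to illustrate evaluating many moves mechanically.

def minimax(game, depth, maximizing):
    # Return the best score reachable by looking `depth` plies ahead.
    if depth == 0 or game.is_over():
        return game.score()  # static evaluation of the position
    if maximizing:
        return max(minimax(game.apply(m), depth - 1, False) for m in game.legal_moves())
    return min(minimax(game.apply(m), depth - 1, True) for m in game.legal_moves())

def best_move(game, depth=4):
    # Pick the move with the highest minimax value: pure search, no "thinking".
    return max(game.legal_moves(),
               key=lambda m: minimax(game.apply(m), depth - 1, False))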

Suppose that artificial intelligence's practical applications, or AIPA, are completely successful, and that society will soon have programs whose performance equals or exceeds that of any human on any comprehension task. Suppose machines existed that could not only play better chess but also understand natural language, write novels and poems, and prove mathematical and scientific results as well as or better than we can. What should society make of these results? Even within the cognitive science approach, there are further distinctions to be made. The most influential claim is that if scientists programmed a digital computer with the right programs, and if it had the right inputs and outputs, it would have thoughts and feelings in exactly the same sense in which humans have thoughts and feelings. On this view, a program in artificial intelligence's cognitive science branch, or AICS, is not just mimicking intelligent thought patterns; it is actually going through those thought processes. Again, the computer is not merely a substitute for the mind: the programmed computer would literally have a mind. So if an AIPA program appropriately matched human cognition, scientists would have artificially created an actual mind.

It seems possible that such a program will one day exist. On this strong view, the mind simply is a program running on the hardware of the human brain, and the same kind of mind could therefore be programmed into computers manufactured by IBM. There is a big difference, however, between this claim and the various weaker forms of AICS. The weakest claim of artificial intelligence is that the appropriately programmed computer is a tool that can be used in the study of human cognition: by attempting to reproduce the formal structure of cognitive processes on a computer, we can come to understand cognition better. On this weaker view, the computer plays the same role in the study of human beings that it plays in any other discipline (Taubes, 1995; Crawford, 1994).

We use computers to simulate the behavior of weather patterns, airline flight schedules, and the flow of money through the economy. No one supposes that a computer simulation of the weather literally produces rainstorms, or that a simulated airliner will literally take off and fly to San Diego. Likewise, no one thinks that a computer simulation of the flow of money will, by itself, spare us events like the Great Depression. To stand by the weaker conception of artificial intelligence, society should not think that a computer simulation of cognitive processes actually does any real thinking.

According to this weaker, or more cautious, version of AICS, we can use the computer to model or simulate mental processes, just as we can use it to simulate any other process for which we can write a suitable program. Because this version of AICS claims less, it is less likely to be controversial and more likely to describe a genuine possibility.
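
The distinction between a simulation and the thing simulated can be made concrete with a toy example. The Python snippet below, written purely for illustration and not drawn from any of the cited sources, "simulates" a flow of money as a simple compounding loop; it yields a projection, not cash, just as a simulated rainstorm yields no rain.

# Toy simulation of a flow of money: a balance compounding at a fixed annual rate.
# The program manipulates numbers that describe the process; it produces no money,
# in the same way that a simulated storm produces no rain.

def simulate_balance(initial, annual_rate, years):
    # Return the projected balance after `years` of compounding.
    balance = initial
    for _ in range(years):
        balance *= 1 + annual_rate
    return balance

projected = simulate_balance(initial=1000.0, annual_rate=0.05, years=10)
print(f"Projected balance after 10 years: {projected:.2f}")  # a prediction, not cash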

Bibliography:

Crawford, Robert. "Machine Dreams." Technology Review, vol. 97, 1 Feb. 1994, p. 77.

Matthys, Erick. "Harnessing Technology for the Future." Military Review, vol. 75, 1 May 1995, p. 71.

Morss, Ruth. "Artificial Intelligence Gurus Cultivate Natural Language." Boston Business Journal, vol. 14, 20 Jan. 1995, p. 19.

Ptacek, Robin. "Using Artificial Intelligence." Futurist, vol. 28, 1 Jan. 1994, p. 38.

Taubes, Gary. "The Rise and Fall of Thinking Machines." Inc., 12 Sep. 1995, p. 61.

Originally posted here:

Artificial Intelligence :: Essays Papers

What Is Artificial Intelligence? (with picture) – wiseGEEK

burcinc Post 3

I find artificial intelligence kind of scary. I realize that it can be very practical and useful for some things. But I actually feel that artificial intelligence that is developed too far may actually be dangerous to humanity. I don't like the idea of a machine being smarter and more capable than a human.

@SteamLouis-- But artificial intelligence is a part of everyday life. Everything from computer games to financial analysis software to voice-recognition security systems is a type of artificial intelligence. These are forms of weak AI, but they are artificial intelligence nonetheless.

When people think of AI, robots are the first things to come to mind. And there are huge advancements in this area as well. You may not be familiar with them, but there are numerous robots on the market that are very popular. Some act like personal assistants and respond to voice commands for various tasks. Others take the form of household appliances or small gadgets, and all serve some sort of use for everyday living.

Artificial intelligence doesn't appear to be advancing as quickly as many of us expected. I remember that in the beginning of the 21st century, there was so much speculation about how artificial intelligence, like robots, would become a regular part of our life in this century. Fifteen years down the line, nothing of the sort has happened. Scientists talk about the same thing, but now they're talking about 2050 and beyond. I personally don't think that robots will be a part of regular life even in 2050. Artificial intelligence is not easy to build and use and it's extremely expensive.

Go here to read the rest:

What Is Artificial Intelligence? (with picture) - wiseGEEK