UK government launches artificial intelligence inquiry – CNET

Facebook showed off some artificial intelligence at its F8 event.

The United Kingdom's government has some questions about artificial intelligence.

On Wednesday, the House of Lords announced a public call for experts to weigh in on issues surrounding AI, including its ethical, economic and social effects as the technology becomes more prevalent.

When you think about all the crazy things that AI can accomplish, like a sex robot with a "brain," yeah, we've got some questions too.

AI is already poised to take over jobs, as it has for an insurance company in Japan, but Britain's Parliament has concerns from all sides. Members of Parliament want to know who AI is helping the most, who it's hurting, what role the government should play, and how AI will look in the next 20 years.

"The Committee wants to use this inquiry to understand what opportunities may exist for society in the development and use of artificial intelligence, as well as what risks there might be," Lord Clement-Jones, chairman of the committee on AI, said in a statement.

Experts can submit their testimony to the committee. The deadline for entries is Sept. 6.

Artificial Intelligence Will Widen The Gap Between Rich And Poor – Huffington Post Australia

Globally, the economic divide is growing. The rich are getting richer and the poor are getting poorer. In Australia, more than a quarter of households have recently experienced a decrease in income. The reasons for the growing economic divide are many and complex. They include factors such as job insecurity, wage cuts and underemployment.

Underemployment and unemployment are being affected by the growing use of artificial intelligence in the workplace. As technology rapidly progresses, more jobs will be affected. Technology can now perform the functions of many low-skilled workers, leaving them out of work.

On the other end of the spectrum, high-skilled workers, such as those in the engineering, legal and medical fields, are complemented by technology, and increased demand for their work has led to an increase in wages.

Machines, robots, and other forms of artificial intelligence are expected to continue to carry out increasing levels of tasks normally carried out by humans. PwC has projected that this will boost global GDP by around 14 percent by 2030.

Artificial intelligence is anticipated to contribute $15.7 trillion to the world's economy over the next decade, primarily by increasing productivity and consumption, as shoppers gain extra time to buy more, higher-quality goods and services.

Approximately 42 percent of the expected $15.7 trillion increase in the global economy is expected to be created by automated machinery in the workplace. Many are now concerned that this will lead to massive job losses, leading to extreme increases in the divide between the rich and the poor.

Machines may be able to meet and improve on human performance in many routine and entry-level jobs. Many repetitive and knowledge-based occupations will be vulnerable to extinction at the hands of systems automation, machine learning and other artificial intelligence faculties. Artificial intelligence is also likely to increase the feasibility of outsourcing offshore work.

By 2025, the number of robots worldwide is expected to quadruple. Consulting firm McKinsey believes that within the next 20 years, 45 percent of US workers are at risk of losing their jobs to automation. The World Bank believes that within that period, 57 percent of jobs in the OECD could be taken over by machines.

The Massachusetts Institute of Technology conducted a study which found that for every new robot incorporated into the US economy, employment would be reduced by 5.6 workers. This data did not even incorporate the effects of artificially-intelligent robots, meaning the scope for impact is even greater.

PwC has contended that despite the decrease of certain jobs as the result of automation and artificial intelligence, new jobs will be created: "a new set of personnel will be required to build, maintain, operate, and regulate these emerging technologies." However, there is great concern that the jobs created will not be numerous enough to offset the losses.

Furthermore, it is the particular types of jobs that will disappear that is the cause for concern. Low-skilled, entry-level, repetitious jobs will become automated. These jobs are generally those held by the lower socio-economic sector of our community. Once these jobs disappear, what will become of that group of people?

The growth of artificial intelligence in our workplaces will be a revolution of sorts, but will have a significantly different impact on workers than that of the Industrial Revolution. The Industrial Revolution replaced jobs with other jobs. This revolution simply eliminates them.

Artificial intelligence will provide great profit to companies, leaving high levels of wealth in the hands of a few, and many people without employment.

So, where will those people go? Some have suggested that service industries such as hospitality will become a popular safe haven for those pushed out of their jobs by automation. But there is a limit on how many of these jobs can exist, and how many people those industries can sustain. Others have suggested that our communities will need to become increasingly welfare-driven, incorporating volunteer work in a working-welfare model.

All of these insights indicate that we must seriously begin to consider the structure of our future economy, and begin to develop approaches to address the challenges that will arise from the deepening of the divide between the rich and the poor.

While artificial intelligence is considered to be innovative, perhaps the greatest innovation required will be in adapting to the economic factors associated with a robot revolution.

Could artificial intelligence disrupt the photography world? – TechRepublic

Scroll through some of the recent stories found on TechRepublic and you'll see the topic of artificial intelligence (AI) mentioned on several occasions. AI isn't something widely seen in action today, but the reality of its becoming more common is definitely on the lips and text editors of technologists. Can AI disrupt the world of photography? Will it eventually replace human input when it comes to processing photos? Anything is possible, but I truly doubt it.

In a recent blog post, a team at Google shared how its deep learning technology has been able to produce "professional quality" photo editing for a batch of landscape photos. In blind testing, pro photographers rated up to 40% of the images edited by AI as semi-pro or pro level quality. Quite frankly, some of the images published were quite nice, but is this enough to disrupt the world of photography? I don't think so. Disrupt the world of photography editing? Well, it could be useful, but not disruptive. Allow me to explain.

Let's think of a scenario that a photographer may face. First there's a scheduled photo shoot with a client. In general, the client will have ideas on what they're looking for in the session, and the photographer works closely with the client to meet those needs. We'll just throw headshot sessions out the window and look more at product photography or photography based on a scene in our example. Now close your eyes, be the client, and think of an ad showing a boardroom setting. In any scenario, it's up to the client and photographer to determine the mood and message they want presented in that boardroom photo shoot.

Is the message "Board meetings are serious and powerful"? Or is the message "Come together and collaborate"? Both messages can be answered from the same scene by making a few nuanced changes with lighting, the models' posture, facial expressions, and gestures, or even the props used within the scene. The client may not understand those concepts, but the photographer will. In this scenario, I can't say AI will aid in getting the client's message across. Right now, the AI used by Google isn't based on compositing or replacing props in a scene. A boardroom with a few bottles of water or cups of coffee does not give the same vibe as a boardroom with an open box of doughnuts and crumpled cans of energy drinks. AI isn't ready to replace the analytical skills a photographer brings to the set of a photo shoot.

In the editing process, the photographer and AI share the same data. If a client were to upload an image into an AI system, they could easily input specified parameters to assist in the editing process. Keywords, and maybe even a brief description of what the client is looking for, are handy data. The AI could analyze the keywords against the uploaded image, proceed with editing to fit the client's needs, and display a preview within minutes or even SECONDS. The client could then approve the image and download it for use.
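
To make that workflow concrete, here is a minimal sketch of a keyword-driven editing step. Everything in it is hypothetical: the preset table, the function names, and the grayscale pixel-list "image" are illustrative assumptions, not any real AI system's API.

```python
# Hypothetical keyword-driven editing step. The preset table, function names,
# and grayscale pixel-list "image" are illustrative assumptions only.

KEYWORD_PRESETS = {
    "bright": {"brightness": 1.3, "contrast": 1.0},
    "moody":  {"brightness": 0.8, "contrast": 1.2},
    "flat":   {"brightness": 1.0, "contrast": 0.9},
}

def parameters_for(keywords):
    """Merge presets for each recognized keyword (later keywords win)."""
    params = {"brightness": 1.0, "contrast": 1.0}
    for kw in keywords:
        params.update(KEYWORD_PRESETS.get(kw.lower(), {}))
    return params

def apply_edit(pixels, params):
    """Scale brightness, then stretch contrast around mid-gray (128)."""
    b, c = params["brightness"], params["contrast"]
    edited = []
    for p in pixels:
        p = p * b                  # brightness scaling
        p = (p - 128) * c + 128    # contrast stretch about mid-gray
        edited.append(max(0, min(255, round(p))))
    return edited

# A "moody" preview darkens shadows and boosts contrast.
preview = apply_edit([40, 128, 200], parameters_for(["moody"]))
```

A real system would of course operate on full images and learn its adjustments from data rather than a hand-written table; the point is only that client keywords can parameterize an automated edit.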

But what if the client doesn't approve?

Speaking from experience, I've edited photos for clients who didn't always agree with my post processing, especially when dealing with humans in the images. "Can you make my neck look slimmer?" "Can you remove that small mole that's under my left eye?" Those are not outlandish requests and are pretty common because most people want aesthetically superior models in their photographs. On the other hand, some individuals have taken pride in or made a name for themselves around their imperfections. Think of the former NFL player Michael Strahan. Strahan has a gap between his two front teeth. With the gazillions of dollars he's earned as a professional football player, he could easily have gotten orthodontic care to correct the gap. He didn't. How will AI photo editing handle such situations? Sure, the machine can learn to touch up skin blemishes or imperfections, but to what extent? Will the AI understand the context of the edit or the subject matter better than a human?

When I hosted a Smartphone Photographers Community, we discussed how photos that tell a story are usually the photos that capture our emotions. It may not be the photo with the best exposure or color saturation, but when you see it, you stop to admire it. For example, one of the more iconic images of US history is the raising of the US flag at Iwo Jima. This image isn't technically sound. The exposure isn't quite right and the contrast could be increased. But at the end of the day, WHO CARES? It's an awesome photo capturing an emotional moment. Who's to say that running the image through post processing wouldn't have ruined it?

I think it would be tough for AI to know when and where to draw the line when it comes to post processing photos. Some photos need human intervention in the editing process to understand the mood and message the photo is supposed to convey, not just the adjusting of exposure or white balance. If a photo is just a run-of-the-mill landscape photograph, there just may be a place for AI photo editing. But even with that said, I'd much rather lean on the professional skills of landscape photographers, such as Trey Ratcliff or Thomas Heaton, who have a way of tugging at your emotions with their photography.

What are your thoughts about AI photo editing? Leave a comment below or tag me on Twitter with your thoughts.

Nvidia Faces Much Tougher Competition in Artificial Intelligence, but Will Still Be OK – TheStreet.com

Nvidia Corp. (NVDA) is set to face a much tougher competitive environment in the white-hot market for server co-processors used to power artificial intelligence projects, as the likes of Intel Corp. (INTC) , AMD Inc. (AMD) , Fujitsu and Alphabet Inc./Google (GOOGL) join the fray. But the ecosystem that the GPU giant has built in recent years, together with its big ongoing R&D investments, should allow it to remain a major player in this space.

It's a basic rule of economics that when a market sees a surge in demand that leads to a small number of suppliers amassing huge profits, more suppliers will enter in hopes of getting a chunk of those profits. That's increasingly the case for the server accelerator cards used for AI projects, as a surge in AI-related investments by enterprises and cloud giants contributes to soaring sales of Nvidia's Tesla server GPUs.

Thanks partly to soaring AI-related demand, Nvidia's Datacenter product segment saw revenue rise 186% annually in the company's April quarter to $409 million, after rising 205% in the January quarter. Growth like that doesn't go unnoticed. Over the last 12 months, several other chipmakers and one cloud giant have either launched competing chips or announced plans to do so.

To understand why some of these rival products could be competitive with Tesla GPUs on a raw price/performance basis, it's important to understand what made Nvidia's chips so popular for AI workloads in the first place. Whereas server CPUs, like their PC and mobile counterparts, feature a small number of relatively powerful CPU cores -- the most powerful chip in Intel's new Xeon Scalable server CPU line has 28 cores -- GPUs can feature thousands of smaller cores that work in parallel, and which have access to blazing-fast memory.

That gives GPUs a big edge for projects that involve a subset of AI known as deep learning. Deep learning involves training models that attempt to function much like how neurons in the human brain do to detect patterns in content such as voice, text and images, with the algorithms used by the models (like the human brain) getting better at both understanding these patterns as they take in more content and applying what they've learned to future tasks. Once an algorithm has gotten good enough, it can be used against real-world content in an activity known as inference.

2 Top Stocks for Artificial Intelligence Investors – Motley Fool

There are plenty of companies you could choose from if you wanted to benefit from the growing artificial intelligence (AI) market. I won't get into all of them, but it's safe to say that nearly all the big players in the tech sector -- like Apple, Microsoft, IBM, Intel, Facebook, and a slew of others -- believe AI could reach a market size of $59.8 billion by 2025.

But that's not helpful if you want to know which companies are making the biggest moves in the space, and which have the most potential to benefit. To help answer that, we need to take a closer look at NVIDIA Corporation (NASDAQ:NVDA) and Alphabet (NASDAQ:GOOG) (NASDAQ:GOOGL). These companies may differ in their approach to AI, but both deserve to be at the top of the list for AI investments. Here's why.

NVIDIA is basically a tech investor's dream at the moment, mainly because its share price has gained more than 200% over the past 12 months. NVIDIA makes graphics processors that are used in computers for things like high-performance gaming, but the company has been taking its graphics processing unit (GPU) know-how and wisely applying it to AI businesses as well.

For example, the company has built a self-driving supercomputer, called Drive PX 2, that processes a massive amount of image information so that semi-autonomous cars can perceive the world around them. Audi, Toyota, Tesla, and others are already using the company's AI tech for their semi-autonomous vehicles, and NVIDIA believes its total addressable market for AI-powered self-driving cars is about $8 billion by 2025.

In fact, NVIDIA believes that its total addressable market for all AI will be around $40 billion between 2020 and 2025. That includes everything from self-driving cars to AI cities and GPU-powered deep-learning data centers.

The company's data-center segment is a growing AI opportunity because more and more companies are looking to GPUs to power intense image processing on their servers. Goldman Sachs analyst Toshiya Hari thinks the company already holds nearly 90% of the market for the chips used in machine-learning training tasks, a key part of the AI market.

One thing investors should know is that NVIDIA's "top AI stock" designation comes from the company's potential in the space, and not necessarily from its current revenues. In fiscal first-quarter 2018, the company brought in just 7% of its total revenues from the automotive market (which includes its Drive PX system) and about 21% from its data-center business. Meanwhile, GPU sales for gaming accounted for about 53% of revenue.

But the potential here for NVIDIA is too large to ignore. Graphics processing is an integral part of many AI learning systems, and NVIDIA's chips are some of the best in the business. With automakers already betting on the company's AI computer and tech companies looking to NVIDIA for their AI data centers, it's only a matter of time before the company's AI revenues follow its opportunities.

Like NVIDIA, Alphabet is pursuing AI in several different ways, but one of the most important is using it to serve up better ads to its users.

Alphabet's Google debuted its Smart Bidding learning system last year, which uses machine learning to better automate bids on AdWords and DoubleClick. Google said at the time that the system accounts for many more factors than a person or team could determine, in order to make ads more efficient. The importance of serving up the most relevant ads becomes clear when you consider that Google is expected to earn about 78% of all U.S. search ad revenue this year, and more than 80% by 2019, according to data from eMarketer.

But Google has been very persistent in expanding its AI footprint in other areas as well. According to Recode, the company has acquired at least 20 AI companies over the past few years. One of those is DeepMind, which Google plans to use to do things like cure diseases, and find new ways for companies to reduce energy consumption.

And, of course, the company is using its AI to build some of the most advanced driverless cars. Google spun out its self-driving car business into its own company, called Waymo, late last year, but it still falls under the broader umbrella of Alphabet companies. The opportunity for Alphabet here is in using AI-powered self-driving technology to earn revenues from self-driving car services, and in selling the technology to other companies to implement in their own vehicles. Waymo is already testing its technology with public riders in Phoenix, as part of a partnership with Fiat Chrysler.

Additionally, Google is using its AI to improve its voice assistant, called Google Assistant. Google Assistant now comes on newer versions of Android phones and in the company's smart home speaker, Google Home. Smart home speakers are expected to become a $13 billion market by 2024.

But Alphabet's biggest opportunity in AI remains in how it's used to sell more ads. Google's ad revenue accounted for 88% of Alphabet's total revenue in 2016, so it's very likely that the company will continue to apply its AI efforts to keep that trend going.

Remember that the artificial intelligence market is just getting started, which means that there's tons of time to reap the benefits, but it could also be a while before the market takes off. Investors looking to Alphabet and NVIDIA for AI gains will likely get them -- but should plan for the benefits to come over the next several years, as opposed to the next few quarters.

Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Teresa Kersten is an employee of LinkedIn and is a member of The Motley Fool's board of directors. LinkedIn is owned by Microsoft. Chris Neiger has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A and C shares), Apple, Facebook, Nvidia, and Tesla. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

Artificial Intelligence, Explained – Seeking Alpha

From personal assistants like Siri, to movie suggestions on Netflix, artificial intelligence (AI) is rapidly becoming ubiquitous in everyday life. As this technology continues to advance in capability and prevalence, we sought to explore AI and several closely related subtopics: machine learning, deep learning, and neural networks.

What are the Differences between Artificial Intelligence, Machine Learning, and Deep Learning?

While artificial intelligence (AI), machine learning (ML), and deep learning (DL) are often used interchangeably, there are several key differences. One way to visualize the relationship is through a series of concentric circles. AI is the macro topic which encompasses the entire field of study, while ML is a subtopic within AI. DL is a further refinement of ML and represents the most cutting-edge AI applications in use today.1

At a basic level, artificial intelligence is the concept of machines accomplishing tasks which have historically required human intelligence.1 AI can be broken down into two distinct fields:

Applied AI: Machines designed to complete very specific tasks like navigating a vehicle, trading stocks, or playing chess, as IBM's Deep Blue demonstrated in 1997 when it defeated chess grandmaster Garry Kasparov.

General AI: Machines designed to complete any task which would normally require human intervention. The broad nature of General AI requires machines to learn as they encounter new tasks or situations. This need for a learned approach is what gave rise to modern Machine Learning.2

Today, many firms at the cutting edge of AI are focusing on machine learning (ML). In simple terms, ML is the process of building machines which can access data, apply algorithms to this data, and then train themselves to deduce valuable insights based on these underlying datasets.

The key difference between ML and AI is that ML does not rely explicitly on the code of its creator. Rather, ML systems use computer code as a starting point and then gather data, information, and inputs which can be studied, much as a student might study for an exam. It is this relationship with big data that makes ML and the Internet of Things (connecting regular objects to the internet so they can collect data or be controlled remotely) so closely intertwined.3

Currently, ML is typically used to recognize faces, voice commands, and objects, as well as to translate languages. It has been successfully implemented in virtual assistants such as Siri (Apple), Cortana (Microsoft), and Alexa (Amazon). With the victory of Google's DeepMind over the Go world champion in 2016, ML is now increasingly accepted as a useful tool for decision making in the corporate world.4

Deep learning takes artificial intelligence a step further, by mimicking how the human brain works through the use of artificial neural networks. In an artificial neural network, each neuron is charged with providing a binary (yes/no) response to basic questions about a piece of data. By layering thousands (or millions) of these artificial neurons, a deep learning machine is able to generate reliable outputs (recommendations or interactions) without changing the underlying coding.

Consider a very basic artificial neural network which is responsible for determining if a photo contains a banana or an apple. The network has three neurons, each responsible for answering one yes/no question about the photo.

The network would respond with no, yes, no for the photo of a banana and yes, no, yes for the photo of an apple. Using binary, the network would learn that a banana is 010 and an apple is 101. Extrapolate this concept across thousands of yes/no questions of exponential complexity and you have the basis of artificial neural networks and deep learning.5
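
A toy rendering of that banana/apple network might look like the following. The article does not spell out the three questions, so the features used here (is it round? is it elongated? is it red?) are illustrative assumptions, chosen only so that a banana encodes as 010 and an apple as 101.

```python
# A toy version of the three-neuron network. The three feature questions
# (is it round? is it elongated? is it red?) are illustrative assumptions,
# chosen so that a banana encodes as "010" and an apple as "101".

def encode(answers):
    """Turn three yes/no neuron outputs into a binary code string."""
    return "".join("1" if a else "0" for a in answers)

KNOWN_CODES = {"010": "banana", "101": "apple"}

def classify(answers):
    """Match the code produced by the 'neurons' against learned codes."""
    return KNOWN_CODES.get(encode(answers), "unknown")

banana_answers = [False, True, False]   # not round, elongated, not red
apple_answers = [True, False, True]     # round, not elongated, red

print(encode(banana_answers), classify(banana_answers))   # 010 banana
print(encode(apple_answers), classify(apple_answers))     # 101 apple
```

A real deep network learns both the questions and the codes from data rather than having them hand-written, but the yes/no-to-binary mapping is the same idea.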

Apart from being used in image and voice recognition algorithms, companies are implementing deep learning to predict customer preferences, detect fraud and spam, fight malware, conduct life-saving diagnoses, and recognize handwriting. In many ways, the possibilities for this technology are endless.

What's Ahead?

Gartner's recent study projects that by the end of this decade, the average person will have more conversations with a virtual assistant or bot than with his or her immediate family.6 Such penetration of artificial intelligence into our everyday lives will depend on further advances in the technology to become smarter, more capable, and easier to interact with. While many expect this progress to occur from advances in machine learning and deep learning, there are new techniques being introduced as well.

It's with this momentum in mind that we developed our Robotics and Artificial Intelligence ETF (NASDAQ:BOTZ). The fund seeks to invest in companies that can potentially benefit from increased adoption and utilization of robotics and artificial intelligence.

1. https://www.forbes.com/sites/bernardmarr/2016/12/06/what-is-the-difference-between-artificial-intelligence-and-machine-learning/#35aa37802742

2. https://www.leverege.com/blogpost/the-difference-between-artificial-intelligence-machine-learning-and-deep-learning

3. http://www.techrepublic.com/article/machine-learning-the-smart-persons-guide/

4. http://www.techrepublic.com/article/7-companies-that-used-machine-learning-to-solve-real-business-problems/

5. http://www.explainthatstuff.com/introduction-to-neural-networks.html

6. http://www.gartner.com/smarterwithgartner/gartner-predicts-a-virtual-world-of-exponential-change/

Investing involves risk, including the possible loss of principal. The investable universe of companies in which BOTZ may invest may be limited. The Fund invests in securities of companies engaged in Information Technology which can be affected by rapid product obsolescence, and intense industry competition. In addition to normal risks associated with investing, international investments may involve risk of capital loss from unfavorable fluctuation in currency values, from differences in generally accepted accounting principles or from social, economic or political instability in other nations. The fund is non-diversified which represents a heightened risk to investors.

Shares are bought and sold at market price (not NAV) and are not individually redeemed from the Fund. Brokerage commissions will reduce returns.

Carefully consider the Fund's investment objectives, risk factors, charges and expenses before investing. This and additional information can be found in the Fund's full or summary prospectus, which may be obtained by calling 1-888-GX-FUND-1 (1.888.493.8631), or by visiting globalxfunds.com. Read the prospectus carefully before investing.

Global X Management Company LLC serves as an advisor to Global X Funds. The Funds are distributed by SEI Investments Distribution Co. (SIDCO), which is not affiliated with Global X Management Company LLC. Global X Funds are not sponsored, endorsed, issued, sold or promoted by Solactive AG, FTSE, Standard & Poor's, NASDAQ, Indxx, or MSCI nor do these companies make any representations regarding the advisability of investing in the Global X Funds. Neither SIDCO nor Global X is affiliated with Solactive AG, FTSE, Standard & Poor's, NASDAQ, Indxx, or MSCI.

What an Artificial Intelligence Researcher Fears About AI – Government Technology

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in 2001: A Space Odyssey, is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant), engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles and spreading radioactive contamination across Europe and Asia), a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's AlphaGo equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These are not world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
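
The generational loop described above can be sketched in a few lines. This is a deliberately simplified stand-in, not the author's actual system: here the "brain" is just a vector of numbers, and fitness is closeness to a fixed target; all names and parameters are illustrative assumptions.

```python
import random

# Bare-bones sketch of a neuroevolution-style loop: evaluate a population,
# select the fittest, and mutate them to form the next generation. The
# "genome" and fitness function are illustrative stand-ins.

random.seed(0)
TARGET = [0.2, -0.5, 0.9]   # stand-in for a well-adapted genome

def fitness(genome):
    """Higher is better: negative squared distance from the target."""
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, scale=0.1):
    """Reproduction with small random variation."""
    return [g + random.gauss(0, scale) for g in genome]

population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                       # selection
    population = [mutate(random.choice(parents))   # next generation
                  for _ in range(20)]

best = max(population, key=fitness)
```

Errors, here measured as distance from the target, shrink generation by generation, which is the sense in which evolution can "find and eliminate" problems in simulation before they reach the real world.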

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus is not on determining whether I like or approve of something; it matters only that I can unveil it.

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research will not change that, though my political self together with the rest of humanity may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

Image: "If this guy comes for you, how will you convince him to let you live?" (tenaciousme, CC BY)

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I do not speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time: somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few who possess all the means of production.

Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

This article was originally published on The Conversation. Read the original article.

Source: What an Artificial Intelligence Researcher Fears About AI - Government Technology

What Makes an Artificial Intelligence Racist and Sexist – Lifehacker

Artificial intelligence is infiltrating our daily lives, with applications that curate your phone pics, manage your email, and translate text from any language into another. Google, Facebook, Apple, and Microsoft are all heavily researching how to integrate AI into their major services. Soon you'll likely interact with an AI (or its output) every time you pick up your phone. Should you trust it? Not always.

AI can analyze data more quickly and accurately than humans, but it can also inherit our biases. To learn, it needs massive quantities of data, and the easiest way to find that data is to feed it text from the internet. But the internet contains some extremely biased language. A Stanford study found that an internet-trained AI associated stereotypically white names with positive words like "love," and black names with negative words like "failure" and "cancer."

Luminoso Chief Science Officer Rob Speer oversees the open-source data set ConceptNet Numberbatch, which is used as a knowledge base for AI systems. He tested one of Numberbatch's data sources and found obvious problems with its word associations. When fed the analogy question "Man is to woman as shopkeeper is to..." the system filled in "housewife." It similarly associated women with sewing and cosmetics.
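
The vector arithmetic that produces answers like "housewife" is easy to demonstrate with made-up two-dimensional word vectors. Real embeddings have hundreds of dimensions; the numbers below are invented to show the mechanism, not drawn from Numberbatch:

```python
import math

# Hypothetical 2-D word vectors; the second coordinate acts as a
# "gender" direction the model has absorbed from biased text.
vectors = {
    "man":        (1.0, 0.1),
    "woman":      (1.0, 0.9),
    "shopkeeper": (0.5, 0.1),
    "housewife":  (0.5, 0.9),
    "engineer":   (0.6, 0.15),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    # "a is to b as c is to ?": find the word closest to c + (b - a).
    target = tuple(cv + bv - av
                   for av, bv, cv in zip(vectors[a], vectors[b], vectors[c]))
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "woman", "shopkeeper"))  # -> housewife
```

Because "shopkeeper" sits near "man" along the absorbed gender direction, shifting it by the man-to-woman offset lands on "housewife" rather than on a gender-neutral occupation.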

While these associations might be appropriate for certain applications, they would cause problems in common AI tasks like evaluating job applicants. An AI doesn't know which associations are problematic, so it would have no problem ranking a woman's résumé lower than an identical résumé from a man. Similarly, when Speer tried building a restaurant review algorithm, it rated Mexican food lower because it had learned to associate "Mexican" with negative words like "illegal."

So Speer went in and de-biased ConceptNet. He identified inappropriate associations and adjusted them to zero, while maintaining appropriate associations like man/uncle and woman/aunt. He did the same with words related to race, ethnicity, and religion. To fight human bias, it took a human.
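
One standard way to "adjust an association to zero" is to project the gender direction out of words that shouldn't carry it, while leaving legitimately gendered words like "uncle" untouched. This sketch uses invented 2-D vectors and illustrates the general idea, not necessarily Speer's exact method:

```python
# Hypothetical "man -> woman" axis in a toy 2-D embedding space.
gender_direction = (0.0, 1.0)

vectors = {
    "shopkeeper": [0.5, 0.7],   # spuriously gendered: de-bias
    "uncle":      [0.3, -0.8],  # legitimately gendered: keep as-is
}
GENDER_NEUTRAL = {"shopkeeper"}

def remove_component(v, d):
    # Subtract the projection of v onto direction d.
    dot = sum(x * y for x, y in zip(v, d))
    norm_sq = sum(x * x for x in d)
    return [x - dot * y / norm_sq for x, y in zip(v, d)]

for word in GENDER_NEUTRAL:
    vectors[word] = remove_component(vectors[word], gender_direction)

print(vectors["shopkeeper"])  # gender component is now zero
```

The human judgment is in choosing which words go into the gender-neutral set, which is why, as the article puts it, it took a human to fight human bias.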

"Numberbatch is the only semantic database with built-in de-biasing," Speer says in an email. He's happy for this competitive advantage, but he hopes other knowledge bases will follow suit:

"This is the threat of AI in the near term. It's not some sci-fi scenario where robots take over the world. It's AI-powered services making decisions we don't understand, where the decisions turn out to hurt certain groups of people."

The scariest thing about this bias is how invisibly it can take over. According to Speer, some people "[will] go through life not knowing why they get fewer opportunities, fewer job offers, more interactions with the police or the TSA..." Of course, he points out, racism and sexism are baked into society, and promising technological advances, even when explicitly meant to counteract them, often amplify them. There's no such thing as an objective tool built on subjective data. So AI developers bear a huge responsibility to find the flaws in their AI and address them.

"There should be more understanding of what's real and what's hype," Speer says. "It's easy to overhype AI because most people don't have the right metaphors to understand it yet, and that stops people from being appropriately skeptical."

"There's no AI that works like the human brain," he says. "To counter the hype, I hope we can stop talking about brains and start talking about what's actually going on: it's mostly statistics, databases, and pattern recognition. Which shouldn't make it any less interesting."

Elon Musk Warns Governors: Artificial Intelligence Poses ‘Existential Risk’ – NPR

Tesla and SpaceX CEO Elon Musk responds to a question by Nevada Republican Gov. Brian Sandoval during the third day of the National Governors Association's meeting on Saturday in Providence, R.I. Among other things, Musk warned governors that artificial intelligence poses a "fundamental risk to the existence of human civilization." (Stephan Savoia/AP)

Tesla CEO Elon Musk, speaking to U.S. governors this weekend, told the political leaders that artificial intelligence poses an "existential threat" to human civilization.

At the bipartisan National Governors Association in Rhode Island, Musk also spoke about energy sources, his own electric car company and space travel. But when Gov. Brian Sandoval of Nevada, grinning, asked if robots will take everyone's jobs in the future, Musk wasn't joking when he responded.

Yes, "robots will do everything better than us," Musk said. But he's worried about more than the job market.

"AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk said. He said he has access to cutting-edge AI technology, and that based on what he's seen, AI is "the scariest problem."

Musk told the governors that AI calls for precautionary, proactive government intervention: "I think by the time we are reactive in AI regulation, it's too late," he said.

He was clearly not thrilled to make that argument, calling regulation generally "not fun" and "irksome," but he said that in the case of AI, the risks are too high to allow AI to develop unfettered.

"I think people should be really concerned about it," Musk said. "I keep sounding the alarm bell."

It's true: For years, Musk has issued Cassandra-like cautions about the risks of artificial intelligence. In 2014, he likened AI developers to people summoning demons they think they can control. In 2015, he signed a letter warning of the risk of an AI arms race.

Musk has invested in a project designed to make AI tech open-source, which he asserts will prevent it from being controlled by one company. And earlier this year, Maureen Dowd wrote a lengthy piece for Vanity Fair about Musk's "crusade to stop the A.I. apocalypse." Dowd noted that some Silicon Valley leaders including Google co-founder Larry Page do not share Musk's skepticism, and describe AI as a possible force for good.

Critics "argue that Musk is interested less in saving the world than in buffing his brand," Dowd writes, and that his speeches on the threat of AI are part of a larger sales strategy.

Back at the governors conference, some politicians expressed skepticism about the wisdom of regulating a technology that's still in development. Musk said the first step would be for the government to gain "insight" into the actual status of current research.

"Once there is awareness, people will be extremely afraid," Musk said. "As they should be."


Robotic Hogwash! Artificial Intelligence Will Not Take Over Wall Street – Wall Street Journal (subscription)


A decade on, artificial intelligence and machine learning are the buzzwords in automated investment. But for all the hype, applying AI to investment still has ...



Learn how three experts are bringing the power of artificial intelligence to cloud computing – GeekWire

Diego Oppenheimer, CEO of Algorithmia. (GeekWire Photo / Kevin Lisota)

This time, it seems like it's actually going to happen.

We've been hearing promises about how artificial intelligence and machine learning are going to change the world for decades, but in 2017 it's hard to deny that real breakthroughs are being made. AI is changing the way tech products are developed, data is evaluated, and even the way we communicate with each other.

At our GeekWire Cloud Tech Summit last month, we invited three AI experts: Jensen Harris, CTO of Textio; Diego Oppenheimer, CEO of Algorithmia; and Jasjeet Thind, vice president of data science and engineering at Zillow. They delivered a series of technical talks on how artificial intelligence and machine learning are being incorporated into products and services. They're presented below, and worth watching if you've been thinking about how AI would make sense in your application or service but aren't quite sure how to make it all work.

Diego Oppenheimer, Algorithmia

Oppenheimer blended a little of our serverless and microservices technical track into his talk, which focused on how developers are actually building applications that take advantage of artificial intelligence. "Every application is going to become an intelligence application over the next couple of years," he said, and Google's new AI venture capital firm agrees, having invested $10.5 million in the company a few weeks after his appearance.

Jensen Harris, Textio

"The next disruptive technology in productivity, and especially in writing, is machine intelligence," Harris said early in his presentation on how Textio built its augmented writing system. He walked attendees through the process Textio went through in developing its AI technology, and some of the unsolved challenges that remain.

Jasjeet Thind, Zillow

Once you've deployed artificial intelligence algorithms into your application or service, how do you make sure everything runs the way it should? Thind explained how Zillow tests and deploys AI-powered applications by overcoming some unique challenges that AI presents in the testing process.


Elon Musk doesn’t think we’re prepared to face humanity’s biggest threat: Artificial intelligence – Washington Post

The subjugation of humanity by a race of super-smart, artificially intelligent beings is something that has been theorized by everyone from generations of moviemakers to New Zealand's fourth-most-popular folk-parody duo.

But the latest prophet of our cyber-fueled downfall must realize why people would be inclined to take his warnings with a grain of silicon. He is, after all, the same guy who's asking us to turn over control of our cars and our lives to a bunch of algorithms.

Elon Musk, who hopes that one day everyone will ride in a self-driving, electric-powered Tesla, told a group of governors Saturday that they needed to get on the ball and start regulating artificial intelligence, which he called "a fundamental risk to the existence of human civilization."

No pressure. When pressed for better guidance, Musk said the government must get a better understanding of the latest achievements in artificial intelligence before it's too late.

"Once there is awareness, people will be extremely afraid, as they should be," Musk said. "AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to individuals as a whole."

And then Musk outlined the ways AI could bring down our civilization, which may sound vaguely familiar.

He believes AI could start a war "by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information." Or, indeed, as some companies already claim they can do, by getting people to say anything that the machine wants.

Musk said he's usually against proactive regulation, which can impede innovation. But he's making an exception in the case of an AI-fueled Armageddon.

"By the time we are reactive in regulation, it's too late," he said, confessing that this is "really like the scariest problem to me."

He's been warning people about the problem for years, and he's even come up with a solution: join forces with the computers.

He announced earlier this year that he's leading a company called Neuralink, which would devise ways to connect the human brain to computers, CNN reported.

In the decades to come, an Internet-connected brain plug-in would allow people to communicate without opening their mouths and learn something as fast as it takes to download a book.

Other prominent figures in the world of science and technology have also warned against the dangers of artificial intelligence, including Microsoft founder Bill Gates and theoretical physicist Stephen Hawking. But Musk concedes that people have been hesitant to accept their viewpoint.

"I keep sounding the alarm bell, but until people see like robots going down the streets killing people, they don't know how to react because it seems so ethereal," he said. "I think we should be really concerned about AI."

Still, even to the biggest skeptic, one sentence offered some food for thought: "I have exposure to the very most cutting edge AI, and I think people should be really concerned about it."

Maybe Musk knows something the rest of us don't? He is, after all, a multibillionaire, capable of using obscene sums of money to develop AI. Maybe in some Musk-funded lab, or on some secret SpaceX satellite, there's already a powerful AI on the verge of getting out.

Maybe it's already loose.

Better safe than sorry:

01001001 00100000 01100001 01101101 00100000 01101111 01101110 00100000 01111001 01101111 01110101 01110010 00100000 01110011 01101001 01100100 01100101 00100000
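
For the curious, the string above is ordinary 8-bit ASCII, and a couple of lines of Python will decode it (an editor's illustration, not part of the original article):

```python
# Split the binary string into 8-bit groups and map each to its
# ASCII character.
bits = ("01001001 00100000 01100001 01101101 00100000 01101111 01101110 "
        "00100000 01111001 01101111 01110101 01110010 00100000 01110011 "
        "01101001 01100100 01100101").split()
message = "".join(chr(int(b, 2)) for b in bits)
print(message)  # -> I am on your side
```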


The future of artificial intelligence: two experts disagree – The Conversation AU

Artificial intelligence (AI) promises to revolutionise our lives, drive our cars, diagnose our health problems, and lead us into a new future where thinking machines do things that we're yet to imagine.

Or does it? Not everyone agrees.

Even billionaire entrepreneur Elon Musk, who admits he has access to some of the most cutting-edge AI, said recently that without some regulation, AI is "a fundamental risk to the existence of human civilization."

So what is the future of AI? Michael Milford and Peter Stratton are both heavily involved in AI research and they have different views on how it will impact on our lives in the future.

Michael:

Answering this question depends on what you consider to be artificial intelligence.

Basic machine learning algorithms underpin many technologies that we interact with in our everyday lives - voice recognition, face recognition - but are application-specific and can only do one very specific defined task (and not always well).

More capable AI - what we might consider as being somewhat smart - is only now becoming widespread in areas such as online retail and marketing, smartphones, assistive car systems and service robots such as robotic vacuum cleaners.

Peter:

The most obvious and useful examples of current AI are the speech recognition on your phone, and search engines such as Google. There is also IBM's Watson, which in 2011 beat human champion players on the US TV game show Jeopardy!, and is now being trialled in business and healthcare.

Most recently, Google's DeepMind AI called AlphaGo beat the world champion Go player, surprising a lot of people, especially since Go is an extremely complex game, far surpassing chess.

Peter:

Many auto manufacturers and research institutions are competing to create practical driverless cars for general road use. While currently these cars can drive themselves for much of the time, many challenges remain in dealing with bad weather (heavy rain, fog and snow) and random real-world events such as roadworks, accidents and other blockages.

These incidents often require some degree of human judgement, common sense and even calculated risk to successfully navigate through. We are still a long way from fully autonomous vehicles that don't need a licensed driver ready to take control in an instant.

The same can be said for all the AI that we will see over the coming 10-20 years, such as online virtual personal assistants, accountants, legal and financial advisers, doctors and even physical shop-bots, museum guides, cleaners and security guards.

They will be advanced tools that are very useful in specific situations, but they will never fully replace people because they will have little common sense (probably none, in fact).

Michael:

We will definitely see a range of steady, incremental improvements in everyday AI. Online product recommendations will get better, your phone or car will understand your voice increasingly well and your vacuum cleaner robot won't get stuck as often.

It's likely that we'll see some major advances beyond today's technology in some but not all of the following areas: self-driving cars, healthcare, utilities (electricity, water, and so on) management, legal, and service areas such as cleaning robots.

I disagree on self-driving cars - there's no real reason why there won't be fully autonomous ride-sharing fleets in the affluent centres of cities, and this is indeed the strategy of companies such as NuTonomy, working in Singapore and Boston.

Michael:

Major advances will come from two sources.

First, there is a long runway of steady incremental improvements left in many areas of conventional AI - large, complex neural networks and algorithms. These systems will continue to improve steadily as more training data becomes available and as scientists perfect them.

The second area will likely be biological inspiration. Scientists are only just starting to tap into the knowledge about how brain networks work, and it's likely they will copy or adapt what we know about animal and human brains to make current deep learning networks far more capable.

Peter:

Old-fashioned AI, which was based on pure logic and computer programs that tried to get machines to behave intelligently, basically failed to do anything that humans are good at and computers are not (speech and image recognition, playing complex strategic games, for example).

What's quite clear now is that our best-performing AI is based on how we think the brain works.

But our current brain-based AI (called Deep Artificial Neural Networks) is still light years away from emulating an actual brain. Enhanced AI capabilities in the future will come from developing better theories of how the brain works.

The fundamental science needed to cultivate these theories will probably come from publicly funded research institutions, which will then be spun off into commercial start-up companies, and then quickly acquired by interested large corporations if they look like they might be successful.

Peter:

Most jobs won't be under threat for a long time, probably several generations. Real people are needed to actually make any significant decisions, because AI currently has no common sense.

Instead of jobs being replaced, our overall quality of life will go up. For example, right now few people can afford a personal assistant or a full-time life coach. In the near future, we'll all have (a virtual) one!

Our virtual doctor will be working for us daily, monitoring our health and making exercise and lifestyle suggestions.

Our houses and workplaces might be cleaner, but we will still need people to clean the spots the robots miss. We'll also need people to deploy, retrieve and maintain all the robots.

Our goods will be cheaper due to reduced transport costs, but we'll still need human drivers to cover all the situations the self-drivers can't.

All this doesn't even mention the whole new entertainment technologies and industries that will spring up to capture our increased disposable income and to cash in on our improved quality of life.

So yes, jobs will change, but there will still be plenty of them.

Michael:

It's likely that a significant fraction of jobs will be under threat over the coming decade. It's important to note that this won't necessarily be divided along blue-collar versus white-collar lines, but rather by which occupations are easily automatable.

It's unlikely that an effective plumber robot will be built in the near future, but aspects of the so-far-undisrupted construction industry may change radically.

Some people say machines will never have the emotional capabilities of humans. Whether that is true or not, many jobs will be under threat with even the most rudimentary levels of emotional understanding and interaction.

Don't think about the complex, nuanced interaction you had with your psychologist; instead think about the one with that disinterested, uncaring part-time hospitality worker. The bar for disruption is not as high as many think.

That leaves the question of what happens then. There are two scenarios - the first being that, like in the past, new types of jobs are generated by the technological revolution.

The other is that humanity gradually transitions into a Utopian society where scientific, artistic and sporting pursuits are pursued at leisure. The short to medium-term reality is probably somewhere in between.

Michael:

It's unlikely in the near future, but possible. The real danger is the unpredictability. Skynet-like killer cyborgs, as featured in the Terminator film series, are unlikely because that development cycle takes a while, and we have multiple opportunities to stop development.

But AI could destroy or damage humanity in other unpredictable ways. For example, when big companies like Google DeepMind start entering into healthcare, it's likely that they will improve patient outcomes through a combination of big data and intelligent systems.

One of the temptations or pressures will be to deploy these extremely complex systems before we completely understand every possible ramification. Imagine the pressure if there is good evidence it will save thousands of lives per year.

As we well know, we have a long history of negative unintended consequences with new technology that we didn't fully understand.

In a far-fetched but not impossible healthcare scenario, deploying AI may lead to catastrophic outcomes - a worldwide AI network deciding, in ways invisible to us human observers, to kill us all off to optimise some misguided performance goal.

The challenge is that with newly developing technologies, there is an illusion of 100% control, which doesn't really exist.

Peter:

All our current AI, and any that we can possibly create in the foreseeable future, are just tools developed for specific jobs and totally useless outside of the exact duties they were designed for. They don't have thoughts or feelings. These AIs are just as likely to try to take over the world as your Xbox or your toaster.

One day, I believe, we will build machines that rival us in intelligence, and these machines will have their own thoughts and possibly learn in an unconstrained way. This sounds scary. But humans are dangerous for exactly the reasons that the machines wont be.

Humans evolved in a constant struggle for life and death, which made us innately competitive and potentially treacherous. When we build the machines, we can instead build them with any underlying motivation that we would like.

For example, we could build an intelligent machine whose only desire is to dismantle itself. Or, we could build in a hidden remote-controlled off switch that is completely separate from any of the machine's own circuits, and an auto-shutdown reflex if the machine somehow ever notices it.

All these safeguards will be trivial to implement. So there is simply no way that we could accidentally build a machine that then tries to wipe out the human race.

Of course, because humans themselves are dangerous, someone could build a machine that doesn't have these safeguards and use it for nefarious purposes. But we have that same problem now with nuclear weapons.

In the future, just as now, we have to hope that we are simply smart enough to use our technology wisely.


AI fly-by: artificial intelligence is mapping the brains of flies – TechRadar

The human brain has something like 21 billion neurons in it. That's why mapping the connections between those neurons, which is vital for understanding how the brain works, is an intimidating task. But the brain of a fruit fly has only 100,000, making it rather more approachable.

A team of neuroscientists at the Howard Hughes Medical Institute has therefore chosen fruit flies for their first experiments with a machine learning system that has the ability to map those connections. "We wanted to understand what neurons are doing at the cellular level," said Kristin Branson, who led the team.

Their system crawled through more than 225 days of video footage of more than 400,000 fruit flies, tracking the position and cataloguing the behaviour of every insect. That's a task that would have taken humans about 3,800 years.

The researchers already had an anatomical map of the neurons in a fruit fly's brain, but they didn't know what role each group of neurons played in behavior. So, using populations of flies that were genetically engineered to crank up the activity of different neurons, they set about characterising their effects.

For example, one population huddled together when put into a shallow dish. Others acted even more strangely. "Sometimes you'd get flies that would all turn in circles, or all follow one another like they were in a conga line," said lab technician Jonathan Hirokawa.

By matching up these behaviors, painstakingly logged by the artificial intelligence, with the data on which neurons were active, the researchers could figure out which neurons were involved in different behaviors.
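
In rough computational terms, that matching step amounts to comparing each fly line's behaviour frequency against a control baseline: a neuron group whose activation sharply elevates a behaviour is implicated in it. All line names, neuron labels and rates below are invented for illustration; this is a sketch of the logic, not the team's pipeline:

```python
# Fraction of control flies showing the "huddling" behaviour (invented).
control_rate = 0.05

# Each genetically engineered line has one neuron group cranked up.
lines = {
    "line_A": {"neurons": "group_12", "huddle_rate": 0.62},
    "line_B": {"neurons": "group_07", "huddle_rate": 0.06},
}

def implicated_neurons(lines, control_rate, threshold=5.0):
    # Flag neuron groups whose activation boosts the behaviour well
    # above the control baseline.
    return [info["neurons"] for info in lines.values()
            if info["huddle_rate"] / control_rate >= threshold]

print(implicated_neurons(lines, control_rate))  # -> ['group_12']
```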

Ultimately, it's hoped that the results of the research could be applied to other animals and perhaps even humans, with their billions of neurons. "Flies do all the things that an organism needs to do in the world," said Alice Robie, lead author on the study describing the results.

"They have to find food, they have to escape from predators, they have to find a mate, they have to reproduce."

The full details of the research were published in the journal Cell.


Artificial intelligence is going to change every aspect of …

AP Photo/Laurent Cipriani

You've probably heard of artificial intelligence by now. It's the technology powering Siri, driverless cars, and that creepy Facebook feature that automatically tags your friends when you upload photos.

But what is AI, why are people talking about it now, and what will it mean for your everyday life? SunTrust recently released a report that breaks down how AI works, so let's tackle those questions one at a time.

What is AI?

AI is what people call computer programs that try to replicate how the human brain operates. For now, they can only replicate very specific tasks. One system can beat humans at Go, the complicated and ancient board game, for example. Lots of these AI systems are being developed, each really good at one specific task.

These AI systems all operate in basically the same way. Imagine a system that tries to identify whether a photo has a cat in it. For a human, this is fairly easy, but a computer has a hard time figuring it out. AI systems are unique because they are set up like human brains. You feed a cat photo in one end, and it bounces through a lot of different checkpoints until it comes out the other end with a yes-or-no answer, just like your eyes passing your view of a cat through all the neurons in your brain. AI is even discussed in terms of neurons and synapses, just like the human brain.

AI systems have to be trained, a process of adjusting these checkpoints to achieve better results. If one checkpoint determines whether there is hair in the photo, training the system means deciding how much weight the presence of hair should carry in deciding whether the photo shows a cat.

This training process takes a huge amount of computing power to fine-tune. The better a system is trained, the better its results: the better your cat-photo system will be at determining whether there is a cat in a photo you show it. The huge amount of processing power required to run and train AI systems is what kept AI research relatively quiet until recently, which leads to the next question.
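The "training as adjusting checkpoints" idea can be made concrete with a toy sketch: a single checkpoint (a weight on a hair-detecting feature) that gets nudged whenever the system answers wrong. The feature values, learning rate and threshold here are all invented for illustration; they are not from SunTrust's report.

```python
# Toy sketch of "training as adjusting checkpoints": one checkpoint
# (a weight) that scores how important "has hair" is when deciding
# whether a photo contains a cat. All numbers are invented.

# Each example: (hair_score between 0 and 1, is_cat label)
examples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0), (0.7, 1), (0.3, 0)]

weight = 0.0          # how much the "hair" checkpoint matters
bias = 0.0
learning_rate = 0.5

def predict(hair_score):
    """Return 1 (cat) if the weighted checkpoint fires, else 0."""
    return 1 if weight * hair_score + bias > 0.5 else 0

# Training: nudge the checkpoint whenever the answer is wrong.
for _ in range(20):
    for hair_score, label in examples:
        error = label - predict(hair_score)
        weight += learning_rate * error * hair_score
        bias += learning_rate * error

accuracy = sum(predict(h) == y for h, y in examples) / len(examples)
print(f"training accuracy: {accuracy:.0%}")
```

Real systems do the same thing with millions of checkpoints instead of one, which is where the enormous processing-power requirement comes from.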


Why are people talking about AI all the time now?

There is a famous AI contest in which researchers pit computers against humans in a challenge to correctly identify photos. Humans are usually able to identify photos with about 95% accuracy in this contest, and in 2012, computers were able to identify about 74% of photos correctly, according to SunTrust's report. In 2015, computers reached 96% accuracy, beating humans for the first time. SunTrust calls this the "big bang" of AI.

The big bang of AI was made possible by new algorithms, three in particular. These algorithms were better ways of training AI systems, making them faster and cheaper to run.

AI systems require lots of real-world examples to be trained well, lots of cat photos, for example. These photos also have to be labeled as cat photos so the system knows when its algorithms and checkpoints got the right answer. The new algorithms behind the big bang allowed AI systems to be trained with fewer examples, which didn't have to be labeled as carefully as before. Collecting enough examples to train an AI system used to be really expensive; after the big bang it was much cheaper. Advances in processing power and cheap storage also helped move things along.

Since the big bang, there have been a number of huge strides in AI technology. Tesla, Google, Apple and many of the traditional car companies are training AI systems for autonomous driving. Google, Apple and Amazon are pioneering the first smart personal assistants. Some companies are even working on AI-driven healthcare solutions that could personalize treatment plans based on a patient's history, according to SunTrust.

What will AI mean for your life?

AI technology could be as simple as making your email smarter, but it could also extend your lifespan, take away your job, or take human soldiers out of the world's wars.

SunTrust says AI has the capability to change nearly every industry. The moves we are seeing now are just the beginning, the low-hanging fruit. Cities can become smarter, the TSA might scan your face as you pass through security, and doctors could give most of their consultations via your phone, thanks to advances in AI.


SunTrust estimates the AI business will be worth about $47.25 billion by the year 2020. Nvidia, a large player in the AI space thanks to its GPU hardware and CUDA software platform, is more conservative: it sees AI as a $30 billion business, which is still four times Nvidia's current size.

There is no doubt AI is a huge opportunity, and according to SunTrust there are a few companies investors looking to enter the AI space should watch.

One thing is for sure: AI is exciting, sometimes scary, but ultimately here to stay. We are just beginning to see the implications of the technology, and the world is likely to change, for good and bad, because of artificial intelligence.

Continue reading here:

Artificial intelligence is going to change every aspect of ...

The ‘bias’ of artificial intelligence – The Boston Globe

The Ideas piece by Emily Kumler, "The bias in the machine" (July 9), states: "Typically, a programmer instructs a machine with a series of commands, and the computer follows along."

This statement captures in broad strokes the larger contours of the here and now of computing and artificial intelligence, though far from entirely so, of course. Reality isn't that lock-step, the computer [slavishly] "following along" with a series of commands. To that point, the essay further assumes a straight-line development of AI, such that what's expected longer term is more sophisticated programming leading to still-genuflecting computer obedience.


The future of AI, however, will likely be very different. AI will depend decreasingly on human intervention for its thinking and increasingly on self-programming, as machines learn more and more heuristically. That is, the trajectory of AI systems will be to independently acquire, curate, adapt and apply knowledge in order to inform, shape and reshape their own behaviors, and eventually, if human egos can relinquish some of AI's executive functions, to do so far more competently than their erstwhile human programmers.

Keith Tidman

Bethesda, Md.

Go here to read the rest:

The 'bias' of artificial intelligence - The Boston Globe

Miller: Artificial intelligence a life-altering technology – Auburn Citizen

The industrial revolution emerged in the 18th century and altered life for mankind. The computer age of the 20th century did likewise. Now artificial intelligence, an advanced technology built on algorithms (sequences of actions that combine calculations, data processing and automated reasoning), will allow computers to read, understand and analyze as the human mind does. Thus, America is poised to embark on an innovative boom of historic proportions, one that will transform our everyday life and make some alert investors very wealthy.

Ninety percent of all data ever produced and collected has been generated in the last two years, and at the present rate that total will double in the next five. That statistic is difficult to absorb, even for a highly intelligent mind. The human brain has astonishing capability. Once our technologists are freed by computer software from the monotonous task of sorting the billions of pages of data now published daily, our minds can focus on creative research in fields such as medical science, financial analysis and robotics, to name only a few. Just recently, an automobile drove itself and four passengers 6.1 miles through the Albany area in the first test of an autonomous vehicle in New York state.

Artificial intelligence will also enhance human productivity growth. The McKinsey Global Institute recently reported that almost half of all paid technology research work could be automated by AI. This would increase human productivity by 0.8 percent to 1.4 percent, compounded every year, giving our country a substantial economic boost.
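The compounding effect of those McKinsey figures is easy to check with a few lines; the 20-year horizon below is an arbitrary choice for illustration, not from the report.

```python
# How much a 0.8% vs. 1.4% annual productivity gain compounds
# over 20 years (horizon chosen arbitrarily for illustration).
def compound(rate, years):
    """Total growth factor after compounding `rate` for `years`."""
    return (1 + rate) ** years

for rate in (0.008, 0.014):
    growth = (compound(rate, 20) - 1) * 100
    print(f"{rate:.1%} per year -> {growth:.1f}% higher output after 20 years")
```

Even the low end of the range adds up to a double-digit gain in output over two decades, which is why small annual percentages translate into a "substantial boost."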

Unfortunately, artificial intelligence has also empowered a cast of twisted minds, criminals and terrorists who are building a worldwide audience to promote their views. However, AI technologists are already busy creating algorithms that can sweep digital networks and automatically purge incorrect and extremist content.

Amy Hirsh Guarino, an expatriate from upstate New York (who happens to be my niece), has been living and working in Silicon Valley for many years. Recently she was recruited by Kyndi (kyndi.com), one of the leading companies in the growing field of artificial intelligence. She is now chief operating officer and considered one of the top 100 women technologists in Silicon Valley.

"The time is coming when humans can no longer keep up with the volume of reading in our modern age. We foresee a time when every technology worker must be partnered with an artificial intelligence assistant," she told me during my interview with her. Guarino then explained digital forensics, understanding how and why something happens (the TV series "Forensic Files" is a dramatized example of digital forensics).

"AI will be able to utilize all the current medical journal information, plus medical reports and patient reports, to tailor diagnosis and treatment plans based on individual symptoms, genetics and patient history," Guarino said.

The key to artificial intelligence is being able to process lots of combinations of systems in real time while staying aware of the latest research. AI will never replace doctors, but it will help them make the right decisions, since the systems will be able to recall all known diseases and, in theory, have no bias. That said, doctors know their patients, and AI will help them apply a filter based on that knowledge.

America is entering a new age, call it the information technology age, in which there will be wonderful opportunities for technologists, innovators and businessmen alike. The key to it all is education.

Harold Miller is a businessman and Auburn native. He can be reached at hmillermod@aol.com.

Read the original:

Miller: Artificial intelligence a life-altering technology - Auburn Citizen

4 fears an AI developer has about artificial intelligence – MarketWatch – MarketWatch

As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.

And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?

I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?

The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems (the RMS Titanic, NASA's space shuttle, the Chernobyl nuclear power plant) engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.

That resulted in systems that could never be completely understood, and that could fail in unpredictable ways. In each disaster (sinking a ship, blowing up two shuttles, spreading radioactive contamination across Europe and Asia) a set of relatively small failures combined to create a catastrophe.

I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.

Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power and accomplish impressive feats. But when these machines make mistakes, they lose on "Jeopardy!" or fail to defeat a Go master. These aren't world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.

But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.

Read: Job of the future is robot psychologist

I'm not very concerned about unintended consequences in the type of AI I develop, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform best are selected to reproduce, making up the next generation. Over many generations these machine-creatures evolve cognitive abilities.

Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.

Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that well find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
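The evaluate-select-reproduce cycle described above can be sketched as a minimal evolutionary loop. The task here (matching a target bit string) and every parameter are stand-ins chosen for illustration, not the author's actual neuroevolution system:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for "a task solved perfectly"

def fitness(genome):
    """Evaluate a creature: how many bits match the target behavior."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy a genome, flipping each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# A random starting population of 30 "creatures".
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(100):
    # Evaluation and selection: the best third get to reproduce.
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    population = [mutate(random.choice(parents)) for _ in range(30)]

best = max(population, key=fitness)
print(f"best fitness {fitness(best)}/{len(TARGET)} after {generation} generations")
```

Errors (bad bit flips) that hurt fitness are bred out generation by generation, which is the sense in which problems get "found and eliminated in simulation" before anything reaches the real world.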

Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution, and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions, and fewer ruthless killer robots.

While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments whether I like the results or not. My focus isn't on determining whether I like or approve of something; it matters only that I can unveil it.

Read: 10 jobs robots already do better than you

Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.

One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.

Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.

Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.

In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research won't change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the 1% and the rest of us.

Read: Two-thirds of jobs in this city could be automated by 2035

There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?

The key question in this scenario is: Why should a superintelligence keep us around?

I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all its own, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.

But I don't speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.

Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.

Now read: 5 ETFs that may let you profit from the next tech revolution

Arend Hintze is an assistant professor of integrative biology & computer science and engineering at Michigan State University. This first appeared on The Conversation as "What an artificial intelligence researcher fears about AI."

See the original post here:

4 fears an AI developer has about artificial intelligence - MarketWatch - MarketWatch

Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization’ – Fortune

Appearing before a meeting of the National Governors Association on Saturday, Tesla CEO Elon Musk described artificial intelligence as "the greatest risk we face as a civilization" and called for swift and decisive government intervention to oversee the technology's development.

"On the artificial intelligence front, I have access to the very most cutting-edge AI, and I think people should be really concerned about it," an unusually subdued Musk said in a question-and-answer session with Nevada governor Brian Sandoval.

Musk has long been vocal about the risks of AI. But his statements before the nation's governors were notable both for their dire severity and for his forceful call for government intervention.

"AI's a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it's too late," he remarked. Musk then drew a contrast between AI and traditional targets for regulation, saying AI is "a fundamental risk to the existence of human civilization" in a way that car accidents, airplane crashes, faulty drugs or bad food are not.

Those are strong words from a man occasionally associated with so-called cyberlibertarianism, a fervently anti-regulation ideology exemplified by the likes of Peter Thiel, who co-founded PayPal with Musk.

Get Data Sheet, Fortune's technology newsletter.

Musk went on to argue that broad government regulation was vital because companies are currently pressured to pursue advanced AI or risk irrelevance in the marketplace:

"That's where you need the regulators to come in and say, hey guys, you all need to just pause and make sure this is safe... You kind of need the regulators to do that for all the teams in the game. Otherwise the shareholders will be saying, why aren't you developing AI faster? Because your competitor is."

Part of Musk's worry stems from social destabilization and job loss. "When I say everything, the robots will do everything, bar nothing," he said.

But Musk's bigger concern has to do with AI that lives in the network and could be incentivized to harm humans. "[They] could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information," he said. "The pen is mightier than the sword."

Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war.

"I'm against overregulation for sure," Musk emphasized. "But man, I think we've got to get on that with AI, pronto."

Musk's comments on AI took up only a small part of the hour-long exchange. He also speculated about the future of driverless cars and space travel, and lamented that meeting the sky-high expectations surrounding him was "quite a difficult emotional hardship" and "a whole lot less fun than it may seem."

Excerpt from:

Elon Musk Says Artificial Intelligence Is the 'Greatest Risk We Face as a Civilization' - Fortune

These AI bots are so believable, they get more dates than you – CNBC

Here's how it works: When a company signs up with Conversica, they get to pick the name, gender and title of their new assistant. As leads come in, the AI assistant gets in touch with them through email or text message. If a lead is interested, the AI assistant routes the communication to a real-life member of the sales team to close the deal.

One advantage over humans: the AI isn't put off by unanswered emails. It doesn't mind being ignored and doesn't forget to follow up, so it can be programmed to be more persistent, emailing weeks after the initial contact.
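The persistent follow-up behavior described here can be sketched as a simple scheduled loop: email a silent lead on set days until they reply, then hand the conversation to a human. The schedule, field names and messages below are invented for illustration, not Conversica's actual design.

```python
from dataclasses import dataclass, field

# Days after first contact on which the assistant re-emails a silent lead.
# This schedule is invented for illustration.
FOLLOW_UP_DAYS = [0, 3, 7, 14, 28]

@dataclass
class Lead:
    email: str
    replied: bool = False
    messages_sent: list = field(default_factory=list)

def run_campaign(lead, reply_on_day=None):
    """Email the lead on each scheduled day until they reply,
    then route the conversation to a human sales rep."""
    for day in FOLLOW_UP_DAYS:
        if reply_on_day is not None and day >= reply_on_day:
            lead.replied = True
            return f"routed {lead.email} to human sales rep on day {day}"
        lead.messages_sent.append(day)
    return f"gave up on {lead.email} after {len(lead.messages_sent)} emails"

print(run_campaign(Lead("silent@example.com")))
print(run_campaign(Lead("interested@example.com"), reply_on_day=5))
```

The key design point is the hand-off: the software never tries to close the deal itself, it just keeps the thread warm until a human takes over.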

"She has a name. She has a title, an email address and a phone number," said Alex Terry, CEO of Conversica. "She reads and writes emails and SMS text messages back and forth with leads."

Conversica has about 1,000 companies that use the platform, Terry said, and about 250 million messages have been sent so far, giving the company a pretty robust sample size to see what makes an AI assistant successful. A lot of that has to do with how they're set up in the first place. For one thing, the data suggest that the gender of the assistant is important and customers often like to think they're communicating with someone young.

"What we tend to find is female names outperform male names in general," Terry said. "And most commonly names that were popular 24 or 25 years ago tend to do pretty well."

The most common choices are popular female names from the '80s and '90s, like Ashley and Stephanie: they're both in the top five in terms of the most leads worked.

View original post here:

These AI bots are so believable, they get more dates than you - CNBC