Artificial intelligence will change America. Here’s how. – Washington Post

By Jonathan Aberman, February 27 at 7:00 AM

The term artificial intelligence is widely used but little understood. As we see it permeate our everyday lives, we should reckon with its inevitable exponential growth and learn to embrace it before tremendous economic and social changes overwhelm us.

Part of the confusion about artificial intelligence is in the name itself. There is a tendency to think about AI as an endpoint: the creation of self-aware beings with consciousness that exist thanks to software. This somewhat disquieting concept weighs heavily; what makes us human when software can think, too? It also distracts us from the tremendous progress that has been made in developing the software that ultimately drives AI: machine learning.

Machine learning allows software to mimic and then perform tasks that were until very recently carried out exclusively by humans. Simply put, software can now substitute for workers' knowledge to a level where many jobs can be done as well or even better by software. This reality makes a conversation about when software will acquire consciousness somewhat superfluous.

When you combine the explosion in competency of machine learning with a continued development of hardware that mimics human action (think robots), our society is headed into a perfect storm where both physical labor and knowledge labor are equally under threat.

The trends are here, whether through the coming of autonomous taxis or medical diagnostics tools evaluating your well-being. There is no reason to expect this shift towards replacement to slow as machine learning applications find their way into more parts of our economy.

The invention of the steam engine and the industrialization that followed may provide a useful analogue to the challenges our society faces today. Steam power first substituted the brute force of animals and eventually moved much human labor away from growing crops to working in cities. Subsequent technological waves such as coal power, electricity and computerization continued to change the very nature of work. Yet, through each wave, the opportunity for citizens to apply their labor persisted. Humans were the masters of technology and found new ways to find income and worth through the jobs and roles that emerged as new technologies were applied.

Here's the problem: I am not yet seeing a similar analogy for human workers when faced with machine learning and AI. Where are humans to go when most things they do can be better performed by software and machinery? What happens when human workers are not users of technology in their work but instead replaced by it entirely? I will admit to wanting to have an answer, but not yet finding one.

Some say our economy will adjust, and we will find new ways to engage in commerce that rely on human labor. Others are less confident and predict a continued erosion of labor as we know it, leading to widespread unemployment and social unrest.

Other big questions raised by AI include what our expectations of privacy should be when machine learning needs our personal data to be efficient. Where do we draw the ethical lines when software must choose between two people's lives? How will a society capable of satisfying such narrow individual needs maintain a unified culture and look out for the common good?

The potential and promise of AI requires a discussion free of ideological rigidity. Whether change occurs as our society makes those conscious choices or while we are otherwise distracted, the evolution is upon us regardless.

Jonathan Aberman is a business owner, entrepreneur and founder of Tandem NSI, a national community that connects innovators to government agencies. He is host of What's Working in Washington on WFED, a program that highlights business and innovation, and he lectures at the University of Maryland's Robert H. Smith School of Business.


Honda Chases Silicon Valley With New Artificial-Intelligence Center – Wall Street Journal (subscription)


TOKYO: Honda Motor Co. is creating a research arm focused on artificial intelligence, an area where one of its American advisers says it risks falling behind. R&D Center X will open in Tokyo in April as a software-focused counterpart to Honda's ...
New Honda R&D centre to develop technologies such as autonomous driving and artificial intelligence - Financial Express



Why 2017 Is The Year Of Artificial Intelligence – Forbes

A recent acceleration of innovation in artificial intelligence (AI) has made it a hot topic in boardrooms, government and the media. But it is still early, and everyone seems to have a different view of what AI actually is. Having investigated the ...


4 challenges Artificial Intelligence must address – The Next Web – TNW

If news, polls and investment figures are any indication, Artificial Intelligence and Machine Learning will soon become an inherent part of everything we do in our daily lives.

Backing up the argument are a slew of innovations and breakthroughs that have brought the power and efficiency of AI into various fields including medicine, shopping, finance, news, fighting crime and more.


But the explosion of AI has also highlighted the fact that while machines will plug some of the holes human-led efforts leave behind, they will bring disruptive changes and give rise to new problems that can challenge the economic, legal and ethical fabric of our societies.

Here are four issues that Artificial Intelligence companies need to address as the technology evolves and invades even more domains.

Automation has been eating away at manufacturing jobs for decades. Huge leaps in AI have accelerated this process dramatically and propagated it to other domains previously imagined to remain indefinitely in the monopoly of human intelligence.

From driving trucks to writing news and performing accounting tasks, AI algorithms are threatening middle-class jobs like never before. They might set their sights on other areas as well, such as replacing doctors, lawyers or even the president.

It's also true that the AI revolution will create plenty of new data science, machine learning, engineering and IT job positions to develop and maintain the systems and software that will be running those AI algorithms. But the problem is that, for the most part, the people who are losing their jobs don't have the skill sets to fill the vacant posts, creating an expanding tech-talent vacuum and a growing population of unemployed and disenchanted workers. Some tech leaders are even getting ready for the day the pitchforks come knocking at their doors.

In order to prevent things from spinning out of control, the tech industry has a responsibility to help society adapt to the major shift overtaking the socio-economic landscape and smoothly transition toward a future where robots will occupy more and more jobs.

Teaching new tech skills to people who are losing or might lose their jobs to AI can complement these efforts. In tandem, tech companies can employ rising trends such as cognitive computing and natural language generation and processing to help break down the complexity of tasks and lower the bar for entry into tech jobs, making them available to more people.

In the long run, governments and corporations must consider initiatives such as Universal Basic Income (UBI), unconditional monthly or yearly payments to all citizens, as we slowly inch toward the day when all work will be carried out by robots.

As has been proven on several accounts in the past years, AI can be just as or even more biased than humans.

Machine Learning, the popular branch of AI that is behind face recognition algorithms, product suggestions, advertising engines, and much more, depends on data to train and hone its algorithms.

The problem is, if the information trainers feed to these algorithms is unbalanced, the system will eventually adopt the covert and overt biases that those data sets contain. And at present, the AI industry is suffering from diversity troubles, which some label the "white guy problem": the field is largely dominated by white males.

This is the reason why an AI-judged beauty contest turned out to award mostly white candidates, a name-ranking algorithm ended up favoring white-sounding names, and advertising algorithms preferred to show high-paying job ads to male visitors.
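
The mechanism behind such outcomes can be sketched in miniature (our toy illustration, not TNW's; the groups, labels and numbers are invented for the example): a model that simply learns rates from a skewed historical dataset reproduces that skew in every prediction it makes.

```python
# Toy sketch: a "model" that learns hiring rates from deliberately
# unbalanced historical data inherits the bias baked into that data.
from collections import Counter

# Hypothetical training set: group_a applicants were historically
# hired four times as often as group_b applicants.
training = ([("group_a", "hire")] * 80 + [("group_a", "reject")] * 20
            + [("group_b", "hire")] * 20 + [("group_b", "reject")] * 80)

def hire_rate(group):
    """Learned P(hire | group): the data's bias becomes the model's bias."""
    counts = Counter(label for g, label in training if g == group)
    return counts["hire"] / sum(counts.values())

print(hire_rate("group_a"))  # 0.8
print(hire_rate("group_b"))  # 0.2
```

Nothing in the code mentions a preference for either group; the disparity comes entirely from the training data, which is exactly why audits of datasets matter as much as audits of algorithms.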

Another problem that caused much controversy in the past year was the "filter bubble" phenomenon seen on Facebook and other social media, which tailored content to the biases and preferences of users, effectively shutting them out from other viewpoints and realities.

While for the moment many of these cases can be shrugged off as innocent mistakes and humorous flaws, some major changes need to be made if AI is to be put in charge of more critical tasks, such as determining the fate of a defendant in court. Safeguards also have to be put in place to prevent any single organization or company from skewing the behavior of an ML algorithm in its favor by manipulating the data.

This can be achieved by promoting transparency and openness in algorithmic datasets. Shared data repositories that are not owned by any single entity and can be vetted and audited by independent bodies can help move toward this goal.

Who's to blame when software or hardware malfunctions? Before AI, it was relatively easy to determine whether an incident was the result of the actions of a user, developer or manufacturer.

But in the era of AI-driven technologies, the lines are not as clear-cut.

ML algorithms figure out for themselves how to react to events, and while data gives them context, not even the developers of those algorithms can explain every single scenario and decision that their product makes.

This can become an issue when AI algorithms start making critical decisions such as when a self-driving car has to choose between the life of a passenger and a pedestrian.

Extrapolating from that, there are many other conceivable scenarios where determining culpability and accountability will become difficult, such as when an AI-driven drug infusion system or robotic surgery machine harms a patient.

When the boundaries of responsibility are blurred between the user, developer, and data trainer, every involved party can lay the blame on someone else. Therefore, new regulations must be put in place to clearly predict and address legal issues that will surround AI in the near future.

AI and ML feed on data, reams of it, and companies that center their business around the technology will grow a penchant for collecting user data, with or without the latter's consent, in order to make their services more targeted and efficient.

In the hunt for more and more data, companies may trek into uncharted territory and cross privacy boundaries. Such was the case of a retail store that found out about a teenage girl's secret pregnancy, and the more recent case of the UK National Health Service's patient-data-sharing program with Google's DeepMind, a move that was supposedly aimed at improving disease prediction.

There's also the issue of bad actors, of both governmental and non-governmental nature, that might put AI and ML to ill use. A very effective Russian face recognition app rolled out last year proved to be a potential tool for oppressive regimes seeking to identify and crack down on dissidents and protestors. Another ML algorithm proved to be effective at peeking behind masked images and blurred pictures.

Other implementations of AI and ML are making it possible to impersonate people by imitating their handwriting, voice and conversation style, an unprecedented power that can come in handy in a number of dark scenarios.

Unless companies developing and using AI technology regulate their information collection and sharing practices and take the necessary steps to anonymize and protect user data, they'll end up causing more harm than good to users. The use and availability of the technology must also be revised and regulated in a way that prevents or minimizes ill use.

Users should also become more sensible about what they share with companies or post on the Internet. We're living in an era where privacy is becoming a commodity, and AI isn't making it any better.

There are benefits and dark sides to every disruptive technology, and AI is no exception to the rule. What is important is that we identify the challenges that lie before us and acknowledge our responsibility to make sure that we can take full advantage of the benefits while minimizing the trade-offs.

The robots are coming. Let's make sure they come in peace.

This post is part of our contributor series. It is written and published independently of TNW.



Artificial Intelligence: Removing The Human From Fintech – Forbes


As I'm sure many in the technology industry have thought today, there should have been a way to avoid the Oscars Envelopegate. But, is artificial intelligence the answer to all of our human error problems? A recent Accenture report found that the ...


Christianity is engaging Artificial Intelligence, but in the right way – Crux: Covering all things Catholic

In a recent essay in The Atlantic, Jonathan Merritt laments that theologians and Christian leaders, including Pope Francis, have not addressed what he claims will be the greatest challenge that Christianity has ever faced: Artificial Intelligence, or AI.

In his view, intelligent machines threaten to overturn many Christian beliefs, a trial that theologians seem blind to because they're stuck rehashing old questions instead of focusing on the coming ones.

Such a criticism would be devastating if true, but is it?

A fuller reading of Pope Francis's work suggests that he is actually engaging the issues with AI that most directly affect the contemporary Church and society. Before I get to that, though, it's necessary to give Merritt's argument its due. Most theologians are indeed not addressing the specific aspects of AI that he considers essential, but this is a wise choice on their part.

First, it's important to note that rehashing old questions, or what Catholics like to call the development of tradition, provides many insights into these questions. For example, Merritt claims that Christians have mostly understood the soul to be a uniquely human element, an internal and eternal component that animates our spiritual sides.

This is not an accurate characterization.

Drawing upon the heritage of Greek philosophy, most theologians have understood the soul to be what makes a specific living thing what it is. It is the principle of growth and development in all living things, movements and sensation in animals, and rationality in humans.

Therefore, animals have souls, plants have souls, and an AI that could think and manipulate the world around it would have to have something like a soul.

Merritt qualifies himself in the next sentence to refer to the image of God that each person possesses in her soul. Yet again, major figures in the tradition such as Thomas Aquinas do not see the image of God restricted to humans.

For him (some other theologians have very different interpretations), we image God primarily in our potential for reason and free will, so any being with reason and free will would possess that image: angels, for Aquinas; rational aliens, for Francis; even true AI, if it existed.

Of course, this reason is not mere instrumental reason, but one that understands purposes, meaning, and the moral law.

Still, based on Merritt's argument, one might ask: how can such spiritual faculties arise out of silicon circuits (or nanotubes, or any other material)? While a problem, it is no more difficult, nor much different, than the question of how the spiritual arises from lowly flesh, a question that thinkers have wrestled with throughout the Western tradition.

Theologians struggle with this problem in ordinary human development: how and when new life gains a soul is a central theological question, for obvious practical reasons. The predominant answer in the Catholic tradition is that, in the process of procreation in which human parents cooperate, God creates an individual spiritual soul for each human body. Something like this framework could be used to think about AI.

It is true that some issues are more difficult, like how AI could be redeemed.

Christianity argues for Gods special care for humanity, with the second person of the Trinity assuming a human nature in the Incarnation. This doctrine raises questions about Christs relation to any possible AI, but ones not fundamentally different to questions of how Christ redeems all of nonhuman creation, questions that have become ever more pressing given environmental devastation.

Given these resources, why haven't more theologians directly addressed AI?

First, I would guess that most theologians are less optimistic than the ones Merritt quotes about the actual possibility of true AI. Beyond sixty years of unfulfilled promises that AI is just around the corner, AI theorists have not addressed philosophical concerns as to whether their programs can have consciousness and grasp meaning.

In his Chinese Room argument, John Searle pointed out that while computer programs manipulate symbols (syntax), allowing them to imitate behavior, they cannot really grasp the meaning (semantics) of the things they manipulate, which would be necessary for consciousness.

A second source of skepticism about engaging AI is that, along with many contemporary non-Christian thinkers, theologians recognize that making an AI would be an extremely bad idea. If a machine has the free choice necessary for true AI, then it has the possibility of sin, leading to large downside risks, such as human extinction.

This concern about risk raises the final problem with Merritt's analysis: if one reads Francis carefully, one finds that he addresses the problems of today's limited AI that are harming people right now, rather than future speculative possibilities.

Laudato Si', Francis's recent encyclical, is just as much about technology in human ecology as it is about the natural environment.

He addresses contemporary mental pollution and isolation, reflecting concerns in other papal addresses over people only receiving information that confirms their opinions. These problems arise in part from AI algorithms reflecting our opinions back to us in search results and news feeds, a solipsism whose political effects were chillingly documented in Adam Curtis's documentary HyperNormalisation.

In a second and even more important example, he laments a kind of technological progress in which the costs of production are reduced by laying off workers and replacing them with machines. These are no longer only issues of automation impacting blue-collar jobs; now even many white-collar jobs are disappearing due to applications of AI.

Pope Francis demonstrates that dwelling on Merritt's speculative problems may distract us from more pressing challenges, such as knowledge workers in their late 40s whose positions become redundant due to AI and who thus won't be able to make their mortgage payments while they retrain.

Problems like that may not be as hot a topic for a TED talk as speculating on the prayer life of AI, but these are the challenges of technology that a Church whose members will be judged by their care for the least in society should be addressing.

Paul Scherz is an assistant professor of moral theology/ethics at The Catholic University of America. He examines how the daily use of biomedical technologies shapes the way researchers, doctors, and patients see and manipulate the world and their bodies. Scherz has a Ph.D. in Genetics from Harvard University and a Ph.D. in moral theology from the University of Notre Dame.


‘Artificial intelligence is the next big thing’ – The Hindu


Spencer Kelly, presenter of the BBC's Click technology programme, discusses Indian jugaad, South Korea's jellyfish-hunting robots, and how self-driving cars ...



Goldman Sacked: How Artificial Intelligence Will Transform Wall Street – Newsweek

For the past year, we as a society have been worried sick about artificial intelligence eating the jobs of 3 million truck drivers. Turns out that a more imminently endangered species is the Wall Street traders and hedge fund managers who can afford to buy Lamborghinis and hire Elton John to play their Hamptons house parties.

So maybe hooray for AI on this one?

Financial giants such as Goldman Sachs and many of the biggest hedge funds are all switching on AI-driven systems that can foresee market trends and make trades better than humans. "It's been happening, drip by drip, for years, but a torrent of AI is about to wash through the industry," says Mark Minevich, a New York-based investor in AI and senior adviser to the U.S. Council on Competitiveness. High-earning traders are going to get unceremoniously dumped like workers at a closing factory.


"It will really hit at the soul of Wall Street," Minevich tells me. "It will transform New York."

Some of these AI trading systems are being built by startups such as Sentient in San Francisco and Aidyia in Hong Kong. In 2014, Goldman Sachs invested in and began installing an AI-driven trading platform called Kensho. Walnut Algorithms, a startup hedge fund, was designed from the beginning to work on AI. Infamously weird hedge fund company Bridgewater Associates hired its own team to build an AI system that could practically run the operation on its own. Bridgewater's effort is headed by David Ferrucci, who previously led IBM's development of the Watson computer that won on Jeopardy!

AI trading software can suck up enormous amounts of data to learn about the world and then make predictions about stocks, bonds, commodities and other financial instruments. The machines can ingest books, tweets, news reports, financial data, earnings numbers, international monetary policy, even Saturday Night Live sketches: anything that might help the software understand global trends. The AI can keep watching this information all the time, never tiring, always learning and perfecting its predictions.


A report from Eurekahedge monitored 23 hedge funds utilizing AI and found they outperformed funds relying on people. Quants, the Ph.D. mathematicians who design fancy statistical models, have been the darlings of hedge funds for the past decade, yet they rely on crunching historical data to create a model that can anticipate market trends. AI can do that too, but AI can then watch up-to-the-instant data and learn from it to continually improve its model. In that way, quant models are like a static medical textbook, while AI learning machines are like a practicing doctor who keeps up with the latest research. Which is going to lead to a better diagnosis? "Trading models built using back-tests on historical data have often failed to deliver good returns in real time," says the Eurekahedge report.
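
The static-model-versus-learning-machine contrast can be sketched in miniature (our illustration with invented numbers, not Eurekahedge's methodology): a predictor frozen on historical data keeps making the same error after a regime shift, while one that updates on live data steadily closes the gap.

```python
# Toy contrast between a model fit once on a backtest and a model
# that keeps learning. The "market" is just a mean-reverting level.
history = [10.0, 10.0, 10.0, 10.0]   # old regime seen in the backtest
live = [20.0, 20.0, 20.0, 20.0]      # regime shift in live trading

static_pred = sum(history) / len(history)   # fit once, never updated

def online_predictions(stream, seed):
    """Predict each point with a running mean, then learn from it."""
    seen = list(seed)
    preds = []
    for x in stream:
        preds.append(sum(seen) / len(seen))  # predict before observing x
        seen.append(x)                       # update the model afterward
    return preds

online = online_predictions(live, history)
static_errors = [abs(x - static_pred) for x in live]        # stuck at 10.0
online_errors = [abs(x - p) for x, p in zip(live, online)]  # shrinks
```

The static model's error never moves because it never sees the live data; the online model's error shrinks with every observation, which is the report's point about backtested models failing in real time.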

Traders work on the floor of the New York Stock Exchange (NYSE) as the Dow Jones industrial average closed above the 20,000 mark for the first time on January 25 in New York City. Spencer Platt/Getty

Human traders and hedge fund managers don't stand a chance, in large part because they're human. "Humans have biases and sensitivities, conscious and unconscious," says Babak Hodjat, co-founder of Sentient and a computer scientist who played a role in Apple's development of Siri. "It's well-documented we humans make mistakes. For me, it's scarier to be relying on those human-based intuitions and justifications than relying on purely what the data and statistics are telling you."

So what's going to happen to the finance people who find themselves standing in front of the oncoming AI bus? Well, average compensation for staff in sales, trading and research at the 12 largest investment banks is $500,000, according to business intelligence company Coalition Development. Many traders earn in the millions. In 2015, five hedge fund managers made $1 billion or more, according to an industry survey. If you think Carl's Jr. is motivated to replace $8-an-hour fast-food workers with robots, imagine the motivation to dump million-dollar-a-year ($500 an hour!) traders.

Goldman Sachs shows just how devastating automation can be to traders. In 2000, its U.S. cash equities trading desk in New York employed 600 traders. Today, that operation has two equity traders, with machines doing the rest. And this is before the full brunt of AI has come into play at Goldman. "In 10 years, Goldman Sachs will be significantly smaller by head count than it is today," Daniel Nadler, CEO of Kensho, told The New York Times. Expect the same to happen on every trading floor at every major financial company.

Much of America is not going to weep for the types of people depicted in The Wolf of Wall Street, yet this new AI reality could be devastating in many ways. Imagine the impact on high-end real estate in New York. Think of the For Sale signs on summer beach homes in Southampton. How will luxury retailers survive the likely dip in sales of $2,000 suits and $5,900-per-pound white truffles? Maybe Donald Trump will be driven to demand that somebody bring back traders' jobs, thinking they've moved to Mexico.

Minevich, though, sees a net positive if AI drives brilliant people out of finance and into, well, almost anything else.

As the surest, fastest path to million-dollar paydays, Wall Street trading and hedge fund managing have long soaked up a large chunk of Americas best and brightest. About one-third of graduates from the top 10 business schools go into finance. Only a tiny sliver, usually around 5 percent, go into health care. An even smaller percentage go into energy or manufacturing businesses, and you can count on two hands the number who take jobs at nonprofits each year.

Most of the rest of society looks at that and sees selfishness. Yeah, sure, we need liquid markets and financial instruments and all that. But if we're going to pay a group of people so much money, maybe we'd be better off if they were inventing electric cars that go 1,000 miles on a charge, or healthy vegetarian kielbasa, or babies who don't cry on airplanes. Just do something that brings tangible benefits to the masses.

"Some of these smart people will move into tech startups, or will help develop more AI platforms, or autonomous cars, or energy technology," Minevich says. That could be really helpful right now, since the tech industry is always fretting that it doesn't have enough highly skilled pros and might be facing a geek drought in the age of Trump travel bans. If the MBA elite leave Wall Street but stay in New York, Minevich adds, New York might compete with Silicon Valley in tech.

As math Ph.D.s no longer find that hedge fund recruiters are salivating over them, they might leap into efforts to model climate change or the behavior of cancer cells in the body. The National Security Agency's website says it is actively seeking mathematicians to work on "some of our hardest signals intelligence and information security problems." Math whizzes could help catch terrorists! Or liberals!

The pay for a mathematician at the National Security Agency is around $100,000. Compared with a hedge fund salary, that would be a major lifestyle downgrade. But at least the traders and quants will have options, which is more than we can say for truck drivers and other workers threatened by AI.

There's one other benefit to AI machines taking over finance. Ben Goertzel, chief scientist at Aidyia, says his machine will never need human intervention. "If we all die, it would keep trading," he once said.

So if Trump pulls out the nuclear codes and pushes the button, at least some people will still get a good return on their 401(k)s.

Read more here:

Goldman Sacked: How Artificial Intelligence Will Transform Wall Street - Newsweek

Government promises £20m investment in robotics and artificial intelligence – The Independent

The government will launch a review into Artificial Intelligence (AI) and robotics in an attempt to make the UK a world leader in tech.

The government said in a statement on Sunday that it would invest £17.3 million in university research on AI. Artificial intelligence powers technologies such as Apple's Siri, Amazon's Alexa, and driverless cars.

According to a report by consultancy firm Accenture, artificial intelligence could add around £654 billion to the UK economy.

A report by the Institute for Public Policy Research recently forecast that millions of jobs will be lost to automation over the next two decades. Researchers predicted that two million retail jobs will disappear by 2030 and 600,000 will go in manufacturing.

Jérôme Pesenti, CEO of Benevolent Tech, who will be leading the government's research into AI, said:

"There has been a lot of unwarranted negative hype around Artificial Intelligence (AI), but it has the ability to drive enormous growth for the UK economy, create jobs, foster new skills, positively transform every industry and retain Britain's status as a world leader in innovative technology."


The announcement is part of the government's new Digital Strategy, which will be announced in full on Wednesday. As well as investment in research and the tech industry, the strategy is also expected to detail a comprehensive modernisation of the civil service.

The government has been heavily criticised for the delay in the publication of the strategy. In 2015, Ed Vaizey, the then Digital Minister, said plans would be published in early 2016.

In January, Stephen Metcalfe, the chairman of the government's Science and Technology Committee, criticised the government for this delay.

In a letter to Digital Minister Matt Hancock, Mr Metcalfe expressed his disappointment over such a long delay.

The letter also asked why the strategy "continues to be a work in progress nearly a year after [Mr Hancock's] predecessor considered it already largely completed."

The government has said it was forced to delay the publication of the report to take into account the impact of Brexit.

However, other sources have suggested that Whitehall's resistance to the modernisation of the civil service under the Government Digital Service plans was also a significant factor.


How Artificial Intelligence Can Benefit E-Commerce Businesses – Forbes


According to Business Insider, 85 percent of customer interactions will be handled without a human by 2020. Artificial Intelligence may be changing the way ...

See original here:

How Artificial Intelligence Can Benefit E-Commerce Businesses - Forbes

Artificial intelligence advances to make farming smarter – Stuff.co.nz

TIM CRONSHAW

Last updated 16:27, February 27 2017

Murray Wilson/ Fairfax NZ.

A robotic milking system for dairy farms.

More artificial intelligence, cheaper sensors and longer-flying drones are only some of the technological advances that Kiwi farmers can look forward to on "data-driven" farms over the next 10 years.

Microsoft Research principal researcher Ranveer Chandra has been in New Zealand for a week offering insights into precision agriculture and advances the United States technology company is working on to improve farming and food production.

He said Kiwi farmers and their innovations could help lead world farming, but they would see more advances themselves over the next decade.

Supplied

Ranveer Chandra is a principal researcher for Microsoft.

Farmers faced doubling food production to feed a growing population by 2050 and this would require more technological advances world-wide, he said.

READ MORE:

*Automatic milkers easy on people, cows and farms

Kirk Hargreaves

More robotics are being introduced at meat plants to increase safety and processing efficiency.

*Precision tools help make a difference

*NZ on road to becoming the Detroit of agriculture

" New Zealand is quite advanced as far as technology and agri-practicesgo and I think thisis where New Zealand can lead the world because there is more work to bedone."

Chandra was a keynote speaker at the eResearch NZ conference in Queenstown before travelling to Christchurch and Palmerston North to meet with AgResearch and other leaders, and leaving for the United States on Monday.

The Indian-born researcher went to the US 18 years ago to complete post-graduate studies and has led Microsoft projects including longer-lasting batteries and TV white space networking.

Chandra said using technology to provide more food for the world was close to his heart as India had much to do to lift its food production.

"I think the one change that will absolutely happen is a move forward to data driven farming."

Farmers would rely less on intuition before taking actions on their farm as they gained more data, he said; this was already happening in the technological space and would increase in farming for "precision nutrition", better yields and profits.

The focus on precision nutrition would have nutrients customised for every animal based on the evaluation of data showing, for example, their body condition score, phenotyping and other genetic research. However, this data had to be affordable for farmers to increase its uptake.

Data-driven farming required more sensors and unmanned aerial vehicles such as drones to capture information such as the location of an animal, soil and ambient temperature, humidity and soil nutrients.

The limitation with drones was that it remained difficult to send large amounts of data to the cloud, but this would be solved once faster data streaming was available.

Chandra's research included aerial imagery work with drones and tethered balloons above cattle farms in the US to plot cow movements in a pasture farm to see if they were grazing properly and pastures were being grazed at the right level.

A barrier to advancing farm technology was the cost of sensors, he said. Research for field crop farms in the US showed precision agriculture improved yields and returns on investment, but the sensors were expensive, with five of them costing US$8000.

"That is not feasible for farmers when most farmers don't make much money and ... if we reduce the cost of sensors we couldbring the benefit ofprecision agriculture to farmers worldwide."

Chandra's team found that to shorten rural gaps in wireless access they could use TV band white spaces - unused VHF and UHF TV channels - and because of their lower frequency they could increase the distance of coverage during US and India projects, so farmers could connect to the internet.

"This is how we wouldenable dense placing of sensors if we had $25-$30 sensors and if we could scale this up they would fall down .... because there is so much spectrum available we can get camera data and stream to the cloud."

Chandra said advances in artificial intelligence that had been initiated on farms would be more mainstream over the next 10 years. Artificial intelligence would guide farmers with data-driven predictions such as the best time to sow seeds, apply fertiliser and provide the best nutrition for livestock.

Weather forecasts would be co-ordinated with available water storage for irrigating crops and pastures or applying fertiliser to them and farmers would take pictures of pests and use artificial intelligence to analyse the best pesticide to control them.

Other research was being carried out to prolong the life of batteries to extend drone flights and assist precision agriculture advances.

-Stuff

Visit link:

Artificial intelligence advances to make farming smarter - Stuff.co.nz

Artificial Intelligence, IoT Will Fuel Technology Deal-Making In Year Ahead – Forbes


The relentless drive to digital transformation among tech and non-tech companies pushed mergers and acquisitions to record levels over the past year, the latest analysis finds. Now, artificial intelligence and machine learning loom as the next wave of ...

The rest is here:

Artificial Intelligence, IoT Will Fuel Technology Deal-Making In Year Ahead - Forbes

Artificial intelligence tool combats trolls – The Hindu


Google has said it will begin offering media groups an artificial intelligence tool designed to stamp out incendiary comments on their websites. The programming tool, called Perspective, aims to assist editors trying to moderate discussions by ...

Read the original here:

Artificial intelligence tool combats trolls - The Hindu

What Companies Are Winning The Race For Artificial Intelligence? – Forbes


... general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and ...

View post:

What Companies Are Winning The Race For Artificial Intelligence? - Forbes

Artificial intelligence ‘will save wearables’! – The Register

When a technology hype flops, do you think the industry can use it as a learning experience? A time of self-examination? An opportunity to pause and reflect on making the next consumer or business tech hype a bit less stupid?

Don't be silly.

What it does is pile the next hype on to the last hype, and call it "Hype 2.0".

"With AI integration in wearables, we are entering 'wearable 2.0' era," proclaim analysts Counterpoint Research in one of the most optimistic press releases we've seen in a while.

It's certainly bullish for market growth, predicting that "AI-powered wearables will grow 376 per cent annually in 2017 to reach 60 million units."

In fact it's got a new name for these: "hearables". Apple will apparently have 78 per cent of this hearable market.

The justification for the claim is that language-processing assistants like Alexa will be integrated into more products. Counterpoint also includes Apple Airpods and Beats headphones as "AI-powered hearables", which may be stretching things a little.

It almost seems rude to point out that the current wearables market - a bloodbath for vendors - is already largely "hearable". Android Wear has been obeying OK Google commands spoken by users since it launched in 2014.

Apple built Siri into its Apple Watch in 2015 with its first update, watchOS 2.

Microsoft's Band had Cortana built in.

If a "smart" natural language interface had the potential to make wearables sell, surely we would know it by now. But we hardly need to tell you what sales of these devices are. Many vendors have hit paused, or canned their efforts completely. You could even argue that talking into a wearable may be one of the reasons why the wearable failed to be a compelling or successful consumer electronics story. People don't want to do it.

Sprinkling the latest buzzword - machine learning or AI - over something that isn't a success doesn't suddenly make that thing a success. But AI has always had a cult-like quality to it: it's magic, and fills a God-shaped hole. For 50 years, the divine promise of "intelligent machines" has periodically overcome people's natural scepticism as they imagine a breakthrough is close at hand. Then it recedes into the labs again. All that won't stop people wishing that this time AI has Lazarus-like powers.

We can't wait for our machine-learning-powered Sinclair C5 - the Deluxe Edition with added Blockchain.

Can you?

Excerpt from:

Artificial intelligence 'will save wearables'! - The Register

College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm – Forbes


Lawyers spend years in school learning how to sift through millions of cases looking for the exact language that will help their clients win. What if a computer could do it for them? It's not the kind of question many lawyers would dignify with an answer.

Original post:

College Student Uses Artificial Intelligence To Build A Multimillion-Dollar Legal Research Firm - Forbes

Artificial intelligence: Understanding how machines learn – Robohub

From Jeopardy winners and Go masters to infamous advertising-related racial profiling, it would seem we have entered an era in which artificial intelligence developments are rapidly accelerating. But a fully sentient being whose electronic brain can fully engage in complex cognitive tasks using fair moral judgement remains, for now, beyond our capabilities.

Unfortunately, current developments are generating a general fear of what artificial intelligence could become in the future. Its representation in recent pop culture shows how cautious and pessimistic we are about the technology. The problem with fear is that it can be crippling and, at times, promote ignorance.

Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.

The core foundation of artificial intelligence is rooted in machine learning, which is an elegant and widely accessible tool. But to understand why the pros of its potential outweigh its cons, we first need to examine what machine learning actually means.

Simply put, machine learning refers to teaching computers how to analyse data for solving particular tasks through algorithms. For handwriting recognition, for example, classification algorithms are used to differentiate letters based on someone's handwriting. Housing data sets, on the other hand, use regression algorithms to estimate in a quantifiable way the selling price of a given property.
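The two algorithm families just mentioned can be sketched in a few lines of Python. The figures below are invented purely for illustration, not drawn from any real handwriting or housing data set:

```python
# Regression: estimate a quantity (a selling price) from a feature
# (floor area). Classification: pick a category (which letter was
# written) from features. All data below are made up.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

areas = [50, 70, 90, 110]      # floor area in square metres
prices = [150, 210, 270, 330]  # price in thousands
a, b = fit_line(areas, prices)
print(round(a * 100 + b))      # price estimate for a 100 m^2 home -> 300

def nearest_centroid(sample, centroids):
    """Assign a sample to the class whose centre it is closest to."""
    return min(centroids, key=lambda label: sum(
        (s - c) ** 2 for s, c in zip(sample, centroids[label])))

# Two invented handwriting features per letter: stroke count, curviness.
letters = {"I": (1.0, 0.1), "O": (1.0, 0.9)}
print(nearest_centroid((1.0, 0.8), letters))  # -> O
```

Real systems use far richer features and dedicated libraries, but the shape of the computation is the same.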

Machine learning, then, comes down to data. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications try to find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behaviour.

Data have two key elements: samples and features. The former represents individual elements in a group; the latter amounts to characteristics shared by them.

Look at social media as an example: users are samples and their usage can be translated as features. Facebook, for instance, employs different aspects of liking activity, which change from user to user, as important features for user-targeted advertising.

Facebook friends can also be used as samples, while their connections to other people act as features, establishing a network where information propagation can be studied.

Outside of social media, automated systems used in industrial processes as monitoring tools use time snapshots of the entire process as samples, and sensor measurements at a particular time as features. This allows the system to detect anomalies in the process in real time.
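That monitoring setup can be sketched directly from the definitions above: snapshots are samples, sensor readings are features, and an anomaly is a reading far from a sensor's history. The readings and the three-sigma threshold below are invented for illustration:

```python
# Flag sensor readings that sit far outside that sensor's history.
from statistics import mean, stdev

def anomalies(history, snapshot, threshold=3.0):
    """Return indices of features whose latest reading is more than
    `threshold` standard deviations from that sensor's historical mean."""
    flagged = []
    for i, value in enumerate(snapshot):
        column = [s[i] for s in history]   # one sensor's past readings
        m, sd = mean(column), stdev(column)
        if sd and abs(value - m) / sd > threshold:
            flagged.append(i)
    return flagged

# Rows = time snapshots (samples); columns = sensors (features).
history = [(20.1, 1.00), (19.9, 1.02), (20.0, 0.98), (20.2, 1.01)]
print(anomalies(history, (20.1, 1.55)))  # -> [1]: sensor 1 has drifted
```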

All these different solutions rely on feeding data to machines and teaching them to reach their own predictions once they have strategically assessed the given information. And this is machine learning.

Any data can be translated into these simple concepts and any machine-learning application, including artificial intelligence, uses these concepts as its building blocks.

Once data are understood, it's time to decide what to do with this information. One of the most common and intuitive applications of machine learning is classification. The system learns how to put data into different groups based on a reference data set.

This is directly associated with the kinds of decisions we make every day, whether it's grouping similar products (kitchen goods against beauty products, for instance), or choosing good films to watch based on previous experiences. While these two examples might seem completely disconnected, they rely on an essential assumption of classification: predictions defined as well-established categories.

When picking up a bottle of moisturiser, for example, we use a particular list of features (the shape of the container, for instance, or the smell of the product) to predict accurately that it's a beauty product. A similar strategy is used for picking films by assessing a list of features (the director, for instance, or the actor) to predict whether a film is in one of two categories: good or bad.

By grasping the different relationships between features associated with a group of samples, we can predict whether a film may be worth watching or, better yet, we can create a program to do this for us.
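A minimal sketch of such a program, with made-up films, features and labels: it predicts "good" or "bad" by majority vote among the most similar films already seen (a k-nearest-neighbours rule).

```python
# Predict a film's label from the labels of the k most similar films.
def knn_predict(seen, features, k=3):
    by_distance = sorted(seen, key=lambda film: sum(
        (a - b) ** 2 for a, b in zip(film[0], features)))
    votes = [label for _, label in by_distance[:k]]
    return max(set(votes), key=votes.count)

# Features: (liked the director?, liked the lead actor?, runtime in hours)
seen = [((1, 1, 2.0), "good"), ((1, 0, 1.8), "good"),
        ((0, 0, 2.5), "bad"),  ((0, 1, 3.0), "bad"),
        ((1, 1, 1.5), "good")]
print(knn_predict(seen, (1, 1, 1.9)))  # -> good
```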

But to be able to manipulate this information, we need to be a data science expert, a master of maths and statistics, with enough programming skills to make Alan Turing and Margaret Hamilton proud, right? Not quite.

We all know enough of our native language to get by in our daily lives, even if only a few of us can venture into linguistics and literature. Maths is similar; it's around us all the time, so calculating change from buying something or measuring ingredients to follow a recipe is not a burden. In the same way, machine-learning mastery is not a requirement for its conscious and effective use.

Yes, there are extremely well-qualified and expert data scientists out there but, with little effort, anyone can learn its basics and improve the way they see and take advantage of information.

Going back to our classification algorithm, let's think of one that mimics the way we make decisions. We are social beings, so how about social interactions? First impressions are important and we all have an internal model that evaluates in the first few minutes of meeting someone whether we like them or not.

Two outcomes are possible: a good or a bad impression. For every person, different characteristics (features) are taken into account (even if unconsciously) based on several encounters in the past (samples). These could be anything from tone of voice to extroversion and overall attitude to politeness.

For every new person we encounter, a model in our heads registers these inputs and establishes a prediction. We can break this modelling down to a set of inputs, weighted by their relevance to the final outcome.

For some people, attractiveness might be very important, whereas for others a good sense of humour or being a dog person says way more. Each person will develop her own model, which depends entirely on her experiences, or her data.
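A toy version of that internal model: each feature is weighted by its relevance and the weighted sum is compared to a threshold, the simplest form of trainable classifier. The weights and scores below are invented; a learning algorithm would fit the weights from past encounters (the data):

```python
# A weighted-sum "first impression" model: inputs weighted by relevance,
# compared against a threshold. All weights and scores are invented.
def impression(features, weights, threshold=0.5):
    score = sum(w * f for w, f in zip(weights, features))
    return "good" if score >= threshold else "bad"

# Features: politeness, humour, attractiveness (each scored 0..1).
# One person's learned weights: humour matters most to them.
weights = (0.2, 0.6, 0.2)
print(impression((0.9, 0.8, 0.3), weights))  # polite and funny -> good
print(impression((0.9, 0.1, 0.3), weights))  # polite, humourless -> bad
```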

Different data result in different models being trained, with different outcomes. Our brain develops mechanisms that, while not entirely clear to us, establish how these factors will weigh out.

What machine learning does is develop rigorous, mathematical ways for machines to calculate those outcomes, particularly in cases where we cannot easily handle the volume of data. Now more than ever, data are vast and everlasting. Having access to a tool that actively uses this data for practical problem solving, such as artificial intelligence, means everyone should and can explore and exploit this. We should do this not only so we can create useful applications, but also to put machine learning and artificial intelligence in a brighter and not so worrisome perspective.

There are several resources out there for machine learning although they do require some programming ability. Many popular languages tailored for machine learning are available, from basic tutorials to full courses. It takes nothing more than an afternoon to be able to start venturing into it with palpable results.

All this is not to say that the concept of machines with human-like minds should not concern us. But knowing more about how these minds might work will give us the power to be agents of positive change in a way that can allow us to maintain control over artificial intelligence and not the other way around.

This article was originally published on The Conversation. Read the original article.

If you liked this article, you may also want to read:

See all the latest robotics news on Robohub, or sign up for our weekly newsletter.

Visit link:

Artificial intelligence: Understanding how machines learn - Robohub

Four Artificial Intelligence Challenges Facing The Industrial IoT – Forbes

As a CTO who works closely with software architects and heads of business units validating and designing IoT solutions, it's obvious there's a disconnect between our vision of AI and what's actually happening in the industry right now. While there are ...

See the original post:

Four Artificial Intelligence Challenges Facing The Industrial IoT - Forbes

Artificial Intelligence or Artificial Expectations? – Science 2.0

News concerning Artificial Intelligence (AI) abounds again. The progress with Deep Learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson on Jeopardy, and beating human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers are being considered that would approach what many believe is this level.

However, there are many questions yet unanswered on how the human brain works, and specifically the hard problem of consciousness with its integrated subjective experiences. In addition, there are many questions concerning the smaller cellular scale, such as why some single-celled organisms can navigate out of mazes, remember, and learn without any neurons.

In this blog, I look at a recent review that suggests brain computations being done at a scale finer than the neuron might mean we are far from this computational power, both quantitatively and qualitatively. The review is by Roger Penrose (Oxford) and Stuart Hameroff (University of Arizona) on their journey through almost three decades of investigating the role of potential quantum aspects in neurons' microtubules. As a graduate student in 1989, I was intrigued when Penrose, a well-known mathematical physicist, published the book The Emperor's New Mind, outlining a hypothesis that consciousness derived from quantum physics effects during the transition from a superposition and entanglement of quantum states into a more classical configuration (the collapse or reduction of the wavefunction). He further suggested that this process, which has baffled generations of scientists, might occur only when a condition, based on the differences of gravitational energies of the possible outcomes, is met (i.e., Objective Reduction or OR). He then went another step in suggesting that the brain takes advantage of this process to perform computations in parallel, with some intrinsic indeterminacy (non-computability), and over a larger integrated range by maintaining the quantum mix of microtubule configurations separated from the noisy warm environment until this reduction condition was met (i.e., Orchestrated Objective Reduction or Orch OR).

As an anesthesiologist, Stuart Hameroff questioned how relatively simple molecules could cause unconsciousness. He explored the potential classical computational power of microtubules. The microtubules had been recognized as an important component of neurons, especially in the post-synaptic dendrites and cell body, where the cylinders lined up parallel to the dendrite, stabilized, and formed connecting bridges between cylinders (MAPs). Not only are there connections between microtubules within dendrites but there are also interneuron junctions allowing cellular material to tunnel between neuron cells. One estimate of the potential computing power of a neuron's microtubules (a billion binary-state microtubule building blocks, tubulins, operating at 10 megahertz) is the equivalent computing power of the assumed neuron net of the brain (100 billion neurons each with 1000 synapses operating at about 100 Hz). That is, the brain's computing power might be the square of the standard estimate (10 petaflops) based on relatively simple neuron responses.
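The arithmetic behind that estimate checks out in one line, using the figures as quoted (10 petaflops, i.e. roughly 10^16 operations per second):

```python
# Figures exactly as quoted in the text above.
tubulin_ops_per_neuron = 1e9 * 10e6   # 1e9 tubulins switching at 10 MHz
brain_standard = 100e9 * 1000 * 100   # neurons * synapses * ~100 Hz
print(tubulin_ops_per_neuron == brain_standard == 1e16)  # -> True
```

So a single neuron's microtubules alone would match the standard whole-brain estimate, which is what motivates the claim that the brain's true total could be vastly larger.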

Soon after this beginning, Stuart Hameroff and Roger Penrose found each other's complementary approach and started forming a more detailed set of hypotheses. Much criticism was leveled about this view. Their responses included modifying the theory, calling for more experimental work, and defending against general attacks. Many experiments remain to be done, including whether objective reduction occurs, though this experiment cannot yet be done with the current resolution of laboratory instruments. Other experiments on electronic properties of microtubules were done in Japan in 2009, which discovered high conductance at certain frequencies from kilohertz to gigahertz. These measurements, which also show conductance increasing with microtubule length, are consistent with conduction pathways through aligned aromatic rings in the helical and linear patterns of the microtubule. Other indications of quantum phenomena in biology include the recent discoveries of quantum effects in photosynthesis, bird navigation, and protein folding.

There are many subtopics to explore. Often the review discusses potential options without committing (or claiming) a specific resolution. These subtopics include the interaction of microtubules with associated proteins and transport mechanisms, the relationship of microtubules to diseases such as Alzheimer's, the frequency of the collapse from the range of megahertz to hertz, memory formation and processing with molecules that bind to microtubules, the temporal aspect of brain activity and conscious decisions, whether the quantum states are spin (electron or nuclear) or electrical dipoles, the helical pattern of the microtubule (A or B), the fraction of microtubules involved with entanglement, the mechanism for environmental isolation, and the way that such a process might be advantageous in evolution. The review ends not with a conclusion concerning the validity of the hypothesis but instead lays a roadmap for the further tests that could rule out or support their hypothesis.

As I stated at the beginning, the progress in AI has been remarkable. However, the understanding of the brain is still very limited, and the mainstream expectation that computers are getting close to equaling its computing potential may be far off, both qualitatively and quantitatively. While in the end it is unclear how much of this hypothesis will survive the test of experiments, it is very interesting to consider and follow the argumentative scientific process.

Stuart Hameroffs Web Site: http://www.quantumconsciousness.org/

Review Paper site: http://smc-quantum-physics.com/pdf/PenroseConsciouness.pdf

Go here to see the original:

Artificial Intelligence or Artificial Expectations? - Science 2.0

3 Ways Sales Is Changing With Artificial Intelligence – Small Business Trends

Technology is the great equalizer. In every industry and in nearly every department, technology is and should be central to performance and achievement capacity. Of course, the frontiers of technology constantly change. The assembly line modernized the means of production in the early 1900s, the telephone revolutionized communication, computers changed nearly everything in the 1980s, and today the frontier of technology is big data and artificial intelligence (A.I.).

Much has been made of those two trends in the last year. Every company under the sun has made bold claims about how much data they can capture and utilize. Then there were the data purists who said data had to be cleared of noise and be converted into smart data. The rules of good data have even been turned into an alliteration: Volume, Velocity, Variety, Veracity, and Value. On top of data came A.I., the much heralded next wave of technological progress.

A.I. captures a unique place in the public consciousness because we have been told both to fear it and to hope for it to save us from the tedium of work. But for all of the talk about what A.I. can do, very little has been made of what it is doing right now. There are many hundreds of products out there that purport to leverage A.I. for various tasks, but few of them live up to the future world that we read about in the news.

But there is one specific department where A.I. is operating to its futuristic potential by accomplishing one simple goal: leveling the playing field. That department is sales and the products that are available leverage A.I. to become prescriptive sales tools.

These are three ways that Prescriptive Sales is changing the industry:

Prescriptive Sales tools function like a regular customer relationship management (CRM) platform, except that they track and analyze millions of events and identify areas for improvement. Uzi Shmilovici, a thought leader in Prescriptive Sales technology and the CEO of Base CRM, says this technology gives sales professionals data-driven feedback for constant improvement.

"Artificial intelligence programs can scan through millions of events to find patterns and correlations that we just would not notice on a day-to-day basis," explains Shmilovici. "So it might notice that sending a specific pitch deck to prospective clients before calling them results in better conversions. Or it might notice that sending a weekly follow-up email can yield results up to 8 weeks after initial contact. These are small practices that a sales professional might miss but that can increase performance over time."
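The kind of correlation Shmilovici describes reduces to comparing conversion rates across logged events. The six-deal event log below is invented; a real system would do this over millions of events and many candidate practices at once:

```python
# Compare conversion rates with and without a given sales practice.
deals = [
    {"deck_first": True,  "converted": True},
    {"deck_first": True,  "converted": True},
    {"deck_first": True,  "converted": False},
    {"deck_first": False, "converted": True},
    {"deck_first": False, "converted": False},
    {"deck_first": False, "converted": False},
]

def rate(rows):
    return sum(r["converted"] for r in rows) / len(rows)

with_deck = rate([d for d in deals if d["deck_first"]])
without = rate([d for d in deals if not d["deck_first"]])
print(f"{with_deck:.2f} vs {without:.2f}")  # -> 0.67 vs 0.33
```

This is the kind of lift a prescriptive system would surface automatically and turn into a recommended practice.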

The effect is to give sales professionals a second brain, one that crunches numbers and identifies patterns without needing any assistance. This has the potential to make every salesperson in the office a top performer, not just those with the best instincts. In that way, A.I. is leveling the playing field.

Growing a company is a chess match. There are a million strategies at play, but at the end of the day, cash is king, and you do not want to find yourself without it. But how do you grow your sales without hiring sales personnel? One way is to sell more with the team you have, and that is the future of Prescriptive Sales.

There is a litany of statistics available about how badly the average sales office performs. By any metric, there is room for growth. One study found that 63% of sales professionals fail to meet their personal quotas. So when we talk about there being room for growth without hiring new personnel, that is the space we are talking about.

Prescriptive Sales is designed to make it easier for salespeople to exceed their quotas. When a whole sales office uses the platform, the A.I. analyzes performance across individual experiences, meaning the program takes notes on how the top performing individuals work and shares it with the rest of the team. That cross-pollination of best practices makes up for numerous shortcomings in talent.

Don Schuerman, CTO of Pegasystems, writes, "Using AI to correlate data and uncover trends is great, but data is made valuable only when you can take action on it."

It is hard to overemphasize the importance of this leap forward. Today's CRM platforms are broadly flat, meaning they describe what is and what is likely to be, but not what can be. In that way, today's CRM platforms are Descriptive rather than Prescriptive.

Transitioning to Prescriptive Sales technology opens up new worlds of business opportunities. Suddenly executives are not handcuffed to best, middle, and worst case projections for annual revenue; instead they can paint a path toward concrete results and understand what it will take to achieve them.

That shift in thinking will have impacts on management and business strategy beyond what we can speculate about here. Of course, the best executives have always looked at what can be and worked toward that end, but now they have incredibly powerful tools at their disposal to get there.

"The impact of A.I. on sales today is significant enough to qualify as a top-tier competitive advantage," asserts Shmilovici. Every CRM company is actively working to release its own Prescriptive Sales platform for that reason. This is the wave of the future. By combining Prescriptive Sales technology with a talented sales force, companies will be able to achieve growth at a much quicker pace. This technology could potentially become the future of sales and marketing.

AI Photo via Shutterstock

See the original post:

3 Ways Sales Is Changing With Artificial Intelligence - Small Business Trends