Argo AI’s CEO says IPO expected within next year – Reuters

Self-driving startup Argo AI, backed by Ford Motor Co (F.N) and Volkswagen AG (VOWG_p.DE), expects to pursue a public listing within the next year, founder and CEO Bryan Salesky said on Wednesday.

"So we're actively fundraising and are going out this summer to raise a private round initially," Salesky said at The Information's Autonomous Vehicles Summit. "And then we're looking forward to an IPO within the next year."

"The raise this year will definitely provide capital that gives us plenty of runway and will help us continue to scale out," he said, adding that autonomous driving is a capital-intensive business.

Ford and Volkswagen each hold a 42% ownership interest in Argo AI.

Last year, Germany's Volkswagen closed its $2.6 billion investment in Pittsburgh-based Argo AI, which valued the company at just over $7 billion.


Will artificial intelligence have a conscience? – TechTalks

Does artificial intelligence require moral values? We spoke to Patricia Churchland, neurophilosopher and author of Conscience: The Origins of Moral Intuition

This article is part of the philosophy of artificial intelligence, a series of posts that explore the ethical, moral, and social implications of AI today and in the future

Can artificial intelligence learn the moral values of human societies? Can an AI system make decisions in situations where it must weigh and balance between damage and benefits to different people or groups of people? Can AI develop a sense of right and wrong? In short, will artificial intelligence have a conscience?

This question might sound irrelevant when considering today's AI systems, which are only capable of accomplishing very narrow tasks. But as science continues to break new ground, artificial intelligence is gradually finding its way into broader domains. We're already seeing AI algorithms applied to areas where the boundaries of good and bad decisions are not clearly defined, such as criminal justice and job application processing.

In the future, we expect AI to care for the elderly, teach our children, and perform many other tasks that require moral human judgement. And then, the question of conscience and conscientiousness in AI will become even more critical.

With these questions in mind, I went in search of a book (or books) that explained how humans develop a conscience and gave an idea of whether what we know about the brain provides a roadmap for conscientious AI.

A friend suggested Conscience: The Origins of Moral Intuition by Dr. Patricia Churchland, neuroscientist, philosopher, and professor emerita at the University of California, San Diego. Dr. Churchland's book, and a conversation I had with her after reading Conscience, taught me a lot about the extent and limits of brain science. Conscience shows us how far we've come in understanding the relation between the brain's physical structure and workings and the moral sense in humans. But it also shows us how much further we have to go to truly understand how humans make moral decisions.

It is a very accessible read for anyone who is interested in exploring the biological background of human conscience and reflecting on the intersection of AI and conscience.

Here's a very quick rundown of what Conscience tells us about the development of moral intuition in the human brain. With the mind being the main blueprint for AI, better knowledge of conscience can tell us a lot about what it would take for AI to learn the moral norms of human societies.

"Conscience is an individual's judgment about what is normally right or wrong, typically, but not always, reflecting some standard of a group to which the individual feels attached," Churchland writes in her book.

But how did humans develop the ability to understand and adopt these rights and wrongs? To answer that question, Dr. Churchland takes us back through time, to when our first warm-blooded ancestors made their appearance.

Birds and mammals are endotherms: their bodies have mechanisms to preserve their heat. In contrast, reptiles, fish, and insects are cold-blooded organisms whose bodies adapt to the temperature of the environment.

The great benefit of endothermy is the capability to gather food at night and to survive colder climates. The tradeoff: endothermic bodies need a lot more food to survive. This requirement led to a series of evolutionary steps in the brains of warm-blooded creatures that made them smarter. Most notable among them is the development of the cortex in the mammalian brain.

The cortex can integrate diverse signals and pull out abstract representation of events and things that are relevant to survival and reproduction. The cortex learns, integrates, revises, recalls, and keeps on learning.

The cortex allows mammals to be much more flexible to changes in weather and landscape, as opposed to insects and fish, who are very dependent on stability in their environmental conditions.

But again, learning capabilities come with a tradeoff: mammals are born helpless and vulnerable. Unlike snakes, turtles, and insects, which hit the ground running and are fully functional when they break their eggshells, mammals need time to learn and develop their survival skills.

And this is why they depend on each other for survival.

The brains of all living beings have a reward and punishment system that makes sure they do things that support their survival and the survival of their genes. The brains of mammals repurposed this function to adapt for sociality.

"In the evolution of the mammalian brain, feelings of pleasure and pain supporting self-survival were supplemented and repurposed to motivate affiliative behavior," Churchland writes. "Self-love extended into a related but new sphere: other-love."

The main beneficiaries of this change are the offspring. Evolution has triggered changes in the circuitry of the brains of mammals to reward care for babies. Mothers, and in some species both parents, go to great lengths to protect and feed their offspring, often at a great disadvantage to themselves.

In Conscience, Churchland describes experiments showing how the brains of different mammals biochemically reward social behavior, including care for offspring.

"Mammalian sociality is qualitatively different from that seen in other social animals that lack a cortex, such as bees, termites, and fish," Churchland writes. "It is more flexible, less reflexive, and more sensitive to contingencies in the environment and thus sensitive to evidence. It is sensitive to long-term as well as short-term considerations. The social brain of mammals enables them to navigate the social world, for knowing what others intend or expect."

The brains of humans have the largest and most complex cortex among mammals. The brain of Homo sapiens, our species, is three times as large as that of chimpanzees, with whom we shared a common ancestor 5-8 million years ago.

The larger brain naturally makes us much smarter, but it also has higher energy requirements. So how did we come to pay the calorie bill? "Learning to cook food over fire was quite likely the crucial behavioral change that allowed hominin brains to expand well beyond chimpanzee brains, and to expand rather quickly in evolutionary time," Churchland writes.

With the body's energy needs supplied, hominins eventually became able to do more complex things, including the development of richer social behaviors and structures.

So the complex behavior we see in our species today, including the adherence to moral norms and rules, started off as a struggle for survival and the need to meet energy constraints.

"Energy constraints might not be stylish and philosophical, but they are as real as rain," Churchland writes in Conscience.

Our genetic evolution favored social behavior. Moral norms emerged as practical solutions to our needs. And we humans, like every other living being, are subject to the laws of evolution, which Churchland describes as "a blind process that, without any goal, fiddles around with the structure already in place." The structure of our brain is the result of countless experiments and adjustments.

"Between them, the circuitry supporting sociality and self-care, and the circuitry for internalizing social norms, create what we call conscience," Churchland writes. "In this sense your conscience is a brain construct, whereby your instincts for caring, for self and others, are channeled into specific behaviors through development, imitation, and learning."

This is a very sensitive and complicated topic, and despite all the advances in brain science, many of the mysteries of the human mind and behavior remain unsolved.

"The dominant role of energy requirements in the ancient origin of human morality does not mean that decency and honesty must be cheapened. Nor does it mean that they are not real. These virtues remain entirely admirable and worthy to us social humans, regardless of their humble origins. They are an essential part of what makes us the humans we are," Churchland writes.

In Conscience, Churchland discusses many other topics, including the role of reinforcement learning in the development of social behavior, and the human cortex's far-reaching capacity to learn by experience, reflect on counterfactual situations, develop models of the world, draw analogies from similar patterns, and much more.

Basically, we use the same reward system that allowed our ancestors to survive, and draw on the complexity of our layered cortex to make very complicated decisions in social settings.

"Moral norms emerge in the context of social tension, and they are anchored by the biological substrate. Learning social practices relies on the brain's system of positive and negative reward, but also on the brain's capacity for problem solving," Churchland writes.

After reading Conscience, I had many questions in mind about the role of conscience in AI. Would conscience be an inevitable byproduct of human-level AI? If energy and physical constraints pushed us to develop social norms and conscientious behavior, would there be a similar requirement for AI? Do physical experience and sensory input from the world play a crucial role in the development of intelligence?

Fortunately, I had the chance to discuss these topics with Dr. Churchland after reading Conscience.

What is evident from Dr. Churchland's book (and other research on biological neural networks) is that physical experience and constraints play an important role in the development of intelligence, and by extension conscience, in humans and animals.

But today, when we speak of artificial intelligence, we mostly talk about software architectures such as artificial neural networks. Today's AI is mostly disembodied lines of code that run on computers and servers and process data obtained by other means. Will physical experience and constraints be a requirement for the development of truly intelligent AI that can also appreciate and adhere to the moral rules and norms of human society?

"It's hard to know how flexible behavior can be when the anatomy of the machine is very different from the anatomy of the brain," Dr. Churchland said in our conversation. "In the case of biological systems, the reward system, the system for reinforcement learning, is absolutely crucial. Feelings of positive and negative reward are essential for organisms to learn about the environment. That may not be true in the case of artificial neural networks. We just don't know."

She also pointed out that we still don't know how brains think. "In the event that we were to understand that, we might not need to replicate absolutely every feature of the biological brain in the artificial brain in order to get some of the same behavior," she added.

Churchland also reminded me that while the AI community initially dismissed neural networks, they eventually turned out to be quite effective once their computational requirements were met. And while current neural networks have limited intelligence in comparison to the human brain, we might be in for surprises in the future.

"One of the things we do know at this stage is that mammals with cortex and with reward system and subcortical structures can learn things and generalize without a huge amount of data," she said. "At the moment, an artificial neural network might be very good at classifying faces but hopeless at classifying mammals. That could just be a numbers problem."

"If you're an engineer and you're trying to get some effect, try all kinds of things. Maybe you do have to have something like emotions, and maybe you can build that into your artificial neural network."

One of my takeaways from Conscience was that while humans generally align themselves with the social norms of their society, they also challenge them at times. And the unique physical structure of each human brain, the genes we inherit from our parents, and the later experiences we acquire through our lives make for the subtle differences that allow us to come up with new norms and ideas, and sometimes to defy what was previously established as rule and law.

But one of the much-touted features of AI is its uniform reproducibility. When you create an AI algorithm, you can replicate it countless times and deploy it in as many devices and machines as you want. They will all be identical, down to the last parameter values of their neural networks. Now, the question is, when all AIs are equal, will they remain static in their social behavior and lack the subtle differences that drive the dynamics of social and behavioral progress in human societies?

"Until we have a much richer understanding of how biological brains work, it's really hard to answer that question," Churchland said. "We know that in order to get a complicated result out of a neural network, the network doesn't have to have wet stuff, it doesn't have to have mitochondria and ribosomes and proteins and membranes. How much else does it not have to have? We don't know."

"Without data, you're just another person with an opinion, and I have no data that would tell me that you've got to mimic certain specific circuitry in the reinforcement learning system in order to have an intelligent network."

Engineers will try and see what works.

We have yet to learn much about the human conscience, and even more about whether and how it would apply to highly intelligent machines. "We do not know precisely what the brain does as it learns to balance in a headstand. But over time, we get the hang of it," Churchland writes in Conscience. "To an even greater degree, we do not know what the brain does as it learns to find balance in a socially complicated world."

But as we continue to observe and learn the secrets of the brain, hopefully we will be better equipped to create AI that serves the good of all humanity.


Maybe the AI dystopia is already here – Washington Post

You know the scenario from 19th-century fiction and Hollywood movies: Mankind has invented a computer, or a robot or another artificial thing that has taken on a life of its own. In Frankenstein, the monster is built from corpses; in 2001: A Space Odyssey, it's an all-seeing computer with a human voice; in Westworld, the robots are lifelike androids that begin to think for themselves. But in almost every case, the out-of-control artificial life form is anthropomorphic. It has a face or a body, or at least a human voice and a physical presence in the real world.

But what if the real threat from artificial life doesn't look or act human at all? What if it's just a piece of computer code that can affect what you see and therefore what you think and feel? In other words, what if it's a bot, not a robot?

For those who don't know (and apologies to those who are wearily familiar), a bot really is just a piece of computer code that can do things that humans can do. Wikipedia uses bots to correct spelling and grammar in its articles; bots can also play computer games or place gambling bets on behalf of human controllers. Notoriously, bots are now a major force on social media, where they can "like" people and causes, post comments, and react to others. Bots can be programmed to tweet out insults in response to particular words, to share Facebook pages, to repeat slogans, to sow distrust.

Slowly, their influence is growing. One tech executive told me he reckons that half of the users on Twitter are bots, created by companies that either sell them or use them to promote various causes. The Computational Propaganda Research Project at the University of Oxford has described how bots are used to promote either political parties or government agendas in 28 countries. They can harass political opponents or their followers, promote policies, or simply seek to get ideas into circulation.

About a week ago, for example, sympathizers of the Polish government (possibly alt-right Americans) launched a coordinated Twitter bot campaign with the hashtag #astroturfing (not exactly a Polish word) that sought to convince Poles that anti-government demonstrators were fake: outsiders or foreigners paid to demonstrate. An investigation by the Atlantic Council's Digital Forensic Research Lab pointed out the irony: An artificial Twitter campaign had been programmed to smear a genuine social movement by calling it ... artificial.

That particular campaign failed. But others succeed, or at least they seem to. The question now is whether, given how many different botnets are running at any given moment, we even know what that means. It's possible for computer scientists to examine and explain each one individually. It's possible for psychologists to study why people react the way they do to online interactions: why fact-checking doesn't work, for example, or why social media increases aggression.

But no one is really able to explain the way they all interact, or what the impact of both real and artificial online campaigns might be on the way people think or form opinions. Another Digital Forensic Research Lab investigation, into pro-Trump and anti-Trump bots, showed the extraordinary number of groups involved in these dueling conversations: some commercial, some political, some foreign. The conclusion: They are distorting the conversation, but toward what end, nobody knows.

Which is my point: Maybe we've been imagining this scenario incorrectly all of this time. Maybe this is what computers out of control really look like. There's no giant spaceship, nor are there armies of lifelike robots. Instead, we have created a swamp of unreality, a world where you don't know whether the emotions you are feeling are manipulated by men or machines, and where, once all news moves online, as it surely will, it will soon be impossible to know what's real and what's imagined. Isn't this the dystopia we have so long feared?


Meet The Stanford AI Lab Alums That Raised $15 Million To Optimize Machine Learning – Forbes

Snorkel AI cofounders (L to R): Alex Ratner (CEO), Chris Ré (Board Member), Paroma Varma (Head of Solutions), Braden Hancock (Head of Technology), and Henry Ehrenberg (Head of Engineering)

In 2014, computer science PhD candidate Alex Ratner and a team of fellow Stanford PhDs, advised by associate professor and MacArthur Fellow Chris Ré, were working on a research project at the university's prominent AI Lab. The main issue they focused on was companies not being able to deploy AI as widely and effectively as they wanted to, due to the costly and time-consuming manual labeling of the data that machine learning models learn from.

"Like many academic projects, it was meant to be just an afternoon of messing around and a whiteboard with some math," Ratner says. "Soon it turned out that this question that we had started with, of what if we changed the paradigm from labeling by hand to labeling programmatically, was quite interesting to a lot of people."

After spending five years developing the product and deploying it at organizations like Google, Apple, Intel, and the departments of Justice and Defense, in 2019 the research team spun out of the AI Lab and created a company called Snorkel AI.

Today, the enterprise came out of stealth mode, announcing that it had raised a total of $15 million (combined seed and Series A rounds) from investors like Greylock Partners, GV, and In-Q-Tel.

"We were motivated by this mission of not just publishing more papers on some of these fun algorithmic or theoretical ideas, but actually making AI more broadly practical with a new end-to-end platform that focuses centrally on the problem of data labeling," says Ratner, who serves as the company's CEO.

Snorkel AI's platform

The company's flagship product is an end-to-end machine learning platform called Snorkel Flow, which allows AI applications to be deployed programmatically at much faster rates.

Snorkel Flow would serve as a replacement for the armies of human labelers who currently do this work by hand. One example of those manual processes is training an AI application to assist a radiologist in triaging chest X-rays: the radiologist would have to sit through a ton of images, labeling which ones are emergencies and which ones aren't, to teach the AI algorithm. Another example is a bank wanting AI to classify, sort, and pull information out of someone's loan portfolio; the company would need to have its legal team check and label thousands of documents by hand every single time it wants to change something.

"Our key focus has been on sectors where labeling data by hand is not just a slower or more expensive option, but is often just a non-starter," Ratner says.

According to Ratner, this is usually due to one or more of three factors: the data is private, so companies can't outsource labeling outside of the organization; the data requires in-demand experts (doctors or legal analysts); or the data changes frequently, so companies find themselves labeling and relabeling all the time.

Snorkel AI's platform enables a programmatic approach: instead of labeling one document at a time, the user can write a function (for example, if the word "employment" appears in the header, label the document as an employment contract).

"The advantage is that writing a dozen or two dozen of these labeling functions to label your AI solution is orders of magnitude faster than labeling documents by hand," Ratner says.
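For readers who want a concrete picture of what a labeling function looks like, here is a minimal sketch using the open-source Snorkel library that the founders released at Stanford; the commercial Snorkel Flow product exposes the same idea through its own interface, so treat this only as an illustration. The column names, label values, and documents below are hypothetical.

```python
# Minimal sketch of programmatic labeling with the open-source Snorkel library.
# The DataFrame columns and label values below are hypothetical examples.
import pandas as pd
from snorkel.labeling import labeling_function, PandasLFApplier
from snorkel.labeling.model import LabelModel

ABSTAIN, OTHER, EMPLOYMENT = -1, 0, 1

@labeling_function()
def lf_employment_header(doc):
    # Heuristic from the article: the word "employment" in the header
    # suggests an employment contract.
    return EMPLOYMENT if "employment" in doc.header.lower() else ABSTAIN

@labeling_function()
def lf_salary_terms(doc):
    # A second, noisier heuristic; Snorkel combines many such weak signals.
    return EMPLOYMENT if "salary" in doc.body.lower() else ABSTAIN

docs = pd.DataFrame([
    {"header": "Employment Agreement", "body": "The salary shall be paid monthly..."},
    {"header": "Lease Agreement", "body": "The tenant agrees to the following terms..."},
])

# Apply the labeling functions, then combine their (possibly conflicting) votes
# into probabilistic training labels.
L = PandasLFApplier(lfs=[lf_employment_header, lf_salary_terms]).apply(df=docs)
label_model = LabelModel(cardinality=2, verbose=False)
label_model.fit(L_train=L)
print(label_model.predict(L))  # e.g. [1 0]
```

In practice, a team would write a dozen or more such functions and let the label model reconcile their disagreements, which is the speed-up Ratner describes above.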

Snorkel AI label

The Palo Alto-based Snorkel AI, which counts around two dozen employees, raised a $3 million seed round in June of last year and a $12 million Series A in October. The company's current customers include two top US banks, government agencies, and other Fortune 500 companies.

Saam Motamedi, an Under 30 honoree and a general partner at Greylock Partners, which co-led the seed round and led the Series A, says that Greylock immediately wanted to partner with the Snorkel AI cofounders given "the caliber of the team, the traction of the open source Snorkel project and the power of the paradigm shift they are pioneering around this data-centric approach."

"Customers have been able to go from what took months to deploy AI applications to now being able to deploy it in hours because they can programmatically manage the data," Motamedi says.


Artificial intelligence is a totalitarian’s dream – here’s how to take power back – The Conversation UK

Individualistic western societies are built on the idea that no one knows our thoughts, desires or joys better than we do. And so we put ourselves, rather than the government, in charge of our lives. We tend to agree with the philosopher Immanuel Kant's claim that no one has the right to force their idea of the good life on us.

Artificial intelligence (AI) will change this. It will know us better than we know ourselves. A government armed with AI could claim to know what its people truly want and what will really make them happy. At best it will use this to justify paternalism, at worst, totalitarianism.

Every hell starts with a promise of heaven. AI-led totalitarianism will be no different. Freedom will become obedience to the state. Only the irrational, spiteful or subversive could wish to choose their own path.

To prevent such a dystopia, we must not allow others to know more about ourselves than we do. We cannot allow a self-knowledge gap.

In 2019, the billionaire investor Peter Thiel claimed that AI was "literally communist". He pointed out that AI allows a centralising power to monitor citizens and know more about them than they know about themselves. China, Thiel noted, has eagerly embraced AI.

We already know AI's potential to support totalitarianism by providing an Orwellian system of surveillance and control. But AI also gives totalitarians a philosophical weapon. As long as we knew ourselves better than the government did, liberalism could keep aspiring totalitarians at bay.

But AI has changed the game. Big tech companies collect vast amounts of data on our behaviour. Machine-learning algorithms use this data to calculate not just what we will do, but who we are.

Today, AI can predict what films we will like, what news we will want to read, and who we will want to friend on Facebook. It can predict whether couples will stay together and if we will attempt suicide. From our Facebook likes, AI can predict our religious and political views, personality, intelligence, drug use and happiness.

The accuracy of AI's predictions will only improve. In the not-too-distant future, as the writer Yuval Noah Harari has suggested, AI may tell us who we are before we ourselves know.

These developments have seismic political implications. If governments can know us better than we can, a new justification opens up for intervening in our lives. They will tyrannise us in the name of our own good.

The philosopher Isaiah Berlin foresaw this in 1958. He identified two types of freedom. One type, he warned, would lead to tyranny.

Negative freedom is freedom from. It is freedom from the interference of other people or government in your affairs. Negative freedom is no one else being able to restrain you, as long as you aren't violating anyone else's rights.

In contrast, positive freedom is freedom to. It is the freedom to be master of yourself, freedom to fulfil your true desires, freedom to live a rational life. Who wouldn't want this?

But what if someone else says you aren't acting in your true interest, although they know how you could? If you won't listen, they may force you to be free, coercing you for your own good. This is one of the most dangerous ideas ever conceived. It killed tens of millions of people in Stalin's Soviet Union and Mao's China.

The Russian Communist leader Lenin is reported to have said that the capitalists would sell him the rope he would hang them with. Peter Thiel has argued that, in AI, the capitalist tech firms of Silicon Valley have sold communism a tool that threatens to undermine democratic capitalist society. AI is Lenin's rope.

We can only prevent such a dystopia if no one is allowed to know us better than we know ourselves. We must never sentimentalise anyone who seeks such power over us as well-intentioned. Historically, this has only ever ended in calamity.

One way to prevent a self-knowledge gap is to raise our privacy shields. Thiel, who labelled AI as communistic, has argued that crypto is libertarian. Cryptocurrencies can be privacy-enabling. Privacy reduces the ability of others to know us and then use this knowledge to manipulate us for their own profit.

Yet knowing ourselves better through AI offers powerful benefits. We may be able to use it to better understand what will make us happy, healthy and wealthy. It may help guide our career choices. More generally, AI promises to create the economic growth that keeps us from each other's throats.

The problem is not AI improving our self-knowledge. The problem is a power disparity in what is known about us. Knowledge about us exclusively in someone else's hands is power over us. But knowledge about us in our own hands is power for us.

Anyone who processes our data to create knowledge about us should be legally obliged to give us back that knowledge. We need to update the idea of "nothing about us without us" for the AI age.

What AI tells us about ourselves is for us to consider using, not for others to profit from abusing. There should only ever be one hand on the tiller of our soul. And it should be ours.


Hope, hype and humans: the foundations of AI in marketing – ITProPortal

What do you think about when you think of artificial intelligence (AI) in marketing? For most, it's hype and hope. Hype, that AI can transform the workings of any marketing department in any company. And hope, that by bolting on AI to your marketing stack, you'll be able to overhaul how your marketing team manages customer data, and in doing so improve the experience for your users.

The reality, though, is somewhat different. Hype has, in most cases, been driven by vendors wanting to push AI products to companies regardless of whether they need them, or whether they'll benefit from the technology. AI isn't suited to every company, particularly when it comes to marketing. And that hope of easily bolting AI onto a marketing stack just isn't achievable. The impact (for your team and your customers) isn't instantaneous, and the technology itself cannot provide quick-fix transformations for your team's workflows and customer data management process.

However, get it right, and adopting AI in marketing can deliver impressive results. According to Forbes Insights and Quantcast research, AI enables marketers to increase sales (52 per cent), increase customer retention (51 per cent), and succeed at new product launches (49 per cent).

So that's the hype and the hope. What's missing from the AI equation is the third "h": the human factor. Adding AI into the marketing stack requires incredibly careful attention to the deep details of data modelling and data hygiene. This involves human training and effort, and ongoing investment in people as well as technology. An AI platform must be usable by individuals across an organisation, for instance, and must be configurable and customisable to ensure a streamlined integration with a marketing team's existing IT stack.

Companies need to ensure they utilise AI technologies and models that make the most sense for their business and marketing strategy. If not, they won't succeed at AI in marketing, nor be able to address and overcome some of the most common challenges with customer data management. Failing at AI isn't uncommon: a quarter of businesses responding to a 2019 IDC survey reported a 50 per cent fail rate on AI projects.

With so many failures, adding AI into the marketing mix can seem daunting. Compounding this is the fact that AI is a relatively new concept, and awareness of what the technology can deliver is limited. However, one major area where AI can benefit marketers is in managing digital experiences. And at a time when digital, multi-channel experiences are the new normal, brands which fail to digitalise, without the support of AI and other emerging technologies in managing those experiences, will be left behind.

This is particularly true of the past few weeks; a period that has highlighted just how crucial a seamless, relevant and engaging online customer experience is. If they've not done so already, businesses really must move to a digital-first way of managing customer experiences across multiple digital platforms, and unifying these.

With lockdown restrictions and physical footfall decreasing dramatically (or in the case of many retail businesses, completely), customers instead have relied on their smartphones and laptops. The digital domain has been the only way for consumers to buy goods, access and manage their finances, educate their children, enjoy entertainment, pay bills and use any other service that used to involve some in-person element.

As customers flip from one digital channel to the next, they'll want (and increasingly, expect) the company in question to provide a joined-up experience. When it works well, users will likely not even notice; it's when a company doesn't deliver on this that frustrations (resulting from a poor customer experience) will arise.

A consumer might, for example, pay a lot of money for a concert ticket and sign up to the ticket site's mailing list, only to be sent recommendations for the same concert they've already spent a lot of money on. Or a shopper browsing and buying men's clothing on a fashion website might repeatedly be shown recommendations for women's clothes when they next log on. Or a consumer trying to authorise a transaction online might call a customer service helpline, only for the representative on the other end to have no knowledge of the customer or the transaction. Or a customer shopping in Australia might be shown out-of-season stock on the UK version of a brand's website. Or a shopper with a nut allergy trying to do an online grocery order via their smartphone might be sent push notifications for special offers on Easter confectionery containing nuts. The list goes on!

Many businesses will hope that by throwing AI into the mix, these customer experience issues will instantly disappear. However, some AI tools will fail to deliver, and the business in question will subsequently fail at AI. What's needed is an AI-capable personalisation tool that utilises data from across different channels and from third-party sources, as well as data available in house.

Things like geolocation, visit frequency, the device and system used to browse your site, pages viewed, and browsing behaviour all need to be taken into consideration. Tools must then be available to marketers to analyse this data, to better understand it, use it to make predictions, inform decision making about what personas consumers might fit into, and then action this to display the content and offer the experience that will most appeal to those individuals. One human marketer doing this for one consumer is possible. Offering this one-to-one experience to thousands or millions of customers is not.

AI technologies can enable one-to-one personalisation at massive scale and in real or near-real time. Such a tool will gather customer, product, order and behavioural data from retail points of sale and from e-commerce order management, email marketing and analytics systems. It can then perform data hygiene, de-duplication, and standardisation, enabling marketing teams to gain an accurate, complete view of each and every customer, as the sketch below illustrates.
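As a rough illustration of that hygiene, de-duplication and standardisation step (not any particular vendor's implementation), the following sketch with made-up field names shows the idea of unifying customer records on a normalised key:

```python
# Illustrative sketch of basic customer-record hygiene: standardise fields,
# then de-duplicate so each customer appears once. Field names are made up.
import pandas as pd

records = pd.DataFrame([
    {"email": " Jane.Doe@Example.com ", "channel": "web",   "last_order": "2020-03-01"},
    {"email": "jane.doe@example.com",   "channel": "store", "last_order": "2020-04-12"},
    {"email": "sam@example.com",        "channel": "app",   "last_order": "2020-02-20"},
])

# Standardisation: trim whitespace and lower-case the join key.
records["email"] = records["email"].str.strip().str.lower()

# De-duplication: keep the most recent interaction per customer.
unified = (records.sort_values("last_order")
                  .drop_duplicates(subset="email", keep="last"))
print(unified)
```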

That's the tech part; now for the third "h": those humans. To avoid an AI fail, these tools have to be straightforward to use by all team members, including those hesitant about the adoption of AI. It should also be possible for businesses to use customer data platforms alongside other applications, depending on the experience and knowledge of their marketers.

Google Marketing Platform, for instance, features Google Analytics, which is already used by many marketers who wouldn't consider themselves AI specialists. Google Cloud, meanwhile, includes tools that allow users to analyse data using AI. Alternatively, if members of your team have stronger analytical skills and are familiar with AI, tools like BigQuery ML or AutoML have clustering and prediction tools built in. An AI-capable customer data platform should be able to work alongside these other applications, allowing a business user to choose the approach that works for its team.
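To make the BigQuery ML option concrete, here is a hedged sketch of clustering customers into segments from Python; the project, dataset, table and column names are placeholders, and a real deployment would of course need its own feature engineering:

```python
# Hedged sketch: training a k-means customer-segmentation model with
# BigQuery ML from Python. All resource names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

create_model_sql = """
CREATE OR REPLACE MODEL `my-project.marketing.customer_segments`
OPTIONS (model_type = 'kmeans', num_clusters = 4) AS
SELECT visit_frequency, pages_viewed, days_since_last_order
FROM `my-project.marketing.customer_features`
"""

client.query(create_model_sql).result()  # trains the clustering model
print("Customer segmentation model trained.")
```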

Businesses should organise their marketing strategy, stack and organisation around AI, with a focus on the specific outcomes they want to achieve. Falling for the hype and hoping the technology can simply be forced in are what lead to those AI fails. By taking this human-centric approach to AI integration, companies will be able to maximise the benefits of such technologies, unifying customer data across digital and physical technology silos, providing meaningful insights into the data, and orchestrating consistent, relevant customer engagement at every touchpoint. After all, it's these human experiences that define brands.

Omer Artun, Chief Science Officer, Acquia


The AI Doctor Orders More Tests – Bloomberg

Few U.S. industries are growing as fast as health care, but the big public-cloud companies (Amazon.com, Microsoft, Google) have struggled to crack the $3.2 trillion market. Even as hospitals and insurers collect mountains of health data on individual Americans, most of their spending on extra data storage is for old-school systems on their own premises, according to researcher IDC.

The public-cloud kingpins are trying to lure health-care providers with artificially intelligent cloud services that can act like doctors. The companies are testing, and in some cases marketing, AI software that automates mundane tasks including data entry; consulting work like patient management and referrals; and even the diagnostic elements of highly skilled fields such as pathology.

Amazon Web Services, the dominant cloud provider, is processing and storing genomics data for biotech companies and clinical labs. No. 2 Microsoft's cloud unit plans to store DNA records, and its Healthcare Next system provides automated data entry and certain cancer treatment recommendations to doctors based on visible symptoms. Google seems to be betting most heavily on health-care analysis as a way to differentiate its third-place cloud offerings. Gregory Moore, vice president for health care, says he's readying Google Cloud for a world of diagnostics as a service. In this world, AI could always be on hand to give doctors better information, or replace them altogether.

The cloud division is refining its genomics data analysis and working to make Google Glass, the augmented-reality headgear that consumers didn't want, a product more useful to doctors. German cancer specialist Alacris Theranostics GmbH leans on Google infrastructure to pair patients with drug therapies, something Google hopes more companies will do. "Health-care systems are ready," says Moore, an engineer and former radiologist. "People are seeing the potential of being able to manage data at scale."

In November, Google researchers showed off an AI system that scanned images of eyes to spot signs of diabetic retinopathy, which causes vision loss among people with high sugar levels. Another group of the company's researchers in March said they had used similar software to scan lymph nodes. They said they'd identified breast cancer from a set of 400 images with 89 percent accuracy, a better record than most pathologists. Last year the University of Colorado at Denver moved its health research labs' data to Google's cloud to support studies on genetics, maternal health, and the effect of legalized marijuana on the number and severity of injuries to young men. Michael Ames, the university's project director, says he expects eventually to halve the cost of processing some 6 million patient records.

However impressive Google's AI analysis gets, the health-care industry isn't exactly a gaggle of early adopters, says James Wang, an analyst at ARK Investment Management LLC. "They can have the lowest error rate and the greatest algorithm, but getting it into a hospital is a whole other problem," he says. Most electronic medical records are likely to remain locked inside health companies for the foreseeable future, says Robert Mittendorff, a biotech investor at Norwest Venture Partners. Indeed, Google's first major effort in the industry, an online health records service, folded in 2011 because the company couldn't convince potential customers their data would be safe.


Moore says things have changed since then and that he's working with Stanford and the Broad Institute, plus about a dozen companies in the health-care industry and defense contractor Northrop Grumman Corp. For now, his primary focus is wrangling more health-care companies onto Google's cloud, because the more data he can get on Google's servers, the faster its AI systems will learn. "There literally have to be thousands of algorithms to even come close to replicating what a radiologist can do on a given day," he says. "It's not going to be all solved tomorrow."

The bottom line: Big cloud companies, especially Google, are experimenting with AI diagnostics and other systems to attract medical clients.


Nvidia Trounces Google And Huawei In AI Benchmarks, Startups Nowhere To Be Found – Forbes

Nvidia uses the new Ampere A100 GPU and the Selene Supercomputer to break MLPerf performance records

Artificial Intelligence (AI) training results for the new MLPerf 0.7 benchmark suite were released today (7/29/20), and once again Nvidia takes the performance crown. Eight companies submitted numbers for systems based on both AMD and Intel CPUs, using a variety of AI accelerators from Google, Huawei, and Nvidia. The increase in peak performance for each MLPerf benchmark by the leading platform was 2.5x or more. The new version also added tests for additional emerging AI workloads.

As a brief background, MLPerf is an organization established to develop benchmarks for effectively and consistently testing systems running a wide range of AI workloads, including training and inference processing. The organization has gained wide industry support from semiconductor and IP companies, tools vendors, systems vendors, and the research and academic communities. Since the benchmark first launched in 2018, updates and new training results have been announced about once a year, even though the goal is once a quarter.

The benefit of the MLPerf benchmark is not only seeing the advancements by each vendor, but the overall advancements of the industry, especially as new workloads are added. For the latest training version 0.7, new workloads were added for natural language processing (NLP) using Bidirectional Encoder Representations from Transformers (BERT), recommendation systems using the Deep Learning Recommendation Model (DLRM), and reinforcement learning using Minigo. Note that using Minigo for reinforcement learning may also serve as a baseline for AI gaming applications. The benchmark results are reported as either commercially available (on-premise or in the cloud), preview (products coming to the market in the next six months), or research and development (systems still in earlier development). The most important near-term results are those that are commercially available or in preview. There is also an open division, but it had no material impact on the overall results.

The companies and institutions that submitted results included Alibaba, Dell EMC, Fujitsu, Google, Inspur, Intel, Nvidia, and the Shenzhen Institute of Advanced Technology. The largest number of submissions came from Nvidia, which is not surprising given that the company recently built its own supercomputer (ranked #7 on the TOP500 supercomputer list and #2 on the Green500 list), which is based on its latest Ampere A100 GPUs. This system, called Selene, allows the company considerable flexibility in testing different workloads and system configurations. In the MLPerf results, the number of GPU accelerators ranges from two to 2048 in the commercially available category and up to 4096 in the research and development category.

All of the systems were based on AMD and Intel CPUs paired with one of the following accelerators: the Google TPU v3, the Google TPU v4, the Huawei Ascend 910, the Nvidia Tesla V100 (in various configurations), or the Nvidia Ampere A100. Noticeably absent were chip startups like Cerebras, Esperanto, Groq, Graphcore, Habana (an Intel company), and SambaNova. This is especially surprising because all of these companies are listed as contributors or supporters of MLPerf. There is a long list of other AI chip startups that are also not represented. Intel submitted performance numbers, but only in the preview category for its upcoming Xeon Platinum processors, not for its recently acquired Habana AI accelerators. With only Intel submitting processor-only numbers, there is nothing to compare them to, and the performance is well below the systems using accelerators. It is also worth noting that Google and Nvidia were the only companies that submitted performance numbers for all the benchmark categories, but Google only submitted complete benchmark numbers for the TPU v4, which is in the preview category.

Each benchmark is ranked by execution time. Because of the high number of system configurations, the best way to compare results is to normalize the execution time to each AI accelerator by dividing the execution time by the number of accelerators. This is not perfect, because the performance per accelerator does typically increase with the number of accelerators and some workloads appear to have performance optimized around certain system configurations, but the results appear relatively consistent even when comparing systems with relatively similar numbers of accelerators. The clear winner was Nvidia: Nvidia-based systems dominated all eight benchmarks for commercially available solutions. Considering all categories, including preview, the Google TPU v4 had the fastest per-accelerator execution time for recommendations.
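To make that normalization concrete, here is a small sketch that follows the article's description literally, dividing each submission's execution time by its accelerator count. The entries are made-up examples, not actual MLPerf 0.7 results, and, as noted above, the comparison is most meaningful between systems of similar size.

```python
# A literal rendering of the per-accelerator normalization described above.
# The submissions below are made-up examples, not real MLPerf 0.7 results.
submissions = [
    {"system": "System X (8 accelerators)",  "accelerators": 8,  "minutes": 40.0},
    {"system": "System Y (16 accelerators)", "accelerators": 16, "minutes": 22.0},
]

for s in submissions:
    per_accelerator = s["minutes"] / s["accelerators"]
    # Scaling is rarely perfectly linear, so this is only a rough basis for
    # comparison, especially across very different system sizes.
    print(f'{s["system"]}: {per_accelerator:.2f} minutes per accelerator')
```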

The platforms with the top performance results for each MLPerf 0.7 benchmark

Overall, performance on the benchmark categories carried over from version 0.6 (image classification, object detection, and translation) increased by 2.5x to 3.3x. Interestingly, Nvidia's previous-generation GPU, the Tesla V100, scored best in three categories: non-recurrent translation, recommendation, and reinforcement learning, the latter two being new MLPerf categories. This is not completely surprising because the Ampere architecture includes significant changes that will also improve performance in inference processing. It will be interesting to see how Ampere A100 systems score in the next generation of inference benchmarks that should be released later this year. Another development to note is the emergence of AMD Epyc processors in the top performance benchmarks, because of their presence in the new Nvidia DGX A100 systems and DGX SuperPods with Nvidia's new Ampere A100 accelerators.

Summary of the top MLPerf benchmark results and the performance improvements from version 0.6 to version 0.7

Nvidia continues to lead the pack, not just because of its lead in GPUs, but also its leadership in complete systems, software, libraries, trained models, and other tools for AI developers. Yet, every other company offering AI chips and solutions offers comparisons to Nvidia without the supporting benchmark numbers. MLPerf is not perfect. The results should be published more than once a year and the results should include an efficiency ranking (performance/watt) for the system configurations, two points the organization is working to achieve. However, MLPerf was developed as an industry collaboration and represents the best method of evaluating AI platforms. It is time for everyone else to submit MLPerf numbers to support their claims.


Advancing AI by Understanding How AI Systems and Humans Interact – Windows IT Pro

Artificial intelligence as a technology is rapidly growing, but much is still being learned about how AI and autonomous systems make decisions based on the information they collect and process.

To better explain those relationships so humans and autonomous systems can better understand each other and collaborate more deeply, researchers at PARC, the Palo Alto Research Center, have been awarded a multi-million dollar federal government contract to create an "interactive sense-making system" that could answer many related questions.

The research for the proposed system, called COGLE (Common Ground Learning and Explanation), is being funded by the Defense Advanced Research Projects Agency (DARPA), using an autonomous Unmanned Aircraft System (UAS) test bed, but would later be applicable to a variety of autonomous systems.

The idea is that since autonomous systems are becoming more widely used, it would behoove humans who are using them to understand how the systems behave based on the information they are provided, Mark Stefik, a PARC research fellow who runs the lab's human machine collaboration research group, told ITPro.

"Machine learning is becoming increasing important," said Stefik. "As a consequence, if we are building systems that are autonomous, we'd like to know what decisions they will make. There is no established technique to do that today with systems that learn for themselves."

In the field of human psychology, there is an established history about how people form assumptions about things based on their experiences, but since machines aren't human, their behaviors can vary, sometimes with results that can be harmful to humans, said Stefik.

In one moment, an autonomous machine can do something smart or helpful, but then the next moment it can do something that is "completely wonky, which makes things unpredictable," he said. For example, a GPS system seeking the shortest distance between two points could erroneously and catastrophically send a user driving over a cliff or the wrong way onto a one-way street. Being able to delve into those autonomous "thinking" processes to understand them is the key to this research, said Stefik.

The COGLE research will help researchers pursue answers to these issues, he said. "We're insisting that the program be explainable," for the autonomous systems to say why they are doing what they are doing. "Machine learning so far has not really been designed to explain what it is doing."

The researchers involved with the project will essentially have roles as educators and teachers for the machine learning processes, to improve their operations and make them more usable and even more human-like, said Stefik. "It's a sort of partnership where humans and machines can learn from each other."

That can be accomplished in three ways, he added, including reinforcement at the bottom level, using reasoning patterns like the ones humans use at the cognitive or middle level, and through explanation at the top sense-making level. The research aims to enable people to test, understand, and gain trust in AI systems as they continue to be integrated into our lives in more ways.

The research project is being conducted under DARPA's Explainable Artificial Intelligence (XAI) program, which seeks to create a suite of machine learning techniques that produce explainable models and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.

PARC, which is a Xerox company, is conducting the COGLE work with researchers at Carnegie Mellon University, West Point, the University of Michigan, the University of Edinburgh and the Florida Institute for Human & Machine Cognition. The key idea behind COGLE is to establish common ground between concepts and abstractions used by humans and the capabilities learned by a machine. These learned representations would then be exposed to humans using COGLE's rich sense-making interface, enabling people to understand and predict the behavior of an autonomous system.


A new AI claims it can help remove racism on the web. So I put it to work – ZDNet

Can AI flag truly problematic content?

I tend to believe technology can't solve every problem.

Why, it's not even managed to solve the vast problems caused by technology.

Yet when I received an email headlined: "AI to remove racism," how could I not open it? After all, AI has already removed so many things. Human employment, for example.

The email came on behalf of a company called UserWay. It claims to have a widget that is "the world's most advanced AI-based auto-remediation technology."

Within its paid offering, UserWay now has what it calls an AI-Powered Content Moderator. This, it hopes, will allow companies to ensure their websites identify problematic language -- reflecting racial bias, gender bias, age bias, and disability slurs, for example -- so that they can decide whether to change it or remove it.

As far as UserWay is concerned, this is "the first general, cross-website content moderation tool designed specifically for the greater public."

UserWay says it performed a test on 500,000 websites and found that 52% had examples of racial bias, 24% had examples of gender bias, and 12% featured age bias.

To give you an example of the sorts of words and phrases flagged, these include whitelist, blacklist, black sheep, chairman, and mankind. As well as language of the more overtly offensive sort.

Naturally, I asked UserWay to undertake another test. I gave it the names of some well-known news and business websites and wondered which of these might be great offenders. Or not. At least according to this AI.

I fear that, given our fractious times, your own political antennae may already be sending signals of acceptance or rejection. Please bear with me, as one or two of these results might surprise.

UserWay says it examined a representative sample of 10,000 pages from each of these sites -- ranging from Fox News to The Huffington Post, from The Daily Caller to The New York Times -- and then offered me its artificially intelligent conclusions.

The AI declared that, overall, the Washington Examiner had the most problematic content, followed by The Daily Caller. This wasn't so much because of pages with racial bias, but because of pages with gender bias and racial slurs.

But before you cheer for your side or begin to throw objects, please let me tell you which site -- according to UserWay -- had the most pages including racial bias. It was, in this sample, ESPN.com. Followed by CNBC.com.

And what if I told you that this AI believes ACLU.org has more problematic pages than FoxNews.com?

While you're digesting that, I'll add this: The AI also declared that FoxNews.com has fewer pages with gender bias than do CNN.com and The Washington Post's website.

I have no interest in besmirching any of these sites. At least publicly.

These results may make one or two people wonder, however, whether racism, sexism, and gender bias aren't the exclusive preserve of one political bent or another. It may also make some wonder about the very essence of AI as a content moderator.

A considerable element of such AI is the selection of criteria by which it makes its decisions. That's why the companies that operate the sites have to decide themselves which words and phrases are acceptable and which aren't.
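Purely as an illustration of that criteria-selection problem (this is not UserWay's actual technology, which the company has not published), a flagger driven by a hand-curated term list might look like the sketch below, with the operator deciding what goes on the list and what alternatives to suggest:

```python
# Illustrative only: a naive term-list flagger. Real moderation tools are far
# more sophisticated; this just shows that a human must choose the criteria.
import re

FLAGGED_TERMS = {
    "whitelist": "consider 'allowlist'",
    "blacklist": "consider 'blocklist'",
    "chairman": "consider 'chairperson' or 'chair'",
    "mankind": "consider 'humankind'",
}

def flag_terms(text: str):
    """Return (term, suggestion, snippet) for each flagged term found."""
    hits = []
    for term, suggestion in FLAGGED_TERMS.items():
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            start = max(match.start() - 20, 0)
            hits.append((term, suggestion, text[start:match.end() + 20]))
    return hits

sample = "Add the domain to the whitelist before the chairman signs off."
for term, suggestion, snippet in flag_terms(sample):
    print(f"{term!r} -> {suggestion} | ...{snippet}...")
```

Notice that a list-based approach like this flags terms regardless of how they are used, which is exactly the limitation discussed next.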

If there's one thing that's sure about AI, it's that human nuance is not its strength. Sometimes it'll identify words and phrases without exactly understanding the context. And, who knows, certain terms that are currently acceptable may not be so positively received in even a few months' time.

When I asked UserWay how it chooses the words and phrases to be flagged, it told me it "curates the terms internally based on our own research."

Which did tend a little toward Facebook-speak.

Talking of which, I asked UserWay to look at Facebook.com too. Oddly, it couldn't produce any results.

UserWay's Founder and CEO Allon Mason told me: "It seems that Facebook is proactively preventing scanners and bots from scanning its site."

I'm taken aback.


Spotify Offers Personalized Artificial Intelligence Experience With The Weeknd – HYPEBEAST

With the help of new artificial intelligence technology, Spotify is providing fans with a highly personalized way to experience The Weeknd's critically acclaimed After Hours album. The microsite experience features a life-like version of The Weeknd, who will have a one-on-one chat with fans.

Upon entering the site, The Weeknd's alter ego will appear on the screen. He'll start out by addressing each fan by name and, based on listening data, share how they've streamed his music over the years. The AI experience then turns into an intimate listening session of After Hours, one that is just between the individual and The Weeknd. Spotify's new "Alone With Me" experience comes after the streaming platform gave fans an exclusive remote listening party and Q&A to celebrate the release of The Weeknd's new album back in March. The "Alone With Me" session gives the title of the album a whole new meaning.

Join The Weeknd in the "Alone With Me" experience on Spotify's website now.

In other music-related news, check out former U.S. President Barack Obama's annual summer playlist.

More:

Spotify Offers Personalized Artificial Intelligence Experience With The Weeknd - HYPEBEAST

Futuristic Impacts of AI Over Businesses and Society – Analytics Insight

In the past decade, artificial intelligence (AI) has moved from academic journals into mainstream society. The technology has achieved numerous milestones in the digital transformation of businesses, education, and healthcare, and today people can do tasks that were not even possible ten years ago.

The proportion of organizations using AI in some form rose from 10 percent in 2016 to 37 percent in 2019, and that figure is extremely likely to rise further in the coming year, according to Gartner's 2019 CIO Agenda survey.

While breakthroughs in surpassing human ability at human pursuits, such as chess, make headlines, AI has been a standard part of the industrial repertoire since at least the 1980s, when production-rule (expert) systems became standard technology for checking circuit boards and detecting credit card fraud. Similarly, machine-learning (ML) strategies such as genetic algorithms have long been used for intractable computational problems such as scheduling, and neural networks have been used not only to model and understand human learning but also for basic industrial control and monitoring.

Moreover, AI is also at the core of some of the most successful companies in history in terms of market capitalization: Apple, Alphabet, Microsoft, and Amazon. Along with information and communication technology (ICT) more generally, the technology has revolutionized the ease with which people all over the world can access knowledge, credit, and other benefits of contemporary global society. Such access has helped lead to a massive reduction in global inequality and extreme poverty, for example by letting farmers learn fair prices and the best crops to plant, and by giving them access to accurate weather predictions.

Following these trends, we can expect big winners and losers as collaborative technologies, robots, and artificial intelligence transform the nature of work, and data expertise will become exponentially more important. Across organizations, the role of the senior manager in a deeply data-driven world is about to shift, thanks to the AI revolution. It is estimated that information hoarders will slow the pace of their organizations and forsake the power of artificial intelligence while competitors exploit it.

In the future, judgments about consumers and potential consumers will be made instantaneously, and many organizations will put cybersecurity on par with other intelligence and defense priorities. Open-source information collection and artificial intelligence will provide opportunities for global technological parity, and predictive analytics and artificial intelligence could soon play an even more fundamental role in content creation.

As AI-enabled technologies grow, societies will face the challenge of ensuring that they benefit humanity rather than destroy it or intrude on the human rights of privacy and free access to information. The surging capabilities of robots and artificial intelligence will also see a range of current jobs supplanted; professional roles such as doctors, lawyers, and accountants could be replaced by artificial intelligence by the year 2025.

Moreover, low-skill workers will reallocate to tasks that are less susceptible to computerization. Many of these risks will arise out of human activity in certain technological developments: artificial intelligence itself, synthetic biology, and nanotechnology.

Smriti is a Content Analyst at Analytics Insight, where she writes tech and business articles. Her work can be found at analyticsinsight.net. She loves books, crafts, creative work, people, movies, and music.

Excerpt from:

Futuristic Impacts of AI Over Businesses and Society - Analytics Insight

These young immigrant brothers are teaching A.I. to high-schoolers for free: We want to give kids ‘a lucky break’ – CNBC

Since 20-something brothers Haroon and Hamza Choudery immigrated to Brooklyn, New York, from rural Pakistan in 1998, their lives have been changed by technology in both amazing and devastating ways.

Technology provides a nice living for the brothers: Haroon, 26, has a well-paying AI job at a healthcare company, and Hamza, 24, works at WeWork.

But their uncle has seen the other side.

The Chouderys' uncle used his life savings to finance a New York City taxi medallion in 2013 (which, at the time, cost as much as $1.3 million). But thanks to technology, the ride-share boom left the medallion worth just 20% of its original value, Haroon says.

"As you can imagine, starting from scratch after over two decades of working as a taxi driver had a devastating effect on the trajectory of his life."

This whiplash, technology launching their careers while devastating their elder, also had an effect on Haroon and Hamza. Inspired in part by the experience, the brothers co-founded a nonprofit called AI for Anyone.

The idea behind the AI literacy organization is to use "our privilege to help those that are less privileged avoid getting into situations where their livelihoods are destroyed, whether it be through like automation replacing their jobs or whether it be through automation being designed to accommodate the needs of more affluent and more well off people and not really taken the underrepresented populations into account when they're making their decisions," Haroon says.

Both in the classroom and online, AI for Anyone teaches students the basics of artificial intelligence, increases awareness of AI's role in society and shows how the technology can be used.

"We had support that really gave us a lot of lucky breaks," Haroon says, referring to the opportunities they were afforded after coming to the US. "We want to ... help give [kids] a lucky break in the form of some knowledge that may help them make a pivot in their lives," he says.

It wasn't just about lucky breaks for Haroon and Hamza. There was a lot of hard work too. But it is true that the brothers have lived some version of the American Dream.

After coming to the US when Haroon was 6 and Hamza was 4, their family lived with nine relatives in a two-bedroom apartment in Brooklyn, and later on a poultry farm on Maryland's Eastern Shore. Their father worked any number of jobs, from baker to taxi driver to tow truck driver.

Haroon and Hamza Choudery with their father, Shabbir Choudery, and their sister, Rahat Choudery.

Photo courtesy A.I. for Anyone

Haroon (left) and Hamza Choudery in Pakistan.

Photo courtesy A.I. for Anyone

In 2011, Haroon won a Gates Millennium scholarship, which gave him a full ride (including tuition, housing, food and transportation) both to Penn State for undergrad and to the University of California, Berkeley, where he got his master's in information and data science. After college, Haroon did data science work for Mark Cuban Companies and was a technology consultant at Deloitte Consulting. He is now a data scientist at Komodo Health.

Hamza graduated magna cum laude from the University of Maryland. He previously worked at Facebook, and now works in business operations at WeWork.

Today, living in New York City, the brothers could easily spend a couple of dollars on a cup of coffee, the same amount their family had to live on for a month in Pakistan. Living in both realities, Hamza says, "has contextualized the poverty and it has also contextualized the success."

And it has been the brothers' "call to action" to launch their education initiative.

In 2017, Haroon, Hamza and their friend Mac McMahon started AI for Anyone with $5,000 of their own money.

The idea was to educate those who might be at risk of having their livelihood affected by artificial intelligence and arm them for the future.

The organization's team goes to schools to present workshops that teach kids who might not otherwise learn about AI, because as important as it is to the future of work, it is not part of a regular high school curriculum.

So far, AI for Anyone has taught approximately 50 workshops, reaching over 55,000 people, according to the Chouderys. It also has a monthly newsletter, All About AI, with over 33,000 subscribers, as well as a new podcast, AI For You. (One episode has an interview with Hod Lipson, a renowned professor in the AI space.)

The non-profit is now funded by corporate sponsorships from Hypergiant and Komodo Health, so the workshops are free to students and teachers.

Haroon Choudery, a co-founder of A.I. for Anyone, teaching a workshop.

Photo courtesy A.I. for Anyone

Even the pandemic has not stopped AI for Anyone, as the team has taken their seminars virtual.

The first virtual workshop in April was a partnership with the Mark Cuban Foundation, the billionaire tech entrepreneur's philanthropy, via a connection Haroon made through the work he did at Mark Cuban Companies.

"When COVID-19 hit, Haroon and I reconnected and realized we were both thinking about ways to teach AI in a bite-sized way to kids stuck at home," saysRyan Kline, an associate at Mark Cuban Companies. "AI for Anyone is doing great work in fundamental AI education, reaching wide audiences of young students."

They collaborated to digitize the AI for Anyone workshop. Then if students want to learn more, they can be funneled into the Mark Cuban Foundation's Intro to AI Bootcamps, a collaboration between the Mark Cuban Foundation, Microsoft and Walmart, which was announced in 2019.

Cuban posted about the workshop on LinkedIn.

"We see AI for Anyone as providing a spark for hundreds of students to advance their AI learning, and hope that many AI for Anyone graduates will apply to participate in the Mark Cuban Foundation Bootcamps as we expand nationwide," Kline says.

AI for Anyone is still growing, but the purpose is clear for the founders.

"A.I. for Anyone works [as] one of the most appropriate and most fitting ways for us to use our privilege to give back to those that are less privileged than us," Haroon says.

See also:

Amid the coronavirus pandemic, many companies could replace their workers with robots

Merck CEO on success: I was one of a 'few inner city black kids' who rode bus 90 minutes to school

Barack Obama: This is what you can do to reform the system that leads to police brutality

The rest is here:

These young immigrant brothers are teaching A.I. to high-schoolers for free: We want to give kids 'a lucky break' - CNBC

How artificial intelligence outsmarted the superbugs – The Guardian

One of the seminal texts for anyone interested in technology and society is Melvin Kranzberg's Six Laws of Technology, the first of which says that technology is neither good nor bad; nor is it neutral. By this, Kranzberg meant that technology's interaction with society is such that technical developments frequently have environmental, social and human consequences that go far beyond the immediate purposes of the technical devices and practices themselves, and the same technology can have quite different results when introduced into different contexts or under different circumstances.

The saloon-bar version of this is that technology is both good and bad; it all depends on how it's used, a tactic that tech evangelists regularly deploy as a way of stopping the conversation. So a better way of using Kranzberg's law is to ask a simple Latin question: cui bono? Who benefits from any proposed or hyped technology? And, by implication, who loses?

With any general-purpose technology, which is what the internet has become, the answer is going to be complicated: various groups, societies, sectors, maybe even continents win and lose, so in the end the question comes down to: who benefits most? For the internet as a whole, it's too early to say. But when we focus on a particular digital technology, then things become a bit clearer.

A case in point is the technology known as machine learning, a manifestation of artificial intelligence that is the tech obsession de nos jours. It's really a combination of algorithms that are trained on big data, ie huge datasets. In principle, anyone with the computational skills to use freely available software tools such as TensorFlow could do machine learning. But in practice they can't, because they don't have access to the massive data needed to train their algorithms.

This means the outfits where most of the leading machine-learning research is being done are a small number of tech giants, especially Google, Facebook and Amazon, which have accumulated colossal silos of behavioural data over the last two decades. Since they have come to dominate the technology, the Kranzberg question (who benefits?) is easy to answer: they do. Machine learning now drives everything in those businesses: personalisation of services, recommendations, precisely targeted advertising, behavioural prediction... For them, AI (by which they mostly mean machine learning) is everywhere. And it is making them the most profitable enterprises in the history of capitalism.

As a consequence, a powerful technology with great potential for good is at the moment deployed mainly for privatised gain. In the process, it has been characterised by unregulated premature deployment, algorithmic bias, reinforcing inequality, undermining democratic processes and boosting covert surveillance to toxic levels. That it doesn't have to be like this was vividly demonstrated last week with a report in the leading biological journal Cell of an extraordinary project, which harnessed machine learning in the public (as compared to the private) interest. The researchers used the technology to tackle the problem of bacterial resistance to conventional antibiotics, a problem that is rising dramatically worldwide, with predictions that, without a solution, resistant infections could kill 10 million people a year by 2050.

The team of MIT and Harvard researchers built a neural network (an algorithm inspired by the brain's architecture) and trained it to spot molecules that inhibit the growth of the Escherichia coli bacterium, using a dataset of 2,335 molecules for which the antibacterial activity was known, including a library of 300 existing approved antibiotics and 800 natural products from plant, animal and microbial sources. They then asked the network to predict which would be effective against E coli but looked different from conventional antibiotics. This produced a hundred candidates for physical testing and led to one (which they named halicin after the HAL 9000 computer from 2001: A Space Odyssey) that was active against a wide spectrum of pathogens, notably including two that are totally resistant to current antibiotics and are therefore a looming nightmare for hospitals worldwide.
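
The published model is a message-passing network that operates on molecular graphs; the sketch below only illustrates the general train-then-screen workflow on made-up fingerprint vectors, using the TensorFlow/Keras tooling mentioned above. The dataset sizes echo the article; everything else is assumed for illustration.

```python
import numpy as np
from tensorflow import keras

# Minimal sketch of the train-then-screen workflow -- NOT the MIT/Harvard model.
# Here every molecule is a made-up 1024-bit fingerprint with a 0/1 activity label.
rng = np.random.default_rng(0)
X_known = rng.integers(0, 2, size=(2335, 1024)).astype("float32")   # molecules with measured activity
y_known = rng.integers(0, 2, size=2335).astype("float32")           # 1 = inhibits E. coli growth

model = keras.Sequential([
    keras.Input(shape=(1024,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),    # predicted probability of antibacterial activity
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_known, y_known, epochs=3, batch_size=64, verbose=0)

# "Screening": score a large unseen library and keep the top hits for physical testing.
X_library = rng.integers(0, 2, size=(10_000, 1024)).astype("float32")
scores = model.predict(X_library, verbose=0).ravel()
top_candidates = np.argsort(scores)[::-1][:100]
print(top_candidates[:5], scores[top_candidates[:5]])
```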

There are a number of other examples of machine learning for public good rather than private gain. One thinks, for example, of the collaboration between Google DeepMind and Moorfields eye hospital. But this new example is the most spectacular to date because it goes beyond augmenting human screening capabilities to aiding the process of discovery. So while the main beneficiaries of machine learning for, say, a toxic technology like facial recognition are mostly authoritarian political regimes and a range of untrustworthy or unsavoury private companies, the beneficiaries of the technology as an aid to scientific discovery could be humanity as a species. The technology, in other words, is both good and bad. Kranzberg's first law rules OK.

Every cloud: Zeynep Tufekci has written a perceptive essay for the Atlantic about how the coronavirus revealed authoritarianism's fatal flaw.

EU ideas explained: Politico writers Laura Kayali, Melissa Heikkilä and Janosch Delcker have delivered a shrewd analysis of the underlying strategy behind recent policy documents from the EU dealing with the digital future.

On the nature of loss: Jill Lepore has written a knockout piece for the New Yorker under the heading "The lingering of loss", on friendship, grief and remembrance. One of the best things I've read in years.

Excerpt from:

How artificial intelligence outsmarted the superbugs - The Guardian

Google to ramp up AI efforts to ID extremism on YouTube – TechCrunch

Last week Facebook solicited help with what it dubbed "hard questions", including how it should tackle the spread of terrorism propaganda on its platform.

Yesterday Google followed suit with its own public pronouncement, via an op-ed in the FT newspaper, explaining how it's ramping up measures to tackle extremist content.

Both companies have been coming under increasing political pressure, in Europe especially, to do more to quash extremist content, with politicians, including in the UK and Germany, pointing the finger of blame at platforms such as YouTube for hosting hate speech and extremist content.

Europe has suffered a spate of terror attacks in recent years, with four in the UK alone since March. And governments in the UK and France are currently considering whether to introduce a new liability for tech platforms that fail to promptly remove terrorist content, arguing that terrorists are being radicalized with the help of such content.

Earlier this month the UK's prime minister also called for international agreements between allied, democratic governments to regulate cyberspace to prevent the spread of extremism and terrorist planning.

In Germany, meanwhile, a proposal that includes big fines for social media firms that fail to take down hate speech has already gained government backing.

Besides the threat of fines being cast into law, there's an additional commercial incentive for Google after YouTube faced an advertiser backlash earlier this year related to ads being displayed alongside extremist content, with several companies pulling their ads from the platform.

Google subsequently updated the platform's guidelines to stop ads being served to controversial content, including videos containing hateful content and incendiary and demeaning content, so their makers could no longer monetize the content via Google's ad network. Although the company still needs to be able to identify such content for this measure to be successful.

Rather than requesting ideas for combating the spread of extremist content, as Facebook did last week, Google is simply stating what its plan of action is, detailing four additional steps it says it's going to take, and conceding that more action is needed to limit the spread of violent extremism.

"While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now," writes Kent Walker, Google's general counsel, in a blog post.

The four additional steps Walker lists are laid out in his post.

Despite increasing political pressure over extremism and the attendant bad PR (not to mention the threat of big fines), Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can't be directly accused of providing violent individuals with a revenue stream. (Assuming it's able to correctly identify all the problem content, of course.)

Whether this compromise will please either side on the "remove hate speech" vs "retain free speech" debate remains to be seen. The risk is it will please neither demographic.

The success of the approach will also stand or fall on how quickly and accurately Google is able to identify content deemed a problem, and policing user-generated content at such scale is a very hard problem.

It's not clear exactly how many thousands of content reviewers Google employs at this point; we've asked and will update this post with any response.

Facebook recently added an additional 3,000 to its headcount, bringing the total number of reviewers to 7,500. CEO Mark Zuckerberg also wants to apply AI to the content identification issue but has previously said it's unlikely to be able to do this successfully for many years.

Touching on what Google has been doing already to tackle extremist content, i.e. prior to these additional measures, Walker writes: "We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts."
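
Walker doesn't detail the image-matching system, but re-upload prevention is commonly done by comparing a perceptual hash of each uploaded frame against a blocklist of hashes of known violating material. Below is a hedged sketch using the third-party imagehash library; the blocklisted hash and file name are placeholders, not anything from Google's actual pipeline.

```python
from PIL import Image
import imagehash  # third-party: pip install pillow imagehash

# Placeholder blocklist of perceptual hashes of known violating frames.
# Generic illustration of hash-based re-upload matching only.
BLOCKLIST = {imagehash.hex_to_hash("d5d5b5a5a5a5b5d5")}

def is_known_reupload(frame_path: str, max_distance: int = 5) -> bool:
    """Flag a frame whose perceptual hash lands within a small Hamming
    distance of any blocklisted hash, so re-encodes, crops and slight
    edits still match."""
    candidate = imagehash.phash(Image.open(frame_path))
    return any(candidate - known <= max_distance for known in BLOCKLIST)

if is_known_reupload("upload_frame_0001.png"):   # hypothetical extracted frame
    print("Route to human review / block the upload")
```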

Go here to see the original:

Google to ramp up AI efforts to ID extremism on YouTube - TechCrunch

Cleverbot.com – a clever bot – speak to an AI with some …

About Cleverbot

The site Cleverbot.com started in 2006, but the AI was 'born' in 1988, when Rollo Carpenter saw how to make his machine learn. It has been learning ever since!

Things you say to Cleverbot today may influence what it says to others in future. The program chooses how to respond to you fuzzily, and contextually, the whole of your conversation being compared to the millions that have taken place before.
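
Cleverbot's matching is proprietary, but the idea of replying by comparing the current conversation with the millions logged before it can be sketched with a toy retrieval responder. The logged exchanges and the word-overlap score below are purely illustrative.

```python
from collections import Counter

# Toy retrieval-based responder: reuse the reply a human once gave in the
# most similar logged context. Illustrative only -- not Cleverbot's algorithm.
LOGGED_EXCHANGES = [
    (["hi there", "how are you"], "I'm fine, how are you?"),
    (["do you like music", "what kind"], "Mostly rock, honestly."),
    (["are you a robot"], "No, are you?"),
]

def overlap(context_a: list[str], context_b: list[str]) -> int:
    """Crude similarity: number of words two conversation contexts share."""
    words_a = Counter(" ".join(context_a).lower().split())
    words_b = Counter(" ".join(context_b).lower().split())
    return sum((words_a & words_b).values())

def reply(conversation: list[str]) -> str:
    best_context, best_reply = max(
        LOGGED_EXCHANGES, key=lambda ex: overlap(ex[0], conversation)
    )
    return best_reply

print(reply(["hello", "are you a real robot"]))   # -> "No, are you?"
```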

Many people say there is no bot - that it is connecting people together, live. The AI can seem human because it says things real people do say, but it is always software, imitating people.

You'll have seen scissors on Cleverbot. Using them you can share snippets of chats with friends on social networks. Now you can share snips at Cleverbot.com too!

When you sign in to Cleverbot on this blue bar, you can:

Tweak how the AI responds - 3 different ways!
Keep a history of multiple conversations
Switch between conversations
Return to a conversation on any machine
Publish snippets - snips! - for the world to see
Find and follow friends
Be followed yourself!
Rate snips, and see the funniest of them
Reply to snips posted by others
Vote on replies, from awful to great!
Choose not to show the scissors

Read more:

Cleverbot.com - a clever bot - speak to an AI with some ...

MIT’s AI streaming software aims to stop those video stutters – TechCrunch

MIT's Computer Science and Artificial Intelligence Lab (CSAIL) wants to ensure your streaming video experience stays smooth. A research team led by MIT professor Mohammad Alizadeh has developed an artificial intelligence (dubbed Pensieve) that can select the best algorithms for ensuring video streams both without interruption and at the best possible playback quality.

The method improves upon existing tech, including the adaptive bitrate (ABR) method used by YouTube that throttles back quality to keep videos playing, albeit with pixelation and other artifacts. The AI can select different algorithms depending on what kind of network conditions a device is experiencing, cutting down on the downsides associated with any one method.

During experimentation, the CSAIL research team behind this method found that video streamed with between 10 and 30 percent less rebuffering, and with 10 to 25 percent better quality. Those gains would definitely add up to a significantly improved experience for most video viewers, especially over a long period.

The difference in CSAIL's Pensieve approach vs. traditional methods is mainly in its use of a neural network instead of a strictly algorithmic approach. The neural net learns how to optimize through a reward system that incentivizes smoother video playback, rather than setting out defined rules about which algorithmic techniques to use when buffering video.

Researchers say the system is also potentially tweakable on the user end, depending on what they want to prioritize in playback: You could, for instance, set Pensieve to optimize for playback quality, or conversely, for playback speed, or even for conservation of data.
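
The published Pensieve work trains its network against a quality-of-experience reward that trades off bitrate, rebuffering time and smoothness; the sketch below shows such a reward and how reweighting it shifts what the controller optimizes for. The weights are illustrative placeholders, not Pensieve's actual constants.

```python
def qoe_reward(bitrate_mbps: float, rebuffer_s: float, prev_bitrate_mbps: float,
               w_quality: float = 1.0, w_rebuffer: float = 4.0, w_smooth: float = 1.0) -> float:
    """Per-chunk quality-of-experience reward of the kind used to train an RL
    bitrate controller: reward higher quality, penalize stalls and abrupt
    quality switches. Weights here are illustrative placeholders."""
    return (w_quality * bitrate_mbps
            - w_rebuffer * rebuffer_s
            - w_smooth * abs(bitrate_mbps - prev_bitrate_mbps))

# Prioritizing playback quality vs. uninterrupted playback is just a different
# weighting of the same reward.
print(qoe_reward(4.3, rebuffer_s=0.5, prev_bitrate_mbps=1.2))                    # balanced
print(qoe_reward(4.3, rebuffer_s=0.5, prev_bitrate_mbps=1.2, w_rebuffer=10.0))   # punish stalls harder
```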

The team is presenting Pensieve and open-sourcing its code at SIGCOMM next week in LA, and they expect that when trained on a larger data set, it could provide even greater improvements in performance and quality. They're also now going to test applying it to VR video, since the high bitrates required for a quality experience there are well suited to the kinds of improvements Pensieve can offer.

See the rest here:

MIT's AI streaming software aims to stop those video stutters - TechCrunch

Verizon and IBM Will Partner on 5G and AI – The Motley Fool

Verizon (NYSE:VZ) and IBM (NYSE:IBM) announced on Wednesday an extensive new partnership that would focus on a host of forward-looking technology, including 5G, edge computing, and artificial intelligence (AI).

The companies plan to use Verizon's high-speed, low-latency wireless 5G network, multi-access edge computing (MEC) capabilities, and Internet of Things (IoT) devices and sensors, and combine them with IBM's expertise in AI, hybrid multicloud, edge computing, asset management, and connected operations.

Image source: Getty Images.

By joining forces and leveraging each business's unique expertise, Verizon and IBM will initially offer mobile asset tracking and management solutions designed to help enterprises "improve operations, optimize production quality, and help clients enhance worker safety."

IBM and Verizon will also work to develop combined solutions for 5G and edge computing such as near-real-time cognitive automation for industrial applications. The combined solutions could help clients "detect, locate, diagnose and respond to system anomalies, monitor asset health and help predict failures in near real-time."

Many computation-intensive tasks happen at a data center, which can be thousands of miles from where the information is generated. Edge computing takes the processing of data from the cloud and moves it closer to the source, or the "edge."

The companies plan to use Verizon's lightning-fast 5G network to increase the number of IoT devices that can be used in a particular geographic area. This will give organizations the ability to interact with those devices in near real time by bringing the necessary computing power within close proximity of the devices.

Verizon and IBM hope to develop "innovative new applications" that could include remote control robotics, near-real-time cognitive video analysis, and industrial plant automation.

Visit link:

Verizon and IBM Will Partner on 5G and AI - The Motley Fool

Frankenstein fears hang over AI – Financial Times

The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data will be collected about people.

Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.

Failures of autonomous systems, like the death last year of a US motorist in a partially self-driving car from Tesla Motors, have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. "That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here," he says.

Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK's vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.

Global elites, those with high income and educational levels who live in capital cities, are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.

Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: "Tech companies must accept responsibility for what they're creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products."

The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees' work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.

Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: "We've seen papers...that address the technical problem of safety."

There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago, when Bill Gates rallied his company's developers to combat computer malware. His "trustworthy computing" initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.

AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. "Many of our data sets have been collected...with assumptions we may not deeply understand, and we don't want our machine-learned applications...to be amplifying cultural biases," he said.

Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.

Greater transparency is one way forward, for example making it clear what information AI systems have used. But the thought processes of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. "We need to understand how to justify [their] decisions and how the thinking is done."
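
One concrete form the ProPublica-style audit can take is comparing false positive rates across groups: how often people who did not reoffend were nonetheless labelled high risk. The records in this sketch are invented; a real audit would use the system's actual predictions and outcomes.

```python
# Invented records: (group, predicted_high_risk, reoffended).
# A real audit would use the algorithm's actual predictions and outcomes.
records = [
    ("black", 1, 0), ("black", 1, 1), ("black", 0, 0),
    ("white", 0, 0), ("white", 1, 1), ("white", 0, 0),
]

def false_positive_rate(group: str) -> float:
    """Share of people in `group` who did NOT reoffend but were still labelled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and r[2] == 0]
    flagged = [r for r in non_reoffenders if r[1] == 1]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    print(group, false_positive_rate(group))
# A large gap between these two rates is the kind of disparity the investigation reported.
```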

As AI comes to influence more government and business decisions, the ramifications will be widespread. "How do we make sure the machines we train don't perpetuate and amplify the same human biases that plague society?" asks Joi Ito, director of MIT's Media Lab.

Executives like Mr Nadella believe a mixture of government oversight, including, by implication, the regulation of algorithms, and industry action will be the answer. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.

He says: "I want...an ethics board that says, 'If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact...that it doesn't come with some bias that's built in.'"

Making sure AI systems benefit humans without unintended consequences is difficult. "Human society is incapable of defining what it wants," says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.

This is AI's so-called control problem: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. "The machine has to allow for uncertainty about what it is the human really wants," says Prof Russell.

Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year's World Economic Forum in Davos as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling, though other jobs could be replaced.

The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. "Whenever someone cuts cost, that means, hopefully, a surplus is being created," says Mr Nadella. "You can always tax surplus; you can always make sure that surplus gets distributed differently."

Additional reporting by Adam Jezard

More here:

Frankenstein fears hang over AI - Financial Times

ai – Wiktionary

[Dictionary entry: "ai" across many languages. English: ai, a noun borrowed 1685-95 via Brazilian Portuguese from Old Tupi (the three-toed sloth), and a contraction of "aight". Albanian: ai, "he", from Proto-Albanian, ultimately Proto-Indo-European. Portuguese: ai, an interjection ("Ai! Pisei no prego!" / "Ouch! I stepped on the nail!"). Romanian: ai, an inflected form of avea ("to have"), from Latin habes, and a regional noun for garlic, from Latin allium. Many further forms in Oceanic, Malayo-Polynesian, creole and other languages descend from Proto-Malayo-Polynesian *wahi or are borrowed from English eye, I, aye or high.]

Read more:

ai - Wiktionary