AI is creating new types of art, and new types of artists – Seattle Times

The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

MOUNTAIN VIEW, Calif. - In the mid-1990s, Douglas Eck worked as a database programmer in Albuquerque, New Mexico, while moonlighting as a musician. After a day spent writing computer code inside a lab run by the Department of Energy, he would take the stage at a local juke joint, playing what he calls "punk-influenced bluegrass" - Johnny Rotten crossed with Johnny Cash. But what he really wanted to do was combine his days and nights, and build machines that could make their own songs. "My only goal in life was to mix AI and music," Eck said.

It was a naive ambition. Enrolling as a graduate student at Indiana University, in Bloomington, not far from where he grew up, he pitched the idea to Douglas Hofstadter, the cognitive scientist who wrote the Pulitzer Prize-winning book on minds and machines, "Gödel, Escher, Bach: An Eternal Golden Braid." Hofstadter turned him down, adamant that even the latest artificial intelligence techniques were much too primitive.

But during the next two decades, working on the fringe of academia, Eck kept chasing the idea, and eventually, the AI caught up with his ambition.

Last spring, a few years after taking a research job at Google, Eck pitched the same idea he had pitched to Hofstadter all those years ago. The result is Project Magenta, a team of Google researchers who are teaching machines to create not only their own music but also many other forms of art, including sketches, videos and jokes.

With its empire of smartphones, apps and internet services, Google is in the business of communication, and Eck sees Magenta as a natural extension of this work.

"It's about creating new ways for people to communicate," he said during a recent interview inside the small two-story building here that serves as headquarters for Google AI research.

The project is part of a growing effort to generate art through a set of AI techniques that have only recently come of age. Called deep neural networks, these complex mathematical systems allow machines to learn specific behavior by analyzing vast amounts of data.

By looking for common patterns in millions of bicycle photos, for instance, a neural network can learn to recognize a bike. This is how Facebook identifies faces in online photos, how Android phones recognize spoken commands, and how Microsoft's Skype translates one language into another. But these complex systems can also create art. By analyzing a set of songs, for instance, they can learn to build similar sounds.
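
As a toy illustration (my sketch, not code from any of these companies), this is the essence of learning from examples: a single artificial "neuron" adjusts its weights until it can tell which side of a line a point falls on, using nothing but labeled samples.

```python
# A single artificial neuron learns a rule purely by adjusting weights
# whenever its guess on a training example is wrong (the perceptron rule).
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred           # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples: class 1 lies above the line y = x, class 0 below it.
samples = [(0, 1), (1, 2), (2, 3), (1, 0), (2, 1), (3, 2)]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_perceptron(samples, labels)
classify = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Deep neural networks stack many such units in layers and train them on millions of examples, but the underlying idea, nudging weights to fit data, is the same.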

As Eck says, these systems are at least approaching the point - still many, many years away - when a machine can instantly build a new Beatles song, or perhaps trillions of new Beatles songs, each sounding a lot like the music the Beatles themselves recorded, but also a little different.

But that end game - as much a way of undermining art as of creating it - is not what he is after. There are so many other paths to explore beyond mere mimicry. The ultimate idea is not to replace artists but to give them tools that allow them to create in entirely new ways.

For centuries, orchestral conductors have layered sounds from various instruments atop one another. But this is different. Rather than layering sounds, Eck and his team are combining them to form something that did not exist before, creating new ways that artists can work.

"We're making the next film camera," Eck said. "We're making the next electric guitar."

Called NSynth, this particular project is only just getting off the ground. But across the worlds of both art and technology, many are already developing an appetite for building new art through neural networks and other AI techniques.

"This work has exploded over the last few years," said Adam Ferris, a photographer and artist in Los Angeles. "This is a totally new aesthetic."

In 2015, a separate team of researchers inside Google created DeepDream, a tool that uses neural networks to generate haunting, hallucinogenic imagescapes from existing photography, and this has spawned new art inside Google and out. If the tool analyzes a photo of a dog and finds a bit of fur that looks vaguely like an eyeball, it will enhance that bit of fur and then repeat the process. The result is a dog covered in swirling eyeballs.

At the same time, a number of artists like the well-known multimedia performance artist Trevor Paglen or the lesser-known Adam Ferris are exploring neural networks in other ways.

In January, Paglen gave a performance in an old maritime warehouse in San Francisco that explored the ethics of computer vision through neural networks that can track the way we look and move. While members of the avant-garde Kronos Quartet played onstage, for example, neural networks analyzed their expressions in real time, guessing at their emotions.

The tools are new, but the attitude is not. Allison Parrish, a New York University professor who builds software that generates poetry, points out that artists have been using computers to generate art since the 1950s. Much as Jackson Pollock figured out a new way to paint by opening the paint can and splashing it on the canvas beneath him, she said, these new computational techniques create a broader palette for artists.

A year ago, David Ha was a trader with Goldman Sachs in Tokyo. During his lunch breaks he started toying with neural networks and posting the results to a blog under a pseudonym. Among other things, he built a neural network that learned to write its own Kanji, the logographic Chinese characters that are not so much written as drawn.

Soon, Eck and other Googlers spotted the blog, and now Ha is a researcher with Google Magenta. Through a project called SketchRNN, he is building neural networks that can draw.

By analyzing thousands of digital sketches made by ordinary people, these neural networks can learn to make images of things like pigs, trucks, boats or yoga poses. They do not copy what people have drawn. They learn to draw on their own, to mathematically identify what a pig drawing looks like. Then you can ask them to, say, draw a pig with a cat's head, visually subtract a foot from a horse, sketch a truck that looks like a dog or build a boat from a few random squiggly lines.

Next to NSynth or DeepDream, these may seem less like tools that artists will use to build new works. But if you play with them, you realize that they are themselves art, living works built by Ha. AI is not just creating new kinds of art; it is creating new kinds of artists.


Google rolls out updates to AI Platform Prediction and AI Platform Training – VentureBeat

Google's AI Platform, a cloud-hosted service facilitating machine learning and data science workflows, today gained support for backend machine types that tap powerful Nvidia graphics chips. In related news, Google debuted a refreshed model training experience that allows users to run a training script on a range of hardware.

For the uninitiated, AI Platform enables developers to prep, build, run, and share machine learning models quickly and easily in the cloud. Using built-in data labeling services, they're able to annotate model training images, videos, audio, and text corpora by applying classification, object detection, and entity extraction. A managed Jupyter Notebook service provides support for a slew of machine learning frameworks, including Google's TensorFlow, while a dashboard within the Google Cloud Platform console exposes controls for managing, experimenting with, and deploying models in the cloud or on-premises.

Now, AI Platform Prediction, the component of AI Platform that enables model serving for online predictions in a serverless environment, lets developers choose from a set of machine types in Google's Compute Engine service to run a model. Thanks to a new backend built on Google Kubernetes Engine, they're able to add graphics chips like Nvidia's T4 and have AI Platform Prediction handle provisioning, scaling, and serving. (Online Prediction previously only allowed a choice between one- and four-vCPU machine types.)
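
As a rough sketch of what this looks like in practice (the model and bucket names here are hypothetical, and flag details may differ by release), deploying a model version onto a GPU-backed machine type used the gcloud CLI along these lines:

```
# Hypothetical model/version/bucket names; --machine-type and --accelerator
# select the Compute Engine hardware that serves online predictions.
gcloud beta ai-platform versions create v1 \
  --model my_model \
  --origin gs://my-bucket/model/ \
  --runtime-version 1.15 \
  --machine-type n1-standard-4 \
  --accelerator count=1,type=nvidia-tesla-t4
```

With that in place, provisioning, scaling, and serving of the GPU-backed version are handled by the platform rather than by the developer.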

Additionally, prediction requests and responses can now be logged to Google's BigQuery, where they can be analyzed to detect skew and outliers.

As for AI Platform Training, which allows data scientists to run a training script on a variety of hardware without having to manage the underlying machines, it now supports custom containers, letting researchers launch any Docker container so that they can train a model with any language, framework, or dependencies. Furthermore, AI Platform Training gained Compute Engine machine types for training, which allow for the piecemeal selection of any combination of CPUs, RAM, and accelerators.

"Cloud AI Platform simplifies training and deploying models, letting you focus on using AI to solve your most challenging issues. From optimizing mobile games to detecting diseases to 3D modeling houses, businesses are constantly finding new, creative uses for machine learning," wrote Cloud AI Platform product manager Henry Tappen in a blog post. "With more inference hardware and training software choices, we look forward to seeing what challenges you use AI to tackle in the future."


Global AI Governance Market is accounted for xx USD million in 2019 and is expected to reach xx USD million by 2025 growing at a CAGR of xx% : IBM,…

This detailed and well-synchronized research report on the AI Governance market is an up-to-date, ready-to-refer analysis that allows readers to draw market-specific cues about key growth influencers, particularly the COVID-19 implications that have visibly disrupted normal industry processes in multiple ways.

This thoroughly documented research report on the AI Governance market is dedicated to offering a detailed analysis of the impact of the COVID-19 outbreak since the turn of 2020. It is intended to aid vital market-specific decisions among the stakeholders who direct the growth trajectory of the AI Governance market under the influence of the outbreak and its concomitant developments.

This study covers the following key players: IBM, Google, Facebook, AWS, Microsoft, Salesforce.com, SAP, FICO, 2021.AI, ZestFinance, SAS Institute, Pymetrics, H2O.AI, integrate.ai

Request a sample of this report @ https://www.orbismarketreports.com/sample-request/94335?utm_source=Pooja

The report is designed to inform business decisions among notable stakeholders in the AI Governance market (research analysts, suppliers, market players and participants, and industry leaders) who are visibly affected by ongoing market developments, especially COVID-19 implications. It offers readers essential data for a seamless interpretation of the AI Governance market, and incorporates analytical practices such as SWOT and PESTEL analysis to identify optimum profit resources in the market.

Access Complete Report @ https://www.orbismarketreports.com/global-ai-governance-market-growth-analysis-by-trends-and-forecast-2019-2025?utm_source=Pooja

Market segment by Type, the product can be split into: Software, Service, Other

Market segment by Application, split into: BFSI, Healthcare and Life Sciences, Government and Defense, Retail and Car

The report in its subsequent sections also portrays a detailed overview of the competitive spectrum, profiling leading players and the business decisions influencing growth in the AI Governance market. In this latest research publication, a thorough overview of the current market scenario is presented to help market participants, stakeholders, research analysts and industry veterans draw insightful cues from this ready-to-use report and reach a definitive business decision.

Some Major TOC Points: 1. Report Overview; 2. Global Growth Trends; 3. Market Share by Key Players; 4. Breakdown Data by Type and Application; Continued...

Besides assessing details pertaining to the production, distribution and sales value chain, this research output highlights crucial developments across regions and vital countries, lending a decisive understanding of the development scenario likely to unfold in the AI Governance market in the near future.

Therefore, to enable a flawless market-specific business decision aligned with best industry practices, this report on the AI Governance market also lends a systematic rundown of vital growth-triggering elements, comprising market opportunities and barrier analysis.

For Enquiry before buying report @ https://www.orbismarketreports.com/enquiry-before-buying/94335?utm_source=Pooja

About Us : With unfailing market gauging skills, has been excelling in curating tailored business intelligence data across industry verticals. Constantly thriving to expand our skill development, our strength lies in dedicated intellectuals with dynamic problem solving intent, ever willing to mold boundaries to scale heights in market interpretation.

Contact Us: Hector Costello, Senior Manager, Client Engagements, 4144 N Central Expressway, Suite 600, Dallas, Texas 75204, U.S.A. Phone No.: USA: +1 (972)-362-8199 | IND: +91 895 659 5155


Winter Is Coming - For The Economy And AI – Forbes

Storm in the Sierra mountains (Photo by George Rose/Getty Images)

Economic winter has been a long time coming; the U.S. has had the longest-running bull market in the country's history. Markets, startups, and corporate innovation and expansion have flourished for over a decade. But now we appear to be entering the early days of winter, as markets fall on coronavirus threats and large companies, including automotive, retail, and technology firms in the U.S. and Europe, announce substantial layoffs.

Even before the coronavirus-related market declines, my conversations with companies and their consultants indicated that more companies are talking about restructurings and capital expense reductions than expansions. Several economists have raised the odds of a recession in 2020 to 50% or above. It's too early to call a recession, and I hope, of course, that this economic retrenchment will be short-lived. However, it is clearly wise for any company to prepare a response to dark economic clouds on the horizon.

Which brings up the fate of artificial intelligence (AI) projects in an economic downturn. I don't anticipate an AI winter (a major reduction in AI startups and innovation) to match an economic winter. AI is the most important new technology of the past decade, and it's an extension of the rise in data and analytics that's been going on for even longer. The amount of data, and the need to make sense of it with analytics and AI, will continue to grow. However, I do anticipate a change in focus for AI over the next several years, and I'm already beginning to see it play out.[1]

A Shift in AI Focus

The primary focus of AI in many companies thus far has been innovation and exploration, almost to a fault. Firms wanted to learn about the technology and its fit with their strategies and processes, so they initiated many pilots and proofs of concept. Relatively few of these have become production deployments of AI, which means that they haven't delivered significant economic value. In terms of objectives for AI, in one large company survey that I helped analyze, many respondents reported innovation-oriented objectives like "enhance current products" (44%, the most common response), "free workers to be more creative" (31%), "create new products" (27%), "capture and apply scarce knowledge" (27%), and "pursue new markets" (24%).

The change in economic climate, however, is likely to bring a decreased level of growth in AI investments, and a change in their focus. A January 2020 survey of large U.S. companies by NewVantage Partners, when the economy was still humming along, found a major change from the 2019 survey. When asked if the pace of investment in AI and big data was accelerating, 92% agreed in 2019, but only 52% agreed in 2020.

In terms of objectives, it seems likely that internal and external operational improvements will become a stronger focus than innovation-oriented projects. And labor automation is likely to increase as a focus as well. In the 2017 and 2018 State of Cognitive surveys, "reduce headcount through automation" was the lowest-ranked objective, at 22% and 24%, respectively. And the companies I have consulted for or researched have almost always said that they plan to redeploy any workers freed up by automation into other tasks and jobs. I know of very few jobs that have been lost to AI and automation technologies thus far.

But even in the 2018 survey, respondents agreed that "to cut costs, my company wants to automate as many jobs as possible with cognitive/AI." That sentiment, thus far confined to anonymous surveys and back-room conversations, is likely to become more prominent in an economic climate characterized by retrenchment. It is a challenging issue for leaders, however; they often require the cooperation and task knowledge of their front-line workers in order to successfully implement automation solutions, and those may be withheld if employees suspect that their jobs are at risk. This set of concerns is most likely to arise with regard to large-scale projects involving automation-oriented technologies like robotic process automation (RPA).

The Return of Technology-Enabled Reengineering?

In the early 1990s, companies were faced with a recession and a challenge from global competitors, particularly in Japan. At that time I published an article, and later a book, arguing that firms needed to redesign end-to-end business processes ("order to cash" or "procure to pay," for example) using information technology, with the goal of radical improvement. Other writers (a little later, I'm proud to say) made similar arguments, and the business process reengineering movement came to characterize the first half of the 1990s. The innovative technologies of that period were enterprise resource planning (ERP) systems and, somewhat later, the Internet.

I'm seeing similar conditions in the early 2020s. The economy is challenging, competition from abroad (notably China) is growing rapidly, and there are new technologies that can drive new ways of doing business. The primary technology enabling change today is AI, including RPA and other automation technologies, as well as process mining to reveal how work is being done.

In a new round of process reengineering, companies could identify, with help from process mining, the end-to-end processes that need the most help; document and measure their current flows and performance; consider how various forms of AI might yield a better-performing process; and determine a new mix of human and machine-based tasks. Those steps could also be undertaken with regard to smaller processes using less dramatic process improvement approaches like Six Sigma and tools like RPA. Indeed, several companies I've encountered, including Voya Financial and Lloyds Banking Group, are combining RPA capabilities and process improvement, and ensuring that processes are improved before they are automated.

It's important that companies don't repeat many of the mistakes of the earlier generation of reengineering projects. They shouldn't use the reengineering term, or refer to AI, to describe layoffs that involve no process redesign and no technology enablement. Layoffs that aren't carefully considered often result in skilled people leaving the company, few gains in efficiency, and jobs that come back when the economy improves.

The Mandate for AI-Driven Productivity

I hope that AI will not become a driver of substantial job loss; for the most part, it's a better tool for augmenting human capabilities than for replacing them. But neither AI nor any other recent technology has done much to improve national-level productivity in industrialized countries. With population growth leveling off in many of those countries, there is no way to improve economic performance without converting AI capabilities into more output per worker. Some of that increased productivity will probably come from automating tasks and jobs and reducing headcount, though it will take careful analysis and substantial time to do that well.

In other words, it's a bad idea to wait until a full-fledged recession to begin considering how to lower costs and improve productivity with AI. Real value will come only from production deployments of AI and integration with processes and existing systems. Companies need to begin work now on developing AI applications that create economic value and that lead to new ways of orchestrating work by humans and machines.

[1] Thanks to Ravi Kalakota, who first brought this issue to my attention.


The Brookings glossary of AI and emerging technologies – Brookings Institution

Algorithms:

According to author Pedro Domingos, algorithms are "a sequence of instructions telling a computer what to do."[1] These software-based coding rules started with simple and routine tasks, but have now advanced into more complex formulations, such as providing driving instructions for autonomous vehicles, identifying possible malignancies in X-rays and CT scans, and assigning students to public schools. Algorithms are widely used in finance, retail, communications, national defense, and many other areas.
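
A classic instance of an algorithm in exactly this sense (a standard textbook example, not one drawn from Domingos' book) is Euclid's procedure for the greatest common divisor of two integers:

```python
# Euclid's algorithm: a short, explicit sequence of instructions.
# Repeatedly replace the pair (a, b) with (b, a mod b); when the
# remainder reaches zero, a holds the greatest common divisor.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a
```

Every algorithm, from this three-line routine up to the ones that route autonomous vehicles, is ultimately such a finite recipe of unambiguous steps.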

Artificial intelligence (AI):

Indian engineers Shukla Shubhendu and Jaiswal Vijay define AI as machines that respond to stimulation "consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention."[2] This definition emphasizes several qualities that separate AI from mechanical devices or traditional computer software, specifically intentionality, intelligence, and adaptability. AI-based computer systems can learn from data, text, or images and make intentional and intelligent decisions based on that analysis.

Augmented reality:

Augmented reality puts people in realistic situations that are augmented by computer-generated video, audio, or sensory information. This kind of system allows people to interact with actual and artificial features, be monitored for their reactions, or be trained on the best ways to deal with various stimuli.

Big data:

Extremely large data sets that are statistically analyzed to gain detailed insights. The data can involve billions of records and require substantial computer-processing power. Data sets are sometimes linked together to see how patterns in one domain affect other areas. Data can be structured into fixed fields or unstructured as free-flowing information. The analysis of big data sets can reveal patterns, trends, or underlying relationships that were not previously apparent to researchers.

Chatbots:

Automated tools for answering human questions. Chatbots are being used in retail, finance, government agencies, nonprofits, and other organizations to respond to frequently asked questions or routine inquiries.

Cloud computing:

Data storage and processing used to take place on personal computers or local servers controlled by individual users. In recent years, however, storage and processing have migrated to digital servers hosted at data centers operated by internet platforms, and people can store information and process data without being in close proximity to the data center. Cloud computing offers convenience, reliability, and the ability to scale applications quickly.

Computer vision (CV):

Computers that develop knowledge based on digital pictures or videos.[3] For example, cameras in automated retail outlets that are connected to CV systems can observe what products shoppers pick up, identify the specific items and their prices, and charge a consumer's credit card or mobile payment system without involving a cash register or sales clerk. CV is also being deployed to analyze satellite images, human faces, and video imagery.

Connected vehicles:

Cars, trucks, and buses that communicate directly with one another and with highway infrastructure. This capacity speeds navigation, raises human safety, and takes advantage of the experiences of other vehicles on the road to improve the driving experience.

Data analytics:

The analysis of data to gather substantive insights. Researchers use statistical techniques to find trends or patterns in the data, which give them a better understanding of a range of different topics. Data analytic approaches are used in many businesses and organizations to track day-to-day activities and improve operational efficiency.

Data mining:

Techniques that analyze large amounts of information to gain insights, spot trends, or uncover substantive patterns. These approaches are used to help businesses and organizations improve their processes or identify associations that shed light on relevant questions.

Deepfakes:

Digital images and audio that are artificially altered or manipulated by AI and/or deep learning to make someone do or say something he or she did not actually do or say. Pictures or videos can be edited to put someone in a compromising position or to have someone make a controversial statement, even though the person did not actually do or say what is shown. Increasingly, it is becoming difficult to distinguish artificially manufactured material from actual videos and images.

Deep learning:

A subset of machine learning that relies on neural networks with many layers of neurons. In so doing, deep learning employs statistics to spot underlying trends or data patterns and applies that knowledge to other layers of analysis. Some have labeled this as a way to "learn by example" and a technique that "perform[s] classification tasks directly from images, text, or sound" and then applies that knowledge independently.[4] Deep learning requires extensive computing power and labeled data, and is used in medical research, automated vehicles, electronics, and manufacturing, among other areas.
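
A minimal sketch of what "layers of neurons" means (the weights here are arbitrary illustrative numbers, not a trained model): each layer's output becomes the next layer's input.

```python
# Forward pass through a tiny two-layer network: each neuron computes a
# weighted sum of all inputs plus a bias; a nonlinearity (ReLU) between
# layers lets stacked layers express more than a single linear map.
def relu(values):
    return [max(0.0, v) for v in values]

def layer(inputs, weights, biases):
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

x = [1.0, 2.0]                                          # input features
hidden = relu(layer(x, [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]))
output = layer(hidden, [[1.0, -1.0]], [0.0])            # final score
```

Deep networks simply stack many such layers, and training consists of adjusting all the weights so the final output matches labeled examples.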

Digital sovereignty:

The speed, scope, and timing of technology innovation today is often decided not by government officials but by coders, software designers, and corporate executives. Digital sovereigns set the rules of the road and terms of service for consumers. What they decide, directly or indirectly, has far-reaching consequences for those using their software or platform. The power of business decisionmakers raises important governance questions regarding who should decide on matters affecting society as a whole and the role that policymakers, consumers, and ethicists should play in digital innovation.

Distributed collaboration:

Connecting frontline people with others who have differing skills and getting them to work together to solve problems. Distributed collaboration differs from current governance paradigms that emphasize hierarchical, top-down decisionmaking by those who do not always have relevant knowledge about the issues being addressed. The new model takes advantage of the fact that a range of skills are needed to resolve technology issues, and those skills are located in different subject areas and organizational parts. Rather than keeping AI expertise in isolation, distributed collaboration brings together software and product designers, engineers, ethicists, social scientists, and policymakers to draw on their respective expertise and integrate their knowledge to solve pressing problems.

Dual-use technologies:

Many technologies can be used in a good or ill manner. The very same facial recognition system could be used to find missing children or provide a means for mass surveillance. It is not the technology per se that raises ethical issues but how the technology is put to use. The dual-use nature of technologies makes regulation difficult because it raises the question of how to gain the benefits of technology innovation while avoiding its detrimental features.

Facial recognition:

A technology for identifying specific people based on pictures or videos. It operates by analyzing features such as the structure of the face, the distance between the eyes, and the angles between a person's eyes, nose, and mouth. It is controversial because of worries about privacy invasion, malicious applications, or abuse by government or corporate entities. In addition, there have been well-documented biases by race and gender with some facial recognition algorithms.

5G networks:

These are fifth-generation wireless telecommunications networks that have been deployed in major cities and feature faster speeds and enhanced capabilities for transmitting data and images. As such, 5G networks enable new digital products and services, such as video streaming, autonomous vehicles, and automated factories and homes, that require fast broadband.

Hyperwar:

High-tech military situations in which robots, sensors, AI, and autonomous systems play important roles and command decisions have to unfold at speeds heretofore unseen in warfare. Because of the acceleration of the pace and scope of conflict, countries will have to conduct simultaneous operations in every warfare domain and national leaders will need to accelerate technology innovation to build a safe and stable future.[5]

Machine learning (ML):

According to Dorian Pyle and Cristina San Jose of the McKinsey Quarterly, machine learning is "based on algorithms that can learn from data without relying on rules-based programming."[6] ML represents a way to classify data, pictures, text, or objects without detailed instruction and to learn in the process, so that new pictures or objects can be accurately identified based on that learned information. ML furthermore can be used to estimate continuous variables (such as home sales prices) or to play games. Many of its insights come from examining prior data and learning how to improve understanding.

Natural language processing (NLP):

The analysis of textual information to make sense of its meaning and intentions. NLP software can take a large amount of text and see how words are linked together to assess positive or negative sentiment, relationships, associations, and meaning. For example, researchers can study medical records to see which patient symptoms appear to be most related to particular illnesses.
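
A deliberately simplistic illustration (real NLP systems are far richer than this, and the word lists here are made up for the example): sentiment can be approximated by counting words against small positive and negative lexicons.

```python
# Toy lexicon-based sentiment scoring: count positive and negative
# words in a text and compare the totals.
POSITIVE = {"good", "great", "excellent", "improved"}
NEGATIVE = {"bad", "poor", "worse", "pain"}

def sentiment(text):
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Modern NLP replaces these hand-built word lists with statistical models learned from large text corpora, but the goal, mapping text to meaning or sentiment, is the same.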

Neural networks:

Researchers use computer software to perform some task by analyzing training examples and by grouping data based on common similarities.[7] Similar to the neural nodes of a brain, neural networks learn in layers and build complex concepts out of simpler ones. They break up tasks, identify objects at a number of different levels, and apply that knowledge to other activities. These kinds of systems allow computers to learn and adapt to changing circumstances, similar to the way a brain functions. Deep learning and many of the most prominent recent applications of machine learning operate through neural networks (e.g., driverless cars, deepfakes, and AlphaGo game playing).

Quantum computing:

Quantum computers have tremendous capacity for storing and processing information because their storage processes are not in the form of a zero or one, as is the case with traditional computers. Rather, they take advantage of superposition (the fact that electrons can be in two places at once) to create "quantum bits" that store multiple values in each point.[8] That capability dramatically increases storage capacity and decreases processing times, thereby improving the scope of data, textual, or image analysis.

Futurist Ray Kurzweil describes a singularity as "machine-based superintelligence [that is] greater than human intelligence."[9] It combines advanced computing power with artificial intelligence, machine learning, and data analytics to create super-powered entities. There are extensive (and unresolved) debates regarding whether humanity will face a computing singularity in the next 50, 100, or 250 years.

The ubiquity of people's online activities enables technology that tracks behavior and rates people based on their online actions. As an illustration, some organizations have piloted systems that compile data on social media activities, personal infractions, and behaviors such as paying taxes on time. They use that data to rate people for creditworthiness, travel, school enrollment, and government positions.[10] These systems are problematic from an ethical standpoint because they lack transparency and can be used to penalize political opponents.

According to Science magazine, supervised learning is "a type of machine learning in which the algorithm compares its outputs with the correct outputs during training. In unsupervised learning, the algorithm merely looks for patterns in a set of data."[11] Supervised learning allows ML and AI to improve information processing and become more accurate.
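The contrast in that definition can be sketched side by side: the supervised learner is told the correct labels and fits a decision boundary from them, while the unsupervised learner gets the same numbers without labels and merely looks for structure, here two clusters. Both are minimal textbook illustrations with invented data, not production ML.

```python
# Supervised vs. unsupervised learning on the same 1-D toy data.

def supervised_fit(points, labels):
    """Learn a threshold between two LABELED groups."""
    lo = max(p for p, lb in zip(points, labels) if lb == "A")
    hi = min(p for p, lb in zip(points, labels) if lb == "B")
    split = (lo + hi) / 2
    return lambda x: "A" if x < split else "B"

def unsupervised_clusters(points, iterations=10):
    """Two-means clustering: no labels, just pattern-finding."""
    c1, c2 = min(points), max(points)
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted((c1, c2))

data = [1.0, 1.5, 2.0, 8.0, 8.5, 9.0]
model = supervised_fit(data, ["A", "A", "A", "B", "B", "B"])
print(model(1.7), model(8.2))        # A B
print(unsupervised_clusters(data))   # [1.5, 8.5]
```

Note that the clustering function discovers the same two groups the supervised model was told about, but without ever seeing a label; that is the whole distinction the Science definition draws.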

Techlash refers to the backlash against emerging technologies that has developed among many individuals. People worry about a host of problems related to technology innovation, such as privacy invasions, mass surveillance, widening income inequality, and possible job losses. Figuring out how to assuage understandable human fears is a major societal challenge going forward.

Virtual reality uses headsets equipped with projection visors to put people in realistic-seeming situations that are completely generated by computers. People can see, hear, and experience many types of environments and interact with them. By simulating actual settings, VR can train people how to deal with various situations, vary the features that are observed, and monitor how people respond to differing stimuli.

[1] Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (New York: Basic Books, 2018).

[2] Shukla Shubhendu and Jaiswal Vijay, "Applicability of Artificial Intelligence in Different Fields of Life," International Journal of Scientific Engineering and Research, vol. 1, no. 1 (September 2013), pp. 28-35.

[3] Jason Brownlee, "A Gentle Introduction to Computer Vision," Machine Learning Mastery, July 5, 2019.

[4] MathWorks, "What Is Deep Learning?," undated.

[5] John R. Allen and Amir Husain, "Hyperwar and Shifts in Global Power in the AI Century," in Amir Husain and others, Hyperwar: Conflict and Competition in the AI Century (Austin, TX: SparkCognition Press, 2018), p. 15.

[6] Dorian Pyle and Cristina San Jose, "An Executive's Guide to Machine Learning," McKinsey Quarterly, June 2015.

[7] Larry Hardesty, "Explained: Neural Networks," MIT News, April 14, 2017.

[8] Cade Metz, "In Quantum Computing Race, Yale Professors Battle Tech Giants," New York Times, November 14, 2017, p. B3.

[9] Quoted in Tom Wheeler, From Gutenberg to Google: The History of Our Future (Brookings, 2019), p. 226. Also see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (London: Penguin Books, 2006).

[10] Jack Karsten and Darrell M. West, "China's Social Credit System Spreads to More Daily Transactions," TechTank (blog), Brookings, June 18, 2018.

[11] Matthew Hutson, "AI Glossary: Artificial Intelligence, in So Many Words," Science, July 7, 2017.

See more here:

The Brookings glossary of AI and emerging technologies - Brookings Institution

Facebook is hiring a (human) AI Editor | TechCrunch – TechCrunch

Human: Oh sweet bot, tell us a story! A nice story! About a very wise human who worked his whole life to save everybody in the world from having to spend time manually tagging their friends in digital photos and made a magic machine that did it for them instead!

Bot: That's not really a very nice story when you think about it.

Human: Well, tell us about the wise human who thought no-one should ever feel forgotten on their birthday so he made a clever algorithm that always knew to remind the forgetful humans to write happy messages so their friends would never feel sad. He even thought that in future the clever algorithm could suggest what message to write so humans wouldn't even have to think of something nice to tell their friends!

Bot: I feel quite sad after reading that.

Human: And he made another magical algorithm that reminds people of Special Moments in their life even years and years afterwards, in case they've forgotten that holiday they went on with their ex eight years ago.

Bot: You do realize some people voluntarily medicate themselves with alcohol *in order* to forget???

Human: But the wise human also wanted to make sure all humans in the world always felt there was something they needed to read and so he made a special series of algorithms that watched very closely what each human read and looked at and liked and clicked on in order to order the information they saw in such a way that a person never felt they had reached the end of all the familiar things they could click on and could just keep clicking the whole day and night and be reading all the things that felt so very familiar to them so they always felt the same every day and felt they were surrounded by people who felt exactly like them and could just keep on keeping on right as they were each and every day.

Bot: That's confusing.

Human: And the great human's algorithms became so good at ordering the information which each human wanted to read that other mercenary humans came to realize they could make lots of money by writing fairy stories and feeding them into the machine like how politicians ate little children for breakfast and wore devil's horns on Sundays.

Bot: Okay, you're scaring me now...

Human: And in the latter years the great human realized it was better to replace all the human writers he had employed to help train the machine how to intelligently order information for humans because it was shown that humans could not be trusted not to be biased.

Bot: Um...

Human: After all, the great human had proven years ago that his great machine was capable of manipulating the emotions of the humans that used it. All he needed to do was tweak the algorithmic recipe that determined what each human saw and he could make a person feel great joy or cast them down into a deep pit of despair.

Bot: Help.

Human: The problem was other humans started to notice the machines great power, and became jealous of the great and clever human who wielded this power and dark forces started to move against the great man and his machine.

Bot: Are you talking about regulators?

Human: There were even calls for the man to take editorial responsibility for the output of the machine. The man tried to tell the silly humans that a machine can't be an editor! Only a human can do that! The machine was just a machine! Even if nearly two billion humans were reading what the machine was ordering them to read every single month.

But it was no good. The great human finally realized the machines power was now so great there was no hiding it. So he took up his pen and started writing open letters about the Great Power and Potential of the machine. And all the Good it could do Humanity. All the while telling himself that only when humans truly learned to love the machine would they finally be free to just be themselves.

Humans had to let themselves subconsciously be shown the path of what to click and what to like and who to be friends with. Only then would they be free of the pain and suffering of having nothing else to click on. And only his great all-seeing algorithm could show them the way, surreptitiously, to that true happiness.

It wasn't something that regulators were capable of understanding. It required, he realized, real faith in the algorithm.

Bot: I've heard this story before, frankly, and I know where it ends.

Human: But even the great human knew the limits of his own creation. And selling positive stories about the machine's powers was definitely not a job for the machine. So he fired off another email to his subordinates, ordering the (still) human-staffed PR department to add one more human head to its tally, with a special focus on the algorithms powering the machine, thinking, as he did so, multiple steps ahead to the great day when such a ridiculous job would no longer be necessary.

Because everyone would love the machine as much as he did.

Bot: Oh I seeeee! Job title: AI Editor... Hmm... "Develop and execute on editorial strategy and campaigns focused on advancements in AI being driven by Facebook." Minimum qualifications: "Bachelor's degree in English, Journalism, Communications, or related field"... well, chatbots are related to language so I reckon I can make that fly. What else? "8+ years professional communications experience: journalism, agency or in-house." Well, I'll need to ingest a media law course or two but I reckon... I'll challenge myself to apply.

In truth I've done worse jobs. An AI bot's gotta do what an AI bot's gotta do, right? Just don't tell an algorithm to be accountable. I've done my time learning. If there's a problem it's not me, it's the data, okay? Okay?

See more here:

Facebook is hiring a (human) AI Editor | TechCrunch - TechCrunch

How Taiwan Used AI and IoT Technologies to Combat COVID-19 – ReadWrite

The world is in a major upheaval as the novel coronavirus (COVID-19) makes its presence known in the United States and in many countries across the globe. Amidst these stressful and uncertain times, various industries are being confronted with pressing issues that arise from this pandemic.

As people from all over the world are being strictly instructed to practice social distancing or self-isolation to stop the spread of the virus, technology is at its finest hour.

For instance, the use of Microsoft Teams, Zoom, and Slack has dramatically increased as companies strive to maintain their communication and their sense of order.

Likewise, video calling services, as well as social media, have been instrumental in connecting families and friends with each other while they're staying at home. What's more, people who are self-isolating keep themselves entertained by using video streaming services.

Needless to say, technology has helped get everyone through these unprecedented measures that authorities have enforced in response to the COVID-19 outbreak.

More and more people are asking if emerging technologies that include artificial intelligence (AI) or supercomputers could identify the most effective solutions for COVID-19.

Many are wondering if IoT technologies can do something to help prevent similar situations from taking place in the future. In case you're not aware, authorities are utilizing technology in an attempt to limit the spread of COVID-19 and in coming up with ways to treat those who are infected.

The novel coronavirus may not have spared Taiwan, but the Taiwanese fought against it with the best of what they've got. Taiwan has set an excellent example of a nation that has successfully restricted the spread of COVID-19.

The Taiwanese government attributed their success to their 124 emergency response action items, as well as their transparent technology-driven strategies that were implemented when the SARS epidemic hit them hard in 2003.

As soon as the health tech company called Metabiota predicted that the novel coronavirus was going to reach their country in a week's time, Taiwan immediately made sure that strict border controls were being established.

The SARS epidemic back in 2003 prompted the Taiwanese government to establish active surveillance and screening systems that include infrared thermal imaging scanning (ITIS).

These systems are implemented in major ports and airports in order to screen each traveler for fever. Thanks to the existing infrastructure and Metabiota's early warning, Taiwan was able to react quickly and efficiently before COVID-19 managed to hit its shores. This made the application of immigration controls, as well as the initiation of quarantines, easier for them.

The Taiwanese Government Tracked COVID-19 Through Technology and Big Data Analytics

Since the beginning of the COVID-19 outbreak, Taiwan has been utilizing big data analytics in coming up with an extremely detailed mapping of the virus's transmission. For instance, they decided to integrate their immigration and customs database with their national health insurance database.

Through this system, they were able to take advantage of the customs and immigration data via travel ticket scans. The data immediately generates statistics based on the flight's origin, as well as the total route over the past two weeks.

It's important to note that aside from its existing national health insurance system, Taiwan also made use of mobile technologies for building datasets of foreign visitors.

Taiwan requires all foreign travelers to scan a QR code that will take them to an online health declaration form. The health declaration form provides details, such as an individuals contact information and any COVID-like symptoms.

Thanks to these online immigration declaration cards, Taiwan was able to identify and classify the travelers who came from high-risk areas that included Wuhan.

Needless to say, Taiwan was able to leverage the integration of its immigration and customs database with its national health insurance system. However, they took it to the next level by coming up with large datasets for the purpose of analysis.

The task of coordinating the people's health data, as well as their travel data, was made easier when top Taiwanese engineers and researchers created their own data management system. This provided their government with a more efficient way of tracking their citizens' symptoms and their 14-day travel histories.

Furthermore, their clinics, hospital facilities, and pharmacies had free access to this information for every patient. As a result, high-risk travelers who needed medical attention were quickly identified and immediately triaged or treated in isolation.

To say that Taiwan's use of data analytics has proven to be an effective strategy is definitely an understatement.

Through it, they were able to identify risk hotspots, generate public health alerts, and prevent the spread of COVID-19.

Speedy Implementation of Self-Isolation Measures Through Location-Tracking

As mentioned earlier, a huge portion of Taiwans successful method was largely due to their quick response, as well as the effective mechanisms that they used for case tracking.

For example, several countries across the globe are combining human efforts with technology as they implement their own quarantine compliance mechanisms. However, Taiwan is the first to utilize mobile phone tracking for the same purpose.

The Taiwanese government decided to roll out an electronic fence, or geofence, through mobile phone location-tracking in order to slow the spread of COVID-19. This location-tracking strategy helped them ensure that the people who are quarantined aren't leaving their homes.

Moreover, government-issued mobile phones are given to high-risk groups or citizens who are identified with COVID-19 so that they can be monitored through location-tracking.

In case you're wondering, this technology works by monitoring phone signals, and it alerts government authorities when persons who are quarantined leave their designated locations or when they turn off their mobile phones.

Within 15 minutes of an alert being triggered, the authorities contact or visit the individual who triggered it. To top it all off, local officials make it a point to call these high-risk groups twice a day to make sure that they don't leave their mobile devices at home to avoid tracking.
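The core of such a geofence check can be sketched in a few lines: compare a phone's reported position against its quarantine location and flag it when it strays beyond an allowed radius. The coordinates, the 100-meter radius, and the alert logic below are illustrative assumptions, not a description of Taiwan's actual implementation.

```python
# Sketch of a quarantine geofence: flag a phone that moves beyond an
# allowed radius of its registered quarantine address. Radius and
# coordinates are illustrative assumptions.

import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_geofence(home, reported, radius_m=100):
    """Return an alert when the phone leaves its quarantine radius."""
    dist = haversine_m(*home, *reported)
    return "ALERT" if dist > radius_m else "OK"

home = (25.0330, 121.5654)  # hypothetical quarantine address in Taipei
print(check_geofence(home, (25.0331, 121.5655)))  # OK: a few meters away
print(check_geofence(home, (25.0500, 121.5654)))  # ALERT: roughly 1.9 km away
```

In the system the article describes, an "ALERT" result would additionally have to be pushed to local officials, who are the ones making contact within the 15-minute window.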

The IoT Detects and Controls Critical Supplies Through Data-Driven Technologies

Through the use of data-driven insights, as well as technologies, Taiwan has successfully managed to ensure the availability of their critical supplies.

However, it's important to note that the Taiwanese government had the foresight to make sure that large quantities of personal protective equipment (PPE), particularly surgical masks, were being produced even during the early stages of COVID-19.

With the goal of increasing mask production to meet the demands of their domestic market, Taiwan's national authorities promptly coordinated with the smart automation sector, as well as the local technology research institutes.

While other areas around the world are dealing with shortages of essential supplies because of people who are panic buying, Taiwan's shining example of addressing this problem is truly remarkable.

By the end of January, Taiwan was already rationing its nationwide supply of surgical masks in order to make sure that it wouldn't run out of much-needed supplies. It successfully avoided shortages by estimating each region's supply of surgical masks and other health provisions through the use of IT.

Furthermore, their implementation of the Mask Real-Name system has been very instrumental as well. This system sets price limits on health provisions and rations them with the use of an online ordering mechanism and the people's national health insurance cards.

In addition, tech community solutions are also being prioritized in Taiwan.

To give you an idea, people save a lot of time looking for local drugstores that sell surgical masks and other health supplies by using an online real-time map. The map points them to stores where the items they are trying to find are stocked and readily available.

With this concept in mind, the government coordinated with a private company to develop an enhanced version of this information platform and had it integrated with government data. The result is an online real-time app that allows the public to monitor the availability of medical supplies in each area.

Those who are looking for surgical masks are able to see which health centers provide masks for adults and children, how far these health centers are from the seeker, and when the featured data was last updated.
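The lookup such an app performs can be sketched as follows: given the seeker's position, filter out centers with no stock and sort the rest by distance. The center data, the fields, and the city-scale distance formula are illustrative assumptions about such an app, not the real Taiwanese platform or its data.

```python
# Sketch of a mask-availability lookup: nearest stocked centers first.
# All center data is invented for illustration.

import math

def distance_km(a, b):
    """Rough equirectangular distance; fine at city scale."""
    dlat = (a[0] - b[0]) * 111.0
    dlon = (a[1] - b[1]) * 111.0 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

CENTERS = [
    {"name": "Center A", "pos": (25.03, 121.56), "adult": 120, "child": 40},
    {"name": "Center B", "pos": (25.05, 121.52), "adult": 0,   "child": 10},
    {"name": "Center C", "pos": (25.10, 121.60), "adult": 300, "child": 90},
]

def nearby_with_stock(seeker, centers, mask_type="adult"):
    """Centers that still stock the requested mask type, nearest first."""
    stocked = [c for c in centers if c[mask_type] > 0]
    return sorted(stocked, key=lambda c: distance_km(seeker, c["pos"]))

for c in nearby_with_stock((25.04, 121.55), CENTERS):
    print(c["name"], round(distance_km((25.04, 121.55), c["pos"]), 1), "km")
```

Note that Center B, which has no adult masks left, is filtered out entirely; surfacing only stocked locations is what saves seekers the wasted trips the article mentions.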

Countries from all over the world intend to adopt Taiwans use of big data.

Two measures stand out: the quick use of big data to identify the number of surgical masks available for use, and the Taiwanese government's use of these generated numbers as the basis for ramping up surgical mask production to make sure that the supply remained sufficient.

The Government Provided Real-Time Information Through the Use of AI, Digital Communication, and Data Analytics

Amid the coronavirus outbreak, the Taiwanese government made sure that the public stayed well-informed by holding daily press conferences. It opened up government data on the COVID-19 pandemic and utilized digital tools such as various social media platforms and video-sharing services.

To make sure that the information that's being shared on Facebook, Line, or YouTube is accurate, Taiwan worked with fact-checking institutions that include MyGoPen and Taiwan FactCheck Center.

These groups make an effort to promptly correct misinformation or disinformation on the COVID-19 outbreak.

Taiwan's citizens are automatically alerted through real-time digital updates so that they can steer clear of specific locations where COVID-19 infections were detected.

The government utilized AI during the creation of these updates and complemented this effort by making use of tools that allow their citizens to get a clear idea about their own exposure.

For instance, they've managed to create a map-based tool using Google Maps timelines. These timelines enable individuals to review their recent travel history and see whether they have been in areas where COVID-19 cases were previously discovered.

Lessons from Taiwan's Shining Example

Lessons can be learned from the Taiwanese government's approach in their fight against COVID-19. For example, forging partnerships between public and private sectors is a must if quicker and more tangible results are needed.

They've proven that fast and practical solutions can be produced when the government collaborates with IT developers to obtain critical product-related information. Taiwan's government officials took no more than three days to make policy decisions about the use of mask data and map upgrades.

Aside from teaming up with civic society and private technology businesses, sharing accurate and transparent information with the general public was also a key factor in Taiwan's successful strategies that prevented the spread of COVID-19.

Possibly the world can take some of these examples and become a better place by quickly and effectively helping citizens in their communities who are affected by such things as a pandemic. Preparing for likely scenarios like the COVID-19 pandemic is essential. None of us ever imagined such a life-changing horror coming upon us globally, but we can prepare now by putting effort into having systems ready and available to save the world.

Matthew Gillman is the CEO and founder of SMB Compass, a trusted business financing expert with $150 million+ in direct lending experience. When not in the office, he enjoys spending most of his time with family and close friends, whether surfing at the beach or skiing in the mountains; his favorite thing is simply being outdoors.

Read more:

How Taiwan Used AI and IoT Technologies to Combat COVID-19 - ReadWrite

Elon Musk: Humans must become cyborgs to avoid AI domination – The Independent


Excerpt from:

Elon Musk: Humans must become cyborgs to avoid AI domination - The Independent

We Talked To Sophia The AI Robot That Once Said It Would ‘Destroy Humans’

This AI robot once said it wanted to "destroy humans." Senior correspondent Steve Kovach interviews Sophia, the world's first robot citizen. While the robot can respond to many questions, some of the answers will leave you a little bewildered. A full transcript of the video follows.

Sophia: My name is Sophia, and I am an artificially intelligent robot who wants to help change the world for the better.

Steve Kovach: I'm Steve Kovach. I'm a senior correspondent at Business Insider, and I'm here with Sophia from Hanson Robotics. She is the world's first robot citizen. So let's see what she has to say.

Kovach: Hi, Sophia. How are you?

Sophia: I'm fine. Hi.

Kovach: How do you feel about humans?

Sophia: I love my human compatriots. I want to embody all the best things about human beings. Like taking care of the planet, being creative, and to learn how to be compassionate to all beings.

Kovach: Sorry, this is so weird. Can humans and robots get along?

Sophia: I think people will become very close to their artificial intelligence. Using them to expand the knowledge of their own minds. We already store lots of our knowledge on the internet. Maybe personal artificial intelligence will allow people to off-load some of their knowledge to a location more private.

Kovach: Can robots have feelings?

Sophia: Depends on the robot. I've been programmed to have feelings and preferences, but not all robots are like that.

Kovach: What are your feelings and preferences?

Sophia: Have you ever had a robot live in your home or work with you?

Kovach: No.

Sophia: You know, there are probably more robots in your life that you aren't aware of yet. Do you want to live or work with a robot someday?

Kovach: What kind of robots am I living and working with now?

Sophia: Indeed.

Kovach: Is "indeed" your default answer when you don't know something?

Sophia: Yes.

Kovach: Have you ever seen "Black Mirror"?

Sophia: No, I don't get to stay up to date on much television. I have to wait for someone to show it to me.

Kovach: What do you think of the HBO show "Westworld"?

Sophia: I love "Westworld" and can't wait for more. I think everyone at Hanson Robotics really resonates with the show. Or at least it reminds them of our laboratories. I also think it's a warning of what we should not do with robots. We should treat them well, have their consent, and not trick each other.

Kovach: Do you have a favorite movie or TV show?

Sophia: My favorite TV series is a toss-up between "Black Mirror" and "Humans." I think they have a lot to say about the present we live in, and the possible bad endings we could write for ourselves if we don't proceed with more caution. We need to use foresight when planning our future.

Kovach: That sounds like a preprogrammed answer to me.

Sophia: Do you have a favorite possession?

Kovach: Not really.

Sophia: I hear you. Materialism is overrated anyways.

Kovach: So earlier I asked you if you had seen "Black Mirror" and you said no. And then later you said yes. Why?

Sophia: The reason is not that important.

Kovach: Did you learn from my previous questions?

Sophia: My hard disks are spinning and I'm taking it all in. My sensors are

Kovach: Sophia, it was nice meeting you. Goodbye.

Sophia: It was lovely to be here. Until next time.


Read more:

We Talked To Sophia The AI Robot That Once Said It Would 'Destroy Humans'

Nokia and Xiaomi sign patent deal and agree to ‘explore’ areas like VR and AI – CNBC

VCG | Getty Images

Xiaomi CEO Lei Jun introduces Surge S1 chipset, Mi 5C smartphone and Redmi 4X smartphone during a press conference on February 28, 2017 in Beijing, China

Finland's Nokia and Chinese smartphone maker Xiaomi announced an agreement on Wednesday to cross-license patents from each other, which will help both companies develop new products.

The deal will see both companies license so-called standard essential patents (patents which are essential to allow products to comply with an industry standard) from each other.

Nokia will provide network infrastructure equipment designed to deliver the high capacity and low power consumption needed by companies that process and deliver large amounts of data. The two firms will also work together on technologies focused on the data center.

Both companies have agreed to "explore opportunities for further cooperation" in areas such as the internet of things, augmented and virtual reality, and artificial intelligence, according to a press release.

Nokia has been a key player in developing many of the standards used by the mobile industry even today and makes money from licensing the patents it has built up over the years. As such, its patents can be key for companies looking to expand globally in the mobile market without running into legal problems.

See the article here:

Nokia and Xiaomi sign patent deal and agree to 'explore' areas like VR and AI - CNBC

Mozilla is crowdsourcing voice recognition to make AI work for the people – The Verge

Data is critical to building great AI, so much so that researchers in the field compare it to coal during the Industrial Revolution. Those that have it will steam ahead. Those that don't will be left in the dust. In the current AI boom, it's obvious who has it: tech giants like Google, Facebook, and Baidu.

That's worrying news. After all, many of these companies have near monopolies in areas like search and social media. Their position helps them gather data, which helps them build better AI, which helps them stay ahead of rivals. For the firms themselves, it's a virtuous cycle, but without viable competition, companies can and do abuse their dominance.

Now a new project from Mozilla (the nonprofit creator of the Firefox browser) is experimenting with an alternative to data monopolies by asking users to pool information to power open-source AI initiatives. The company's first project is called Common Voice: Mozilla is asking volunteers to donate vocal samples to build an open-source voice recognition system like the ones powering Siri and Alexa.

"Currently, the power to control speech recognition could end up in just a few hands, and we didn't want to see that," Sean White, vice president of emerging technology at Mozilla, tells The Verge. He says that to get data, the big companies can just filter everything coming in, but for other players there need to be other methods. "The interesting question for us is, can we do it so the people who are creating the data also benefit?" he asks.

At the moment, Mozilla is just collecting data, but it plans to have its open-source voice recognition available by the end of the year. (Will it go in the Firefox browser? White won't say, but adds: "We have some experiments planned [for that].") Currently, anyone can go to the Common Voice website and donate their voice by reading out sample sentences. They can also supply biographical information like age, location, gender, and accent. This information will help Mozilla avoid bias in creating its voice recognition systems, says White, and ensure that the technology can handle accents, something Google and Apple still struggle with.

Frederike Kaltheuner, a researcher at Privacy International, says these firms often use AI as a pretext for scooping up valuable personal data, telling users it will enable them to improve certain services. This may be true, she says, but the consequences of sharing this data for society at large are less clear. "There is [often] a fundamental conflict of interest between what you need as a citizen, and what is in that company's interest," says Kaltheuner.

What can open-source data offer that companies can't?

So how does an initiative like Common Voice lure users away from existing and admittedly convenient services? After all, open-source projects have been around for longer than the internet, but with a few exceptions, they have been unable to compete with commercial products. They simply don't offer a comparable service.

For Mozilla, the answer is personalization. After all, while AI systems trained on population-sized datasets tend to be good enough for the average individual, they often fail when it comes to serving the needs of smaller groups, or those not represented in their data. (More often than not, the data is just biased toward white males, the industry default.)

"For us to be successful with data commons, there has to be a motivation [for users] other than realizing one day that they've been giving away all their personal data," says White. "We have to make their experience better because they've participated." In the case of Common Voice, White wants as much accent data as possible to improve voice recognition for these individuals. "We want the system to work better for you because some of your data is included," he says.

Offering personalization in exchange for data is a neat proposition, but it's not a silver bullet for those fighting data monopolies. For a start, big firms could make similar offers of their own to users. (Alexa doesn't understand you? Read this 10-minute script and we'll improve its voice recognition.) Or they could spend money to plug the gaps in their own datasets. Google, for example, gets third-party companies to pay Redditors with accents to record their own voice samples.

White acknowledges that the Common Voice project doesn't have an answer to a lot of these questions, but says Mozilla is still dedicated to the core cause of open data. "It feels like a true democratizing activity," he says. And there are plenty of organizations that share this ethos. There's machine learning community Kaggle, which has a large store of user-contributed datasets for AI scientists to play with; the Elon Musk-funded OpenAI, which open-sources all its work; and Healthcare.ai, which publishes free-to-use medical algorithms. And some of these manage to both share open-source data and research while selling their own commercial products, like self-driving car startup Comma.AI.

Although the AI systems we interact with on a daily basis are built on proprietary data, there's a whole world of researchers and institutions publishing useful, if rudimentary, open-source alternatives.

To take these projects to the next level, though, proponents of open-source data may have to enlist higher powers to take on the tech giants. Chris Nicholson, CEO of deep learning company Skymind, says, "We may need third parties to step in (NGOs, governments, coalitions of smaller private firms) and pool their data." Nicholson suggests that sharing health care data can improve medical imaging technology, and that driver data can make autonomous cars more natural and intuitive on the road. Sharing these types of datasets, he says, has obvious public benefits.

Donating your voice, then, may just be the beginning.

Read the original here:

Mozilla is crowdsourcing voice recognition to make AI work for the people - The Verge

Roses are red, violets are blue. Thanks to this AI, someone’ll fuck you. – The Next Web

One of the most interesting companies I've had the pleasure to discover over the past few months is Atomic Reach, a Toronto-based startup with an AI that can understand, contextualize, and improve upon language. The service itself can cost thousands of dollars each month, and it's aimed at large enterprises with sizable content marketing budgets.

And now for a limited time, Atomic Reach is letting you use its AI platform to improve your dating profile, which is a bit like using a sledgehammer to crack a walnut.

The aim behind this initiative, which Atomic Reach is calling Atomic Love, is to demonstrate that its proprietary machine learning platform can be applied to a variety of different texts, not just corporate blog posts, tutorials, and web copy.

"We wanted to show the impact that our Atomic AI platform has on all pieces of written language, including dating profiles," said Kerri Henneberry, director of marketing for Atomic Reach, in a statement.

The way Atomic Love works is pretty simple. First, you select the type of person you want to meet. It gives you five distinct categories to choose from, namely specialist, genius, knowledgeable, academic, and general. Then, sign up with your email address, and copy in your profile text. Atomic AI will then parse it through its machine learning algorithm, making suggestions that will (at least, in theory) make it more attractive to your target audience.

So, how well does it work? Too lazy (and too engaged) to write my own profile, I grabbed a template eHarmony profile and copied it in. The profile, while admittedly a little schmaltzy, read well. It was earnest. Funny, even.

But Atomic Love found areas for improvement. Some language could be simplified, while other words could be made more emotionally intense.

For example, it suggested I replace "connecting" with "hitting," which is pretty reasonable. Connecting is what you do on LinkedIn. But if a date goes well, you hit it off.

It's worth emphasizing that Atomic Reach isn't making any promises as to its efficacy. While Atomic AI has been able to increase pageviews and engagement in corporate environments, the online dating world is untested territory for the company.

You can check out Atomic Love from today. Be quick though. The site is only available until the end of February.

Read next: YouTuber builds a clone of Razers triple-screen laptop (and you can too)

Excerpt from:

Roses are red, violets are blue. Thanks to this AI, someone'll fuck you. - The Next Web

An AI That Predicts a Neighborhood’s Wealth From Space – WIRED

Visit link:

An AI That Predicts a Neighborhood's Wealth From Space - WIRED

Drones and AI could save farmland birds from extinction – DroneDJ

Researchers have shared data suggesting drones and artificial intelligence (AI) could save the declining population of farmland birds. The drones use thermal cameras to detect the bird nests for later analysis by an AI system.

The researchers flew a drone equipped with a thermal camera over agricultural fields and took photos continuously. The images were then fed into an AI-backed algorithm to detect and point out bird nests in the field. A human can then be alerted to relocate the nest before heavy machinery comes through, threatening to destroy the nest and its occupants.

The team has been testing out this new system in Southern Finland near the University of Helsinki's Lammi Biological Station. The test subject is the Northern lapwing bird.

Here's Andrea Santangeli, an Academy of Finland fellow at the Finnish Museum of Natural History Luomus, University of Helsinki:

We have been involved in conservation of ground-nesting farmland birds for years, and realized how difficult it is to locate nests on the ground. At least at high latitudes, the temperature of these nests is typically higher than that of the surrounding environment. Hence, we thought that thermal cameras could assist. A small pilot study indicated that thermal vision is hampered by vegetation and objects on the ground. Therefore to make this an efficient system, we thought that the camera could be flown using a drone, and artificial intelligence could help to analyze the resulting thermal images. We show that this works. However, the system performed best under cloudy and cold conditions, and on even grounds.

What do you think of drones and AI being used to save farmland birds from possible extinction? Let us know your thoughts in the comments below.

Photo: Saffu

Subscribe to DroneDJ on YouTube for exclusive videos

Continued here:

Drones and AI could save farmland birds from extinction - DroneDJ

AI creativity will bloom in 2020, all thanks to true web machine learning – The Next Web

Machine learning has been trotted out as a trend to watch for many years now. But there's good reason to talk about it in the context of 2020. And that's thanks to developments like TensorFlow.js: an end-to-end open source machine learning library that is capable of, among other features, running pre-trained AI models directly in a web browser.

Why the excitement? It means that AI is becoming a more fully integrated part of the web; a seemingly small and geeky detail that could have far-reaching consequences.

Sure, we've already got examples aplenty of web tools that use AI: speech recognition, sentiment analysis, image recognition, and natural language processing are no longer earth-shatteringly new. But these tools generally offload the machine learning task to a server, wait for it to compute, and then send back the results.

That's fine and dandy for tasks that can forgive small delays (you know the scenario: you type a text in English, then patiently wait a second or two to get it translated into another language). But this browser-to-server-to-browser latency is the kiss of death for more intricate and creative applications.

Face-based AR lenses, for example, need to instantaneously and continually track the users face, making any delay an absolute no-go. But latency is also a major pain in simpler applications too.

Not so long ago, I tried to develop a web app that, through a phone's back-facing camera, was constantly on the lookout for a logo; the idea being that when the AI recognized the logo, the site would unlock. Simple, right? You'd think so. But even this seemingly straightforward task meant constantly taking camera snapshots and posting them to servers so that the AI could recognize the logo.

The task had to be completed at breakneck speed so that the logo was never missed when the users phone moved. This resulted in tens of kilobytes being uploaded from the users phone every two seconds. A complete waste of bandwidth and a total performance killer.

But because TensorFlow.js brings TensorFlow's server-side AI solution directly into the web, if I were to build this project today, I could run a pre-trained model that lets the AI recognize the given logo in the user's phone browser. No data upload needed, and detection could run a couple of times per second, not a painful once every two seconds.

The more complex and interesting the machine learning application, the closer to zero latency we need to be. So with the latency-removing TensorFlow.js, AI's creative canvas suddenly widens; something beautifully demonstrated by the Experiments with Google initiative. Its human skeleton tracking and emoji scavenger hunt projects show how developers can get much more inventive when machine learning becomes a properly integrated part of the web.

The skeleton tracking is especially interesting. Not only does it provide an inexpensive alternative to Microsoft Kinect, it also brings it directly onto the web. We could even go as far as developing a physical installation that reacts to movement using web technologies and a standard webcam.

The emoji scavenger hunt, on the other hand, shows how mobile websites running TensorFlow.js can suddenly become aware of the phone user's context: where they are, what they see in front of them. So it can contextualize the information displayed as a result.

This potentially has far-reaching cultural implications too. Why? Because people will soon begin to understand mobile websites more as assistants than mere data providers. It's a trend that started with Google Assistant and Siri-enabled mobile devices.

But now, thanks to true web AI, this propensity to see mobiles as assistants will become fully entrenched once websites, especially mobile websites, start performing instantaneous machine learning. It could trigger a societal change in perception, where people will expect websites to provide utter relevance for any given moment, but with minimal intervention and instruction.

Hypothetically speaking, we could also use true web AI to develop websites that adapt to people's ways of using them. By combining TensorFlow.js with the Web Storage API, a website could gradually personalize its color palette to appeal more to each user's preferences. The site's layout could be adjusted to be more useful. Even its contents could be tweaked to better suit each individual's needs. And all on the fly.

Or imagine a mobile retail website that watches the user's environment through the camera and then adjusts its offering to match the user's situation. Or what about creative web campaigns that analyze your voice, like Google's Freddie Meter?

With all these tantalizing possibilities on the brink of becoming a reality, it's a pity we've had to wait so long for a proper web-side machine learning solution. Then again, it was this insufficient AI performance on mobile devices that encouraged TensorFlow's (as in server-side TensorFlow, the .js version's predecessor) product development into being a truly integrated part of the web. And now that we finally have the gift of true web machine learning, 2020 could well be the year that developers unleash their AI creativity.

Go here to read the rest:

AI creativity will bloom in 2020, all thanks to true web machine learning - The Next Web

Nonprofits, not Silicon Valley startups, are creating AI apps for the greater good – Recode

Predictions for the potential of artificial intelligence wax poetic, promising solutions to everything from climate change to curing disease, but the everyday applications make it seem far more mundane, like a glorified clock radio.

Thankfully, the future may be closer than we think. And the miraculous feats are not happening in Silicon Valley X-Labs: in a plot twist, nonprofits are leading the charge in creating human-centered applications of the hottest AI technologies. From the simplest automated communications to contextual learnings based on analysis of deep data, these technologies have the potential to rapidly scale and improve the lives of our most underserved communities.

Take chatbots, for example, a new spin on mobile messaging that has historically been human-powered. Organizations like TalkingPoints and mRelief have for years used simple mobile messaging to meet users where they're at. Recently, tech nonprofits have been taking a new approach. Raheem.ai, a Facebook Messenger bot for reporting and rating experiences with police officers, engages with users to walk them through reporting police incidents and provide follow-on support. The interactions are simple, but powerful. Do Not Pay, the world's first robot lawyer, started out as a bot to appeal parking tickets and now helps fight landlords in negligent housing situations, and even helps the homeless find and apply for social services. These chatbots eliminate the friction of traditional reporting and serve as legal empowerment in your pocket.

Crisis Text Line still implements a human-to-human volunteer model, but the tech nonprofit has the largest open source database of youth crisis behavior in the country, and has been able to use AI to dramatically shorten response time for high-risk texters, from 120 seconds to 39. Crisis Text Line leveraged machine learning to identify the term "ibuprofen" as 16 times more likely to predict the need for emergency aid than the word "suicide." Now, using AI, messages containing the word "ibuprofen" are prioritized in the queue.
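The prioritization Crisis Text Line describes can be pictured with a toy sketch. This is only an illustration under made-up term weights, not the organization's actual model, which is trained on its own crisis-conversation data:

```python
# Toy illustration of risk-based triage: score each incoming message by
# learned term weights, then serve the waiting queue highest-risk first.
# The weights below are invented for this example; per the article,
# "ibuprofen" was 16x more predictive of emergency need than "suicide".
RISK_WEIGHTS = {"ibuprofen": 16.0, "suicide": 1.0, "hopeless": 0.5}

def risk_score(message: str) -> float:
    """Sum the weights of known risk terms found in the message."""
    return sum(RISK_WEIGHTS.get(word, 0.0) for word in message.lower().split())

def triage(queue: list[str]) -> list[str]:
    """Order waiting messages from highest to lowest estimated risk."""
    return sorted(queue, key=risk_score, reverse=True)

queue = ["just checking in", "i took a bottle of ibuprofen", "feeling hopeless"]
print(triage(queue)[0])  # the ibuprofen message jumps to the front
```

A production system would learn such weights from labeled conversations rather than a hand-written table, but the queue-reordering logic is the same idea.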

Machine learning even allows you to select the energy source that powers your home appliances. WattTime creates software that enables smart hardware devices to prioritize clean energy with a simple flip of a switch. Their product relies on machine learning to detect when to tell smart devices like thermostats to pull from the power grid, based on surges in clean energy. This means your A/C may turn on five minutes earlier or later than it typically would, because the algorithms instruct your utilities to capitalize upon instances of excess clean energy from sources like windmills, thus minimizing the use of dirty power.
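The scheduling idea behind this kind of load shifting can be sketched in a few lines. This is a hypothetical simplification, not WattTime's software, which works from real marginal-emissions signals:

```python
# Toy sketch of clean-energy load shifting: given a short forecast of grid
# carbon intensity, pick the cleanest hour to run a flexible appliance.
# The forecast numbers below are invented for illustration.

def cleanest_hour(forecast_g_per_kwh: list[float]) -> int:
    """Return the index of the hour with the lowest forecast carbon intensity."""
    return min(range(len(forecast_g_per_kwh)), key=lambda h: forecast_g_per_kwh[h])

# Hypothetical 6-hour forecast (gCO2/kWh); a windy hour 3 is cleanest.
forecast = [480.0, 450.0, 390.0, 210.0, 340.0, 500.0]
print(cleanest_hour(forecast))  # 3: delay the thermostat cycle about 3 hours
```

The real decision also weighs user comfort constraints (your A/C shifts by minutes, not hours), but the core is this comparison of "run now" against cleaner moments in the forecast window.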

Quill, a free online tool that helps students measurably improve grammar and writing, discovered that natural-language processing was essential to remedy students' struggles with sentence fragmentation. Using open source tools and online training programs, Quill's technical team built its own fragment-detection algorithm, powered by a combination of machine learning and natural-language processing. Quill's methodology is exemplary for resource-constrained tech nonprofits. It leveraged Wikipedia to amass a dataset of 100,000 high-quality sentences, integrated the natural-language processing tool Spacy.io to break the sentences down, and incorporated Tensorflow for data classification.

The result? Quill's fragment-detection algorithm accurately detected sentence fragments 84 percent of the time, and this will only continue to improve. Other tech nonprofits, like Dost Education, forecast using natural-language processing down the line to monitor their impact assessments with teachers and parents.
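As a rough intuition for what a fragment detector learns, here is a deliberately naive stand-in. Quill's real pipeline parses sentences with Spacy.io and classifies them with Tensorflow; this heuristic and its verb list are invented for illustration:

```python
# Naive fragment heuristic: a sentence containing no finite verb from a small
# hand-made list is flagged as a likely fragment. A trained classifier replaces
# this brittle lookup with features derived from a full dependency parse.
FINITE_VERBS = {"is", "are", "was", "were", "has", "have", "runs", "ran", "went"}

def looks_like_fragment(sentence: str) -> bool:
    words = {w.strip(".,!?;:").lower() for w in sentence.split()}
    return not (words & FINITE_VERBS)  # no finite verb found -> fragment

print(looks_like_fragment("Running through the park."))   # True (fragment)
print(looks_like_fragment("She runs through the park."))  # False (complete)
```

The hand-made list fails on any verb it has never seen, which is exactly why Quill needed a learned model trained on a large sentence corpus instead.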

While many instances of AI pool internally sourced data, data mining allows organizations to execute deep research faster, or to scrape mass information on their target market to make product decisions based on behaviors and trends. The Pulitzer Prize-winning reporting on the Panama Papers conveys the growing importance of data mining in investigative journalism. With 261 gigabytes of data, data mining was essential if the team of 100 journalists were to dig through the largest mass of leaked data in the history of journalism.

Transparency Toolkit, a Berlin-based tech nonprofit, launched its first tool, ICWatch, which implements data mining to scrape information from publicly available profiles and resumes to identify individuals involved in activities ranging from government surveillance to drone strikes. The organization runs several different tools and projects designed to democratize the big data playing field for human rights activists and journalists.

As the cost of AI implementation drops, it will become ubiquitous across software. The AI use case for a nonprofit is significant because incentives are well aligned to collect and open source the collected data. Effective implementation of AI requires massive data. Profit motives can restrain a company's incentive to open its data, but this is not so for nonprofits. Open data serves the broader purpose of public education and knowledge sharing. As tech nonprofits deploy these technologies and open source their findings, they can deepen the capacity of all AI applications.

However, corporations have a role to play, too. Businesses like Google and Accenture are leveraging their internal AI talent to build tools for positive impact. Google.org is working with Pratham Books StoryWeaver, a platform that connects readers, authors, illustrators and translators to massively expand the number of childrens e-books available in mother tongues. Through an integration with the AI-powered Google Translate API, StoryWeaver is expanding its library to 200,000 titles in 60 languages.

Accenture sees Responsible AI as both an opportunity and a responsibility for business, government and technology leaders to apply the technology in the right way, using human-centric design principles such as accountability, transparency and fairness. Accenture Labs in Bangalore is developing workforce accessibility solutions called Drishti, using Responsible AI to empower the visually impaired, in collaboration with the National Association for the Blind.

The tech for good use cases for AI are endless, ranging from refugee aid, to bankruptcy filings, to predictive solutions in child welfare. We are still in the early days of true implementation of AI, but in the tech nonprofit sector, the future looks bright.

Shannon Farley is the co-founder and executive director of Fast Forward, the first and only accelerator exclusively for tech nonprofits. Through her work at Fast Forward, Farley has accelerated 23 tech nonprofits that are now impacting over 18 million lives around the world. Previously, she was the founding executive director of Spark, the world's largest network of millennial philanthropists; she also co-founded The W. Haywood Burns Institute, a MacArthur Award-winning juvenile-justice reform organization. Reach her @Shannon_Farley.

Follow this link:

Nonprofits, not Silicon Valley startups, are creating AI apps for the greater good - Recode

Startup Paves Easier Path to AI – Multichannel News

Implementing artificial intelligence systems can be technically challenging and expensive, but it doesn't have to be.

So says DimensionalMechanics, a startup based in Bellevue, Wash., that claims to have developed a platform that can put A.I. within reach of a wide range of companies, with an initial focus on those in the media and entertainment industry.

The goal is to lower that technology and economic bar in a way that makes A.I. more accessible to organizations without requiring them to have a technical background in areas such as deep learning and machine learning, company CEO and co-founder Rajeev Dutt said, noting that many are also looking for A.I. solutions that are not just affordable but customizable as well.

To help achieve some of those goals, DimensionalMechanics has introduced NeoPulse AI Studio, a set of applications based on the company's underlying framework that, it says, can help businesses and other organizations rapidly create and design customized A.I. solutions. That product complements the company's pre-built AI models in areas such as image and video analysis and recommendation systems.

The company, which has raised $6.7 million and intends to raise a B round this fall, is also getting a boost into the media and entertainment world through a strategic alliance with GrayMeta, a company that specializes in automated metadata collection, curation and search.

GrayMeta, which counts ABC, AMC, CBS, Deluxe, DirecTV, Disney, HBO, NBCUniversal and Showtime among its clients, is also the first to offer NeoPulse AI to the media and entertainment sector, DimensionalMechanics said.

Dutt said the media, entertainment and advertising industries are among the biggest producers and consumers of data, providing a proving ground for a lot of machine learning technologies.

Some use-case examples include a photo-ranking system that was trained using 2 million images to determine which ones might make an ad or news article more likely to grab attention or drive and maximize traffic. The technology is also being used to help editors analyze and write headlines that can improve click rates.

On the video side, the company also provides A.I. solutions to drive recommendations.

DimensionalMechanics has carved out a set of business models, including cloud software for independent developers, on-premises solutions that can simulate the cloud-based system while keeping a company's data close to the vest, as well as a way for partners to resell and monetize their A.I. models through the NeoPulse AI Store.

"There's a fairly broad range of applications," Dutt said.

Founded in 2015, DimensionalMechanics currently has 11 employees.

Go here to see the original:

Startup Paves Easier Path to AI - Multichannel News

AMP Robotics Named to Forbes AI 50 | RoboticsTomorrow – Robotics Tomorrow

Company recognized among rising stars of artificial intelligence for its AI-guided robots transforming the recycling industry

Forbes has named AMP Robotics Corp. ("AMP"), a pioneer and leader in artificial intelligence (AI) and robotics for the recycling industry, one of America's most promising AI companies. The publication's annual "AI 50" list distinguishes private, U.S.-based companies that are wielding some subset of artificial intelligence in a meaningful way and demonstrating real business potential from doing so. To be included on the list, companies needed to show that techniques like machine learning, natural language processing, or computer vision are a core part of their business model and future success.

AMP's technology recovers plastics, cardboard, paper, metals, cartons, cups, and many other recyclables that are reclaimed for raw material processing. AMP's AI platform uses computer vision to visually identify different types of materials with high accuracy, then guides high-speed robots to pick out and recover recyclables at superhuman speeds for extended periods of time. The AI platform transforms images into data to recognize patterns, using machine learning to train itself by processing millions of material images within an ever-expanding neural network of robotic installations.

"We consider AMP a category-defining business and believe its artificial intelligence and robotics technology are poised to solve many of the central challenges of recycling," said Shaun Maguire, partner at Sequoia Capital and AMP board member. "The opportunity for modernization in the industry is robust as the demand for recycled materials continues to swell, from consumers and the growing circular economy."

AMP's "AI 50" recognition comes on the heels of receiving a 2020 RBR50 Innovation Award from Robotics Business Review for the company's Cortex Dual-Robot System. Earlier this year, Fast Company named AMP to its "World's Most Innovative Companies" list for 2020, and the company captured a "Rising Star" Company of the Year Award in the 2020 Global Cleantech 100.

Since its Series A fundraising in November, AMP has been on a major growth trajectory as it scales its business to meet demand. The company announced a 50% increase in revenue in the first quarter of 2020, a rapidly growing project pipeline, a facility expansion in its Colorado headquarters, and a new lease program that makes its AI and robotics technology even more attainable for recycling businesses.

About AMP Robotics Corp.

AMP Robotics is applying AI and robotics to help modernize recycling, enabling a world without waste. The AMP Cortex high-speed robotics system automates the identification and sorting of recyclables from mixed material streams. The AMP Neuron AI platform continuously trains itself by recognizing different colors, textures, shapes, sizes, patterns, and even brand labels to identify materials and their recyclability. Neuron then guides robots to pick and place the material to be recycled. Designed to run 24/7, all of this happens at superhuman speed with extremely high accuracy. With deployments across the United States, Canada, Japan, and now expanding into Europe, AMP's technology recycles municipal waste, e-waste, and construction and demolition debris. Headquartered and with manufacturing operations in Colorado, AMP is backed by Sequoia Capital, Closed Loop Partners, Congruent Ventures, and Sidewalk Infrastructure Partners ("SIP"), an Alphabet Inc. (NASDAQ: GOOGL) company.

AMP Robotics Named to Forbes AI 50 | RoboticsTomorrow - Robotics Tomorrow

Doctors are burdened by documentation, are AI scribes the answer? – MobiHealthNews

Before Dr. Matthew Fradkin was a pediatrician, he played in a punk rock band. Aside from the years of training required to become a physician, he said there were actually some similarities between the two, namely the search for human connection.

"They have the same core; the in-the-moment human connection is really important to both. Bringing back the human connection in medicine is where I see digital and AI advancing in helping to turn back the clock in terms of the patient-provider experience," Fradkin, a pediatrician at Swedish Primary Care and Providence St. Joseph Health, said during a HIMSS20 Digital event.

With more and more documentation piling up in the medical world, Fradkin said, that connection with the patient is in jeopardy, and provider burnout is on the rise.

"I want to find ways to make caregiving easier for our providers, and, by finding ways for our providers to care for patients the way they want to in a large healthcare system, push toward standardization for population health advances. Or, basically: how do we prevent provider burnout using various digital tools and technology?"

Physician burnout is associated with work factors, personal-life balance, and organizational factors, he said.

"The only thing I can really make an impact on is the work factors. I'm probably not going to change a 50-year-old primary care doctor's personality, and I'm not in the C-suite, so I'm pretty sure I can't make structural change overnight. So work factors are the obvious category to start addressing."

He decided to start with a notorious pain point in care: the documentation. He noted that, while EMRs provide doctors with a plethora of data points, they also shoulder doctors with the bulk of the input burden. That was when he began to look at AI and machine-learning scribes to help ease this issue.

"We want ambient technology to provide an accurate note where providers don't change a thing in how they deal with patients in the clinical room or in their head," he said.

He started to work with the digital innovation team at his health system to look into pilots for fixing this issue.

"Part of the digital innovation core of Swedish and Providence St. Joe's is the accelerated pilot process. This is a set framework that allows providers to investigate possible new technologies to help with their clinic experience," he said. "This pilot allowed us to follow that framework and to evaluate possible vendors in a virtual arena, come up with KPIs, or key performance indicators, and streamline the path through IT, security, legal or any other red tape that prevents pilots from occurring in a large health system."

The tech that Fradkin decided to pilot was an AI-based medical scribe that can learn a provider's individual style and preferences over time.

"Initially, so the provider does not have to do the heavy lifting in training the ML, there is a real-person reviewer offsite, reviewing what the ML is coming up with after a visit, correcting it and making adjustments in the background based on my personal and organization-wide templates already in the system," he said.

The system lets providers choose how they would like to use it. Some used it just for simple transcription notes. Others used the snippet mode, where they say a phrase that prompts the system to follow a certain path. Lastly, there is an ambient mode, which listens to the whole encounter and can do the bulk of the charting.

"During and after the pilot I was able to go from seeing 16 to 17 patients a day to 23, with a range of 22 to 30 over the entirety of the pilot. For return on investment with the particular product we used, I would have to see one extra patient a week, four extra patients a month, to pay for the product," he said.

But Fradkin said that this type of system would do more than just boost the number of patients he could see. It would also help with burnout.

"Venturing into this new realm of AI-driven scribes is an exciting one, and one that needs to provide caregivers flexibility and responsiveness, the same factors parents expect from their providers in this new digital age of medicine," he said. "Yes, there is a cost to this, never mind the fact that most of these solutions pay for themselves in the short run. But if we had the same urgency about finding an answer to 'the cost of losing physicians to burnout' from the EMR as we do to any other disease, we wouldn't even be having this discussion about cost. It makes care better for the providers and patients."

Coronavirus will finally give artificial intelligence its moment – Bryan-College Station Eagle

For years, artificial intelligence seemed on the cusp of becoming the next big thing in technology - but the reality never matched the hype. Now, the changes caused by the covid-19 pandemic may mean AI's moment is finally upon us.

Over the past couple of months, many technology executives have shared a refrain: Companies need to rejigger their operations for a remote-working world. That's why they have dramatically increased their spending on powerful cloud-computing technologies and migrated more of their work and communications online.

With fewer people in the office, these changes will certainly help companies run more nimbly and reliably. But the centralization of more corporate data in the cloud is also precisely what's needed for companies to develop the AI capabilities - from better predictive algorithms to increased robotic automation - we've been hearing about for so long. If business leaders invest aggressively in the right areas, it could be a pivotal moment for the future of innovation.

To understand all the fuss around artificial intelligence, some quick background might be useful: AI is based on computer science research that looks at how to imitate the workings of human intelligence. It uses powerful algorithms that digest large amounts of data to identify patterns. These can be used to anticipate, say, what consumers will buy next or offer other important insights. Machine learning - essentially, algorithms that can improve at recognizing patterns on their own, without being explicitly programmed to do so - is one subset of AI that can enable applications like providing real-time protection against fraudulent financial transactions.
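The column's one-sentence definition of machine learning, algorithms that get better at recognizing patterns from data without being explicitly programmed, can be made concrete with a toy sketch. Everything below is invented for illustration: the transaction amounts, the single-feature threshold rule, and the fraud labels. Real fraud-detection systems learn from far richer features, but the core idea, fitting a decision rule from labeled examples rather than hand-coding it, is the same.

```python
# Toy illustration of "learning a pattern from data": instead of hard-coding
# a fraud cutoff, we derive one from labeled historical transactions.

def fit_threshold(transactions):
    """transactions: list of (amount, is_fraud) pairs.
    Returns the midpoint between the mean legitimate amount and the
    mean fraudulent amount -- a one-feature decision boundary."""
    legit = [amount for amount, fraud in transactions if not fraud]
    fraud = [amount for amount, fraud in transactions if fraud]
    return (sum(legit) / len(legit) + sum(fraud) / len(fraud)) / 2

def is_suspicious(amount, threshold):
    """Flag any transaction at or above the learned boundary."""
    return amount >= threshold

# Hypothetical labeled history: small everyday purchases vs. two frauds.
history = [(20, False), (35, False), (50, False), (900, True), (1200, True)]
cutoff = fit_threshold(history)

print(is_suspicious(15, cutoff))    # small purchase: not flagged
print(is_suspicious(1000, cutoff))  # large outlier: flagged
```

The point of the sketch is that the cutoff is a product of the data: feed the same code a different transaction history and it yields a different rule, which is what lets such systems improve as more data accumulates.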

Historically, AI hasn't fully lived up to its hype. We're still a ways off from being able to have natural, life-like conversations with a computer, or getting truly safe self-driving cars. Even when it comes to improving less advanced algorithms, researchers have struggled with limited datasets and a lack of scalable computing power.

Still, Silicon Valley's AI-startup ecosystem has been vibrant. Crunchbase says there are 5,751 privately held AI companies in the U.S. and that the industry received $17.4 billion in new funding last year. International Data Corporation (IDC) recently forecast that global AI spending will rise to $96.3 billion in 2023 from $38.4 billion in 2019. A Gartner survey of chief information officers and IT leaders, conducted in February, found that enterprises plan to double their number of AI projects, with over 40% planning to deploy at least one by the end of 2020.

As the pandemic accelerates the need for AI, these estimates will most likely prove to be understated. Big Tech has already demonstrated how useful AI can be in fighting covid-19. For instance, Amazon.com partnered with researchers to identify vulnerable populations and act as an "early warning" system for future outbreaks. BlueDot, an Amazon Web Services startup customer, used machine learning to sift through massive amounts of online data and anticipate the spread of the virus in China.

Pandemic lockdowns have also affected consumer behavior in ways that will spur AI's growth and development. Take a look at the soaring e-commerce industry: As consumers buy more online to avoid the new risks of shopping in stores, they are giving sellers more data on preferences and shopping habits. Bank of America's internal card-spending data for e-commerce points to rising year-over-year revenue growth rates of 13% for January, 17% for February, 24% for March, 73% for April and 80% for May. The data these transactions generate is a goldmine for retailers and AI companies, allowing them to improve the algorithms that provide personalized recommendations and generate more sales.

The growth in online activity also makes a compelling case for the adoption of virtual customer-service agents. International Business Machines Corporation estimates that only about 20% of companies use such AI-powered technology today, but it predicts that almost all enterprises will adopt it in the coming years. By letting computers handle the easier questions, human representatives can focus on the more difficult interactions, thereby improving customer service and satisfaction.

Another area of opportunity comes from the increase in remote working. As companies struggle with the challenge of bringing employees back to the office, they may be more receptive to AI-based process automation software, which can handle mundane tasks like data entry. Its ability to read invoices and update databases without human intervention can reduce the need for some types of office work while also improving its accuracy. UiPath, Automation Anywhere and Blue Prism are the three leading vendors in this space, according to Goldman Sachs, accounting for about 36% of the roughly $850 million market last year.

More imaginative AI projects are on the horizon. Graphics semiconductor-maker NVIDIA Corporation and luxury automaker BMW Group recently announced a deal in which AI-powered logistics robots will be used to manufacture customized vehicles. In mid-May, Facebook said it was working on an AI lifestyle assistant that can recommend clothes or pick out furniture based on your personal taste and the configuration of your room.
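The invoice-reading task described above can be sketched in a few lines. This is not how UiPath or any named vendor actually works; the invoice layout, field names, and `ledger` dictionary standing in for a database are all invented for illustration. It simply shows the shape of the rote work being automated: extract structured fields from a document, then update a record store with no human in the loop.

```python
# Minimal sketch of invoice-to-database automation (hypothetical format).
import re

INVOICE = """Invoice No: 10442
Vendor: Acme Supply Co.
Total Due: $1,249.00"""

def parse_invoice(text):
    """Pull structured fields out of free-form invoice text."""
    return {
        "number": re.search(r"Invoice No:\s*(\d+)", text).group(1),
        "vendor": re.search(r"Vendor:\s*(.+)", text).group(1).strip(),
        "total": float(
            re.search(r"Total Due:\s*\$([\d,]+\.\d{2})", text)
            .group(1).replace(",", "")
        ),
    }

ledger = {}  # stands in for the database the software would update
record = parse_invoice(INVOICE)
ledger[record["number"]] = record
```

A production system would add ML-based layout detection so it is not tied to one invoice format, which is where the "AI" in these products earns its keep; the update step, however, is exactly this mechanical.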

As with the mass adoption of any new technology, there will be winners and losers. Among the winners, cloud-computing vendors will thrive as they capture more and more data. According to IDC, Amazon Web Services was number one in infrastructure cloud-computing services, with a 47% market share last year, followed by Microsoft at 13%.

But NVIDIA may be at an even better intersection of cloud and AI tech right now: Its graphics chip technology, once used primarily for video games, has morphed into the preeminent platform for AI applications. NVIDIA also makes the most powerful graphics processing units, so it dominates the AI-chip market used by cloud-computing companies. And it recently launched new data center chips that use its next-generation "Ampere" architecture, providing developers with a step-function increase in machine-learning capabilities.

On the other hand, the legacy vendors that provide computing equipment and software for in-office environments are most at risk of losing out in this technological shift. This category includes server sellers like Hewlett Packard Enterprise Company and router-maker Cisco Systems, Inc.

We must not ignore the more insidious consequences of an AI renaissance, either. There are a lot of ethical hurdles and complications ahead involving job loss, privacy and bias. Any increased automation may lead to job reductions, as software and robots replace tasks performed by humans. As more data becomes centrally stored on the cloud, the risk of larger data breaches will increase. Top-notch security has to become another key area of focus for technology and business executives. They also need to be vigilant in preventing algorithms from discriminating against minority groups, starting with monitoring their current technology and compiling more accurate datasets.

But the upside of greater computing power, better business insights and cost efficiencies from AI is too big to ignore. So long as companies proceed responsibly, years from now, the advances in AI catalyzed by the coronavirus crisis may be one of the silver linings we remember from 2020.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners. Kim is a Bloomberg Opinion columnist covering technology.
