ObEN raises $5M from Tencent to create AI celebrities for AR … – TechCrunch

The worlds of VR and AR may deeply change how we interact with computers, but they may also change how we follow celebrities.

ObEN, which uses artificial intelligence-powered tools to create avatars that look and sound like the real deal, announced today that it has raised $5 million in strategic investment from Tencent as it looks to tackle the AI celebrity scene. Li Ruigang, chairman of CMC and Fengshion Capital, also participated in the round. The startup has raised nearly $13 million to date.

The company, which recently graduated from HTC's Vive X virtual reality accelerator, has shifted its focus a bit as it's moved from helping VR gamers make more accurate avatars to building virtual copies of celebrities that can interact with fans. With this move, ObEN seems to be moving to capitalize on the popularity of things like mask filters and Snap's Bitmoji figures to bring celebrity avatars into augmented reality experiences.

The AI-powered tech takes a photo and audio snippet and creates a virtual celebrity that not only looks similar but talks in a familiar tone. Users may one day be able to ask their favorite pop stars for advice and receive some AI wisdom in return. It's all a bit odd and feeds off the celebrity worship that probably isn't a great hallmark of our society, but it makes for some user-generated content that is much more interactive than a simple chatbot.

With platforms like Apple's ARKit soon to launch, the startup will soon have millions of phones on which to serve up its little assistants. The Tencent partnership makes a lot of sense for ObEN, which is watching the development of AR and VR usage in Asian markets very closely. In April, the startup showcased a demo of WeChat services integrated into VR.

While some VR companies have devoted heavy resources to creating volumetric scans of public figures, ObEN is looking to achieve similar results while only focusing on nailing the faces, something that could make creating celebrity-focused content incredibly easy.


Artificial Intelligence, Privacy, And The Choices You Must Make – Forbes

The smart use of AI requires thoughtful choices.

Our lives are full of trade-offs.

Speed versus accuracy. Efficiency versus predictability. Flexibility versus commitment. Surely Some versus Maybe More.

Artificial Intelligence (AI) presents us with yet another round of trade-offs. There's no doubt about AI's labor-saving benefits. But at what price? And are the benefits worth the price?

For some thoughtful insights we can turn to Rhonda Scharf's book, Alexa Is Stealing Your Job: The Impact of Artificial Intelligence on Your Future.

In the first part of my conversation with Rhonda (see What Role Will [Does] Artificial Intelligence Play In Your Life?), we explored the evolution of AI in recent years. In this second part of the conversation, Rhonda addresses the all-important issue of privacy and the ways AI is already affecting career choices and opportunities.

Rodger Dean Duncan: If they're concerned about privacy with their technology devices, what can people do?


Rhonda Scharf: Turning off the geotracking on your cell phone doesn't mean you can't be tracked. You can be tracked through your phone's internal compass, air pressure reading, weather reports, and more. Your location can be accurately identified, even if you are on an airplane!

So, I say, too little, too late. Even if you refuse to use any technology at all, the fact that your cousin posted your photo online means you can be facially identified in the future.

That doesn't mean you have zero privacy, or that big brother is watching. You can limit the privacy invasion by shutting off your phone, passing on wearable technology, removing yourself from social media, and making sure you have no AI gadgets in your home (thermostats, smart speakers, automated plugs, motion sensors, etc.). However, you'll remove a lot of conveniences as well as the time- and money-saving features that come with them.

Is it worth it? For some, yes. For me, no. I'll give up my privacy for convenience and support. My theory is that I've got nothing to hide, so why worry?

Duncan: With rapid advances in AI, the choices for workers seem clear: passively wait for technology to replace their jobs, or be proactive and strategic in discovering how to use technology to create better careers. What are the keys to succeeding with the latter approach?

Scharf: It is essential to ask a lot of questions to determine how quickly you'll need to make changes to protect your career.

By asking yourself these key questions, you will open your eyes to your imminent future. By responding rather than reacting, you can create a better career.

Duncan: What contributions do you expect AI to make in the fields of teaching and learning?


Scharf: There is undoubtedly potential for AI to impact the fields of teaching and learning through the use of systems, such as the automatic grading of papers (the same way AI can scan resumes and identify ideal applicants today).

Imagine if droids or chatbots taught our children. Each child would have a customized learning environment, with the lessons specific to the needs of the child. Imagine having the ability to ask every single question you needed to ask, and having things explicitly explained for you. AI would know that it took you 10 percent longer than average to answer a math question about fractions. It would instinctively know you were taking a little longer to process this information, indicating you were struggling with it. The chatbot or droid would see that you needed more time or more review with that concept. Classrooms would no longer move at the speed of the slowest learners but instead move at the speed of each learner.

Duncan: What can today's companies learn from the Blockbuster versus Netflix experience?

Scharf: Blockbuster was a giant in the video-rental business. But six years after its peak in the market, it filed for bankruptcy. This wasn't because Blockbuster refused to adapt (the company added video games, video-on-demand, DVDs by mail, etc.). It was because its executives lacked vision; they adapted but didn't forecast.

Netflix did the opposite and forecasted its future based on the changing needs of its clients. Interestingly enough, Netflix offered itself for sale to Blockbuster for only $50 million, and Blockbuster turned it down. Netflix is currently worth just shy of $135 billion, which makes it the world's most highly valued media and entertainment company.

When we look to a future with AI, we need to look further than next week. Strategic planning needs to be strategic, not reactive. By taking a long-range view, you can stay ahead of the curve. If you haven't employed any AI in your business at this point, you are already reactive. Jump on the bandwagon now; otherwise, you'll end up just like Blockbuster: a great company with lousy vision. AI is your prescription for a bright future.

Next: How Will Your Career Be Impacted By Artificial Intelligence?


Arnold Milstein to Explore Innovative Uses of AI by Health Insurers at HIMSS20 – Yahoo Finance

Prealize Co-Founder to Highlight Real-World Deployment Learnings

PALO ALTO, Calif., March 4, 2020 /PRNewswire/ -- Prealize, an artificial intelligence (AI)-enabled health insights company, today announced that Arnold Milstein, MD, professor of medicine and director of the Clinical Excellence Research Center at Stanford University, and Prealize co-founder, will be a featured speaker during a SPARK session at the 2020 HIMSS Global Health Conference & Exhibition, taking place March 9-13 in Orlando.

During his presentation, "Brass Tacks and Brainware: How to Really Succeed With AI," Dr. Milstein will discuss AI's power to foresee tomorrow's preventable costly health crises and its conversion into higher healthcare value for patients and lower medical loss ratios for health insurers. He will highlight practical applications of AI and the need to shift human perspectives, or our brainware, in order to fully enable proactive approaches to healthcare.

"It's not just about hardware and software," said Dr. Milstein. "The biggest challenges, and in turn the biggest rewards, hinge on helping diverse stakeholders reimagine their roles and workflows to leverage increasingly powerful applications of machine learning."

Dr. Milstein's session will take place Wednesday, March 11, from 11:30 a.m. to noon in room W300 in the Orange County Convention Center.

About Prealize

Prealize marries state-of-the-art AI-enabled data science with "next-best action" health insights. Based in Palo Alto, Calif., the company was founded by two industry thought leaders from Stanford. Committed to transforming the healthcare system from reactive to proactive, reducing healthcare costs and enabling more people to live healthier lives, Prealize partners with health plans, employers and providers across the nation to positively influence the health trajectory of millions of people. For more information, visit http://www.prealizehealth.com or email info@prealizehealth.com.

View original content: http://www.prnewswire.com/news-releases/arnold-milstein-to-explore-innovative-uses-of-ai-by-health-insurers-at-himss20-301016602.html

SOURCE Prealize


Why per-seat pricing needs to die in the age of AI – TechCrunch

Instead, create pricing models that maximize product usage and product value

Pricing is the most important, least-discussed element of the software industry. In the past, founders could get away with giving pricing short shrift under the mantra "the best product will ultimately win." No more.

In the age of AI-enabled software, pricing and product are linked; pricing fundamentally impacts usage, which directly informs product quality.

Therefore, pricing models that limit usage, like the predominant per-seat per month structure, limit quality. And thus limit companies.

For the first time in 20 years, there is a compelling argument to be made for changing the way that SaaS is priced. For those selling AI-enabled software, it's time to examine new pricing models. And since AI is currently the best-funded technology in the software industry by far, pricing could soon be changing at a number of vendors.

Per-seat pricing makes AI-based products worse. Traditionally, the functionality of software hasn't changed with usage. Features are there whether users take advantage of them or not; your CRM doesn't sprout new bells and whistles when more employees log in; it's static software. And since it's priced per user, a customer incurs more costs with every user for whom it's licensed.

AI, on the other hand, is dynamic. It learns from every data point it's fed, and users are its main source of information; usage of the product makes the product itself better. Why, then, should AI software vendors charge per user, when doing so inherently disincentivizes usage? Instead, they should design pricing models that maximize product usage, and therefore, product value.
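To make the contrast concrete, here is a minimal sketch of the two billing models. All rates and tier boundaries are hypothetical, invented for illustration, not any vendor's actual pricing:

```python
# Hypothetical comparison: per-seat vs. usage-based billing.
# Every number here is an invented assumption for illustration only.

def per_seat_bill(seats: int, price_per_seat: float = 50.0) -> float:
    """Flat monthly charge per licensed user, regardless of usage."""
    return seats * price_per_seat

def usage_based_bill(api_calls: int) -> float:
    """Tiered per-call pricing: the marginal rate falls as usage grows,
    so the model rewards (rather than penalizes) heavier usage."""
    tiers = [(100_000, 0.0010), (900_000, 0.0005), (float("inf"), 0.0002)]
    bill, remaining = 0.0, api_calls
    for size, rate in tiers:
        used = min(remaining, size)
        bill += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return bill

# A 200-seat customer pays the same whether or not anyone logs in...
print(per_seat_bill(200))           # 10000.0
# ...while the usage-based bill grows sublinearly with adoption
# (100k calls at $0.001, 900k at $0.0005, the rest at $0.0002).
print(usage_based_bill(2_000_000))  # about 750
```

Note the design choice: under the per-seat model the bill rises with head count even if usage is flat, whereas the tiered model makes every additional interaction (the thing that trains the AI) cheaper at the margin.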

AI-enabled software promises to make people and businesses far more efficient, transforming every aspect of the enterprise through personalization. Software tailored to the specific needs of the user has been able to command a significant premium relative to generic competitors; for example, Salesforce offers a horizontal CRM that must serve users from Fortune 100s to SMBs across every industry. Veeva, which provides a CRM optimized for the life sciences vertical, commands a subscription price many multiples higher, in large part because it has been tailored to the pharma user's end needs.

AI-enabled software will be even more tailored to the individual context of each end-user, and thus, should command an even higher price. Relying on per-seat pricing gives buyers an easy point of comparison ($/seat is universalizable) and immediately puts the AI vendor on the defensive. Moving away from per-seat pricing allows the AI vendor to avoid apples-to-apples comparisons and sell their product on its own unique merits. There will be some buyer education required to move to a new model, but the winners in the AI era will use these discussions to better understand and serve their customers.

Probably the most important upsell lever software vendors have traditionally used is tying themselves to the growth of their customers. As their customers grow, the logic goes, so should the vendors contract (presumably because the vendor had some part in driving this growth).

Tethering yourself to per-seat pricing will make contract expansion much harder.

However, effective AI-based software makes workers significantly more efficient. As such, seat counts should not need to grow linearly with company growth, as they have in the era of static software. Tethering yourself to per-seat pricing will make contract expansion much harder. Indeed, it could result in a world where the very success of the AI software will entail contract contraction.

Here are some key ideas to keep top of mind when thinking about pricing AI software:

This is the same place to start as in static-software land. (Check out my primer on this approach here.) Work with customers to quantify the value your software delivers across all dimensions. A good rule of thumb is that you should capture 10-30% of the value you create. In dynamic-software land, that value may actually increase over time as the product is used more and the dataset improves. It's best to calculate ROI after the product gets to initial scale deployment within a company (not at the beginning). It's also worth recalculating after a year or two of use and potentially adjusting pricing. Tracking traditionally consumer-oriented usage metrics like DAU/MAU becomes absolutely critical in enterprise AI, as usage is arguably the core driver of ROI.

While ROI is a good way to determine how much to charge, do not use ROI as the mechanism for how to charge. Tying your pricing model directly to ROI created can cause lots of confusion and anxiety when it comes time to settle up at year-end. This can create issues with establishing causality and sets up an unnecessarily antagonistic dynamic with the customer. Instead, use ROI as a level-setting tool and other mechanisms to determine how to arrive at specific pricing.
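As a back-of-the-envelope illustration of using ROI as a level-setting tool rather than a billing mechanism, here is the 10-30% value-capture rule of thumb expressed as a price band (the $2M figure is an invented example):

```python
# Hypothetical level-setting: derive an annual price band from the
# measured value the software creates, per the 10-30% rule of thumb.

def price_band(annual_value_created: float,
               capture_low: float = 0.10,
               capture_high: float = 0.30) -> tuple:
    """Return the (low, high) annual price implied by capturing
    10-30% of the value the software delivers to the customer."""
    return (annual_value_created * capture_low,
            annual_value_created * capture_high)

# If a deployment-scale ROI analysis shows $2M/year of value created:
low, high = price_band(2_000_000)
print(f"Annual price band: ${low:,.0f} - ${high:,.0f}")
```

The point of the band is that the actual contract price is then negotiated inside it via some other mechanism (seats, usage, flat platform fee), so the vendor never has to prove causal ROI at settle-up time.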


All Great Artists Share This One Quality – Can AI Learn It Too? – Singularity Hub

Think about your favorite work of art. Why do you like it so much? What does it do for you?

Be it painting, sculpture, music, or writing, we love art not just for its beauty, but for the reactions and emotions it evokes in us. You probably feel a sort of kinship with your favorite artists even though you've never met them, because their work speaks to you in what feels like a unique and personal way.

How does this change when the art in question is produced by a machine and not a human? Is creativity an irreplaceable human skill, or will computers be able to learn it?

In a new video from Big Think, Andrew McAfee, associate director of MIT Sloan School of Management's Center for Digital Business, discusses these questions and explores the concept of creative AI.

McAfee notes that as it stands, AI can mimic some forms of creativity and create art. Generative design, for example, lets you input specifications including materials, budget, and manufacturing methods into software, and it generates design alternatives. In many cases these alternatives look and perform better than human-conceived designs.

Robots can paint in the style of a master artist or their own style. Software can also compose music, and when people don't know they're listening to AI-generated music, they like it.

The standout feature of these computer-generated forms of art is that they require human inputs before they're able to create something. Design software needs parameters to know what it's working with, and music software needs code for the basic rules of music as a starting point.

Based on one of the definitions of creativity McAfee mentions (the ability to come up with something that's valuable and also novel), this software technically qualifies as creative.

Luckily for us humans, though, McAfee offers a second, more profound definition of creativity: understanding the human condition, illuminating it, and reflecting it back to us in a way that we respond to.

While AIs can create original works of art if we give them some guidance, they certainly don't have any awareness of the fundamental conditions of being human, such as being aware of our own mortality, living inside a designated physical body, and probably most importantly, interacting with and relating to other humans.

McAfee calls our understanding of these the "native speaker's intuition" about the human condition, and though he's a self-proclaimed technology optimist, he says, "I'm skeptical we're going to be able to successfully convey that intuition even to a really big, really sophisticated piece of technology. If that day ever comes, it's a long way away."

But besides wondering whether AI will ever be able to understand the human condition and reflect it back to us in a meaningful way, shouldn't we also be wondering why (or, better yet, whether) we want it to be able to?

We've already created AIs that can diagnose illness, drive cars, and win at Go. Siri can answer questions. Google Home and Amazon Echo help run our homes.

As more tasks become automated (and are thus performed far more efficiently than we were performing them), more jobs will be lost. Optimists believe the shift created by technological unemployment will unleash the world's creativity, allowing us to work less and devote more time and energy to our true passions, which for many people involve creative endeavors.

If this best-case scenario proves true and we end up with lots of time on our hands to create whatever our hearts desire, it seems like giving AI an understanding of the human condition would just be one more way to render ourselves obsolete, and in the process, relinquish the final quality that differentiates us from machines and makes us human.

Instead of asking whether AI can learn the one quality that makes humans creative, then, the more pertinent question is: should we let it?

The decision, for now, is in our (uniquely creative) hands.



AIOps and cybersecurity: the power of AI in the backend – TechHQ

Artificial intelligence (AI) continues to attract buzz and fascination in the business world, but it's fast becoming an essential tool in making sense, and use, of the masses of data we accumulate.

In simple terms, AI is machines executing tasks based on smart algorithms; computers learning and acting from rich datasets without being explicitly programmed to do so.

As consumers, or just people, we encounter this technology frequently in the form of predictive modelling, or machine learning, where models are built for making future decisions based on new data points. That takes form in the products and services we use daily, whether it's Netflix recommending what you want to watch next, Google Maps knowing you'll probably be going home at 6pm, or your Revolut account flagging an anomalous payment.

It's the draw of smart products like these that has led 93% of UK and US organizations to consider AI a business priority, according to a recent Vanson Bourne study commissioned by SnapLogic.

Too often, though, the power of AI and machine learning is enjoyed only by the product's users in the form of UX benefits, and not by the product-makers themselves, who may still rely on legacy solutions, nor by everyone else in the backend with their own needs for the convenience that data-crunching algorithms can offer, or for business-wide systems that could make enterprises more secure and resilient.

AI has serious applications in optimizing the backend workings of the business, and enabling organizations to predict and respond to trends, events and even threats, faster. Splunk's AI and Machine Learning in Your Organization ebook sheds light on where AI is being used for full impact behind the scenes.

While an end-user product may be refined, the developers who made it may still be dealing with complex IT structures, thousands of alerts and an increasingly opaque environment.

Now, there are software systems that can autonomously improve and replace IT operations.

Artificial intelligence for IT operations, or AIOps, is the marriage of big data and machine learning in IT. Coined by Gartner in 2017, AIOps is now a growing trend in IT. It leverages historical data to boost productivity by taking over low-value, repetitive tasks, and enables the faster remediation of issues using a combination of predictive analytics and automated incident response.

As all businesses become tech-dependent and IT teams continue to grow, AIOps has become a burgeoning industry, with no shortage of vendors and specialists emerging, focused on performance monitoring, event correlation and analysis, IT service management, and automation.

The end result? Time and money saved for the business, and more productive (and happier) engineers. "Automation continues to be the most important end-goal for IT operations teams who are swimming in data and routine tasks," OpsRamp SVP Bhanu Singh previously told TechHQ.

Gone are the days when businesses could hide in the herd; cybercriminals' techniques are so far spread that just connecting to the internet opens the door to threats, including compromised websites, phishing emails, and distributed denial of service attacks.

Unfortunately, businesses are unprepared to fully prevent, detect, and respond to the growing number and sophistication of threats.

Consider that ransomware attacks occur every 14 seconds, according to Cybersecurity Ventures' Official Annual Cybercrime Report. Given so many attacks, businesses are turning to AI and machine learning capabilities to help shore up a scarcity of cybersecurity experts.

In cybersecurity, machine learning has applications in advanced threat detection and stopping insider threats, which require a more nuanced approach to monitoring and response. Sophisticated attacks that move laterally within a network, or breaches caused by unwitting access to sensitive information can be tackled by automated and intelligent anomaly detection.

AI and machine learning can enable analysts and security teams to paw through masses of log and event data from applications, endpoints and network devices to conduct rapid investigations and uncover patterns to determine the root cause of incidents.
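As a toy illustration of the anomaly detection described above: one of the simplest approaches is to flag event counts that sit far from the statistical norm. Real security tooling uses far richer models, and the log counts below are invented, but the sketch shows the core idea:

```python
import statistics

# Invented example: failed-login counts per hour from a log pipeline.
hourly_failed_logins = [3, 5, 4, 6, 2, 4, 5, 3, 4, 97, 5, 4]

def flag_anomalies(counts, threshold=3.0):
    """Flag hours whose count lies more than `threshold` standard
    deviations from the mean -- a crude stand-in for the automated
    anomaly detection the article describes."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]

print(flag_anomalies(hourly_failed_logins))  # [9]: the 97-login spike
```

A production system would replace the z-score with models that learn per-user and per-host baselines, but the workflow is the same: establish normal, then surface deviations for an analyst to investigate.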

As the threat landscape evolves, and the cost of a cybersecurity breach becomes increasingly catastrophic for small and large businesses alike, AI and machine learning is handing organizations improvements in detection speed, impact analysis and response.

Data, and lots of it, is core to the success of any AI or machine learning initiative. To leverage the benefits of these intelligent systems within the organization, businesses must be prepared to invest the manual effort and resources required to refine the large volumes of data that give AI models the fuel they need to learn and burn.

At IBM, a company with a better view than most of the emerging technologies market, data-related struggles are a top reason the company's clients have ceased or cancelled AI projects, according to the firm's SVP of Cloud and Cognitive Software, Arvind Krishna.

Speaking at the Wall Street Journal's Future of Everything Festival last year, Krishna said that companies are finding themselves underprepared for the work and cost of acquiring and preparing that data, work comprising about 80% of an AI project.

"[...] you run out of patience along the way, because you spend your first year just collecting and cleansing the data," said Krishna. Companies can become impatient and disillusioned with the work, he explained, and "kind of bail on it."

However, the more data you have, the better, and once the grunt work and heavy lifting is out of the way, effective AI and machine learning means organizations are no longer bogged down by data; they are elevated by it. The challenge is getting there, but the benefits speak (and work) for themselves.


‘Marjorie Prime’ explores the limits of AI built from memories – Engadget

We are only human, and our recollections are imperfect. So when we try to create an account of our past, can we trust ourselves? That's the question at the heart of the new film Marjorie Prime, which opens today in 15 cities (with a national rollout to eventually follow). It is a quiet, contemplative drama that studies our fear of technology and mortality by juxtaposing people with computerized versions of themselves. Thanks to convincing performances by Jon Hamm (Walter Prime), Lois Smith (Marjorie/Marjorie Prime) and Geena Davis (Tess/Tess Prime), the movie forces us to consider if we're to blame for all the times AI goes awry. It also questions whether we're entrusting technology with too much responsibility.

In Marjorie Prime, the holograms (usually of the deceased) are meant to provide comfort, although they sometimes act as caretakers. For example, Walter Prime obligingly tells Marjorie stories of how he wooed her and when he proposed, based on the tales she had told him in the past. It's an unconventional form of therapy, but the act of talking to a loved one without fear of judgment can be just as cathartic as traditional counseling. Walter Prime also reminds Marjorie to eat, calmly questioning the excuses she comes up with to avoid doing so. By contrast, Marjorie's human caretaker, Julie, sneaks the ailing woman cigarettes when Tess and Jon aren't around.

Compared with AI, people's imperfections stand out. These imperfections are passed on to the Primes. The stories that Marjorie shares with Walter Prime (that he later tells back to her) are the versions she wants to remember. For a variety of reasons, she casually changes details like the movie she was watching with her late husband when he proposed, and even the people involved in certain events. The Primes are also designed to mimic verbal signs of hesitation like stuttering or pausing to appear more realistic, and thus more flawed.

The film challenges our mistrust of AI and technology, showing that if anything is untrustworthy, it's our own memories. We are the ones who contaminate software with our own biases. We don't need Marjorie Prime to show us that -- our own world today is full of examples: Microsoft's AI chatbot Tay, who was turned racist by Twitter users, and the company's subsequent bot Zo, who met the same fate. Some believe that in the US justice system, the use of algorithms that predict a person's potential for recidivism as a way to determine punishment is inherently biased. AI is a man-made product, and its flaws are created by us. It is also our fault when we entrust the technology with responsibilities, like making them our therapists, as the characters in Marjorie Prime have done, however unwittingly.

The film eventually takes its central idea to the logical conclusion, where we find out whether AI can even fool themselves into thinking they're human.

The questions of trusting AI and contrasting humans with machines have already been heavily explored (think: Her or the episode "Be Right Back" in Black Mirror), but Marjorie Prime delves deeper into how human nature is to blame. Yet it withholds judgement and shows how we can't help our failings, especially as we age. The beauty of humanity often lies in its flaws, and it's something AI can imitate but not fully replicate.


How AI and Teams are benefitting the littlest of patients – Stories – Microsoft

Felicitas Hanne raises her arms in delight, surrounded by some of the members of the Microsoft Germany team that developed solutions for Kinderhaus AtemReich. Photo: Microsoft

So last summer, when Hanne attended Microsoft Germany's #Hackfest2018 in Munich, a two-day Microsoft employee hackathon to help customers, partners and nonprofit organizations, she wasn't sure what to expect.

At that time, "it was my great hope that Microsoft would help me to expand and improve my work with Microsoft Access database," she says.

But as Hanne spoke to the Microsoft employees about Kinderhaus AtemReich, "we listened really carefully to what she was saying about the children, and I think half of our colleagues had tears in their eyes," says Volker Strasser, a Microsoft digital adviser who normally works with large companies. Moved by the children's challenges and those faced by Kinderhaus AtemReich, he became the project lead for the effort.

Andre Kiehne, executive sponsor of the project and a member of the Microsoft Germany leadership team, also remembers talking to Hanne that first time. "It was an emotional moment," he says. His twin daughters were born 13 years ago in the same children's hospital where the idea for Kinderhaus AtemReich was raised, and around the same time. His girls were premature babies and faced some medical problems in their first weeks (they are completely healthy now, he says), but the worry he faced remains a fresh memory.

The night the hackfest ended, Strasser remembers being unable to sleep as "thoughts circled my mind as to how we'd help Kinderhaus succeed, how we could bring these ideas to life, and how we'd scale those ideas more broadly" for other potential and much-needed Kinderhaus AtemReichs in his country.

At 3 a.m., he got out of bed and started drafting a plan that would ultimately include bringing machine learning, artificial intelligence (AI), Microsoft Teams and a modern recruiting strategy to Kinderhaus AtemReich.

For the next year, the team met for a project call every Monday at 8 a.m. "We put that meeting on Monday at that time because we wanted to start the week with the most important thing, Kinderhaus AtemReich," Strasser says.

Hanne had no idea she would wind up with a dedicated army of 50 Microsoft volunteers and partners who, over the past year, have not only provided Kinderhaus AtemReich with a digital transformation, but who also spend their own time at the facility, about 5 miles from Microsoft's Munich office, doing everything from helping clean out the cellar to tending the garden.

"The technology solutions being put into place fit the needs of AtemReich to get closer to the goal of more staff time with the children, and less on paperwork," says Hanne. "That is what touches me most of all. This incredible combination of Microsoft and partner team members' empathy, passion, know-how and time for our children can hardly be put into words, because it is so great."

Among the changes that have come to Kinderhaus AtemReich: shifting from a laborious, often manual, medical record-keeping system that only kept track of a child's vital signs to a system that compiles information such as heart rate, oxygen, breathing rhythm and blood pressure from the children's medical devices, and uses machine learning, AI, IoT and Azure tools to produce data and analysis to see if there are safety or medically related problems or trends that should be addressed.

"Before, we just copied the data from the monitors onto paper. But we were not able to evaluate or compare the incredible amounts of data provided by our devices," Hanne says. "Now we can evaluate and analyze data. This allows us to discover patterns in children and makes it possible to react faster than we could before."

Follow this link:

How AI and Teams are benefitting the littlest of patients - Stories - Microsoft

artificial intelligence | Definition, Examples, and …

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess, with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

Top Questions

Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment. Although there are no AIs that can perform the wide variety of tasks an ordinary human can do, some AIs can match humans in specific tasks.

No, artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is the method to train a computer to learn from its inputs but without explicit programming for every circumstance. Machine learning helps a computer to achieve artificial intelligence.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside. The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence, conspicuously absent in the case of Sphex, must include the ability to adapt to new circumstances.

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as "jump" unless it previously had been presented with "jumped," whereas a program that is able to generalize can learn the "add -ed" rule and so form the past tense of "jump" based on experience with similar verbs.
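The rote-versus-generalization contrast can be made concrete in a few lines of toy code (purely illustrative; no real learning system is this simple):

```python
# Rote learner: can only answer what it has literally memorized.
rote_memory = {"walk": "walked", "talk": "talked", "look": "looked"}

def rote_past_tense(verb):
    return rote_memory.get(verb)  # None for any verb never seen before

# Generalizing learner: has induced the "add -ed" rule from its examples.
def generalized_past_tense(verb):
    return verb + "ed"

print(rote_past_tense("jump"))         # None: "jump" was never memorized
print(generalized_past_tense("jump"))  # "jumped": the rule transfers
```

The rote learner fails on any unseen verb, while the generalizer extrapolates correctly to new regular verbs (and, like a real rule-learner, would over-apply the rule to irregular ones such as "go").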

Excerpt from:

artificial intelligence | Definition, Examples, and ...

Ai Weiwei Believes Americans Still Have to Fight for Democracy – Vanity Fair

The artist and activist Ai Weiwei.

By Carl Court/Getty Images.

He is the most famous Chinese artist living today, a political and artistic multi-hyphenate: political detainee, activist, philosopher, provocateur, sculptor, architect, filmmaker, installation artist, and the only person I've ever met who has an asteroid named after him: 83598 Aiweiwei. I wasn't sure what to expect as we sat down to talk earlier this month at New York's Gramercy Park Hotel; the artist, who works in Beijing and Berlin, was fresh off a plane. I was anticipating someone fierce but instead found Ai to be deeply charming, curious, a playful bear of a man who is more interested in asking questions than answering them.

Our spirited conversation was illustrative of the shift in the artist's work in recent years, away from events in China and the government's response to his work, often a commentary within a commentary, such as putting surveillance cameras on his home/studio and broadcasting live 24 hours a day while already under state surveillance. Of late he has worked to broaden his international scope, creating work in response to the global disaster of the refugee crisis and encouraging us to live in the embrace of each other's differences, an important note from a man who had grown up in a culture that allowed for no difference, no individualization.

At this point in Ai's career, there is no separation between his artwork and his political activism. His recent work includes Soleil Levant, composed of 3,500 life jackets discarded by refugees who had landed on Lesbos, which barricaded the windows of the Kunsthal Charlottenborg museum in Copenhagen. Human Flow, a film about the global refugee crisis shot in 23 countries that documents his desire to understand it, will be released this fall. His Hansel & Gretel, a collaboration with architects Jacques Herzog and Pierre de Meuron, is an immersive exploration of modern surveillance complete with selfies, and is on view at New York's Park Avenue Armory through August 6.

And in October, as part of its 40th anniversary, the Public Art Fund will present Good Fences Make Good Neighbors, titled after the poem "Mending Wall" by Robert Frost. Good Fences will consist of interruptions in the urban landscape in various locations across New York's five boroughs. Among the works under discussion, though not yet confirmed, are a large-scale sculpture in Central Park's Doris C. Freedman Plaza, which would present as a kind of beautiful golden/gilded bird cage made of steel, a visual puzzle akin to a maximum-security visitation from another planet, and would have a central portion where visitors can enter, surrounded by an inaccessible passageway containing turnstiles. Another possible location is Washington Square Park, which could host a polished mirror passageway featuring two united human silhouettes, reminiscent of an entrance that artist Marcel Duchamp (who was known to play chess in the park) designed for André Breton's Gradiva gallery in 1937.

Throughout the city, using elements of the everyday, Ai will create variations on fencing that draw on both literal and metaphorical expressions of division, questioning notions of security, exclusion, privacy, and possession. The scale of Ai's work and the scope of his imagination allow for a beautiful, often elegiac synthesis of his intellect and the aesthetic, one that is both larger than life and poetic in its simplicity.

On the nature of these immersive experiences, I suspect Ai would likely say that life itself is the most significant immersive experience, and we had best pay attention to it.

Vanity Fair: Since you were detained, did your perspective on China change? Did your perspective on the world change?

Ai Weiwei: It's supposed to change, but it doesn't change that much, because in China I was fighting for democracy and human rights. Here, in the U.S., particularly, you still have to fight for democracy and this freedom of speech. . . I realized human rights and human conditions are something every generation has to fight for. You cannot take it for granted. It's like milk; you cannot keep it fresh for very long.

I was reading a bit about your father, Ai Qing, who was considered one of the top modern Chinese poets and was later exiled and forced to work cleaning toilets, and I was curious to know if as a child you ever questioned, is this how it's supposed to be, or is this just my family? Did it all make sense?

In your society, people are always insisting on individualism. And people have very different beliefs, religious or non-religious. But in communist societies, it's just one color, the color gray. So, it's very hard for anybody even to question anything, because there is no reference. There is no such thing as difference. My family, if I think back, I'm very privileged because I know my father was a poet and studied in Paris. He talked about Lorca. And that is very different from what other people talk about. And very different from Chairman Mao's language or party propaganda. But at the same time there is a disconnect from reality. The reality is he has to clean the public toilets, 13 or 14 of them. And it's not really a toilet. It's just a hole dug in the earth; there's no paper, no water. And the people just have to pick up some cotton from the fields, or some mud or clay to clean their ass. And they're fine; that's life. Everybody's doing that. It's quite fair because everybody is doing it. So, you cannot really question anything. Because everything is given as is, almost like a fish doesn't question that it needs to be in the water, polluted or clean; there is no choice.

And was there a moment for you, when you came to New York the first time, of a kind of awakening of . . .

Oh, no. All I know about the United States is from a few writers like Mark Twain, On the Mississippi. When I was 10, I was so in love with that book. I thought, this is a fantastic story, very American, very boasting kind of epic. I think the Chinese revolution spirit was also influenced by that kind of language. Because you think you're creating a new world, a new land. So, I came to the United States. I land in New York, totally capitalism. (Laughs) It's such a harsh time for me.

Ai Weiwei's Soleil Levant, made from over 3,500 life jackets discarded by migrants on the Greek island of Lesbos, at the Kunsthal Charlottenborg museum in Copenhagen.

From AP/REX/Shutterstock.

Did you come to study art?

I started a semester at Parsons. For me, it's like kindergarten for the rich kids. I was always very frustrated. I have come from a communist society, learned all the skills of the representative, representational art. Then to be with the kids, struggling with color or other things. . . . Then, I was out, and on my own hanging around. But I don't really want to accept the so-called American dream, to gain the security, or social status. I feel not interested. But to be an artist . . . it is complete nonsense. You want to sell a lot of work? Why? To whom?

I can imagine that would both amuse you and drive you crazy, that there is this art world where rich people are buying art, you know.

Today I still don't understand the art world. The art world is like people taking drugs. It has its own reason, its own charming or high moment. But for people who don't take drugs, it doesn't seem real.

In terms of New York and this project that youre doing, were there artists that were an influence, in terms of thinking about this kind of work?

Nah, not really. I purposely disassociate myself from the so-called . . . art world. I think if you really wanna become a relevant artist, you should explore some kind of new boundary or possibility. It is nonsense to repeat anybody, it doesn't matter who.

Your work makes both a political and a cultural comment. At the same time, it's also formally both beautiful and well-resolved. I feel like people don't often talk about the formal aspects.

I realized that I'm a very passionate lover, but at the same time, I want to make a seduction beautiful. I cannot impose on anybody; you have to be very skillful. I really believe art comes out of a language that means you communicate well. The idea is very simple, but how to heighten the idea? Or have the question be double-faced? [That] is putting yourself in a vulnerable condition, and people can share that. . . . There is no elite in there; we are all the same. You cannot really teach art. It can only be discovered through yourself.

Is the word beauty of meaning to you?

It means . . . it means a lot. It means humanity. It means you have to understand the limit of our human perspective.

I find America and our political system to be absent of history or context, while I think of other countries as very invested in their history, where whatever you make you carry your history with you. Right now, we have a president who doesn't even know history . . .

But that's also an interesting characteristic, [and] that's why America can go very far. It's like a child lost in the woods; does he forget his way or where he came from? But it's really testing how much strength you have if you can still find another mushroom or another strawberry. Or you get totally lost. We see a lot of cultures that carry history, but they can't really go forward because the history is so heavy.

Your upcoming New York piece was a long time in the evolution. We sort of talked about the difference between inside and outside, being inside a fence held in versus being outside trying to get in. I'm curious about both the conception of the piece and the idea of doing it in multiple locations around the city.

There are different levels, because we all, we're all migrants somehow . . . our parents . . . we all come from somewhere as outsiders and foreigners. And eventually we have only one planet, which is totally foreign in the cosmos, unique. And so we have to recognize how much of a miracle life as a human species is, what kind of joy is even possible . . .

Is that not enough to make us want to protect each other? Do we still have to have nuclear bombs? It's ridiculous. To design a piece for a city that's like a miracle in that it has people from all over the world, a mixed society, with the most powerful and crazy minds and the most desperate people trying to find their next bread and butter, all mixed in the same location. It's like a stage for Shakespeare. It's not easy.

I want to co-exist with the conditions of those characters. And so fences, like nets, you can look through and from both sides. It looks the same, and very often you can't even tell what side you're on because there are no other references there. But it clearly divides the in and out and the left-right, east-west, bottom or top.

And how about the site up by Trump's house?

Oh, that's a very, very unique location, because Central Park and 59th Street are two blocks away from our president, or your president, or, you know, president of the universe. And he loves gold stuff. He doesn't hide intentions, but at the same time, he makes statements all the time. We never had a president like that; they're all more uptight or . . . this guy doesn't really care that much. But he is going to challenge the democratic system, which has been quite established for a long time. And he is gonna challenge that and how far he can go.

I wonder what kind of system he wants. Just the Trump system?

It's really a system of making a deal. It's like he's taken the space shuttle with all of us in it? (Laughs)

Do you feel like theres an element of performance to your work?

No. I don't enjoy that part. Performance means you're, you're not completely trusting what you're doing. . . . I think about my father's generation, a whole generation of intellectuals has been swept away. No words, completely silent. There were a lot of intelligent people, a lot of good writers who could never really speak up. My voice owes so much to them. Every time I speak up, I think of the millions of people who disappeared, and their voices were silenced. My voice is nothing, you know.

Do you feel like you now have freedom?

No, I would never feel that way. The more freedom you gain, the more responsibility you have, and then you feel such a burden. I would never have been introduced to refugees. . . . The Chinese used to say, "If you're made of iron, how many nails can you make? A few thousand, or a hundred thousand?"

Meaning how many nails you can make without using up your body?

There's an absolute limit. There's a time element. There's a space element. If I talk to you, can I talk to other people?

This interview has been edited and condensed for clarity.

Update (6:03 P.M.): This post has been updated to reflect that certain works in the upcoming Ai exhibit Good Fences Make Good Neighbors have not yet been made final.

The artist standing next to his installation, Abolition of Alienated Labor, at MoMA PS1 in Queens, New York. The piece, Pendleton said, was "a timeline of sorts."

Nauman in his Galisteo, New Mexico, studio. The piece he is working on, Days and Giorni, would later be exhibited in the 2009 Venice Biennale.

The art worlds greatest disrupters at the Venice Biennale.

Here, Linzy poses as one of Linzy's alter egos, Taiwan Braswell, before a show. "I didn't expect many people to show up," the performance artist said. "They did."

July standing in an elevator, which was a key part of her Whitney Biennial audio installation The Drifters.

The artist in Stykkishlmur, Iceland. The country's vast, awe-inspiring landscapes are often an inspiration for her works.

Kusama in her Infinity Mirrored Room. Said the artist: "I am standing at the end of the universe. My heart is filled with impressions of the mysterious brightness of nature."

Here is the original post:

Ai Weiwei Believes Americans Still Have to Fight for Democracy - Vanity Fair

China Everbright Limited and Terminus Technologies Launch A 10-Billion AI Economy Fund – PRNewswire

BEIJING, July 12, 2020 /PRNewswire/ -- China Everbright Limited ("CEL"), and Terminus Technologies (Terminus), recently announced the joint launch of "CEL AI Economy Fund", aiming to raise RMB 10 billion and operate in both Renminbi and US dollars.

So far, the fund has received RMB 7 billion in Phase 1 from institutional investors, with the US dollar tranche to launch soon. CEL AI Economy Fund focuses on the application of the AIoT+ strategy and its ecology across the entire industry. The fund aims at developing the next-generation ICT-enabled industrial chain through equity investments in areas including Smart City projects, autonomous driving solutions, smart healthcare systems, intelligent transportation, and smart retail. The establishment of this series of funds reflects China Everbright Limited's investment philosophy of making investments around the industry and combining industry with finance. CEL AI Economy Fund opens a new chapter in China Everbright Limited's investment strategy, enabling a smooth transition from the new economy into the AI economy.

The establishment of the CEL AI Economy Fund will accelerate the global deployment of Terminus' AI CITY network. By closely connecting financing and industries, it will facilitate the construction of intelligent new infrastructure; the growth of AI, 5G, Internet of Things, cloud computing and other technology-based infrastructure; and the formation of a new model of smart city. This is another major move by China Everbright Limited following its new "One-Four-Three" strategy last year. That strategy will focus on investing in 4 key industries (AIoT, the entire aircraft industry, real estate management, and retirement management) to incubate and cultivate 4 leading companies (Terminus Technologies, CALC, EBA Investments, and China Everbright Senior Healthcare Company Limited), and to raise funds for each of the four leading companies within 3 years.

According to Dr. Zhao Wei, Executive Director and Chief Executive Officer of China Everbright Limited, "As China's central government proposed to accelerate the new infrastructure construction, its featured industries such as 5G communication technology, AI, Internet of Things and data centers have become the new essential production factors. This will drive social and economic development and innovation in many aspects, and empower the industries in the intelligent new economy. Especially when China is undergoing an economic transformation and replacing old growth drivers with new ones, industrial investment cannot blindly seek scale and completeness. Instead, investors shall depend on their own advantages and focus on the real economy to serve the best interest of overall economic development. The establishment of CEL AI Economy Fund is an innovative breakthrough for China Everbright Limited. It carries out the 'Three Big and One New' strategic blueprint for the new technology sector laid out by China Everbright Group, marking an important move to strengthen the four core strategic platforms of China Everbright Limited."

Victor Ai, founder and CEO of Terminus Technologies, said: "CEL AI Economy Fund finds industrial development is the fostering land for its new economy mindset. Through industrial agglomeration and urban function upgrades, the fund will take big data, AI and the Internet of Things as the leading power to realize the intelligent economy and coordinated development. Terminus' AI CITY strategy aims to combine innovative design and cutting-edge technology to drive the development of next-generation cities, and create a new paradigm for urban construction and operation. With its rich practice and leading experience in the AI CITY field, and its core technologies and product capabilities in AI and the Internet of Things, Terminus will be able to fundamentally accelerate the development and innovation of the intelligent new economy. And CEL AI Economy Fund is another effective support for Terminus to achieve its global strategic layout of AI CITY."

On July 2, Expo 2020 Dubai announced Terminus Technologies as its 12th Official Premier Partner. Leveraging the company's cutting-edge AIoT technologies and its global competitive advantages in the field of AI City, Terminus Technologies will empower Expo 2020 Dubai with its technological innovations together with Cisco, Siemens, SAP, Accenture, and other widely known tech giants, making it possible for city-level smart solutions to exploit their full potential in the age of AI. Meanwhile, Terminus Technologies is actively engaged in establishing its first overseas branch in District 2020 and further developing its innovative AI City ecosystem.

Prior to this, in April, Terminus Technologies' first AI industrial base was launched in Chongqing. The project is a milestone in "the new infrastructure" development, part of the Chinese national strategy of Smart City construction. Supported by high-quality strategic resources at home and abroad, this model is expected to be replicated and expanded in core cities of various countries and regions across the world, establishing a global technology network of Terminus Technologies' AI CITY and achieving its goal of international construction and development.

SOURCE China Everbright Limited

Originally posted here:

China Everbright Limited and Terminus Technologies Launch A 10-Billion AI Economy Fund - PRNewswire

LinkedIn details AI tool that better matches jobs to candidates – VentureBeat

LinkedIn today pulled back the curtains on Qualified Applicant (QA), an AI system that learns from job candidate interactions the kinds of skills and experience a hirer prefers. It's the model the Microsoft-owned platform uses to help over 690 million users in 200 countries find jobs for which they have the best chances of hearing back, and it aims to reduce the likelihood that recruiters overlook applicants by highlighting those deemed a fit.

Creating a system that can contend with the transient nature of job posts was no walk in the park, according to LinkedIn. It had to work at scale (QA has billions of coefficients), and it had to be effective for as many job seekers and hirers as possible. Formally, QA tries to predict the probability of a positive recruiter action conditional on a given member applying for a specific role. What constitutes a positive recruiter action depends on the context: it can include viewing an applicant's profile, messaging them, inviting them to an interview, or sending them a job offer.

The single global QA model is individually tailored to members and roles, with per-member and per-job models trained on data unique to those members and jobs. Each of the many models is independent within a single training iteration, making them parallelizable and easier to serve at scale. While the global model is trained on all data, per-member models are trained using only that member's job applications. Per-job models, meanwhile, are trained on that job's applicants.
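The architecture described above, one global model whose score is adjusted by independent per-member and per-job components, suggests an additive mixed-model scoring scheme. A rough sketch, with invented feature names and weights (these are not LinkedIn's actual features, coefficients, or API), might look like this:

```python
import math

def predict_positive_action(features, global_w, member_w, job_w):
    """Score an (applicant, job) pair with additive model components.

    The final logit is the sum of the global model's contribution and
    the personalized per-member and per-job contributions; a sigmoid
    maps it to the probability of a positive recruiter action.
    """
    def dot(weights):
        return sum(weights.get(f, 0.0) * v for f, v in features.items())
    logit = dot(global_w) + dot(member_w) + dot(job_w)
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical features for one application:
features = {"skill_match": 0.9, "years_experience": 0.5}
global_w = {"skill_match": 1.2, "years_experience": 0.4}
member_w = {"skill_match": 0.3}        # this member's recruiters value skills
job_w    = {"years_experience": -0.1}  # this job cares less about tenure

p = predict_positive_action(features, global_w, member_w, job_w)
print(round(p, 3))  # → 0.818
```

Because the three components only sum in the logit, each set of weights can be trained and refreshed independently, which matches the article's point about the models being independent within a training iteration and easy to serve at scale.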

The global QA model is retrained once every few weeks, but the personalized models must be refreshed regularly to combat degradation. (LinkedIn says the per-member models' performance advantage over the baseline halves after three weeks.) Training labels are generated every day from events like hirer engagement with new candidates; an approximate label collection pipeline heuristically infers negatives and uses explicit positive and negative feedback as soon as it becomes available. For example, if a recruiter responds to other applications submitted later, the pipeline might infer a negative label for an application with no engagement after 14 days.
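The label heuristic described above can be sketched in a few lines; the data shapes, field names, and the exact semantics of the 14-day rule here are assumptions for illustration, not LinkedIn's pipeline:

```python
from datetime import datetime, timedelta

def infer_label(app, job_apps, engaged_ids, now, window_days=14):
    """Heuristically label one application to a job for model training.

    `job_apps` is every application to the same job; `engaged_ids` is the
    set of application ids the recruiter has engaged with. Returns 1
    (positive), 0 (inferred negative), or None (outcome still unknown).
    """
    if app["id"] in engaged_ids:
        return 1
    if now - app["submitted_at"] >= timedelta(days=window_days):
        return 0  # no engagement within the waiting window
    # Early negative: the recruiter already responded to a later application.
    skipped = any(
        other["id"] in engaged_ids and other["submitted_at"] > app["submitted_at"]
        for other in job_apps
    )
    return 0 if skipped else None

now = datetime(2020, 7, 1)
apps = [
    {"id": "a", "submitted_at": datetime(2020, 6, 25)},
    {"id": "b", "submitted_at": datetime(2020, 6, 28)},
]
print(infer_label(apps[0], apps, {"b"}, now))  # 0: a later applicant was engaged first
print(infer_label(apps[1], apps, {"b"}, now))  # 1: explicit positive engagement
```

Returning None for still-ambiguous applications matters: emitting a premature negative would teach the model that a merely slow recruiter disliked the candidate.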

It takes up to a day to generate labels and retrain the personalized QA model components, which are only deployed if they pass certain automated quality checks. In the future, LinkedIn hopes to reduce the lag time to minutes with a near-real-time data collection and training framework built atop stream processing technologies like Apache Samza and Apache Kafka.

Across the LinkedIn business lines where QA has been deployed (Job Seekers, Premium, and Recruiter), the company says it has enabled new experiences. On the seeker side, QA highlights search results if a member's profile is a good match for the job. For Premium members, it showcases opportunities for which members are competitive with other job applicants. And hirers using LinkedIn Recruiter benefit from a smarter ranking of applicants, as well as notifications for members with very high match scores.

LinkedIn says the personalized models delivered double-digit gains in hirer interaction rates and click-through rate (CTR) for recruiter notifications compared with the systems they replaced, as well as a site-wide lift in confirmed hires and premium job seeker CTR. "Our analysis demonstrates that the majority of job applicants apply to at least 5 jobs, while the majority of job postings receive at least 10 applicants. This proves to result in enough data to train personalization models," LinkedIn wrote in a blog post. "Our vision is to create economic opportunity for every member of the global workforce. Key to achieving this is making the marketplace between job seekers and hirers more efficient. Active job seekers apply for many jobs, and hear back from only a few."

LinkedIn's use of AI is pervasive. In October 2019, the Microsoft-owned platform revealed a model that generates text descriptions for images uploaded to LinkedIn, achieved using Microsoft's Cognitive Services platform and a unique LinkedIn-derived data set. LinkedIn's Recommended Candidates feature learns the hiring criteria for a given role and automatically surfaces relevant candidates in a dedicated tab, and its AI-driven search engine employs data like the kinds of things people post on their profiles and the searches that candidates perform to produce predictions for best-fit jobs and job seekers. Moreover, LinkedIn's AI-driven moderation tool automatically spots and removes inappropriate user accounts.

Go here to read the rest:

LinkedIn details AI tool that better matches jobs to candidates - VentureBeat

Vendors rush to call everything AI even if it isn’t, or doesn’t help – The Register

Many enterprise software vendors are focused on the goal of simply building and marketing an AI-based product rather than identifying use cases and the business value to customers.

So says Gartner analyst Jim Hare in a July 6th piece of research titled How Enterprise Software Providers Should (and Should Not) Exploit the AI Disruption.

"Nearly every technology provider is now claiming to be an AI company," Hare writes, having counted more than 1,000 vendors who claim to either sell AI or bake it into their products. "This ultrahype of the AI label has led to a hysteria of 'rebranding' from companies desperate to keep up. Similar to the go-go days of the late 1990s, when every enterprise was an 'ebusiness' company, many vendors are entering the AI market by simply adding 'AI' to their sales and marketing materials."

Similar to "greenwashing," in which companies exaggerate the environmental friendliness of their operational practices for business benefit, many technology vendors are now "AI washing" by applying the AI label too indiscriminately.

That's not helpful, Hare argues, because AI-washing often contains nothing more than empty promises.

"Some vendors are promoting AI brands as if they are superheroes (such as Einstein, Holmes, Leonardo and Watson) that can save the world," he says. "While it is creative brand marketing, it misdirects buyers and increases confusion as to what is real and what is just marketing."

He also says plenty of what is marketed as AI isn't really AI at all.

Ouch. And double ouch for VendorLand because Hare says users are already seeing through the hype and just not buying AI while they wait for the hype to die down. He also warns that those who do buy AI now are at risk of becoming disillusioned by products that over-promise, under-deliver and leave buyers wary of buying any more AI any time soon.

Hare therefore urges marketers to tone it down, drop the term AI from their web pages and hop off the hype-train if they want to be differentiated or have a serious discussion about how their wares use machine learning.

He remains a believer in AI, however, as his document predicts it will be pervasive by 2020 and that it can work well when used to improve the performance of existing systems, rather than as a big-bang upgrade.

But he also warns that even when taking that path of least resistance, organisations will struggle to adopt AI because few have the skilled people to sift through the masses of supposedly artificially intelligent products on offer, never mind keep them running once installed.

Excerpt from:

Vendors rush to call everything AI even if it isn't, or doesn't help - The Register

Quora Question: Which Company is Leading the Field in AI Research? – Newsweek

Quora Questions are part of a partnership between Newsweek and Quora, through which we'll be posting relevant and interesting answers from Quora contributors throughout the week. Read more about the partnership here.

Answer from Eric Jang, Research engineer at Google Brain:

Who is leading in AI research among big players like IBM, Google, Facebook, Apple and Microsoft? First, my response contains some bias, because I work at Google Brain, and I really like it there. My opinions are my own, and I do not speak for the rest of my colleagues or Alphabet as a whole.

I rank the leaders in AI research among IBM, Google, Facebook, Apple, Baidu and Microsoft as follows:

I would say Deepmind is probably #1 right now, in terms of AI research.

Their publications are highly respected within the research community, and span a myriad of topics such as deep reinforcement learning, Bayesian neural nets, robotics, transfer learning and others. Being London-based, they recruit heavily from Oxford and Cambridge, which are great ML feeder programs in Europe. They hire an intellectually diverse team to focus on general AI research, including traditional software engineers to build infrastructure and tooling, UX designers to help make research tools, and even ecologists (Drew Purves) to research far-field ideas like the relationship between ecology and intelligence.

They are second to none when it comes to PR and capturing the imagination of the public at large, such as with DQN-Atari and the history-making AlphaGo. Whenever a Deepmind paper drops, it shoots up to the top of Reddit's Machine Learning page and often Hacker News, which is a testament to how well-respected they are within the tech community.

Before you roll your eyes at me putting two Alphabet companies at the top of this list, I'll temper this by also ranking Facebook and OpenAI on equal terms at #2. Scroll down if you don't want to hear me gush about Google Brain.

With all due respect to Yann LeCun (he has a pretty good answer), I think he is mistaken about Google Brain's prominence in the research community.

"But much of it is focused on applications and product development rather than long-term AI research."

This is categorically false, to the max.

TensorFlow is the Brain team's primary external product, but the subteam that builds it is just one of many Brain subteams, and is to my knowledge the only one that builds an externally-facing product. When Brain first started, the first research projects were indeed engineering-heavy, but today, Brain has many employees that focus on long-term AI research in every AI subfield imaginable, similar to FAIR and Deepmind.

FAIR has 16 accepted publications to the ICLR 2017 conference track (announcement by Yann: Yann LeCun - FAIR has co-authors on 16 papers accepted at...), with 3 selected for orals (i.e. very distinguished publications).

Google Brain actually slightly edged out FB this year at ICLR 2017, with 20 accepted papers and four selected for orals. I'm excited that the Google Brain team will have a decent presence at ICLR 2017.

This doesn't count publications from Deepmind or other teams doing research within Google (Search, VR, Photos). Comparing the number of accepted papers is hardly a good metric, but I want to dispel any insinuations by Yann that Brain is not a legitimate place to do deep learning research.

Google Brain is also the industry research org with the most collaborative flexibility. I don't think any other research institution in the world, industrial or otherwise, has ongoing collaborations with Berkeley, Stanford, CMU, OpenAI, Deepmind, Google X and a myriad of product teams within Google.

I believe Brain will be regarded as a top-tier institution in the near future. I had offers from both Brain and Deepmind, and chose the former because I felt that Brain gave me more flexibility to design my own research projects, collaborate more closely with internal Google teams, and join some really interesting robotics initiatives that I can't disclose yet.


FAIR's papers are good, and my impression is that a big focus for them is language-domain problems like question answering, dynamic memory, Turing-test-type stuff. Occasionally there are some statistical-physics-meets-deep-learning papers. Obviously they do computer vision type work as well. I wish I could say more, but I don't know much about FAIR beyond that their reputation is very good.

They almost lost the deep learning framework wars with the widespread adoption of TensorFlow, but we'll see if PyTorch is able to successfully capture back market share.

One weakness of FAIR, in my opinion, is that it's very difficult to have a research role at FAIR without a PhD. A FAIR recruiter told me this last year. Indeed, PhDs tend to be smarter, but I don't think having a PhD is necessary to bring fresh perspectives and make great contributions to science.

OpenAI has an all-star list of employees: Ilya Sutskever (all-around deep learning master), John Schulman (inventor of TRPO, master of policy gradients), Pieter Abbeel (robot sent from the future to crank out a river of robotics research papers), Andrej Karpathy (Char-RNN, CNNs), Durk Kingma (co-inventor of VAEs), Ian Goodfellow (inventor of GANs), to name a few.

Despite being a small group of around 50 people (so I guess not a Big Player by headcount or financial resources), they also have a top-notch engineering team and publish top-notch, really thoughtful research tools like Gym and Universe. They're adding a lot of value to the broader research community by providing software that was once locked up inside big tech companies. This has added a lot of pressure on other groups to start open-sourcing their code and tools as well.

I almost ranked them as #1, on par with Deepmind in terms of top research talent, but they haven't really been around long enough for me to confidently assert this. They also haven't pulled off an achievement comparable to AlphaGo yet, though I can't overstate how important Gym/Universe are to the research community.

As a small nonprofit research group building all their infrastructure from scratch, they don't have nearly the GPU resources, robots, or software infrastructure of the big tech companies. Having lots of compute makes a big difference in research ability and even in the ideas one is able to come up with.

Startups are hard, and we'll see whether they are able to continue attracting top talent in the coming years.

Baidu SVAIL and the Baidu Institute of Deep Learning are excellent places to do research, and they are working on a lot of promising technologies like home assistants, aids for the blind, and self-driving cars.

Baidu does have some reputation issues, such as recent scandals with violating ImageNet competition rules, low-quality search results leading to a Chinese student dying of cancer, and being stereotyped by Americans as a somewhat-sketchy Chinese copycat tech company complicit in authoritarian censorship.

They are definitely the strongest player in AI in China though.

Before the deep learning revolution, Microsoft Research used to be the most prestigious place to go. They hire very experienced faculty, which might explain why they sort of missed out on deep learning (the revolution in deep learning has largely been driven by PhD students).

Unfortunately, almost all deep learning research is done on Linux platforms these days, and their CNTK deep learning framework hasn't gotten as much attention as TensorFlow, Torch, Chainer, etc.

Apple is really struggling to hire deep learning talent, as researchers tend to want to publish and do research, which goes against Apple's culture as a product company. This typically doesn't attract those who want to solve general AI or have their work published and acknowledged by the research community. I think Apple's design roots have a lot of parallels to research, especially when it comes to audacious creativity, but the constraints of shipping an insanely great product can be a hindrance to long-term basic science.

I know a former IBM employee who worked on Watson and describes IBM's cognitive computing efforts as a total disaster, driven by management that has no idea what ML can or cannot do but sells the buzzword anyway. Watson uses deep learning for image understanding, but as I understand it the rest of the information retrieval system doesn't really leverage modern advances in deep learning. Basically, there is a huge secondary market for startups to capture applied ML opportunities whenever IBM fumbles and drops the ball.

No offense to IBM researchers; you're far better scientists than I ever will be. My gripe is that the corporate culture at IBM is not conducive to leading AI research.

To be honest, all the above companies (maybe with the exception of IBM) are great places to do deep learning research, and given open source software and how prolific the entire field is nowadays, I don't think any one tech firm leads AI research by a substantial margin.

There are some places, like Salesforce/MetaMind and Amazon, that I hear are quite good but don't know enough about to rank.

My advice for a prospective deep learning researcher is to find a team/project that you're interested in, ignore what others say regarding reputation, and focus on doing your best work so that your organization becomes regarded as a leader in AI research.

Who is leading in AI research among big players like IBM, Google, Facebook, Apple, and Microsoft? originally appeared on Quora, the place to gain and share knowledge, empowering people to learn from others and better understand the world. You can follow Quora on Twitter, Facebook, and Google+.


Overcoming Racial Bias In AI Systems And Startlingly Even In AI Self-Driving Cars – Forbes

AI systems can have embedded biases, including in AI self-driving cars.

The news has been replete with stories about AI systems that have been regrettably and dreadfully exhibiting various adverse biases including racial bias, gender bias, age discrimination bias, and other lamentable prejudices.

How is this happening?

Initially, some pointed fingers at the AI developers that craft AI systems.

It was thought that their own personal biases were being carried over into the programming and the AI code that is being formulated. As such, a call for greater diversity in the AI software development field was launched and efforts to achieve such aims are underway.

It turns out, though, that the personal perspectives of the AI programmers aren't necessarily the dominant factor involved, and many began to realize that the algorithms being utilized were a significant element.

There is yet another twist.

Many of the AI algorithms used for Machine Learning (ML) and Deep Learning (DL) are essentially doing pattern matching, and thus if the data being used to train or prepare an AI system contains numerous examples with inherent biases in them, there's a solid chance those will be carried over into the AI system and how it ultimately performs.

In that sense, it's not that the algorithms are intentionally generating biases (they are not sentient); instead, it is the subtle picking up of mathematically hidden biases via the data being fed into the development of an AI system that's based on relatively rudimentary pattern matching.

Imagine a computer system that had no understanding of the world, and you repeatedly showed it a series of pictures of people standing and looking at the camera. Pretend that the pictures were labeled with the occupation each person held.

We'll use the pictures as the data that will be fed into the ML/DL.

The algorithm that's doing pattern matching might computationally begin to calculate that if someone is tall, then they are a basketball player.

Of course, being tall doesn't always mean that a person is a basketball player, and thus already the pattern matching is creating potential issues as to what it will do when presented with new pictures and asked to classify what the person does for a living.

Realize too that there are two sides to that coin.

A new picture of a tall person gets a suggested classification of being a basketball player. In addition, a new picture of a person who is not tall will be unlikely to get a suggested classification of being a basketball player (the classification approach is thus both over-inclusive for the tall and exclusionary for everyone else).
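A minimal sketch of the pattern matching described here, using a toy one-feature "learner" (the data, the single height feature, and the threshold rule are all invented for illustration, not any real system): it picks the height cutoff that best separates the labeled training examples, and from then on treats height as a proxy for occupation.

```python
def fit_threshold(samples):
    """Choose the height cutoff that best splits the training labels."""
    best_cut, best_correct = None, -1
    heights = sorted(h for h, _ in samples)
    for cut in ((a + b) / 2 for a, b in zip(heights, heights[1:])):
        correct = sum((h > cut) == (label == "basketball")
                      for h, label in samples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

# Hypothetical labeled "pictures", reduced to (height in cm, occupation).
train = [(201, "basketball"), (198, "basketball"), (205, "basketball"),
         (170, "teacher"), (165, "teacher"), (172, "teacher")]

cut = fit_threshold(train)

def classify(height):
    return "basketball" if height > cut else "teacher"

# Both sides of the coin: a tall stranger is labeled a basketball player,
# while a shorter one is excluded from that label, regardless of reality.
print(classify(203), classify(169))  # -> basketball teacher
```

The learner never "decides" to use height; it simply finds whatever split best fits the data it was given, which is exactly how an incidental attribute can become the deciding factor.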

Instead of using height, the pattern matching might calculate that if someone is wearing a sports jersey, they are a basketball player.

Once again, this presents issues since the wearing of a sports jersey is not a guarantee of being a basketball player, nor necessarily that someone is a sports person at all.

Among the many factors that might be explored, it could be that the pattern matching opts to consider the race of the people in the pictures and uses that as a factor in finding patterns.

Depending upon how many pictures contain people of various races, the pattern matching might calculate that a person in occupation X is associated with being a race of type R.

As a result, rather than using height or sports jerseys or any other such factors, the algorithm landed on race as a key element and henceforth will use that factor when trying to classify newly presented pictures.

If you then put this AI system into use, and you have it in an app that lets you take a picture of yourself and ask the app what kind of occupation you are most suited for, consider the kind of jobs it might suggest, doing so in a manner that would be racially biased.

Scarier still is that no one might realize how the AI system is making its recommendations and the race factor is buried within the mathematical calculations.

Your first reaction to this might be that the algorithm is badly devised if it has opted to use race as a key factor.

The thing is that many of the ML/DL algorithms are merely full-throttle examining all available facets of whatever the data contains, and therefore it's not as though race was programmed or pre-established as a factor.

In theory, the AI developers and data scientists that are using these algorithms should be analyzing the results of the pattern matching to try to ascertain in what ways the patterns are being solidified.

Unfortunately, it gets complicated since the complexity of the pattern matching is increasing, meaning that the patterns are not so clearly laid out that you could readily realize that race or gender or other such properties were mathematically at the root of what the AI system has opted upon.

There is a looming qualm that these complex algorithms that are provided with tons of data are not able to explain or illuminate what factors were discovered and are being relied upon. A growing call for XAI, explainable AI, continues to mount as more and more AI systems are being fielded and underlie our everyday lives.
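One crude form of the post-hoc analysis described above can be sketched as a sensitivity probe (everything here is a hypothetical stand-in, not a real XAI tool): shuffle each input factor in turn and measure how often the model's decision flips. A factor the model secretly leans on shows up as the one with the largest sensitivity.

```python
import random

random.seed(0)

def sensitivity(model, rows, factor_index, trials=200):
    """Fraction of decisions that flip when one factor is randomly swapped."""
    flips = 0
    for _ in range(trials):
        row = list(random.choice(rows))
        before = model(row)
        # Replace just this factor with a value drawn from another row.
        row[factor_index] = random.choice(rows)[factor_index]
        flips += model(row) != before
    return flips / trials

# Stand-in "trained model" that quietly keys on factor 2 alone.
model = lambda row: row[2] > 0.5
rows = [[random.random() for _ in range(3)] for _ in range(50)]

scores = [sensitivity(model, rows, i) for i in range(3)]
hidden_factor = scores.index(max(scores))  # exposes the reliance on factor 2
```

Probes like this only reveal which inputs matter, not why; with correlated factors the hidden attribute can still hide behind its proxies, which is the harder problem XAI research is chasing.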

Heres an interesting question: Could AI-based true self-driving cars become racially biased (and/or biased in other factors such as age, gender, etc.)?

Sure, it could happen.

This is a matter that ought to be on the list of things that the automakers and self-driving tech firms should be seeking to avert.

Lets unpack the matter.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own and there isn't any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different than driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public is forewarned about a disturbing aspect that's been arising lately, namely that in spite of those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Biases

For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Consider one important act of driving, namely the need to gauge what pedestrians are going to do.

When you drive your car around your neighborhood or downtown area, the odds are that you are looking at pedestrians that are standing at a corner and waiting to enter into the crosswalk, particularly when the crosswalk is not controlled by a traffic signal.

You carefully give a look at those pedestrians because you know from experience that sometimes a pedestrian will go into a crosswalk even when it is not safe for them to cross.

According to the NHTSA (National Highway Traffic Safety Administration), approximately 60% of pedestrian fatalities occur at crosswalks.

Consider these two crucial questions:

By what means do you decide whether a pedestrian is going to cross?

And, by what means do you decide to come to a stop and let a pedestrian cross?

There have been various studies that have examined these questions, and some of the research suggests that at times there are human drivers that will apparently make their decisions based on race.

In one such study by the NITC (National Institute for Transportation and Communities), an experiment was undertaken and revealed that black pedestrians were passed by twice as many cars and experienced wait times that were 32% longer than white pedestrians.

The researchers concluded that the results support the hypothesis that minority pedestrians experience discriminatory treatment by drivers.

Analysts and statisticians argue that you should be cautious in interpreting and making broad statements based on such studies, since there are a number of added facets that come into play.

There is also the aspect of explicit bias versus implicit bias that enters into the matter.

Some researchers believe that a driver might not realize they hold such biases, being explicitly unaware of them, and yet might implicitly act on them: in the split-second decision of whether to keep driving through a crosswalk or stop to let the pedestrian proceed, there is a reactive and nearly subconscious element involved.

Put aside for the moment the human driver aspects and consider what this might mean when trying to train an AI system.

If you collected lots of data about instances of crosswalk crossings, which included numerous examples of drivers that choose to stop for a pedestrian to cross and those that don't, and you fed this data into an ML/DL, what might the algorithm land on as a pattern?

Based on the data presented, the ML/DL might computationally calculate that there are occasions when human drivers do and do not stop, and within that, there might be a statistical calculation potentially based on using race as a factor.

In essence, similar to the earlier example about occupations, the AI system might mindlessly find a mathematical pattern that uses race.

Presumably, if human drivers are indeed using such a factor, the chances of the pattern matching doing the same are likely increased, though even if human drivers aren't doing so, it could still emerge as a factor in the ML/DL computations.
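The mechanism can be made concrete with a toy tally (the records below are invented solely for illustration, loosely mirroring the 70%-vs.-45% flavor of disparity the NITC study reports): if the training data itself encodes a disparity, a pattern matcher that simply imitates the observed stop rates per group will reproduce it.

```python
# Hypothetical crosswalk observations: which group was waiting, and
# whether the human driver stopped.
observations = (
    [{"group": "A", "driver_stopped": True}] * 70
    + [{"group": "A", "driver_stopped": False}] * 30
    + [{"group": "B", "driver_stopped": True}] * 45
    + [{"group": "B", "driver_stopped": False}] * 55
)

def learned_stop_rate(data, group):
    """Observed frequency of drivers stopping for this group."""
    rows = [o for o in data if o["group"] == group]
    return sum(o["driver_stopped"] for o in rows) / len(rows)

# A policy that imitates the majority behavior per group inherits the bias:
policy = {g: learned_stop_rate(observations, g) >= 0.5 for g in ("A", "B")}
print(policy)  # -> {'A': True, 'B': False}
```

No one programmed the disparity; the "learner" merely made the data's majority behavior into a rule, which is precisely how a fed-in bias becomes an acted-on bias.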

Thus, the AI systems that drive self-driving cars can incorporate biases in a myriad of ways, doing so as a result of being fed lots of data and trying to mathematically figure out what patterns seem to exist.

Figuring out that the AI system has come to that computational juncture is problematic.

If the ML/DL itself is essentially inscrutable, you have little chance of ferreting out the bias.

Another approach would be to do testing to try to discern whether biases have crept into the AI system, yet such testing is bound to be voluminous and might still not reveal the biases, especially if they are subtle and assimilated into other correlated factors.

Its a conundrum.

Dealing With The Concerns

Some would argue that the AI developers ought to forego using data and instead programmatically develop the code to detect pedestrians and decide whether to accede to their crossing.

Or, maybe just always come to a stop at a crosswalk for all pedestrians, thus presumably eliminating any chance of an inherent bias.

Well, theres no free lunch in any of this.

Yes, directly programming the pedestrian detection and choice of crossing is indeed what many of the automakers and self-driving tech firms are doing, though, again, this does not guarantee that some form of bias won't be in the code.

Furthermore, the benefit of using ML/DL is that the algorithms are pretty much already available, and you dont need to write something from scratch. Instead, you pull together the data and feed it into the ML/DL. This is generally faster than the coding-from-scratch approach and might be more proficient and exceed what a programmer could otherwise write on their own.

In terms of the always-coming-to-a-stop approach, some automakers and self-driving tech firms are using this as a rule of thumb, though you can imagine that it tends to make other human drivers upset and angered at self-driving cars (if you've ever been behind a timid driver that always stops at crosswalks, it's a good bet you got steamed at that driver), and might lead to an increase in fender benders as driverless cars keep abruptly coming to a stop.

Widening the perspective on AI and self-driving cars, keep in mind that the pedestrian at a crosswalk is merely one such example to consider.

Another commonly voiced concern involves how self-driving cars are likely going to choose their route to wherever a human passenger asks to go.

A passenger might request that the AI take them to the other side of town.

Suppose the AI system opts to take a route that avoids a certain part of the town, and then over and over again uses this same route. Gradually, the ML/DL might become computationally stuck-in-a-rut and always take that same path.

This could mean that parts of a town will never tend to see any self-driving cars roaming through their neighborhood.

Some worry that this could become a kind of bias or discriminatory practice by self-driving cars.

How could it happen?

Once again, the possibility of the data being fed into the AI system could be the primary culprit.

Enlarge the view even further and consider that all the self-driving cars in a fleet might be contributing their driving data to the cloud of the automaker or self-driving tech firm that is operating the fleet.

The hope is that by collecting this data from hundreds, or thousands, or eventually millions of driverless cars, it can be scanned and examined to presumably improve the driving practices of the self-driving cars.

Via the use of OTA (Over-The-Air) electronic communications, the data will be passed along up to the cloud, and whenever new updates or patches are needed in the self-driving cars they will be pushed down into the vehicles.

I've already forewarned that this has the potential for a tremendous kind of privacy intrusion, since you need to realize that a self-driving car is loaded with cameras, radar, LIDAR, ultrasonic, thermal, and other data-collecting devices, and is going to be unabashedly capturing whatever it sees or detects during a driving journey.

A driverless car that passes through your neighborhood and goes down your block will tend to record whatever is occurring within its detectable range.

There you are on your front lawn, playing ball with your kids, and the scene is collected onto video and later streamed up to the cloud.

Assuming that driverless cars are pretty much continuously cruising around to be available for those that need a ride, it could end up making it possible to knit together a picture of our daily efforts and activities.

In any case, could the ML/DL that computationally pattern matches on this vast set of data be vulnerable to landing on inherently biased elements and then opt to use those by downloading updates into the fleet of driverless cars?

Yes.

Conclusion

This description of a problem is one that somewhat predates the appearance of the problem.

There are so few self-driving cars on our roadways that there's no immediate way to know whether or not those driverless cars might already embody any kind of biases.

Until the number of self-driving cars gets large enough, we might not be cognizant of the potential problem of embedded and rather hidden computational biases.

Some people seem to falsely believe that AI systems have common sense and thus wont allow biases to enter into their thinking processes.

Nope, there is no such thing yet as robust common-sense reasoning for AI systems, at least not anywhere close to what humans can do in terms of employing common sense.

There are others that assume that AI will become sentient and presumably be able to discuss with us humans any biases it might have and then squelch those biases.

Sorry, do not hold your breath for the so-called singularity to arrive anytime soon.

For now, the focus needs to be on doing a better job at examining the data that is being used to train AI systems, along with doing a better job at analyzing what the ML/DL formulates, and also pursuing the possibility of XAI that might provide an added glimpse into what the AI system is doing.


Microsoft built a hardware platform for real-time AI – Engadget

It's considerably more flexible than many of its hard-coded rivals, too. It relies on a 'soft' dynamic neural network processing engine dropped into off-the-shelf FPGA chips where competitors often need their approach locked in from the outset. It can handle Microsoft's own AI framework (Cognitive Toolkit), but it can also work with Google's TensorFlow and other systems. You can build a machine learning system the way you like and expect it to run in real-time, instead of letting the hardware dictate your methods.

To no one's surprise, Microsoft plans to make Project Brainwave available through its own Azure cloud services (it's been big on advanced tech in Azure as of late) so that companies can make use of live AI. There's no guarantee it will receive wide adoption, but it's evident that Microsoft doesn't want to cede any ground to Google, Facebook and others that are making a big deal of internet-delivered AI. It's betting that companies will gladly flock to Azure if they know they have more control over how their AI runs.


No, AI and Big Data Are Not Going to Win the Next Great Power Competition – The Defense Post

Artificial Intelligence and Big Data, two buzzwords that are colloquially interchangeable but subtly nuanced, are not silver bullets poised to handily solve all of the US militarys problems.

Unpopular opinion: the US military and the defense industrial complex are currently giving up one heck of a run to the Chinese Communist Party. Note how I didn't say that we're losing yet.

This century saw a solid first quarter, with US domination that witnessed the rise and fall of a competing Soviet Union and the establishment of global American hegemony, both militarily and economically. We have enjoyed decades of unipolar dominance. When you're on top, it typically feels like you could never lose.

However, the rapidly shifting political landscape and the return to great power competition have America reeling. The Chinese Communist Party and its People's Liberation Army are mounting a comeback.

While we were worried about cracking terrorist networks of bearded men with AK-47s in caves, the Chinese were speeding towards advanced technologies, hypersonic weapons, and the very defenses required to put up a front against the predictable American military machine.

The Chinese have seized this strategic opportunity. While the West was distracted, Beijing sunk billions of dollars into anti-access and area denial capabilities (A2/AD), a defensive posture aimed at the American way of fighting. They have also amassed massive amounts of data required to weaponize and harness the benefits of Big Data.

Chinese tech companies and government-sponsored research initiatives have built massive data sets while we were preoccupied with Iraq and Afghanistan. These are precisely the requisite data sets necessary to train Machine Learning algorithms and AI neural networks.

While we were building social networks by word of mouth of terrorist cells, the Chinese were collecting intelligence and building advanced systems for moving data with little regard for civil liberties, privacy, or data protection. Not that I advocate for it, but it is amazing what you can do when you ignore ethics or societal norms.

In defense tech news, all I read about is AI solving the joint, all-domain command and control problem, or Big Data providing a potential solution for some multi-domain capability gap. Perhaps we just desire an easy, one-size-fits-all solution in the form of a Big Data Band-Aid?

Indeed, it seems like our greatest adversary and the second greatest existential threat to the American way of life after nuclear war has already found the elixir of life in Big Data, so why cant we?

For starters, artificial intelligence is not the Terminator. It is not a killing machine that is easily weaponized, deployed, and employed to combat adversary capabilities. Even the most cutting edge artificial intelligence tools today are narrow in scope and limited in application.

While that will change eventually, algorithms are currently fantastic for vehicle routing, search engine optimization, facial recognition, asking Siri to set a timer, and other modern technical conveniences that we all carry around in our pockets. These are simple applications of AI. These are not weaponized, military applications that result in warheads on foreheads or power projection.

AI is great at parsing through billions of bits quickly and making sense of it all; creating information from data is its strong suit. This is not complicated. At their core, these algorithms rely on data configuration and formatting to sort and shape vast matrices full of different variables, perform some sort of reduction or matrix operation, and compare this reduction to a set of user-programmed decision criteria.

There is a difference between artificial intelligence and decision making. AI facilitates expedited data-to-decision throughput, but it does not make its own decisions in a vacuum.

Next, AI is a slightly more complicated version of the matrix math you were probably introduced to in algebra; this advanced linear algebra is, at bottom, applied statistics. By itself, it does not result in a major weapons system delivering effects against an adversary position. Just like space and cyber effects at your favorite large force exercise, you cannot simply sprinkle some Big Data on top and bring added military capability to bear to win the 21st-century fight.

On their own, AI and Big Data do not make the US military more competitive. They don't produce a capability to which the Chinese Communist Party has no solution. While they can expedite paths through a particular kill-web to deliver effects, they aren't a standalone military capability.

Another reason why AI and Big Data won't solve the A2/AD problem is the laws of physics. The US Indo-Pacific Command area of responsibility poses a geography problem for the US military: it requires ships and airplanes to travel farther just to get to the fight. Missiles can only go so far and so fast, and AI does not conjure a silver-bullet hypersonic solution.

A2/AD is also a logistics nightmare. Posturing supplies and equipment at disparate operating locations anywhere from the Philippines to Guam to Alaska to support even a limited regional conflict is a hard nut to crack, and AI does not by itself solve the agile, global logistics problem.

I might sound exceptionally contrarian in my simplification of AI and Big Data. In truth, I'm a huge proponent of defense applications for both. Our military's future hinges on them.

For the Department of Defense (DoD) to harness AI and to weaponize Big Data, the US military machine and industrial base need to integrate artificial intelligence into military systems.

The current generation of developmental systems needs to bake in advanced algorithms that take the human brain out of the loop as a data filter, while introducing fusion, machine/deep learning, and the power of computation to military applications.

The old way of filtering data to enable the military operator's tactical decision-making is irrelevant today. If the DoD can't shift, adapt, and embrace this change, it is doomed to fight the last war for the rest of this century.

The DoD, like many contemporary large organizations, will face many hurdles in weaponizing artificial intelligence capabilities.

One of the main challenges in this transition is simple integration, something the DoD already isn't good at. To abuse an overused example, the F-22 and F-35, arguably the world's most advanced tactical fighters, cannot communicate via their tactical data links. Though both were developed by Lockheed Martin, their data links use different waveform standards and are not interoperable. To oversimplify two prodigiously complex weapon systems: the F-22 is using AM radio and the F-35 FM.

This is partially the government's fault, but also the fault of the big defense contractors. Back to my data link example: in the 21st century, these capabilities are software-driven. Major defense contractors, however, are hardware companies.

During the early years of the American century, they mobilized and bent metal to create some of the last generations most capable machines. That said, they have a comparative advantage only in producing hardware, not in the software required to fight in the 21st century.

For the DoD to be successful in harnessing AI for the next conflict, it needs to foster relationships with organizations that operate in the tech space with a mastery of software development featuring AI applications.

The key is integrating AI and Big Data capabilities into military applications of all kinds, across the full spectrum of military operations. Global logistics, command and control, persistent ISR, and advanced weapons are all applications the tech space has barely touched.

The traditional bloated defense contractor is not resourced for this, nor does it have the right skill sets. Only seasoned developers outside the typical defense industrial base have the know-how to actually succeed with this integration.

AI alone won't compete with Chinese military capabilities. Applying the tenets of Big Data and weaponizing it to field advanced and lethal military capabilities is the future of great power competition.

The Chinese are catching up and may one day challenge American global military dominance, but applied AI capabilities and advanced data science just might be the key to preserving American hegemony and protecting American interests domestically and abroad.

Alex Hillman is an analyst and engineer in the defense space. A US Air Force Academy graduate, he holds master's degrees in operations research, systems engineering, and flight test engineering, and has previously served in various technical and leadership roles for the USAF. Alex is a graduate of the United States Air Force Test Pilot School and a former US Department of State Critical Language Scholar for Russian.

Disclaimer: The views and opinions expressed here are those of the author and do not necessarily reflect the editorial position of The Defense Post.

The Defense Post aims to publish a wide range of high-quality opinion and analysis from a diverse array of people. Do you want to send us yours? Click here to submit an op-ed.

Original post:

No, AI and Big Data Are Not Going to Win the Next Great Power Competition - The Defense Post

A Web Developer Is Developing an AI Filter to Block Unsolicited Penis Pictures – Interesting Engineering

Sick and tired of receiving unwanted pictures of random men's genitalia, this web developer has decided to do something about it and is asking for more men to send her images.

The reason she's asking for more photos is to create a filter that recognizes and then blurs these unsolicited images. Why haven't tech companies looked at implementing something like this already?


Web developer Kelsey Bressler received an unwanted image of a stranger's genitals in her Twitter direct messages this month.

Sick and tired of seeing and receiving such unfavorable photos, Bressler created an AI filter that supposedly blocks 95% of explicit images from landing in her inbox, reports the Guardian.

In order to fully test out her filter, Bressler created a Twitter account, @showYoDiq, for her 'SafeDM' project and asked men to send her direct messages of their private parts. All in the name of 'science.'

There were some tricky and entertaining moments during the project, as one man apparently sent an image of his appendage covered in bright purple glitter, confusing the AI, which up until then had only been 'taught' to recognize flesh-colored photos.
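The glitter incident illustrates a classic failure mode: a classifier keyed to color statistics fails the moment those colors change. Bressler's actual filter is a trained machine-learning model whose details are not public, but the failure can be demonstrated with an even simpler toy: a heuristic that flags an image when a large fraction of its pixels fall in a rough flesh-tone RGB range. The rule and thresholds below are illustrative assumptions, not her implementation.

```python
import numpy as np

def flesh_tone_fraction(image: np.ndarray) -> float:
    """image: H x W x 3 uint8 RGB array; returns fraction of flesh-tone pixels."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    # Very rough flesh-tone rule: red dominant, moderate green, lowish blue.
    mask = (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b) & ((r - b) > 15)
    return float(mask.mean())

def is_explicit(image: np.ndarray, threshold: float = 0.5) -> bool:
    return flesh_tone_fraction(image) >= threshold

# Two synthetic 8x8 test images: one flesh-colored, one bright purple "glitter."
flesh = np.full((8, 8, 3), (210, 150, 120), dtype=np.uint8)
glitter = np.full((8, 8, 3), (150, 40, 200), dtype=np.uint8)

print(is_explicit(flesh), is_explicit(glitter))
```

The purple image sails straight through, exactly the evasion the glittered photo achieved, which is why robust filters must learn shape and texture features rather than lean on color alone.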

It turns out around 53% of women between 18 and 29 years old have received unwanted images of men's packages, according to a 2017 Pew Research study on cyber harassment.

Bressler created her AI filter quickly and easily, which has posed the question as to why tech companies haven't done this themselves.

Some sites have made a mediocre attempt at best to solve the issue, though.

For example, Twitter has a setting that can block images that other users have selected as 'sensitive content.' However, it's not enough to properly block images landing into your inbox.

Furthermore, as sending nudes online is not deemed illegal, many big companies seem reluctant to put in any effort. And since companies that aren't publishers are not required to protect their online users, they bear no responsibility for content shared on their sites.

Slowly, some U.S. states and some countries are standing up against cyber harassment and non-consensual images. For instance, cyber flashing (sending nude images through Apple's AirDrop feature to devices within a roughly 10-meter radius) has been illegal in Scotland since 2010, with Singapore following suit this May.

In the U.S., 46 states do not allow the sharing of nonconsensual pornography. Just this September, Texas created an anti-lewd imagery law.

"Having this bill where it is now illegal to send an unsolicited lewd photo in the state of Texas is like a deterrent, like adding stop signs to the internet," said Bumble's Chief of Staff, Caroline Ellis Roche.

This new law imposes a fine of up to $500 for sending an unwanted nude photo.

The fight seems to be just starting, and it's about time, given how many people of all ages are now online and sharing images at the press of a button.

See more here:

A Web Developer Is Developing an AI Filter to Block Unsolicited Penis Pictures - Interesting Engineering

Can Warfighters Remain the Masters of AI? – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the second question (part b.) on the types of AI expertise and skill sets the national security workforce needs.

The Department of Defense is engaging in a dangerous experiment: It is investing heavily in artificial intelligence (AI) equipment and technologies while simultaneously underinvesting in preparing the workforce that will need to understand their implementation. In a previous article, Michael Horowitz and Lauren Kahn called this an AI literacy gap. America's AI workforce, in uniform or out, is not prepared to use this fast-advancing area of technology. Are we educating the masters of artificial intelligence, or are we training its servants?

The U.S. government as a whole, and by extension the military services in particular, are flush with AI mania. One could be forgiven for thinking that dominance in AI is today's preeminent military competition. It is certainly true that advances in technology, including AI, will be drivers of national success in the future. However, we are currently in the boost phase of excitement about AI. From our perspective as cutting-edge practitioners and educators in the field of statistics as applied to military problems, it is almost certain that the expectations for AI in the mid-term will not be completely met. This interplay between inflated expectations, technical realities, and eventual productive systems is reflected in the business world as well, and is described as part of the Gartner Hype Cycle.

Figure 1: Gartner Hype Cycle. Notably, AI technologies are on the upswing of hype. How will the Department of Defense position itself with a particular eye toward manpower to survive the inevitable crash?

The current hype about AI is high and is likely to be followed by a crash. This is not a statement about the technology or even the U.S. government so much as human nature. The more promising the technology, the harder the eventual crash will be, before entering the productive phase.

As an example of the disconnect between technologies and manpower, the Defense Department recently added data scientist to its job series descriptions, although a universally accepted definition of what a data scientist is remains elusive. Our working definition of a data scientist is someone who sits at the intersection of the disciplines of statistics and computer science. On the back side of the curve, the glacial pace of Defense Department budgetary programming means that current AI initiatives will be around for the long haul, and that there will need to be a cadre of individuals with the requisite education to see us through the hype cycle's trough of disillusionment.

At the same time, the Navy in particular is shedding its AI-competent manpower at an alarming rate. By AI-competent manpower we mean operationally experienced officers with the requisite statistical, computer programming, and data skills to bridge advanced computing research into combat-relevant, data-driven decisions.

We have observed several trends that support this assertion. The number of Navy officers directly involved in operations and eligible for command at sea (unrestricted line officers) enrolled in the Naval Postgraduate School's operations analysis curriculum (mathematics applied to military problems, focusing on statistics, optimization, and associated disciplines) has decreased dramatically in the past 10 years. For example, the last U.S. naval aviator to graduate with the operations analysis subspecialty did so in 2014. The Navy's assessment division (OPNAV N81), the sponsor for the operations analysis community, has also recognized this trend and directed the creation of a tailored 18-month program for unrestricted line officers, with the objective of getting more analytical talent into the fleet. Other Navy communities, such as information warfare, are only now recognizing the need for officers educated in the programming, statistical, and analytical skills needed to fully develop AI for naval warfare, and are beginning to send one or two officers annually to earn operations research degrees. We are personally aware of at least two cases where flag officers became directly involved in the detailing of officers with an operations research or systems analysis specialty. What is interesting about these cases is that these officers are considered unpromotable in their unrestricted line communities of origin; that is, these officers spent career time on in-depth education and are frequently penalized for it.

We write in the context of our roles as professionals, as well as retired naval officers and frequent commenters on defense policy. As such, it is our firm opinion that the Navy's future with artificial intelligence rests critically on the natural intelligence that enables and guides it.

First, It's About Perspective

The true challenges to AI lie in policy, not technology. What is the impact of AI, and what is the right historical parallel? Many organizations both in and out of government reason that AI is a big computery thing, so it should go with all of the other big computery things, which frequently means it gets categorized as subordinate to the IT department. Although IT infrastructure is a necessary component of artificial intelligence, we think that this categorization is a mistake. It is clear to us that in the coming era, the difference between warrior and computer guy may become blurred to the point of non-existence. An excellent historical example is that of Capt. Joe Rochefort, who was derisively considered at the time to be what we might now call a computer geek but who, in retrospect, was one of the architects of the victory at Midway and, by extension, the entire Pacific theater.

We think that a useful historical parallel to draw with the broad introduction of AI into the service is the introduction of nuclear power to the Navy some 65 years ago. It would have been an unthinkable folly to stop the education of nuclear-qualified engineers while introducing the USS Nautilus and USS Enterprise to the fleet. Yet this is, in so many words, the Navy's current strategy toward technical education for its officers at the dawn of naval AI.

Similarly, while there are many offices working on AI in the Navy, there most likely needs to be a single strong personality (like Hyman Rickover for the nuclear Navy and Wayne E. Meyer for Aegis) who will unify these efforts. This individual will need to understand the statistical science behind machine learning, have a realistic appreciation for its potential operational applications, and have the authority to cultivate the necessary manpower to develop and use those applications.

Next, It's Manpower

Echoing other writers in these pages: it may seem paradoxical that the most important component of building better thinking machines is better thinking humans. However, the writing is on the wall for both industry and government: The irreplaceable element of success is to have the right people in critical jobs. The competition for professionals in statistics and AI is tight and expected to become tighter. Simply put, the military will not be able to compete for existing talent on the open market. Nor can the open market provide people who understand the naval applications of the science. As with nuclear power, in order for the Navy to successfully harness AI, it needs sailors who are educated to become its masters, not trained as its servants.

There is a shortage of people working in the fields of applied mathematics, specifically AI, and nobody truly knows how the systems developed now will react when eventually deployed and tested in situ on board actual ships conducting actual operations. The ultimate judge of the Navy's success in AI will be the crews that man the ships on which it is installed. It is our observation that the technical depth of these crews is decreasing as time progresses. This is why it is critical that the services, particularly the Navy, grow their own and build a cadre of professionals with the requisite education and experience (and who happen to be deployable). Aviators, information warfare officers, submariners, and surface officers should be inspired to obtain the technical, professional, and tactical analytical skills to best apply future AI at sea. One cannot help recalling a historical analogy: learning how best to apply new radar technology during the nighttime Battle of Cape Esperance in October 1942. In this battle, Adm. Norman Scott and his battle staff were not in the ship with the best radar capabilities, which resulted in confusion as to enemy locations and friendly identification. Better knowledge of this new technology might have resulted in its more efficient employment.

What will inspire officers to gain the skills to serve as masters of AI and subsequently resist seduction by the private sector, remaining in the service instead? Organizational recognition of their value through promotion within their operational fields, and opportunities to perform at a higher level much faster than they would find in the outside market. This is, sadly, not the practice of naval personnel actions. Paradoxically, time spent away from warfare communities gaining advanced education in areas such as those needed to master AI is currently seen as dead time at best, and a career killer at worst. In the near future, the use of advanced algorithms to guide warfare and operational decisions will no longer be a subspecialty but rather an integral part of the warfighting mission of the Navy. Accordingly, moving away from the educational quota system, derived from subspecialty requirements, is a solid first step. In its place should be a Navy educational board to select due-course officers for specific educational programs that will shape the Navy's future, not meet the requirements of current staffs.

When the Navy introduced nuclear engineering, it established a nuclear engineering school to meet its manpower requirements. When the Navy introduced the Aegis combat system, it established a dedicated Aegis school to meet its manpower requirements. The difference between these historical examples and AI is that AI does not need the same physical safeguards as radioactive materials and high-power radars. The Navy currently has the ability to better prepare its AI workforce through multiple institutions and methods both military and civilian including the Naval Postgraduate School, civilian institutions, and fellowships. Programs exist in these institutions that provide the programming, mathematics, and computer science skills needed to gain a deep appreciation for AI technology. Better incentivizing and using the tools already in place will allow sailors to use AI science for warfighting advantages. Where possible, the Navy should partner with industry and outside academic institutions to augment military experience with the lessons being learned commercially, resulting in a technical education with an operational purpose.

AI technology is maturing, and the educational programs exist. The critical element is the sailors who are going to be its masters for integration and deployment. These challenges may be solved internally by policy, not externally with technology. It will ultimately be those policies that determine the success of the fleet.

Harrison Schramm is a non-resident senior fellow at the Center for Strategic and Budgetary Assessments. While on active duty, he was a helicopter pilot and operations research analyst. He enjoys professional accreditation from INFORMS, the American Statistical Association, and the Royal (U.K.) Statistical Society. He is the 2018 recipient of the Clayton Thomas Prize for contributions to the profession of operations research and the 2014 Richard H. Barchi Prize. As a pilot, Schramm was awarded the Naval Helicopter Association's Aircrew of the Year (2004).

Jeff Kline is a professor of practice in the Naval Postgraduate School Department of Operations Research. Kline supports applied analytical research in maritime operations and security, tactical analysis, risk assessment, and future force composition studies. He has served on the U.S. chief of naval operations' Fleet Design Advisory Board and several naval study board committees of the National Academies. His faculty awards include the Superior Civilian Service Medal, the 2019 J. Steinhardt Award for Lifetime Achievement in Military Operations Research, the 2011 Institute for Operations Research and the Management Sciences (INFORMS) Award for Teaching of OR Practice, the 2007 Hamming Award for interdisciplinary research, and the 2007 Wayne E. Meyer Award for Excellence in Systems Engineering Research.

Image: Naval Postgraduate School (Photo by Javier Chagoya)

Go here to read the rest:

Can Warfighters Remain the Masters of AI? - War on the Rocks

Microsoft shifts from ‘mobile first, cloud first’ to everything AI – CIO Dive

Dive Brief:

Microsoft is going all in on artificial intelligence, prioritizing the advanced technology in its fiscal 2017 earnings report, according to CNBC. In its year-end earnings statement, for the fiscal year ending June 30, Microsoft made AI a key part of its business vision in its effort to create "an intelligent edge infused with artificial intelligence."

As the way people and companies interact with technology changes, more advanced computing processes are necessary, Microsoft said. A new era of technology is emerging where "AI drives insights and acts on the user's behalf, and user experiences span devices with a user's available data and information."

Microsoft wants AI accessible across all devices, applications and infrastructure to help to act on the user's behalf to drive insight, according to the earnings report. Part of that will come from the company's use of Azure, which helps scale data intensive efforts without requiring devices to store data locally.

Microsoft is moving away from its "mobile first, cloud first" roots into an era where advanced and analytic heavy technology reigns. The move is in line with what the company has signaled for a while. It wants to create a technological core that drives insight without manual processes.

CEO Satya Nadella has already started promoting a new era for Microsoft. When he joined as CEO in 2014, he rolled out the company's "mobile first, cloud first" slogan. But at Microsoft Build this year, he introduced the new corporate mantra of "intelligent cloud."

The new phrase illustrates Microsoft's efforts to build artificial intelligence into apps and services, reflecting how much of technology is supported by robust troves of data. Microsoft wants to remain on the forefront of computing, for consumers and in the enterprise.

But the AI prioritization also holds a humanitarian angle for Nadella. Speaking in September, Nadella said Microsoft is pursuing AI for the "greater good" and wants to "democratize AI." With AI and support from advanced analytics, Nadella thinks people can build tools to solve the biggest problems society and the economy face.

Go here to see the original:

Microsoft shifts from 'mobile first, cloud first' to everything AI - CIO Dive