Ai | Poetry Foundation

Ai is a poet noted for her uncompromising poetic vision and bleak dramatic monologues, which give voice to marginalized, often poor and abused speakers. Though born Florence Anthony, she legally changed her name to Ai, which means "love" in Japanese. She has said that her given name reflects "a scandalous affair my mother had with a Japanese man she met at a streetcar stop," and that she has "no wish to be identified for all eternity with a man she never knew." Ai's awareness of her own mixed-race heritage (she self-identifies as Japanese, Choctaw-Chickasaw, Black, Irish, Southern Cheyenne, and Comanche) as well as her strong feminist bent shape her poetry, which is often brutal and direct in its subject matter. In the volumes of verse she published after her first collection, Cruelty (1973), Ai provoked both controversy and praise for her stark monologues and gruesome first-person accounts of non-normative behavior. Dubbed "all woman, all human" by confessional poet Anne Sexton, Ai has also been praised by the Times Literary Supplement for capturing "the cruelty of intimate relationships and the delights of perverse spontaneity, e.g. the joy a mother gets from beating her child." Alicia Ostriker countered Sexton's summation of Ai, writing: "All woman, all human; she is hardly that. She is more like a bad dream of Woody Allen's, or the inside story of some Swinburnean Dolorosa, or the vagina-dentata itself starting to talk. Woman, in Ai's embodiment, wants sex. She knows about death and can kill animals and people. She is hard as dirt. Her realities, very small ones, are so intolerable that we fashion female myths to express our fear of her. She, however, lives the hard life below our myths."

Ai explained her use of the dramatic monologue as stemming from an early realization that "first person voice was always the stronger voice to use when writing." Her poems depict individuals that Duane Ackerson, in Contemporary Women Poets, characterized as "people seeking transformation, a rough sort of salvation, through violent acts." The speakers in her poems are struggling individuals, usually women but occasionally men, isolated by poverty, by small-town life, or by life on a remote farm. Killing Floor (1978), the volume that followed Cruelty, includes a poem called "The Kid," spoken in the voice of a boy who has just murdered his family. Sin (1986) contains more complex dramatic monologues as Ai assumes actual personae, from Joe McCarthy to the Kennedy brothers. Ai's characters tend to speak in a flat demotic, stripped of nuance or emotion. Poet and critic Rachel Hadas has noted that although virtually all the poems present themselves as spoken by a particular character, Ai makes little attempt to capture "individual styles of diction [or] personal vocabularies." For Hadas, however, this makes the poems all the more striking, as the stripped-down diction conveys "an underlying, almost biblical indignation (not, at times, without compassion) at human misuses of power and the corrupting energies of various human appetites."

Fate (1991) and Greed (1993), like Sin before them, contain monologues that dramatize public figures. Readers confront the inner worlds of former F.B.I. director J. Edgar Hoover, missing-and-presumed-dead union leader Jimmy Hoffa, musician Elvis Presley, and actor James Dean as voices from beyond the grave who yet remain out of sync with social or ethical norms. Noting that Ai reinvents each of her subjects within her verse, Ackerson added that, through each monologue, "what these individuals say, returning after death, expresses more about the American psyche than about the real figures." Vice: New and Selected Poems (1999) contained work from Ai's previous five books as well as 18 new poems; it was awarded the National Book Award for Poetry. Ai's next book, Dread (2003), was likewise praised for its searing and honest treatment of, according to a Publishers Weekly reviewer, "violent or baroquely sexual life stories." In the New York Times Book Review, Vijay Seshadri wrote that Dread has "the characteristic moral strength that makes Ai a necessary poet." Aiming her poetic barbs directly at prejudices and societal ills of all types, Ai was outspoken on the subject of race, saying: "People whose concept of themselves is largely dependent on their racial identity and superiority feel threatened by a multiracial person. The insistence that one must align oneself with this or that race is basically racist. And the notion that without a racial identity a person can't have any identity perpetuates racism... I wish I could say that race isn't important. But it is. More than ever, it is a medium of exchange, the coin of the realm with which one buys one's share of jobs and social position. This is a fact which I have faced and must ultimately transcend. If this transcendence were less complex, less individual, it would lose its holiness."

In addition to the National Book Award, Ai's work received an American Book Award from the Before Columbus Foundation, for Sin, and the Lamont Poetry Selection of the Academy of American Poets, for Killing Floor. She received grants from the Guggenheim Foundation, the Bunting Fellowship Program at Radcliffe College, and the National Endowment for the Arts. She taught at Oklahoma State University. She died in 2010.


Computer vision: Why it's hard to compare AI and human perception – TechTalks

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

Human-level performance. Human-level accuracy. Those are terms you hear a lot from companies developing artificial intelligence systems, whether it's facial recognition, object detection, or question answering. And to their credit, recent years have seen many great products powered by AI algorithms, mostly thanks to advances in machine learning and deep learning.

But many of these comparisons take into account only the end result of testing the deep learning algorithms on limited data sets. This approach can create false expectations about AI systems and yield dangerous results when they are entrusted with critical tasks.

In a recent study, a group of researchers from various German organizations and universities highlighted the challenges of evaluating the performance of deep learning in processing visual data. In their paper, titled "The Notorious Difficulty of Comparing Human and Machine Perception," the researchers examine the problems in current methods that compare deep neural networks and the human vision system.

In their research, the scientists conducted a series of experiments that dig beneath the surface of deep learning results and compare them to the workings of the human vision system. Their findings are a reminder that we must be cautious when comparing AI to humans, even when it shows equal or better performance on the same task.

In the seemingly endless quest to reconstruct human perception, the field that has become known as computer vision, deep learning has so far yielded the most favorable results. Convolutional neural networks (CNNs), an architecture often used in computer vision deep learning algorithms, are accomplishing tasks that were extremely difficult with traditional software.

However, comparing neural networks to human perception remains a challenge, partly because we still have a lot to learn about the human vision system and the human brain in general. The complex workings of deep learning systems also compound the problem: deep neural networks work in very complicated ways that often confound their own creators.

In recent years, a body of research has tried to evaluate the inner workings of neural networks and their robustness in handling real-world situations. "Despite a multitude of studies, comparing human and machine perception is not straightforward," the German researchers write in their paper.

In their study, the scientists focused on three areas to gauge how humans and deep neural networks process visual data.

The first test involves contour detection. In this experiment, both human and AI participants must say whether an image contains a closed contour or not. The goal is to understand whether deep learning algorithms can learn the concept of closed and open shapes, and whether they can detect them under various conditions.

"For humans, a closed contour flanked by many open contours perceptually stands out. In contrast, detecting closed contours might be difficult for DNNs as they would presumably require a long-range contour integration," the researchers write.

For the experiment, the scientists used ResNet-50, a popular convolutional neural network developed by AI researchers at Microsoft. They used transfer learning to fine-tune the AI model on 14,000 images of closed and open contours.
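The paper's exact training code isn't reproduced here, but the recipe described is standard transfer learning. A minimal sketch in PyTorch, assuming an ImageNet-pretrained backbone and a two-class head; all hyperparameters are illustrative, not the researchers' actual settings:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-50 and swap its final layer for a
# binary head (closed vs. open contour). Freezing choices and learning
# rate are illustrative assumptions, not the paper's setup.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False              # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of contour images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```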

They then tested the AI on various examples that resembled the training data and on examples that gradually departed from it. The initial findings showed that a well-trained neural network seems to grasp the idea of a closed contour. Even though the network was trained on a dataset that only contained shapes with straight lines, it also performed well on curved lines.

"These results suggest that our model did, in fact, learn the concept of open and closed contours and that it performs a similar contour integration-like process as humans," the scientists write.

However, further investigation showed that other changes that didn't affect human performance degraded the accuracy of the AI model's results. For instance, changing the color and width of the lines caused a sudden drop in the deep learning model's accuracy. The model also seemed to struggle with detecting shapes when they became larger than a certain size.

The neural network was also very sensitive to adversarial perturbations, carefully crafted changes that are imperceptible to the human eye but cause disruption in the behavior of machine learning systems.
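The article doesn't specify which attack the researchers used; the fast gradient sign method (FGSM) is one common way such imperceptible perturbations are crafted. A minimal sketch (the epsilon value is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Fast Gradient Sign Method: nudge every pixel slightly in the
    direction that increases the model's loss. The per-pixel change
    (epsilon) is tiny, yet it can flip the model's prediction."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + epsilon * images.grad.sign()).detach()
```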

To further investigate the decision-making process of the AI, the scientists used a Bag-of-Features network, a technique that tries to localize the bits of data that contribute to the decision of a deep learning model. The analysis showed that "there do exist local features such as an endpoint in conjunction with a short edge that can often give away the correct class label," the researchers found.

The second experiment tested the abilities of deep learning algorithms in abstract visual reasoning. The data used for the experiment is based on the Synthetic Visual Reasoning Test (SVRT), in which the AI must answer questions that require understanding of the relations between different shapes in the picture. The tests include same-different tasks (e.g., are two shapes in a picture identical?) and spatial tasks (e.g., is the smaller shape in the center of the larger shape?). A human observer would easily solve these problems.

For their experiment, the researchers used ResNet-50 and tested how it performed with training datasets of different sizes. The results show that a pretrained model fine-tuned on 28,000 samples performs well on both same-different and spatial tasks. (Previous experiments trained a very small neural network on a million images.) The performance of the AI dropped as the researchers reduced the number of training examples, but the degradation was faster on same-different tasks.

"Same-different tasks require more training samples than spatial reasoning tasks," the researchers write, adding that "this cannot be taken as evidence for systematic differences between feed-forward neural networks and the human visual system."

The researchers note that the human visual system comes naturally pre-trained on vast amounts of abstract visual reasoning. This makes it unfair to test the deep learning model in a low-data regime, and it is almost impossible to draw solid conclusions about differences in the internal information processing of humans and AI.

"It might very well be that the human visual system trained from scratch on the two types of tasks would exhibit a similar difference in sample efficiency as a ResNet-50," the researchers write.

The recognition gap is one of the most interesting tests of visual systems. Consider the following image. Can you tell what it is without scrolling further down?

Below is the zoomed-out view of the same image. There's no question that it's a cat. If I showed you a close-up of another part of the image (perhaps the ear), you might have had a greater chance of predicting what was in the image. We humans need to see a certain amount of overall shapes and patterns to be able to recognize an object in an image. The more you zoom in, the more features you're removing, and the harder it becomes to distinguish what is in the image.

Deep learning systems also operate on features, but they work in subtler ways. Neural networks sometimes find minuscule features that are imperceptible to the human eye but remain detectable even when you zoom in very closely.

In their final experiment, the researchers tried to measure the recognition gap of deep neural networks by gradually zooming in images until the accuracy of the AI model started to degrade considerably.

Previous experiments had shown a large difference between the recognition gaps of humans and deep neural networks. But in their paper, the researchers point out that most previous tests of neural network recognition gaps are based on human-selected image patches, which favor the human vision system.

When they tested their deep learning models on machine-selected patches, the researchers obtained results that showed a similar gap in humans and AI.

"These results highlight the importance of testing humans and machines on the exact same footing and of avoiding a human bias in the experiment design," the researchers write. "All conditions, instructions and procedures should be as close as possible between humans and machines in order to ensure that all observed differences are due to inherently different decision strategies rather than differences in the testing procedure."

As our AI systems become more complex, we will have to develop more complex methods to test them. Previous work in the field shows that many of the popular benchmarks used to measure the accuracy of computer vision systems are misleading. The work by the German researchers is one of many efforts that attempt to measure artificial intelligence and better quantify the differences between AI and human intelligence. And they draw conclusions that can provide directions for future AI research.

"The overarching challenge in comparison studies between humans and machines seems to be the strong internal human interpretation bias," the researchers write. "Appropriate analysis tools and extensive cross checks such as variations in the network architecture, alignment of experimental procedures, generalization tests, adversarial examples and tests with constrained networks help rationalizing the interpretation of findings and put this internal bias into perspective. All in all, care has to be taken to not impose our human systematic bias when comparing human and machine perception."


Microsoft 2017 annual report lists AI as top priority – CNBC.com – CNBC

Mobile is gone -- not a surprise, given the company's struggles with its Windows Phone operating system and its acquisition of Nokia, which Microsoft essentially declared worthless when it wrote down the total value of that acquisition in 2015.

Cloud computing, including fast-growing products like Office 365 and the Azure public cloud, is still there. Now AI is there with it, too.

Microsoft has acquired a few AI startups, like Maluuba and Swiftkey, since Nadella took over, and has established a formal AI and Research group. That team "focuses on our AI development and other forward-looking research and development efforts spanning infrastructure, services, applications, and search," the annual report says.

Microsoft's vision reset comes after Sundar Pichai, CEO of Alphabet's Google, began saying that the world is shifting from being mobile-first to AI-first. Facebook has also invested in both long-term AI research and AI product enhancements alongside Microsoft and Alphabet.


AI is here to save your career, not destroy it – VentureBeat

Imagine: humans waging an epic battle against technology, with human intelligence inevitably subjugated by artificial overlords. Plenty of folks would line up with front-row tickets and popcorn in hand. But it's also the very real manifestation of a universal fear: jobs relegated to machines, livelihoods handed over to bots.

But when we take a closer look at bots and other forms of artificial intelligence, our worst fears are a far cry from the truth. We've built bots to help us succeed. And instead of viewing them as our grand reckoning, we should view AI and bots as tools to exponentially expand our human capabilities in and out of the workplace. Yes, bots can make us more human in our daily lives.

Those who use bots as superhuman digital assistants will find the most success. It'll be humans to the bot-th power, rather than humans versus bots.

Much of our understanding of AI and the future is rooted in misconception. We're trepidatious toward the future. It's a valid and human response that shouldn't go ignored. But the truth is, the future is already here.

Anyone who's tagged a photo of a friend on Facebook has used AI. But do people think of it that way? While 86 percent of people say they're interested in trying AI tools, 63 percent don't realize they're already using AI.

Machines are much better at quickly surfacing the most relevant information the internet holds. It's on us humans to take that knowledge and make the most informed decisions. But finding information isn't all our bot friends can help us with; they can do much more than just answer direct questions.

Soon, bots will work in the background on our behalf and initiate a conversation when something interesting has happened. We'll be prompted with a notable result, and then we'll make the choice to move forward.

Its simple, but so powerful. As technology should be.

Computers now have the ability to do what we once thought only human intelligence could handle. In the near future, AI is going to feel less artificial and more intelligent.

Humans learn from example and experience. So do machines. Machine learning allows you to tell a system what you want, not how to do it.
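A minimal sketch of that idea in Python with scikit-learn: we hand the system labeled examples of the outcome we want, and it infers the rules itself. The lead-scoring features and numbers below are entirely hypothetical:

```python
# Hypothetical lead-scoring data: each row is
# [pages_viewed, emails_opened, days_since_signup].
from sklearn.ensemble import RandomForestClassifier

X = [[12, 5, 3], [1, 0, 40], [8, 3, 7], [0, 1, 60]]
y = [1, 0, 1, 0]  # 1 = became a customer, 0 = did not

# We never write "if pages_viewed > 5 then..."; the model learns
# the mapping from examples of what we want.
model = RandomForestClassifier(random_state=0).fit(X, y)
print(model.predict([[10, 4, 5]]))  # score a new lead
```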

Once something a few PhDs wrote about, machine learning is now something millions of people benefit from. Everything from predictive learning and lead scoring to content recommendations and email optimization will get much easier for marketers and salespeople alike.

Already, 40 percent of people don't care if they're served by an AI tool or a human, so long as the question gets answered. Only 26 percent say the same for more complicated customer requests. But serving those customers best will take (you guessed it) bots.

If you want your employees and business to benefit from all this machine learning, you'll need to invest in getting the data in one centralized place. After all, the data is what gives machine learning the learning part. There's no learning without the data.

Not only is AI the future of marketing and sales, it's the future of the inbound movement. AI and bots allow you to provide highly personalized, helpful, and human experiences for your customers. It may not be a summer blockbuster fit for theaters, but AI and bots sure feel like they're fit for businesses.


Eversana ups data and AI power with acquisition of HVH Precision Analytics – Agencies – MM&M – Medical Marketing and Media

Commercialization giant Eversana has bulked up its data and analytics offering with the acquisition of HVH Precision Analytics from Havas Health & You (HH&Y) and Perspecta. The deal adds a range of data-fueled capabilities, including advanced machine learning and patient identification in rare and misdiagnosed disorders, to Eversana's expanding slate.

The deal comes as somewhat of a surprise, if only because HH&Y leadership had long touted the HVH unit as its secret weapon.

Eversana and HH&Y will, however, continue to work together, with details of what the companies characterized as an "exclusive strategic partnership" set to be disclosed within the next few weeks.

"We're maintaining the relationship with Havas and will expand it," said Brigham Hyde, president, data and analytics at Eversana. He believes, for instance, that the market access/payer and value communications strengths of Eversana Engage pair well with HH&Y's expansive offerings.

"Very often we're the execution arm of the commercialization process," Hyde explained. "A lot of time, the advice and guidance that Havas creates, we end up executing. Having a tighter tie there made a ton of sense." An HH&Y spokesperson did not immediately respond to an emailed request for comment.

The deal came together during the pandemic shutdown, with Eversana one of multiple bidders, Hyde said. After a host of phone calls and Zoom meetings, Hyde and HVH CEO Steve Costalas sorted out many of the remaining details at a socially distanced meal in Princeton, NJ. Costalas will remain with the company, though his title, like those of other HVH leaders, hasn't been finalized. The HVH brand will be formally merged into Eversana before the end of the year.

Given the broad range of activities in which Eversana engages on behalf of its clients (everything from running copay programs to servicing specialty pharmacies), importing deeper data and analytics expertise is a no-brainer. "At the core, what makes those services run well is data and analytics," Hyde said. "This is a great building block for what we're trying to become, as both a prediction-driven business and a digital business."

HVH Precision Analytics was formally debuted in early 2017 by the predecessor organizations of HH&Y (Havas Health) and Perspecta (Vencore). At the time, then-HVH chief operating officer Jeff Ceitlin noted that the unit's analytical rigor had been battle-tested, literally, in a different arena. "If [Vencore] can use data and analytics to find bad guys in Afghanistan, they can use it to find [undiagnosed] patients," he told MM&M.

In the wake of the acquisition, HVH's 30 or so full-time employees will be integrated into Eversana's data and analytics unit, while its primary Wayne, PA, office will become the 31st outpost in the Eversana global network. The other three locations listed on HVH's website (in New York, Boston, and Hamilton Township, NJ) are Havas network sites that hosted small HVH teams, according to an Eversana spokesperson. Eversana counts more than 2,700 employees around the globe.

Hyde joined Eversana in April. He arrived from Concerto HealthAI, a data and AI startup focused on oncology.


AI and Cloud Remove Barriers to Entry for Real-Time Intraday Liquidity – www.waterstechnology.com

As increased regulatory reporting obligations add to the pressure financial institutions are under to manage intraday liquidity, centralizing siloed legacy systems into a single automated solution can offer an enterprise-wide, real-time view of liquidity. Richard Morris, product manager, cash and liquidity management at SmartStream, explores how institutions can achieve this, minimizing volatility and performing as efficiently as possible.

Richard Morris, SmartStream

Financial institutions must actively manage their intraday liquidity, but getting to this point continues to be a challenge, as banks are required to capture the information they need in real time while meeting increased regulatory reporting obligations.

However, for liquidity risk managers to have a truly relevant enterprise-wide, real-time view of their liquidity, financial institutions will need to consolidate siloed legacy systems into a single automated solution with predictive analytics layered on top.

A report by SmartStream, "Intraday Liquidity Management: From a Cost Discussion to a Revenue Opportunity," explores this in detail, as well as how technologies such as cloud, artificial intelligence (AI) and machine learning can help banks achieve higher levels of automation and reduce manual workload.

Intraday volatility in reporting leads to volatility in decision-making. To manage intraday liquidity successfully in a financial institution, funding, liquidity and risk managers must be able to anticipate the peaks and troughs of the bank balance, and predict the liquidity demands that may occur throughout the day.

Armed with that knowledge, a bank is in control of its own resources rather than responding to settlement demands when they arise. Financial institutions can leverage next-generation technologies such as cloud, AI and machine learning to achieve real-time management of their global intraday liquidity.

The importance of managing the flow of liquidity, as well as intraday counterparty exposure, cannot be overstated. There is also an element of understanding the drivers of liquidity demand and who within the organisation is driving the demand for intraday liquidity, being able to spot anomalies as they arise, and responding to unexpected events.

Traditional systems address the operational burden of cash management, consolidating data from internal systems to provide an enterprise-wide view of liquidity demand throughout the day and positioning liquidity to meet settlement demands. It is an invaluable task, but it is incredibly data-intensive.

To date, interpreting trends and metrics, and identifying behaviors and anomalies, has been hampered by the volume of data being processed and the time it takes to analyze it. Analysis of intraday usage has always been a historical analysis, but technology such as cloud, AI and machine learning can enable banks to take extra value out of the data that results from settlement activity.

Machine learning allows financial institutions to predict the profiles of their intraday settlement and their peak liquidity demand at any point during the day. Many banks lack this actionable intelligence, but using technology such as machine learning to predict fluctuations in cashflow will allow financial institutions to manage their flow of liquidity, reducing the liquidity buffer and, in turn, cost.
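SmartStream doesn't disclose its models, but the forecasting task described can be sketched with standard tooling. A toy Python example on synthetic data (every feature and number below is a hypothetical stand-in, not SmartStream's method):

```python
# Toy intraday-liquidity forecast: predict net outflow per time slot
# from historical settlement features, then size the buffer to the peak.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Hypothetical features per 30-minute slot:
# [slot_of_day, day_of_week, net_flow_same_slot_yesterday]
X = rng.random((1000, 3))
y = 2 * X[:, 0] + X[:, 2] + rng.normal(0, 0.1, 1000)  # synthetic net outflow

model = GradientBoostingRegressor().fit(X[:800], y[:800])
forecast = model.predict(X[800:])
peak_demand = forecast.max()  # size the liquidity buffer to the predicted peak
print(f"Predicted peak intraday demand: {peak_demand:.2f}")
```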

Predictive analytics can also be used to identify whether the bank or the market as a whole will enter a stressed environment and, therefore, use machine learning to put the organisation in a much better position to respond. These AI and machine learning techniques can also be applied to the regulatory use of data, to help banks derive the maximum benefit from what is beingreported.

The implementation of cloud, on the other hand, enables more institutions to adopt solutions that might otherwise carry a large cost of ownership. Where the largest banks have the resources to develop and operate these advanced solutions, it has always represented a significant investment. The lowering of upfront investment and ongoing costs, driven by the advent of cloud computing, will democratize these solutions and enable much wider uptake across the industry.


Biggest influencers in AI in Q2 2020: The top companies and individuals to follow – Verdict

GlobalData research has found the top artificial intelligence (AI) influencers based on their performance and engagement online. Using research from GlobalData's Influencer platform, Verdict has named ten of the most influential people in artificial intelligence on Twitter during Q2 2020.

Evan Kirstel is a B2B thought leader with extensive experience across enterprise sales, alliances, and business development. He currently serves as chief digital officer and advisor of NYDLA.ORG, a remote, distance/digital learning and collaboration association.

Kirstel is of the opinion that artificial intelligence accelerates the opportunity for increased customer and agent engagement alike. Contact centres are vital for businesses, and overlaying AI, robots and human-guided technology such as gaming is, in his view, an exciting area.

Twitter followers: 289,645

GlobalData influencer score: 100

Spiros Margaris is a venture capitalist and payment tech consultant. He is also the founder of Margaris Ventures, and serves on the Advisory Board of the wefox Group, a Europe-based insurtech start-up. He is the first international influencer to achieve The Triple Crown ranking.

Margaris has tweeted on varied AI topics such as its assistance in surgical decision-making, the importance of diversity in AI tools, and how companies are spending millions of dollars on the technology.

Twitter followers: 100,490

GlobalData influencer score: 90


Ronald van Loon is a recognised thought leader in technologies including AI, big data, IoT, machine learning, deep learning, 5G, predictive analytics, cloud, edge and data science. He currently serves as principal analyst and CEO of the Intelligent World, an influencer network that connects experts, businesses, and influencers to new audiences.

Van Loon is of the opinion that AI has progressed at a furious pace over the past few years, and that though it has usurped large chunks of the big data field, the technology is nowhere near human intelligence.

Twitter followers: 226,358

GlobalData influencer score: 86

Dr Ganapathi Pulipaka is a chief data scientist and a SAP technical lead at Accenture. With over 20 years of experience in SAP across fields such as project management and technology integration, Ganapathi has worked with various customers on developing AI strategies, neural networks, and other deep learning techniques.

Twitter followers: 92,845

GlobalData influencer score: 82

Kirk Borne is a principal data scientist and advisor at Booz Allen Hamilton, a technology and consulting company in the US. Borne has been a professor of astrophysics and an advisor at national research labs and government facilities, and has been recognized as a top influencer since 2013.

Borne has tweeted on AI topics such as modernising threat detection and analysis with AI tools, and the importance of continued investments in technologies such as AI, cloud, cybersecurity, and others during the global coronavirus pandemic.

Twitter followers: 263,369

GlobalData influencer score: 77

Nigel Willson is a top social media influencer and technologist. He currently serves as the founding partner of awakenAI, a personal advisory company, and is also the co-founder of We and AI, a non-governmental organisation whose mission is to increase public awareness and understanding of the risks and rewards of AI in the UK.

Ranked as one of the top 20 AI influencers in the world, Nigel is a global speaker and advisor on artificial intelligence, innovation and technology. He has tweeted on important areas including the ethical risks associated with AI initiatives, the applications of AI in urban management, and more.

Twitter followers: 55,842

GlobalData influencer score: 66

Robert Scoble is a chief strategy officer at Infinite Retina, which helps companies implement spatial computing technologies. A technology strategist and the author of four books on technology, Robert advises companies on areas such as augmented and virtual reality, autonomous vehicles, and associated fields.

Scoble's book on how augmented reality and artificial intelligence will change everything draws on discussions and interviews between technologists and business decision makers, exploring how the technologies will be useful to them.

Twitter followers: 405,477

GlobalData influencer score: 65

Tamara McCleary is the creator of Thulium, a social media analytics and consulting agency. A technology futurist, McCleary is an inspirational keynote speaker, and serves as advisor to leading global tech companies including Amazon, Oracle, Dell, SAP, Cisco, IBM, and Verizon, among others.

According to the influencer, the COVID-19 pandemic has hurried the introduction of artificial intelligence across industries, from outbreak tracing to contactless customer payment interactions. A change in public sentiment is a possibility, from "AI is dangerous" to "AI is safe."

Twitter followers: 308,431

GlobalData influencer score: 61

Thomas Power is a board member and director of 9Spokes Plc in New Zealand and of London-based Team Blockchain Ltd. He is also the author of Tokenomics, a book that describes the blockchain shift to cryptocurrencies. He has authored seven other books and has delivered nearly a thousand speeches across 56 countries.

Twitter followers: 313,005

GlobalData influencer score: 54

Prof Sally Eaves is a global strategy advisor for technologies such as blockchain, AI, and fintech. She specialises in the application and integration of these and other emergent technologies for business and societal benefit. She is also a member of the Forbes Technology Council, and an award-winning international keynote speaker, author and influencer.

Eaves is of the opinion that people need to be educated more on AI and blockchain and believes in leveraging AI for societal benefits such as education, healthcare, and more. She also states that automation, AI and communicating over conversational intelligence will be a competitive advantage for businesses during the current health crisis and beyond.

Twitter followers: 107,004

GlobalData influencer score: 53

GlobalData is this website's parent business intelligence company.


How AI Machines Could Save Wall Street Brokers’ Jobs – Entrepreneur

Morgan Stanley's recent decision to partner 16,000 financial advisers with algorithms that can identify trades and prod brokers to reach out to clients is evidence of yet another in-road being made by machines into human roles. If brokers embrace this mind-and-machine partnership, though, the payoff is job security in an industry in which returns are paramount.

The financial services industry is highly Darwinian in nature, with its culture of survival of the best performers. Now, bringing artificial intelligence (AI) into the mix is turning the competition up a notch. The most vulnerable, ironically, could be the high-performing brokers who might be tempted to continue alone without algorithmic assistance. But as we've seen in contests like Garry Kasparov vs. Deep Blue, the chess supercomputer of its time, or IBM Watson's victory on Jeopardy!, when human and computer are pitted against each other, the computer wins.


As research has shown, however, a human-and-computer collaboration makes an unbeatable combination. That's why, in business, science, or other fields, people's greatest collaborators are likely to be machines. On Wall Street, if a mediocre broker quickly adapts to using a machine as a partner, he or she will become a formidable performer with increased job security, potentially outperforming the strong broker who refuses to leverage the machine.

Morgan Stanley, one of the world's biggest brokerages, will roll out its AI pilot to 500 advisers in July. The rest of its brokers will be involved by year-end. The project is being billed as an augmentation of human brokers, not a robo replacement of them.

Automated wealth-management services, known as robo-advisors, are already becoming commonplace among many cost-conscious retail investors, who are gravitating toward computers for inexpensive asset allocation and investment advice. A study in Europe by Fujitsu found that 20 percent of respondents said they would buy banking or insurance services from the likes of Google, Amazon or Facebook. Uber has taken a step toward financial services by partnering with GoBank to offer checking accounts and debit cards to drivers.


For these digital disruptors, their mastery of machine learning would make it relatively easy to enter finance -- arguably far more easily than financial advisers could enter the field of machine learning. This same problem confronted Wall Street in the 1980s when computers first entered the business. At that time, computer scientists grasped the fundamentals of finance with greater ease than finance experts learned the fundamentals of computer programming. By bringing together expertise in each field -- those who know algorithms and those who know finance -- Wall Street can offer a high-powered collaboration.

While traditional brokerage services are seen as susceptible to an Uber-like disruption, particularly on the retail end, the high net-worth clientele segment is more likely to be protected -- at least for now, due to the importance of relationships.

Yet even here, the mind-and-machine partnership can take the higher end to another level. Algorithms will send brokers multiple-choice recommendations based on market changes or events in a client's life, with the objective of generating more business with customers. But humans are being augmented, not replaced. Bloomberg quoted Jeff McMillan, chief analytics and data officer for Morgan Stanley's wealth-management division, as saying brokers will be needed for the foreseeable future to advise wealthy clients with complicated financial planning needs.

It's analogous to what we see happening in medicine, where AI is being used to enhance physicians' clinical knowledge in making diagnoses. One can easily imagine the day when individuals will wear biosensors that produce reams of data that can only be digested by computers to help doctors manage patients' health conditions, from diabetes to allergies.

On Wall Street, a machine may excel at making accurate market predictions, but it does so in a black box -- a very dark and unknowable pool for high net worth investors, in particular. These individuals are used to high-trust relationships, such as in private equity, in which there is a premium on explaining how an investment strategy is structured and how it is expected to perform. Even the most accurate black box is not likely to win the trust of a high-touch client who relies on a human relationship.

Thus, for Wall Street's biggest brokerages such as Morgan Stanley, AI becomes a tool for wealth management. While robo-advisors are embraced by retail investors, high net worth clients who are used to high-touch service will still need the human part of the mind-and-machine collaboration. For this clientele, it's a matter of trust.


But as Morgan Stanley and other Wall Street firms embrace more AI, trust in wealth advisement is likely to become a triangulated relationship. Not only must the two humans -- the client and the adviser -- trust each other, but the two humans (and especially the adviser) must also trust the machine.

For the machine, it's about using data and machine learning to make market predictions and identify trade opportunities. For the human, it's about relationships and building trust, an area of expertise in which people still have a considerable edge over computers.

Brian Uzzi is a professor at the Kellogg School of Management at Northwestern University and a globally recognized scientist and speaker on leadership, social networks and new media. Professor Uzzi was a co-ch...


Cooperation on Artificial Intelligence will boost security and prosperity on both sides of the Atlantic – NATO HQ

"There are considerable benefits of setting up a transatlantic digital community cooperating on Artificial Intelligence (AI) and emerging and disruptive technologies, where NATO can play a key role as a facilitator for innovation and exchange", said NATO Deputy Secretary General Mircea Geoan. On Wednesday (28 October 2020) he took part in a high-level virtual discussion on transatlantic cooperation in the era of AI, organised by the Atlantic Council's Future Europe Initiative and GeoTech Center.

Mr. Geoană engaged in this conversation alongside the Chair and Vice Chair of the National Security Commission on Artificial Intelligence (NSCAI), Dr. Eric Schmidt and Secretary Robert O. Work, and the Head of Cabinet of European Commission Executive Vice-President Margrethe Vestager, Ambassador Kim Jørgensen. They discussed what modern technologies mean for European and American defence and security stakeholders, why the United States and the European Union should cooperate on AI, and how best to promote shared values in the field.

"NATO is a natural platform for transatlantic cooperation of AI," the Deputy Secretary General underlined. NATO offers its consultative mechanisms and unique networks for collaboration on defence and security questions. Bringing together Allies and partners, public and private sector, innovators and industry. We have great communities in areas like military capability development, science and technology, standardisation - and of course our Command Structure and military exercises. We also have new cross-cutting policy teams on Innovation Policy, who cover AI, and on Data Policy, he pointed out.


How to Reduce Biases in Your Contact Center AI Technology – On the Wire

How to Reduce Bias: Optimizing AI and Machine Learning For Contact Centers

Bias exists everywhere in our society. And while some biases are largely harmless, like a child's bias toward one food over another due to exposure, others are quite destructive, impacting our society negatively and often resulting in deaths, dispassionate laws, and discrimination. But what happens when the biases that exist in the physical world are hardcoded into the digital one? The rise and adoption of artificial intelligence for decision making has already caused alarm in some communities as the impacts of digital bias play out in front of them every day. In addition, the current events and trends pushing the U.S. and the world towards anti-racism stances and equity regardless of skin color raise concerns about how societal biases can influence AI, what that means for already marginalized communities, and what companies should be doing to ensure equity in service and offerings to consumers.

It's no news that artificial intelligence and machine learning are vulnerable to the biases held by the people who program them [1]. But how does bias impact the quality and integrity of the technologies, processes, and more that rely on AI and ML? Covid-19 has hastened the move towards employing these technologies in healthcare, media, and across industries to accommodate shifts in consumer behavior and new restrictions on the number of personnel allowed in one car, room, or office.

For contact center professionals concerned with ensuring business continuity, improving customer experience, or increasing capacity, the application of AI and ML during these early phases of pandemic-driven restructuring relates to the expansion of capacity, improvement of customer service, and reduction of fraud and operational costs. Understanding the consequences of adopting inherently biased AI or ML technologies meant to protect you, and the possible impact on your business, is necessary as we traverse toward a "new normal" [2] where technology fills the 6ft gap in our society and where fairness and equity will be expected for everyone.

This post discusses bias in artificial intelligence and machine learning, reviews the threats this bias poses to your business, and presents actionable considerations to discuss with your team when searching for a contact center anti-fraud or authentication solution.

Bias in artificial intelligence and machine learning can be summarized as the use of bad data to teach the machine and thus inform the intelligence. In short, ML bias becomes AI bias through the input and presence of weak data that informs decisions, and through the encoding of developers' thought processes, manifesting as algorithmic and societal biases. The inaccuracies caused by these biases can erode trust between the technology and its human users, as it becomes less reliable [3]. For you, this means less trust, loyalty, and affinity associated with your brand by consumers.

Algorithmic bias includes the aforementioned bad data and is present in many data sets in one of two ways.

The first, selection bias, occurs when the data used to train the algorithm over-represents one population, making it operate better for that group at the expense of others [4].

For contact centers, a real-world example could be gleaned from AI improperly trained on international calls. For many contact centers, the majority of calls may be domestic; not giving the algorithm enough data relating to international calls may cause bias wherein international calls are flagged for fraud and rerouted to an analyst instead of a customer service agent.

Additionally, the machine has to be trained, taught to make a decision, and developers bias algorithms through the ways they interact with them. For example, if we define something as fraud for the machine and teach it that fraud only looks one way, with biased inputs, it recognizes fraud committed only as long as it matches the narrow definition it has learned. Combined with selection bias, this results in machines making decisions that are slanted towards one population while ignoring others [3]. For a call center professional concerned with fraud mitigation, a real-world form of this bias is an AI systematically ignoring costly fraudster activity and instead focusing on genuine caller behavior, flagging it as suspicious or fraudulent because it doesn't fit the criteria for fraud that the machine has learned.
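One standard way to surface this kind of skew, sketched below in Python on purely synthetic data (every feature and number is made up), is to audit error rates per group rather than looking only at overall accuracy; with real data, an under-sampled group would show the inflated false-positive rate described above:

```python
# Per-group audit: compare false-positive rates on genuine calls for
# domestic vs. international callers instead of one overall accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
intl = rng.random(10000) < 0.03                  # only 3% international: under-sampled
X = np.c_[intl, rng.normal(size=(10000, 3))]     # synthetic call features
y = (rng.random(10000) < 0.02).astype(int)       # synthetic fraud label

clf = RandomForestClassifier().fit(X[:8000], y[:8000])
pred = clf.predict(X[8000:])
for group, name in ((0.0, "domestic"), (1.0, "international")):
    genuine = (X[8000:, 0] == group) & (y[8000:] == 0)
    fpr = pred[genuine].mean()                   # share of genuine calls flagged
    print(f"{name}: false-positive rate = {fpr:.3f}")
```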

When choosing a solution for your contact center, you should ask about the diversity and depth of the data being fed to the machine and how it learns over time. Though no solution is infallible, Pindrop works to reduce bias in our AI by making sure that voiceprints are user-specific instead of a generalization based on a large population of persons with similar features, like an accent. Feeding the machine truth gives it a more diverse dataset, reducing algorithmic bias.

Societal, or latent, bias is not as quickly defined, tested for, or resolved [4].

This occurs when an algorithm is taught to identify something based on historical data and, often, stereotypes. An example would be an AI determining that someone is not a doctor because they are female, due to the historical preponderance of stock imagery featuring male doctors versus those featuring female ones [5]. The AI is not sexist; the machine has simply learned, over and over, that males in lab coats with glasses and badges are doctors, and that women can be or should be ignored for this possibility. Pindrop addresses societal bias by developing its models with diverse teams. The best applications of AI are those that also include human input. Diversifying human interaction with the machine, the data it is fed, and the modeling it is given strengthens our AI against bias.

Customer Service

Biased solutions could erroneously flag callers as fraudulent, ruining customer experiences and causing attrition as customers' issues take longer to resolve, ultimately costing you monetarily and in brand reputation. An example of this is contact center authentication solutions that use geographic location as a primary indicator of risk. A person merely placing a phone call as they drive could be penalized. Even worse, persons living in risky neighborhoods are at the mercy of their neighbors' criminal activity, as biased tech could flag zip codes and unfairly lock out entire populations. Pindrop's commitment to reducing bias addresses this impact to customer service using the diverse data sets mentioned above and by applying more complex models for learning. The result is that no one group is more likely than another to be flagged as fraudulent, suspicious, or otherwise risky. For you, that means fewer angry callers and fewer false positives overall.

Fraud Costs

While some biases are restrictive, locking customers out, other biases coded into your contact center anti-fraud or authentication solution can allow more fraud through as it makes certain assumptions. For example, for years [6] data has pointed towards iPhone users being more affluent than Android users. Should your solution assume that wealthier consumers are more trustworthy than working-class persons, it may lower the risk score of fraudsters on iPhones, possibly allowing perpetrators into accounts and systems while over-penalizing Android users. Though Pindrop is not immune to bias (no solution is), we can greatly reduce the AI biases that can unintentionally increase fraud costs through our approach to developing AI.

Contact Center Operations

Lastly, a biased solution could cost you in productivity and operational costs. The two examples above can quickly impact your productivity, costing you more per call. AI biases could cause you to implement step-up authentication for genuine callers and flag accounts exhibiting normal behavior as suspicious because of an encoded algorithmic or societal bias.

Solutions like Pindrop's single-platform solutions for contact center security help improve customer experience, reduce fraud costs, and optimize contact center operations by developing proprietary AI that learns from diverse and purely fact-based input, eliminating bias in the AI.

Bias enters AI and ML via corrupt data practices but also through the way the solutions are built [5]. There are, however, ways to address the builders' biases and shield the solution from the input of bad data.

Now that you understand how a biased AI can impact your business, consider three core principles when searching for a solution to serve your contact center. Your ideal solution should:

Have diverse, varied, and fact-based inputs

Diverse, varied, and fact-based inputs address selection bias and ensure that all populations are sampled and therefore considered in the calculations that become decisions.

Understand Garbage In, Garbage Out

Question your solution's data inputs. Utilizing outdated concepts, naming conventions, and more influences your machine to make decisions that are prejudiced against specific population segments. Understanding the data inputs and the freshness of the data ingested by your solution helps fight against latent bias in AI. Earlier in this post we discussed latent bias, the kind of bias based on societal norms, or rather the accepted societal behaviors of the time. With that in mind, think of an engine deciding college admissions based on the admissions of the past 60 years. It's 2020; in 1960, many public and private schools were still racially segregated. If that data is fed to the engine, it will almost certainly weigh an applicant's race negatively.
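A toy Python illustration of that failure mode (the data is synthetic and the feature names hypothetical): a model fit to historical decisions that penalized one group learns to reproduce the penalty.

```python
# Latent bias demo: historical admissions (synthetic) penalized group 1
# regardless of grades; a model trained on them inherits the penalty.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
grades = rng.normal(size=2000)
group = rng.integers(0, 2, 2000)                    # protected attribute
admitted = ((grades - 1.5 * group) > 0).astype(int)  # biased historical outcome

model = LogisticRegression().fit(np.c_[grades, group], admitted)
print(model.coef_)  # large negative weight on `group`: the bias is learned
```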

Everyone Has Biases

The goal should be neutrality, and diverse views bring us closer to an optimal state of development. By combining varied voices, thought processes, and capabilities from diverse groups of developers, an AI can be created with such diverse and varied inputs that it learns to operate outside of the conflicting biases of its makers. For example, above we explained how societal influences, even those no longer widely accepted, can impact an AI's decisions. Should the AI ingest historic information polluted with outdated thought processes, naming conventions, and other latent biases, but also be fed fresh, diverse data by diverse humans, it will gain, via feedback from those humans, deeper learning that helps it make more nuanced, accurate, and less biased decisions.

When considering an AI-powered solution for the protection of your contact center and customers, understanding bias in AI and ML, how it impacts your business, and what you can do about it ultimately saves you time, reduces costs, and hardens your contact center against attack.

Pindrop's single-platform solutions for the contact center can help you address challenges in fraud mitigation and identity verification. These solutions are fed fact-based inputs, follow proprietary data collection and analysis processes, and are built by diverse and capable teams to help eliminate bias from our software. Contact us today to see it in action, or learn more from our resource pages.

[1] IEEE Spectrum. "Untold History of AI: The Birth of Machine Bias." IEEE Spectrum: Technology, Engineering, and Science News, 2019, spectrum.ieee.org/tech-talk/tech-history/dawn-of-electronics/untold-history-of-ai-the-birth-of-machine-bias.

[2] Radfar, Cyrus. "Bias in AI: A Problem Recognized but Still Unresolved." TechCrunch, 25 July 2019, techcrunch.com/2019/07/25/bias-in-ai-a-problem-recognized-but-still-unresolved/.

[3] Howard, Ayanna, and Jason Borenstein. "AI, Robots, and Ethics in the Age of COVID-19." MIT Sloan Management Review, 12 May 2020, sloanreview.mit.edu/article/ai-robots-and-ethics-in-the-age-of-covid-19/.

[4] Gershgorn, Dave. "Google Explains How Artificial Intelligence Becomes Biased Against Women and Minorities." Quartz, 28 Aug. 2017, qz.com/1064035/google-goog-explains-how-artificial-intelligence-becomes-biased-against-women-and-minorities/.

[5] Hao, Karen. "This Is How AI Bias Really Happens-and Why It's So Hard to Fix." MIT Technology Review, 4 Feb. 2019, http://www.technologyreview.com/2019/02/04/137602/this-is-how-ai-bias-really-happensand-why-its-so-hard-to-fix/.

[6] Yahoo Finance. "These Maps Show That Android Is for Poor People." Yahoo Finance, 4 Apr. 2014, finance.yahoo.com/news/maps-show-android-poor-people-000200949.html.


Google and ZebiAI launch Chemome Initiative to identify chemical probes with AI models – VentureBeat

In a study published this week in the Journal of Medicinal Chemistry, researchers at Google, in collaboration with X-Chem Pharmaceuticals, demonstrated an AI approach for identifying biologically active molecules using a combination of physical and virtual screening processes. The work led to the creation of the Chemome Initiative, which launches today: a collaboration between Google's Accelerated Science team and startup ZebiAI that aims to enable the discovery of many more small molecule chemical probes for biological research.

As part of the Chemome Initiative, Google says that ZebiAI will work with researchers to identify proteins of interest and source screening data the Accelerated Science team will use to train AI models. These models will make predictions on commercially available libraries of small molecules, chemical probes that aren't useful as drugs but that selectively inhibit or promote the function of specific proteins, which will be provided to researchers for activity testing to advance some programs through discovery.

Making sense of the biological networks that support life and produce disease is a complex task. One approach is using small molecules; in a biological system (e.g., cancer cells growing in a dish), they can be added at a specific time to observe how the system responds when a protein has increased or decreased activity.

Despite how useful chemical probes are for this kind of biomedical research, only 4% of human proteins have a known chemical probe available. In an effort to isolate new ones, Google and X-Chem Pharmaceuticals turned to the field of AI and machine learning.

As the coauthors of the study explain, chemical probes are identified by screening the space of small molecules against a target protein to distinguish "hit" molecules that can be further tested. The physical part of the process uses DNA-encoded small molecule libraries (DELs) that contain many distinct small molecules in one pool, each attached to a fragment of DNA serving as a barcode for that molecule. To build a DEL, one generates many chemical fragments, each with a common chemical handle. The results are pooled and split into separate reactions, where a set of distinct fragments with another chemical handle is added.

The chemical fragments from the two steps react and fuse together at the common chemical handles, and they're connected to build one continuous barcode for each molecule. Once a library has been generated, it can be used to find the small molecules that bind to the protein of interest by mixing the DEL with the protein and washing away the small molecules that don't attach. Sequencing the remaining DNA barcodes produces millions of individual reads of DNA fragments that can then be processed to estimate which of the billions of molecules in the original DEL interact with the protein.
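
That last step, turning raw sequencing reads into a list of likely binders, is at its core a counting problem. As a minimal illustration only (this is not the paper's pipeline; the names are invented, and the naive fixed threshold stands in for a real analysis that would model sequencing noise and compare against a no-protein control):

```python
from collections import Counter

def estimate_binders(sequenced_reads, barcode_to_molecule, min_reads=50):
    """Count DNA-barcode reads from a DEL selection and flag enriched molecules.

    sequenced_reads: iterable of barcode strings recovered after washing.
    barcode_to_molecule: dict mapping each barcode to a molecule identifier.
    min_reads: naive enrichment threshold (hypothetical; real pipelines model
    sequencing noise and compare against control selections).
    """
    counts = Counter(sequenced_reads)
    hits = {}
    for barcode, n in counts.items():
        molecule = barcode_to_molecule.get(barcode)
        if molecule is not None and n >= min_reads:
            hits[molecule] = n
    return hits

# Toy usage: three barcodes, two heavily enriched after selection.
reads = ["ACGT"] * 120 + ["TTAA"] * 3 + ["GGCC"] * 60
library = {"ACGT": "mol-001", "TTAA": "mol-002", "GGCC": "mol-003"}
print(estimate_binders(reads, library))  # {'mol-001': 120, 'mol-003': 60}
```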

[Figure: The fraction of molecules from those tested showing various levels of activity, comparing predictions from the classifier and random forests on three protein targets. Image credit: Google]

To predict whether an arbitrarily chosen small molecule will bind to a target protein, the researchers built a machine learning model, specifically a graph convolutional neural network, a type of model designed for graph-like inputs such as small molecules. The physical screening with the DEL provides positive and negative examples for a classifier: the small molecules remaining at the end of the screening process serve as positive examples, and everything else serves as negative examples.
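
The paper's exact architecture isn't reproduced here, but the general shape of a graph convolutional classifier is easy to sketch. In the toy PyTorch model below (the layer sizes, names, and example adjacency matrix are all invented for illustration), each atom is a node, bonds define the adjacency matrix, and stacked graph convolutions followed by a pooled readout yield a binding probability:

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One graph convolution: mix each atom's features with its neighbors'."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj is the adjacency matrix with self-loops, row-normalized.
        return torch.relu(self.linear(adj @ x))

class MoleculeClassifier(nn.Module):
    """Tiny graph convolutional network: atom features -> binding probability."""
    def __init__(self, atom_dim, hidden=32):
        super().__init__()
        self.conv1 = SimpleGraphConv(atom_dim, hidden)
        self.conv2 = SimpleGraphConv(hidden, hidden)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x, adj):
        h = self.conv2(self.conv1(x, adj), adj)
        # Mean-pool over atoms to get one vector per molecule, then classify.
        return torch.sigmoid(self.readout(h.mean(dim=0)))

# Toy molecule: 4 atoms with 8 features each, connected in a chain.
x = torch.randn(4, 8)
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
adj = adj / adj.sum(dim=1, keepdim=True)  # row-normalize with self-loops
model = MoleculeClassifier(atom_dim=8)
print(model(x, adj))  # predicted binding probability
```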

The team physically screened three diverse proteins using DEL libraries: sEH (a hydrolase), ER (a nuclear receptor), and c-KIT (a kinase). Using the DEL-trained models, they then virtually screened large make-on-demand libraries from the drug discovery platform Mcule and an internal molecule library at X-Chem to identify a set of molecules predicted to show affinity with each protein target. Lastly, they compared the results of their classifier to a random forest model, a common method for virtual screening that uses standard chemical fingerprints. They report that the classifier significantly outperformed the RF model in discovering potent candidates.
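
For context, a random forest baseline of the kind they compare against is straightforward to set up with scikit-learn. The sketch below uses synthetic binary vectors in place of real chemical fingerprints (which would normally be computed from molecular structures), so the data and labels are entirely made up:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

# Stand-in for 2048-bit chemical fingerprints; a real pipeline would compute
# these from molecular structures rather than sampling them at random.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 2048))
# Synthetic labels: "binders" correlate with a small subset of bits.
y = (X[:, :16].sum(axis=1) + rng.normal(0, 1, 5000) > 9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
rf.fit(X_tr, y_tr)
scores = rf.predict_proba(X_te)[:, 1]
print("average precision:", average_precision_score(y_te, scores))
```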

The team tested almost 2,000 molecules across the three targets, which it claims is the largest published prospective study of virtual screening to date.

"We're excited to be a part of the Chemome Initiative enabled by the effective ML techniques described here and look forward to its discovery of many new chemical probes. We expect the Chemome will spur significant new biological discoveries and ultimately accelerate new therapeutic discovery for the world," Google wrote in a blog post. "While more validation must be done to make the hit molecules useful as chemical probes, especially for specifically targeting the protein of interest and the ability to function correctly in common assays, having potent hits is a big step forward in the process."

More here:

Google and ZebiAI launch Chemome Initiative to identify chemical probes with AI models - VentureBeat

WIMI Holographic Academy Invest More in AI Technology Research, Holographic Technology Achieves Breakthrough in Mobile Interaction – GlobeNewswire

HONG KONG, May 28, 2021 (GLOBE NEWSWIRE) -- MobiusTrend, the fintech market research organization, recently released a research report, "WIMI Holographic Academy Invests More in AI Technology Research, Holographic Technology Achieves Breakthrough in Mobile Interaction". With the continuous development of holographic technology, its range of applications keeps widening. At the same time, to give audiences a more immersive experience and dig deeper into holographic R&D resources, outstanding scholars at home and abroad have invested in special research programs exploring cutting-edge technologies.

According to a new study by the Holographic Research Group of Brigham Young University, the group has found a way to create a lightsaber: green for Yoda, red for Darth Vader, each producing a luminous beam in open air. This discovery overcomes a long-standing challenge in the field and brings a new concept of interaction to holographic technology.

First, what is interaction? It is generally understood that interaction takes people as its subject: people exchange information with other things, and the exchange produces mutual influence. That exchange of information may be unilateral or bilateral. Objectively speaking, interaction has multi-level and multi-dimensional attributes. With the continuous innovation of information technology, computer technology, virtual technology, and imaging technology, the concept of interaction is changing subtly, and its continuous renewal will expand people's thinking and promote the diversification of interaction methods.

In an interview, one of the researchers, BYU professor of electrical engineering Dan Smalley, said: "What they see in the scenes they create is real; there is nothing computer-generated." Unlike in the movies, where lightsabers and photon torpedoes never really exist in physical space, these objects are real and can even form simple animations in thin air. The development paves the way for immersive experiences in which people interact with holographic-like virtual objects that coexist in their immediate space. To prove the principle, the team also created virtual stick figures that can walk in the air.

It can be said that the emergence of brand-new interactive concepts has changed people's preconceptions to a certain extent and brought new insight. This innovation is a key factor in the development of the holographic industry, especially in holographic imaging technology. Through the holographic image, the audience can enter a brand-new situation: in addition to watching, they can also feel the situation and gain a new experience. This "spatial reconstruction" model lets the interactive subject occupy the active position in the context. Holographic images have made human-computer interaction more thorough, enhanced the communication between product and audience, allowed the audience to understand the product more deeply, and established a behavioral relationship between product and person. At the same time, with the help of holographic images, designers can convey the design concepts and ideas contained in the product to the audience, so that the audience can resonate with them.

The characteristics of the new interactive concept under holographic imaging technology are as follows. (1) Diversity. In the process of human perception of the surrounding environment, vision and hearing complement each other. Holographic imaging technology fully integrates voice and vision: on the basis of voice interaction, the interaction breaks the shackles of 2D and becomes three-dimensional. Some special interactive media can even provide the audience with smell, touch, and taste on top of sight and hearing, and this diversified experience raises interaction efficiency to a higher level. (2) Visualization. A virtual environment brings auditory and visual sensations, but the lack of tactile feedback still leaves a gap between it and real sensation; this is one of the limitations of virtualized interaction. The holographic image is therefore paired with force feedback: when the user touches the image, a special force-feedback device returns information to the user, producing a more intuitive experience. (3) A large amount of information. Compared with traditional media, holographic images carry far more information. The digital image is refined by computing software, and multiple holographic images can be superimposed to form a digital virtual space that carries a great deal of information. This large capacity is also what enables a fuller interactive experience for the audience.

To promote in-depth cooperation with academia, explore cutting-edge holographic technologies with scholars at home and abroad, promote industry adoption, and open up research results, the WIMI Holographic Academy of Sciences is continuing its research into cutting-edge AI technology and establishing strategic partnerships with scholars from research institutions. WIMI aims to explore disruptive emerging technologies with them and accelerate the application and promotion of research results. In 2020, relying on its research teams in Shenzhen and Beijing, the WIMI Holographic Academy of Sciences opened four research themes: holographic computing science, holographic communication science, micro-integration science, and holographic cloud science. Drawing on this team's strength, WIMI is actively promoting the research and development of holographic products, and it looks forward to exploring the frontiers of AI with outstanding scholars from universities and research institutions at home and abroad, creating a sustainable, win-win industry-university research ecosystem for AI. The company states that the fineness of image information obtained by its holographic computer vision AI synthesis is about 10 times the industry level, and that its processing capability for computer holographic vision AI synthesis is about 80% better than the industry average.

At present, artificial intelligence technology is widely used in Internet applications, enterprise applications, and emerging intelligent hardware; AI has penetrated all walks of life. As of 2020, the scale of China's core artificial intelligence industry was close to 65 billion yuan, spanning fields such as security, finance, medical care, and education. Facing different application requirements, artificial intelligence has spawned a variety of machine learning approaches, such as deep learning, active learning, and reinforcement learning, aiming to deliver richer experiences and greater capacity.

WIMI emphasizes both research and application development. Its basic research focuses on machine learning, computer vision, and related directions, and it has published many research papers; its technology applications focus on social, gaming, education, medical AI, and other fields. In frontier areas such as cloud computing, artificial intelligence, and the Internet of Things, WIMI Hologram Cloud also relies on global and Chinese technology and business networks, and it has successively invested in companies in related fields, attracting many innovative companies, partners, and talents. In the future, WIMI is expected to play a more important role in these application spaces and bring people a better AI interactive experience.

About MobiusTrend

MobiusTrend Group is a leading market research organization in Hong Kong. It has built one of the premier proprietary research platforms on the financial market, emphasizing emerging growth companies and paradigm-shifting businesses. The MobiusTrend team specializes in market research reports, industry insights, and financing-trend analysis. For more information, please visit http://www.mobiustrend.com/

Media contact

Company: MobiusTrend Research

E-Mail: cs@mobiustrend.com

Website: http://www.mobiusTrend.com

YouTube: https://www.youtube.com/channel/UCOlz-sCOlPTJ_24rMgR6JLw

Excerpt from:

WIMI Holographic Academy Invest More in AI Technology Research, Holographic Technology Achieves Breakthrough in Mobile Interaction - GlobeNewswire

14 ways AI will impact the education sector – VentureBeat

There have been a lot of digital "next big things" in education over the years, everything from the Apple IIe to online learning. The latest is artificial intelligence education tech (AI Ed), and only time will tell what impact it ultimately has. But for something as important as education, now is the time to start talking about the benefits and challenges created by the AI-powered personalized learning systems that are making their way into classrooms.

Entefy covered this topic in previous articles: Old school no more: AI disrupts the classroom, which focused on teachers; and Artificial intelligence may transform education, but are parents ready?, which focused on parents.

The clear near-term opportunity for AI Ed is to support teachers by taking over time-consuming, lower-value tasks, like grading and record keeping. But there are already sophisticated AI teaching systems under development, systems that raise long-term questions about what place AI should have in schools.

Here are 14 unprecedented benefits and challenges that could arise:

AI has the potential to change the quality, delivery, and nature of education. It also promises to change forever the role of parents, students, teachers, and educational organizations.

Additional article contributors: Mehdi Ghafourifar and Brian Walker.

Alston Ghafourifar is the CEO and co-founder of Entefy, an AI-communication technology company, which makes the first universal communicator.

The article originally appeared at Entefy.

See the rest here:

14 ways AI will impact the education sector - VentureBeat

HarperCollins Brings AI To Book Recommendations – Forbes


Publishers have always emphasized the power of word-of-mouth marketing when it comes to selling books. Booksellers are expected to hand-sell titles to bookstore patrons for this reason, and the shelves are often peppered with "employee recommendation" ...

Link:

HarperCollins Brings AI To Book Recommendations - Forbes

Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race – Scientific American

Falsified videos created by AI, in particular by deep neural networks (DNNs), are a recent twist on the disconcerting problem of online disinformation. Although fabrication and manipulation of digital images and videos are not new, the rapid development of AI technology in recent years has made the process of creating convincing fake videos much easier and faster. AI-generated fake videos first caught the public's attention in late 2017, when a Reddit account with the name Deepfakes posted pornographic videos generated with a DNN-based face-swapping algorithm. Subsequently, the term deepfake has been used more broadly to refer to all types of AI-generated impersonating videos.

While there are interesting and creative applications of deepfakes, they are also likely to be weaponized. We were among the early responders to this phenomenon: in early 2018 we developed the first deepfake detection method, based on the lack of realistic eye-blinking in the early generations of deepfake videos. Subsequently, there has been a surge of interest in developing deepfake detection methods.
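
The authors don't spell out their detector here, but blink-based detection is commonly built on an eye aspect ratio (EAR) computed from facial landmarks. The sketch below shows the idea only (landmark extraction is omitted, and the threshold and toy data are invented):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6 (x, y) landmarks around one eye, ordered as in the common
    68-point facial landmark scheme. EAR drops toward 0 as the eye closes."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame EAR values: a blink is a run
    of at least `min_frames` consecutive frames below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Real videos of people show several blinks per minute; early deepfakes
# often produced EAR sequences that never dip below the threshold.
ears = [0.3, 0.31, 0.12, 0.1, 0.29, 0.3, 0.3]
print(count_blinks(ears))  # 1
```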

DETECTION CHALLENGE

A climax of these efforts is this year's Deepfake Detection Challenge. Overall, the winning solutions are a tour de force of advanced DNNs (an average precision of 82.56 percent by the top performer). These provide us with effective tools to expose deepfakes that are automated and mass-produced by AI algorithms. However, we need to be cautious in reading these results. Although the organizers have made their best effort to simulate situations where deepfake videos are deployed in real life, there is still a significant discrepancy between performance on the evaluation data set and on a more realistic data set; when tested on unseen videos, the top performer's accuracy dropped to 65.18 percent.

In addition, all solutions are based on clever designs of DNNs and data augmentations, but provide little insight beyond black-box classification algorithms. Furthermore, these detection results do not reflect the actual detection performance of the algorithm on a single deepfake video, especially ones that have been manually processed and perfected after being generated by the AI algorithms. Such crafted deepfake videos are more likely to cause real damage, and careful manual post-processing can reduce or remove the artifacts that the detection algorithms are predicated on.

DEEPFAKES AND ELECTIONS

The technology for making deepfakes is at the disposal of ordinary users; there are quite a few software tools freely available on GitHub, including FakeApp, DFaker, faceswap-GAN, faceswap and DeepFaceLab, so it's not hard to imagine the technology being used in political campaigns and other significant social events. However, whether we are going to see any form of deepfake videos in the upcoming elections will be largely determined by non-technical considerations. One important factor is cost. Creating deepfakes, albeit much easier than ever before, still requires time, resources and skill.

Compared to other, cheaper approaches to disinformation (e.g., repurposing an existing image or video to a different context), deepfakes are still an expensive and inefficient technology. Another factor is that deepfake videos can usually be easily exposed by cross-source fact-checking, and are thus unable to create long-lasting effects. Nevertheless, we should still be on alert for crafted deepfake videos used in an extensive disinformation campaign, or deployed at a particular time (e.g., within a few hours of voting) to cause short-term chaos and confusion.

FUTURE DETECTION

The competition between the making and detection of deepfakes will not end in the foreseeable future. We will see deepfakes that are easier to make, more realistic and harder to distinguish. The current bottleneck, the lack of detail in the synthesis, will be overcome by combining deepfake methods with GAN models. Training and generation time will be reduced by advances in hardware and in lighter-weight neural network structures. In the past few months we have seen new algorithms that are able to deliver a much higher level of realism or run in near real time. The latest forms of deepfake video go beyond simple face swapping, to whole-head synthesis (head puppetry), joint audiovisual synthesis (talking heads) and even whole-body synthesis.

Furthermore, the original deepfakes were only meant to fool human eyes, but recently measures have emerged to make them indistinguishable to detection algorithms as well. These measures, known as counter-forensics, take advantage of the fragility of deep neural networks by adding targeted invisible noise to the generated deepfake video to mislead the neural network-based detector.
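
A well-known instance of this kind of perturbation attack is the fast gradient sign method (FGSM). As a minimal sketch of the idea only (the toy detector and step size below are invented; real counter-forensic attacks are considerably more elaborate and operate across whole videos), a single frame can be nudged like this:

```python
import torch

def fgsm_evade(detector, frame, epsilon=2 / 255):
    """Fast-gradient-sign sketch of the counter-forensics idea: nudge each
    pixel slightly in the direction that lowers the detector's 'fake' score.

    detector: model mapping a batched image tensor to P(fake).
    frame: (C, H, W) tensor with pixel values in [0, 1].
    """
    frame = frame.clone().detach().requires_grad_(True)
    p_fake = detector(frame.unsqueeze(0)).squeeze()
    p_fake.backward()
    # Step against the gradient of the fake score; clamp to valid pixels.
    adv = (frame - epsilon * frame.grad.sign()).clamp(0, 1)
    return adv.detach()

# Toy detector for illustration only: one conv layer plus global pooling.
detector = torch.nn.Sequential(
    torch.nn.Conv2d(3, 1, kernel_size=3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Sigmoid(),
)
frame = torch.rand(3, 64, 64)
print(fgsm_evade(detector, frame).shape)  # torch.Size([3, 64, 64])
```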

To curb the threat posed by increasingly sophisticated deepfakes, detection technology will also need to keep up the pace. As we try to improve the overall detection performance, emphasis should also be put on increasing the robustness of the detection methods to video compression, social media laundering and other common post-processing operations, as well as intentional counter-forensics operations. On the other hand, given the propagation speed and reach of online media, even the most effective detection method will largely operate in a postmortem fashion, applicable only after deepfake videos emerge.

Therefore, we will also see developments of more proactive approaches to protect individuals from becoming the victims of such attacks. This can be achieved by poisoning the would-be training data to sabotage the training process of deepfake synthesis models. Technologies that authenticate original videos using invisible digital watermarking or control capture will also see active development to complement detection and protection methods.

Needless to say, deepfakes are not only a technical problem, and as Pandora's box has been opened, they are not going to disappear in the foreseeable future. But with technical improvements in our ability to detect them, and increased public awareness of the problem, we can learn to co-exist with them and to limit their negative impacts in the future.

Go here to read the rest:

Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race - Scientific American

MJ or LeBron Who’s the G.O.A.T.? Machine Learning and AI Might Give Us an Answer – Built In Chicago

Our country is deeply divided into two camps.

From coast to coast, people are eager to know the answer to one simple question: Who will come out on top, Michael Jordan or LeBron James?

It might seem like a moot point. NBA legend Michael Jordan is now well into retirement, while LeBron James is still able to continue building his case with the Los Angeles Lakers. Thanks to the laws of time and space, there's no way to accurately compare their talent in a conclusive way.

Or is there?

AutoStats, a product of Stats Perform, is using artificial intelligence and computer vision to unlock secrets of seasons past and predict seasons future.

The goal of AutoStats is to collect tracking data from every sports video that has ever existed, which essentially enables us to travel back in time and compare players and eras in a way that we haven't been able to do previously, said Patrick Lucey, chief scientist at Stats Perform. Using this technology, we can start to make the impossible possible.

The implications of these statistics are a real game-changer in the sports world, the effects of which can be seen in betting, team drafting and recruitment, professional commentary, fantasy football and how well your opinions on all-star players hold up.

Sujoy Ganguly, Ph.D.

Director of Computer Vision

I am the director of computer vision, which means I teach computers to watch sports. Specifically, we extract the positions of the players, their limbs and actions directly from the broadcast video you get in your home.

Patrick Lucey, Ph.D.

Chief Scientist

I'm the chief scientist, and my role is to set the AI strategy to maximize the value of our deep treasure troves of sports data using AI technology.

Patrick Lucey: AI not only emulates what a human can do, but surpasses what even the best human expert can do. The reason artificial intelligence has reached this superhuman capability is that it has utilized an enormous amount of data. The more data you have, the better your AI technology will be, simple as that.

When it comes to the sheer volume of sports data, no other company has the amount that we have. We cover any sport you can think of, and we capture it at a depth that no other company does.

Sujoy Ganguly: The goal of our team is to create the most in-depth data at the broadest breadth. We do this by extracting player tracking, pose and event data everywhere there is broadcast video. To accomplish this, we have three streams: one that focuses on model development, a second that focuses on the deployment of these models to the cloud, and a third that focuses on implementation at the edge for in-venue deployment.

How does Stats Perform get its data?

Stats Perform collects data through raw video. It's collected via the company's in-venue hardware or snapped up from broadcasts.

Lucey: Well, it's like teaching a child how to read. First, they have to learn the alphabet and words before being able to understand a sentence, then a paragraph; only then can they understand the whole story. Once they have read a lot of books and seen similar stories in the past, they can actually start to predict how the story will unfold.

It's similar for sport, where we first have to create a sports-specific alphabet and words from which to form sentences that represent gameplay that a computer can understand. Instead of using characters and textual words, we use spatial data and event sequences. From this sports-specific language we have built, we can then get the computer to learn similar gameplay from the data we have, which enables us to predict plays and player motion. The main reason I believe AI has so much hype around it is that it is the ultimate decision analysis tool: every decision and action can be objectively analyzed.
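
Stats Perform's actual models aren't described in the interview, and they surely run far deeper than this, but the "language of gameplay" analogy can be made concrete with a toy next-event predictor. The sketch below fits a simple bigram model over invented possession sequences:

```python
from collections import Counter, defaultdict

# A toy "sports language": each token is an event; sequences are possessions.
possessions = [
    ["inbound", "pass", "pass", "screen", "shot"],
    ["inbound", "pass", "drive", "shot"],
    ["inbound", "pass", "screen", "pass", "shot"],
    ["inbound", "dribble", "drive", "foul"],
]

# Fit a bigram model: counts of (current event -> next event).
transitions = defaultdict(Counter)
for seq in possessions:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def predict_next(event):
    """Most likely next event given the current one."""
    nxt = transitions[event]
    return nxt.most_common(1)[0][0] if nxt else None

print(predict_next("pass"))  # 'screen': the most common event after a pass here
```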

Ganguly: Teaching a machine to interpret sports is a complex and evolving problem. At a high level, we start with a clearly defined question. For example, what is the likelihood that a team will win a game, and how does this depend on the players on that team? Then we ask what information we have: We have results of thousands of games and data about the players who played in those games. From there, we can start the process of conducting experiments and converging to a high-performing model. Generally, this process requires an open and honest conversation about the results of each test and what we have learned.

Ganguly: Many of the challenges we face with machine learning are the same as in other industries, like how we collect and maintain data sets or how we manage training and deployment workloads. However, most companies that work on prediction are doing so on strictly temporal data. In contrast, we have spatial and temporal information. Unlike the autonomous vehicle companies that also deal with spatial-temporal data, we don't control all of the sources of video. This presents unique challenges in data collection but also allows us to use predictive models that allow for noise and are therefore robust.

Different kinds of data

Temporal data relates to time; spatial data refers to space. As Ganguly alluded to, combining the two is necessary in the tech behind self-driving cars, where the data helps determine what is another moving object, like another car, and what is stationary, say, a tree. Stats Perform's data scientists are looking less at a deer in the road and more at how a player moves on the field, and at what speed. The result is the ability to pinpoint the specific motions of a player depending on the context of the game and play, and to anticipate how they'd react in a similar situation.

Lucey: The example I like to talk about is our work in soccer. Soccer is a hard sport to analyze because it is low-scoring, continuous and strategic. As such, the current statistics used, such as possession percentage, number of passes and completion rate, number of corners and tackles, do not correlate with goals scored and who won the match. Our AI-based metrics (expected goals, quality of passes and playing styles) correlate much more strongly with goals than standard statistics do. These AI metrics simply measure performance better. Using these AI tools, we were able to show how, against incredible odds, underdog Leicester City won the 2015-16 English Premier League title.
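
Stats Perform's expected-goals model isn't public, but the family of models is well known: a probabilistic classifier that maps shot features to the chance of a goal. A minimal sketch (the features, data and outcomes below are invented for illustration; real models use many more features and far more data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a shot: [distance to goal (m), angle to goal (deg), header (0/1)].
shots = np.array([
    [6, 40, 0], [11, 25, 0], [18, 15, 0], [25, 10, 0],
    [8, 35, 1], [30, 8, 0], [5, 45, 0], [16, 20, 1],
])
goals = np.array([1, 1, 0, 0, 0, 0, 1, 0])  # observed outcomes

xg_model = LogisticRegression().fit(shots, goals)
# Expected goals (xG) for a 10 m, 30-degree, non-header chance:
print(xg_model.predict_proba([[10, 30, 0]])[0, 1])
```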

Ganguly: There are two significant ways that AI is and will continue to revolutionize sports. Firstly, AI is creating more complex and granular data at an unprecedented scale. For example, with our AutoSTATS technology, we can capture the motions of players in college basketball, where this data was never before available. The other way AI is revolutionizing sport is by allowing people to draw insights from our increasingly in-depth data. Using player tracking data, we can predict the motion of players. This allows us to see how a player will behave on their team after a trade, thereby allowing for better player recruitment.

Isolating a team's formation

Tools like Stats Perform's unsupervised clustering method can quickly find a team's formation right down to the frame. When humans attempt to do this, their results fall just a few yards short.
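
The clustering method itself isn't detailed in the article; as a much cruder stand-in for it, one can cluster players' average positions to recover the lines of a formation. In the sketch below, the positions are invented, and k-means over the pitch's length axis recovers a 4-3-3:

```python
import numpy as np
from sklearn.cluster import KMeans

# Average (x, y) pitch positions of the 10 outfield players over a window.
positions = np.array([
    [20, 10], [20, 30], [20, 50], [20, 70],   # back four
    [45, 15], [45, 40], [45, 65],             # midfield three
    [70, 20], [70, 40], [70, 60],             # front three
], dtype=float)

# Cluster x-coordinates into the formation's lines (defense/midfield/attack).
lines = KMeans(n_clusters=3, n_init=10, random_state=0).fit(positions[:, :1])
counts = np.bincount(lines.labels_)
order = np.argsort(lines.cluster_centers_.ravel())  # defense first
print("-".join(str(counts[i]) for i in order))  # 4-3-3
```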

Lucey: Even though we have the most sports data on the planet, to tell the best stories and provide the best analysis and products for our customers, we need even more granular data. That's why I am so excited about our AutoStats work.

The reason AI has so much hype around it is that it is the ultimate decision analysis tool: every decision and action can be objectively analyzed. AI can not only capture data, using computer vision and other sensors, that couldn't be captured before, but it can also help us transform that data into a form that can be used to make decisions. Given how popular sports are around the world and the influence they have on other sectors, there's potential for other industries to directly use the data and technology that we have generated to make future decisions.

Link:

MJ or LeBron Who's the G.O.A.T.? Machine Learning and AI Might Give Us an Answer - Built In Chicago

Tesla AI Day Starts Today. Here’s What to Watch. – Barron’s


Former defense secretary Donald Rumsfeld said there are known knowns (things people know), known unknowns (things people know they don't know) and unknown unknowns (things people don't realize they don't know). That pretty much sums up autonomous driving technology these days.

It isn't clear how long it will take the auto industry to deliver truly self-driving cars. Thursday evening, however, investors will get an education about what's state of the art when Tesla (ticker: TSLA) hosts its artificial intelligence day.

The event will likely be livestreamed on the company's website beginning around 8 p.m. Eastern Standard Time. The company's YouTube channel will likely be one place to watch the event. Other websites will carry the broadcast as well. The company didn't respond to a request for comment about the agenda for the event, but has said it will be available to watch.

Much of what will get talked about won't be a surprise, even if investors don't understand it all. Those are known unknowns.

Tesla should update investors about its driver-assistance feature, dubbed full self driving. What's more, the company will describe the benefit of vertical integration: Tesla makes the hardware (its own computers with its own microchips) and its software. Tesla might even give a more definitive timeline for when Level 4 autonomous vehicles will be ready.

Roth Capital analyst Craig Irwin doesn't believe Level 4 technology is on the horizon, though. He tells Barron's the computing power and camera resolution just isn't there yet. Tesla will work hard to suggest tech leadership in AI for automotive, says Irwin. Reality will probably be much less exciting than their claims.

Irwin rates Tesla shares Hold. His price target is just $150 a share.

The car industry essentially defines five levels of autonomous driving. Level 1 is nothing more than cruise control. Level 2 systems are available on cars today and combine features such as adaptive cruise and lane-keeping assistance, enabling the car to do a lot on its own. Drivers, however, still need to pay attention 100% of the time with Level 2 systems.

Level 3 systems would allow drivers to stop paying attention part of the time. Level 4 would let them stop paying attention most of the time. And Level 5 means the car does everything always. Level 5 autonomy isn't an easy endeavor, says Global X analyst Pedro Palandrani. There are so many unique cases for technology to tackle, like bad weather or dirt roads. But Level 4 is enough to change the world, he added. He is more optimistic than Irwin about the timing for Level 4 systems and hopes Tesla provides more timing detail at its event.

Beyond a technology rundown and Level 4 timing, the company might have some surprises up its sleeve for investors. Palandrani has two ideas.

For starters, Tesla might indicate it's willing to sell its hardware and software to other car companies. That would give Tesla other, unexpected sources of income. Tesla already offers its full self driving feature as a monthly subscription to owners of its cars. That's new for the car industry and opens up a source of recurring revenue for anyone with the requisite technology. Selling hardware and software to other car companies, however, would be new, and surprising, for investors.

Tesla might also talk about its advancements in robotics. CEO Elon Musk has often talked in the past about the difficulty of making the machine that makes the machine. Some of Tesla's AI efforts might also be targeted at building, and not just driving, vehicles. We're just making a crazy amount of machinery internally, said Musk on the company's second-quarter conference call. This is...not well understood.

Those are two items that can surprise. Whether they, or other tidbits, will move the stock is something else entirely.

Tesla stock dropped about 7% over Monday and Tuesday, partly because NHTSA disclosed it was looking into accidents involving Tesla's driver-assistance features. Tesla will surely stress the safety benefits of driver-assistance features on Thursday; whether it can shake off that bit of bad news, though, is harder to tell.

Thursday becomes a much more important event in light of this week's [NHTSA] probe, says Wedbush analyst Dan Ives. This week has been another tough week for Tesla [stock] and the Street needs some good news heading into this AI event.

Ives rates Tesla shares Buy and has a $1,000 price target for the stock. Tesla's autonomous driving leadership is part of his bullish take on shares.

If history is any guide, investors should expect volatility. Tesla stock dropped 10% the day following its battery technology event in September 2020. It took shares about seven trading days to recover, and Tesla stock gained about 86% from the battery event to year-end.

Tesla stock is down about 6% year to date, trailing the comparable 18% and 15% gains of the S&P 500 and Dow Jones Industrial Average, respectively. Tesla stock hasn't moved much, in absolute terms, since March. Shares were in the high $600s back then. They closed down 3% at $665.71 on Tuesday, but are up 1.3% at $674.19 in premarket trading Wednesday.

Write to allen.root@dowjones.com

The rest is here:

Tesla AI Day Starts Today. Here's What to Watch. - Barron's

AI Technology Can Enhance Human-Centered Work Instead Of Threaten It – Forbes

[Photo caption: AI company Samasource is advocating a human-in-the-loop model that requires human involvement even with its advanced technology.]

Whenever new technologies change the way we work, some worry that their jobs are in jeopardy. And sometimes, new technologies do make certain jobs obsolete. Artificial intelligence is an emerging force in the business world that has the potential to either replace humans in certain industries or empower humans with better tools, depending on how the technology is utilized. And in the high-unemployment COVID era, when the terms outsourcing and AI stoke fears for workforce stability and opportunity, there needs to be a more human-centered approach to workforce training and development.

Samasource, a training data and validation company based in San Francisco, believes AI can enhance how we work and is advocating for human-in-the-loop, a work model that requires human involvement even with advanced technology. It's really this combination of human and artificial intelligence, or human-in-the-loop and machine learning, that will allow us to bring best-in-class technology to solve the world's most pressing challenges, says Heather Gadonniex, the VP of Marketing & Strategic Partnerships at Samasource.

I spoke with Gadonniex as part of my research on purpose-driven business, and to learn about how Samasource is using its advanced technology to enhance human-centered work and provide job opportunities to marginalized communities. Samasource is a Certified B Corporation, a company that has met certain social and environmental standards as verified by the nonprofit B Lab.

Christopher Marquis: Some people worry that artificial intelligence will ultimately replace humans in the workforce. How is Samasource ensuring that AI is used to enhance human-centered work and not fully replace it?

Heather Gadonniex, VP of Marketing & Strategic Partnerships at Samasource.

Heather Gadonniex: Our philosophy is to use artificial intelligence to power and empower the human workforce. We firmly believe in up-skilling. We firmly believe in cross-skilling. And we firmly believe that artificial intelligence, or machine learning, is not necessarily going to negate the need for human involvement but will simply remove more mundane tasks from day-to-day workloads so that humans can focus on tasks that require higher cognition and focus on higher value areas of work.

It's really this combination of human and artificial intelligence, or human-in-the-loop and machine learning, that will allow us to bring best-in-class technology to solve the world's most pressing challenges. And in my opinion, we have enough challenges that need to be solved that we shouldn't shy away from using the power of technology to do that.

We're really embracing, advocating for, and being a voice in the market for this combination of machine intelligence and human intelligence, and at the same time ensuring that there is up-skilling and that there is cross-skilling. Because without that, you don't really have the ability to create environments or workplaces where people are learning the new skills they need to thrive in the next digital economy, or even in today's digital economy. That's really how we perceive this conflict between human and machine, if you will: less of a conflict and more of a collaboration.

Marquis: Can you share examples of up-skilling? What does that look like in practice?

Gadonniex: We have a very robust education program internally, called SamaU. We use a combination of technology platforms plus face-to-face training, and we provide basic digital skills to get people familiarized with working in the digital economy. The majority of the workforce we employ have never had a formal job before; they come from areas that are typically not exposed to the digital economy. For most of them, this is their first full-time job that pays living wages. And that, in and of itself, has the power to transform economies, not just lives but full economies.

So, we provide our workers with basic digital skills, and then from there we actually provide ongoing training around topics that focus on machine learning, general artificial intelligence, and soft skills as well, like: How do you manage your finances? How do you become a better boss? How do you work in a professional environment? Many of them have never had the opportunity to learn how to actually manage a paycheck. Oftentimes, they've never even had a steady paycheck. So that soft skill training is equally important to the digital up-skilling and the technology skill training.

We also just completed a randomized controlled trial with MIT to validate the efficacy of this model. People who received training and employment at Samasource reported 10% lower unemployment rates and wages that are, on average, more than 25% higher than the control group's. Our up-skilling programs, our full-time employment programs, and paying living wages had a particularly significant effect on women, who received earnings 30% higher than the control group. If you look at economic development as a whole, that is so powerful.

Marquis: How has your business adapted during COVID? And do you have any recommendations for other businesses looking to adapt to these challenges?

Gadonniex: We launched multiple initiatives through 2020 to ensure our employees were able to safely maintain their jobs and continue to deliver the secure, high-quality training data that we're known for in the midst of this pandemic. Business continuity was extremely high on the priority list of our customers, especially during the start of COVID. And I think that makes total sense given the situation.

To ensure continuity, we created a program called Samahome, a partnership between Samasource and local hotels in Kenya and Uganda. We enabled our employees to work in safe environments during the height of the pandemic in these live-work arrangements.

Our team members could opt in to live and work at Samahome. Before they entered the Samahome, everyone was checked to make sure they were healthy. We conducted ongoing health checks, practiced social distancing, and ensured that there were sanitization measures in place.

We recently launched our work-from-home initiative, which actually provides our employees with the necessary resources to effectively work from home. We also have a few teams that are starting to work from our secured delivery centers again, and we're continuing to transition back for certain teams while maintaining social distancing protocols and health checks. The pandemic shifted our business model, so we are exploring maintaining a work-from-home model because this allows us to expand the amount of opportunity that we can provide to the communities in which we operate.

Visit link:

AI Technology Can Enhance Human-Centered Work Instead Of Threaten It - Forbes

Microsoft India projects itself as open source champion, says AI is the next step – YourStory.com

The company announced that it is building the world's largest artificial intelligence platform

At *.ai Intelligent Cloud, a conference hosted by Microsoft India in Bengaluru on Monday, the company said that Big Data, Cloud, and intelligence have changed the way machines compute services for human beings. It also announced that it was building the world's largest artificial intelligence (AI) platform.

"Cognitive services will change IT applications in the future. Services can become intelligent with machine learning and we are providing APIs to startups for computer applications like vision, speech, language, knowledge and search," says Sandeep Alur, Microsoft Lead Evangelist.

The company also positioned itself as a champion of open source technologies, with over 16,000 contributions made on GitHub, the open source internet hosting service. Large corporates like Google are in the game too: Google joined the .Net Foundation technical steering group, which was launched by Microsoft.

Microsoft, in return, joined the Linux Foundation. While this global exchange of knowledge is well-founded, today everyone wants to know what AI is all about.

Analysts at Gartner tell YourStory that today most neural networks are still in the realm of machine learning, which means outcomes can be predictive. Computer science experts will say that AI comes with reasoning power, which means machines can fool you into believing they are human.

That said, several software engineers are going after building computers that can create predictive outcomes for services. Say a machine can tell when a part could fail and informs the customer about going to the dealership to replace the part. Or say you call a BPO and a chatbot using voice technologies guides you through a query better than any human could. Says Alur of Microsoft,

"Today, building a bot means maintaining several source codes. But we have created a framework for bots with one source code."

Research firm Markets&Markets says the AI market could be $16 billion by 2022.

Bot frameworks can change the way BPOs function and change their outcomes.

However, the point is that while all these frameworks make bots easy to build across many application channels, saying that we are already in the realm of AI is a weak argument. Still, the push that Microsoft offers, toward AI via machine learning, is something to watch out for.

Let's look at the numbers. Across the big system integrators, from Accenture to Infosys to TCS to Wipro, revenues from platforms that offer AI or machine learning are less than 10 percent of the total.

"Data is critical and we can build models that can become critical for predictive planning," says Anish Basu Roy, CEO of Shotang.

Several startups that YourStory spoke to believed that what is today called AI is really machine learning. "Everything that is f of x is considered AI, when all we are building is an intelligent machine," said the founders of a couple of startups who did not want to be named.

The Indian startup ecosystem was strongly represented in the room: over 200 people were present, most of them working in data analytics, cognitive sciences, and predictive maintenance. Somehow, the feeling was that they were there to witness Satya Nadella's talk with Nandan Nilekani.

Read more:

Microsoft India projects itself as open source champion, says AI is the next step - YourStory.com

Want a Diagnosis Tomorrow, Not Next Year? Turn to AI | WIRED – WIRED

Inside a red-bricked building on the north side of Washington, DC, internist Shantanu Nundy rushes from one examining room to the next, trying to see all 30 patients on his schedule. Most days, five of them will need to follow up with some kind of specialist. And odds are, they never will. Year-long waits, hundred-mile drives, and huge out-of-pocket costs mean 90 percent of America's most needy citizens can't follow through on a specialist referral from their primary care doc.

But Nundy's patients are different. They have access to something most people don't: a digital braintrust of more than 6,000 doctors, with expert insights neatly collected, curated, and delivered back to Nundy through an artificial intelligence platform. The online system, known as the Human Diagnosis Project, allows primary care doctors to plug into a collective medical superintelligence, helping them order tests or prescribe medications they'd otherwise have to outsource. Which means most of the time, Nundy's patients wait days, not months, to get answers and get on with their lives.

In the not-too-distant future, that could be the standard of care for all 30 million people currently uninsured or on Medicaid. On Thursday, Human Dx announced a partnership with seven of the country's top medical institutions to scale up the project, aiming to recruit 100,000 specialists, and their expert assessments, in the next five years. Their goal: close the specialty care gap for three million Americans by 2022.

In January, a single mom in her thirties came to see Nundy about pain and joint stiffness in her hands. It had gotten so bad that she had to stop working as a housekeeper, and she was growing desperate. When Nundy pulled up her chart, he realized she had seen another doctor at his clinic a few months prior, who referred her to a specialist. But once the patient realized she'd have to pay a few hundred dollars out of pocket for the visit, she didn't go. Instead, she tried to get on a wait list at the public hospital, where she couldn't navigate the paperwork; English wasn't her first language.

Now, back where she started, Nundy examined the patient's hands, which were angrily inflamed. He thought it was probably rheumatoid arthritis, but because the standard treatment can be pretty toxic, he was hesitant to prescribe drugs on his own. So he opened up the Human Dx portal and created a new case description: 35F with pain and joint stiffness in L/R hands x 6 months, suspected RA. Then he uploaded a picture of her hands and sent out the query.

Within a few hours a few rheumatologists had weighed in, and by the next day they'd confirmed his diagnosis. They'd even suggested a few follow-up tests just to be sure, and advice about a course of treatment. I wouldn't have had the expertise or confidence to be able to do that on my own, he says.

Nundy joined Human Dx in 2015, after founder Jayanth Komarneni recruited him to pilot the platform's core technologies. But the goal was always to go big. Komarneni likens the network to Wikipedia and Linux, but instead of contributors donating encyclopedia entries or code, they donate medical expertise. When a primary care doc gets a perplexing patient, they describe the patient's background, medical history, and presenting symptoms, maybe adding an image of an X-ray, a photo of a rash, or an audio recording of lung sounds. Human Dx's natural language processing algorithms will mine each case entry for keywords to funnel it to specialists who can create a list of likely diagnoses and recommend treatment.
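
Human Dx's actual routing pipeline is proprietary; as a rough illustration of keyword-based case routing only (the specialty profiles and vocabulary below are invented), TF-IDF similarity is one simple way to funnel a case description toward the right specialists:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical specialty keyword profiles; the real taxonomy is not public.
specialties = {
    "rheumatology": "joint pain stiffness swelling arthritis autoimmune",
    "cardiology": "chest pain palpitations shortness of breath heart",
    "dermatology": "rash itching skin lesion mole eczema",
}

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(specialties.values())

def route_case(description: str) -> str:
    """Return the specialty whose keyword profile best matches the case."""
    sims = cosine_similarity(vectorizer.transform([description]), matrix)
    return list(specialties)[sims.argmax()]

print(route_case("35F with pain and joint stiffness in both hands x 6 months"))
# -> 'rheumatology'
```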

Now, getting back 10 or 20 different doctors' takes on a single patient is about as useful as having 20 friends respond individually via email to a potluck invitation. So Human Dx's machine learning algorithms comb through all the responses to check them against all the project's previously stored case reports. The network uses them to validate each specialist's finding, weight each one according to confidence level, and combine it with the others into a single suggested diagnosis. And with every solved case, Human Dx gets a little bit smarter. With other online tools, if you help one patient, you help one patient, says Komarneni. What's different here is that the insights gained for one patient can help so many others. Instead of using AI to replace jobs or make things cheaper, we're using it to provide capacity where none exists.
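
The combination step described above (validate each finding, weight it by confidence, merge into one suggestion) can be pictured as a weighted vote. This is only a stand-in for Human Dx's proprietary aggregation; the weights and responses below are invented:

```python
from collections import defaultdict

def combine_findings(responses):
    """Merge many specialists' diagnoses into one ranked suggestion.

    responses: list of (diagnosis, confidence, doctor_weight) tuples, where
    doctor_weight might reflect a physician's track record on past cases.
    A simple weighted vote stands in for the platform's actual method.
    """
    scores = defaultdict(float)
    for diagnosis, confidence, weight in responses:
        scores[diagnosis] += confidence * weight
    total = sum(scores.values())
    ranked = sorted(scores.items(), key=lambda kv: -kv[1])
    return [(dx, s / total) for dx, s in ranked]

responses = [
    ("rheumatoid arthritis", 0.9, 1.2),
    ("rheumatoid arthritis", 0.7, 1.0),
    ("psoriatic arthritis", 0.4, 0.8),
]
print(combine_findings(responses)[0])  # top suggestion with normalized score
```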

Komarneni estimates that those electronic consults can handle 35 to 40 percent of specialist visits, leaving more time for people who really need to get into the office. That's based on other models implemented around the country at places like San Francisco General Hospital, UCLA Health System, and Brigham and Women's Hospital. SFGH's eReferral system cut the average waiting time for an initial consult from 112 days to 49 within its first year.

That system, which is now the default for every SFGH specialty, relies on dedicated reviewers who get paid to respond to cases in a timely way. But Human Dx doesn't have those financial incentives; its service is free. Today, though, by partnering with the American Board of Medical Specialties, Human Dx can offer continuing education and improvement credits to satisfy at least some of the 200 hours doctors are required to complete every four years. And the American Medical Association, the nation's largest physician group, has committed to getting its members to volunteer, as well as supporting program integrity by verifying physicians on the platform.


It's a big deal to have the AMA on board. Physicians have historically been wary of attempts to supplant or complement their jobs with AI-enabled tools. But it's important not to mistake the organization's participation in the alliance for a formal pro-artificial intelligence stance. The AMA doesn't yet have an official AI policy, and it doesn't endorse any specific companies, products, or technologies, including Human Dx's proprietary algorithms. The medical AI field is still young, with plenty of potential for unintended consequences.

Like discrepancies in quality of care. Alice Chen, the chief medical officer for the San Francisco Health Network and co-director of SFGH's Center for Innovation in Access and Quality, worries that something like Human Dx might create a two-tiered medical system, where some people get to actually see specialists and some people just get a computerized composite of specialist opinions. This is the edge of medicine right now, says Chen. You just have to find the sweet spot where you can leverage expertise and experience beyond traditional channels and at the same time ensure quality care.

Researchers at Johns Hopkins, Harvard, and UCSF have been assessing the platform for accuracy, and recently submitted results for peer review. The next big hurdle is money. The project is currently one of eight organizations in contention for a $100 million John D. and Catherine T. MacArthur Foundation grant. If Human Dx wins, it will spend the money to roll out nationwide. The alliance isn't contingent on the $100 million award, but it would certainly be a nice way to kickstart the process, especially with specialty visits accounting for more than half of all trips to the doctor's office.

So it's possible that the next time you go in for something that stumps your regular physician, instead of seeing a specialist across town, you'll see five or 10 from around the country. All it takes is a few minutes over lunch or in an elevator to put on a Sherlock Holmes hat, hop into the cloud, and sleuth through your case.

Continued here:

Want a Diagnosis Tomorrow, Not Next Year? Turn to AI | WIRED - WIRED