Daily Archives: March 18, 2021

HPE Steps Up AI March With Standalone Version Of Ezmeral – CRN

Posted: March 18, 2021 at 12:39 am

Hewlett Packard Enterprise Wednesday dramatically expanded its artificial intelligence-machine learning (AI/ML) market reach with a standalone release of its Ezmeral Data Fabric.

The new standalone Ezmeral edge-to-cloud data fabric brings the fast-growing, cloud-native AI/ML platform to a new multibillion-dollar market, where the data fabric offering can be used on its own in multiple enterprise big data buildouts.

HPE made the decision to establish a separate standalone version of the data fabric in direct response to customers, said HPE Chief Technology Officer and Head of Software Kumar Sreekanti. "It's a huge market opportunity," he said. "Customers have asked for this because it is a very proven platform with phenomenal scale. Many customers want to first deploy the data platform and later on bring in the Ezmeral container platform."

The new Ezmeral standalone offering came as part of an HPE Ezmeral day webcast blitz that included the launch of a new Ezmeral Technology Ecosystem program for ISVs (Independent Software Vendors) and an Ezmeral Marketplace that includes ISV and open source projects for enterprise customers anxious to modernize applications and move to cloud native workloads.

The standalone Ezmeral fabric offering appeals to enterprise customers looking to build out an edge-to-core-to-cloud scale-out file system for storing unstructured data, independent of the analytics capabilities that come with the full Ezmeral platform, said HPE Ezmeral General Manager Anant Chintamaneni.

"That is a new addressable market for customers that want a more effective price point to store large unstructured files," said Chintamaneni. "We have had customers who want this standalone offering in the oil and gas industry and the healthcare industry. They are supporting digital transformation and want a massive scale-out, edge-to-core-to-cloud file system. They are primarily interested in storing data and will bring analytics in later on."

There are also traditional data analytics buyers who have not yet deployed container platforms and are looking for a proven POSIX (Portable Operating System Interface) file system for their AI/ML workloads, said Chintamaneni. "They want to get their data strategy right first," he said. "They want to modernize their data, collect the data from different places and put it in one place, and then bring the AI/ML and analytics tools later."

Ezmeral's ability to provide a unified data repository that can be accessed via multiple protocols like POSIX, NFS (Network File System) or HDFS (Hadoop Distributed File System) is "absolutely unique," said Chintamaneni.

The partner ecosystem program is another sign of Ezmeral momentum, said Sreekanti, citing the launch of a new HPE Ezmeral app store that includes big data ISVs like Dataiku, MinIO, H2O.ai, Rapt.AI, Sysdig and Unravel. "This provides customers access to pre-tested, pre-packaged applications," he said.

Key to Ezmeral's market momentum is the ability for partners to quickly and easily roll out containers in a hybrid and multicloud world, said Sreekanti. "The benefit for our channel partners and resellers is that it is easy; you don't have to pull all these pieces together," he said. "Because of the unique, comprehensive, combined nature of this, it is easy for our partners to deploy."

Chintamaneni, for his part, said the flexibility Ezmeral provides for deployments at the edge, the core network and the public cloud, with the ability to seamlessly move data in a unified software-defined environment, provides big benefits to HPE customers and partners. "It's a very unique value proposition we are bringing to the market in a very simple fashion," he said.

New customer acquisition for Ezmeral is accelerating significantly, said Chintamaneni. "New logo acquisition has really seen an uptick," he said.

As part of Ezmeral day, HPE announced a number of new customers, including ORock, a hybrid cloud service that is debuting a new suite of offerings powered by HPE Ezmeral and HPE's GreenLake pay-per-use platform. In another big win, HPE said DRAM manufacturer Nanya Technology has chosen Ezmeral to improve production by accelerating the rollout of AI projects in its manufacturing facilities.

Erik Krucker, CTO at Comport Consulting, an HPE Platinum partner with a growing AI practice within its ComportSecure managed services business, said he sees the standalone data fabric as another sign of HPE's transformation into an edge-to-cloud platform-as-a-service software powerhouse.

"Customers want to have the ability that HPE is providing with Ezmeral to move workloads around," said Krucker. "It's a great strategy for containerized applications because you only have to engineer them once and then you can move them wherever you need them. HPE is definitely going in the right direction. They are talking about software solutions, platforms and data fabric rather than hardware. It's all about solutions that customers are trying to implement. This makes HPE much more attractive to enterprise customers."

Krucker credited HPE CEO Antonio Neri with transforming the company into an edge-to-cloud software platform innovator that is a fierce competitor in the intensely competitive big data storage market. "You've got to hand it to Antonio; he's thinking differently and bringing in the right people like Kumar Sreekanti and (GreenLake Cloud Services General Manager) Keith White," said Krucker.

Comport, for its part, is working with a growing number of customers on AI/ML solutions. "AI/ML used to be a luxury or a nice-to-have for an organization, but now it is a must-have, and those organizations are pivoting fast," said Krucker. "We see AI/ML going downstream and becoming pervasive, even in the SMB market, at some point. Customers have to gain a competitive edge in the marketplace, and they are going to use AI/ML to deliver services faster and more cost effectively to customers."

Comport is building AI/ML solutions for a number of customers and then running them under the ComportSecure managed services banner. "We are designing, implementing and managing these solutions from end to end for customers," said Krucker. "That business is probably going to double this year for us."

One Comport customer is looking at leveraging AI to dramatically reduce the cost of bringing new drugs to market. "If they can use AI to run their algorithms faster, they can save billions of dollars," said Krucker. "They want to take their existing models and crunch numbers faster and faster and faster. AI can speed up their R&D and cut down the amount of time it takes to bring new drugs to market by 50 percent."

The rest is here:

HPE Steps Up AI March With Standalone Version Of Ezmeral - CRN

A Top Computer Science Professor's Insights And Predictions For Conversational AI – Forbes

Posted: at 12:39 am

Breaking Bots by Clinc's Founder and CEO Jason Mars is released with ForbesBooks.

This release is posted on behalf of ForbesBooks (operated by Advantage Media Group under license).

NEW YORK (March 16, 2021) -- Breaking Bots: Inventing a New Voice in the AI Revolution by Clinc's Founder and CEO Dr. Jason Mars is available now. The book is published with ForbesBooks, the exclusive business book publishing imprint of Forbes.

In setting the stage for his new book, Jason Mars considers how technology has shaped the arc of human history, time and again. From the Bronze Age to the Industrial Revolution to our current Technological Age, a once gradual pace of progress has given way to an era of rapid, exponential growth. The next revolution humanity must prepare for, in Mars' view, is artificial intelligence. In Breaking Bots: Inventing a New Voice in the AI Revolution, Jason Mars explores the surprising progress AI has made in recent years and what our shared future holds. At the same time, Mars chronicles the unique journey and key insights of creating a company dedicated to advancing AI's potential.

"The frontier for conversational AI is endless and thrilling," Mars explained. "Being able to speak freely, as you would to a human in the room, is the holy grail."

While virtual home assistants like Alexa or Siri are now commonplace, these technologies are limited by their market and narrow internal intuitions. That said, Breaking Bots still positions conversational AI as humanity's next fire, light bulb, or internet. It is in bridging those intuitional gaps that the work of computer scientist Jason Mars and Clinc, the company he founded, seeks to make an impact. Breaking Bots offers insights into the paradigm-shifting technical and cultural DNA that makes Jason's work, and Clinc's technology, a bold vision for the future of AI.

Ryan Tweedie, the CIO Portfolio Director and Global Managing Director of Accenture, believes that Mars is "a true vanguard in the lineage and development of AI, especially for what counts: the human element."

Breaking Bots: Inventing a New Voice in the AI Revolution is available on Amazon today.

About Jason Mars, Ph.D.

Jason Mars has built some of the world's most sophisticated scalable systems for AI, computer vision, and natural language processing. He is a professor of computer science at the University of Michigan, where he directs Clarity Lab, one of the world's top AI and computing training labs.

In his tenure as CEO of Clinc, he was named Bank Innovation's #2 Most Innovative CEO in Banking 2017 and #4 in Top 11 Technologists in Voice AI 2019. His work has been recognized by Crain's Detroit Business's 2019 40 Under 40 for career accomplishments, impact in his field and contributions to his community. Prior to the University of Michigan, Jason was a professor at UCSD and worked at Google and Intel. Jason holds a Ph.D. in computer science from UVA.

About ForbesBooks

Launched in 2016 in partnership with Advantage Media Group, ForbesBooks is the exclusive business book publishing imprint of Forbes. ForbesBooks offers business and thought leaders an innovative, speed-to-market, fee-based publishing model and a suite of services designed to strategically and tactically support authors and promote their expertise. For more information, visit forbesbooks.com.

Media Contacts

Michael Szudarek, Marx Layne, mszudarek@marxlayne.com

Carson Kendrick, ForbesBooks, ckendrick@forbesbooks.com

See the rest here:

A Top Computer Science Professor's Insights And Predictions For Conversational AI - Forbes

Why the Future of Healthcare is Federated AI – insideBIGDATA

Posted: at 12:39 am

In this special guest feature, Akshay Sharma, Executive Vice President of Artificial Intelligence (AI) at Sharecare, highlights the advancements and impact of federated AI and edge computing for the healthcare sector, as it ensures data privacy and expands the breadth of individual, organizational, and clinical knowledge. Sharma joined Sharecare in 2021 as part of its acquisition of doc.ai, the Silicon Valley-based company that accelerated digital transformation in healthcare. At doc.ai, Sharma previously held various leadership positions including CTO and vice president of engineering, a role in which he developed several key technologies that power mobile-based privacy products in healthcare. In addition to his role at Sharecare, Sharma serves as CTO of TEDxSanFrancisco and is involved in initiatives to decentralize clinical trials. Sharma holds bachelor's degrees in engineering and engineering in information science from Visvesvaraya Technological University.

Healthcare data is an incredibly valuable asset, and ensuring that it's kept private and secure should be a top priority for everyone. But as the pandemic led to more patient exams and visits being conducted within telehealth environments, it's become even easier to lose control of that data.

This doesn't have to be the case. There are better options for ensuring a user's health data remains private. The future of health information lies on the edge (on mobile devices).

Right now, federated learning (or federated AI) guarantees that the user's data stays on the device, while the applications running a specific program still learn how to process the data and build a better, more efficient model. HIPAA laws protect patient medical data, but federated learning takes that a step further by not sharing the data with outside parties.

Leveraging federated learning is where healthcare can evolve with technology.

Traditional machine learning requires centralizing data to train and build a model. Federated learning, combined with other privacy-preserving techniques, can build models over distributed data without leaking sensitive information. This allows health professionals to be more inclusive and find more diversity in the data by going to where the data is: with the users. A minimal sketch of the idea follows.
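
To make the distributed setup concrete, here is a minimal sketch of federated averaging (FedAvg) on a toy linear model with synthetic per-device data. Everything here (the model, the data, the hyperparameters) is invented for illustration; it is not Sharecare's implementation.

```python
# Federated averaging sketch: devices train locally, only weights travel.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train on-device: plain gradient descent on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, devices):
    """One round: each device trains locally; raw data never leaves it."""
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    # The server averages weights, weighted by local dataset size.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(5):  # five simulated phones; their data is never pooled
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, devices)
print(w)  # approaches [2, -1] even though no raw data was centralized
```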

How the Right Data Makes a World of Difference

Right now, nearly everyone is carrying a smartphone that can collect health-based signals. With federated learning, we'll be able to meet those users where they are. Those health-based signals could include photos with medical information, accelerometer data that captures motion, GPS location information that can reveal signals of health, integrations with health devices that collect biometric data, integration with medical records like Apple Health, and more.

AI-based predictive models can combine the data collected on the smartphone for both prospective and retrospective medical research and provide better health indicators in real-time.

Technology in our phones has been providing us with information about air quality for some time, but with federated learning I expect apps to start engaging with users and patients during specific events on a more personal basis. For instance, if a user with asthma is too close to a region experiencing a forest fire, or if someone with seasonal allergies is around an area where the pollen count is high, I fully expect the app to engage with that user and provide tips to mitigate the situation.

The Importance of Being Privacy First

These insights can't be provided without a service gleaning that pivotal information from the user. With privacy-preserving techniques (such as differential privacy), this data is only stored locally, on the edge, without being sent to the cloud or leaked to a third party. A minimal sketch of the local approach appears below.
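
As an illustration of the local, on-device flavor of differential privacy, here is a minimal sketch of the Laplace mechanism applied to a single bounded value before it ever leaves the device. The step-count example, bounds, and epsilon are assumptions for the sketch, not a production design.

```python
# Local differential privacy sketch: noise is added on-device,
# so the true value is never shared with any server.
import numpy as np

def laplace_release(value, lower, upper, epsilon):
    """Release one bounded value with epsilon local differential privacy."""
    clipped = min(max(value, lower), upper)
    sensitivity = upper - lower          # max influence of one user's value
    noise = np.random.laplace(scale=sensitivity / epsilon)
    return clipped + noise

true_steps = 8432                        # hypothetical daily step count
print(laplace_release(true_steps, 0, 20000, epsilon=1.0))
```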

We keep stressing the importance of privacy, but its significance can't be overstated. Users should own their data and have transparency around where data is sent and shared. Every type of data needs to be authorized for collection, and there must be transparency on how the data will be used.

There's more to privacy than a mission statement: when health services are built privacy-first, you can bring more participants into the data training loop, which allows teams to find a more diverse pool of users who feel confident sharing access to their private data. More real-time and encompassing health systems, where models learn faster from a large group of users instead of just a few, will lead to better health outcomes.

The unfortunate truth is that healthcare has become incredibly siloed and data exchange is often difficult and expensive. For example, EMR data is not available with claims and prescription data, and then finding out whether the prescription was even collected only exists in other systems. If you then layer in data, such as genetics, what you eat, social determinants of health, and activity data, you have a multi-node problem for a single user. There is no single source of the full truth, and centralizing all this is incredibly hard.

Federated learning provides the perfect opportunity to avoid these barriers. By putting the user/patient in charge of coordinating their health data, you can provide the right opt-ins to learn from their data across these disparate systems. It's now possible to imagine organizations holding sensitive data coming together and applying federated learning to collectively build more efficient and effective models in healthcare.

Read the original:

Why the Future of Healthcare is Federated AI - insideBIGDATA

Responsible AI in health care starts at the top, but it's everyone's responsibility (VB Live) – VentureBeat

Posted: at 12:39 am

Presented by Optum

Health care's Quadruple Aim is to improve health outcomes, enhance the experiences of patients and providers, and reduce costs, and AI can help. In this VB Live event, learn more about how stakeholders can use AI responsibly, ethically, and equitably to ensure all populations benefit.

Register here for free.

Breakthroughs in the application of machine learning and other forms of artificial intelligence (AI) in health care are rapidly advancing, creating advantages in the field's clinical and administrative realms. It's on the administrative side (think workflows or back-office processes) where the technology has been more fully adopted. Using AI to simplify those processes creates efficiencies that reduce the amount of work it takes to deliver health care and improves the experiences of both patients and providers.

But it's increasingly clear that applying AI responsibly needs to be a central focus for organizations that use data and information to improve outcomes and the overall experience.

"Advanced analytics and AI have a significant impact on how important decisions are made across the health care ecosystem," says Sanji Fernando, SVP of artificial intelligence and analytics platforms at Optum. And so the company has guidelines for the responsible use of advanced analytics and AI for all of UnitedHealth Group.

"It's important for us to have a framework, not only for the data scientists and machine learning engineers, but for everyone in our organization (operations, clinicians, product managers, marketing) to better understand expectations and how we want to drive breakthroughs to better support our customers, patients, and the wider health care system," he says. "We view the promise of AI and its responsible use as part of our shared responsibility to use these breakthroughs appropriately for patients, providers, and our customers."

The guidelines focus on making sure everyone is considering how to appropriately use advanced analytics and AI, how these models are trained, and how they are monitored and evaluated over time, he adds.

Machine learning models, by definition, learn from the available data that's being created throughout the health care system. Inequities in the system may be reflected in the data and predictions that machine learning models return. "It's important for everyone to be aware that health inequity may exist and that models may reflect that," he explains.

"By consistently evaluating how models may classify or infer, and looking at how that affects folks of different races, ethnicities, and ages, we can be more aware of where some models may require consistent examination to best ensure they are working the way we'd like them to," he says. "The reality is that there's no magic bullet to fix an ML model automatically, but it's important for us to understand and consistently learn where these models may impact different groups."

Transparency is a key factor in delivering responsible AI. That includes being very clear about how you're training your models, the appropriate use of data used to train an algorithm, and data privacy. When possible, it also means understanding how specific features are being identified or leveraged within the model. Basics like an age or a date are straightforward features, but the challenge arises with paragraphs of natural language and unstructured text. Each word, phrase or paragraph can be considered a feature, creating an enormous number of combinations to consider.

"But understanding feature importance (the features that matter most to the model) is important to provide better insight into how the model may actually be working," he explains. "It's not true mathematical interpretability, but it gives us a better awareness." One common way to estimate feature importance is sketched below.
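
One widely used estimate of feature importance is permutation importance: shuffle one feature at a time and measure how much the model's score drops. The sketch below uses scikit-learn on a public dataset; the article does not say which tooling Optum uses, so treat this as an illustration of the concept only.

```python
# Permutation feature importance sketch with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in test accuracy:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```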

Another important factor is being able to reproduce the performance and results of a model. Results will necessarily change when you train or retrain an algorithm, so you want to be able to trace that history by reproducing results over time. This ensures the consistency and appropriateness of the model remains constant (and allows for potential adjustments should they be needed).

There's no shortage of tools and capabilities available across the field of responsible AI, because there are so many people who are passionate about making sure we all use AI responsibly. For example, Optum uses an open-source bias audit tool from the University of Chicago. "But there are any number of approaches and great thinking from a tooling perspective," Fernando says, "so it's really becoming an industry best practice to implement a policy of responsible AI."

The other piece of the puzzle requires work and a commitment from everyone in the ecosystem: making responsible use everyone's responsibility, not just that of the machine learning engineer or data scientist.

"Our aspiration is that every employee understands these responsibilities and takes ownership of them," he says. "Whether UHG employees are using ML-driven recommendations in their day-to-day work, designing new products and services, or they're the data scientists and ML engineers who can evaluate models and understand output class distributions, we all have a shared responsibility to ensure these tools are achieving the best and most equitable results for the people we serve."

To learn more about the ways that AI is impacting the delivery and administration of health care across the ecosystem, the benefits of machine learning for cost savings and efficiency, and the importance of responsible AI for every worker, don't miss this VB Live event.

Don't miss out!

Register here for free.

See the rest here:

Responsible AI in health care starts at the top, but it's everyone's responsibility (VB Live) - VentureBeat

Torch.AI Looks to Replace ‘Store and Reduce’ with Synaptic Mesh – Datanami

Posted: at 12:39 am

Torch.AI, the profitable startup applying machine learning to analyze data in-flight via its proprietary synaptic mesh technology, announced its first funding round along with expansion plans.

The Series A round garnered $30 million, and was led by San Francisco-based WestCap Group. As its customer base expands, Torch.AI said Wednesday (March 17) it would use the funds to scale its Nexus AI platform for a customer base that includes financial services, manufacturing and U.S. government customers.

The three-year-old AI startup's software seeks to unify different data types via its synaptic mesh framework, which reduces data storage while analyzing data on the fly.

"There's just too much information, too many classes of information," said Torch.AI CEO Brian Weaver. Hence, enterprises coping with regulatory and other data governance issues are finding they can't trust all the data they store.

Working early on with companies like GE (NYSE: GE) and Microsoft (NASDAQ: MSFT) on advanced data analytics, Weaver asserted in an interview that current technology frameworks compound that complexity. The shift to AI came while working with a financial services company struggling to process huge volumes of real-time transactions.

"We figured out that we could use artificial intelligence just to understand the data payload, or the data object, differently," Weaver said.

The result was its Nexus platform, which creates an AI mesh across a user's data and systems, unifying data by increasing the surface area for analytics. That approach differs fundamentally from the "store and reduce" approach, in which information is dumped into a large repository and machine learning is then applied to make sense of it and cull usable data.

"I've got to store it somewhere first, then I've got to reduce [data] to make use of it," the CEO continued. "That approach actually compounds [data] complexity, impedes a successful outcome in a lot of ways and introduces at the same time a lot of risk."

Torch.AI's proprietary synaptic mesh approach is touted as eliminating the need to store all that data, enabling customers to analyze the growing number of data types in flight.

"We decompose a data object into the atomic components of the data," Weaver explained. "We create a very, very rich description of the data object itself that has logic built into it." The synaptic mesh is then applied to process and analyze data.

Hence, for example, a video file could be analyzed in memory, picking out shapes, words and other data components as it streams. A sketch of the general pattern follows.
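
As a loose illustration of the in-flight pattern (not Torch.AI's proprietary technology), the sketch below inspects records as they stream past and keeps only a derived description of each record rather than the raw data itself. The log lines and tokenization scheme are invented for the example.

```python
# Analyze-in-flight sketch: derive descriptions from a stream,
# never accumulating the raw records.
import re
from collections import Counter

def describe(record):
    # Decompose a record into atomic components (here: word tokens).
    return re.findall(r"[a-z0-9'-]+", record.lower())

def analyze_in_flight(source):
    # Inspect each record as it streams past; retain only the derived
    # description (token counts), not the raw records themselves.
    summary = Counter()
    for record in source:
        summary.update(describe(record))
    return summary

log_stream = iter([
    "ERROR disk full on node-3",
    "INFO job 42 done",
    "ERROR disk full on node-7",
])
print(analyze_in_flight(log_stream).most_common(3))
```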

The AI application builds in human cognition to make sense of a scene. "My brain doesn't need to store it, the scene, to determine what's in it," Weaver noted.

"That's sort of our North Star": making sense of messy data by applying AI to unify the growing number of data types while reducing the resulting complexity.

"If you think about these workloads, people are actually working for the technology, having to stitch all this stuff together and hope it works. Shouldn't the technology truly be serving the [customer] who has the problem?"

Editor's note: A longer version of this story was originally posted to sister website EnterpriseAI.com.

The rest is here:

Torch.AI Looks to Replace 'Store and Reduce' with Synaptic Mesh - Datanami

Torch.AI Raises $30M to Scale Its AI-Driven High Speed Data Processing Platform – PRNewswire

Posted: at 12:39 am

WASHINGTON, March 17, 2021 /PRNewswire/ --Torch.AI, a leading global artificial intelligence (AI) firm that uses machine learning to enable massively scaled, ultra high performance data processing, today announced it raised $30 million in Series A funding to accelerate its overall growth strategy. The funding, led by Laurence Tosi's WestCap Group, a prominent San Francisco-based investment firm, will enable the company to rapidly scale its Nexus AI platform to meet increasing demand from clients including Fortune 100 companies and U.S. federal agencies charged with protecting national security.

"Torch.AI's philosophy embraces more open and adaptable architectures, allowing us to provide lower cost, future-proof solutions offering a dramatic departure from the monolithic black boxes and complex middleware that are the norm in the machine learning and data management landscape," said Brian Weaver, Torch.AI CEO. "This new funding and our partnership with WestCap Group provides welcome resources to further our marketplace disruption and accelerate the growth of our team to keep up with demand."

Founded in 2017, Torch.AI created a next-generation AI platform that instantaneously understands and richly describes any data in atomic detail, both in memory and in motion. The Nexus software creates an intelligent Synaptic Mesh across an organization's data and systems, increasing the surface area of data for discovery and action. Data is unified to address even the most vexing challenges for how data fuels operations and critical decisions in high-risk environments.

The new Series A funding, the firm's first institutional investment, allows Torch.AI to enhance its proprietary technology, product design, and user experience, while continuing to aggressively expand in the U.S.

Companies including Microsoft, H&R Block, General Electric, the U.S. Air Force, Centers for Medicare and Medicaid Services, U.S. Department of Agriculture, and the U.S. Department of Defense, have already benefitted from the Nexus platform's ability to put data to work to improve decision making. In 2018, the firm was tapped to help transform how data can be leveraged to improve security clearance decisions and diagnostics across 95% of the federal government's employee and contractor workforce. By mid-2020, the platform spanned data and business systems across more than a dozen federal agencies, providing the capacity for billions of real time data processing computations.

WestCap's Tosi will join the Torch.AI board of directors, whose members include Weaver; William Beyer, founding member of Deloitte Consulting's federal practice; and WestCap Principal Christian Schnedler.

"Over the past 20 years, we at WestCap have founded, operated and invested in more than 15 multi-billion-dollar companies including Airbnb, Ipreo, Skillz and iCapital, as well as cyber-security unicorns such as CarbonBlack and Cylance," said Westcap Partner Kevin Marcus. "In Brian Weaver and the team at Torch.AI, we recognize the leadership, competitive advantage and innovative spirit they share with those great companies. WestCap is thrilled to be part of the growth and development of Torch.AI as it redefines the data infrastructure marketplace."

Said Beyer: "We are proud to welcome WestCap to the Torch.AI family, and Laurence Tosi to our board of directors. Torch.AI is already an outlier: a fully U.S.-owned company, it's profitable, and one of the only AI firms with federal certifications at the highest levels. Now, with the backing of one of the smartest investment firms in the country, Torch.AI will accelerate its growth and more rapidly scale to meet increasing customer demand."

Prior to the investment, Torch.AI launched a strategic employee recruitment effort, adding executives and software developers in both Washington, D.C., and its engineering center in suburban Kansas City.

To learn more about Torch.AI, visit Torch.AI.

About Torch.AI

Torch.AI's Nexus platform changes the paradigm of data and digital workflows, forever solving core impediments caused by the ever-increasing volume and complexity of information. Customers enjoy a single integrated solution which begins by instantly deconstructing and identifying any data, in real-time, at the earliest possible moment. Purpose built for massively scaled, ultra high-speed data processing, the platform comes equipped with security features, flexible data workloads, compliance capabilities, and drag and drop functionality that is unrivaled in today's technology landscape. It's an enlightened approach. Learn more at Torch.AI.

About WestCap

The WestCap Group is a growth equity firm founded by Laurence A. Tosi, who, together with the WestCap team, has founded, capitalized, and operated tech-enabled, asset-light marketplaces for over 20 years. With over $2 billion of assets under management, WestCap has made notable investments in technology businesses such as Airbnb, StubHub, iPreo, Skillz, Sonder, Addepar, Hopper, iCapital and Bolt. To learn more about WestCap, please visit WestCap.com.

SOURCE Torch.AI

More here:

Torch.AI Raises $30M to Scale Its AI-Driven High Speed Data Processing Platform - PRNewswire

The Secret Auction that Set Off the Race for AI Supremacy – WIRED

Posted: at 12:39 am

Hinton remained one of the few who believed it would one day fulfill its promise, delivering machines that could not only recognize objects but identify spoken words, understand natural language, carry on a conversation, and maybe even solve problems humans couldn't solve on their own, providing new and more incisive ways of exploring the mysteries of biology, medicine, geology, and other sciences. It was an eccentric stance even inside his own university, which spent years denying his standing request to hire another professor who could work alongside him in this long and winding struggle to build machines that learned on their own. One crazy person working on this was enough, he imagined their thinking went. But with a nine-page paper that Hinton and his students unveiled in the fall of 2012, detailing their breakthrough, they announced to the world that neural networks were indeed as powerful as Hinton had long claimed they would be.

Days after the paper was published, Hinton received an email from a fellow AI researcher named Kai Yu, who worked for Baidu, the Chinese tech giant. On the surface, Hinton and Yu had little in common. Born in postwar Britain to an upper-crust family of scientists whose influence was matched only by their eccentricity, Hinton had studied at Cambridge, earned a PhD in artificial intelligence from the University of Edinburgh, and spent most of the next four decades as a professor of computer science. Yu was 30 years younger than Hinton and grew up in Communist China, the son of an automobile engineer, and studied in Nanjing and then Munich before moving to Silicon Valley for a job in a corporate research lab. The two were separated by class, age, culture, language, and geography, but they shared a faith in neural networks. They had originally met in Canada at an academic workshop, part of a grassroots effort to revive this nearly dormant area of research across the scientific community and rebrand the idea as deep learning. Yu, a small, bespectacled, round-faced man, was among those who helped spread the gospel. When that nine-page paper emerged from the University of Toronto, Yu told the Baidu brain trust they should recruit Hinton as quickly as possible. With his email, Yu introduced Hinton to a Baidu vice president, who promptly offered $12 million to hire Hinton and his students for just a few years of work.

For a moment, it seemed like Hinton and his suitors in Beijing were on the verge of sealing an agreement. But Hinton paused. In recent months, he'd cultivated relationships inside several other companies, both small and large, including two of Baidu's big American rivals, and they, too, were calling his office in Toronto, asking what it would take to hire him and his students.

Seeing a much wider opportunity, he asked Baidu if he could solicit other offers before accepting the $12 million, and when Baidu agreed, he flipped the situation upside down. Spurred on by his students and realizing that Baidu and its rivals were much more likely to pay enormous sums of money to acquire a company than they were to shell out the same dollars for a few new hires from the world of academia, he created his tiny startup. He called it DNNresearch in a nod to the deep neural networks they specialized in, and he asked a Toronto lawyer how he could maximize the price of a startup with three employees, no products, and virtually no history.

As the lawyer saw it, he had two options: He could hire a professional negotiator and risk angering the companies he hoped would acquire his tiny venture, or he could set up an auction. Hinton chose an auction. In the end, four names joined the bidding: Baidu, Google, Microsoft, and a two-year-old London startup called DeepMind, cofounded by a young neuroscientist named Demis Hassabis, that most of the world had never heard of.

See the rest here:

The Secret Auction that Set Off the Race for AI Supremacy - WIRED

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect – Forbes

Posted: at 12:39 am

Artificial Intelligence innovation continues apace - with explosive growth in virtually all industries. So what did the last year bring, and what can we expect from AI in 2021?

In this article, I list five trends that I saw developing in 2020 that I expect will be even more dominant in 2021.

MLOps

MLOps (Machine Learning Operations, the practice of production machine learning) has been around for some time. During 2020, however, COVID-19 brought a new appreciation for the need to monitor and manage production machine learning instances. The massive change to operational workflows, inventory management, traffic patterns, etc. caused many AIs to behave unexpectedly. This is known in the MLOps world as "drift": when incoming data does not match what the AI was trained to expect. While drift and other challenges of production ML were known to companies that had deployed ML in production before, the changes caused by COVID created a much broader appreciation for the need for MLOps. Similarly, as privacy regulations such as the CCPA take hold, companies that operate on customer data have an increased need for governance and risk management. Finally, the first MLOps community gathering, the Operational ML Conference, which started in 2019, also saw significant growth in ideas, experiences, and breadth of participation in 2020. A minimal drift check is sketched below.
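
As a concrete illustration, one basic drift check compares the live distribution of an input feature against the distribution seen at training time. The sketch below uses a two-sample Kolmogorov-Smirnov test on synthetic data; the feature, numbers, and alert threshold are assumptions for the example, not a prescribed MLOps setup.

```python
# Drift-detection sketch: has this feature's distribution shifted
# since training? Compare train-time vs. live samples with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_values = rng.normal(loc=120, scale=15, size=5000)  # seen at training
live_values = rng.normal(loc=150, scale=25, size=1000)   # post-shift traffic

stat, p_value = ks_2samp(train_values, live_values)
if p_value < 0.01:  # distributions differ: flag the model for review
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}); "
          "consider retraining or investigating the data pipeline.")
```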

Low Code/No Code

AutoML (automated machine learning) has been around for some time. AutoML has traditionally focused on algorithmic selection and finding the best Machine Learning or Deep Learning solution for a particular dataset. Last year saw growth in the Low-Code/No-Code movement across the board, from applications to targeted vertical AI solutions for businesses. While AutoML enabled building high-quality AI models without in-depth Data Science knowledge, modern Low-Code/No-Code platforms enable building entire production-grade AI-powered applications without deep programming knowledge.

Advanced Pre-trained Language Models

The last few years have brought substantial advances to the Natural Language Processing space, the greatest of which may be transformers and attention; a common application is BERT (Bidirectional Encoder Representations from Transformers). These models are extremely powerful and have revolutionized language translation, comprehension, summarization, and more. However, they are extremely expensive and time-consuming to train. The good news is that pre-trained models (and sometimes APIs that allow direct access to them) can spawn a new generation of effective and extremely easy-to-build AI services. One of the largest examples of an advanced model accessible via API is GPT-3, which has been demonstrated for use cases ranging from writing code to writing poetry. The sketch below shows how little code it can take to reuse a pre-trained model.
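
As an illustration of how accessible pre-trained models have become, the sketch below uses the open-source Hugging Face transformers library, which downloads a default pre-trained checkpoint on first use. The input text is invented for the example.

```python
# Reusing a pre-trained language model instead of training from scratch.
from transformers import pipeline

# The pipeline fetches a default pre-trained summarization checkpoint.
summarizer = pipeline("summarization")

article = (
    "Pre-trained language models such as BERT and GPT-3 are expensive to "
    "train, but once trained they can be reused for translation, "
    "comprehension, and summarization with little additional work."
)
print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])
```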

Synthetic Content Generation (and its cousin, the Deep Fake)

NLP is not the only AI area to see substantial algorithmic innovation. Generative Adversarial Networks (GANs) have also seen innovation, demonstrating remarkable feats in creating art and fake images. Similar to transformers, GANs have been complex to train and tune because they require large training sets. However, innovations have dramatically reduced the data requirements for creating a GAN. For example, Nvidia has demonstrated a new augmented method for GAN training that requires much less data than its predecessors. This innovation could spawn the use of GANs in everything from medical applications, such as synthetic cancer histology images, to even more deep fakes. The core adversarial training loop is sketched below.
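
For readers new to GANs, the sketch below shows the core adversarial loop on toy one-dimensional data: a generator learns to mimic a target distribution while a discriminator learns to tell real samples from generated ones. It is a minimal PyTorch illustration with invented networks and data, far smaller than the image GANs discussed above.

```python
# Minimal GAN sketch: learn to generate samples from N(4, 1).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator maps 8-D noise to a 1-D sample; discriminator scores realness.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Target distribution the generator must learn: N(4, 1).
    return torch.randn(n, 1) + 4.0

for step in range(2000):
    # 1) Train the discriminator: real samples -> 1, generated -> 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()   # detach: don't update G here
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator: fool the discriminator into outputting 1.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The mean of generated samples should approach the target mean of 4.0.
print(G(torch.randn(1000, 8)).mean().item())
```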

AI for Kids

As low-code tools become prevalent, the age at which young people can build AIs is decreasing. It is now possible for an elementary or middle school student to build their own AI to do anything from classifying text to classifying images. High schools in the United States are starting to teach AI, with middle schools looking to follow. As an example, in Silicon Valley's Synopsys Science Fair 2020, 31% of the winning software projects used AI in their innovation. Even more impressively, 27% of these AIs were built by students in grades 6-8. An example winner, who went on to the national Broadcom MASTERS, was an eighth-grader who created a convolutional neural network to detect diabetic retinopathy from eye scans.

What does all this mean?

These are not the only trends in AI. However, they are noteworthy because they point in three significant and critical directions

Read this article:

Artificial Intelligence In 2021: Five Trends You May (or May Not) Expect - Forbes

Provation and Iterative Scopes Announce Exclusive AI Partnership – GlobeNewswire

Posted: at 12:38 am

Provation serves thousands of hospitals, surgical facilities, anesthesia groups, and medical offices, including 43 of the top 50 U.S. hospitals for gastroenterology (GI) and GI surgery.

Iterative Scopes is a gastrointestinal data company, working to deliver AI toolkits to the practice of gastroenterology by providing real-time, actionable insights to providers and the life sciences.

Minneapolis, MN and Cambridge, MA, March 17, 2021 (GLOBE NEWSWIRE) -- Provation and Iterative Scopes announced today an exclusive partnership, bringing together the premier clinical productivity and gastroenterology (GI) documentation software provider with a leading startup in Artificial Intelligence (AI) for precision GI.

Through this partnership, the two companies are focused on delivering AI-based solutions to healthcare providers and life science researchers, including AI-assisted diagnostic tools, procedure documentation, and clinical trial recruitment.

One of the first initiatives under the partnership will be an AI-based patient recruiting solution for inflammatory bowel disease (IBD) patients. This new recruitment method is expected to accelerate the time needed to bring new therapies to market, bypassing many of the problems pharmaceutical companies currently face.

"Our partnership with Iterative Scopes is a logical one. Provation is the market leader in GI documentation, with more than 3,500 customer facilities, including 80% of the top academic and large health systems. Our IBD data can enable Iterative Scopes to facilitate clinical trial recruitment and deliver clinical endpoint evaluations to their pharmaceutical partners," said Craig Moriarty, Provation Senior Vice President of Strategy and Product. "In addition, Iterative Scopes can bring its extensive AI toolkit to our GI documentation solutions, including Provation Apex."

The integration of these technology platforms will enable more efficient and accurate documentation of gastroenterology procedures, benefiting physicians and the patients they serve.

"I am thrilled to partner with Provation, to accelerate the adoption of AI into gastroenterology and to be able to learn from the deep expertise and trusted position Provation holds. We look forward to unlocking tremendous value for patients, physicians and life science partners," said Jonathan Ng, CEO, Iterative Scopes.

About Iterative Scopes

Iterative Scopes is a gastrointestinal data company, working to deliver AI toolkits to the practice of gastroenterology by providing real-time, actionable insights to providers and the life sciences. Spun out of MIT in 2017, the team has since raised $14M in funding and is based out of Cambridge, Massachusetts. For more information, visit iterativescopes.com.

About Provation

Provation is a leading provider of healthcare software and SaaS solutions. Our purpose is to empower providers to deliver quality healthcare for all. We provide innovative solutions in clinical productivity, care coordination, quality reporting and billing. Celebrating 25 years, Provation serves thousands of hospitals, surgical facilities, anesthesia groups, and medical offices, including 43 of the top 50 U.S. hospitals for gastroenterology (GI) and GI surgery. Our comprehensive portfolio spans the entire patient procedure, from pre-op through post-op recovery and follow-up, with solutions for physician and nursing documentation (Provation MD, Provation Apex, MD-Reports and Provation MultiCaregiver), patient engagement, surgical care coordination, quality reporting, and billing capture (Provation SurgicalValet), order set and care plan management (Provation Order Set Advisor and Provation Care Plans), and EHR embedded clinical documentation (Provation Clinic Note). Provation is headquartered in Minneapolis, MN and backed by Clearlake Capital Group, L.P. For more information about our solutions, visit provationmedical.com.

Read more here:

Provation and Iterative Scopes Announce Exclusive AI Partnership - GlobeNewswire

Microsoft Innovates at the Intersection of Hybrid Cloud AI and Industries at Ignite 2021 – CMSWire

Posted: at 12:38 am

Barely five months after last year's event, Ignite 2021 was in some ways a timely opportunity for Microsoft to reconnect with customers in the throes of their digital transformation efforts. As the pace of digitization continues to accelerate in businesses, the event offered IT leaders announcements in several areas of interest: Teams collaboration, mixed reality computing, zero-trust security and developer productivity, to name a few.

However, a standout area this year was at the intersection of several of Microsoft's most important strategic domains: hybrid cloud and edge computing, artificial intelligence (AI) and industry clouds. Each of these areas are growing in influence in their own right, but it's becoming more and more important to watch where Microsoft's products cross paths. As the firm continues to strengthen the ties between its enormous flywheel of solutions, it's at these intersections where you can get a good glimpse of Microsoft's strategic direction and longer-term differentiation in the cloud market.

Let's take a closer look at this in the context of Ignite's highlights and what they mean for Microsoft.

The opening keynotes at Ignite 2021 balanced big-picture vision in areas such as cloud computing, mixed reality collaboration and AI at scale, with lauding the resilience of IT professionals and developers over the past year in helping their firms transform at an unprecedented rate.

Noting Microsoft's own decade-long journey in cloud technologies, CEO Satya Nadella outlined five elements that will enable cloud innovation over the next 10 years, driving Microsoft's overall cloud strategy and road map. The company's role, Nadella said, is to help customers build "tech intensity": an ethos driving companies to adopt technology, break down silos, change culture and build up their digital capabilities to enable their own digital transformations.

Related Article: Microsoft's Flywheel Kicks Into Gear at Ignite 2020

The first element, "ubiquitous, decentralized computing," is arguably Microsoft's greatest area of focus right now. It calls out the new era of more distributed cloud computing unfolding as the industry comes off the current "peak centralization" of the cloud, and is a reincarnation of the "intelligent cloud and intelligent edge" concept Microsoft first introduced at Build 2017.

The hybrid cloud, and specifically Azure Arc, has become one of the core vehicles of Microsoft's cloud strategy over the past year, and as a result was the focus of several of the leading announcements at Ignite 2021. The firm announced the general availability of Azure Arc-enabled Kubernetes, allowing organizations to manage and govern Kubernetes clusters deployed on-premises, in the public cloud or at the network edge.

The flexibility that Arc-enabled Kubernetes provides will be welcome news for customers, especially as the platform matures. In the next few years, we expect over 40% of large firms will continue to run more than 40% of their IT workloads on-premises, with 46% of companies adopting multi-cloud strategies, according to CCS Insight's latest survey data.

Microsoft also announced a major move at the intersection of hybrid cloud and AI. Azure Arc-enabled machine learning allows customers to build machine learning models and run inference where data lives, whether in on-premises data centers, on dedicated AI hardware, in edge environments or in multiple public cloud environments, for example.

Customers are looking for more flexibility in the environments in which they run their machine learning applications, and want the option to use their existing hardware investments. This is where Azure Arc is squarely positioned. According to my firm's "Senior Leadership IT Investment Survey, 2020," for example, close to a quarter of organizations said that the ability to support a hybrid IT or on-premises environment was the top consideration for their investment in machine learning, and 55% said they wished to pursue a multi-cloud approach for AI.

With the exception of IBM, which launched Watson Anywhere in 2019, Microsoft leads other players in this trend, giving it a unique window of opportunity against its cloud rivals. Google Cloud has announced a few hybrid AI capabilities this year, such as Speech-to-Text On-Prem, building on the 2020 release of BigQuery Omni, its hybrid, multi-cloud analytics solution. But so far, Google has fallen short within its core platform. Similarly, Amazon Web Services (AWS) has yet to announce compatibility between Amazon SageMaker and AWS Outposts, although I expect it will do so imminently.

Related Article: IBM and Microsoft Sign 'Rome Call for AI Ethics': What Comes Next?

Another big move targeted the intersection of AI, the internet of things and edge computing, strengthening Microsoft's play in industrial settings.

The Azure Percept hardware and services platform aims to solve the challenges of using AI on low-power, connected devices in factories and warehouses, spanning device management, machine learning model development and analytics. Microsoft released two hardware development kits as part of the solution, pre-integrated with Azure IoT Hub for secure deployment and management, as well as with Azure Cognitive Services, Azure Machine Learning and Azure Video Analytics.

The move is part of a bigger push by Microsoft to infuse AI everywhere: in its products, in its research and in its operational practices. Together, these give customers options to forge their own paths with the technology, whether it's a classical big data and machine learning path through enhancements to Azure Machine Learning, a modern AI approach via the large-scale models of Project Turing and its supercomputing resources in Azure, or edge solutions for industries such as Azure Percept. This flexibility and variety of options is quickly becoming a hallmark of Microsoft's differentiation and strategy in AI.

Above all, though, Ignite 2021 also showcased Microsoft's growing commitment to industry verticals. It's here that we see several products meet, spanning several of Microsoft's clouds, including Azure, Microsoft 365 and Dynamics 365.

In the days before the event, the firm introduced three new industry cloud offerings, adding Microsoft Cloud for Financial Services, Microsoft Cloud for Nonprofit and Microsoft Cloud for Manufacturing to its existing solutions for healthcare and retail. The launch is its response to customers pushing Microsoft harder to integrate and align innovation between its entire portfolio with industry outcomes and data protection standards.

Unlike other industry clouds, Microsoft's approach combines its infrastructure services, software-as-a-service applications for individual sectors, co-development initiatives with industry customers and a partner ecosystem to address specific business processes and industry challenges. Although it's early days in the shift to industry clouds, Microsoft's solution-centric and integrated approach to its cloud offerings will be a big part of how it plans to differentiate against other industry offerings.

CCS Insight has predicted for the past several years that 2021 will be the year of vertical clouds, as cloud providers ramp up activities to make their largely horizontal platforms more relevant to various sectors. In the past 12 months, we've seen IBM launch its Cloud for Financial Services, AWS deepen its capabilities in government, telecommunication and industrial sectors, and Google Cloud announce a focus on several industries for its AI solutions. In my view, Microsoft's announcements in this area leapfrog many of these efforts at this early stage of the game.

Related Article: Choosing a Cloud Provider for Business Innovation

Microsoft's relentless focus on achieving greater synergy and strengthening the connective tissue between its growing portfolio of products is perhaps its greatest strength in the market right now given the improved differentiation, product "stickiness" and customer value this strategy affords it. The area is also becoming an important source of innovation as well, especially at the intersection of some of its most important domains such as hybrid cloud and edge computing, AI and industry clouds as well as other areas including Teams and security, all of which came alive at Ignite 2021.

But as Microsoft continues down the right strategic paths in 2021, a new economic reality faces swathes of companies, putting pressure on IT budgets around the globe. How Microsoft complements this innovation with improvements to its commercial and licensing models to help customers reduce financial risk will be a vital new area on which it must now focus. Areas such as new bundling options across its clouds, more dedicated SMB offerings and above all, as it pursues industry solutions, a deeper focus on outcome-based pricing models should all come into play.

With many businesses across the globe struggling, helping customers find new ways to finance and consume its products will be just as important moving forward as the integration and intersecting areas of its portfolio are to Microsoft's strategy today.

Nicholas McQuire is vice president, enterprise research and artificial intelligence research at CCS Insight. He has over 15 years' experience in enterprise technology advisory services. He leads CCS Insight research in cloud computing, machine learning and the digital workplace.

Follow this link:

Microsoft Innovates at the Intersection of Hybrid Cloud AI and Industries at Ignite 2021 - CMSWire
