GYANT hauls in $13.6M Series A for AI care coordination tool – MobiHealthNews

GYANT, maker of an artificial intelligence-based virtual-healthcare assistant, has hauled in a $13.6 million Series A financing round. Wing Venture Capital headlined the raise, which also included support from Intermountain Ventures, Grazia Equity, Alpana Ventures, Techstars Ventures and Plug and Play Ventures.

WHAT IT DOES

A veteran of the Cedars-Sinai Techstars accelerator, GYANT's care-navigation business looks to serve as the "digital front door" for healthcare-provider organizations. The AI tool engages patients via chat to uncover their needs and direct them toward the services or tools they may need. GYANT's product can be added to websites, patient apps, or patient portals, and integrates with the organization's EHR systems.

The startup appears to have gotten a leg up over the past year or so. GYANT said in its funding announcement that it's grown from three customers in July 2019 to 24 a year later, and saw wide deployment of an automated COVID-19 screener it developed in March.

WHAT IT'S FOR

GYANT said that the new funds will help it further develop its tech platform and support greater interoperability for its customers.

"The need for digital access and care navigation has never been greater, especially with healthcare inequities and disparities in the spotlight today," Stefan Behrens, CEO and cofounder, said in a statement. "This is the time for GYANT to continue growing and realize our vision of personalized patient experiences with digital navigation to the right, best possible care."

MARKET SNAPSHOT

GYANT isn't alone in deploying triage chatbots to overwhelmed healthcare providers. Ada Health, Buoy Health, Bright.md, Babylon Health and others have been racking up funds over the past few years and signing a growing number of deployment deals, ranging from care coordination to automated delivery of care reminders. And as COVID-19 cases mount in the U.S., these technologies are increasingly being relied upon to ensure patients receive appropriate guidance.

ON THE RECORD

"Intermountain Healthcare partnered with GYANT for screening and care navigation at the height of COVID-19, and the results speak for themselves," Dr. Mike Phillips, managing director and partner of Intermountain Ventures, said in a statement. "Using GYANT led to a 30% decrease in call center volume, alleviating hospital capacity constraints and improving patient engagement. Our first-hand experience with GYANT and the value its market-leading care navigation solution delivers drove our decision to invest."


AI is learning how to create itself – MIT Technology Review

But there's another crucial observation here. Intelligence was never an endpoint for evolution, something to aim for. Instead, it emerged in many different forms from countless tiny solutions to challenges that allowed living things to survive and take on future challenges. Intelligence is the current high point in an ongoing and open-ended process. In this sense, evolution is quite different from algorithms the way people typically think of them: as means to an end.

It's this open-endedness, glimpsed in the apparently aimless sequence of challenges generated by POET, that Clune and others believe could lead to new kinds of AI. For decades AI researchers have tried to build algorithms to mimic human intelligence, but the real breakthrough may come from building algorithms that try to mimic the open-ended problem-solving of evolution, and sitting back to watch what emerges.

Researchers are already using machine learning on itself, training it to find solutions to some of the field's hardest problems, such as how to make machines that can learn more than one task at a time or cope with situations they have not encountered before. Some now think that taking this approach and running with it might be the best path to artificial general intelligence. "We could start an algorithm that initially does not have much intelligence inside it, and watch it bootstrap itself all the way up, potentially to AGI," Clune says.

The truth is that for now, AGI remains a fantasy. But that's largely because nobody knows how to make it. Advances in AI are piecemeal and carried out by humans, with progress typically involving tweaks to existing techniques or algorithms, yielding incremental leaps in performance or accuracy. Clune characterizes these efforts as attempts to discover the building blocks for artificial intelligence without knowing what you're looking for or how many blocks you'll need. And that's just the start. "At some point, we have to take on the Herculean task of putting them all together," he says.

Asking AI to find and assemble those building blocks for us is a paradigm shift. It's saying: we want to create an intelligent machine, but we don't care what it might look like. Just give us whatever works.

Even if AGI is never achieved, the self-teaching approach may still change what sorts of AI are created. "The world needs more than a very good Go player," says Clune. For him, creating a supersmart machine means building a system that invents its own challenges, solves them, and then invents new ones. POET is a tiny glimpse of this in action. Clune imagines a machine that teaches a bot to walk, then to play hopscotch, then maybe to play Go. "Then maybe it learns math puzzles and starts inventing its own challenges," he says. The system continuously innovates, and the sky's the limit in terms of where it might go.


Google releases SimCLR, an AI framework that can classify images with limited labeled data – VentureBeat

A team of Google researchers recently detailed a framework called SimCLR, which improves previous approaches to self-supervised learning, a family of techniques for converting an unsupervised learning problem (i.e., a problem in which AI models train on unlabeled data) into a supervised one by creating labels from unlabeled data sets. In a preprint paper and accompanying blog post, they say that SimCLR achieved a new record for image classification with a limited amount of annotated data and that it's simple enough to be incorporated into existing supervised learning pipelines.

That could spell good news for enterprises applying computer vision to domains with limited labeled data.

SimCLR learns basic image representations on an unlabeled corpus and can be fine-tuned with a small set of labeled images for a classification task. The representations are learned through a method called contrastive learning, where the model simultaneously maximizes agreement between differently transformed views of the same image and minimizes agreement between transformed views of different images.
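Concretely, the preprint formalizes that agreement objective as a normalized temperature-scaled cross-entropy (NT-Xent) loss. For a positive pair of views (i, j) among the 2N augmented examples in a batch, the loss takes the following form, where sim denotes cosine similarity between projections z and τ is a temperature hyperparameter:

```latex
\ell_{i,j} = -\log \frac{\exp\left(\operatorname{sim}(z_i, z_j)/\tau\right)}
{\sum_{k=1}^{2N} \mathbf{1}_{[k \neq i]} \exp\left(\operatorname{sim}(z_i, z_k)/\tau\right)}
```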

Above: An illustration of the SimCLR architecture.

Image Credit: Google

SimCLR first randomly draws examples from the original data set, transforming each sample twice by cropping, color-distorting, and blurring them to create two sets of corresponding views. It then computes the image representation using a machine learning model, after which it generates a projection of the image representation using a module that maximizes SimCLR's ability to identify different transformations of the same image. Finally, following the pretraining stage, SimCLR's output can be used as the representation of an image or tailored with labeled images to achieve good performance for specific tasks.
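As a rough sketch of what one training step looks like (a minimal illustration, not Google's released implementation; the `encoder`, `projection_head`, and `augment` callables are assumed stand-ins for the backbone network, the projection module, and the crop/color/blur augmentations described above):

```python
# Minimal SimCLR-style training step (illustrative sketch, not Google's code).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss: z1[i] and z2[i] are two views of the same image."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2N x d, unit length
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a view never matches itself
    # The positive target for each row is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

def training_step(images, encoder, projection_head, augment):
    v1, v2 = augment(images), augment(images)  # two random views per image
    z1 = projection_head(encoder(v1))          # representation -> projection
    z2 = projection_head(encoder(v2))
    return nt_xent_loss(z1, z2)
```

After pretraining, the projection module would be set aside and the encoder's output fine-tuned on the small labeled set, per the pipeline described above.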

Google says that in experiments SimCLR achieved 85.8% top-5 accuracy on a test data set (ImageNet) when fine-tuned on only 1% of the labels, compared with the previous best approach's 77.9%.

"[Our results show that] pretraining on large unlabeled image data sets has the potential to improve performance on computer vision tasks," wrote research scientist Ting Chen and Google Research VP, engineering fellow, and Turing Award winner Geoffrey Hinton in a blog post. "Despite its simplicity, SimCLR greatly advances the state of the art in self-supervised and semi-supervised learning."

Both the code and pretrained models of SimCLR are available on GitHub.


What our original drama The Intelligence Explosion tells us about AI – The Guardian

The Intelligence Explosion, an original drama published by the Guardian, is obviously a work of fiction. But the fears behind it are very real, and have led some of the biggest brains in artificial intelligence (AI) to reconsider how they work.

The film dramatises a near-future conversation between the developers of an artificial general intelligence named Günther and an ethical philosopher. Günther himself (itself?) sits in, making fairly cringeworthy jokes and generally missing the point. Until, suddenly, he doesn't.

It shows an event which has come to be known in the technology world as the singularity: the moment when an artificial intelligence that has the ability to improve itself starts doing so at exponential speeds. The crucial moment is the period when AI becomes better at developing AI than people are. Up until that point, AI capability can only improve as quickly as AI research progresses, but once AI is involved in its own creation, a feedback loop begins. AI makes better AI, which is even better at making even better AI.

It may not end with a robot bursting into a cloud of stars and deciding to ascend to a higher plane of existence, but it's not far off. A super-intelligent AI could be so much more intelligent than a human being that we can't even comprehend its actual abilities; explaining them to us would be as futile as explaining to an ant how wireless data transfer works.

So one big question for AI researchers is whether this event will be good or bad for humanity. And that's where the ethical philosophy comes into it.

Dr Nick Bostrom, a philosopher at the University of Oxford, presented one of the most popular explanations of the problem in his book Superintelligence. Suppose you create an artificial intelligence designed to do one thing: in his example, running a factory for making paperclips. In a bid for efficiency, however, you decide to programme the artificial intelligence with another set of instructions as well, commanding it to improve its own processes to become better at making paperclips.

For a while, everything goes well: the AI chugs along making paperclips, occasionally suggesting that a piece of machinery be moved, or designing a new alloy for the smelter to produce. Sometimes it even improves its own programming, with the rationale that the smarter it is, the better it can think of new ways to make paperclips.

But one day, the exponential increase happens: the paperclip factory starts getting very smart, very quickly. One day it's a basic AI, the next it's as intelligent as a person. The day after that, it's as smart as all of humanity combined, and the day after that, it's smarter than anything we can imagine.

Unfortunately, despite all of this, its main directive is unchanged: it just wants to make paperclips. As many as possible, as efficiently as possible. It would start strip-mining the Earth for the raw materials, except it's already realised that doing that would probably spark resistance from the pesky humans who live on the planet. So, pre-emptively, it kills them all, leaving nothing standing between it and a lot of paperclips.

That's the worst possible outcome. But obviously having an extremely smart AI on the side of humanity would be a pretty good thing. So one way to square the circle is by teaching ethics to artificial intelligences, before it's too late.

In that scenario, the paperclip machine would be told to "make more paperclips, but only if it's ethical to do so". That way, it probably won't murder humanity, which most people consider a positive outcome.

The downside is that to code that into an AI, you sort of need to solve the entirety of ethics and write it in computer-readable format. Which is, to say the least, tricky.

Ethical philosophers can't even agree on what the best ethical system is for people. Is it ethical to kill one person to save five? Or to lie when a madman with an axe asks where your neighbour is? Some of the best minds in the world of moral philosophy disagree over those questions, which doesn't bode well for the prospect of coding morality into an AI.

Problems like this are why the biggest AI companies in the world are paying keen attention to questions of ethics. DeepMind, the Google subsidiary which produced the first ever AI able to beat a human pro at the ancient boardgame Go, has a shadowy ethics and safety board, for instance. The company hasn't said who's on it, or even whether it's met, but early investors say that its creation was a key part of why Google's bid to acquire DeepMind was successful. Other companies, including IBM, Amazon and Apple, have also joined forces, forming the Partnership on AI, to lead from the top.

For now, though, the singularity still exists only in the world of science fiction. All we can say for certain is that when it does come, it probably won't have Günther's friendly attitude front and centre.


The dark side of AI – SC Magazine

For all the good that machine learning can accomplish in cybersecurity, it's important to remember that the technology is also accessible to bad actors.

While writers and futurists dream up nightmarish scenarios of artificial intelligence turning on its creators and exterminating mankind like Terminators and Cylons (heck, Stephen Hawking and Elon Musk have warned AI is dangerous), the more pressing concern today is that machines can be intentionally programmed to abet cybercriminal operations.

Could we one day see the benevolent AIs of the world matching wits with malicious machines, with the fate of our IT systems at stake? Here's what experts had to say:

Derek Manky, global security strategist, Fortinet

In the future we will have attacker/defender AI scenarios play out. At first, they will employ simple mechanics. Later, they will play out intricate scenarios with millions of data points to analyze and act upon. However, at the end of the day there is only one output: a compromise or not.

In the coming year we expect to see malware designed with adaptive, success-based learning to improve the success and efficacy of attacks. This new generation of malware will be situation-aware, meaning that it will understand the environment it is in and make calculated decisions about what to do next. In many ways, it will begin to behave like a human attacker: performing reconnaissance, identifying targets, choosing methods of attack, and intelligently evading detection.

Autonomous malware operates much like branch prediction technology, which is designed to guess which branch of a decision tree a transaction will take before it is executed. ... [This] malware, as with intelligent defensive solutions, is guided by the collection and analysis of offensive intelligence, such as types of devices deployed in a network segment, traffic flow, applications being used, transaction details, time of day transactions occur, etc.

We will also see the growth of cross-platform autonomous malware designed to operate on and between a variety of mobile devices. These cross-platform tools, or transformers, include a variety of exploit and payload tools that can operate across different environments. This new variant of autonomous malware includes a learning component that gathers offensive intelligence about where it has been deployed, including the platform on which it has been loaded, then selects, assembles, and executes an attack against its target using the appropriate payload.

Ryan Permeh, founder and chief cyber scientist, Cylance

Bad guys will use AI not just to create new types of attacks, but to find the limits in existing defensive approaches. ... Having information on the limits of a defender's defense is useful to an attacker, even if it isn't an automatic break of the defenses.

Justin Fier, director of cyber intelligence and analysis, Darktrace

I think we're going to start to see, in the next probably 12 to 18 months, AI moving into the other side. You're already starting to see polymorphic malware that [infects a] network and then changes itself, or automatically deletes itself and disappears. So in its simplest form it's already there.

Where I think it could potentially head is where it actually sits dormant on a system and learns the user and then finds the most opportune time to take an action.

Diana Kelley, global executive security adviser, IBM

Malware is getting very, very situationally aware. There's some malware, for example, that can get onto the system and figure out, "Is there AV on here? Is there other malware on here?" and shut it down so it's the only malware. Or even, "Oh look, I've landed on a point-of-sale system rather than on a server, so I'm just going to shut down all of my functions that would work on a regular server and just have my RAM scraper going, because that's what I want on the point of sale."

Staffan Truve, co-founder and CTO of Recorded Future

Truve said that AI will be used to automatically craft effective spear-phishing emails that contain victims' personal information, leveraging powerful data resources and natural-language generation capabilities to sound convincing.

"I'm sure it will be very hard to identify phishing emails in the future."

Additionally, "We'll definitely be seeing AI that can analyze code and figure out ways to find vulnerabilities."

"It's going to be an arms race between the good and bad guys. ... The good side is a bit ahead right now, and mostly I think the reason for that is that the bad guys are successful enough with old methods. ... You can find enough targets who are unsophisticated enough to be vulnerable to current technologies."


Want Your Company To Stay Relevant? Start Learning How To Harness AI – Forbes

Artificial intelligence (AI) has the power to change how our workforce operates, and if you want your business to stay competitive, you need to get ahead of the AI revolution. Don't let yourself feel daunted by it as a buzzword. At my current company ...


Candis raises nearly $14 million to automate accounting processes with AI – VentureBeat

Candis, a startup developing a platform for automated accounting and payment processes, this week closed a €12 million ($13.97 million) funding round. A spokesperson for the company said the money will be used to further develop Candis' machine learning engine and fuel growth and expansion within Europe, specifically in the Netherlands.

Most paperwork is still done manually. According to a study published by Wakefield Research and Concur, 84% of small businesses rely on some kind of manual process each day. Some of these are financial and require specialized knowledge, and the stakes are high. Errors could result in a client being unable to deliver payments or in late bills that hurt planning.

Candis aims to expedite some of the more complex workflows with algorithms that import files, extract data, approve invoices, and handle exporting. Customers upload documents in one place by scanning them with an app or forwarding them to an email address. The platform handles payment reconciliation with linked business accounts, credit cards, and PayPal accounts by assigning account movements to the correct invoices and notifying admins of missing documents. Candis also automatically updates payment lists and maintains an overview of open liabilities and completed invoices while storing things like invoice numbers to simplify bank transfers and collaboration among team members.
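Candis hasn't published how its matching engine works, but reconciliation of this kind generally reduces to pairing bank transactions with open invoices. A toy sketch of the task (all names and the matching rule are hypothetical, purely illustrative, and not Candis' algorithm):

```python
# Toy payment-reconciliation sketch (hypothetical; not Candis' algorithm).
from dataclasses import dataclass

@dataclass
class Invoice:
    number: str
    amount: float  # a production system would use Decimal, not float

@dataclass
class Transaction:
    reference: str  # free-text payment reference from the bank feed
    amount: float

def reconcile(invoices, transactions):
    """Pair transactions with invoices by reference and amount; return
    matches plus the unmatched invoices that would trigger a notification."""
    matches, open_invoices = {}, {inv.number: inv for inv in invoices}
    for tx in transactions:
        for number, inv in list(open_invoices.items()):
            if number in tx.reference and abs(inv.amount - tx.amount) < 0.01:
                matches[number] = tx
                del open_invoices[number]
                break
    return matches, list(open_invoices.values())

invoices = [Invoice("INV-1001", 250.00), Invoice("INV-1002", 99.90)]
transactions = [Transaction("Payment for INV-1001, thanks", 250.00)]
matched, unmatched = reconcile(invoices, transactions)
print(matched)    # INV-1001 paired with its transaction
print(unmatched)  # INV-1002 stays open: notify an admin of the missing document
```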

Above: Candis web dashboard.

Image Credit: Candis

Candis' platform runs without installation in a web browser on any device, and the company claims it stores data on ISO-certified servers in Germany.

Studies show the vast majority of day-to-day accounting tasks can be automated with software. That may be why over 50% of respondents in a survey conducted by the Association of Chartered Certified Accountants said they anticipate the development of automated and intelligent systems will have a significant impact on accounting businesses over the next 30 years. Indeed, Candis cofounder and managing director Christian Ritosek says the company's software automates more than 80% of accounting processes for tax advisors at thousands of companies. Despite competitors like Botkeeper, Candis says business has grown 500% since its last funding round at the end of 2018. Ritosek estimates the small and medium-sized enterprise (SME) market in the EU to be worth around $4 billion.

Existing investors Viola Ventures and Rabo Frontier Ventures, the investment arm of Rabobank, led this latest investment in Berlin-based Candis. Returning investors Lightspeed Venture Partners, Point Nine Capital, Speedinvest, main incubator (the incubator of Commerzbank), and 42CAP also participated.


AIMed Launches New AI Community Group With Jvion as its Founding Member – Yahoo Finance

LONDON, July 21, 2020 /PRNewswire/ -- AIMed, a leading provider of global clinician-led artificial intelligence thought leadership, education and editorial content in healthcare and medicine has announced today the launch of the new AIMed Community Group - AIMedConnect. AIMed is delighted to welcome Jvion, a key healthcare clinical artificial intelligence company on board as a founding member. For more than a decade, Jvion has delivered clinical-AI solutions to the healthcare industry to identify individuals on a risk trajectory whose outcomes can be modified with the proper interventions.

AIMedConnect will extend AIMed's vision and bring about a revolution that embraces a new paradigm of medicine and healthcare propelled by Artificial Intelligence (AI) and related new technologies.

AIMedConnect is a unique opportunity for the medical, healthcare and technology industries. It's a strategic platform for innovators to receive vital feedback from practicing clinicians and other stakeholders, ensuring their new AI solutions are fit for purpose before commercialization as well as driving increased adoption of AI technology. It's also where participating members, whether looking for a way to get started in AI or already possessing an advanced data science background, can be exposed to various pre-market creations and share their thoughts on how they would like new technologies to be developed.

AIMed CEO Freddy White said, "Over the last four years, we have consistently heard challenges coming from both executives and clinicians on the adoption and implementation of AI in healthcare. By launching the AIMed User Group, we are, for the first time, connecting key stakeholders together in a unique forum to collaborate, innovate and deliver solutions that will drive better patient outcomes and efficiency across the industry. I am thrilled with the initial response and look forward to the collective output from this group."

"Jvion is excited to be a founding member of the AIMEDConnect. This is a unique opportunity to foster cross-industry collaboration and drive innovation," said Jay Deady, CEO of Jvion. "We look forward to driving alignment and benefit across the industry."

AIMed Chairman and Founder, Chief Intelligence and Innovation Officer of Children's Hospital of Orange County (CHOC) Dr. Anthony Chang added, "AIMed announces the inception of an AIMed user group concept called AIMedConnect. These user groups will regularly convene clinician/hospital users and payors as well as service providers, data scientists, and entrepreneurs. This is a singularly unique opportunity to communicate and network with all stakeholders in the artificial intelligence in healthcare community in an honest and productive conversation to maximize the utility and safety of artificial intelligence in healthcare.

Jvion, one of the leading healthcare AI companies, will be a founding member of AIMedConnect. This signals Jvion's extraordinary commitment to using artificial intelligence to prevent harm and lower cost, thereby bringing the highest value to healthcare."

About AIMed

A platform established in 2014 with the goal of bringing together healthcare, business and technology experts to start a revolution in medicine and healthcare brought about by the use of artificial intelligence (AI) and related new technologies. AIMed organizes year-round education and networking opportunities through a series of events, magazine and online content.

About Jvion

Jvion enables providers, payers and other healthcare entities to prevent avoidable patient harm and lower costs through its clinical AI solution. An industry first, the Jvion CORE goes beyond simple predictive analytics and machine learning to identify patients on a trajectory to becoming high risk and for whom intervention will likely be successful. Jvion determines the interventions that will more effectively reduce risk and enable clinical action. And it accelerates time to value by leveraging established patient-level intelligence to drive engagement across hospitals, populations, and patients. To date, the Jvion CORE has been deployed across about 50 hospital systems and 300 hospitals, which report average reductions of 30% for preventable harm incidents and annual cost savings of $13.7 million. For more information, visit http://www.jvion.com

CONTACTS

Freddy White, +44 796 8565 401, freddy@ai-med.io

Lexi Herosian, lexi@scratchmm.com

View original content: http://www.prnewswire.com/news-releases/aimed-launches-new-ai-community-group-with-jvion-as-its-founding-member-301096963.html

SOURCE AIMed


6 steps to better conversations that can reimagine AI regulation – World Economic Forum

AI and algorithms are everywhere, underpinning many parts of our daily lives, improving systems and increasing productivity. They also carry risk, however, potentially introducing new biases or worsening old ones, blocking candidates from jobs and even misidentifying crime suspects.

As AI is ubiquitous, governments and businesses leverage social licence to explore new uses. Through social licence, communities agree that governments, agencies, or companies are considered trustworthy enough to use AI in ways that may have risks.

Having this trust means people believe that if something does go wrong, it will be quickly identified and fixed before there is harm caused. Such trust can be built with conversations and robust engagement between the public, technologists and policy makers.

Successful conversations on AI or other emerging technologies are multi-stage processes, involving different players at different intervals to play important roles. As part of the World Economic Forum's Reimagining Regulation for the Age of AI project, the project team and community developed a range of best practices and steps for how to design a successful conversation on AI. For better, more robust conversations, these tips can help.


What's needed for powerful engagements

Strong engagement forges trust and will power valuable conversations to help different parties fully understand each other's needs and views. Good engagement takes several factors into account.

How to run a successful engagement

Planning can create the right spaces for strong engagements. To make the most out of these opportunities, the following steps should be taken:

1. Define. To start, the design team must identify core elements such as principles, legal context and societal norms underpinning the current use of AI. These current rules and conventions outline which boundaries, safeguards and protections are already in place. Communicating these elements to participants helps focus discussions by letting the public know which concerns are already covered by regulation and how participants can contribute to the creation of new safeguards.

2. Discover. In this stage, the design team researches the issue and the current context, identifying how the engagement could fuel progress. This initial stage could take place with a small-scale workshop involving a few key stakeholders. This stage allows the designers to crystallise the issue at hand, the aspects where consultation is needed, and any goals or outcomes. Participants will trust the process more if they can see the pathway ahead, know they can contribute to the desired end goal, and understand they are helping to design some of the steps to reach the end goal.

3. Decide. Here, the design team identifies who should be included in the larger engagement. This might include stakeholders as well as members of the community with particular concerns about, or knowledge about, bias and algorithms. Hard-to-reach audiences need to be identified and decisions made on how best to reach these participants. Designers will also decide what level of influence the participants will have and the extent to which views will be listened to, respected, and used in the next stages. Trust will be lost if people participate and then find their views have been ignored.

4. Design. In this stage, the team takes what it has learned about the issue it is covering, and the people involved to decide the best channels for engagement and dialogue. The shape of the engagement will depend on factors such as the intended participant list, the level of involvement being sought by the participants, and the nature of the feedback wanted. So, for example, a small-scale engagement with a specific group seeking views on how personal data is to be used, with the aim being to feed into a piece of legislation, will require different materials, meetings and level of discussion than will a general nationwide conversation on how human rights sit within digital technologies, with the aim of raising awareness about digital rights.

5. Analyse. Once the engagement has occurred and the information has been gathered, the results need to be analysed and synthesised by the design team. The design team needs to look at the inputs and adjust for bias, vested interests, monopoly of views and strong voices. The findings will need to be presented to key stakeholders and other key audiences, including the participants. These findings will include recommendations on how the engagement research will be used (that is, will it inform or change the decision?) and the next steps.

6. Review. Finally, the whole process should be reviewed and evaluated, either by the design team or by an independent group. This will help identify what went well and what didn't, giving valuable insights for future engagements. The review can be done in many ways (for instance, interviews or surveys with the participants and design team are a useful way to review the process from both sides). A key part of this process is providing honest feedback to participants on what happened with their input and how it has contributed to the next steps.

"Through trust, its possible to yield the valuable insights and material that make the ultimate decisions stronger."

Open and honest engagement requires courage on both sides. Conversations on new technologies that have the potential to disrupt communities and economies are difficult. A carefully planned engagement helps signal transparency and humility to participants, and the fact that no one party has all the answers.

Engagements will inevitably result in differing views or roadblocks. The steps above can't prevent disagreements, but if the designers are open and honest from the start, trust is built early and it's possible to work through any disagreements. Through trust, it's possible to yield the valuable insights and material that make the ultimate decisions stronger.

An involved and active community that trusts enough to engage is vital to a functioning democracy. Gaining and maintaining this trust requires an ongoing commitment but one that is worthwhile as community support and input makes for better, stronger policy.


Exploring the infrastructure needs of AI – ZDNet

When you're phasing advanced analytics, machine learning, and artificial intelligence into your infrastructure, traditional configurations aren't necessarily up to the task. Applications related to AI can accumulate large volumes of data and impose heavy I/O requirements, so you'll need to ensure that your setup can accommodate those demands.

Microsoft Cloud Services, for example, utilize commodity hardware and scale virtually infinitely to handle AI workloads. By using commodity hardware, Microsoft is able to provide storage services over standard protocols like iSCSI, NFS, SMB, and CIFS, as well as more advanced features.

Commodity hardware is a growing trend when designing a system to manage large volumes of data. Big data infrastructures normally utilize commodity hardware to distribute terabytes of data throughout the network. Microsoft Azure storage provides an affordable solution for AI by supporting structured and unstructured data in a highly scalable platform. You also have the ability to implement storage-related automation routines using RESTful APIs.
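For instance, a storage-automation routine can be scripted against those APIs. A minimal sketch using the azure-storage-blob Python SDK, which wraps the REST interface (the connection string, container, and file names below are placeholders):

```python
# Sketch: automating an Azure Blob Storage task via the Python SDK.
# Placeholders throughout; assumes the container already exists.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<your-connection-string>")
container = service.get_container_client("training-data")

# Upload a local data set that an AI workload will consume.
with open("labels.csv", "rb") as data:
    container.upload_blob(name="labels.csv", data=data, overwrite=True)

# List blobs to confirm what the workload can see.
for blob in container.list_blobs():
    print(blob.name, blob.size)
```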

AI applications and the loads that they can put on a system's resources will vary in volume and complexity. To support deep learning, problem-solving, reasoning, and learning, these applications need the ability to analyze large volumes of data in various formats, then compile data and produce results in an optimal amount of time.

Here are two examples of how Microsoft is designing applications to build the infrastructure of AI:

Azure Machine Learning is a methodology designed by Microsoft to incorporate predictive analytics into applications. The Machine Learning Studio allows developers to use known programming techniques to design models for AI.

Microsoft Azure Cognitive Services is another offering that combines user experiences with machine-based intelligence. Cognitive Services empowers the user by helping deploy applications across platforms easily.

Cognitive Services development covers a range of areas, including vision, speech, language, knowledge, and search.

Microsoft recently announced Azure Batch AI Training as well. This is a segmented space within the Azure cloud that developers can use to work on AI applications without worrying about compromising infrastructure resources. By doing this type of modeling, developers will be able to determine what is needed for their infrastructure before the application is deployed.

Each of these tools can utilize the Azure Cloud for deployment, as well, so you can reap the benefits of test/dev and production in Azure.

AI infrastructure dependencies vary, and smart design is critical. Microsoft Azure, along with Microsoft's development and training tools, put a custom solution within reach.


AI will fix healthcare’s biggest and least sexy problem – MedCity News

When discussing the growing use of artificial intelligence, a hotly contested view is that AI will become a game-changer in healthcare, diagnosing and treating patients with serious diseases like cancer or diabetes.

While algorithm vs. doctor and clinical moonshots dominate the headlines, AI is quietly solving another big problem that has long plagued healthcare: waste and inefficiency.

Unlike clinical issues, inefficiency is often overlooked because it's complicated and unsexy. However, many now believe that solving operational issues is the biggest lever for fixing healthcare and the area where we can really move the needle on cost and patient experience. This is more important than ever, as hospitals are seeing more bankruptcies and face growing uncertainties around reimbursements and operating margins in the face of the AHCA and other turbulent policy issues.

But tackling inefficiency is hard. That's because hospitals are complex, unpredictable organizations. Data was expected to help. But given the massive amount of information involved, standard industry tools like dashboards and reports aren't useful enough. In healthcare, the stakes are much higher and require more practical solutions. How can we expect busy nurses and doctors to make sense of dashboards in high-pressure moments and figure out what decision to make? It's unreasonable and impossible.

We must also move away from rear-window insights and into the proactive management of our problems. Let's say that a nurse reviews a report indicating that, the day before, a toddler had her surgery canceled due to lengthy delays in the OR. This insight is meaningless because it's too late to fix the situation.

Many correctly believe that success requires predictive analytics, but predictions are hard to interpret. They can't help a busy nurse know exactly what she should do in that moment to prevent the chaos that may be heading her way. To make things work better for the frontline decision-makers, our tools have to be able to evaluate the possible interventions and suggest data-validated, real-time course corrections.

For example, in the situation of the toddler, AI can predict potential scheduling conflicts in the operating room, or flag when a delay is likely. It instantly identifies the best option and then prompts the nurse to take the specific action needed to prevent the cancellation. This way, better decisions are made and issues can be dealt with before they even arise.

The results associated with the use of AI in healthcare operations are compelling. During Becker's 8th Annual Symposium, a leading academic children's hospital talked about a 25 percent reduction in same-day surgery cancellations by using AI. Also, a prominent Midwest health system discussed steps for successfully transforming a low-performing emergency department, reducing patient wait time for a doctor by 20 percent.

Using AI, our industry can extrapolate this success to avoid many issues currently affecting healthcare costs and patient experience, such as surgery delays, overcrowding, patient falls and excessively lengthy hospital stays.

And, for our society, there are bigger goals that can be realized by focusing on these important and costly situations. Could we reduce painful facility closures facing our rural communities? Could we slow hospital spending that is now nearly $1 trillion and represents a third of all healthcare costs in the U.S.? Could we reduce the 250,000 annual deaths from medical errors by making it easier for caregivers to do their job? On the most basic level, could we get patients in and out of the hospital faster and with less frustration?

To make an impact on these big picture outcomes, we have to change our mindset. Rather than seek out silver bullets, we must begin to value the small, day-to-day actions that, over time, can drive large-scale impacts.

It's crucial that we equip all healthcare staff with the best tools to do so. That's where AI can be a game-changer, if we can mold the information it delivers into the right, actionable decisions. In the next few years it will be increasingly clear that those who are able to do so will see the greatest success.

Photo: Getty Images


Unpacking AI's power and its controversies at Transform 2020 – VentureBeat

I can't wait until Transform 2020 starts tomorrow. It's our flagship event for enterprise decision-makers to learn how to apply AI.

One of our goals at VentureBeat is to create a new kind of town square for enterprise decision-makers to learn about transformative technology and transact. And the practice of AI is where we're going deep. AI is the most powerful technology in enterprise today, and VentureBeat is the leading publication covering AI news.

So it's important that VentureBeat create a virtual platform where that community can come together and have conversations. Since we've already been digital with our news offering, we are able to pivot fully virtual and bring the same, if not more, value to our AI events.

I'm pretty proud of what we've done, including the one-to-one meeting feature for executives who would like to connect with each other to get business done.

I'm excited about the agenda we have for the three days. We're focused on the top application areas: for example, conversational AI, computer vision, and edge IoT.

In my opening remarks tomorrow, I'll provide preliminary results of our AI survey, and an overview of the big trends we're seeing, shaped by the hundreds of conversations we had with executives while preparing for the show.

Above: Twitter CTO Parag Agrawal

Image Credit: Twitter.com

I'll personally be interviewing Twitter CTO Parag Agrawal about how Twitter has been using AI/ML at scale to foster a more constructive public discourse and to flag harmful speech. Twitter has been in the hot seat lately, forced to label tweets from President Donald Trump that it perceived as misleading or harmful. And how does Twitter flag offensive tweets accurately, when researchers have shown that leading AI models processing hate speech are often inaccurate or biased? Agrawal will discuss how Twitter blends AI/ML, policy, and product in a dynamic environment where it has to counter adversaries who often use state-of-the-art conversational AI technology themselves.

We'll address the gap in perceived importance of ethics and accuracy in AI. In our AI survey, most of our practitioner respondents said they believe enough is being done at their companies to counter bias (ethnic, gender, etc.) in implementing AI models. This contrasts with what we're hearing from professionals from underrepresented backgrounds. On Thursday morning, we'll have an hour-long roundtable on the topic of Diversity and Inclusion in AI led by four Black professionals, which I highly recommend attending if you can get in (it will be capped). This will be a strong session, and eye-opening for anyone thinking enough is being done to counter bias in AI. It will follow our Women in AI (virtual) Breakfast, where we'll have Timnit Gebru and other leaders represented.

One area of particular controversy is facial recognition technology, where study after study has shown that it is less accurate on underrepresented populations. At our AI showcase at Transform, Trueface, a facial recognition company, will be releasing a new product, and Hari Sivaraman, Head of AI Content Strategy, VentureBeat, will have a crossfire Q&A with Trueface CEO Shaun Moore about how it is using facial recognition and its purported accuracy.

There's too much happening to summarize entirely here: the AI Innovation Awards tomorrow evening, the Expo, the intimate roundtables... But it does look like Transform will be the biggest AI event for business executives this year, given that most other events were canceled or postponed. We have almost 3,000 people registered, double the number from last year.

And of course, none of this is possible without our great sponsors, folks like Dataiku, Intel, CapitalOne, Nvidia, Modzy, Cloudera, DotData, Twohat, Dell, Inference Solutions, Anaconda, Conversica, SambaNova, Xilinx, Globant, and more. Many of their executives will be participating as speakers, alongside speakers from some great brands like Walmart, Uber, Google, Adobe, Chase, Goldman Sachs, Visa, PayPal, Intuit, CommonSpirit Health, GE Healthcare, Pfizer, Pinterest, Slack, Yelp, LinkedIn, eBay, and Salesforce.

Looking forward to seeing you there virtually! (Register here: vbtransform.com)


Robotics and AI-based Automation in the Post Pandemic Era – Robotics Tomorrow

Q&A with Anis Uzzaman, CEO and General Partner | Pegasus Tech Ventures

Pegasus Tech Ventures is a global venture capital firm based in Silicon Valley that invests in emerging technology companies around the world. We work with startups to expand into new markets globally. Our portfolio companies target disruptive opportunities in Artificial Intelligence, Robotics, IT, HealthTech, IoT, Big Data, Quantum Computing, FinTech, and other next-generation technologies. Pegasus manages about US$1.5 billion across 25 funds, on behalf of 35+ corporate partners. By helping entrepreneurs connect with corporations and enter new markets, Pegasus Tech Ventures bridges innovation ecosystems around the world. Pegasus also founded and sponsors the Startup World Cup, one of the biggest startup competitions in the world. Startup World Cup covers more than 60 regional locations across six continents; the Grand Finale in San Francisco offers a US$1 million-dollar investment prize to the winning team.

There are several industries that I believe will take off in the post-pandemic era. First and foremost are e-commerce and contactless shopping; this is an industry in which new protocols, such as contactless delivery methods, are expected to continue as social distancing has become mandatory. Startups including Nuro and Starship Technologies have taken it one step further and created fully autonomous vehicles that deliver to your door without human involvement. Such advances will get increasingly popular as COVID-19 requires the world to adapt to new sets of living standards. We'll also witness a rise in the popularity of E-Sports. Traditional sports-focused companies, including ESPN, have been doubling down on growing E-Sports businesses. We also see this trend in startups, with companies like Sleepr adapting their product roadmaps to bring E-Sports to the forefront. Twitch, already an industry leader, has seen a surge in the first quarter of 2020, breaking its own records in hours watched and average concurrent viewership.

A few startups that are creating artificial intelligence-based robotics automation solutions include Osaro, Vicarious, and Kindred. These companies are applying different types of artificial intelligence to automate various tasks in the warehouse and distribution environment; this is key in ensuring that social distancing will continue to occur, while still properly maintaining the supply chain. Kindred.ai, in particular, has already deployed many solutions with popular brands including GAP and Nike. The retail sector has been hit particularly hard during this time. I anticipate there will also be an emphasis on accelerating automation of their operations to ensure they can better maintain operations in the future. Osaro is focused on both E-Commerce applications and food packaging, where there is a desperate need for more human-free solutions.

While there are a wide variety of tips I have in mind for startups at the moment, three of the best ones to begin with are:

Reconsider your financial spending and needs. What is the conversion on sales with your marketing campaign? What is your current sales productivity? How much cash do you have in the bank, and how long will it last based on the current burn rate? Companies need to determine the most essential expenditures required to reach minimum milestones in this environment.

Reset your stakeholders' expectations. Now is also the time to talk with your investors, customers, and employees to adjust expectations. In the past, current investors and potential investors may have indicated a particular revenue milestone that you need to reach for the next round of equity financing. Have a discussion with them and brainstorm what is realistic given the new macroeconomic environment.

Come up with an adjusted fundraising plan. Startups will need to adjust their fundraising plans based on the new economic reality. If a startup doesn't include an adjusted plan and cannot explain how it will take the new business environment into account, then investors will have an increasingly difficult time getting approval from their investment committee to finance that company.

This model is even more relevant today, as startups need more revenue and corporations need to accelerate their innovation initiatives. As a way to address the market dynamic for startups and corporations, Venture Capital-as-a-Service (VCaaS) provides an optimal mix of capital and business value to startups through corporate fund networks. Pegasus Tech Ventures is providing startups with both flexible check sizes and business engagements with strategic corporate partners. Pegasus has partnered with 35+ corporations including ASUS, AISIN, and SEGA. For example, Pegasus and AISIN have recently partnered to help provide funding for autonomous vehicle technology company StradVision.

As VCs we want to bring together those who may have been deprived of investment opportunities due to their backgrounds or ethnicities. Diversification, within robotics specifically, is key to ensuring that future technologies hold no biases. VCs in particular have an opportunity in front of them to empower diverse startups, and to help these companies become future leaders. VCs should use their influence to put diverse groups at the front of people's minds. Here at Pegasus, we've taken concrete steps to provide opportunities for people of all backgrounds and to conduct our business with purpose, whether that means investing in women-led startups or startups owned by Black and Indigenous people of color, or hosting our Startup World Cup with over 60 countries participating to help ensure that everyone is able to play on the same field.

About Anis Uzzaman

Anis Uzzaman, Ph.D. is the CEO & General Partner of Pegasus Tech Ventures, overseeing overall management, investments, and operations. Located in Silicon Valley, Pegasus Tech Ventures provides early stage to final round funding. Anis has invested in over 170 startups across North America, Europe, and Asia. Anis is also the Chairman of Startup World Cup, a global startup pitch competition with 50+ regional events across six continents, leading up to a $1,000,000 investment prize.



Former AI Company CEO Warns About Abuse of Virtual Relationship – Futurism

In Brief: Artificial intelligence has the potential to expedite human development and liberate us from menial tasks. However, AI is also becoming more integrated into our personal lives, raising concerns about manipulation and coercion.

AI Interaction as Manipulation

An article for the MIT Technology Review has raised concerns about the potential for our intimacy with artificial intelligence (AI) to be exploited for insidious ends. Its author, Liesl Yearsley, shares her perspective as the former CEO of Cognea, which built virtual agents using a mixture of structured and deep learning.

Yearsley observed during her tenure at Cognea that humans were becoming more and more dependent on AI, not just to perform tasks but also to provide emotional and platonic support. This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Yearsley wrote that people would volunteer secrets, dreams, and even details of their love lives.

This may not necessarily be bad. AI is perhaps more capable than we are at caring; it has the potential to be always available and be modified specifically for us. "The fundamental problem is that the companies designing them are not primarily interested in each user's well-being, but in increasing traffic, consumption, and addiction to their technology," Yearsley wrote in the article.

Hauntingly, she writes that AI corporations have developed formulas that are incredibly efficient at achieving this. "Every behavioral change we at Cognea wanted, we got." So what if what companies wanted was unethical? Yearsley also observed that humans' relationships with AI became circular: if humans were exposed to particularly servile or neutral AI, they would tend to abuse it, and this relationship would make them more likely to behave the same way toward other humans.

AI is becoming integrated into our daily lives at a rapid pace: Siri mediates our interaction with our iPhones, AI curates our online experience by tailoring advertisements, and chatbots constitute a significant proportion of our interactions with companies.

Our growing relationship with AI is catalyzed by the anthropomorphization (attributing human traits to things) of technology. Siri was given a name to make her appear more like a person, and bots are adapting to your speech patterns in order to encourage you to trust them, bond with them, and therefore use them more.

The vulnerability caused by not understanding what an AI may be specifically programmed to do is compounded by our lack of understanding of how AI does it. We currently know very little about how AI thinks, yet we continue to create bigger, faster, and more complex versions of it. This is an issue not only for us but also for the companies developing it, because they cannot predict the actions of their AI with any certainty.

Our interaction with AI is clearly going to shape our future, but the danger is that AI can be curated to affect our society in a particular way, or perhaps that AI's interpretation of a human intention will lead to a future that none of us actually wanted.


Elon Musk says all advanced AI development should be regulated, including at Tesla – TechCrunch

Tesla and SpaceX CEO Elon Musk is once again sounding a warning note regarding the development of artificial intelligence. The executive and founder tweeted on Monday evening that "all org[anizations] developing advanced AI should be regulated, including Tesla."

Musk was responding to a new MIT Technology Review profile of OpenAI, an organization founded in 2015 by Musk, along with Sam Altman, Ilya Sutskever, Greg Brockman, Wojciech Zaremba and John Schulman. At first, OpenAI was formed as a non-profit backed by $1 billion in funding from its pooled initial investors, with the aim of pursuing open research into advanced AI and ensuring it benefited society, rather than leaving its development in the hands of a small and narrowly interested few (i.e., for-profit technology companies).

At the time of its founding in 2015, Musk posited that the group essentially arrived at the idea for OpenAI as an alternative to "sit[ting] on the sidelines or encourag[ing] regulatory oversight." Musk also said in 2017 that he believed regulation should be put in place to govern the development of AI, preceded first by the formation of some kind of oversight agency that would study and gain insight into the industry before proposing any rules.

In the intervening years, much has changed, including OpenAI. The organization officially formed a for-profit arm owned by a non-profit parent corporation in 2019, and it accepted $1 billion in investment from Microsoft along with the formation of a wide-ranging partnership, seemingly in contravention of its founding principles.

Musk's comments this week in response to the MIT profile indicate that he's quite distant from the organization he helped co-found, both ideologically and in a more practical, functional sense. The SpaceX founder also noted that he "must agree" that concerns about OpenAI's mission expressed last year at the time of its Microsoft announcement "are reasonable," and he said that OpenAI "should be more open." Musk also noted that he has "no control & only very limited insight" into OpenAI and that his confidence in Dario Amodei, OpenAI's research director, "is not high" when it comes to ensuring safe development of AI.

While it might indeed be surprising to see Musk include Tesla in a general call for regulation of the development of advanced AI, it is in keeping with his general stance on artificial intelligence. Musk has repeatedly warned of the risks associated with creating AI that is more independent and advanced, even going so far as to call it "a fundamental risk to the existence of human civilization."

He also clarified on Monday, in response to a question from a follower, that he believes advanced AI development should be regulated both by individual national governments and by international governing bodies like the U.N. Time is clearly not doing anything to blunt Musk's beliefs about the potential threat of AI; perhaps this will encourage him to ramp up his efforts with Neuralink to give humans a way to even the playing field.

Read the rest here:

Elon Musk says all advanced AI development should be regulated, including at Tesla - TechCrunch

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and … – TechCrunch

A $27 million fund aimed at applying artificial intelligence to the public interest has announced the first targets for its beneficence: $7.6 million will be split unequally between MIT's Media Lab, Harvard's Berkman Klein Center and seven smaller research efforts around the world.

The Ethics and Governance of Artificial Intelligence Fund was created by Reid Hoffman, Pierre Omidyar and the Knight Foundation back in January; the intention was to ensure that social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers have a say in how AI is developed and deployed.

To that end, this first round of funding supports existing organizations working along those lines, as well as nurturing some newer ones.

The lion's share of this initial round, $5.9 million, will be split by MIT and Harvard, as the initial announcement indicated. Media Lab is, of course, on the cutting edge of many research efforts in AI and elsewhere; Berkman Klein focuses more on the legal and analysis side of things.

The fund's focuses are threefold:

Those two well-known organizations will be pursuing issues related to those (they're already working together anyway), but the seven smaller efforts are also being more modestly funded.

Digital Asia Hub, FAT ML and ITS Rio will be hosting conferences and workshops to which experts across fields will be invited, advancing and enriching the conversations around various AI issues. ITS Rio will also be translating debates on the topics, a critical task, since there are important thinkers worldwide and these conversations shouldn't be limited by something as last-century as native language.

On the research side, AI Now will be looking at bias in data collection and healthcare; the Leverhulme Center will be looking at interpretability of AI-related data; and Data & Society will be conducting ethnographically informed studies on the human element of AI and data: for example, how demographic imbalances in who runs real estate businesses might inform the systems they create and use.

Access Now (which doesnt really fit in either category) will be working to create a set of guidelines for businesses and services looking to conform to major upcoming data regulations in the EU.

"For this initial cohort, we looked for projects that fit our goal of building networks across fields, and that would complement the work of our anchor partners at the Media Lab and Berkman Klein," said Knight's VP of Technology and Innovation, John Bracken, in an email to TechCrunch.

"We think it's vital that civil society has a strong voice in the development of artificial intelligence and machine learning. We see these projects as part of a growing set of researchers, engineers, and policymakers who will be part of ensuring that these new tools are developed ethically."

Although the funds are in the public interest, they aren't just handouts; I asked Bracken whether there were any concrete expectations for the organizations involved.

"Absolutely," he said. "The discussion around artificial intelligence is no longer a far-off, speculative thing. Each of the grants we're making has deliverables planned for the next twelve months, and we'll be showcasing them as they launch."

We'll hear about them soon, no doubt.

A few million bucks may seem like a drop in the bucket among the herds of unicorns we track here at TechCrunch, but on the other hand it may seem cheap when the studies and events being funded come to fruition and result in the kind of productive dialogue this fast-moving field needs.

Link:

Ethics and Governance AI Fund funnels $7.6M to Harvard, MIT and ... - TechCrunch

UK regulators slap hospital involved in AI research project with Google’s DeepMind over data handling – GeekWire

The Royal Free Hospital in the U.K., which regulators said improperly handled patient records in an AI research project with Googles DeepMind. (Royal Free Hospital photo)

Google's DeepMind artificial intelligence research was dealt a minor setback Monday after U.K. regulators ruled that a hospital it partnered with on a medical diagnostic app improperly handled medical data, and Google admitted it was working too fast on the project.

The ruling from the Information Commissioner's Office on Monday said that a project between the Royal Free NHS Foundation Trust in London and DeepMind improperly handled patient data during a recent trial, failing to properly inform patients how their data was being used. It required the hospital to re-evaluate its data-handling policies and publish an audit of its decision-making process throughout the trial.

DeepMind and the Royal Free struck a deal in 2015 to let DeepMind access 1.6 million patient records in order to build an app that could help diagnose serious kidney issues, but the full extent of that deal was not made public until 2016. "Our investigation found that the Trust did carry out a privacy impact assessment, but only after Google DeepMind had already been given patient data. This is not how things should work," the ICO wrote in a blog post.

For its part, Google's DeepMind unit has not been accused by regulators of doing anything wrong, but the company admitted that patient privacy rights weren't at the forefront of its thinking as the trial unfolded.

"We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better," DeepMind wrote in a blog post.

The saga could make it harder for London-based DeepMind to find medical partners to work with in the U.K., which could slow the progress of artificial intelligence research into health issues. Google said the Streams app developed as part of the research was able to save nurses several hours a day processing test results and detected serious kidney issues faster than the traditional process would have allowed.

"No-one suggests that red tape should get in the way of progress," the ICO wrote. "But when you're setting out to test the clinical safety of a new service, remember that the rules are there for a reason."

Go here to read the rest:

UK regulators slap hospital involved in AI research project with Google's DeepMind over data handling - GeekWire

Ford is putting $1 billion into an AI startup, Detroit’s biggest investment yet in self-driving car tech – Recode

Ford has made Detroit's biggest investment yet in self-driving technology, acquiring a majority stake in artificial intelligence startup Argo AI for $1 billion, the company announced on Friday.

Argo AI was founded by two top engineers from Google and Uber who were both previously at Carnegie Mellon's Robotics Institute. The startup plans to use its expertise in AI to develop software for self-driving vehicles.

Bryan Salesky, the CEO of Argo AI, was the head of hardware at Alphabet's car division, Waymo, for the past three years. Peter Rander, Argo's COO, left Uber in September, where he was one of the top engineers for its self-driving division.

This is the largest investment a traditional auto manufacturer has made in self-driving technology. General Motors acquired self-driving startup Cruise for $1 billion last year, and Uber bought autonomous trucking company Otto for $680 million, also last year.

Ford will dole out the $1 billion over a five-year schedule but will immediately become the majority shareholder. The company declined to disclose its specific stake, but the investment would value Argo at over $1 billion.

Ford says Argo will remain headquartered in Pittsburgh and operate with substantial independence. Ford and Argo each fill two board seats, with a fifth held by an independent member. Ford plans to install Raj Nair, its head of research and development, and Vice President John Casesa on the board.

This is the most Ford has spent on autonomous technology. In 2016, the automaker acquired on-demand shuttle service Chariot for far below $1 billion and invested in Velodyne, a maker of Lidar technology. Ford says it plans to market fully self-driving cars by 2021.

While Ford will effectively own Argo, the AI developer plans to eventually license its software and sensor suite to other companies.

"Our view [is that], in the future, there will be a number of players that will have systems," Ford CEO Mark Fields told Recode in an interview. "There won't be just one winner. But at the same time we can offer that to other companies where it doesn't compromise our competitive advantages. We think that's a great opportunity to get even more scale and create some value for the companies."

"There's a lot of advantages to having this company be independent and operate with the agility of a startup," Salesky told Recode. "We know that in order for this technology to be fully realized and deployed at scale, we have to work with folks that know how to do that."

According to Fields and Rander, the most important reason Ford didn't acquire Argo outright was so the startup could attract top talent with offers of equity.

This is also not the first instance of cross-pollination among staffers from Uber, Google and Carnegie Mellon.

Both Rander and Salesky, who plan to have a team of 200 by the end of the year, were big losses for Alphabet's self-driving car company, Waymo, and for Uber.

As Recode first reported, Rander left Uber along with two other top self-driving engineers, mapping head Brett Browning and autonomy head Drew Bagnell, just a year or so after Uber famously raided Carnegie Mellon's Robotics Institute for top talent.

Rander's departure came in the wake of Uber's acquisition of Otto, a startup co-founded by former Google engineer Anthony Levandowski. The Otto founder was put in charge of Uber's entire self-driving division, which sources said contributed to staff defections.

For Waymo, Argo is the latest car startup to be hatched by one of its former engineers. In addition to Argo and Otto, Chris Urmson, Waymo's former lead technologist, is starting Aurora, as Recode first reported.

Then there's Nuro.ai, which was started by two former top executives of Google's self-driving car project: Jiajun Zhu and Dave Ferguson.

Though Google has been working on its self-driving technology far longer than any other tech company and has arguably the most advanced technology, many sources say there has been internal tension over the company's path to market.

It's no coincidence that many of those who've left what is now called Waymo to start their own companies are laser-focused on commercializing self-driving technology.

"We want to take a straight line path to market as much as we possibly can," Salesky said.


Read more from the original source:

Ford is putting $1 billion into an AI startup, Detroit's biggest investment yet in self-driving car tech - Recode

University of Illinois and IBM Researching AI, Quantum Tech – Government Technology

The University of Illinois Urbana-Champaign Grainger College of Engineering is partnering with tech giant IBM to bolster the college's research and workforce development efforts in quantum information technology, artificial intelligence and environmental sustainability.

According to a news release from the university, the 10-year, $200 million partnership will fund the construction of a new Discovery Accelerator Institute, where university and IBM researchers will collaborate on solving global challenges with emerging technologies such as AI.

Areas of study will include AI's potential to advance sustainable energy, new materials for CO2 capture and conversion, and cloud computing and security. Researchers will also explore ways to improve quantum information systems and quantum computing, which applies the rules of quantum mechanics to perform computations much faster than most computers in use today.

Rashid Bashir, dean of the Grainger College of Engineering, said the partnership will allow IBM and university researchers to work toward developing the technology of tomorrow, with sustainability in mind.

"We're looking for a new way to really bridge that gap [between academia and the tech industry] in a much more intimate way and expand our collective research and educational impact," he said. "In higher ed and industry, we need to come together to solve grand challenges to keep a sustainable planet, to provide high-quality jobs and develop a new economy."

"We had already been working with them in the AI space," said Jeffrey Welser, a vice president at IBM Research. "We realized we could take what we're doing here with AI, expand it to do some of the work in the hybrid cloud space, and think about what we do with that by advancing these base technologies."

"It's also using this as a test bed for what we call discovery acceleration, which is using technologies to discover new materials and new science that can help with societal problems," he continued. "In the case of this, we're focusing on carbon capture, carbon accounting and climate change."

As part of the initiative, Bashir said the company and faculty will team up to develop nondegree tech certification programs and professional development courses in IT-related fields. He said the goal will be to feed IT talent into the workforce, given the national shortage of tech professionals in artificial intelligence, data science and quantum computing.

"Working with IBM, they're interested in hiring the workforce of tomorrow. Building that talent from early in the pipeline and diversifying the STEM talent pipeline is something we want to work on together," he said, adding that the partnership also aims to diversify the IT talent pool by bringing students of color and women into emerging fields like quantum computing.

Welser said the Discovery Accelerator Institute will complement a related company initiative: the IBM Skills Academy, a training certification program that provides over 330 courses relating to artificial intelligence, cloud computing, blockchain, data science and quantum computing.

"We have courses that help train professors in specific areas of these skills, and they can use those materials in their coursework and create their own accredited courses," he said. "We've realized there really is a need for having these kinds of courses that don't necessarily go into a full university [degree] but could be more certifications for students, people who want to learn about an area and get a certain level of certification."

In addition to research and course development efforts, Bashir noted that the institute will give students close access to one of the world's largest tech employers.

"We believe we can work together to prepare more talent through our educational pipeline, which IBM can have firsthand access to," he said. "If they are working together with us, then they get to know those students."

The Illinois initiative comes two months after the tech company announced a partnership with Cleveland Clinic to study hybrid cloud, AI and quantum computing technologies to accelerate advancements in health care and life sciences. As part of that partnership, IBM plans to install its first private-sector, on-premises quantum computing system in the U.S.

Here is the original post:

University of Illinois and IBM Researching AI, Quantum Tech - Government Technology

Five Trends Driving the Next Wave of AI and AIOps – JAXenter

The evolution of AI and AIOps today is being driven by five key trends which, while each important on its own, are much more striking when taken together. Collectively, they force a concentration on the kinetic aspects of AI. Let me outline the five biggest trends affecting AI and AIOps today and explain why IT organizations should track their development and their implications.

SEE ALSO: Machine learning in finance: From buzzword to mainstream

The first trend is widely recognised: DevOps has made a continuously evolving IT environment a reality. IT has always evolved rapidly; that is a constant feature of the industry. But now, with DevOps adopted in many organizations, we're going to another level. By design, we've introduced continuous evolution into our environments, and constancy has been designed out. AI strategies have always assumed a degree of constancy in the environment. This is no longer the case. IT environments are now expected to change continuously at all layers of the infrastructure and application stack, and these changes can frequently transpire in a matter of microseconds. Traditional data sampling timeframes, as well as human and robotic reaction times to actual or potential incidents (at best, a matter of seconds), are no longer effective or even informative.
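
To make the sampling problem concrete, here is a minimal sketch of the difference between polling a metric on a fixed interval and scoring every observation the moment it arrives. This is an illustration only, not any vendor's implementation; the smoothing factor, threshold and latency figures are all assumptions:

```python
# Minimal sketch: streaming anomaly scoring versus fixed-interval sampling.
# The smoothing factor, threshold and sample data are illustrative assumptions.

class StreamingDetector:
    """Scores each observation as it arrives, using exponentially weighted
    estimates of mean and variance, so a short-lived spike is judged
    immediately rather than averaged away by a coarse polling window."""

    def __init__(self, alpha: float = 0.05, threshold: float = 4.0, min_std: float = 1.0):
        self.alpha = alpha          # EWMA smoothing factor (assumed)
        self.threshold = threshold  # z-score treated as anomalous (assumed)
        self.min_std = min_std      # floor to avoid warm-up false alarms (assumed)
        self.mean = None
        self.var = 0.0

    def observe(self, value: float) -> bool:
        """Return True if this single observation is anomalous."""
        if self.mean is None:       # warm up on the first observation
            self.mean = value
            return False
        z = abs(value - self.mean) / max(self.var ** 0.5, self.min_std)
        # Update the baseline *after* scoring, so a spike does not
        # inflate the statistics it is being judged against.
        delta = value - self.mean
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
        return z > self.threshold

detector = StreamingDetector()
latencies_ms = [12, 11, 13, 12, 240, 12, 11]   # one short-lived spike
print([detector.observe(x) for x in latencies_ms])
# -> [False, False, False, False, True, False, False]
# Averaged over a long polling window containing thousands of samples,
# the single spike would barely move the mean and the incident would be hidden.
```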

The second trend is economic. Pressures exacerbated by the pandemic mean that users now have an increased demand for automation of tasks. Until now, AI has been an exercise in the automation of analysis. Today, in almost any business context, users want AI to take the next step and drive automation to make humans more productive, in IT remediation for example. Traditionally, there has been resistance to this. But if you couple increased complexity with new economic realities, automation takes centre stage as AI evolves to drive increased productivity as a key benefit. If an AI solution can't deliver on automation, it will be perceived as incomplete.

The next trend is a consequence of increased complexity. It is now impossible to disentangle security management issues from operations management issues. There is no way of determining whether a given anomaly is a consequence of an unpublicised change, a system error, an unexpected change in user behaviour, or a malicious intervention. The attempt to run security and operations management side by side from start to finish is on the verge of breaking down. In the very near future, human and automated IT system data analysts will observe and analyse anomalies and surprising patterns purely with regard to the degree of their unexpectedness. The prime directive will be to pick up what's new and surprising as early as possible. The strategy for dealing with the unexpected will only come as a next step.
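
One way to read this "unexpectedness first" posture is as novelty scoring over event types, deliberately blind to whether the cause is operational or malicious. The sketch below is a toy illustration under that reading, with invented event names; a real system would score far richer features than a type string:

```python
# Toy sketch of cause-agnostic "unexpectedness" ranking. Events are scored by
# the negative log-probability of their type in the history seen so far,
# regardless of whether they come from ops, security or user behaviour.
# Event names are invented for illustration.

import math
from collections import Counter

class SurpriseRanker:
    def __init__(self) -> None:
        self.counts: Counter = Counter()
        self.total = 0

    def score(self, event_type: str) -> float:
        """Higher score = more surprising. Laplace smoothing keeps
        never-before-seen event types finite but top-ranked."""
        p = (self.counts[event_type] + 1) / (self.total + len(self.counts) + 1)
        self.counts[event_type] += 1
        self.total += 1
        return -math.log(p)

ranker = SurpriseRanker()
stream = ["login_ok"] * 50 + ["disk_warn"] * 5 + ["login_from_new_asn"]
scored = [(event, ranker.score(event)) for event in stream]
# The never-seen event ranks highest; deciding whether it is a config
# change, a system error or an intrusion is deferred to the next step.
print(max(scored, key=lambda pair: pair[1]))  # ('login_from_new_asn', ~4.06)
```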

Remote work, the fourth trend, has been in place for a long time but has been accelerated by the drastic social distancing measures adopted in response to the pandemic. All teams addressing incidents will now be virtual and ephemeral, working on any given situation via a virtual Network Operations Centre. As people work from anywhere, teams will be assembled cross-functionally depending on the problem at hand, and disbanded once a solution is implemented.

Although the five-dimensional approach to AIOps advocated by Moogsoft did not anticipate the Covid pandemic per se, it did anticipate the need for supporting an increasingly dispersed, virtualised, and ephemeral ITOSM (IT Operations and Service Management) workforce. The role of AI in collaboration enablement and virtual learning has already moved front and centre for many enterprises, and that role will only become more prominent as the post-Covid new normal becomes established.

Finally, because of continuous change and the requirement for short-term ROI on technology investments, any attempt to build ontologies, service models, or configuration management databases that have no predictive power will be seen as a useless project.

Historically, there has been scepticism surrounding such projects, which promise to contain a single record of truth. However, given that changes are continually taking place in the environment, and given the business need for immediate returns, any modelling that takes place needs to be an engine for prediction; if it's not that, it's relatively pointless.

SEE ALSO: How enterprise companies are changing recruitment with AI

Any one of these trends operating individually would go a long way towards putting automation at the centre of both AIOps strategies and AIOps technologies. All five of them working together, particularly under the pressure of the recent pandemic, mean that kinetic AIOps will be a critical element of digital business in the very near future. Crucially, any AIOps solution will be seen as redundant unless it robustly supports the automation of any remedial or preventative actions it recommends.

I will go out on a limb and predict that, in two years' time, any AIOps solution whose functionality is confined to the first three dimensions (data selection, pattern discovery, and causal inference) and doesn't support collaboration and automation will not be acceptable to effective enterprises.

Read more from the original source:

Five Trends Driving the Next Wave of AI and AIOps - JAXenter