AI bias may worsen COVID-19 health disparities for people of color – Healthcare IT News

Developers and data scientists have long said that biased data often leads to biased models when it comes to machine learning and artificial intelligence.

A new article in the Journal of the American Medical Informatics Association argues that such biased models may further the disproportionate impact the COVID-19 pandemic is having on people of color.

The article's authors, Eliane Röösli, of the Swiss Federal Institute of Technology, and Brian Rice and Tina Hernandez-Boussard, of Stanford University, noted that even as the global research community has rushed to push out new findings, it risks producing biased prediction models.

"If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden," wrote the researchers.

WHY IT MATTERS

The COVID-19 pandemic has had a hugely outsized impact on people of color, worsened by existing disparities in healthcare and systemic racism.

At the same time, researchers noted, COVID-19 prediction models can present serious shortcomings, especially regarding potential bias.

In a recent systematic review of COVID-19 prediction models, they wrote, "The most frequent problems encountered were unrepresentative data samples, high likelihood of model overfitting, and imprecise reporting of study populations and intended model use."

Researchers flagged the danger in regarding AI as intrinsically objective, particularly when building models for optimal allocation of resources, including ventilators and intensive care unit beds.

"These tools are built from biased data reflecting biased healthcare systems and are thus themselves also at high risk of bias even if explicitly excluding sensitive attributes such as race or gender," they wrote.

For example, they argued, models that include comorbidities associated with COVID-19 could reinforce the structural biases that lead to some groups experiencing those comorbidities.
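
The mechanism is easy to demonstrate. Below is a minimal, fully synthetic sketch (invented data and variable names, not the authors' models) of how a risk model that never sees a sensitive attribute can still produce group-dependent outputs, because a correlated comorbidity feature acts as a proxy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented data: a sensitive attribute the model never sees, plus a
# comorbidity score that correlates with it (a proxy variable).
group = rng.integers(0, 2, n)                  # 0/1, excluded from the model
comorbidity = rng.normal(0.8 * group, 1.0, n)  # correlated with group
severity = rng.normal(0.0, 1.0, n)
y = (severity + 0.5 * comorbidity + rng.normal(0.0, 1.0, n) > 1.0).astype(int)

X = np.column_stack([comorbidity, severity])   # note: no 'group' column
model = LogisticRegression().fit(X, y)

# Audit: predicted risk still differs by group, via the proxy feature.
for g in (0, 1):
    mean_risk = model.predict_proba(X[group == g])[:, 1].mean()
    print(f"group {g}: mean predicted risk = {mean_risk:.2f}")
```

Auditing model outputs per group in this way is the kind of check that the transparency and reporting measures discussed below are meant to make possible.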

"Resource allocation models must ... go beyond their basic utilitarian foundation to avoid further harming minority groups already suffering the most from COVID-19, based on health inequalities rooted in prior systemic discrimination," they wrote.

To manage these challenges, the researchers suggested implementing transparency frameworks and reporting standards, including publicizing the source code of AI models.

They also encouraged regulatory infrastructure that prioritizes broad-based data sharing, noting that models developed at academic healthcare systems may not represent the general U.S. population.

"COVID-related data are being produced at incredible speed but these data remain siloed within each country or academic institute, in part due to a lack of interoperability and in part due to a lack of appropriate incentives," they wrote.

THE LARGER TREND

Stakeholders have presented the prevention of algorithmic bias as an integral part of ethical AI model development.

In early 2019, the Duke-Margolis Center for Health Policy released a report that included bias mitigation in AI as a priority for developers, regulators, clinicians and policymakers, among others. Health systems, the authors wrote, will need to develop best practices that can address any bias introduced by the training data.

Later that year, U.S. Sens. Cory Booker, D-N.J., and Ron Wyden, D-Ore., urged the Trump administration and major insurers to combat racial bias in healthcare data algorithms.

"In healthcare, there is great promise in using algorithms to sort patients and target care to those most in need. However, these systems are not immune to the problem of bias," said the senators.

ON THE RECORD

"There is hope that AI can help guide treatment decisions, including the allocation of scarce resources within this crisis. However, the hasty adoption of AI tools bears great risk due to biased, unrepresentative training data and a lack of a regulated COVID-19 data resource for validation purposes," wrote researchers in the JAMIA article.

"Given the pervasiveness of biases, a failure to proactively develop comprehensive mitigation strategies during the COVID-19 pandemic risks exacerbating existing health disparities and hindering the adoption of AI tools capable of improving patient outcomes," they wrote.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Healthcare IT News is a HIMSS Media publication.


Artificial Intelligence (AI) for Telecommunication Market 2020 Recent Industry Developments and Growth Strategies Adopted by Top Key Players Worldwide…

The Artificial Intelligence (AI) for Telecommunication Market research report has been added to the Report Ocean database. The report details the market size, growth spectrum, and competitive scenario of the Artificial Intelligence (AI) for Telecommunication Market over the forecast timeline. It also provides a detailed analysis of this business landscape, covering current market trends, current revenue, industry share, periodic deliverables, profit expectations, and the growth rate registered during the estimated timeframe.

The overall Artificial Intelligence (AI) for Telecommunication Market size has been derived using both primary and secondary sources. The research process begins with exhaustive secondary research, using internal and external sources to obtain qualitative and quantitative information related to the market. Primary interviews were also conducted with industry key opinion leaders (KOLs), VPs, and valuation experts to validate the data and analysis.

Request a free sample report at: https://www.reportocean.com/industry-verticals/sample-request?report_id=43415

Impact of COVID-19 on the Artificial Intelligence (AI) for Telecommunication Market: this report provides details on the COVID-19 impact.

Get an in-depth analysis of the COVID-19 impact on the Artificial Intelligence (AI) for Telecommunication Market

We have analyzed the impact of COVID-19 on the product industry chain

We have analyzed the impact of COVID-19 on various regions and major countries.

The impact of COVID-19 on the future development of the industry is pointed out.

Some of the salient features of the report include:

The market estimates and forecasts have been validated through extensive secondary and primary research and a strict in-house quality-check process

Along with the quantitative data, the report incorporates an in-depth qualitative analysis of the current and upcoming trends and developments impacting market demand across the globe

Inclusion of models such as Porter's Five Forces, SWOT analysis, and PEST analysis gives a 360-degree view of the overall market scenario

The competitive landscape chapter of the shared sample pages includes profiles of the key players operating in the market, based on parameters such as product portfolio and strategic initiatives/recent developments. Kindly note that this section is fully customizable; we can profile companies as per your interest.

Competitive Intelligence:

Key parameters which define the competitive landscape of the Artificial Intelligence (AI) for Telecommunication Market:

Company Market Share

Top Market Strategies

Company Profiles

o Company overview

o Company snapshot

o Product portfolio

o Key strategic moves and developments

Production and Share by Player

Mergers & Acquisitions, Expansion

Market Vendor Ranking Analysis

The major market players included in this report are:

Atomwise Inc., Lifegraph, Zebra Medical Vision Inc., Baidu Inc., Microsoft Corporation, IBM

Market Segmentation:

Segmentation divides the target market into smaller sections or segments, such as product type, application, and geographical region, to optimize marketing strategies, advertising techniques, and the global as well as regional sales efforts for the Artificial Intelligence (AI) for Telecommunication Market.

By Component:

Tools, Services

By Application:

Traffic Classification, Resource Utilization & Network Optimization, Anomaly Detection, Prediction, Network Orchestration

Dissecting the Artificial Intelligence (AI) for Telecommunication Market with respect to the geographical outlook:

The document delivers an exhaustive analysis of the regional scope of the Artificial Intelligence (AI) for Telecommunication Market, categorizing it into North America, Europe, Asia Pacific, Middle East & Africa, and Latin America.

Market share of each regional competitor of this industry

The study also provides details on the estimated growth rate of each territory over the study period.

Some of the Major Highlights of TOC covers:

Artificial Intelligence (AI) for Telecommunication Market Analysis by Application

Consumption and Market Share by Application

Artificial Intelligence (AI) for Telecommunication Market Manufacturing Analysis

Key Raw Materials Analysis

Market Concentration Rate of Raw Materials

Manufacturing Cost Analysis

Labor Cost Analysis

Manufacturing Cost Structure Analysis

Manufacturing Process Analysis of Artificial Intelligence (AI) for Telecommunication Market

Artificial Intelligence (AI) for Telecommunication Market Dynamics

Growth Prospects

See-Saw Analysis

Market Drivers

Restraints

Market Challenges

Market Opportunities

Artificial Intelligence (AI) for Telecommunication Market Industry Analysis

Porter's Five Forces Model

PEST Analysis

Value Chain Analysis

Key Buying Criteria

Regulatory Framework

Investment Vs Adoption Scenario

Analyst Recommendation & Conclusion

Available Customizations

With the given market data, Report Ocean offers customizations based on the company-specific needs.

For more information and a discount on this report, ask your query at: https://www.reportocean.com/industry-verticals/sample-request?report_id=43415

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe, and Asia.

Contact Us: +1 888 212 3539 (US), +91-9997112116 (Outside US). Contact Person: Tom. Email: [email protected]


Before we put $100 billion into AI – VentureBeat

America is poised to invest billions of dollars to remain the leader in artificial intelligence as well as quantum computing.

This investment is critically needed to reinvigorate the science that will shape our future. But in order to get the most from this investment, we have to create an environment that will produce innovations that are not just technical advancements but will also benefit society and uplift everybody in our society.

This is why it is important to invest in fixing the systemic inequalities that have sidelined Black people from contributing to AI and from having a hand in the products that will undoubtedly impact everyone. Black scholars, engineers, and entrepreneurs currently have little-to-no voice in AI.

There are a number of bills coming through the House and the Senate to invest up to $100 billion in the fields of AI and quantum computing. This legislation, for example the bill from the House Committee on Science, Space, and Technology, makes reference to the importance of ethics, fairness, and transparency, which are great principles but are imprecise and lack clear meaning. The bicameral Endless Frontier Act would effect transformational change in AI but is similarly unclear about how it would remedy institutional inequity in AI and address the lived experience of Black Americans. What these bills do not address is equal opportunity, which has a more precise meaning and is grounded in the movement for civil rights. These substantial investments in technology should help us realize equity and better outcomes in tech research and development. They should ensure that the people building these technologies reflect society. We are not seeing that right now.

As a Black American, I am deeply concerned about the outcomes and ill-effects that this surge of funding could produce if we do not have diversity in our development teams, our research labs, our classrooms, our boardrooms, and our executive suites.

If you look at companies building AI today, like OpenAI, Google DeepMind, Clearview, and Amazon, they are far from having diverse development teams or diverse executive teams. And we are seeing the result play out in the wrongful AI-triggered arrest of Robert Williams in January, as well as many other abuses that go under the radar.

Thus, we need to see these substantial government investments in AI tied to clear accountability for equal opportunity. If we can bring equal opportunity and technological advancement together, we will deliver the potential of AI in a way that will benefit society as a whole and live up to the ideals of America.

So, how do we ensure equal opportunity in tech development? It starts with how we invest in scientific research. Currently, when we make investments, we only think about technological advancement. Equal opportunity is a non-priority and, at best, a secondary consideration.

This is the entrenched system of innovation that we are used to seeing. Scientific research is the wellspring that fuels advancements in our productivity and quality of life. Science has yielded an incredible return on investment across our history and is continually transforming our lives. But we also need innovation inside our engine of innovation. It would be a mistake to assume that all scientists are enlightened enough to engage, train, mentor, cultivate, and include Black people. We should always ask: What is the bottom line that incentivizes and shapes our scientific effort?

The fix is simple really and something we can do almost immediately: We must start enforcing existing civil rights statutes for how government funds are distributed in support of scientific advancement. This will mostly affect universities, but it will also reform other organizations that are leading the way in artificial intelligence.

Think of the government as the venture capitalist that specifically has the interest of the people as its bottom line.

If we start enforcing existing civil rights statutes, then federal funding of artificial intelligence will create a virtuous cycle. It is not just advanced technology and ideas that come out of that funding. It is also the people produced from supported research labs who are trained in how to engineer and innovate.

And research labs have an impact on the science classrooms. The faculty and students engaged in research are also educating the next generation innovation workforce. They impact not only who is in the classroom environment but also who gets opportunities on the development teams that define the industry. Government funding should remind universities of their responsibility to mentor and grow future generations, not just pick winners and losers by grade policing.

If we fix how we invest in science with this massive influx of money, we can produce more enlightened innovators that will produce better products and AI that will help remedy some of the troubling things we are seeing right now with the technology. We will also be able to produce new technologies that expand our horizons beyond our current imaginations and dogma.

If a research lab or a university degree program is not diverse and not creating equal opportunity as required by law, then it should be ineligible for federal funding, including research grants. We should not fund researchers in computer science departments that have only yielded token representation of Black students in their graduating classes. We should not fund researchers who have received millions in public money but have never successfully mentored a Black student. Instead, we should reward researchers who achieve both inclusion of Black scholars and scientific excellence in their work. We should incentivize thoughtful and considerate mentorship by researchers, as we would want for ourselves, our own children, and our tuition dollars.

We should look at equal opportunity the same way we look at investing in the stock market. Would you invest in a stock that has not shown any growth, that has stagnated and come to perform badly? It is unlikely anybody would put their own money in that stock unless they saw evidence that growth would occur. The same should hold true for university departments that build their prestige and economic viability primarily from money granted by the American taxpayer.

Who would be responsible for making these decisions? Ideally, it would be done by the federal funding agencies themselves: the National Science Foundation, the National Institutes of Health, the Department of Defense, etc. These agencies have yielded an immense return on investment that has enabled American innovation to grow exponentially over the last century, but their view of merit needs to be rethought in the context of 2020 and the realities of our new century.

I wrote earlier that this was an easy fix. And it is, on paper. But change will be difficult for research institutions because of their entrenched institutional culture. The people who are in positions to make the necessary change have come up through the system. And so they do not necessarily see the solution or the problem.

I am a Professor of Computer Science and Engineering at the University of Michigan. I have worked in robotics and artificial intelligence for over 20 years. I know the feelings of elation and validation from winning large federal grants to support my research and my students. Few words can describe the sense of honor and acknowledgment that comes with federal support of one's research. I still swell with pride every time I think about my opportunity to shake President George W. Bush's hand in 2007 and the congratulatory note in 2016 from my congressional representative, Rep. Debbie Dingell, for my National Robotics Initiative grant.

I also understand from experience how hard it is to see things from the inside. If we make the analogy to law enforcement, it is very much like the police policing the police. We are the people that are producing the technology innovation and benefiting from the funding, but we are also responsible for reviewing ourselves. There is little external accountability, with only evolving attempts at broadening participation from within.

I am neither a lawyer nor a member of the civil service, to be very clear. That said, this moment in our history is an opportune time to reimagine equal opportunity throughout the federal research portfolio. One possibility is through the creation of an independent agency that analyzes and enforces equal opportunity across programs for federal funding of scientific research, in contrast to dividing this responsibility among individual sub-agencies solely within the Executive Branch. Regardless of implementation, it is essential that we continually oversee the policies and practices of funding in artificial intelligence to make sure there is proper representation and diversity included and to ensure that our federal funding is not going to be spent without consideration of different viewpoints on how technology should be built, and of the larger systemic issues at play.

The time to act on this is now, before the funding begins. When it comes to discrimination and racism, we must address both the hidden disparate impact in our systems of innovation and the traditional explicit disparate treatment (such as that vividly portrayed in the 2016 movie Hidden Figures).

For those who want to act, you can first look at your own organization and your own working environments and see whether you are living up to the civil rights statutes. If you are interested in translating protest into policy, write to your representatives in Congress and your elected officials and tell them equal opportunity in AI is important.

We should also ask our presidential candidates to commit to the kind of accountability I have outlined here. Regardless of who is elected, these issues of artificial intelligence and equal opportunity are going to define our country for the next few decades. It is a national priority that demands our attention at the highest levels. We should all be asking who is developing this technology and what is their motivation. There is so much to be optimistic about in artificial intelligence – I would not be in this field if I did not believe that. But getting the best out of AI requires us to listen to all perspectives from all walks of life, engage with people from all zip codes across our country, embrace our global citizenship, and attract the best people from around the world.

I truly hope someday equal opportunity in AI will just be commonplace and not require such challenging discussions. It would be a lot more fun to make the case for why nonparametric belief propagation will become a better option than neural networks for more capable and explainable robot systems.

Chad Jenkins is an Associate Professor of Computer Science and Engineering and Associate Director of the Michigan Robotics Institute at the University of Michigan. He is a roboticist specializing in computer vision and human-robot interaction and leader of the Laboratory for Progress. He is a cofounder of BlackInComputing.org.


AI-based Drug Discovery Markets, 2030 – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "AI-based Drug Discovery Market: Focus on Deep Learning and Machine Learning, 2020-2030" report has been added to ResearchAndMarkets.com's offering.

The "AI-based Drug Discovery Market: Focus on Machine Learning and Deep Learning, 2020-2030" report features an extensive study of the current market landscape and future potential of the players engaged in offering AI-based services, platforms and tools for the discovery of novel drug candidates. The study presents an in-depth analysis, highlighting the capabilities of various stakeholders engaged in this domain.

One of the key objectives of this report was to estimate the existing market size and the future growth potential within the AI-based drug discovery market. We have developed informed estimates on the financial evolution of the market, over the period 2020-2030.

Amongst other elements, the report features:

The report also provides details on the likely distribution of the current and forecasted opportunity across:

Key Questions Answered

Key Topics Covered:

1. Preface

2. Executive Summary

3. Introduction

4. Market Landscape

5. Company Profiles

6. AI-Based Healthcare Initiatives of Technology Giants

7. Partnerships And Collaborations

8. Funding And Investment Analysis

9. Company Valuation Analysis

10. Cost Saving Analysis

11. Market Forecast

12. Conclusion

13. Executive Insights

For more information about this report visit https://www.researchandmarkets.com/r/hgwin9


AI, responsible sustainability, and my broken washing machine – TNW

I had just sat down to work one Saturday afternoon when the familiar sound of the clothes washer, starting its spin cycle like an airplane taking off, began humming in the background. It was the sort of familiar noise that is both comforting and quickly drowns out the background, allowing me to sink into a nice flow with some engineering work.

Suddenly, with a loud thump, the sound of a rattle, and something too awful to describe, the spinning machine came to a dramatic halt. I knew immediately it was the washing machine, as that peaceful hum was no longer softly blanketing the background. An uncomfortable silence filled the void it left.

I walked over to the machine and made a quick inspection. Sure enough, there was a dim indicator on the front panel that read Err as my clothes sat in a soapy swamp. My first instinct was to go online and seek some machine first aid at ifixit.com. As an engineer myself, it's almost a reflex to begin the troubleshooting process, no matter the medium.

Down the rabbit hole I went, educating myself on condenser units, evacuation pumps, controller computers, and the impressive array of components used to assemble these machines. Eventually some sort of alarm went off in my mind, and I was hit with the heavy reality of having wasted several hours attempting to gain expertise in a field I barely knew. So, I called the repair line and booked a repair.

Easy enough. In modern times, we have access to nearly immediate service, only a phone call or screen-tap away. Though as I sat back down at my computer, I began to wonder if there was another, more efficient way to allow the manufacturer to diagnose and service my washer. After all, the selfish side of me reasoned, it would save me, as the consumer, some additional total cost of ownership (TCO) over the life of the appliance.

On the flip side, what if I had just declared the device defective, irreparable, or obsolete? Would it have made its way to a recycling yard or trash heap as I enjoyed the delivery of a shiny new product? Appliance manufacturers are producing products with shorter lifespans than ever and higher failure rates than their legacy counterparts.

This drives earlier whole-unit replacements and generates more waste. However, I would add that the millennial generation has a distaste for such environmental or corporate villainy, quickly sniffing out its presence and choosing the more sustainable option instead.

Consumer electronics are attractive, and the provocation of lust for the next best thing is always innate in their marketing strategies. However, what if we had another option where our devices could detect or predict failure, suggesting and even ordering replacement parts for us in the meantime? What if we could then be guided by the manufacturer through a mobile app, giving us the opportunity to save time and money by walking us through the replacement process?

If we were constrained for time, at least the manufacturer could realize savings, both environmentally and in labor cost, by invoking only one trip for the service technician. Even better, what if the device could fix itself?
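
To make the failure-detection half of that idea concrete, here is a toy sketch of how it might look; this is my own illustration, not any manufacturer's firmware, and the readings and thresholds are invented. The machine simply flags vibration readings that drift far from their recent baseline:

```python
from collections import deque
import math

def make_vibration_monitor(window=50, z_threshold=4.0):
    """Return a checker that flags readings far outside the rolling baseline."""
    history = deque(maxlen=window)

    def check(reading):
        alarm = False
        if len(history) >= 10:                 # wait for a minimal baseline
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var) or 1e-9       # avoid division by zero
            alarm = abs(reading - mean) / std > z_threshold
        history.append(reading)
        return alarm

    return check

check = make_vibration_monitor()
hum = [1.0 + 0.01 * i for i in range(60)]      # the steady spin-cycle hum
thump = [9.0]                                  # the drum letting go
print([r for r in hum + thump if check(r)])    # -> [9.0]
```

A real appliance would pair a detector like this with a lookup from error signature to part number, which is where the suggest-and-order-replacement-parts step could hook in.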

Though the idea, at some levels, seems trivial, one critical piece needed to construct such a tool has been missing: visual cognition. Though computers have been getting better at recognizing individual parts, an ensemble of image recognition, cognition, and communication (chatbot) is necessary for this type of automation.

It's at the intersection of these three that we can begin to create fully automated solutions, rapidly decreasing our ejection of defective technology to landfills and simultaneously reducing our environmental impact, in the end saving money for both ourselves and the producers of the products we enjoy.

Taking this one step further, with accurate failure reporting and detection, manufacturers can design products that perform their tasks more reliably and efficiently. Instead of ending up in the trash, as 70.8 percent of consumer electronics eventually do, fully functional hand-me-downs could enrich less affluent regions, with the maintenance cost also reduced through the above means.

Though I regularly defend the positive impact of AI to my friends who entertain a dystopian viewpoint, at the intersection of industries are possibilities that I gloss over on a daily basis. To a larger degree, even the receipt of defective machine parts for recycling could then be automated, allowing the return path to be optimized in a way that isn't currently possible.

My washing machine is now fixed, and I have some fresh, clean clothes. However, next time an appliance breaks and I'm tempted to discard it, I would love to have artificial intelligence take care of the process, saving both the environment and resources at the same time.


AIoT: The Power of Combining AI with the IoT – Reliable Plant Magazine

When people hear the terms artificial intelligence (AI) and the internet of things (IoT), most think of a futuristic world often depicted in the movies. However, many of those predictions are now coming to fruition in this fourth industrial revolution that is currently transforming the way the world works in every way imaginable.

Even though AI and the IoT are still in their relative infancy, these two technologies are now being combined across every industry in scenarios where information and problem-solving can improve outcomes for all stakeholders.

The last great convergence of this magnitude occurred in the late 1990s as mobile phones and the internet were on a collision course that has changed the course of human history. Most people now hold more computing power in the palm of their hand than was required to put a man on the moon in 1969. The convergence of AI and the IoT are about to do the same thing on an even greater scale.

The ability to capture large amounts of data has exploded in the last three to five years. Along with these advances come new threats and concerns about privacy and security. Large volumes of user data and company proprietary information are tempting targets for dark web hackers and even government entities around the world. There are also new responsibilities that come with this increased capability.

Sensors can now be applied to everything. This means that infinitely more data can be collected from every process or transaction in real time. IoT devices are the front line of this data collection process in manufacturing environments, customer service departments and consumer products in people's homes. Any device with a chipset has the potential to be connected to a network and begin streaming large swaths of data 24/7.

Complex algorithms offer the capability to perform predictive analytics from every conceivable angle. Machine learning (ML), a subset of artificial intelligence, continues to upgrade workflows and simplify problem solving.

Companies can now capture all the meaningful data surrounding their processes and problems and develop specific solutions for real-world challenges within the organization to improve reliability, efficiency and sustainability.

While AI and the IoT are impressive superpowers in their own right, thanks to the concept of convergence, 1+1=3. The IoT improves the value of AI by allowing real-time connectivity, signaling and data exchange. AI boosts the capabilities of the IoT by applying machine learning to improve decision making.
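
As a minimal sketch of that loop (all data here is invented; this is not any vendor's system), an IoT gateway might train an anomaly detector on readings from healthy equipment and then score live readings as they stream in:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Invented training data: temperature and vibration readings streamed
# from a healthy machine by an IoT sensor.
normal = rng.normal(loc=[60.0, 0.5], scale=[2.0, 0.05], size=(1000, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New readings arrive in real time; -1 marks a suspected anomaly.
incoming = np.array([
    [61.2, 0.52],   # normal operation
    [85.0, 1.40],   # overheating and shaking
])
print(model.predict(incoming))  # e.g. [ 1 -1 ]
```

The IoT side supplies the continuous stream; the ML side turns it into a decision that can trigger maintenance before a failure occurs.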

Many in the industry are now referring to this convergence simply as AIoT. Presently, many AIoT applications are fairly monolithic, as companies build the expertise and systems to deploy and support these powerful technologies across their entire organization. The coming years will see this convergence allow more optimization and networking, which will create even more value.

Some of the most well-respected minds have predicted full digital integration between humans and computers by the year 2030. Between this and ongoing advances in automation and robotics, up to 40 percent of the current workforce could be replaced by technology within the next 10-15 years.

Solutions providers and hardware manufacturers are already in full swing to take advantage of this digital technology gold rush and position themselves in the evolving industrial landscape. Forward-looking companies like Amazon are offering re-education and training opportunities for employees in soon-to-be obsolete job functions.

Convergence is a concept everyone should become familiar with, as all manner of technology discoveries and advances are being combined to innovate and disrupt the way the entire world lives, works and plays.

Joseph Zulick is a writer and manager at MRO Electric and Supply.


Lunit Expands Collaboration with GE Healthcare to Advance AI Adoption – Imaging Technology News

July 14, 2020 – Lunit, a leading medical AI startup, announced the expansion of its collaboration with GE Healthcare. This collaboration between GE Healthcare and Lunit will help make AI algorithms more accessible to clinicians, alleviate clinical strain and streamline workflows, supporting better patient outcomes.

GE Healthcare recently introduced its Thoracic Care Suite, featuring a collection of eight artificial intelligence (AI) algorithms from Lunit INSIGHT CXR. The AI suite quickly analyzes chest X-ray findings and flags abnormalities to radiologists for review, including pneumonia, which may be indicative of COVID-19, as well as tuberculosis, lung nodules and other radiological findings.

This collaboration between GE Healthcare and Lunit is one of the first of its kind to bring commercially available AI products from a medical AI startup to an existing X-ray equipment manufacturer, making Lunit INSIGHT CXR available via Thoracic Care Suite to GE Healthcare's thousands of global fixed, mobile, and R&F X-ray customers at the point of sale.

"As a startup company, our vision is to have AI be recognized as the new standard of care," said Brandon Suh, CEO of Lunit. "We have been applying our AI to various types of medical images. Among them, Lunit INSIGHT CXR is one of our major products that has been commercialized since a few years ago. To have our AI made available with a market-leading vendor like GE Healthcare, especially as part of the Thoracic Care Suite, is a significant advancement in delivering solutions to various customers within GE Healthcare's install base and bringing us all one step closer to embracing AI as a part of today's standard of care. We will continue to push forward and cooperate with market leaders through extended partnerships and collaborations, increasing the number of global use cases of our AI."

Lunit's original algorithm, Lunit INSIGHT CXR, is designed to provide accurate and instant analysis of chest X-ray images by mapping the location of the findings and displaying a scored calculation of their actual existence. In a COVID-19 setting, Lunit INSIGHT CXR can be useful in quickly identifying high-risk cases as well as monitoring patients' progression and regression of mild respiratory symptoms. The algorithm performs with an area under the curve (AUC) of 97-99%, and according to studies published in Radiology and JAMA Network Open, a physician can see their performance increase by up to 20% when interpreting with the assistance of Lunit's AI.
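
For context on that figure, AUC is a ranking measure rather than a simple accuracy rate: it is the probability that a randomly chosen abnormal image receives a higher score than a randomly chosen normal one. A tiny illustration with invented labels and scores:

```python
from sklearn.metrics import roc_auc_score

# Invented example: 1 = abnormal chest X-ray, 0 = normal; scores are a
# model's predicted probabilities of abnormality.
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_score = [0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6]

# Prints 1.0 here, since every abnormal case outranks every normal one.
print(roc_auc_score(y_true, y_score))
```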

"Like Lunit, GE Healthcare is committed to developing and providing AI solutions to help alleviate our customers' most pressing challenges," says Katelyn Nye, General Manager of GE Healthcare mobile X-ray and AI products. "We've watched Lunit's success closely over the past several years and are excited to marry our expertise as a well-established X-ray manufacturer with the nimbleness of a startup like Lunit. We believe that by featuring Lunit INSIGHT CXR as a part of our Thoracic Care Suite we can meet our customers' most pressing needs and expand AI adoption in healthcare."

Lunit is also actively working with GE Healthcare to integrate its AI solutions into GE's Edison Open AI Orchestrator, an algorithm management solution that helps physicians and healthcare organizations work more efficiently and effectively. Available via a standalone server or integrated into GE Healthcare's Centricity Picture Archiving and Communication System (PACS) and Enterprise Imaging System (EIS), these algorithms can help clinicians manage chest radiography reading more efficiently, improve clinicians' workflow and provide a more native reading experience. Once complete, this offering will help reduce the complexity of implementing and managing multiple systems and algorithms, as well as provide an easy way for organizations to adopt and explore AI.

Additionally, Lunit is an innovator under GE Healthcare's Edison Developer Program, which helps healthcare providers gain easier access to market-ready algorithms and applications such as Lunit INSIGHT CXR by directly integrating these technologies into existing workflows. As such, Lunit and GE Healthcare are actively working to deliver novel and targeted deep-learning technology for healthcare systems via Edison, GE Healthcare's secure intelligence platform.

For more information: http://www.lunit.io


Hellobike unveils trifecta of innovative shared mobility AI technologies at WAIC2020 – PRNewswire

During its presentation on 10 July, Hellobike unveiled three innovative technologies that leverage AI, big data, cloud infrastructure and the IoT: the Hermes road safety system, non-motorized vehicle safety management system, and fixed-point return. Hellobike's participation in WAIC2020 follows its highly successful debut at the conference last year, where the company unveiled exciting AI projects including the Hello Brain smart transportation OS and the Argus visual interaction system.

"We are honored to take part in WAIC2020 for the second year running. As the shared bike industry leader, WAIC2020 is the ultimate platform for us to demonstrate how we harness AI technology and work hand-in-hand with the state to build the city of the future," said Li Kaizhu, President of Hellobike.

Hellobike's latest technologies usher in the 3.0 era of China's bike-sharing industry: a new model that sees shared bicycles organically integrated into the urban public transportation ecosystem. Through strengthened cooperation between transport providers and municipal governments, the 3.0 era provides a systematic mechanism to help Chinese cities tackle unique operational challenges, address parking management, and streamline shared bike deployment and distribution.

Hellobike's Hermes road safety system integrates AI algorithms to provide users with a better, safer shared transport experience. Built as a scenario-based solution, Hermes automatically performs failsafe tests on both user behavior and the bike at the beginning, middle and end of their riding journey. If the system detects technical issues, dangerous operation or user violations, Hellobike delivers a risk warning to the user through the bike's built-in speaker.

Based on insights gathered from mining big data, Hellobike also found that the use of non-motorized vehicles can lead to chaotic, unsafe road conditions. To address this, Hellobike has partnered with local governments to develop non-motorized vehicle safety management systems tailored to each city's unique traffic conditions. Using video AI technology for data collection and situation analysis, as well as spatial data, Hellobike helps cities establish new vehicle management systems built upon data visualization, intelligent data processing and smart decision-making applications.

Furthermore, Hellobike has cooperated with city officials to promote improved traffic safety, simplified parking and enhanced city appearance through a shared bike management operation plan. Hellobike has established a number of convenient fixed-point return locations using electronic fencing, Bluetooth road studs, AI and the IoT. Fixed-point return encourages users to park at designated locations, while making it easier for staff to locate and redistribute vehicles across the city.
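
One simple way an electronic fence of this kind can work (a sketch with invented coordinates and radius, not Hellobike's actual implementation) is a distance check against designated return points before a ride is allowed to end:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

# Invented designated parking points (lat, lon) and allowed radius in meters.
RETURN_POINTS = [(31.2304, 121.4737), (31.2250, 121.4800)]
RADIUS_M = 30

def can_end_ride(lat, lon):
    """Allow the ride to end only inside an electronic fence."""
    return any(haversine_m(lat, lon, p_lat, p_lon) <= RADIUS_M
               for p_lat, p_lon in RETURN_POINTS)

print(can_end_ride(31.2305, 121.4738))  # near a return point -> True
print(can_end_ride(31.2400, 121.5000))  # far away -> False
```

In practice, Bluetooth road studs can sharpen the position fix where GPS is unreliable, which is presumably why Hellobike pairs them with the fencing.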

Hellobike President Li Kaizhu and Chief Scientist Liu Xingliang will also take part in WAIC2020's AI TALK and big data forum alongside entrepreneurs from leading local and global tech companies to discuss the applications of AI technology. In addition, Hellobike plans to host its first Technology Open Day on 31 July at its Shanghai headquarters, where users can tour the space, test new vehicles, and discover the technological innovations behind Hellobike.

About Hellobike

Hellobike has continuously built user-friendly and sustainable transport services in sectors such as shared bicycles, shared e-bikes and car-pooling. As a business leader in two-wheeled transport, users have taken more than 12 billion trips on Hellobike vehicles over the past three years. Hellobike now operates in more than 360 Chinese cities.

SOURCE Hellobike


Mark Zuckerberg doubles down defending AI after Elon Musk says his understanding of it is ‘limited’ – CNBC

Earlier this month, Musk delivered a scary message while speaking to the National Governors Association: "I have exposure to the most cutting edge AI, and I think people should be really concerned by it. AI is a fundamental risk to the existence of human civilization."

On Sunday, Zuckerberg was dismissive of such warnings. "I think people who are naysayers and try to drum up these doomsday scenarios – I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible," he says.

In response, on Tuesday, SpaceX and Tesla CEO Musk tweeted a rebuke: "I've talked to Mark about this. His understanding of the subject is limited."

In the same week that Zuckerberg and Musk sparred over the potential of artificial intelligence, another billionaire tech titan, Mark Cuban, said AI "scares the s--- out of me" and that Canada and China are handily beating the U.S. in their pace of developing artificial intelligence.

On the heels of Musk's dig, Zuckerberg's post Tuesday night also announced that the Facebook AI Research team and researchers at Cornell and Tsinghua together won an award for their recent paper on "Densely Connected Convolutional Networks."



Yes, a $10 Raspberry Pi can handle AI – Mashable


Artificial Intelligence and Machine Learning usually work best with a lot of horsepower behind them to crunch the data, compute possibilities and instantly come up with better solutions. That's why most AI systems rely on local sensors to gather input ...



The Facebook chatbot controversy highlights how paranoid people are about life with robots and AI – CNBC

That the conversation around a bot research project could so quickly spin out of control illustrates what a lightning rod artificial intelligence has become.

Even tech titans of Silicon Valley are divided about a future integrated with AI.

Elon Musk recently put forth his own doomsday scenario. "I have exposure to the most cutting edge AI, and I think people should be really concerned by it," says Musk, speaking to a roomful of governors last month.

"AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole," he says.

Batra's boss, Mark Zuckerberg, calls such fearful warnings "irresponsible."

"I have pretty strong opinions on this. I am optimistic," says Zuckerberg. "I think you can build things and the world gets better. But with AI especially, I am really optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios I just, I don't understand it. It's really negative and in some ways I actually think it is pretty irresponsible."



What to know about AI and diversity ahead of OurCrowd Summit 2020 – The Times of Israel

When a panel of five men and two women took the stage at the annual OurCrowd Summit in Jerusalem last year to discuss the hype around artificial intelligence (AI), they accomplished the trifecta that is often hard to find at conferences: They provided a session that was informative, interesting, AND entertaining.

They did it by dissecting the good (better customer and work experiences), the bad (black boxes), and the ugly (bias) around AI and its impact. As they talked about the importance of large data sets from a variety of sources and shared different perspectives on topics that heated the conversation, the panel itself became an example of why numbers and diversity matter.

Large numbers and diversity matter even more now as AI moves beyond the hype and more businesses in the tech and traditional sectors transition to being model-based with machine learning algorithms at the core.

In late 2018 it was revealed that Amazon dropped its AI-powered internal recruiting tool because it was biased in favor of male candidates. In November 2019, news broke that Goldman Sachs was being investigated for sex discrimination after claims were made that its credit algorithm used for Apple Card is sexist. These are just two gender-related bias examples and barely touch other biases, such as race.

The main culprit for bias is a lack of diversity on the teams developing these solutions, which remain overwhelmingly white and male.

The issue has become so significant that CIOs in the US have made diversifying their tech teams a priority in 2020. A number of investment funds, including WestRiver Group (WRG) in Seattle, are also increasingly paying attention and investing in diverse management teams as a business advantage.

In Israel, several new initiatives have been launched in recent years by the government, non-profit organizations, and companies to support all diversity in the tech sector.

Power in Diversity, an initiative launched by Israeli investment firm Vintage Investment Partners, is working with companies to address these issues head on in Israel's tech sector. For example, it recently created a workshop for Tel Aviv-based Yotpo's R&D department, which hired three tech teams of Haredi women last year, to address and challenge differences and stereotypes to help make the work environment more inclusive.

Kaltura, a video technology company co-founded by serial entrepreneur Michal Tsur, announced its pledge in 2019 to work towards increasing female leadership at the company to 50% by 2024. It will also work to increase the number of female employees at all levels of the company to 50% in that time frame.

So, what's the lesson to keep in mind at OurCrowd Summit 2020? The diverse makeup of the AI panel last year was able to move the discussion past all the hype and to address serious issues, like explainable AI, and paint a more complete picture of AI. A picture that has played out throughout the past year.

These broader perspectives and insights into what's happening are something to keep in mind at the conference this year, especially for sessions that don't have a diverse lineup of speakers, such as the session on how AI is transforming industries, which as a result may not show the full picture, or, put another way, all the data points.

Lisa Damast is the Head of Marketing for RTInsights.com and the founder of the weekly newsletter Gender Diversity in Tech. She previously was the Israel correspondent and bureau chief for the financial news publication Mergermarket, and has been published on the Financial Times's website, Israel21c, and Green Prophet. She blogs about topics related to Israeli women in tech and female entrepreneurship.


AI Technology Detects Deterioration in COVID-19 Patients by Identifying Predictive Patterns in Their Vital Signs – HospiMedica

A new study will apply Artificial Intelligence (AI) technology to look for predictive patterns in the vital signs of COVID-19 patients that could alert the medical team about any deterioration.

The Manchester-based trial is sponsored by The Christie NHS Foundation Trust together with the Manchester University NHS Foundation Trust (MFT) with additional participation from Aptus Clinical and core AI capabilities provided by Zenzium, Ltd. (Cheshire, UK).

The COSMIC-19 (COntinuous Signs Monitoring In Covid-19 patients) pilot study aims to recruit 60 inpatients on general wards who are suspected or confirmed to have COVID-19. Approximately 10-20% of hospital inpatients with COVID-19 will need intensive care. The patients on the trial will be monitored for 20 days or until they are either placed on a ventilator or discharged from hospital.

The study will use wireless wearable sensors to automatically collect each patient's vital signs together with clinical data and observations. Zenzium will then apply its AI technology to look for predictive patterns in the patient's vital signs that could alert the medical team if the patient is deteriorating. If the prediction indicates that the patient needs critical care, the medical team can intervene earlier to give patients the best chance of recovery. Zenzium's core technology, including DeepHRV, is based on deep learning as applied to time-series measurements and data.
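
Zenzium has not published DeepHRV's internals, but the general shape of deep learning on vital-sign time series can be sketched as follows; the window size, features and data here are all invented, purely for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical window: 60 time steps of 4 vital signs
# (e.g., heart rate, respiratory rate, SpO2, temperature).
TIME_STEPS, N_SIGNS = 60, 4

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIME_STEPS, N_SIGNS)),
    tf.keras.layers.LSTM(32),                        # summarize the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of deterioration
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])

# Synthetic stand-in data; the real study uses labeled patient windows.
X = np.random.rand(100, TIME_STEPS, N_SIGNS).astype("float32")
y = np.random.randint(0, 2, size=(100,))
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

# A rising predicted probability could trigger an early alert to the ward team.
print(model.predict(X[:1], verbose=0))
```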

"We are extremely excited to apply our AI technology based on time-series deep learning, including DeepHRV, to this challenge, with the potential to make a substantial impact on patient outcomes," said Anthony D. Bashall, Managing Director & Founder of Zenzium.

"Unfortunately, some patients who are suffering from COVID-19 on our hospital wards can become seriously unwell. By using this system, we hope to be able to identify these patients early, and this may mean we can optimize their management without the need for them to go to intensive care," said Professor Fiona Thistlethwaite, medical oncologist at The Christie, who will lead the trial.

Related Links:

Zenzium, Ltd.


Why, Robot? Understanding AI ethics – The Register

Not many people know that Isaac Asimov didn't originally write his three laws of robotics for I, Robot. They actually first appeared in "Runaround", the 1942 short story. Robots mustn't do harm, he said, or allow others to come to harm through inaction. They must obey orders given by humans unless they violate the first law. And the robot must protect itself, so long as it doesn't contravene laws one and two.

75 years on, we're still mulling that future. Asimov's rules seem more focused on strong AI – the kind of AI you'd find in HAL, but not in an Amazon Echo. Strong AI mimics the human brain, much like an evolving child, until it becomes sentient and can handle any problem you throw at it, as a human would. That's still a long way off, if it ever comes to pass.

Instead, today we're dealing with narrow AI, in which algorithms cope with constrained tasks. It recognises faces, understands that you just asked what the weather will be like tomorrow, or tries to predict whether you should give someone a loan or not.

Making rules for this kind of AI is quite difficult enough to be getting on with for now, though, says Jonathan M. Smith. A member of the Association for Computing Machinery and a professor of computer science at the University of Pennsylvania, he says there's still plenty of ethics to unpack at this level.

"The shorter-term issues are very important because they're at the boundary of technology and policy," he says. "You don't want the fact that someone has an AI making decisions to escape, avoid or divert past decisions that we made in the social or political space about how we run our society."

There are some thorny problems already emerging, whether real or imagined. One of them is a variation on the trolley problem, a kind of Sophie's Choice scenario in which a train is bearing down on two sets of people. If you do nothing, it kills five people. If you actively pull a lever, the signals switch and it kills one person. You'd have to choose.

Critics of AI often adapt this to self-driving cars. A child runs into the road and there's no time to stop, but the software could choose to swerve and hit an elderly person, say. What should the car do, and who gets to make that decision? There are many variations on this theme, and MIT even collected some of them into an online game.

There are classic counter arguments: the self-driving car wouldn't be speeding in a school zone, so the scenario is less likely to occur. Utilitarians might argue that eliminating distracted, drunk or tired drivers would shrink the number of deaths worldwide overall, which means society wins, even if one person loses.

You might point out that a human would have killed one of the people in the scenario too, so why are we even having this conversation? Yasemin Erden, a senior lecturer in philosophy at Queen Mary's University, has an answer for that. She spends a lot of time considering ethics and computing on the committee of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.

"Decisions in advance suggest ethical intent and incur others' judgement, whereas acting on the spot doesn't," she points out.

"The programming of a car with ethical intentions, knowing what the risk could be, means that the public could be less willing to view things as accidents," she says. Or in other words, as long as you were driving responsibly it's considered OK for you to say "that person just jumped out at me" and be excused for whomever you hit, but AI algorithms don't have that luxury.

If computers are supposed to be faster and more intentional than us in some situations, then how they're programmed matters. Experts are calling for accountability.

"I'd need to cross-examine my algorithm, or at least know how to find out what was happening at the time of the accident," says Kay Firth-Butterfield. She is a lawyer specialising in AI issues and executive director at AI Austin. It's a non-profit AI thinktank set up this March that evolved from the Ethics Advisory Panel, an ethics board set up by AI firm Lucid.

"We need a way to understand what AI algorithms are 'thinking' when they do things," she says. "How can you say to a patient's family, if they died because of an intervention: we don't know how this happened? So accountability and transparency are important."

Puzzling over why your car swerved around the dog but backed over the cat isn't the only AI problem that calls for transparency. Biased AI algorithms can cause all kinds of problems. Facial recognition systems may ignore people of colour because their training data didn't have enough faces fitting that description, for example.

Or maybe AI is self-reinforcing to the detriment of society. If social media AI learns that you like to see material supporting one kind of politics and only ever shows you that, then over time we could lose the capacity for critical debate.

"J.S. Mill made the argument that if ideas aren't challenged then they are at risk of becoming dogma," Erden recalls, nicely summarising what she calls the "filter bubble" problem. (Mill was a 19th century utilitarian philosopher who was a strong proponent of logic and reasoning based on empirical evidence, so he probably wouldn't have enjoyed arguing with people on Facebook much.)

So if AI creates billions of people unwilling or even unable to recognise and civilly debate each other's ideas, isn't that an ethical issue that needs addressing?

Another issue concerns the forming of emotional relationships with robots. Firth-Butterfield is interested in two ends of the spectrum: children and the elderly. Kids love to suspend disbelief, which makes robotic companions, with their AI conversational capabilities, all the easier to embrace. She frets about AI robots that may train children to be ideal customers for their products.

Similarly, at the other end of the spectrum, she muses about AI robots used to provide care and companionship to the elderly.

"Is it against their human rights not to interact with human beings but just to be looked after by robots? I think that's going to be one of the biggest decisions of our time," she says.

That highlights a distinction in AI ethics between how an algorithm does something and what we're trying to achieve with it. Alex London, professor of philosophy and director of Carnegie Mellon University's Center for Ethics and Policy, says that the driving question is what the machine is trying to do.

"The ethics of that is probably one of the most fundamental questions. If the machine is out to serve a goal that's problematic, then ethical programming, the question of how it can more ethically advance that goal, sounds misguided," he warns.

That's tricky, because much comes down to intent. A robot could be great if it improves the quality of life for an elderly person as a supplement to frequent visits and calls with family. Using the same robot as an excuse to neglect elderly relatives would be the inverse. Like any enabling technology, from the kitchen knife to nuclear fusion, the tool itself isn't good or bad; it's the intent of the person using it. Even then, points out Erden, what if someone thinks they're doing good with a tool but someone else doesn't?

Read the original post:

Why, Robot? Understanding AI ethics - The Register

Decision Automation in Security Operations Brings Transparency and Trust to AI – Security Magazine


Continued here:

Decision Automation in Security Operations Brings Transparency and Trust to AI - Security Magazine

This New AI Is Like Having Iron Man’s Jarvis Living on Your Wall – Futurism

In Brief: Duo is an AI-powered computer for your home that can seamlessly connect with other smart devices or be used as a standalone entertainment hub. Its release is slated for October of this year, with each device selling for $399.

Meet Duo, an AI device that's part mirror, all computer.

Unlike standalone devices such as the Amazon Echo, Alexa, or Google Assistant, Duo operates beyond its 27-inch reflective display. It's a powerful smart computer that connects all your home devices and serves as a sleek, discreet entertainment hub.

Think of it as something like Iron Man's Jarvis, but instead of being built into your entire home, you can interact with it via a touch-sensitive, 1.9 mm-thick mirror. Because of its design, Duo can easily be mounted on any wall to blend with your interior. You can control Duo's screen via touch or communicate with its built-in artificial intelligence (AI) companion, Albert, who can help you control any app within the system using your voice.

Duo's on-board processor allows users to play music, check the news and weather, stream videos, control lights, play games, or even use the device as a virtual gallery to display artwork. Duo runs on its own operating system, HomeOS, and not only does it come equipped with native apps; its team has also developed a web-based HomeOS SDK for developers who want to create their own apps for use with the device.

According to the Duo website, the device will see a limited release of only 1,000 units in October of this year, with each selling for $399.

The rest is here:

This New AI Is Like Having Iron Man's Jarvis Living on Your Wall - Futurism

AI reading list: 8 interesting books about artificial intelligence to check out – TechRepublic

These eight books about artificial intelligence cover a range of topics, including ethical issues, how AI is affecting the job market, and how organizations can use AI to gain a competitive advantage.

Artificial intelligence (AI) is an ever-evolving technology. With so many different uses, it's easy to understand why it's being implemented more and more frequently. These titles answer common questions about AI, discuss which AI technologies businesses are currently using, explain how humans could lose control of AI, and more.

T-Minus AI: Humanity's Countdown to Artificial Intelligence and the New Pursuit of Global Power


In T-Minus AI, author, national expert, and the US Air Force's first Chairperson for Artificial Intelligence Michael Kanaan offers a human-oriented perspective on AI. He draws on our history of innovation to illustrate what we should all know about modern computing, AI, and machine learning. Additionally, Kanaan discusses the global implications of AI by illuminating the cultural and national vulnerabilities already present, as well as future pressing issues.

The Alignment Problem: Machine Learning and Human Values


The "alignment problem," according to researchers, occurs when the tech systems that humans attempt to teach don't do what is wanted or expected. Best-selling author Brian Christian discusses the alignment problem's "first-responders," and their plans to solve the problem before it is out of human hands. Using a blend of history and on-the-ground reporting, Christian follows the growth of machine learning in the field and examines our current technology and culture.

Rise of the Robots: Technology and the Threat of a Jobless Future


With the possibility of AI making jobs like those of paralegals, journalists, and even computer programmers obsolete, author Martin Ford looks at the future of the job market and how it will continue to transform. Rise of the Robots helps us understand how employment and society will have to adapt to the changing market.

Artificial Intelligence: A Guide for Thinking Humans


In Artificial Intelligence, author Melanie Mitchell asks urgent questions concerning AI today: How intelligent are the best AI programs? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us? Mitchell also covers the dominant models of modern AI and machine learning, cutting-edge AI programs, and human investors in AI.

AI Ethics (The MIT Press Essential Knowledge series)


AI Ethics discusses the major ethical issues artificial intelligence raises and addresses several concrete questions. Author Mark Coeckelbergh draws on narratives and relevant philosophical discussions, and describes different approaches to machine learning and data science. AI Ethics takes a look at privacy concerns, responsibility and the delegation of decision-making, transparency, bias as it arises at all stages of data science processes, and much more.

The AI Advantage: How to Put the Artificial Intelligence Revolution to Work (Management on the Cutting Edge)


In The AI Advantage, Thomas Davenport offers a practical guide to using AI in a business setting. Davenport not only explains what AI technologies are available, but also how companies can use them to gain a competitive advantage.

The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity


In her book, author Amy Webb looks at how the foundations of AI are broken, all the way from the people working on the system to the technology itself. Webb suggests that the big nine corporations (Amazon, Google, Facebook, Tencent, Baidu, Alibaba, Microsoft, IBM, and Apple) "may be inadvertently building and enabling vast arrays of intelligent systems that don't share our motivations, desires, or hopes for the future of humanity."

Artificial Intelligence: 101 Things You Must Know Today About Our Future


Artificial Intelligence: 101 Things You Must Know Today About Our Future contains many timely topics related to AI, including self-driving cars, robots, and chatbots, as well as how AI will impact the job market, business processes, and entire industries. As the title suggests, readers can learn the answers to 101 questions about artificial intelligence and gain access to a large number of resources, ideas, and tips.

See the rest here:

AI reading list: 8 interesting books about artificial intelligence to check out - TechRepublic

China plans to launch national AI plan- China Daily – Reuters

SHANGHAI - China will launch a series of artificial intelligence (AI) projects and increase efforts to cultivate tech talent as part of a soon-to-be-announced national AI plan, the China Daily said on Friday, citing a senior official.

The country is focusing on AI as it is seen as a tool to boost productivity and empower employees, the paper said.

China will roll out a slew of AI research and development projects, allocate more resources to nurturing talent, and increase the use of AI in education, healthcare and security, among other things, said Wan Gang, the minister of science and technology, at a conference in Tianjin.

The plan will soon be released to the public, said Wan.

China will build cooperation with international AI organizations and encourage foreign AI firms to set up R&D centers in the country, he added.

(Reporting By Engen Tham; Editing by Michael Perry)


More here:

China plans to launch national AI plan- China Daily - Reuters

AI Weekly: Welcome to The Machine, VentureBeats AI site – VentureBeat


VentureBeat readers likely noticed this week that our site looks different. On Thursday, we rolled out a significant design change that includes not just a new look but also a new brand structure that better reflects how we think about our audiences and our editorial mission.

VentureBeat remains the flagship brand devoted to covering transformative technology that matters to business decision makers. And now our longtime GamesBeat sub-brand has its own homepage of sorts, and definitely its own look. We've also launched a new sub-brand for all of our AI content, and it's called The Machine.

By creating two distinct brands under the main VentureBeat brand, we're leaning hard into what we've acknowledged internally for a long time: we're serving more than one community of readers, and those communities don't always overlap. There are readers who care about our AI and transformative tech coverage, and there are others who ardently follow GamesBeat. We want to continue to cultivate those communities through our written content and events. So when we reorganized our site, we created dedicated spaces for games and AI coverage, respectively, while leaving the homepage as the main feed.

GamesBeat has long been a standout sub-brand under VentureBeat, thanks to the leadership of managing editor Jason Wilson and the hard work of Dean Takahashi, Mike Minotti, and Jeff Grubb. Thus, giving it a dedicated landing page makes logical sense. We want to give our AI coverage the same treatment, which is why we created The Machine.

We took a long and winding path to selecting The Machine as the name for our AI sub-brand. We could have just put our heads together and picked one, but where's the fun in that? If you're going to come up with a name for an AI-focused brand, you should use AI to help you do it. And that's what we did.

First, we went through the necessary exercises to map out a brand: We talked through brand values, created an abstract about its focus and goals, listed the technologies and verticals we wanted to cover, and so on. Then, we humans brainstormed some ideas for names. (None stood out as clear winners.)

Armed with this data, we turned to Hugging Face's free online NLP tools, which require no code: you just put text into the box and let the system do its thing. Essentially, we ended up following these tips to generate name ideas.

There are a few different approaches you can take. You can feed the system 20 names, let's say, and ask it to generate a 21st. You can give it tags and relevant terms (like machine learning, artificial intelligence, computer vision, and so on) and hope that it converts those into something worthy of a name. You can enter a description of what you want (like a paragraph about what the sub-brand is all about) and see if it comes up with something. And you can tweak various parameters, like model size and temperature, to extract different results.
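You can reproduce that kind of experiment locally with the transformers library. A rough sketch follows; the model checkpoint and prompt are arbitrary stand-ins, not necessarily what the VentureBeat team used:

from transformers import pipeline

# Rough local equivalent of the no-code experiment: seed a text-generation
# model with example names and sample continuations at a high temperature.
generator = pipeline("text-generation", model="gpt2")
prompt = ("Names for an AI news publication: VentureBeat, GamesBeat, "
          "The Machine,")
outputs = generator(prompt, max_length=48, num_return_sequences=3,
                    do_sample=True, temperature=1.2)
for out in outputs:
    print(out["generated_text"])
# Higher temperature means weirder candidates; expect to sift through
# mostly unusable output, much as the team describes below.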

This sort of tinkering is a delightful rabbit hole to tumble down. After incessantly fiddling with both the data we fed the system and the various adjustable parameters, we ended up with a long and hilarious list of AI-generated names to chew on.

Here are some of our favorite terrible names that the tool generated:

This is a good lesson in the limitations of AI. The system had no idea what we wanted it to do. It couldn't, and didn't, solve our problem like some sort of name vending machine. AI isn't creative. We had to generate a bunch of data at the beginning, and then at the end, we had to sift through mostly unhelpful output (we ended up with dozens and dozens of names) to find inspiration.

But in the detritus, we found some nuggets of accidental brilliance. Here are a few NLP-generated names that are actually kind of good:

It's worth noting that the system all but insisted on AIBeat. No matter what permutations we tried, AIBeat kept resurfacing. It was tempting to pluck that low-hanging fruit: it matched VentureBeat and GamesBeat, and there's no confusion about which beat we'd be covering. But we humans decided to be more creative with the name, so we moved away from that construction.

We took a step back and used the long list of NLP-generated names to help us think up some fresh ideas. For example, We the Machine stood out to some of us as particularly punchy, but it wasn't quite right for a publication name. ("Hello, I write for We the Machine" doesn't exactly roll off the tongue.) But it inspired The Machine, which emerged as the winner from our shortlist.

The Machine has multiple layers. It's a play on machine learning, and it's a wink at the persistent fear of sentient robots. And it frames our AI team as a formidable, well-oiled content machine, punching well above our weight with a tiny roster of writers.

And so, I write for The Machine. Bookmark this page and visit every day for the latest AI news, analysis, and features.

Here is the original post:

AI Weekly: Welcome to The Machine, VentureBeats AI site - VentureBeat

The state of AI in 2020: democratization, industrialization, and the way to artificial general intelligence – ZDNet

After releasing what may well have been the most comprehensive report on the state of AI in 2019, Air Street Capital and RAAIS founder Nathan Benaich and AI angel investor and UCL IIPP visiting professor Ian Hogarth are back for more.

In the State of AI Report 2020, released today, Benaich and Hogarth outdid themselves. While the structure and themes of the report remain mostly intact, its size has grown by nearly 30 percent. This is a lot, especially considering their 2019 AI report was already a 136-slide journey through all things AI.

The State of AI Report 2020 is 177 slides long, and it covers technology breakthroughs and their capabilities; supply, demand and concentration of talent working in the field; large platforms, financing and areas of application for AI-driven innovation today and tomorrow; special sections on the politics of AI; and predictions for AI.

ZDNet caught up with Benaich and Hogarth to discuss their findings.

We set out by discussing the rationale for such a substantial contribution, which Benaich and Hogarth admitted took up an extensive amount of their time. They mentioned that their combined industry, research, investment and policy backgrounds, and their currently held positions, give them a unique vantage point. Producing this report is their way of connecting the dots and giving something of value back to the AI ecosystem at large.

Coincidentally, Gartner's 2020 Hype Cycle for AI was also released a couple of days back. Gartner identifies what it calls two megatrends that dominate the AI landscape in 2020: democratization and industrialization. Some of Benaich and Hogarth's findings were about the massive cost of training AI models and the limited availability of open research. This seems to contradict Gartner's position, or at least imply a different definition of democratization.

Benaich noted that there are different ways to look at democratization. One of them is the degree to which AI research is open and reproducible. As the duo's findings show, it is not: only 15% of AI research papers publish their code, and that has not changed much since 2016.

Hogarth added that traditionally AI as an academic field has had an open ethos, but the ongoing industry adoption is changing that. Companies are recruiting more and more researchers (another theme the report covers), and there is a clash of cultures going on as companies want to retain their IP. Notable organizations criticized for not publishing code include OpenAI and DeepMind:

"There's only so close you can get without a sort of major backlash. But at the same time, I think that data clearly indicates that they're certainly finding ways to be closed when it's convenient," said Hogarth.

Industrialization of AI is under way, as open source MLOps tools help bring models to production

As far as industrialization goes, Benaich and Hogarth pointed towards their findings in terms of MLOps. MLOps, short for machine learning operations, is the equivalent of DevOps for ML models: taking them from development to production, and managing their lifecycle in terms of improvements, fixes, redeployments and so on.
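To make that concrete, here is a minimal sketch of the kind of workflow these tools automate, using the open source MLflow library as one example of the genre (the report does not name it specifically, and the model and metric here are placeholders):

import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Minimal MLOps sketch with MLflow: track parameters, metrics and the
# trained model artifact so a run can be reproduced and redeployed later.
X_train, X_test, y_train, y_test = train_test_split(
    *load_iris(return_X_y=True), random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # versioned, deployable artifact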

Some of the more popular and fastest-growing GitHub projects in 2020 are related to MLOps, the duo pointed out. Hogarth also added that for startup founders, for example, it's probably easier to get started with AI today than it was a few years ago, in terms of tool availability and infrastructure maturity. But there is a difference when it comes to training models like GPT3:

"If you wanted to start a sort of AGI research company today, the bar is probably higher in terms of the compute requirements. Particularly if you believe in the scale hypothesis, the idea of taking approaches like GPT3 and continuing to scale them up. That's going to be more and more expensive and less and less accessible to new entrants without large amounts of capital.

The other thing that organizations with very large amounts of capital can do is run lots of experiments and iterate on large experiments without having to worry too much about the cost of training. So there's a degree to which you can be more experimental with these large models if you have more capital.

Obviously, that slightly biases you towards these almost brute force approaches of just applying more scale, capital and data to the problem. But I think that if you buy the scaling hypothesis, then that's a fertile area of progress that shouldn't be dismissed just because it doesn't have deep intellectual insights at the heart of it".

This is another key finding of the report: huge models, large companies and massive training costs dominate the hottest area of AI today, NLP (natural language processing). Based on variables released by Google et al., researchers have estimated the cost of training NLP models at about $1 per 1,000 parameters.

That means that a model such as OpenAI's GPT3, which has been hailed as the latest and greatest achievement in AI, could have cost tens of millions of dollars to train. Experts suggest the likely budget was around $10M. That clearly shows that not everyone can aspire to produce something like GPT3. The question is: is there another way? Benaich and Hogarth think so, and have an example to showcase.
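For what the heuristic is worth, the arithmetic is trivial, and running it also shows how rough such estimates are: applied naively to GPT3's 175 billion parameters, the $1-per-1,000-parameters rule yields a figure far above the roughly $10M experts cite.

# Back-of-the-envelope sketch of the $1-per-1,000-parameters heuristic.
# Treat the output as an order-of-magnitude guess at best; published
# estimates for GPT3's training budget are much lower (~$10M).
def training_cost_usd(n_parameters, usd_per_1000_params=1.0):
    return n_parameters / 1000 * usd_per_1000_params

for name, params in [("BERT-large", 340e6), ("GPT-2", 1.5e9), ("GPT-3", 175e9)]:
    print(f"{name}: ~${training_cost_usd(params):,.0f}")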

PolyAI is a London-based company active in voice assistants. They produced, and open-sourced, a conversational AI model (technically, a pre-trained contextual re-ranker based on transformers) that outperforms Google's BERT model in conversational applications. PolyAI's model not only performs much better than Google's, but it also required a fraction of the parameters to train, meaning a fraction of the cost too.

PolyAI managed to produce a machine learning language model that performs better than Google's in a specific domain, at a fraction of the complexity and cost.

The obvious question is: how did PolyAI do it? This could be inspiration for others, too. Benaich noted that the task of detecting intent, of understanding what somebody on the phone is trying to accomplish by calling, is solved much better by treating it as what is called a contextual re-ranking problem:

"That is, given a kind of menu of potential options that a caller is trying to possibly accomplish based on our understanding of that domain, we can design a more appropriate model that can better learn customer intent from data than just trying to take a general-purpose model, in this case BERT.

BERT can do OK in various conversational applications, but it just doesn't have the kind of engineering guardrails or engineering nuances that can make it robust in a real-world domain. To get models to work in production, you actually have to do more engineering than research. And almost by definition, engineering is not interesting to the majority of researchers".
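To illustrate the idea (this is not PolyAI's actual model), here is a bare-bones contextual re-ranking sketch: embed the caller's utterance and a fixed menu of domain intents, then rank the menu by similarity. The encoder checkpoint and the intent menu are illustrative stand-ins.

from sentence_transformers import SentenceTransformer, util

# Bare-bones contextual re-ranking sketch: score a fixed menu of domain
# intents against a caller's utterance and rank them by cosine similarity.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
intents = ["check account balance", "report a lost card",
           "speak to a human agent", "change mailing address"]
utterance = "hi, I can't find my card anywhere and I'm worried"

scores = util.cos_sim(encoder.encode(utterance), encoder.encode(intents))[0]
for score, intent in sorted(zip(scores.tolist(), intents), reverse=True):
    print(f"{score:.2f}  {intent}")
# Restricting output to a known menu is exactly the kind of engineering
# guardrail a general-purpose model doesn't give you out of the box.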

Long story short: you know your domain better than anyone else. If you can document and make use of this knowledge, and have the engineering rigor required, you can do more with less. This once more pointed to the topic of using domain knowledge in AI. This is what critics of the brute force approach, also known as the "scaling hypothesis", point to.

What the proponents of the scaling hypothesis seem to think, simplistically put, is that intelligence is an emergent phenomenon relating to scale. Therefore, by extension, if at some point models like GPT3 become large enough, complex enough, the holy grail of AI, and perhaps science and engineering at large, artificial general intelligence (AGI), can be achieved.

How to make progress in AI, and the topic of AGI, is at least as much about philosophy as it is about science and engineering. Benaich and Hogarth approach it in a holistic way, prompted by the critique of models such as GPT3. The most prominent critic of approaches such as GPT3 is Gary Marcus. Marcus has been consistent in his critique of models predating GPT3, as the "brute force" approach does not seem to change regardless of scale.

Benaich referred to Marcus' critique, summing it up. GPT3 is an amazing language model that can take a prompt and output a sequence of text that is legible, comprehensible, and in many cases relevant to what the prompt was. What's more, we should add, GPT3 can even be applied to other domains, such as writing software code, which is a topic in and of itself.

However, there are numerous examples where GPT3 is off course, either in a way that expresses bias or by simply producing irrelevant results. An interesting point is how we are able to measure the performance of models like GPT3. Benaich and Hogarth note in their report that existing benchmarks for NLP, such as GLUE and SuperGLUE, are now being aced by language models.

These benchmarks are meant to compare the performance of AI language models against humans at a range of tasks spanning logic, common-sense understanding, and lexical semantics. A year ago, the human baseline in GLUE was beaten by 1 point. Today, GLUE is reliably beaten, and its more challenging sibling SuperGLUE is almost beaten too.
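Both benchmarks are public, so the claim is easy to poke at yourself. A quick sketch with Hugging Face's datasets library, using one arbitrarily chosen GLUE task:

from datasets import load_dataset

# Quick look at one GLUE task (SST-2, sentiment classification) -- the
# same public data against which the human baselines were set.
sst2 = load_dataset("glue", "sst2")
print(sst2["train"][0])        # one example: sentence, label, idx
print(sst2["train"].features)  # label: 0 = negative, 1 = positive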

AI language models are getting better, but does that mean we are approaching artificial general intelligence?

This can be interpreted in a number of ways. One way would be to say that AI language models are just as good as humans now. However, the kind of deficiencies that Marcus points out show this is not the case. Maybe then what this means is that we need a new benchmark. Researchers from Berkeley have published a new benchmark, which tries to capture some of these issues across various tasks.

Benaich noted that an interesting extension of what GPT3 can do relates to the discussion around PolyAI: the idea of injecting some kind of toggles into the model that give it guardrails, or at least tune what kind of outputs it can create from a given input. There are different ways you might be able to do this, he went on to add.

The use of knowledge bases and knowledge graphs was discussed previously. Benaich also mentioned some kind of learned intent variable that could be used to inject this kind of control over a more general-purpose sequence generator. Benaich thinks the critical view is certainly valid to some degree, and it points to what models like GPT3 could use, with the goal of making them useful in production environments.
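Today's generation APIs already expose crude versions of such toggles, for example blocking specific tokens at decoding time. A sketch with the transformers library follows, as a very rough stand-in for the learned controls Benaich describes; the model checkpoint and banned words are arbitrary:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Crude decoding-time guardrail: forbid specific words so the sampler
# routes around them. Not a learned intent variable, just a hard filter.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

banned = [tokenizer(word, add_special_tokens=False).input_ids
          for word in [" hate", " stupid"]]
inputs = tokenizer("The customer service agent said", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=True,
                     bad_words_ids=banned,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))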

Hogarth, for his part, noted that Marcus is "almost a professional critic of organizations like DeepMind and OpenAI". While it's very healthy to have those critical perspectives when there is a reckless hype cycle around some of this work, he went on to add, OpenAI has one of the more thoughtful approaches to policy around this.

Hogarth emphasized the underlying difference in philosophy between proponents and critics of the scaling hypothesis. However, he went on to add, if the critics are wrong, then we might have a very smart but not very well-adjusted AGI on our hands, as evidenced by some of these early instances of bias that appear as you scale these models:

"So I think it's incumbent on organizations like OpenAI, if they are going to pursue this approach, to tell us all how they're going to do it safely, because it's not obvious yet from their research agenda. How do you marry AI safety with this kind of 'throw more data and compute at the problem and AGI will emerge' approach?"

This discussion touched on another part of the State of AI Report 2020. Some researchers, Benaich and Hogarth noted, feel that progress in mature areas of machine learning is stagnant. Others call for advancing causal reasoning, and claim that adding this element to machine learning approaches could overcome barriers.

Adding causality to machine learning could be the next breakthrough. The work of pioneers like Judea Pearl shows the way

Causality, Hogarth said, is arguably at the heart of much of human progress. From an epistemological perspective, causal reasoning has given us the scientific method, and it's at the heart of all of our best world models. So the work that people like Judea Pearl have pioneered to bring causality to machine learning is exciting. It feels like the biggest potential disruption to the general trend of larger and larger correlation-driven models:

"I think if you can crack causality, you can start to build a pretty powerful scaffolding of knowledge upon knowledge and have machines start to really contribute to our own knowledge bases and scientific processes. So I think it's very exciting. There's a reason that some of the smartest people in machine learning are spending weekends and evenings working on it.

But I think it's still in its infancy as an area of attention for the commercial community. We really only found one or two examples of it being used in the wild, one by faculty at a London-based machine learning company and one by BenevolentAI, in our report this year".
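A tiny synthetic example shows what's at stake between correlation and causation: a confounder drives both treatment and outcome, so a naive fit overstates the effect until you adjust for the confounder, in the spirit of Pearl's back-door adjustment. All numbers here are made up.

import numpy as np

# Synthetic confounding example: Z drives both treatment X and outcome Y.
# The true causal effect of X on Y is 1.0, but the naive regression
# overstates it; controlling for Z recovers the true effect.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
x = 2 * z + rng.normal(size=n)             # treatment influenced by Z
y = 1.0 * x + 3 * z + rng.normal(size=n)   # true effect of X is 1.0

naive = np.polyfit(x, y, 1)[0]             # regress Y on X alone
adjusted = np.linalg.lstsq(                # regress Y on X and Z
    np.column_stack([x, z, np.ones(n)]), y, rcond=None)[0][0]
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
# naive comes out near 2.2 (biased by the confounder); adjusted near 1.0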

If you thought that's enough cutting-edge AI research and applications for one report, you'd be wrong. The State of AI Report 2020 is a trove of references, and we'll revisit it soon, with more insights from Benaich and Hogarth.

More here:

The state of AI in 2020: democratization, industrialization, and the way to artificial general intelligence - ZDNet