Reducing bias in AI-based financial services – Brookings Institution

Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI's ability to bypass the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can just as easily go in the other direction, exacerbating existing bias and creating feedback cycles that reinforce biased credit allocation while making discrimination in lending even harder to detect. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology?

This paper proposes a framework to evaluate the impact of AI in consumer lending. The goal is to incorporate new data and harness AI to expand credit to consumers who need it, on better terms than are currently provided. It builds on our existing system's dual goals: pricing financial services based on the true risk the individual consumer poses, while preventing discrimination on the basis of protected characteristics (e.g., race, gender, DNA, marital status). This paper also lays out a set of trade-offs for policymakers, industry and consumer advocates, technologists, and regulators to debate: the tensions inherent in protecting against discrimination in a risk-based pricing system layered on top of a society with centuries of institutional discrimination.

AI is frequently discussed and ill-defined. Within the world of finance, AI represents three distinct concepts: big data, machine learning, and artificial intelligence itself. Each of these has recently become feasible thanks to advances in data generation, collection, usage, computing power, and programming. Advances in data generation are staggering: "90% of the world's data today were generated in the past two years," IBM boldly stated. To set the parameters of this discussion, below I briefly define each key term with respect to lending.

Big data fosters the inclusion of new, large-scale information not generally present in existing financial models. In consumer credit, for example, it means information beyond the typical credit-reporting/credit-scoring model, often referred to simply by the name of the most common credit-scoring system, FICO. This can include data points such as payment of rent and utility bills, personal habits such as whether you shop at Target or Whole Foods and own a Mac or a PC, and social media data.

Machine learning (ML) occurs when computers optimize over data (standard and/or big data) based on relationships they find on their own, without a traditional, more prescriptive algorithm. ML can surface relationships that a person would never think to test: Does the type of yogurt you eat correlate with your likelihood of paying back a loan? Whether these relationships have causal properties or are only proxies for other correlated factors is a critical question in determining the legality and ethics of using ML. To the machine solving the equation, however, the question is irrelevant.

What constitutes true AI is still being debated, but for purposes of understanding its impact on the allocation of credit and risk, let's use the term AI to encompass big data, machine learning, and the next step, when ML becomes AI. One bank executive helpfully defined AI by contrasting it with the status quo: "There's a significant difference between AI, which to me denotes machine learning and machines moving forward on their own, versus auto-decisioning, which is using data within the context of a managed decision algorithm."

America's current legal and regulatory structure to protect against discrimination and enforce fair lending is not well equipped to handle AI. The foundation is a set of laws from the 1960s and 1970s (the Equal Credit Opportunity Act of 1974, the Truth in Lending Act of 1968, the Fair Housing Act of 1968, etc.) that were written for a time with almost exactly the opposite problems we face today: too few sources of standardized information on which to base decisions, and too little credit being made available. Those conditions allowed rampant discrimination by loan officers, who could simply deny people because they didn't look creditworthy.

Today, we face an overabundance of poor-quality credit (high interest rates, fees, abusive debt traps) and concerns over the usage of too many sources of data that can hide as proxies for illegal discrimination. The law makes it illegal to use gender to determine credit eligibility or pricing, but countless proxies for gender exist from the type of deodorant you buy to the movies you watch.


The key concept used to police discrimination is that of disparate impact. For a deep dive into how disparate impact works with AI, you can read my previous work on this topic. For this article, it is enough to know the Consumer Financial Protection Bureau's definition of disparate impact: when "[a] creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact."

The second half of the definition gives lenders the ability to use metrics that may correlate with protected-class attributes, so long as the metric meets a "legitimate business need" and there is no other way to meet that need with less disparate impact. A set of existing metrics, including income, credit scores (FICO), and the data used by the credit reporting bureaus, has been deemed acceptable despite having substantial correlation with race, gender, and other protected classes.
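The quantitative side of an adverse-effect check is often illustrated with an impact ratio, such as the "four-fifths" heuristic borrowed from employment law. Below is a minimal sketch in Python; the approval counts are hypothetical, and the 0.8 threshold is a rule of thumb for illustration, not the legal standard applied in lending:

```python
def approval_rate(approved, total):
    """Share of applicants approved."""
    return approved / total

def impact_ratio(protected_rate, reference_rate):
    """Ratio of approval rates between groups.

    Values well below 1.0 suggest adverse impact; a common heuristic
    (the 'four-fifths rule') flags ratios under 0.8 for closer review.
    """
    return protected_rate / reference_rate

# Hypothetical approval counts under a facially neutral policy.
rate_a = approval_rate(620, 1000)   # reference group: 62% approved
rate_b = approval_rate(430, 1000)   # protected group: 43% approved

ratio = impact_ratio(rate_b, rate_a)
print(f"Impact ratio: {ratio:.2f}")   # 0.69
print("Flag for review" if ratio < 0.8 else "Within heuristic threshold")
```

Passing the heuristic is only the first step; under the CFPB definition, a flagged policy can still stand if it meets a legitimate business need that cannot be achieved by less disparate means.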

For example, consider how deeply correlated existing FICO credit scores are with race. To start, it is telling how little data is made publicly available on how these scores vary by race. The credit bureau Experian is eager to publicize one of its versions of FICO scores broken out by people's age, income, and even the state or city they live in, but not by race. However, federal law requires lenders to collect data on race for home mortgage applications, so we do have access to some data. As shown in the figure below, the differences are stark.

Among people trying to buy a home, generally a wealthier and older subset of Americans, white homebuyers have an average credit score 57 points higher than Black homebuyers and 33 points higher than Hispanic homebuyers. The distribution of credit scores is also sharply unequal: more than 1 in 5 Black individuals have FICO scores below 620, as do 1 in 9 Hispanic individuals, while the same is true for only 1 in 19 white people. Higher credit scores allow borrowers to access different types of loans at lower interest rates. One suspects the gaps are even wider beyond those trying to buy a home.

If FICO were invented today, would it satisfy a disparate impact test? The conclusion of Rice and Swesnik in their law review article was clear: "Our current credit-scoring systems have a disparate impact on people and communities of color." The question is moot, because not only is FICO grandfathered, it has become one of the most important factors used by the financial ecosystem. I have described FICO as the out-of-tune oboe to which the rest of the financial orchestra tunes.

New data and algorithms are not grandfathered and are subject to the disparate impact test. The result is a double standard whereby new technology is often held to a higher standard to prevent bias than existing methods. This has the effect of tilting the field against new data and methodologies, reinforcing the existing system.

Explainability is another core tenet of our existing fair lending system that may work against AI adoption. Lenders are required to tell consumers why they were denied. Explaining the rationale provides a paper trail to hold lenders accountable should they be engaging in discrimination. It also gives consumers information they can use to correct their behavior and improve their chances for credit. However, an AI's method of making decisions may lack explainability. As Federal Reserve Governor Lael Brainard described the problem: "Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did." To move forward and unlock AI's potential, we need a new conceptual framework.

To start, imagine a trade-off between accuracy (represented on the y-axis) and bias (represented on the x-axis). The first key insight is that the current system sits at the intersection of the axes: the graph's origin. Any potential change needs to be weighed against the status quo, not against an ideal world with no bias or perfect accuracy. This forces policymakers to consider whether adopting a new system that contains bias, but less than the current system, is an advance. It may be difficult to embrace an inherently biased framework, but it is important to acknowledge that the status quo is already highly biased. Rejecting new technology because it contains some level of bias does not mean we are protecting the system against bias. To the contrary, it may mean we are allowing a more biased system to persist.
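The framework above can be sketched as a simple classifier over the two axes. The quadrant labels follow the article's definitions (quadrant I: more accurate and less biased; II: more accurate but more biased; III: worse on both; IV: fairer but less accurate), and the numeric deltas in the usage lines are hypothetical:

```python
def quadrant(delta_accuracy, delta_bias):
    """Classify a proposed model change relative to the status quo.

    The status quo is the origin: delta_accuracy > 0 means more
    predictive than today; delta_bias > 0 means more biased than today.
    """
    if delta_accuracy > 0 and delta_bias < 0:
        return "I: more accurate, less biased (win-win)"
    if delta_accuracy > 0 and delta_bias > 0:
        return "II: more accurate, more biased (hard trade-off)"
    if delta_accuracy < 0 and delta_bias > 0:
        return "III: less accurate, more biased (reject)"
    if delta_accuracy < 0 and delta_bias < 0:
        return "IV: less accurate, less biased (fairness at a cost)"
    return "origin: status quo"

# Hypothetical changes, measured against the current system:
print(quadrant(+0.03, -0.10))   # lands in quadrant I
print(quadrant(-0.02, -0.15))   # lands in quadrant IV
```

The point of framing it this way is that the comparison is always against today's biased baseline, never against a hypothetical unbiased ideal.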

As shown in the figure above, the bottom left corner (quadrant III) is one where AI results in a system that is both more discriminatory and less predictive. Regulation and commercial incentives should work together against this outcome. It may be difficult to imagine incorporating new technology that reduces accuracy, but it is not inconceivable, particularly given industry incentives to prioritize speed of decision-making and loan generation over actual loan performance (as in the subprime mortgage crisis). Another way things could move in this direction is the introduction of inaccurate data that fools an AI into thinking it has increased accuracy when it has not. The existing credit reporting system is rife with errors: 1 out of every 5 people may have a material error on their credit report. New errors occur frequently; consider the recent mistake by one student loan servicer that incorrectly reported 4.8 million Americans as late on their student loans when in fact the government had suspended payments as part of COVID-19 relief.

The data used in the real world are not as pure as those used in model testing. Market incentives alone are not enough to produce perfect accuracy; they can even promote inaccuracy, given the cost of correcting data and the demand for speed and quantity. As one study from the Federal Reserve Bank of St. Louis found, "credit score has not acted as a predictor of either true risk of default of subprime mortgage loans or of the subprime mortgage crisis." Whatever the cause, regulators, industry, and consumer advocates ought to be aligned against the adoption of AI that moves in this direction.

The top right (quadrant I) represents the incorporation of AI that increases accuracy and reduces bias. At first glance, this should be a win-win. Industry allocates credit more accurately, increasing efficiency. Consumers enjoy increased credit availability on more accurate terms and with less bias than the existing status quo. This optimistic scenario is quite possible given that a significant source of existing bias in lending stems from the information used. As the Bank Policy Institute pointed out in its discussion draft on the promises of AI: "This increased accuracy will benefit borrowers who currently face obstacles obtaining low-cost bank credit under conventional underwriting approaches."

One prominent example of a win-win is cash-flow underwriting. This new form of underwriting uses an applicant's actual bank balance over some time frame (often one year), as opposed to the current FICO-based model, which relies heavily on whether a person had credit in the past and, if so, whether they were ever delinquent or in default. Preliminary analysis by FinRegLab shows this underwriting system outperforms traditional FICO on its own, and when combined with FICO is even more predictive.
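FinRegLab's actual methodology is not described here, but the basic idea of cash-flow underwriting can be sketched: summarize a history of bank balances into a few signals of financial health, then score on those signals rather than on past credit usage. Everything below (feature names, thresholds, weights) is purely illustrative:

```python
from statistics import mean

def cash_flow_features(daily_balances):
    """Summarize a period of end-of-day bank balances into signals.

    These three features are illustrative stand-ins for the kinds of
    signals a cash-flow model might use, not any lender's real inputs.
    """
    return {
        "avg_balance": mean(daily_balances),
        "min_balance": min(daily_balances),
        "days_negative": sum(1 for b in daily_balances if b < 0),
    }

def cash_flow_score(features):
    """Hypothetical score: rewards a cash buffer, penalizes overdraft days."""
    score = 600
    score += min(features["avg_balance"] / 50, 100)   # up to +100 for buffer
    score -= features["days_negative"] * 2            # -2 per overdrafted day
    return round(score)

# Abbreviated, made-up balance history for one applicant.
balances = [1200, 950, -40, 300, 800, 1500, 700]
feats = cash_flow_features(balances)
print(cash_flow_score(feats))
```

Note that, unlike a credit-history model, this sketch can score an applicant with no prior credit at all, which is precisely why cash-flow data is promising for thin-file borrowers.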

Cash-flow analysis does carry some level of bias, as income and wealth are correlated with race, gender, and other protected classes. However, because income and wealth are acceptable existing factors, the current fair-lending system should have little problem allowing a smarter use of that information. Ironically, this new technology passes the test because it uses data that are already grandfathered.

That is not the case for other AI advancements. New AI may increase credit access on more affordable terms than the current system provides and still not be allowable. Just because AI has produced a system that is less discriminatory does not mean it passes fair lending rules. There is no legal standard that permits illegal discrimination in lending simply because it is less biased than prior discriminatory practices. As a 2016 Treasury Department study concluded, while "data-driven algorithms may expedite credit assessments and reduce costs, they also carry the risk of disparate impact in credit outcomes and the potential for fair lending violations."

For example, consider an AI that is able, with a good degree of accuracy, to detect a decline in a person's health, say through spending patterns (doctor's co-pays), internet searches (cancer treatment), and joining new Facebook groups (living with cancer). Medical problems are a strong indicator of future financial distress. Do we want a society where, if you get sick, or if a computer algorithm merely thinks you are ill, your terms of credit worsen? That may be a less biased system than we currently have, but not one that policymakers and the public would support. All of a sudden, what seems like a win-win may not be so desirable.

AI that increases accuracy but introduces more bias gets a lot of attention, deservedly so. This scenario, represented in the top left (quadrant II) of this framework, can range from the introduction of data that are clear proxies for protected classes (watching Lifetime or BET on TV) to information or techniques that, at first glance, do not seem biased but actually are. There are strong reasons to believe that AI will naturally find proxies for race, given the large income and wealth gaps between races. As Daniel Schwarcz put it in his article on AI and proxy discrimination: "Unintentional proxy discrimination by AIs is virtually inevitable whenever the law seeks to prohibit discrimination on the basis of traits containing predictive information that cannot be captured more directly within the model by non-suspect data."


Proxy discrimination by AI is even more concerning because the machines are likely to uncover proxies that people had not previously considered. Think about the potential to use whether a person owns a Mac or a PC, a factor that is correlated both with race and with whether people pay back loans, even controlling for race.

Duke Professor Manju Puri and co-authors were able to build a model using non-standard data that found substantial predictive power for loan repayment in whether a person's email address contained their name. Initially, that may seem like a non-discriminatory variable within a person's control. However, economists Marianne Bertrand and Sendhil Mullainathan have shown that African Americans with names heavily associated with their race face substantial discrimination compared with race-blind identification. Hence, it is quite possible that there is a disparate impact in using what seems like an innocuous variable such as whether your name is part of your email address.
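The proxy effect is easy to demonstrate with a toy simulation: a decision rule that never sees the protected attribute, but keys on a correlated "neutral" feature, still produces sharply different approval rates by group. All numbers and names here are illustrative, not drawn from any real dataset:

```python
import random

random.seed(0)

def simulate(n=10_000, proxy_strength=0.8):
    """Generate applicants with a hidden protected attribute and a
    facially neutral feature that tracks it imperfectly."""
    rows = []
    for _ in range(n):
        group = random.random() < 0.5                  # protected attribute
        # Neutral-looking feature (e.g., 'name appears in email address')
        # that matches group membership with probability proxy_strength.
        proxy = group if random.random() < proxy_strength else not group
        rows.append((group, proxy))
    return rows

def approval_rates_by_group(rows):
    """Approve on the proxy alone; report approval rate per hidden group."""
    by_group = {True: [0, 0], False: [0, 0]}
    for group, proxy in rows:
        by_group[group][0] += int(proxy)   # proxy-driven approval
        by_group[group][1] += 1
    return {g: approved / total for g, (approved, total) in by_group.items()}

rates = approval_rates_by_group(simulate())
# Despite never using the protected attribute, approval rates diverge sharply
# (roughly 80% vs. 20% at proxy_strength=0.8).
print({g: round(r, 2) for g, r in rates.items()})
```

This is why excluding the protected attribute from the model is not, by itself, a defense: the disparity reappears through whatever correlated features remain.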

The question for policymakers is how much to prioritize accuracy at the cost of bias against protected classes. As a matter of principle, I would argue that because our starting point is a heavily biased system, we should not tolerate the introduction of increased bias. There is a slippery-slope question, though: what if an AI produced a substantial increase in accuracy with only slightly more bias? After all, our current system does a surprisingly poor job of allocating basic credit and already tolerates a substantial amount of bias.

Industry is likely to advocate for the inclusion of this type of AI, while consumer advocates are likely to oppose it. Current law is inconsistent in its application. Certain groups of people are afforded strong anti-discrimination protection for certain financial products, but the protection varies across products. Take gender, for example. It is blatantly illegal under fair lending laws to use gender, or any proxy for gender, in allocating credit. Yet gender is a permitted pricing factor for auto insurance in most states; in fact, for brand-new drivers with no driving record, gender may be the single biggest factor in determining price. America lacks a uniform set of rules on what constitutes discrimination and which attributes are protected. The lack of uniformity is compounded by the division of responsibility between federal and state governments and, within government, between the regulatory and judicial systems for detecting and punishing violations.

The final set of trade-offs involves increases in fairness but reductions in accuracy (quadrant IV, in the bottom right). An example is an AI able to use information about a person's genome to determine their risk of cancer. This type of genetic profiling would improve accuracy in pricing some types of insurance but violates norms of fairness. In this instance, policymakers decided that the use of that information is not acceptable and have made it illegal. Returning to the role of gender, some states have restricted the use of gender in car insurance. California most recently joined the list of states no longer allowing gender, which means that pricing will be fairer but possibly less accurate.

Industry pressures will tend to fight these types of restrictions and press for greater accuracy. Societal norms of fairness may demand trade-offs that diminish accuracy to protect against bias. These trade-offs are best handled by policymakers before the widespread introduction of the information, as was the case with genetic data. Restricting the use of this information, however, does not make the problem go away. To the contrary, AI's ability to uncover hidden proxies for that data may exacerbate problems wherever society attempts to restrict data usage on equity grounds. Problems that appear solved by prohibition simply migrate into the algorithmic world, where they reappear.

The underlying takeaway for this quadrant is that social movements to expand protection and reduce discrimination are likely to become more difficult as AIs find workarounds. As long as there are substantial differences in observed outcomes, machines will reproduce those differing outcomes using new sets of variables that may contain new information or may simply be statistically effective proxies for protected classes.

The status quo is not something society should uphold as nirvana. Our current financial system suffers not only from centuries of bias, but also from systems that are themselves not nearly as predictive as often claimed. The data explosion, coupled with the significant growth in ML and AI, offers a tremendous opportunity to rectify substantial problems in the current system. Existing anti-discrimination frameworks are ill-suited to this opportunity. Holding new technology to a higher standard than the status quo results in an unstated deference to the already-biased current system. However, simply opening the floodgates under a rule of "can you do better than today" opens a Pandora's box of new problems.


America's fractured regulatory system, with differing roles and responsibilities across financial products and levels of government, only serves to make difficult problems even harder. Lacking uniform rules and coherent frameworks, technological adoption will likely be slower among existing entities, setting up even greater opportunities for new entrants. A broader conversation about how much bias we are willing to tolerate for the sake of improvement over the status quo would benefit all parties. That requires creating more political space for all sides to engage in a difficult and honest conversation. The current political moment is ill-suited to that conversation, but I suspect AI advancements will not wait until America is ready to confront these problems.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Apple, Facebook, and IBM provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.


A new AI tool to fight the coronavirus – Axios

A coalition of AI groups is forming to produce a comprehensive data source on the coronavirus pandemic for policymakers and health care leaders.

Why it matters: A torrent of data about COVID-19 is being produced, but unless it can be organized in an accessible format, it will do little good. The new initiative aims to use machine learning and human expertise to produce meaningful insights for an unprecedented situation.

Driving the news: Members of the newly formed Collective and Augmented Intelligence Against COVID-19 (CAIAC) announced today include the Future Society, a non-profit think tank from the Harvard Kennedy School of Government, as well as the Stanford Institute for Human-Centered Artificial Intelligence and representatives from UN agencies.

What they're saying: "With COVID-19 we realized there are tons of data available, but there was little global coordination on how to share it," says Cyrus Hodes, chair of the AI Initiative at the Future Society and a member of the CAIAC steering committee. "That's why we created this coalition to put together a sense-making platform for policymakers to use."

Context: COVID-19 has produced a flood of statistics, data and scientific publications (more than 35,000 of the latter as of July 8). But raw information is of little use unless it can be organized and analyzed in a way that can support concrete policies.

The bottom line: Humans aren't exactly doing a great job beating COVID-19, so we need all the machine help we can get.


Five Of The Leading British AI Companies Revealed – Forbes

Amid the Covid-19 gloom, many cutting-edge technology companies have quietly been getting on with raising finance, with artificial intelligence emerging as a particular focus for investors. Last month, for example, London-based Temporall raised £1m of seed investment to continue developing its AI- and analytics-based workplace insights platform. It was just the latest in a string of AI businesses to successfully raise finance in recent months despite the uncertainties of the pandemic.

That extends a trend seen last year. In September, a report from TechNation and Crunchbase revealed that UK AI investment reached a record level of $1bn in the first half of 2019, surpassing the total amount of new finance raised during the whole of the previous year.

The UK's AI industry has been boosted by a supportive public-sector environment: the UK government is leading the way on AI investment in Europe and has become the third-biggest spender on AI in the world. In the private sector, meanwhile, many British companies offer world-leading technologies. Take just five of the most innovative AI start-ups in the UK:

Synthesized

Founded in 2017 by Dr Nicolai Baldin, a machine-learning researcher based at the University of Cambridge, Synthesized has created an all-in-one data provisioning and preparation platform underpinned by AI.

In just 10 minutes, its technology can generate a representative synthetic dataset incorporating millions of records, helping an organisation to share insights safely and efficiently while automatically complying with data regulations. In March, Synthesized raised $2.8m in funding with the aim of doubling the number of its employees in London and accelerating the company's rapid expansion.

Onfido

With more than $180m in funding, Onfido is on a mission to help businesses verify people's identities. Founded in 2012, it uses machine learning and AI technologies, including face detection and character recognition, to verify documents such as passports and ID cards, and to help companies with fraud prevention.

Onfido is headquartered in London and now has more than 400 employees across seven offices worldwide. In 2019 the company had over 1,500 customers, including Revolut, Monzo and Zipcar.

Benevolent AI

Aiming to disrupt the pharmaceutical sector, Benevolent AI's goal is to find medicines for diseases that have no treatment. Benevolent AI applies AI and machine learning tools together with other cutting-edge technologies to try to reinvent the way drugs are discovered and developed.

The business was founded in 2013 and has raised almost $300m in funding. Its software reduces drug development costs, decreases failure rates and increases the speed at which medicines are generated. Right now, it is focusing on searching for treatments for Covid-19.

Plum Fintech

Plum is an AI assistant that helps people manage their money and increase their savings. It uses a mix of AI and behavioural science to help users change the way they engage with their finances; for example, it points out savings they can afford by analysing their bank transactions.

Plum also allows its users to invest the money saved, as well as to easily switch household suppliers to secure better deals; the average customer can save roughly £230 a year on regular bills, it claims.

Poly AI

After meeting at the Machine Intelligence Lab at the University of Cambridge, Nikola Mrkšić, Pei-Hao Su and Tsung-Hsien Wen, a group of conversational AI experts, started Poly AI. CEO Mrkšić was previously the first engineer at Apple-acquired VocalIQ, which became an essential part of Siri.

Poly AI helps contact centres scale. The company's technology not only understands customers' queries but also addresses them in a conversational way, whether via voice, email or messaging. The company doesn't position itself as a replacement for human contact-centre agents, but as an enhancement that works alongside them. Poly AI has secured $12m in funding to date and works as a team of seven out of its London headquarters.


Where it Counts, U.S. Leads in Artificial Intelligence – Department of Defense

When it comes to advancements in artificial intelligence technology, China does have a lead in some places, like spying on its own people and using facial recognition technology to identify political dissenters. But those are areas where the U.S. simply isn't pointing its investments in artificial intelligence, said the director of the Joint Artificial Intelligence Center. Where it counts, the U.S. leads, he said.

"While it is true that the United States faces formidable technological competitors and challenging strategic environments, the reality is that the United States continues to lead in AI and its most important military applications," said Nand Mulchandani, during a briefing at the Pentagon.

The Joint Artificial Intelligence Center, which was stood up in 2018, serves as the official focal point of the department's AI strategy.

China leads in some places, Mulchandani said. "China's military and police authorities undeniably have the world's most advanced capabilities, such as unregulated facial recognition for universal surveillance and control of their domestic population, trained on Chinese video gathered from their systems, and Chinese language text analysis for internet and media censorship."

The U.S. is capable of doing similar things, he said, but doesn't. It's against the law, and it's not in line with American values.

"Our constitution and privacy laws protect the rights of U.S. citizens, and how their data is collected and used," he said. "Therefore, we simply don't invest in building such universal surveillance and censorship systems."

The department does invest in systems that both enhance warfighter capability, for instance, and also help the military protect and serve the United States, including during the COVID-19 pandemic.

The Project Salus effort, for instance, which began in March of this year, puts artificial intelligence to work helping to predict shortages for things like water, medicine and supplies used in the COVID fight, said Mulchandani.

"This product was developed in direct work with [U.S. Northern Command] and the National Guard," he said. "They have obviously a very unique role to play in ensuring that resource shortages ... are harmonized across an area that's dealing with the disaster."

Mulchandani said what the Guard didn't have was predictive analytics on where such shortages might occur, or real-time analytics for supply and demand. Project Salus, named for the Roman goddess of safety and well-being, fills that role.

"We [now have] roughly about 40 to 50 different data streams coming into project Salus at the data platform layer," he said. "We have another 40 to 45 different AI models that are all running on top of the platform that allow for ... the Northcom operations team ... to actually get predictive analytics on where shortages and things will occur."

As an AI-enabled tool, he said, Project Salus can be used to predict traffic bottlenecks, hotel vacancies and the best military bases to stockpile food during the fallout from a damaging weather event.

As the department pursues joint all-domain command and control, or JADC2, the JAIC is working to build in the needed AI capabilities, Mulchandani said.

"JADC2 is ... a collection of platforms that get stitched together and woven together [effectively into] a platform," Mulchandani said. "The JAIC is spending a lot of time and resources focused on building the AI components on top of JADC2. So if you can imagine a command and control system that is current and the way it's configured today, our job and role is to actually build out the AI components both from a data, AI modeling and then training perspective and then deploying those."

When it comes to AI and weapons, Mulchandani said the department and JAIC are involved there too.

"We do have projects going on under joint warfighting, which are actually going into testing," he said. "They're very tactical-edge AI, is the way I describe it. And that work is going to be tested. It's very promising work. We're very excited about it."

While Mulchandani didn't mention specific projects, he did say that while much of the JAIC's AI work will go into weapons systems, none of those right now are going to be autonomous weapons systems. The concepts of a human-in-the-loop and full human control of weapons, he said, "are still absolutely valid."

See the original post here:

Where it Counts, U.S. Leads in Artificial Intelligence - Department of Defense

Posted in Ai

Pentagon AI center shifts focus to joint war-fighting operations – C4ISRNet

The Pentagon's artificial intelligence hub is shifting its focus to enabling joint war-fighting operations, developing artificial intelligence tools that will be integrated into the Department of Defense's Joint All-Domain Command and Control efforts.

"As we have matured, we are now devoting special focus on our joint war-fighting operations mission initiative, which is focused on the priorities of the National Defense Strategy and its goal of preserving America's military and technological advantages over our strategic competitors," Nand Mulchandani, acting director of the Joint Artificial Intelligence Center, told reporters July 8. The AI capabilities JAIC is developing as part of the joint war-fighting operations mission initiative will use mature AI technology to create a decisive advantage for the American war fighter.

That marks a significant change from where JAIC stood more than a year ago, when the organization was still being stood up with a focus on using AI for efforts like predictive maintenance. That transformation appears to be driven by the DoD's focus on developing JADC2, a system-of-systems approach that will connect sensors to shooters in near-real time.

"JADC2 is not a single product. It is a collection of platforms that get stitched together, woven together, into effectively a platform. And JAIC is spending a lot of time and resources focused on building the AI component on top of JADC2," said the acting director.

According to Mulchandani, the fiscal 2020 spending on the joint war-fighting operations initiative is greater than JAIC spending on all other mission initiatives combined. In May, the organization awarded Booz Allen Hamilton a five-year, $800 million task order to support the joint war-fighting operations initiative. As Mulchandani acknowledged to reporters, that task order exceeds JAIC's budget for the next few years, and it will not be spending all of that money.

One example of the organization's joint war-fighting work is the fire support cognitive system, an effort JAIC is pursuing in partnership with the Marine Corps Warfighting Lab and the U.S. Army's Program Executive Office Command, Control and Communications-Tactical. That system, Mulchandani said, will manage and triage all incoming communications in support of JADC2.

Mulchandani added that JAIC was about to begin testing its new flagship joint war-fighting project, which he did not identify by name.

"We do have a project going on under joint war fighting, which is actually going to go into testing," he said. "It's very tactical-edge AI, is the way I'd describe it. That work is going to be tested. It's actually promising work; we're very excited about it."

"As I talked about the pivot from predictive maintenance and others to joint war fighting, that is probably the flagship project that we're sort of thinking about and talking about that will go out there," he added.

While left unnamed, the acting director assured reporters that the project would involve human operators and full human control.

"We believe that the current crop of AI systems today [...] are going to be cognitive assistance," he said. "Those types of information-overload cleanup are the types of products that we're actually going to be investing in."

"Cognitive assistance, JADC2, command and control: these are all pieces," he added.

Read more from the original source:

Pentagon AI center shifts focus to joint war-fighting operations - C4ISRNet

Posted in Ai

Workers Can Be Hired Back by Employers Using a Fully Automated AI Recruiter From Avrio – Business Wire

BOSTON--(BUSINESS WIRE)--Avrio now provides recruiters the first fully automated end-to-end recruiting technology, leveraging AI to increase recruiter efficiency while eliminating non-strategic manual tasks. "To Avrio Eftase!" means "Tomorrow has Arrived!" in Greek, and with this version, the world of tomorrow is here today for employers. The whole process takes minutes and hours, not days and weeks, to get top candidates.

Now, employers have a fully scalable AI recruiter that requires no upfront integrations, no messy data replication, and no ATS tracking codes or source-of-hire reports to reconcile. You post a job on the system and get hires; you pay only when you acquire a new employee.

The AI engine can find candidates via job postings and across resume databases, contact them directly, confirm both job skills and fit criteria, and find a time that works for them to connect with a human decision maker. Avrio is pre-integrated with Nexxt to publish jobs across 50 career sites, with access to a diversified talent network of more than 75 million candidates from the Nexxt database. Machine learning and semantic matching are used to drive efficiency throughout the entire process for candidates and employers.
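
The source-contact-screen-schedule flow described above can be sketched as a simple staged pipeline. Everything here is hypothetical; Avrio's actual system, data model, and APIs are not public, and the skill-matching rule is a deliberately naive stand-in.

```python
# Hypothetical sketch of an end-to-end recruiting pipeline: source candidates,
# contact them, screen for required skills, and schedule the ones who pass
# for a human decision maker. All names and data are invented.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    skills: set
    stages: list = field(default_factory=list)

REQUIRED_SKILLS = {"python", "sql"}

def screen(candidate: Candidate) -> bool:
    """Confirm job skills: does the candidate cover the required set?"""
    return REQUIRED_SKILLS <= candidate.skills

def run_pipeline(candidates: list) -> list:
    """Advance each candidate through the stages; return those scheduled."""
    scheduled = []
    for c in candidates:
        c.stages.append("sourced")
        c.stages.append("contacted")
        if screen(c):
            c.stages.append("screened")
            c.stages.append("scheduled")  # hand off to a human decision maker
            scheduled.append(c)
    return scheduled

pool = [
    Candidate("Ada", {"python", "sql", "ml"}),
    Candidate("Bob", {"excel"}),
]
print([c.name for c in run_pipeline(pool)])  # ['Ada']
```

The per-candidate `stages` list mirrors the article's point that the process is automated end to end, with a human entering only at the final decision.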

With Avrio, customers can take advantage of the HR tech industry's first and only risk-free business model to make their lives easier. Customers only pay for actual hires: there are no upfront user licenses, no lock-ins, and no CPC, PPC, PPV, or CPA to worry about. You pay to hire an employee.

Alex Knowles, Talent Manager at Copenhagen Capacity, an early Avrio customer, said, "Avrio is a proven solution, not just for employers but also talent attraction agencies looking to create growth in their cities. As the official organisation for investment promotion and economic development in Greater Copenhagen, we are excited to use the new capabilities to further our mission to recruit top talent from all across the world to work in Denmark."

"Avrio has taken a very unique and innovative approach when it comes to AI for hiring," said Nikos Livadas, Vice President of Strategic Alliances at Nexxt. "We are very excited to be partnering with Avrio to help create a compelling recipe for talent acquisition leaders, enabling them to increase candidate engagement while also decreasing time to hire. With Nexxt's more than 27 million resumes and 75 million candidates fueling Avrio's conversational AI sourcing and ranking platform, we can't wait to see how this new offering exceeds customer expectations in today's challenging environment."

"Job applicants have long been stymied by the black hole of hiring processes," said Javid Muhammedali, Head of Product at Avrio AI. "A responsive, scalable AI recruiter that can review resumes, ask personalized questions, have a full conversation, and answer candidates' questions is a compelling solution given all the uncertainty in the hiring process. HR leaders can have peace of mind in being able to scale up when called upon, and yet have a consistent process."

"We're excited to bring to market a revolutionary solution that accelerates hiring," said Nachi Junankar, CEO. "Recruiters and managers face a massive increase in applicants with a smaller team. At the same time, applicants are looking for employers to be more responsive. Avrio ensures that the right candidate gets to the front of the line and that both recruiters and applicants get the speed and effectiveness they deserve."

About Avrio AI Inc. Avrio, a leader in AI for recruiting, helps employers and staffing firms match, engage, and hire top talent. To see how AI is making breakthrough changes in recruiting, visit https://www.goavrio.com or book a demo at https://www.goavrio.com/chatbot

Read the original here:

Workers Can Be Hired Back by Employers Using a Fully Automated AI Recruiter From Avrio - Business Wire

Posted in Ai

Beyond the AI hype cycle: Trust and the future of AI – MIT Technology Review

There's no shortage of promises when it comes to AI. Some say it will solve all problems, while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Altered Carbon, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us, as creators and consumers of AI technology, to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

This content was produced by Nuance. It was not written by MIT Technology Review's editorial staff.

Joe Petro is CTO at Nuance.

Those stories also provide an important lesson for those of us who spend our days designing and building AI applications: trust is a critical factor for determining the success of an AI application. Who wants to interact with a system they dont trust?

Even as a nascent technology, AI is incredibly complex and powerful, delivering benefits by performing computations and detecting patterns in huge data sets with speed and efficiency. But that power, combined with "black box" perceptions of AI and its appetite for user data, introduces a lot of variables, unknowns, and possible unintended consequences. Hidden within practical applications of AI is the fact that trust can have a profound effect on the user's perception of the system, as well as of the associated companies, vendors, and brands that bring these applications to market.

Advancements such as ubiquitous cloud and edge computational power make AI more capable and effective while making it easier and faster to build and deploy applications. Historically, the focus has been on software development and user-experience design. But it's no longer a case of simply designing a system that solves for x. It is our responsibility to create an engaging, personalized, frictionless, and trustworthy experience for each user.

The ability to do this successfully is largely dependent on user data. System performance, reliability, and user confidence in AI model output are affected as much by the quality of the model design as by the data going into it. Data is the fuel that powers the AI engine, virtually converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance, and the driver's ability to compete, an AI system trained with incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data stewardship practices by AI developers and vendors are critical for building effective AI models as well as for creating customer acceptance, satisfaction, and retention.

Responsible data stewardship establishes a chain of trust that extends from consumers to the companies collecting user data and to those of us building AI-powered systems. It's our responsibility to know and understand privacy laws and policies and to consider security and compliance during the primary design phase. We must have a deep understanding of how the data is used and who has access to it. We also need to detect and eliminate hidden biases in the data through comprehensive testing.

Treat user data as sensitive intellectual property (IP). It is the proprietary source code used to build AI models that solve specific problems, create bespoke experiences, and achieve targeted desired outcomes. This data is derived from personal user interactions, such as conversations between consumers and call agents, doctors and patients, and banks and customers. It is sensitive because it creates intimate, highly detailed digital user profiles based on private financial, health, biometric, and other information.

User data needs to be protected and used as carefully as any other IP, especially for AI systems in highly regulated industries such as health care and financial services. Doctors use AI speech, natural-language understanding, and conversational virtual agents created with patient health data to document care and access diagnostic guidance in real time. In banking and financial services, AI systems process millions of customer transactions and use biometric voiceprint, eye movement, and behavioral data (for example, how fast you type, the words you use, which hand you swipe with) to detect possible fraud or authenticate user identities.

Health-care providers and businesses alike are creating their own branded digital front door that provides efficient, personalized user experiences through SMS, web, phone, video, apps, and other channels. Consumers also are opting for time-saving real-time digital interactions. Health-care and commercial organizations rightfully want to control and safeguard their patient and customer relationships and data in each method of digital engagement to build brand awareness, personalized interactions, and loyalty.

Every AI vendor and developer not only needs to be aware of the inherently sensitive nature of user data but also of the need to operate with high ethical standards to build and maintain the required chain of trust.

Here are key questions to consider:

Who has access to the data? Have a clear and transparent policy that includes strict protections such as limiting access to certain types of data, and prohibiting resale or third-party sharing. The same policies should apply to cloud providers or other development partners.

Where is the data stored, and for how long? Ask where the data lives (cloud, edge, device) and how long it will be kept. The implementation of the European Union's General Data Protection Regulation, the California Consumer Privacy Act, and the prospect of additional state and federal privacy protections should make data storage and retention practices top of mind during AI development.

How are benefits defined and shared? AI applications must also be tested with diverse data sets to reflect the intended real-world applications, eliminate unintentional bias, and ensure reliable results.

How does the data manifest within the system? Understand how data will flow through the system. Is sensitive data accessed and essentially processed by a neural net as a series of 0s and 1s, or is it stored in its original form with medical or personally identifying information? Establish and follow appropriate data retention and deletion policies for each type of sensitive data.

Who can realize commercial value from user data? Consider the potential consequences of data-sharing for purposes outside the original scope or source of the data. Account for possible mergers and acquisitions, possible follow-on products, and other factors.

Is the system secure and compliant? Design and build for privacy and security first. Consider how transparency, user consent, and system performance could be affected throughout the product or service lifecycle.

Biometric applications help prevent fraud and simplify authentication. HSBC's VoiceID voice biometrics system has successfully prevented the theft of nearly £400 million (about $493 million) by phone scammers in the UK. It compares a person's voiceprint against thousands of individual speech characteristics in an established voice record to confirm a user's identity. Other companies use voice biometrics to validate the identities of remote call center employees before they can access proprietary systems and data. The need for such measures is growing as consumers conduct more digital and phone-based interactions.
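
At its core, this kind of speaker verification compares an embedding of the caller's voice with an enrolled voiceprint. The sketch below uses random vectors and cosine similarity purely to show the mechanics; real systems like VoiceID use learned speaker embeddings and calibrated thresholds, and the 0.8 threshold here is an arbitrary illustration.

```python
# Toy speaker-verification sketch: accept a caller when their new voice
# embedding is close enough (cosine similarity) to the enrolled voiceprint.
# Random vectors stand in for real learned speaker embeddings.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_speaker(enrolled: np.ndarray, sample: np.ndarray, threshold: float = 0.8) -> bool:
    """Accept the caller if the sample matches the enrolled voiceprint."""
    return cosine_similarity(enrolled, sample) >= threshold

rng = np.random.default_rng(0)
enrolled = rng.normal(size=256)
same_speaker = enrolled + rng.normal(scale=0.1, size=256)  # small within-speaker drift
impostor = rng.normal(size=256)                            # unrelated voice

print(verify_speaker(enrolled, same_speaker))  # True
print(verify_speaker(enrolled, impostor))      # False
```

In high dimensions, unrelated random embeddings are nearly orthogonal, which is why the impostor's similarity falls far below any reasonable threshold.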

Intelligent applications deliver secure, personalized, digital-first customer service. A global telecommunications company is using conversational AI to create consistent, secure, and personalized customer experiences across its large and diverse brand portfolio. With customers increasingly engaging across digital channels, the company looked to technology partners to expand its own in-house expertise while ensuring it would retain control of its data in deploying a virtual assistant for customer service.

A top-three retailer uses voice-powered virtual assistant technology to let shoppers upload photos of items they've seen offline, then presents items for them to consider buying based on those images.

Ambient AI-powered clinical applications improve health-care experiences while alleviating physician burnout. EmergeOrtho in North Carolina is using the Nuance Dragon Ambient eXperience (DAX) application to transform how its orthopedic practices across the state engage with patients and document care. The ambient clinical intelligence telehealth application accurately captures each doctor-patient interaction in the exam room or on a telehealth call, then automatically updates the patient's health record. Patients have the doctor's full attention, while the application streamlines the burnout-causing electronic paperwork physicians need to complete to get paid for delivering care.

AI-driven diagnostic imaging systems ensure that patients receive necessary follow-up care. Radiologists at multiple hospitals use AI and natural language processing to automatically identify and extract recommendations for follow-up exams for suspected cancers and other diseases seen in X-rays and other images. The same technology can help manage a surge of backlogged and follow-up imaging as covid-19 restrictions ease, allowing providers to schedule procedures, begin revenue recovery, and maintain patient care.

As digital transformation accelerates, we must solve the challenges we face today while preparing for an abundance of future opportunities. At the heart of that effort is the commitment to building trust and data stewardship into our AI development projects and organizations.

See the original post:

Beyond the AI hype cycle: Trust and the future of AI - MIT Technology Review

Posted in Ai

COVID-19 Impact & Recovery Analysis – Artificial Intelligence Platforms Market 2020-2024 | Rise in Demand for AI-based Solutions to Boost Growth |…

LONDON--(BUSINESS WIRE)--Technavio has been monitoring the artificial intelligence platforms market, which is poised to grow by USD 12.51 billion during 2020-2024, progressing at a CAGR of over 33% during the forecast period. The report offers an up-to-date analysis of the current market scenario, the latest trends and drivers, and the overall market environment.
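
The two headline figures imply a base-year market size, which is worth sanity-checking. The calculation below treats the window as four compounding years and 33% as exact; both are assumptions, since the report says only "over 33%" and does not state its base year.

```python
# Back out the implied base-year market size from the report's figures:
# incremental growth of USD 12.51B over 2020-2024 at a ~33% CAGR.
# Assumptions (not from the report): exactly 4 compounding years, CAGR = 33%.
cagr = 0.33
years = 4
incremental_growth_busd = 12.51

growth_factor = (1 + cagr) ** years           # ~3.13x over the period
base_busd = incremental_growth_busd / (growth_factor - 1)
print(f"Implied base-year market size: ${base_busd:.2f}B")  # ~ $5.88B
```

Under those assumptions the figures are internally consistent with a market of roughly $6 billion at the start of the forecast period.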

Technavio suggests three forecast scenarios (optimistic, probable, and pessimistic) considering the impact of COVID-19. Please Request Latest Free Sample Report on COVID-19 Impact

The market is concentrated, and the degree of concentration will accelerate during the forecast period. Alibaba Group Holding Ltd., Alphabet Inc., Amazon Web Services Inc., International Business Machines Corp., Microsoft Corp., Palantir Technologies Inc., Salesforce.com Inc., SAP SE, SAS Institute Inc., and Tata Consultancy Services Ltd. are some of the major market participants. To make the most of the opportunities, market vendors should focus more on the growth prospects in the fast-growing segments, while maintaining their positions in the slow-growing segments.

The rise in demand for AI-based solutions has been instrumental in driving the growth of the market. However, the rise in data privacy concerns might hamper market growth.

Artificial Intelligence Platforms Market 2020-2024: Segmentation

Artificial Intelligence Platforms Market is segmented as below:

To learn more about the global trends impacting the future of market research, download a free sample: https://www.technavio.com/talk-to-us?report=IRTNTR44235

Artificial Intelligence Platforms Market 2020-2024: Scope

Technavio presents a detailed picture of the market by way of the study, synthesis, and summation of data from multiple sources. Our artificial intelligence platforms market report covers the following areas:

This study identifies investments in AI start-ups as one of the prime reasons driving the artificial intelligence platforms market growth during the next few years.

Artificial Intelligence Platforms Market 2020-2024: Vendor Analysis

We provide a detailed analysis of around 25 vendors operating in the artificial intelligence platforms market, including Alibaba Group Holding Ltd., Alphabet Inc., Amazon Web Services Inc., International Business Machines Corp., Microsoft Corp., Palantir Technologies Inc., Salesforce.com Inc., SAP SE, SAS Institute Inc., and Tata Consultancy Services Ltd. Backed by competitive intelligence and benchmarking, our research reports on the artificial intelligence platforms market are designed to provide entry support, customer profiling, and M&A as well as go-to-market strategy support.

Register for a free trial today and gain instant access to 17,000+ market research reports.

Technavio's SUBSCRIPTION platform

Artificial Intelligence Platforms Market 2020-2024: Key Highlights

Table Of Contents:

Executive Summary

Market Landscape

Market Sizing

Five Forces Analysis

Market Segmentation by Deployment

Customer Landscape

Geographic Landscape

Market Drivers: Demand-led Growth

Market Challenges

Market Trends

Vendor Landscape

Vendor Analysis

Appendix

About Us

Technavio is a leading global technology research and advisory company. Its research and analysis focus on emerging market trends and provide actionable insights to help businesses identify market opportunities and develop effective strategies to optimize their market positions. With over 500 specialized analysts, Technavio's report library consists of more than 17,000 reports and counting, covering 800 technologies and spanning 50 countries. Its client base consists of enterprises of all sizes, including more than 100 Fortune 500 companies. This growing client base relies on Technavio's comprehensive coverage, extensive research, and actionable market insights to identify opportunities in existing and potential markets and assess their competitive positions within changing market scenarios.

See the original post here:

COVID-19 Impact & Recovery Analysis - Artificial Intelligence Platforms Market 2020-2024 | Rise in Demand for AI-based Solutions to Boost Growth |...

Posted in Ai

Adobe tests an AI recommendation tool for headlines and images – TechCrunch

Team members at Adobe have built a new way to use artificial intelligence to automatically personalize a blog for different visitors.

This tool was built as part of the Adobe Sneaks program, where employees can create demos to show off new ideas, which are then showcased (virtually, this year) at the Adobe Summit. While the Sneaks start out as demos, Adobe Experience Cloud Senior Director Steve Hammond told me that 60% of Sneaks make it into a live product.

Hyman Chung, a senior product manager for Adobe Experience Cloud, said that this Sneak was designed for content creators and content marketers who are probably seeing more traffic during the coronavirus pandemic (Adobe says that in April, its own blog saw a 30% month-over-month increase), and who may be looking for ways to increase reader engagement while doing less work.

So in the demo, the Experience Cloud can go beyond simple A/B testing and personalization, leveraging the company's AI technology Adobe Sensei to suggest different headlines, images (which can come from a publisher's media library or Adobe Stock), and preview blurbs for different audiences.

For example, Chung showed me a mocked-up blog for a tourism company, where a single post about traveling to Australia could be presented differently to thrill-seekers, frugal travelers, partygoers and others. Human writers and editors can still edit the previews for each audience segment, and they can also consult a Snippet Quality Score to see the details behind Senseis recommendation.
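
The per-segment selection described above can be sketched as choosing the highest-scoring variant for each audience. Everything in this sketch is hypothetical: the segments, headlines, and keyword-count score are invented stand-ins, since the models behind Adobe Sensei and its Snippet Quality Score are not public.

```python
# Hypothetical sketch of per-audience headline selection: score each variant
# for a segment and pick the best one. A naive keyword-affinity count stands
# in for a real quality model.

AUDIENCE_KEYWORDS = {
    "thrill-seekers": {"adventure", "extreme", "adrenaline"},
    "frugal travelers": {"budget", "cheap", "deals"},
    "partygoers": {"nightlife", "party", "festivals"},
}

HEADLINE_VARIANTS = [
    "Australia on a Budget: Cheap Deals Down Under",
    "Extreme Adventure Awaits in the Australian Outback",
    "Australia After Dark: Nightlife and Festivals",
]

def snippet_score(headline: str, segment: str) -> int:
    """Stand-in score: how many of the segment's keywords appear in the headline."""
    words = {w.strip(":,.").lower() for w in headline.split()}
    return len(words & AUDIENCE_KEYWORDS[segment])

def pick_headline(segment: str) -> str:
    """Choose the variant with the highest score for this audience segment."""
    return max(HEADLINE_VARIANTS, key=lambda h: snippet_score(h, segment))

print(pick_headline("frugal travelers"))  # the budget-themed variant
```

A human editor retains control in the demo; in this sketch that would mean reviewing `pick_headline`'s choice and its score before publishing.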

Hammond said the demo illustrates Adobe's general approach to AI, which is more about applying automation to specific use cases than about trying to build a broad platform. He also noted that the AI isn't changing the content itself, just the way the content is promoted on the main site.

"This is leveraging the creativity you've got and matching it with content," he said. "You can streamline and adapt the content to different audiences without changing the content itself."

From a privacy perspective, Hammond noted that these audience personas are usually based on information that visitors have opted to share with a brand or website.

More here:

Adobe tests an AI recommendation tool for headlines and images - TechCrunch

Posted in Ai

New Research Reveals Adoption and Implementation of Artificial Intelligence in the Enterprise – GlobeNewswire

SAN FRANCISCO, July 09, 2020 (GLOBE NEWSWIRE) -- Informa Tech media brands InformationWeek and ITPro Today today announced findings from their latest research survey, the 2020 State of Artificial Intelligence. The team surveyed technology decision makers across North American companies to uncover the ways organizations are approaching and implementing emerging technologies, specifically artificial intelligence (AI) and the Internet of Things (IoT), in order to grow and get ahead of the competition.

Key Findings in the 2020 State of Artificial Intelligence

To download a complimentary copy of The 2020 State of Artificial Intelligence, click here.

Media interested in receiving a copy of the report or the State of AI infographic should contact Briana Pontremoli at Briana.Pontremoli@informa.com.

2020 State of Artificial Intelligence Report Methodology: The survey collected opinions from nearly 300 business professionals at companies engaged with AI-related projects. Nearly 90% of respondents have an IT or technology-related job function, such as application development, security, Internet of Things, networking, cloud, or engineering. Just over half of respondents work in a management capacity, with titles such as C-level executive, director, manager, or vice president. Half are from large companies with 1,000 or more employees, and 20% work at companies with 100 to 999 employees.

About Informa Tech: Informa Tech is a market-leading provider of integrated research, media, training, and events to the global technology community. We're an international business of more than 600 colleagues, operating in more than 20 markets. Our aim is to inspire the technology community to design, build, and run a better digital world through research, media, training, and event brands that inform, educate, and connect. Over 7,000 professionals subscribe to our research, 225,000 delegates attend our events, over 18,000 students participate in our training programs each year, and nearly 4 million people visit our digital communities each month. Learn more about Informa Tech.

Media Contact: Briana Pontremoli, Informa Tech PR, briana.pontremoli@informa.com

See more here:

New Research Reveals Adoption and Implementation of Artificial Intelligence in the Enterprise - GlobeNewswire

Posted in Ai

Lunar Rover Footage Upscaled With AI Is as Close as You’ll Get to the Experience of Driving on the Moon – Gizmodo

The last time astronauts walked on the moon was in December of 1972, decades before high-definition video cameras were available. They relied on low-res, grainy analog film to record their adventures, which makes it hard for viewers to feel connected to what's going on. But using modern AI techniques to upscale classic NASA footage and increase the frame rate suddenly makes it feel like you're actually on the moon.

The YouTube channel Dutchsteammachine has recently uploaded footage from the Apollo 17 mission that looks like nothing you've ever seen before, unless you were an actual Apollo astronaut. Originally captured on 16-millimeter film at just 12 frames per second, footage of the lunar rover heading to Station 4, located on the rim of the moon's Shorty Crater, was upscaled to a resolution of 4K and interpolated so that it now runs at 60 frames per second using the DAIN artificial intelligence platform.
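
Going from 12 fps to 60 fps means synthesizing four new frames between every original pair. DAIN does this with depth-aware neural networks; the sketch below uses simple linear blending instead, purely to make the frame arithmetic concrete.

```python
# Toy frame-interpolation sketch: insert linearly blended frames between
# consecutive originals. Real interpolators like DAIN synthesize motion- and
# depth-aware frames; linear blending is only a stand-in for the idea.

import numpy as np

def interpolate_frames(a: np.ndarray, b: np.ndarray, n_between: int) -> list:
    """Return n_between blended frames between a and b (exclusive)."""
    return [(1 - t) * a + t * b
            for t in (i / (n_between + 1) for i in range(1, n_between + 1))]

def upsample(frames: list, src_fps: int = 12, dst_fps: int = 60) -> list:
    """Insert blended frames so src_fps footage plays back at dst_fps."""
    n_between = dst_fps // src_fps - 1   # 4 new frames per original pair
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.extend(interpolate_frames(a, b, n_between))
    out.append(frames[-1])
    return out

# Two tiny grayscale "frames": all-black and all-white.
f0 = np.zeros((4, 4))
f1 = np.ones((4, 4))
result = upsample([f0, f1])
print(len(result))  # 6 frames: the original pair plus 4 interpolated
```

The intermediate frames step evenly from 0.0 to 1.0 in brightness, which is exactly the "motion smoothing" effect the article describes, done far more intelligently by the neural model.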

Most of us immediately turn off the motion-smoothing options on a new TV, but here's a demonstration of how, when done properly, frame interpolation can dramatically change the feeling of what you're watching. Even without immersive VR goggles, you genuinely feel like you're riding shotgun on the lunar rover.

The footage has been synced to the original audio from this particular mission, which also serves to humanize the astronauts if you listen along. Oftentimes, when bundled up in their thick spacesuits, the Apollo astronauts seem like characters from a science fiction movie. But listening to their interactions and narration of what they're experiencing during this mission, they feel human again, like a couple of friends out on a casual Sunday afternoon drive, even though that drive is taking place over 238,000 miles away from Earth.


See the original post:

Lunar Rover Footage Upscaled With AI Is as Close as You'll Get to the Experience of Driving on the Moon - Gizmodo

Posted in Ai

AI researchers create testing tool to find bugs in NLP from Amazon, Google, and Microsoft – VentureBeat

AI researchers have created a language-model testing tool that has discovered major bugs in commercially available cloud AI offerings from Amazon, Google, and Microsoft. Yesterday, a paper detailing the CheckList tool received the Best Paper award from organizers of the Association for Computational Linguistics (ACL) conference. The ACL conference, which took place online this week, is one of the largest annual gatherings for researchers creating language models.

NLP models today are often evaluated based on how they perform on a series of individual tasks, such as answering questions using benchmark data sets with leaderboards like GLUE. CheckList instead takes a task-agnostic approach, allowing people to create tests that fill in cells in a spreadsheet-like matrix with capabilities (in rows) and test types (in columns), along with visualizations and other resources.
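As a rough illustration of that matrix idea, here is a minimal, hypothetical harness: capabilities as rows, test types as columns, run against any model exposed as a plain predict function. The toy keyword model and the tests below are invented for this sketch and are far simpler than the real tool.

```python
# Minimal sketch of CheckList's organizing idea: a matrix of capabilities
# (rows) x test types (columns), runnable against any predict function.
# The toy model and tests are invented for illustration.

def toy_sentiment(text):
    # Stand-in "model": counts positive vs. negative keywords.
    pos = sum(w in text.lower() for w in ("good", "great", "love"))
    neg = sum(w in text.lower() for w in ("bad", "awful", "hate"))
    return "positive" if pos >= neg else "negative"

# capability -> test type -> list of (input, expected) cases
suite = {
    "Vocabulary": {
        "MFT": [("I love this airline", "positive"),
                ("I hate this airline", "negative")],
    },
    "Negation": {
        "MFT": [("I thought the plane would be awful, but it wasn't", "positive")],
    },
}

def run_suite(suite, model):
    """Return {(capability, test_type): failure_rate} for every cell."""
    report = {}
    for cap, types in suite.items():
        for ttype, cases in types.items():
            fails = sum(model(text) != expected for text, expected in cases)
            report[(cap, ttype)] = fails / len(cases)
    return report

report = run_suite(suite, toy_sentiment)
```

Even this crude harness surfaces the negation failure the paper describes: the keyword model passes the vocabulary cell but fails the negation cell completely.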

Analysis with CheckList found that about one in four sentiment analysis predictions by Amazon's Comprehend change when a random shortened URL or Twitter handle is placed in text, and that Google Cloud's Natural Language and Amazon's Comprehend make mistakes when the names of people or locations are changed in text.

"The [sentiment analysis] failure rate is near 100% for all commercial models when the negation comes at the end of the sentence (e.g. 'I thought the plane would be awful, but it wasn't'), or with neutral content between the negation and the sentiment-laden word," the paper reads.
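The URL and Twitter-handle failures are what CheckList calls invariance tests: an irrelevant perturbation should not flip the prediction. Below is a hedged sketch of such a check, using a deliberately fragile toy classifier, not any vendor's API.

```python
# Sketch of an invariance test: appending an irrelevant URL should not
# change a prediction. The toy model is deliberately fragile (its score
# leaks the token count), mimicking the failure mode reported above.

def fragile_sentiment(text):
    # Toy model: keyword evidence minus token count, so harmless padding
    # can flip the decision.
    score = 3 * text.lower().count("great") - len(text.split())
    return "positive" if score > 0 else "negative"

def invariance_failures(model, texts, perturbation):
    """Count inputs whose prediction flips after an irrelevant perturbation."""
    return sum(model(t) != model(t + " " + perturbation) for t in texts)

texts = ["great trip", "bad service overall today"]
fails = invariance_failures(fragile_sentiment, texts, "https://t.co/xyz")
```

The check needs no labels at all, which is why it scales: any pair of (original, perturbed) inputs where the model disagrees with itself is a bug.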

CheckList also found shortcomings when paraphrasing responses to Quora questions, despite models surpassing human accuracy on the Quora Question Pairs benchmark challenge. Creators of CheckList from Microsoft, the University of Washington, and the University of California, Irvine say the results indicate that the approach can improve any existing NLP model.

"While traditional benchmarks indicate that models on these tasks are as accurate as humans, CheckList reveals a variety of severe bugs, where commercial and research models do not effectively handle basic linguistic phenomena such as negation, named entities, coreferences, semantic role labeling, etc., as they pertain to each task," the paper reads. NLP practitioners with CheckList created twice as many tests and found almost three times as many bugs as users without it.

Google's BERT and Facebook AI's RoBERTa were also evaluated using CheckList. The authors said BERT exhibited gender bias in machine comprehension, overwhelmingly predicting men as doctors, for example. BERT was also found to always make positive predictions about people who are straight or Asian and negative predictions when dealing with text about people who are atheist, Black, gay, or lesbian. An analysis in early 2020 also found systemic bias among large-scale language models.

In recent months, some of the largest Transformer-based language models yet devised have come into being, from Nvidia's Megatron to Microsoft's Turing NLG. Large language models have racked up impressive scores on particular tasks, but some NLP researchers argue that a focus on human-level performance on individual tasks ignores ways in which NLP systems are still brittle or less than robust.

As part of a use case test with the team at Microsoft in charge of Text Analytics, a model currently in use by customers that's gone through multiple evaluations, CheckList found previously unknown bugs. The Microsoft team will now use CheckList as part of its workflow when evaluating NLP systems. A group of people from industry and academia testing AI with the tool over the span of two hours was also able to discover inaccuracies or bugs in state-of-the-art NLP models. An open source version of CheckList is currently available on GitHub.

Sometimes referred to as black-box testing, behavioral testing is an approach common in software engineering but not in AI. CheckList can run tests for tasks like sentiment analysis, machine comprehension, and duplicate question detection, probing capabilities such as robustness, fairness, and logic across those tasks.

The authors are unequivocal in their conclusion that benchmark tasks alone are not sufficient for evaluating NLP models, but they also say that CheckList should complement, not replace, existing challenges and benchmark data sets used for measuring the performance of language models.

"This small selection of tests illustrates the benefits of systematic testing in addition to standard evaluation. These tasks may be considered solved based on benchmark accuracy results, but the tests highlight various areas of improvement, in particular failure to demonstrate basic skills that are de facto needs for the task at hand," the paper reads.

Other noteworthy work at ACL includes research by University of Washington professor Emily Bender and Saarland University professor Alexander Koller that won the best theme award. The paper argues that progress on large neural network NLP models such as GPT-3 or BERT derivatives is laudable, but that members of the media and academia should not refer to large neural networks as capable of understanding or comprehension, and that clarity and humility are needed in the NLP field when defining ideas like meaning or understanding.

"While large neural language models may well end up being important components of an eventual full-scale solution to human-analogous natural language understanding, they are not nearly-there solutions to this grand challenge," the report reads.

Finally, a system from the U.S. Army Research Lab, University of Illinois, Urbana-Champaign, and Columbia University won the Best Demo paper award for its system named GAIA, which allows for text queries of multimedia like photos and videos.

Read more:

AI researchers create testing tool to find bugs in NLP from Amazon, Google, and Microsoft - VentureBeat

Posted in Ai

Infervision Receives FDA Clearance for the InferRead Lung CT.AI Product – BioSpace

PHILADELPHIA, July 9, 2020 /PRNewswire/ -- Infervision is pleased to announce the U.S. Food and Drug Administration (FDA) 510(k) clearance of the InferRead Lung CT.AI product, which uses state-of-the-art artificial intelligence and deep learning technology to automatically perform lung segmentation while accurately identifying and labeling nodules of different types. InferRead Lung CT.AI is designed to support concurrent reading and can aid radiologists in pulmonary nodule detection during the review of chest CT scans, increasing accuracy and efficiency. With five years of international clinical use, Infervision's InferRead Lung CT.AI application is a robust and powerful tool to assist the radiologist.

InferRead Lung CT.AI is currently in use at over 380 hospitals and imaging centers globally. More than 55,000 cases daily are being processed by the system and over 19 million patients have already benefited from this advanced AI technology. "Fast, workflow friendly, and accurate are the three key areas we have emphasized during product development. We're very excited to be able to make our InferRead Lung CT.AI solution available to the North American market. Our clients tell us it has great potential to help provide improved outcomes for providers and patients alike," said Matt Deng, Ph.D., Director of Infervision North America. The Company offers the system under a number of pricing models to make it easy to acquire.

The company predicts the system may also be of great benefit to lung cancer screening (LCS) programs across the nation. Lung cancer is the second most common cancer in both men and women in the U.S. The five-year survival rate is 60% if the disease is discovered at an early stage, but lower than 10% if it progresses to later stages without timely follow-up and treatment. Lung cancer screening programs have been designed to encourage the early diagnosis and treatment of the high-risk population meeting certain criteria. The screening process involves low-dose CT (LDCT) scans to determine any presence of lung nodules or early-stage lung disease. However, small nodules can be very difficult to detect, and missed diagnoses are not uncommon.

"The tremendous potential for lung cancer screening to reduce mortality in the U.S. is very much unrealized due to a combination of reasons. Based on our experience reviewing the algorithm for the past several months and my observations of its extensive use and testing internationally, I believe that Infervision's InferRead Lung CT.AI application can serve as a robust lung nodule "spell-checker" with the potential to improve diagnostic accuracy, reduce reading times, and integrate with the image review workflow," said Eliot Siegel, M.D., Professor and Vice Chair of research information systems in radiology at the University of Maryland School of Medicine.

InferRead Lung CT.AI is now FDA cleared, and has also received the CE mark in Europe. "This is the first FDA clearance for our deep-learning-based chest CT algorithm and it will lead the way to better integration of advanced A.I. solutions to help the healthcare clinical workflow in the region," according to Matt Deng. "This marks a great start in the North American market, and we are expecting to provide more high-performance AI tools in the near future."

About Infervision

Infervision is committed to the clinical application of artificial intelligence and deep learning technologies in health care, providing AI-based platforms and tools fully integrated with medical workflows. Health providers in over 10 countries across Asia, Europe, and North America use Infervision's highly precise and efficient clinical tools, empowering them with improved clinical insights. Infervision currently has 8 global offices and over 300 employees worldwide. Each day Infervision helps process over 55,000 exams, with 19 million case reviews cumulatively since 2015. Learn more about Infervision's product suites at global.infervision.com

Reference for LC survival rate: https://seer.cancer.gov/csr/1975_2017/browse_csr.php?sectionSEL=15&pageSEL=sect_15_table.12

Contact:

Haiyun Wang, Marketing Manager, whaiyun@infervision.com, +1 765-637-8892

View original content:http://www.prnewswire.com/news-releases/infervision-receives-fda-clearance-for-the-inferread-lung-ctai-product-301091145.html

SOURCE Infervision North America

Original post:

Infervision Receives FDA Clearance for the InferRead Lung CT.AI Product - BioSpace

Posted in Ai

AI hiring tools aim to automate every step of recruiting – Quartz

The firms that sell AI tools to automate recruiting have started to work the pandemic into their pitches to prospective clients: As the economy tanks and the hiring process moves almost entirely online, AI recruiting tools offer a chance to save some money and make use of new troves of digital data on prospective candidates.

In fact, the field is expected to expand during the crisis and has been attracting new investment. It's not just automated resume-sifting: there are firms competing to automate every stage of the hiring process. And while the machines seldom make hiring decisions on their own, critics say their use can perpetuate discrimination and inequality.

AI firm Textio claims it can optimize every word of a job posting, using a machine learning model that correlates certain turns of phrase with better hiring outcomes. Companies hiring in California, for example, are advised to describe things as "awesome" to appeal to local job seekers, while New York employers are counseled to avoid the adjective.

Big-name firms like LinkedIn and ZipRecruiter use matchmaking algorithms to comb through hundreds of millions of job postings to connect candidates with compatible companies. Smaller competitors, like GoArya, seek to differentiate themselves by scraping data from the internet, including social media profiles, to inform recruiting decisions.

Firms like Mya promise to automate the task of reaching out to candidates via email, text, WhatsApp, or Facebook Messenger, using natural language processing to have "open-ended, natural, and dynamic conversations." The company's chatbots even conduct basic screening interviews, filtering out early-stage applicants who don't meet the employer's qualifications. Other companies, like XOR and Paradox, sell chatbots designed to schedule interviews and field applicants' questions.

Some AI vendors, including Ideal, CVViZ, Skillate, and SniperAI, promise to cut the drudgery of hiring by automatically comparing applicants' resumes with those of current employees. Tools like these have faced criticism for recreating existing inequalities: even if the algorithms are programmed to ignore traits like race or gender, they might learn from past hiring data to pick up on proxies for these traits, for example prioritizing candidates who played lacrosse or are named Jared. Amazon developed its own screener and quickly scrapped it in 2018 after finding it was biased against women.
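The proxy problem can be shown in a few lines: a screener that never consults gender can still reproduce a gendered outcome when hiring history correlates a harmless-looking feature with it. The toy data and field names below are invented for illustration, not drawn from any vendor's system.

```python
# Sketch of proxy leakage (invented toy data): the learned rule never sees
# "gender", but past hires skewed along a correlated hobby, so the hobby
# becomes a stand-in for the protected trait.

past = [
    {"lacrosse": 1, "gender": "m", "hired": 1},
    {"lacrosse": 1, "gender": "m", "hired": 1},
    {"lacrosse": 0, "gender": "f", "hired": 0},
    {"lacrosse": 0, "gender": "f", "hired": 0},
]

def learn_rule(data, feature):
    """'Learn' the historical hire rate per feature value; gender is never consulted."""
    rates = {}
    for v in {row[feature] for row in data}:
        rows = [r for r in data if r[feature] == v]
        rates[v] = sum(r["hired"] for r in rows) / len(rows)
    return lambda applicant: rates[applicant[feature]] >= 0.5

screen = learn_rule(past, "lacrosse")
# Two equally qualified applicants, differing only in the proxy feature:
passes_m_coded = screen({"lacrosse": 1})
passes_f_coded = screen({"lacrosse": 0})
```

Dropping the protected column does nothing here; the bias survives through the correlated feature, which is why audits must look at outcomes, not just inputs.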

Recruiting firm HireVue, which boasts 700 corporate clients including Hilton and Goldman Sachs, sells an AI tool that analyzes interviewees' facial movements, word choice, and speaking voice to assign them an employability score. The platform is so ubiquitous in industries like finance and hospitality that some colleges have taken to coaching interviewees on how to speak and move to appeal to the platform's algorithms.

AI firm Humantic offers to "understand every individual without spending your time or theirs" by using AI to create psychological profiles of applicants based on the words they use in resumes, cover letters, LinkedIn profiles, and any other piece of text they submit.

Meanwhile, Pymetrics puts current and prospective employees through a series of 12 games to glean data about their personalities. Its algorithms use the data to find applicants that fit company culture. In a 2017 presentation, a Pymetrics representative demonstrated a game that required users to react when a red circle appears but do nothing when they see a green circle. "That game was actually looking at your levels of impulsivity, it was looking at your attention span, and it was looking at how you learn from your mistakes," she told the crowd. Critics suggest the games might just measure which candidates are good at puzzles.

Follow this link:

AI hiring tools aim to automate every step of recruiting - Quartz

Posted in Ai

Hellobike unveils trifecta of innovative shared mobility AI technologies at WAIC2020 – Yahoo Finance

SHANGHAI, July 10, 2020 /PRNewswire/ -- Hellobike, China's two-wheel transport industry leader, has unveiled three revolutionary shared mobility AI technologies at the 2020 World Artificial Intelligence Conference (WAIC2020), taking place virtually between 9 and 11 July. In line with the conference theme, 'Intelligent Connectivity, Indivisible Community', Hellobike showcased its independent research and development into solutions that enable cities to create convenient, greener urban transportation ecosystems.

Hellobike's non-motorized vehicle safety management system

During its presentation on 10 July, Hellobike unveiled three innovative technologies that leverage AI, big data, cloud infrastructure and the IoT: the Hermes road safety system, non-motorized vehicle safety management system, and fixed-point return. Hellobike's participation in WAIC2020 follows its highly successful debut at the conference last year, where the company unveiled exciting AI projects including the Hello Brain smart transportation OS and the Argus visual interaction system.

Hellobike's new model A40

"We are honored to take part in WAIC2020 for the second year running. As the shared bike industry leader, WAIC2020 is the ultimate platform for us to demonstrate how we harness AI technology and work hand-in-hand with the state to build the city of the future," said Li Kaizhu, President of Hellobike.

Hellobike's latest technologies usher in the 3.0 era of China's bike-sharing industry: a new model that sees shared bicycles organically integrated into the urban public transportation ecosystem. Through strengthened cooperation between transport providers and municipal governments, the 3.0 era provides a systematic mechanism to help Chinese cities tackle unique operational challenges, address parking management, and streamline shared bike deployment and distribution.

Hellobike's Hermes road safety system integrates AI algorithms to provide users with a better, safer shared transport experience. Built as a scenario-based solution, Hermes automatically performs failsafe tests on both user behavior and the bike at the beginning, middle and end of their riding journey. If the system detects technical issues, dangerous operation or user violations, Hellobike delivers a risk warning to the user through the bike's built-in speaker.

Based on insights gathered from mining big data, Hellobike also found that the use of non-motorized vehicles can lead to chaotic, unsafe road conditions. To address this, Hellobike has partnered with local governments to develop non-motorized vehicle safety management systems tailored to each city's unique traffic conditions. Using video AI technology for data collection and situation analysis, as well as spatial data, Hellobike helps cities establish new vehicle management systems built upon data visualization, intelligent data processing and smart decision-making applications.


Furthermore, Hellobike has cooperated with city officials to promote improved traffic safety, simplified parking and enhanced city appearance through a shared bike management operation plan. Hellobike has established a number of convenient fixed-point return locations using electronic fencing, Bluetooth road studs, AI and the IoT. Fixed-point return encourages users to park at designated locations, while making it easier for staff to locate and redistribute vehicles across the city.

Hellobike President Li Kaizhu and Chief Scientist Liu Xingliang will also take part in WAIC2020's AI TALK and big data forum alongside entrepreneurs from leading local and global tech companies to discuss the applications of AI technology. In addition, Hellobike plans to host its first Technology Open Day on 31 July at its Shanghai headquarters, where users can tour the space, test new vehicles, and discover the technological innovations behind Hellobike.

About Hellobike

Hellobike has continuously built user-friendly and sustainable transport services in sectors such as shared bicycles, shared e-bikes and car-pooling. A business leader in two-wheeled transport, Hellobike has provided more than 12 billion trips over the past three years and now operates in more than 360 Chinese cities.

Photo - https://photos.prnasia.com/prnh/20200710/2854753-1-a Photo - https://photos.prnasia.com/prnh/20200710/2854753-1-b

SOURCE Hellobike

See the rest here:

Hellobike unveils trifecta of innovative shared mobility AI technologies at WAIC2020 - Yahoo Finance

Posted in Ai

How to create real competitive advantage through business analytics and ethical AI – UNSW Newsroom

Some Australian organisations, which either feature large data science teams or are born digital with a data-driven culture, have advanced analytics capabilities (such as undertaking predictive and prescriptive analytics). For example, dedicated data science teams in marketing will build neural network models to predict customer attrition and the success of cross-selling and up-selling. However, most organisations that use data in their decision-making primarily rely on descriptive analytics.

While descriptive analytics may seem simplistic compared to creating predictions and running optimisation algorithms, it offers firms tremendous value by providing an accurate and up-to-date view of the business. For most organisations, analytics (which may even be labelled "advanced analytics") takes the form of dashboards; and, for many organisational tasks, understanding trends and the current state of the business is sufficient to make evidence-based decisions.

Moreover, dashboards provide a foundation for creating a more data-driven culture and are the first step for many organisations in their analytics journey. That said, by strictly relying on dashboards, organisations are missing opportunities for leveraging predictive analytics to create competitive advantages.

Despite the importance of analytics, firms are at different stages of their analytics journey. Some firms utilise suites of complex artificial intelligence technologies, while many others still use Microsoft Excel as their main platform for data analysis. Unfortunately, the process of obtaining organisational value from analytics is far from trivial, and the organisational benefits provided by analytics are almost equalled by the challenges of successful implementation.

My colleague Prof. Richard Vidgen recently undertook a Delphi study to reach a consensus on the importance of key challenges in creating value from big data and analytics. Managers overwhelmingly agreed that there were two significant sets of issues. The first is the wealth of issues related to data: assuring data quality, timeliness and accuracy, linking data to key decisions, finding appropriate data to support decisions, and issues pertaining to databases.

The second set of challenges pertains to people: building data skills in the organisation, upskilling current employees to utilise analytics, massive skill shortages across both analytics and the IT infrastructure supporting analytics, and building a corporate data culture (which includes integrating data into the organisation's strategy). While issues related to data quality are improving, the skill gap and lack of emphasis on data-driven decision-making are systemic issues that will require radical changes in Australian education and Australian corporate culture.

Although there are many interesting trends in terms of the advancements of analytics like automated machine learning platforms (such as DataRobot and H2O), the greatest challenge with analytics and AI is going to be ensuring their ethical use.

Debate and governance around data usage are still in their infancy, and with time, analytics, black-box algorithms, and AI are going to come under increasing scrutiny. For example, Australia's recent guidelines on ethical AI, where AI can be thought of as a predictive outcome created by an algorithm or model, include principles such as fairness, transparency, explainability, contestability, and accountability.

Achieving these goals with standard approaches to analytics is a challenging enough endeavour for organisations, due to the black-box nature of analytics, algorithms and AI. However, decisions driven by algorithms and analytics are now increasingly interacting with other organisations' AI, which makes it even more difficult to predict the fairness and explainability of outcomes. For example, AI employed by e-commerce retailers to set prices can participate in collusion, driving up prices by mirroring and learning from competing AIs' behaviours without human interference, knowledge, or explicit programming for collusion.
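A toy simulation (my own illustration, not drawn from any cited study) shows how two simple price-following rules can ratchet prices upward with no explicit coordination: each seller follows a higher rival's price and otherwise tests the market with a small increase.

```python
# Toy illustration of tacit algorithmic "collusion": neither rule is
# programmed to collude, yet mutual mirroring drives both prices up.
# The rule, step size, and cap are invented for this sketch.

def next_price(own, rival, step=1.0, cap=100.0):
    """Follow the rival upward; otherwise nudge own price up to test the market."""
    if rival > own:
        return min(rival, cap)
    return min(own + step, cap)

a, b = 10.0, 12.0            # two sellers' starting prices
for _ in range(50):          # simultaneous updates each round
    a, b = next_price(a, b), next_price(b, a)
```

After 50 rounds both prices have climbed far above their starting points, even though each agent only ever reacted to the other, which is precisely why such interactions are hard to audit for fairness.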

As predictive analytics and AI will fundamentally transform almost all industries, it is critical that organisations adapt ethically. Organisations should implement frameworks to guide the use of AI and analytics, which explicitly incorporate fairness, transparency, explainability, contestability, and accountability.

A significant aspect of undertaking ethical AI and ethical analytics is selecting and optimising models and algorithms that incorporate ethical objectives. Analytics professionals typically select models based on their ability to make successful predictions on validation and hold-out data (that is, data that the model has never seen). However, rather than simply looking at prediction accuracy, analysts should also weigh transparency. For example, decision trees, which are collections of if-then rules that connect the branches of a tree, have simple structures and interpretations. They are highly visual, which enables analysts to easily convey to stakeholders the underlying logic and features that drive predictions.

Moreover, business analytics professionals can carefully scrutinise the nodes of a decision tree to determine whether the decision rules built into the model are ethical. Thus, rather than using advanced neural networks, which often provide higher accuracy than models like decision trees but are effectively black boxes, analysts should consider sacrificing slightly on performance in favour of the transparency offered by simpler models.
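To make the transparency point concrete, here is a small hand-rolled tree (invented features and thresholds, not a trained credit model): the same nested structure both makes predictions and can be flattened into readable if-then rules for stakeholders to scrutinise.

```python
# Sketch of why simple trees are auditable: the structure that predicts
# can also be rendered as plain if-then rules. Features are invented.

# A node is (feature, threshold, subtree_if_below, subtree_if_at_or_above),
# or a string leaf holding the decision.
TREE = ("income", 50_000,
        ("years_at_address", 5, "decline", "review"),  # income below threshold
        ("debt_ratio", 0.4, "approve", "review"))      # income at/above threshold

def predict(tree, x):
    if isinstance(tree, str):          # leaf: return the decision
        return tree
    feature, threshold, low, high = tree
    return predict(low if x[feature] < threshold else high, x)

def render(tree, indent=0):
    """Flatten the tree into human-readable if-then rules."""
    pad = "  " * indent
    if isinstance(tree, str):
        return [pad + f"-> {tree}"]
    feature, threshold, low, high = tree
    return ([pad + f"if {feature} < {threshold}:"] + render(low, indent + 1) +
            [pad + "else:"] + render(high, indent + 1))

rules = "\n".join(render(TREE))
decision = predict(TREE, {"income": 60_000, "debt_ratio": 0.3})
```

An analyst can read `rules` line by line and challenge any node (is "years_at_address" an ethical criterion?), a review that is not possible on a weight matrix.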

AGSM Scholar and Senior Lecturer Sam Kirshner, UNSW Business School.

Sam Kirshner is a Senior Lecturer in the School of Information Systems and Technology Management at UNSW Business School and is a member of the multidisciplinary Behavioural Insights for Business and Policy and Digital Enablement Research Networks.

For the full story visit UNSW Business School's BusinessThink.

Here is the original post:

How to create real competitive advantage through business analytics and ethical AI - UNSW Newsroom

Posted in Ai

Jack Ma Calls for Wisdom and Innovation at World AI… – Alizila

Machine intelligence must go hand-in-hand with human wisdom, Alibaba Group founder Jack Ma said Thursday.

Speaking at the World Artificial Intelligence Conference, Ma said that humans should strive to better understand themselves and the Earth, especially in the face of a global crisis like the Covid-19 outbreak.

"This pandemic has shown us how little we know about ourselves and how little we know about the Earth. Because we don't know ourselves, don't know the world we are living in, don't understand the Earth and don't know how to cherish and preserve the Earth, we have created many troubles and disasters," Ma said via video message. He added that, despite the resources, wealth, knowledge and technological prowess enjoyed by society today, human wisdom was still the key to addressing the world's challenges and was needed to enhance communication and cooperation to find impactful and long-lasting solutions.

With the theme of "Intelligent World, Indivisible Community," the three-day conference features presentations, keynote speeches and panel discussions with prominent global figures from the realm of science and technology, including Turing Award winners Yoshua Bengio and Andrew Yao, Director General of the United Nations Industrial Development Organization Li Yong, and Tesla CEO Elon Musk.

In his speech during the opening ceremony, Ma said that while the technologies of the past improved our way of living, the technologies of today and tomorrow should help humankind survive better.

"During the pandemic, people have used internet technology to survive, not just for themselves, but also for others," he said. "There are many cases: going to school, having a meeting, shopping, visiting a doctor; all of these activities rely on digital technology. To innovate in order to survive is the strongest and most irresistible force."

He also pointed to an AI algorithm that Alibaba's research and innovation institute DAMO Academy developed to help diagnose Covid-19. Informed by data from thousands of computed-tomography scans and trained by deep learning, the algorithm can accurately detect the virus in 20 seconds, vastly shortening the time it takes for doctors to review CT scans, confirm cases and move on to treatment and supportive measures.

Ma said that such innovations were indicative of the quickening pace of digitalization in the world.

"Technological transformation will come earlier and its speed will accelerate. We need to be ready," he said.


Go here to read the rest:

Jack Ma Calls for Wisdom and Innovation at World AI... - Alizila

Posted in Ai

Artificial Intelligence will aid in better decision-making capacity – Livemint

MUMBAI: Artificial intelligence (AI) will not only help to resolve complex global problems but, if applied well, can enable businesses to make better decisions on a day-to-day basis, said panelists at the Mint Pivot or Perish webinar on automation in the new normal after the Covid pandemic.

"AI can be embedded into our day-to-day applications so the human intellect takes better decisions based on the most relevant data to the particular request, basis past or similar requests. This is a growth opportunity for businesses now," said Dulles Krishnan, area vice president, Salesforce.

For example, during a service request, offering the employee insights about the customer's preferences, or similar examples from previous customers, can help them efficiently upsell solutions and create more revenue or service opportunities.

"On one end of automation is work that is high volume and repetitive. More usage of automation is now moving to low-volume but unique work which requires solutions. That is where bots and AI solutions co-exist with humans," said Kamal Singhani, managing partner, IBM India.

Sangeeta Gupta, VP and chief strategy officer, Nasscom, said that while AI can augment productivity and innovation, it can only work in well-defined use cases, and it needs good-quality data to work with.


View original post here:

Artificial Intelligence will aid in better decision-making capacity - Livemint

Posted in Ai

‘Make in India’: Artificial Intelligence Company, AiBridge ML, Adds Handwriting and Image Recognition Capabilities to AiMunshi, the Popular Financial…

HYDERABAD, India, July 10, 2020 /PRNewswire/ -- Today, Founder & Chief Data Scientist of AiBridge ML, Mr. Prajnajit Mohanty, announced the addition of handwriting and image recognition capabilities to the company's financial document automation tool, AiMunshi. Notably, AiMunshi is a 'Make in India' deep-learning-based intelligent financial document automation tool from AiBridge ML.

"The addition of deep-learning-based handwriting and image recognition capabilities to AiMunshi will enable us to offer augmented features to diversified industries. It will help them to operate in a contactless manner and automate their routine work during the current COVID-19 pandemic. Industries like education, healthcare, retail, manufacturing, etc. will benefit immensely, and we are committed to helping Indian industries use AI and machine learning," said Mr. Mohanty.

Many US and Australian healthcare, pharma and retail companies have already realized considerable financial and operational benefits using AiMunshi, yielding real, tangible ROI faster.

AiMunshi processes orders and invoices automatically, reducing accounts payable costs while improving both the accuracy and the speed of data extraction from various sources, including emails. It is capable of automatically interpreting the relevant information and fields within PDF or image-based invoices and orders, or in emails, in real time.

Intelligent features of AiMunshi:

About AiBridge ML Pvt Ltd

Founded in February 2019 by senior technology leaders with a combined 84 years of experience in IT, AiBridge ML Pvt Ltd develops innovative enterprise solutions in artificial intelligence, machine learning, augmented reality and robotic process automation. AiBridge ML released AiMunshi, an AI-powered, deep-learning-based tool for financial document processing automation, in September 2019. The company currently has 30+ senior data scientists with a combined experience of more than 70 years, and offers its solutions in the USA, Australia, Canada and India.

Company website: http://www.aibridgeml.ai
AiMunshi product website: http://www.aimunshi.ai

Media Contact: Ajay Ray, Director, AiBridge ML Pvt Ltd, [email protected], +91-9849743823

SOURCE AiBridge ML Pvt Ltd



Zebra Medical Vision collaborating with TELUS Ventures to advance AI-based preventative care in Canada – GlobeNewswire

KIBBUTZ SHEFAYIM, Israel and VANCOUVER, British Columbia, July 09, 2020 (GLOBE NEWSWIRE) -- Zebra Medical Vision (https://www.zebra-med.com/), the deep-learning medical imaging analytics company, announced today it has entered a strategic collaboration with TELUS Ventures, one of Canada's most active Corporate Venture Capital (CVC) funds. This collaboration includes an investment that will grow Zebra-Med's presence in North America and enable the company to expand its artificial intelligence (AI) solutions to new modalities and clinical care settings.

With five FDA clearances and Health Canada approvals, Zebra-Med's technology provides a fully automated analysis of images generated in the imaging system, using clinically proven AI solutions trained on hundreds of millions of patient scans to identify acute medical findings and chronic diseases. Recently, Zebra-Med joined the global battle against the coronavirus pandemic with its AI solution for COVID-19 detection and disease progression tracking.

"This collaboration will help catalyze Zebra-Med's expansion into Canada's healthcare ecosystem," said Ohad Arazi, CEO at Zebra Medical Vision. "Zebra-Med is deeply committed to enhancing care through the use of machine learning and artificial intelligence. We have already impacted millions of lives globally, and we're honoured to launch this significant collaboration with TELUS Ventures, driving better care for Canadians."

TELUS Ventures' focus has been on building a strong portfolio of investments to support TELUS Health's growth in the health technology market, including digital solutions for preventive care and patient self-management. This strategy goes hand-in-hand with Zebra-Med's population health solutions. Screening for various conditions helps Zebra-Med and the medical team identify missed care opportunities and incidental findings. Zebra-Med is the first AI start-up in medical imaging to receive FDA clearance for a population health solution, leveraging AI to stratify risk, improve patients' quality of life, and reduce the cost of care.
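Zebra-Med's models are proprietary, but "stratifying risk" with model outputs has a simple general shape: a trained model scores each patient, and thresholds on those scores sort patients into follow-up tiers. A minimal sketch, with invented thresholds and tier names that are not clinical values:

```python
def stratify_risk(scores, high=0.8, moderate=0.5):
    """Bucket patients into risk tiers by a model's predicted probability.

    `scores` maps patient IDs to a disease-likelihood score in [0, 1],
    as produced by an upstream imaging model. The thresholds here are
    arbitrary placeholders, not clinically validated cutoffs.
    """
    tiers = {"high": [], "moderate": [], "low": []}
    for patient_id, score in sorted(scores.items()):
        if score >= high:
            tiers["high"].append(patient_id)
        elif score >= moderate:
            tiers["moderate"].append(patient_id)
        else:
            tiers["low"].append(patient_id)
    return tiers

scores = {"p1": 0.92, "p2": 0.55, "p3": 0.12}
print(stratify_risk(scores))
# {'high': ['p1'], 'moderate': ['p2'], 'low': ['p3']}
```

In a population health setting, the "high" tier is what surfaces missed care opportunities: patients flagged for clinician review who might otherwise not have been followed up.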

Supporting TELUS leadership in digital health solutions in Canada, we continue to invest in the growth of the health IT ecosystem by supporting the delivery of new technologies, like those being developed by Zebra Medical Vision, that aim to improve health outcomes for Canadians, said Rich Osborn, Managing Partner, TELUS Ventures. We are pleased to join a great roster of recent investors and complement our existing portfolio through this collaboration with a known leader in AI innovation supporting clinical efficacy and significantly advancing the detection of conditions through machine learning-based capabilities for medical imaging.

About TELUS Ventures

As the strategic investment arm of TELUS Corporation (TSX: T, NYSE: TU), TELUS Ventures was founded in 2001 and is one of Canada's most active corporate venture capital funds. TELUS Ventures has invested in over 70 companies since inception, with a focus on innovative technologies such as health tech, IoT, AI, and security. TELUS Ventures is an active investment partner and supports its portfolio companies through mentoring; exposure to TELUS' extensive network of business and co-investment partners; access to TELUS technologies and broadband networks; and by actively driving new solutions across the TELUS ecosystem.

For more information please visit: ventures.TELUS.com.

About Zebra Medical Vision

Zebra Medical Vision's Imaging Analytics Platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways. Zebra-Med is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, J&J, Dolby Ventures, and leading AI researchers Prof. Fei-Fei Li, Prof. Amnon Shashua, and Richard Socher. Zebra Medical Vision was named a Fast Company Top-5 AI and Machine Learning company. http://www.zebra-med.com

For media inquiries please contact:

Alona Stein, ReBlonde for Zebra Medical Vision, alona@reblonde.com, +972-50-778-2344

Jill Yetman, TELUS Public Relations, jill.yetman@telus.com, 416-992-2639


