Daily Archives: July 12, 2020

DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism – VentureBeat

Posted: July 12, 2020 at 1:31 am

Researchers from Google's DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.

In a preprint paper released Thursday, the researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and by Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations.

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

"Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present," the paper reads. "This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress."

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical practice Philip Agre proposed for AI development in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from people most impacted by AI systems. An article released in Nature earlier this week argues that the AI community must ask how systems shift power and asserts that an indifferent field serves the powerful. VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020 as more businesses and national governments consider how to put AI ethics principles into practice.

The DeepMind paper interrogates how colonial features are found in algorithmic decision-making systems and what the authors call "sites of coloniality," or practices that can perpetuate colonial AI. These include beta testing on disadvantaged communities, such as Cambridge Analytica conducting tests in Kenya and Nigeria, or Palantir using predictive policing to target Black residents of New Orleans. There's also "ghost work," the practice of relying on low-wage workers for data labeling and AI system development. Some argue ghost work can lead to the creation of a new global underclass.

The authors define "algorithmic exploitation" as the ways institutions or businesses use algorithms to take advantage of already marginalized people, and "algorithmic oppression" as the subordination of one group of people and the privileging of another through the use of automation or data-driven predictive systems.

Ethics principles from groups like the G20 and OECD feature in the paper, as do issues like AI nationalism and the rise of the U.S. and China as AI superpowers.

"Power imbalances within the global AI governance discourse encompasses issues of data inequality and data infrastructure sovereignty, but also extends beyond this. We must contend with questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralization of power and capital through mechanisms of dispossession," the paper reads. Tactics the authors recommend include political community action, critical technical practice, and drawing on past examples of resistance and recovery from colonialist systems.

A number of members of the AI ethics community, from relational ethics researcher Abeba Birhane to the Partnership on AI, have called on machine learning practitioners to place people who are most impacted by algorithmic systems at the center of development processes. The paper explores concepts similar to those in a recent paper about how to combat anti-Blackness in the AI community, Ruha Benjamin's concept of abolitionist tools, and ideas of emancipatory AI.

The authors also incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.

AI In Human Resources: To AI or not to AI? – Analytics Insight

Posted: at 1:31 am

Every department in a company has its own challenges.

In the case of Human Resources, recruitment and onboarding, employee orientations, process paperwork, and background checks are a handful, and often painstaking, largely because of the repetitive and manual nature of the work. The most challenging task of all is engaging with employees on human grounds to understand their needs.

As leaders today observe the AI revolution across every process, Human Resources is no exception: there has been a visible wave of AI disruption across HR functions. According to a 2017 IBM survey of 6,000 executives, 66% of CEOs believe that cognitive computing can drive compelling value in HR, while half of HR personnel believe it may affect roles in the HR organization. The study clearly illustrates the apprehension the AI disruption has caused among HR executives.

While one aspect of AI creates uneasiness, the other promises convenience. AI aims to empower the HR department with the right knowledge to optimize processes with less manual effort, and promises to mitigate errors.

The COVID-19 pandemic has highlighted the power of AI in real time, including its shortcomings. At the crux of the AI evolution is the minimization of labor-intensive processes. Sophisticated AI algorithms can analyze large amounts of data in no time and train themselves to recognize and map patterns, which can come in handy for HR staff planning and operating strategically.

While a human can be biased, get bored, and make unintended mistakes that undermine productivity and efficiency, AI programs are diligent and consistent (though only as unbiased as the data they are trained on), enabling greater productivity and efficiency.

HR executives who perform tasks like applicant tracking, payroll, training, and job postings manually, without automation, report spending an average of 14 hours a week on them. Leveraging AI to automate these HR processes can be extremely pertinent for meeting two key business requirements: first, saving time and increasing efficiency; second, providing real-time responses and solutions that meet employee expectations.

According to a McKinsey study, AI will drastically change business regardless of the industry. AI could potentially deliver an additional economic output of around $13 trillion by 2030, boosting global GDP by about 1.2 percent a year.

Let's dive deeper to understand how AI can make HR processes more sophisticated while not necessarily replacing human resources personnel.

1. Improved Employee Experience

Employees are the first customers of any organization. Hence, employee experience is as important as customer experience.

As employee experience becomes the next competitive edge for businesses, HR's focus in the coming days will be on providing personalized engagement and improving the employee experience.

According to a Deloitte survey, 80% of HR executives rate employee experience as important, while only 22% believe their organization excels at providing a differentiated employee experience.

Additionally, the advent of the smart workplace has raised the bar for employees' expectations of workplace experience and engagement.

Jennifer Stroud, HR Evangelist & Transformation Leader at ServiceNow, says, "We have seen the need for chatbots, AI and machine learning in the workplace to drive more productivity as well as modern, consumerized employee experiences. These consumer technology solutions are exactly what employees want in the workplace."

Engaging AI can help the HR department provide personalized employee engagement across the entire employee lifecycle, from recruitment and onboarding to career pathing.

2. Empowering HR to Make Data-Driven Decisions

For many people, the data-to-decision workflow looks like the figure below: a largely manual pipeline from raw data to insight.

[Figure: the typical data-to-decision workflow. Source: jobsoffice]

Many HR technologies still follow the above workflow and depend on manual methods to glean insights from data. The task grows tedious and creates a bottleneck for end-users (data analysts) trying to draw insights within the stipulated time, leading to decisions made on outdated data.

While frontier technologies like data analytics are advancing to provide real-time data for fast, fact-based decisions, AI can assist human resource professionals in harnessing this real-time data and making quick, consistent, data-driven decisions. After all, the bottom line of HR agility is decision making.

3. Intelligent Automation

Intelligent automation fuses automation with AI, enabling machines to make human-like decisions by training themselves. Apart from augmenting productivity and efficiency in repetitive manual processes, this can help remove human intervention from automated processes entirely.

1. More work in less time!

Crafting job descriptions for a particular role, filtering resumes, and analyzing skillsets to find the right talent is not only tiring and tedious but also tricky for human resource professionals, as a single overlooked aspect can lead to a significant mistake that costs the company dearly in the long run. AI can help HR staff overcome such scenarios by crafting bespoke job descriptions automatically and by reading through thousands of resumes in a short time, effectively reducing the time and manual effort put in by recruiters; a toy sketch of the screening step follows.
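As a rough illustration of that screening step only, the sketch below ranks resumes against a job description by TF-IDF cosine similarity. The job text, resumes, and candidate names are invented, and production screening systems are considerably more elaborate.

```python
# Toy resume screening: rank resumes by TF-IDF cosine similarity to
# the job description. Illustrative only; real systems are far richer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job = "Python developer with machine learning and NLP experience"
resumes = {
    "candidate_a": "Java backend engineer, Spring, microservices",
    "candidate_b": "Python engineer who built NLP and machine learning pipelines",
}

vec = TfidfVectorizer()
matrix = vec.fit_transform([job] + list(resumes.values()))
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for name, score in sorted(zip(resumes, scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.2f}")  # candidate_b should rank first
```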

2. Identify the right talent without bias

HR personnel are human and likely to exhibit bias subconsciously. AI, on the other hand, is immune to human emotions, which can make it well suited to processing candidate profiles on the required skillset without regard to a candidate's age, race, gender, geography, or organizational relationships, provided the model and its training data are themselves free of such bias. Unbiased recruitment is a win-win for both HR staff and the organization. Furthermore, AI can be instrumental in increasing retention rates and establishing cultural diversity.

Consider programs like Textio: they help recognize gender bias in job ads, enabling recruiters to embrace neutral language.
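The lexicons such tools rely on are proprietary, but the underlying check can be sketched crudely; the two tiny word lists below are invented samples, not any vendor's data.

```python
# Crude sketch of gender-coded language screening in a job ad.
# The word lists are invented samples, not any vendor's lexicon.
MASCULINE_CODED = {"rockstar", "ninja", "dominant", "competitive", "fearless"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def coded_words(ad_text: str):
    words = {w.strip(".,!?").lower() for w in ad_text.split()}
    return words & MASCULINE_CODED, words & FEMININE_CODED

ad = "We need a competitive rockstar to join our collaborative team!"
masc, fem = coded_words(ad)
print(masc, fem)  # {'competitive', 'rockstar'} {'collaborative'}
```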

3. Streamline employee onboarding

The first day of an employee at an organization is like the first day of a transfer student at a new school. Although employees are grown-ups with the cognitive intelligence to adapt to a new environment, deep down they look for guidance to help them settle in. Fortunately, organizations have HR staff for this job. Employees generally have numerous queries on their first day regarding company policies, leave, compensation, notice periods, insurance claims, and so on. However intriguing the questions may be for an employee, they turn repetitive and exhausting for HR personnel over time. Engaging AI chatbots makes it simple to answer such repetitive questions, freeing the HR staff to concentrate on other essential tasks; a minimal sketch follows.
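A minimal sketch of such a chatbot is just retrieval over a canned FAQ, matching on word overlap. The questions and answers below are invented, and a production bot would use proper natural-language understanding.

```python
# Minimal FAQ chatbot sketch: answer with the canned reply whose
# question shares the most words with the employee's query.
FAQ = {
    "how many leave days do i get": "New hires accrue 20 leave days per year.",
    "when is payroll processed": "Payroll runs on the last business day.",
    "how do i file an insurance claim": "Submit claims through the HR portal.",
}

def answer(question: str) -> str:
    q = set(question.lower().strip("?!. ").split())
    best = max(FAQ, key=lambda k: len(q & set(k.split())))
    return FAQ[best]

print(answer("How do I claim insurance?"))
# -> "Submit claims through the HR portal."
```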

4. Optimize employee engagement to build better relationships

Apart from recruitment and onboarding, AI can streamline processes like scheduling meetings, training employees, and other business processes. AI's capability to recognize personas can help human resources professionals understand the human aspect of each employee in depth, and enable them to shape a friendly and exciting company culture that provides unique, personalized employee engagement experiences.

5. Manage employee churn

Understanding the factors that cause employee churn, and arresting them, is among the toughest parts of an HR professional's job. People change jobs for various reasons: financial growth, career growth, a shift in profile, an unsatisfying work environment, and so on. Leveraging AI can help the HR department continuously monitor and evaluate employees' sentiments about the organization, the work culture, and their degree of job satisfaction. Knowing what frustrates or motivates an employee helps pinpoint churn factors, a task AI can help HR executives perform with greater precision; a toy model is sketched below.
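As a hedged sketch of what such a churn model might look like: a logistic regression on a handful of invented features (satisfaction score, tenure, overtime). The numbers are fabricated for illustration, and a real deployment would need far richer data and careful validation.

```python
# Toy churn model on invented HR features:
# [satisfaction (0-1), tenure in years, weekly overtime hours]
from sklearn.linear_model import LogisticRegression

X = [[0.9, 5, 2], [0.2, 1, 15], [0.8, 3, 4],
     [0.3, 2, 12], [0.7, 6, 3], [0.1, 1, 20]]
y = [0, 1, 0, 1, 0, 1]  # 1 = employee left

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[0.25, 1, 14]])[0][1]
print(f"estimated churn probability: {risk:.2f}")
```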

All said and done, even though AI's capabilities can reduce manual work and boost efficiency and productivity, artificial intelligence doesn't possess the emotional intelligence of humans. Nor can AI substitute for the human connection that HR personnel form with employees and leverage to drive engagement and responsiveness.

Therefore, to answer the critical question that haunts HR executives, "Will AI be the reason I lose my job?": No. Not really. The whole idea of AI in HR is to integrate technology that automates the more monotonous HR-related tasks and optimizes processes, adding value to human work in less time. In the AI era, new jobs with new skill requirements will evolve, unleashing the evolution of the HR function in an AI-first world.

Author Detail:

Jency Durairaj is a Content Writer at SG Analytics, based in Pune. She contributes to the company's advancements by writing creative and engaging content for its website and blogs. Her hobbies include music, reading, and trekking.

Reducing bias in AI-based financial services – Brookings Institution

Posted: at 1:31 am

Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI's ability to bypass the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can just as easily go in the other direction, exacerbating existing bias and creating cycles that reinforce biased credit allocation while making discrimination in lending even harder to find. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology?

This paper proposes a framework to evaluate the impact of AI in consumer lending. The goal is to incorporate new data and harness AI to expand credit to consumers who need it, on better terms than are currently provided. It builds on our existing system's dual goals of pricing financial services based on the true risk the individual consumer poses while aiming to prevent discrimination (e.g., by race, gender, DNA, or marital status). This paper also provides a set of potential trade-offs for policymakers, industry and consumer advocates, technologists, and regulators to debate, reflecting the tensions inherent in protecting against discrimination in a risk-based pricing system layered on top of a society with centuries of institutional discrimination.

AI is frequently discussed and ill defined. Within the world of finance, AI represents three distinct concepts: big data, machine learning, and artificial intelligence itself. Each has recently become feasible with advances in data generation, collection, usage, computing power, and programming. Advances in data generation are staggering: "90% of the world's data today were generated in the past two years," IBM boldly stated. To set the parameters of this discussion, below I briefly define each key term with respect to lending.

Big data fosters the inclusion of new and large-scale information not generally present in existing financial models. In consumer credit, for example, the typical credit-reporting/credit-scoring model is often referred to by the name of the most common credit-scoring system, FICO; big data goes beyond it to include data points such as payment of rent and utility bills, personal habits such as whether you shop at Target or Whole Foods and own a Mac or a PC, and social media data.

Machine learning (ML) occurs when computers optimize over data (standard and/or big data) based on relationships they find, without the traditional, more prescriptive algorithm. ML can surface relationships a person would never think to test: does the type of yogurt you eat correlate with your likelihood of paying back a loan? Whether these relationships have causal properties or are only proxies for other correlated factors is a critical question in determining the legality and ethics of using ML. It is, however, irrelevant to the machine solving the equation.

What constitutes true AI is still being debated, but for purposes of understanding its impact on the allocation of credit and risk, let's use the term AI to mean the inclusion of big data, machine learning, and the next step, when ML becomes AI. One bank executive helpfully defined AI by contrasting it with the status quo: "There's a significant difference between AI, which to me denotes machine learning and machines moving forward on their own, versus auto-decisioning, which is using data within the context of a managed decision algorithm."

America's current legal and regulatory structure for protecting against discrimination and enforcing fair lending is not well equipped to handle AI. The foundation is a set of laws from the 1960s and 1970s (the Equal Credit Opportunity Act of 1974, the Truth in Lending Act of 1968, the Fair Housing Act of 1968, etc.) conceived in a time with almost exactly the opposite problems we face today: too few sources of standardized information on which to base decisions and too little credit being made available. Those conditions allowed rampant discrimination by loan officers who could simply deny people because they didn't look creditworthy.

Today, we face an overabundance of poor-quality credit (high interest rates, fees, abusive debt traps) and concerns over the use of too many sources of data that can hide as proxies for illegal discrimination. The law makes it illegal to use gender to determine credit eligibility or pricing, but countless proxies for gender exist, from the type of deodorant you buy to the movies you watch.

The key concept used to police discrimination is that of disparate impact. For a deep dive into how disparate impact works with AI, you can read my previous work on this topic. For this article, it is important to know that disparate impact is defined by the Consumer Financial Protection Bureau as occurring when "a creditor employs facially neutral policies or practices that have an adverse effect or impact on a member of a protected class unless it meets a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact."

The second half of the definition lets lenders use metrics that may correlate with protected-class elements so long as the metric meets a "legitimate business need" and there are no other ways to meet that need with less disparate impact. A set of existing metrics, including income, credit scores (FICO), and data used by the credit reporting bureaus, has been deemed acceptable despite substantial correlation with race, gender, and other protected classes. A toy calculation below shows how a disparate impact ratio is commonly measured.
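To make the test concrete, here is a minimal sketch of computing a disparate impact ratio, screened against the "four-fifths rule" threshold borrowed from employment law. The data and the 0.8 cutoff are illustrative assumptions, not part of the CFPB definition.

```python
# Illustrative only: approval rates by group and the disparate impact
# ratio often screened against the four-fifths (0.8) heuristic.
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical lending outcomes for two groups.
sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 50 + [("B", False)] * 50)
ratio, rates = disparate_impact_ratio(sample)
print(rates)  # {'A': 0.8, 'B': 0.5}
print(ratio)  # 0.625 -- below the 0.8 heuristic, flagging disparity
```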

For example, consider how deeply correlated existing FICO credit scores are with race. To start, it is telling how little data is made publicly available on how these scores vary by race. The credit bureau Experian is eager to publicize versions of FICO scores by people's age, income, and even the state or city they live in, but not by race. However, federal law requires lenders to collect data on race for home mortgage applications, so we do have access to some data, and the differences are stark.

Among people trying to buy a home, generally a wealthier and older subset of Americans, white homebuyers have an average credit score 57 points higher than Black homebuyers and 33 points higher than Hispanic homebuyers. The distribution of credit scores is also sharply unequal: More than 1 in 5 Black individuals have FICOs below 620, as do 1 in 9 among the Hispanic community, while the same is true for only 1 out of every 19 white people. Higher credit scores allow borrowers to access different types of loans and at lower interest rates. One suspects the gaps are even broader beyond those trying to buy a home.

If FICO were invented today, would it satisfy a disparate impact test? The conclusion of Rice and Swesnik in their law review article was clear: "Our current credit-scoring systems have a disparate impact on people and communities of color." The question is moot because not only is FICO grandfathered, it has also become one of the most important factors used by the financial ecosystem. I have described FICO as the out-of-tune oboe to which the rest of the financial orchestra tunes.

New data and algorithms are not grandfathered and are subject to the disparate impact test. The result is a double standard whereby new technology is often held to a higher standard to prevent bias than existing methods. This has the effect of tilting the field against new data and methodologies, reinforcing the existing system.

Explainability is another core tenet of our existing fair lending system that may work against AI adoption. Lenders are required to tell consumers why they were denied. Explaining the rationale provides a paper trail to hold lenders accountable should they be engaging in discrimination. It also gives the consumer information to correct their behavior and improve their chances for credit. However, an AI's method of making decisions may lack explainability. As Federal Reserve Governor Lael Brainard described the problem: "Depending on what algorithms are used, it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did." To move forward and unlock AI's potential, we need a new conceptual framework.

To start, imagine a trade-off between accuracy (represented on the y-axis) and bias (represented on the x-axis). The first key insight is that the current system sits at the intersection of the axes we are trading off: the graph's origin. Any potential change needs to be considered against the status quo, not against an ideal world of no bias or complete accuracy. This forces policymakers to consider whether adopting a new system that contains bias, but less than the current system, is an advance. It may be difficult to embrace an inherently biased framework, but it is important to acknowledge that the status quo is already highly biased. Thus, rejecting new technology because it contains some level of bias does not mean we are protecting the system against bias. To the contrary, it may mean we are allowing a more biased system to perpetuate. A small stand-in for the figure appears below.
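Since the figure itself is not reproduced here, a small stand-in can encode it: classify a proposed system by its change in accuracy and in bias relative to the status quo at the origin. The deltas and labels are simply a restatement of the quadrants discussed below.

```python
# Stand-in for the quadrant figure: changes are measured relative to
# the status quo at the origin (0, 0).
def classify(accuracy_delta: float, bias_delta: float) -> str:
    if accuracy_delta >= 0 and bias_delta <= 0:
        return "Quadrant I: more accurate, less biased (win-win)"
    if accuracy_delta >= 0 and bias_delta > 0:
        return "Quadrant II: more accurate, more biased (hard trade-off)"
    if accuracy_delta < 0 and bias_delta > 0:
        return "Quadrant III: less accurate, more biased (reject)"
    return "Quadrant IV: less accurate, less biased (fairness trade-off)"

print(classify(+0.05, -0.10))  # hypothetical new underwriting model
```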

In this framework, the bottom left corner (quadrant III) is one where AI results in a system that is both more discriminatory and less predictive. Regulation and commercial incentives should work together against this outcome. It may be difficult to imagine incorporating new technology that reduces accuracy, but it is not inconceivable, particularly given industry incentives to prioritize decision-making and loan-generation speed over actual loan performance (as in the subprime mortgage crisis). Policy can also move in this direction through the introduction of inaccurate data that fools an AI into thinking it has increased accuracy when it has not. The existing credit reporting system is rife with errors: 1 out of every 5 people may have a material error on their credit report. New errors occur frequently; consider the recent mistake by one student loan servicer that incorrectly reported 4.8 million Americans as being late on their student loans when in fact the government had suspended payments as part of COVID-19 relief.

The data used in the real world are not as pure as those used in model testing. Market incentives alone are not enough to produce perfect accuracy; they can even promote inaccuracy, given the cost of correcting data and the demand for speed and quantity. As one study from the Federal Reserve Bank of St. Louis found, "credit score has not acted as a predictor of either true risk of default of subprime mortgage loans or of the subprime mortgage crisis." Whatever the cause, regulators, industry, and consumer advocates ought to be aligned against the adoption of AI that moves in this direction.

The top right (quadrant I) represents AI that increases accuracy and reduces bias. At first glance, this should be a win-win. Industry allocates credit more accurately, increasing efficiency. Consumers enjoy increased credit availability on more accurate terms and with less bias than under the existing status quo. This optimistic scenario is quite possible given that a significant source of existing bias in lending stems from the information used. As the Bank Policy Institute pointed out in its discussion draft on the promises of AI: "This increased accuracy will benefit borrowers who currently face obstacles obtaining low-cost bank credit under conventional underwriting approaches."

One prominent example of a win-win is cash-flow underwriting. This new form of underwriting uses an applicant's actual bank balances over some time frame (often one year), as opposed to the current FICO-based model, which relies heavily on whether a person had credit in the past and, if so, whether they were ever delinquent or in default. Preliminary analysis by FinRegLab shows this underwriting system outperforms traditional FICO on its own and is even more predictive when combined with FICO. A toy sketch of the idea follows.
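FinRegLab has not published its model, so the following is only a toy sketch of the general idea: summarize a year of daily balances into features a lender's classifier could consume. The feature names and the balance series are invented.

```python
# Toy sketch of cash-flow underwriting features. Illustrative only;
# not FinRegLab's or any lender's actual model.
import statistics

def cash_flow_features(daily_balances):
    return {
        "avg_balance": statistics.mean(daily_balances),
        "min_balance": min(daily_balances),
        "overdraft_days": sum(1 for b in daily_balances if b < 0),
        "volatility": statistics.pstdev(daily_balances),
    }

year = [1200, 950, -40, 300, 875] * 73  # 365 hypothetical daily balances
print(cash_flow_features(year))
```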

Cash-flow analysis does have some level of bias as income and wealth are correlated with race, gender, and other protected classes. However, because income and wealth are acceptable existing factors, the current fair-lending system should have little problem allowing a smarter use of that information. Ironically, this new technology meets the test because it uses data that is already grandfathered.

That is not the case for other AI advancements. New AI may increase credit access on more affordable terms than the current system provides and still not be allowable. Just because AI has produced a system that is less discriminatory does not mean it passes fair lending rules. There is no legal standard that allows illegal discrimination in lending merely because it is less biased than prior discriminatory practices. As a 2016 Treasury Department study concluded, while data-driven algorithms "may expedite credit assessments and reduce costs, they also carry the risk of disparate impact in credit outcomes and the potential for fair lending violations."

For example, consider an AI that is able, with a good degree of accuracy, to detect a decline in a person's health, say through spending patterns (doctors' co-pays), internet searches (cancer treatment), and joining new Facebook groups (living with cancer). Medical problems are a strong indicator of future financial distress. Do we want a society where, if you get sick, or if a computer algorithm merely thinks you are ill, your terms of credit worsen? That may be a less biased system than we currently have, yet not one that policymakers and the public would support. All of a sudden, what seems like a win-win may not be so desirable.

AI that increases accuracy but introduces more bias gets a lot of attention, deservedly so. This scenario, represented in the top left (quadrant II) of the framework, can range from the introduction of data that are clear proxies for protected classes (watching Lifetime or BET on TV) to information or techniques that, at first glance, do not seem biased but actually are. There are strong reasons to believe AI will naturally find proxies for race, given the large income and wealth gaps between races. As Daniel Schwarcz put it in his article on AI and proxy discrimination: "Unintentional proxy discrimination by AIs is virtually inevitable whenever the law seeks to prohibit discrimination on the basis of traits containing predictive information that cannot be captured more directly within the model by non-suspect data."

Proxy discrimination by AI is even more concerning because the machines are likely to uncover proxies that people had not previously considered. Think about the potential to use whether a person owns a Mac or a PC, a factor correlated both with race and with whether people pay back loans, even controlling for race.

Duke professor Manju Puri and co-authors built a model using non-standard data and found substantial predictive power for loan repayment in whether a person's email address contained their name. Initially, that may seem like a non-discriminatory variable within a person's control. However, economists Marianne Bertrand and Sendhil Mullainathan have shown that African Americans with names heavily associated with their race face substantial discrimination relative to race-blind identification. Hence, it is quite possible that using what seems like an innocuous variable, such as whether your name is part of your email address, carries a disparate impact. The simulation below shows the mechanism.
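A small synthetic simulation illustrates the point: the model never sees the protected attribute, yet an innocuous-looking feature correlated with it reproduces the group gap in approvals. All numbers are fabricated.

```python
# Synthetic proxy discrimination: approvals depend only on a proxy
# feature (say, device type) that happens to correlate with group.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    group = random.random() < 0.5                       # hidden attribute
    proxy = random.random() < (0.8 if group else 0.2)   # correlated feature
    rows.append((group, proxy))

def approval_rate(g):
    members = [proxy for grp, proxy in rows if grp == g]
    return sum(members) / len(members)

print(f"group A approval rate: {approval_rate(True):.2f}")   # ~0.80
print(f"group B approval rate: {approval_rate(False):.2f}")  # ~0.20
```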

The question for policymakers is how much to prioritize accuracy at a cost of bias against protected classes. As a matter of principle, I would argue that our starting point is a heavily biased system, and we should not tolerate the introduction of increased bias. There is a slippery-slope question of what to do if an AI produced substantial increases in accuracy with the introduction of only slightly more bias. After all, our current system does a surprisingly poor job of allocating basic credit and tolerates a substantially large amount of bias.

Industry is likely to advocate for the inclusion of this type of AI, while consumer advocates are likely to oppose its introduction. Current law is inconsistent in its application. Certain groups of people are afforded strong anti-discrimination protection for certain financial products, but this varies across products. Take gender, for example. It is blatantly illegal under fair lending laws to use gender or any proxy for gender in allocating credit. However, gender is a permitted pricing factor for auto insurance in most states; in fact, for brand-new drivers, gender may be the single biggest factor used in determining price absent any driving record. America lacks a uniform set of rules on what constitutes discrimination and which attributes cannot be discriminated against. The lack of uniformity is compounded by the division of responsibility between federal and state governments and, within government, between the regulatory and judicial systems for detecting and punishing violations.

The final set of trade-offs involves increases in fairness but reductions in accuracy (quadrant IV, bottom right). An example is an AI with the ability to use information about a person's genome to determine their risk of cancer. Such genetic profiling would improve accuracy in pricing certain types of insurance but violates norms of fairness; in this instance, policymakers decided that the use of that information is not acceptable and made it illegal. Returning to the role of gender, some states have restricted the use of gender in car insurance. California most recently joined the list of states no longer allowing gender, which means that pricing will be fairer but possibly less accurate.

Industry pressures tend to fight against these types of restrictions and press for greater accuracy. Societal norms of fairness may demand trade-offs that diminish accuracy to protect against bias. These trade-offs are best handled by policymakers before the widespread introduction of the information, as was the case with genetic data. Restricting the use of this information, however, does not make the problem go away. To the contrary, AI's ability to uncover hidden proxies for the data may exacerbate problems wherever society attempts to restrict data usage on equity grounds. Problems that appear solved by prohibitions simply migrate into the algorithmic world, where they reappear.

The underlying takeaway for this quadrant is that social movements to expand protection and reduce discrimination are likely to face greater difficulty as AIs find workarounds. As long as there are substantial differences in observed outcomes, machines will uncover differing outcomes using new sets of variables that may contain new information or may simply be statistically effective proxies for protected classes.

The status quo is not something society should uphold as nirvana. Our current financial system suffers not only from centuries of bias, but also from systems that are themselves not nearly as predictive as often claimed. The data explosion, coupled with the significant growth in ML and AI, offers a tremendous opportunity to rectify substantial problems in the current system. Existing anti-discrimination frameworks are ill-suited to this opportunity. Holding new technology to a higher standard than the status quo results in an unstated deference to the already-biased current system. However, simply opening the flood gates under a rule of "can you do better than today" opens a Pandora's box of new problems.

America's fractured regulatory system, with differing roles and responsibilities across financial products and levels of government, only makes difficult problems harder. Lacking uniform rules and coherent frameworks, technological adoption will likely be slower among existing entities, setting up even greater opportunities for new entrants. A broader conversation about how much bias we are willing to tolerate for the sake of improvement over the status quo would benefit all parties. That requires creating more political space for all sides to engage in a difficult and honest conversation. The current political moment is ill-suited for that conversation, but I suspect AI advancements will not wait until America is more ready to confront these problems.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Apple, Facebook, and IBM provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

How To Flunk Those Cognitive Deficiency Tests And What This Means Too For AI Self-Driving Cars – Forbes

Posted: at 1:31 am

Cognitive deficiency tests, AI, and self-driving cars.

Seems like the news recently has been filled with revelations about the taking of cognitive deficiency tests.

This has been especially notable among prominent politicians who appear to be attempting to vouch for their mental clarity upon reaching an age at which cognitive decline often surfaces.

Such tests are more aptly referred to as cognitive assessment tests rather than deficiency-oriented tests, though the general notion is that if the score earned is less than expected, the potential conclusion is that the person's mental prowess has declined.

Oftentimes also referred to as cognitive impairment detection exams, these tests present the person seeking to find out how they are doing mentally with various questions to answer. The administrator of the test then grades the answers for correctness and fluidity, producing a score to indicate how the person performed overall.

The score is then compared to the scores of others that have taken the test, trying to gauge how the cognitive capacity of the person is rated or ranked in light of some larger population of test-takers.

Also, if a person takes the test over time, perhaps say once per year, their prior scores are compared to their most recent score, attempting to measure whether there is a difference emerging as they age.

There are some crucial rules-of-thumb about all of this cognitive test-taking.

For example, if a person takes the same test, word for word, repeatedly over time, this raises questions about the nature of the test versus the nature of the person's cognitive abilities. In essence, you can potentially do better on the test simply because you've seen the same questions before and likely were previously told what the accepted correct answers are.

One argument to be made is that this somewhat assesses your ability to remember having previously taken the test, but that's not usually the spirit of what such cognitive tests are supposed to be about. The idea is to assess overall cognition, not merely whether you perchance recall the specific questions of a specific test previously taken.

Another facet of this kind of cognitive test-taking consists of being formally administered the test, rather than taking the test entirely on your own.

Though there are plenty of available cognitive tests that you can download and take in private, some would say that this is not at all the same as taking a test under the guiding hands and watch of someone certified or otherwise authorized to administer such tests.

A key basis for claiming that the test needs to be formally administered is to ensure that the person taking the test is not undermining the test or flouting the testing process. If the test-taker were to ask a friend for help, this obviously defeats the purpose of the test, which is supposed to focus on your solitary cognition and not a collective semblance of cognition. Likewise, these tests are usually timed, and a person on their own might be tempted to exceed the normally allotted time, plus the person might be tempted to look up answers, use a calculator, etc.

Perhaps the most important reason to have a duly authorized and trained administrator involves attempting to holistically evaluate the results of the cognition test.

Experts in cognitive test-taking are quick to emphasize that a robust approach to the matter consists of not just the numeric score that a test taker achieves, but also how they are overall able to interact with a fully qualified and trained cognitive-test administrator.

Unlike taking a secured SAT or ACT test that you might have had to painstakingly sit through for college entrance purposes, a cognitive assessment test is typically intended to assess in both a written way and in a broader manner how the person interacts and cognitively presents themselves.

Imagine, for example, that someone aces the written test yet is unable to carry on a lucid conversation with the administrator, and similarly stumbles over why they are taking the test or otherwise shows apparent cognitive difficulties surrounding the test-taking process. Those facets outside the test itself should be counted, some would vehemently assert, and would likely go unmeasured if the person merely took the test on their own.

Despite all of the foregoing and the holistic nuances that I've mentioned, admittedly, most of the time all people want to know is their darned score on that vexing cognitive test.

You might be wondering whether there is one standardized and universal cognitive test that is used for these purposes.

No, there is not just one per se.

Instead, there is a veritable plethora of such cognition tests.

It seems like each day some new version gets announced to the world. In some cases, the cognitive test being proffered has been carefully prepared and analyzed for its validity. Unfortunately, in other cases, the cognitive test is a gimmick fronted as a moneymaker, whereby those pushing the test aim to get people to believe in it, hoping to generate gobs of revenue from the fees charged to test-takers.

Please do not fall for the fly-by-night cognitive tests.

Sadly, sometimes a known celebrity or other highly visible person gets associated with a cognitive test promotion and adds a veneer of authenticity to something that does not deserve any bona fide reputational stamp-of-approval.

Some cognitive tests have stood the test of time and are considered dominant, or at least well-regarded, for their assessment capacity and validity.

On a related note, if a cognitive test takes a long time to complete, let's say hours, the odds are it will be considered onerous for testing purposes and not well received. As such, the popular cognitive tests tend to be ones that take a relatively short period to undertake, such as an hour or less, and in many cases just 15 minutes or less (these are usually depicted as screening tests rather than full-blown cognitive assessment tests).

Some decry that a cognitive test requiring only a few minutes is rife with problems and amounts to a fast-food approach to the very complex topic of measuring someone's cognition. Those in this camp shudder when these quickie tests are used by people who then go around touting how well they scored.

The counter-argument is that these short-version cognitive tests are reasonable and amount to using a dipstick to gauge how much gasoline there is in the tank of your car. The viewpoint is that it only takes a little bit of measurement to generally know how someone is mentally faring. Once an overall gauge is taken, you can always do a follow-up with a more in-depth cognitive test.

Given all of the preceding discussion, it might be handy to briefly look at a well-known cognitive test that has been around since the mid-1990s and continues to be actively used today, including reportedly having been the test President Trump took in 2018 (according to news reports).

The Famous MoCA Cognitive Test

That test is the Montreal Cognitive Assessment (MoCA) test.

Some mistakenly get confused by the name of the test and think it is a test just for Canadians since it refers to Montreal, but the test is utilized globally and was named for having been initially developed by researchers in Montreal, Quebec.

Generally, the MoCA is one page long (see example here), which is handily succinct for this kind of testing, and the person taking it is given 10 minutes to answer the questions. There is often some leeway in the testing time allotted, and also some latitude for first orienting the person to the test and its instructions.

Nonetheless, the person taking the test should not be provided, say, double the time or anything of that magnitude. The reason the test should be taken in a prescribed amount of time is that the aspect of time is considered related to cognitive acuity.

In other words, if the person is given more time than others have previously gotten, presumably they can cognitively devote more mental cycles or effort and might do better on the test accordingly.

A timed test is not just about your cognition per se, but also about how fast you think and whether your thinking processes are as fluid as those of others who have taken the test.

If it took someone an hour and they got a top score, while someone else got a top score in ten minutes, we would be hard-pressed to compare their results. You might liken this to playing timed chess, whereby the longer you have, the more chess moves you can potentially mentally foresee, which is fine in some circumstances, but when trying to make for a balanced playing field, you put a timer on how long each player has to make their move.

That being said, the time allotted for a given test should not be so short as to shortchange the cognitive opportunities, which would likewise hamper the measurement of cognition. A chess player who has, say, just two seconds to make a move will likely take a random shot rather than devote mental energy to the task.

In theory, the amount of time provided should be the classic Goldilocks amount: just enough to allow a sufficient dollop of mental effort, and not so much that it inadvertently extends the cognition and enables a lesser cognitive capacity to use time as a crutch (assuming that's not what the test is attempting to measure).

I am about to explain specific details of the MoCA cognitive test, so if you want to someday take the test, please know that I am about to spoil your freshness (this is a spoiler alert).

The test attempts to cover a lot of cognitive ground, doing so by providing a variety of cognition tasks, including the use of numbers, the use of words, the use of sentences, the use of the alphabet, the use of visual cognitive capabilities such as interpreting images and composing writing, and so on.

That's worth mentioning because a cognitive test that covered only counting and the addition of numbers would be solely focused on your arithmetic cognition. We know that humans have a fuller range of cognitive abilities. As such, a well-balanced cognitive test tries to hit upon a slew of what are considered cognitive dimensions.

Notably, this can be hard to pack into one short test, and it draws criticism from those who argue it is dubious to have someone undertake a single question on numbers, a single question on words, and so on, and then attempt to generalize about their cognition across each entire dimension of cognitive facets.

Let's try out a numbers-and-arithmetic question.

Are you ready?

You are to count down from 100 toward 0, subtracting 7 each time rather than 1.

Okay, your first answer should be 93, and then your next would be 86, and then 79, and so on.

You cannot use a pencil and paper, nor can you use a calculator. This is supposed to be off the top of your head. Using your fingers or toes is also considered taboo.

How did you do?
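(As an aside that foreshadows the AI discussion later in this article: the serial-sevens arithmetic is trivial for a machine, as the few lines below show, which is precisely why performing a narrow task is not evidence of cognition.)

```python
# The "serial sevens" sequence a test-taker produces mentally.
n = 100
while n >= 0:
    print(n)  # 100, 93, 86, 79, ... down to 2
    n -= 7
```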

Try this next one.

Remember these words: Face, Velvet, Church, Daisy, Red.

I want you to look away from these words and say them aloud, without reading them from the page.

In about five minutes, without looking at the page to refresh your memory, try to once again speak aloud what the words were.

What do those cognitive tests signify?

The counting backward is usually tough for most people, as they do not normally count in that direction. It forces your mind to slow down and think directly about the numbers and about doing arithmetic in your head (this is also partly why the same kind of quiz is used in roadside DUI sobriety assessments). If I had asked you to count by sevens starting at zero and going upward, you would likely do so with much greater ease, and the effort would be less cognitively taxing.

The word memorization assesses your short-term memory capacity. It is only five words, versus if I had asked you to remember ten or fifty. Some people will try to memorize the five words by imagining an image for each, while others might string the words into a short story that allows them to recall the words later.

Either way, this exercises your cognition around several facets: short-term memory, the ability to follow and abide by instructions, a semblance of encoding words in your mind, and other cerebral components.

Some of the questions on these cognitive tests are considered controversial.

In the case of MoCA, there is typically a clock drawing task that some cognitive test experts have heartburn about.

You are asked to draw a clock and indicate a stated time, such as 10 minutes past 7. In theory, you would draw a circle or something similar, write the numbers 1 to 12 around it to represent the hours, and then sketch a short line pointing from the center toward the 7 and a longer mark pointing from the center to the 2 position (since the minute marks normally represent five minutes each).

Why is this controversial as a cognitive test question?

One concern is that in today's world, we tend to use digital clocks that display the time numerically, and are less likely to rely on the conventional circular clock face.

If a person taking the cognitive test is unfamiliar with analog clocks, does it seem appropriate for them to lose several cognition points for poorly accomplishing this task?

This brings up a larger qualm about cognitive tests, namely: how can we separate knowledge from the act of cognition?

I might not know what a conventional clock is and yet have superb cognitive skills. The test unfairly ascribes knowledge of something in particular to the act of cognition, and so it measures something that is not necessarily the facet presumably being assessed.

Suppose I asked you a question about baseball, such as asking you to name the bases or the various player positions. If perchance you know about baseball, you can answer; otherwise, you are going to fail that question.

Do the baseball question and your corresponding answer offer any reasonable semblance of your cognitive capabilities?

In any case, the MoCA cognitive test is usually scored out of a top score of 30, with the following typical scale:

Score 26-30: No cognitive impairment detected

Score 18-25: Mild cognitive impairment

Score 10-17: Moderate cognitive impairment

Score 00-09: Severe cognitive impairment
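As a sketch, those published cut-offs map directly to a simple lookup; the code below encodes only the banding above, not any diagnostic judgment.

```python
def moca_category(score: int) -> str:
    """Map a MoCA score (0-30) to the impairment band listed above."""
    if not 0 <= score <= 30:
        raise ValueError("MoCA scores range from 0 to 30")
    if score >= 26:
        return "No cognitive impairment detected"
    if score >= 18:
        return "Mild cognitive impairment"
    if score >= 10:
        return "Moderate cognitive impairment"
    return "Severe cognitive impairment"

print(moca_category(16))  # "Moderate cognitive impairment"
```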

Research studies indicate that people with demonstrable Alzheimer's tend to score around 16, landing in the moderate cognitive impairment category. Presumably, a person with no noticeable cognitive impairment, at least per this specific cognitive test, would score 26 or higher.

Is it possible to achieve a score in the top tier, a score of 26 or above (suggesting that one does not possess any cognitive impairment), and yet nonetheless have some form of cognitive deficiency?

Yes, certainly so, since this kind of cognitive test is merely a tiny snapshot or sliver and does not cover the entire battery or gamut of cognition. Plus, as mentioned earlier, there is the possibility of being familiar with the test a priori and/or actively preparing for it beforehand, which can substantively boost performance.

Is it possible to score in the mild, moderate, or severe categories of cognitive impairment and somehow not truly be suffering from cognitive impairment?

Yes, certainly so, since a person might be overly stressed and anxious in taking the test and thus perform poorly due to the situation at hand, or could find the given set of tasks unrelated to their cognitive prowess; someone who is otherwise ingeniously inventive and cognitively sharp might find themselves mentally cowed when doing simple arithmetic or memorizing seemingly arbitrary words.

All told, it is best to be cautious in interpreting the results of such cognitive tests (and, once again, reinforces the need for a more holistic approach to cognitive assessments).

AI And Cognitive Tests

Another popular topic in the news, one seemingly unrelated to this cognitive testing matter, is the emergence of AI (hold that thought for a moment; we'll get back to it).

You are likely numbed by the multitude of AI systems being developed and released into our everyday lives, including the rise of facial recognition and the advent of Natural Language Processing (NLP) in AI systems such as Alexa and Siri.

On top of that drumbeat, there are the touted wonders of AI, entailing a lot of (rather wild) speculation about where AI is headed and whether AI will eclipse human intelligence, possibly even deciding to take over our planet and choosing to enslave or wipe out humanity (for such theories, see my analysis at this link here).

Why bring up AI, especially if it presumably has nothing to do with cognitive tests and cognitive testing?

Well, for the simple fact that AI does have to do with cognitive testing, very much so.

The presumed goal of AI is to achieve the equivalent of human intelligence, as might somehow be embodied in a machine. We do not yet know what that machine will be, though it is likely to consist of computers; the specification does not dictate what it must be, and thus if you could construct a machine from Legos and duct tape that exhibited human intelligence, more power to you.

In brief, we want to craft artificial cognitive capabilities, which are the presumed crux of human intelligence.

Logically, since that's what we are attempting to accomplish, it stands to reason that we would expect AI to be able to readily pass a human-focused cognitive test, since doing so would illustrate that the AI has arrived at similar cognitive capacities.

I don't want to burst anyone's bubble, but there is no AI today that can do any proper semblance of common-sense reasoning, and we are a long way from having sentient AI.

Bottom line: AI today would essentially flunk the MoCA cognitive test, and any others of similar complexity too.

Some might try to argue that AI and computers can count down from 100, memorize words, and do the other stated tasks, but this is a misleading assertion. Those are tasks undertaken by an AI system constructed and contrived to perform those specific tasks; it is inarguably a far cry from understanding or comprehending the test in a manner akin to human capacities, and such claims misleadingly anthropomorphize the matter (for more details, see my analysis at this link here).

There is not yet any kind of truly generalizable AI, which some are now calling Artificial General Intelligence (AGI).

As added clarification, there is a famous test in the AI field known as the Turing Test (see my explanation at this link here). No AI of today, nor of the foreseeable near future, could pass a full-ranging Turing Test, and in some respects, being able to pass a cognitive test like the MoCA is a variant of a Turing Test (in an extremely narrow way).

AI Cognition And Self-Driving Cars

Another related topic entails the advent of AI-based true self-driving cars.

We are heading toward the use of self-driving cars that involve AI autonomously driving the vehicle, doing so without any human driver at the wheel.

Some wonder whether the AI of today, lacking any kind of common-sense reasoning and without any inkling of sentience, will be sufficient for driving cars on our public roadways. Critics argue that we are going to have AI substituting for human drivers even though the AI is insufficiently robust to do so (see more on this contention at my analysis here).

Others insist that the driving task does not require the full range of human cognitive capabilities and thus the AI will do just fine in commanding self-driving cars.

Do you believe that the AI driving you to the grocery store needs to be able to first pass a cognitive test and showcase that it can adequately draw a clock and indicate the time of day?

For now, all we can say is that time will tell.

Go here to see the original:

How To Flunk Those Cognitive Deficiency Tests And What This Means Too For AI Self-Driving Cars - Forbes

Posted in Ai | Comments Off on How To Flunk Those Cognitive Deficiency Tests And What This Means Too For AI Self-Driving Cars – Forbes

Detect COVID-19 Symptoms Using Wearable Device And AI – Hackaday

Posted: at 1:31 am

A new study from West Virginia University (WVU) Rockefeller Neuroscience Institute (RNI) uses a wearable device and artificial intelligence (AI) to predict COVID-19 up to 3 days before symptoms occur. The study has been an impressive undertaking involving over 1000 health care workers and frontline workers in hospitals across New York, Philadelphia, Nashville, and other critical COVID-19 hotspots.

The implementation of the digital health platform uses a custom smartphone application coupled with an Ōura smart ring to monitor biometric signals such as respiration and temperature. The platform also assesses psychological, cognitive, and behavioral data through surveys administered via the smartphone application.

We know that wearables tend to suffer from a lack of accuracy, particularly during activity. However, the Ōura ring appears to take measurements while the user is very still, especially during sleep. This presents an advantage, as the accuracy of wearable devices greatly improves when the user isn't moving. RNI noted that the Ōura ring has been the most accurate device they have tested.

Given that some of the early warning signals for COVID-19 are fever and respiratory distress, it would make sense that a device able to measure respiration and temperature could be used as an early detector of COVID-19. In fact, we've seen a few wearable device companies attempt much of what RNI is doing, as well as a few DIY attempts. RNI's study has probably been the most thorough work released so far, but we're sure that many more are upcoming.
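For a flavor of how such an early-warning signal might work, here is a minimal sketch of baseline-deviation flagging on nightly temperature readings. To be clear, this is our own toy illustration, not RNI's actual model, and the window and threshold values are arbitrary assumptions:

```python
import statistics

# Toy sketch (not RNI's model): flag nights where skin temperature
# deviates from the wearer's own rolling baseline.

def flag_anomalies(nightly_temps, window=14, threshold_c=0.5):
    """Return indices of nights whose reading exceeds the trailing
    `window`-night median by more than `threshold_c` degrees Celsius."""
    flagged = []
    for i in range(window, len(nightly_temps)):
        baseline = statistics.median(nightly_temps[i - window:i])
        if nightly_temps[i] - baseline > threshold_c:
            flagged.append(i)
    return flagged

# A stable baseline around 36.5 C, with a spike on the final night:
temps = [36.5, 36.4, 36.6, 36.5, 36.5, 36.4, 36.6, 36.5,
         36.4, 36.5, 36.6, 36.5, 36.4, 36.5, 36.5, 37.3]
print(flag_anomalies(temps))  # [15]
```

A real system would fuse multiple signals and use a trained model rather than a fixed threshold, but the underlying intuition is the same.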

The initial phase of the study was deployed among healthcare and frontline workers but is now open to the general public. Meanwhile, the National Basketball Association (NBA) is coordinating its re-opening efforts using Ōura's technology.

We hope to see more results emerge from RNI's very important work. Until then, stay safe, Hackaday.

Go here to read the rest:

Detect COVID-19 Symptoms Using Wearable Device And AI - Hackaday

Posted in Ai | Comments Off on Detect COVID-19 Symptoms Using Wearable Device And AI – Hackaday

AMP Robotics Named to Forbes AI 50 – Yahoo Finance

Posted: at 1:31 am

Company recognized among rising stars of artificial intelligence for its AI-guided robots transforming the recycling industry

Forbes has named AMP Robotics Corp. ("AMP"), a pioneer and leader in artificial intelligence (AI) and robotics for the recycling industry, one of America's most promising AI companies. The publication's annual "AI 50" list distinguishes private, U.S.-based companies that are wielding some subset of artificial intelligence in a meaningful way and demonstrating real business potential from doing so. To be included on the list, companies needed to show that techniques like machine learning, natural language processing, or computer vision are a core part of their business model and future success.

"Earlier this year, we notched a milestone of one billion picks over 12 months that demonstrates the productivity, precision, and reliability of our AI application for the recycling industry. Its an honor to be deemed one of the countrys most promising AI companies, and were just getting started," said Matanya Horowitz, AMP founder and chief executive officer. "Theres growing appreciation for the role of recycling in the domestic supply chain, in terms of keeping resources flowing and products on shelves, and resultant momentum around supportive policy initiatives that are putting some real wind in the sail for the industry. Were pleased to play a role in enabling better efficiency, safety, and transparency to help transform recycling."

AMPs technology recovers plastics, cardboard, paper, metals, cartons, cups, and many other recyclables that are reclaimed for raw material processing. AMPs AI platform uses computer vision to visually identify different types of materials with high accuracy, then guides high-speed robots to pick out and recover recyclables at superhuman speeds for extended periods of time. The AI platform transforms images into data to recognize patterns, using machine learning to train itself by processing millions of material images within an ever-expanding neural network of robotic installations.
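For illustration only, here is a hedged sketch of the classify-then-pick loop described above. AMP's actual software is proprietary, so every name and threshold below is invented:

```python
from dataclasses import dataclass

# Hypothetical sketch of a classify-then-pick loop; not AMP's code.

@dataclass
class Detection:
    label: str         # material class predicted by the vision model
    confidence: float  # classifier confidence in [0, 1]
    x: float           # pick coordinates on the conveyor belt
    y: float

RECYCLABLE = {"PET", "HDPE", "cardboard", "aluminum", "carton"}

def plan_picks(detections, min_confidence=0.9):
    """Keep only confidently identified recyclables as pick targets."""
    return [d for d in detections
            if d.label in RECYCLABLE and d.confidence >= min_confidence]

# One simulated camera frame's worth of detections:
frame = [Detection("PET", 0.97, 12.0, 3.5),
         Detection("film_plastic", 0.88, 40.2, 1.1),
         Detection("cardboard", 0.93, 55.0, 2.8)]
print([d.label for d in plan_picks(frame)])  # ['PET', 'cardboard']
```

The real engineering challenge lies in the perception model's accuracy at conveyor speeds and the robot's motion planning, neither of which this sketch attempts.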

"We consider AMP a category-defining business and believe its artificial intelligence and robotics technology are poised to solve many of the central challenges of recycling," said Shaun Maguire, partner at Sequoia Capital and AMP board member. "The opportunity for modernization in the industry is robust as the demand for recycled materials continues to swell, from consumers and the growing circular economy."

AMPs "AI 50" recognition comes on the heels of receiving a 2020 RBR50 Innovation Award from Robotics Business Review for the companys Cortex Dual-Robot System. Earlier this year, Fast Company named AMP to its "Worlds Most Innovative Companies" list for 2020, and the company captured a "Rising Star" Company of the Year Award in the 2020 Global Cleantech 100.

Since its Series A fundraising in November, AMP has been on a major growth trajectory as it scales its business to meet demand. The company announced a 50% increase in revenue in the first quarter of 2020, a rapidly growing project pipeline, a facility expansion in its Colorado headquarters, and a new lease program that makes its AI and robotics technology even more attainable for recycling businesses.

About AMP Robotics Corp.

AMP Robotics is applying AI and robotics to help modernize recycling, enabling a world without waste. The AMP Cortex high-speed robotics system automates the identification and sorting of recyclables from mixed material streams. The AMP Neuron AI platform continuously trains itself by recognizing different colors, textures, shapes, sizes, patterns, and even brand labels to identify materials and their recyclability. Neuron then guides robots to pick and place the material to be recycled. Designed to run 24/7, all of this happens at superhuman speed with extremely high accuracy. With deployments across the United States, Canada, Japan, and now expanding into Europe, AMP's technology recycles municipal waste, e-waste, and construction and demolition debris. Headquartered and with manufacturing operations in Colorado, AMP is backed by Sequoia Capital, Closed Loop Partners, Congruent Ventures, and Sidewalk Infrastructure Partners ("SIP"), an Alphabet Inc. (NASDAQ: GOOGL) company.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200710005481/en/

Contacts

Media Contact: Carling Spelhaug, carling@amprobotics.com

Read more:

AMP Robotics Named to Forbes AI 50 - Yahoo Finance

Posted in Ai | Comments Off on AMP Robotics Named to Forbes AI 50 – Yahoo Finance

Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat

Posted: at 1:31 am

Take the latest VB Survey to share how your company is implementing AI today.

The past year has seen remarkable change. As innovation in the field of AI and real-world applications of its constituent technologies such as machine learning, natural language processing, and computer vision continue to grow, so has an understanding of their social impacts.

At our AI-focused Transform 2020 event, taking place July 15-17 entirely online, VentureBeat will recognize and award emergent, compelling, and influential work in AI through our second annual VB AI Innovation Awards.

Drawn both from our daily editorial coverage and the expertise, knowledge, and experience of our nominating committee members, these awards give us a chance to shine a light on the people and companies making an impact in AI.

Our nominating committee includes:

Claire Delaunay, Vice President of Engineering, Nvidia

Claire Delaunay is vice president of engineering at Nvidia, where she is responsible for the Isaac robotics initiative and leads a team to bring Isaac to market for use by roboticists and developers around the world.

Prior to joining Nvidia, Delaunay was the director of engineering at Uber, after it acquired Otto, a startup she cofounded. She was also the robotics program lead at Google and founded two other companies, Botiful and Robotics Valley.

Delaunay has 15 years of experience in robotics and autonomous vehicles, leading teams ranging from startups and research labs to Fortune 500 companies. She holds a Master of Science in computer engineering from École Privée des Sciences Informatiques (EPSI).

Asli Celikyilmaz, Principal Researcher, Microsoft Research

Asli Celikyilmaz is a principal researcher at Microsoft Research (MSR) in Redmond, Washington. She is also an affiliate professor at the University of Washington. She received her Ph.D. in information science from the University of Toronto, Canada, and continued her postdoc study in the Computer Science Department at the University of California, Berkeley.

Her research interests are mainly in deep learning and natural language (specifically language generation with long-term coherence), language understanding, language grounding with vision, and building intelligent agents for human-computer interaction. She serves on the editorial boards of Transactions of the ACL (TACL) as area editor and Open Journal of Signal Processing (OJSP) as associate editor. She has received several best-of awards, including at NAFIPS 2007, Semantic Computing 2009, and CVPR 2019.

The award categories are:

Natural Language Processing/Understanding Innovation

Natural language processing and understanding have only continued to grow in importance, and new advancements, new models, and more use cases continue to emerge.

Business Application Innovation

The field of AI is rife with new ideas and compelling research, developed at a blistering pace, but it's the practical applications of AI that matter to people right now, whether that's RPA to reduce human toil, streamlined processes, more intelligent software and services, or other solutions to real-world work and life problems.

Computer Vision Innovation

Computer vision is an exciting subfield of AI that's at the core of applications like facial recognition, object recognition, event detection, image restoration, and scene reconstruction, and that's fast becoming an inescapable part of our everyday lives.

AI for Good

This award is for AI technology, the application of AI, or advocacy or activism in the field of AI that protects or improves human lives or operates to fight injustice, improve equality, and better serve humanity.

Startup Spotlight

This award spotlights a startup that holds great promise for making an impact with its AI innovation. Nominees are selected based on their contributions and criteria befitting their category, including technological relevance, funding size, and impact in their sub-field within AI.

As we count down to the awards, we'll offer editorial profiles of the nominees on VentureBeat's AI channel, The Machine, and share them across our social channels. The award ceremony will be held on the evening of July 15 to conclude the first day of Transform 2020.

Go here to read the rest:

Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 - VentureBeat

Posted in Ai | Comments Off on Announcing the second annual VentureBeat AI Innovation Awards at Transform 2020 – VentureBeat

AI Weekly: Welcome to The Machine, VentureBeats AI site – VentureBeat

Posted: at 1:31 am

Take the latest VB Survey to share how your company is implementing AI today.

VentureBeat readers likely noticed this week that our site looks different. On Thursday, we rolled out a significant design change that includes not just a new look but also a new brand structure that better reflects how we think about our audiences and our editorial mission.

VentureBeat remains the flagship brand devoted to covering transformative technology that matters to business decision makers, and now our longtime GamesBeat sub-brand has its own homepage of sorts, and definitely its own look. And we've launched a new sub-brand. This one is for all of our AI content, and it's called The Machine.

By creating two distinct brands under the main VentureBeat brand, we're leaning hard into what we've acknowledged internally for a long time: We're serving more than one community of readers, and those communities don't always overlap. There are readers who care about our AI and transformative tech coverage, and there are others who ardently follow GamesBeat. We want to continue to cultivate those communities through our written content and events. So when we reorganized our site, we created dedicated space for games and AI coverage, respectively, while leaving the homepage as the main feed.

GamesBeat has long been a standout sub-brand under VentureBeat, thanks to the leadership of managing editor Jason Wilson and the hard work of Dean Takahashi, Mike Minotti, and Jeff Grubb. Thus, giving it a dedicated landing page makes logical sense. We want to give our AI coverage the same treatment, which is why we created The Machine.

We chose to take a long and winding path to selecting The Machine as the name for our AI sub-brand. We could have just put our heads together and picked one, but where's the fun in that? If you're going to come up with a name for an AI-focused brand, you should use AI to help you do it. And that's what we did.

First, we went through the necessary exercises to map out a brand: We talked through brand values, created an abstract about its focus and goals, listed the technologies and verticals we wanted to cover, and so on. Then, we humans brainstormed some ideas for names. (None stood out as clear winners.)

Armed with this data, we turned to Hugging Face's free online NLP tools, which require no code: you just put text into the box and let the system do its thing. Essentially, we ended up following these tips to generate name ideas.

There are a few different approaches you can take. You can feed the system 20 names, let's say, and ask it to generate a 21st. You can give it tags and relevant terms (like machine learning, artificial intelligence, computer vision, and so on) and hope that it converts those into something that would be worthy of a name. You can enter a description of what you want (like a paragraph about what the sub-brand is all about) and see if it comes up with something. And you can tweak various parameters, like model size and temperature, to extract different results.
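For those who would rather script it, a rough programmatic equivalent of those no-code tools looks something like the following. The model choice, prompt, and parameter values here are illustrative assumptions, not the exact setup we used:

```python
from transformers import pipeline

# Generate candidate names with an off-the-shelf language model.
generator = pipeline("text-generation", model="gpt2")

prompt = ("Names for an AI news publication: VentureBeat, GamesBeat, "
          "AIBeat,")

# Sampling with a higher temperature yields more diverse (and sillier) names.
outputs = generator(prompt, max_length=40, do_sample=True,
                    temperature=1.2, num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
```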

This sort of tinkering is a delightful rabbit hole to tumble down. After incessantly fiddling with both the data we fed the system and the various adjustable parameters, we ended up with a long and hilarious list of AI-generated names to chew on.

Here are some of our favorite terrible names that the tool generated:

This is a good lesson in the limitations of AI. The system had no idea what we wanted it to do. It couldn't, and didn't, solve our problem like some sort of name vending machine. AI isn't creative. We had to generate a bunch of data at the beginning, and then at the end, we had to sift through mostly unhelpful output (we ended up with dozens and dozens of names) to find inspiration.

But in the detritus, we found some nuggets of accidental brilliance. Here are a few NLP-generated names that are actually kind of good:

It's worth noting that the system all but insisted on AIBeat. No matter what permutations we tried, AIBeat kept resurfacing. It was tempting to pluck that low-hanging fruit: it matched VentureBeat and GamesBeat, and there's no confusion about what beat we'd be covering. But we humans decided to be more creative with the name, so we moved away from that construction.

We took a step back and used the long list of NLP-generated names to help us think up some fresh ideas. For example, "We the Machine" stood out to some of us as particularly punchy, but it wasn't quite right for a publication name. ("Hello, I write for We the Machine" doesn't exactly roll off the tongue.) But that inspired The Machine, which emerged as the winner from our shortlist.

The Machine has multiple layers. It's a play on machine learning, and it's a wink at the persistent fear of sentient robots. And it frames our AI team as a formidable, well-oiled content machine, punching well above our weight with a tiny roster of writers.

And so, I write for The Machine. Bookmark this page and visit every day for the latest AI news, analysis, and features.

Read the original:

AI Weekly: Welcome to The Machine, VentureBeats AI site - VentureBeat

Posted in Ai | Comments Off on AI Weekly: Welcome to The Machine, VentureBeats AI site – VentureBeat

How Black Lives Matter Has Been Coopted by Russia’s Government and Its Opposition – Foreign Policy

Posted: at 1:31 am

As Black Lives Matter protests spread across the globe, Russia has proven a notable exception. There have been solidarity demonstrations and localized movements against racism and police violence in Helsinki; Almaty, Kazakhstan; and Vilnius, Lithuania; but no such scenes in Moscow. Instead, Russia has used the civil unrest in the United States to continue its history of reflecting the United States' most unbecoming aspects back on itself. The Russian government and its liberal opposition alike have used their platforms to discredit the relatively peaceful spirit of the demonstrations and project ideas of U.S. weakness. And in doing so, the opposition stands to harm its larger fight against Russian state oppression.

The development of the Russian Lives Matter social media movement is perhaps the strongest example of how the liberal opposition in Russia has unwittingly aided its government in subverting a global anti-racist effort. Russian Lives Matter started after police raided a home in Yekaterinburg and killed a resident on May 31. Since the police shooting, members of Russia's libertarian movement, including Libertarian Party leader Mikhail Svetov, have used the hashtag to shed light on Russian police violence against citizens. The hashtag is not a show of solidarity with the internationally known Black Lives Matter movement. Instead, #RussianLivesMatter has been used to undermine the American fight against systemic racism by downplaying the impact of racism against African Americans, by suggesting police killings of Black Americans were deserved, and by framing empathy towards victims of police violence in Russia as a zero-sum game.

The hashtag itself hints at the exclusionary undertones of the movement's participants. In a recent podcast interview with the independent Russian media site Meduza, Svetov made clear that his concern was not racism, but police violence against ethnic Russians (a point he drove home by using the word russkie, meaning ethnic Russian, rather than the more all-encompassing rossiyane). Asked whether policing in Russia is influenced by race, Svetov demurred, saying it depended on the region, noting that in Chechnya, for example, police violence is focused on Russians. In truth, much Russian police violence is targeted at ethnic minorities and migrants from Central Asia, Africa, and elsewhere, as can be seen in recent cases such as the September 2019 police torture of two Uzbek migrants and the June interrogation of Afro-Russian blogger Mariya Tunkara.

The omission of minorities from the Russian Lives Matter movement coincides with outright dismissals of Black Lives Matter, as Meduza noted, with supporters on Twitter saying things such as "I don't give a damn about blacks in America when they're lynching Russians in Yekaterinburg." Prominent Russian journalist Oleg Kashin added to the racist imagery online by posting a meme of Martin Luther King Jr. surrounded by shoeboxes and cellphone boxes with the text "Martin Looter King."

Such racist sentiments have placed members of the Russian opposition in strange proximity to the government. Russian liberals who have vociferously opposed President Vladimir Putin's regime are now silent at best and parrot Moscow's messaging at worst. Russian liberals such as Ksenia Sobchak, who ran against Putin in 2018, and the high-profile journalist Yulia Latynina have gone a step further, writing articles and creating social media posts that focus on looting, property damage, and a perceived lack of law and order in the United States, a near mirroring of government media, which portrays U.S. society in a state of chaos. Sobchak recently lost her job as a spokeswoman for Audi after posting a racist tirade on Instagram describing Black Americans as stupid and lazy. Latynina's recent op-ed in Novaya Gazeta compared the Black Lives Matter movement with Ukraine's Euromaidan protests to undermine African American complaints of racism. "That is why it is ridiculous and shameful to regard these pogroms as rebellion against the system, and to equate the rioters and even peaceful protests with those who really risk their lives when they go to the Maidan or Tiananmen Square," she wrote. The protests, in her telling, were "pogroms" and the protesters "hooligans."

The language echoes that of Russian state-controlled media. News sites such as RT and Sputnik have published articles decrying the "woke mafia" and focusing on an alleged rise in crime following demands to defund the police. Even the famous film Brother 2 received a new ending: Russian viewers of state-controlled Channel One were surprised to see images of looting and police violence at the U.S. protests juxtaposed with the song "Goodbye America."

Moscow, meanwhile, has used the opportunity to undermine U.S. legitimacy at home. In a recent interview, Putin pointed to the U.S. protests and Washington's mishandling of the COVID-19 pandemic to contrast Russia's rigorous law-and-order response.

This instance is hardly the first time Russia has sought to exploit U.S. racial issues, particularly police and civilian violence towards African Americans, for domestic and geopolitical purposes. During the 2016 presidential election, Russian operatives targeted African American communities with disinformation, including posts on police mistreatment of African Americans and posts on Instagram promoting Black women and beauty. The Internet Research Agency also created content on YouTube that focused on the Black Lives Matter movement.

These well-documented efforts demonstrate how systemic racism and police brutality against American civilians, and specifically African Americans, present a national security problem for the United States. Of course, the genesis of the threat is not Russia's meddling, but the United States' failure to address centuries-old systemic racism, which hands authoritarian regimes such as Russia's an opportunity to undermine U.S. foreign policy in Eastern Europe and the former Soviet Union. When Russia uses state-sanctioned violence against ethnic minorities and political opponents, can the United States project itself as a counter to this regime? As U.S. policymakers learned in the competition against the Soviet Union for influence in the newly independent African states during the 1960s, they cannot successfully promote democratic values abroad when U.S. citizens are denied their rights at home.

With the 2020 election on the horizon, Ukraine's pivot toward the European Union, and an impending revision of the Russian Constitution that would extend Putin's term limit, it is a critical moment for U.S.-Russia relations. And the failure of Washington to uphold fundamental rights within the United States will endanger the opposition in Russia and elsewhere.

Russia's opposition, for its part, has missed a critical chance to build transnational solidarity against police brutality. In using the notoriety of the United States' Black Lives Matter slogan and American white supremacy logic to shed light on Russian police brutality and promote ethnic Russian nationalism, opposition members have undermined their own cause. In its eagerness to ignore the role of racism in Russia, the Russian Lives Matter movement has inadvertently stumbled on the same messaging as Putin's regime. In the long run, this can only hurt its cause. Putin has no problem using the police and accusations of "hooliganism" to stop public demonstrations against his regime. Now, Moscow can point to the very logic of the opposition regarding the protests in the United States amid any accusations of state oppression.

See original here:

How Black Lives Matter Has Been Coopted by Russia's Government and Its Opposition - Foreign Policy

Posted in Government Oppression | Comments Off on How Black Lives Matter Has Been Coopted by Russia’s Government and Its Opposition – Foreign Policy

Five Of The Leading British AI Companies Revealed – Forbes

Posted: at 1:31 am

Amid the Covid-19 gloom, many cutting-edge technology companies have quietly been getting on with raising finance, with artificial intelligence emerging as a particular focus for investors. Last month, for example, London-based Temporall raised £1m of seed investment to continue developing its AI- and analytics-based workplace insights platform. It was just the latest in a string of AI businesses to successfully raise finance in recent months despite the uncertainties of the pandemic.

That extends a trend seen last year. In September, a report from TechNation and Crunchbase revealed that UK AI investment reached a record level of $1bn in the first half of 2019, surpassing the total amount of new finance raised during the whole of the previous year.

The UK's AI industry has been boosted by a supportive public sector environment: the UK government is leading the way on AI investment in Europe and has become the third biggest spender on AI in the world. In the private sector, meanwhile, many British companies offer world-leading technologies. Take just five of the most innovative AI start-ups in the UK:

Synthesized

Founded in 2017 by Dr Nicolai Baldin, a machine-learning researcher based at the University of Cambridge, Synthesized has created an all-in-one data provisioning and preparation platform that is underpinned by AI.

In just 10 minutes, its technology can generate a representative synthetic dataset incorporating millions of records, helping an organisation to share insights safely and efficiently while automatically complying with data regulations. In March, Synthesized raised $2.8m in funding with the aim of doubling the number of its employees in London and accelerating the company's rapid expansion.
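To convey only the basic idea of synthetic data (Synthesized's actual method is proprietary and far more sophisticated), a naive per-column sketch in Python might look like this:

```python
import numpy as np
import pandas as pd

# Toy illustration of synthetic data: fit simple per-column models on real
# data, then sample new rows that preserve marginal statistics but contain
# no original records. Real platforms also preserve cross-column correlations.

def naive_synthesize(df, n_rows, seed=0):
    rng = np.random.default_rng(seed)
    synthetic = {}
    for col in df.columns:
        if pd.api.types.is_numeric_dtype(df[col]):
            # Sample numeric columns from a fitted normal distribution.
            synthetic[col] = rng.normal(df[col].mean(), df[col].std(), n_rows)
        else:
            # Sample categorical columns from observed frequencies.
            freqs = df[col].value_counts(normalize=True)
            synthetic[col] = rng.choice(freqs.index, size=n_rows, p=freqs.values)
    return pd.DataFrame(synthetic)

real = pd.DataFrame({"age": [34, 29, 45, 52, 38],
                     "plan": ["basic", "pro", "pro", "basic", "pro"]})
print(naive_synthesize(real, n_rows=3))
```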

Onfido

With more than $180m in funding, Onfido is on a mission to help businesses verify people's identities. Founded in 2012, it uses machine learning and AI technologies, including face detection and character recognition, to verify documents such as passports and ID cards, and to help companies with fraud prevention.

Onfido is headquartered in London and now employs more than 400 employees across seven offices worldwide. In 2019 the company had over 1,500 customers including Revolut, Monzo and Zipcar.

Benevolent AI

Aiming to disrupt the pharmaceutical sector, Benevolent AI's goal is to find medicines for diseases that have no treatment. The company applies AI and machine learning tools, together with other cutting-edge technologies, to try to reinvent the ways drugs are discovered and developed.

The business was founded in 2013 and has raised almost $300m in funding. Its software reduces drug development costs, decreases failure rates and increases the speed at which medicines are generated. Right now, it is focusing on searching for treatments for Covid-19.

Plum Fintech

Plum is an AI assistant that helps people manage their money and increase their savings. It uses a mix of AI and behavioural science to help users change the way they engage with their finances for example, it points out savings they can afford by analysing their bank transactions.

Plum also allows its users to invest the money saved, as well as to easily switch household suppliers to secure better deals - the average customer can save roughly £230 a year on regular bills, it claims.
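As a toy illustration of the transaction-analysis idea described above (not Plum's actual algorithm, which is proprietary), one could estimate a monthly "safe to save" amount from transaction surpluses:

```python
# Toy sketch, not Plum's algorithm: suggest savings from monthly surpluses.

def safe_to_save(transactions, buffer=0.2):
    """Suggest the average monthly surplus minus a safety buffer.
    `transactions` maps month -> list of signed amounts
    (income positive, spending negative)."""
    surpluses = [sum(amounts) for amounts in transactions.values()]
    avg_surplus = sum(surpluses) / len(surpluses)
    return max(0.0, avg_surplus * (1 - buffer))

months = {
    "2020-04": [2500.0, -800.0, -950.0, -400.0],
    "2020-05": [2500.0, -820.0, -1010.0, -310.0],
}
print(f"Suggested monthly savings: {safe_to_save(months):.2f}")  # 284.00
```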

Poly AI

After meeting at the Machine Intelligence Lab at the University of Cambridge, Nikola Mrkšić, Pei-Hao Su and Tsung-Hsien Wen, a group of conversational AI experts, started Poly AI. CEO Mrkšić was previously the first engineer at Apple-acquired VocalIQ, which became an essential part of Siri.

Poly AI helps contact centres scale. The company's technology not only understands customers' queries, but also addresses them in a conversational way, via voice, email, or messaging. The company doesn't position itself as a replacement for human contact centre agents, but as an enhancement that works alongside them. Poly AI has secured $12m in funding to date and works as a team of seven out of its London headquarters.

Here is the original post:

Five Of The Leading British AI Companies Revealed - Forbes

Posted in Ai | Comments Off on Five Of The Leading British AI Companies Revealed – Forbes