
Category Archives: Ai

AI-based loan apps are booming in India, but some borrowers miss out – VentureBeat

Posted: June 13, 2021 at 12:44 pm


(Reuters) As the founder of a consumer rights non-profit in India, Karnav Shah is used to seeing sharp practices and disgruntled customers. But even he has been surprised by the sheer volume of complaints against digital lenders in recent years.

While most of the grievances are about unauthorised lending platforms misusing borrowers' data or harassing them for missed payments, others relate to high interest rates or loan requests that were rejected without explanation, Shah said.

"These are not like traditional banks, where you can talk to the manager or file a complaint with the head office. There is no transparency, and no one to ask for a remedy," said Shah, founder of JivanamAsteya.

"It is hurting young people starting off in their lives: a loan being rejected can result in a low credit score, which will adversely affect bigger financial events later on," he told the Thomson Reuters Foundation.

Hundreds of mobile lending apps have mushroomed in India as smartphone use surged and the government encouraged digitization in banking, with financial technology (fintech) firms rushing to fill the gap in access to loans.

Unsecured loan apps, which promise quick loans even to those without a credit history or collateral, have been criticized for high lending rates, short repayment terms, as well as aggressive recovery methods and misuse of customer data.

At the same time, their use of algorithms to gauge the creditworthiness of first-time borrowers disproportionately excludes women and other traditionally marginalized groups, analysts say.

"Credit scoring systems were intended to reduce the subjectivity in loan approvals by decreasing the role of a loan officer's discretion in lending decisions," said Shehnaz Ahmed, fintech lead at the Vidhi Centre for Legal Policy in Delhi.

"However, since alternative credit scoring systems employ thousands of data points and complex models, they could potentially be used to mask discriminatory policies and may also perpetuate existing forms of discrimination," she said.

Globally, about 1.7 billion people do not have a bank account, leaving them vulnerable to loan sharks and at risk of being excluded from vital government and welfare benefits, which are increasingly disbursed electronically.

Nearly 80% of Indians now have a bank account, partly as a result of the government's financial inclusion policies, but young people and the poor often lack the formal credit histories that lenders use to gauge an applicant's creditworthiness.

Almost a quarter of loan enquiries every month are from people with no credit history, according to TransUnion CIBIL, a company that generates credit scores.

Authorities have backed the use of AI for creating credit scores for so-called "new to credit" consumers, who account for about 60% of motorbike loans and more than a third of mortgages.

Algorithms help assess the creditworthiness of first-time borrowers by scanning their social media footprint, digital payments data, number of contacts and calling patterns.
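To make the skew concrete, here is a toy scoring sketch in Python. All feature names and weights below are invented for illustration and do not come from any actual lender's model: the point is only that when a logistic score combines alternative-data signals, a thin contact list alone can drag an applicant's score down.

```python
import math

def thin_file_score(features):
    """Toy 'creditworthiness' probability from alternative data signals."""
    # Hypothetical weights; a real lender would learn these from repayment data.
    weights = {
        "digital_payments_per_month": 0.04,
        "phone_contacts": 0.002,
        "social_accounts": 0.15,
    }
    bias = -2.0
    z = bias + sum(w * features.get(name, 0) for name, w in weights.items())
    return 1 / (1 + math.exp(-z))  # logistic squash to a 0-1 score

# Two applicants identical except for contact-list size: the one with more
# contacts scores higher, which is exactly the skew the article describes.
well_connected = thin_file_score(
    {"digital_payments_per_month": 20, "phone_contacts": 900, "social_accounts": 2})
less_connected = thin_file_score(
    {"digital_payments_per_month": 20, "phone_contacts": 150, "social_accounts": 2})
print(well_connected > less_connected)  # True
```

Nothing in the model mentions gender, yet a feature correlated with gender (contact-list size, in this made-up example) is enough to produce systematically different scores.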

TransUnion CIBIL recently launched an algorithm that has mapped the credit data of similar subjects that do have a credit history and whose information is comparable, said Harshala Chandorkar, the firm's chief operating officer.

Women made up about 28% of retail borrowers in India last year, up three percentage points from 2014, and have a slightly higher average CIBIL score than men, she said, without answering a question about the risk of discrimination from algorithms.

CreditVidya, a credit information firm, uses an artificial intelligence (AI)-based algorithm that taps over 10,000 data points to calculate its scores.

"A clear, unambiguous consent screen that articulates what data is collected and the purpose for which it will be used is displayed to the user to take his or her consent," it said.

EarlySalary, which says its mobile lending app has garnered more than 10 million downloads, uses an algorithm that collects text and browsing history, and information from social media platforms including Facebook and LinkedIn.

People who do not have a substantial social media presence could be at a disadvantage from such techniques, said Ahmed, adding that many online lending platforms provide little information on how they rate creditworthiness.

"There is always an element of subjectivity in determining creditworthiness. However, this is heightened in the case of alternative credit scoring models that rely on several data points for assessing creditworthiness," she said.

Personal lending apps in India, which are mainly intermediaries connecting borrowers with lending institutions, currently fall in a regulatory gray zone.

A long-delayed Personal Data Protection Bill under discussion by lawmakers would set conditions for the collection and storage of personal data, and penalties for misuse of such data.

"Authorized lending platforms are advised to engage in data capture with the informed consent of the customer, and publish detailed terms and conditions," said Satyam Kumar, a member of lobby group Fintech Association for Consumer Empowerment (FACE).

"Regular audits and internal checks of the lending process are done to ensure no discrimination on the basis of gender or religion is done, whether manually or via machine-based analysis," he said.

India's central bank has said it will draw up a regulatory framework that supports innovation while ensuring data security, privacy, confidentiality and consumer protection.

That will help boost the value of digital lending to $1 trillion in 2023, according to Boston Consulting Group.

Digital lending will still skew towards historically privileged groups, with credit scoring systems also allocating loans more often to men than women in India, said Tarunima Prabhakar, a research fellow at Carnegie India.

If an algorithm evaluates credit scores based on the number of contacts on a phone, it would likely find men more creditworthy as Indian men have greater social mobility than women.

So women may face loan rejections or higher interest rates.

"There is almost no transparency as to how these scores are reached," she said.

Digital lenders justify the secrecy on grounds of competitive advantage, but there needs to be some clarification, including explanations when loans are rejected, she added.

"If these platforms make it easier for men but not women to start small businesses, it might reduce women's agency in an already asymmetric power dynamic," Prabhakar said.

In the absence of strong monitoring and institutions, alternative lending may perpetuate the same arbitrary lending practices of the informal credit markets that it aims to resolve.


Here's How AI Can Determine The Taste Of Coffee Beans – Forbes

Posted: at 12:44 pm

[Photo: Coffee cups pictured in the tasting area of the Vanibel cocoa and vanilla production facility, a former 18th-century sugar refinery in Vieux-Habitants, Guadeloupe, on April 9, 2018. Credit: HELENE VALENZUELA/AFP via Getty Images]

The artificial intelligence (AI) market is predicted to reach $126 billion by 2025. AI is showing up in every industry, from healthcare and agriculture to education, finance and shipping. And now, AI has made a move into the food industry to discover and develop new flavors in food and drink.

In 2018, Danish brewer Carlsberg used AI to map and predict flavors from yeast and other ingredients in beer. IBM developed an AI for McCormick to create better spices. And NotCo, which produces vegan NotMilk, uses AI to analyze molecular structures and find new combinations of plant-based ingredients.

A Colombian startup, Demetria, has raised $3 million to date and is betting on its sensory digital fingerprint, using AI to match a coffee bean's profile to the industry's standard coffee flavor wheel, created in 1995.

The new company says that this new sensory digital fingerprint for coffee beans will let roasters and producers assess quality and taste at any stage of the coffee production process.

Felipe Ayerbe, CEO of Demetria, says the "wine-a-fication" of coffee is here to stay.

"Coffee drinkers today have been exposed and are more aware of the taste and general experience they are looking for and are willing to pay more for that experience," said Ayerbe. "That is why you see an ever-growing array of possibilities and choices for something that is supposedly a commodity - different prices, origins, roasts, blends, flavor characteristics, preparations just like wine."

Ayerbe notes that coffee is still considered a tradable commodity, but the experience that consumers get is anything but a commodity. "In the last 20 years [..], coffee has undergone a voyage of premiumization where the most important variable is sensory quality - taste," adds Ayerbe.

Ayerbe says this revolution in specialty coffees was spurred in part by industry pioneers like Starbucks and Nespresso that upgraded the world's taste in coffee.

"With this sensory digital fingerprint, we are upgrading the industry from analog to digital by allowing the full value chain to [..] measure and manage the most important variable in the industry - taste," said Ayerbe. "We envision that farmers will for the first time not only be able to understand the quality of what they are selling but also manage their farming practices to optimize their quality, creating an unparalleled level of empowerment for them."

The sensor technology that Demetria uses has existed for 40 years.

"In the past several years, sensors have become miniaturized, more affordable and can connect to the cloud," said Ayerbe. "This allows for the collection, storage and analysis of huge amounts of data."

Ayerbe says the company uses handheld near-infrared (NIR) sensors to read the spectral fingerprint of green coffee beans. Different colors and wavelengths of the light spectrum react differently to each organic compound present in the coffee, so the reading represents the whole chemical composition of the beans.

"We then needed AI to translate the NIR data into the sensory language the industry understands," said Ayerbe. "Until now, the taste or sensory quality of coffee beans has been determined by cupping, a manual, time-consuming process carried out by the industry's certified tasting experts, measured according to the industry's standard coffee tasting wheel."

With all the data gathered from the NIR readings and the cupping data, Demetria calibrated the AI to match a specific spectral fingerprint to an unmistakable taste profile.
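The calibration step can be pictured as a regression problem. The sketch below is a minimal stand-in using synthetic data and an ordinary least-squares fit; Demetria's actual model is proprietary, and the dimensions, noise level and attribute names here are assumptions made for the example. Spectra are mapped to cupping scores, and the fitted map then predicts a flavor profile for an uncupped sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_wavelengths, n_attributes = 200, 64, 3  # e.g. body, balance, aftertaste

# Pretend each cupping attribute is a hidden linear function of the spectrum.
true_map = rng.normal(size=(n_wavelengths, n_attributes))
spectra = rng.normal(size=(n_samples, n_wavelengths))              # NIR readings
cupping = spectra @ true_map + 0.1 * rng.normal(size=(n_samples, n_attributes))

# Calibration: least-squares fit from spectra to expert cupping scores.
coef, *_ = np.linalg.lstsq(spectra, cupping, rcond=None)

# Predict the flavor profile of a new, uncupped bean sample.
new_spectrum = rng.normal(size=(1, n_wavelengths))
predicted_profile = new_spectrum @ coef
print(predicted_profile.shape)  # (1, 3)
```

In practice chemometrics pipelines use more robust methods than plain least squares (partial least squares, ridge regression), but the idea is the same: cupping data supervises a map from spectrum to taste.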

Ayerbe says that the biggest hurdle the team had to overcome was determining subjective taste identifiers like body, balance and aftertaste that were in line with the standardized coffee taster's flavor wheel.

"These taste identifiers had to be determined holistically, rather than individually, to establish the true overall flavor profile of the coffee bean," said Ayerbe. "Also, coffee beans from the same sample are not homogenous, so for a single 300-500 gram sample of beans, multiple scans were required to gather enough data to represent an overall, unanimous flavor profile."

Demetria made thousands of scans that considered the slightest difference between each reading from a wide range of different coffee beans, and the result was a model with a sensorial digital fingerprint unique to Demetria.

Armed with the unique sensory digital fingerprint, Demetria built a matchmaking profile for a distinct flavor profile for Carcafe by training its AI and models on spectral readings and correlating those with the cupping analyses from hundreds of samples of coffee.

"The AI for the high-value profile required by Carcafe was gathered from multiple hundreds of samples of green coffee beans and measured against q-graders (cuppers) who sent over data on their cupping scores," said Ayerbe. "By compiling the big data cupping analysis, we were able to match the sensory fingerprint to this unique Colombian coffee."

Ayerbe said that after four months of processing the coffee samples, the company created a viable product for Carcafe where the AI was continuously retrained with new samples.

"The biggest technical challenge was training the AI to detect nuances in taste like a cupper can detect, rather than just clear profiles," said Ayerbe. "Clear profiles need to exist in the database, but the whole gamut of nuances need to be programmed too."

For the Carcafe profile that Demetria was matching, it needed to determine a particular sweetness; for example, chocolate is different from caramel which is different from brown sugar sweetness.

"So in the second iteration, we had to define the true taste and ensure that the AI could be specific enough to determine between these very similar but different types of sweetness," said Ayerbe. "To be able to pinpoint the producers that can match this profile exactly and give the farmers the tools to be able to grow that crop again consistently brings a new level of efficiency and transparency to Carcafe and their customers."

Ayerbe believes this process removes many variables and unknowns that currently exist in the coffee supply chain.

"If you can control more of the process, you will end up with less defective coffee, which will increase overall availability," adds Ayerbe. "It's also important to note that cuppers play a vital role in the training of the model, and this technology is in no way meant to replace their position in the industry."

Ayerbe says the problem with cupping is that it's a [..] scarce resource. "We are expanding the ability to assess sensory quality ubiquitously, along the whole value chain, and this especially is applicable at the producer level where cupping doesn't currently exist."

Ayerbe adds that their technology facilitates better use of the cuppers' time and allows traders and roasters to be more efficient in understanding who is producing the taste, type and quality of coffee they seek.


Transform 2021 puts the spotlight on women in AI – VentureBeat

Posted: at 12:44 pm


VentureBeat is proud to bring back the Women in AI Breakfast and Awards online for Transform 2021. In the male-dominated tech industry, women constantly face a gender equity gap. There is much work to be done for the tech industry to become more inclusive, bridging the gender gap while creating a diverse community.

VentureBeat is committed year after year to emphasizing the importance of women leaders by giving them a platform to share their stories and the obstacles they face in their male-dominated industries. As part of Transform 2021, we are excited to host our annual Women in AI Breakfast, presented by Capital One, and recognize women leaders' accomplishments with our Women in AI Awards.

VentureBeat's third annual Women in AI Breakfast, presented by Capital One, will commemorate women leading the AI industry. Join the digital networking session and panel on July 12 at 7:35 a.m. Pacific.

This digital breakfast includes networking and a discussion on the topic "Women in AI: a seat at the table." Our panelists will explore how we can get more women into the AI workforce, plus the roles and responsibilities of corporations, academia, governments and society as a whole in achieving this goal.

Featured speakers include Kay Firth-Butterfield, Head of AI and Machine Learning and Member of the Executive Committee, World Economic Forum; Kathy Baxter, Principal Architect, Ethical AI Practice, Salesforce; Tiffany Deng, Program Management Lead, ML Fairness and Responsible AI, Google; and Teuta Mercado, Responsible AI Program Director, Capital One. Registration for Transform 2021 is required for attendance.

Once again, VentureBeat will be honoring extraordinary women leaders at the Women in AI Awards. The five categories this year include Responsibility & Ethics of AI, AI Entrepreneur, AI Research, AI Mentorship, and Rising Star.

Submit your nominations by July 9th at 5 p.m. Pacific. Learn more about the nomination process here.

The winners of the 2021 Women in AI Awards will be presented at VB Transform on July 16th, alongside the AI Innovation Awards. Register for Transform 2021 to join online.


The Future of AI in 2021 – Analytics Insight

Posted: at 12:44 pm

AI is already a large part of our world, affecting online search results and the way we shop. Interest in AI has attracted long-term investment in AI use across several industries, particularly customer service, medical diagnostics and self-driving vehicles. The increased data available through research has produced better algorithms, which have enabled more complex AI systems. These improve a user's experience with search engines and online translation tools, let businesses run far more focused sales and marketing drives, and give financial markets virtual assistants able to deal with more than the simplest of requests.

AI system improvements will involve the processing of massive amounts of data which needs improved computing power and better algorithms and tools. Using cryptography and blockchain has made it easier to build these advances since they can publicly share data whilst keeping company information confidential.

When it comes to broader cybersecurity, AI is critical in the identification and prediction of cybersecurity threats. This is particularly true for online casinos like Starspins casino, where real money is involved, so people's bank accounts need to be protected. Security is already good, but AI will take casino security to a level where it will be rare to hear of any security breaches on online gaming websites and apps.

It is not just Tesla that is focusing on autonomous driving. Current semi-autonomous vehicles still require drivers but improved technology is bringing forward the date of the first fully automated drive. There will almost certainly be delays before self-driving cars are seen on the roads because of thorny issues around liability in cases of accidents. Despite this, about 15 percent of vehicles sold in 2030 are forecast to be fully autonomous.

Another use of AI where research and development continue is conversational agents, most commonly known as chatbots, used most often in customer services and call centres. Chatbots are limited now, but the future of AI in 2021 will see improvements in how customer tasks are managed. Customer-facing AI assistants act as the first port of call for customer queries, fielding simple questions and forwarding complex problems to live agents, while agent-supporting AI assistants help live customer service agents as they interact with customers, improving productivity. A lot of that will be supported by advances in natural language understanding, in a move away from today's narrow sets of responses which provide simple answers to simple questions.

Whilst it is exciting to see how far AI could go, AI models are complex and much work still needs to be done to make them more efficient. Fortunately, those that use AI do not have to understand the technology, thanks to Explainable AI (XAI). In the medical field, this means that a diagnosis made by an AI model will provide a doctor with the analysis behind the diagnosis, information that can also be understood by the patient.

From marketing to fulfillment and distribution, AI will continue to play a central role in e-commerce for every business sector. Currently, it is difficult to integrate different AI models, but collaborative research among tech giants like Microsoft, Amazon and others is leading to the construction of the Open Neural Network Exchange (ONNX) to allow integration. This is forecast to be the foundation of all future AI, enabling more complex chatbot communications and other personalized shopping advances, as well as targeted image-based advertising and warehouse and inventory automation.

Today, around 75 percent of job applications are rejected by an automated applicant tracking system powered by AI before they are seen by a human being. Job seekers can use the same AI technology to scan their application and compare it with a job description. Job seekers will receive suggested changes to the application to make it a better match to pass the applicant tracking system process.
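The comparison step described above can be sketched very simply. Real applicant tracking systems are far more sophisticated; this keyword-overlap score and its stopword list are illustrative assumptions only:

```python
import re

def keyword_match(resume: str, job_description: str) -> float:
    """Fraction of (non-stopword) job-description keywords found in the resume."""
    tokenize = lambda text: set(re.findall(r"[a-z]+", text.lower()))
    stopwords = {"and", "or", "the", "a", "an", "with", "in", "of", "for", "to"}
    wanted = tokenize(job_description) - stopwords
    found = tokenize(resume) & wanted
    return len(found) / len(wanted) if wanted else 0.0

job = "Python developer with SQL and cloud experience"
print(round(keyword_match("Built Python services backed by SQL databases", job), 2))  # 0.4
```

A job seeker running even a crude check like this sees which job-description terms their application is missing, which is the essence of the resume-scanning tools the paragraph describes.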

One of the biggest computing challenges is the amount of energy needed to solve more complex problems. In cases such as speech recognition, AI-enabled chips within high-performance CPUs are needed to improve efficiency. The electricity bill for one supercharged language model AI was estimated at $4.6 million.

Extended Reality (XR) will add touch, taste and smell to the Virtual Reality or Augmented Reality worlds to provide an enhanced immersive experience. Volkswagen is already using XR tools so customers can experience their cars in 3D.


Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.


AutoBrains Revolutionary AI Central To Leading Supplier’s ADAS and AV Growth Strategy – PRNewswire

Posted: at 12:44 pm

AutoBrains' revolutionary unsupervised AI technology is at the center of Continental's growth strategy in ADAS and AV.

Key to the disruptive technology's advantages over traditional deep learning is its massively reduced reliance on expensive and often error-prone manually labelled training data sets. The unsupervised AI system successfully interprets and navigates unusual driving scenarios and edge cases where traditional supervised learning systems are least reliable. This increases driving safety and helps accelerate the adoption of ADAS and vehicles capable of higher levels of autonomy. Reduced reliance on stored data also means AutoBrains' system requires roughly ten times less computing power than currently available systems and can be produced at lower cost, increasing accessibility of ADAS across market segments at a time when regulations are requiring more driver assistance capabilities for passenger and commercial vehicles.

"We are thrilled to partner with Continental to bring our revolutionary technology to the market," said Igal Raichelgauz, CEO of AutoBrains. "Unsupervised AI is backed by more than 200 patents, over a decade of research and development, and a nearly 2-year incubation period with Continental, and we are excited to take the next step with Conti as our key partner."

Frank Petznick, Head of the Driver Assistance Systems Business Unit at Continental added, "We are excited to be partnering with AutoBrains to bring to market its advanced and proven AI technology that we believe will disrupt the ADAS and AV marketplace. Historically, AV and ADAS technologies have been limited by their dependence on supervised learning that uses massive labelled training data sets and requires enormous compute power. AutoBrains' AI breaks through those barriers with a different approach that processes relevant signals from the car's environment in much the same way that human drivers do. This technology boosts performance while saving compute power and energy. With AutoBrains we intend to push rapidly ahead toward a safer and increasingly autonomous driving experience."

AutoBrains emerged out of AI tech company Cortica after Cortica identified the potential of its unsupervised learning technology to vastly improve AI for automotive. Following a period with Continental's ADAS business unit and co-pace, The Startup Program of Continental, AutoBrains was formally spun off in 2019 to focus exclusively on building unsupervised AI for autos.

"We are excited to see that combining the strength of AutoBrains' AI technology and Continental's ADAS system know-how led to such a high-performance system and profound partnership," said Jürgen Bilo, Managing Director of co-pace, The Startup Program of Continental.

PR and Media Contact:Ben Williams, Associate Director, Breakwater Strategy, [emailprotected], +1-(508)-330-5321

SOURCE AutoBrains


How wearable AI could help you recover from covid – MIT Technology Review

Posted: at 12:44 pm

The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person's vital signs. The monitoring system alerts clinicians remotely when a patient's vitals, such as heart rate, shift away from their usual levels.

Typically, patients recovering from covid might get sent home with a pulse oximeter. PhysIQ's developers say their system is much more sensitive because it uses AI to understand each patient's body, and its creators claim it is much more likely to anticipate important changes.

"It's an enormous benefit," says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: "When you work in the emergency department it's sad to see patients who waited too long to come in for help. They would require intensive care on a ventilator. You couldn't help but ask, 'If we could have warned them four days before, could we have prevented all this?'"

Like Angela Mitchell, most of the study participants are African-American. Another large group are Latino. Many are also living with risk factors such as diabetes, obesity, hypertension, or lung conditions that can complicate covid-19 recovery. Mitchell, for example, has diabetes, hypertension, and asthma.

African-American and Latino communities have been hardest hit by the pandemic in Chicago and across the country. Many are essential workers or live in high-density, multigenerational housing.

For example, there are 11 people in Mitchell's house, including her husband, three daughters, and six grandchildren. "I do everything with my family. We even share covid-19 together!" she says with a laugh. Two of her daughters tested positive in March 2020, followed by her husband, before Mitchell herself.

Although African-Americans are only 30% of Chicago's population, they made up about 70% of the city's earliest covid-19 cases. That percentage has declined, but African-Americans recovering from covid-19 still die at rates two to three times those for whites, and vaccination drives have been less successful at reaching this community. The PhysIQ system could help improve survival rates, the study's researchers say, by sending patients to the ER before it's too late, just as they did with Mitchell.

PhysIQ founder Gary Conkright has previous experience with remote monitoring, but not in people. In the mid-1990s, he developed an early artificial-intelligence startup called Smart Signal with the University of Chicago. The company used machine learning to remotely monitor the performance of equipment in jet engines and nuclear power plants.

"Our technology is very good at detecting subtle changes that are the earliest predictors of a problem," says Conkright. "We detected problems in jet engines before GE, Pratt & Whitney, and Rolls-Royce because we developed a personalized model for each engine."

Smart Signal was acquired by General Electric, but Conkright retained the right to apply the algorithm to the human body. At that time, his mother was experiencing COPD and was rushed to intensive care several times, he said. The entrepreneur wondered if he could remotely monitor her recovery by adapting his existing AI system. The result: PhysIQ and the algorithms now used to monitor people with heart disease, COPD, and covid-19.

Its power, Conkright says, lies in its ability to create a unique baseline for each patient, a snapshot of that person's norm, and then detect exceedingly small changes that might cause concern.

The algorithms need only about 36 hours to create a profile for each person.
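The personalized-baseline idea can be sketched in a few lines. This is an assumption-laden toy, not PhysIQ's proprietary model: learn each patient's own mean and spread for a vital sign, then flag readings that drift several standard deviations from that norm.

```python
import statistics

class VitalBaseline:
    def __init__(self, readings):
        # e.g. ~36 hours of heart-rate samples for one patient
        self.mean = statistics.fmean(readings)
        self.stdev = statistics.stdev(readings)

    def is_anomalous(self, reading, threshold=3.0):
        """Flag readings more than `threshold` standard deviations from this patient's baseline."""
        return abs(reading - self.mean) > threshold * self.stdev

# Patient A's resting heart rate is naturally high; the same reading that is
# normal for A can be an alert for patient B, because each baseline is personal.
patient_a = VitalBaseline([88, 90, 92, 89, 91, 90, 87, 93])
patient_b = VitalBaseline([60, 62, 58, 61, 59, 60, 63, 57])
print(patient_a.is_anomalous(95), patient_b.is_anomalous(95))  # False True
```

This is why a one-size-fits-all threshold (like a fixed pulse-oximeter alarm) is less sensitive than a per-patient model: the alert criterion adapts to each person's normal range.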

"The system gets to know how you are looking in your everyday life," says Vanden Hoek. "You may be breathing faster, your activity level is falling, or your heart rate is different than the baseline. The advanced practice provider can look at those alerts and decide to call that person to check in." If there are concerns, such as potential heart or respiratory failure, he says, the patient can be referred to a physician or even urgent care or the emergency department.

In the pilot, clinicians monitor the data streams around the clock. The system alerts medical staff when the participant's condition changes even slightly, for example, if their heart rate is different from what it normally is at that time of day.


AI Can Bring Objectivity to Recruiting If We Design It Responsibly – Built In

Posted: at 12:44 pm

Besides excellent leadership, an airtight business plan and a unique, in-demand product, one of the decisive factors determining a companys success or failure is its talent pool. Skilled teams and a positive work culture make all the difference.

As a result, the recruitment of great talent is one of the challenges companies of all sizes face, regardless of whether they're a fledgling startup or a mature corporation. Qualified applicants are in such high demand that some speak of a "talent war."

Besides their hard skills (like education and previous experience), soft factors, often subsumed under "culture fit," contribute to a candidate's ability to perform well in their new role. The competition for qualified applicants and the need to hire the best fit have turned recruiting into a complex and resource-intensive task.


In recent years, leaps in artificial intelligence have helped reduce this complexity considerably. Through AI, organizations can rapidly analyze enormous amounts of data and use it to make informed decisions and predictions. In a recruiting application, HR teams can leverage AI to evaluate a candidates fit and aptitude.

Thanks to their immense potential benefits, AI solutions are being adopted at a rapid rate. In Sage's recent survey of 500 senior HR and people leaders, a third said they are changing how they hire by building better candidate experiences for applicants and new hires. While 24 percent of companies are currently using AI for recruitment, that number is expected to grow, with 56 percent reporting they plan to adopt AI in the next year. The pandemic has likely accelerated the pace of change for many of these companies.

It's natural for humans to make sense of the world through biases, preconceived opinions and prejudices, but in the recruiting process they are harmful to all parties. Studies show that when hiring new staff, HR managers are more likely to display an affinity bias by selecting candidates who are like them. Even the notion of "culture fit" can lead to discrimination, as it also encourages uniformity among the workforce rather than diversity. There are legal implications for not removing bias from the hiring process, as Facebook recently found itself in hot water over its emphasis on "culture fit."

Unconscious biases, the preferences and prejudices that we don't realize we have, are inherent to all humans. They often come from our background and aren't necessarily apparent in day-to-day interactions with others. However, they can negatively inform the decisions we make about the people with whom we surround ourselves. Unless steps are taken to counteract biases, many hiring panels might unwittingly lean toward hiring (or not hiring) candidates based on those implicit prejudices.

There are ways to reduce biases in the hiring process, such as reviewing CVs anonymously, asking each candidate the same catalog of questions during the interview, and ensuring as diverse a hiring panel as possible. However, eliminating unconscious biases entirely is very difficult.
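A minimal sketch of the first of those bias-reduction steps, anonymized CV review, is to strip identity fields from a candidate record before a screener sees it. The field names here are illustrative assumptions, not drawn from any real applicant-tracking system.

```python
# Hypothetical sketch of anonymized CV review: strip identity fields from a
# candidate record before a screener sees it. Field names are assumptions.

IDENTITY_FIELDS = {"name", "email", "photo_url", "age", "gender"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the record with identity fields removed."""
    return {k: v for k, v in candidate.items() if k not in IDENTITY_FIELDS}

candidate = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age": 34,
    "years_experience": 8,
    "degree": "BSc Computer Science",
    "skills": ["python", "sql"],
}

print(anonymize(candidate))
# {'years_experience': 8, 'degree': 'BSc Computer Science', 'skills': ['python', 'sql']}
```

In practice the same idea extends to free-text CVs, where names and other identifiers must be detected before they can be redacted, which is considerably harder than dropping structured fields.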

This is where AI comes in, as a valuable tool to reduce the time spent sourcing and contacting a diverse pool of interested candidates.

AI-based recruiting software is not only able to screen hundreds of candidates in seconds, but it can also assess data points free from the assumptions, biases and mental fatigue that humans are prone to.

While it is important to note that AI bias can be a problem, mostly due to a lack of diversity in the algorithm-writing, well-coded tools that counter this issue are able to create candidate profiles based on qualifications alone. For example, AI recruiting software like Fetcher can track and analyze teams and hiring needs, thereby providing insight into how many prospects hiring managers need to ensure a hire for every role. By combining AI with a human in the loop, Fetcher also enables hiring teams to train and monitor data, ensuring a diverse pipeline of interested candidates.
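As a rough illustration of what screening on qualifications alone means, the sketch below ranks a candidate pool by a weighted score over qualification features only. The feature names and weights are invented for illustration; this is not Fetcher's actual model.

```python
# Illustrative sketch of screening on qualification features alone.
# Feature names and weights are assumptions, not any vendor's real model.

WEIGHTS = {"years_experience": 0.5, "relevant_degree": 2.0, "skill_matches": 1.0}

def score(candidate: dict) -> float:
    """Weighted sum over qualification features; ignores everything else."""
    return sum(WEIGHTS[f] * candidate.get(f, 0) for f in WEIGHTS)

pool = [
    {"id": "a", "years_experience": 8, "relevant_degree": 1, "skill_matches": 3},
    {"id": "b", "years_experience": 2, "relevant_degree": 0, "skill_matches": 5},
]
ranked = sorted(pool, key=score, reverse=True)
print([c["id"] for c in ranked])  # ['a', 'b']
```

Note that the bias problem mentioned above does not disappear here: it moves into the choice of features and weights, which is why the human-in-the-loop monitoring the article describes still matters.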

AI software can work around the clock, making the sourcing of passive candidates extremely time-efficient. By aggregating information on candidates who might be a good fit from across the web, including professional and social networks, these tools provide comprehensive, up-to-date candidate profiles that are easy to assess.

Even more time can be saved by employing AI software that automatically reaches out to potential candidates by email, sending automated follow-ups and personalized templates without sounding computer-generated. (Fetcher's recent $6.5 million Series A shows the tremendous potential waiting to be unlocked.)

While hiring took a dip following the COVID-19 lockdown, it is now stabilizing, and the "talent war" is set to resume. Without claiming to be completely infallible, AI tools can create a major advantage by sourcing a diverse pool of candidates in a competitive market, minus the unconscious biases humans are prone to.



DeepMind researchers say reinforcement learning is the key to cracking general AI – The Next Web


In their decades-long chase to create artificial intelligence, computer scientists have designed and developed all kinds of complicated mechanisms and technologies to replicate vision, language, reasoning, motor skills, and other abilities associated with intelligent life. While these efforts have resulted in AI systems that can efficiently solve specific problems in limited environments, they fall short of developing the kind of general intelligence seen in humans and animals.

In a new paper submitted to the peer-reviewed Artificial Intelligence journal, scientists at UK-based AI lab DeepMind argue that intelligence and its associated abilities will emerge not from formulating and solving complicated problems but from sticking to a simple but powerful principle: reward maximization.

Titled "Reward Is Enough," the paper, which is still in pre-proof as of this writing, draws inspiration from studying the evolution of natural intelligence as well as from recent achievements in artificial intelligence. The authors suggest that reward maximization and trial-and-error experience are enough to develop behavior that exhibits the kind of abilities associated with intelligence. From this, they conclude that reinforcement learning, a branch of AI based on reward maximization, can lead to the development of artificial general intelligence.

One common method for creating AI is to try to replicate elements of intelligent behavior in computers. For instance, our understanding of the mammalian vision system has given rise to all kinds of AI systems that can categorize images, locate objects in photos, define the boundaries between objects, and more. Likewise, our understanding of language has helped in the development of various natural language processing systems, such as question answering, text generation, and machine translation.

These are all instances of narrow artificial intelligence: systems designed to perform specific tasks instead of having general problem-solving abilities. Some scientists believe that assembling multiple narrow AI modules will produce more intelligent systems. For example, you can have a software system that coordinates between separate computer vision, voice processing, NLP, and motor control modules to solve complicated problems that require a multitude of skills.

A different approach to creating AI, proposed by the DeepMind researchers, is to recreate the simple yet effective rule that has given rise to natural intelligence. "[We] consider an alternative hypothesis: that the generic objective of maximising reward is enough to drive behaviour that exhibits most if not all abilities that are studied in natural and artificial intelligence," the researchers write.

This is basically how nature works. As far as science is concerned, there has been no top-down intelligent design in the complex organisms that we see around us. Billions of years of natural selection and random variation have filtered lifeforms for their fitness to survive and reproduce. Living beings that were better equipped to handle the challenges and situations in their environments managed to survive and reproduce. The rest were eliminated.

This simple yet efficient mechanism has led to the evolution of living beings with all kinds of skills and abilities to perceive, navigate, modify their environments, and communicate among themselves.

"The natural world faced by animals and humans, and presumably also the environments faced in the future by artificial agents, are inherently so complex that they require sophisticated abilities in order to succeed (for example, to survive) within those environments," the researchers write. "Thus, success, as measured by maximising reward, demands a variety of abilities associated with intelligence. In such environments, any behaviour that maximises reward must necessarily exhibit those abilities. In this sense, the generic objective of reward maximisation contains within it many or possibly even all the goals of intelligence."

For example, consider a squirrel that seeks the reward of minimizing hunger. On the one hand, its sensory and motor skills help it locate and collect nuts when food is available. But a squirrel that can only find food is bound to die of hunger when food becomes scarce. This is why it also has planning skills and memory to cache nuts and retrieve them in winter. And the squirrel has social skills and knowledge to ensure other animals don't steal its nuts. If you zoom out, hunger minimization can be a subgoal of staying alive, which also requires skills such as detecting and hiding from dangerous animals, protecting oneself from environmental threats, and seeking better habitats with seasonal changes.

"When abilities associated with intelligence arise as solutions to a singular goal of reward maximisation, this may in fact provide a deeper understanding since it explains why such an ability arises," the researchers write. "In contrast, when each ability is understood as the solution to its own specialised goal, the why question is side-stepped in order to focus upon what that ability does."

Finally, the researchers argue that the most general and scalable way to maximize reward is through agents that learn through interaction with the environment.

In the paper, the AI researchers provide some high-level examples of how intelligence and associated abilities will implicitly arise "in the service of maximising one of many possible reward signals, corresponding to the many pragmatic goals towards which natural or artificial intelligence may be directed."

For example, sensory skills serve the need to survive in complicated environments. Object recognition enables animals to detect food, prey, friends, and threats, or find paths, shelters, and perches. Image segmentation enables them to tell the difference between different objects and avoid fatal mistakes such as running off a cliff or falling off a branch. Meanwhile, hearing helps detect threats where the animal can't see, or find prey when they're camouflaged. Touch, taste, and smell also give the animal the advantage of having a richer sensory experience of the habitat and a greater chance of survival in dangerous environments.

Rewards and environments also shape innate and learned knowledge in animals. For instance, hostile habitats ruled by predators such as lions and cheetahs reward ruminant species born with the innate knowledge to run away from threats. Meanwhile, animals are also rewarded for their power to learn specific knowledge of their habitats, such as where to find food and shelter.

The researchers also discuss the reward-powered basis of language, social intelligence, imitation, and finally, general intelligence, which they describe as "maximising a singular reward in a single, complex environment."

Here, they draw an analogy between natural intelligence and AGI: "An animal's stream of experience is sufficiently rich and varied that it may demand a flexible ability to achieve a vast variety of subgoals (such as foraging, fighting, or fleeing), in order to succeed in maximising its overall reward (such as hunger or reproduction). Similarly, if an artificial agent's stream of experience is sufficiently rich, then many goals (such as battery-life or survival) may implicitly require the ability to achieve an equally wide variety of subgoals, and the maximisation of reward should therefore be enough to yield an artificial general intelligence."

Reinforcement learning is a special branch of AI algorithms that is composed of three key elements: an environment, agents, and rewards.

By performing actions, the agent changes its own state and that of the environment. Based on how much those actions affect the goal the agent must achieve, it is rewarded or penalized. In many reinforcement learning problems, the agent has no initial knowledge of the environment and starts by taking random actions. Based on the feedback it receives, the agent learns to tune its actions and develop policies that maximize its reward.
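The loop described above can be sketched with tabular Q-learning in a toy setting. The corridor environment, reward signal, and hyperparameters below are illustrative assumptions, not anything taken from the DeepMind paper; the point is only to show an agent that starts with no knowledge, acts, receives rewards, and tunes its policy.

```python
import random

# Minimal sketch of the reinforcement-learning loop: a tabular Q-learning
# agent in a 5-state corridor, with a reward of 1 for reaching the last state.

N_STATES, GOAL = 5, 4      # states 0..4, reward at state 4
ACTIONS = [-1, +1]         # move left or move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(200):  # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy selection: mostly exploit the best-known action,
        # occasionally explore a random one.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward the reward plus the
        # discounted value of the best next action.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should be "move right" (+1) in every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Even in this four-step corridor, the agent needs hundreds of interactions to settle on the obvious policy, which previews the sample-efficiency concern the researchers acknowledge later in the article.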

In their paper, the researchers at DeepMind suggest reinforcement learning as the main algorithm that can replicate reward maximization as seen in nature and can eventually lead to artificial general intelligence.

"If an agent can continually adjust its behaviour so as to improve its cumulative reward, then any abilities that are repeatedly demanded by its environment must ultimately be produced in the agent's behaviour," the researchers write, adding that, in the course of maximizing its reward, a good reinforcement learning agent could eventually learn perception, language, social intelligence, and so forth.

In the paper, the researchers provide several examples that show how reinforcement learning agents were able to learn general skills in games and robotic environments.

However, the researchers stress that some fundamental challenges remain unsolved. For instance, they say, "We do not offer any theoretical guarantee on the sample efficiency of reinforcement learning agents." Reinforcement learning is notorious for requiring huge amounts of data; a reinforcement learning agent might need centuries' worth of gameplay to master a computer game. And AI researchers still haven't figured out how to create reinforcement learning systems that can generalize their learnings across several domains, so slight changes to the environment often require the full retraining of the model.

The researchers also acknowledge that learning mechanisms for reward maximization remain an unsolved problem and "a central question to be further studied in reinforcement learning."

Patricia Churchland, neuroscientist, philosopher, and professor emerita at the University of California, San Diego, described the ideas in the paper as "very carefully and insightfully worked out."

However, Churchland pointed out possible flaws in the paper's discussion of social decision-making. The DeepMind researchers focus on personal gains in social interactions. Churchland, who has recently written a book on the biological origins of moral intuitions, argues that attachment and bonding are a powerful factor in the social decision-making of mammals and birds, which is why animals put themselves in great danger to protect their children.

"I have tended to see bonding, and hence other-care, as an extension of the ambit of what counts as oneself: me-and-mine," Churchland said. "In that case, a small modification to the [paper's] hypothesis to allow for reward maximization to me-and-mine would work quite nicely, I think. Of course, we social animals have degrees of attachment: super strong to offspring, very strong to mates and kin, strong to friends and acquaintances, etc., and the strength of types of attachments can vary depending on environment, and also on developmental stage."

This is not a major criticism, Churchland said, and could likely be worked into the hypothesis quite gracefully.

"I am very impressed with the degree of detail in the paper, and how carefully they consider possible weaknesses," Churchland said. "I may be wrong, but I tend to see this as a milestone."

Data scientist Herbert Roitblat challenged the papers position that simple learning mechanisms and trial-and-error experience are enough to develop the abilities associated with intelligence. Roitblat argued that the theories presented in the paper face several challenges when it comes to implementing them in real life.

"If there are no time constraints, then trial-and-error learning might be enough, but otherwise we have the problem of an infinite number of monkeys typing for an infinite amount of time," Roitblat said.

The infinite monkey theorem states that a monkey hitting random keys on a typewriter for an infinite amount of time will almost surely eventually type any given text.
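Roitblat's point about time constraints can be made concrete with a back-of-the-envelope calculation: the expected number of uniformly random keystroke attempts needed to produce a given text grows exponentially with its length. A 26-letter alphabet is assumed here for simplicity.

```python
# Back-of-the-envelope illustration of the infinite-monkey argument:
# expected random attempts to type a text grow exponentially in its length.
# A 26-letter lowercase alphabet is assumed.

def expected_attempts(text: str, alphabet_size: int = 26) -> int:
    """Expected number of uniformly random attempts to type `text` exactly."""
    return alphabet_size ** len(text)

print(expected_attempts("reward"))          # 308915776, i.e. 26**6
print(expected_attempts("rewardisenough"))  # 26**14, roughly 6.5e19
```

Unguided trial and error scales this badly, which is why Roitblat argues that problem formulation and representations, not search alone, carry much of the load.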

Roitblat is the author of Algorithms Are Not Enough, in which he explains why all current AI algorithms, including reinforcement learning, require careful formulation of the problem and representations created by humans.

"Once the model and its intrinsic representation are set up, optimization or reinforcement could guide its evolution, but that does not mean that reinforcement is enough," Roitblat said.

In the same vein, Roitblat added that the paper does not make any suggestions on how the reward, actions, and other elements of reinforcement learning are defined.

"Reinforcement learning assumes that the agent has a finite set of potential actions. A reward signal and value function have been specified. In other words, the problem of general intelligence is precisely to contribute those things that reinforcement learning requires as a pre-requisite," Roitblat said. "So, if machine learning can all be reduced to some form of optimization to maximize some evaluative measure, then it must be true that reinforcement learning is relevant, but it is not very explanatory."

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.



RISC-V Evolving to Address Supercomputers and AI – Tom’s Hardware


The open source RISC-V instruction set architecture is gaining more mainstream attention in the wake of Intel's rumored $2 billion bid for SiFive, the industry's leading RISC-V design house. Unfortunately, RISC-V has long been relegated to smaller chips and microcontrollers, limiting its appeal. However, that should change soon as RISC-V International, the organization that oversees the development of the RISC-V instruction set architecture (ISA), has announced plans to extend the architecture to high-performance computing, AI, and supercomputing applications.

The RISC-V open-source ISA was first introduced in 2016, but the first cores were only suitable for microcontrollers and some basic system-on-chip designs. However, after several years of development, numerous chip developers (e.g., Alibaba) have created designs aimed at cloud data centers, AI workloads (like the Jim Keller-led Tenstorrent), and advanced storage applications (e.g., Seagate, Western Digital).

That means there's plenty of interest from developers in high-performance RISC-V chips. But to foster adoption of the RISC-V ISA by edge, HPC, and supercomputing applications, the industry needs a more robust hardware and software ecosystem (along with compatibility with legacy applications and benchmarks). That's where the RISC-V SIG for HPC comes into play.

At this point, the RISC-V SIG-HPC has 141 members on its mailing list and 10 active members in research, academia, and the chip industry. The key task for the growing SIG is to propose various new HPC-specific instructions and extensions and work with other technical groups to ensure that HPC requirements are considered for the evolving ISA. As a part of this task, the SIG needs to define AI/HPC/edge requirements and plot a feature and capability path to a point when RISC-V is competitive against Arm, x86, and other architectures.

There are short-term goals for the RISC-V SIG-HPC, too. In 2021, the group will focus on the HPC software ecosystem. First up, the group plans to find open source software (benchmarks, libraries, and actual programs) that can work with the RISC-V ISA right out of the box. This process is set to be automated. The first investigations will be aimed at applications like GROMACS, Quantum ESPRESSO, and CP2K; libraries like FFT and BLAS; toolchains like GCC and LLVM; and benchmarks like HPL and HPCG.

The RISC-V SIG-HPC will develop a more detailed roadmap after the ecosystem is solidified. The long-term goal of the RISC-V SIG is to build an open-source ecosystem of hardware and software that can address emerging performance-demanding applications while also accommodating legacy needs.

How many years will that take? Only time will tell, but industry buy-in from big players, like Intel, would certainly help speed that timeline.



$1.2 million for two new Stanford research projects on energy/climate AI and environmental justice | Energy – Stanford University News


Three Stanford University entities will fund two new research projects on using artificial intelligence and machine learning to make energy systems more sustainable, affordable, resilient and fair to all socioeconomic groups.

The projects, funded by Stanford's Precourt Institute for Energy, the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the Bits & Watts Initiative, are the first two Precourt Pioneering Projects. The new program aims to fund one new project led by a Stanford faculty member every quarter, at a level greater than that provided through the institute's seed grant program. However, in its first round, leaders of the three entities decided to support two related projects. The resulting tools and datasets from both projects will be made available to researchers beyond Stanford.

Yi Cui, director of Stanford's Precourt Institute for Energy and professor of materials science and of photon science

"Both research teams proposed really exciting ideas for using massive data to transition our energy system to meet multiple goals simultaneously," said Yi Cui, director of the Precourt Institute, "so we decided to support both projects."

"In addition to optimizing for climate change, cost and reliability, they incorporate environmental justice and social equity criteria, which Stanford is committed to," said Cui, who is also a professor of materials science in the School of Engineering and of photon science at SLAC National Accelerator Laboratory.

SLAC, a U.S. Department of Energy national lab operated by Stanford, will coordinate with Precourt to fund researchers in these two broad research directions. This aligns with the climate and energy research priorities of the current U.S. administration.

"We are excited to partner with Precourt and deepen our existing linkages," said SLAC Director Chi-Chang Kao, who is also a professor of photon science. "SLAC has thriving efforts in machine learning and applied energy, and the lab is dedicated to advancing environmental justice and equity."

As Stanford moves to create a new school on climate and sustainability, the leaders of the four entities involved hope that the two new projects and related Stanford research will help recruit new faculty, bridge sustainability research across campus, and attract students of the highest caliber.

Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence and professor of computer science

"At HAI, we believe that artificial intelligence has the power to help with some of the biggest challenges of our time," said HAI's Denning Co-Director Fei-Fei Li, who is also a professor of computer science. "Climate and energy certainly top the list of earth's most urgent issues. It's truly our pleasure to support the Precourt Institute for Energy's Precourt Pioneering Projects grant awards."

One project will build a platform, MESMERIZE (A Macro-Energy System Model with Equity, Realism and Insight in Zero Emissions), centered on how policies and people shape the needed transition to sustainable energy systems, and on its distributional and equity consequences. The hub will integrate a modeling effort, datasets, advanced computational algorithms, and other tools developed at Stanford to solve energy and climate challenges on the path to deep decarbonization.

The project team will use the hub to build a multidisciplinary, economy-wide decarbonization model that integrates social equity and human health concerns. The platform will be a resource for researchers at Stanford and elsewhere to identify and optimize the most effective technological, financial and equitable solutions for different U.S. regions and energy sectors, including electricity, natural gas, transportation and heating.

"The question we want to address is: What are realistic and implementable pathways for sustainable and deeply decarbonized energy systems that include features of real policies, people's decisions and behaviors, and account for environmental justice?" said Ines Azevedo, associate professor in the Department of Energy Resources Engineering in Stanford's School of Earth, Energy & Environmental Sciences.

"We want this interdisciplinary simulation and optimization modeling hub to provide resources to others," said Azevedo, whose co-leaders on the project are professors Sally Benson, Adam Brandt, Ram Rajagopal and John Weyant, as well as visiting scholar Jacques de Chalendar. "We hope to catalyze more efficient and effective collaborations across campus and beyond by lowering the barriers to sharing knowledge, data, methods and analytical tools."

The other project will build open-source tools to assess, forecast and plan for a human-centered infrastructure system with a particular focus on electricity to meet these criteria: decarbonization, equity, affordability and resiliency to the impacts of climate change, including extreme weather events. The research team, led by professors Ram Rajagopal, Arun Majumdar, and Azevedo, as well as adjunct professor Andrew Ng, will use machine learning and publicly available data sources. Other approaches using machine learning do not optimize those four criteria simultaneously.

"The electricity grid is being transformed due to the urgency to decarbonize, improve resilience against climate-induced extreme weather events, and provide affordable, reliable access to at-risk communities," said Rajagopal, who is an associate professor in the Department of Civil & Environmental Engineering.

"The combination of rapid adoption of renewables, electric vehicles, heat pumps for residential heating and natural gas generation as a transition technology are creating deep interactions among the power grid, natural gas, transportation and information," Rajagopal explained.

The project will develop three tools that enable granular, interconnected analysis of access, reliability, cost and emissions. The first will assess and predict the risks from climate-related extreme events to local communities, and produce climate-risk scores for communities. These risks include energy insecurity, ill health and other social impacts, particularly as they affect vulnerable populations. The second tool will use remotely sensed data and artificial intelligence to create detailed, high-resolution mapping of U.S. energy resources and infrastructure. Stanford researchers have already used this technology to map specific facets of U.S. energy infrastructure. The third tool will evaluate dynamics in demand and supply due to changing grid conditions: from short-term shocks like extreme weather, to longer term transformations like increased adoption of residential solar power.
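As a purely hypothetical sketch of the kind of community climate-risk score the first tool might produce, the snippet below collapses a few normalized community indicators into one weighted number. The indicator names, weights, and [0, 1] normalization are all assumptions for illustration, not the Stanford team's actual methodology.

```python
# Hypothetical sketch of a community climate-risk score: a weighted
# combination of normalized indicators. Names and weights are assumptions.

WEIGHTS = {
    "outage_hours_per_year": 0.4,  # reliability exposure
    "heat_days_over_35c": 0.4,     # extreme-weather exposure
    "energy_burden_pct": 0.2,      # share of income spent on energy
}

def risk_score(indicators: dict) -> float:
    """Weighted sum of indicators, each assumed pre-normalized to [0, 1]."""
    return round(sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS), 3)

community = {
    "outage_hours_per_year": 0.7,
    "heat_days_over_35c": 0.9,
    "energy_burden_pct": 0.5,
}
print(risk_score(community))  # 0.74
```

A real tool would of course derive these indicators from the remotely sensed data and machine-learning models the article describes rather than hand-set weights, but the output shape, one comparable score per community, is the same.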

The research team will share their data with other researchers as an energy data commons on datacommons.org.

SLAC, operated by Stanford for the U.S. Department of Energy's Office of Science, explores how the universe works at the biggest, smallest and fastest scales and invents powerful tools used by scientists around the globe. HAI's mission is to advance AI research, education, policy and practice to improve the human condition. The Precourt Institute for Energy is a cross-campus research and education program to make energy more sustainable, affordable and secure for all people. The Bits & Watts Initiative, a Precourt Institute program, finds innovative solutions to power the 21st century electric grid.


