The Prometheus League
Breaking News and Updates
Monthly Archives: June 2021
What is Artificial Intelligence (AI)? | IBM
Posted: June 13, 2021 at 12:44 pm
Artificial intelligence leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind.
While a number of definitions of artificial intelligence (AI) have surfaced over the last few decades, John McCarthy offers the following definition in this 2004 paper (PDF, 106 KB) (link resides outside IBM): "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."
However, decades before this definition, the birth of the artificial intelligence conversation was marked by Alan Turing's seminal work, "Computing Machinery and Intelligence" (PDF, 89.8 KB) (link resides outside of IBM), which was published in 1950. In this paper, Turing, often referred to as the "father of computer science", asks the following question: "Can machines think?" From there, he offers a test, now famously known as the "Turing Test", in which a human interrogator tries to distinguish between a computer's and a human's text responses. While this test has undergone much scrutiny since its publication, it remains an important part of the history of AI, as well as an ongoing concept within philosophy, as it draws on ideas from linguistics.
Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach (link resides outside IBM), which became one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and thinking vs. acting:
Human approach:
- Systems that think like humans
- Systems that act like humans
Ideal approach:
- Systems that think rationally
- Systems that act rationally
Alan Turing's definition would have fallen under the category of systems that act like humans.
In its simplest form, artificial intelligence is a field that combines computer science and robust datasets to enable problem-solving. It also encompasses the sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines comprise AI algorithms that seek to create expert systems that make predictions or classifications based on input data.
Today, a lot of hype still surrounds AI development, which is expected of any emerging technology in the market. As noted in Gartner's hype cycle (link resides outside IBM), product innovations like self-driving cars and personal assistants follow a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain. As Lex Fridman notes here (link resides outside IBM) in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.
As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment. To learn where IBM stands within the conversation around AI ethics, read more here.
Weak AI, also called Narrow AI or Artificial Narrow Intelligence (ANI), is AI trained and focused to perform specific tasks. Weak AI drives most of the AI that surrounds us today. "Narrow" might be a more accurate descriptor for this type of AI, as it is anything but weak; it enables some very robust applications, such as Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles.
Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence (AGI), or general AI, is a theoretical form of AI in which a machine would have intelligence equal to that of humans; it would have a self-aware consciousness with the ability to solve problems, learn, and plan for the future. Artificial Super Intelligence (ASI), also known as superintelligence, would surpass the intelligence and ability of the human brain. While strong AI is still entirely theoretical, with no practical examples in use today, that doesn't mean AI researchers aren't exploring its development. In the meantime, the best examples of ASI might be from science fiction, such as HAL, the superhuman, rogue computer assistant in 2001: A Space Odyssey.
Since deep learning and machine learning tend to be used interchangeably, it's worth noting the nuances between the two. As mentioned above, both deep learning and machine learning are sub-fields of artificial intelligence, and deep learning is actually a sub-field of machine learning.
Deep learning is built on neural networks. The "deep" in deep learning refers to depth: a neural network comprising more than three layers, inclusive of the input and output layers, can be considered a deep learning algorithm.
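The more-than-three-layers definition can be made concrete with a minimal sketch (the layer sizes and weights below are arbitrary illustrations, not any particular production model): an input layer, two hidden layers, and an output layer, so more than three layers in total.

```python
import numpy as np

# A minimal sketch of a "deep" network: an input layer, two hidden
# layers, and an output layer. Sizes and random weights are arbitrary.
rng = np.random.default_rng(0)

layer_sizes = [4, 8, 8, 1]  # input -> hidden -> hidden -> output
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def relu(x):
    # Rectified linear unit, a common hidden-layer activation.
    return np.maximum(0.0, x)

def forward(x):
    # Pass inputs through each layer: ReLU on the hidden layers,
    # identity on the final output layer.
    for w in weights[:-1]:
        x = relu(x @ w)
    return x @ weights[-1]

y = forward(rng.normal(size=(3, 4)))  # 3 example inputs, 4 features each
print(y.shape)  # (3, 1): one prediction per input
```

Counting the input and output layers, this network has four layers, which is what makes it "deep" by the definition above.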
The way in which deep learning and machine learning differ is in how each algorithm learns. Deep learning automates much of the feature extraction piece of the process, eliminating some of the manual human intervention required and enabling the use of larger data sets. You can think of deep learning as "scalable machine learning", as Lex Fridman noted in the same MIT lecture cited above. Classical, or "non-deep", machine learning is more dependent on human intervention to learn: human experts determine the hierarchy of features to understand the differences between data inputs, usually requiring more structured data to learn.
"Deep" machine learning can leverage labeled datasets, also known as supervised learning, to inform its algorithm, but it doesnt necessarily require a labeled dataset. It can ingest unstructured data in its raw form (e.g. text, images), and it can automatically determine the hierarchy of features which distinguish different categories of data from one another. Unlike machine learning, it doesn't require human intervention to process data, allowing us to scale machine learning in more interesting ways.
There are numerous, real-world applications of AI systems today. Below are some of the most common examples:
The idea of 'a machine that thinks' dates back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article) important events and milestones in the evolution of artificial intelligence include the following:
IBM has been a leader in advancing AI-driven technologies for enterprises and has pioneered the future of machine learning systems for multiple industries. Based on decades of AI research, years of experience working with organizations of all sizes, and on learnings from over 30,000 IBM Watson engagements, IBM has developed the AI Ladder for successful artificial intelligence deployments:
IBM Watson gives enterprises the AI tools they need to transform their business systems and workflows, while significantly improving automation and efficiency. For more information on how IBM can help you complete your AI journey, explore the IBM portfolio of managed services and solutions.
Posted in Ai
Comments Off on What is Artificial Intelligence (AI)? | IBM
How to mitigate bias in AI – VentureBeat
Posted: at 12:44 pm
As the common proverb goes, to err is human. One day, machines may offer workforce solutions that are free from human decision-making mistakes; however, those machines learn through algorithms and systems built by programmers, developers, product managers, and software teams with inherent biases (like all other humans). In other words, to err is also machine.
Artificial intelligence has the potential to improve our lives in countless ways. However, since algorithms often are created by a few people and distributed to many, it's incumbent upon the creators to build them in a way that benefits populations and communities equitably. This is much easier said than done: no programmer can be expected to hold the full knowledge and awareness necessary to build a bias-free AI model, and further, the data gathered can be biased as a result of the way they are collected and the cultural assumptions behind those empirical methods. Fortunately, when building continuously learning AI systems of the future, there are ways to reduce that bias within models and systems. The first step is recognition.
It's important to recognize that bias exists in the real world, in all industries and among all humans. The question to ask is not how to make bias go away but how to detect and mitigate such bias. Understanding this helps teams take accountability to ensure that models, systems, and data incorporate inputs from a diverse set of stakeholders and samples.
With countless ways for bias to seep into algorithms and their applications, the decisions that impact models should not be made in isolation. Purposefully cultivating a workgroup of individuals from diversified backgrounds and ideologies can help inform decisions and designs that foster optimal and equitable outcomes.
Recently, the University of Cambridge conducted an evaluation of over 400 models attempting to detect COVID-19 faster via chest X-rays. The analysis found many algorithms had both severe shortcomings and a high risk of bias. In one instance, a model trained on X-ray images of adult chests was tested on a data set of X-rays from pediatric patients with pneumonia. Although adults experience COVID-19 at a higher rate than children, the model positively identified cases disproportionately. It's likely because the model weighted rib sizes in its analysis, when in fact the most important diagnostic approach is to examine the diseased area of the lung and rule out other issues like a collapsed lung.
One of the bigger problems in model development is that the datasets rarely are made available due to the sensitive nature of the data, so it's often hard to determine how a model is making a decision. This illustrates the importance of transparency and explainability in both how a model is created and its intended use. Having key stakeholders (e.g., clinicians, actuaries, data engineers, data scientists, care managers, ethicists, and advocates) develop a model in a single data view can remove several human biases that have persisted due to the siloed nature of healthcare.
It's also worth noting that diversity extends much further than the people creating algorithms. Fair algorithms test for bias in the underlying data in their models. In the case of the COVID-19 X-ray models, this was the Achilles' heel. The data sampled and collected to build models can underrepresent certain groups whose outcomes we want to predict. Efforts must be made to build more complete samples with contributions from underrepresented groups to better represent populations.
Without developing more robust data sets and processes around how data is recorded and ingested, algorithms may amplify psychological or statistical bias from how the data was collected. This will negatively impact each step of the model-building process, such as the training, evaluation, and generalization phases. However, by including more people from different walks of life, the AI models built will have a broader understanding of the world, which will go a long way toward reducing the inherent biases of a single individual or homogeneous group.
It may surprise some engineers and data scientists, but lines of code can create unfairness in many ways. For example, Twitter automatically crops uploaded images to improve user experience, but its engineers received feedback that the platform was incorrectly missing or misidentifying certain faces. After multiple attempts to improve the algorithm, the team ultimately realized that image trimming was a decision best made by people. Choosing the argmax (the class with the largest predicted probability) as the final output can amplify disparate impact. An enormous number of test data sets, as well as scenario-based testing, are needed to neutralize these concerns.
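The argmax point can be illustrated with a deliberately simple, hypothetical sketch (the scores and groups below are invented, not Twitter's data): two groups whose predicted probabilities differ only modestly, yet hardening the probabilities into argmax decisions produces very different selection rates.

```python
# Hypothetical scores (invented for illustration): two groups whose
# predicted probabilities differ only modestly on average.
group_a_scores = [0.52, 0.55, 0.48, 0.51, 0.53, 0.49]
group_b_scores = [0.47, 0.49, 0.51, 0.46, 0.48, 0.45]

def selection_rate(scores, cutoff=0.5):
    # For a two-class model, taking the argmax of the predicted
    # probabilities is equivalent to thresholding the positive-class
    # probability at 0.5.
    return sum(s >= cutoff for s in scores) / len(scores)

rate_a = selection_rate(group_a_scores)  # 4/6, about 0.67
rate_b = selection_rate(group_b_scores)  # 1/6, about 0.17
print(rate_a, rate_b, rate_b / rate_a)   # disparate impact ratio, about 0.25
```

Small shifts in the score distribution become large gaps in outcomes once decisions are hardened, which is one reason large test sets and scenario-based testing are needed.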
There will always be gaps in AI models, yet it's important to maintain accountability for them and correct them. And fortunately, when teams detect potential biases with a base model that is built and performs sufficiently, existing methods can be used to de-bias the data. Ideally, models shouldn't run without a proper continuous feedback loop in which predicted outputs are reused to train new versions. When working with diverse teams, data, and algorithms, building feedback-aware AI can reduce the innate gaps where bias can sneak in; yet without diversity of inputs, AI models will simply relearn their biases.
If individuals and teams are cognizant of the existence of bias, then they have the necessary tools at the data, algorithm, and human levels to build a more responsible AI. The best solution is to be aware that these biases exist and maintain safety nets to address them for each project and model deployment. What tools or approaches do you use to create algorithm fairness in your industry? And most importantly, how do you define the purpose behind each model?
Akshay Sharma is executive vice president of artificial intelligence at digital health company Sharecare.
AI is about to shake up music forever but not in the way you think – BBC Science Focus Magazine
Posted: at 12:44 pm
Take a hike, Bieber. Step aside, Gaga. And watch out, Sheeran. Artificial intelligence is here, and it's coming for your jobs.
That's, at least, what you might think after considering the ever-growing sophistication of AI-generated music.
While the concept of machine-composed music has been around since the 1800s (computing pioneer Ada Lovelace was one of the first to write about the topic), the fantasy has become reality in the past decade, with musicians such as Francois Pachet creating entire albums co-written by AI.
Some have even used AI to create new music from the likes of Amy Winehouse, Mozart and Nirvana, feeding their back catalogue into a neural network.
Even stranger, this July, countries across the world will compete in the second annual AI Song Contest, a Eurovision-style competition in which all songs must be created with the help of artificial intelligence. (In case you're wondering, the UK scooped more than nul points in 2020, finishing in a respectable 6th place.)
But will this technology ever truly become mainstream? Will artificial intelligence, as artist Grimes fears, soon make musicians obsolete?
To answer these questions and more, we sat down with Prof Nick Bryan-Kinns, director of the Media and Arts Technology Centre at Queen Mary University of London. Below he explains how AI music is composed, why this technology won't crush human creativity, and how robots could soon become part of live performances.
Music AIs use neural networks, really large collections of simple computing units that try to mimic how the brain works. And you can basically throw lots of music at a neural network and it learns patterns, just like the human brain does by repeatedly being shown things.
What's tricky about today's neural networks is that they're getting bigger and bigger, and it's becoming harder and harder for humans to understand what they're actually doing.
We're getting to a point now where we have these essentially black boxes that we put music into and nice new music comes out. But we don't really understand the details of what it's doing.
These neural networks also consume a lot of energy. If you're trying to train an AI to analyse the last 20 years of pop music, for instance, you're chucking all that data in there and then using a lot of electricity to do the analysis and to generate a new song. At some point, we're going to have to question whether the environmental impact is worth this new music.
I'm a sceptic on this. A computer may be able to make hundreds of tracks easily, but there is still likely a human selecting which ones they think are nice or enjoyable.
There's a little bit of smoke and mirrors going on with AI music at the moment. You can throw Amy Winehouse's back catalogue into an AI and a load of music will come out. But somebody has to go and edit that. They have to decide which parts they like and which parts the AI needs to work on a bit more.
The problem is that we're trying to train the AI to make music that we like, but we're not allowing it to make music that it likes. Maybe the computer likes a different kind of music than we do. Maybe the future would just be all the AIs listening to music together without humans.
I'm also kind of a sceptic on that one as well. AI can generate lyrics that are interesting and have an interesting narrative flow. But lyrics for songs are typically based on people's life experiences, what's happened to them. People write about falling in love, things that have gone wrong in their life, or something like watching the sunrise in the morning. AIs don't do that.
I'm a little bit sceptical that an AI would have that life experience to be able to communicate something meaningful to people.
This is where I think the big shift will be: mash-ups between different kinds of musical styles. There's research at the moment that takes the content of one kind of music and puts it in the style of another kind of music, exploring maybe three or four different genres at once.
While it's difficult to try these mash-ups in a studio with real musicians, an AI can easily try a million different combinations of genres.
People say this with every introduction of new technology into music. With the invention of the gramophone, for example, everybody was worried, saying it would be terrible and the end of music. But of course, it wasn't. It was just a different way of consuming music.
AI might allow more people to make music, because it's now much easier to make a professional-sounding single using just your phone than it was 10 or 20 years ago.
A woman interacts with an AI music conductor during the 2020 Internet Conference in Wuzhen, Zhejiang Province of China. Getty
At the moment, AI is like a tool. But in the near future, it could be more of a co-creator. Maybe it could help you out by suggesting some basslines, or give you some ideas for different lyrics that you might want to use based on the genres that you like.
I think the co-creation between the AI and the human as equal creative partners will be the really valuable part of this.
AI can create a pretty convincing human voice simulation these days. But the real question is why you would want it to sound like a human anyway. Why shouldn't the AI sound like an AI, whatever that is? That's what's really interesting to me.
I think we're way too fixated on getting the machines to sound like humans. It would be much more interesting to explore how it would make its own voice if it had the choice.
I love musical robots. A robot that can play music has been a dream for so many for over a century. And in the last maybe five or 10 years, it's really started to come together, where you've got AI that can respond in real-time and you've got robots that can actually move in very human and emotional ways.
The fun thing is not just the music that they're making, but the gestures that go with the music. They can nod their heads or tap their feet to the beat. People are now building robots that you can play with in real-time in a sort of band-like situation.
What's really interesting to me is that this combination of technology has come together where we can really feel like it's a real living thing that we're playing music with.
Yeah, for sure. I think that'd be great! It will be interesting to see what an audience makes of it. At the moment it's quite fun to play as a musician with a robot. But is it really fun watching robots perform? Maybe it is. Just look at Daft Punk!
Nick Bryan-Kinns is director of the Media and Arts Technology Centre at Queen Mary University of London, and professor of Interaction Design. He is also a co-investigator at the UKRI Centre for Doctoral Training in AI for Music, and a senior member of the Association for Computing Machinery.
AI can now convincingly mimic cybersecurity experts and medical researchers – The Next Web
Posted: at 12:44 pm
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation, flagged and unflagged, has been aimed at the general public. Imagine the possibility of misinformation (information that is false or misleading) in scientific and technical fields like cybersecurity, public safety and medicine.
There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it's possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.
To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there's too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.
Transformers have aided Google and other technology companies by improving their search engines, and they have helped the general public with such common problems as writer's block. Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying humanlike capabilities in generating text.
Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.
Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.
We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
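The seed-and-generate workflow described above can be illustrated with a toy stand-in. The study fine-tuned GPT-2; the sketch below substitutes a tiny bigram Markov chain trained on invented threat-description text, but the pattern is the same: seed the model with the opening of a real-looking sample and let it produce the rest.

```python
import random

# Toy stand-in for the study's workflow: the "threat descriptions" in
# this corpus are invented for illustration, not real intelligence.
corpus = (
    "attacker exploits weak password to gain access . "
    "attacker exploits unpatched server to gain access . "
    "attacker uses phishing email to steal credentials ."
).split()

# Learn bigram transitions: each word maps to the words observed after it.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(seed, length=8, rng_seed=0):
    # Continue the seed one word at a time by sampling from the learned
    # transitions, mirroring how a language model extends a prompt.
    rng = random.Random(rng_seed)
    words = seed.split()
    while len(words) < length and words[-1] in transitions:
        words.append(rng.choice(transitions[words[-1]]))
    return " ".join(words)

completion = generate("attacker exploits")  # seed with a real-looking opening
print(completion)
```

A real transformer produces far more fluent continuations, which is exactly why the generated descriptions were plausible enough to fool threat hunters.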
We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.
An example of AI-generated cybersecurity misinformation. The Conversation, CC BY-ND
This misleading piece of information contains incorrect information concerning cyberattacks on airlines with sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst acts on the fake information in a real-world scenario, the airline in question could have faced a serious attack that exploits a real, unaddressed vulnerability.
A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medRxiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.
An example of AI-generated health care misinformation. The Conversation, CC BY-ND
The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.
Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders such industries as health care and cybersecurity in adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.
We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.
Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
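The cross-correlation idea can be sketched simply (the source names and claims below are invented for illustration): collect claims from several feeds and flag any claim that too few independent sources report.

```python
# Sketch of corroboration checking: flag claims reported by fewer than
# two independent sources. Source names and claims are invented.
reports = {
    "source_a": {"CVE-2021-0001 exploited in the wild", "patch released"},
    "source_b": {"CVE-2021-0001 exploited in the wild"},
    "source_c": {"patch released", "airline systems compromised"},
}

def unsupported_claims(reports, min_sources=2):
    # Count how many sources carry each claim, then return the claims
    # that fall below the corroboration threshold.
    counts = {}
    for claims in reports.values():
        for claim in claims:
            counts[claim] = counts.get(claim, 0) + 1
    return sorted(claim for claim, n in counts.items() if n < min_sources)

print(unsupported_claims(reports))  # ['airline systems compromised']
```

A production system would also need to establish that the sources are genuinely independent, since generated misinformation can be syndicated across many outlets.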
Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people's credulity, especially if the information is not from reputable news sources or published scientific work.
This article by Priyanka Ranade, PhD Student in Computer Science and Electrical Engineering, University of Maryland, Baltimore County; Anupam Joshi, Professor of Computer Science & Electrical Engineering, University of Maryland, Baltimore County; and Tim Finin, Professor of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, is republished from The Conversation under a Creative Commons license. Read the original article.
AI-based loan apps are booming in India, but some borrowers miss out – VentureBeat
Posted: at 12:44 pm
(Reuters) As the founder of a consumer rights non-profit in India, Karnav Shah is used to seeing sharp practices and disgruntled customers. But even he has been surprised by the sheer volume of complaints against digital lenders in recent years.
While most of the grievances are about unauthorised lending platforms misusing borrowers' data or harassing them for missed payments, others relate to high interest rates or loan requests that were rejected without explanation, Shah said.
"These are not like traditional banks, where you can talk to the manager or file a complaint with the head office. There is no transparency, and no one to ask for remedy," said Shah, founder of JivanamAsteya.
"It is hurting young people starting off in their lives: a loan being rejected can result in a low credit score, which will adversely affect bigger financial events later on," he told the Thomson Reuters Foundation.
Hundreds of mobile lending apps have mushroomed in India as smartphone use surged and the government encouraged digitization in banking, with financial technology (fintech) firms rushing to fill the gap in access to loans.
Unsecured loan apps, which promise quick loans even to those without a credit history or collateral, have been criticized for high lending rates, short repayment terms, as well as aggressive recovery methods and misuse of customer data.
At the same time, their use of algorithms to gauge the creditworthiness of first-time borrowers disproportionately excludes women and other traditionally marginalized groups, analysts say.
"Credit scoring systems were intended to reduce the subjectivity in loan approvals by decreasing the role of a loan officer's discretion in lending decisions," said Shehnaz Ahmed, fintech lead at the Vidhi Centre for Legal Policy in Delhi.
"However, since alternative credit scoring systems employ thousands of data points and complex models, they could potentially be used to mask discriminatory policies and may also perpetuate existing forms of discrimination," she said.
Globally, about 1.7 billion people do not have a bank account, leaving them vulnerable to loan sharks and at risk of being excluded from vital government and welfare benefits, which are increasingly disbursed by electronic means.
Nearly 80% of Indians now have a bank account, partly as a result of the government's financial inclusion policies, but young people and the poor often lack the formal credit histories that lenders use to gauge an applicant's creditworthiness.
Almost a quarter of loan enquiries every month are from people with no credit history, according to TransUnion CIBIL, a company that generates credit scores.
Authorities have backed the use of AI for creating credit scores for so-called new to credit consumers, who account for about 60% of motorbike loans and more than a third of mortgages.
Algorithms help assess the creditworthiness of first-time borrowers by scanning their social media footprint, digital payments data, number of contacts and calling patterns.
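To make the mechanism concrete, here is a deliberately simplified toy model of an alternative credit score. The features, weights, and scaling are invented for illustration and do not reflect any real lender's algorithm; the example also shows how a proxy like contact-list size can systematically favor one group over another.

```python
# Toy illustration (invented weights, not any lender's real model) of an
# alternative credit score built from behavioral proxies like those described.

def alt_credit_score(features: dict) -> float:
    """Weighted sum of behavioral proxies, scaled to a 300-900 range."""
    weights = {
        "monthly_digital_payments": 0.4,   # count of digital transactions
        "phone_contacts": 0.3,             # size of contact list
        "social_media_accounts": 0.2,
        "app_sessions_per_week": 0.1,
    }
    # Normalize each feature to 0-1 against an assumed population maximum.
    maxima = {"monthly_digital_payments": 100, "phone_contacts": 500,
              "social_media_accounts": 5, "app_sessions_per_week": 50}
    raw = sum(weights[k] * min(features.get(k, 0) / maxima[k], 1.0) for k in weights)
    return 300 + raw * 600

# A proxy like contact-list size can encode social-mobility gaps:
# two otherwise identical applicants, differing only in number of contacts.
applicant_a = {"monthly_digital_payments": 60, "phone_contacts": 400,
               "social_media_accounts": 3, "app_sessions_per_week": 20}
applicant_b = {"monthly_digital_payments": 60, "phone_contacts": 80,
               "social_media_accounts": 3, "app_sessions_per_week": 20}
print(alt_credit_score(applicant_a) > alt_credit_score(applicant_b))  # True
```

Because the weights and the choice of proxies are opaque to the borrower, a rejection driven by a feature like contact count is indistinguishable from one driven by repayment risk, which is exactly the transparency problem critics raise.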
TransUnion CIBIL recently launched an algorithm that has mapped the credit data of similar subjects that do have a credit history and whose information is comparable, said Harshala Chandorkar, the firm's chief operating officer.
Women made up about 28% of retail borrowers in India last year, up three percentage points from 2014, and have a slightly higher average CIBIL score than men, she said, without answering a question about the risk of discrimination from algorithms.
CreditVidya, a credit information firm, uses an artificial intelligence (AI)-based algorithm that taps over 10,000 data points to calculate its scores.
"A clear, unambiguous consent screen that articulates what data is collected and the purpose for which it will be used is displayed to the user to take his or her consent," it said.
EarlySalary, which says its mobile lending app has garnered more than 10 million downloads, uses an algorithm that collects text and browsing history, and information from social media platforms including Facebook and LinkedIn.
People who do not have a substantial social media presence could be at a disadvantage from such techniques, said Ahmed, adding that many online lending platforms provide little information on how they rate creditworthiness.
"There is always an element of subjectivity in determining creditworthiness. However, this is heightened in the case of alternative credit scoring models that rely on several data points for assessing creditworthiness," she said.
Personal lending apps in India, which are mainly intermediaries connecting borrowers with lending institutions, currently fall in a regulatory gray zone.
A long-delayed Personal Data Protection Bill under discussion by lawmakers would set conditions for the collection and storage of personal data, and penalties for misuse of such data.
"Authorized lending platforms are advised to engage in data capture with the informed consent of the customer, and publish detailed terms and conditions," said Satyam Kumar, a member of lobby group Fintech Association for Consumer Empowerment (FACE).
"Regular audits and internal checks of the lending process are done to ensure no discrimination on the basis of gender or religion is done manually or via machine-based analysis," he said.
Indias central bank has said it will draw up a regulatory framework that supports innovation while ensuring data security, privacy, confidentiality and consumer protection.
That will help boost the value of digital lending to $1 trillion in 2023, according to Boston Consulting Group.
Digital lending will still skew towards historically privileged groups, with credit scoring systems also allocating loans more often to men than women in India, said Tarunima Prabhakar, a research fellow at Carnegie India.
"If an algorithm evaluates credit scores based on the number of contacts on a phone, it would likely find men more creditworthy, as Indian men have greater social mobility than women. So women may face loan rejections or higher interest rates. There is almost no transparency as to how these scores are reached," she said.
Digital lenders justify the secrecy on grounds of competitive advantage, but there needs to be some clarification, including explanations when loans are rejected, she added.
"If these platforms make it easier for men but not women to start small businesses, it might reduce women's agency in an already asymmetric power dynamic," Prabhakar said.
In the absence of strong monitoring and institutions, alternative lending may perpetuate the same arbitrary lending practices of the informal credit markets that it aims to replace.
Read the original here:
AI-based loan apps are booming in India, but some borrowers miss out - VentureBeat
Here's How AI Can Determine The Taste Of Coffee Beans – Forbes
Posted: at 12:44 pm
Coffee cups are pictured in the tasting area of the Vanibel cocoa and vanilla production facility, a former 18th-century sugar refinery in Vieux-Habitants, Guadeloupe, on April 9, 2018. (Photo credit: HELENE VALENZUELA/AFP via Getty Images)
Artificial intelligence (AI) is predicted to reach $126 billion by 2025. It is showing up in every industry, from healthcare and agriculture to education, finance and shipping. And now, AI has made a move to the food industry to discover and develop new flavors in food and drink.
In 2018, Danish brewer Carlsberg used AI to map and predict flavors from yeast and other ingredients in beer. IBM developed an AI for McCormick to create better spices. And NotCo, which produces vegan NotMilk, uses AI to analyze molecular structures and find new combinations of plant-based ingredients.
A Colombian startup, Demetria, has raised $3 million to date and is betting on its sensory digital fingerprint, using AI to match a coffee bean's profile to the industry's standard coffee flavor wheel created in 1995.
The new company says that this new sensory digital fingerprint for coffee beans will let roasters and producers assess quality and taste at any stage of the coffee production process.
Felipe Ayerbe, CEO of Demetria, says the "wine-a-fication" of coffee is here to stay.
"Coffee drinkers today have been exposed and are more aware of the taste and general experience they are looking for and are willing to pay more for that experience," said Ayerbe. "That is why you see an ever-growing array of possibilities and choices for something that is supposedly a commodity - different prices, origins, roasts, blends, flavor characteristics, preparations just like wine."
Ayerbe notes that coffee is still considered a tradable commodity, but the experience that consumers get is anything but a commodity. "In the last 20 years [..], coffee has undergone a voyage of premiumization where the most important variable is sensory quality - taste," adds Ayerbe.
Ayerbe says this revolution in specialty coffees was spurred in part by industry pioneers like Starbucks and Nespresso that upgraded the world's taste in coffee.
"With this sensory digital fingerprint, we are upgrading the industry from analog to digital by allowing the full value chain to [..] measure and manage the most important variable in the industry - taste," said Ayerbe. "We envision that farmers will for the first time not only be able to understand the quality of what they are selling but also manage their farming practices to optimize their quality, creating an unparalleled level of empowerment for them."
The sensor technology that Demetria uses has existed for 40 years.
"In the past several years, sensors have become miniaturized, more affordable and can connect to the cloud," said Ayerbe. "This allows for the collection, storage and analysis of huge amounts of data."
Ayerbe says the company uses handheld near-infrared (NIR) sensors to read the spectral fingerprint of green coffee beans. This is because the different colors and wavelengths of the light spectrum react differently to each organic compound present in the coffee, representing the whole chemical composition of the beans.
"We then needed AI to translate the NIR data into the sensory language the industry understands," said Ayerbe. "And, until now, the taste or sensory quality of coffee beans has been determined by cupping, a manual, time-consuming process carried out by the industry's certified tasting experts, measured according to the industry's standard coffee tasting wheel," said Ayerbe.
With all the data gathered from the NIR readings and the cupping data, Demetria calibrated the AI to match a specific spectral fingerprint to an unmistakable taste profile.
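Conceptually, that calibration can be pictured as matching a new bean's spectral fingerprint to the nearest reference bean that human cuppers have already scored. The fingerprints, taste profiles, and nearest-neighbor matching below are all invented for illustration; Demetria's actual models are not public.

```python
# Conceptual sketch (invented data, not Demetria's model) of calibrating
# spectral readings to taste: match a new bean's NIR fingerprint to the
# spectrally closest reference bean that human cuppers have scored.

import math

# Reference library: NIR fingerprint (a few invented wavelength intensities)
# paired with the cupping profile assigned by certified tasters.
reference = {
    (0.82, 0.41, 0.67): {"body": 8.0, "sweetness": "caramel", "aftertaste": 7.5},
    (0.55, 0.72, 0.30): {"body": 6.5, "sweetness": "chocolate", "aftertaste": 8.0},
    (0.30, 0.20, 0.90): {"body": 7.0, "sweetness": "brown sugar", "aftertaste": 6.0},
}

def predict_profile(fingerprint):
    """Return the cupping profile of the spectrally closest reference bean."""
    closest = min(reference, key=lambda ref: math.dist(ref, fingerprint))
    return reference[closest]

print(predict_profile((0.80, 0.45, 0.65))["sweetness"])  # caramel
```

A production system would use thousands of wavelengths per scan and a trained regression model rather than a lookup, but the core idea is the same: cupping data anchors the spectral readings to the industry's shared flavor vocabulary.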
Ayerbe says that the biggest hurdle the team had to overcome was determining subjective taste identifiers like body, balance and aftertaste that were in line with the standardized coffee taster's flavor wheel.
"These taste identifiers had to be determined holistically, rather than individually, to establish the true overall flavor profile of the coffee bean," said Ayerbe. "Also, coffee beans from the same sample are not homogenous, so for a single 300-500 gram sample of beans, multiple scans were required to gather enough data to represent an overall, unanimous flavor profile."
Demetria made thousands of scans that considered the slightest difference between each reading from a wide range of different coffee beans, and the result was a model with a sensorial digital fingerprint unique to Demetria.
Armed with the unique sensory digital fingerprint, Demetria built a matching model for a distinct flavor profile for Carcafe by training its AI to use spectral readings and correlate them with the cupping analyses from hundreds of coffee samples.
"The AI for the high-value profile required by Carcafe was gathered from multiple hundreds of samples of green coffee beans and measured against q-graders (cuppers) who sent over data on their cupping scores," said Ayerbe. "By compiling the big data cupping analysis, we were able to match the sensory fingerprint to this unique Colombian coffee."
Ayerbe said that after four months of processing the coffee samples, the company created a viable product for Carcafe where the AI was continuously retrained with new samples.
"The biggest technical challenge was training the AI to detect nuances in taste like a cupper can detect, rather than just clear profiles," said Ayerbe. "Clear profiles need to exist in the database, but the whole gamut of nuances need to be programmed too."
For the Carcafe profile that Demetria was matching, it needed to determine a particular sweetness; for example, chocolate is different from caramel which is different from brown sugar sweetness.
"So in the second iteration, we had to define the true taste and ensure that the AI could be specific enough to determine between these very similar but different types of sweetness," said Ayerbe. "To be able to pinpoint the producers that can match this profile exactly and give the farmers the tools to be able to grow that crop again consistently brings a new level of efficiency and transparency to Carcafe and their customers."
Ayerbe believes this process removes many variables and unknowns that currently exist in the coffee supply chain.
"If you can control more of the process, you will end up with less defective coffee, which will increase overall availability," adds Ayerbe. "It's also important to note that cuppers play a vital role in the training of the model, and this technology is in no way meant to replace their position in the industry."
Ayerbe says the problem with cupping is that it's a [..] scarce resource. "We are expanding the ability to assess sensory quality ubiquitously, along the whole value chain, and this especially is applicable at the producer level where cupping doesn't currently exist."
Ayerbe adds that their technology facilitates better use of the cuppers' time and allows traders and roasters to be more efficient in understanding who is producing the taste, type and quality of coffee they seek.
Continued here:
Here's How AI Can Determine The Taste Of Coffee Beans - Forbes
Transform 2021 puts the spotlight on women in AI – VentureBeat
Posted: at 12:44 pm
VentureBeat is proud to bring back the Women in AI Breakfast and Awards online for Transform 2021. In the male-dominated tech industry, women constantly face a gender equity gap, and there is much work to be done to bridge that gap while building a diverse community.
VentureBeat is committed, year after year, to emphasizing the importance of women leaders by giving them a platform to share their stories and the obstacles they face in their male-dominated industries. As part of Transform 2021, we are excited to host our annual Women in AI Breakfast, presented by Capital One, and recognize women leaders' accomplishments with our Women in AI Awards.
VentureBeat's third annual Women in AI Breakfast, presented by Capital One, will commemorate women leading the AI industry. Join the digital networking session and panel on July 12 at 7:35 a.m. Pacific.
This digital breakfast includes networking and a discussion on the topic "Women in AI: a seat at the table." Our panelists will explore how we can get more women into the AI workforce, plus the roles and responsibilities of corporates, academia, governments, and society as a whole in achieving this goal.
Featured speakers include Kay Firth-Butterfield, Head of AI and Machine Learning and Member of the Executive Committee, World Economic Forum; Kathy Baxter, Principal Architect, Ethical AI Practice, Salesforce; Tiffany Deng, Program Management Lead, ML Fairness and Responsible AI, Google; and Teuta Mercado, Responsible AI Program Director, Capital One. Registration for Transform 2021 is required for attendance.
Once again, VentureBeat will be honoring extraordinary women leaders at the Women in AI Awards. The five categories this year include Responsibility & Ethics of AI, AI Entrepreneur, AI Research, AI Mentorship, and Rising Star.
Submit your nominations by July 9th at 5 p.m. Pacific. Learn more about the nomination process here.
The winners of the 2021 Women in AI Awards will be announced at VB Transform on July 16th, alongside the AI Innovation Awards. Register for Transform 2021 to join online.
See the rest here:
Transform 2021 puts the spotlight on women in AI - VentureBeat
The Future of AI in 2021 – Analytics Insight
Posted: at 12:44 pm
AI is a large part of our world already, affecting online search results and the way we shop. Interest in AI has attracted long-term investment across several industries, particularly in customer service, medical diagnostics and self-driving vehicles. The increased data available through research has created better algorithms, which have enabled more complex AI systems that improve a user's experience with search engines and online translation tools. It also means that businesses can make far more focused sales and marketing drives, and that financial markets have virtual assistants able to deal with more than the simplest of requests.
AI system improvements will involve the processing of massive amounts of data which needs improved computing power and better algorithms and tools. Using cryptography and blockchain has made it easier to build these advances since they can publicly share data whilst keeping company information confidential.
When it comes to broader cybersecurity, AI is critical in identifying and predicting cybersecurity threats. This is particularly true for online casinos like Starspins, where real money is involved and people's bank accounts need to be protected. Security is already good, but AI will take casino security to a level where it will be rare to hear of any security breaches on online gaming websites and apps.
It is not just Tesla that is focusing on autonomous driving. Current semi-autonomous vehicles still require drivers but improved technology is bringing forward the date of the first fully automated drive. There will almost certainly be delays before self-driving cars are seen on the roads because of thorny issues around liability in cases of accidents. Despite this, about 15 percent of vehicles sold in 2030 are forecast to be fully autonomous.
Another use of AI where there is continuing research and development is conversational agents, most commonly known as chatbots. These are used most often in customer service and call centres. Chatbots are limited now, but the future of AI in 2021 will see improvements in how customer tasks are managed. Customer-facing AI assistants act as the first port of call for customer queries, fielding simple questions and forwarding complex problems to live agents. Agent-supporting AI assistants are used by live customer service agents to support them while they interact with customers, which improves productivity. A lot of this will be supported by advances in natural language understanding, in a move away from today's narrow sets of responses that provide simple answers to simple questions.
Whilst it is exciting to see how far AI could go, AI models are complex and much work still needs to be done to make them more efficient. Fortunately, those that use AI do not have to understand the technology, thanks to Explainable AI (XAI). In the medical field, this means that a diagnosis made by an AI model will provide a doctor with the analysis behind the diagnosis, information that can also be understood by the patient.
From marketing to fulfillment and distribution, AI will continue to play a central role in e-commerce for every business sector. Currently, it is difficult to integrate different AI models, but collaborative research among tech giants like Microsoft, Amazon and others is leading to the construction of the Open Neural Network Exchange (ONNX) to allow integration. This is forecast to be a foundation of future AI, enabling more complex chatbot communications and other personalized shopping advances, as well as targeted image-based advertising and warehouse and inventory automation.
Today, around 75 percent of job applications are rejected by an automated applicant tracking system powered by AI before they are seen by a human being. Job seekers can use the same AI technology to scan their application and compare it with a job description. Job seekers will receive suggested changes to the application to make it a better match to pass the applicant tracking system process.
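A minimal sketch of how such a comparison tool might work, assuming a simple keyword-overlap measure; the stop-word list and example texts are invented here, and real ATS software and resume scanners are far more sophisticated:

```python
# Hedged sketch of a job seeker's tool that compares an application against a
# job description, in the spirit of the ATS keyword screening described above.

def keyword_match(application: str, job_description: str):
    """Report coverage and which job-description keywords the application lacks."""
    stop = {"the", "a", "an", "and", "or", "of", "in", "to", "with", "for"}
    tokenize = lambda text: {w.lower().strip(".,") for w in text.split()} - stop
    required = tokenize(job_description)
    present = tokenize(application)
    missing = sorted(required - present)
    coverage = 1 - len(missing) / len(required)
    return coverage, missing

jd = "Python developer with SQL and cloud experience"
app = "Experienced Python developer, strong SQL skills"
coverage, missing = keyword_match(app, jd)
print(round(coverage, 2), missing)  # 0.6 ['cloud', 'experience']
```

The suggested changes a job seeker receives are essentially the `missing` list: terms from the posting that the tracking system would not find in the application.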
One of the biggest computing challenges is the amount of energy needed to solve more complex problems. In cases of speech recognition, AI-enabled chips within high-performance CPUs are needed to improve efficiency. The electricity bill for one supercharged language model AI was estimated at US$4.6 million.
Extended Reality (XR) will add touch, taste and smell to the Virtual Reality or Augmented Reality worlds to provide an enhanced immersive experience. Volkswagen is already using XR tools so customers can experience their cars in 3D.
Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.
Comments Off on The Future of AI in 2021 – Analytics Insight
AutoBrains Revolutionary AI Central To Leading Supplier’s ADAS and AV Growth Strategy – PRNewswire
Posted: at 12:44 pm
AutoBrains' revolutionary unsupervised AI technology is at the center of Continental's growth strategy in ADAS and AV.
Key to the disruptive technology's advantages over traditional deep learning is its massively reduced reliance on expensive and often error-prone manually labelled training data sets. The unsupervised AI system successfully interprets and navigates unusual driving scenarios and edge cases where traditional supervised learning systems are least reliable. This increases driving safety and helps accelerate the adoption of ADAS and vehicles capable of higher levels of autonomy. Reduced reliance on stored data also means AutoBrains' system requires roughly ten times less computing power than currently available systems and can be produced at lower cost, increasing accessibility of ADAS across market segments at a time when regulations are requiring more driver assistance capabilities for passenger and commercial vehicles.
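The contrast with supervised learning can be illustrated with the simplest unsupervised algorithm, k-means clustering, which groups scenario feature vectors without any manual labels. The toy data and features below are invented for illustration and are not AutoBrains' method, which is proprietary:

```python
# Conceptual contrast (invented toy data, not AutoBrains' system): unsupervised
# learning discovers structure in driving-scenario features with no labels.

def kmeans(points, k, iters=20):
    """Minimal k-means: cluster feature vectors using no labeled training data."""
    centroids = points[:k]  # naive initialization from the first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid (squared distance).
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its cluster.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Toy scenario features: (relative speed, obstacle density), no labels attached.
scenarios = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.8), (0.85, 0.9), (0.15, 0.05), (0.95, 0.85)]
centroids, clusters = kmeans(scenarios, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: two groups emerge without labels
```

Supervised ADAS pipelines would need each of those scenarios hand-labeled before training; the unsupervised approach finds the "calm" and "busy" scenario groups from the data alone, which is the labeling-cost advantage the article describes.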
"We are thrilled to partner with Continental to bring our revolutionary technology to the market," said Igal Raichelgauz, CEO of AutoBrains. "Unsupervised AI is backed by more than 200 patents, over a decade of research and development, and a nearly 2-year incubation period with Continental, and we are excited to take the next step with Conti as our key partner."
Frank Petznick, Head of the Driver Assistance Systems Business Unit at Continental added, "We are excited to be partnering with AutoBrains to bring to market its advanced and proven AI technology that we believe will disrupt the ADAS and AV marketplace. Historically, AV and ADAS technologies have been limited by their dependence on supervised learning that uses massive labelled training data sets and requires enormous compute power. AutoBrains' AI breaks through those barriers with a different approach that processes relevant signals from the car's environment in much the same way that human drivers do. This technology boosts performance while saving compute power and energy. With AutoBrains we intend to push rapidly ahead toward a safer and increasingly autonomous driving experience."
AutoBrains emerged out of AI tech company Cortica after identifying the potential of its unsupervised learning technology to vastly improve AI for automotive. Following a period with Continental's ADAS business unit and co-pace, "The Startup Program of Continental," AutoBrains was formally spun off in 2019 to focus exclusively on building unsupervised AI for autos.
"We are excited to see that combining the strength of AutoBrains' AI technology and Continental's ADAS system know-how has led to such a high-performance system and profound partnership," said Jürgen Bilo, Managing Director of co-pace, The Startup Program of Continental.
PR and Media Contact:Ben Williams, Associate Director, Breakwater Strategy, [emailprotected], +1-(508)-330-5321
SOURCE AutoBrains
Read the original post:
AutoBrains Revolutionary AI Central To Leading Supplier's ADAS and AV Growth Strategy - PRNewswire
How wearable AI could help you recover from covid – MIT Technology Review
Posted: at 12:44 pm
The Illinois program gives people recovering from covid-19 a take-home kit that includes a pulse oximeter, a disposable Bluetooth-enabled sensor patch, and a paired smartphone. The software takes data from the wearable patch and uses machine learning to develop a profile of each person's vital signs. The monitoring system alerts clinicians remotely when a patient's vitals, such as heart rate, shift away from their usual levels.
Typically, patients recovering from covid might get sent home with a pulse oximeter. PhysIQ's developers say their system is much more sensitive because it uses AI to understand each patient's body, and its creators claim it is much more likely to anticipate important changes.
"It's an enormous benefit," says Terry Vanden Hoek, the chief medical officer and head of emergency medicine at University of Illinois Health, which is hosting the pilot. Working with covid cases is hard, he says: "When you work in the emergency department it's sad to see patients who waited too long to come in for help. They would require intensive care on a ventilator. You couldn't help but ask, 'If we could have warned them four days before, could we have prevented all this?'"
Like Angela Mitchell, most of the study participants are African-American. Another large group are Latino. Many are also living with risk factors such as diabetes, obesity, hypertension, or lung conditions that can complicate covid-19 recovery. Mitchell, for example, has diabetes, hypertension, and asthma.
African-American and Latino communities have been hardest hit by the pandemic in Chicago and across the country. Many are essential workers or live in high-density, multigenerational housing.
For example, there are 11 people in Mitchell's house, including her husband, three daughters, and six grandchildren. "I do everything with my family. We even share covid-19 together!" she says with a laugh. Two of her daughters tested positive in March 2020, followed by her husband, before Mitchell herself.
Although African-Americans are only 30% of Chicago's population, they made up about 70% of the city's earliest covid-19 cases. That percentage has declined, but African-Americans recovering from covid-19 still die at rates two to three times those for whites, and vaccination drives have been less successful at reaching this community. The PhysIQ system could help improve survival rates, the study's researchers say, by sending patients to the ER before it's too late, just as it did with Mitchell.
PhysIQ founder Gary Conkright has previous experience with remote monitoring, but not in people. In the mid-1990s, he developed an early artificial-intelligence startup called Smart Signal with the University of Chicago. The company used machine learning to remotely monitor the performance of equipment in jet engines and nuclear power plants.
"Our technology is very good at detecting subtle changes that are the earliest predictors of a problem," says Conkright. "We detected problems in jet engines before GE, Pratt & Whitney, and Rolls-Royce because we developed a personalized model for each engine."
Smart Signal was acquired by General Electric, but Conkright retained the right to apply the algorithm to the human body. At that time, his mother was experiencing COPD and was rushed to intensive care several times, he said. The entrepreneur wondered if he could remotely monitor her recovery by adapting his existing AI system. The result: PhysIQ and the algorithms now used to monitor people with heart disease, COPD, and covid-19.
Its power, Conkright says, lies in its ability to create a unique baseline for each patient, a snapshot of that person's norm, and then detect exceedingly small changes that might cause concern.
The algorithms need only about 36 hours to create a profile for each person.
"The system gets to know how you are looking in your everyday life," says Vanden Hoek. "You may be breathing faster, your activity level is falling, or your heart rate is different than the baseline. The advanced practice provider can look at those alerts and decide to call that person to check in." If there are concerns, such as potential heart or respiratory failure, he says, patients can be referred to a physician or even urgent care or the emergency department.
In the pilot, clinicians monitor the data streams around the clock. The system alerts medical staff when a participant's condition changes even slightly, for example, if their heart rate differs from what it normally is at that time of day.
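A simplified sketch of the personalized-baseline idea: learn a patient's own normal range from their first readings, then flag deviations from it. The hourly sampling, window size, and three-standard-deviation rule are invented here; PhysIQ's actual algorithm is proprietary.

```python
# Simplified sketch (invented thresholds, not PhysIQ's algorithm) of a
# personalized vital-sign baseline with deviation alerts.

import statistics

class VitalBaseline:
    """Learns a patient's own normal range, then flags unusual readings."""

    def __init__(self, window: int = 36):
        self.readings: list[float] = []
        self.window = window  # e.g. roughly 36 hours of hourly samples

    def add(self, value: float) -> bool:
        """Record a reading; return True if it deviates from this patient's norm."""
        alert = False
        if len(self.readings) >= self.window:
            mean = statistics.mean(self.readings)
            sd = statistics.stdev(self.readings)  # needs at least 2 readings
            if sd > 0 and abs(value - mean) > 3 * sd:
                alert = True
        self.readings.append(value)
        self.readings = self.readings[-self.window:]  # keep a rolling window
        return alert

monitor = VitalBaseline(window=5)
for hr in [72, 74, 71, 73, 72]:      # establishing the personal baseline
    monitor.add(hr)
print(monitor.add(73))   # False: within this patient's norm
print(monitor.add(110))  # True: large deviation triggers an alert
```

The key property is that the threshold is relative to each patient's own history rather than a population-wide cutoff, which is what lets the system notice a change that would still look "normal" on a generic chart.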
The rest is here:
How wearable AI could help you recover from covid - MIT Technology Review