Daily Archives: November 9, 2019

Remembering Phillip E. Johnson (1940-2019): The Man Who Lit the Match – Discovery Institute

Posted: November 9, 2019 at 8:44 am

Author's note: With great regret, we recognize the passing of Phillip Johnson, a key guiding spirit of the intelligent design movement. He died peacefully overnight this weekend, at age 79, at his home in Berkeley, California. I am publishing below an essay by Casey Luskin, written in 2011 for the website Darwin on Trial, coinciding with the 20th anniversary of Johnson's crucial book of the same name. He held the title of Program Advisor for Discovery Institute's Center for Science & Culture.

A special regret for me is that I never had the opportunity to meet Johnson, or Phil as he was called by those who knew him. But we in the community that seeks to advance the theory of intelligent design live in his presence every day. And we will continue to do so even following his death. It is that way with great men and great women: they launch a movement, or light a match, an image cited below, and stamp their vision and personality permanently on events, institutions, and persons that follow.

With his vision, Johnson changed the terms of the debate about origins, with brilliance and gusto. In doing so, he changed many lives, of scientists and others, across the globe. But he did it in the spirit of a gentleman: as John Mark Reynolds has written, he suffered fools gladly. Now there is a model to follow! He was very humble, as the greatest men often are, and refused credit for striking the match that became the fire that is currently at work consuming a desiccated theory left over from 19th century materialism. After all, Johnson wrote, "the logs had been piled high, and the tinder gathered. Darwinian naturalists had accumulated a large stock of public discontent." True, but nevertheless it was Johnson's first book that set the fire, and not someone else's. That, with his subsequent contributions and leadership, much of it behind the scenes, makes him the Godfather of Intelligent Design.

We will have much more to say about Phil's legacy and his personality in days and weeks to come. For now, the following comments can't be much improved upon:

Phillip Johnson, law professor emeritus of UC Berkeley's Boalt Hall School of Law, is widely recognized as the godfather of the contemporary intelligent design (ID) movement. As the author of several books and numerous articles explaining the scientific, legal, and cultural dimensions of the debate over ID and Darwinism, Johnson was one of the most prolific authors in the formative years of the movement.

It was Johnson's 1991 book Darwin on Trial that first convinced many thinkers that neo-Darwinian evolution was buttressed more by a philosophy of naturalism than by the scientific evidence. Johnson's influential writing became a magnet for scholars from a variety of fields (biology, chemistry, physics, philosophy, theology, and law) who came together to forge the intelligent design movement.

The stories of many of these scientists and scholars are told in the volume Darwin's Nemesis (InterVarsity Press, 2006). But Johnson too recounted with humble surprise the impact of his work in the 2008 volume Intelligent Design 101:

Fifteen years ago I published a book that I thought might add a few ounces of balance to the debate over Darwin's theory of evolution. The main thrust of that book, Darwin on Trial, was that evolution is propped up more by naturalistic philosophy than by the scientific evidence. Much to my pleasant surprise, this book turned out to be the match that lit the tinder beneath a stockpile of dry logs. This is not to my credit; the logs had been piled high, and the tinder gathered. Darwinian naturalists had accumulated a large stock of public discontent. [p. 23]

Part of Johnson's vision as a legal scholar has been knowing how to ask the right questions. The 1980s was an era of controversy for Biblical creationists. While young earth creationists and old earth creationists squabbled about whether Noah rode a dinosaur or a camel onto the Ark, elite materialists were happy to take over the culture.

With the mind of a law professor, Johnson was a master at spotting issues. And the key issue he saw in the origins debate was not the age of the earth or the differing interpretations of Genesis by Christians. It was a more fundamental question of interest to theists and non-theists alike: Is life the result of blind, undirected natural causes, or is it the result of purposeful design? By focusing on this question, Johnson transformed the entire origins debate. Johnson continues:

Darwin on Trial became a uniting force around which many like-minded individuals (scholars of many stripes, churchgoers, students, and even open-minded agnostics who dared extend their skepticism to Darwin) could rally. For many, that rallying cry ultimately became "Intelligent Design!"

It has been often said that all truth passes through three stages. First it is ignored. Then it is violently opposed. Finally, it is accepted as being self-evident. This seems to be the arc that intelligent design is traversing.

Many were content to ignore Johnson's ideas until they actually started to impact public education. In 1999, members of the Kansas State Board of Education voted to soften the dogmatism that had dominated evolution instruction. Yet Johnson was critical of the 1999 Kansas decision because it removed some aspects of macroevolution from the curriculum. Johnson has always been a proponent of objective education, not censorship. He argued in The Wedge of Truth that students should learn both the evidence for and against Darwinian evolution:

What educators in Kansas and elsewhere should be doing is to teach the controversy. Of course students should learn the orthodox Darwinian theory and the evidence that supports it, but they should also learn why so many are skeptical, and they should hear the skeptical arguments in their strongest form rather than in a caricature intended to make them look as silly as possible.

In 2001, the Ohio State Board followed Johnsons approach and required students to critically analyze the evidence for and against Darwinian evolution. Objective evolution education had won.

It was around this time that Darwin-lobbyists realized that they had better stop ignoring Johnson and start telling the world that unless students are prevented from questioning Darwinism, the sky will fall.

ID critics quickly learned that the most effective way to target ID was not to address its arguments, but to make accusations of secret, sinister motives among proponents. One imagines the godfather Phillip Johnson in a smoky dark room handing wedge documents to his eager followers, charging them to go forth and baptize converts to intelligent design.

On the contrary, with Phillip Johnson, what you see is what you get. As John Mark Reynolds explains in Darwin's Nemesis:

Phillip Johnson is one of those rare individuals who is always the same person. He asks the same hard questions in Sunday School as he does in the Berkeley classroom. He has a unified personality. I have seen him in hundreds of different situations, and there is no split in his soul. [p. 27]

While Johnson wouldn't flatter himself with such praise, he too observes that he has never hidden anything. "I always find these conspiracy theories amusing because our strategy has been transparent from the beginning," writes Johnson. "After all, I titled my fifth book The Wedge of Truth: Splitting the Foundations of Naturalism."

What is more striking is Johnson's gentlemanly responses to critics. "He is not a hater, not even of his enemies," writes John Mark Reynolds. "This is why so many who disagree with him can still respect him... He suffers fools gladly." (Darwin's Nemesis, pp. 26-27)

Ironically, intemperate efforts to attack Johnson often ended up drawing people to him, creating a growing network of scientists and other scholars interested in intelligent design. Biochemist Michael Behe explains how a biased critique of Darwin on Trial in the journal Science led Behe to join the ID movement:

The news item made me so mad that I wrote a letter to the editor of Science, which they published... I wrote that this Johnson fellow appears from his book to be a rather intelligent layman, and that scientists would do much better to address the substance of his arguments than to rely on ad hominem attacks. About a week later I received a letter with a return address of Boalt Hall. I was now in the loop... I was within the circle of Phil Johnson's acquaintances and useful contacts. [Darwin's Nemesis, pp. 44-45]

Behe's story is not unusual for members of the ID movement. Attracted by his intellect, character, and boldness, a new generation of scientists and scholars became connected to each other through Johnson.

Some critics would like to call Johnson the father of ID. In fact, they sometimes claim that Johnson, a non-scientist, invented the term intelligent design as a scheme to get around a 1987 Supreme Court ruling that declared creationism unconstitutional.

Aside from the fact that this story isn't true, it's also grossly anachronistic. ID thinking and arguments date back to the ancient Greeks, and even in its modern form, the term intelligent design was used long before Johnson got involved with the issue, and before any court contemplated creationism.

In this sense, Johnson is not, and cannot be, the father of intelligent design. But the Godfather? Most definitely.

Photo credit: Lit match (top), by Yaoqi LAI via Unsplash; Phillip E. Johnson (below), by Greg Schneider.


Farmers are using AI to spot pests and catch diseases, and many believe it's the future of agriculture – INSIDER

Posted: at 8:42 am

In Leones, Argentina, a drone with a special camera flies low over 150 acres of wheat. It's able to check each stalk, one by one, spotting the beginnings of a fungal infection that could potentially threaten this year's crop.

The flying robot is powered by computer vision: a kind of artificial intelligence being developed by start-ups around the world, and deployed by farmers looking for solutions that will help them grow food on an increasingly unpredictable planet.

Many food producers are struggling to manage threats to their crops, like disease and pests, that are made worse by climate change, monocropping, and widespread pesticide use.

Catching things early is key.

Taranis, a company that works with farms on four continents, flies high-definition cameras above fields to provide "the eyes."

Machine learning, a kind of artificial intelligence that's trained on huge data sets and then learns on its own, is the "brains."

"I think that today, to increase yields in our lots, it's essential to have a technology that allows us to take decisions immediately," said Ernesto Agero, the producer on San Francisco Farm in Argentina.

The algorithm teaches itself to flag something as small as an individual insect, long before humans would usually identify the problem.

Similar technology is at work in Norway's fisheries, where stereoscopic cameras are a new weapon in the battle against sea lice, a pest that plagues farmers to the tune of hundreds of millions of dollars.

The Norwegian government is considering making this technology, developed by a start-up called Aquabyte, a standard tool for farms across the country.

Farmers annotated images to create the initial data set. Over time, the algorithm has continued to sharpen its skills with the goal of finding every individual louse.
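
The article doesn't show Taranis's or Aquabyte's actual code, but the workflow it describes (camera image in, flagged detections out) maps onto a standard computer-vision pipeline. Below is a minimal, hypothetical sketch using an off-the-shelf torchvision detection model; the file path, threshold and class labels are placeholders, and a production system would be fine-tuned on the kind of farmer-annotated imagery described above rather than generic pretrained classes.

```python
# Hypothetical sketch: run a pretrained object detector over a field image
# and keep only high-confidence detections. A real pest-detection system
# would be trained on annotated crop or fish imagery, as described above.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()  # inference mode

image = Image.open("field_photo.jpg").convert("RGB")   # placeholder path
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    predictions = model([tensor])[0]   # dict with 'boxes', 'labels', 'scores'

CONFIDENCE_THRESHOLD = 0.8
for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score.item() >= CONFIDENCE_THRESHOLD:
        # In practice the labels would come from a crop-pest class list,
        # not the generic categories this pretrained model knows.
        print(f"Flagged object {label.item()} at {box.tolist()} "
              f"(confidence {score.item():.2f})")
```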

But deploying computer vision is expensive, and for many it's still out of reach.

Bigger industrial farms tried using computer vision to identify and remove sick pigs at the outset of an African swine fever epidemic that is sweeping China, according to The New York Times.

But half of China's farms are small-scale operations like Fan Chengyou's, where that wasn't an option.

Chinese pig farmer Fan Chengyou lost everything.

"When the fever came, 398 pigs were buried alive," Chengyou said. "I really don't want to raise pigs anymore."

China, the world's biggest pork-producing country, is expected to lose half its herd this year.

For many farmers in the world's major growing regions, 2019 was devastating.

Record flooding all along the Mississippi River Valley, the breadbasket of the United States, meant that many farmers couldn't plant anything at all this season.

And while computer vision can't stop extreme weather, it is at the heart of a growing trend that may eventually offer an alternative, sheltered from the elements.

Root AI enlists computer vision to teach its robots to pick fruit. Photo credit: Root AI

"Indoor growing powered by artificial intelligence is the future," said Josh Lessing, co-founder and CEO of Root AI, a research company that develops robots to assist in-door farmers.

Computer vision has taught a fruit-picking robot named Virgo to figure out which tomatoes are ripe, and how to pick them gently, so that a hot house can harvest just the tomatoes that are ready, and let the rest keep growing.

The Boston-based start-up is installing them at a handful of commercial greenhouses in Canada starting in 2020.

80 Acres Farms, another pioneer in indoor growing, opened what it says is the world's first fully-automated indoor growing facility just last year.

The company, based in Cincinnati, currently has seven facilities in the United States, and plans to expand internationally over the next six months. Artificial intelligence monitors every step of the growing process.

"We can tell when a leaf is developing and if there are any nutrient deficiencies, necrosis, whatever might be happening to the leaf," said 80 Acres Farms, CEO, Mike Zelkind. "We can identify pest issues, we can identify a whole variety of things with vision systems today that we can also process."

Because the lettuce and vine crops are grown under colored LED lights, technicians can even manage photosynthesis.

Thanks to the benefits of indoor-farming practices, Zelkind says 80 Acres Farms' crops grow faster and have the potential to be more nutrient-dense.

Humans need more than salad to survive, though. Experts say indoor farms will need to expand to a more diverse range of crops to provide a comprehensive option for growing food, but the advances being made in this space are significant.

AI-powered indoor agriculture is attracting a whole new breed of farmer.

New techie farmers are ambitious, but they are also realistic about what it takes to make AI work.

Ryan Pierce comes from a cloud computing background, but decided to jump into indoor growing, despite little to no experience in agriculture. Now, Pierce works for Fresh Impact Farms, an indoor farm in Arlington, VA.

"It's really sexy to talk about AI and machine learning, but a lot of people don't realize is the sheer amount of data points that you actually need for it to be worthwhile," Pierce said.

There is still a long way to go before artificial intelligence can truly solve the issues facing agriculture today and in the future.

Many AI projects are still in beta, and some have proven too good to be true.

Still, the appetite is high for finding solutions at the intersection of data, dirt and the robots that are learning to help us grow food.

The market for AI in agriculture is currently valued at $600 million and is expected to reach $2.6 billion by 2025.


Tech Optimization: Getting the most out of AI – Healthcare IT News

Posted: at 8:42 am

Artificial intelligence is a highly complex technology that, once implemented, requires ongoing oversight to make sure it is doing what is expected of it and ensure it is operating at optimal levels.

Healthcare provider organizations using AI technologies also need to make sure they're getting the biggest bang for their buck. In other words, they need to optimize the AI so that the technologies are meeting the specific needs of their organizations.

We spoke with six artificial intelligence experts, each with extensive experience in healthcare deployments, who offered comprehensive advice on how CIOs and other health IT workers can optimize their AI systems and approaches to best work for their provider organizations.

"Optimizing AI depends on the understanding of what AI is capable of and applying it to the right problem," said Joe Petro, chief technology officer at Nuance Communications, a vendor of AI technology for medical image interpretations.

"There is a lot of hype out there, and, unfortunately, the claims are somewhat ridiculous," he said. "To optimize AI, we all need to understand: the problem we are trying to solve; how AI can solve the problem; whether an existing capability can be augmented with AI; and when AI is not helpful."

Joe Petro, Nuance Communications

For example, is traceability important? AI has a well-known black box limitation: every fact or piece of evidence that contributed to a decision or conclusion made by the neural net is not always known.

"It is sometimes impossible to trace back through the bread crumb trail leading to the conclusion made by the neural net," Petro explained. "Therefore, if traceability is a requirement of the solution, you may need to retreat to a more traditional computational methodology, which is not always a bad thing."

Also, is the problem well-behaved and well-conditioned for AI? For example, he said, are there clear patterns in the solution to the problem that repeat, do not vary widely, and are essentially deterministic?

"For example, if you give the problem to a series of experts, will they all arrive at the same answer?" he posed. "If humans are given the same inputs and disagree on the answer, then AI may not be able to make sense of the data, and the neural nets may deliver results that do not agree with the opinions of certain experts. Rest assured that AI will find a pattern; the question is whether or not the pattern is repeatable and consistent."
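
Petro's expert-agreement test can be checked quantitatively before any model is trained. Here is a minimal sketch, assuming two hypothetical annotators have labeled the same cases; it uses scikit-learn's Cohen's kappa, and the labels and the 0.6 cutoff are illustrative, not from the article.

```python
# Illustrative check of inter-annotator agreement before committing to an
# AI approach: if experts themselves disagree, the labels may be too noisy.
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two clinical experts on the same 10 cases
expert_a = ["query", "no_query", "query", "query", "no_query",
            "query", "no_query", "no_query", "query", "query"]
expert_b = ["query", "no_query", "no_query", "query", "no_query",
            "query", "query", "no_query", "query", "no_query"]

kappa = cohen_kappa_score(expert_a, expert_b)
print(f"Cohen's kappa: {kappa:.2f}")

# A common (and debatable) rule of thumb: kappa below roughly 0.6 suggests
# the task is not yet well-conditioned for supervised learning without
# better labeling guidelines or adjudication.
if kappa < 0.6:
    print("Experts disagree substantially; revisit the problem definition.")
```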

"So in today's world of AI, the problems being solved by AI, especially in healthcare, are deliberately narrowly defined, thereby increasing the accuracy and applicability of AI," Petro explained. Choosing the right problem to solve and narrowing the scope of that problem is key in delivering a great outcome, he advised.

"Furthermore, training data needs to be readily available at the volume necessary to create dependable AI models that produce consistently verified results," he added. "Unfortunately, sometimes there is no data available in the form that is required to train the neural nets. For example, in some cases, AI requires marked-up and annotated data. This kind of markup is sometimes not available."

When a radiologist reads an image, they may or may not indicate exactly where in the image the diagnosis was made. No data markup makes training sometimes impossible. When a CDI specialist or care coordinator reads through an entire case, they most likely will not indicate every piece of evidence that prompted a query back to a physician.

"Again, no data markup makes training sometimes impossible," Petro stated. "Therefore, someone needs to go back over the data and potentially add the markup and annotations to train the initial models. Markup is not always necessary, but we need to realize that the data we need is not always available and may need to be expensively curated. The fact is that data is essentially the new software. Without the right data, AI cannot produce the wanted results."

Ken Kleinberg, practice lead, innovative technologies, at consulting firm Point-of-Care Partners, cautioned that AI is being promoted as being able to solve just about any problem that involves a decision.

"Many applications that were formerly addressed with proven rules-based or statistical approaches now are routinely being identified as AI targets," he explained. "Given the many extra considerations for AI involving model selection, training, validation, the expertise required, etc., this may be overkill. In addition to ROI concerns, using AI may expose organizations to problems unique or common to AI that simpler or alternative approaches are less susceptible to."

Ken Kleinberg, Point-of-Care Partners

"Even basic machine learning approaches that may do little more than try out a bunch of different statistical models require some degree of expertise to use," he added.

"Considerations of which applications to pick for AI include how many possible variables are in play, known complexities and dependencies, data variability, historical knowledge, availability of content experts, transparency of decision requirements, liability concerns, and how often the system might need to be retrained and tested," Kleinberg advised.

Experience of the model builders and sophistication and investment with an AI platform also should be considered, but AI should not be an expensive hammer looking for a simple nail.

For example, it may be that only a handful of known variables are key to deciding whether an intervention with a patient suffering from a specific condition is needed: if the patient has these specific triggers, they are going to be brought in.

"Why attempt to train a system on what is already known?" he said. "Sure, if the goal is to discover unknown nuances or dependencies, or deal with rare conditions, AI could be used with a broader set of variables. For most organizations, they will be safer to go with basic rules-based models where every aspect of the decision can be reviewed and modified as new knowledge is accumulated, especially if there are a manageable number of rules, up to a few hundred. That could be a better initial step than going directly to an AI model."
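
Kleinberg's preference for a manageable, reviewable rule set is easy to illustrate in code. The sketch below is hypothetical; the triggers and thresholds are invented for illustration only and are not clinical guidance. Its point is that every rule, and the reason for every decision, can be read and audited.

```python
# Hypothetical rules-based intervention check: every rule is explicit and
# reviewable, in contrast to a trained model's learned weights.
def needs_intervention(patient):
    """Return (decision, reasons) so the decision is fully traceable."""
    reasons = []
    if patient.get("hba1c", 0) > 9.0:                 # illustrative threshold
        reasons.append("HbA1c above 9.0")
    if patient.get("missed_appointments", 0) >= 3:    # illustrative threshold
        reasons.append("3+ missed appointments")
    if patient.get("er_visits_90d", 0) >= 2:          # illustrative threshold
        reasons.append("2+ ER visits in 90 days")
    return (len(reasons) > 0, reasons)

decision, why = needs_intervention(
    {"hba1c": 9.4, "missed_appointments": 1, "er_visits_90d": 2}
)
print(decision, why)   # True ['HbA1c above 9.0', '2+ ER visits in 90 days']
```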

In order to get the most out of an AI investment and optimize the technology for a specific healthcare provider organization, bring in members from across the organization, not just the IT team or clinical leadership, said Sanjeev Kumar, vice president of artificial intelligence and data engineering at HMS (Healthcare Management Systems).

"It's important to invest time to understand in detailed nuance the full workflow from patient scheduling to check-in to clinical workflow to discharge and billing," he said.

Sanjeev Kumar, HMS

"Each member of the team will be able to communicate how AI technology will impact the patient experience from their perspective and how these new, rich insights will impact the everyday office workflow," Kumar said. "Without this insight at the beginning of the implementation, you risk investing a significant amount of money in technology that is not used by the staff, that negatively impacts the patient experience or, worst of all, gives inappropriate insights."

Collectively, incorporating staff early on may require additional investment in manpower, but will result in an output that can be used effectively throughout the organization, he added.

On another technology optimization front, healthcare provider organizations have to be very careful with their data.

"Data is precious, and healthcare data is at the outer extreme of sensitive information," said Petro of Nuance Communications. "In the process of optimizing AI technology, we need to make sure the AI vendor is a trusted partner that acts as a caretaker of the PHI. We have all heard the horror stories in the press about the misuse of data. This is unacceptable."

Partnering with AI vendors that are legitimate custodians of the data and only use the data within the limitations and constraints of the contract and HIPAA guidelines is a table-stakes governing dynamic, he added.

"Make sure to ask the hard questions," he advised. "Ask about the use of the data, what is the PHI data flow, how does it move, where does it come to rest, who has access to it, what is it used for, and how long does the vendor keep it. Healthcare AI companies need to be experts in the area of data usage and the limitations around data usage. If a vendor wobbles in these areas, move on."

"Another very important consideration for providers optimizing AI technology is the amount of variability in the processes and data they are working with," said Michael Neff, vice president of professional services at Recondo Technology.

"For example, clinical AI models created for a population of patients with similar ethnic backgrounds and a small range of ages are most likely simpler than the same model created for an ethnically diverse population," he explained. "In the latter population, there will probably be a lot more edge cases, which will either require more training data or will need to be excluded from the model."

Michael Neff, Recondo Technology

"If the decision is made to exclude those cases, or if a model is built from a more cohesive data set, it will be very important to limit the use of the AI model to the cases where its predictions are valid," he continued.

The same argument, he said, holds for business variability: A model trained with data sent from a specific payer may not be valid for other payers that a provider works with.

"When using AI approaches, and especially natural language processing, it is key to provide an audit trail to justify recommendations and findings," advised Dr. Elizabeth Marshall, associate director of clinical analytics at Linguamatics IQVIA.

"Any insights or features taken from clinical notes and used in AI algorithms need to be easily traced to the exact place in the document they came from," she cautioned. "This enables clinical staff to validate the findings and build confidence in AI."

Dr. Elizabeth Marshall, Linguamatics IQVIA

Consider, for example, ensuring a hospital is receiving the right reimbursement for chronic condition comorbidities such as hepatitis (chronic viral B and C) and HIV/AIDS. "It is not only important to capture the data but also to ensure one is able to link the data back to the patient's EHR encounter where the information was obtained," she said.

Further, it's critical to consider how any insights will be made actionable and incorporated into clinical workflow; having a series of AI algorithms with no way to actually improve patient care is not impactful, Marshall said. For example, clinicians may want to improve the identification of patients who might be missed in a busy emergency department. Time is of the essence, and manually re-reviewing every radiology report to look for missed opportunities for follow-up wastes precious time.

Instead, they could use natural language processing to review unstructured sections of the reports for critical findings, such as identifying patients with incidental pulmonary nodules, she advised.

"When high-risk patients are identified, it's critical to have a process in place for appropriate follow-up," she said. "To actually improve care, the results need to be flagged in a risk register for follow-up by care coordinators after the patients are no longer in immediate danger."
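
Marshall's audit-trail requirement, that every extracted finding be traceable to the exact place in the note it came from, can be illustrated with plain pattern matching. A real deployment would use a clinical NLP engine rather than regular expressions; the report text, patterns and field names below are purely illustrative.

```python
# Illustrative audit trail: record the source span (character offsets and a
# snippet) for every finding so clinicians can verify it in the original note.
import re

report = ("CHEST CT: No acute process. Incidental 6 mm pulmonary nodule in "
          "the right upper lobe. Recommend follow-up CT in 12 months.")

patterns = {
    "incidental_pulmonary_nodule": r"pulmonary nodule",
    "follow_up_recommended": r"follow-up",
}

findings = []
for name, pattern in patterns.items():
    for match in re.finditer(pattern, report, flags=re.IGNORECASE):
        findings.append({
            "finding": name,
            "start": match.start(),        # offsets back into the document
            "end": match.end(),
            "evidence": report[max(0, match.start() - 20):match.end() + 20],
        })

for f in findings:
    print(f)   # each finding carries the span needed to audit it
```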

On the AI spectrum, full replication of human thought is sometimes referred to as strong or full AI. This does not exist yet, certainly not in medicine.

"In healthcare, we primarily are focused on narrow or weak AI, which could be described as the use of software or algorithms to complete specific problem-solving or reasoning tasks, at various levels of complexity," said Dr. Ruben Amarasingham, president and CEO of Pieces Technologies. "These include specific focal tasks like reading a chest X-ray, interpreting the stage of a skin wound, reading a doctor's note and understanding the concerns."

Dr. Ruben Amarasingham, Pieces Technologies

One optimization best practice is to understand how the proposed AI technology is being inserted into the workflow, whether the insertion truly decreases friction in the workflow from the perspective of the stakeholder (provider, patient and family), and to effectively measure and evaluate that value-add immediately after go-live, he said.

"If the AI is not reducing stress and complexity of the workflow, it is either not working, not optimized or not worth it," he added.

AI models built by third parties may well serve the local needs of an organization, at least as a starting point, but that could be both a risk and an opportunity for optimization, said Kleinberg of Point-of-Care Partners.

"As the number of prebuilt models proliferates (for example, sepsis, length-of-stay prediction, no-shows), it becomes more important to understand the quality of the training and test sets used and attempt to recognize what assumptions and biases the prebuilt model may contain," he said. "There has been a ton of recent research on how easily AI, particularly deep learning models, can be fooled, for example, by not paying enough attention to what's in the background of an image. Are the models validated by any independent parties?"

"Consider an application that recommends the most appropriate type of medical therapy management program for a patient with a goal of increasing medication adherence," Kleinberg advised. "To what degree might the test set have been chosen for individuals affected by certain environmental factors (warm versus cold climate), fitness levels, ethnic/economic background, number of medications taken, etc., and how does that compare to the local population to be analyzed?" he added.

"Retraining and testing the model with a more tuned-to-local demographic data set will be a key practice to achieve more optimized results," he advised.

Amarasingham of Pieces Technologies offered another AI optimization best practice: Health systems should set up governance systems for their AI technology.

"I am excited to see the development of AI committees at health systems, similar to the development of evidence-based medicine, data governance or clinical order set committees in the recent past," he said. "These groups should be involved with evaluating the performance of their AI systems in the healthcare workplace and not rely solely on vendors to oversee their products or services."

"They also could be tasked with developing the AI roadmap for their institution over time and as new AI technologies emerge," he added. These committees could be a mix of clinicians, administrators, informaticists and information system team members, he suggested.

Implementing any artificial intelligence technology can require a little more investment than originally anticipated, but if a healthcare organization starts small and plans properly, it will see true returns on that capital and manpower, advised Kumar of HMS.

"All healthcare organizations (provider, payer and employer) will attest that AI has the ability to help transform healthcare operations," he stated. "However, AI by itself is not a silver bullet for revolutionizing the system; it requires people, process and technology planning, workflow transformation, and time to make sure that it is successful."

"This means that to correctly optimize the technology, one needs to go slowly and make sure that one considers all factors that will impact the output, from the data going in to how the insights are reported back to providers for action," he said.

"In order to ensure that you get the most out of your investment," he concluded, "know that you will need to invest more and take longer to see the results."



OpenAI has published the text-generating AI it said was too dangerous to share – The Verge

Posted: at 8:42 am

The research lab OpenAI has released the full version of a text-generating AI system that experts warned could be used for malicious purposes.

The institute originally announced the system, GPT-2, in February this year, but withheld the full version of the program out of fear it would be used to spread fake news, spam, and disinformation. Since then it's released smaller, less complex versions of GPT-2 and studied their reception. Others also replicated the work. In a blog post this week, OpenAI now says it's seen no strong evidence of misuse and has released the model in full.

GPT-2 is part of a new breed of text-generation systems that have impressed experts with their ability to generate coherent text from minimal prompts. The system was trained on eight million text documents scraped from the web and responds to text snippets supplied by users. Feed it a fake headline, for example, and it will write a news story; give it the first line of a poem and itll supply a whole verse.

It's tricky to convey exactly how good GPT-2's output is, but the model frequently produces eerily cogent writing that can often give the appearance of intelligence (though that's not to say what GPT-2 is doing involves anything we'd recognize as cognition). Play around with the system long enough, though, and its limitations become clear. It particularly suffers with the challenge of long-term coherence; for example, using the names and attributes of characters consistently in a story, or sticking to a single subject in a news article.

The best way to get a feel for GPT-2's abilities is to try it out yourself. You can access a web version at TalkToTransformer.com and enter your own prompts. (A transformer is a component of the machine learning architecture used to create GPT-2 and its fellows.)
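
For readers who would rather script the model than use the web demo, the released weights can also be loaded through the open-source Hugging Face transformers library (not mentioned in the article, but a common way to run GPT-2). A minimal sampling sketch; the prompt and decoding settings are arbitrary choices.

```python
# Minimal GPT-2 sampling sketch using the Hugging Face transformers library.
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # smallest released size
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=80,       # total tokens, including the prompt
    do_sample=True,      # sample rather than greedy decode
    top_k=50,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```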

Apart from the raw capabilities of GPT-2, the model's release is notable as part of an ongoing debate about the responsibility of AI researchers to mitigate harm caused by their work. Experts have pointed out that easy access to cutting-edge AI tools can enable malicious actors; a dynamic we've seen with the use of deepfakes to generate revenge porn, for example. OpenAI limited the release of its model because of this concern.

However, not everyone applauded the lab's approach. Many experts criticized the decision, saying it limited the amount of research others could do to mitigate the model's harms, and that it created unnecessary hype about the dangers of artificial intelligence.

"The words 'too dangerous' were casually thrown out here without a lot of thought or experimentation," researcher Delip Rao told The Verge back in February. "I don't think [OpenAI] spent enough time proving it was actually dangerous."

In its announcement of the full model this week, OpenAI noted that GPT-2 could be misused, citing third-party research stating the system could help generate synthetic propaganda for extreme ideological positions. But it also admitted that its fears that the system would be used to pump out a high volume of coherent spam, overwhelming online information systems like social media, have not yet come to pass.

The lab also noted that its own researchers had created automatic systems that could spot GPT-2's output with ~95% accuracy, but that this figure was not high enough for standalone detection and means any system used to automatically spot fake text would need to be paired with human judges. This, though, is not particularly unusual for such moderation tasks, which often rely on humans in the loop to spot fake images and videos.
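
The pairing of an automatic detector with human judges is a standard confidence-threshold pattern. The sketch below is hypothetical and is not OpenAI's detector; the placeholder classifier and thresholds are invented solely to show the routing logic.

```python
# Hypothetical human-in-the-loop routing: act automatically only when the
# detector is confident, and send borderline cases to human moderators.
def route(texts, detector, confident=0.98):
    """detector(text) returns an estimated probability the text is machine-generated."""
    auto_decided, needs_review = [], []
    for text in texts:
        p = detector(text)
        if p >= confident or p <= 1 - confident:
            auto_decided.append((text, p))   # confident in either direction
        else:
            needs_review.append((text, p))   # uncertain: human judges decide
    return auto_decided, needs_review

# Toy stand-in for a real detector model (placeholder logic only)
def fake_detector(text):
    return 0.99 if "synthetic" in text else 0.6

auto, review = route(["a clearly synthetic sample", "an ambiguous post"], fake_detector)
print(len(auto), "handled automatically;", len(review), "sent to human judges")
```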

OpenAI says it will continue to watch how GPT-2 is used by the community and public, and will further develop its policies on the responsible publication of AI research.


Goldman Sachs, Nationwide, and More Highlight Benefits of AI at H2O World New York – Forbes

Posted: at 8:42 am

"Water can flow, or it can crash. Water is formless, shapeless. It becomes the form of whatever container you put it in." This famous idea from actor and philosopher Bruce Lee is a part of the mission behind H2O.ai (H2O). At H2O World New York on October 22nd, H2O.ai celebrated its vision to make every company an AI company with its growing community of makers and builders.

H2O.ai has developed enterprise AI and machine learning platforms to give companies the ability to easily access and leverage data throughout their organizations. Its open source platform, H2O, allows the entire open source ecosystem to contribute to the development of AI, and it has contributed to the development of newer platforms such as H2O Driverless AI, an automated machine learning tool that takes on some of the most difficult workflows, achieving high predictive accuracy while making models easier to interpret, and H2O Q, its newest platform, which helps business users and aspiring AI companies make AI apps.
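
The article stays at the level of product names, but the open-source H2O platform it mentions is scriptable from Python. Here is a minimal AutoML sketch, assuming a CSV file with a target column named "response"; the file name and column are placeholders, and Driverless AI itself is a separate commercial product with its own client.

```python
# Minimal sketch of the open-source H2O platform's AutoML from Python.
# pip install h2o
import h2o
from h2o.automl import H2OAutoML

h2o.init()                                        # starts a local H2O cluster

train = h2o.import_file("training_data.csv")      # placeholder file
y = "response"                                    # placeholder target column
x = [c for c in train.columns if c != y]

aml = H2OAutoML(max_models=10, seed=1)            # small run for illustration
aml.train(x=x, y=y, training_frame=train)

print(aml.leaderboard.head())                     # candidate models ranked by metric
best = aml.leader                                 # best model, ready for predictions
```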

While many companies attempt to embrace AI, they often lack the resources, especially when it comes to recruiting top data scientists away from tech giants like Google, Facebook, and Microsoft, whose compensation packages can exceed $1M.

Aakriti Srikanth, Forbes 30 Under 30 honoree and Co-founder at Ahura AI

"I see a lot of potential in H2O.ai to accelerate machine learning and to improve the workforce," said Aakriti Srikanth, Co-founder at Ahura AI and a Forbes 30 Under 30 honoree.

"There is a huge talent shortage for data scientists in many companies," said Sri Ambati, CEO and founder, in his opening keynote. "H2O.ai solves this problem by giving teams easier, simpler and cheaper AI platforms that implement machine learning algorithms."

H2O World New York featured sessions from many of the H2O.ai employees (or makers), as well as representatives from some of the top companies in financial services, insurance, and more, including Goldman Sachs, Nationwide Insurance, Disney, and Discover Financial Services, among others, all of whom have adopted the company's technology, including open source H2O and Driverless AI, to further their AI journey.

A core requirement of any company focused on AI is putting together a strong team.

Wieyan Zhao, Director of Data Science at Nationwide Insurance, said, "What matters is the skills that you have, and are you good at, for the things we want you to do... We have people coming from more than ten countries, speaking more than fifteen types of languages; the diversity gives you perspective when you come into the room and try to solve the problem."

During his H2O World session, "A Decade of Data Science: The Nationwide Journey," Zhao explained how Nationwide's core modeling team has a 10-to-1 ratio of data scientists to operations staff, with diverse backgrounds and skill sets that help them bring unique perspectives during peer and model reviews for each project.

Part of building a strong team comes from having technology that lends itself to collaboration and brings technical expertise where an organization may be lacking it internally. To that end, H2O.ai boasts of attracting a large cohort of Kaggle Grandmasters, the data science equivalent of a Chess Grandmaster, to lead the way in building algorithms and new capabilities that data scientists can use in their day-to-day work. H2O.ai currently employs 13 Kaggle Grandmasters, roughly 10 percent of those that exist globally. This gives H2O.ai customers access to the expertise they need through the platform.

"One of the things we're doing in addition to building Driverless AI itself is building a platform that allows data scientists and engineers to collaborate and to store and share models and then deploy them," said Tom Kraljevic, VP of Engineering at H2O.ai.

"H2O.ai just makes modeling so much faster. Not only to produce data products, but we also created a platform for our customers, said ADP Principal Data Scientist, Xiaojing Wang.

The H2O.ai team onstage at H2O World New York

Customer love was a constant theme throughout H2O World, and Ambati compared the AI ecosystem to a team sport. At H2O.ai, this means uniting internal team members and external partners with a shared mission that challenges the status quo and creates a strong community that will build incredible things with AI. In fact, H2O.ai takes customer love very seriously, turning two of its largest customers, Goldman Sachs and Wells Fargo, into investors in its recent funding rounds. "When customers become investors, that is the true test of advocacy," said Ambati.

Goldman Sachs Managing Director Charles Elkan gave a keynote at H2O World where he spoke about the promise and the perils of deploying machine learning in the realm of finance. He gave the example of a healthcare company that was able to utilize machine learning to analyze patient responses and follow up with a database of clarifying questions, allowing a physician to review these answers and apply his or her experience and understanding on a deeper level, greatly increasing the physician's productivity.

"In order to be able to create something awesome like a great data science team, we need to be able to take those chances organizationally with people so as to produce those great outcomes we have been looking for, said Krish Swamy, Senior VP of Data and Enterprise Analytics at Wells Fargo.


The how and why of AI: enabling the intelligent network – Ericsson

Posted: at 8:42 am

It's easy to throw in a buzzword like AI these days to get some attention. But that's not my only intention with this piece. My aim is to offer some valuable insights into how AI can make a real difference in the world of radio access networks (RANs), looking at the way it works as well as the multiple benefits it brings.

Let me elaborate.

From a technology perspective, AI in the RAN is about letting the hardware adjust its decision-making capability and allowing it to learn from patterns. Learning can improve the baseband functionality, which otherwise acts based on hard-coded settings. Of course, the pattern observations and execution of AI and machine learning (ML) require a certain amount of computing capacity. They also depend on software algorithms for decision making, as well as one or several data sources to use for pattern analysis.

While device types vary to some extent, 70 percent of all those used today are smartphones*. What differentiates devices is the relative level of their capabilities: they range from feature phones that support voice and text only, to full-on high-end 5G-capable smartphones.

Within the high-end smartphone segment, traffic behavior patterns for data services mostly shift between bursty and continuous. But on the network's side, the necessary capacity should always be available, regardless of what kind of app you are using at any given moment, whether it's chat, video streaming or gaming, for instance.

Smartphones perform measurements all the time, without most users being aware of it. These measurements are necessary to manage the radio link and the mobility, and to control how much output power each device needs to use. The network collects the measurement data in order to decide on the best connection for the device. Smartphones also carry key information about their network capabilities, which they conveniently report to the network. For instance, not all smartphones have 5G, but for those that do, the node can prepare a tailored network connection for each particular user.

Neighbor cells also report to each other on the status of capabilities, connected users and current load. This information can also be taken into consideration.

Ultimately, the scale of the benefits that AI can provide is determined by the hardware in place and the location of the boards. The hardware components of a mobile network today have to meet huge requirements in terms of managing the mobility of thousands of users in each cell. Not only that: they must also make sure that no connections are dropped and that the service is responding at all times.

Of course, routing and more central functions are rather executed from the core network components. So the node base stations do not have to carry full responsibility for the effectiveness of the entire network on their shoulders. But real-time mobility functions are located at the edge of the network, on the node.

Today's node often houses GSM, WCDMA, LTE and NR on a single site, not always on one baseband, but such installations are soon to become commonplace as well.

Applying ML to software functionalities boosts the strength of the network significantly, since many network functions can benefit from the same algorithms. But the advantage of this comes at a cost, with some computing power being seized by the AI technology.

An Ericsson baseband will, however, run ML in parallel with regular node traffic without reducing the capacity of the baseband. That's because our AI engineers have optimized the algorithms so that they can analyze huge amounts of data in real time, enabling instant traffic prediction. All this is facilitated by Ericsson's many-core architecture, which is the software platform design of choice that all RAN Compute products are based on.

The reality is, service providers expect full-steam performance from their legacy products, even when new network capabilities are added, and Ericsson is aware of this. Service providers also like to minimize opex, and they incur significant operational costs when site visits need to be carried out. Ericsson is aware of this as well, which is why our ML features are integrated with our software releases, which can be applied on a remote basis without the need for any site visits at all.

We have reaped the benefits of AI in many areas of our lives: from movie offerings being handed to us on a plate, to the voice and face recognition apps in our smartphones, to the optical scanning features of our credit cards. You can look at your phone with your eyes wide open and unlock it; you can register your credit card details using the camera in your device. In all such cases, AI simplifies the use of our devices, automating the steps that would otherwise have to be carried out in a repetitive, manual fashion. Imagine the hassle!

On the mobile network side, the use of AI is similar but not quite the same. While the initial use cases have been about automation, they are also about improving network coverage and the user experience by anticipating the needs of devices.

One practical example of this is that the measurements that smartphones carry out, which were mentioned previously, can be reduced significantly. By shortlisting the top neighbor cells at every node, the device will get an equally shortened to-do list of frequencies to listen to. This means that instead of numerous background measurements being performed by the device, battery power is conserved to do other fun stuff with.
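
Ericsson does not publish its algorithm in this post, but the shortlisting idea can be illustrated simply: rank a serving cell's neighbors by how often they actually turn out to be useful handover targets, and ask devices to measure only the top few. The cell names and counts below are invented for illustration.

```python
# Illustrative neighbor-cell shortlisting: learn from observed handover
# history which neighbors matter, and shrink the device's measurement list.
from collections import Counter

# Hypothetical log of successful handovers from serving cell "A1"
handover_log = ["B2", "B2", "C7", "B2", "D1", "C7", "B2", "C7", "E5", "B2"]

TOP_N = 3
ranked = Counter(handover_log).most_common(TOP_N)
shortlist = [cell for cell, count in ranked]

print("Measure only:", shortlist)   # e.g. ['B2', 'C7', 'D1']
# Devices can then skip background measurements on the remaining frequencies,
# conserving battery, as described above.
```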

For the service provider, one main benefit of implementing AI will be the reduction in opex, as fewer node configurations need to be added manually. But even more importantly, their spectrum assets can be used more efficiently, and spectrum is a valuable resource that they tend to have to pay dearly for.

All in all, AI for radio access networks is a sound investment. Ericsson's software will improve coverage and spectrum use, and boost throughput. Then service providers can sit back, relax and let the machines do the work.

* Ericsson Mobility Report, June 2019



Artificial Intelligence Can Be Biased. Here’s What You Should Know. – FRONTLINE

Posted: at 8:42 am

Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In The Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals, sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it's fast becoming clear that the algorithms harbor bias, too.

It's an issue Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology, knows about firsthand. She founded the Algorithmic Justice League to draw attention to the issue, and earlier this year she testified at a congressional hearing on the impact of facial recognition technology on civil rights.

"One of the major issues with algorithmic bias is you may not know it's happening," Buolamwini told FRONTLINE. We spoke to her about how she encountered algorithmic bias, about her research, and what she thinks the public needs to know.

This interview has been edited for length and clarity.

On her first encounter with algorithmic bias.

The first time I had issues with facial detection technology was actually when I was an undergraduate at Georgia Tech, and I was working on a robot. The idea with this robot was to see if I could get it to play peek-a-boo with me. And peek-a-boo doesn't really work if your robot can't see you, and my robot couldn't see me. To get my project done, I borrowed my roommate's face. She was lighter skinned than I was. That was my first time really using facial analysis technology and seeing that it didn't work for me the same way it worked for other people.

I went on to do many things and became a graduate student at MIT and I started working on projects that used facial analysis technology, face detection. So one project I did was something called the Aspire Mirror. You look into a mirror, a camera detects your face and then a lion can appear on you, or you can be somebody you're inspired by...

[I]t wasn't detecting my face consistently, so I got frustrated. So what do you do when you get frustrated with your program? You debug. I started trying to figure out ways to make it work. I actually drew a face on my hand, and the system detected the face on my palm. And I was like, "Wait, wait, wait, if it's detecting the face I just drew on my palm, then anything's a possibility now." So I looked around my office and the white mask was there. So I was like, "There's no way! But why not?"

I pick up the white mask, and I put it on and it's instantaneous when I put on that white mask, and I mean just the symbolism of it was not lost to me. This is ridiculous that the system can detect this white mask that is not a real person, but cannot necessarily detect my face. So this is really when I started thinking, "Okay, let's dig a bit deeper into what's going on with these systems."

On digging a bit deeper into facial analysis technology.

Here was a question: Do these systems perform differently on various faces? There was already a 2012 report that actually came out from an FBI facial analysis expert showing that facial recognition systems in particular worked better on white faces than black faces. They didn't work as well on youthful faces. And they didn't work as well on women as compared to men. This was 2012, and why I keep bringing that up is this was before the deep learning revolution...

Now we had a different approach that was supposed to be working much better. My question was, given these new approaches to facial analysis and facial recognition, are there still biases? Because what I'm experiencing, what my friends are experiencing and what I'm reading about with reports that say, "Oh, we've solved face recognition," or "We're 97% accurate" on benchmarks... those reports were not lining up to my reality.

What I focused on specifically was gender classification. I wanted to choose something that I thought would be straightforward to explain, not that gender is straightforward, it's highly complex. But insomuch as we were seeing binary gender classification, I thought that would be a place to start. By this time my weekend hobby was literally running my face through facial analysis and seeing what would happen. So some wouldn't detect my face and others would label me male. And I do not identify as male. This is what led me down that corridor.

On finding that the gold standard benchmarks were not representative.

When I ran this test, the first issue that I ran into, which gave me some more insight into the issue we're talking about, algorithmic bias, was that our measures for how well these systems perform were not representative of the world. We've supposedly done well on gold standard benchmarks. So I started looking at the benchmarks. These are essentially the data sets we use to analyze how well we're doing as a research community or as an industry on specific AI tasks. So facial recognition is one of these tasks that people are benchmarked on all the time.


The thing is, we oftentimes don't question the status quo or the benchmark. This is the benchmark, why would I question it? But sometimes the gold standard turns out to be pyrite. And that is what was happening in this case. When I went to look at the research on the breakdown of various facial analysis systems, what I found was one of the leading gold standards, Labeled Faces in the Wild, was over 70% male and 80% white. This is when I started looking into more and more data sets and seeing that you had massive skews. Sometimes you had massive skews because you were using celebrities. I mean, celebrities don't necessarily look like the rest of the world. What I started to see was something I call "power shadows," when the inequalities or imbalances that we have in the world become embedded in our data.

All this to say, the measures that we had for determining progress with facial analysis technology were misleading because they weren't representative of people, at least in the U.S. in that case. We didn't have data sets that were actually reflective of the world, so for my thesis at MIT, I created what I call the Pilot Parliaments Benchmark. I went to UN Women's websites, I got a list of the top 10 nations in the world by their representation of women in parliament. So I chose European countries and African nations to try to get a spread on opposite ends of skin types, lighter skin and darker skin. After I ran into this issue that the benchmarks were misleading, I needed to make the benchmark.
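
Buolamwini's point about skewed benchmarks corresponds to an audit any team can run before trusting a benchmark's accuracy numbers: tabulate the composition of the evaluation set. Here is a small pandas sketch on invented metadata; the column names and values are placeholders, not the actual benchmark files.

```python
# Illustrative audit of a face benchmark's composition using pandas.
# The metadata and column names are hypothetical.
import pandas as pd

meta = pd.DataFrame({
    "gender":    ["male", "male", "male", "female", "male", "female"],
    "skin_type": ["lighter", "lighter", "darker", "lighter", "lighter", "darker"],
})

# Share of each group in the benchmark (the kind of 70%/80% skew described above)
print(meta["gender"].value_counts(normalize=True))
print(meta["skin_type"].value_counts(normalize=True))
print(meta.groupby(["gender", "skin_type"]).size() / len(meta))
```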

On what her research found.

Then finally, I could get to the research question. So I wanted to know how accurate are they at this reduced task of binary gender classification (which is not at all inclusive) when it comes to guessing the gender of the face? And it turned out that there were major gaps. This was surprising because these were commercially sold products. You know how the story goes. It turns out, the systems work better on male-labeled faces than female-labeled faces, they work better on lighter faces than darker-skinned faces.

But one thing we did for this study, which I would stress for anybody who's thinking about doing research in algorithmic bias or concerned with algorithmic bias and AI harms, is we did an intersectional analysis. We didn't just look at skin type. We didn't just look at gender. We looked at the intersection. And the inspiration for this was from Kimberlé Crenshaw, a legal scholar who in 1989 introduced the term intersectionality. What would happen with the analysis is if you did it in aggregate just based on race, or if you did it in aggregate based on just gender, you might find based on those axes that there isn't substantial evidence of discrimination. But if you did it at the intersection, you would find there was a difference. And so I started looking at the research studies around facial analysis technologies and facial recognition technologies and I saw that usually we just have aggregate results, just one number for accuracy. People are just optimizing for that overall accuracy, which means we don't get a sense of the various ways in which the system performs for different types of people. It's the differences in the performance, the accuracy disparities, that I was fascinated by, but not just on a single axis but also on the intersection. So when we did the intersectional breakdown... oooh, it was crazy.

We weren't doing anything to try to trick the system. It was an optimistic test. This is why I was very surprised, because even with this optimistic test, in the worst-case scenario for the darkest-skinned women, you actually had error rates as high as 47% on a binary classification task.
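A minimal sketch of that kind of intersectional breakdown might look like the following, assuming a hypothetical results table with one row per face; the column names are illustrative, not the study's actual data.

```python
# Intersectional error analysis (illustrative sketch, hypothetical columns).
import pandas as pd

# Hypothetical per-face results: true_gender, predicted_gender, skin_type.
results = pd.read_csv("classifier_results.csv")
results["correct"] = results["predicted_gender"] == results["true_gender"]

# Aggregate accuracy: the single number most evaluations report.
print("Overall accuracy:", results["correct"].mean())

# Single-axis breakdowns can still look acceptable on their own.
print(results.groupby("true_gender")["correct"].mean())
print(results.groupby("skin_type")["correct"].mean())

# The intersectional breakdown (gender x skin type) is where the largest
# gaps, like the reported 47% error rate for darker-skinned women, show up.
print(results.groupby(["true_gender", "skin_type"])["correct"].mean())
```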

I shared the results with the companies and I got a variety of responses. But I think the overall response, at least with the first study, was an acknowledgement of an issue with algorithmic bias.

On how AI is already affecting people's lives.

There's a paper that just came out in Science which is devastating, showing that risk assessment algorithms used in health care actually have racial bias against black patients. We're talking about health care, where the whole point is to try to optimize the benefit. What they were seeing was that because the algorithm used how much money is spent on an individual as a proxy for how sick that person is, it turned out not to be a good proxy: black patients who were actually sick were being scored as less sick than they really were.
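The mechanism is easy to demonstrate. Below is an illustrative simulation, not the Science study's data or model: two groups have the same underlying illness, but one historically receives less care per unit of need, so a system that ranks patients by spending flags its sickest members less often.

```python
# Illustrative simulation of proxy bias (hypothetical parameters, not the study's data).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
illness = rng.gamma(shape=2.0, scale=1.0, size=n)      # true health need
group_b = rng.random(n) < 0.5                           # group with less access to care
access = np.where(group_b, 0.7, 1.0)                    # care delivered per unit of need
cost = illness * access + rng.normal(0.0, 0.1, n)       # observed spending (the proxy)

# Flag the top 10% by the proxy for extra care, as a risk score might.
flagged = cost >= np.quantile(cost, 0.90)

# Among patients who are truly very sick, the lower-spending group is flagged less often.
very_sick = illness >= np.quantile(illness, 0.90)
print("Flag rate, truly very sick, group A:", flagged[very_sick & ~group_b].mean())
print("Flag rate, truly very sick, group B:", flagged[very_sick & group_b].mean())
```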

When these systems fail, they fail most for the people who are already marginalized, the people who are already vulnerable.

You also have AIs that are determining the kind of ads people see. And so there have been studies that show you can have discriminatory ad targeting. Or you can have a situation where you have an ad for a CEO position and the system over time learns to present that CEO ad mainly to men. You were saying, how do you know if you've encountered bias? The thing is, you might never know if you've encountered the bias. Something that might happen to other people: you see phenotypic fails with passport renewals. So you have a report from a New Zealand man of Asian descent being told that his eyes are closed and he needs to upload another photo. Meanwhile, his eyes are not closed. You have, in the UK, a black man being told his mouth is open. His mouth was not open. You have these systems that are seeping into everyday life.

You have AI systems that are meant to verify whether you're who you say you are. One way that can happen is with ride-share apps. Uber, for example, will ping drivers to have them verify their ID. There's actually a report from trans drivers who were saying that they were being repeatedly [asked] to submit their IDs because they were not matching. They were either being kicked out of the system or having to stop the car and try it again, which means you're not getting the same level of economic opportunity.

When these systems fail, they fail most for the people who are already marginalized, the people who are already vulnerable. And so when we think about algorithmic bias, we really have to be thinking about algorithmic harm. That's not to say we don't also have the risk of mass surveillance, which impacts everybody. We also have to think about who's going to be encountering the criminal justice system more often because of racial policing practices and injustices.

On what the public needs to know about algorithmic bias.

There's no requirement for meaningful transparency, so these systems can easily be rolled out without our ever knowing. So one thing I wish people would do more of, and something that companies also ought to do more of, is having transparency, so that you even know that an AI system was used in the first place. You just might never get the callback. You just might pay the higher price. You would never actually know. What I want the public to know is AI systems are being used in hidden ways that we should demand are made public.

The other thing I want the public to have is actually choice: affirmative consent. Not only should I know if an AI system is being used, but let's say it makes the wrong decision or one that I contest. There's no path to due process that's mandated right now. So if something goes wrong, what do you do?

Sometimes I'll hear, at least in the research community, efforts to de-bias AI or eradicate algorithmic bias. And it's a tempting notion: let's just get rid of the bias and make the systems more fair, more inclusive, some ideal. And I always ask, but have we gotten rid of the humans? Because even if you create some system you believe is somehow more objective, it's being used by humans at the end of the day. I don't think we can ever reach a true state of something being unbiased, because there are always priorities. This is something I call the coded gaze. The coded gaze is a reflection of the priorities, the preferences, and also the prejudices of those who are shaping technology. This is not to say we can't do our best to try to create systems that don't produce harmful outcomes. I'm not saying that at all. What I am saying is we also have to accept the fact that, being human, we're going to miss something. We're not going to get it all right.

What I want the public to know is AI systems are being used in hidden ways that we should demand are made public.

Instead of thinking, "Oh, we're going to get rid of bias," what we can think about is bias mitigation: knowing that we have flaws, knowing that our data has flaws, understanding that even systems we try to perfect to the best of our abilities are going to be used in the real world with all of its problems.

Before we get to the point where it's having major harms with real-world consequences, there need to be processes in place to check for the different types of bias that could happen. So, for example, AI [systems] now have algorithmic risk assessments as a process of really thinking through what the societal impacts of the system are in its design and development stages, before you get to deployment. Those kinds of approaches, I believe, are extremely helpful, because then we can be proactive instead of reacting to the latest headline and playing bias whack-a-mole.

On proposals for oversight and regulation.

You have a proposal for an Algorithmic Accountability Act; this is a nationwide push that would actually require assessing systems for their social impact. And I think that's really important. We have something with the Algorithmic Justice League that's called the Safe Face Pledge, which outlines actionable steps companies can take to mitigate the harms of AI systems.

I absolutely think regulation needs to be the first and foremost tool, but alongside regulation we should be providing not just the critique of what's wrong with the system, but also steps that people can take to do better. Sometimes the step to take to do better is to commit to not developing a particular kind of technology or a particular use case for a technology. So with facial analysis systems, one of our banned uses is any situation where lethal force can be used. That would mean we're not supporting facial recognition on police body cameras, or facial recognition on lethal autonomous weapons.

And I think the most important thing I've seen about the Safe Face Pledge is, one, the conversations that I've had with different vendors, where, whether or not they adopt it, actually going through those steps and thinking about their process and the changes they can make in it has, I believe, led to internal shifts that likely would not hit the headlines, because people would rather quietly make certain kinds of changes. The other thing is making it so that the commitments have to be part of your business processes. Not a scout's-honor pledge, just trust us. If you are committed to actually making this agreement, it means you have to change your terms of service and your business contracts to reflect what those commitments are.

On what should be done to fix the problem.

One, I think, demand transparency and ask questions. Ask questions if you're using a platform, if you're going to a job interview. Is AI being used? The other thing I do think is important is supporting legislative moves.

When I started talking about this, I think in 2016, it was such a foreign concept in the conversations that I would have. And now, today, I can't go online without seeing some kind of news article or story about a biased AI system of some shape or form. I absolutely think there has been an increase in public awareness, whether through books like Cathy O'Neil's Weapons of Math Destruction. There's a great new book out by Dr. Ruha Benjamin, Race After Technology.

People know it's an issue and so I'm excited about that. Has there been enough done? Absolutely not. Because people are just now waking up to the fact that there's a problem. Awareness is good, and then that awareness needs to lead to action. That is the phase we're in. Companies have a role to play, governments have a role to play, and individuals have a role to play.

When you see the bans in San Francisco [of facial recognition technology by the city's agencies], what you saw was a very powerful counter-narrative. What we were hearing was that this technology is inevitable, there's nothing you can do. When you hear there's nothing you can do, you stop trying. But what was extremely encouraging to me with the San Francisco ban, and then you have Somerville, which came from the folks who are in Boston, is that people have a voice and people have a choice. This technology is not inherently inevitable. We have to look at it and say: What are the benefits and what are the harms? If the harms are too great, we can put restrictions and we can put limitations. And this is necessary. I do look to those examples and they give me hope.

The rest is here:

Artificial Intelligence Can Be Biased. Here's What You Should Know. - FRONTLINE


The AI hiring industry is under scrutiny, but it'll be hard to fix – MIT Technology Review

Posted: at 8:42 am

The Electronic Privacy Information Center (EPIC) has asked the Federal Trade Commission to investigate HireVue, an AI tool that helps companies figure out which workers to hire.

What's HireVue? HireVue is one of a growing number of artificial intelligence tools that companies use to assess job applicants. The algorithm analyzes video interviews, using everything from word choice to facial movements to figure out an employability score that is compared against those of other applicants. More than 100 companies have already used it on over a million applicants, according to the Washington Post.

What's the problem? It's hard to predict which workers will be successful from things like facial expressions. Worse, critics worry that the algorithm is trained on limited data and so will be more likely to mark traditional applicants (white, male) as more employable. As a result, applicants who deviate from the traditional, including people who don't speak English as a native language or who are disabled, are likely to get lower scores, experts say. Plus, it encourages applicants to game the system by interviewing in a way that they know HireVue will like.

What's next? AI hiring tools are not well regulated, and addressing the problem will be hard for a few reasons.

Most companies won't release their data or explain how their algorithms work, so it's very difficult to prove any bias. That's part of the reason there have been no major lawsuits so far. The EPIC complaint, which suggests that HireVue's promise violates the FTC's rules against unfair and deceptive practices, is a start. But it's not clear if anything will happen. The FTC has received the complaint but has not said whether it will pursue it.

Other attempts to prevent bias are well-meaning but limited. Earlier this year, Illinois lawmakers passed a law that requires employers to at least tell job seekers that they'll be using these algorithms, and to get their consent. But that's not very useful. Many people are likely to consent simply because they don't want to lose the opportunity.

Finally, just like AI in health or AI in the courtroom, artificial intelligence in hiring will re-create society's biases, which is a complicated problem. Regulators will need to figure out how much responsibility companies should be expected to shoulder in avoiding the mistakes of a prejudiced society.

More here:

The AI hiring industry is under scrutiny, but it'll be hard to fix - MIT Technology Review


AI in drug development: the FDA needs to set standards – STAT

Posted: at 8:42 am

Artificial intelligence has become a crucial part of our technological infrastructure and the brain underlying many consumer devices. In less than a decade, machine learning algorithms based on deep neural networks evolved from recognizing cats in videos to enabling your smartphone to perform real-time translation between 27 different languages. This progress has sparked the use of AI in drug discovery and development.

Artificial intelligence can improve efficiency and outcomes in drug development across therapeutic areas. For example, companies are developing AI technologies that hold the promise of preventing serious adverse events in clinical trials by identifying high-risk individuals before they enroll. Clinical trials could be made more efficient by using artificial intelligence to incorporate other data sources, such as historical control arms or real-world data. AI technologies could also be used to magnify therapeutic responses by identifying biomarkers that enable precise targeting of patient subpopulations in complex indications.

Innovation in each of these areas would provide substantial benefits to those who volunteer to take part in trials, not to mention downstream benefits to the ultimate users of new medicines.


Misapplication of these technologies, however, can have unintended harmful consequences. To see how a good idea can turn bad, just look at what's happened with social media since the rise of algorithms. Misinformation spreads faster than the truth, and our leaders are scrambling to protect our political systems.

Could artificial intelligence and machine learning similarly disrupt our ability to identify safe and effective therapies?

Even well-intentioned researchers can develop machine learning algorithms that exacerbate bias. For example, many datasets used in medicine are derived from mostly white, North American and European populations. If a researcher applies machine learning to one of these datasets and discovers a biomarker to predict response to a therapy, there is no guarantee the biomarker will work well, if at all, in a more diverse population. If such a biomarker was used to define the approved indication for a drug, that drug could end up having very different effects in different racial groups simply because it is filtered through the biased lens of a poorly constructed algorithm.
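As a rough illustration of the kind of check that would catch this, here is a minimal sketch that evaluates a biomarker separately within each ancestry group rather than only in the pooled cohort. The file and column names are hypothetical, not any company's actual pipeline.

```python
# Subgroup generalizability check for a biomarker (illustrative sketch).
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical columns: biomarker_score, responded (0/1), ancestry.
df = pd.read_csv("trial_cohort.csv")

# Pooled performance can look strong while hiding poor subgroups.
print("Pooled AUC:", roc_auc_score(df["responded"], df["biomarker_score"]))

# Evaluate the biomarker separately within each ancestry group.
for group, sub in df.groupby("ancestry"):
    if sub["responded"].nunique() == 2:  # AUC requires both outcomes present
        print(group, roc_auc_score(sub["responded"], sub["biomarker_score"]))
```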

Concerns about bias and generalizability apply to most data-driven decisions, including those obtained using more traditional statistical methods. But the machine learning algorithms that enable innovations in drug development are more complex than traditional statistical models. They need larger datasets, more sophisticated software, and more powerful computers. All of that makes it more difficult, and more important, to thoroughly evaluate the performance of machine learning algorithms.

Companies operating at the intersection of drug development and technology need standards to ensure that artificial intelligence tools function as intended.

The FDA has already issued several proposals around the regulation of AI products, and it now has an opportunity to build on these efforts. The Center for Devices and Radiological Health has reviewed and cleared a number of devices that use AI. The center has also released a proposed framework, Artificial Intelligence and Machine Learning in Software as a Medical Device. These proposals, though, don't necessarily apply to AI-based tools used as part of the drug development process. As a result, biopharmaceutical and technology companies aren't sure how these tools fit into current regulatory frameworks.

I'm the founder and CEO of a company that uses artificial intelligence to streamline clinical trials and make them more efficient. You might expect me to counsel the FDA to back off on creating hurdles for companies that want to apply artificial intelligence to drug development. Not so. In a presentation to the FDA on Thursday, I'll argue that the agency should play an important role in ensuring that AI-based drug development tools meet appropriate standards.

The FDA has an opportunity to ease regulatory uncertainty by proposing a framework that guides how sponsors can use AI tools within drug development programs. By engaging with industry to develop a workable regulatory framework, the FDA can balance the opportunity for artificial intelligence to provide significant public health benefits with its mission to protect public health by ensuring that these new technologies are reliable. At the same time, the FDA could create a pathway for formal qualification of AI-based drug-development tools to ensure that these tools are sufficiently vetted.

In addition, it could encourage the exploratory use of AI-based technologies in drug development that would allow sponsors and regulators to better understand their advantages and disadvantages through use of new regulatory pathways, such as the Complex Innovative Trial Designs Pilot Program.

These concrete actions would open the door to innovative approaches to clinical trials that will make drug development more efficient and so help deliver new treatments to patients who need them as quickly as possible.

Charles K. Fisher, Ph.D., is the founder and CEO of San Francisco-based Unlearn.AI, Inc.

See the original post here:

AI in drug development: the FDA needs to set standards - STAT


Report: The Government and Tech Need to Cooperate on AI – WIRED

Posted: at 8:42 am

America's national security depends on the government getting access to the artificial intelligence breakthroughs made by the technology industry.

So says a report submitted to Congress on Monday by the National Security Commission on Artificial Intelligence. The group, which includes executives from Google, Microsoft, Oracle, and Amazon, says the Pentagon and intelligence agencies need a better relationship with Silicon Valley to stay ahead of China.

"AI adoption for national security is imperative," said Eric Schmidt, chair of the commission and formerly CEO of Google, at a news briefing Monday. "The private sector and government officials need to build a shared sense of responsibility."

Monday's report says the US leads the world in both military might and AI technology. It predicts that AI can enhance US national security in numerous ways, for example by making cybersecurity systems, aerial surveillance, and submarine warfare less constrained by human labor and reaction times.

But the commission also unspools a litany of reasons that US dominance on the world stage and in AI may not last, noting that China is projected to overtake the US in R&D spending within 10 years, while US federal research spending as a percentage of GDP has returned to pre-Sputnik levels and should be increased significantly.

Robert Work, vice chair of the commission and previously deputy secretary of defense under Obama and Trump, continued the Cold War comparisons in Monday's news briefing. "We've never faced a high-tech authoritarian competitor before," he said. "The Soviet Union could compete with us in niche capabilities like nuclear weapons and space, but in the broad sense they were a technological inferior."

Created by Congress in August 2018 to offer recommendations on how the US should use AI in national security and defense, the NSCAI has strong tech industry representation. In addition to Schmidt, the 15-member commission includes Safra Catz, CEO of Oracle; Andy Jassy, the head of Amazon's cloud business; and top AI executives from Microsoft and Google. Other members are from NASA, academia, the US Army, and the CIA's tech investment fund.

Monday's report says staying ahead of China depends in part on the US government getting more access to AI advances taking place inside tech companies, like those several of the commissioners work for. The document describes the Pentagon as struggling to access the best AI technology on the commercial market.

The Department of Defense has in recent years set up a series of programs aimed at forging closer relationships with Silicon Valley companies large and small. Monday's report suggests that pressure to find new ways to deepen those relations will continue to grow, says William Carter, deputy director of the technology policy program at the Center for Strategic and International Studies. "The report clearly articulates that DOD continuing to do business the way it always has and expecting the world to go along with it is not going to work," he says.

The commission won't send its final recommendations to Congress until late next year, but Monday's interim report says the US government should invest more in AI research and training, curtail inappropriate Chinese access to US exports and university research, and mull the ethical implications of AI-enhanced national security apparatus.

So far, attempts to draw tech companies into more national security contracts have had mixed results.

Employee protests forced Google to promise not to renew its piece of a Pentagon program, Project Maven, created to show how tech companies could help military AI projects. Microsoft has also faced internal protests over contracts with the Army and Immigration and Customs Enforcement.

Yet Microsoft CEO Satya Nadella and his Amazon counterpart Jeff Bezos have issued full-throated statements in support of taking national security contracts. Last month, Microsoft won a $10 billion Pentagon cloud-computing contract motivated in part by a desire to improve the department's AI capabilities. Deals like that could become more common if the commission proves to be influential.

Read the rest here:

Report: The Government and Tech Need to Cooperate on AI - WIRED
