Artificial Intelligence Platform Reduces Hospital Admissions By Over 50% In Trial – Forbes

Clare Medical's AI-based diagnostic tool reduced the need for admissions and other interventions by over 50% in clinical trials.

An artificial intelligence-based diagnostic tool has reduced hospital admissions by 51% among at-risk elderly patients, according to results of a trial released by health care provider Clare Medical.

Announcing the results of its trial on Wednesday, Clare Medical concluded that predicting which patients have a higher probability of experiencing medical "events" significantly reduces the probability of such events occurring. This is because its AI-based tool provides clinicians with the opportunity for early intervention.

The trial found that AI-based diagnostics significantly reduce a patient's risk of requiring a hospital visit within 30 days. At a time when the coronavirus pandemic is placing significant strain on the world's health systems, such outcomes could save hospitals substantial time and money, not to mention the benefits for patients themselves.

"This diagnostic tool has the potential to be a paradigm shift in how we surveil and monitor patients. By providing an alert to physicians for high risk patients, we believe it provides a remarkable ability to intervene early and positively alter a patient's care trajectory," says Clare Medical CEO Ron Lipstein.

The study was conducted with predominantly elderly patients with at least one underlying medical condition. Compared to the general population, such patients are at high risk for a variety of clinically significant outcomes, including urinary tract infections, pneumonia, falls and fractures, the exacerbation of chronic obstructive pulmonary disease, worsening diabetes, and worsening chronic kidney disease.

However, by using its artificial intelligence-based tool, which uses an algorithm to screen patients' medical charts and data, Clare Medical's team was able to identify 12.8% of the participating patients as being at increased 30-day risk of requiring a hospital or emergency room visit. This resulted in physicians being notified and patient cases being reviewed and acted on. As a consequence, only 6.3% of patients (51% fewer) actually ended up requiring a hospital or ER visit.
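
As a quick sanity check, the headline 51% figure is consistent with comparing the share of patients flagged as at-risk (12.8%) against the share that ultimately needed a visit (6.3%); the short calculation below reproduces it from those two numbers alone. The interpretation of which rates are being compared is my reading of the release, not something Clare Medical spells out.

```python
# Back-of-the-envelope check on the reported 51% reduction, using only the two
# percentages quoted above. Assumes the comparison is flagged rate vs. actual
# visit rate; Clare Medical does not publish the underlying patient counts.
flagged_rate = 0.128  # share of patients flagged as at increased 30-day risk
visit_rate = 0.063    # share that actually required a hospital or ER visit

relative_reduction = (flagged_rate - visit_rate) / flagged_rate
print(f"Relative reduction: {relative_reduction:.0%}")  # prints ~51%
```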

Clare Medical estimates that a single hospital admission involving an elderly patient with several comorbidities costs more than $30,000.

At the same time, admission after the onset of a condition puts the patient themselves at greater risk. This risk is currently being heightened by the fact that the burdens of the coronavirus pandemic have made hospitals and health systems less able to screen patients in general.

For example, the UK Government estimated in July that declines in emergency care, adult social care, elective care and primary care could result in 10,000, 16,000, 12,500 and 1,400 excess deaths respectively over 12 months (the figure for elective care covers a five-year timeframe), assuming that Covid-19 care continues to impact medical treatment for other conditions.

This is the kind of problem that AI diagnostics could help ease. By reducing the time and resources needed to provide reliable indicators of likely health events and conditions, artificial intelligence-based tools could potentially make it easier for hospitals to confront Covid-19 while still continuing to screen patients for other diseases.

And reassuringly enough, Clare Medical is far from being the only company using AI to accelerate and improve diagnosis. In July, Israel-based medical data analytics startup Diagnostic Robotics signed a deal with the American medical centre Mayo Clinic, which will use the startup's AI-based tools to speed up the process of diagnosing and triaging patients in hospitals and emergency rooms. Likewise, this year has produced its fair share of academic research indicating that AI algorithms are as "effective as radiologists" in screening for breast cancer, for instance.

The coronavirus pandemic has given hospitals extra impetus for involving artificial intelligence in the diagnostic process, while AI models have recently been developed for detecting asymptomatic carriers of Covid-19.

In other words, it's highly unlikely that Clare Medical will be the last medical provider to trial and roll out the use of AI. With the coronavirus potentially staying with us for some years to come, and with the world's population getting older, hospitals will only have more to do in the future, not less.

The AI industry is built on geographic and social inequality, research shows – VentureBeat

The arm of global inequality is long, rendering itself visible particularly in the development of AI and machine learning systems. In a recent paper, researchers at Cornell, the Université de Montréal, the National Institute of Statistical Sciences (U.S.), and Princeton argue that this inequality in the AI industry involves a concentration of profits and raises the danger of ignoring the contexts to which AI is applied.

As AI systems become increasingly ingrained in society, they said, those responsible for developing and implementing such systems stand to profit to a large extent. And if these players are predominantly located in economic powerhouses like the U.S., China, and the E.U., a disproportionate share of economic benefit will fall inside of these regions, exacerbating the inequality.

Whether explicitly in response to this inequality or not, calls have been made for broader inclusion in the development of AI. At the same time, some have acknowledged the limitations of inclusion. For example, in an analysis of publications at two major machine learning conference venues, NeurIPS 2020 and ICML 2020, none of the top 10 countries in terms of publication index were located in Latin America, Africa, or Southeast Asia, the coauthors of this new study note. Moreover, the full lists of the top 100 universities and top 100 companies by publication index included no companies or universities based in Africa or Latin America.

This inequality manifests in part in data collection. Previous research has found that ImageNet and OpenImages, two large, publicly available image datasets, are U.S.- and Euro-centric. Models trained on these datasets perform worse on images from Global South countries. For example, images of grooms are classified with lower accuracy when they come from Ethiopia and Pakistan, compared to images of grooms from the United States. In the same vein, because of how images of words like "wedding" or "spices" are presented in distinctly different cultures, publicly available object recognition systems fail to correctly classify many of these objects when they come from the Global South.

Labels, the annotations from which AI models learn relationships in data, also bear the hallmarks of inequality. A major venue for crowdsourcing labeling work is Amazon Mechanical Turk, but fewer than an estimated 2% of Mechanical Turk workers come from the Global South, with the vast majority originating from the U.S. and India. Not only are the tasks monotonous and the wages low (on Samasource, another crowdsourcing platform, workers earn around $8 a day), but a number of barriers to participation exist. A computer and reliable internet connection are required, and on Amazon Mechanical Turk, U.S. bank accounts and gift cards are the only forms of payment.

As the researchers point out, ImageNet, which has been essential to recent progress in computer vision, wouldn't have been possible without the work of data labelers. But the ImageNet workers themselves made a median wage of $2 per hour, with only 4% making more than the U.S. federal minimum wage of $7.25 per hour, itself a far cry from a living wage.

"As [a] significant part of the data collection pipeline, data labeling is an extremely low-paying job involving rote, repetitive tasks that offer no room for upward mobility," the coauthors wrote. "Individuals may not require many technical skills to label data, but they do not develop any meaningful technical skills either. The anonymity of platforms like Amazon's Mechanical Turk inhibit the formation of social relationships between the labeler and the client that could otherwise have led to further educational opportunities or better remuneration. Although data is central to the AI systems of today, data labelers receive only a disproportionately tiny portion of the profits of building these systems."

The coauthors also find inequality in the AI research labs established by tech giants like Google, Microsoft, Facebook, and others. Despite these centers' presence throughout the Global South, they tend to be concentrated in certain countries, especially India, Brazil, Ghana, and Kenya. And the positions there often require technical expertise which the local population might not have, as illustrated by AI researchers' and practitioners' tendency to work and study in places outside of their home countries. The coauthors cite a recent report from Georgetown University's Center for Security and Emerging Technology that found that while 42 of the 62 major AI labs are located outside of the U.S., 68% of the staff are located within the United States.

"Even with long-term investment into regions in the Global South, the question remains of whether local residents are provided opportunities to join management and contribute to important strategic decisions," the coauthors wrote. "True inclusion necessitates that underrepresented voices can be found in all ranks of a company's hierarchy, including in positions of upper management. Tech companies which are establishing a footprint in these regions are uniquely positioned to offer this opportunity to natives of the region."

The coauthors are encouraged by the efforts of organizations like Khipu and Black in AI, which have identified students, researchers, and practitioners in the field of AI and made improvements in increasing the number of Latin American and Black scholars attending and publishing at premier AI conferences. Other communities based on the African continent, like Data Science Africa, Masakhane, and Deep Learning Indaba, have expanded their efforts with conferences, workshops, and dissertation awards and developed curricula for the wider African AI community.

But this being the case, the coauthors say a key component of future inclusion efforts should be to elevate the involvement and participation of those historically excluded from AI development. Currently, they argue, data labelers are often wholly detached from the rest of the machine learning pipeline, with workers oftentimes not knowing how their labor will be used nor for what purpose. The coauthors say these workers should be provided with education opportunities that allow them to contribute to the models they are building in ways beyond labeling.

"Little sense of fulfillment comes from menial tasks [like labeling], and by exploiting these workers solely for their produced knowledge without bringing them into the fold of the product that they are helping to create, a deep chasm exists between workers and the downstream product," the coauthors wrote. "Similarly, where participation in the form of model development is the norm, employers should seek to involve local residents in the ranks of management and in the process of strategic decision-making."

While acknowledging that it isn't an easy task, the coauthors suggest embracing AI development as a path forward for economic development. Rather than relying upon foreign spearheading of AI systems for domestic application, where returns from these systems often aren't reinvested domestically, they encourage countries to create domestic AI development activity focused on high-productivity activities like model development, deployment, and research.

"As the development of AI continues to progress across the world, the exclusion of those from communities most likely to bear the brunt of algorithmic inequity only stands to worsen," the coauthors wrote. "We hope the actions we propose can help to begin the movement of communities in the Global South from being just beneficiaries or subjects of AI systems to being active, engaged participants. Having true agency over the AI systems integrated into the livelihoods of communities in the Global South will maximize the impact of these systems and lead the way for global inclusion of AI."

Stavros Niarchos Foundation Conference explores humanity’s future with AI, the post-COVID world, and other pressing topics – PRNewswire

NEW YORK, June 10, 2020 /PRNewswire/ -- Society seems to be at an inflection point on a number of fronts, but there's no consensus on what exactly we're pivoting away from, much less what we're turning toward. The SNF Conference, held online June 22-23, 2020 as part of the SNFestival: RetroFuture Edition, will explore how we can "bounce forward" into the post-pandemic world we want, how to ensure the growth of AI and other technology ends up working for humanity and not against it, and the evolving role of philanthropy in responding to crisis.

Icons and iconoclasts, innovators and prognosticators, artists and artificial intelligence researchers will come together to reflect on where we've arrived and consider big questions about what the future could and should hold. Speakers Include:

The free two-day event, also featuring SNF Co-President Andreas Dracopoulos, will open a dialogue between themes explored at past SNF Conferences, next year's Conference on Humanity and Artificial Intelligence, and our unprecedented present. Journalist Anna-Kynthia Bousdoukou, Managing Director of iMEdD and Executive Director of SNF DIALOGUES, will host.

For the past eight years, the SNF Conference has brought together top thinkers and visionaries to explore critical questions on the future of humanity and society and raise the level of democratic discourse. In this time of pandemic and rapid technological change, such dialogue helps us understand what is happening around us and the new choices ahead of us.

To see the full conference schedule and register, visit SNFConference.org.

The SNF Conference is one of the signature events of the free Summer Nostos Festival, this year taking place in a special virtual format from June 21-28, 2020. From a performance by rising Nigerian superstar Burna Boy, to events curated by magician Mark Mitton that will surprise and delight kids of all ages, to a tour of William Kentridge's South Africa studio led by the artist himself, to readings by esteemed actors from Selected Shorts, to a sing-along with Choir! Choir! Choir!, to an interactive Theater of War performance featuring Oscar Isaac, Jeffrey Wright and Frances McDormand, there's something for everyone.

Wherever you'll be, tune in for activities for all ages, a sneak preview of the dynamic shows to come at the 2021 SNFestival, thrilling performances by international artists, highlights from summers past, DJ beats, a virtual run, and much more. Explore the lineup.

Held each June, the free SNFestival is organized and made possible by the Stavros Niarchos Foundation (SNF).

About the Stavros Niarchos Foundation (SNF)

The Stavros Niarchos Foundation (SNF) is one of the world's leading private, international philanthropic organizations, making grants to nonprofit organizations in the areas of arts and culture, education, health and sports, and social welfare. SNF funds organizations and projects worldwide that aim to achieve a broad, lasting, and positive impact for society at large and exhibit strong leadership and sound management. The Foundation also supports projects that facilitate the formation of public-private partnerships as an effective means for serving public welfare.

Since 1996, the Foundation has committed more than $3 billion through more than 4,600 grants to nonprofit organizations in 126 nations around the world.

See more at SNF.org.

Media Contact

Maggie Fiertz, [email protected], 646-307-6315

SOURCE Stavros Niarchos Foundation

AI Can Write in English. Now It’s Learning Other Languages – WIRED

"What's surprising about these large language models is how much they know about how the world works simply from reading all the stuff that they can find," says Chris Manning, a professor at Stanford who specializes in AI and language.

But GPT and its ilk are essentially very talented statistical parrots. They learn how to re-create the patterns of words and grammar that are found in language. That means they can blurt out nonsense, wildly inaccurate facts, and hateful language scraped from the darker corners of the web.
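
To make the "statistical parrot" idea concrete, here is a toy sketch: a bigram model that only replays word patterns it has seen in its training text, with no notion of truth or meaning. It is purely illustrative. GPT-3 and its relatives use large neural networks rather than lookup tables, but the underlying point, that fluent-looking output can come from pattern re-creation alone, is the same.

```python
# Toy illustration of the "statistical parrot" idea: a bigram model that only
# re-creates word patterns it has seen, with no understanding of meaning.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words follow which in the training text.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def babble(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample from observed patterns
        out.append(word)
    return " ".join(out)

print(babble("the"))  # fluent-looking but meaning-free, e.g. "the cat sat on the rug"
```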

Amnon Shashua, a professor of computer science at the Hebrew University of Jerusalem, is the cofounder of another startup building an AI model based on this approach. He knows a thing or two about commercializing AI, having sold his last company, Mobileye, which pioneered using AI to help cars spot things on the road, to Intel in 2017 for $15.3 billion.

Shashua's new company, AI21 Labs, which came out of stealth last week, has developed an AI algorithm, called Jurassic-1, that demonstrates striking language skills in both English and Hebrew.

In demos, Jurassic-1 can generate paragraphs of text on a given subject, dream up catchy headlines for blog posts, write simple bits of computer code, and more. Shashua says the model is more sophisticated than GPT-3, and he believes that future versions of Jurassic may be able to build a kind of common-sense understanding of the world from the information they gather.

Other efforts to re-create GPT-3 reflect the world's (and the internet's) diversity of languages. In April, researchers at Huawei, the Chinese tech giant, published details of a GPT-like Chinese language model called PanGu-alpha (written as PanGu-α). In May, Naver, a South Korean search giant, said it had developed its own language model, called HyperCLOVA, that speaks Korean.

Jie Tang, a professor at Tsinghua University, leads a team at the Beijing Academy of Artificial Intelligence that developed another Chinese language model called Wudao (meaning "enlightenment") with help from government and industry.

The Wudao model is considerably larger than any other, meaning that its simulated neural network is spread across more cloud computers. Increasing the size of the neural network was key to making GPT-2 and -3 more capable. Wudao can also work with both images and text, and Tang has founded a company to commercialize it. "We believe that this can be a cornerstone of all AI," Tang says.

Such enthusiasm seems warranted by the capabilities of these new AI programs, but the race to commercialize such language models may also move more quickly than efforts to add guardrails or limit misuses.

Perhaps the most pressing worry about AI language models is how they might be misused. Because the models can churn out convincing text on a subject, some people worry that they could easily be used to generate bogus reviews, spam, or fake news.

"I would be surprised if disinformation operators don't at least invest serious energy experimenting with these models," says Micah Musser, a research analyst at Georgetown University who has studied the potential for language models to spread misinformation.

Musser says research suggests that it won't be possible to use AI to catch disinformation generated by AI. There's unlikely to be enough information in a tweet for a machine to judge whether it was written by a machine.

More problematic kinds of bias may be lurking inside these gigantic language models, too. Research has shown that language models trained on Chinese internet content will reflect the censorship that shaped that content. The programs also inevitably capture and reproduce subtle and overt biases around race, gender, and age in the language they consume, including hateful statements and ideas.

Similarly, these big language models may fail in surprising or unexpected ways, adds Percy Liang, another computer science professor at Stanford and the lead researcher at a new center dedicated to studying the potential of powerful, general-purpose AI models like GPT-3.

Why Fujifilm SonoSite is betting the future of ultrasound on artificial intelligence – GeekWire

Fujifilm SonoSite CEO Richard Fabian holds up the SonoSite 180, the company's first mobile ultrasound device that debuted in 1998, during the Life Science Washington Summit on Oct. 25, 2019. (GeekWire Photo / James Thorne)

Decades of technological advances have led to a revolution in ultrasound machines that has given rise to modern devices that weigh less than a pound and can display images on smartphones. But they still require an expert to make sense of the resulting images.

"It's not as easy as it looks," said Richard Fabian, CEO of Fujifilm SonoSite, a pioneer of ultrasound technologies. "A slight movement of your hand means all the difference in the world."

That's why SonoSite is focused on a future in which artificial intelligence helps healthcare workers make sense of ultrasounds in real time. The idea is that computers can be trained to identify and label critical pieces of a medical image to help clinicians get answers without the need for specially trained radiologists.

"Using AI you can really quickly interpret what's going on. And the focus is on accuracy, it's on confidence, and it's on expanding ultrasound users," Fabian said during a talk at Life Science Washington's annual summit in Bellevue, Wash., on Friday.

Bothell, Wash.-based SonoSite recently partnered with the Allen Institute for Artificial Intelligence (AI2) in Seattle on an effort to train AI to interpret ultrasound images. To train the models, SonoSite is using large quantities of clinical data gathered with the help of Partners HealthCare, a Boston-based hospital system.

Artificial intelligence has shown promise in interpreting medical imaging to diagnose diseases like early-stage lung cancer, breast cancer, and cervical cancer. The advancements have drawn tech leaders including Google and Microsoft, who hope their AI and cloud capabilities can one day be an essential element of healthcare diagnostics.

SonoSite was initially launched with the idea of creating portable ultrasounds for the military. Its lightweight units are widely used by healthcare teams in both low-resource settings and emergency rooms.

Ultrasound imaging is significantly more affordable and portable than X-ray imaging, CT scans or PET scans, without the risk of radiation exposure. While the images it provides are not as clear, researchers think deep learning can make up some of that difference.

AI2 researchers are in the process of training deep learning models on ultrasound images in which the veins and arteries have been labeled by sonographers. One application of the AI-powered ultrasound would be to help clinicians find veins much faster and more accurately.
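
For readers curious what "training deep learning models on labeled ultrasound images" looks like in practice, here is a minimal, generic sketch of a supervised segmentation setup: frames paired with sonographer-drawn vessel masks, a small convolutional network, and a pixel-wise loss. The dataset class, tiny model, and hyperparameters are illustrative assumptions, not details of the SonoSite/AI2 work, which would likely use a far larger architecture such as a U-Net.

```python
# Illustrative sketch only: a generic supervised segmentation loop of the kind
# described above (ultrasound frames paired with sonographer-drawn vessel masks).
# Model choice, dataset layout, and hyperparameters are assumptions, not details
# of the SonoSite/AI2 project.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset


class UltrasoundVesselDataset(Dataset):
    """Hypothetical dataset: grayscale ultrasound frames and binary vessel masks."""

    def __init__(self, frames, masks):
        self.frames = frames  # list of HxW float tensors
        self.masks = masks    # list of HxW {0,1} tensors

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, i):
        # Add a channel dimension so each sample is 1xHxW.
        return self.frames[i].unsqueeze(0), self.masks[i].unsqueeze(0)


# A deliberately tiny stand-in for a real segmentation network (e.g. a U-Net).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # pixel-wise "vessel vs. not vessel"


def train_one_epoch(loader: DataLoader) -> float:
    """Run one pass over the labeled frames and return the average loss."""
    total = 0.0
    for frames, masks in loader:
        optimizer.zero_grad()
        logits = model(frames)
        loss = loss_fn(logits, masks.float())
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / max(len(loader), 1)
```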

Fabian also gave the example of AI models labeling things such as organs and fluid build-ups inside the body, which could inform care decisions without the need for specialists. He thinks that future ultrasounds could deliver medical insights without ever displaying an image.

"If ultrasound becomes cheap enough, it could become a patch [that gives] you the information that you need," said Fabian.

The messy, secretive reality behind OpenAI's bid to save the world – MIT Technology Review

Every year, OpenAI's employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It's mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.

In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet's DeepMind. It is also a darling in Silicon Valley, counting Elon Musk and legendary investor Sam Altman among its founders.

Above all, it is lionized for its mission. Its goal is to be the first to create AGI: a machine with the learning and reasoning powers of a human mind. The purpose is not world domination; rather, the lab wants to ensure that the technology is developed safely and its benefits distributed evenly to the world.

The implication is that AGI could easily run amok if the technology's development is left to follow the path of least resistance. Narrow intelligence, the kind of clumsy AI that surrounds us today, has already served as an example. We now know that algorithms are biased and fragile; they can perpetrate great abuse and great deception; and the expense of developing and running them tends to concentrate their power in the hands of a few. By extrapolation, AGI could be catastrophic without the careful guidance of a benevolent shepherd.

OpenAI wants to be that shepherd, and it has carefully crafted its image to fit the bill. In a field dominated by wealthy corporations, it was founded as a nonprofit. Its first announcement said that this distinction would allow it to "build value for everyone rather than shareholders." Its charter, a document so sacred that employees' pay is tied to how well they adhere to it, further declares that OpenAI's "primary fiduciary duty is to humanity." Attaining AGI safely is so important, it continues, that if another organization were close to getting there first, OpenAI would stop competing with it and collaborate instead. This alluring narrative plays well with investors and the media, and in July Microsoft injected the lab with a fresh $1 billion.

But three days at OpenAI's office, and nearly three dozen interviews with past and current employees, collaborators, friends, and other experts in the field, suggest a different picture. There is a misalignment between what the company publicly espouses and how it operates behind closed doors. Over time, it has allowed a fierce competitiveness and mounting pressure for ever more funding to erode its founding ideals of transparency, openness, and collaboration. Many who work or worked for the company insisted on anonymity because they were not authorized to speak or feared retaliation. Their accounts suggest that OpenAI, for all its noble aspirations, is obsessed with maintaining secrecy, protecting its image, and retaining the loyalty of its employees.

Since its earliest conception, AI as a field has strived to understand human-like intelligence and then re-create it. In 1950, Alan Turing, the renowned English mathematician and computer scientist, began a paper with the now-famous provocation "Can machines think?" Six years later, captivated by the nagging idea, a group of scientists gathered at Dartmouth College to formalize the discipline.

"It is one of the most fundamental questions of all intellectual history, right?" says Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence (AI2), a Seattle-based nonprofit AI research lab. "It's like, do we understand the origin of the universe? Do we understand matter?"

The trouble is, AGI has always remained vague. No one can really describe what it might look like or the minimum of what it should do. It's not obvious, for instance, that there is only one kind of general intelligence; human intelligence could just be a subset. There are also differing opinions about what purpose AGI could serve. In the more romanticized view, a machine intelligence unhindered by the need for sleep or the inefficiency of human communication could help solve complex challenges like climate change, poverty, and hunger.

But the resounding consensus within the field is that such advanced capabilities would take decades, even centuries, if indeed it's possible to develop them at all. Many also fear that pursuing this goal overzealously could backfire. In the 1970s and again in the late '80s and early '90s, the field overpromised and underdelivered. Overnight, funding dried up, leaving deep scars in an entire generation of researchers. "The field felt like a backwater," says Peter Eckersley, until recently director of research at the industry group Partnership on AI, of which OpenAI is a member.

Against this backdrop, OpenAI entered the world with a splash on December 11, 2015. It wasn't the first to openly declare it was pursuing AGI; DeepMind had done so five years earlier and had been acquired by Google in 2014. But OpenAI seemed different. For one thing, the sticker price was shocking: the venture would start with $1 billion from private investors, including Musk, Altman, and PayPal cofounder Peter Thiel.

The star-studded investor list stirred up a media frenzy, as did the impressive list of initial employees: Greg Brockman, who had run technology for the payments company Stripe, would be chief technology officer; Ilya Sutskever, who had studied under AI pioneer Geoffrey Hinton, would be research director; and seven researchers, freshly graduated from top universities or plucked from other companies, would compose the core technical team. (Last February, Musk announced that he was parting ways with the company over disagreements about its direction. A month later, Altman stepped down as president of startup accelerator Y Combinator to become OpenAIs CEO.)

But more than anything, OpenAI's nonprofit status made a statement. "It'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest," the announcement said. "Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world." Though it never made the criticism explicit, the implication was clear: other labs, like DeepMind, could not serve humanity because they were constrained by commercial interests. While they were closed, OpenAI would be open.

In a research landscape that had become increasingly privatized and focused on short-term financial gains, OpenAI was offering a new way to fund progress on the biggest problems. "It was a beacon of hope," says Chip Huyen, a machine learning expert who has closely followed the lab's journey.

At the intersection of 18th and Folsom Streets in San Francisco, OpenAI's office looks like a mysterious warehouse. The historic building has drab gray paneling and tinted windows, with most of the shades pulled down. The letters "PIONEER BUILDING," the remnants of its bygone owner, the Pioneer Truck Factory, wrap around the corner in faded red paint.

Inside, the space is light and airy. The first floor has a few common spaces and two conference rooms. One, a healthy size for larger meetings, is called A Space Odyssey; the other, more of a glorified phone booth, is called Infinite Jest. This is the space I'm restricted to during my visit. I'm forbidden to visit the second and third floors, which house everyone's desks, several robots, and pretty much everything interesting. When it's time for their interviews, people come down to me. An employee trains a watchful eye on me in between meetings.

On the beautiful blue-sky day that I arrive to meet Brockman, he looks nervous and guarded. "We've never given someone so much access before," he says with a tentative smile. He wears casual clothes and, like many at OpenAI, sports a shapeless haircut that seems to reflect an efficient, no-frills mentality.

Brockman, 31, grew up on a hobby farm in North Dakota and had what he describes as a focused, quiet childhood. He milked cows, gathered eggs, and fell in love with math while studying on his own. In 2008, he entered Harvard intending to double-major in math and computer science, but he quickly grew restless to enter the real world. He dropped out a year later, entered MIT instead, and then dropped out again within a matter of months. The second time, his decision was final. Once he moved to San Francisco, he never looked back.

Brockman takes me to lunch to remove me from the office during an all-company meeting. In the café across the street, he speaks about OpenAI with intensity, sincerity, and wonder, often drawing parallels between its mission and landmark achievements of science history. It's easy to appreciate his charisma as a leader. Recounting memorable passages from the books he's read, he zeroes in on the Valley's favorite narrative, America's race to the moon. ("One story I really love is the story of the janitor," he says, referencing a famous yet probably apocryphal tale. "Kennedy goes up to him and asks him, 'What are you doing?' and he says, 'Oh, I'm helping put a man on the moon!'") There's also the transcontinental railroad ("It was actually the last megaproject done entirely by hand ... a project of immense scale that was totally risky") and Thomas Edison's incandescent lightbulb ("A committee of distinguished experts said 'It's never gonna work,' and one year later he shipped").

Brockman is aware of the gamble OpenAI has taken on, and aware that it evokes cynicism and scrutiny. But with each reference, his message is clear: People can be skeptical all they want. It's the price of daring greatly.

Those who joined OpenAI in the early days remember the energy, excitement, and sense of purpose. The team was small, formed through a tight web of connections, and management stayed loose and informal. Everyone believed in a flat structure where ideas and debate would be welcome from anyone.

Musk played no small part in building a collective mythology. "The way he presented it to me was 'Look, I get it. AGI might be far away, but what if it's not?'" recalls Pieter Abbeel, a professor at UC Berkeley who worked there, along with several of his students, in the first two years. "'What if it's even just a 1% or 0.1% chance that it's happening in the next five to 10 years? Shouldn't we think about it very carefully?' That resonated with me," he says.

But the informality also led to some vagueness of direction. In May 2016, Altman and Brockman received a visit from Dario Amodei, then a Google researcher, who told them no one understood what they were doing. In an account published in the New Yorker, it wasn't clear the team itself knew either. "Our goal right now is to do the best thing there is to do," Brockman said. "It's a little vague."

Nonetheless, Amodei joined the team a few months later. His sister, Daniela Amodei, had previously worked with Brockman, and he already knew many of OpenAI's members. After two years, at Brockman's request, Daniela joined too. "Imagine: we started with nothing," Brockman says. "We just had this ideal that we wanted AGI to go well."

By March of 2017, 15 months in, the leadership realized it was time for more focus. So Brockman and a few other core members began drafting an internal document to lay out a path to AGI. But the process quickly revealed a fatal flaw. As the team studied trends within the field, they realized staying a nonprofit was financially untenable. The computational resources that others in the field were using to achieve breakthrough results were doubling every 3.4 months. It became clear that "in order to stay relevant," Brockman says, they would need enough capital to match or exceed this exponential ramp-up. That required a new organizational model that could rapidly amass money, while somehow also staying true to the mission.
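
To give a sense of what a 3.4-month doubling time implies, the short calculation below, which uses only the figure cited above, works out the rough year-over-year growth in training compute.

```python
# Rough implication of the 3.4-month doubling time cited above for the compute
# behind headline AI results (illustrative arithmetic only).
doubling_time_months = 3.4
growth_per_year = 2 ** (12 / doubling_time_months)

print(f"~{growth_per_year:.0f}x more compute per year")     # roughly 12x
print(f"~{growth_per_year ** 2:.0f}x more over two years")  # roughly 130x
```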

Unbeknownst to the public, and most employees, it was with this in mind that OpenAI released its charter in April of 2018. The document re-articulated the lab's core values but subtly shifted the language to reflect the new reality. Alongside its commitment to "avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power," it also stressed the need for resources. "We anticipate needing to marshal substantial resources to fulfill our mission," it said, "but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

"We spent a long time internally iterating with employees to get the whole company bought into a set of principles," Brockman says. "Things that had to stay invariant even if we changed our structure."

That structure change happened in March 2019. OpenAI shed its purely nonprofit status by setting up a "capped profit" arm, a for-profit with a 100-fold limit on investors' returns, albeit overseen by a board that's part of a nonprofit entity. Shortly after, it announced Microsoft's billion-dollar investment (though it didn't reveal that this was split between cash and credits to Azure, Microsoft's cloud computing platform).

Predictably, the move set off a wave of accusations that OpenAI was going back on its mission. In a post on Hacker News soon after the announcement, a user asked how a 100-fold limit would be limiting at all: "Early investors in Google have received a roughly 20x return on their capital," they wrote. "Your bet is that you'll have a corporate structure which returns orders of magnitude more than Google ... but you don't want to unduly concentrate power? How will this work? What exactly is power, if not the concentration of resources?"

The move also rattled many employees, who voiced similar concerns. To assuage internal unrest, the leadership wrote up an FAQ as part of a series of highly protected transition docs. "Can I trust OpenAI?" one question asked. "Yes," began the answer, followed by a paragraph of explanation.

The charter is the backbone of OpenAI. It serves as the springboard for all the lab's strategies and actions. Throughout our lunch, Brockman recites it like scripture, an explanation for every aspect of the company's existence. ("By the way," he clarifies halfway through one recitation, "I guess I know all these lines because I spent a lot of time really poring over them to get them exactly right. It's not like I was reading this before the meeting.")

How will you ensure that humans continue to live meaningful lives as you develop more advanced capabilities? "As we wrote, we think its impact should be to give everyone economic freedom, to let them find new opportunities that aren't imaginable today." How will you structure yourself to evenly distribute AGI? "I think a utility is the best analogy for the vision that we have. But again, it's all subject to the charter." How do you compete to reach AGI first without compromising safety? "I think there is absolutely this important balancing act, and our best shot at that is what's in the charter."

For Brockman, rigid adherence to the document is what makes OpenAI's structure work. Internal alignment is treated as paramount: all full-time employees are required to work out of the same office, with few exceptions. For the policy team, especially Jack Clark, the director, this means a life divided between San Francisco and Washington, DC. Clark doesn't mind; in fact, he agrees with the mentality. It's the in-between moments, like lunchtime with colleagues, he says, that help keep everyone on the same page.

In many ways, this approach is clearly working: the company has an impressively uniform culture. The employees work long hours and talk incessantly about their jobs through meals and social hours; many go to the same parties and subscribe to the rational philosophy of effective altruism. They crack jokes using machine-learning terminology to describe their lives: "What is your life a function of?" "What are you optimizing for?" "Everything is basically a minmax function." To be fair, other AI researchers also love doing this, but people familiar with OpenAI agree: more than others in the field, its employees treat AI research not as a job but as an identity. (In November, Brockman married his girlfriend of one year, Anna, in the office against a backdrop of flowers arranged in an OpenAI logo. Sutskever acted as the officiant; a robot hand was the ring bearer.)

But at some point in the middle of last year, the charter became more than just lunchtime conversation fodder. Soon after switching to a capped-profit, the leadership instituted a new pay structure based in part on each employee's absorption of the mission. Alongside columns like "engineering expertise" and "research direction" in a spreadsheet tab titled "Unified Technical Ladder," the last column outlines the culture-related expectations for every level. Level 3: "You understand and internalize the OpenAI charter." Level 5: "You ensure all projects you and your team-mates work on are consistent with the charter." Level 7: "You are responsible for upholding and improving the charter, and holding others in the organization accountable for doing the same."

The first time most people ever heard of OpenAI was on February 14, 2019. That day, the lab announced impressive new research: a model that could generate convincing essays and articles at the push of a button. Feed it a sentence from The Lord of the Rings or the start of a (fake) news story about Miley Cyrus shoplifting, and it would spit out paragraph after paragraph of text in the same vein.

But there was also a catch: the model, called GPT-2, was too dangerous to release, the researchers said. If such powerful technology fell into the wrong hands, it could easily be weaponized to produce disinformation at immense scale.

The backlash among scientists was immediate. OpenAI was pulling a publicity stunt, some said. GPT-2 was not nearly advanced enough to be a threat. And if it was, why announce its existence and then preclude public scrutiny? "It seemed like OpenAI was trying to capitalize off of panic around AI," says Britt Paris, an assistant professor at Rutgers University who studies AI-generated disinformation.

By May, OpenAI had revised its stance and announced plans for a "staged release." Over the following months, it successively dribbled out more and more powerful versions of GPT-2. In the interim, it also engaged with several research organizations to scrutinize the algorithm's potential for abuse and develop countermeasures. Finally, it released the full code in November, having found, it said, "no strong evidence of misuse so far."

Amid continued accusations of publicity-seeking, OpenAI insisted that GPT-2 hadn't been a stunt. It was, rather, a carefully thought-out experiment, agreed on after a series of internal discussions and debates. The consensus was that even if it had been slight overkill this time, the action would set a precedent for handling more dangerous research. Besides, the charter had predicted that safety and security concerns would gradually oblige the lab to "reduce our traditional publishing in the future."

This was also the argument that the policy team carefully laid out in its six-month follow-up blog post, which they discussed as I sat in on a meeting. "I think that is definitely part of the success-story framing," said Miles Brundage, a policy research scientist, highlighting something in a Google doc. "The lead of this section should be: We did an ambitious thing, now some people are replicating it, and here are some reasons why it was beneficial."

But OpenAI's media campaign with GPT-2 also followed a well-established pattern that has made the broader AI community leery. Over the years, the lab's big, splashy research announcements have been repeatedly accused of fueling the AI hype cycle. More than once, critics have also accused the lab of talking up its results to the point of mischaracterization. For these reasons, many in the field have tended to keep OpenAI at arm's length.

This hasn't stopped the lab from continuing to pour resources into its public image. As well as research papers, it publishes its results in highly produced company blog posts for which it does everything in-house, from writing to multimedia production to design of the cover images for each release. At one point, it also began developing a documentary on one of its projects to rival a 90-minute movie about DeepMind's AlphaGo. It eventually spun the effort out into an independent production, which Brockman and his wife, Anna, are now partially financing. (I also agreed to appear in the documentary to provide technical explanation and context to OpenAI's achievement. I was not compensated for this.)

And as the blowback has increased, so have internal discussions to address it. Employees have grown frustrated at the constant outside criticism, and the leadership worries it will undermine the lab's influence and ability to hire the best talent. An internal document highlights this problem and an outreach strategy for tackling it: "In order to have government-level policy influence, we need to be viewed as the most trusted source on ML [machine learning] research and AGI," says a line under the "Policy" section. "Widespread support and backing from the research community is not only necessary to gain such a reputation, but will amplify our message." Another, under "Strategy," reads, "Explicitly treat the ML community as a comms stakeholder. Change our tone and external messaging such that we only antagonize them when we intentionally choose to."

There was another reason GPT-2 had triggered such an acute backlash. People felt that OpenAI was once again walking back its earlier promises of openness and transparency. With news of the for-profit transition a month later, the withheld research made people even more suspicious. Could it be that the technology had been kept under wraps in preparation for licensing it in the future?

But little did people know this wasn't the only time OpenAI had chosen to hide its research. In fact, it had kept another effort entirely secret.

There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it's just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won't be enough.

Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

Brockman and Sutskever deny that this is their sole strategy, but the lab's tightly guarded research suggests otherwise. A team called Foresight runs experiments to test how far they can push AI capabilities forward by training existing algorithms with increasingly large amounts of data and computing power. For the leadership, the results of these experiments have confirmed its instincts that the lab's all-in, compute-driven strategy is the best approach.

For roughly six months, these results were hidden from the public because OpenAI sees this knowledge as its primary competitive advantage. Employees and interns were explicitly instructed not to reveal them, and those who left signed nondisclosure agreements. It was only in January that the team, without the usual fanfare, quietly posted a paper on one of the primary open-source databases for AI research. People who experienced the intense secrecy around the effort didn't know what to make of this change. Notably, another paper with similar results from different researchers had been posted a few months earlier.

In the beginning, this level of secrecy was never the intention, but it has since become habitual. Over time, the leadership has moved away from its original belief that openness is the best way to build beneficial AGI. Now the importance of keeping quiet is impressed on those who work with or at the lab. This includes never speaking to reporters without the express permission of the communications team. After my initial visits to the office, as I began contacting different employees, I received an email from the head of communications reminding me that all interview requests had to go through her. When I declined, saying that this would undermine the validity of what people told me, she instructed employees to keep her informed of my outreach. A Slack message from Clark, a former journalist, later commended people for keeping a tight lid as a reporter was sniffing around.

In a statement responding to this heightened secrecy, an OpenAI spokesperson referred back to a section of its charter. "We expect that safety and security concerns will reduce our traditional publishing in the future," the section states, "while increasing the importance of sharing safety, policy, and standards research." The spokesperson also added: "Additionally, each of our releases is run through an infohazard process to evaluate these trade-offs and we want to release our results slowly to understand potential risks and impacts before setting loose in the wild."

One of the biggest secrets is the project OpenAI is working on next. Sources described it to me as the culmination of its previous four years of research: an AI system trained on images, text, and other data using massive computational resources. A small team has been assigned to the initial effort, with an expectation that other teams, along with their work, will eventually fold in. On the day it was announced at an all-company meeting, interns weren't allowed to attend. People familiar with the plan offer an explanation: the leadership thinks this is the most promising way to reach AGI.

The man driving OpenAI's strategy is Dario Amodei, the ex-Googler who now serves as research director. When I meet him, he strikes me as a more anxious version of Brockman. He has a similar sincerity and sensitivity, but an air of unsettled nervous energy. He looks distant when he talks, his brows furrowed, a hand absentmindedly tugging his curls.

Amodei divides the lab's strategy into two parts. The first part, which dictates how it plans to reach advanced AI capabilities, he likens to an investor's portfolio of bets. Different teams at OpenAI are playing out different bets. The language team, for example, has its money on a theory postulating that AI can develop a significant understanding of the world through mere language learning. The robotics team, in contrast, is advancing an opposing theory that intelligence requires a physical embodiment to develop.

As in an investor's portfolio, not every bet has an equal weight. But for the purposes of scientific rigor, all should be tested before being discarded. Amodei points to GPT-2, with its remarkably realistic auto-generated texts, as an instance of why it's important to keep an open mind. "Pure language is a direction that the field and even some of us were somewhat skeptical of," he says. "But now it's like, 'Wow, this is really promising.'"

Over time, as different bets rise above others, they will attract more intense efforts. Then they will cross-pollinate and combine. The goal is to have fewer and fewer teams that ultimately collapse into a single technical direction for AGI. This is the exact process that OpenAI's latest top-secret project has supposedly already begun.

The second part of the strategy, Amodei explains, focuses on how to make such ever-advancing AI systems safe. This includes making sure that they reflect human values, can explain the logic behind their decisions, and can learn without harming people in the process. Teams dedicated to each of these safety goals seek to develop methods that can be applied across projects as they mature. Techniques developed by the explainability team, for example, may be used to expose the logic behind GPT-2's sentence constructions or a robot's movements.

Amodei admits this part of the strategy is somewhat haphazard, built less on established theories in the field and more on gut feeling. "At some point we're going to build AGI, and by that time I want to feel good about these systems operating in the world," he says. "Anything where I don't currently feel good, I create and recruit a team to focus on that thing."

For all the publicity-chasing and secrecy, Amodei looks sincere when he says this. The possibility of failure seems to disturb him.

"We're in the awkward position of: we don't know what AGI looks like," he says. "We don't know when it's going to happen." Then, with careful self-awareness, he adds: "The mind of any given person is limited. The best thing I've found is hiring other safety researchers who often have visions which are different than the natural thing I might've thought of. I want that kind of variation and diversity because that's the only way that you catch everything."

The thing is, OpenAI actually has little variation and diversity, a fact hammered home on my third day at the office. During the one lunch I was granted to mingle with employees, I sat down at the most visibly diverse table by a large margin. Less than a minute later, I realized that the people eating there were not, in fact, OpenAI employees. Neuralink, Musk's startup working on computer-brain interfaces, shares the same building and dining room.

According to a lab spokesperson, out of the over 120 employees, 25% are female or nonbinary. "There are also two women on the executive team and the leadership team is 30% women," she said, though she didn't specify who was counted among these teams. (All four C-suite executives, including Brockman and Altman, are white men. Out of over 112 employees I identified on LinkedIn and other sources, the overwhelming number were white or Asian.)

In fairness, this lack of diversity is typical in AI. Last year a report from the New York-based research institute AI Now found that women accounted for only 18% of authors at leading AI conferences, 20% of AI professorships, and 15% and 10% of research staff at Facebook and Google, respectively. "There is definitely still a lot of work to be done across academia and industry," OpenAI's spokesperson said. "Diversity and inclusion is something we take seriously and are continually working to improve by working with initiatives like WiML, Girl Geek, and our Scholars program."

Indeed, OpenAI has tried to broaden its talent pool. It began its remote Scholars program for underrepresented minorities in 2018. But only two of the first eight scholars became full-time employees, even though they reported positive experiences. The most common reason for declining to stay: the requirement to live in San Francisco. For Nadja Rhodes, a former scholar who is now the lead machine-learning engineer at a New York-based company, the city just had too little diversity.

But if diversity is a problem for the AI industry in general, it's something more existential for a company whose mission is to spread the technology evenly to everyone. The fact is that it lacks representation from the groups most at risk of being left out.

Nor is it at all clear just how OpenAI plans to "distribute the benefits of AGI to all of humanity," as Brockman frequently says in citing its mission. The leadership speaks of this in vague terms and has done little to flesh out the specifics. (In January, the Future of Humanity Institute at Oxford University released a report in collaboration with the lab proposing to distribute benefits by distributing a percentage of profits. But the authors cited significant unresolved issues regarding the way in which it would be implemented.) "This is my biggest problem with OpenAI," says a former employee, who spoke on condition of anonymity.

"They are using sophisticated technical practices to try to answer social problems with AI," echoes Britt Paris of Rutgers. "It seems like they don't really have the capabilities to actually understand the social. They just understand that that's a sort of a lucrative place to be positioning themselves right now."

Brockman agrees that both technical and social expertise will ultimately be necessary for OpenAI to achieve its mission. But he disagrees that the social issues need to be solved from the very beginning. "How exactly do you bake ethics in, or these other perspectives in? And when do you bring them in, and how? One strategy you could pursue is to, from the very beginning, try to bake in everything you might possibly need," he says. "I don't think that that strategy is likely to succeed."

The first thing to figure out, he says, is what AGI will even look like. Only then will it be time to make sure that we are understanding the ramifications.

Last summer, in the weeks after the switch to a capped-profit model and the $1 billion injection from Microsoft, the leadership assured employees that these updates wouldn't functionally change OpenAI's approach to research. Microsoft was well aligned with the lab's values, and any commercialization efforts would be far away; the pursuit of fundamental questions would still remain at the core of the work.

For a while, these assurances seemed to hold true, and projects continued as they were. Many employees didn't even know what promises, if any, had been made to Microsoft.

But in recent months, the pressure of commercialization has intensified, and the need to produce money-making research no longer feels like something in the distant future. In sharing his 2020 vision for the lab privately with employees, Altman's message is clear: OpenAI needs to make money in order to do research, not the other way around.

This is a hard but necessary trade-off, the leadership has said, one it had to make for lack of wealthy philanthropic donors. By contrast, Seattle-based AI2, a nonprofit that ambitiously advances fundamental AI research, receives its funds from a self-sustaining (at least for the foreseeable future) pool of money left behind by the late Paul Allen, a billionaire best known for cofounding Microsoft.

But the truth is that OpenAI faces this trade-off not only because it's not rich, but also because it made the strategic choice to try to reach AGI before anyone else. That pressure forces it to make decisions that seem to land farther and farther away from its original intention. It leans into hype in its rush to attract funding and talent, guards its research in the hopes of keeping the upper hand, and chases a computationally heavy strategy, not because it's seen as the only way to AGI, but because it seems like the fastest.

Yet OpenAI is still a bastion of talent and cutting-edge research, filled with people who are sincerely striving to work for the benefit of humanity. In other words, it still has the most important elements, and there's still time for it to change.

Near the end of my interview with Rhodes, the former remote scholar, I ask her the one thing about OpenAI that I shouldn't omit from this profile. "I guess in my opinion, there's problems," she begins hesitantly. "Some of them come from maybe the environment it faces; some of them come from the type of people that it tends to attract and other people that it leaves out."

"But to me, it feels like they are doing something a little bit right," she says. "I got a sense that the folks there are earnestly trying."

Update: We made some changes to this story after OpenAI asked us to clarify that when Greg Brockman said he didn't think it was possible to bake ethics in from the very beginning when developing AI, he intended it to mean that ethical questions couldn't be solved from the beginning, not that they couldn't be addressed from the beginning. Also, that after dropping out of Harvard he transferred straight to MIT rather than waiting a year. Also, that he was raised not on a farm, but "on a hobby farm." Brockman considers this distinction important.

In addition, we have clarified that while OpenAI did indeed "shed its nonprofit status," a board that is part of a nonprofit entity still oversees it, and that OpenAI publishes its research in the form of company blog posts as well as, not in lieu of, research papers. We've also corrected the date of publication of a paper by outside researchers and the affiliation of Peter Eckersley (former, not current, research director of Partnership on AI, which he recently left).

Continue reading here:

The messy, secretive reality behind OpenAIs bid to save the world - MIT Technology Review

Everybodys talking about an AI tool that does (almost) everything – The Hustle

Wordsmiths, get nervous.

OpenAI, the artificial-intelligence outfit backed by Elon Musk, recently opened up access to GPT-3, a new AI language model.

It's so powerful that it can produce text that's practically indistinguishable from human work.

It wasn't always this way. People fed books and scripts to earlier text-generating programs and got hilariously bad endings to episodes of Game of Thrones.

But while earlier models mimicked a human's vocabulary and writing style, GPT-3 is able to analyze context. One guy used GPT-3 to write an entire blog post, and he was surprised by the quality of the results.

And that's not all GPT-3 can do.

The API is currently in beta, and you have to request access to use it. When the API becomes commercially available, OpenAI will use the proceeds to fund further research.
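
For readers curious what calling the beta looks like, here is a minimal sketch using OpenAI's Python client as it existed during the beta period. The engine choice, prompt, and sampling settings are illustrative assumptions, not details from this article, and the call only works with an approved API key.

```python
import openai  # pip install openai

# Hypothetical beta completion request; prompt and parameters are illustrative.
openai.api_key = "YOUR_API_KEY"  # issued once beta access is granted

response = openai.Completion.create(
    engine="davinci",   # GPT-3's general-purpose engine during the beta
    prompt="Write a one-sentence pitch for a newsletter about business trends:",
    max_tokens=50,
    temperature=0.7,    # higher values make the output more varied
)

print(response.choices[0].text.strip())
```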

We at The Hustle will be nervously watching for future developments. Last week, Microsoft cut dozens of full-time MSN staffers in favor of news curation that's handled by AI.

Sam, look away.

We asked Trung Phan, one of our Trends analysts, what he thought of this powerful new toy. Here's what he told us, not aided by GPT-3 (or was he?).

In the classic sci-fi movie Terminator 2, the villainous AI Skynet becomes self-aware at 2:14am ET on August 29th, 1997.

That prediction may have been 23 years too soon, but here we are. *nervous laughing*

More seriously, the potential threat of general AI is grounded in something more (seemingly) benign than killer robots (read: the Paperclip Maximizer article and, for a deeper dive, this book).

The purpose of OpenAI is to ensure the development of AI benefits all of humanity. Fingers crossed.

More:

Everybodys talking about an AI tool that does (almost) everything - The Hustle

The Curious Case of Data Annotation and AI – RTInsights

Data annotation takes time. And for in-house teams, labeling data can be the proverbial bottleneck, limiting a company's ability to quickly train and validate machine learning models.

By its very definition, artificial intelligence refers to computer systems that can learn, reason, and act for themselves, but where does this intelligence come from? For decades, the collaborative intelligence of humans and machines has produced some of the world's leading technologies. And while there's nothing glamorous about the data being used to train today's AI applications, the role of data annotation in AI is nonetheless fascinating.

See also: New Tool Offers Help with Data Annotation

Poorly Labeled Data Leads to Compromised AI

Imagine reviewing hours of video footage, sorting through thousands of driving scenes to label all of the vehicles that come into frame, and you've got data annotation. Data annotation is the process of labeling images, video, audio, and other data sources so the data is recognizable to computer systems programmed for supervised learning. This is the intelligence behind AI algorithms.
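
To make the driving-scene example concrete, here is a hypothetical annotation record for a single video frame. The field names, file names, and classes are illustrative assumptions, not taken from any particular labeling tool.

```python
# Hypothetical labeled frame from a driving-scene dataset. Every vehicle that
# comes into frame gets a class name and a pixel-space bounding box.
frame_annotation = {
    "frame_id": "drive_0042_frame_000137",
    "source_video": "dashcam_2020_03_14.mp4",
    "labels": [
        {"class": "car",
         "bbox": {"x": 412, "y": 208, "width": 96, "height": 54},
         "occluded": False},
        {"class": "truck",
         "bbox": {"x": 40, "y": 190, "width": 180, "height": 110},
         "occluded": True},
    ],
    "annotator_id": "labeler_17",
}

# A supervised model trains on many thousands of such (image, labels) pairs.
```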

For companies using AI to solve world problems, improve operations, increase efficiencies, or otherwise gain a competitive edge, training an algorithm is more than just collecting annotated data; it's sourcing superior-quality training data and ensuring that data is contributing to model validation, so applications can be brought to market quickly, safely, and ethically.

Data is the most crucial element of machine learning. Without data annotation, computers couldn't be trained to see, speak, or perform intelligent functions, yet obtaining datasets and labeling training data are among the top limitations to adopting AI, according to the McKinsey Global Institute. Another known limitation is data bias, which can creep in at any stage of the training data lifecycle but more often than not occurs from poor-quality or inconsistent data labeling.

IDC shared that 50 percent of IT and data professionals surveyed report data quality as a challenge in deploying AI workloads, but where does quality data come from?

Open-source datasets are one way to collect data for an ML model, but since many are curated for a specific use case, they may not be useful for highly specialized needs. Also, the amount of data needed to train your algorithm may vary based on the complexity of the problem you're trying to solve and the complexity of your model.

The Waymo Open Dataset is the largest, most diverse autonomous driving dataset to date, consisting of thousands of images labeled with millions of bounding boxes and object classes: 12 million 3D bounding box labels and 1.2 million 2D bounding box labels, to be exact. Still, Waymo has plans to continuously grow the size of this dataset even further.

Why? Because current, accurate, and refreshed data is necessary to continuously train, validate, and maintain agile machine learning models. There are always edge cases, and for some use cases, even more data is needed. If the data is lacking in any way, those gaps compromise the intelligence of the algorithm in the form of bias, false positives, poor performance, and other issues.

Let's say you're searching for a new laptop. When you type your specifications into the search bar, the results that come up are the work of millions of labeled and indexed data points, from product SKUs to product photos.

If your search returns results for a lunchbox, a briefcase, or anything else mistaken for the signature clamshell of a laptop, you've got a problem. You can't find it, so you can't buy it, and that company just lost a sale.

This is why quality annotated data is so important. Poor-quality data has a direct correlation to biased and inaccurate models, and in some cases, improving data quality is as simple as making sure you have the right data in the first place.

Vulcan Inc. experienced the challenge of diversity in their dataset first-hand while working to develop AI-enabled products that could record and monitor African wildlife. While trying to detect cows in imagery, they realized their model could not recognize cows in Africa based on their dataset of cows from Washington alone. To get their ML model operating at peak performance, they needed to create a training dataset of their own.

Labeling Data, Demanding for AI Teams

As you might expect, data annotation takes time. And for in-house teams, labeling data can be the proverbial bottleneck, limiting your ability to quickly train and validate machine learning models.

Labeling datasets is arguably one of the hardest parts of building AI. Cognilytica reports that 80 percent of AI project time is spent aggregating, cleaning, labeling, and augmenting data to be used in machine learning models. That's before any model development or AI training even begins.

And while labeling data is not an engineering challenge, nor is it a data science problem, data annotation can prove demanding for several reasons.

The first is the sheer amount of time it takes to prepare large volumes of raw data for labeling. It's no secret that human effort is required to create datasets, and sorting irrelevant data from the desired data is a task in and of itself.

Then, there's the challenge of getting the clean data labeled efficiently and accurately. A short video could take several hours to annotate, depending on the object classes represented and their density, for the model to learn effectively.

An in-house team may not have enough dedicated personnel to process the data in a timely manner, leaving model development at a standstill until this task is complete. In some cases, the added pressure of keeping the AI pipeline moving can lead to incomplete or partially labeled data, or worse, blatant errors in the annotations.

Even in instances where existing personnel can serve as the in-house data annotation team, and they have the training and expertise to do it well, few companies have the technology infrastructure to support an AI pipeline from ingestion to algorithm, securely and smoothly.

This is why organizations lacking the time for data annotation, annotation expertise, clear strategies for AI adoption, or the technology infrastructure to support the training data lifecycle partner with trusted providers to build smarter AI.

To improve its retail item coverage from 91 to 98 percent, Walmart worked with a specialized data annotation partner to evaluate their data and ensure its accuracy to train Walmart systems. With more than 2.5 million items cataloged during the partnership, the Walmart team has been able to focus on model development rather than aggregating data.

How Data Annotation Providers Combine Humans and Tech

Data annotation providers have access to tools and techniques that can help expedite the annotation process and improve the accuracy of data labeling.

For starters, working day in and day out with training data means these companies see a range of scenarios where data annotation is seamless and where things could be improved. They can then pass these learnings on to their clients, helping to create effective training data strategies for AI development.

For organizations unsure of how to operationalize AI in their business, an annotation provider can serve as a trusted advisor to your machine learning team, asking the right questions, at the right time, under the right circumstances.

A recent report shared that organizations spend 5x more on internal data labeling for every dollar spent on third-party services. This may be due, in part, to the expense of assigning data scientists and ML engineers labeling tasks. Still, there's also something to be said for the established platforms, workflows, and trained workforces that allow annotation service providers to work more efficiently.

Working with a trusted partner often means that the annotators assigned to your project receive training to understand the context of the data being labeled. It also means you have a dedicated technology platform for data labeling. Over time, your dedicated team of labelers can begin to specialize in your specific use case, and this expertise results in lower costs and better scalability of your AI programs.

Technology platforms that incorporate automation and reporting, such as automated QA, can also help improve labeling efficiency by helping to prevent logical fallacies, expedite training for data labelers, and ensure a consistent measure of annotation quality. This also helps reduce the amount of manual QA time required by clients, as well as by the annotation provider.
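
As a rough illustration of what an automated QA pass might check, the sketch below scans annotation records shaped like the hypothetical example earlier and flags obviously invalid labels. The allowed classes, frame size, and checks are assumptions; real platforms layer on far richer tests (inter-annotator agreement, class balance, gold-standard tasks).

```python
# Illustrative automated QA over annotation records shaped like the earlier
# frame_annotation sketch; the class list and frame size are assumptions.
ALLOWED_CLASSES = {"car", "truck", "bus", "pedestrian", "cyclist"}
FRAME_WIDTH, FRAME_HEIGHT = 1920, 1080

def qa_issues(record):
    """Return a list of human-readable problems found in one labeled frame."""
    issues = []
    for i, label in enumerate(record["labels"]):
        if label["class"] not in ALLOWED_CLASSES:
            issues.append(f"label {i}: unknown class '{label['class']}'")
        box = label["bbox"]
        if box["width"] <= 0 or box["height"] <= 0:
            issues.append(f"label {i}: degenerate bounding box")
        if box["x"] + box["width"] > FRAME_WIDTH or box["y"] + box["height"] > FRAME_HEIGHT:
            issues.append(f"label {i}: box extends outside the frame")
    return issues

# Records with a non-empty issue list get routed back to a human reviewer.
```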

Few-click annotation is another example; it uses machine learning to increase accuracy and reduce labeling time. With few-click annotation, the time it would take a human to annotate several points can be reduced from two minutes to a few seconds. This combination of machine learning and the support of a human, who does a few clicks, produces a level of labeling precision previously not possible with human effort alone.

The human in the loop is not going away in the AI supply chain. However, more data annotation providers are also using pre- and post-processing technologies to support humans training AI. In pre-processing, machine learning is used to convert raw data into clean datasets, using a script. This does not replace or reduce data labeling, but it can help improve the quality of the annotations and the labeling process.
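
At the simplest end of that pre-processing spectrum, a script can weed out exact-duplicate images before they ever reach a labeler. The sketch below is illustrative only; the directory layout is an assumption, and production pipelines add learned steps such as blur detection, resizing, and format conversion.

```python
import hashlib
import shutil
from pathlib import Path

def deduplicate_images(raw_dir: str, clean_dir: str) -> int:
    """Copy unique .jpg files from raw_dir to clean_dir; return how many were kept."""
    seen, kept = set(), 0
    out = Path(clean_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(raw_dir).glob("*.jpg")):
        digest = hashlib.md5(path.read_bytes()).hexdigest()
        if digest in seen:
            continue  # exact duplicate: skip, so labelers never see it
        seen.add(digest)
        shutil.copy(path, out / path.name)
        kept += 1
    return kept
```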

There are no shortcuts to training AI, but a data annotation provider can help expedite the labeling process by leveraging in-house technology platforms and acting as an extension of your team to close the loop between data scientists and data labelers.

See the rest here:

The Curious Case of Data Annotation and AI - RTInsights

Ai Weiwei To Release ‘Epic Film That Humanizes Global Refugee Crisis’ – Forbes


For the past year, the Chinese artist Ai Weiwei has been traveling to countries with large migrant and refugee populations -- 22 countries in all -- for a feature-length documentary titled Human Flow, which he has just announced will be released this ...

Read more:

Ai Weiwei To Release 'Epic Film That Humanizes Global Refugee Crisis' - Forbes

AI Sounds Great, But What About The Legal Issues? – UKTN

Disruptive technologies are developing faster than legislation. Artificial intelligence (AI) and machine learning will advance technology even more, and nobody knows the full extent of their capabilities.

Ageing legislators in government offices certainly are not in touch with digital technologies. Yet as more businesses invest in AI, Big Data and machine learning, the technological evolution progresses without legal restrictions and regulations.

The promise of AI is to deliver enhanced experiences, improve the quality of life and enable businesses to make better decisions for the benefit of their customers and their stakeholders.

The emergence of AI has undoubtedly created new challenges that raise critical ethical, social and legal issues for consumers, businesses and lawmakers.

The latter group are already looking for solutions to ensure AI is used responsibly in government programs and business services. The protection of consumers and end-users is of utmost importance.

However, as we have seen in the past, the promise of new technologies is not always fulfilled. And more often than not, it is the consumer that is the victim.

Privacy Concerns

AI and the Internet of Things (IoT) rely on the continuous interaction between devices. But to connect personal devices, consumers are required to hand over personal data.

Privacy is already a hot topic for discussion. Tech giants including Google and Facebook have already dented the public trust by violating consumer privacy.

To date, multi-billion dollar companies are fined a fraction of the money they make, and subsequent laws ultimately work in their favour whilst hindering SMEs. GDPR is a case in point.

In October 2016, the House of Commons in Britain published a report addressing Robotics and Artificial Intelligence. The paper primarily addresses privacy and accountability but raises more questions than it answers.

Accountability

AI arguably complicates the privacy issue even more. Lawmakers with the responsibility to update legislation that puts controls and limitations on how businesses use data have to iron out who is responsible for protecting personal data.

The key points to focus on are ethical issues, AI's development and deployment, and the fundamental rights of consumers. Many of these regulations already exist in relation to digital technology but are foggy, to say the least.

With the emergence of AI-powered IoT systems, there will be an increasing number of cases where more than one party is involved in the handling of consumer data.

The volume and relevancy of data will need to be scrutinised by legal regulations. But where does that leave businesses in what they can and can't do? What should and should not be deemed appropriate, and what will the ramifications be for companies that breach consumer rights?

For example, to use IoT systems, consumers will have to hand over their phone number and/or email address. This data is then shared amongst three different companies for IoT to run smoothly. Will the consumer receive advertising spam from three different companies?

Even more complicated is the question of causality. For example, if a robot causes an error which results in a financial loss, is the company legally responsible for damages? Can a driverless car be accused of causing an accident?

Legislators for AI have to dig deep. The scope of the legislation should address programming errors, whether the statistical chances of glitches raise safety issues, whether testing protocols are sufficient and much more.

A concern for companies will be whether manufacturers are given the freedom to pass on the responsibility to their customers and if so, where does that leave businesses?

The legal complications around AI are a minefield. So far, lawmakers have failed to protect regular business owners and consumers. Before you invest in AI, speak to legal experts that have experience with AI, Big Data and Machine Learning laws.

Excerpt from:

AI Sounds Great, But What About The Legal Issues? - UKTN

Google buys Kaggle and its gaggle of AI geeks – CNET

Machine learning is the next big thing, says Google with its acquisition of AI site Kaggle.

It doesn't take artificial intelligence to know Google thinks machine learning will be central to your future.

After all, the Silicon Valley powerhouse has been busy creating self-teaching tech that can translate languages, vamp with you on piano, and politely crush you at the ancient Chinese game of Go.

"Over time, the computer itself -- whatever its form factor -- will be an intelligent assistant helping you through your day," Google CEO Sundar Pichai wrote in his first-ever letter to shareholders, last year. "We will move from mobile first to an AI first world."

Now Google has taken another step toward that future. On Wednesday, the Google Cloud Platform said it had acquired Kaggle, what it calls the world's biggest community for data scientists and machine learning geeks.

Among other things, Kaggle lets AI enthusiasts "climb the world's most elite machine learning leaderboards," "explore and analyze a collection of high quality datasets," and "run code in the cloud and receive community feedback on your work," according to the site.

The Kaggle team will stay together and continue Kaggle as its own brand within Google Cloud, Kaggle CEO Anthony Goldbloom said in a blog post.

Fei-Fei Li, chief scientist, Google Cloud AI and machine learning, said in her own post that the acquisition would give Kaggle members direct access to the most advanced cloud machine learning environment.

"We must lower the barriers of entry to AI and make it available to the largest community of developers, users and enterprises, so they can apply it to their own unique needs," Li wrote. "With Kaggle joining the Google Cloud team, we can accelerate this mission."

So much for avenging your Go loss. Tennis, anyone?

See the original post:

Google buys Kaggle and its gaggle of AI geeks - CNET

AI can overhaul patient experience, but knowing its limitations is key – MobiHealthNews

Healthcare may be bracing for a major shortage of providers and services in the coming years, but even now the industry is straining to meet an ever-growing demand for personalized, patient-friendly care. Artificial intelligence has often been touted as the panacea for this challenge, with many pointing to finance, retail and other industries that have embraced automation.

But the consumerism adopted by other sectors doesn't always translate cleanly into healthcare, says Nagi Prabhu, chief product officer at Solutionreach. Whereas people may be ready to trust automation to handle their deliveries or even manage their finances, they still prefer the human touch when it comes to their personal health.

"That's what makes it challenging. There's an expectation that there's an interaction happening between the patient and provider, but the tools and services and resources that are available on the provider side are insufficient," Prabhu said during a HIMSS20 Virtual Webinar on AI and patient experience. "And that's what causing this big disconnect between what patients are seeing and wanting, compared to other industries where they have experienced it.

"You have got to be careful in terms of where you apply that AI, particularly in healthcare, because it must be in use cases that enrich human interaction. Human interaction is not replaceable," he said.

Despite the challenge, healthcare still has a number of "low-hanging fruit" use cases where automation can reduce the strain on healthcare staff without harming overall patient experience, Prabhu said. Chief among these are patient communications, scheduling and patient feedback analysis, where the past decade's investments into natural language processing and machine learning have yielded tools that can handle straightforward requests at scale.

But even these implementations need to strike the balance between automation and a human touch, he warned. Take patient messaging, for example. AI can handle simple questions about appointment times or documentation. But if the patient asks a complex question about their symptoms or care plan, the tool should be able to gracefully hand off the conversation to a human staffer without major interruption.

"If you push the automation too far, from zero automation ... to 100% automation, there's going to be a disconnect because these tools aren't perfect," he said. "There needs to be a good balancing ... even in those use cases."

These types of challenges and automation strategies are already being considered, if not implemented, among major provider organizations, noted Kevin Pawl, senior director of patient access at Boston Children's Hospital.

"We've analyzed why patients and families call Boston Children's over 2 million phone calls to our call centers each year and about half are for non-scheduling matters," Pawl said during the virtual session. "Could we take our most valuable resource, our staff, and have them work on those most critical tasks? And could we use AI and automation to improve that experience and really have the right people in the right place at the right time?"

Pawl described a handful of AI-based programs his organization has deployed in recent years, such as Amazon Alexa skills for recording personal health information and flu and coronavirus tracking models to estimate community disease burden. In the patient experience space, he highlighted self-serve kiosks placed in several Boston Children's locations that guide patients through the check-in process but that still encourage users to walk over to a live receptionist if they become confused or simply are more comfortable speaking to a human.

For these projects, Pawl said that Boston Children's needed to design their offerings around unavoidable hurdles like patients' fear of change, or even around broader system interoperability and security. For others looking to deploy similar AI tools for patient experience, he said that programs must keep in mind the need for iterative pilots, the value of walking providers and patients alike through each step of any new experience, and how the workflows and preferences of these individuals will shape their adoption of the new tools.

"These are the critical things that we think about as we are evaluating what we are going to use," he said. "Err on the side of caution."

Prabhu punctuated these warnings with his own emphasis on the data-driven design of the models themselves. These systems need to have enough historical information available to understand and answer the patient's questions, as well as the intelligence to know when a human is necessary.

"And, when it is not confident, how do you get a human being involved to respond but at the same time from the patient perspective [the interaction appears] to continue?" he asked. "I think that is the key."

See the original post here:

AI can overhaul patient experience, but knowing its limitations is key - MobiHealthNews

Too many AI researchers think real-world problems are not relevant – MIT Technology Review

Any researcher who's focused on applying machine learning to real-world problems has likely received a response like this one: "The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community."

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I've seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I've heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or, in the case of deep learning, a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word "application" seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Their authors' only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled "Machine Learning that Matters" (pdf), by NASA computer scientist Kiri Wagstaff: "Much of current machine learning research has lost its connection to problems of import to the larger world of science and society." The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

Marginalizing applications research has real consequences. Benchmark data sets, such as ImageNet or COCO, have been key to advancing machine learning. They enable algorithms to train and be compared on the same data. However, these data sets contain biases that can get built into the resulting models.

More than half of the images in ImageNet (pdf) come from the US and Great Britain, for example. That imbalance leads systems to inaccurately classify images in categories that differ by geography (pdf). Popular face data sets, such as the AT&T Database of Faces, contain primarily light-skinned male subjects, which leads to systems that struggle to recognize dark-skinned and female faces.

When studies on real-world applications of machine learning are excluded from the mainstream, it's difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.

One reason applications research is minimized might be that others in machine learning think this work consists of simply applying methods that already exist. In reality, though, adapting machine-learning tools to specific real-world problems takes significant algorithmic and engineering work. Machine-learning researchers who fail to realize this and expect tools to work off the shelf often wind up creating ineffective models. Either they evaluate a models performance using metrics that dont translate to real-world impact, or they choose the wrong target altogether.

For example, most studies applying deep learning to echocardiogram analysis try to surpass a physician's ability to predict disease. But predicting normal heart function (pdf) would actually save cardiologists more time by identifying patients who do not need their expertise. Many studies applying machine learning to viticulture aim to optimize grape yields (pdf), but winemakers want "the right levels of sugar and acid, not just lots of big watery berries," says Drake Whitcraft of Whitcraft Winery in California.

Another reason applications research should matter to mainstream machine learning is that the field's benchmark data sets are woefully out of touch with reality.

New machine-learning models are measured against large, curated data sets that lack noise and have well-defined, explicitly labeled categories (cat, dog, bird). Deep learning does well for these problems because it assumes a largely stable world (pdf).

But in the real world, these categories are constantly changing over time or according to geographic and cultural context. Unfortunately, the response has not been to develop new methods that address the difficulties of real-world data; rather, there's been a push for applications researchers to create their own benchmark data sets.

The goal of these efforts is essentially to squeeze real-world problems into the paradigm that other machine-learning researchers use to measure performance. But the domain-specific data sets are likely to be no better than existing versions at representing real-world scenarios. The results could do more harm than good. People who might have been helped by these researchers' work will become disillusioned by technologies that perform poorly when it matters most.

Because of the field's misguided priorities, people who are trying to solve the world's biggest challenges are not benefiting as much as they could from AI's very real promise. While researchers try to outdo one another on contrived benchmarks, one in every nine people in the world is starving. Earth is warming and sea level is rising at an alarming rate.

As neuroscientist and AI thought leader Gary Marcus once wrote (pdf): "AI's greatest contributions to society could and should ultimately come in domains like automated scientific discovery, leading among other things towards vastly more sophisticated versions of medicine than are currently possible. But to get there we need to make sure that the field as a whole doesn't first get stuck in a local minimum."

For the world to benefit from machine learning, the community must again ask itself, as Wagstaff once put it: What is the field's objective function? If the answer is to have a positive impact in the world, we must change the way we think about applications.

Hannah Kerner is an assistant research professor at the University of Maryland in College Park. She researches machine learning methods for remote sensing applications in agricultural monitoring and food security as part of the NASA Harvest program.

Visit link:

Too many AI researchers think real-world problems are not relevant - MIT Technology Review

To see what makes AI hard to use, ask it to write a pop song – MIT Technology Review

In the end most teams used smaller models that produced specific parts of a song, like the chords or melodies, and then stitched these together by hand. Uncanny Valley used an algorithm to match up lyrics and melodies that had been produced by different AIs, for example.

Another team, Dadabots x Portrait XO, did not want to repeat their chorus twice but couldn't find a way to direct the AI to change the second version. In the end the team used seven models and cobbled together different results to get the variation they wanted.

"It was like assembling a jigsaw puzzle," says Huang. "Some teams felt like the puzzle was unreasonably hard, but some found it exhilarating, because they had so many raw materials and colorful puzzle pieces to put together."

Uncanny Valley used the AIs to provide the ingredients, including melodies produced by a model trained on koala, kookaburra, and Tasmanian devil noises. The people on the team then put these together.

"It's like having a quirky human collaborator that isn't that great at songwriting but very prolific," says Sandra Uitdenbogerd, a computer scientist at RMIT University in Melbourne and a member of Uncanny Valley. "We choose the bits that we can work with."

But this was more compromise than collaboration. "Honestly, I think humans could have done it equally well," she says.

Generative AI models produce output at the level of single notes, or pixels in the case of image generation. They don't perceive the bigger picture. Humans, on the other hand, typically compose in terms of verse and chorus and how a song builds. "There's a mismatch between what AI produces and how we think," says Cai.

Cai wants to change how AI models are designed to make them easier to work with. "I think that could really increase the sense of control for users," she says.

It's not just musicians and artists who will benefit. Making AIs easier to use, by giving people more ways to interact with their output, will make them more trustworthy wherever they're used, from policing to health care.

"We've seen that giving doctors the tools to steer AI can really make a difference in their willingness to use AI at all," says Cai.

Read the original here:

To see what makes AI hard to use, ask it to write a pop song - MIT Technology Review

How people are using AI to detect and fight the coronavirus – VentureBeat

The spread of the COVID-19 coronavirus is a fluid situation changing by the day, and even by the hour. The growing worldwide public health emergency is threatening lives, but it's also impacting businesses and disrupting travel around the world. The OECD warns that coronavirus could cut global economic growth in half, and the Federal Reserve will cut federal interest rates following the worst week for the stock market since 2008.

Just how the COVID-19 coronavirus will affect the way we live and work is unclear because it's a novel disease spreading around the world for the first time, but it appears that AI may help fight the virus and its economic impact.

A World Health Organization report released last month said that AI and big data are a key part of the response to the disease in China. Here are some ways people are turning to machine learning solutions in particular to detect, or fight against, the COVID-19 coronavirus.

On February 19, the Danish company UVD Robots said it struck an agreement with Sunay Healthcare Supply to distribute its robots in China. UVDs robots rove around health care facilities spreading UV light to disinfect rooms contaminated with viruses or bacteria.

XAG Robot is also deploying disinfectant-spraying robots and drones in Guangzhou.

UC Berkeley robotics lab director and DexNet creator Ken Goldberg predicts that if the coronavirus becomes a pandemic, it may lead to the spread of more robots in more environments.

Robotic solutions that, for example, limit exposure of medical or service industry staff in hotels are being deployed in some places today, but not every robot being rolled out is a winner.

The startup Promobot advertises itself as a service robot for business and recently showed off its robot in Times Square. The robot deploys no biometric or temperature-analysis sensors. It just asks four questions in a screening, like "Do you have a cough?" It also requires people to touch a screen to register a response. A Gizmodo reporter who spoke to the bot called it dumb, but that's not even the worst part: Asking people in the midst of an outbreak soon to be declared a global pandemic to physically touch screens seems awfully counterproductive.

One way AI detects coronavirus is with cameras equipped with thermal sensors.

A Singapore hospital and public health facility is performing real-time temperature checks, thanks to startup KroniKare, with a smartphone and thermal sensor.

An AI system developed by Chinese tech company Baidu that uses an infrared sensor and AI to predict people's temperatures is now in use in Beijing's Qinghe Railway Station, according to an email sent to Baidu employees that was shared with VentureBeat.

Above: Health officers screen arriving passengers from China with thermal scanners at Changi International airport in Singapore on January 22, 2020.

The Baidu approach combines computer vision and infrared to detect the forehead temperature of up to 200 people a minute within a range of 0.5 degrees Celsius. The system alerts authorities if it detects a person with a temperature above 37.3 degrees Celsius (99.1 degrees Fahrenheit), since fever is a tell-tale sign of coronavirus. Baidu may implement its temperature monitoring next in Beijing South Railway Station and Line 4 of the Beijing Subway.
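
The alerting rule described here ultimately reduces to a threshold check on each estimated forehead temperature. The sketch below shows only that final step; the computer-vision and infrared stages that would produce the readings are assumed away, and the sample data are made up.

```python
FEVER_THRESHOLD_C = 37.3  # threshold cited in the article (about 99.1 degrees F)

def flag_possible_fevers(readings):
    """readings: iterable of (person_id, estimated_forehead_temp_c) pairs."""
    return [(pid, temp) for pid, temp in readings if temp >= FEVER_THRESHOLD_C]

# Made-up readings for illustration: only "p2" would trigger an alert.
alerts = flag_possible_fevers([("p1", 36.6), ("p2", 37.8), ("p3", 37.1)])
print(alerts)  # [('p2', 37.8)]
```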

Last month, Shenzhen MicroMultiCopter said in a statement that it has deployed more than 100 drones in various Chinese cities. The drones are capable of not only thermal sensing but also spraying disinfectant and patrolling public places.

One company, BlueDot, says it recognized the emergence of high rates of pneumonia in China nine days before the World Health Organization. BlueDot was founded in response to the SARS epidemic. It uses natural language processing (NLP) to skim the text of hundreds of thousands of sources to scour news and public statements about the health of humans or animals.

Metabiota, a company that's working with the U.S. Department of Defense and intelligence agencies, estimates the risk of a disease spreading. It bases its predictions on factors like illness symptoms, mortality rate, and the availability of treatment.

The 40-page WHO-China Mission report released last month about initial response to COVID-19 cites how the country used big data and AI as part of its response to the disease. Use cases include AI for contact tracing to monitor the spread of disease and management of priority populations.

But academics, researchers, and health professionals are beginning to produce other forms of AI as well.

On Sunday, researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and China University of Geosciences shared work on deep learning that detected COVID-19 with what they claim is 95% accuracy. The model is trained with CT scans of 51 patients with laboratory-confirmed COVID-19 pneumonia and more than 45,000 anonymized CT scan images.

"The deep learning model showed a performance comparable to expert radiologists and improved the efficiency of radiologists in clinical practice. It holds great potential to relieve the pressure on frontline radiologists, improve early diagnosis, isolation, and treatment, and thus contribute to the control of the epidemic," reads a preprint paper about the model published on medRxiv. (A preprint paper means it has not yet undergone peer review.)

The researchers say the model can decrease confirmation time from CT scans by 65%. In similar efforts taking place elsewhere, machine learning from Infervision that's trained on hundreds of thousands of CT scans is detecting coronavirus in Zhongnan Hospital in Wuhan.
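
The preprints' architectures are not described here in enough detail to reproduce, but for readers unfamiliar with the setup, a binary CT-slice classifier might look roughly like the PyTorch sketch below. The layer sizes and the single-channel 224x224 input are assumptions for illustration, not the published models.

```python
import torch
import torch.nn as nn

# Minimal illustrative CT-slice classifier (COVID-19 pneumonia vs. other);
# NOT the architecture from the cited preprints.
class CTSliceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CTSliceClassifier()
dummy_batch = torch.randn(4, 1, 224, 224)          # four fake 224x224 slices
probs = torch.softmax(model(dummy_batch), dim=1)   # per-slice class probabilities
```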

In initial results shared in another preprint paper, updated today on medRxiv, using clinical data from Tongji Hospital in Wuhan, a new system is capable of predicting survival rates with more than 90% accuracy.

The work was done by researchers from the School of Artificial Intelligence and Automation, as well as other departments from Huazhong University of Science and Technology in China.

The coauthors say that coronavirus survival estimation today can draw from more than 300 lab or clinical results, but their approach considers only results related to lactic dehydrogenase (LDH), lymphocytes, and high-sensitivity C-reactive protein (hsCRP).

In another paper, Deep Learning for Coronavirus Screening, released last month on arXiv by collaborators working with the Chinese government, the proposed system uses multiple CNN models to classify CT image datasets and calculate the infection probability of COVID-19. In preliminary results, they claim the model is able to predict the difference between COVID-19, influenza-A viral pneumonia, and healthy cases with 86.7% accuracy.

The deep learning model is trained with CT scans of influenza patients, COVID-19 patients, and healthy people from three hospitals in Wuhan, including 219 images from 110 patients with COVID-19.

Because the outbreak is spreading so quickly, those on the front lines need tools to help them identify and treat affected people with just as much speed. The tools need to be accurate, too. It's unsurprising that there are already AI-powered solutions deployed in the wild, and it's almost a certainty that more are forthcoming from the public and private sector alike.

Continued here:

How people are using AI to detect and fight the coronavirus - VentureBeat

Give AI self doubt to prevent RISE OF THE MACHINES, experts warn – Express.co.uk

As the development of true artificial intelligence (AI) continues and experts work towards the singularity, the point where machines will become smarter than humans, researchers are examining ways to keep humans as the top beings on Earth.

Many experts have warned of the perils of developing machines that are as capable as us, as they could realistically make humans obsolete by taking our jobs, and could eventually see us as more of a hindrance and wipe us off the face of the Earth.

To combat this threat, scientists are proposing ideas that could prevent this, with one new idea being that developers should give AI self-doubt.

The idea goes that if AI has self-doubt, it will need to seek reassurance from humans, much in the same way a dog does, which will consolidate our place at the top of the totem pole on Earth.

A team from the University of California has conducted studies showing that self-doubting robots are more obedient.

The team wrote in a paper published on arXiv: "It is clear that one of the primary tools we can use to mitigate the potential risk from a misbehaving AI system is the ability to turn the system off.

"As the capabilities of AI systems improve, it is important to ensure that such systems do not adopt subgoals that prevent a human from switching them off.

"Our goal is to study the incentives an agent has to allow itself to be switched off."

In one simulation, a robot mind was turned off by a human and allowed to turn itself back on.

Robots without self-doubt reactivated themselves, but the one that did have it did not, as it was uncertain of the outcome if it went against human wishes.
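
As a toy illustration of the idea (not the Berkeley team's actual model), the expected-utility calculation below shows why uncertainty changes the decision: a fully confident agent always switches itself back on, while an uncertain one defers to the human because being wrong is costly. All numbers are arbitrary assumptions.

```python
def should_reactivate(estimated_gain, p_estimate_correct, penalty_if_wrong=-10.0):
    """Return True if switching back on has positive expected value.

    Staying switched off is worth 0; the payoffs are purely illustrative.
    """
    expected_value = (p_estimate_correct * estimated_gain
                      + (1 - p_estimate_correct) * penalty_if_wrong)
    return expected_value > 0

print(should_reactivate(1.0, p_estimate_correct=1.0))  # True: no self-doubt, it reactivates
print(should_reactivate(1.0, p_estimate_correct=0.5))  # False: uncertain, so it stays off
```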

The team concluded: "Our analysis suggests that agents with uncertainty about their utility function have incentives to accept or seek out human oversight.

"Thus, systems with uncertainty about their utility function are a promising area for research on the design of safe AI systems."

Follow this link:

Give AI self doubt to prevent RISE OF THE MACHINES, experts warn - Express.co.uk

An AI Is Beating Some Of The Best Dota Players In The World – Kotaku

OpenAI used the action at this year's Dota 2 championships as an opportunity to show off its work by having top players lose repeatedly to its in-game bot.

Dota's normally a team game with a heavy emphasis on coordination and communication, but for players interested in beefing up their pure, technical ability, the game also has a 1v1 mode. That's what tech company OpenAI used to show off its programming of a bot against one of the game's most famous and beloved players, Danil "Dendi" Ishutin.

That mode has both players compete in the game's mid-lane, with only the destruction of that first tower or two enemy kills earning either side a win. In addition, for purposes of this particular demonstration, specific items like Bottle and Soul Ring, which help players manage health and mana regeneration, were also restricted. Dendi decided to play Shadow Fiend, a strong but fragile hero who excels at aggressive plays, and to make it a mirror match the OpenAI bot did the same.

Rarely do you hear a crowd of people cheering over creep blocking, but that's what the fans in Key Arena did last night while watching the exhibition match. The earliest advantage in a 1v1 Dota face-off comes with one side slowing down their support wave of AI creeps enough to force the opponent farther into enemy territory, and that's exactly what the bot managed to do within the first thirty seconds of the bout.

After that, things seemed to even out, but Dendi, lacking a good read on his AI rival, played cautiously and ended up losing out on experience and gold as the bot was given space to land more last-hits. By three minutes in, OpenAI had already harassed Dendi's tower and gained double the CS. The former TI winner suffered his first death as a result shortly after. At that point, with the AI unlikely to make a crucial mistake and Dendi falling further and further behind in experience points, the match was all but over. The pro tried to change things around with a last-ditch attempt at a kill, but he ended up sacrificing his own life to do it.

In a rematch, Dendi admitted that he was going to try and mimic the AI's strategy of pushing his lane early, explaining how the dynamic of a 1v1 fight in Dota is counter-intuitive since it relies on purely outplaying your opponent rather than trying to outthink them. Switching sides from Radiant to Dire for game two, Dendi got off to an even worse start. He and the opposing AI exchanged blows early, and within the first two minutes he was forced to retreat, only to die along the way.

The OpenAI bot was trained, according to company CTO Greg Brockman, by playing many lifetimes' worth of matches, with only limited coaching along the way. Earlier in the week it had defeated other pros renowned for their technical play, including SumaiL and Arteezy, learning each time and improving itself. But these matches were more to test how far the bot had come than anything else. Self-play was what got it to that point, with Brockman explaining in a blog post that the AI's learning style requires playing against opponents very close in skill level so it can make incremental adjustments to improve over time.
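
Brockman's description of self-play can be sketched schematically: the agent repeatedly plays recent snapshots of itself, so its opponents stay close in skill and every result nudges it a little. The toy loop below is only an illustration of that structure, not OpenAI's training code; the scalar "skill", the Elo-style win odds, and the update rule are all assumptions.

```python
import copy
import random

class ToyAgent:
    """Stand-in for a full game-playing policy: just a scalar skill rating."""
    def __init__(self, skill=0.0):
        self.skill = skill

def play_match(a, b):
    """Return +1 if a wins, -1 otherwise; higher skill wins more often."""
    p_win = 1.0 / (1.0 + 10 ** ((b.skill - a.skill) / 400))  # Elo-style odds
    return 1 if random.random() < p_win else -1

def self_play(agent, iterations=1000, pool_size=10, step=0.5):
    pool = [copy.deepcopy(agent)]                  # past snapshots = near-peer opponents
    for i in range(iterations):
        result = play_match(agent, random.choice(pool))
        agent.skill += step * result               # small, incremental adjustment
        if i % 100 == 0:                           # periodically refresh the pool
            pool.append(copy.deepcopy(agent))
            pool = pool[-pool_size:]
    return agent

trained = self_play(ToyAgent())
```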

The company, funded in part by Elon Musk, is working on a number of different AI projects, including impersonating Reddit commenters, but games have always been an important part of designing and testing computer learning. From checkers and chess to StarCraft and now Dota, the well defined rule systems and clear win conditions are a natural fit.

And the 1v1 mode of Valve's MOBA takes that logic even further, offering a way of limiting the number of variables operating in the form of other players. Rather than worry about what nine other people are doing and exponentially increasing the number of options and possibilities the AI has to contend with, 1v1 allows it to focus on the game's core elements, similar to a beginner chess player practicing openings. The OpenAI team's ambitions don't stop there, however. The bot's designers hope to see it perform in full-fledged 5v5 matches by next year.

You can watch the entire demo below.

Originally posted here:

An AI Is Beating Some Of The Best Dota Players In The World - Kotaku

Reimagining creativity and AI to boost enterprise adoption – TechTarget

An AI algorithm capable of thought and creation has the potential to enhance applications and unlock better analysis with less oversight for organizations. However, it still remains out of reach. Until then, AI has an important role to play in augmenting human creativity.

Since the inception of artificial intelligence, researchers have had a goal to create a machine capable of matching or surpassing a human's skills of reasoning and expression. Advancing AI past self-training to computational creativity will require going beyond data augmentation into original thought.

Currently, machine learning specializes in limited data creativity, with algorithms that can train on historical data and allow organizations to make better-informed decisions with analytics. These algorithms use training data sets to "predict" future outcomes and generate new data.

"There are dozens of examples in which different algorithms that, given the observation of real data, are capable of generating very plausible fictitious data, which is almost indistinguishable from real data," Haldo Sponton, vice president of technology and head of AI development at digital consultant firm Globant.

Algorithms can create data, but only when prompted to and only from something that has already been created -- current algorithms can only mirror training data. This falls short of the insular creativity the technology hoped to reach.

To Sponton, creativity is as universal as it is individual. Each being has the ability to be creative, but each individual has a unique approach to creation. Creativity is that ability to use imagination or have original ideas, as well as the ability to create. It is a fundamental feature of human intelligence, and AI cannot ignore it as a step to further advancement.

As AI processes more information, or takes on more intricate tasks, it can evolve and learn to make better decisions. What would make an AI creative is more than just training algorithms and learning outputs, but building from scratch and creating something new, unrelated to existing data.

"This evolution is really valuable, but true creativity has yet to be achieved," said Jess Kennedy, co-founder of Beeline, a SaaS company based in Jacksonville, Fla.

A creative machine capable of both learning and creating on its own has tremendous potential in the marketplace as well as in enterprise settings.

A creative algorithm would be able to create data and discover trends without prompting and without supervision. This would mean less maintenance for an organization's data science team and lead to even greater insights, as they wouldn't have to be modeled on existing correlations.

Overall, a creative AI would have the ability to find the best way to approach most any problem presented to it by an organization. Anything from hunting for anomalies in data sets to prevent fraud to making conversations with virtual assistants feel more natural.

"Tools based on AI algorithms will generate new creative processes, new ways of creating and thinking, new horizons to explore," Sponton said.

At the moment, artificial intelligence has not reached that level of advancement, and the enterprise applications of true creativity are out of reach. Apart from the difficulty of developing an AI capable of creativity, proving that it has had an original idea is an additional challenge.

There are some applications of creativity among existing AI technologies. Neural networks are at the point where they can identify tasks in the creative process. Supervised and unsupervised learning can find meaningful connections and patterns within an organization's data set. These systems and approaches have already proven their capabilities in the enterprise, from recommendations for users online to advanced analytics for business intelligence and analytics vendors.

The combination of creativity and AI has reached an impressive level, but the way we look at it may be hindering enterprise applications. Instead of focusing on developing an AI that can stand alone and be considered creative, experts note that AI is already successfully helping to further human creativity.

"AI has been used to create things like art and music, but it has been based on existing information and data provided to the AI interface in order to do so," Kennedy said.

This allows for the creation of traditionally creative materials by AI but falls short of that ultimate goal of a creative AI. This does, however, allow for a uniquely nonhuman approach to the creation of artistic works.

"Artists around the world are already adopting this technology for musical composition, for the creation of plastic works and even choreographies or sculptures (just appreciate the work of choreographer Wayne McGregor or plastic artist Sarah Meyohas)," Sponton said.

Adding another layer into the field of creative arts opens up new opportunities for expression and beauty for those working in the field. Instead of taking the human aspect out of this field, this augmentation role for AI finds a balance between creative AI and solely human creations.

"The truth is that these algorithms generate new data, such as images or music, which can be considered a result of the imitation of the human creative process," Sponton said.

AI is not at the stage where it can stand on its own and create, but for now, it serves a valuable role in creating data, analyzing processes and augmenting the creative process. When the time comes for an AI to take the next step, however, we may even have to redefine creativity.

See the original post:

Reimagining creativity and AI to boost enterprise adoption - TechTarget

AI continued its world domination at Mobile World Congress – Engadget

When it comes to the intersection of smartphones and AI, Motorola had the most surprising news at the show. In case you missed it, Motorola is working with Amazon (and Harman Kardon, most likely) to build a Moto Mod that will make use of Alexa. Even to me, someone who cooled on the Mods concept after an initial wave of interesting accessories slowed to a trickle, this seems like a slam dunk. Even better, Motorola product chief Dan Dery described what the company ultimately wanted to achieve: a way to get assistants like Alexa to integrate more closely with the personal data we keep on our smartphones.

In his mind, for instance, it would be ideal to ask an AI to make a reservation at a restaurant mentioned in an email a day earlier. With Alexa set to be a core component of many Moto phones going forward, here's hoping Dery and the team find a way to break down the walls between AI assistants and the information that could make them truly useful. Huawei made headlines earlier this year when it committed to putting Alexa on the Mate 9, but we'll soon see if the company's integration will attempt to be as deep.

Speaking of Alexa, it's about to get some new competition in Asia. Line Inc., maker of the insanely popular messaging app of the same name, is building an assistant named Clova for smartphones and connected speakers. It will apparently be able to deal with complex questions in many forms. Development will initially focus on a first-party app, but Clova should find its way into many different ones, giving users opportunities to talk to services that share some underlying tech.

LG got in on the AI assistant craze too, thanks to a close working relationship with Google. The LG V20 was the very first Nougat smartphone to be announced ... until Google stole the spotlight with its own Nougat-powered Pixel line. And the G6 was the first non-Pixel phone to come with Google's Assistant, a distinction that lasted for maybe a half-hour before Google said the assistant would roll out to smartphones running Android 6.0 and up. The utility is undeniable, and so far, Google Assistant on the G6 has been almost as seamless as the experience on a Pixel.

As a result, flagships like Sony's newly announced XZ Premium will likely ship with Assistant up and running as well, giving us Android fans an easier way to get things done via speech. It's worth pointing out that other flagship smartphones that weren't announced at Mobile World Congress either do or will rely on some kind of AI assistant to keep users pleased and productive. HTC's U Ultra has a second screen where suggestions and notifications generated by the HTC Companion will pop up, though the Companion isn't available on versions of the Ultra already floating around. And then there's Samsung's Galaxy S8, which is expected to come with an assistant named Bixby when it's officially unveiled in New York later this month.

While it's easy to think of "artificial intelligence" merely as software entities that can interact with us intelligently, machine-learning algorithms also fall under that umbrella. Their work might be less immediately noticeable at times, but companies are banking on the algorithmic ability to understand data that we can't on a human level and improve functionality as a result.

Take Huawei's P10, for instance. Like the flagship Mate 9 before it, the P10 benefits from a set of algorithms meant to improve performance over time by figuring out the order in which you like to do things and allocating resources accordingly. With its updated EMUI 5.1 software, the P10 is supposed to be better at managing resources like memory when the phone boots and during use -- all based on user habits. The end goal is to make phones that actually get faster over time, though it will take a while to see any real changes. (You also might never see performance improvements, since "performance" is a subjective thing anyway.)
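As a rough illustration of that habit-learning idea (a toy sketch, not Huawei's actual EMUI implementation), a system could simply count which app a user tends to open next and preload the likely follow-up.

```python
# Toy sketch (not Huawei's actual EMUI implementation): count which app a user tends
# to open next, so the system could preload the likely follow-up into memory.
from collections import Counter, defaultdict

launch_history = ["mail", "browser", "camera", "mail", "browser", "music",
                  "mail", "browser", "camera"]  # hypothetical usage log

# For each app, count which app usually comes next
followers = defaultdict(Counter)
for current, following in zip(launch_history, launch_history[1:]):
    followers[current][following] += 1

def predict_next(app):
    """Return the most likely next app, or None if we have no data for this app."""
    counts = followers.get(app)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("mail"))     # -> browser
print(predict_next("browser"))  # -> camera
```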

Even Netflix showed up at Mobile World Congress to talk about machine learning. The company is well aware that sustained growth and relevance will come as it improves the mobile-video experience. In the coming months, expect to see better-quality video using less network bandwidth, all thanks to algorithms that try to quantify what it means for a video to "look good." Combine those algorithms with a new encoding scheme that compresses individual scenes in a movie or TV episode differently based on what's happening in them, and you have a highly complex fix your eyes and wallet will thank you for.
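The gist of per-scene encoding can be sketched like this: pick the cheapest rung of a bitrate ladder that a quality model predicts will look good enough for each scene. The numbers, scene names and quality model below are invented; this is not Netflix's actual encoding pipeline.

```python
# Toy per-scene bitrate selection: choose the lowest bitrate that a stand-in quality
# model predicts will meet a perceptual-quality target for that scene.
BITRATE_LADDER_KBPS = [235, 375, 750, 1750, 3000, 4300]
QUALITY_TARGET = 75  # hypothetical perceptual-quality score

def estimated_quality(scene_complexity, bitrate_kbps):
    """Stand-in quality model: simple scenes reach the target at lower bitrates."""
    return min(100, bitrate_kbps / scene_complexity)

scenes = {"static dialogue": 10, "walking scene": 25, "action chase": 50}

for name, complexity in scenes.items():
    chosen = next((b for b in BITRATE_LADDER_KBPS
                   if estimated_quality(complexity, b) >= QUALITY_TARGET),
                  BITRATE_LADDER_KBPS[-1])
    print(f"{name}: encode at {chosen} kbps")
```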

And, since MWC is just the right kind of absurd, we got an up-close look at a stunning autonomous race car called (what else?) RoboCar. Nestled within the sci-fi-inspired body are components that would've seemed like science fiction a few decades ago: There's a complex cluster of radar, LIDAR, ultrasonic and speed sensors all feeding information to an NVIDIA brain using algorithms to interpret all that information on the fly.

That these developments spanned the realms of smartphones, media and cars in a single, formerly focused trade show speaks to how big a deal machine learning and artificial intelligence have become. There's no going back now -- all we can do is watch as companies make better use of the data offered to them, and hold those companies accountable when they inevitably screw up.


See the original post here:

AI continued its world domination at Mobile World Congress - Engadget

DeepMind’s Newest AI Programs Itself to Make All the Right Decisions – Singularity Hub

When Deep Blue defeated world chess champion Garry Kasparov in 1997, it may have seemed artificial intelligence had finally arrived. A computer had just taken down one of the top chess players of all time. But it wasn't to be.

Though Deep Blue was meticulously programmed top-to-bottom to play chess, the approach was too labor-intensive, too dependent on clear rules and bounded possibilities to succeed at more complex games, let alone in the real world. The next revolution would take a decade and a half, when vastly more computing power and data revived machine learning, an old idea in artificial intelligence just waiting for the world to catch up.

Today, machine learning dominates, mostly by way of a family of algorithms called deep learning, while symbolic AI, the dominant approach in Deep Blue's day, has faded into the background.

Key to deep learning's success is the fact that the algorithms basically write themselves. Given some high-level programming and a dataset, they learn from experience. No engineer anticipates every possibility in code. The algorithms just figure it out.

Now, Alphabet's DeepMind is taking this automation further by developing deep learning algorithms that can handle programming tasks which have been, to date, the sole domain of the world's top computer scientists (and take them years to write).

In a paper recently published on the pre-print server arXiv, a database for research papers that haven't been peer reviewed yet, the DeepMind team described a new deep reinforcement learning algorithm that was able to discover its own value function, a critical programming rule in deep reinforcement learning, from scratch.

Surprisingly, the algorithm was also effective beyond the simple environments it trained in, going on to play Atari games, a different, more complicated task, at a level that was, at times, competitive with human-designed algorithms, achieving superhuman levels of play in 14 games.

DeepMind says the approach could accelerate the development of reinforcement learning algorithms and even lead to a shift in focus, where instead of spending years writing the algorithms themselves, researchers work to perfect the environments in which they train.

First, a little background.

Three main deep learning approaches are supervised, unsupervised, and reinforcement learning.

The first two consume huge amounts of data (like images or articles), look for patterns in the data, and use those patterns to inform actions (like identifying an image of a cat). To us, this is a pretty alien way to learn about the world. Not only would it be mind-numbingly dull to review millions of cat images, it'd take us years or more to do what these programs do in hours or days. And of course, we can learn what a cat looks like from just a few examples. So why bother?
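Before getting to the answer, here is what that first, supervised flavor looks like in practice: a minimal sketch that learns image labels purely from examples, using scikit-learn's small digits dataset as a stand-in for the millions of cat images.

```python
# Minimal supervised-learning sketch: learn to label images purely from examples.
# scikit-learn's small digits dataset stands in for the "millions of cat images".
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

images, labels = load_digits(return_X_y=True)  # 8x8 images flattened to 64 features
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

# No rule for "what a 7 looks like" is hand-coded; the model finds the patterns itself.
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```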

While supervised and unsupervised deep learning emphasize the machine in machine learning, reinforcement learning is a bit more biological. It actually is the way we learn. Confronted with several possible actions, we predict which will be most rewarding based on experience, weighing the pleasure of eating a chocolate chip cookie against avoiding a cavity and a trip to the dentist.

In deep reinforcement learning, algorithms go through a similar process as they take action. In the Atari game Breakout, for instance, a player guides a paddle to bounce a ball at a ceiling of bricks, trying to break as many as possible. When playing Breakout, should an algorithm move the paddle left or right? To decide, it runs a projection (this is the value function) of which direction will maximize the total points, or rewards, it can earn.

Move by move, game by game, an algorithm combines experience and value function to learn which actions bring greater rewards and improves its play, until eventually, it becomes an uncanny Breakout player.
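Here is a toy, tabular sketch of that loop under heavy simplification: the table holds each action's projected return, and every observed reward nudges those projections. Real Atari agents use deep networks rather than a lookup table, and this is standard Q-learning, not DeepMind's new method; the states are hypothetical, hand-simplified summaries.

```python
# Toy tabular sketch: Q[state][action] is the projected future reward ("value"),
# and each experience nudges that projection toward what actually happened.
import random
from collections import defaultdict

ACTIONS = ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration rate

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    """Pick the action with the highest projected return, exploring occasionally."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state, action, reward, next_state):
    """Move the projection toward the observed reward plus the best future projection."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# Hypothetical, hand-simplified Breakout states: (ball position, paddle position)
s, s_next = ("ball_left", "paddle_center"), ("ball_left", "paddle_left")
update(s, choose_action(s), 1.0, s_next)
```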

So, a key to deep reinforcement learning is developing a good value function. And that's difficult. According to the DeepMind team, it takes years of manual research to write the rules guiding algorithmic actions, which is why automating the process is so alluring. Their new Learned Policy Gradient (LPG) algorithm makes solid progress in that direction.

LPG trained in a number of toy environments. Most of these were gridworlds: literally two-dimensional grids with objects in some squares. The AI moves square to square and earns points or punishments as it encounters objects. The grids vary in size, and the distribution of objects is either set or random. The training environments offer opportunities to learn fundamental lessons for reinforcement learning algorithms.
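For the flavor of such an environment (a generic sketch, not DeepMind's exact training setup), a gridworld needs little more than a position, a step rule, and a few reward-bearing squares.

```python
# Generic gridworld sketch (not DeepMind's exact environments): the agent moves on a
# small grid and earns a reward or punishment when it lands on certain squares.
class GridWorld:
    def __init__(self, size=5):
        self.size = size
        self.rewards = {(4, 4): +1.0, (2, 2): -1.0}  # hypothetical goal and trap squares
        self.pos = (0, 0)

    def step(self, action):
        """Actions: 'up', 'down', 'left' or 'right'. Returns (new_position, reward)."""
        dx, dy = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}[action]
        x = min(max(self.pos[0] + dx, 0), self.size - 1)  # stay inside the grid
        y = min(max(self.pos[1] + dy, 0), self.size - 1)
        self.pos = (x, y)
        return self.pos, self.rewards.get(self.pos, 0.0)

env = GridWorld()
print(env.step("right"))  # ((1, 0), 0.0)
```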

Only in LPG's case, it had no value function to guide that learning.

Instead, LPG has what DeepMind calls a meta-learner. You might think of this as an algorithm within an algorithm that, by interacting with its environment, discovers both what to predict, thereby forming its version of a value function, and how to learn from it, applying its newly discovered value function to each decision it makes in the future.
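The sketch below shows only the shape of that two-level idea: an inner "lifetime" in which an agent learns with whatever update rule is currently encoded, and an outer loop that keeps whichever rule produced better-learning agents. Everything here is a hypothetical stand-in; the single scalar "rule parameter" stands in for a network's weights, and the outer search is crude random perturbation rather than LPG's gradient-based meta-update.

```python
# Structural sketch only: an inner "lifetime" evaluates how well an agent learns with a
# candidate update rule, and an outer loop searches for better rules. Every function is
# a hypothetical stand-in; this is not DeepMind's LPG implementation.
import random

def run_lifetime(rule_param):
    """Stand-in for training one agent in one environment with the candidate rule,
    returning the total reward it managed to collect over its lifetime."""
    return -abs(rule_param - 0.5) + random.gauss(0, 0.05)  # pretend 0.5 is the best rule

best_param = 0.0
best_return = run_lifetime(best_param)

for _ in range(200):                       # outer loop: discover a better update rule
    candidate = best_param + random.gauss(0, 0.1)
    lifetime_return = run_lifetime(candidate)
    if lifetime_return > best_return:      # keep rules that produce better learners
        best_param, best_return = candidate, lifetime_return

print(f"discovered rule parameter: {best_param:.2f}")
```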

LPG builds on prior work in the area.

Recently, researchers at the Dalle Molle Institute for Artificial Intelligence Research (IDSIA) showed their MetaGenRL algorithm used meta-learning to learn an algorithm that generalizes beyond its training environments. DeepMind says LPG takes this a step further by discovering its own value function from scratch and generalizing to more complex environments.

The latter is particularly impressive because Atari games are so different from the simple worlds LPG trained in; that is, it had never seen anything like an Atari game.

LPG is still behind advanced human-designed algorithms, the researchers said. But it outperformed a human-designed benchmark in training and even in some Atari games, which suggests it isn't strictly worse, just that it specializes in some environments.

This is where theres room for improvement and more research.

The more environments LPG saw, the more it could successfully generalize. Intriguingly, the researchers speculate that with enough well-designed training environments, the approach might yield a general-purpose reinforcement learning algorithm.

At the least, though, they say further automation of algorithm discovery (that is, algorithms learning to learn) will accelerate the field. In the near term, it can help researchers more quickly develop hand-designed algorithms. Further out, as self-discovered algorithms like LPG improve, engineers may shift from manually developing the algorithms themselves to building the environments where they learn.

Deep learning long ago left Deep Blue in the dust at games. Perhaps algorithms learning to learn will be a winning strategy in the real world too.

Update (6/27/20): Clarified description of preceding meta-learning research to include prior generalization of meta-learning in RL algorithms (MetaGenRL).

Image credit: Mike Szczepanski /Unsplash

Follow this link:

DeepMind's Newest AI Programs Itself to Make All the Right Decisions - Singularity Hub