Daily Archives: April 26, 2020

AI used to predict Covid-19 patients’ decline before proven to work – STAT

Posted: April 26, 2020 at 6:45 pm

Dozens of hospitals across the country are using an artificial intelligence system created by Epic, the big electronic health record vendor, to predict which Covid-19 patients will become critically ill, even as many are struggling to validate the tool's effectiveness on those with the new disease.

The rapid uptake of Epic's deterioration index is a sign of the challenges imposed by the pandemic: Normally, hospitals would take time to test the tool on hundreds of patients, refine the algorithm underlying it, and then adjust care practices to implement it in their clinics.

Covid-19 is not giving them that luxury. They need to be able to intervene to prevent patients from going downhill, or at least make sure a ventilator is available when they do. Because it is a new illness, doctors don't have enough experience to determine who is at highest risk, so they are turning to AI for help and in some cases cramming a validation process that often takes months or years into a couple of weeks.


"Nobody has amassed the numbers to do a statistically valid test of the AI," said Mark Pierce, a physician and chief medical informatics officer at Parkview Health, a nine-hospital health system in Indiana and Ohio that is using Epic's tool. "But in times like this that are unprecedented in U.S. health care, you really do the best you can with the numbers you have, and err on the side of patient care."

Epic's index uses machine learning, a type of artificial intelligence, to give clinicians a snapshot of the risks facing each patient. But hospitals are reaching different conclusions about how to apply the tool, which crunches data on patients' vital signs, lab results, and nursing assessments to assign a 0 to 100 score, with a higher score indicating an elevated risk of deterioration. It was already used by hundreds of hospitals before the outbreak to monitor hospitalized patients, and is now being applied to those with Covid-19.


At Parkview, doctors analyzed data on nearly 100 cases and found that 75% of hospitalized patients who received a score in a middle zone between 38 and 55 were eventually transferred to the intensive care unit. In the absence of a more precise measure, clinicians are using that zone to help determine who needs closer monitoring and whether a patient in an outlying facility needs to be transferred to a larger hospital with an ICU.

Meanwhile, the University of Michigan, which has seen a larger volume of patients due to a cluster of cases in that state, found in an evaluation of 200 patients that the deterioration index is most helpful for those who scored on the margins of the scale.

For about 9% of patients whose scores remained on the low end during the first 48 hours of hospitalization, the health system determined they were unlikely to experience a life-threatening event and that physicians could consider moving them to a field hospital for lower-risk patients. On the opposite end of the spectrum, it found 10% to 12% of patients who scored on the higher end of the scale were much more likely to need ICU care and should be closely monitored. More precise data on the results will be published in coming days, although they have not yet been peer-reviewed.
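As a rough, hypothetical sketch of how clinicians might act on threshold bands like these (this is not Epic's actual logic; the cutoffs are borrowed loosely from the Parkview and Michigan figures above), the triage step could look like this:

```python
# Illustrative only: mapping a 0-100 deterioration score to a monitoring
# decision using threshold bands like those described above. The cutoffs
# and actions are assumptions for this sketch, not Epic's algorithm.
def triage_action(score: float) -> str:
    if score < 38:
        return "low risk: routine monitoring; candidate for a lower-acuity setting"
    if score <= 55:
        return "middle zone: closer monitoring; consider transfer to an ICU-capable site"
    return "high risk: escalate; confirm ICU bed and ventilator availability"

for s in (20, 45, 80):
    print(s, "->", triage_action(s))
```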

Clinicians in the Michigan health system have been using the score thresholds established by the research to monitor the condition of patients during rounds and in a command center designed to help manage their care. But clinicians are also considering other factors, such as physical exams, to determine how they should be treated.

"This is not going to replace clinical judgement," said Karandeep Singh, a physician and health informaticist at the University of Michigan who participated in the evaluation of Epic's AI tool. "But it's the best thing we've got right now to help make decisions."

Stanford University has also been testing the deterioration index on Covid-19 patients, but a physician in charge of the work said the health system has not seen enough patients to fully evaluate its performance. "If we do experience a future surge, we hope that the foundation we have built with this work can be quickly adapted," said Ron Li, a clinical informaticist at Stanford.

Executives at Epic said the AI tool, which has been rolled out to monitor hospitalized patients over the past two years, is already being used to support care of Covid-19 patients in dozens of hospitals across the United States. They include Parkview, Confluence Health in Washington state, and ProMedica, a health system that operates in Ohio and Michigan.

"Our approach as Covid was ramping up over the last eight weeks has been to evaluate: does it look very similar to (other respiratory illnesses) from a machine learning perspective, and can we pick up that rapid deterioration?" said Seth Hain, a data scientist and senior vice president of research and development at Epic. "What we found is yes, and the result has been that organizations are rapidly using this model in that context."

Some hospitals that had already adopted the index are simply applying it to Covid-19 patients, while others are seeking to validate its ability to accurately assess patients with the new disease. It remains unclear how the use of the tool is affecting patient outcomes, or whether its scores accurately predict how Covid-19 patients are faring in hospitals. The AI system was initially designed to predict deterioration of hospitalized patients facing a wide array of illnesses. Epic trained and tested the index on more than 100,000 patient encounters at three hospital systems between 2012 and 2016, and found that it could accurately characterize the risks facing patients.

When the coronavirus began spreading in the United States, health systems raced to repurpose existing AI models to help keep tabs on patients and manage the supply of beds, ventilators and other equipment in their hospitals. Researchers have tried to develop AI models from scratch to focus on the unique effects of Covid-19, but many of those tools have struggled with bias and accuracy issues, according to a review published in the BMJ.

The biggest question hospitals face in implementing predictive AI tools, whether to help manage Covid-19 or advanced kidney disease, is how to act on the risk scores they provide. Can clinicians take actions that will prevent the deterioration from happening? If not, do the scores give them enough warning to respond effectively?

In the case of Covid-19, the latter question is the most relevant, because researchers have not yet identified any effective treatments to counteract the effects of the illness. Instead, they are left to deliver supportive care, including mechanical ventilation if patients are no longer able to breathe on their own.

Knowing ahead of time whether mechanical ventilation might be necessary is helpful, because doctors can ensure that an ICU bed and a ventilator or other breathing assistance is available.

Singh, the informaticist at the University of Michigan, said the most difficult part about making predictions based on Epic's system, which calculates a score every 15 minutes, is that patients' ratings tend to bounce up and down in a sawtooth pattern. A change in heart rate could cause the score to suddenly rise or fall. He said his research team found that it was often difficult to detect, or act on, trends in the data.

"Because the score fluctuates from 70 to 30 to 40, we felt like it's hard to use it that way," he said. "A patient who's high risk right now might be low risk in 15 minutes."

In some cases, he said, patients bounced around in the middle zone for days but then suddenly needed to go to the ICU. In others, a patient with a similar trajectory of scores could be managed effectively without need for intensive care.

But Singh said that in about 20% of patients it was possible to identify threshold scores that could indicate whether a patient was likely to decline or recover. In the case of patients likely to decline, the researchers found that the system could give them up to 40 hours of warning before a life-threatening event would occur.

"That's significant lead time to help intervene for a very small percentage of patients," he said. As to whether the system is saving lives, or improving care in comparison to standard nursing practices, Singh said the answers will have to wait for another day. "You would need a trial to validate that question," he said. "The question of whether this is saving lives is unanswerable right now."

Go here to read the rest:

AI used to predict Covid-19 patients' decline before proven to work - STAT

Posted in Ai | Comments Off on AI used to predict Covid-19 patients’ decline before proven to work – STAT

AI could transform open source intelligence in the developing world – C4ISRNet

Posted: at 6:45 pm

In developed nations, there is a rich trove of data that the intelligence community can and does mine.

Valuable information can be pulled from media reports, public financial information and social media posts. Websites track user activity, and smartphones are constantly gobbling up information about their users, from geolocations to search histories and more. By using artificial intelligence tools, analysts are able to make sense of this torrent of publicly available data and turn it into usable open-source intelligence, known as OSINT.

But not every part of the world produces that vast torrent of data.

"That exists in a very small handful of places throughout the world. Where that doesn't exist is basically every developing economy," said Ben Leo, chief executive of FRAYM, a geospatial data and analytics company. "[In developing economies] you are not able to get a comprehensive and representative picture of what the population looks like through the same types of techniques that are being used in the U.S."

The U.S. military and intelligence community are increasingly interested in leveraging OSINT for predictive analysis; after all, properly collected and processed OSINT can help warn regional commanders that upcoming political protests, political violence, extremist attacks or other kinds of security-related events could take place, said Leo. Notably, the Army awarded BAE Systems a $437 million task order for open source intelligence support in October.

Of course, in order to create usable and reliable OSINT, companies like FRAYM will need to create data-rich analysis in data-poor areas.

"What we do is we gobble up the very high quality, underutilized datasets that are out there. We bring in additional public datasets and we bring them all together using our AI/ML algorithms to produce this hyper-local data at scale," said Leo.

The company takes geotagged household data and feeds that into its machine learning algorithm, and from there it can then produce data down to a 1 km x 1 km grid level across dozens of characteristics, such as religion, ethnicity, language, age, education access, electricity, media consumption and more.
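FRAYM's actual pipeline is proprietary, but the basic idea of snapping geotagged household records to roughly 1 km grid cells and summarizing an attribute per cell can be sketched in a few lines. The records, field names and the coarse degree-to-kilometre conversion below are assumptions for illustration only:

```python
# Toy sketch: bin geotagged household records into ~1 km grid cells and
# compute a per-cell rate for one attribute. Not FRAYM's pipeline; the
# records and field names are hypothetical.
from collections import defaultdict

KM_PER_DEGREE = 111.0  # rough conversion; longitude spacing actually varies with latitude

records = [
    {"lat": -1.2921, "lon": 36.8219, "has_electricity": 1},
    {"lat": -1.2930, "lon": 36.8225, "has_electricity": 0},
    {"lat": -1.3050, "lon": 36.8300, "has_electricity": 1},
]

cells = defaultdict(list)
for rec in records:
    # Snap each point to a ~1 km x 1 km cell index.
    cell = (int(rec["lat"] * KM_PER_DEGREE), int(rec["lon"] * KM_PER_DEGREE))
    cells[cell].append(rec["has_electricity"])

for cell, values in cells.items():
    print(cell, "electricity access rate:", round(sum(values) / len(values), 2))
```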


FRAYM has provided its services to the U.S. government in the past, but company officials declined to name any agencies they were currently working with or would like to work with.

In the past, he explained, there were really only two ways to make predictions in data-poor areas. First, analysts could monitor events through social media. While that can help commanders understand the situation on the ground, it has very limited predictive power.

"Previously, you've been stuck in two worlds," said Leo. "You've either been stuck in a world where I'm going to monitor social media and try to apply natural language processing or other tools that will aggregate and make sense of that data so that I can try to identify a tipping point. When is the chatter getting to a certain point where it feels like something important is happening in a particular city? It's too late at that point. That might be helpful for basic situational awareness, but that is not nearly as helpful or powerful for a practitioner or combatant commander than getting way left of boom."


The other method was to basically plot out how and where events have unfolded in the past and try to find correlations that can predict future events. That, too, is very limited.

But with access to proprietary data the U.S. government does not have, Leo says his company has been able to create a unique source of OSINT for data-starved areas.

"This is the first time that this kind of very comprehensive and rich data has been brought to bear in developing markets. Basically, anywhere where there is not a data identity-centric ecosystem, FRAYM brings a tremendous set of powerful solutions to the marketplace," said Leo.

See the original post here:

AI could transform open source intelligence in the developing world - C4ISRNet

Posted in Ai | Comments Off on AI could transform open source intelligence in the developing world – C4ISRNet

This AI wrote such emo lyrics that humans thought it was My Chemical Romance – The Next Web

Posted: at 6:45 pm

If you think the songs in the charts sound like they were made by machines, you're probably wrong: an AI's lyrics would be better.

That's according to research by ticket site TickPick, which recently tested whether people prefer artificial or human songwriters.

The company scraped thousands of lyrics from genius.com and grouped them into rock, rap, country, and pop songs. The words were then fed to a text-generating machine called GPT-2, which used machine learning to create new sets of lyrics.

The system composed 100 songs in each genre, which the TickPick team turned into four original six-track albums. They then ran the lyrics through Grammarly's plagiarism checker to check that the AI songwriters weren't stealing from the artists that inspired them.
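TickPick has not published its exact setup, but the general workflow of seeding GPT-2 with a lyric and collecting generated continuations can be reproduced with the publicly available model, for example via the Hugging Face transformers library. This is a sketch of the generation step only, not the company's code:

```python
# Sketch of lyric generation with the public GPT-2 model via Hugging Face
# transformers (pip install transformers torch). Any fine-tuning or
# filtering TickPick did is not public, so this only shows generation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
seed = "Lonely days, I just can't find the will to go on"
for i, out in enumerate(generator(seed, max_length=60, num_return_sequences=3), 1):
    print(f"--- candidate {i} ---")
    print(out["generated_text"])
```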

[Read: Researcher builds AI rapper to spit sick rhymes with mixed results]

They then tested whether 1,003 music fans could spot which lyrics were made by AI and which were written by real musicians and whether they preferred the songs created by humans or machines.

In each category, the respondents were shown three lyrics written by acclaimed human artists, and one created by an AI.

When asked which verse was the most emotional, almost 40% of people said they were more touched by the AI's words than lyrics written by Adele, R.E.M., and Johnny Cash.

And who can blame them? Only a heart of stone would be unmoved by this tear-jerker:

"I stand alone and think it's better to be alone. Lonely days, I just can't find the will to go on. I'm in this state, and my eyes show me that I've been taken."

After wiping tears from their eyes, the respondents were asked which songwriter was the most creative.

Again, the AI smashed the so-called legends, attracting 65% of votes for this inspirational poetry:

"When clouds part to reveal a man in the wilderness outside the pale light of morning. A secret within the door can hear him say. The clouds will reveal what I mean."

Humanity's last chance to overcome the machines came in the overall favorite category, and the AI was finally defeated. It nonetheless deserves applause for this imaginative effort:

"I got my rig in the back of my Beemer. Professional when I graze, I'm professional when I argue. 40 glass, I'm laughing at that s***, Ima be roaring at that s***"

The experiment also revealed which genres are hardest for AI songwriters to master.

The respondents struggled to spot which pop and country lyrics were written by an AI. And its rock song was so emo that they thought it was written by My Chemical Romance or Nirvana.

However, they were less convinced by artificial rapper Young AI. Almost 36% of them recognized that a human did not create these bars:

"In the city at night, wild stars appear. From far away, there's a quiet storm. About to collapse, I'm in a rush to buy a house. The disappointment, just too strong to overcome. My ego and my consciousness got me out the track. So I search for answers, but there aren't none."

The researchers believe this is because the unusual syntax of rap songs is hard for algorithms to interpret, which should keep rappers safe in their jobs for now. But for rockers, pop stars, and country singers, it might be time to pass their mics to the machines.

Published April 24, 2020 14:26 UTC

Continue reading here:

This AI wrote such emo lyrics that humans thought it was My Chemical Romance - The Next Web

Posted in Ai | Comments Off on This AI wrote such emo lyrics that humans thought it was My Chemical Romance – The Next Web

The buzzword is AI – The Hindu

Posted: at 6:45 pm

Wake up and rush to a tuition centre at 6 a.m., get back for a hurried breakfast, off to school to cover portions, tests and exams, back home for a quick snack and off to another tuition class. Manage to squeeze in some screen and game time with your friends in between. Does this routine sound familiar? But surely, come coronavirus lockdown, life has changed. In many cases, students have online tutoring sessions for their curricular learning.

A Dell Technologies report claims that 85% of the jobs that will exist in 2030 have not been invented yet. Two months back, we did not know that we would be under such a strict lockdown. As life around us is changing, it is important for students to use this time to focus on computational thinking methods, work on better ways to explore their innate creativity and articulate their ideas better.

Learning modules

Can you imagine a situation where you can spend time playing games on the computer and convince your parents you are actually learning artificial intelligence (AI), neural networks and writing algorithms? You are in luck. Atal Innovation Mission (AIM), a flagship initiative of the National Institution for Transforming India (NITI Aayog), Government of India, has released #Tinkerfromhome learning modules on AI and gaming.

AI is a new buzzword in the higher education sector. It is not just for software engineers but is a must-have skill for all. We can see AI applications in agriculture, medicine and the automotive sector. The #TinkerfromHome module, suitable for students from class VI to XII, teaches concepts of AI in byte-sized units. These are good primers to learn AI, dabble in programming and create gaming modules. The modules have been prepared in collaboration with NASSCOM and are available on the AIM website under the Atal Tinkering Lab curriculum. Besides, video tutorials and live YouTube sessions are available on AIM's YouTube channel.

The do-and-learn approach using tutorials and activities is captivating. The weblinks that give an opportunity to experience AI in action are fun. Simple explanations of complicated terms like neural networks, binary programming and algebra are the winning features of the module.

The self-paced modules can be done by students on their own. Students can take their first steps in programming with Scratch and Python. Scratch is a visual programming software that helps students to create their own animations and visual stories. Assignments are an opportunity to demonstrate creativity. Practice sessions are often games you can play online with friends. The module gives the students just enough background in algebra, probability and statistics to understand machine learning. It is also wrapped up with a note on the ethics of using AI, with a few real-life scenarios.

The modules were released to NITI Aayog's Atal Tinkering Lab students initially but are available to all students and schools from the AIM website. Schools like Kaligi Ranganathan Montford Matriculation School, Chennai, have introduced these modules through their class WhatsApp groups. Sai Praseedha, a class VIII student, has completed the module in record time. Her assignment, in which she has imagined an AI pen, was featured on the ATL Facebook page.

The COVID lockdown, though it has been tough on the economy and life in general, has ensured more focus on online education. A focus on recognising creativity and new ideas will surely develop the new-age students that we are all looking for.

The writer is a Mentor of Change with Atal Innovation Mission.


Originally posted here:

The buzzword is AI - The Hindu

Posted in Ai | Comments Off on The buzzword is AI – The Hindu

AI Weekly: AI models illustrate the importance of continued social distancing – VentureBeat

Posted: at 6:45 pm

As the COVID-19 pandemic rages on unabated in countries around the world, there's a shared desire among those forced to shelter in place to see the extent to which social distancing is slowing the disease's spread. It's understandable: collateral damage from government-imposed business closures threatens to devastate entire industries. As of this week, 26 million Americans have filed unemployment claims, according to the U.S. Bureau of Labor Statistics, and the International Monetary Fund predicts a global financial crisis rivaling the Great Depression.

Fortunately, a preprint study published by researchers at the University of Texas, the Southwest Research Institute, and the University of Texas Health Science Center in San Antonio strongly implies that quarantining and physical distancing are having the intended effects. Using a hybrid AI system dubbed SIRNet and several epidemiological models, which were trained on smartphone location data along with population-weighted density and other data points from the startup SafeGraph, the World Health Organization, the U.S. Centers for Disease Control and Prevention, and elsewhere, the coauthors claim they managed to accurately predict the outcomes of various social distancing policies.

People can check their state's projections on a website published by the University of Texas COVID-19 Modeling Consortium.

The country-, state-, and county-level location data from tens of millions of smartphones ingested by the researchers' system was used to predict contact rate, a function of population density as well as movement and interactions among people in a region. This was plotted against COVID-19 case count data, specifically a time series set that captured active, recovered, and fatal cases of COVID-19 at varying levels of geographical granularity, to which the researchers applied a 10-day lag time to account for the delay between infectiousness and receiving a positive test confirmation.
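SIRNet itself is a hybrid neural/epidemiological model, but the underlying intuition of tying a mobility-derived contact rate to an epidemic curve can be illustrated with a plain SIR model in a few lines. The parameter values below are arbitrary assumptions for illustration, not figures from the study:

```python
# Toy SIR model with the contact rate scaled by a mobility factor, to
# illustrate the general relationship the study examines. Not SIRNet;
# beta, gamma and the population size are arbitrary assumptions.
def peak_infections(mobility: float, days: int = 180, population: float = 1_000_000) -> float:
    beta = 0.3 * mobility   # assumed transmission rate at 100% mobility
    gamma = 0.1             # assumed recovery rate (~10-day infectious period)
    s, i, r = population - 1.0, 1.0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak

for m in (1.0, 0.75, 0.5, 0.25):
    print(f"mobility {m:.0%}: peak infections ~ {peak_infections(m):,.0f}")
```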

The researchers report that, based on projected forecasts three weeks into the future (the system's maximum), only a continuation of quarantine-level mobility will result in low COVID-19 case counts. If restrictions were to be reduced by around 50%, the system projects that some communities would reach the edge of stable peak cases, where the death curve would either stay at a low peak or quickly, sharply increase. And if 75% of a population were able to move about as freely as they normally would, the system predicts the result would be a slightly delayed peak approximately two-thirds of the maximum peak during 100% mobility (except in South Korea).

In Bexar County, Texas, where mobility as of April 11 was approximately 50% of normal, relaxing social distancing measures could result in a runaway growth in deaths and hospitalizations, the system shows. By contrast, in King County, Washington, where heavy mobility restrictions remain in place, the system predicts that continuing those measures would depress the number of new deaths to close to zero by June.

The system agrees with an MIT model detailed in an early April preprint paper, which found that in places like South Korea, where there was immediate government intervention, the virus's spread plateaued more quickly. Trained on data collected from Wuhan (China), Italy, South Korea, and the U.S. after the 500th case was recorded in each region, it learned to predict patterns in the infection spread, drawing a correlation between quarantine measures and a reduction in the virus's effective reproduction number.

A separate model, one published earlier this year by researchers at Microsoft, the Indian Institute of Technology, and TCS Research (the R&D division of Tata Consultancy Services), learned policies automatically as a function of disease parameters like infectiousness, gestation period, duration of symptoms, probability of death, population density, and movement propensity. Over the course of 75 simulations, each lasting 52 weeks (364 days), it showed that governments that locked down 5% to 10% of communities experienced a lower peak of COVID-19 infections.

Elsewhere, an international team of researchers used human mobility data supplied by Baidu to elucidate the role of mobility in COVID-19 transmission in Chinese cities. They found that, following the implementation of control and containment measures, the correlation between the geographic distribution of COVID-19 cases and mobility dropped and growth rates became negative in most locations, indicating that the measures mitigated the spread of COVID-19.

As encouraging as the predictions might be, it's important to keep in mind that even the best algorithms, like those developed by HealthMap, Metabiota, and BlueDot, which were among the first to accurately identify the spread of COVID-19, can only learn patterns from historical data. As the Brookings Institution noted in a recent report, while some epidemiological models employ AI, epidemiologists largely work with statistical models that incorporate subject-matter expertise.

"[A]ccuracy alone does not indicate enough to evaluate the quality of predictions," wrote the Brookings report's author. "If not carefully managed, an AI algorithm will go to extraordinary lengths to find patterns in data that are associated with the outcome it is trying to predict. However, these patterns may be totally nonsensical and only appear to work during development."

Nevertheless, the models provide a preponderance of evidence in support of quarantining and distancing policies even as those policies come under fire from protesters.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

Excerpt from:

AI Weekly: AI models illustrate the importance of continued social distancing - VentureBeat

Posted in Ai | Comments Off on AI Weekly: AI models illustrate the importance of continued social distancing – VentureBeat

Someone taught an AI to draw dicks after feeding it 25,000 doodles of penises – PC Gamer

Posted: at 6:45 pm

Back in 2016, Google wowed everyone with an artificial intelligence called Quick, Draw! that can predict what you are drawing, unless that drawing is a penis. That glaring (and presumably intentional) omission was soon rectified by Mozilla, who hired design firm Moniker to create a riff on Quick, Draw! by making an AI that could detect if you were drawing a penis and then chastise you for it.

At the time it was a funny joke that also highlighted the subtle ways big tech companies can police the behavior of their users. But the data used to train that AI to recognize crudely-drawn male genitalia, now containing over 25,000 noodle doodles, was made public. So, naturally, someone used those 25,000 dick drawings to teach an AI how to draw penises of its own.

It's called Dick-RNN (RNN stands for recurrent neural network), and it's pretty damn good at what it does. Created by a redditor named RichardRNN (heh), Dick-RNN is capable of drawing dicks of all shapes and sizes with all sorts of fun details, like pubic hair.

You can try it for yourself over on Dick-RNN's GitHub page, where there are four separate demos for this AI to show off its uncanny ability to crudely recreate male genitalia. The first demo, called the Main Dick Demo, works by asking you to first draw a pair of testicles that the AI will then complete. The next two demos, Predict Multiple Dicks and Predict Single Dick, work mostly the same way but with some extra options. I'm more partial to the final demo, though, where Dick-RNN just goes wild drawing dicks for as long as you have the window open.

Is this a very weird thing to program? Absolutely, but RichardRNN directly cites Moniker's original dick-doodle-shaming AI as his inspiration. "I also believe that 'Doodling a penis is a light-hearted symbol for a rebellious act' and also 'think our moral compasses should not be in the hands of big tech,'" he writes on the GitHub page.

If you're fluent in JavaScript, you can dive into the code yourself to learn how RichardRNN built this neural network, or you can just have fun watching it do its thing.

Take that, Google.

See more here:

Someone taught an AI to draw dicks after feeding it 25,000 doodles of penises - PC Gamer

Posted in Ai | Comments Off on Someone taught an AI to draw dicks after feeding it 25,000 doodles of penises – PC Gamer

Facebook, AWS team up to produce open-source PyTorch AI libraries, grad student says he successfully used GPT-2 to write his homework…. – The…

Posted: at 6:45 pm

Roundup Hello El Reg readers. If you're stuck inside, and need some AI news to soothe your soul, here's our weekly machine-learning roundup.

Nvidia GTC virtual keynote coming to YouTube: Nvidia cancelled its annual GPU Technology Conference in Silicon Valley in March over the ongoing coronavirus pandemic. The keynote speech was promised to be screened virtually, and then that got canned, too. Now, it's back.

CEO Jensen Huang will present his talk on May 14 on YouTube at 0600 PT (1300 UTC). Yes, that's early for people on the US West Coast. And no, Jensen isn't doing it live at that hour: the video is prerecorded.

Still, graphics hardware and AI fans will probably want to keep an eye on the presentation. Huang is expected to unveil specs for a new GPU architecture, reportedly named the A100, which is expected to be more powerful than its Tesla V100 chips. You'll be able to watch the keynote when it comes out on Nvidia's YouTube channel, here.

Also, Nvidia has partnered up with academics at King's College London to release MONAI, an open-source AI framework for medical imaging.

The framework packages together tools to help researchers and medical practitioners process image data for computer vision models built with PyTorch. These include things like segmenting features in 3D scans or classifying objects in 2D.

"Researchers need a flexible, powerful and composable framework that allows them to do innovative medical AI research, while providing the robustness, testing and documentation necessary for safe hospital deployment," said Jorge Cardoso, chief technology officer of the London Medical Imaging & AI Centre for Value-based Healthcare. "Such a tool was missing prior to Project MONAI."

You can play with MONAI on GitHub here, or read about it more here.

New PyTorch libraries for ML production: Speaking of PyTorch, Facebook and AWS have collaborated to release a couple of open-source goodies for deploying machine-learning models.

There are now two new libraries: TorchServe and TorchElastic. TorchServe provides tools to manage and perform inference with PyTorch models. It can be used in any cloud service, and you can find the instructions on how to install and use it here.

TorchElastic allows users to train large models over a cluster of compute nodes with Kubernetes. The distributed training means that even if some servers go down for maintenance or random network issues, the service isn't completely interrupted. It can be used on any cloud provider that supports Kubernetes. You can read how to use the library here.

"These libraries enable the community to efficiently productionize AI models at scale and push the state of the art on model exploration as model architectures continue to increase in size and complexity," Facebook said this week.

MIT stops working with blacklisted AI company: MIT has discontinued its five-year research collaboration with iFlyTek, a Chinese AI company the US government flagged as being involved in the ongoing persecution of Uyghur Muslims in China.

Academics at the American university made the decision to cut ties with the controversial startup in February. iFlyTek is among 27 other names that are on the US Bureau of Industry and Security's Entity List, which forbids American organizations from doing business with them without Uncle Sam's permission. Breaking the rules will result in sanctions.

"We take very seriously concerns about national security and economic security threats from China and other countries, and human rights issues," Maria Zuber, vice president of research at MIT, said, Wired first reported.

MIT entered a five-year deal with iFlyTek in 2018 to collaborate on AI research focused on human-computer interaction, speech recognition, and computer vision.

The relationship soured when it was revealed iFlyTek was helping the Chinese government build a mass automated voice recognition and monitoring system, according to the non-profit Human Rights Watch. That technology was sold to police bureaus in the provinces of Xinjiang and Anhui, where the majority of the Uyghur population in China resides.

OpenAI's GPT-2 writes university papers: A cheeky master's degree student admitted this week to using OpenAI's giant language model GPT-2 to help write his essays.

The graduate student, named only as Tiago, was interviewed by Futurism. We're told that although he passed his assignments using the machine-learning software, he said the achievement was down to failings within the business school rather than to the prowess of state-of-the-art AI technology.

In other words, his science homework wasn't too rigorously marked in this particular unnamed school, allowing him to successfully pass off machine-generated write-ups of varying quality as his own work, and GPT-2's output does vary in quality, depending on how you use it.

"You couldn't write an essay on science that could be anywhere near convincing using the methods that I used," he said. "Many of the courses that I take in business school wouldn't make it possible as well.

"However, some particular courses are less information-dense, and so if you can manage to write a few pages with some kind of structure and some kind of argument, you can get through. Its not that great of an achievement, I would say, for GPT-2.

Thanks to the Talk to Transformer tool, anyone can use GPT-2 on a web browser. Tiago would feed opening sentences to the model, and copy and paste the machine-generated responses to put in his essay.

GPT-2 is pretty convincing at first: it has a good grasp of grammar, and there is some level of coherency in its opening paragraphs when responding to a statement or question. Its output quality begins to fall apart, becoming incoherent or absurd, as it rambles in subsequent paragraphs. It also doesn't care about facts, which is why it won't be good as a collaborator for subjects such as history and science.


Read the rest here:

Facebook, AWS team up to produce open-source PyTorch AI libraries, grad student says he successfully used GPT-2 to write his homework.... - The...

Posted in Ai | Comments Off on Facebook, AWS team up to produce open-source PyTorch AI libraries, grad student says he successfully used GPT-2 to write his homework…. – The…

China’s military developing 6G internet to power AI army of the future – Express.co.uk

Posted: at 6:45 pm

The authoritarian regime is concocting a new technology that will make 5G look like yesterday's news: in the depths of the Ministry of Science and Technology, chief telecoms engineer Xin Su is working on a close-to-zero-latency 6G internet network. The network is being produced solely for the army that supports Beijing's centrally controlled CCP. Consumers may benefit by 2030 at the earliest.

This 6G technology has the capability to put China ahead of the US for the first time, because of 6G's vastly superior bandwidth, extremely low latency, and high connectivity properties.

The future of combat will be autonomous and reliant on data-driven artificial intelligence.

US 5G autonomous drones will therefore be outmatched by China's 6G alternatives.

An article titled "If 6G Were to be Used in the Future Battlefield" was published by the PLA's China National Defence News on Monday 13 April.

The article said 6G had a distinct technological edge and rich potential for military applications when compared to 5G.

READ MORE: South China Sea: US military operations bringing China to brink of war

Better internet access, high transmission rates, low delay and broad bandwidth would deliver military advances, such as gathering intelligence, visualising combat operations and delivering precise logistical support.

The article added: "Based on the 6G network, the commander could make the right decisions quickly after the control-and-command network mined, learned and analysed vast data from the ground."

The China National Defence News report said that battle units could get highly specific and instantaneous information on troop locations and equipment, allowing the military to make tailored logistic plans.

China officially started researching the 6G telecoms technology in early November, according to a Ministry of Science and Technology notice. The ministry announced that it had two teams overseeing 6G research.

More here:

China's military developing 6G internet to power AI army of the future - Express.co.uk

Posted in Ai | Comments Off on China’s military developing 6G internet to power AI army of the future – Express.co.uk

How governments can build trust in AI while fighting COVID-19 – World Economic Forum

Posted: at 6:45 pm

AI has become a key weapon in tracking and tracing cases during this pandemic. Deploying those technologies has sometimes meant balancing the need to conquer the virus with the conflicting need to protect individual privacy. As the initial crisis gives way to long-term policies and public health practices, governments will need to build trust in AI to ensure future protections can be deployed and maintained.

AI's surveillance superpowers are being used to help break the chains of viral transmission across the globe. Russia, for instance, maintains COVID-19 quarantines through large-scale monitoring of citizens with CCTV cameras and facial recognition.


China is using AI-powered drones and robots to detect population movement and social gatherings, and to identify individuals with a fever or who aren't wearing masks.

Meanwhile, Israel is using AI-driven contact tracing algorithms to send citizens personalised text messages, instructing them to isolate after being near someone with a positive diagnosis.

The fuel for much of this life-saving AI is personal data. In fact, South Korea's high-octane blend of data from credit card payments, mobile location, CCTV, facial scans, temperature monitors and medical records has been a key part of a broader strategy to trace contacts, test aggressively and enforce targeted lockdowns. The combination of these effects has helped the country flatten its curve. Late into its outbreak, the country still had not suffered more than eight deaths on any one day.

Contact tracing app TraceTogether, released by the Singapore government to curb the spread of COVID-19

Image: REUTERS/Edgar Su

Despite these benefits, we must still approach privacy seriously, carefully and pragmatically, even as citizens might be more willing than ever to forgo their civil liberties, and data protection regulators begrudgingly concede that extraordinary times can outweigh even the strongest of privacy rights.

Where privacy is curtailed, it's important that all dimensions of AI ethics are considered to maintain public trust in its use over the medium to long term. If organizations hope to ensure the public's continued participation, they must ensure the data being willingly offered in the spirit of offering a social good is treated with the utmost responsibility.

As a vaccine is at least 18 months away, long-term solutions will be needed that assist with tracking efforts while preserving public trust and cooperation.

"Where privacy is curtailed, its important that all dimensions of AI ethics are considered to maintain public trust in its use over the medium to long-term."

To be sure, some governments are working with Telecoms and Big Tech to access aggregate anonymised location data showing trends of movement. Additionally, Google and Apple recently agreed to an unprecedented cooperation to allow anonymous (and voluntary) global contact tracing.

Still, opt-in initiatives can create gaps and vulnerabilities. For instance, Singapore reported over one million people had downloaded its TraceTogether app. However, at least 75% of the country's 5.5 million population needs to sign up for the app to be effective.

Governments must put in place appropriate AI governance architectures that enable the creation of long-term solutions to conquer COVID-19 and other potential health crises. These include:

These are extraordinary times that call for extraordinary measures, yes. But governments and businesses must learn how to manage privacy and trust to help fight this crisis in the months ahead and other public health crises to come.

Appropriate ethical AI architecture can ensure that we leverage the best that AI can offer to the present situation without exploiting an anxious public's desire to find fast solutions. Good AI governance was needed long before COVID-19 arrived. Now, it's that much more critical.


View post:

How governments can build trust in AI while fighting COVID-19 - World Economic Forum

Posted in Ai | Comments Off on How governments can build trust in AI while fighting COVID-19 – World Economic Forum

AI Used to Monitor Health of Coral Reefs and Detect Ocean Trash Pollution – Unite.AI

Posted: at 6:45 pm

Along with unsupervised machine learning and supervised learning, another common form of AI creation is reinforcement learning. Beyond regular reinforcement learning, deep reinforcement learning can lead to astonishingly impressive results, thanks to the fact that it combines the best aspects of both deep learning and reinforcement learning. Let's take a look at precisely how deep reinforcement learning operates. Note that this article won't delve too deeply into the formulas used in deep reinforcement learning; rather, it aims to give the reader a high-level intuition for how the process works.

Before we dive into deep reinforcement learning, it might be a good idea to refresh ourselves on how regular reinforcement learning works. In reinforcement learning, goal-oriented algorithms are designed through a process of trial and error, optimizing for the action that leads to the best result, i.e. the action that gains the most reward. When reinforcement learning algorithms are trained, they are given rewards or punishments that influence which actions they will take in the future. Algorithms try to find a set of actions that will provide the system with the most reward, balancing both immediate and future rewards.

Reinforcement learning algorithms are very powerful because they can be applied to almost any task, being able to flexibly and dynamically learn from an environment and discover possible actions.

Photo: Megajuice via Wikimedia Commons, CC 1.0 (https://commons.wikimedia.org/wiki/File:Reinforcement_learning_diagram.svg)

When it comes to deep reinforcement learning, the environment is typically represented with images. An image is a capture of the environment at a particular point in time. The agent must analyze the images and extract relevant information from them, using the information to inform which action they should take. Deep reinforcement learning is typically carried out with one of two different techniques: value-based learning and policy-based learning.

Value-based learning techniques make use of algorithms and architectures like convolutional neural networks and Deep Q-Networks. These algorithms operate by converting the image to greyscale and cropping out unnecessary parts of the image. Afterward, the image undergoes various convolutions and pooling operations, extracting the most relevant portions of the image. The important parts of the image are then used to calculate the Q-value for the different actions the agent can take. Q-values are used to determine the best course of action for the agent. After the initial Q-values are calculated, backpropagation is carried out so that the most accurate Q-values can be determined.
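As a minimal sketch of the kind of network this describes (layer sizes here are illustrative assumptions, not the exact published DQN architecture), a convolutional Q-network in PyTorch maps a greyscale frame to one Q-value per action:

```python
# Minimal convolutional Q-network: one greyscale frame in, one Q-value per
# possible action out. Layer sizes are illustrative only.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),   # one Q-value per action
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.head(self.conv(frame))

q_net = QNetwork(n_actions=4)
frame = torch.zeros(1, 1, 84, 84)   # batch of one 84x84 greyscale frame
print(q_net(frame).shape)           # torch.Size([1, 4])
```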

Policy-based methods are used when the number of possible actions that the agent can take is extremely high, which is typically the case in real-world scenarios. Situations like these require a different approach because calculating the Q-values for all the individual actions isn't practical. Policy-based approaches operate without calculating function values for individual actions. Instead, they learn the policy directly, often through techniques called Policy Gradients.

Policy gradients operate by receiving a state and calculating probabilities for actions based on the agent's prior experiences. An action is then sampled from this distribution. This process is repeated until the end of the episode, at which point the rewards are given to the agent. Once the agent has received its rewards, the network's parameters are updated with backpropagation.
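A compact sketch of that loop, with a toy state, placeholder rewards, and a REINFORCE-style update at the end of the episode (the environment, sizes, and rewards here are stand-ins for illustration):

```python
# Sketch of the policy-gradient loop described above: a small policy network
# turns a state into action probabilities, actions are sampled, and after the
# episode the parameters are nudged toward actions that led to reward.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

log_probs, rewards = [], []
state = torch.zeros(4)                      # placeholder initial state
for t in range(20):                         # one short toy episode
    probs = torch.softmax(policy(state), dim=-1)
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()                  # sample from the distribution
    log_probs.append(dist.log_prob(action))
    rewards.append(1.0)                     # placeholder reward from the env
    state = torch.randn(4)                  # placeholder next state

episode_return = sum(rewards)               # undiscounted return of the episode
loss = -torch.stack(log_probs).sum() * episode_return   # maximize expected return
optimizer.zero_grad()
loss.backward()
optimizer.step()
```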

Because Q-learning is such a large part of the deep reinforcement learning process, let's take some time to really understand how the Q-learning system works.

The Markov Decision Process

A Markov decision process. Photo: waldoalvarez via Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Markov_Decision_Process.svg)

In order for an AI agent to carry out a series of tasks and reach a goal, the agent must be able to deal with a sequence of states and events. The agent will begin at one state and it must take a series of actions to reach an end state, and there can be a massive number of states existing between the beginning and end states. Storing information regarding every state is impractical or impossible, so the system must find a way to preserve just the most relevant state information. This is accomplished through the use of a Markov Decision Process, which preserves just the information regarding the current state and the previous state. Every state follows a Markov property, which tracks how the agent changes from the previous state to the current state.
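Formally, a Markov Decision Process is the tuple (S, A, P, R, γ), and the Markov property says the next state depends only on the current state and action, not on the full history:

```latex
\[
P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \dots, s_0, a_0)
  = P(s_{t+1} \mid s_t, a_t).
\]
```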

Deep Q-Learning

Once the model has access to information about the states of the learning environment, Q-values can be calculated. The Q-values are the total reward given to the agent at the end of a sequence of actions.

The Q-values are calculated with a series of rewards. There is an immediate reward, calculated at the current state and depending on the current action. The Q-value for the subsequent state is also calculated, along with the Q-value for the state after that, and so on until all the Q-values for the different states have been calculated. There is also a Gamma parameter that is used to control how much weight future rewards have on the agent's actions. Policies are typically calculated by randomly initializing Q-values and letting the model converge toward the optimal Q-values over the course of training.
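In symbols, the quantity being learned is the discounted return, with γ controlling the weight of future rewards, and the Q-values are nudged toward it with the standard Q-learning update (learning rate α):

```latex
\[
G_t = r_t + \gamma\, r_{t+1} + \gamma^{2} r_{t+2} + \dots
    = \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k},
\]
\[
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \Bigl[ r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Bigr].
\]
```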

Deep Q-Networks

One of the fundamental problems involving the use of Q-learning for reinforcement learning is that the amount of memory required to store data rapidly expands as the number of states increases. Deep Q-Networks solve this problem by combining neural network models with Q-values, enabling an agent to learn from experience and make reasonable guesses about the best actions to take. With deep Q-learning, the Q-value functions are estimated with neural networks. The neural network takes the state in as the input data, and the network outputs a Q-value for each of the different possible actions the agent might take.

Deep Q-learning is accomplished by storing all the past experiences in memory, calculating maximum outputs for the Q-network, and then using a loss function to calculate the difference between current values and the theoretical highest possible values.
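Written out, the loss typically used for this comparison is the squared temporal-difference error over transitions sampled from memory, where θ are the prediction network's weights and θ⁻ those of a separate, periodically updated target network (discussed below):

```latex
\[
L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \text{memory}}
  \Bigl[ \bigl( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \bigr)^{2} \Bigr].
\]
```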

Deep Reinforcement Learning vs Deep Learning

One important difference between deep reinforcement learning and regular deep learning is that in the case of the former the inputs are constantly changing, which isn't the case in traditional deep learning. How can the learning model account for inputs and outputs that are constantly shifting?

Essentially, to account for the divergence between predicted values and target values, two neural networks can be used instead of one. One network estimates the target values, while the other network is responsible for the predictions. The parameters of the target network are updated as the model learns, after a chosen number of training iterations have passed. The outputs of the respective networks are then joined together to determine the difference.
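A minimal sketch of one such training step in PyTorch, with an online (prediction) network, a target network that is refreshed only occasionally, and a toy replay memory (the data, sizes, and update schedule here are placeholders, not a full training loop):

```python
# One illustrative DQN training step with the two networks described above.
import copy
import random
import torch
import torch.nn as nn

GAMMA = 0.99
online = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
target = copy.deepcopy(online)              # target network starts as a copy
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)

# Replay memory of (state, action, reward, next_state) transitions (toy data).
memory = [(torch.randn(4), random.randrange(2), random.random(), torch.randn(4))
          for _ in range(100)]

batch = random.sample(memory, 32)
states = torch.stack([s for s, _, _, _ in batch])
actions = torch.tensor([a for _, a, _, _ in batch])
rewards = torch.tensor([r for _, _, r, _ in batch], dtype=torch.float32)
next_states = torch.stack([ns for _, _, _, ns in batch])

q_pred = online(states).gather(1, actions.unsqueeze(1)).squeeze(1)
with torch.no_grad():                       # target values are held fixed
    q_target = rewards + GAMMA * target(next_states).max(dim=1).values

loss = nn.functional.mse_loss(q_pred, q_target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# Every N steps the target network is refreshed from the online network:
target.load_state_dict(online.state_dict())
```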

Policy-based learning approaches operate differently than Q-value based approaches. While Q-value approaches create a value function that predicts rewards for states and actions, policy-based methods determine a policy that will map states to actions. In other words, the policy function that selects for actions is directly optimized without regard to the value function.

Policy Gradients

A policy for deep reinforcement learning falls into one of two categories: stochastic or deterministic. A deterministic policy is one where states are mapped to actions, meaning that when the policy is given information about a state an action is returned. Meanwhile, stochastic policies return a probability distribution for actions instead of a single, discrete action.

Deterministic policies are used when there is no uncertainty about the outcomes of the actions that can be taken. In other words, when the environment itself is deterministic. In contrast, stochastic policy outputs are appropriate for environments where the outcome of actions is uncertain. Typically, reinforcement learning scenarios involve some degree of uncertainty so stochastic policies are used.

Policy gradient approaches have a few advantages over Q-learning approaches, as well as some disadvantages. In terms of advantages, policy-based methods converge on optimal parameters quicker and more reliably. The policy gradient can just be followed until the best parameters are determined, whereas with value-based methods small changes in estimated action values can lead to large changes in actions and their associated parameters.

Policy gradients work better for high dimensional action spaces as well. When there is an extremely high number of possible actions to take, deep Q-learning becomes impractical because it must assign a score to every possible action for all time steps, which may be impossible computationally. However, with policy-based methods, the parameters are adjusted over time and the number of possible best parameters quickly shrinks as the model converges.

Policy gradients are also capable of implementing stochastic policies, unlike value-based policies. Because stochastic policies produce a probability distribution, an exploration/exploitation trade-off does not need to be implemented.

In terms of disadvantages, the main disadvantage of policy gradients is that they can get stuck while searching for optimal parameters, focusing only on a narrow, local set of optimum values instead of the global optimum values.

Policy Score Function

The policies used to optimize a model's performance aim to maximize a score function J(θ). If J(θ) is a measure of how good our policy is for achieving the desired goal, we can find the values of θ that give us the best policy. First, we need to calculate an expected policy reward. We estimate the policy reward so we have an objective, something to optimize towards. The Policy Score Function is how we calculate the expected policy reward, and there are different Policy Score Functions that are commonly used, such as: start values for episodic environments, the average value for continuous environments, and the average reward per time step.
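The three common score functions mentioned above can be written as follows, where d^{π_θ} is the stationary distribution of states visited under the policy π_θ:

```latex
\[
J_{\text{start}}(\theta) = \mathbb{E}_{\pi_\theta}\!\left[ V^{\pi_\theta}(s_0) \right],
\qquad
J_{\text{avg}\,V}(\theta) = \sum_{s} d^{\pi_\theta}(s)\, V^{\pi_\theta}(s),
\qquad
J_{\text{avg}\,r}(\theta) = \sum_{s} d^{\pi_\theta}(s) \sum_{a} \pi_\theta(a \mid s)\, R(s, a).
\]
```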

Policy Gradient Ascent

Gradient ascent aims to move the parameters until they are at the place where the score is highest. Photo: Public Domain (https://commons.wikimedia.org/wiki/File:Gradient_ascent_(surface).png)

After the desired Policy Score Function is used, and an expected policy reward calculated, we can find a value for the parameter θ which maximizes the score function. In order to maximize the score function J(θ), a technique called gradient ascent is used. Gradient ascent is similar in concept to gradient descent in deep learning, but we are optimizing for the steepest increase instead of decrease. This is because our score is not error, like in many deep learning problems. Our score is something we want to maximize. An expression called the Policy Gradient Theorem is used to estimate the gradient with respect to the policy parameters θ.
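The Policy Gradient Theorem gives the gradient of the score without differentiating through the state distribution, and gradient ascent then climbs it with learning rate α:

```latex
\[
\nabla_\theta J(\theta)
  = \mathbb{E}_{\pi_\theta}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\; Q^{\pi_\theta}(s, a) \right],
\qquad
\theta \leftarrow \theta + \alpha\, \nabla_\theta J(\theta).
\]
```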

In summary, deep reinforcement learning combines aspects of reinforcement learning and deep neural networks. Deep reinforcement learning is done with two different techniques: Deep Q-learning and policy gradients.

Deep Q-learning methods aim to predict which rewards will follow certain actions taken in a given state, while policy gradient approaches aim to optimize the action space, predicting the actions themselves. Policy-based approaches to deep reinforcement learning are either deterministic or stochastic in nature. Deterministic policies map states directly to actions while stochastic policies produce probability distributions for actions.

See the article here:

AI Used to Monitor Health of Coral Reefs and Detect Ocean Trash Pollution - Unite.AI

Posted in Ai | Comments Off on AI Used to Monitor Health of Coral Reefs and Detect Ocean Trash Pollution – Unite.AI