Monthly Archives: July 2021

How the National Science Foundation is taking on fairness in AI – Brookings Institution

Posted: July 23, 2021 at 4:14 am

Most of the public discourse around artificial intelligence (AI) policy focuses on one of two perspectives: how the government can support AI innovation, and how the government can deter its harmful or negligent use. Yet there can also be a role for government in making it easier to use AI beneficially; in this niche, the National Science Foundation (NSF) has found a way to contribute. Through a grant-making program called Fairness in Artificial Intelligence (FAI), the NSF is providing $20 million in funding to researchers working on difficult ethical problems in AI. The program, a collaboration with Amazon, has now funded 21 projects in its first two years, with an open call for applications in its third and final year. This is an important endeavor, furthering a trend of federal support for the responsible advancement of technology, and the NSF should continue this line of funding for ethical AI.

The FAI program is an investment in what the NSF calls use-inspired research, where scientists attempt to address fundamental questions inspired by real-world challenges and pressing scientific limitations. Use-inspired research is an alternative to traditional basic research, which attempts to make fundamental advances in scientific understanding without necessarily pursuing a specific practical goal. The NSF is better known for basic research in computer science, where it provides 87% of all federal basic research funding. Consequently, the FAI program is a relatively small portion of the NSF's total investment in AI: around $3.3 million per year, considering that Amazon covers half of the cost. In total, the NSF requested $868 million in AI spending, about 10% of its entire budget for 2021, and Congress approved every penny. Notably, this is a broad definition of AI spending that includes many applications of AI to other fields, rather than fundamental advances in AI itself; spending on the latter is likely closer to $100 million or $150 million, by rough estimation.

The FAI program is specifically oriented towards the ethical principle of fairness (more on this choice in a moment). While this may seem unusual, the program is a continuation of prior government-funded research into the moral implications and consequences of technology. Beginning in the 1970s, the federal government actively shaped bioethics research in response to public outcry following the AP's reporting on the Tuskegee Syphilis Study. While the original efforts may have been reactionary, they precipitated decades of work towards improving the biomedical sciences. An extensive line of research oriented towards the ethical, legal, and social implications of genomics was launched alongside the Human Genome Project in 1990. Starting in 2018, the NSF funded 21 exploratory grants on the impact of AI on society, a precursor to the current FAI program. Today, it's possible to draw a rough trend line through these endeavors, in which the government has grown concerned first with pure science, then with the ethics of the scientific process, and now with the ethical outcomes of the science itself. This is a positive development, and one worth encouraging.

NSF made a conscious decision to focus on fairness rather than other prevalent themes like trustworthiness or human-centered design. Dr. Erwin Gianchandani, an NSF deputy assistant director, has described four categories of problems in FAI's domain, and each can easily be tied to present and ongoing challenges facing AI. The first category is focused on the many conflicting mathematical definitions of fairness and the lack of clarity around which are appropriate in what contexts. One funded project studied human perceptions of which fairness metrics are most appropriate for an algorithm in the context of bail decisions, the same application as the infamous COMPAS algorithm. The study found that survey respondents slightly preferred an algorithm that had a consistent rate of false positives (how many people were unnecessarily kept in jail pending trial) between two racial groups over an algorithm that was equally accurate for both racial groups. Notably, this is the opposite quality of the COMPAS algorithm, which was fair in its total accuracy but resulted in more false positives for Black defendants.
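To make the trade-off concrete, here is a minimal sketch in Python of the two competing fairness metrics described above: equal false-positive rates versus equal overall accuracy across two groups. The confusion-matrix counts are invented for illustration and are not data from the funded study.

```python
def rates(tp, fp, tn, fn):
    """Return (accuracy, false positive rate) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # share of truly low-risk people flagged as high-risk
    return accuracy, fpr

# Hypothetical outcomes of one risk-scoring algorithm for two groups.
acc_a, fpr_a = rates(tp=40, fp=20, tn=30, fn=10)
acc_b, fpr_b = rates(tp=45, fp=10, tn=40, fn=5)

print(f"Group A: accuracy={acc_a:.2f}, false positive rate={fpr_a:.2f}")
print(f"Group B: accuracy={acc_b:.2f}, false positive rate={fpr_b:.2f}")
# An algorithm can equalize one metric across groups while diverging on the
# other; respondents in the study preferred equalizing false positive rates.
```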

The second category, Gianchandani writes, is to understand how an AI system produces a given result. The NSF sees this as directly related to fairness because giving an end user more information about an AI's decision empowers them to challenge that decision. This is an important point: by default, AI systems disguise the nature of a decision-making process and make it harder for an individual to interrogate the process. Perhaps the most novel project funded by NSF FAI attempts to test the viability of crowdsourcing audits of AI systems. In a crowdsourced audit, many individuals might sign up for a tool (e.g., a website or web browser extension) that pools data about how those individuals were treated by an online AI system. By aggregating this data, the crowd can determine if the algorithm is being discriminatory, something that would be functionally impossible for any individual user.
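The aggregation idea behind a crowdsourced audit is simple to sketch. The snippet below is a toy illustration with invented reports (the actual NSF-funded tool is not public): each participant contributes only the outcome they received, and pooling reveals a group-level disparity no single user could see.

```python
from collections import defaultdict

# Hypothetical pooled reports from audit-tool users:
# (self-reported group, whether the online AI system approved them).
reports = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

totals = defaultdict(lambda: [0, 0])  # group -> [approvals, reports seen]
for group, approved in reports:
    totals[group][0] += int(approved)
    totals[group][1] += 1

for group, (approved, seen) in totals.items():
    print(f"{group}: approval rate {approved / seen:.2f} over {seen} reports")
# Each user sees only their own outcome; the pooled rates expose the gap.
```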

The third category seeks to use AI to make existing systems fairer, an especially important task as governments around the world continue to consider if and how to incorporate AI systems into public services. One project from researchers at New York University seeks, in part, to tackle the challenge of fairness when an algorithm is used in support of a human decision-maker. This is perhaps inspired by a recent evaluation of judges using algorithmic risk assessments in Virginia, which concluded that the algorithm failed to improve public safety and had the unintended effect of increasing incarceration of young defendants. The NYU researchers have a similar challenge in mind: developing a tool to identify and reduce systemic biases in prosecutorial decisions made by district attorneys.

The fourth category is perhaps the most intuitive, as it aims to remove bias from AI systems or, alternatively, make sure AI systems work equally well for everyone. One project looks to create common evaluation metrics for natural language processing AI so that their effectiveness can be compared across many different languages, helping to overcome a myopic focus on English. Other projects look at fairness in less-studied methods, like network algorithms, and still more look to improve fairness in specific applications, such as medical software and algorithmic hiring. These last two are especially noteworthy, since the prevailing public evidence suggests that algorithmic bias in health-care provisioning and hiring is widespread.

Critics may lament that Big Tech, which plays a prominent role in AI research, is present even in this federal program: Amazon is matching the support of the NSF, so each organization is paying around $10 million. Yet there is no reason to believe the NSF's independence has been compromised. Amazon is not playing any role in the selection of the grant applications, and none of the grantees contacted had any concerns about the grant-selection process. NSF officials also noted that any working collaboration with Amazon (such as receiving engineering support) is entirely optional. Of course, it is worth considering what Amazon has to gain from this partnership. Reading the FAI announcement, it stands out that the program seeks to contribute to "trustworthy AI systems that are readily accepted" and that projects will enable "broadened acceptance of AI systems." It is not a secret that the current generation of large technology companies would benefit enormously from increased public trust in AI. Still, corporate funding towards genuinely independent research is good and unobjectionable, especially relative to other options like companies directly funding academic research.

Beyond the funding contribution, there may be other societal benefits from the partnership. For one, Amazon and other technology companies may pay more attention to the results of the research. For a company like Amazon, this might mean incorporating the results into its own algorithms, or into the AI systems that it sells through Amazon Web Services (AWS). Adoption into AWS cloud services may be especially impactful, since many thousands of data scientists and companies use those services for AI. As just an example, Professor Sandra Wachter of the Oxford Internet Institute was elated to learn that a metric of fairness she and co-authors had advocated for had been incorporated into an AWS cloud service, making it far more accessible for data science practitioners. Generally speaking, having an expanded set of easy-to-use features for AI fairness makes it more likely that data scientists will explore and use these tools.

In its totality, FAI is a small but mighty research endeavor. The myriad challenges posed by AI are all made more tractable by the knowledge and more responsible methods that this independent research generates. While there is an enormous amount of corporate funding going into AI research, it is neither independent nor primarily aimed at fairness, and it may entirely exclude some FAI topics (e.g., fairness in the government use of AI). While this is the final year of the FAI program, one of FAI's program directors, Dr. Todd Leen, stressed when contacted for this piece that the NSF is not walking away from these important research issues, and that FAI's mission will be absorbed into the general computer science directorate. This absorption may come with minor downsides: for instance, the lack of a clearly specified budget line and of consolidated reporting on the funded research projects. The NSF should consider tracking these investments and clearly communicating to the research community that AI fairness is an ongoing priority.

The Biden administration could also specifically request additional NSF funding for fairness and AI. For once, this funding would not be a difficult sell to policymakers. Congress funded the totality of the NSF's $868 million budget request for AI in 2021, and President Biden has signaled clear interest in expanding science funding; his proposed budget calls for a 20% increase in NSF funding for fiscal year 2022, and the administration has launched a National AI Research Task Force co-chaired by none other than Dr. Erwin Gianchandani. With all this interest, earmarking $5 to $10 million per year explicitly for the advancement of fairness in AI is clearly possible, and certainly worthwhile.

The National Science Foundation and Amazon are donors to The Brookings Institution. Any findings, interpretations, conclusions, or recommendations expressed in this piece are those of the author and are not influenced by any donation.


Getting Industrial About The Hybrid Computing And AI Revolution – The Next Platform

Posted: at 4:14 am

For oil and gas companies looking at drilling wells in a new field, the issue becomes one of return versus cost. The goal is simple enough: install the fewest wells that will draw the most oil or gas from the underground reservoirs for the longest amount of time. The more wells installed, the higher the cost and the larger the impact on the environment.

However, finding the right well placements quickly becomes a highly complex math problem. Too few wells sited in the wrong places leave a lot of resources in the ground. Too many wells placed too close together not only sharply increase the cost but also cause wells to pump from the same area.

Shahram Farhadi knows how complex the challenge is. Farhadi is the chief technology officer for industrial AI at Beyond Limits, a startup spun off by Caltech and NASA's Jet Propulsion Lab to commercialize technologies built for space exploration in industrial settings. The company, founded in 2014, aims to leverage cognitive AI, machine learning, and deep learning techniques in industries like oil and gas, manufacturing and industrial Internet of Things (IoT), power and natural resources, and healthcare and other evolving markets, many of which have already been using HPC environments to run their most complicated programs.

Placing wells within a reservoir is one of those problems that involves a sequential decision-making process that changes and grows with each decision made. Farhadi notes that in chess, there are almost 5 million possible moves after the first five are made. For the game Go, that number becomes 10 to the 12th power. When optimizing well placement in a small reservoir (from where and when to drill to how many producer and injector wells to use), there can be as many as 10 to the 20th power possible combinations after five sequential, non-commutative choices of vertical drilling locations.
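The arithmetic behind that explosion is easy to reproduce. As a rough, hypothetical illustration (the grid size is my assumption, not a figure from Beyond Limits): five ordered choices from 10,000 candidate sites already yields 10 to the 20th power sequences.

```python
grid_cells = 100 * 100   # hypothetical 100x100 grid of candidate well sites
choices = 5              # five sequential, order-dependent drilling decisions

# Ordered sequences of five drilling decisions (order matters because the
# choices are non-commutative: each well changes the reservoir's dynamics).
sequences = grid_cells ** choices
print(f"{sequences:.2e} possible drilling sequences")  # 1.00e+20
```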

The combination of advanced AI frameworks with HPC can greatly reduce the challenge.

"Anything the AI can learn, such as basic rules for how far the wells should be separated, and apply to the problem will help decrease the number of computations, to hammer them down to something that is more tangible," Farhadi tells The Next Platform.

Where to place wells has been a challenge for oil and gas companies for years, during which time they developed seismic imaging capabilities and simulation models, running on HPC systems, that describe reservoirs beneath the ground. They also use optimizers to run variations of the model to determine how many wells of which kinds should be placed where. "There have been at least two generations of engineers who worked to perfect these equations and their nuances, tuning and learning from the data," Farhadi says.

The problem has been that they have worked on these computations using a combination of brute force and optimizations such as particle swarm and genetic algorithms atop computationally expensive reservoir simulators, making such a complex problem even more challenging. That's where Beyond Limits' advanced AI frameworks can come in.

"The industry is really equipped with really good simulations, and the opportunity of a high-performance AI could be: how about we use the simulations to generate the data and then learn from that generated data?" he says. "In that sense, you are going some good miles. Other industries are also doing this now, like with the auto industry, this is happening more or less. But from the energy industry standpoint, these simulations are fairly rich."

Beyond Limits is applying techniques such as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. The framework pairs reservoir simulations with novel deep convolutional neural networks. The agent takes in the data and learns from the various iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from the previous iterations, the system can more quickly whittle the choices down to the one best answer.
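Beyond Limits has not published its code, but the loop it describes is the standard agent-simulator pattern of reinforcement learning. Below is a heavily simplified Python sketch: a toy stand-in reservoir replaces the expensive simulator, and random exploration stands in for the convolutional policy network.

```python
import random

random.seed(0)
TRUE_VALUES = [random.uniform(0, 10) for _ in range(16)]  # toy reservoir

class ToyReservoirSim:
    """Stand-in for an expensive reservoir simulator (not Beyond Limits' own)."""
    def __init__(self):
        self.values = list(TRUE_VALUES)

    def step(self, site):
        """Drill `site`: collect its value and deplete neighboring sites."""
        reward = self.values[site]
        self.values[site] = 0.0
        for n in (site - 1, site + 1):   # nearby wells drain the same area
            if 0 <= n < len(self.values):
                self.values[n] *= 0.5
        return reward

# The agent repeatedly interacts with the simulator and remembers which
# five-well sequences produced the most value.
best_total, best_plan = float("-inf"), None
for episode in range(10_000):
    sim, plan, total = ToyReservoirSim(), [], 0.0
    for _ in range(5):
        site = random.randrange(16)   # a real agent samples from a policy net
        total += sim.step(site)
        plan.append(site)
    if total > best_total:
        best_total, best_plan = total, plan

print(f"Best plan {best_plan}, total value {best_total:.1f}")
```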

"One area that we looked at specifically is the simulation of subsurface movement of fluids," Farhadi says. "Think of a body of rock that is found somewhere that has oil in it. It also has water that has come to it, and as you take out this hydrocarbon, this whole dynamic changes. Things will kick in. You might have water breaking through, but it's quite a delicate process that is happening down there. A lot of time goes into building this image because you have limited information. But let's say you have built the image and you have a simulator now that, if you tell this simulator, 'I want to place a well here [and] a well here,' the simulator can evolve this in time and give you the flow rates and say, 'If you do this, this is what you're going to get.' Now if I operate this asset, the question for me is just exactly that: How many wells do I put in this? What kind of wells do I want to put vertical [and] horizontal? Do I want to inject water from the beginning? Do I want to inject gas? This is basically the expertise of reservoir engineering. It's playing the game of how to optimally extract this natural resource from these assets, and the assets are usually billions of dollars of value. This is a very, very precious asset for any company that is producing oil and gas. The question is, how do you extract the max out of it now?"

The goal is to get to a high net present value (NPV) score: essentially, the amount of oil or gas that will be captured (and sold) and the amount of money made after costs are figured in. The fewest wells needed to extract the most resources will mean more profit.
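For reference, net present value discounts each year's cash flow by a fixed rate. A minimal sketch with invented numbers (the cash flows and 10% discount rate below are assumptions for illustration, not figures from Beyond Limits):

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows, discounted at `rate`."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical field: year-0 drilling cost, then declining production revenue.
flows = [-30e6, 25e6, 20e6, 15e6, 10e6, 5e6]
print(f"NPV: ${npv(flows, rate=0.10) / 1e6:.1f}M")  # ~$30.5M
```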

"The NPV initially does some iteration, but after about 150,000 times of interacting with the simulator, it can get to something like $40 million of NPV," he says. "The key thing here is the fact that this simulation on its own can be expensive to run, so you optimize it, be smart and use it efficiently."

That included creating a system that would allow Beyond Limits to most efficiently scale the model to where the oil and gas companies needed it. The company tested it using three systems: two CPU-only and one hybrid running CPUs and GPUs. Beyond Limits used an on-premises 20-core CPU system running Intel Core i9-7900X chips, a cloud-based 96-core CPU system with the same processors, and the hybrid setup, with a 20-core CPU and two Nvidia Ampere A100 GPU accelerators on a p4d.24xlarge Amazon Web Services instance.

The company also took it a step further by including a 36-hour run on a p4d.24xlarge AWS instance using a setup with 90 CPU cores and eight A100 GPUs.

The benchmarked metrics were the instantaneous rate of reinforcement learning calculation; the number of episodes and forward action-explorations over the course of training; and the value of the best solution found, in terms of NPV.

What Beyond Limits found was that the hybrid setup outperformed both CPU-only systems. In terms of benchmarks, the hybrid setup delivered peak processing speed of 184.3 percent relative to the 96-core system and 1,169.5 percent relative to the 20-core system. To reach the same number of actions explored at the end of 120,000 seconds, the CPU-GPU hybrid had an improvement in time elapsed of 245.4 percent over the 20 CPU cores and 152.9 percent over the 96 CPU cores. Regarding NPV, the hybrid instance had a boost of about 109 percent compared to the 20-core CPU setup for vertical wells.

Scale and efficiency are key when trying to reach optimal NPV, because not only do calculations such as the number and types of wells used add to the costs, but so do computational needs.

"This problem is very, very complicated in terms of the number of possible combinations, so the more hardware you throw at it, the higher you get, and obviously there are physical limits to that," Farhadi says. "The GPU becomes a real value-add because you can now achieve NPVs that are higher. Just because you were able to have higher grades, you would be able to have more FLOPs or you could compute more. You have a higher chance of finding better configurations. The idea here was to show that there is this technology that can help with highly combinatorial simulation-based optimizations called reinforcement learning, and we have benchmarked it on simple, smaller reservoir models. But if you were to take it to the actual field models with this number of cells, it's going to be, on its own, like a massive high-performance training system."

Beyond Limits is also building advanced AI systems for other industries. One example is a system designed to help with planning of a refinery. Another AI system helps chemists more quickly and efficiently build formulas for engine oil and other lubricants, he says.

"For the practices that you have relied on a human expert to come up with a framework and [to] solve a problem, it is important for them that whatever system you build is honoring that and can digest that," Farhadi says. "It's not only data, it's also that knowledge that's human. How do we incorporate and then bring this together? For example, how do you make use of the knowledge that your engineer learned from the data, or how do you use the physics as a constraint for your AI? It's an interesting field. Even in the frontiers of deep learning [and] machine learning, this is now being looked at. Instead of just looking at the pixels, now let's see if we can have more robust representations of hierarchical understandings of the objects that come our way. We really started this way earlier than 2014, because one big motivation was that the industries we went to required it. That was what they had and they needed to augment it, maybe with digital assistants. It has data elements to it, but they were not quite competent."


Diverse AI teams are key to reducing bias – VentureBeat

Posted: at 4:14 am


An Amazon-built resume-rating algorithm, when trained on men's resumes, taught itself to prefer male candidates and penalize resumes that included the word "women."

A major hospital's algorithm, when asked to assign risk scores to patients, gave white patients similar scores to Black patients who were significantly sicker.

"If a movie recommendation is flawed, that's not the end of the world. But if you are on the receiving end of a decision [that] is being used by AI, that can be disastrous," Huma Abidi, senior director of AI SW products and engineering at Intel, said during a session on bias and diversity in AI at VentureBeat's Transform 2021 virtual conference. Abidi was joined by Yakaira Nuñez, senior director of research and insights at Salesforce, and Fahmida Y Rashid, executive editor of VentureBeat.

In order to produce fair algorithms, the data used to train AI needs to be free of bias. For every dataset, you have to ask yourself where the data came from, if that data is inclusive, if the dataset has been updated, and so on. And you need to utilize model cards, checklists, and risk management strategies at every step of the development process.
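One simple, widely used check of the kind such checklists call for is the "four-fifths rule" from US employment law: compare selection rates across groups and flag any group whose rate falls below 80% of the best-off group's. A minimal sketch with invented rates (not any particular vendor's tool):

```python
def four_fifths_check(selection_rates):
    """Flag groups whose selection rate is < 80% of the best-off group's."""
    top = max(selection_rates.values())
    return {group: rate / top < 0.8 for group, rate in selection_rates.items()}

# Hypothetical selection rates from a hiring model's outputs.
rates = {"group_a": 0.60, "group_b": 0.42, "group_c": 0.55}
print(four_fifths_check(rates))  # group_b flagged: 0.42 / 0.60 = 0.70 < 0.8
```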

"The best possible framework is that we were actually able to manage that risk from the outset: we had all of the actors in place to be able to ensure that the process was inclusive, bringing the right people in the room at the right time that were representative of the level of diversity that we wanted to see and the content. So risk management strategies are my favorite. I do believe, in order for us to really mitigate bias, that it's going to be about risk mitigation and risk management," Nuñez said.

Make sure that diversity is more than just a buzzword and that your leadership teams and speaker panels are reflective of the people you want to attract to your company, Nuñez said.

When thinking about diversity, equity, and inclusion work, or bias and racism, the most impact tends to be in areas in which individuals are most at risk, Nuñez said. Health care, finance, and legal situations (anything involving police and child welfare) are all sectors where bias causes the most harm when it shows up. So when people are working on AI initiatives in these spaces to increase productivity or efficiencies, it is even more critical that they think deliberately about bias and the potential for harm. Each person is accountable and responsible for managing that bias.

Nuñez discussed how the responsibility of a research and insights leader is to curate data so executives can make informed decisions about product direction. Nuñez is not just thinking about the people pulling the data together, but also about the people who may not be in the target market, to give insight into people Salesforce would not have known anything about otherwise.

Nuñez regularly asks the team to think about bias and whether it is present in the data, for example by asking whether the panel of individuals for a project is diverse. If the feedback does not come from an environment that is representative of the target ecosystem, then that feedback is less useful.

"Those questions are the small little things that I can do at the day-to-day level to try to move the needle a bit at Salesforce," Nuñez said.

Research has shown that minorities often have to whiten their résumés in order to get callbacks and interviews. Companies and organizations can weave diversity and inclusion into their stated values to address this issue.

"If it's already not part of your core mission statement, it's really important to add those things: diversity, inclusion, equity. Just doing that, by itself, will help a lot," Abidi said.

It's important to integrate these values into corporate culture because of the interdisciplinary nature of AI: "It's not just engineers; we work with ethicists, we have lawyers, we have policymakers. And all of us come together in order to fix this problem," Abidi said.

Additionally, commitments by companies to help fix gender and minority imbalances also provide an end goal for recruitment teams: Intel wants women in 40% of technical roles by 2030. Salesforce is aiming to have 50% of its U.S. workforce made up of underrepresented groups, including women, people of color, LGBTQ+ employees, people with disabilities, and veterans.


The Global Artificial Intelligence (AI) Chips Market is expected to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during…

Posted: at 4:14 am

Global Artificial Intelligence (AI) Chips Market 2021-2025: The analyst has been monitoring the artificial intelligence (AI) chips market, and it is poised to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during the forecast period.
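As a back-of-the-envelope check on how those two headline figures relate (the $17.5 billion base-year value below is my assumption for illustration, not a figure from the report), a compound annual growth rate compounds the base value by the same percentage each year:

```python
def compound_growth(base, rate, years):
    """Cumulative dollar growth of `base` compounding at `rate` for `years`."""
    return base * ((1 + rate) ** years - 1)

# A hypothetical $17.5B base market growing at 51% per year across the four
# year-steps of 2021-2025 adds roughly the reported $73.49B.
print(f"${compound_growth(17.5e9, 0.51, 4) / 1e9:.1f}B")  # -> $73.5B
```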

New York, July 22, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Global Artificial Intelligence (AI) Chips Market 2021-2025" - https://www.reportlinker.com/p05006367/?utm_source=GNW

Our report on the artificial intelligence (AI) chips market provides a holistic analysis, market size and forecast, trends, growth drivers, and challenges, as well as vendor analysis covering around 25 vendors. The report offers an up-to-date analysis regarding the current global market scenario, latest trends and drivers, and the overall market environment. The market is driven by the increasing adoption of AI chips in data centers, an increased focus on developing AI chips for smartphones, and the development of AI chips for autonomous vehicles. The artificial intelligence (AI) chips market analysis includes the product segment and geographic landscape.

The artificial intelligence (AI) chips market is segmented as below:

By Product: ASICs, GPUs, CPUs, FPGAs

By Geography: North America, Europe, APAC, South America, MEA

This study identifies the convergence of AI and IoT as one of the prime reasons driving the artificial intelligence (AI) chips market growth during the next few years. Also, increasing investments in AI start-ups and advances in the quantum computing market will lead to sizable demand in the market.

The analyst presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources through an analysis of key parameters. Our report on the artificial intelligence (AI) chips market covers the following areas: market sizing, market forecast, and industry analysis.

This robust vendor analysis is designed to help clients improve their market position, and in line with this, this report provides a detailed analysis of several leading artificial intelligence (AI) chips market vendors that include Alphabet Inc., Broadcom Inc., Intel Corp., NVIDIA Corp., Qualcomm Inc., Advanced Micro Devices Inc., Huawei Investment and Holding Co. Ltd., International Business Machines Corp., Samsung Electronics Co. Ltd., and Taiwan Semiconductor Manufacturing Co. Ltd. Also, the artificial intelligence (AI) chips market analysis report includes information on upcoming trends and challenges that will influence market growth. This is to help companies strategize and leverage all forthcoming growth opportunities. The study was conducted using an objective combination of primary and secondary information, including inputs from key participants in the industry. The report contains a comprehensive market and vendor landscape in addition to an analysis of the key vendors.

The analyst presents a detailed picture of the market by way of study, synthesis, and summation of data from multiple sources through an analysis of key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research - both primary and secondary. Technavio's market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth.

Read the full report: https://www.reportlinker.com/p05006367/?utm_source=GNW

About Reportlinker

ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



Predicting a Boiling Crisis: Infrared Cameras and AI Provide Insight Into Physics of Boiling – SciTechDaily

Posted: at 4:13 am

By Matthew Hutson, MIT Department of Nuclear Science and Engineering, July 22, 2021

Pictures of the boiling surfaces taken using a scanning electron microscope: Indium tin oxide (top left), copper oxide nanoleaves (top right), zinc oxide nanowires (bottom left), and porous coating of silicon dioxide nanoparticles obtained by layer-by-layer deposition (bottom right). Credit: SEM photos courtesy of the researchers.

MIT researchers train a neural network to predict a boiling crisis, with potential applications for cooling computer chips and nuclear reactors.

Boiling is not just for heating up dinner. It's also for cooling things down. Turning liquid into gas removes energy from hot surfaces and keeps everything from nuclear power plants to powerful computer chips from overheating. But when surfaces grow too hot, they might experience what's called a boiling crisis.

In a boiling crisis, bubbles form quickly, and before they detach from the heated surface, they cling together, establishing a vapor layer that insulates the surface from the cooling fluid above. Temperatures rise even faster and can cause catastrophe. Operators would like to predict such failures, and new research offers insight into the phenomenon using high-speed infrared cameras and machine learning.

Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, led the new work, published on June 23, 2021, in Applied Physics Letters. In previous research, his team spent almost five years developing a technique in which machine learning could streamline relevant image processing. In the experimental setup for both projects, a transparent heater 2 centimeters across sits below a bath of water. An infrared camera sits below the heater, pointed up and recording at 2,500 frames per second with a resolution of about 0.1 millimeter. Previously, people studying the videos would have to manually count the bubbles and measure their characteristics, but Bucci trained a neural network to do the chore, cutting a three-week process to about five seconds. "Then we said, 'Let's see if other than just processing the data we can actually learn something from an artificial intelligence,'" Bucci says.

The goal was to estimate how close the water was to a boiling crisis. The system looked at 17 factors provided by the image-processing AI: the nucleation site density (the number of sites per unit area where bubbles regularly grow on the heated surface), as well as, for each video frame, the mean infrared radiation at those sites and 15 other statistics about the distribution of radiation around those sites, including how they're changing over time. Manually finding a formula that correctly weighs all those factors would present a daunting challenge. "But artificial intelligence is not limited by the speed or data-handling capacity of our brain," Bucci says. Further, "machine learning is not biased by our preconceived hypotheses about boiling."
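The article doesn't reproduce the team's model, but the setup it describes (17 per-frame statistics in, a proximity-to-crisis estimate out) maps onto a small supervised learner. A minimal sketch trained on synthetic stand-in data, not the MIT group's code or dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 1,000 frames x 17 image-derived statistics
# (nucleation site density, radiation statistics, ...), with a 0/1 label
# for whether the frame is near a boiling crisis.
X = rng.normal(size=(1000, 17))
true_w = rng.normal(size=17)
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(float)

# Tiny logistic-regression model trained by gradient descent on log loss.
w = np.zeros(17)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))        # predicted crisis probability
    w -= 0.1 * X.T @ (p - y) / len(y)     # gradient step

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```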

To collect data, they boiled water on a surface of indium tin oxide, by itself or with one of three coatings: copper oxide nanoleaves, zinc oxide nanowires, or layers of silicon dioxide nanoparticles. They trained a neural network on 85 percent of the data from the first three surfaces, then tested it on 15 percent of the data from those conditions plus the data from the fourth surface, to see how well it could generalize to new conditions. According to one metric, it was 96 percent accurate, even though it hadn't been trained on all the surfaces. "Our model was not just memorizing features," Bucci says. "That's a typical issue in machine learning. We're capable of extrapolating predictions to a different surface."

The team also found that all 17 factors contributed significantly to prediction accuracy (though some more than others). Further, instead of treating the model as a black box that used 17 factors in unknown ways, they identified three intermediate factors that explained the phenomenon: nucleation site density, bubble size (which was calculated from eight of the 17 factors), and the product of growth time and bubble departure frequency (which was calculated from 12 of the 17 factors). Bucci says models in the literature often use only one factor, but this work shows that "we need to consider many, and their interactions. This is a big deal."

"This is great," says Rishi Raj, an associate professor at the Indian Institute of Technology at Patna, who was not involved in the work. Boiling has such complicated physics. It involves at least two phases of matter and many factors contributing to a chaotic system. "It's been almost impossible, despite at least 50 years of extensive research on this topic, to develop a predictive model," Raj says. "It makes a lot of sense to use the new tools of machine learning."

Researchers have debated the mechanisms behind the boiling crisis. Does it result solely from phenomena at the heating surface, or also from distant fluid dynamics? This work suggests surface phenomena are enough to forecast the event.

Predicting proximity to the boiling crisis doesn't only increase safety. It also improves efficiency. By monitoring conditions in real time, a system could push chips or reactors to their limits without throttling them or building unnecessary cooling hardware. "It's like a Ferrari on a track," Bucci says: "You want to unleash the power of the engine."

In the meantime, Bucci hopes to integrate his diagnostic system into a feedback loop that can control heat transfer, thus automating future experiments, allowing the system to test hypotheses and collect new data. "The idea is really to push the button and come back to the lab once the experiment is finished." Is he worried about losing his job to a machine? "We'll just spend more time thinking, not doing operations that can be automated," he says. In any case: "It's about raising the bar. It's not about losing the job."

Reference: "Decrypting the boiling crisis through data-driven exploration of high-resolution infrared thermometry measurements" by Madhumitha Ravichandran, Guanyu Su, Chi Wang, Jee Hyun Seong, Artyom Kossolapov, Bren Phillips, Md Mahamudur Rahman and Matteo Bucci, 23 June 2021, Applied Physics Letters. DOI: 10.1063/5.0048391


Atos and Graphcore Partner to Deliver Advanced AI HPC Solutions Worldwide – HPCwire

Posted: at 4:13 am

PARIS and BRISTOL, England, July 22, 2021: Atos and Graphcore today announce that they have signed a partnership to accelerate performance and innovation in Artificial Intelligence (AI) by integrating Graphcore's advanced IPU compute systems into Atos' recently launched ThinkAI offering, bringing high-performance AI solutions to customers worldwide.

This partnership will mutually benefit both parties. Atos' long-standing position as a European leader in high-performance computing (HPC) and a trusted advisor, provider, and integrator of HPC solutions at scale will give Graphcore access to a multitude of new customers, sectors, and geographies. Graphcore in turn will work with Atos to expand its global reach by targeting large corporate enterprises in sectors including finance, healthcare, telecoms, and consumer internet, as well as national labs and universities focused on scientific research, which are rapidly developing their AI capabilities.

ThinkAI brings together Atos' AI business consultancy expertise, its experts at the Atos Center of Excellence in Advanced Computing, its digital security capabilities, and its software, such as Atos HPC Software Suites, to enable organizations to accelerate time to AI operationalization and industrialization.

Graphcore, the UK-headquartered maker of the Intelligence Processing Unit (IPU), plays a significant role in Atos' ThinkAI offering, which is focused on the twin objectives of accelerating pure artificial intelligence applications and augmenting traditional HPC simulation with AI. Graphcore's IPU-POD systems for scale-up datacentre computing will be an integral part of ThinkAI.

Even before today's formal launch of the partnership, the two companies welcomed their first major joint customer, one of the largest cloud providers in South Korea, which will be using Graphcore systems in large-scale AI cloud datacenters, in a deal facilitated by Atos.

"ThinkAI represents a massive commitment to the future of artificial intelligence by one of the world's most trusted technology companies. For Atos to have put Graphcore as a key part of its strategy says a great deal about the maturity of our hardware and software, and the ability of our systems to deliver on customer needs," said Fabrice Moizan, GM and SVP Sales EMEAI and Asia Pacific at Graphcore.

Agnès Boudot, Senior Vice President, Head of HPC & Quantum at Atos, said: "With ThinkAI, we're making it possible for organizations from any industry to achieve breakthroughs with AI. Graphcore's IPU hardware and Poplar software are opening up new opportunities for innovators to explore the potential of AI for their organizations. Complemented with our industry-tailored AI business consultancy, digital security capabilities, and software, we're excited to be orchestrating these cutting-edge technologies in our ThinkAI solution."

About Atos

Atos is a global leader in digital transformation with 105,000 employees and annual revenue of over €11 billion. European number one in cybersecurity, cloud and high performance computing, the Group provides tailored end-to-end solutions for all industries in 71 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos operates under the brands Atos and Atos|Syntel. Atos is an SE (Societas Europaea), listed on the CAC40 Paris stock index.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space.

About Graphcore

Graphcore is the inventor of the Intelligence Processing Unit (IPU), the world's most sophisticated microprocessor, specifically designed for the needs of current and next-generation artificial intelligence workloads.

Graphcore's IPU-POD datacenter systems, for scale-up and scale-out AI compute, offer the ability to run large models across multiple IPUs, or to share the compute resource between different users and workloads.

Since its founding in 2016, Graphcore has raised more than $730 million in funding.

Investors include Sequoia Capital, Microsoft, Dell, Samsung, BMW iVentures, Robert Bosch Venture Capital, as well as leading AI innovators including Demis Hassabis (DeepMind), Pieter Abbeel (UC Berkeley), and Zoubin Ghahramani (Google Brain).

Source: Graphcore


AI spots shipwrecks from the ocean surface and even from the air – The Conversation US

Posted: at 4:13 am

The Research Brief is a short take about interesting academic work.

In collaboration with the United States Navy's Underwater Archaeology Branch, I taught a computer how to recognize shipwrecks on the ocean floor from scans taken by aircraft and ships on the surface. The computer model we created is 92% accurate in finding known shipwrecks. The project focused on the coasts of the mainland U.S. and Puerto Rico. It is now ready to be used to find unknown or unmapped shipwrecks.

The first step in creating the shipwreck model was to teach the computer what a shipwreck looks like. It was also important to teach the computer how to tell the difference between wrecks and the topography of the seafloor. To do this, I needed lots of examples of shipwrecks. I also needed to teach the model what the natural ocean floor looks like.

Conveniently, the National Oceanic and Atmospheric Administration keeps a public database of shipwrecks. It also has a large public database of different types of imagery collected from around the world, including sonar and lidar imagery of the seafloor. The imagery I used extends to a little over 14 miles (23 kilometers) from the coast and to a depth of 279 feet (85 meters). This imagery contains huge areas with no shipwrecks, as well as the occasional shipwreck.
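The article doesn't include the model code, but the described ingredients (a public registry of wreck locations plus wide-area sonar and lidar rasters) fit a standard tile-classification workflow. A hedged sketch of the data-preparation step, with a random array standing in for a real NOAA raster:

```python
import numpy as np

def tile_image(image, size=128, stride=128):
    """Cut a large sonar/lidar raster into fixed-size training tiles."""
    tiles, origins = [], []
    h, w = image.shape
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            tiles.append(image[r:r + size, c:c + size])
            origins.append((r, c))
    return np.stack(tiles), origins

# Stand-in for a bathymetry scene; tiles whose footprint contains a known
# wreck coordinate would be labeled 1, the rest 0, then fed to an image
# classifier (e.g., a convolutional network).
scene = np.random.rand(1024, 1024)
tiles, origins = tile_image(scene)
print(tiles.shape)  # (64, 128, 128)
```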

Finding shipwrecks is important for understanding the human past (think trade, migration, war), but underwater archaeology is expensive and dangerous. A model that automatically maps all shipwrecks over a large area can reduce the time and cost needed to look for wrecks, either with underwater drones or human divers.

The Navy's Underwater Archaeology Branch is interested in this work because it could help the unit find unmapped or unknown naval shipwrecks. More broadly, this is a new method in the field of underwater archaeology that can be expanded to look for various types of submerged archaeological features, including buildings, statues and airplanes.

This project is the first archaeology-focused model that was built to automatically identify shipwrecks over a large area, in this case the entire coast of the mainland U.S. There are a few related projects that are focused on finding shipwrecks using deep learning and imagery collected by an underwater drone. These projects are able to find a handful of shipwrecks that are in the area immediately surrounding the drone.

We'd like to include more shipwreck and imagery data from all over the world in the model. This will help the model get really good at recognizing many different types of shipwrecks. We also hope that the Navy's Underwater Archaeology Branch will dive to some of the places where the model detected shipwrecks. This will allow us to check the model's accuracy more carefully.

I'm also working on a few other archaeological machine learning projects, and they all build on each other. The overall goal of my work is to build a customizable archaeological machine learning model. The model would be able to quickly and easily switch between predicting different types of archaeological features, on land as well as underwater, in different parts of the world. To this end, I'm also working on projects focused on finding ancient Maya archaeological structures, caves at a Maya archaeological site, and Romanian burial mounds.


Real-time Interpretation: The next frontier in radiology AI – MedCity News

Posted: at 4:13 am

In the nine years since AlexNet spawned the age of deep learning, artificial intelligence (AI) has made significant technological progress in medical imaging, with more than 80 deep-learning algorithms approved by the U.S. FDA since 2012 for clinical applications in image detection and measurement. A 2020 survey found that more than 82% of imaging providers believe AI will improve diagnostic imaging over the next 10 years and the market for AI in medical imaging is expected to grow 10-fold in the same period.

Despite this optimistic outlook, AI still falls short of widespread clinical adoption in radiology. A 2020 survey by the American College of Radiology (ACR) revealed that only about a third of radiologists use AI, mostly to enhance image detection and interpretation; of the two thirds who did not use AI, the majority said they saw no benefit to it. In fact, most radiologists would say that AI has not transformed image reading or improved their practices.

Why is there such a huge gap between AI's theoretical utility and its actual use in radiology? Why hasn't AI delivered on its promise in radiology? Why aren't we there yet?

The reason isn't that companies haven't tried to innovate. It's that they were trying to automate away the radiologist's job and failed, burning plenty of investors and leaving them reluctant to fund other projects aimed at translating AI's theoretical utility into real-world use cases.

AI companies seem to have misread Charles Friedman's fundamental theorem of biomedical informatics: it isn't that a computer can accomplish more than a human; it's that a human using a computer can accomplish more than a human alone. Creating this human-machine symbiosis in radiology will require AI companies to understand the radiology workflow and the features it demands.

Together, those features, delivered as a unified cloud-based solution, would simplify and optimize the radiology workflow while augmenting the radiologist's intelligence.

History Lessons

Modern deep learning dawned in 2012, when AlexNet won the ImageNet challenge, leading to the resurgence of AI as we think of it today. With the problem of image classification sufficiently solved, AI companies decided to apply their algorithms to images that have the greatest impact on human health: radiographs. These post-AlexNet companies can be viewed as falling into three generations.

The first generation approached the field with the assumption that AI know-how was sufficient for commercial success, and so focused on building early teams with knowledge around algorithms. However, this group drastically underestimated the difficulty of acquiring and labeling large-enough medical imaging data sets to train these models. Without sufficient data, these first-generation companies either failed or had to pivot away from radiology.

The second generation corrected for failures of their predecessors by launching with data partnerships in hand either with academic medical centers or large private healthcare groups. However, these startup companies encountered the twin problems of integrating their tools into the radiology workflow and building a business model around them. Hence they ended up building functional features without any commercial traction.

The third generation of AI companies in radiology realized that success required an understanding of the radiology workflow, in addition to the algorithms and data. These companies have largely converged on the same use case: triage. Their tools rank-order images based on their urgency for the patient, thereby sorting how work flows to the radiologist without interfering in the execution of that work.

The third generation's solutions for the radiology workflow are a positive advancement demonstrating that there is a path towards adoption, but there is still much more AI could do beyond triage and worklist reordering. So where should the next wave of AI go in radiology?

Going For The Flow

To date, AI has demonstrated value in its ability to handle asynchronous tasks such as image triage and detection. What's even more interesting is the potential to enhance real-time image interpretation by giving the computer context that lets it work with the radiologist.

There are many aspects of the radiologist's workflow that radiologists want improved and that AI-based context could optimize and streamline. These include, but are certainly not limited to: setting the radiologist's preferred image hanging protocols; auto-selection of the proper reporting template for the case; ensuring the radiologist's dictation is entered into the correct section of the report; and removing the need to repeat image measurements for the report.

Individually, a shortcut that optimizes any one of these workflow steps (a micro-optimization) would have a small impact on the overall workflow. But the collective impact of an entire compendium of these micro-optimizations on the radiologist's workflow would be quite large.

In addition to its impact on the radiology workflow, the concept of a micro-optimization compendium makes a feasible and sustainable business possible, whereas it would be difficult, if not impossible, to build a business around a tool that optimized just one of those steps.

Radiology Tools for Thought

In other areas of software development, we are witnessing a resurgence in "tools for thought," technology that extends the human mind, and in these areas, creating a product that improves decision-making and user experience is table stakes. Uptake of this idea is slower in healthcare, where computers and technology have failed to improve usability and workflow and continue to lack integration.

The number and complexity of medical images continue to increase as novel applications of imaging for screening and diagnosis emerge, but the total number of radiologists is not increasing at the same rate. The ongoing expansion of medical imaging therefore requires better tools for thought. Without them, we will eventually reach a breaking point where we cannot read all of the images generated, and patient care will suffer.

The next wave of AI must solve the workflow of real-time interpretation in radiology and we must embrace that technology when it comes. No single feature will address this problem. Only a compendium of micro-optimizations, delivered continually and at high velocity via the cloud, will solve it.



Disability rights advocates are worried about discrimination in AI hiring tools – MIT Technology Review

Posted: at 4:13 am

Making hiring technology accessible means ensuring both that a candidate can use the technology and that the skills it measures don't unfairly exclude candidates with disabilities, says Alexandra Givens, the CEO of the Center for Democracy and Technology, an organization focused on civil rights in the digital age.

AI-powered hiring tools often fail to include people with disabilities when generating their training data, she says. Such people have long been excluded from the workforce, so algorithms modeled after a company's previous hires won't reflect their potential.

Even if the models could account for outliers, the way a disability presents itself varies widely from person to person. Two people with autism, for example, could have very different strengths and challenges.

"As we automate these systems, and employers push to what's fastest and most efficient, they're losing the chance for people to actually show their qualifications and their ability to do the job," Givens says. "And that is a huge loss."

Government regulators are finding it difficult to monitor AI hiring tools. In December 2020, 11 senators wrote a letter to the US Equal Employment Opportunity Commission expressing concerns about the use of hiring technologies after the covid-19 pandemic. The letter inquired about the agency's authority to investigate whether these tools discriminate, particularly against those with disabilities.

The EEOC responded with a letter in January that was leaked to MIT Technology Review. In the letter, the commission indicated that it cannot investigate AI hiring tools without a specific claim of discrimination. The letter also outlined concerns about the industry's hesitance to share data and said that variation between different companies' software would prevent the EEOC from instituting any broad policies.

"I was surprised and disappointed when I saw the response," says Roland Behm, a lawyer and advocate for people with behavioral health issues. "The whole tenor of that letter seemed to make the EEOC seem like more of a passive bystander rather than an enforcement agency."

The agency typically starts an investigation once an individual files a claim of discrimination. With AI hiring technology, though, most candidates don't know why they were rejected for the job. "I believe a reason that we haven't seen more enforcement action or private litigation in this area is due to the fact that candidates don't know that they're being graded or assessed by a computer," says Keith Sonderling, an EEOC commissioner.

Sonderling says he believes that artificial intelligence will improve the hiring process, and he hopes the agency will issue guidance for employers on how best to implement it. He says he welcomes oversight from Congress.


Ai Weiwei unveils giant iron tree to warn people what they risk losing – Reuters

Posted: at 4:13 am

PORTO, Portugal, July 22 (Reuters) - Chinese artist and dissident Ai Weiwei unveiled a 32-meter-tall (105 ft) tropical tree made of iron in the Portuguese city of Porto on Thursday, an artwork he hopes will raise awareness of the devastating consequences of deforestation.

Four years ago, Ai was in Brazil to investigate the threats faced by its forests when he stumbled upon an endangered ancient tree of the Caryocar genus in the northeastern Atlantic forest.

Using scaffolding, a team moulded the tree and shipped the mould to China, where it was cast before being sent to Portugal, Ai's new home, to be assembled and exhibited for the first time. read more

The exhibition, which also includes installations composed of iron tree roots, is taking place at Porto's Serralves museum and park, and will be open to visitors until next year.

"People should look at these works and think of what we could lose in the future," Ai, 63, told Reuters by telephone. "It's... a warning about what we are going to lose if we don't act."

Ai's tree stands leafless, has a hollow trunk and the iron looks rusty, reminding visitors of the environmental threats facing the planet.

In Brazil's Amazon, deforestation has surged since right-wing President Jair Bolsonaro took office in 2019. read more

Bolsonaro has called for mining and agriculture in protected areas of the Amazon and weakened environmental agencies.

"Brazil has a clear policy which sacrifices their best resource: their rainforest, their nature," Ai said. "And that's not just Brazil's best resource...it's planet earth's best resource."

Scientists say protection of the Amazon is vital to curbing climate change because of the vast amount of greenhouse gas its rainforest absorbs.

"The problem is that we never learn from our mistakes... we never really learn a lesson," Ai said, urging the world to prepare for "even bigger" environmental disasters.

Reporting by Catarina Demony and Violeta Santos Moura; Editing by Andrei Khalip, Alexandra Hudson

