Employing artificial intelligence to spot infrastructure issues – Construction Dive

Dive Brief:

The rollout of the infrastructure-analyzing AI technology comes in the wake of the American Society of Civil Engineers giving the country's infrastructure a D+. The organization found more than 56,000 bridges nationally were structurally deficient.

Dynamic Infrastructure's technology continuously processes past and current inspection reports and visuals, identifying future maintenance risks and evolving defects, the company claimed in its release. The result is a live, cloud-based risk analysis of any bridge or tunnel. The system automatically alerts users when changes are detected in maintenance and operating conditions before they develop into large-scale failures.

Dynamic Infrastructure's technology uses AI to analyze past and current photos to spot developing issues.

Courtesy of Dynamic Infrastructure

The firm said it creates a "visual medical record" for each asset, based on existing images taken from past and current inspection reports and interim inspections.

The analysis can use any visual source, from smartphones and drones to laser scanning. The images are compared and serve as the basis for the alerts on changes in maintenance conditions, are accessible through a browser, and can be instantly shared with peers and contractors to speed maintenance workflows.
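The underlying mechanism, comparing new inspection imagery of an asset against older imagery and flagging significant changes, can be illustrated with a minimal sketch. The example below is not Dynamic Infrastructure's algorithm; the threshold, image sizes, and simple pixel-difference score are assumptions chosen only to show the compare-and-alert pattern.

```python
# Minimal, hypothetical sketch of change detection between two inspection
# images of the same asset. This is NOT Dynamic Infrastructure's algorithm;
# real systems align images, model defects, and use learned detectors.
import numpy as np

def change_score(past_image: np.ndarray, current_image: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two grayscale images (0-255)."""
    past = past_image.astype(np.float64)
    current = current_image.astype(np.float64)
    return float(np.abs(current - past).mean())

def maintenance_alert(past_image, current_image, threshold=12.0) -> bool:
    """Flag the asset for review if the imagery has changed beyond the threshold."""
    return change_score(past_image, current_image) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    past = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    # Simulate a developing "defect" by brightening a patch of the current image.
    current = past.copy()
    current[200:260, 300:380] = 255
    print("change score:", round(change_score(past, current), 2))
    print("alert:", maintenance_alert(past, current))
```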

In addition to Suffolk County, New York, the firm said it has projects in other states, as well as in Germany, Switzerland, Greece and Israel for clients who operate a total of 30,000 infrastructure assets.


CPPI and HSDF Release Report from Virtual Symposium: Artificial Intelligence: Transforming the Government Mission – PRNewswire


Congressman Jim Banks (R-IN-3) agreed, warning, "The U.S. cannot afford to cede leadership in the technological arms race. If China surpasses us in a field like AI or quantum, it will have significant implications for U.S. national security, economic competitiveness, and way of life."

Luke McCormack, former Department of Homeland Security CIO, led an engaging discussion on "AI Applications for Good," while former Acting Commissioner of Customs and Border Protection David Aguilar explored how "AI and Edge Computing on the Frontlines" is transforming the border mission. Dr. Reggie Brothers, CEO of NuWave Solutions and former Under Secretary for Science and Technology at the Department of Homeland Security, directed an intriguing discussion on "Achieving Security Outcomes through AI."

Speakers all agreed that the government is making progress in AI, but a more coordinated approach is needed, and that the government can learn from the more advanced capabilities and understanding of the private sector. "With AI we need a comprehensive, whole-of-government approach that leverages public-private partnerships to our greatest advantage," stated Congresswoman Robin Kelly (D-IL-2).

"The success of AI applications in government depends a great deal on security, data governance, clean data sets and an understanding of data sources," summarized Megan Mance, Executive Director of HSDF. "Leveraging cloud and storage innovations will help deliver AI capabilities to the mission operators on the front line."

Read the full report at https://www.hsdf.org

ABOUT EVENT ORGANIZERS: This event was hosted by the Center for Public Policy Innovation (CPPI) and the Homeland Security and Defense Forum (HSDF). CPPI is a 501(c)(3) not-for-profit think tank whose mission is to assist government officials in addressing the challenging issues brought on by the rapid advancement of technology. HSDF's mission is to facilitate dialogue between the public and private sectors on homeland and national security issues.

Media Contact: Megan Mance [emailprotected]

SOURCE Homeland Security and Defense Forum

https://www.hsdf.org


Top 10 AI and machine learning stories of 2020 – Healthcare IT News

Toward the tail end of pre-pandemic 2019, Mayo Clinic Chief Information Officer Cris Ross stood on a stage in California and declared, "This artificial intelligence stuff is real."

Indeed, while some may argue that AI and machine learning might have been harnessed better during the early days of COVID-19, and while the risk of algorithmic bias is very real, there's little question that artificial intelligence is evolving and maturing by the day for an array of use cases across healthcare.

Here are the most-read stories about AI during this most unusual year.

UK to use AI for COVID-19 vaccine side effects. On a day when vaccines, developed in record time, first begin to be administered in the U.S., it's worth remembering AI's crucial role in helping the world get to this hopefully pivotal moment.

AI algorithm IDs abnormal chest X-rays from COVID-19 patients. Machine learning has been a hugely valuable diagnostic tool as well, as illustrated by this story about a tool from cognitive computing vendor behold.ai that promises "instant triage" based on lung scans, offering faster diagnosis of COVID-19 patients and helping with resource allocation.

How AI use cases are evolving in the time of COVID-19. In a HIMSS20 Digital presentation, leaders from Google Cloud, Nuance and Health Data Analytics Institute shared perspective on how AI and automation were being deployed for pandemic response, from the hunt for therapeutics and vaccines to analytics to optimize revenue cycle strategies.

Microsoft launches major $40M AI for Health initiative. The company said the five-year AI for Health (part of its $165 million AI for Good initiative) will help healthcare organizations around the world deploy leading-edge technologies in the service of three key areas: accelerating medical research, improving worldwide understanding to protect against global health crises such as COVID-19, and reducing health inequity.

How AI and machine learning are transforming clinical decision support. "Today's digital tools only scratch the surface," said Mayo Clinic Platform President Dr. John Halamka. "Incorporating newly developed algorithms that take advantage of machine learning, neural networks, and a variety of other types of artificial intelligence can help address many of the shortcomings of human intelligence."

Clinical AI vendor Jvion unveils COVID Community Vulnerability Map. In the very early days of the pandemic, clinical AI company Jvion launched this interactive map, which tracks the social determinants of health, helping identify populations down to the census-block level that are at risk for severe outcomes.

AI bias may worsen COVID-19 health disparities for people of color. An article in the Journal of the American Medical Informatics Association asserts that biased data models could further the disproportionate impact the COVID-19 pandemic is already having on people of color. "If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden," said researchers.

The origins of AI in healthcare, and where it can help the industry now. "The intersection of medicine and AI is really not a new concept," said Dr. Taha Kass-Hout, director of machine learning and chief medical officer at Amazon Web Services. (There were limited chatbots and other clinical applications as far back as the mid-60s.) But over the past few years, it has become ubiquitous across the healthcare ecosystem. "Today, if you're looking at PubMed, it cites over 12,000 publications with deep learning, over 50,000 machine learning," he said.

AI, telehealth could help address hospital workforce challenges. "Labor is the largest single cost for most hospitals, and the workforce is essential to the critical mission of providing life-saving care," noted a January American Hospital Association report on the administrative, financial, operational and clinical uses of artificial intelligence. "Although there are challenges, there also are opportunities to improve care, motivate and re-skill staff, and modernize processes and business models that reflect the shift toward providing the right care, at the right time, in the right setting."

AI is helping reinvent CDS, unlock COVID-19 insights at Mayo Clinic. In a HIMSS20 presentation, John Halamka shared some of the most promising recent clinical decision support advances at the Minnesota health system and described how they're informing treatment decisions for an array of different specialties and helping shape its understanding of COVID-19. "Imagine the power [of] an AI algorithm if you could make available every pathology slide that has ever been created in the history of the Mayo Clinic," he said. "That's something we're certainly working on."

Twitter: @MikeMiliardHITN. Email the writer: mike.miliard@himssmedia.com. Healthcare IT News is a HIMSS publication.


The 8 Best Books About Artificial Intelligence to Read Now – WIRED

The great man theory holds that history is largely made by heroes: big, brawny, brainy dudes (always dudes) who reshape the future with brute force and brilliance. WIRED alum Alex Davies' new book refutes that outdated theory. In Driven, Davies digs into the history of autonomous vehicles and the goofy, spirited cast of characters (still mostly dudes) who are working to shepherd the tech into existence. As Davies reveals, teamwork makes the dream work. Until it doesn't. Then the lawsuits (and in one engineer's case, handcuffs) fly.

Eventually, robot cars might reshape the way modern life works. Autonomous vehicles could be a $7 trillion business by 2050; today, multibillion-dollar companies like Alphabet, General Motors, Ford, and Tesla race to hammer out the kinks. But back at the opening of the century, AVs were an academic hobbyhorse. Then an obscure clause in a 2001 funding bill poured government money into developing robot tech. Just a few years later, Darpa held a literal robot race across the Mojave Desert. The kooky entrants are the same engineers banking millions at the world's largest AV companies today. For many, the money was a nice incentive. But as one roboticist tells Davies, most are driven by the classic maker ethos: "I sought something that would dent the world, that I could do with my own hands, that would happen in my time."

To paraphrase another visionary, the course of true engineering never did run smooth. Davies' sharp narrative chronicles the personality clashes, philosophical divergences, funding crunches, and, in a shocking number of cases, troublesome wild creatures that get in robotics' way. (A tip: When racing a robot across the desert, keep an eye out for the native tortoises, which will pee on you if you try to move them.) This is a book for anyone who's sick of the hero narrative, and who wants to learn about how the business of building world-shaking robots truly creaks along. (Aarian Marshall)


ŠKODA AUTO wins Pioneer Award for its use of artificial intelligence and Sound Analyser app – Automotive World

ŠKODA AUTO has been presented with the Pioneer Award for its use of artificial intelligence in vehicle diagnostics. At this year's CONNECTED CAR Awards, the trade magazines Auto Bild and Computer Bild chose ŠKODA's Sound Analyser app as the winner of the editorial prize reserved for particularly innovative ideas. Using a smartphone or tablet, the noises a car makes while in operation can be recorded and compared with stored sound patterns. On this basis, the software can quickly and reliably detect if the vehicle in question requires any maintenance.

ŠKODA AUTO has been using AI technologies for even more accurate car diagnostics in terms of servicing since June 2019. The Sound Analyser app, which runs on standard smartphones or tablets, is just one example. It records the noises a car makes while it is running and compares them with stored sound patterns. In the event of any discrepancies, the app uses an algorithm to determine what they are and how they can be resolved. In this way, Sound Analyser helps to make vehicle maintenance more efficient, reduce the time a car spends at the garage and achieve even higher levels of customer satisfaction.
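As a rough illustration of the matching step described above, the sketch below compares a feature vector extracted from a recording against a small library of stored reference patterns using cosine similarity. The pattern names, feature values and threshold are invented; ŠKODA has not published how Sound Analyser actually represents or matches sounds.

```python
# Hypothetical sketch of matching a recorded noise against stored sound
# patterns via cosine similarity. Feature vectors here are made up; a real
# system would use learned audio features (e.g., spectrogram embeddings).
import numpy as np

REFERENCE_PATTERNS = {
    "steering_system_wear": np.array([0.9, 0.1, 0.3, 0.0]),
    "ac_compressor_fault":  np.array([0.1, 0.8, 0.2, 0.4]),
    "dsg_clutch_noise":     np.array([0.2, 0.3, 0.9, 0.1]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def diagnose(recording_features: np.ndarray, threshold: float = 0.85):
    """Return the best-matching stored pattern, or None if nothing is close enough."""
    best_name, best_score = None, 0.0
    for name, pattern in REFERENCE_PATTERNS.items():
        score = cosine_similarity(recording_features, pattern)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

if __name__ == "__main__":
    recorded = np.array([0.85, 0.15, 0.25, 0.05])  # pretend features from a recording
    print(diagnose(recorded))
```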

The smartphone app was developed by ŠKODA AUTO DigiLab, the Czech carmaker's innovation workshop, and was initially trialled with a total of 245 ŠKODA dealers across 14 countries in June 2019. With ongoing use, the AI-based app continuously recognises and learns new sound patterns. The software can identify the sounds produced by components such as the steering system, the air conditioning compressor or the clutches in the direct-shift gearbox with an accuracy of over 90 per cent.

The Pioneer Award is part of the CONNECTED CAR Awards, which was hosted by Auto Bild and Computer Bild for the seventh time. The readers are invited to vote for tomorrow's most promising trends and the biggest innovations. Special ideas, projects and concepts are presented with the two trade journals' editorial Pioneer Award.

Technologies based on artificial intelligence perform cognitive functions that otherwise only humans are capable of. In addition to Sound Analyser, ŠKODA AUTO is exploring numerous other applications for AI-based technologies. The "Follow the Vehicle" project sees the car manufacturer, together with the VŠB Technical University of Ostrava, test passenger car convoys, in which autonomous cars follow a lead vehicle with a driver. The company also uses imaging technology to identify available parking spaces at its headquarters in Mladá Boleslav.

SOURCE: ŠKODA


Artificial Intelligences Are Power Hungry, But Not How You Think – Nextgov

Fueled by government and industry partnerships as well as a plethora of private companies, the United States is poised to take the lead in the worldwide development and use of artificial intelligence and its close cousins machine learning, cognitive computing, deep learning and advanced expert systems. The technology is already revolutionizing many industries, making inroads into government, and shows no sign of slowing down its evolution.

The federal government has been a proponent of AI and its intrinsic advantages for some time. The AI in Government Act of 2020 (H.R. 2575) passed the House and has been placed on the legislative agenda in the Senate. The bill would create centers of excellence in the General Services Administration that will help agencies adopt AI technologies and plan for its use across government.

Military and intelligence agencies have also been actively working to integrate AI into their capabilities. They are actively studying the ethics of the technology, including what AI should and should not be allowed to do. In February, the Defense Department adopted five ethical principles for using AI. The intelligence community released its own Artificial Intelligence Ethics for the Intelligence Community, which is very similar to the DOD plan, though with a few variants more suited to civilian and non-battlefield deployments.

Keeping AIs on an ethical short leash is important because on some level people fear AIs, or at least highly mistrust them. Quite a few sci-fi movies and TV shows feature a power-hungry AI trying to take over the world or eliminate humanity. It's unlikely that anyone would be stupid enough to build an AI that wants to kill people, much less give it a platform to do so. But it turns out that with AI, we should have been worried about a different kind of power craving.

An article in The Print magazine recently covered this year's virtual Semicon West conference, which is generally attended by those who manufacture computer chips. At the show, Applied Materials CEO Gary Dickerson warned his colleagues during the keynote address that the use of AI would spike power consumption in data centers to the point where it might make them difficult to maintain.

"AI has the potential to change everything," Dickerson said. "But AI has an Achilles heel that, unless addressed, will prevent it from reaching its true potential. That Achilles heel is power consumption. Training neural networks is incredibly energy-intensive when done with the technology that's available today."

As an example of the scope of the problem, Dickerson said that data centers today consume just 2% of the world's electricity supply. Because of the use of AI, he predicted that demand would shoot up to 15% by 2025.

The problem I think is not just the chips and hardware, but the fact that AIs are generally not optimized to use computing resources. Most of them grab as much power as they need, or whatever is available, to complete their tasks. To test this out, I experimented with some AIs in my test lab which I was planning on covering in a future column.

One of the things that I can do in my lab is monitor the exact power consumption of various devices and machines being reviewed. I do that to confirm that devices are as efficient as they claim, or to check to see how much standby power they drain when not in use. But I can also apply a standard electrical payment rate to determine how much each task or operation that a machine performs will cost.

For example, on a test workstation, it costs just one cent to open up a Microsoft Word file, and almost double that to open Adobe Photoshop. You generally don't think of individual computer tasks costing money, but doing something like opening a file causes the computer to use more resources like the disk drive, graphics card and memory. That in turn generates heat, which forces more power to the cooling system. My calculation is not fully precise because to do that I would need to take into account the system's thermal design power, which individual actions generally won't be able to measurably affect. But it does show the relative power-hungry nature of different components or programs.

Looking at the different AIs that I had in my lab, I first used one that was designed to scan my incoming email and generate automatic responses based on my previous interactions. When it was initially ingesting data it ran the workstation pretty hard, consuming 53 cents worth of power above what the workstation would normally need over the same period. Thereafter, it spent between two and three cents every time an email came in, though it generated more when updating its database or learning new information.

Another AI that I tested is designed to look at programming code to search for vulnerabilities and then suggest alternative fixes. It can also be set to automatically make changes to the code, which I allowed. In the case of that AI, it only pushed the workstation when I was actively feeding it code, but when it was active it was quite a beast. The workstation's internal fans sounded like jet engines preparing for takeoff. Had the AI been constantly on duty, the workstation would have consumed about 1,300 kilowatt-hours of power over a calendar year, which is about five times more than if the machine were idle or performing less intensive tasks.
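The arithmetic behind these figures is simple: energy in kilowatt-hours is average extra power draw multiplied by time, and cost is energy multiplied by the electricity rate. The sketch below uses made-up wattages and a hypothetical $0.12 per kWh rate rather than the author's actual numbers; note that a continuous draw of about 150 watts works out to roughly 1,300 kWh per year, in line with the figure above.

```python
# Back-of-the-envelope task-cost arithmetic: energy (kWh) = watts * hours / 1000,
# cost = energy * rate. The wattages and the $0.12/kWh rate are assumptions,
# not figures from the article.
RATE_USD_PER_KWH = 0.12

def task_cost_usd(extra_watts: float, seconds: float, rate=RATE_USD_PER_KWH) -> float:
    kwh = extra_watts * (seconds / 3600.0) / 1000.0
    return kwh * rate

if __name__ == "__main__":
    # e.g., a task that spikes draw by ~60 W above idle for 20 seconds
    print(f"one task: ${task_cost_usd(60, 20):.6f}")
    # a workstation averaging ~150 W around the clock for a year
    annual_kwh = 150 * 24 * 365 / 1000.0
    print(f"annual consumption at 150 W: {annual_kwh:.0f} kWh "
          f"(~${annual_kwh * RATE_USD_PER_KWH:.0f}/year)")
```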

Based on those results, it's easy to see how AIs could force data centers to consume five or six times more power than they do right now. I'm not sure what all the ramifications are of having one-sixth of the world's total power output flowing into U.S. data centers, but it's something we should think about. Even the effects on the environment and global warming should probably be taken into consideration.

We are doing a good job at keeping AIs on the right side of ethics, but perhaps we should also find ways to curb their appetite for power. It might be time to add some kind of resource efficiency guideline to those ethics statements to help keep future AIs in check before the power consumption problem becomes too big to manage, and before it puts the brakes on our otherwise lightning fast AI development programs.

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology. He is the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys


Now is the time to ensure artificial intelligence works for Europeans – EU News

"AI is not infallible, it is made by people and humans can make mistakes. That is why people need to be aware when AI is used, how it works and how to challenge automated decisions. The EU needs to clarify how existing rules apply to AI. And organisations need to assess how their technologies can interfere with people's rights both in the development and use of AI," says FRA Director Michael O'Flaherty. "We have an opportunity to shape AI that not only respects our human and fundamental rights but that also protects and promotes them."

The FRA report Getting the future right – Artificial intelligence and fundamental rights in the EU identifies pitfalls in the use of AI, for example in predictive policing, medical diagnoses, social services, and targeted advertising. It calls on the EU and EU countries to:

The report is part of FRA's project on artificial intelligence and big data. It draws on over 100 interviews with public and private organisations already using AI. These include observations from experts involved in monitoring potential fundamental rights violations. Its analysis is based on real uses of AI from Estonia, Finland, France, the Netherlands and Spain.

On 14 December, FRA and the German Presidency of the Council of the EU organise a conference, "Doing AI the European way: Protecting fundamental rights in an era of artificial intelligence."

For more, please contact: media@fra.europa.eu / Tel.: +43 1 580 30 653


Currux Vision LLC Announces Industry Leading Accuracy of Artificial Intelligence Smart City Traffic Platform Testing with the City of San José – PR Web

HOUSTON (PRWEB) December 17, 2020

Currux Vision LLC (Currux Vision), the innovative, infra-tech AI and machine learning solutions company, today released test results with the Department of Transportation of the City of San José, California. San José, recently named the Most Innovative City in the U.S., utilized the fully integrated, AI-based SmartCity ITS for city intersections and roadways. This innovative, cost-effective platform accurately monitors traffic, and provides information that can potentially prevent or reduce congestion and accidents, creating safer roads for drivers, cyclists, and pedestrians around the clock.

Comprehensive AI Traffic and Safety Solutions for Municipalities, DOTs, Toll Road Operators or Private Communities & Commercial Properties

The San José Department of Transportation and Currux Vision are focused on creating a safer and smarter city. Currux Vision's SmartCity ITS continues to deliver one of the most comprehensive autonomous traffic management platforms, including basic and advanced traffic management, big data collection and analytics, and real-time alerts. SmartCity ITS includes a wide range of Vision Zero initiatives, including vehicular (roadway and intersection), pedestrian, and bicycle-associated near-miss detection, and autonomous real-time prediction of hazardous traffic conditions, such as wrong-way drivers, stopped vehicles, speeding, running red lights and stop signs, parking infringements, etc. The combination of these capabilities provides cities with a single, accurate solution to improve residents' quality of life and optimally allocate increasingly scarce resources.

The extensive tests with the San José Department of Transportation at key intersections confirmed that Currux Vision can operate with up to 99% accuracy averaged across various conditions, including day and night, rain, camera vibrations and even partial camera view obstruction. Moreover, Currux Vision can achieve high-resolution results with older legacy digital and analog camera systems that offer lower resolution. Testing included but was not limited to vehicle detection and classification, turning movement counts, pedestrian counts, bicycle discrimination, stopped vehicles, and speeding.

Focused on Wide-Scale Adoption of Autonomous AI Systems at the Edge

"Increasing urbanization, traffic, mode shift, and increasing focus on safety drive the urgent need for a next-generation traffic management solution like our SmartCity ITS. We believe that efficient mobility and being able to do more with less creates economic opportunities, enables trade, improves quality of life, and facilitates access to markets and services, effectively leveraging resources. We designed our SmartCity ITS to significantly accelerate wide-scale adoption of autonomous AI capabilities by cities, DOTs and private infrastructure developers both in the U.S. and internationally. We are happy to have worked with a great partner like San José's Department of Transportation to prove these transportation solutions," said Alex Colosivschi, Founder and CEO of Currux Vision.

"Testing this leading-edge technology is a step in implementing the City of San José Smart City Vision - using game-changing technologies and data-driven decision-making which will drive continuous improvement in how we serve our community, and to promote concrete benefits in safety, sustainability, economic opportunity, and quality of life for our residents. We look forward to seeing reduced traffic congestion and accidents with connected infrastructure, real-time big data analytics and alerts, and machine learning that can power next-generation traffic systems, reduce emissions, identify high-accident intersections, and allow us to better target mitigation efforts," said Ho Nguyen, ITS Manager, City of San José.

About Currux Vision LLC
Currux Vision (https://currux.vision) uses the latest in AI, machine learning, and computer vision technologies to develop and deploy edge-based autonomous AI systems for smart infrastructure. Currux Vision SmartCity ITS is used by DOTs and municipalities throughout the U.S. and has processed billions of traffic data points. Designed from the ground up, our fully integrated platform works with any camera, high and low resolution, and securely and rapidly operates within an agency's network. Additionally, the ease of installation and operation, powerful and flexible back-end capabilities, and attractive price point are key differentiators.

About the City of San José
With more than one million residents, San José is one of the most diverse large cities in the United States and is Northern California's largest city and the 10th largest city in the nation. San José's transformation into a global innovation center has resulted in one of the largest concentrations of technology companies and expertise in the world. In 2011, the City adopted Envision San José 2040, a long-term growth plan that sets forth a vision and a comprehensive road map to guide the City's anticipated growth through the year 2040. It was named Most Innovative City in the U.S. by the Center for Digital Government in November 2020.



How artificial intelligence helped me overcome my dyslexia – The Guardian

I'm 10 years old. Minutes into a maths lesson and my palms have already begun to sweat. I've positioned myself in the back row, but the teacher walks up and down the aisles of the classroom, peering over our shoulders. I don't understand the rules. The teacher's voice becomes a blur, and I stare at the numbers on the board, willing them to make sense. I wasn't a shy child, if anything I was bold and kind of brash, but I couldn't ask for help. I didn't have the language to explain what the numbers were doing to my brain.

Soon I'd have a name for what I was experiencing (dyslexia) and I'd begin to find ways to accommodate my learning style. As with everything, there are scales here. Dyslexia presents and impacts people in different ways, and I was lucky to be at a great school. But I had to learn to overcome my fear of numbers and words. I had to do battle with my confidence. It's only now I realise that this was the cause of me honing my greatest skill: learning to learn. Discovering more about different learning styles was a gamechanger and where my love of artificial intelligence technology was born.

Flash forward and now I'm a tech entrepreneur and co-founder of CognitionX, a market intelligence platform for AI. Two years ago I was appointed by government ministers Matt Hancock and Greg Clark to assemble a team of experts in AI to form a council responsible for supporting the government and its office for artificial intelligence. I've been fortunate enough to have a front-row seat as the world is transformed by new technology, but on a personal level I'm drawn to AI because I want more support too. My dyslexia means I need more help, like spotting simple mistakes in my writing.

I rely on apps such as SwiftKey and Grammarly as one might an old friend. SwiftKey in particular is a huge help in my day-to-day life. It's an app for your smartphone keyboard that uses AI to make much better recommendations than the inbuilt spelling and grammar check. Even better is its new feature that turns my voice to text so I don't have to type or leave a voice note when I'm struggling to find exactly the right way to say something. Grammarly is my go-to for my laptop. It combines rules, patterns, and AI deep learning techniques to help you improve your writing.

The drawback is that if something goes wrong with either of these apps, I feel as if I'm back in the classroom again, freefalling, my brain foggy, letters and numbers jumbled up. I worry I'm over-reliant on these technologies, but I'm also thankful for their existence. Because they use machine learning, which operates by learning how I use the apps each time, we grow together. It's a conundrum, but one I'm conscious of and take into account every day.

And this is why it's important to note that not only am I looking for AI support, I'm looking for human support. The need for a conversation at the back of the class hasn't been replaced by technology; it's been augmented by it.

I think it was my dyslexia and my need to see things from a different angle that enabled me to be open to the rewards of AI. But this doesn't mean that there aren't risks. I grapple with the potential pitfalls of AI, particularly its bias against people underrepresented in tech across society. We are hurtling towards AI, machine learning and robotics at breakneck speed and people are being left behind. This means a risk of job loss in an already struggling climate.

One and a half million people in England are at high risk of losing their jobs to automation in the coming years, and a 2019 Office for National Statistics report revealed that 70% of them are women. Covid will no doubt increase these risks: the shift to online working has only made it easier for companies to increase automation. This is why I want to urge women to get ahead of the game. Now more than ever is a good time to become the person in your company who has learned to master the newest software. Even for those who are proudly the least techie, it is time to change tune. I'm not suggesting that everyone should retrain to become data scientists or AI experts. It's more about having an understanding of how to work with products that have AI built in.

I only ever advocate for AI systems in the workplace if they have a "Human in the Loop" (HITL) approach. HITL is a way to build AI systems that makes sure there is always a person with a key role somewhere in the decision-making process. This guarantees that whatever the outcome happens to be, it's arrived at through a combination of steps taken by a machine and the person, together. It's this sort of system I want to encourage women to become the best at navigating.
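A minimal sketch of that pattern might look like the following: a model proposes a decision, but no action is taken until a person reviews and confirms it. The model stub, the confidence score and the command-line prompt are all invented for illustration.

```python
# Hypothetical human-in-the-loop (HITL) sketch: a model proposes a decision,
# and a person must confirm it before any action is taken.
def model_prediction(application: dict) -> tuple[str, float]:
    """Stand-in for a real model: returns a proposed decision and a confidence."""
    score = min(application.get("years_experience", 0) / 10.0, 1.0)
    return ("shortlist" if score >= 0.5 else "reject", score)

def human_review(application: dict, proposal: str, confidence: float) -> str:
    """The human has the final say; here we simply ask on the command line."""
    print(f"Model proposes '{proposal}' (confidence {confidence:.2f}) for {application['name']}")
    answer = input("Approve? [y/n] ").strip().lower()
    return proposal if answer == "y" else "needs manual decision"

if __name__ == "__main__":
    applicant = {"name": "A. Candidate", "years_experience": 7}
    proposal, confidence = model_prediction(applicant)
    final_decision = human_review(applicant, proposal, confidence)
    print("Final decision:", final_decision)
```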

Throughout history a set of qualities traditionally associated with women (compassion, care, empathy and nurturing) have been dismissed or sidelined by the market. Today, care work is either among the lowest paid of jobs, or it's done for free (mainly by women) in the home. But these qualities, which have always been vital, are about to become ever more necessary and much harder to undermine.

Many aspects of jobs are going to be assigned to machines, but they can never do everything that humans can. A machine may be able to predict and detect diseases invisible to the human eye, but the one thing it can't do is connect on a human level and offer genuine care.

Human empathy is something machines can't offer and so, together with an AI system, a doctor could present an accurate diagnosis in a caring way. This can only happen, however, if the doctor in question decides to embrace and fully understand how to get the best out of the AI system, which will take training and an appetite to learn.

Women have also developed another skill that will become vital in the coming years: staying on their toes. For centuries women have faced all kinds of discrimination and prejudice. Women have had to know how to be vigilant and resilient, to anticipate change and to read subtle cues and analyse the world for risks. In the world of AI, this means staying one step ahead of the machine.

The way I see it, this new wave of technology could be a tsunami that knocks you down, or it could be the wave that we ride together to a brighter future. The moment I began to truly understand this, I knew I had to share what I'd learned about its possible risks as well as its rewards and why it is that women were more likely to suffer the negative effects.

It's really crucial for women to challenge the tendency to sometimes see tech as boring, scary or for someone else. I'm not a scientist, engineer, developer or techie. It takes me a long time to understand technological ideas because they're mostly founded in complex mathematics. It was a really liberating moment when I realised that I didn't need to understand the precise inner workings of AI machines in order to understand the ramifications of this technology.

All you need is to get a good grasp on how to adapt and thrive in this new world and what you can do to support others to do the same.

There are simple ways of achieving this and one of them is learning how to talk to technologies which use AI. You don't need to rush out to the shops: there is AI you can talk to in products you may already have. If you're an Apple user, talk to Siri, or Cortana if you use Microsoft, and Google has an assistant too. Set your alarm to be voice-activated or use a voice assistant to add appointments to your calendar, or to search the internet for you. My friends tell me that they've given up on their home system, or that they can't bear that their car is trying to talk to them. My response is always to tell them: this technology isn't going anywhere. So instead of avoiding it, find ways to make the technology work for you before you end up working for it.

How to Talk to Robots by Tabitha Goldstaub is published by 4th Estate at £12.99. Buy it for £11.30 from guardianbookshop.com


Leveraging artificial intelligence to reach fathers and support child nutrition in India during COVID-19 – World Bank Group

Photo: Vishnu Nishad/Unsplash

COVID-19 is not the only pandemic striking India. Eight-year-old Rakesh died of hunger during the national lockdown in March 2020. The 1-year-old son of Sevak Ram succumbed to acute malnourishment in June. These are not isolated cases. India is fighting a dual battle against malnutrition and COVID-19. The lockdown has disrupted access to rations and other essential services, closed schools (cutting off midday meals for children), and led to job losses, putting millions of families in India at even higher risk of extreme poverty and malnutrition.

Even before the pandemic, India accounted for a third of the global burden of malnutrition. In rural areas, stunting, wasting, and impaired immunity are common due to nutrient deficiencies. While India has made progress in maternal and child nutrition in recent years, disparities persist and the COVID-19 pandemic has further accelerated the crisis. The COVID-19 lockdown and the associated economic shocks put formerly food-secure households at increased risk, which will have a direct impact on the nutritional intake of children.

At the same time, the lockdowns have increased the time fathers spend at home and opened opportunities to increase their involvement in child nutrition and development. Earlier diagnostic work conducted in households with small children had established high smartphone penetration and social media usage among young fathers. Could we use this information, coupled with fathers' increased time with their children, for a rapid intervention to prevent child nutrition from backsliding?

The World Bank's eMBeD team and Quilt.AI set out to understand whether social media could be used to get fathers motivated and active in ensuring their children's nutrition. We did this in three steps:

First, we set out to explore the online discourse on child nutrition, with a focus on fathers. What do fathers search for, care about, and talk about? Quilt.AI's Culture AI was deployed to extract the digital footprint of parents and caregivers in two of India's poorest and most malnourished states: Bihar and Uttar Pradesh. Data pertaining to 47 million searches and 1,500 unique keywords from 2019-20, blog posts on nutrition, social media uploads, and content consumed by caregivers was analyzed by a team of researchers. We looked at fathers and mothers (based on their online profiles and identification), though we were specifically after fathers. We found:

Evolution of search queries for terms related to child nutrition in the states of Bihar and Uttar Pradesh.

Second, the online discourse was further segmented into seven types of personas with distinct characteristics to help gain a nuanced understanding of how caregivers, with different gender, socio-economic and age demographics, express themselves on nutrition, both on social media platforms and search behavior. This population segmentation helped identify which caregiver profiles to target during the social media intervention. We decided to focus on three profiles: Traditional caregivers, caregivers in transition, and modern caregivers, all with specific characteristics (age, gender, location):

Third, we designed a social media intervention to get fathers more involved into child nutrition. We used the profiles to identify specific ways to frame and present messages to fathers under each of these profiles, for a campaign that will target them as they engage with social media for other things.

The pilot campaign is happening now (on Facebook and Instagram) in 52 districts in the states of Uttar Pradesh and Bihar. The online campaign's images and messages cover 14 different topic areas, from expanding the concept of what makes a good provider, to increasing fathers' role in child feeding (see example below). Some messages provide clear actions fathers can take to engage with their children in food-related activities; others link nutrition with aspects of child development where fathers are more involved, such as cognitive development and educational outcomes. Each topic message has been adapted to appeal to each of the profiles identified. We will look at the campaign's effects on knowledge, interest, and behaviors.

English text of the image: Eggs are packed with protein and can be made in many ways; discuss new recipes with your wife today!

Pushed by the constraints of COVID-19 regulations, our team had to adapt. Using insights from online profiles and search behavior to inform the design of communications interventions offers an opportunity to tailor interventions when you cannot collect additional data from intended target populations. Pilots like this can also help us unlock insights and overcome barriers to effective social and behavior change interventions, and particularly how to connect people's online behavior with their real life. Stay tuned for our results.


Artificial Intelligence Advances Showcased at the Virtual 2020 AACC Annual Scientific Meeting Could Help to Integrate This Technology Into Everyday…

CHICAGO, Dec. 13, 2020 /PRNewswire/ -- Artificial intelligence (AI) has the potential to revolutionize healthcare, but integrating AI-based techniques into routine medical practice has proven to be a significant challenge. A plenary session at the virtual 2020 AACC Annual Scientific Meeting & Clinical Lab Expo will explore how one clinical lab overcame this challenge to implement a machine learning-based test, while a second session will take a big picture look at what machine learning is and how it could transform medicine.

Machine learning is a type of AI that uses statistics to find patterns in massive amounts of data. It could launch healthcare into a new era by mining medical data to find cures for diseases, identify vulnerable patients before they become ill, and better personalize testing and treatments. In spite of this technology's promise, though, the medical community continues to grapple with numerous barriers to adoption, and in the field of laboratory medicine in particular, very few machine learning tests are currently offered as part of regular care.

A 10-year machine learning project undertaken by Ulysses G.J. Balis, MD, and his colleagues at the University of Michigan in Ann Arbor could help to change this by providing a blueprint for other healthcare institutions looking to harness AI. As Dr. Balis will discuss in his plenary session, his institute developed and implemented a machine learning test called ThioMon to guide treatment of inflammatory bowel disease (IBD) with azathioprine. With an approximate cost of only $20 a month, azathioprine is much cheaper than other IBD medications (which can cost thousands of dollars a month), but its dosage needs to be fine-tuned for each patient, making it difficult to prescribe. ThioMon solves this issue by analyzing a patient's routine lab test results to determine if a particular dose of azathioprine is working or not.

Balis's team found that the test performs just as well as a colonoscopy, which is the current gold standard for assessing IBD patient response to medication. Even more exciting is that clinical labs could use ThioMon's general approach (analyzing routine lab test results with machine learning algorithms) to solve any number of other patient care challenges.
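The general pattern Dr. Balis describes, feeding routine lab values into a machine learning model, can be sketched in a few lines of scikit-learn. The example below trains a logistic regression on synthetic data to predict a binary treatment response; it is purely illustrative and has no connection to ThioMon's actual features, model or data.

```python
# Illustrative only: a classifier over synthetic "routine lab values" predicting
# a binary treatment response. This is NOT ThioMon; features, labels, and model
# choice are invented to show the general pattern described in the article.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Pretend columns: white cell count, hemoglobin, platelets, ALT (all standardized).
X = rng.normal(size=(n, 4))
# Synthetic rule: "responders" tend to have lower column 0 and higher column 1.
y = ((-1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```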

"There are dozens, if not hundreds of additional diagnoses that we can extract from the routine lab values that we've been generating for decades," said Dr. Balis. "This lab data is, in essence, a gold mine, and the development of these machine learning tools marks the start of a new gold rush."

One of the additional conditions that this machine learning approach can diagnose is, in fact, COVID-19. In the session, "How Clinical Laboratory Data Is Impacting the Future of Healthcare?" Jonathan Chen, MD, PhD, of Stanford University, and Christopher McCudden, PhD, of the Eastern Ontario Regional Laboratory Association, will touch on a new machine learning test that analyzes routine lab test results to determine if patients have COVID-19 even before their SARS-CoV-2 test results come back. As COVID-19 cases in the U.S. reach record highs, this test could enable labs to diagnose COVID-19 patients quickly even if SARS-CoV-2 test supply shortages worsen or if SARS-CoV-2 test results become backlogged due to demand.

Beyond this, Drs. Chen and McCudden plan to give a bird's eye view of what machine learning is, how it works, and how it can improve efficiency, reduce costs, and improve patient outcomes, particularly by democratizing patient access to medical expertise.

"Medical expertise is the scarcest resource in the healthcare system," said Dr. Chen, "and computational, automated tools will allow us to reach the tens of millions of people in the U.S.and the billions of people worldwidewho currently don't have access to it."

Machine Learning Sessions at the 2020 AACC Annual Scientific Meeting
AACC Annual Scientific Meeting registration is free for members of the media. Reporters can register online here: https://www.xpressreg.net/register/aacc0720/media/landing.asp

Session 14001: Between Scylla and Charybdis: Navigating the Complex Waters of Machine Learning in Laboratory Medicine

Session 34104: How Clinical Laboratory Data Is Impacting the Future of Healthcare?

Abstract A-005: Machine Learning Outperforms Traditional Screening and Diagnostic Tools for the Detection of Familial Hypercholesterolemia

About the 2020 AACC Annual Scientific Meeting & Clinical Lab Expo
The AACC Annual Scientific Meeting offers 5 days packed with opportunities to learn about exciting science from December 13-17, all available on an online platform. This year, there is a concerted focus on the latest updates on testing for COVID-19, including a talk with current White House Coronavirus Task Force testing czar, Admiral Brett Giroir. Plenary sessions include discussions on using artificial intelligence and machine learning to improve patient outcomes, new therapies for cancer, creating cross-functional diagnostic management teams, and accelerating health research and medical breakthroughs through the use of precision medicine.

At the virtual AACC Clinical Lab Expo, more than 170 exhibitors will fill the digital floor with displays and vital information about the latest diagnostic technology, including but not limited to SARS-CoV-2 testing, mobile health, molecular diagnostics, mass spectrometry, point-of-care, and automation.

About AACC
Dedicated to achieving better health through laboratory medicine, AACC brings together more than 50,000 clinical laboratory professionals, physicians, research scientists, and business leaders from around the world focused on clinical chemistry, molecular diagnostics, mass spectrometry, translational medicine, lab management, and other areas of progressing laboratory science. Since 1948, AACC has worked to advance the common interests of the field, providing programs that advance scientific collaboration, knowledge, expertise, and innovation. For more information, visit http://www.aacc.org.

Christine DeLong
AACC
Senior Manager, Communications & PR
(p) 202.835.8722
[emailprotected]

Molly Polen
AACC
Senior Director, Communications & PR
(p) 202.420.7612
(c) 703.598.0472
[emailprotected]

SOURCE AACC

http://www.aacc.org


Global Artificial Intelligence in Supply Chain Management Market was Valued at US$ 1549.5 Mn in 2019 Growing at a CAGR of 25.12% over the Forecast…

Request for Sample Copy of This Report @ https://www.absolutemarketsinsights.com/request_sample.php?id=747

Enquiry Before Buying @ https://www.absolutemarketsinsights.com/enquiry_before_buying.php?id=747

Get Full Information of this premium Report @ https://www.absolutemarketsinsights.com/reports/Artificial-Intelligence-In-Supply-Chain-Management-2020---2028-747

About Us:

Absolute Markets Insights strives to be your main man in your business resolve by giving you insight into your products, market, marketing, competitors, and customers. Visit

Contact Us:

Email id: [emailprotected]
Contact Name: Shreyas Tanna
Phone: +91-740-024-2424

Photo: https://mma.prnewswire.com/media/1384957/Artificial_Intelligence_in_Supply_Chain_Management_Market.jpg Logo: https://mma.prnewswire.com/media/831667/Absolute_Market_Insights_Logo.jpg

SOURCE Absolute Markets Insights


Are AI and job automation good for society? Globally, views are mixed – Pew Research Center

As artificial intelligence (AI) plays a growing role in the everyday lives of people around the world, views on AI's impact on society are mixed across 20 global publics, according to a recent Pew Research Center survey.

This analysis is based on a survey of 20 publics conducted from October 2019 to March 2020 across Europe, Russia, the Americas and the Asia-Pacific region. The surveys were conducted by face-to-face interviews in Russia, Poland, the Czech Republic, India and Brazil. In all other places, the surveys were conducted by telephone. All surveys were conducted with representative samples of adults ages 18 and older in each survey public.

Here are the questions used for the report, along with responses, and its methodology.

A median of about half (53%) say the development of artificial intelligence, or the use of computer systems designed to imitate human behaviors, has been a good thing for society, while 33% say it has been a bad thing.

Opinions are also divided on another major technological development: using robots to automate many jobs humans have done in the past. A median of 48% say job automation has been a good thing, while 42% say it's had a negative impact on society.

The survey, conducted in late 2019 and early 2020 in 20 places across Europe and the Asia-Pacific region, as well as in the United States, Canada, Brazil and Russia, comes as automation has remade workplaces around the world and AI increasingly powers things from social media algorithms to technology in cars and everyday appliances.

Views of AI are generally positive among the Asian publics surveyed: About two-thirds or more in Singapore (72%), South Korea (69%), India (67%), Taiwan (66%) and Japan (65%) say AI has been a good thing for society. Many places in Asia have emerged as world leaders in AI.

Most other places surveyed fall short of a majority saying AI has been good for society. In France, for example, views are particularly negative: Just 37% say AI has been good for society, compared with 47% who say it has been bad for society. In the U.S. and UK, about as many say it has been a good thing for society as a bad thing. By contrast, Sweden and Spain are among a handful of places outside of the Asia-Pacific region where a majority (60%) views AI in a positive light.

As with AI, Asian publics surveyed stand out for their relatively positive views of the impact of job automation. Many Asian publics have made major strides in the development of robotics and AI. The South Korean and Singaporean manufacturing industries, for instance, have the highest and second highest robot density of anywhere in the world. Singapore is also pursuing its goal of becoming the world's first smart nation, and the government has identified AI as one of many key development areas necessary to reach that goal. Japan has also long been a world leader in robotics manufacturing and development, and robots and AI are increasingly integrated into everyday life there to help with tasks ranging from household chores to elder care.

Men are significantly more likely than women to say artificial intelligence has been a good thing for society in 15 of the 20 places surveyed. In Japan, for example, nearly three-quarters of men (73%) have positive views of AI, compared with 56% of women. In the U.S., 53% of men say AI has been a positive thing, compared with 40% of women.

People with more education are also more likely to have a positive view of AI. This gap is largest in the Netherlands, where a majority of those with a college education or higher (61%) see AI favorably, compared with 43% of those with less education. In the 11 publics where age is a significant factor in views of AI, younger people usually have a more positive view of the technology than older people.

There are similar patterns by gender and education in views of job automation. The educational differences are particularly large in some places: In Italy, for instance, about two-thirds of people with at least a college education (65%) say job automation has been a good thing for society, compared with just 38% of people with less education. Among adults with more education, those who took three or more science courses tend to see job automation more positively than people who took fewer science classes.

Note: Here are the questions used for the report, along with responses, and its methodology.


Landmark artificial intelligence legislation advances toward becoming law – Defense bill awaits possible presidential veto and Congressional override;…

The House and Senate have voted to send this years National Defense Authorization Act (FY 2021 NDAA) to President Trump, who has threatened to veto the $731.6 billion defense policy legislation. While there may be a veto of the NDAA, there is still a good chance it becomes law in the coming weeks.

It is noteworthy that the sprawling, 4,517-page defense bill includes the most substantial legislation addressing artificial intelligence (AI) approved by Congress to date, incorporating landmark legislation setting national policy on the emerging technology already shaping and transforming virtually every aspect of military and civilian life.

The bill includes substantial provisions on the policies related to AI and increased funding for several different agencies to expand work on AI issues and the training of an AI-skilled workforce, among other things.

This publication provides an overview of the key AI initiatives and the funding provided for AI programs.

Legislative status

Both chambers of Congress passed the NDAA conference report resolving differences between the House and Senate versions of the legislation by bipartisan, veto-proof majorities (84-13 in the Senate, 335-78 in the House). Lawmakers are hoping to see a defense authorization bill enacted for the 60th consecutive fiscal year. Even if the bill is vetoed, Congress could override that veto.

Under the Constitution, the president has up to ten days, excluding Sundays, from when the bill was approved in the Senate (Thursday, December 10) to decide whether to sign the bill into law or veto it, which could mean that the bill might be vetoed after members of Congress have left Washington for the Christmas holiday. Leaders in the Democratic-controlled House have vowed to return to DC for a veto override vote before the January 3 swearing-in of the next Congress, but it has not been announced yet what the Republican-controlled Senate plans to do as far as a veto override.

National Artificial Intelligence Initiative Act of 2020 (Division E)

The NDAA includes a 63-page portion of the bill titled Division E, the National Artificial Intelligence Initiative Act of 2020. As explained in the House-Senate Joint Explanatory Statement, the provision was contained only in the version that the House passed earlier this year, but the Senate agreed to include it in the final compromise version with certain changes. Division E draws heavily on legislation introduced earlier this year (HR 6216), the National Artificial Intelligence Initiative Act of 2020, as well as legislation from 2019, the Artificial Intelligence Initiative Act or AIIA (S 1558), to establish a coordinated, civilian-led federal initiative to accelerate research and development and encourage investments in trustworthy AI systems for the economic and national security of the United States. The NDAA includes both Department of Defense (DoD) and non-DoD AI provisions.

Among the Division E highlights:

The conferees believe that artificial intelligence systems have the potential to transform every sector of the United States economy, boosting productivity, enhancing scientific research, and increasing U.S. competitiveness and that the United States government should use this Initiative to enable the benefits of trustworthy artificial intelligence while preventing the creation and use of artificial intelligence systems that behave in ways that cause harm. The conferees further believe that such harmful artificial intelligence systems may include high-risk systems that lack sufficient robustness to prevent adversarial attacks; high-risk systems that harm the privacy or security of users or the general public; artificial general intelligence systems that become self-aware or uncontrollable; and artificial intelligence systems that unlawfully discriminate against protected classes of persons, including on the basis of sex, race, age, disability, color, creed, national origin, or religion. Finally, the conferees believe that the United States must take a whole of government approach to leadership in trustworthy artificial intelligence, including through coordination between the Department of Defense, the Intelligence Community, and the civilian agencies.

Among the Department of Defense AI highlights:

Other DoD AI provisions:

Continue reading here:

Landmark artificial intelligence legislation advances toward becoming law Defense bill awaits possible presidential veto and Congressional override;...

The Next Generation Of Artificial Intelligence – Forbes

AI legend Yann LeCun, one of the godfathers of deep learning, sees self-supervised learning as the key to AI's future.

The field of artificial intelligence moves fast. It has only been 8 years since the modern era of deep learning began at the 2012 ImageNet competition. Progress in the field since then has been breathtaking and relentless.

If anything, this breakneck pace is only accelerating. Five years from now, the field of AI will look very different than it does today. Methods that are currently considered cutting-edge will have become outdated; methods that today are nascent or on the fringes will be mainstream.

What will the next generation of artificial intelligence look like? Which novel AI approaches will unlock currently unimaginable possibilities in technology and business? This article highlights three emerging areas within AI that are poised to redefine the field, and society, in the years ahead. Study up now.

The dominant paradigm in the world of AI today is supervised learning. In supervised learning, AI models learn from datasets that humans have curated and labeled according to predefined categories. (The term supervised learning comes from the fact that human supervisors prepare the data in advance.)
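To make that workflow concrete, here is a minimal sketch in Python using scikit-learn. The synthetic dataset and logistic-regression model are illustrative choices, not anything a company or researcher mentioned in this article actually uses; the point is simply that the model can only learn the categories a human labeler defined in advance.

```python
# Minimal sketch of supervised learning: a model fits human-labeled examples.
# Synthetic data stands in for the hand-labeled datasets the article describes.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X are the inputs; y are the labels a human "supervisor" assigned in advance.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)                      # learn only the predefined categories
print("accuracy:", model.score(X_test, y_test))  # evaluated against held-out labels
```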

While supervised learning has driven remarkable progress in AI over the past decade, from autonomous vehicles to voice assistants, it has serious limitations.

The process of manually labeling thousands or millions of data points can be enormously expensive and cumbersome. The fact that humans must label data by hand before machine learning models can ingest it has become a major bottleneck in AI.

At a deeper level, supervised learning represents a narrow and circumscribed form of learning. Rather than being able to explore and absorb all the latent information, relationships and implications in a given dataset, supervised algorithms orient only to the concepts and categories that researchers have identified ahead of time.

In contrast, unsupervised learning is an approach to AI in which algorithms learn from data without human-provided labels or guidance.

Many AI leaders see unsupervised learning as the next great frontier in artificial intelligence. In the words of AI legend Yann LeCun: "The next AI revolution will not be supervised." UC Berkeley professor Jitendra Malik put it even more colorfully: "Labels are the opium of the machine learning researcher."

How does unsupervised learning work? In a nutshell, the system learns about some parts of the world based on other parts of the world. By observing the behavior of, patterns among, and relationships between entities (for example, words in a text or people in a video), the system bootstraps an overall understanding of its environment. Some researchers sum this up with the phrase "predicting everything from everything else."

Unsupervised learning more closely mirrors the way that humans learn about the world: through open-ended exploration and inference, without a need for the training wheels of supervised learning. One of its fundamental advantages is that there will always be far more unlabeled data than labeled data in the world (and the former is much easier to come by).

In the words of LeCun, who prefers the closely related term "self-supervised learning": "In self-supervised learning, a portion of the input is used as a supervisory signal to predict the remaining portion of the input.... More knowledge about the structure of the world can be learned through self-supervised learning than from [other AI paradigms], because the data is unlimited and the amount of feedback provided by each example is huge."
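As a toy illustration of "predicting a portion of the input from the rest," the snippet below turns unlabeled sentences into (input, target) training pairs by masking one word at a time. The corpus, the mask token and the helper function are all hypothetical stand-ins; a real self-supervised model builds its training signal in this spirit at vastly larger scale.

```python
# Toy illustration of the self-supervised idea: the "label" is carved out of the
# input itself, so no human annotation is needed. Each example asks a model to
# predict a masked word from its surrounding context.
corpus = [
    "federated learning keeps training data on local devices",
    "transformers process all tokens in a sentence in parallel",
    "unsupervised learning finds structure without human labels",
]

def make_masked_examples(sentence, mask_token="[MASK]"):
    """Turn one unlabeled sentence into (input, target) pairs for free."""
    words = sentence.split()
    examples = []
    for i, target in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

pairs = [p for s in corpus for p in make_masked_examples(s)]
print(pairs[0])
# ('[MASK] learning keeps training data on local devices', 'federated')
```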

Unsupervised learning is already having a transformative impact in natural language processing. NLP has seen incredible progress recently thanks to a new unsupervised learning architecture known as the Transformer, which originated at Google about three years ago. (See #3 below for more on Transformers.)

Efforts to apply unsupervised learning to other areas of AI remain at earlier stages, but rapid progress is being made. To take one example, a startup named Helm.ai is seeking to use unsupervised learning to leapfrog the leaders in the autonomous vehicle industry.

Many researchers see unsupervised learning as the key to developing human-level AI. According to LeCun, mastering unsupervised learning is "the greatest challenge in ML and AI of the next few years."

One of the overarching challenges of the digital era is data privacy. Because data is the lifeblood of modern artificial intelligence, data privacy issues play a significant (and often limiting) role in AI's trajectory.

Privacy-preserving artificial intelligencemethods that enable AI models to learn from datasets without compromising their privacyis thus becoming an increasingly important pursuit. Perhaps the most promising approach to privacy-preserving AI is federated learning.

The concept of federated learning was first formulated by researchers at Google in early 2017. Over the past year, interest in federated learning has exploded: more than 1,000 research papers on federated learning were published in the first six months of 2020, compared to just 180 in all of 2018.

The standard approach to building machine learning models today is to gather all the training data in one place, often in the cloud, and then to train the model on that data. But this approach is not practicable for much of the world's data, which for privacy and security reasons cannot be moved to a central data repository. This makes it off-limits to traditional AI techniques.

Federated learning solves this problem by flipping the conventional approach to AI on its head.

Rather than requiring one unified dataset to train a model, federated learning leaves the data where it is, distributed across numerous devices and servers on the edge. Instead, many versions of the model are sent outone to each device with training dataand trained locally on each subset of data. The resulting model parameters, but not the training data itself, are then sent back to the cloud. When all these mini-models are aggregated, the result is one overall model that functions as if it had been trained on the entire dataset at once.
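A minimal numpy sketch of this federated-averaging loop, assuming a toy linear model and three synthetic "clients," is shown below. It is not any production federated-learning system; it only illustrates the mechanics the paragraph describes: local training on data that never moves, followed by a size-weighted average of the returned parameters.

```python
# Minimal sketch of federated averaging: each client trains on its own data,
# only the model parameters travel, and the server averages them (weighted by
# how many examples each client holds). A tiny linear-regression model trained
# with gradient descent keeps the example self-contained.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client(n):                      # private data that never leaves the client
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(n) for n in (50, 120, 80)]

def local_update(w, X, y, lr=0.1, steps=20):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):                      # each round: broadcast, train locally, aggregate
    local_ws, sizes = [], []
    for X, y in clients:
        local_ws.append(local_update(w_global.copy(), X, y))
        sizes.append(len(y))
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("aggregated parameters:", w_global)  # close to true_w without pooling the raw data
```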

The original federated learning use case was to train AI models on personal data distributed across billions of mobile devices. As those researchers summarized: "Modern mobile devices have access to a wealth of data suitable for machine learning models.... However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center.... We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates."

More recently, healthcare has emerged as a particularly promising field for the application of federated learning.

It is easy to see why. On one hand, there are an enormous number of valuable AI use cases in healthcare. On the other hand, healthcare data, especially patients' personally identifiable information, is extremely sensitive; a thicket of regulations like HIPAA restricts its use and movement. Federated learning could enable researchers to develop life-saving healthcare AI tools without ever moving sensitive health records from their source or exposing them to privacy breaches.

A host of startups has emerged to pursue federated learning in healthcare. The most established is Paris-based Owkin; earlier-stage players include Lynx.MD, Ferrum Health and Secure AI Labs.

Beyond healthcare, federated learning may one day play a central role in the development of any AI application that involves sensitive data: from financial services to autonomous vehicles, from government use cases to consumer products of all kinds. Paired with other privacy-preserving techniques like differential privacy and homomorphic encryption, federated learning may provide the key to unlocking AIs vast potential while mitigating the thorny challenge of data privacy.

The wave of data privacy legislation being enacted worldwide today (starting with GDPR and CCPA, with many similar laws coming soon) will only accelerate the need for these privacy-preserving techniques. Expect federated learning to become an important part of the AI technology stack in the years ahead.

We have entered a golden era for natural language processing.

OpenAI's release of GPT-3, the most powerful language model ever built, captivated the technology world this summer. It has set a new standard in NLP: it can write impressive poetry, generate functioning code, compose thoughtful business memos, write articles about itself, and so much more.

GPT-3 is just the latest (and largest) in a string of similarly architected NLP models (Google's BERT, OpenAI's GPT-2, Facebook's RoBERTa and others) that are redefining what is possible in NLP.

The key technology breakthrough underlying this revolution in language AI is the Transformer.

Transformers were introduced in a landmark 2017 research paper. Previously, state-of-the-art NLP methods had all been based on recurrent neural networks (e.g., LSTMs). By definition, recurrent neural networks process data sequentiallythat is, one word at a time, in the order that the words appear.

The Transformer's great innovation is to make language processing parallelizable: all the tokens in a given body of text are analyzed at the same time rather than in sequence. To support this parallelization, Transformers rely heavily on an AI mechanism known as attention. Attention enables a model to consider the relationships between words regardless of how far apart they are and to determine which words and phrases in a passage are most important to pay attention to.
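The attention computation itself is compact. The numpy sketch below implements scaled dot-product attention over a handful of toy token vectors; the dimensions are arbitrary, and real Transformers add learned query/key/value projections, multiple heads and many stacked layers on top of this core operation.

```python
# Scaled dot-product attention, the mechanism the paragraph describes: every
# token attends to every other token at once, so the whole sequence is
# processed in parallel rather than word by word.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # similarity between every pair of tokens
    weights = softmax(scores)            # how much each token attends to the others
    return weights @ V                   # weighted mix of the other tokens' values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                  # 5 toy tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d_model))

# In a real Transformer, Q, K and V come from learned projections of X;
# identity projections keep this sketch short.
out = attention(X, X, X)
print(out.shape)                         # (5, 8): one updated vector per token
```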

Why is parallelization so valuable? Because it makes Transformers vastly more computationally efficient than RNNs, meaning they can be trained on much larger datasets. GPT-3 was trained on roughly 500 billion words and consists of 175 billion parameters, dwarfing any RNN in existence.

Transformers have been associated almost exclusively with NLP to date, thanks to the success of models like GPT-3. But just this month, a groundbreaking new paper was released that successfully applies Transformers to computer vision. Many AI researchers believe this work could presage a new era in computer vision. (As well-known ML researcher Oriol Vinyals put it simply: "My take is: farewell convolutions.")

While leading AI companies like Google and Facebook have begun to put Transformer-based models into production, most organizations remain in the early stages of productizing and commercializing this technology. OpenAI has announced plans to make GPT-3 commercially accessible via API, which could seed an entire ecosystem of startups building applications on top of it.

Expect Transformers to serve as the foundation for a whole new generation of AI capabilities in the years ahead, starting with natural language. As exciting as the past decade has been in the field of artificial intelligence, it may prove to be just a prelude to the decade ahead.

See original here:

The Next Generation Of Artificial Intelligence - Forbes

St. Louis Is Grappling With Artificial Intelligence's Promise And Potential Peril – St. Louis Public Radio

Tinus Le Roux's company, FanCam, takes high-resolution photos of crowds having fun. That might be at Busch Stadium, where FanCam is installed, or on Market Street, where FanCam set up its technology to capture Blues fans celebrating after the Stanley Cup victory.

As photos, they're a fun souvenir. But paired with artificial intelligence, they're something more: a tool that gives professional sports teams a much more detailed look at who's in the audience, including their estimated age and gender. The idea, he explained Thursday on St. Louis on the Air, is to help teams "understand their fans a bit better: understand when they're leaving their seats, what merchandise are they wearing?"

Now that the pandemic has made crowd size a matter of public health, Le Roux noted that FanCam can help teams tell whether the audience has swelled past 25% capacity or how many patrons are wearing masks.

But for all the technologys power, Le Roux believes in limits. He explained that he is not interested in technology that would allow him to identify individuals in the crowd.

"We don't touch facial recognition. Ethically, it's dubious," he said. "In fact, I'm passionately against the use of facial recognition in public spaces. What we do is use computer vision to analyze these images for more generalized data."

Not all tech companies share those concerns. Detroit now uses facial recognition as an investigatory tool. Earlier this year, that practice led to the wrongful arrest of a Black man. The ACLU has now filed a lawsuit seeking to stop the practice there.

Locally, Sara Baker, policy director for the ACLU of Missouri, said the concerns go far beyond facial recognition.

"The way in which many technologies are being used, on the surface, the purpose is benign," she said. "The other implication of that is, what rights are we willing to sacrifice in order to engage with those technologies? And that centers, really, on your right to privacy, and if you are consenting to being surveilled or not, and how that data is being used on the back end as well."

Baker cited the license plate readers now in place around the city, as well as Persistent Surveillance Systems' attempts to bring aerial surveillance to the city, as potential concerns. The Board of Aldermen has encouraged Mayor Lyda Krewson to enter negotiations with the company as a way to stop crime, although Baltimore's experience with the technology has yet to yield the promised results.

"That could involve surveillance of the entire city," Baker said. "In Baltimore, that means 90% of outdoor activities are surveilled. I think we're getting to a point where we need to have robust conversations like this when we're putting our privacy rights on the line, because I think we have a shared value of wanting to keep some aspects of our lives private to ourselves."

To that end, Baker said she'd like to see the St. Louis Board of Aldermen pass Board Bill 95, which would regulate surveillance in the city. She said it offers "common sense guardrails" for how surveillance is used in the city.

Other than California and Illinois, Le Roux said, few states have even grappled with the technology's capabilities.

"I think the legal framework is still behind, and we need to catch up," Le Roux said.

Le Roux will be speaking more about the ethical issues around facial recognition at Prepare.ai's Prepare 2020 conference. The St. Louis-based nonprofit hosts the annual conference to explore issues around artificial intelligence. (Thanks to the ongoing pandemic, Prepare 2020 is now entirely virtual and entirely free.)

Prepare.ai's mission is to increase collaboration around fourth-industrial-revolution technologies in order to advance the human experience.

Le Roux said he hopes more tech leaders and those who understand the building blocks of technology have a seat at the table as regulations are being written. And Baker said her hope is that local governments proceed with caution in turning to new technologies being touted as a way to solve crime.

"We have over 600 cameras in the city of St. Louis," she said. "We've spent up to $100,000 a pop on different surveillance technologies, and we've spent over $4 million in the past three years on these types of surveillance technologies, and we've done it without any real audit or understanding of how the data is being used, and whether it's being used ethically. And that is what needs to change."

Related Event

What: Prepare 2020

When: Now through Oct. 28

St. Louis on the Air brings you the stories of St. Louis and the people who live, work and create in our region. The show is hosted by Sarah Fenske and produced by Alex Heuer, Emily Woodbury, Evie Hemphill and Lara Hamdan. The audio engineer is Aaron Doerr.

View post:

St. Louis Is Grappling With Artificial Intelligence's Promise And Potential Peril - St. Louis Public Radio

For artificial intelligence to flourish, governments need to think ahead about its responsible use – Firstpost

Hareesh Tibrewala | Oct 16, 2020 19:30:24 IST

In the words of Singularity University founder Ray Kurzweil, "In the 21st century, we won't experience 100 years of progress; it will be more like 20,000 years of progress (at today's rate)." And one key driver of this mind-boggling rate of change is the use of technology powered by artificial intelligence.

To some extent, "artificial intelligence" is a misnomer. There is nothing really artificial about it. Artificial Intelligence (AI) simply means

So far, the human mind was the fastest known data cruncher, and we prided ourselves on being an intelligent species as a result. Intelligence is, simply put, the ability to crunch a lot of data and use it to arrive at decisions. Now, for the first time in human history, we will see another "species", computers, capable of behaving more "intelligently" than we do. This creates its own challenges and opportunities.

The use of AI will lead to exponential innovation that will help solve the basic needs of humanity, eliminate all kinds of shortages and help humans live far more comfortable and longer lives. Some examples

Thus, governments need to be thinking ahead about how best to harness the power of AI and begin putting in place mechanisms and regulations to ensure its responsible use. This thinking has to happen not only at the national government level but also at a global level. With technology connecting people across the world in one big, interconnected village, it is important that governments across borders start cooperating and collaborating on mechanisms and processes to manage the power of AI. It will be important to ensure no single entity, be it a corporation or a country, is allowed to misuse the power of AI for global supremacy.

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions.

If there's one thing we can take away from the COVID-19 pandemic, it is the power of information sharing, collaboration and pre-defined protocols. If governments across the world had collaborated to share information about the coronavirus, and had pre-approved protocols in place for an event like this (shutting borders or sharing intelligence on vaccine efforts), we wouldn't have seen the situation escalate to the one we're finding ourselves in.

It is commendable that the government of India has shown the foresight and vision to bring the conversation around artificial intelligence into the public domain. Irrespective of political affiliations, there is no denying that this government demonstrates a high level of alertness around the use of technology to bring about social change, be it the Jan Dhan Yojana (to weed out middlemen) or the Aadhaar card (to link financial transactions and minimize tax evasion). Responsible AI can have a positive impact in almost every facet of social change: education, agriculture, healthcare, manufacturing or infrastructure.

India, on account of its large IT industry, can be a leader, not only in providing AI solutions to the world, but also in leading the global political dialogue around responsible AI.

The author is the joint CEO of Mirum, India.

Read the rest here:

For artificial intelligence to flourish, governments need to think ahead about its responsible use - Firstpost

Looper column: Artificial intelligence, humanity and the future – SouthCoastToday.com

Columns share an author's personal perspective.

*****

In September, the British news website The Guardian published a story written entirely by an AI - an artificial intelligence - that learned how to write from scanning the internet. The piece received a lot of press because in it the AI stated it had no plans to destroy humanity. It did, however, admit that it could be programmed in a way that might prove destructive.

The AI is not beyond making mistakes. I noted its erroneous claim that the word robot derives from Greek. An AI that is mistaken about where a word comes from might also be mistaken about where humanity is headed. Or it might be lying. Not a pleasant thought.

Artificial intelligence is based on the idea that computer programs can learn and grow. No less an authority than Stephen Hawking has warned that AI, unbounded by the slow pace of biological development, might quickly supersede its human developers.

Other scientists are more optimistic, believing that AI may provide solutions to many of humanity's age-old problems, including disease and famine. Of course, the destruction of biological life would be one solution to disease and famine.

Hawking worried that a growing and learning computer program might eventually destroy the world. I doubt it ever occurred to Hawking that his fears regarding AI could once have been expressed toward BI - biological intelligence; that is, humans - at their creation.

Did nonhuman life forms, like those the Bible refers to as angels, foresee the dangerous possibilities presented by the human capacity to grow and learn? Might not the angel Gabriel, like the scientist Hawking, have warned of impending doom?

AI designers are not blazing a trail but following one blazed by God himself. For example, their creations are made, as was God's, in their own image. And, like God's creation, theirs is designed to transcend its original specs. There is, however, this difference: AI designers do not know how to introduce a will into their creations.

The capacity for growth, designed into humankind from the first, is seldom given the consideration it deserves. For one thing, it implies the Creator's enormous self-confidence. God, unlike humans, is not threatened by the growth of his creation. In fact, he delights in it. He does not need to worry about protecting himself.

That the Creator wants his creatures to grow is good news, for it means God is a parent. That is what parents are like. They long for their children to become great and good. No wonder Jesus taught his followers to call God Father.

Given that God created such beings knowing what could - and if theologians are correct, what would - go wrong, he must have considered the outcome of creation to be so magnificent and good as to merit present pain and suffering. When people fault God for current evil, they do so without comprehending future good.

The present only makes sense in the light of the future, and the future only offers hope if we will become more and better than we currently are. Outside of the context of a magnificent future, present injustices, sorrows and suffering appear overwhelming.

The hope presented in the Bible is audacious. It is unparalleled and unrivaled. The Marxist hopes for a better world. The Christian hopes for a perfect one: a new heaven and new earth, where everything is right and everyone exists in glory. The hope of the most enthusiastic Marxist fades before this shining hope the way a candle fades before the noonday sun.

This hope is not just that human pains will be forgotten, swallowed up in bliss. It is not just that shame will be buried when we die and left in the grave when we rise. Christian hope is not just that evil and injustice will be destroyed. It is that when God is all and is in all, we will be more than we have ever been.

The long story of weapons and wars, of marriages broken, and innocence stolen turns out to be different than we thought and better than we dreamed. It is the introduction to a story of astounding goodness, displayed in our creation, redemption, and glorious future.

Shayne Looper is the pastor of Lockwood Community Church in Coldwater, Michigan. His blog, The Way Home, is at shaynelooper.com.

Here is the original post:

Looper column: Artificial intelligence, humanity and the future - SouthCoastToday.com

The relentless threat of artificial intelligence taking our jobs away – Mint

I recently came across a quote by the father of Information Theory and Massachusetts Institute of Technology professor Claude Shannon: "I visualize a time when we will be to robots what dogs are to humans, and I'm rooting for the machines." Shannon did not seem to like human beings much, but this view set off another thought process in my mind. As a technology writer and digital transformation practitioner, the second-most-asked question of me is: "Will artificial intelligence (AI) take our jobs, and what should I do to protect mine or my child's?" (For the most oft-asked query, you will have to read on.)

Whether AI will take our jobs or create new ones is one of the greatest debates of the modern world. Every time a revolutionary new technology arrives, the same fear raises its head. It bothered Ned Ludd in 1779 after the invention of the Spinning Jenny, which threatened to take his job as a textile factory apprentice. He went and smashed a machine or two, catalysing a movement against textile technology that became known as the Luddite movement. New-age Luddites worried about personal computers and their job-destroying potential. This movement was particularly strident in India, with computers being smashed by worker unions. It turned out that the information technology (IT) revolution created millions of jobs and catapulted India to its tech-superpower status.

But AI, everyone says, is different, and we need to think about it in a different way. They may be right. This is the first time we have a technology which could potentially replace us, and could perhaps be even more powerful than us, armed with the theoretical potential to turn humans redundant. Turing Award winner Alan Perlis articulates this worry best when he says, "A year spent in artificial intelligence is enough to make one believe in God." Most global institutions, though, are far more sanguine. The World Economic Forum predicts that by 2022, AI will create 133 million new jobs. It also says that from a 30/70 division of labour between machines and humans, the ratio will dramatically shift to 52/48 by 2025. IT consultancy Gartner claims that AI will create two million net new jobs by the same year.

But where will these jobs come from? I have a simple way to think about it: most of the work we do can be rudimentarily divided into English (or any other language) and arithmetic. English is the creative part (strategy, communication, messaging), while arithmetic is the analytical part (Excel sheets, number crunching, financial planning). The arithmetic bit will get taken away by robots first (case in point: Robotic Process Automation), and humans will still own the English bit. However, even this language part has started getting chipped away by AI, specifically by deep learning, neural networks and now Generative Pre-trained Transformer 3 (GPT-3). AI now creates great music too: search for AI-written music on YouTube, and you will find lots of it. Microsoft Research and ING teamed up to have AI paint a Rembrandt painting (nextrembrandt.com), and it painted a critic-defying one nearly 350 years after the master died. AI writes poetry and prose, and defeats humans at games that are instinct- and imagination-driven.

Perhaps the best explanation of AI's impact on jobs is given by Kai-Fu Lee, an acclaimed AI investor and practitioner. His famous matrix has optimization-to-strategy on one axis and no-compassion-to-full-compassion on the other. High-optimization, low-compassion jobs, like telesales, customer support, dishwashing, radiology work and truck driving, will be the first to go, while high-compassion jobs like running a company, striking deals, teaching and caring for the elderly will be last. There will also be jobs where AI will support humans, or humans will assist AI. Then there will be jobs that will always be for humans, requiring communication skills, empathy, compassion, trust, creativity and reasoning. In fact, Lee has a list of the 10 safest jobs: psychiatry, therapy, medical care, AI-related research and engineering, fiction writing, teaching, criminal law, computer science and engineering, science, and management.

If you think of it, it is more about humans than about AI. Udacity co-founder Sebastian Thrun puts it best: "I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition."

And that brings me to the question that people ask me most often: "Will AI replace humans?" The short answer is that in some ways, it will. The long answer is for another column.

Jaspreet Bindra is the author of The Tech Whisperer, and founder of Digital Matters


Read more here:

The relentless threat of artificial intelligence taking our jobs away - Mint

MediFind Selected as Finalist for Best in Artificial Intelligence in the Shorty Social Good Awards – The Wellsboro Gazette

PHILADELPHIA, Oct. 15, 2020 /PRNewswire/ -- MediFind, an advanced platform that uses artificial intelligence to help patients make more informed health decisions, today announced it has been selected as a Shorty Social Good Award Finalist for Best in Artificial Intelligence. This award honors the most creative and effective use of artificial intelligence tools to support a social good program, initiative or social good goal.

MediFind was selected based on its use of AI and machine learning to help people facing serious, complex and rare diseases find better care, faster. The platform enables patients and their families to explore symptoms and find experts, second opinions, clinical trials and the latest research for thousands of health conditions, all in one place. By leveraging a proprietary combination of artificial intelligence and medical experts, MediFind evaluates over 2.5 million global physicians and analyzes over 100,000 research articles each month, using information from dozens of disparate datasets.

This is the latest in a streak of recognitions for the company, which has also been shortlisted for the Vesalius Innovation Award from Karger Publishers, the Reuters Pharma Awards USA for Most Valuable Service or Digital Therapy, and the Sierra Ventures/Startup50 Top 25 Startups.

The Shorty Social Good Awards honor the social initiative brands, agencies and nonprofits that are working to make our world a better place. While the Shorty Awards have long honored the best of social media and digital, these awards include efforts made by organizations to improve sustainability and diversity internally, foster globally minded business partnerships and increase employee community and civic engagement.

Finalists were selected by members of the Real Time Academy of Short Form Arts & Sciences, comprised of luminaries from advertising, media, entertainment and technology. The group includes Ogilvy Vice President of Social Change Kate Hull Fliflet; Owner and CEO at Black Girls Run Jay Ell Alexander; Director of Social Impact at MTV, VH1 and Logo Maxwell Zorick; Founder and CEO at The Phluid Project Rob Smith; and more. Social Good Award winners will be announced and honored at a digital ceremony on Thursday, November 19th, in New York City.

MediFind is honored to be in the company of the other Best in Artificial Intelligence finalists, including IBM, Goodwill and Hypergiant Industries. The full list of honorees can also be viewed at AdWeek.

ABOUT MEDIFIND

Founded on Rare Disease Day in 2020, MediFind is a proprietary technology platform that uses big data to help connect patients to the right care team and treatment protocols faster, improving their chances of optimal health outcomes. With a searchable database powered by advanced machine learning and algorithms, MediFind makes it easy for people facing the most challenging health conditions to locate top doctors, review the latest research and learn about clinical trials. Research findings are summarized in plain language so patients can make more informed decisions faster, because when it comes to health, nothing is more valuable than time. Learn more about MediFind at www.medifind.com.

ABOUT THE SHORTY SOCIAL GOOD AWARDS

The Shorty Social Good Awards are presented by the Shorty Awards and produced by Sawhorse Media, a New York-based technology company. Sawhorse also created and runs Muck Rack, the leading network to connect with journalists on social media.

MEDIA CONTACT: Erin Ovadal, SSPR, eovadal@sspr.com

Read more:

MediFind Selected as Finalist for Best in Artificial Intelligence in the Shorty Social Good Awards - The Wellsboro Gazette