AI is here to stay, but are we sacrificing safety and privacy? A free public Seattle U course will explore that – Seattle Times

The future of artificial intelligence (AI) is here: self-driving cars, grocery-delivering drones and voice assistants like Alexa that control more and more of our lives, from the locks on our front doors to the temperatures of our homes.

But as AI permeates everyday life, what about the ethics and morality of the systems? For example, should an autonomous vehicle swerve into a pedestrian or stay its course when facing a collision?

These questions plague technology companies as they develop AI at a clip outpacing government regulation, and have led Seattle University to develop a new ethics course for the public.

Launched last week, the free, online course for businesses is the first step in a Microsoft-funded initiative to merge ethics and technology education at the Jesuit university.

Seattle U senior business-school instructor Nathan Colaner hopes the new course will become a well-known resource for businesses as they realize that "[AI] is changing things," he said. "We should probably stop to figure out how."

The course, developed by Colaner, law professor Mark Chinen and adjunct business and law professor Tracy Ann Kosa, explores the meaning of ethics in AI by looking at guiding principles proposed by some nonprofits and technology companies. A case study on facial recognition in the course encourages students to evaluate different uses of facial-recognition technology, such as surveillance or identification, and to determine how the technology should be regulated. The module draws on recent studies that revealed facial-analysis systems have higher error rates when identifying images of darker-skinned females in comparison to lighter-skinned males.

The course also explores the impact of AI on different occupations.

The public's desire for more guidance around AI may be reflected in a recent Northeastern University and Gallup survey that found only 22% of U.S. respondents believed colleges or universities were adequately preparing students for the future of work.

Many people who work in tech aren't required to complete a philosophy or ethics course in school, said Quinn, a gap he believes contributes to blind spots in the development of technology. Those blind spots may have led to breaches of public trust, such as government agencies' use of facial recognition to scan license photos without consent, Alexa workers listening to the voice commands of unaware consumers and racial bias in AI algorithms.

As regulations on emerging technology wend through state legislatures, colleges, such as the University of Washington and Stanford University, have created ethics courses to mitigate potential harmful effects. Seattle University's course goes a step further by opening its course to the public.

The six-to-eight-hour online course is designed to encourage those on the front end of AI deployment, such as managers, to understand the ethical issues behind some of the technologies. Students test their understanding of the self-paced course through quizzes at the end of each module. Instructors will follow up with paid in-person workshops at the university that cater to the needs of individual businesses.

The initiative was spawned by an August 2018 meeting between Microsoft president Brad Smith and Seattle University administrators, in which the tech company promised $2.5 million toward the construction of the school's new engineering building. The conversation quickly veered into a lengthy discussion about ethical issues around AI development, such as fairness and accountability of tech companies and their workers, said Michael Quinn, the dean of the university's College of Science and Engineering.

At the meeting, Microsoft promised Seattle University another $500,000 to support the development of a Seattle University ethics and technology initiative. Quinn called the AI ethics lab a natural opportunity to jump at for the college, which requires an ethics course to graduate. It was already a topic circulating around campus: Staff and faculty had recently spearheaded a book club to discuss contemporary issues related to ethics and technology.

The initiative will also provide funding for graduate research assistants to create a website with articles and resources on moral issues around AI, as well as for the university to hire a faculty director to manage the initiative. Seattle University philosophy professors will offer an ethics and technology course for students in 2021.

Quinn believes institutions of higher education have a role in educating the public and legislators on finding a middle ground between advancing AI technology and protecting basic human rights. "People are starting to worry about the implications [of AI] in terms of their privacy, safety and employment," Quinn said.

"AI is developing faster than legislation can keep up with, so it's a prime subject for ethics," said Colaner. He is particularly concerned about the use of AI in decision making, such as algorithms used to predict recidivism rates in court, and in warfare through drone strikes.

Washington state Sen. Joe Nguyen, D-White Center, agrees that higher education has a large role in preparing the public for a future more reliant on AI. In an industry setting, said Quinn, employers often push workers to advance technology as far as possible without considering its impact on different communities. AI ethics in education, however, "serves as a safeguard [for] meaningful innovation, offers a critical eye and shows how it impacts people in a social-justice aspect."

Ahead of the current session, Seattle University instructors consulted Nguyen on draft legislation about an algorithmic accountability bill that was re-introduced this legislative session after failing to pass last year.

The bill would provide guidelines for the adoption of automated systems that assist in government decision making and would require agencies to produce an accountability report on the capabilities of the software as well as how data is collected and used.

Law professor Ben Alarie, who is also the CEO of a company that uses AI to make decisions in tax cases, believes the public accessibility of the Seattle University course could help businesses avoid potential disruptions.

"One of the benefits of having a program like this available to everyone is that businesses can build in safeguards and develop these technologies in a responsible way," he said.

Read more here:

AI is here to stay, but are we sacrificing safety and privacy? A free public Seattle U course will explore that - Seattle Times

AI can automatically rewrite outdated text in Wikipedia articles – Engadget

The machine learning-based system is trained to recognize the differences between a Wikipedia article sentence and a claim sentence with updated facts. If it sees any contradictions between the two sentences, it uses a "neutrality masker" to pinpoint both the contradictory words that need deleting and the ones it absolutely has to keep. After that, an encoder-decoder framework determines how to rewrite the Wikipedia sentence using simplified representations of both that sentence and the new claim.
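
To make that two-stage flow concrete, here is a toy Python sketch of the mask-then-rewrite idea. Both stages are stand-ins: the real system uses a learned neutrality masker and a trained encoder-decoder, whereas the heuristics, function names and example sentences below are invented purely to show how data moves through the pipeline.

```python
# Toy sketch of the "mask then rewrite" pipeline described above. The real
# system uses learned models for both stages; these heuristics, names and
# example sentences are invented purely to show the data flow.

def mask_contradictions(article_tokens, claim_tokens):
    """Stage 1 (stand-in for the neutrality masker): flag article tokens that
    look factual (numbers, capitalized words) but don't appear in the updated
    claim, and therefore may need rewriting."""
    claim_set = set(claim_tokens)
    masked = []
    for tok in article_tokens:
        looks_factual = tok[0].isupper() or any(c.isdigit() for c in tok)
        masked.append("[MASK]" if looks_factual and tok not in claim_set else tok)
    return masked

def rewrite(masked_tokens, claim_tokens, article_tokens):
    """Stage 2 (stand-in for the encoder-decoder): fill the masked slots with
    the claim's new factual tokens, in order, leaving everything else alone."""
    article_set = set(article_tokens)
    new_facts = [t for t in claim_tokens
                 if (t[0].isupper() or any(c.isdigit() for c in t))
                 and t not in article_set]
    out = []
    for tok in masked_tokens:
        out.append(new_facts.pop(0) if tok == "[MASK]" and new_facts else tok)
    return " ".join(t for t in out if t != "[MASK]")

if __name__ == "__main__":
    article = "The company employs 500 people in Boston".split()
    claim = "The company now employs 750 people in Boston".split()
    masked = mask_contradictions(article, claim)
    print(" ".join(masked))                 # The company employs [MASK] people in Boston
    print(rewrite(masked, claim, article))  # The company employs 750 people in Boston
```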

The system can also be used to supplement datasets meant to train fake news detectors, potentially reducing bias and improving accuracy.

As-is, the technology isn't quite ready for prime time. Humans rating the AI's accuracy gave it average scores of 4 out of 5 for factual updates and 3.85 out of 5 for grammar. That's better than other systems for generating text, but that still suggests you might notice the difference. If researchers can refine the AI, though, this might be useful for making minor edits to Wikipedia, news articles (hello!) or other documents in those moments when a human editor isn't practical.

Continued here:

AI can automatically rewrite outdated text in Wikipedia articles - Engadget

AIoT: The Power of Combining AI with the IoT – Reliable Plant Magazine

When people hear the terms artificial intelligence (AI) and the internet of things (IoT), most think of a futuristic world often depicted in the movies. However, many of those predictions are now coming to fruition in this fourth industrial revolution that is currently transforming the way the world works in every way imaginable.

Even though the full capabilities of AI and the IoT are still in their relative infancy, these two technologies are now being combined across every industry in scenarios where information and problem-solving can improve outcomes for all stakeholders.

The last great convergence of this magnitude occurred in the late 1990s, as mobile phones and the internet collided in a way that changed the course of human history. Most people now hold more computing power in the palm of their hand than was required to put a man on the moon in 1969. The convergence of AI and the IoT is about to do the same thing on an even greater scale.

The ability to capture large amounts of data has exploded in the last three to five years. Along with these advances come new threats and concerns about privacy and security. Large volumes of user data and company proprietary information are tempting targets for dark web hackers and even government entities around the world. There are also new responsibilities that come with this increased capability.

Sensors can now be applied to everything. This means that infinitely more data can be collected from every process or transaction in real time. IoT devices are the front line of this data collection process in manufacturing environments, customer service departments and consumer products in people's homes. Any device with a chipset has the potential to be connected to a network and begin streaming large swaths of data 24/7.

Complex algorithms offer the capability to perform predictive analytics from every conceivable angle. Machine learning (ML), a subset of artificial intelligence, continues to upgrade workflows and simplify problem solving.

Companies can now capture all the meaningful data surrounding their processes and problems and develop specific solutions for real-world challenges within the organization to improve reliability, efficiency and sustainability.

While AI and the IoT are impressive superpowers in their own right, thanks to the concept of convergence, 1+1=3. The IoT improves the value of AI by allowing real-time connectivity, signaling and data exchange. AI boosts the capabilities of the IoT by applying machine learning to improve decision making.

Many in the industry are now referring to this convergence simply as AIoT. Presently, many AIoT applications are fairly monolithic, as companies build the expertise and systems to deploy and support these powerful technologies across their entire organization. The coming years will see this convergence allow more optimization and networking, which will create even more value.

Some of the most well-respected minds have predicted full digital integration between humans and computers by the year 2030. Between this and ongoing advances in automation and robotics, up to 40 percent of the current workforce could be replaced by technology within the next 10-15 years.

Solutions providers and hardware manufacturers are already in full swing to take advantage of this digital technology gold rush and position themselves in the evolving industrial landscape. Forward-looking companies like Amazon are offering re-education and training opportunities for employees in soon-to-be obsolete job functions.

Convergence is a concept everyone should become familiar with, as all manner of technology discoveries and advances are being combined to innovate and disrupt the way the entire world lives, works and plays.

Joseph Zulick is a writer and manager at MRO Electric and Supply.

Go here to see the original:

AIoT: The Power of Combining AI with the IoT - Reliable Plant Magazine

Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI – Singularity Hub

As obstinate and frustrating as we are sometimes, humans in general are pretty flexible when it comes to learning, especially compared to AI.

Our ability to adapt is deeply rooted within our brain's chemical base code. Although modern AI and neurocomputation have largely focused on loosely recreating the brain's electrical signals, chemicals are actually the prima donna of brain-wide neural transmission.

Chemical neurotransmitters not only allow most signals to jump from one neuron to the next, they also feed back and fine-tune a neuron's electrical signals to ensure they're functioning properly in the right contexts. This process, traditionally dubbed neuromodulation, has been front and center in neuroscience research for many decades. More recently, the idea has expanded to also include the process of directly changing electrical activity through electrode stimulation rather than chemicals.

Neural chemicals are the targets for most of our current medicinal drugs that re-jigger brain functions and states, such as anti-depressants or anxiolytics. Neuromodulation is also an immensely powerful way for the brain to flexibly adapt, which is why it's perhaps surprising that the mechanism has rarely been explicitly incorporated into AI methods that mimic the brain.

This week, a team from the University of Liege in Belgium went old school. Using neuromodulation as inspiration, they designed a new deep learning model that explicitly adopts the mechanism to better learn adaptive behaviors. When challenged on a difficult navigational task, the team found that neuromodulation allowed the artificial neural net to better adjust to unexpected changes.

"For the first time, cognitive mechanisms identified in neuroscience are finding algorithmic applications in a multi-tasking context. This research opens perspectives in the exploitation in AI of neuromodulation, a key mechanism in the functioning of the human brain," said study author Dr. Damien Ernst.

Neuromodulation often appears in the same breath as another jargon-y word, neuroplasticity. Simply put, they just mean that the brain has mechanisms to adapt; that is, neural networks are flexible or plastic.

Cellular neuromodulation is perhaps the grandfather of all learning theories in the brain. Famed Canadian psychologist and father of neural networks Dr. Donald Hebb popularized the theory in the 1900s, which is now often described as "neurons that fire together, wire together." On a high level, Hebbian learning summarizes how individual neurons flexibly change their activity levels so that they better hook up into neural circuits, which underlie most of the brain's computations.

However, neuromodulation goes a step further. Here, neurochemicals such as dopamine don't necessarily directly help wire up neural connections. Rather, they fine-tune how likely a neuron is to activate and link up with its neighbor. These so-called neuromodulators are similar to a temperature dial: depending on context, they either alert a neuron that it needs to calm down so that it only activates when receiving a larger input, or hype it up so that it jumps into action after a smaller stimulus.

"Cellular neuromodulation provides the ability to continuously tune neuron input/output behaviors to shape their response to external stimuli in different contexts," the authors wrote. This level of adaptability especially comes into play when we try things that need continuous adjustments, such as how our feet strike uneven ground when running, or complex multitasking navigational tasks.

To be very clear, neuromodulation isn't directly changing synaptic weights. (Ugh, what?)

Stay with me. You might know that a neural network, either biological or artificial, is a bunch of neurons connected to each other through different strengths. How readily one neuron changes a neighboring neuron's activity, or how strongly they're linked, is often called the synaptic weight.

Deep learning algorithms are made up of multiple layers of neurons linked to each other through adjustable weights. Traditionally, tweaking the strengths of these connections, or synaptic weights, is how a deep neural net learns (for those interested, the biological equivalent is dubbed synaptic plasticity).

However, neuromodulation doesn't directly act on weights. Rather, it alters how likely a neuron or network is to change its connections; that is, its flexibility.

Neuromodulation is a meta-level of control, so it's perhaps not surprising that the new algorithm is actually composed of two separate neural networks.

The first is a traditional deep neural net, dubbed the main network. It processes input patterns and uses a custom method of activation: how likely a neuron in this network is to spark to life depends on the second network, or the neuromodulatory network. Here, the neurons don't process input from the environment. Rather, they deal with feedback and context to dynamically control the properties of the main network.

Especially important, said the authors, is that the modulatory network scales in size with the number of neurons in the main one, rather than the number of their connections. It's what makes the NMN different, they said, because this setup "allows us to extend more easily to very large networks."
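
As a rough illustration of that two-network setup, the sketch below (Python with NumPy) has a small neuromodulatory network emit one gain and one threshold per hidden neuron of the main network, so the same input yields different outputs under different contexts. The layer sizes, sigmoid activation and random weights are assumptions made for illustration; this is not the paper's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class NeuromodulatedNet:
    """Main network whose hidden-layer activation is steered by a separate
    neuromodulatory network that reads context/feedback."""

    def __init__(self, n_in, n_hidden, n_out, n_context):
        # Main network: ordinary dense layers that process the input.
        self.w1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.w2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        # Neuromodulatory network: maps context to one gain and one threshold
        # per hidden neuron, so its output scales with the neuron count
        # rather than the connection count.
        self.wm = rng.normal(0.0, 0.5, (n_context, 2 * n_hidden))

    def forward(self, x, context):
        gain, bias = np.split(context @ self.wm, 2)
        z = x @ self.w1
        # Context-dependent activation: the modulation rescales and shifts each
        # hidden neuron's response without touching the main network's weights.
        h = sigmoid(gain * z + bias)
        return h @ self.w2

net = NeuromodulatedNet(n_in=4, n_hidden=8, n_out=2, n_context=3)
x = rng.normal(size=4)
print(net.forward(x, np.array([1.0, 0.0, 0.0])))
print(net.forward(x, np.array([0.0, 1.0, 0.0])))  # same input, different behavior
```

The design point the sketch tries to capture is that switching the context vector, rather than retraining the main weights, is what stands in for neuromodulation here.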

To gauge the adaptability of their new AI, the team pitted the NMN against traditional deep learning algorithms in a scenario using reinforcement learningthat is, learning through wins or mistakes.

In two navigational tasks, the AI had to learn to move towards several targets through trial and error alone. It's somewhat analogous to you trying to play hide-and-seek while blindfolded in a completely new venue. The first task is relatively simple, in which you're only moving towards a single goal and you can take off your blindfold to check where you are after every step. The second is more difficult in that you have to reach one of two marks. The closer you get to the actual goal, the higher the reward: candy in real life, and a digital analog for the AI. If you stumble on the other, you get punished, the AI equivalent of a slap on the hand.

Remarkably, NMNs learned both faster and better than traditional reinforcement learning deep neural nets. Regardless of how they started, NMNs were more likely to figure out the optimal route towards their target in much less time.

Over the course of learning, NMNs not only used their neuromodulatory network to change their main one, they also adapted the modulatory network itself (talk about meta!). It means that as the AI learned, it didn't just flexibly adapt its learning; it also changed how it influences its own behavior.

In this way, the neuromodulatory network is a bit like a library of self-help books: you don't just solve a particular problem, you also learn how to solve the problem. The more information the AI got, the faster and better it fine-tuned its own strategy to optimize learning, even when feedback wasn't perfect. The NMN also didn't like to give up: even when already performing well, the AI kept adapting to further improve itself.

"Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems," the authors said.

The study is just the latest in a push to incorporate more biological learning mechanisms into deep learning. We're at the beginning: neuroscientists, for example, are increasingly recognizing the role of non-neuron brain cells in modulating learning, memory, and forgetting. Although computational neuroscientists have begun incorporating these findings into models of biological brains, so far AI researchers have largely brushed them aside.

It's difficult to know which brain mechanisms are necessary substrates for intelligence and which are evolutionary leftovers, but one thing is clear: neuroscience is increasingly providing AI with ideas outside its usual box.

Image Credit: Image by Gerd Altmann from Pixabay

Excerpt from:

Neuromodulation Is the Secret Sauce for This Adaptive, Fast-Learning AI - Singularity Hub

What to know about AI and diversity ahead of OurCrowd Summit 2020 – The Times of Israel

When a panel of five men and two women took the stage at the annual OurCrowd Summit in Jerusalem last year to discuss the hype around artificial intelligence (AI), they accomplished the trifecta that is often hard to find at conferences: They provided a session that was informative, interesting, AND entertaining.

They did it by dissecting the good (better customer and work experiences), the bad (black boxes), and the ugly (bias) around AI and its impact. As they talked about the importance of large data sets from a variety of sources and shared different perspectives on topics that heated the conversation, the panel itself became an example of why numbers and diversity matter.

Large numbers and diversity matter even more now as AI moves beyond the hype and more businesses in the tech and traditional sectors transition to being model-based with machine learning algorithms at the core.

In late 2018 it was revealed that Amazon dropped its AI-powered internal recruiting tool because it was biased in favor of male candidates. In November 2019, news broke that Goldman Sachs was being investigated for sex discrimination after claims were made that its credit algorithm used for Apple Card is sexist. These are just two gender-related bias examples and barely touch other biases, such as race.

The main culprit for bias is a lack of diversity on the teams developing these solutions, which remain overwhelmingly white and male.

The issue has become so significant that CIOs in the US have made diversifying their tech teams a priority in 2020. A number of investment funds, including WestRiver Group (WRG) in Seattle, are also increasingly paying attention and investing in diverse management teams as a business advantage.

In Israel, several new initiatives have been launched in recent years by the government, non-profit organizations, and companies to support diversity in the tech sector.

Power in Diversity, an initiative launched by Israeli investment firm Vintage Investment Partners, is working with companies to address these issues head on in Israel's tech sector. For example, it recently created a workshop for Tel Aviv-based Yotpo's R&D department, which hired three tech teams of Haredi women last year, to address and challenge differences and stereotypes to help make the work environment more inclusive.

Kaltura, a video technology company co-founded by serial entrepreneur Michal Tsur, announced its pledge in 2019 to work towards increasing female leadership at the company to 50% by 2024. It will also work to increase the number of female employees at all levels of the company to 50% in that time frame.

So, what's the lesson to keep in mind at OurCrowd Summit 2020? The diverse makeup of the AI panel last year was able to move the discussion past all the hype and to address serious issues, like explainable AI, and paint a more complete picture of AI. A picture that has played out throughout the past year.

These broader perspectives and insights into what's happening are worth keeping in mind at the conference this year, especially for sessions that don't have a diverse lineup of speakers, such as the session on how AI is transforming industries, which as a result may not show the full picture, or, put another way, all the data points.

Lisa Damast is the Head of Marketing for RTInsights.com and the founder of the weekly newsletter Gender Diversity in Tech. She previously was the Israel correspondent and bureau chief for the financial news publication Mergermarket, and has been published on the Financial Times's website, Israel21c, and Green Prophet. She blogs about topics related to Israeli women in tech and female entrepreneurship.

Follow this link:

What to know about AI and diversity ahead of OurCrowd Summit 2020 - The Times of Israel

Report on AI in UK public sector: Some transparency on how government uses it to govern us would be nice – The Register

A new report from the Committee on Standards in Public Life has criticised the UK government's stance on transparency in AI governance and called for ethics to be "embedded" in the frameworks.

The 74-page treatise noted that algorithms are currently being used or developed in healthcare, policing, welfare, social care and immigration. Despite this, the government doesn't publish any centralised audit on the extent of AI use across central government or the wider public sector.

Most of what is in the public realm at present is thanks to journalists and academics making Freedom of Information requests or rifling through the bins of public procurement data, rather than public bodies taking the proactive step of releasing information about how they use AI.

The committee said the public should have access to the "information about the evidence, assumptions and principles on which policy decisions have been made".

In focus groups assembled for the review, members of the public themselves expressed a clear desire for openness, as you'd expect.

"This serious report sadly confirms what we know to be the case that the Conservative government is failing on openness and transparency when it comes to the use of AI in the public sector," shadow digital minister Chi Onwurah MP said in a statement.

"The government urgently needs to get a grip before the potential for unintended consequences gets out of control," said Onwurah, who argued that the public sector should not accept further AI algorithms in decision-making processes without introducing further regulation.

Simon Burall, senior associate with the public participation charity Involve, commented: "It's important that these debates involve the public as well as elected representatives and experts, and that the diversity of the communities that are affected by these algorithms are also involved in informing the trade-offs about when these algorithms should be used and not."

Predictive policing programmes are already being used to identify crime "hotspots" and make individual risk assessments where police use algorithms to determine the likelihood of someone committing a crime.

But human rights group Liberty has urged police to stop using these programmes because they entrench existing biases. Using inadequate data and indirect markers for race (like postcodes) could perpetuate discrimination, the group warned. There is also a "severe lack of transparency" with regard to how these techniques are deployed, it said.

The committee's report noted that the "application of anti-discrimination law to AI needs to be clarified".

In October 2019, the Graun reported that one in three local councils were using algorithms to make welfare decisions. Local authorities have bought machine learning packages from companies including Experian, TransUnion, Capita and Peter Thiel's data-mining biz Palantir (which has its fans in the US public sector) to support a cost-cutting drive.

These algorithms have already caused cock-ups. North Tyneside council was forced to drop TransUnion, whose system it used to check housing and council tax benefit claims, when welfare payments to an unknown number of people were delayed thanks to the computer's "predictive analytics" wrongly classifying low-risk claims as high risk.

The report stopped short of recommending an independent AI regulator. Instead it said: "All regulators must adapt to the challenges that AI poses to their specific sectors."

The committee endorsed the government's intention to establish the Centre for Data Ethics and Innovation as "an independent, statutory body that will advise government and regulators in this area". So that's all right then.

Continued here:

Report on AI in UK public sector: Some transparency on how government uses it to govern us would be nice - The Register

Why Clearview AI is a threat to us all – Engadget

Corporate backlash against Clearview clearly hasn't dissuaded law enforcement agencies from using the surveillance system either. According to the company, more than 600 police departments across the US reportedly use the Clearview service -- including the FBI and DHS.

The Chicago Police Department paid $50,000 for a two-year license for the system, CBS News reports, though a spokesperson for the CPD noted that only 30 officers have access to it and the system is not used for live surveillance as it is in London.

"The CPD uses a facial matching tool to sort through its mugshot database and public source information in the course of an investigation triggered by an incident or crime," it said in a statement to CBS.

Despite the CPD's assurances that it would not take advantage of the system, Clearview's own marketing team appears to be pushing police departments to do exactly that. In a November email to the Green Bay PD, acquired by BuzzFeed, the company actively encouraged officers to search the database for themselves, acquaintances, even celebrities.

"Have you tried taking a selfie with Clearview yet?" the email read. "It's the best way to quickly see the power of Clearview in real time. Try your friends or family. Or a celebrity like Joe Montana or George Clooney."

"Your Clearview account has unlimited searches. So feel free to run wild with your searches," the email continued.

That's not to say that the system is completely without merit. Participating law enforcement agencies are already using it to quickly track down shoplifting, identity theft and credit card fraud suspects. Clearview also claims that its app helped the NYPD track down a terrorism suspect last August, but the agency disputes the company's involvement in the case. Clearview is also reportedly being used to help locate child sex victims; however, its use in those classes of cases remains anecdotal at best and runs the risk of hurting the same kids it's aiming to help.

Using Clearview to track minors, even if done with the best of lawful intentions, is a veritable minefield of privacy and data security concerns. Because the police are expected to upload investigation images to Clearview's servers, the company could potentially collect a massive amount of highly sensitive data on any number of underage sex abuse survivors. And given that the company's security measures are untested, unregulated and unverified, the public has no assurances that data will be safe if and when Clearview's systems are attacked.

What's more, Clearview's system suffers the same shortcomings as other facial recognition systems: It's not as good at interpreting black and brown faces as it is for whites. The company claims that its search is accurate across "all demographic groups," but the ACLU vehemently disagrees. When Clearview pitched its services to the North Miami Police Department back in October 2019, the company included a report from a three-member panel reading, "The Independent Review Panel determined that Clearview rated 100 percent accurate, producing instant and accurate matches for every photo image in the test. Accuracy was consistent across all racial and demographic groups." This study was reportedly conducted using the same methodology as the ACLU's 2018 test of Amazon's Rekognition system, a claim that the ACLU rejects. The Civil Liberties Union notes that none of the three sitting on the review board panel had any prior experience in evaluating facial recognition systems.

"Clearview's technology gives government the unprecedented power to spy on us wherever we go -- tracking our faces at protests, [Alcoholics Anonymous] meetings church, and more," ACLU Northern California attorney Jacob Snow told BuzzFeed News. "Accurate or not, Clearview's technology in law enforcement hands will end privacy as we know it."

And it's not like the police abusing their surveillance powers for personal gain is anything new. In 2016, an Associated Press investigation discovered that police around the country routinely accessed secure databases to look up information on citizens that had nothing to do with their police work, including to stalk ex-girlfriends. In 2013, a Florida cop looked up the personal information of a bank teller he was interested in. In 2009, a pair of FBI agents were caught surveilling a women's dressing room where teenage girls were trying on prom dresses. These are not isolated incidents. In the same year that Clearview was founded, DC cops attempted to intimidate Facebook into giving them access to the personal profiles of more than 230 presidential inauguration protesters. With Clearview available, the police wouldn't even need to contact Facebook as Clearview has likely already scraped and made accessible the dirt the cops are looking for.

"The weaponization possibilities of this are endless," Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, told The New York Times in January. "Imagine a rogue law enforcement officer who wants to stalk potential romantic partners, or a foreign government using this to dig up secrets about people to blackmail them or throw them in jail."

Unsurprisingly, Clearview's financial backers remain unconcerned about the system's potential for abuse. "I've come to the conclusion that because information constantly increases, there's never going to be privacy," David Scalzo, founder of Kirenaga Partners and early Clearview investor, told The New York Times. "Laws have to determine what's legal, but you can't ban technology. Sure, that might lead to a dystopian future or something, but you can't ban it."

Luckily, our elected representatives are starting to take notice of the dangers that unregulated facial recognition technologies like Clearview pose to the public. A handful of California cities including San Francisco, Oakland and Alameda have all passed moratoriums on their local governments' use of the technology. California, New Hampshire and Oregon have passed restrictions at the state level and a number of other municipalities are considering taking similar steps in the near future.

Senator Edward J. Markey (D-MA) has also taken recent note of Clearview's behavior. In January, the Senator sent a strongly worded letter to CEO Ton-That stating, "Clearview's product appears to pose particularly chilling privacy risks, and I am deeply concerned that it is capable of fundamentally dismantling Americans' expectation that they can move, assemble or simply appear in public without being identified." The senator also included a list of 14 questions for Ton-That to address by Wednesday, February 12th.

Whether Clearview bows to legal and legislative pressure here in the US remains to be seen, but don't get your hopes up. The company is already looking to expand its services to 22 countries around the world, including a number of nations which have been accused of committing human rights abuses. That includes the UAE, Qatar and Singapore, as well as Brazil and Colombia, both of which have endured years of political and social strife. There are even a few EU nations Clearview is looking to target, including Italy, Greece and the Netherlands.

Pretty soon, we won't be able to set foot in public without our presence being noticed, cataloged and tabulated. And when the government has the ability to know where anyone is at any given time, our civil liberties will irreparably erode. All so that a handful of developers and investors could make a quick buck selling our faces to the police in the name of public safety.

Continued here:

Why Clearview AI is a threat to us all - Engadget

Is AI cybersecurity's salvation or its greatest threat? – VentureBeat

This article is part of a VB special issue. Read the full series here: AI and Security.

If you're uncertain whether AI is the best or worst thing to ever happen to cybersecurity, you're in the same boat as experts watching the dawn of this new era with a mix of excitement and terror.

AI's potential to automate security on a broader scale offers a welcome advantage in the short term. Yet unleashing a technology designed to eventually take humans out of the equation as much as possible naturally gives the industry some pause. There is an undercurrent of fear about the consequences if things run amok or attackers learn to make better use of the technology.

"Everything you invent to defend yourself can also eventually be used against you," said Geert van der Linden, an executive vice president of cybersecurity for Capgemini. "This time does feel different, because more and more, we are losing control as human beings."

In VentureBeat's second quarterly special issue, we explore this algorithmic angst across multiple stories, looking at how important humans remain in the age of AI-powered security, how deepfakes and deep media are creating a new security battleground even as the cybersecurity skills gap remains a concern, how surveillance powered by AI cameras is on the rise, how AI-powered ransomware is rearing its head, and more.

Each evolution of computing in recent decades has brought new security threats and new tools to fight them. From networked PCs to cloud computing to mobile, the trend is always toward more data stored in ways that introduce unfamiliar vulnerabilities, larger attack vectors, and richer targets that attract increasingly well-funded bad actors.

The AI security era is coming into focus quickly, and the design of these security tools, the rules that govern them, and the way they're deployed carry increasingly high stakes. The race is on to determine whether AI will help keep people and businesses secure in an increasingly connected world or push us into the digital abyss.

In a hair-raising prediction last year, Juniper Research forecast that the annual cost of data breaches will increase from $3 trillion in 2019 to $5 trillion in 2024. This will be due to a mix of fines for regulation violations, lost business, and recovery costs. But it will also be driven by a new variable: AI.

"Cybercrime is increasingly sophisticated; the report anticipates that cybercriminals will use AI, which will learn the behavior of security systems in a similar way to how cybersecurity firms currently employ the technology to detect abnormal behavior," reads Juniper's report. The research also highlights that the evolution of deepfakes and other AI-based techniques is also likely to play a part in social media cybercrime in the future.

Given that every business is now a digital business to some extent, spending on infrastructure defense is exploding. Research firm Cybersecurity Ventures notes that the global cybersecurity market was worth $3.5 billion in 2004 but increased to $120 billion in 2017. It projects that spending will grow to an annual average of $200 billion over the next five years. Tech giant Microsoft alone spends $1 billion each year on cybersecurity.

With projections of a 1.8 million-person shortfall for the cybersecurity workforce by 2022, this spending is due in part to the growing costs of recruiting talent. AI boosters believe the technology will reduce costs by requiring fewer humans while still making systems safe.

"When we're running security operation centers, we're pushing as hard as we can to use AI and automation," said Dave Burg, EY Americas cybersecurity leader. "The goal is to take a practice that would normally maybe take an hour and cut it down to two minutes, just by having the machine do a lot of the work and decision-making."

In the short-term, companies are bubbling with optimism that AI can help them turn the tide against the mounting cybersecurity threat.

In a report on AI and cybersecurity last summer, Capgemini reported that 69% of enterprise executives surveyed felt AI would be essential for responding to cyberthreats. Telecom led all other industries, with 80% of executives counting on AI to shore up defenses. Utilities executives were at the low end, with only 59% sharing that opinion.

Overall bullishness has triggered a wave of investments in AI cybersecurity, to bulk up defenses, but also to pursue a potentially lucrative new market.

Early last year, Comcast made a surprise move when it announced the acquisition of BluVector, a spinoff of defense contractor Northrop Grumman that uses artificial intelligence and machine learning to detect and analyze increasingly sophisticated cyberattacks. The telecommunications giant said it wanted to use the technology internally, but also continue developing it as a service it could sell to others.

Subsequently, Comcast launched Xfinity xFi Advanced Security, which automatically provides security for all the devices in a customers home that are connected to its network. It created the service in partnership with Cujo AI, a startup based in El Segundo, California that developed a platform to spot unusual patterns on home networks and send Comcast customers instant alerts.

Cujo AI founder Einaras von Gravrock said the rapid adoption of connected devices in the home and the broader internet of things (IoT) has created too many vulnerabilities to be tracked manually or blocked effectively by conventional firewall software. His startup turned to AI and machine learning as the only option to fight such a battle at scale.

Von Gravrock argued that spending on such technology is less of a cost and more of a necessity. If a company like Comcast wants to convince customers to use a growing range of services, including those arriving with the advent of 5G networks, the provider must be able to convince people they are safe.

"When we see the immediate future, all operators will have to protect your personal network in some way, shape, or form," von Gravrock said.

Capgemini's aforementioned report found that overall, 51% of enterprises said they were heavily using some kind of AI for detection, 34% for prediction, and 18% to manage responses. Detection may sound like a modest start, but it's already paying big dividends, particularly in areas like fraud detection.

Paris-based Shift has developed algorithms that focus narrowly on weeding out fraud in insurance. Shifts service can spot patterns in data such as contracts, reports, photos, and even videos that are processed by insurance companies. With more than 70 clients, Shift has amassed a huge amount of data that has allowed it to rapidly fine-tune its AI. The intended result is more efficiency for insurance companies and a better experience for customers, whose claims are processed faster.

The startup has grown quickly after raising $10 million in 2016, $28 million in 2017, and $60 million last year. Cofounder and CEO Jeremy Jawish said the key was adopting a narrow focus in terms of what it wanted to do with AI.

"We are very focused on one problem," Jawish said. "We are just dealing with insurance. We don't do general AI. That allows us to build up the data we need to become more intelligent."

While this all sounds potentially utopian, a dystopian twist is gathering momentum. Security experts predict that 2020 could be the year hackers really begin to unleash attacks that leverage AI and machine learning.

"The bad [actors] are really, really smart," said Burg of EY Americas. "And there are a lot of powerful AI algorithms that happen to be open source. And they can be used for good, and they can also be used for bad. And this is one of the reasons why I think this space is going to get increasingly dangerous. Incredibly powerful tools are being used to basically do the inverse of what the defenders [are] trying to do on the offensive side."

In an experiment back in 2016, cybersecurity company ZeroFox created an AI algorithm called SNAPR that was capable of posting 6.75 spear phishing tweets per minute that reached 800 people. Of those, 275 recipients clicked on the malicious link in the tweet. These results far outstripped the performance of a human, who could generate only 1.075 tweets per minute, reaching only 125 people and convincing just 49 individuals to click.

Likewise, digital marketing firm Fractl demonstrated how AI could unleash a tidal wave of fake news and disinformation. Using publicly available AI tools, it created a website that includes 30 highly polished blog posts, as well as an AI-generated headshot for the non-existent author of the posts.

And then there is the rampant use of deepfakes, which employ AI to match images and sound to create videos that in some cases are almost impossible to identify as fake. Adam Kujawa, the director of Malwarebytes Labs, said he's been shocked at how quickly deepfakes have evolved. "I didn't expect it to be so easy," he said. "Some of it is very alarming."

In a 2019 report, Malwarebytes listed a number of ways it expects bad actors to start using AI this year. That includes incorporating AI into malware. In this scenario, the malware uses AI to adapt in real time if it senses any detection programs. Such AI malware will likely be able to target users more precisely, fool automated detection systems, and threaten even larger stashes of personal and financial information.

"I should be more excited about AI and security, but then I look at this space and look at how malware is being built," Kujawa said. "The cat is out of the bag. Pandora's box has been opened. I think this technology is going to become the norm for attacks. It's so easy to get your hands on and so easy to play with this."

Researchers in computer vision are already struggling to thwart attacks designed to disrupt the quality of their machine learning systems. It turns out that these learning systems remain remarkably easy to fool using adversarial attacks. External third parties can detect how a machine learning system works and then introduce code that confuses the system and causes it to misidentify images.
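
To give a sense of how little it takes, here is a self-contained toy in the spirit of the fast gradient sign method, run against a hand-rolled logistic "classifier" rather than a real vision model. The feature count, weights and step size are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A stand-in "image classifier": logistic regression over 64 pixel features.
w = rng.normal(size=64)
b = 0.0

def predict(x):
    return sigmoid(x @ w + b)   # probability of class 1

x = rng.normal(size=64)         # a fake "image"
p_clean = predict(x)
label = 1 if p_clean > 0.5 else 0

# For this model, the gradient of the cross-entropy loss w.r.t. the input is
# (p - label) * w. Stepping each feature a small, fixed amount in the sign of
# that gradient (the FGSM recipe) pushes the prediction away from `label`.
epsilon = 0.25
x_adv = x + epsilon * np.sign((p_clean - label) * w)

print(f"clean prediction:       {p_clean:.3f} (class {label})")
print(f"adversarial prediction: {predict(x_adv):.3f}")
```

Against a deep image model the same idea applies, except the per-pixel changes can be far smaller than the 0.25 used here and still flip the label.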

Even worse, leading researchers acknowledge we don't really have a solution for stopping mischief makers from wreaking havoc on these systems.

"Can we defend against these attacks?" asked Nicolas Papernot, an AI researcher at Google Brain, during a presentation in Paris last year. "Unfortunately, the answer is no."

In response to possible misuse of AI, the cybersecurity industry is doing what it's always done during such technology transitions: try to stay one step ahead of malicious players.

Back in 2018, BlackBerry acquired cybersecurity startup Cylance for $1.4 billion. Cylance had developed an endpoint protection platform that used AI to look for weaknesses in networks and shut them down if necessary. Last summer, BlackBerry created a new business unit led by its CTO that focuses on cybersecurity research and development (R&D). The resulting BlackBerry Labs has a dedicated team of 120 researchers. Cylance was a cornerstone of the lab, and the company said machine learning would be among the primary areas of focus.

Following that announcement, in August the company introduced BlackBerry Intelligent Security, a cloud-based service that uses AI to automatically adapt security protocols for employees' smartphones or laptops based on location and patterns of usage. The system can also be used for IoT devices or, eventually, autonomous vehicles. By instantly assessing a wide range of factors to adjust the level of security, the system is designed to keep a device just safe enough without always requiring the maximum security settings an employee might be tempted to circumvent.

"Otherwise, you're left with this situation where you have to impose the most onerous security measures, or you have to sacrifice security," said Frank Cotter, senior vice president of product management at BlackBerry. "That was the intent behind Cylance and BlackBerry Labs, to get ahead of the malicious actors."
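
The general pattern behind such risk-adaptive policies can be sketched in a few lines: score contextual signals, then map the score to a security tier instead of always enforcing the strictest settings. The signals, weights and tiers below are invented for illustration and are not BlackBerry's actual model.

```python
# Hedged sketch of a risk-adaptive security policy: combine contextual
# signals into a score, then pick a tier of requirements to enforce.
# All signals, weights and tiers here are hypothetical.

def risk_score(signals: dict) -> float:
    """Combine contextual signals into a 0-1 risk score."""
    weights = {
        "unfamiliar_location": 0.4,
        "unusual_hours": 0.2,
        "new_network": 0.25,
        "atypical_app_usage": 0.15,
    }
    return sum(weights[k] for k, flagged in signals.items() if flagged)

def security_policy(score: float) -> str:
    """Map risk to progressively stricter requirements."""
    if score < 0.2:
        return "password cached, full access"
    if score < 0.5:
        return "re-enter password, normal access"
    return "multi-factor prompt, sensitive apps blocked"

context = {
    "unfamiliar_location": True,
    "unusual_hours": False,
    "new_network": True,
    "atypical_app_usage": False,
}
score = risk_score(context)
print(score, "->", security_policy(score))   # two risky signals -> strictest tier
```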

San Diego-based MixMode is also looking down the road and trying to build AI-based security tools that learn from the limitations of existing services. According to MixMode CTO Igor Mezic, existing systems may have some AI or machine learning capability, but they still require a number of rules that limit the scope of what they can detect and how they can learn and require some human intervention.

"We've all seen phishing emails, and they're getting way more sophisticated," Mezic said. "So even as a human, when I look at these emails and try to figure out whether this is real or not, it's very difficult. So, it would be difficult for any rule-based system to discover, right? These AI methodologies on the attack side have already developed to the place where you need human intelligence to figure out whether it's real. And that's the scary part."

AI systems that still include some rules also tend to throw off a lot of false positives, leaving security teams overwhelmed and eliminating any initial advantages that came with automation, Mezic said. MixMode, which has raised about $13 million in venture capital, is developing what it describes as third-wave AI.

In this case, the goal is to make AI security more adaptive on its own rather than relying on rules that need to be constantly revised to tell it what to look for. MixModes platform monitors all nodes on a network to continually evaluate typical behavior. When it spots a slight deviation, it analyzes the potential security risk and rates it from high to low before deciding whether to send up an alert. The MixMode system is always updating its baseline of behavior so no humans have to fine-tune the rules.
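
A bare-bones version of that baseline-and-deviation loop might look like the following Python sketch, which tracks a single metric with an exponential moving average; the smoothing factor, thresholds and severity labels are assumptions for illustration, not MixMode's algorithm.

```python
class RollingBaseline:
    """Keep a running estimate of 'normal' for one metric and score each new
    observation by how far it deviates, updating the baseline as it goes."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha      # how quickly the baseline adapts
        self.mean = None
        self.var = 1.0

    def score(self, value):
        if self.mean is None:
            self.mean = value
        deviation = abs(value - self.mean) / (self.var ** 0.5 + 1e-9)
        # Update the baseline *after* scoring, so an anomaly doesn't instantly
        # become the new normal.
        self.mean += self.alpha * (value - self.mean)
        self.var += self.alpha * ((value - self.mean) ** 2 - self.var)
        if deviation > 6:
            return deviation, "high"
        if deviation > 3:
            return deviation, "medium"
        return deviation, "low"

baseline = RollingBaseline()
traffic = [100, 102, 98, 101, 99, 103, 100, 500]   # e.g. bytes/sec seen on one node
for t in traffic:
    d, level = baseline.score(t)
    if level != "low":
        print(f"alert: value={t}, deviation={d:.1f}x baseline spread, severity={level}")
```

Because the baseline keeps moving with the data, nobody has to revise a threshold by hand when "normal" drifts, which is the property the rule-free approach is after.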

"Your own AI system needs to be very cognizant that an external AI system might be trying to spoof it or even learn how it operates," Mezic said. "How can you write a rule for that? That's the key technical issue. The AI system must learn to recognize whether there are any changes on the system that feel like they're being made by another AI system. Our system is designed to account for that. I think we are a step ahead. So let's try to make sure that we keep being a step ahead."

Yet this type of unsupervised AI starts to cross a frontier that makes some observers nervous. It will eventually be used not just in business and consumer networks, but also in vehicles, factories, and cities. As it takes on predictive duties and makes decisions about how to respond, such AI will balance factors like loss of life against financial costs.

Humans will have to carefully weigh whether they are ready to cede such power to algorithms, even though they promise massive efficiencies and increased defensive power. On the other hand, if malicious actors are mastering these tools, will the rest of society even have a choice?

"I think we have to make sure that as we use the technology to do a variety of different things we also are mindful that we need to govern the use of the technology and realize that there will likely be unforeseen consequences," said Burg of EY Americas. "You really need to think through the impact and the consequences, and not just be a naive believer that the technology alone is the answer."

Read the original here:

Is AI cybersecurity's salvation or its greatest threat? - VentureBeat

AI helps radiologists improve accuracy in breast cancer detection with lesser recalls – Healthcare IT News

A new study, conducted by Korean academic hospitals and Lunit, a medical AI company specializing in developing AI solutions for radiology and oncology, demonstrated the benefits of AI-aided breast cancer detection from mammography images. The study was published online on 6 February 2020, in Lancet Digital Health and features large-scale data of over 170,000 mammogram examinations from five institutions across South Korea, USA, and the UK, consisting of Asian and Caucasian female breast images.

TOP FINDINGS

One of the major findings showed that AI, in comparison to the radiologists, displayed better sensitivity in detecting cancer with mass (90% vs 78%) and distortion or asymmetry (90% vs 50%). The AI was better in the detection of T1 cancers, which is categorized as early-stage invasive cancer. AI detected 91% of T1 cancers and 87% of node-negative cancers, whereas the radiologist reader group detected 74% for both.

Another finding was a significant improvement in the performance of radiologists before and after using AI. According to the study, the AI alone showed 88.8% sensitivity in breast cancer detection, whereas radiologists alone showed 75.3%. When radiologists were aided by AI, their sensitivity increased by 9.5 percentage points to 84.8%.

An important factor in diagnosing mammograms is breast density: dense breast tissue, found mostly in the Asian population, is harder to interpret because dense tissue is more likely to mask cancers in mammograms. According to the study's findings, the diagnostic performance of AI was less affected by breast density, whereas radiologists' performance was, showing higher sensitivity for fatty breasts at 79.2% compared to dense breasts at 73.8%. When aided by AI, the radiologists' sensitivity when interpreting dense breasts increased by 11%.

THE LARGER TREND

Findings from a study published in Nature indicated that Google's AI model spotted breast cancer in de-identified screening mammograms with greater accuracy and fewer false positives and false negatives than experts, Healthcare IT News reported.

Lunit recently raised a $26M Series C funding from Korean and Chinese investors, which the company said was its biggest funding round, according to a DealStreetAsia report in January.

ON THE RECORD

"It is an unprecedented quantity of data with accurate ground truth -- especially the 36,000 cancer cases, which is seven times larger than the usual number of datasets from resembling studies conducted previously," said Hyo-Eun Kim, the first author of the study and Chief Product Officer at Lunit.

Prof. Eun-Kyung Kim, the corresponding author of the study and a breast radiologist at Yonsei University Severance Hospital, said: "One of the biggest problems in detecting malignant lesions from mammography images is that to reduce false negatives (missed cases), radiologists tend to increase recalls, casting a wider safety net, which brings an increased number of unnecessary biopsies."

"It requires extensive experience to correctly interpret breast images, and our study showed that AI can help find more breast cancer with lesser recalls, also detecting cancers in its early stage of development."

Continue reading here:

AI helps radiologists improve accuracy in breast cancer detection with lesser recalls - Healthcare IT News

Tech Firm Acquires Boston-based Provider of AI-Based Drone Inspections – Transmission & Distribution World

On February 6, 2020, the American Association of Blacks in Energy (AABE) hosted a panel of utility executives from the Kansas City area at the Burns & McDonnell world headquarters. Panelists included Ray Kowalik, CEO of Burns & McDonnell; John Bridson, VP of generation at Evergy; and Bill Johnson, general manager of Kansas City Board of Public Utilities (BPU). Paula Glover, president and CEO of the AABE, moderated the panel, covering topics such as climate change, customer satisfaction, and diversity and inclusion.

The event was kicked off in typical Burns & McDonnell fashion with a safety moment. Izu Mbata, staff electrical engineer at Burns & McDonnell, who is also the communications chair of the Kansas-Missouri AABE Chapter, stated that the Chapter hopes to make an impact and inspire the communities where they live and work. The other panelists expressed similar sentiments. Bridson said if Evergy is doing its job correctly, it will result in stronger communities for people to live and work.

Panelists each made a presentation on the future of energy. Common industry themes included the decrease in electricity consumption, changes in generation resources, the proliferation of renewable energy, and the need for investments in transmission infrastructure to bring renewable energy to load. As an example, Bridson noted that wind generation in the Southwest Power Pool (SPP) peaked at 71% just last week.

Paula Glover shocked the crowd by naming Kansas City as the number five top city to be affected by climate change, according to the Weather Channel. Ray Kowalik of Burns & McDonnell said that the energy industry is doing its part with respect to carbon reduction, but there must be a balance between reliability and cost. When asked about what they are doing to mitigate climate change, Johnson indicated that the BPU has been aggressive in moving from coal generation to renewables and has decreased carbon dioxide (CO2) emissions by 56% since 2012. Similarly, Bridson stated that by the end of 2020, Evergy will have reduced CO2 emissions from 2005 levels by 40%. Evergy and the BPU both also discussed their respective community solar farm initiatives.

The panelists also agreed that it is always good business to have a diverse workforce. According to Kowalik, "Our industry is woefully underrepresented by women and minorities." The panel discussed diversity initiatives within their respective organizations, including training, recruiting, supplier diversity programs, partnerships with higher education institutions, scholarships, internships, and internal scorecards.

Laron Evans, diverse business manager of the T&D Group at Burns & McDonnell, who is also the president of the Kansas-Missouri AABE Chapter, provided the following concluding remarks, "Studies have shown that diverse and inclusive teams of people make better business decisions. The opportunity to participate in inclusive collaboration helps us to stay on the forefront of innovation while moving our communities forward."

Read more:

Tech Firm Acquires Boston-based Provider of AI-Based Drone Inspections - Transmission & Distribution World

Recent AI Developments Offer a Glimpse of the Future of Drug Discovery – Tech Times

The science and practice of medicine have been around for much of recorded human history. Even today, doctors still swear an oath that dates back to ancient Greece, containing many of the ethical obligations we still expect our physicians to adhere to. It is one of the most necessary and universal fields of human study.

Despite the importance of medicine, though, true breakthroughs don't come easily. In fact, most medical professionals will only see a few within their lifetime. Developments such as the first medical X-ray, penicillin, and stem cell therapy - true game changers that advance the cause of medical care - don't happen often.

That's especially true when it comes to the development of medications. It takes a great deal of research and testing to find compounds that have medicinal benefits. Armies of scientists armed with microplate readers to measure absorbance, centrifuges for sample separation, and hematology analyzers to test compound efficacy make up just the beginning of the long and labor-intensive process. It's why regulators tend to approve around 22 new drugs per year for public use, leaving many afflicted patients waiting for cures that may come too late.

Now, however, some recent advances in AI technology are promising to speed that process up. It could be the beginnings of a new medical technology breakthrough on the same order of magnitude as the ones mentioned earlier. Here's what's going on.

One of the reasons that it takes so long to develop new drug therapies, even for diseases that have been around for decades, is that much of the process relies on humans screening different molecule types to find ones likely to have an effect on the disease in question. Much of that work calls for lengthy chemical property analysis, followed by structured experimentation. On average, all of that work takes between three and six years to complete.

Recently, researchers have begun to adapt next-generation AI implementations for molecule screening that could cut that time down significantly. In one test, a startup called Insilico Medicine matched its AI platform against the already-completed work of human researchers seeking treatment options for fibrosis. It had taken the human team eight years to come up with viable candidate molecules. It took the AI just 21 days. Although further refinements are required to put the AI on par with the human researchers in terms of result quality (the AI candidates performed a bit worse in treating fibrosis), the results were overwhelmingly positive.

Another major time-consuming hurdle that drug developers face is in trying to detect adverse side effects or toxicity in their new compounds. It's difficult because such effects don't always surface in clinical trials. Some take years to show up, long after scores of patients have already suffered from them. To avoid those outcomes, pharmaceutical firms take lots of time to study similar compounds that already have reams of human interaction data, looking for patterns that could indicate a problem.

It's yet another part of the process that AI is proving adept at. AI systems can analyze vast amounts of data about known compounds to generate predictions about how a new molecule may behave. They can also model interactions between a new compound and different physical and chemical environments. That can provide clues to how a new drug might affect different parts of a human body. Best of all, AI can accomplish those tasks with more accuracy and in a fraction of the time it would take a human research team.
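To make that workflow concrete, here is a minimal, purely illustrative sketch (not the Insilico Medicine system or any specific vendor's pipeline) of the property-prediction step described above. It assumes compounds have already been encoded as fixed-length numeric fingerprints; the data, feature size, and model choice are all hypothetical stand-ins.

```python
# Illustrative sketch only: predicting a property of new compounds from data
# about known compounds. Assumes molecules are already encoded as fixed-length
# numeric fingerprints (randomly generated stand-in data here).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: 1,000 known compounds, 128-bit fingerprints,
# each with a measured activity/toxicity score.
fingerprints = rng.integers(0, 2, size=(1000, 128))
measured_activity = rng.normal(size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    fingerprints, measured_activity, test_size=0.2, random_state=0
)

# Fit a model on known compounds, then score unseen candidates.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
predicted = model.predict(X_test)

# In practice, the best-scoring candidates would be prioritized for lab
# testing, narrowing the search space before any wet-lab work begins.
top_candidates = np.argsort(predicted)[-10:]
print("Indices of top-scoring candidates:", top_candidates)
```

Real systems use far richer molecular representations and models, but the shape of the task is the same: learn from known compounds, then rank new ones before anyone touches a pipette.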

Even at this early stage of the development of drug discovery AI systems, there's every reason to believe that AI-developed drugs will be on the market in the very near future. In fact, there's already an AI-designed drug intended to treat obsessive-compulsive disorder (OCD) entering human trials in Japan. If successful, it will then proceed to worldwide testing and eventual regulatory approval processes in multiple countries.

It's worth noting that the drug in question took a mere 12 months for the AI to create, which would represent a revolution in the way we develop new disease treatments. With that as a baseline, it's easy to foresee drug development and testing cycles in the future reduced to weeks, not years. It's also easy to predict the advent of personalized drug development, with AI selecting and creating individualized treatments using patient physiological and genetic data. Such outcomes would render the medical field unrecognizable compared to today - and could create a disease-free future and a new human renaissance like nothing that's come before it.

See the rest here:

Recent AI Developments Offer a Glimpse of the Future of Drug Discovery - Tech Times

UK public sector failing to be open about its use of AI, review finds – TechCrunch

A report into the use of artificial intelligence by the U.K.'s public sector has warned that the government is failing to be open about automated decision-making technologies which have the potential to significantly impact citizens' lives.

Ministers have been especially bullish on injecting new technologies into the delivery of taxpayer-funded healthcare, with health minister Matt Hancock setting out a tech-fueled vision of preventative, predictive and personalised care in 2018, calling for a root-and-branch digital transformation of the National Health Service (NHS) to support piping patient data to a new generation of healthtech apps and services.

He has also personally championed a chatbot startup, Babylon Health, that's using AI for healthcare triage and which is now selling a service into the NHS.

Policing is another area where AI is being accelerated into U.K. public service delivery, with a number of police forces trialing facial recognition technology and London's Met Police switching over to a live deployment of the AI technology just last month.

However, the rush by cash-strapped public services to tap AI efficiencies risks glossing over a range of ethical concerns about the design and implementation of such automated systems, from fears about embedding bias and discrimination into service delivery and scaling harmful outcomes, to questions of consent around access to the data sets being used to build AI models and human agency over automated outcomes, to name a few of the associated concerns, all of which require transparency into AI systems if there's to be accountability over automated outcomes.

The role of commercial companies in providing AI services to the public sector also raises additional ethical and legal questions.

Only last week, a court in the Netherlands highlighted the risks for governments of rushing to bake AI into legislation, after it ruled that an algorithmic risk-scoring system implemented by the Dutch government to assess the likelihood that social security claimants will commit benefits or tax fraud breached their human rights.

The court objected to a lack of transparency about how the system functions, as well as an associated lack of controllability, and ordered an immediate halt to its use.

The U.K. parliamentary committee that reviews standards in public life has today sounded a similar warning, publishing a series of recommendations for public-sector use of AI and warning that the technology challenges three key principles of service delivery: openness, accountability and objectivity.

"Under the principle of openness, a current lack of information about government use of AI risks undermining transparency," it writes in an executive summary.

"Under the principle of accountability, there are three risks: AI may obscure the chain of organisational accountability; undermine the attribution of responsibility for key decisions made by public officials; and inhibit public officials from providing meaningful explanations for decisions reached by AI. Under the principle of objectivity, the prevalence of data bias risks embedding and amplifying discrimination in everyday public sector practice."

"This review found that the government is failing on openness," it goes on, asserting that: "Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government."

In 2018, the UN's special rapporteur on extreme poverty and human rights raised concerns about the U.K.'s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale, warning then that the impact of a digital welfare state on vulnerable people would be immense, and calling for stronger laws and enforcement of a rights-based legal framework to ensure the use of technologies like AI for public service provision does not end up harming people.

Per the committee's assessment, it is too early to judge if public sector bodies are successfully upholding accountability.

Parliamentarians also suggest that fears over "black box" AI may be overstated, and instead dub explainable AI a realistic goal for the public sector.

On objectivity, they write that data bias is an issue of serious concern and that further work is needed on measuring and mitigating the impact of bias.

The use of AI in the U.K. public sector remains limited at this stage, according to the committees review, with healthcare and policing currently having the most developed AI programmes where the tech is being used to identify eye disease and predict reoffending rates, for example.

"Most examples the Committee saw of AI in the public sector were still under development or at a proof-of-concept stage," the committee writes, further noting that the Judiciary, the Department for Transport and the Home Office are examining how AI can increase efficiency in service delivery.

It also heard evidence that local government is working on incorporating AI systems in areas such as education, welfare and social care, noting the example of Hampshire County Council trialing the use of Amazon Echo smart speakers in the homes of adults receiving social care as a tool to bridge the gap between visits from professional carers, and points to a Guardian article which reported that one-third of U.K. councils use algorithmic systems to make welfare decisions.

But the committee suggests there are still significant obstacles to what they describe as widespread and successful adoption of AI systems by the U.K. public sector.

"Public policy experts frequently told this review that access to the right quantity of clean, good-quality data is limited, and that trial systems are not yet ready to be put into operation," it writes. "It is our impression that many public bodies are still focusing on early-stage digitalisation of services, rather than more ambitious AI projects."

The report also suggests that the lack of a clear standards framework means many organisations may not feel confident in deploying AI yet.

"While standards and regulation are often seen as barriers to innovation, the Committee believes that implementing clear ethical standards around AI may accelerate rather than delay adoption, by building trust in new technologies among public officials and service users," it suggests.

Among 15 recommendations set out in the report is a call for a clear legal basis to be articulated for the use of AI by the public sector. "All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before they are deployed in public service delivery," the committee writes.

Another recommendation is for clarity over which ethical principles and guidance apply to public sector use of AI, with the committee noting there are three sets of principles that could apply to the public sector, which is generating confusion.

"The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use," it recommends.

It also wants the Equality and Human Rights Commission to develop guidance on data bias and anti-discrimination to ensure public sector bodies' use of AI complies with the U.K. Equality Act 2010.

The committee is not recommending a new regulator should be created to oversee AI but does call on existing oversight bodies to act swiftly to keep up with the pace of change being driven by automation.

It also advocates for a regulatory assurance body to identify gaps in the regulatory landscape and provide advice to individual regulators and government on the issues associated with AI, supporting the government's intention for the Centre for Data Ethics and Innovation (CDEI), which was announced in 2017, to perform this role. (A recent report by the CDEI recommended tighter controls on how platform giants can use ad targeting and content personalisation.)

Another recommendation is around procurement, with the committee urging the government to use its purchasing power to set requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.

"This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements," it suggests.

Responding to the report in a statement, shadow digital minister Chi Onwurah MP accused the government of "driving blind, with no control over who is in the AI driving seat."

"This serious report sadly confirms what we know to be the case, that the Conservative Government is failing on openness and transparency when it comes to the use of AI in the public sector," she said. "The Government is driving blind, with no control over who is in the AI driving seat. The Government urgently needs to get a grip before the potential for unintended consequences gets out of control."

"Last year, I argued in parliament that Government should not accept further AI algorithms in decision-making processes without introducing further regulation. I will continue to push the Government to go further in sharing information on how AI is currently being used at all levels of Government. As this report shows, there is an urgent need for practical guidance and enforceable regulation that works. It's time for action."

Continued here:

UK public sector failing to be open about its use of AI, review finds - TechCrunch

The AI Revolution Is Here – A Podcast And Interview With Nate Yohannes – Forbes

Nate's perspective on AI being built for everybody on the planet is rooted in one of the most unusual foundations possible. He is the son of a revolutionary who stepped on a landmine in 1978 while fighting for democracy in Eritrea, on the Horn of Africa and one of the worst violators of human rights in the world, and he went on to become a lawyer who was then appointed by President Obama to serve on behalf of the White House. His father losing much of his vision in the landmine attack was the catalyst for Nate's passion for AI computer vision: computers reasoning over people, places and things.

Nate Yohannes AI, Microsoft

His role at Microsoft AI merges the worlds of business and product strategy, and he works closely with Microsoft's AI Ethics & Society team. Nate believes that Microsoft's leadership decision to embed Ethics & Society into engineering teams is one of the most durable advantages the company has: designing products with the filter of ethics up front is unique and valuable for everyone. AI is the catalyst for the fourth industrial revolution, the most significant technological advancement thus far, and it has the potential to solve incredible challenges for all of humanity (climate, education, design, customer experiences, governance, food, etc.). The biggest concern could be the potential for unexpected and unintended consequences when building and deploying AI products, very similar to the unintended consequences we see today with social media companies and the misuse of privacy and data. AI will change the world; how it does this is our choice. It's critical to have appropriate representation at decision-making tables when building AI products to mitigate potentially thousands or millions of unexpected consequences, from gender and race to financial, health and even location-based data. Solving this challenge of unexpected consequences and incorporating inclusivity shouldn't hinder innovation or the ambition to maximize revenue; instead, it should enhance them, creating products with the most extensive consumer base possible: everyone. It's an inspiring conversation about how to make the possible a reality with a different mindset.

This should be a guiding light for how all companies develop AI for the highest good (not just the greater good). If every company, or even the government, will be a digital platform by 2030 (OK, 75% of us will be), then AI will sit at the center of these organizations.

Nate Yohannes Speaking to AI.

Doing it the right way is part of the puzzle. Thinking more about how it can be applied to the whole world is the tantalizing promise. Nate Yohannes is a Principal Program Manager for Mixed Reality & AI Engineering at Microsoft. He was recently a Director of Corporate Business Development & Strategy for AI, IoT & Intelligent Cloud. He's on the Executive Advisory Board of the Nasdaq Entrepreneurial Center and an Expert for MIT's Inclusive Innovation Challenge. From 2014 to 2017, he served in President Obama's administration as the Senior Advisor to the Head of Investments and Innovation at the US Small Business Administration and on the White House Broadband Opportunity Council.

Nate was selected for the inaugural White House Economic Leadership class. He started his career as the Assistant General Counsel at the Money Management Institute. He is a graduate of the State University of New York College at Geneseo and of the University of Buffalo School of Law, where he was a Barbara and Thomas Wolfe Human Rights Fellow. He's admitted to practice law in New York State.

Read more from the original source:

The AI Revolution Is Here - A Podcast And Interview With Nate Yohannes - Forbes

Why creating an AI department is not a good idea – Policy Options

When I give public talks and training on AI in government, I am often asked why Canada has no Department of Artificial Intelligence to govern AI. Some jurisdictions, like the United Arab Emirates, have a minister for AI, and a few others, like the UK, have a small government office focused on AI; but to my knowledge there are no jurisdictions with a department of AI.

The call for a department of AI is a well-meaning response to the growing importance of AI and highlights the gap that exists between technology adoption in government and the much higher proficiency of the private sector with this technology. A dramatic state of affairs should be addressed through decisive action and, so the logic goes, that decisive action should be nothing short of updating the machinery of government to create a new department or agency directly focused on the governance of AI. While the sentiment is understandable, the idea of creating a department of AI is riddled with misunderstandings and needs to be handled with care.

Understanding AI

First, few applications called AI have much in common with one another from a technical standpoint, at least not in the way that the term AI is generally used today. For instance, the type of AI used in security cameras to identify suspects is a very different technology than the AI that is used to support advanced translation services. There is a large and common overestimation of AI's shared characteristics across applications. In reality, when we talk about AI, we are talking about many different things.

What's more, a great many of the new applications we call AI are fairly linear, albeit impressive, advances from a pre-existing technology or process, most of which already had a governance framework. In the world of policy, the permitted applications of AI with regard to traffic cameras have much more to do with the existing rules governing traffic cameras than with, say, an AI application that can play chess.

In that sense, AI is less analogous to a field like astronomy and more analogous to something like electricity: both AI and electricity are found in a wide range of fields and would be poorly served by an oversight body created on the basis of all potential applications. Imagine creating a department of electricity to be responsible for regulating every instance and application in which electricity is found, or a department of computers to oversee all conceivable applications of computing power. Not only would such an organization be unwieldy, but it would be unlikely to have clear goals and purposes.

Making new departments

From an administrative and organizational effectiveness standpoint, it's not clear that AI capacities should be housed together in a single department. The computer science and data science at the heart of AI are functionally very different from the operation of things like postal services or seaport governance, where activities must be clustered together to achieve economies of scale in the use of a particular piece of machinery or infrastructure.

In contrast to the technologies of the industrial age, which are highly location dependent and benefit immensely from clustering, software and data science applications do not depend on location, or at least not nearly to the same degree. Even if significant benefits of co-location did exist for AI uses in the federal government, being housed in the same departmental structure is no guarantee that staff would even be housed in the same building; most departments are split across multiple sites, and even across multiple provinces. It's hard to see how a new department of AI would ever resemble an army of AI professionals saluting from the same cubicle farm.

Concentrating all the government of Canadas AI capacities in a single institution would also come with negative side effects. With the transfer of those capacities to a centralized institutional vessel, other departments would be left with very little or no AI capacity of their own. AI would be monopolized by the single department of AI, which would then support the AI projects of other departments as needed.

There is a precedent for such an arrangement: the strategy behind the creation in 2011 of Shared Services Canada, which sought to put all government IT services under one roof. For a variety of reasons this change in machinery has been held responsible for harming the quality of government technological services, and the Trudeau government assessed SSC as being in need of renewal only nine years after its founding. It's hard to imagine that a duplication of this approach for AI would fare much better.

Alternative institutional structures

Instead of a megadepartment of AI that houses all potential applications, it is possible to imagine taking a broader approach, permitting applications and governance to be decentralized but with a complementary concentration of expertise and resources that can be lent out to other departments as necessary. But this modified vision of new AI machinery is unlikely to be uniquely different from existing institutions like Statistics Canada, which already has renowned strengths in the data, statistics and modelling techniques that are central to AI. Statistics Canada needs to be brought up to speed on AI, but that can be done without creating a new department.

Perhaps a department of AI could better be viewed as a much more focused federal R&D instrument, responsible for the sorts of specialized, leading-edge AI research that might benefit from the concentration of facilities for supercomputers and the like. An institution with this sole mandate does not exist at the federal level and could even make some sense at a technical level; however, the idea is a complete non-starter due to jurisdictional issues. While the National Research Council does operate in this space for the federal government, most public R&D functions and funding are passed off to Canada's 260 or so post-secondary institutions, which are nominally independent and overseen by provincial governments.

Bringing all of these AI activities under one (figurative) roof would require a huge bureaucratic street fight to strip powers and functions away from their hundreds of existing owners. With such a high cost in political and administrative capital in order to unlock a dubious and unknown benefit, it's implausible that such a change would ever be attempted. Certainly, there may be no solitary department addressing AI, but it's not obvious that having one would be an improvement.

If it ain't broke

Under the existing arrangement, overarching AI policy for the government of Canada is the purview of central agencies and Innovation, Science and Economic Development, with political leadership provided by the new cross-departmental minister of state for digital government. Meanwhile, responsibility for delivery and applications is housed in the line departments, which use AI applications and are closest to their effects. This set-up has been successful so far in practice; Canada ranks among the leading countries in the world for AI preparedness. Even the minister of digital government is not assigned sole responsibility for AI.

The idea of a departmental stovepipe for the new and exciting field of AI is tantalizing, but the state can meaningfully commit to doing more about AI in other ways. Creating a new department would be an incredibly expensive and disruptive undertaking, adding new layers of administration and technical complications. Machinery of government changes are in fact an immense distraction from the everyday business of government; very little AI governance will get done while staff are being moved and new mandates debated.

Due to its onerous nature, changing the machinery of government is seldom regarded as a way of acting either quickly or decisively. Petronius Arbiter, a Roman administrator writing on governance, is generally credited with the observation: "We tend to meet any new situation by reorganizing, and what a wonderful method it can be for creating the illusion of progress while actually producing confusion, inefficiency, and demoralization." While AI is growing in importance and needs better governance to match, a machinery change should take place only if there are clear problems with the existing institutional structure and clarity that the change will represent an obvious improvement. It's not clear that either of these would be true in the case of AI.

Photo: Shutterstock, by Gorodenkoff

Continued here:

Why creating an AI department is not a good idea - Policy Options

The White House wants to spend hundreds of millions more on AI research – MIT Technology Review

The news: The White House is pumping hundreds of millions more dollars into artificial-intelligence research. In budget plans announced on Monday, the administration bumped funding for AI research at the Defense Advanced Research Projects Agency (DARPA) from $50 million to $249 million and at the National Science Foundation from $500 million to $850 million. Other departments, including the Department of Energy and the Department of Agriculture, are also getting a boost to their funding for AI.

Why it matters: Many believe that AI is crucial for national security. Worried that the US risks falling behind China in the race to build next-gen technologies, security experts have pushed the Trump administration to increase its funding.

Public spending: For now the money will mostly flow to DARPA and the NSF. But $50 million of the NSF's budget has been allocated to education and job training, especially in community colleges, historically black colleges and universities, and minority-serving institutions. The White House says it also plans to double funding of AI research for purposes other than defense by 2022.

See the article here:

The White House wants to spend hundreds of millions more on AI research - MIT Technology Review

Protecting privacy in an AI-driven world – Brookings Institution

Our world is undergoing an information Big Bang, in which the universe of data doubles every two years and quintillions of bytes of data are generated every day.1 For decades, Moore's Law on the doubling of computing power every 18-24 months has driven the growth of information technology. Now, as billions of smartphones and other devices collect and transmit data over high-speed global networks, store data in ever-larger data centers, and analyze it using increasingly powerful and sophisticated software, Metcalfe's Law comes into play. It treats the value of networks as a function of the square of the number of nodes, meaning that network effects exponentially compound this historical growth in information. As 5G networks and eventually quantum computing deploy, this data explosion will grow even faster and bigger.
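As a rough, illustrative comparison of the two growth patterns named above (hypothetical numbers in arbitrary units, not figures from the brief), the following sketch contrasts Moore's Law-style doubling of computing capacity with Metcalfe's Law-style network value, which grows with the square of the number of nodes:

```python
# Illustrative only: Moore's Law-style doubling vs. Metcalfe's Law-style
# network value (value proportional to n^2). All numbers are arbitrary units.

def moore_capacity(years, doubling_period_years=2.0):
    """Relative computing capacity after `years`, doubling roughly every 2 years."""
    return 2 ** (years / doubling_period_years)

def metcalfe_value(nodes):
    """Relative network value, proportional to the square of the number of nodes."""
    return nodes ** 2

for years, nodes in [(2, 10), (4, 100), (6, 1000)]:
    print(f"{years} yrs: capacity x{moore_capacity(years):.0f}; "
          f"{nodes} nodes -> relative network value {metcalfe_value(nodes):,}")
```

The point of the comparison is simply that capacity compounds with time, while network value compounds with connections, and the two effects multiply.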

The impact of big data is commonly described in terms of three Vs: volume, variety, and velocity.2 More data makes analysis more powerful and more granular. Variety adds to this power and enables new and unanticipated inferences and predictions. And velocity facilitates analysis as well as sharing in real time. Streams of data from mobile phones and other online devices expand the volume, variety, and velocity of information about every facet of our lives and put privacy into the spotlight as a global public policy issue.

Artificial intelligence likely will accelerate this trend. Much of the most privacy-sensitive data analysis today, such as search algorithms, recommendation engines, and adtech networks, is driven by machine learning and decisions by algorithms. As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed.

Facial recognition systems offer a preview of the privacy issues that emerge. With the benefit of rich databases of digital photographs available via social media, websites, driver's license registries, surveillance cameras, and many other sources, machine recognition of faces has progressed rapidly from fuzzy images of cats3 to rapid (though still imperfect) recognition of individual humans. Facial recognition systems are being deployed in cities and airports around America. However, China's use of facial recognition as a tool of authoritarian control in Xinjiang4 and elsewhere has awakened opposition to this expansion and calls for a ban on the use of facial recognition. Owing to concerns over facial recognition, the cities of Oakland, Berkeley, and San Francisco in California, as well as Brookline, Cambridge, Northampton, and Somerville in Massachusetts, have adopted bans on the technology.5 California, New Hampshire, and Oregon all have enacted legislation banning use of facial recognition with police body cameras.6

This policy brief explores the intersection between AI and the current privacy debate. As Congress considers comprehensive privacy legislation to fill growing gaps in the current checkerboard of federal and state privacy laws, it will need to consider if or how to address the use of personal information in artificial intelligence systems. In this brief, I discuss some potential concerns regarding artificial intelligence and privacy, including discrimination, ethical use, and human control, as well as the policy options under discussion.

The challenge for Congress is to pass privacy legislation that protects individuals against any adverse effects from the use of personal information in AI, but without unduly restricting AI development or ensnaring privacy legislation in complex social and political thickets. The discussion of AI in the context of the privacy debate often brings up the limitations and failures of AI systems, such as predictive policing that could disproportionately affect minorities7 or Amazon's failed experiment with a hiring algorithm that replicated the company's existing disproportionately male workforce.8 These both raise significant issues, but privacy legislation is complicated enough even without packing in all the social and political issues that can arise from uses of information. To evaluate the effect of AI on privacy, it is necessary to distinguish between data issues that are endemic to all AI, like the incidence of false positives and negatives or overfitting to patterns, and those that are specific to use of personal information.

The privacy legislative proposals that involve these issues do not address artificial intelligence by name. Rather, they refer to automated decisions (borrowed from EU data protection law) or algorithmic decisions (used in this discussion). This language shifts people's focus from the use of AI as such to the use of personal data in AI and to the impact this use may have on individuals. This debate centers in particular on algorithmic bias and the potential for algorithms to produce unlawful or undesired discrimination in the decisions to which the algorithms relate. These are major concerns for civil rights and consumer organizations that represent populations that suffer undue discrimination.

Addressing algorithmic discrimination presents basic questions about the scope of privacy legislation. First, to what extent can or should legislation address issues of algorithmic bias? Discrimination is not self-evidently a privacy issue, since it presents broad social issues that persist even without the collection and use of personal information and fall under the domain of various civil rights laws. Moreover, making these laws available for debate could effectively open a Pandora's box because of the charged political issues they touch on and the multiple congressional committees with jurisdiction over various such issues. Even so, discrimination is based on personal attributes such as skin color, sexual identity, and national origin. Use of personal information about these attributes, either explicitly or, more likely and less obviously, via proxies, for automated decision-making that is against the interests of the individual involved thus implicates privacy interests in controlling how information is used.

Second, protecting such privacy interests in the context of AI will require a change in the paradigm of privacy regulation. Most existing privacy laws, as well as current Federal Trade Commission enforcement against unfair and deceptive practices, are rooted in a model of consumer choice based on notice-and-choice (also referred to as notice-and-consent). Consumers encounter this approach in the barrage of notifications and banners linked to lengthy and uninformative privacy policies and terms and conditions that we ostensibly consent to but seldom read. This charade of consent has made it obvious that notice-and-choice has become meaningless. For many AI applications, smart traffic signals and other sensors needed to support self-driving cars being one prominent example, it will become utterly impossible.

Although almost all bills on Capitol Hill still rely on the notice-and-choice model to some degree, key congressional leaders as well as privacy stakeholders have expressed a desire to change this model by shifting the burden of protecting individual privacy from consumers over to the businesses that collect data.9 In place of consumer choice, their model focuses on business conduct by regulating companies' processing of data: what they collect and how they can use it and share it. Addressing data processing that results in any algorithmic discrimination can fit within this model.

A model focused on data collection and processing may affect AI and algorithmic discrimination in several ways:

In addition to these provisions of general applicability that may affect algorithmic decisions indirectly, a number of proposals specifically address the subject.10

The responses to AI that are currently under discussion in privacy legislation take two main forms. The first targets discrimination directly. A group of 26 civil rights and consumer organizations wrote a joint letter advocating to prohibit or monitor use of personal information with discriminatory impacts on people of color, women, religious minorities, members of the LGBTQ+ community, persons with disabilities, persons living on low income, immigrants, and other vulnerable populations.11 The Lawyers Committee for Civil Rights Under Law and Free Press Action have incorporated this principle into model legislation aimed at data discrimination affecting economic opportunity, public accommodations, or voter suppression.12 This model is substantially reflected in the Consumer Online Privacy Rights Act, which was introduced in the waning days of the 2019 congressional session by Senate Commerce Committee ranking member Maria Cantwell (D-Wash.). It also includes a similar provision restricting the processing of personal information that discriminates against or classifies individuals on the basis of protected attributes such as race, gender, or sexual orientation.13 The Republican draft counterproposal addresses the potential for discriminatory use of personal information by calling on the Federal Trade Commission to cooperate with agencies that enforce discrimination laws and to conduct a study.14

This approach to algorithmic discrimination implicates debates over private rights of action in privacy legislation. The possibility of such individual litigation is a key point of divergence between Democrats aligned with consumer and privacy advocates on one hand, and Republicans aligned with business interests on the other. The former argue that private lawsuits are a needed force multiplier for federal and state enforcement, while the latter express concern that class action lawsuits, in particular, burden business with litigation over trivial issues. In the case of many of the kinds of discrimination enumerated in algorithmic discrimination proposals, existing federal, state, and local civil rights laws enable individuals to bring claims for discrimination. Any federal preemption or limitation on private rights of action in federal privacy legislation should not impair these laws.

The second approach addresses risk more obliquely, with accountability measures designed to identify discrimination in the processing of personal data. Numerous organizations and companies as well as several legislators propose such accountability. Their proposals take various forms:

A sense of fairness suggests such a safety valve should be available for algorithmic decisions that have a material impact on individuals' lives. Explainability requires (1) identifying algorithmic decisions, (2) deconstructing specific decisions, and (3) establishing a channel by which an individual can seek an explanation. Reverse-engineering algorithms based on machine learning can be difficult, and even impossible, a difficulty that increases as machine learning becomes more sophisticated. Explainability therefore entails a significant regulatory burden and constraint on use of algorithmic decision-making and, in this light, should be concentrated in its application, as the EU has done (at least in principle) with its "legal effects or similarly significant effects" threshold. As understanding increases about the comparative strengths of human and machine capabilities, having a human in the loop for decisions that affect people's lives offers a way to combine the power of machines with human judgment and empathy.

Because of the difficulties of foreseeing machine learning outcomes as well as reverse-engineering algorithmic decisions, no single measure can be completely effective in avoiding perverse effects. Thus, where algorithmic decisions are consequential, it makes sense to combine measures to work together. Advance measures such as transparency and risk assessment, combined with the retrospective checks of audits and human review of decisions, could help identify and address unfair results. A combination of these measures can complement each other and add up to more than the sum of the parts. Risk assessments, transparency, explainability, and audits also would strengthen existing remedies for actionable discrimination by providing documentary evidence that could be used in litigation. Not all algorithmic decision-making is consequential, however, so these requirements should vary according to the objective risk.

The window for this Congress to pass comprehensive privacy legislation is narrowing. While the Commerce Committees in each house of Congress have been working on a bipartisan basis throughout 2019 and have put out discussion drafts, they have yet to reach agreement on a bill. Meanwhile, the California Consumer Privacy Act went into effect on Jan. 1, 2020,21 impeachment and war powers have crowded out other issues, and the presidential election is going into full swing.

In whatever window remains to pass privacy legislation before the 2020 election, the treatment of algorithmic decision-making is a substantively and politically challenging issue that will need a workable resolution. For a number of civil rights, consumer, and other civil society groups, establishing protections against discriminatory algorithmic decision-making is an essential part of legislation. In turn, it will be important to Democrats in Congress. At a minimum, some affirmation that algorithmic discrimination based on personal information is subject to existing civil rights and nondiscrimination laws, as well as some additional accountability measures, will be essential to the passage of privacy legislation.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon and Intel provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Read more:

Protecting privacy in an AI-driven world - Brookings Institution

ARM's new edge AI chips promise IoT devices that won't need the cloud – The Verge

Edge AI is one of the biggest trends in chip technology. These are chips that run AI processing on the edge or, in other words, on a device without a cloud connection. Apple recently bought a company that specializes in it, Google's Coral initiative is meant to make it easier, and chipmaker ARM has already been working on it for years. Now, ARM is expanding its efforts in the field with two new chip designs: the Arm Cortex-M55 and the Ethos-U55, a neural processing unit meant to pair with the Cortex-M55 for more demanding use cases.

The benefits of edge AI are clear: running AI processing on a device itself, instead of in a remote server, offers big benefits to privacy and speed when it comes to handling these requests. Like ARM's other chips, the new designs won't be manufactured by ARM; rather, they serve as blueprints for a wide variety of partners to use as a foundation for their own hardware.

But what makes ARM's new chip designs particularly interesting is that they're not really meant for phones and tablets. Instead, ARM intends for the chips to be used to develop new Internet of Things devices, bringing AI processing to more devices that otherwise wouldn't have those capabilities. One use case ARM imagines is a 360-degree camera in a walking stick that can identify obstacles, or new train sensors that can locally identify problems and avoid delays.

As for the specifics, the Arm Cortex-M55 is the latest model in ARM's Cortex-M line of processors, which the company says offers up to a 15x improvement in machine learning performance and a 5x improvement in digital signal processing performance compared to previous Cortex-M generations.

For truly demanding edge AI tasks, the Cortex-M55 (or older Cortex-M processors) can be combined with the Ethos-U55 NPU, which takes things a step further. It can offer another 32x improvement in machine learning processing compared to the base Cortex-M55, for a total of 480x better processing than previous generations of Cortex-M chips.
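Taking the article's multipliers at face value, the headline 480x figure is simply the product of the two reported gains. A quick, illustrative check (reported figures, not independent measurements):

```python
# Sanity check of the compounded figures quoted above (as reported by ARM).
cortex_m55_ml_gain = 15      # ML gain vs. previous Cortex-M generations
ethos_u55_extra_gain = 32    # additional gain when paired with the Ethos-U55 NPU

combined = cortex_m55_ml_gain * ethos_u55_extra_gain
print(combined)  # 480, matching the "480x" total claimed for the combination
```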

While those are impressive numbers, ARM says that the improvement in data throughput here will make a big difference in what edge AI platforms can do. Current Cortex-M platforms can handle basic tasks like keyword or vibration detection. The M55's improvements let it work with more advanced things like object recognition. And the full power of a Cortex-M chip combined with the Ethos-U55 promises even more functionality, with the potential for local gesture and speech recognition.

All of these advances will take some time to roll out. While ARM is announcing the designs today and releasing documentation, it doesn't expect actual silicon to arrive until early 2021 at the earliest.

See more here:

ARM's new edge AI chips promise IoT devices that won't need the cloud - The Verge

Hyperlink InfoSystem Positioned as Leader In App Development, AI Development and Salesforce Development – Yahoo Finance

NEW YORK, Feb. 12, 2020 /PRNewswire/ -- The world is moving fast, and people have little time to handle routine work manually. Running a business in this environment is no small task. Technology helps society use time effectively and turn manual work into digital processes. Technologies such as mobile applications, artificial intelligence and IoT have entered the game; they are portable, require no outside person to oversee the work, help processes stay on schedule, and make work easy to track.

With this in mind, app development companies are focusing on working with the latest technologies and delivering client work with high efficiency. Adopting such technologies makes the time spent on work far more worthwhile; the important thing is to understand how people prefer to accomplish their work.

Hyperlink InfoSystem is well known as a top app developer, and it also offers services ranging from artificial intelligence, Salesforce development, blockchain and the Internet of Things to AR/VR. Since 2011, the company has developed and delivered 3,200+ apps and 1,500+ websites for clients around the world. It has a team of 250+ developers who deliver solutions built on the latest technologies to grow clients' businesses.

On the business side, Hyperlink InfoSystem helps organizations from startups to enterprise-level businesses in various industries boost their work with the latest technologies. Its extensive experience with AI, blockchain, IoT, Salesforce and other technologies makes it an industry leader and a trusted IT partner. The company is also recognized as a Top App Developer in 2020, as well as one of the Top Salesforce Consultants in 2020, by Clutch.co, a B2B reviews and ratings website. It has also gained a position on a list of top 10 AI service providers in 2020.

Harnil Oza, the CEO of Hyperlink InfoSystem, says, "We have almost 9 years of experience in the app development industry with the best collection of reviews, and we are one of the top leaders in the app development industry around the globe. Being recently positioned as one of the global leaders for AI and Salesforce services is a proud moment for me and the team of Hyperlink InfoSystem. We wish to work and develop more solutions to satisfy clients' custom requirements."

About Hyperlink InfoSystem:

Hyperlink InfoSystem is an established and popular top web and mobile app development company based in New York, USA, with a development center in India. The company's talented team of 250+ developers offers world-class services in mobile app and web development, blockchain development, AR and VR app development, game app development, artificial intelligence, data science, Salesforce and much more. Since 2011, the company has successfully built 3,200+ mobile apps for more than 2,300 clients around the world.

Awarded as Top Mobile App Development Companies in 2020: https://appdevelopmentcompanies.co

Awarded as Top Artificial Intelligence (AI) Development Companies in 2020: https://topsoftwarecompanies.co/local-firms/artificial-intelligence-development

See the original post:

Hyperlink InfoSystem Positioned as Leader In App Development, AI Development and Salesforce Development - Yahoo Finance

Global Artificial Intelligence Market Is Projected to Reach $390.9 Billion by 2025: Report – Crowdfund Insider

The global artificial intelligence (AI) market is projected to hit $390.9 billion by 2025. The market is expected to achieve a compound annual growth rate (CAGR) of 46.2% from 2019 to 2025.
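As a quick, illustrative check of what those two figures imply together, the sketch below assumes the 46.2% CAGR compounds annually over the six years from 2019 to 2025; the release does not state the 2019 base, so it is back-calculated here from the 2025 projection.

```python
# Back-of-the-envelope check of the projection quoted above (illustrative only).
target_2025_billion = 390.9
cagr = 0.462
years = 2025 - 2019  # six compounding periods

# Implied 2019 market size, working backward from the 2025 figure.
implied_2019_base = target_2025_billion / ((1 + cagr) ** years)
print(f"Implied 2019 market size: ${implied_2019_base:.1f}B")  # roughly $40B

# Year-by-year trajectory implied by the same assumptions.
for year in range(2019, 2026):
    size = implied_2019_base * (1 + cagr) ** (year - 2019)
    print(year, f"${size:.1f}B")
```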

AI is a major technological innovation along with Big Data advancements, machine learning (ML), deep learning, and blockchain or distributed ledger technology (DLT).

These technologies are being integrated across a wide range of high-performance applications. Major developments in digital image and voice recognition software are driving the growth of the regional market, according to a release published by Research and Markets.

As noted in the release:

The two major factors fueling market growth are emerging AI technologies and growth in big data espousal. Rising prominence of AI is enabling new players to venture into the market by offering niche application-specific solutions.

Companies across the globe are consolidating their operations in order to remain competitive. In January 2017, Microsoft acquired Maluuba in order to advance its AI and deep learning development efforts. Established industry participants are working on hardware and software solutions that incorporate these new technologies.

North America, by far, held the lion's share of the world's AI market in 2018 due to substantial investments from government agencies, the established presence of industry participants, and unparalleled technical expertise. The Asia Pacific (APAC) region, however, is expected to overtake North America to emerge as the world's leading regional market by 2025, recording the highest CAGR, the release noted.

This may be due to significant improvements in information storage capacity, high computing power, and parallel processing, all of which have contributed to the swift uptake of artificial intelligence technology in end-use industries such as automotive and healthcare, the release stated.

Read the original here:

Global Artificial Intelligence Market Is Projected to Reach $390.9 Billion by 2025: Report - Crowdfund Insider

A new view of eugenics shows its ties to the slavery era – Daily Northwestern

Professor Rana Hogarth gives a talk on her new research in the Hagstrum Room of University Hall on Monday. Her lecture argued that the eugenics movement was motivated by the views of the slavery era.

Jason Beeferman/The Daily Northwestern

University of Illinois Prof. Rana Hogarth discussed her new research into eugenicist movements in University Hall on Monday. Her talk argued that, contrary to common views of American history, eugenics is actually a continuation of the views of the slavery era rather than a separate movement.

Through her talk, Hogarth presented the idea that eugenics was used to affirm preexisting beliefs that originated in the slavery era.

"Eugenic-era race crossing studies owed a lot of their creation to old ideas about race mixing from the era of slavery," Hogarth said. "Most people think of eugenics as this forward, new genetic science, which it is, but they were actually taking old ideas and repackaging them with new science."

Hogarth's research specifically focused on two early 20th century studies by Charles Davenport, a leader of the eugenics movement in the United States. The two studies examined mixed-race populations in the Caribbean.

The lecture, titled "Legacies of Slavery in the Era of Eugenics: Charles B. Davenport's Race-Crossing Studies," was part of the Klopsteg Lecture Series, which aims to present popular understandings of science for the general public.

Hogarth discussed multiple aspects of Davenport's experiments, including his reluctance to acknowledge the role of white men in the existence of people identifying as mixed-race in the first place. Davenport, for example, would describe his subjects as "fair skinned babies from dark mothers," without ever mentioning the role of a white father.

"Davenport attempted to craft a narrative that played into white perceptions about black female sexuality, that only subtly implicated white men," Hogarth said.

Ken Alder is the founding director of the Science in Human Culture program, which hosts the lecture series. Alder said the talk itself was "fabulous."

"This particular aspect of (eugenics) was a sort of scientific justification for something that Americans already wanted to do," Alder said.

Raina Bhagat is a first-year PhD student in comparative literary studies who attended the lecture. Bhagat said she was especially intrigued by Hogarth's discussion of how eugenicists sought to use hair as an indication of ancestry.

"It felt like a very contemporary link of this research that centered at the beginning of the 20th century, to here in the 21st century (with) the idea that hair comes in different shapes and sizes," Bhagat said.

Bhagat was referring to a discovery Hogarth made while digging through the archives of the American Philosophical Society in Philadelphia.

After asking for all the materials relating to Davenport, she found a tiny manila envelope listed under the category of "Family Histories." When Hogarth opened the envelope, to her surprise, samples of human hair fell out.

Though the hair was unexpected, it was "definitely fascinating," Hogarth said.

"When I went to the archives, I was like, this is really gross, but this is totally going into my research," Hogarth said.

To Hogarth, the human hair samples were more than an unusual find.

"To me, seeing something like a human article, a part of somebody's body, in this archive tells me that this is about reading people's bodies," Hogarth said. "This is about science, what science can tell us about somebody's potential or about someone's ancestry by literally studying something as minute as your hair. That to me is very telling."

Email: jasonbeeferman2023@u.northwestern.edu

Related Stories: "Brainstorm: Why does Social Darwinism still exist?" | "Satoshi Kanazawa, whose work has been criticized as racist, is facing mounting backlash from the Northwestern community"

More:

A new view of eugenics shows its ties to the slavery era - Daily Northwestern