Elon Musk Reminds Us of the Possible Dangers of Unregulated AI – Futurism

The Machines Will Win

Late Friday night, Elon Musk tweeted a photo reigniting the debate over AI safety. The tongue-in-cheek post contained a picture of a gambling addiction ad stating "In the end the machines will win," ostensibly referring to gambling machines. On a more serious note, Musk said that AI poses more of a risk than the threat posed by North Korea.

In an accompanying tweet, Musk elaborated on the need for regulation in the development of artificially intelligent systems. This echoes his remarks earlier this month, when he said, "I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public well-being."

From scanning the comments on the tweets, it seems that most people agree with Musk's assessment, to varying degrees of snark. One user, Daniel Pedraza, expressed a need for adaptability in any regulatory efforts: "[We] need a framework that's adaptable, no single fixed set of rules, laws, or principles that will be good for governing AI. [The] field is changing and adapting continually, and any fixed set of rules that are incorporated risk being ineffective quite quickly."

Many experts are leery of developing AI too quickly. The possible threats it could pose may sound like science fiction, but they could ultimately prove to be valid concerns.

Experts like Stephen Hawking have long warned about the potential for AI to destroy humanity. In a 2014 interview, the renowned physicist stated that "the development of artificial intelligence could spell the end of the human race." Moreover, he sees the proliferation of automation as a detrimental force to the middle class. Another expert, Michael Vassar, chief science officer of MetaMed Research, stated: "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order."

It's clear, at least in the scientific community, that unfettered development of AI may not be in humanity's best interest. Efforts are already underway to formulate rules that ensure the development of ethically aligned AI. The Institute of Electrical and Electronics Engineers has presented the first draft of guidelines, which it hopes will steer developers in the correct direction.

Additionally, the biggest names in tech are coming together to self-regulate before government steps in. Researchers and scientists from large tech companies like Google, Amazon, Microsoft, IBM, and Facebook have already initiated discussions to ensure that AI is a benefit to humanity and not a threat.

Artificial intelligence has a long way to go before it is anywhere near advanced enough to pose a threat, but progress is moving forward by leaps and bounds. One expert, Ray Kurzweil, predicts that computers will be smarter than humans by 2045, a paradigm shift known as the Singularity. He does not, however, think that this is anything to fear. Perhaps tech companies' self-policing will be enough to ensure those fears are unfounded, or perhaps the government's hand will ultimately be needed. Whichever way you lean, it's not too early to begin having these conversations. In the meantime, though, try not to worry too much, unless, of course, you're a competitive gamer.

Here is the original post:

Elon Musk Reminds Us of the Possible Dangers of Unregulated AI - Futurism

CFPB Highlights the Growing Role of Artificial Intelligence in the Delivery of Financial Services – JD Supra

The Consumer Financial Protection Bureau ("CFPB") published guidance on July 7, 2020, highlighting the potential use of artificial intelligence (AI) in the delivery of financial services, particularly in credit underwriting models. In addition to providing an overview of the ways in which AI is being used by financial institutions, the publication addresses: (1) industry uncertainty about how AI fits into the existing regulatory framework, especially for credit underwriting; and (2) the tools that the CFPB has been using to promote innovation, facilitate compliance, and reduce regulatory uncertainty.

As the publication notes, financial institutions are starting to deploy AI across a range of functions: as virtual assistants that can fulfill customer requests, in models to detect fraud or other potentially illegal activity, and as compliance monitoring tools. Credit underwriting is one specific area in which AI may have a profound impact. Credit underwriting models built upon AI have the potential to expand credit access by permitting lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting systems. These AI-infused models and technologies will typically allow lenders to evaluate more information about credit applicants than they could assess using traditional consumer reporting agency reports. In turn, consideration of such information may lead to more efficient credit decisions and potentially lower the cost of credit. On the other hand, AI may create or amplify risks of unlawful discrimination, lack of transparency, and privacy concerns. Further, bias may be found in the source data or model construction, which can lead to inaccurate predictions. Thus, in considering the implementation of AI, ensuring that the innovation is consistent with consumer protections will be critical.

Despite AI's potential benefits, industry uncertainty about how AI fits into the existing regulatory compliance framework may be slowing its adoption, especially for credit underwriting. One vital issue is how complex AI models can satisfy the adverse action notice requirements of the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA). ECOA and FCRA require creditors to provide consumers with the main reasons for a denial of credit or other adverse action. While these notice provisions serve important anti-discrimination, accuracy, and educational purposes, industry stakeholders may have questions about how institutions can comply with them if the reasons driving an AI model's decision are based on complex interrelationships. To alleviate this concern, the publication provides specific examples of the ways in which creditors can comply with ECOA and FCRA when issuing adverse action notices based on AI models.
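To make the underlying technical problem concrete: a lender must translate model internals into human-readable reasons. The sketch below, with invented feature names, weights, and applicant values, shows one common pattern, ranking features by their contribution to the denial score and reporting the top ones. It is an illustrative assumption, not a CFPB-endorsed method.

```python
# Hypothetical sketch: derive "principal reasons" for an adverse action
# from a simple linear credit model's per-feature contributions.
# Features, weights, and applicant values are invented for illustration.
import numpy as np

features = ["credit_utilization", "recent_delinquency",
            "debt_to_income", "hard_inquiries"]
weights = np.array([1.8, 1.1, 1.4, 0.7])     # higher = pushes toward denial
applicant = np.array([0.95, 1.0, 0.6, 0.8])  # normalized applicant values

contributions = weights * applicant          # each feature's pull toward denial
top = np.argsort(contributions)[::-1][:2]    # two largest contributors

print("Principal reasons for adverse action:")
for i in top:
    print(f"- {features[i]} (contribution {contributions[i]:+.2f})")
```

For a genuinely complex model, the same pattern holds, but the per-feature contributions would come from an attribution technique rather than raw coefficients.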

In addition to concluding that the existing regulatory framework has built-in flexibility that can be compatible with AI algorithms, the publication outlines the various tools the Bureau uses to promote innovation, facilitate compliance, and reduce regulatory uncertainty, including its Trial Disclosure Program (TDP), Compliance Assistance Sandbox (CAS), and No-Action Letter policies.

In particular, the first two policies (TDP and CAS) provide for a legal safe harbor that could reduce regulatory uncertainty in the area of AI and adverse action notices. The third discusses the ways in which stakeholders can obtain No-Action Letters from the CFPB, which can effectively provide increased regulatory certainty through a statement that the CFPB will not bring a supervisory or enforcement action against a company for providing a product or service under certain facts and circumstances.

This latest publication is a good sign for industry participants, as it reaffirms previous CFPB guidance and shows that the Bureau is committed to helping spur innovation consistent with consumer protections. By working together, industry stakeholders and the Bureau may be able to facilitate the use of this promising technology to expand access to credit and benefit consumers.

Read the original post:

CFPB Highlights the Growing Role of Artificial Intelligence in the Delivery of Financial Services - JD Supra

Military Deception: AI’s Killer App? – War on the Rocks

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (parts a. and b.), which asks how artificial intelligence will affect the character and/or the nature of war, and what might happen if the United States fails to develop robust AI capabilities that address national security issues.

In the 1983 film WarGames, Professor Falken bursts into the war room at NORAD to warn, "What you see on these screens up here is a fantasy, a computer-enhanced hallucination. Those blips are not real missiles, they're phantoms!" The Soviet nuclear attack onscreen, he explained, was instead a simulation created by WOPR, an artificial intelligence of Falken's own invention.

WOPR's simulation now seems more prescient than most other 20th-century predictions about how artificial intelligence, or AI, would change the nature of warfare. Contrary to the promise that AI would deliver an omniscient view of everything happening in the battlespace, the goal of U.S. military planners for decades, it now appears that technologies of misdirection are winning.

Military deception, in short, could prove to be AI's killer app.

At the turn of this century, Admiral Bill Owens predicted that U.S. commanders would soon be able to see everything of military significance in the combat zone. In the 1990s, one military leader echoed that view, promising that in the first quarter of the 21st century, "it will become possible to find, fix or track, and target anything that moves on the surface of the earth." Two decades and considerable progress in most areas of information technology have failed to realize these visions, but predictions that perfect battlespace knowledge is a near-term inevitability persist. A recent Foreign Affairs essay contends that in a world that is becoming "one giant sensor," hiding and penetrating, never easy in warfare, will be far more difficult, if not impossible. It claims that once additional technologies such as quantum sensors are fielded, there will be "nowhere to hide."

Conventional wisdom has long held that advances in information technology would inevitably advantage finders at the expense of hiders. But that view seems to have been based more on wishful thinking than on technical assessment. The immense potential of AI for those who want to thwart would-be finders could offset, if not exceed, its utility for enabling them. Finders, in turn, will have to contend with both understanding reality and recognizing what is fake, in a world where faking is much easier.

The value of military deception is the subject of one of the oldest and most contentious debates among strategists. Sun Tzu famously decreed that "all warfare is based on deception," but Carl von Clausewitz dismissed military deception as a desperate measure, a last resort for those who had run out of better options. In theory, military deception is extremely attractive. One influential study noted that, all things being equal, "the advantage in a deception lies with the deceiver because he knows the truth and he can assume that the adversary is eagerly searching for its indicators."

If deception is so advantageous, why doesn't it already dominate the practice of warfare? A major reason is that, historically, military deception was planned and carried out in a haphazard, unsystematic way. During World War II, for example, British deception planners engaged in their work much in the manner of college students perpetrating a hoax, yet they still accomplished feats such as convincing the Germans to expect the Allied invasion of France at Pas-de-Calais rather than Normandy. Despite such triumphs, military commanders have often hesitated to gamble on the uncertain risk-benefit tradeoff of deception plans, as these require investments in effort and resources that would otherwise be applied against the enemy in a more direct fashion. If the enemy sees through the deception, it ends up being worse than useless.

Deception via Algorithm

What's new is that researchers have invented machine learning systems that can optimize deception. The disturbing new phenomenon called "deepfakes" is the most prominent example. These are synthetic artifacts (such as images) created by computer systems that compete with themselves and self-improve. In these generative adversarial networks, a generator produces fake examples and a discriminator attempts to identify them, each refining itself based on the other's outputs. This technique produces photorealistic deepfakes of imaginary people, but it can be adapted to generate seemingly real sensor signatures of critical military targets.
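To make the generator-discriminator competition concrete, here is a minimal training-loop sketch in PyTorch (an assumed library choice). The toy networks, random stand-in data, and hyperparameters are illustrative only, not a recipe for any real deepfake system.

```python
# Minimal GAN sketch: a generator and a discriminator trained against
# each other, each refining itself based on the other's outputs.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim)        # stand-in for real samples
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1 and generated fakes 0.
    d_loss = (bce(discriminator(real), torch.ones(batch, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(batch, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: fool the discriminator into labeling fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```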

Generative adversarial networks can also produce novel forms of disinformation. Take, for instance, the image of unrecognizable objects that went viral earlier this year (fig. 1). The image resembles an indoor scene, but upon closer inspection it contains no recognizable items. It is neither an adversarial example (an image that machine learning systems misidentify) nor a deepfake, though it was created using a similar technique. The picture does not make any sense to either humans or machines.

This kind of ambiguity-increasing deception could be a boon for militaries with something to hide. Could they design such nonsensical images with AI and paint them onto the battlespace using decoys, fake signal traffic, and careful arrangements of genuine hardware? This approach could render multi-billion-dollar sensor systems useless because the data they collect would be incomprehensible to both AI and human analysts. Proposed schemes for deepfake detection would probably be of little help, since these require knowledge of real examples in order to pinpoint subtle statistical differences in the fakes. Adversaries will minimize their opponents' opportunities to collect real examples, for instance by introducing spurious deepfake artifacts into their genuine signals traffic.

Rather than lifting the fog of war, AI and machine learning may enable the creation of "fog of war machines": automated deception planners designed to exacerbate knowledge quality problems.

Figure 1: This bizarre image generated by a generative adversarial network resembles a real scene at first glance but contains no recognizable objects.

Deception via Sensors and Inadequate Algorithms

Meanwhile, the combined use of AI and sensors to enhance situational awareness could make new kinds of military deception possible. AI systems will be fed data by a huge number of sensors, everything from space-based synthetic-aperture radar to cameras on drones to selfies posted on social media. Most of that data will be irrelevant, noisy, or disinformation. Detecting many kinds of adversary targets is hard, and indications of such detection will often be rare and ambiguous. AI and machine learning will be essential to ferret them out fast enough and to use the subtle clues received by multiple sensors to estimate the locations of potential targets.

Using AI to see everything requires solving a multisource-multitarget information fusion problem, that is, combining information collected from multiple sources to estimate the tracks of multiple targets, on an unprecedented scale. Unfortunately, designing algorithms to do this is far from a solved problem, and there are theoretical reasons to believe it will be hard to go far beyond the much-discussed limitations of deep learning. The systems used today, which are only just starting to incorporate machine learning, work fairly well in permissive environments with low noise and limited clutter, but their performance degrades rapidly in more challenging environments. While AI should improve the robustness of multisource-multitarget information fusion, any means of information fusion is limited by the assumptions built into it, and wrong assumptions will lead to wrong conclusions, even in the hands of human-machine teams or superintelligent AI.
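A stripped-down example shows why assumptions matter. The sketch below fuses three sensors' position estimates for a single target by inverse-variance weighting, the textbook building block beneath far more elaborate multitarget fusion stacks; the numbers, and the single-target simplification, are assumptions for illustration.

```python
# Minimal single-target fusion sketch: combine three sensors' reports by
# inverse-variance weighting. Real multisource-multitarget fusion adds data
# association, clutter rejection, and track management on top of this.
import numpy as np

estimates = np.array([102.0, 98.5, 110.0])  # position reports (meters)
variances = np.array([4.0, 1.0, 25.0])      # assumed sensor noise levels

weights = (1.0 / variances) / np.sum(1.0 / variances)  # trust precise sensors more
fused = np.sum(weights * estimates)
fused_var = 1.0 / np.sum(1.0 / variances)

print(f"fused estimate: {fused:.2f} m (variance {fused_var:.2f})")
# If an assumed variance is wrong (a bad model, or adversary-injected noise),
# the weights are wrong too: wrong assumptions yield wrong conclusions.
```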

Moreover, some analysts, backed by some empirical evidence, contend that the approaches typically used today for multisource-multitarget information fusion are unsound, meaning these algorithms may not estimate the correct target state even if they are implemented perfectly and fed high-quality data. The intrinsic difficulty of information fusion demands the use of approximation techniques that will sometimes find wrong answers. This creates a potentially rich attack surface for adversaries: fog of war machines might be able to exploit the flaws in these approximation algorithms to deceive would-be finders.

Neither Offense- nor Defense-Dominant

Thus, AI seems poised to increase the advantages hiders have always enjoyed in military deception. Using data from their own operations, hiders can model their own forces comprehensively and then use this knowledge to build a fog of war machine. Finders, meanwhile, are forced to rely upon noisy, incomplete, and possibly mendacious data to construct their tracking algorithms.

If technological progress boosts deception, it will have unpredictable effects. In some circumstances, improved deception benefits attackers; in others, it bolsters defenders. And while effective deception can impel an attacker to misdirect his blows, it does nothing to shield the defender from those that do land. Rather than shifting the offense-defense balance, AI might inaugurate something qualitatively different: a deception-dominant world in which countries can no longer gauge that balance.

That's a formula for a more jittery world. Even if AI-enhanced military intelligence, surveillance, and reconnaissance proves effective, states aware that they don't know what the enemy is hiding are likely to feel insecure. Even earnest, mutual efforts to increase transparency and build trust would be difficult, because neither side could discount the possibility that its adversary was deceiving it with the high-tech equivalent of a Potemkin village. That implies more vigilance, more uncertainty, more resource consumption, and more readiness-fatigue. As Paul Bracken observed, it is hard to prove that deception will really work, but technology ensures that we will increasingly need to assume that it will.

Edward Geist is a policy researcher and Marjory Blumenthal is a senior policy researcher at the RAND Corporation. Geist received a Smith Richardson Strategy and Policy Fellowship to write a book on artificial intelligence and nuclear warfare.

Image: U.S. Navy (Photo by Mass Communication Specialist 1st Class Carlos Gomez)

See the article here:

Military Deception: AI's Killer App? - War on the Rocks

Adobe Doubles Down On Academia To Get Smart About AI And Algos – AdExchanger

Adobe is looking to get schooled on AI and data science.

While many technology giants foster relationships with academics by offering them lucrative part-time consultancy positions, Adobe is pursuing a different tack: dishing out $50,000, no-strings-attached grants to professors and doctoral students working on projects of joint interest.

"What academia provides is more the advanced mathematical algorithms and the advanced research that's gone into other related areas but hasn't been applied to our field," said Anil Kamath, Adobe's VP of technology.

Artificial Intelligentsia

Adobe was not an early promoter of AI products, as were other major technology players like Google with its Automated Insights pattern-recognition tool, IBM with Watson, and Salesforce with Einstein.

But Adobe's research grant program, which has dished out 40 grants for a total of $2 million in the past four years, is bringing algorithmic AI into the company through academic work.

Adobe is also doing outreach at events. A top priority at its fourth annual Data Science Symposium held in San Jose last week was to identify AI and machine learning research proposals.

"We want to take the areas of focus for data scientists working on AI and machine learning and funnel them to real-world digital marketing problems," Kamath said.

For instance, Rutgers University computer science professor Shan Muthukrishnan developed an algorithm that takes the hundreds of dimensions of audience data coming into a cloud (cookies, browser data, device IDs, demographic data points, location, et al.) and learns to pluck potentially actionable marketing trends from the raw data stream.
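The article doesn't detail Muthukrishnan's method, but the general pattern it describes, compressing hundreds of raw dimensions and then grouping similar behavior to surface candidate trends, might look something like the sketch below; scikit-learn and the random stand-in data are assumptions, not his actual algorithm.

```python
# Illustrative sketch: reduce high-dimensional audience data, then cluster
# to surface candidate segments for a marketer to inspect.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 300))  # stand-in for raw audience events

X_reduced = PCA(n_components=20).fit_transform(X)           # compress dimensions
segments = KMeans(n_clusters=8, n_init=10).fit_predict(X_reduced)

# Each cluster is a candidate audience segment worth a closer look.
print(np.bincount(segments))
```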

Adobe doesn't own Muthukrishnan's research; the grant is considered a hands-off gift to the university, and the work that comes out of the research process is open to public and peer review.

But Adobe does provide the data, and through it is able to point the professor's research at thorny product issues facing the company.

Productizing Research

What is Adobe getting out of this? It doesn't own the research or the algorithms being developed, and researchers from Oracle or Salesforce could likewise read the theses and academic journal papers that result from Adobe research grants.

One way Adobe capitalizes on its grants is by connecting major customers with academic researchers to help them solve their big data problems.

Alice Li worked with Adobe as part of her doctoral research in 2012, and more recently won a grant from Adobe to work on an attribution model for an online jewelry retailer and Adobe client, meant to bridge the gap between last-touch and multitouch attribution.

The jeweler ended up deploying the attribution model, and continues to use it as part of its work with Adobe.

"The company is judging the marginal value of their marketing channels based on what they're told by those marketing channels," Li said. "Google tells them paid search is very effective, Facebook tells them social is very effective, but academic research will be objective and rigorous."

And although the grants don't give Adobe intellectual property rights or new software, they do get productized through human capital, namely interns and cross-employed researchers.

This past year, Adobe awarded a grant to a Stanford professor and graduate student working on sequential recommendations, specifically: How should a platform lead users through video tutorial sequences based on factors like whether that person is on a free version of a product or is already a paid subscriber? And how likely is the user to churn and abandon the platform entirely?
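The Stanford work itself isn't public in this article, but a bare-bones version of tier-conditioned sequential recommendation might look like this sketch, where the tutorial names, tiers, and transition probabilities are all invented for illustration:

```python
# Hypothetical sketch: pick the next tutorial from transition
# probabilities conditioned on the user's subscription tier.

# P(next_tutorial | current_tutorial, user_tier), learned from usage logs.
TRANSITIONS = {
    ("intro", "free"): {"basics": 0.7, "export": 0.3},
    ("intro", "paid"): {"basics": 0.4, "advanced_masks": 0.6},
    ("basics", "free"): {"export": 0.8, "advanced_masks": 0.2},
}

def recommend(current, tier):
    """Recommend the most likely next tutorial for this user tier."""
    options = TRANSITIONS.get((current, tier))
    if not options:
        return "intro"  # fall back to the start of the sequence
    return max(options, key=options.get)

print(recommend("intro", "paid"))  # -> advanced_masks
```

A production system would also fold in a churn-risk estimate, steering at-risk users toward sequences that historically improve retention.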

The graduate student is employed over the summer by Adobe, where, surprise, surprise, he'll be working on its video tutorial sequencing.

"Usually the PhD students working in that area have a chance to work with us," Kamath said. "And that's where we see some of the best results translating research work and putting it into product."

Talent Funnel

But even more important than capitalizing on research to inform product development is the chance to secure long-term talent.

"A part of it is getting academia and researchers to think about and work on problems that are relevant to us," Kamath said. "[But] another big part is recruiting these machine learning and data science students who are really competitively sought after."

It's good for students to consider industry concerns, said Stanford professor Ramesh Johari, who's worked on multiple Adobe research grants, including the sequential learning algorithm.

"Students can benefit by being aware of the connections between the algorithmic work they're doing and actual problems people face," he said.

He's seen students receive their master's or PhD and go on to corporate tech development in order to further develop the work they did in school. Adobe researchers have also served as co-authors on academic projects and sat on thesis defense committees.

The advantages of academic outreach aren't immediate and are hard to quantify with a dollar sign, aside from the batch of $50,000 checks the company awards each year, of course. But the long-term advantages are clear.

"Adobe is really well served by work intersecting with the academic community," Johari said.

Read the original:

Adobe Doubles Down On Academia To Get Smart About AI And Algos - AdExchanger

The Facial Recognition Company That Scraped Facebook And Instagram Photos Is Developing Surveillance Cameras – BuzzFeed News

Clearview AI, the secretive company that's built a database of billions of photos scraped without permission from social media and the web, has been testing its facial recognition software on surveillance cameras and augmented reality glasses, according to documents seen by BuzzFeed News.

Clearview, which claims its software can match a picture of any individual to photos of them that have been posted online, has quietly been working on a surveillance camera with facial recognition capabilities. That device is being developed under a division called Insight Camera, which has been tested by at least two potential clients, according to documents.

On its website, which was taken offline after BuzzFeed News requested comment from a Clearview spokesperson, Insight said it offers "the smartest security camera," now in limited preview to select retail, banking, and residential buildings.

Insight Camera's main site had no obvious connection to Clearview, but BuzzFeed News was able to link it to the facial recognition company by comparing the code from Insight's and Clearview's respective log-in pages, both of which shared numerous references to Clearview's servers. This shared code also mentioned something called Fastlane, a "checkin app."
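For a sense of how such a comparison works in practice, the sketch below scores the textual similarity of two pages' source code and lists server hostnames they reference in common. The functions and matching logic are illustrative assumptions, not BuzzFeed News' actual methodology.

```python
# Illustrative sketch: compare two pages' source code for similarity and
# shared server references. Variable names in the usage example are
# hypothetical placeholders for separately fetched HTML.
import difflib
import re

HOST_PATTERN = re.compile(r"https?://([\w.-]+)")

def code_similarity(code_a, code_b):
    """Return a 0..1 similarity ratio between two source strings."""
    return difflib.SequenceMatcher(None, code_a, code_b).ratio()

def shared_hosts(code_a, code_b):
    """Return hostnames referenced by both pages."""
    return set(HOST_PATTERN.findall(code_a)) & set(HOST_PATTERN.findall(code_b))

# Usage, assuming the two log-in pages' HTML has been fetched:
# print(code_similarity(insight_login_html, clearview_login_html))
# print(shared_hosts(insight_login_html, clearview_login_html))
```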

Clearview CEO Hoan Ton-That and a company spokesperson did not respond to multiple requests for comment about Insight or its experiments with physical devices. After BuzzFeed News reached out to inquire about Insight Camera, the entity's website disappeared.

Despite publicly claiming it works with law enforcement agencies alone, Clearview has been aggressively pushing its technology into the private sector. As BuzzFeed News first reported, Clearview documents indicated more than 2,200 public and private entities have been credentialed to use its facial recognition software, including Macy's, Kohl's, the National Basketball Association, and Bank of America.

Clearview has never publicly mentioned Insight Camera. A list of organizations credentialed to use its app, viewed by BuzzFeed News, showed Clearview had identified two entities experimenting with its surveillance cameras in a category called has_security_camera_app.

Those two organizations, the United Federation of Teachers (UFT) and New York City real estate firm Rudin Management, deployed Insight Camera in trials, BuzzFeed News confirmed. In a statement, UFT, a labor union that represents teachers in New York City public schools, said the technology was successful in helping security personnel identify individuals who had made threats against employees so they could be prevented from entering one of its offices.

"We did not access the larger Clearview database," a spokesperson for UFT told BuzzFeed News. "Instead, we used Insight Camera in a self-contained, closed system that relied exclusively on images generated on site."

UFT did not say how many photos were in that closed system, which it maintained is separate from the database of more than 3 billion photos that Clearview AI says it has scraped from millions of sites, including Facebook, Instagram, and YouTube. Clearview's desktop software and mobile app allow users to run static photos through a facial recognition system that matches people to existing media in a few seconds, but Insight Camera, according to those who used it, attempted to flag individuals of interest using facial recognition on a live video feed.
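The generic pattern behind such a live-feed flagger, encode a small watchlist, then compare each video frame against it, can be sketched with the open-source face_recognition and OpenCV libraries. This is an assumed, generic illustration; Clearview's actual stack is not public, and the watchlist image path is hypothetical.

```python
# Hedged sketch of a self-contained, closed-system live-feed face flagger.
import cv2
import face_recognition

# Encode a small local watchlist (no external database).
watchlist = [face_recognition.face_encodings(
    face_recognition.load_image_file("person_of_interest.jpg"))[0]]

cap = cv2.VideoCapture(0)  # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV frames are BGR
    for encoding in face_recognition.face_encodings(rgb):
        if any(face_recognition.compare_faces(watchlist, encoding)):
            print("Watchlist match in frame")
cap.release()
```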

A spokesperson for Rudin Management, which has a portfolio of 18 residential and 16 commercial office buildings as well as two condominiums in New York City, confirmed to BuzzFeed News that it had tested Insight cameras.

"We beta test many products to see if they would be additive to our portfolio and tenants," the spokesperson said. "In this case we decided it was not, and we do not currently use the software."

BuzzFeed News discovered Insight after analyzing a publicly discoverable copy of Clearview's web app and determining that it contained code for a security_camera app. Entities with access to that security camera app appear to have been able to log in to the Insight Camera website, which was registered last April.

A BuzzFeed News analysis of the Insight Camera site found that it was almost a perfect clone of the code found on Clearview AI's web page. Though there were some aesthetic differences between the two sites, both appeared to share the same code for communicating with Clearview's servers.

Although Clearview has recently stated its services are intended for law enforcement, the company has maintained significant interest in the private sector. As BuzzFeed News previously reported, Ton-That had entered his company in a retail technology accelerator in the summer of 2018, before claiming that the company would focus on law enforcement.

A presentation from the company's early pitches to investors, recently reviewed by BuzzFeed News, suggests that in early 2018 the company wasn't focused on law enforcement at all. On one slide, the company named four industries in which it was testing its technology: banking, retail, insurance, and oil. The only mention of government or public entities is in reference to a pilot at an unnamed "major federal agency."

"Banking: The world's largest bank selected Clearview to provide security background checks for its annual shareholders meeting," the company wrote on one of its slides. "Retail: Manhattan's top food retailer has hired CV to provide facial-recognition hardware & software for its supermarket chain."

Privacy advocate Evan Greer, deputy director of the digital rights activist group Fight for the Future, said that brick-and-mortar stores are seen as community spaces and that one of the most attractive applications for Clearview in the private sector would be screening people as they enter a store to see if they have a criminal record. She remained skeptical of Clearview's technology.

"They're claiming that this technology can do all kinds of stuff and institutions are easily dazzled by that," Greer said. "But it's relatively new technology for applications like this and it's totally untested. We know that there are better ways to keep people safe that don't violate their rights."

Clearview has also been actively experimenting with wearables with the help of Vuzix, a Rochester, New York-based manufacturer of augmented reality glasses. Clearview data reviewed by BuzzFeed News showed accounts associated with Vuzix ran nearly 300 searches, some as recently as November. Matt Margolis, Vuzix's director of business development, acknowledged that his company had sent the startup sets of its augmented reality glasses for testing, noting Clearview was one of a few facial recognition developers it had partnered with.

"It's not something anybody is buying off the shelf, but I can't deny that it's in development, though it's not something we're selling today," Margolis told BuzzFeed News. "We do have a number of other partners that use facial recognition, but they don't do the same thing that Clearview is doing. They're not using photos that are crawled off the web."

Clearview's link to Vuzix was first reported by Gizmodo. The company's interest in smart glasses was first reported by the New York Times.

Vuzix, which counts Intel as a shareholder, initially focused on entertainment and gaming before moving into the defense and homeland security markets, according to a financial filing from last year. On its company blog in February, Vuzix cited the sci-fi film RoboCop, in which officers used smartglasses with live facial recognition, as an inspiration, and noted that countries including Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates already screen crowds to match faces against a massive database.

BuzzFeed News previously reported that Clearview AI had provided its facial recognition technology to entities in Saudi Arabia and the UAE, two countries known for their human rights violations. The company previously did not respond to questions about entities that have used its software.

Last week, in an email to BuzzFeed News, Clearview attorney Tor Ekeland said, "There are numerous inaccuracies in this illegally obtained information. As there is an ongoing Federal investigation, we have no further comment."

Margolis, who has seen demos of Clearview, acknowledged that a wearable with facial recognition could be abused, with "a lot of negative possibilities," but noted that systems are only as good as the biometric information on which they rely. He said that Clearview's technology was accurate in the tests he had seen and called the billions of photos the company ingested from the web part of the public domain.

"Tech used the right way is the real goal... to keep people safe," he said. "You want to find the wrongdoers. It's not a bad thing for society."

Code from Clearview AI's app analyzed by BuzzFeed News also suggested the startup had experimented with tech from RealWear, a Vancouver, Washington-based augmented reality glasses manufacturer. The code included instructions telling new users to scan a Clearview QR code to pair its app with a RealWear device. Data viewed by BuzzFeed News showed that accounts associated with RealWear had run more than 70 searches as recently as last month.

In an interview, RealWear CEO Andy Lowery said he had never heard of Clearview before, but found that his company had sold the startup a few devices about a year ago. He told BuzzFeed News that RealWear doesn't market or sell in any significant way to police forces, and compared his company to a phone manufacturer like Samsung in that it cannot control what applications developers build or put on its devices.

Lowery could not explain why Clearview's data showed that accounts associated with RealWear had been running searches with the facial recognition technology, but he didn't rule out one of his 115 employees trying the software.

"I haven't seen any evidence that they're working with us in any sort of way," he said. "I don't even see them selling or reselling anything with our devices."

See the rest here:

The Facial Recognition Company That Scraped Facebook And Instagram Photos Is Developing Surveillance Cameras - BuzzFeed News

Rise of AI | the most exciting conference for Artificial …

WE LOVE AI

We personally work and live with artificial intelligence every day. AI already permeates our personal lives; we are simply not always aware of it. It is therefore extremely important for us to understand the status of artificial intelligence. We have created a platform to share our vision of and learnings about artificial intelligence. Together we discuss the implications of the rise of AI for our human life, companies, society, and politics.

We have selected speakers who have a clear message, a mission, and a vision of the future. Each speaker knows their field of artificial intelligence inside out. We also offer workshops with our AI Topic Leaders, where you can dive deeper into one specific topic and share your knowledge with us. For 2018 we will offer you two stages: Artificial Intelligence Vision and Applied Artificial Intelligence.

We have limited the conference to 500 people in order to keep it intimate and to build an environment where each participant can freely share their thoughts and opinions on the future.

Read more from the original source:

Rise of AI | the most exciting conference for Artificial ...

Carrboro startup Tanjo to leverage its AI platform to help with NC’s reopening – WRAL Tech Wire

CARRBORO - A Carrboro artificial intelligence (AI) startup is leveraging its technology platform to help business and community leaders navigate North Carolina's COVID-19 reopening.

Carrboro-based Tanjo is teaming up with the Digital Health Institute for Transformation (DHIT) to build an engine that uses machine learning and advanced analytics to ingest huge amounts of national and regional data and then provide actionable insights.

"Successfully reopening our economy without risking the destruction of the health of our communities is the central challenge we are attempting to overcome," said Michael Levy, president of DHIT, in a statement. "More reliable, local data-driven expert guidance enabled by smart AI is critical to allow the safe and successful reopening of our communities and businesses."

Consider the breadth of intelligence involved: health and epidemiological data, labor and economic data, occupational data, consumer behavior and attitudinal data, and environmental data.

Tanjo, founded by serial entrepreneur Richard Boyd in 2017, said it is designing a dashboard to give stakeholders real-time intelligence and predictive modeling on population health risk, consumer sentiment, and community resiliency.

Users will be able to view the risk to their business and county, as well as simulate the impact of implementing evidence-based recommendations, enabling them to make informed decisions.

As part of the 2020 COVID-19 Recovery Act, the North Carolina Policy Collaboratory in late July awarded DHIT a grant to research, validate, and build a simulation platform for North Carolina's business and community leaders.

DHIT and Tanjo entered into a formal strategic partnership in November 2019, pre-COVID-19.

The seven NC counties chosen for the initial pilot are: Ashe, Buncombe, Gates, Mecklenburg, New Hanover, Robeson, and Wake.

The overall project is a collaboration between Tanjo, DHIT, the Institute for Convergent Sciences and Innovate Carolina at the University of North Carolina at Chapel Hill, and the NC Chamber Foundation, among other key stakeholders.

If you are a community organization or business located in the counties listed above and are interested in being a beta tester for this initiative, contact communityconfidence@dhitglobal.org.

See the rest here:

Carrboro startup Tanjo to leverage its AI platform to help with NC's reopening - WRAL Tech Wire

‘T-Minus AI’: A look at the intersection of geopolitics and autonomy – C4ISRNet

China has a national plan for it. Russia says it will determine the ruler of the world. The United States is investing heavily to develop it.

The race is on to create, control and weaponize artificial intelligence.

In Michael Kanaan's book "T-Minus AI: Humanity's Countdown to Artificial Intelligence and the New Pursuit of Global Power," set for release Aug. 25, the realities of AI are laid out for the reader from a human-oriented perspective. Such technology, often shrouded in mystery and misunderstood, is made easy to comprehend through a discussion of the global implications of developing AI. Kanaan is one of the Air Force's AI leaders.

The following excerpt, edited for length and clarity, introduces how, in late 2017, the conversation about artificial intelligence changed forever.

It was a Friday morning, Sept. 1, 2017, and not yet dawn when I stepped out of Reagan National Airport and followed my bag into the back of a waiting SUV. After flying east all night from San Francisco to D.C., I still had two hours before a Pentagon briefing with Lt. Gen. VeraLinn "Dash" Jamieson. She was the deputy chief of staff for U.S. Air Force intelligence and the country's most senior Air Force intelligence officer, a three-star officer responsible for a staff of 30,000 and an overall budget of $55 billion.

As the Air Force lead officer for artificial intelligence and machine learning, I'd been reporting directly to Jamieson for over two years. The briefing that morning was to discuss the commitments we'd just received from two of Silicon Valley's most prominent AI companies. After months of collective effort, the new agreements were significant steps forward. They were also crucial proof that the long history of cooperation between the American public and private sectors could reasonably be expected to continue. With the world marching steadfastly into the promising but unsettled fields of AI, it was becoming critical that Americans do so, if not entirely in harmony, then at least to the sounds of the same beat.

My apartment was only a short ride away. I was looking forward to a hot shower and strong coffee. But as the SUV pulled out of the terminal and into the morning darkness, a message alert pinged from my phone. It was a text from the general. Short and to the point, as usual: "See Putin comments re AI."

A quick web search pulled up a quote already posting to news feeds everywhere. At a televised symposium broadcast throughout Russia only an hour earlier, President Vladimir Putin had crafted a sound bite making headlines around the globe. His unambiguous three sentences translated to: "Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."

As the driver accelerated up the Interstate 395 ramp toward the city, a heavy rain started to fall, hitting hard against the car's metal surfaces. Far off, through the window on my right, the dome of the Capitol building glistened in white light beyond the blurred, dark space of the Potomac River. Playing at background volume over the front speakers, a National Public Radio newscaster was describing a 3-mile-wide asteroid named Florence. Streaking past our planet that morning, the massive rock would be little more than 4 million miles away at its closest point, tremendously far by human standards, but breathtakingly near by the infinite scales of space. It was the largest object NASA had ever tracked passing so closely by our planet. On only a slightly different trajectory, it would have altered Earth's entire landscape. And, as for the dinosaurs before us, it would have changed everything. It would have changed life. A perfect metaphor, I thought, impeccably timed to coincide with Putin's comments about AI.

I looked back at his words. The message they carried rang like an alarm I didn't need to hear, but the motivation behind them wasn't so clear. Former KGB officers speak carefully and only for calculated reasons. Putin is no exception. His words matter, always. And so does his purpose. But what was it here? Just to offer a commentary or forecast? No. Not his style. A call to action, then, to energize his own population? Perhaps. But, more than that, this was a statement to other statesmen, a confirmation that he and his government were awake and aware that a deep, sophisticated effort was underway to accomplish a new world order.

Only a month earlier, China had released a massive three-part strategy aimed at achieving very clear benchmarks of advances in AI. First, by 2020, China planned to match the highest levels of AI technology and application capabilities in the U.S. or anywhere else in the world. Second, by 2025, it intends to capture a verifiable lead over all countries in the development and production of core AI technologies, including voice- and visual-recognition systems. Last, by 2030, China intends to dominantly lead all countries in all aspects and related fields of AI; to be the sole leader, the world's unquestioned and controlling epicenter of AI. Period. That is China's declared national plan.

With the Chinese government's newly published AI agenda available for the world to see, Putin's words resolved any ambiguity about its implication. True to his style, his message was clear and concise: whoever becomes the leader will become the ruler of the world.

Straightforward, I thought. And he's right. But focused administrations around the globe already know the profound potential of AI. The Chinese clearly do; it's driving their domestic and foreign agendas. And the Saudis, the European Union nations, the U.K., and the Canadians know it, too. Private enterprise is certainly focused in as well, from Google, Facebook, Amazon, Apple, and Microsoft to their Chinese state-controlled counterparts Baidu, Alibaba, and Tencent, and the telecom giant Huawei.

AI technologies have been methodically evolving since the 1960s, but over most of those years the advances were sporadic and relatively slow. From the earliest days, private funding and government support for AI research ebbed and flowed in direct relation to the successes and failures of the latest predictions and promises. At the lowest points of progress, when little was being accomplished, investment capital dried up. And when it did, efforts slowed. It was the usual interdependent circle of cause and effect. Twice, during the late '70s and then again during the late '80s and early '90s, the pace of progress all but stopped. Those years became known as the AI winters.

But, in the last 10 to 15 years, a number of major breakthroughs, in machine learning in particular, again propelled AI out of the dark and into another invigorated stage. A new momentum emerged, and an unmistakable race started to take shape. Insightful governments and industry leaders began doing everything possible to stay within reach of the lead, positioning themselves for any possible path to the front.

Now, for all to hear, Putin had just declared everything at stake. Without any room for misunderstanding, he equated AI superiority to global supremacy, to a strength akin to economic or even nuclear domination. He said it for public consumption, but it was rife with political purpose. "Whoever becomes the leader in this sphere will become the ruler of the world."

Those words would undoubtedly add another level of urgency to the day's meetings. That was certain. I redirected the driver to the Pentagon and looked down at my phone to answer the general's text: "Landed. Saw quote. On my way in."

The shower would have to wait.

***

In the months that followed, Putin's now infamous few sentences proved impactful across continents, industries, and governments. His comments provided the additional, final push that accelerated the planet's sense of seriousness about AI and propelled most everyone into a higher gear. Public and private enterprises around the globe reassessed their focus and levels of commitment. Governments and industries that had previously dedicated only minimal percentages of their research and defense budgets to the new technology suddenly saw things differently. It quickly became unacceptable to slow-walk AI efforts and protocols, and no longer defensible to incubate AI innovations for longer than the shortest time necessary.

Now, not long after, the pace of the race has quickened to a full sprint. National strategies and demonstrable use have become the measurements that matter. Rollouts have become requisite. To accomplish them, agendas are more focused, aggressive, and well funded. Sooner than many expected, AI is proving itself a dominant force of economic, political, and cultural influence, and is poised to transform much of what we know and much of what we do. China, Russia, and others are utilizing AI in ways the world needs to recognize. That's not to say all efforts and iterations in the West are beyond criticism. They're not. But if this new technology causes or contributes to a shift in power from the West to the East, everyone will be affected. Everything will change.

The future is here, and the world ahead looks far different than ever before.

No longer just science fiction or fantastic speculation, artificial intelligence is real. It's here, all around us, and it has already become an integral and influential part of our lives. Although we've taken only our first few steps into this new frontier of technological innovation, AI is providing us powerful new methods of conducting our affairs and accomplishing our goals. We use these new tools every day, usually without choice and often without even realizing it, from applications that streamline our personal lives and social activities to business programs and practices that enable new ways of acquiring a competitive advantage. I've learned a lot about the common misperceptions and misgivings people have when trying to understand AI. Most conversations about artificial intelligence either begin or end with one or more of the same familiar questions.

Although the answers to those questions merit long discussions and are open to differing opinions, they should at least be manageable and factually accurate. The topics shouldn't be too difficult to discuss or debate, whether conversationally or at policymaking and political levels. Unfortunately, they generally are.

But the conversational disconnects that usually occur aren't because of complex technical details or confusing computer issues. Instead, it's usually, simply, because of the same old obstacles that too often stand in the way of many other conversations. Regardless of the topic, and even when it matters most, we too frequently speak below, above, around, or past one another, especially when we don't have an equal amount of information, a shared base of knowledge, or a common set of experiences. In those instances, we make too many assumptions, allow too many things to go without saying, and use too many words that hold different meanings for different people. In short, too many confusions are never clarified and too many more are created. As a consequence, we're doomed to frustration and failure from the start, inevitably unable to understand one another and incapable of appreciating each other's perspectives and talking points. My goal throughout this book is to avoid those pitfalls.

The best way to start is to first address the most common misperceptions of all, the ones we tend to bring with us into the AI conversation. The first of these is the assumption that AI is unavoidably destined, sooner or later, to develop its own consciousness and its own autonomous, evil intent. For that idea, we can thank science fiction and the entertainment industry. Make no mistake, I'm an ardent fan of science fiction, both on screen and in books. Without any doubt, the sci-fi genre has given us fine works of imagination, insight, and art. Many great fiction writers and filmmakers are extremely knowledgeable about technology and conscientiously concerned about our future. Time and again they've proven themselves true visionaries, and we're unquestionably better off for their work. They spark our curiosity, ignite our imaginations, increase our appetite for knowledge, and encourage our interest in science and societal issues.

But when it comes to their scientific portrayals of artificial intelligence, our most popular authors and screenwriters have too often generated an array of exotic fears by focusing our attention on distant, dystopian possibilities instead of present-day realities. Science fiction that depicts AI usually equates a computer's intelligence with consciousness, and then frightens us by portraying future worlds in which AI isn't only conscious but also evil-minded and intent, self-motivated even, on overtaking and destroying us. To create drama, there has to be conflict, and the humans in these stories are almost always overwhelmed and outmatched, naturally unable to compete against the machines' vastly superior intelligence and mechanical strength. Iconic movies like 2001: A Space Odyssey, The Matrix, The Terminator, Ex Machina, and I, Robot, along with television series such as Westworld and Black Mirror, have turned our underlying fears and suspicions into deep-seated and bleak expectations.

Even today, commercial companies that offer AI products and consumer services routinely have to fight our distrust of intelligent machines as a basic, necessary part of their regular marketing efforts. Just think of all the television commercials for AI-enabled products we now see, and consider how many of them are focused first on trying to put us at ease by casting a polite and gentle glow to the figurative, artificial face of their AI, even when that face has absolutely nothing to do with the services their products actually provide.

AI is an extremely powerful tool, and it has immense implications we must consider and evaluate carefully. It's a very sharp instrument that shouldn't be callously wielded or casually accepted, especially when it's in the wrong hands or used for intentionally intrusive or oppressive purposes. These are serious issues, and there are significant steps we must take to ensure AI is properly designed and implemented. Fortunately, and contrary to what many people think, it's not necessary to have a background in computer science, mathematics, or engineering in order to meaningfully understand AI and its technological implications. With just a basic comprehension of a few fundamental concepts behind today's computers and related sciences, it's entirely possible to connect the relevant dots and understand the overall picture.

Creating tools to facilitate our lives is the strength of humankind. It's what we do. Given enough time, it was arguable, perhaps even inevitable, that we would create the ultimate tool: artificial intelligence itself. But what exactly does it mean that we've accomplished that task? And how is AI even possible? In large part, the answers lie in the history of ourselves and of our own biological intelligence. It turns out that artificially replicating what we know about the human thought process, at least as best we can, is a highly effective blueprint for creating something similar in a machine. It's our own evolution and our own history that teach us the fundamentals that make it all possible.

Read the rest here:

'T-Minus AI': A look at the intersection of geopolitics and autonomy - C4ISRNet

The ‘Skynet’ Gambit – AI At The Brink – Seeking Alpha

"The deployment of full artificial intelligence could well mean the end of the human race." - Stephen Hawking

"He can know his heart, but he don't want to. Rightly so. Best not to look in there. It ain't the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it." - Cormac McCarthy, Blood Meridian: Or the Evening Redness in the West

Let me declare at the outset that this article has been tough to write. I am by birthright an American, an optimist and a true believer in our innovative genius and its power to drive better lives for us and the world around us. I've grown up in the mellow sunshine of Moore's law, and lived firsthand in a world of unfettered innovation and creativity. That is why it is so difficult to write the following sentence:

It's time for federal regulation of AI and IoT technologies.

I say that reluctantly but with growing certainty. I have come to believe that we share a moral obligation to act now in order to protect our children and grandchildren. We need to take this moment, wake up, and listen to the voices warning us that the technologies powering the AI revolution are converging and advancing so rapidly that they pose a clear and present danger to our lives and well-being.

So this article is about why I have come to feel that way and why I think you should join me in that feeling. Obviously, this has financial implications. Since you are a tech investor, you almost certainly invested in one or more of the companies - like Nvidia (NASDAQ:NVDA), Google (NASDAQ:GOOG) (NASDAQ:GOOGL), and Baidu (NASDAQ:BIDU) - that are profiting from driving the breakneck advances we are seeing in AI base technologies and the myriad of embedded use cases that make the technology so seductive. Indeed, if we look at the entire tech industry ecosystem, from chips through applications and beyond them to their customers that are transforming their business through their use, we can hardly ignore the implications of this present circumstance.

So why? How did we get to this moment? Like me, you've probably been aware of the warnings of well-known luminaries like Elon Musk, Bill Gates, Stephen Hawking and many others, and, like me, you have probably noted their commentary but moved on to consider the next investment opportunity. Personally, being the optimist that I am, I certainly respected those arguments but believed even more strongly that we would innovate ourselves out of the danger zone. So why the change? Two words - one name - Bruce Schneier.

If you have been interested in the fields of cryptology and computer security, you have no doubt heard his name. Now IBM's (NYSE:IBM) chief spokesperson on security, he is a noted author and contributor to current thinking on the entire gamut of issues that confront us in this new era of the cloud, IoT, and Internet-based threats to personal privacy and computer system integrity. Schneier's seminal talk at the recent RSA conference brought it all into focus for me, and I encourage you to watch it. I will briefly recap his argument and then work out some of the consequences that flow from it. So here goes.

Schneier's case begins by identifying the problem - the rise of the cyber-physical system. He points out how our day-to-day reality is being subverted as IoT literally stands the world on its head, dematerializing and virtualizing our physical environment. What used to be dumb is now smart. Things that used to be discrete and disconnected are now networked and interconnected in subtle and powerful ways. This is the conceptual linkage that really connected the dots for me. As he puts it in his security blog:

We're building a world-size robot, and we don't even realize it. […] The world-size robot is distributed. It doesn't have a singular body, and parts of it are controlled in different ways by different people. It doesn't have a central brain, and it has nothing even remotely resembling a consciousness. It doesn't have a single goal or focus. It's not even something we deliberately designed. It's something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world. This world-size robot is actually more than the Internet of Things. […] And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building. It'll also get much more dangerous.

More powerful, indeed. It is at this point where AI and related technologies enter the equation to build a host of managers, agents, bots, natural language interfaces, and other facilities that allow us to leverage the immense scale and reach of our IoT devices - devices that, summed altogether, encompass our physical world and exert enormous power for good and, in the wrong hands, for evil.

Surely, we can manage this? Well, no, says Schneier - not the way we are going about it now. The problem is, as he cogently points out, that our business model for building software and systems is notoriously callous when it comes to security. Our "fail fast, fix fast," minimum-market-requirements-for-version-1 shipment protocol is famous for delivering product that comes with a "hack me first" invitation that is all too often accepted. So what's the difference, you may ask? We've been muddling along with this problem for years. We dig ourselves into trouble, we dig ourselves out. Fail fast, fix fast. Life goes on. Let's go make some money.

Or maybe it doesn't. The IoT phenomenon is leading us headlong into deployment of literally billions of sensors embedded deep into our most personal physical surroundings, connecting us to system entities and actors, nefarious and benign, that now have access to intimate data about our lives. Bad as that is, it's not the worst thing. This same access gives these bad actors the potential to control the machines that provide life-sustaining services to us. It's one thing to have your credit card data hacked; it's entirely another thing to have a bad actor in control of, say, the power grid, an operating theater robot, your car, or the engine of the airplane you're riding in. Our very lives depend on the integrity of these machines. Do we need to emphasize this point? Fail fast, fix fast does not belong in this world.

So if the prospect of a body-count stat on the next after-action report from some future hack doesn't alarm you, how about this scenario. What if it wasn't a hack? What if it was an unforeseen interaction of otherwise benign AIs that we are relying on to run the system in question? Can we be sure to fully understand the entire capability of an AI that is, say, balancing the second-to-second demands of the power grid?

One thing we can count on - the AI that we are building now will be smarter and more capable tomorrow. How smart is the AI we're building? How good is it? Scary good. So let's let Musk answer the question. How smart are these machines we're building? "[They'll be] smarter than us. They'll do everything better than us," he says. So what's the problem? You're not going to like the answer.

We won't know that the AI has a problem until the AI breaks, and even then we may not know why it broke. The intrinsic nature of the cognitive software we are building with deep neural nets is that a decision is the product of interactions with thousands and possibly millions of previous decisions from lower levels in the training data, and those decision criteria may well have already been changed as feedback loops communicate learning upstream and down. The system very possibly can't tell us "why". Indeed, the smarter the AI is, the less likely it may be able to answer the why question.

Hard as it is, we really need to understand the scale of the systems we are building. Think about autonomous cars as one, rather small, example. Worldwide, the industry built 88 million cars and light trucks in 2016, and another 26 million medium and heavy trucks. Sometime in the 2025 to 2030 time frame, all of them will be autonomous. With the rise of the driving-as-a-service model, there may not be as many new vehicles being produced, but the numbers will still be huge, and fleet sizes will grow every year as older vehicles are replaced with self-driving ones. What are the odds that the AI that runs these vehicles performs flawlessly? Can we expect perfection? Our very lives depend on it. God forbid a successful hack into this platform!

Beyond that, what if even perfection can kill us? Ultimately, these machines may require our guidance to make moral decisions. Question: you and your spouse are in a car in the center lane of a three-lane freeway, traveling at the 70 mph speed limit. A motorcyclist is directly to your left; to the right is a family of five in an autonomous minivan. Enter a drunk driving an old pickup the wrong way at high speed, weaving through the three lanes directly in your path. Should your car evade to the left lane and risk the life of the motorcyclist? One would hope our vehicle wouldn't move right and put the family of five at risk. Should it be programmed to follow a "first, do no harm" policy, which would avoid a swerve into either lane and would simply brake as hard as possible in the center lane and hope for the best?

Whatever the scenario, the AIs we develop and deploy, however rich and deep the learning data they have been exposed to, will confront situations that they haven't encountered before. In the dire example above and in more mundane conundrums, who ultimately sets the policy that must be adhered to? Should the developer? How about the user (in cases where this is practical)? Or should we have a common policy that must be adhered to by all? For sure, any policy implemented in our driving scenario above will save lives and perform better than any human driver. Even so, in vehicles, airplanes, SCADA systems, chemical plants and myriad other AIs inhabiting devices operating in innately hazardous operating regimes, will it be sufficient to let their in extremis actions be opaque and unknowable? Surely not, but will the AI as developed always give us the control to change it?
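
To make the policy question concrete, here is a minimal sketch, in Python, of what a hard-coded "first, do no harm" rule might look like. Every name and input here is invented for illustration; no real vehicle stack works this simply.

```python
# Hypothetical "first, do no harm" evasion policy. All names and
# inputs are invented for illustration; real autonomy stacks weigh
# continuous risk estimates, not booleans.

def choose_maneuver(left_lane_occupied: bool, right_lane_occupied: bool) -> str:
    """Swerve only into a verifiably empty lane; otherwise brake in lane."""
    if not left_lane_occupied:
        return "swerve_left"
    if not right_lane_occupied:
        return "swerve_right"
    # Both adjacent lanes carry third parties: stay put and brake hard.
    return "brake_max"

# In the freeway scenario above, the motorcyclist occupies the left lane
# and the minivan the right, so the policy falls through to braking.
print(choose_maneuver(left_lane_occupied=True, right_lane_occupied=True))
```

Even this toy version shows where the policy lives: in an explicit, human-written rule that someone (the developer, the user, or a regulator) had to choose.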

Finally, we must consider a factor that is certainly related to scale but is uniquely and qualitatively different - the network. How freely and ubiquitously should these AIs interconnect? Taken on its face, the decision seems to have been made. The very term, Internet of Things, seems to imply an interconnection policy that is as freewheeling and chaotic as our Internet of people. Is this what We, the People want? Should some AIs - say our nuclear reactors or more generally our SCADA systems - operate with limited or no network connection? Seems likely, but how much further should we go? Who makes the decision?

Beyond such basic questions come the larger issues brought on by the reality of network power. Let's consider the issue of learning and add to that the power of vast network scale in our new cyber-physical world. The word seems so simple, so innocuous. How could learning be a bad thing? AI-powered IoT systems must be connected to deliver the value we need from them. Our autonomous vehicles, terrestrial and airborne, for example, will be in constant communication with nearby traffic, improving our safety by step-functions.

So how does the fleet learn? Let's take the example from above. Whatever the result, the incident forensics will be sent to the cloud, where developers will presumably incorporate the new data in the master learning set. How will the new master be tested? How long? How rigorously? What will be the re-deployment model? Will the new, improved version of the AI be proprietary and not shared with the other vehicle manufacturers, leaving their customers at a safety disadvantage? These are questions that demand government purview.

Certainly, there is no consensus here regarding the threat of AI. Andrew Ng of Baidu/Stanford disagrees that AI will be a threat to us in the foreseeable future. So does Mark Zuckerberg. But these disagreements are only with the overt existential threat - i.e. that a future AI may kill us. More broadly, though, there is very little disagreement that our AI/IoT-powered future poses broad economic and sociopolitical issues that could literally rip our societies apart. What issues? How about the massive loss of jobs and livelihood of perhaps the majority of our population over the course of the next 20 years? As is nicely summarized in this recent NY Times article, AI will almost certainly exacerbate the already difficult problem we have with income disparities. Beyond that, the global consequences of the AI revolution could generate a dangerous dependency dynamic among countries other than the US and China that do not own AI IP.

We could go on and on, but hopefully the issue is clear. Through the development and implementation of increasingly capable AI-powered IoT systems, we are embarking upon a voyage into an exciting but dangerous future state which we can barely imagine from our current vantage point. Now is the time to step back and assess where we are and what we need to do going forward. Schneier's prescription is that the tech industry must get in front of this issue and drive a workable consensus among industry stakeholders, governmental authorities and regulatory bodies about the problem, its causes and potential effects, and, most importantly, a reasonable solution that protects the public while allowing the industry room to innovate and build.

There is no turning back, but we owe it to ourselves and our posterity to do our utmost to get it right. As technologists we are inherently self-interested in protecting and nurturing the opportunity we all have in this exciting new realm. This is natural and understandable. Our singular focus on agility and innovation has brought the world many benefits and will bring many more. But we are not alone and it would be completely irresponsible to insist that we are the only stakeholder in the outcomes we are engineering.

This decision - to engage and attempt to manage the design of the new and evolving regulatory regime - has enormous implications. There is undoubtedly risk. Poor or heavy-handed regulation could well exact a tremendous opportunity cost. One could well imagine a world in which Nvidia's GPU business is severely affected by regulatory inspection and delay, for example. But that is the very reason we need to engage now. The economic leverage that AI provides in every sector of our economy leads us inescapably to economic and wealth-building scenarios beyond anything the world has seen before. As participants and investors, we must do what we can to protect this opportunity to build unprecedented levels of wealth for our country and ourselves. Schneier argues that we are best serving our self-interest by engaging government now rather than burying our heads in the sand waiting for the inevitable backlash that will come when (not if!) these massive systems fail catastrophically in the future.

Schneier has got the right idea. We need to broaden the conversation, lead the search for solutions, and communicate the message to the many non-tech constituencies - including all levels of government - that there is an exciting future ahead but that future must include appropriate regulations that protect the American people and indeed the entire human race.

We wont get a second chance to get this right.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

See the original post here:

The 'Skynet' Gambit - AI At The Brink - Seeking Alpha

AI-powered government finances: making the most of data and machines – Global Government Forum


Governments are paying growing attention to the potential of artificial intelligence (the simulation of human intelligence processes by machines) to enhance what they do.

To explore how public authorities are approaching the use of AI for tasks related to public finances, Global Government Fintech (the sister title of Global Government Forum) convened an international panel on 4 October 2022 for a webinar titled "How can AI help public authorities save money and deliver better outcomes?".

The discussion, organised in partnership with SAS and Intel, highlighted how AI is already helping departments to deliver results, but also that AI remains very much an emerging and, to many, rather nebulous field, with many hurdles to clear before widespread use. "Discussions of artificial intelligence often bring up connotations of an Orwellian nature, dystopian futures, Frankenstein," said Peter Kerstens, advisor on technological innovation and cyber security at the European Commission's Financial Services Department. "That is really a challenge for positive adoption and fair use of artificial intelligence because people are apprehensive about it."

Like most technology-based areas, it is a field that is also moving very quickly. "If the last class you took in data science was three years ago, it's already dated," cautioned Steve Keller, acting director of data strategy at the US Treasury's Bureau of the Fiscal Service, in his own opening remarks.

Kerstens began by describing the very name "artificial intelligence" as a big problem, asserting that AI is neither artificial nor particularly intelligent, at least not in the way that humans are intelligent.

"A better way to think about artificial intelligence and machine learning is self-learning high-capacity data processing and data analytics, and the application of mathematical and statistical methodologies to data," he explained. "That is, of course, not a very appealing name, but that is what it is. But the self-learning or self-empowering element is very important in AI because you have to look at it in comparison to traditional data processing."
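
Kerstens' distinction between traditional processing and self-learning systems can be shown in a few lines. The sketch below (Python, with a synthetic dataset invented purely for illustration) contrasts a hand-written rule with a model that estimates the rule from labelled examples.

```python
# Traditional data processing vs. self-learning processing, in miniature.
# The amounts, labels and threshold are synthetic, for illustration only.
from sklearn.linear_model import LogisticRegression

# Traditional processing: a human writes the decision rule.
def flag_by_rule(amount: float) -> bool:
    return amount > 10_000  # threshold chosen by a person

# Machine learning: the decision rule is estimated from labelled data.
X = [[500], [2_000], [9_000], [12_000], [50_000], [80_000]]  # report amounts
y = [0, 0, 0, 1, 1, 1]                                       # 1 = flagged by reviewers
model = LogisticRegression().fit(X, y)

print(flag_by_rule(15_000))          # True, because we said so
print(model.predict([[15_000]])[0])  # 1, because past data said so
```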

Continuing this theme of caution, he further explained: "Like old technology, AI enhances human and organisational capability for the better, but potentially also for the worse. So, it really depends on what use you make of that tool. You can make very positive use of it. But you can also make very negative uses of it. And that's why governance of your artificial intelligence and machine learning, and potentially rules and ethics, are important."

For financial regulators, AI is proving useful in helping to process the vast amounts of data and reports that companies must submit. "It goes beyond human capability, or you have to put lots and lots of people onto it to process just the incoming information," he said.

Read more: Biden sets out AI Bill of Rights to protect citizens from threats from automated systems

Kerstens then mentioned AI's potential for law enforcement. Monitoring the vast volumes of money moving through the financial system for fraud, sanctions and money laundering requires very powerful systems. "But this is also risky because it comes very close to mass surveillance," he said. "So, if you apply artificial intelligence or machine learning engines onto all of these flows, you really get into this dystopian future of Big Brother."

Kerstens also touched on AI's use in understanding macroeconomic developments. "Typically, macroeconomic policy assessment is very politically driven, and this blurs the objectivity of the assessment. AI assessment is much more independent, because it just looks at the data without any preconceived notions and draws conclusions, including conclusions that may not necessarily be very desirable," he said.

The US Treasury's Keller described the ultimate aim of AI as being to improve decision accuracy, forecasting and speed: "trying to use data to make scientific decisions". This includes, he continued, "testing and verifying our assumptions with data to help make sure that we don't break things, but also help us ask important questions".

He provided four AI use areas for the Bureau of the Fiscal Service: Treasury warrants (authorisations that a payment be made); fraud detection; monitoring; and entity resolution.

In the first area, he said the focus was on turning bills into "literally a dataset": the bureau has experimented with using natural language processing to turn written legislation into coherent, machine-readable data that has account codes and budgeted dollars for those account codes. In the second area, the focus is on checking people are who they say they are ("and how we detect that at scale"); in the third area, uses include monitoring whether people are using services correctly.
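
As a toy illustration of the first use area, the sketch below pulls an account code and a dollar amount out of a made-up appropriation sentence with a regular expression. It is not the bureau's actual pipeline, which uses natural language processing at far greater scale; the pattern, account format and sample text are all invented.

```python
# Hypothetical sketch: turning appropriation language into records.
# The sample sentence, account format and regex are invented; this is
# not the Fiscal Service's actual system.
import re

bill_text = (
    "For necessary expenses of the Widget Administration, account "
    "12-3456, $2,500,000 is appropriated for fiscal year 2023."
)

pattern = re.compile(r"account (\d{2}-\d{4}), \$([\d,]+)")
records = [
    {"account_code": code, "amount_usd": int(amount.replace(",", ""))}
    for code, amount in pattern.findall(bill_text)
]
print(records)  # [{'account_code': '12-3456', 'amount_usd': 2500000}]
```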

"We're collecting data from so many elements, and often in large public-sector areas, the left hand doesn't talk to the right hand," he said, in the context of entity resolution. "We often need to find a way to connect these two up in such a way that we are looking at the same entity so that we can share data in the long run. So, data can be brought together and utilised by data scientists or eventually to create AI that would help these other three things to happen."

Read more: Artificial intelligence in the public sector: an engine for innovation in government if we get it right

Keller also raised ethical, upskilling and cultural considerations. "If people start buying IT products that are going to have AI organically within them, or they're building them, [questions should arise such as]: are we doing it ethically? Do we have analytics standards? How are we testing? Are we actually getting value from the product? Or is it a total risk?"

He concluded his opening remarks by outlining how the bureau was building an internal data ecosystem, including a data governance council, data analytics lab, high-value use case compendium and data university.

The Centre for Data Ethics & Innovation (CDEI), which is part of the UK Department for Digital, Culture, Media and Sport, was established three years ago to drive responsible innovation across the public sector.

"A huge focus is around supporting teams to think about governance approaches," the centre's deputy director, Sam Cannicott, explained. "How do they develop and deploy technology in a responsible way? How do they have mechanisms for identifying and then addressing some of the ethical questions that these technologies raise?"

The CDEI has worked with a varied cross-section of the public sector, including the Ministry of Defence (to explore responsible AI use in defence); police forces; and the Department for Education and local authorities, to explore the use of data analytics in children's social care. "These are all really sensitive, often controversial, areas, but also where data can help inform decision-making," he said.

Read more: Canada to create top official to police artificial intelligence under new data law

The CDEI does not prescribe what should be done. Instead it helps different teams to think through these questions themselves.

"Ultimately, the questions are complex," Cannicott said. "While lots of teams might seek an easy answer, [to] be told what you're doing is fine, it's often more complicated, particularly when we look at how you develop a system, then deploy it, and continue to monitor and evaluate. So, we support teams to think about the whole lifecycle process."

The CDEI's current work programme is focused on three areas: building an effective AI assurance ecosystem (including exploring standards and impact assessments, as well as risk assessments that might be undertaken before a technology is deployed); responsible data access, including a focus on privacy-enhancing technologies; and transparency (the CDEI has been working with the Central Digital and Data Office to develop the UK's first public sector algorithmic transparency standard).

This is underpinned by a public attitudes function to ensure citizens' views inform the CDEI's work, which is important when it comes to the critical challenge of trust.

Dr Joseph Castle, adviser on strategic relationships and open source technologies at SAS, described how public authorities around the globe are using AI across a diverse set of fields, ranging from infrastructure and transport through to healthcare.

In government finance, he said, authorities are using analytics and AI to assess policy, risk, fraud and improper payments.

Castle, who previously worked for more than 20 years in various US federal government roles, provided two examples of SAS work in the public sector: with Italy's Ministry of Economics and Finance (MEF), and with Belgium's Federal Public Service Finance.

In the Italian example, he said MEF used analytics to calculate risk on financial guarantees, providing up-to-date reporting for improved systematic liquidity and risk management during COVID-19; work with the Belgian ministry, meanwhile, has been on using analytics and AI to predict the impact of new tax rules.

"The most recent focus for public entities has been on AI research and governance, leading to a better understanding of AI technology itself and responsible innovation," he said. "Public sector AI maturation allows for improved service, reduced costs and trusted outcomes."

Australia's National Artificial Intelligence Centre launched in December 2021. It aims to accelerate positive AI adoption and innovation to benefit businesses and communities.

Stela Solar, who is the centre's director, described AI's ability to scale as "incredibly powerful". But, she said, it is incredibly important that organisations exploring and using AI tools do so responsibly and inclusively.

In opening remarks reflecting the centre's focus, she proposed three factors that would be important to help maximise AI's impact beyond government.

The first, she said, is that more should be done to connect businesses with research- and innovation-based organisations. A national listening tour organised by the centre had found, she said, low awareness of AI's capabilities. "Unless we empower every business to be connected to those opportunities, we won't really succeed," she warned.

Her second point focused on small- and medium-sized businesses. "Much of the guidance that exists is really targeted at large enterprises to experience, create and adopt AI," she said. "But small and medium business is really struggling in this area, which is ironic as AI really presents as a great equaliser opportunity because it can deal with scale and take action at scale. It can really uplift the impact that small and medium businesses can have."

Her third point focused on community understanding, which she described as a critical factor in accelerating the uptake of AI technologies. This includes achieving engagement from diverse perspectives in how AI is shaped, created [and] implemented.

Topics including trust in AI systems, the risk of bias and overcoming scepticism were addressed further during the webinar's Q&A.

In terms of trust, what goes into any AI tool affects what comes out. "How reliable they are [AI systems] depends on how good and how unbiased the dataset was," Kerstens said. "Does it have known biases or something that is a proxy for biases? For example, sometimes people use addresses. People's addresses, especially in countries where you have very diverse populations, and where different population groups and different racial or religious groups live in particular areas, can be a proxy for religious affiliation, or for race. If you're not careful, your artificial intelligence engine is going to build in these biases, and therefore it's going to be biased."
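
A data science team can test for the proxy effect Kerstens describes before training anything. The sketch below (Python with pandas, on a deliberately extreme synthetic table) shows how a supposedly neutral field can fully predict a protected attribute.

```python
# Minimal proxy-variable check on synthetic data. If a "neutral"
# column like postcode almost fully determines a protected attribute,
# any model trained on it inherits that signal even when the
# protected column itself is dropped.
import pandas as pd

df = pd.DataFrame({
    "postcode": ["A1", "A1", "A1", "B2", "B2", "B2"],
    "religion": ["x",  "x",  "x",  "y",  "y",  "y"],
})

print(pd.crosstab(df["postcode"], df["religion"], normalize="index"))
# Each postcode maps 100% to one group: a perfect proxy.
```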

"It's not just about bias within AI, it's bias in the data," said Castle, emphasising the importance of responsible innovation across the analytics lifecycle.

Read more: Brazil's national AI strategy is unachievable, government study finds

Solar provided a further dimension, adding that organisations can often find themselves working with substantial gaps in data (which she referred to as "data deserts"). "It's actually been impressive to see some of the grassroots efforts across communities to gather datasets to increase representation and diversity in data," she said, giving examples from Queensland and New South Wales where, respectively, communities had provided data to help shape and steer investments and fill gaps in elderly health data.

On this theme she said that co-design of AI systems with the communities that the technology serves or affects "will go a long way to address some of the biases and also will go a long way into the question of what should be done and what shouldn't be done".

Scepticism about the use of AI from policymakers, particularly those who are not technologists, was discussed as a common challenge.

"Sometimes there's a push to use these technologies because they can be seen as a way to save money," observed Cannicott. "There is also nervousness because some have seen where things have gone wrong, and they don't want to be to blame."

He emphasised the importance of experimentation, governance ("having really clear accountability and decision-making frameworks to walk through the ethical challenges that might come up and how you might address them") and public engagement.

"Some polling we did fairly recently suggested that around half of people don't think the data that government collects from them is used for their benefit," he said. "There's quite a bit of a trust gap there [so] decision makers [have] to start demonstrating that they are able to use data in a way that benefits people's lives."

Keller emphasised the importance of incorporating recourse into AI systems. "If I build a system that detects fraud, and flag somebody as a villain and they're not, we need to give them an easy route to appeal that process," he said.

AI is often a purely technical conversation. But, when it comes to government use of AI, policy and politics inevitably get entwined.

To develop artificial intelligence, you need vast amounts of data. "Europeans tend to look at personal data protection in a different way than people in the US do," pointed out Kerstens.

Organisational leaders driven by doctrines could struggle to accept a role for AI. "If you run an organisation or a governmental entity based on politics, artificial intelligence isn't something you're going to like very much because it is the data speaking to you," he continued. "They do like artificial intelligence and data when the data confirms a doctrinal or political view. But if the data does not support [their] view, they'll dismiss it."

Public sector agencies also need to be savvy about the AI solutions they are buying. "Increasingly, public-sector organisations are being sold off-the-shelf tools. And actually, that's quite a dangerous space to be in," said Cannicott. "Because, for example, if you [look at] children's social care (different geographies, different populations), there's all sorts of different factors in that data. If you're not clear on where the data is coming from to build those tools initially, then you probably shouldn't be using that technology. That's also where testing and experimentation is very important."

There is clearly momentum building behind AI. But an overriding theme from the webinar was the extent to which many remain in the dark or deeply sceptical.

"Often I've seen AI be implemented by someone who's very passionate, and it stays as this hobby experiment and project," said Solar, emphasising the importance of developing a base-level understanding of AI across all levels of an organisation. "For it really to get the momentum across the organisation and to be rolled out into full production, with all the benefits that it can bring, you really need to bring along the policy decision-makers, the leaders, the entire organisational chain," she said.

Kerstens concluded by emphasising that the story of AI's growing deployment across the public sector (and beyond) remains in the early chapters. "AI is very powerful. It's just very early days," he said. "But what people are most afraid of is that they don't understand how the artificial intelligence engine thinks. We should focus on productive, useful applications and not the nefarious ones."

AI's advocates will be hoping that fewer people, over time, come to compare it to the tale of Frankenstein.

The Global Government Fintech webinar "How can AI help public authorities save money and deliver better outcomes?" was held on 4 October 2022, with the support of knowledge partners SAS and Intel. You can watch the 75-minute webinar via our dedicated event page.

Read more: AI intelligence: equipping public and civil service leaders with the skills to embrace emerging technologies

Go here to see the original:

AI-powered government finances: making the most of data and machines - Global Government Forum

Microsoft Is Building Its Own AI Hardware With Project Brainwave – Fortune

Microsoft outlined on Tuesday the next step in its quest to bring powerful artificial intelligence to market.

Tech giants, namely Microsoft and Google, have been leapfrogging each other, trying to apply AI technologies to a wide range of applications in medicine, computer security, and financial services, among other industries.

Project Brainwave, detailed in a Microsoft Research blog post, builds on the company's previously disclosed field programmable gate array (FPGA) chips, with the goal of making real-time AI processing a reality. These chips are exciting to techies because they are more flexible than the standard central processing unit (CPU) used in traditional servers and PCs: they can be reprogrammed to take on new and different tasks rather than being swapped out for entirely new hardware.

The broader story here is that Microsoft will make services based on these new smart chips available as part of its Azure cloud sometime in the future.

Microsoft (MSFT) says it is now imbuing deep neural network (DNN) capabilities into those chips. Deep neural network technology is a subset of AI that brings high-level, human-like thought processing to computers.

Microsoft is working with Altera, now a unit of Intel, on these chips. Google has been designing its own special AI chips, known as Tensor Processing Units, or TPUs. One potential benefit of Microsoft's Brainwave is that it supports multiple AI frameworks, including Google's TensorFlow; as my former Fortune colleague Derrick Harris pointed out, Google's TPUs support only TensorFlow.

Read more from the original source:

Microsoft Is Building Its Own AI Hardware With Project Brainwave - Fortune

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.
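
Each of those questions is a classification problem, and a toy version fits in a dozen lines. The sketch below (Python with scikit-learn; the features and values are invented for illustration) trains a classifier to make the weed-or-crop call.

```python
# "Is that a weed in the field? Spray it" as a toy classifier.
# Features and values are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each plant described by [leaf_width_cm, colour_index].
X = [[0.5, 0.2], [0.6, 0.3], [0.4, 0.25], [2.1, 0.9], [2.4, 0.8], [2.0, 0.95]]
y = ["crop", "crop", "crop", "weed", "weed", "weed"]

clf = DecisionTreeClassifier().fit(X, y)

plant = [[2.2, 0.85]]
if clf.predict(plant)[0] == "weed":
    print("spray herbicide")
```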

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sport Sciences.

Frustrated that its data science team was spinning its wheels, Seattle Sport Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into the original budget, and having to go back to senior management and ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, cofounder of Determined AI. "The way they're doing it is really with duct tape."

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.
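
What a TDP manages, concretely, is records like the one sketched below: one labelled video frame with bounding boxes for each object. The field names are invented for illustration and are not Labelbox's actual export schema.

```python
# One labelled video frame in a generic bounding-box format.
# Field names are invented; this is not Labelbox's export schema.
frame_annotation = {
    "frame_id": "match_042/frame_000173.png",
    "labels": [
        {"class": "player", "bbox": [412, 220, 38, 90]},  # x, y, w, h in pixels
        {"class": "player", "bbox": [655, 301, 40, 88]},
        {"class": "ball",   "bbox": [530, 410, 12, 12]},
    ],
    "labeler": "annotator_17",
    "reviewed": True,
}
print(len(frame_annotation["labels"]), "objects labelled in this frame")
```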

It's the solution that Seattle Sport Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's effort by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.
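
Calling a licensed algorithm then looks less like research and more like an ordinary API call. The sketch below follows the general shape of Algorithmia's published Python client; the API key and algorithm path are placeholders, not working values.

```python
# Licensing a marketplace algorithm instead of writing your own.
# Follows the general shape of Algorithmia's Python client; the key
# and algorithm path below are placeholders.
import Algorithmia

client = Algorithmia.client("YOUR_API_KEY")
algo = client.algo("demo/Hello/0.1.1")  # publisher/algorithm/version
print(algo.pipe("world").result)        # executes remotely, billed per call
```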

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
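
No such standard service is being described here, so the exchange below is entirely hypothetical: an invented endpoint showing roughly what "upload data and an objective, then access a trained model through an API" could look like.

```python
# Entirely hypothetical machine-learning-as-a-service exchange. The
# endpoint, fields and responses are invented to illustrate the idea;
# no real provider is being described.
import requests

resp = requests.post(
    "https://api.example-mlaas.com/v1/models",
    files={"data": open("sales_history.csv", "rb")},
    data={"objective": "predict:next_month_sales"},
)
model_id = resp.json()["model_id"]

pred = requests.post(
    f"https://api.example-mlaas.com/v1/models/{model_id}/predict",
    json={"region": "northwest", "month": "2020-07"},
)
print(pred.json())
```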

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine learning with fewer data scientists and less senior data science teams. That's important given the looming machine-learning human-resources crunch: according to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. Just as Seattle Sport Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time that it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

Excerpt from:

Navigating the New Landscape of AI Platforms - Harvard Business Review

Beer, bots and broadcasts: companies start using AI in the cloud … – Information Management

(Bloomberg) -- Back in October, Deschutes Brewery Inc.'s Brian Faivre was fermenting a batch of Obsidian Stout in a massive tank. Something was amiss; the beer wasn't fermenting at the usual temperature. Luckily, a software system triggered a warning and he fixed the problem.

"We would have had to dump an entire batch," the brewmaster said. When beer is your bottom line, that's a calamity.

The software that spotted the temperature anomaly is from Microsoft Corp. and it's a new type that uses a powerful form of artificial intelligence called machine learning. What makes it potentially revolutionary is that Deschutes rented the tool over the internet from Microsoft's cloud-computing service.

Day to day, Deschutes uses the system to decide when to stop one part of the brewing process and begin another, saving time while producing better beer, the company says.
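
Microsoft has not published the details of the model behind that alert, but the core idea of flagging a reading that drifts far from recent history can be sketched in a few lines. Everything below (the readings, the threshold, the method) is an invented, generic z-score check, not Microsoft's actual algorithm.

```python
# Generic anomaly check on fermentation temperature: flag a reading
# far from the recent baseline. The readings and 3-sigma threshold
# are invented for illustration.
from statistics import mean, stdev

readings_f = [68.0, 68.2, 67.9, 68.1, 68.0, 71.5]  # hourly tank temperatures

baseline, latest = readings_f[:-1], readings_f[-1]
z = (latest - mean(baseline)) / stdev(baseline)
if abs(z) > 3:
    print(f"ALERT: {latest}F is {z:.1f} standard deviations off baseline")
```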

The Bend, Oregon-based brewer is among a growing number of enterprises using new combinations of AI tools and cloud services from Microsoft, Amazon.com Inc. and Alphabet Inc.'s Google. C-SPAN is using Amazon image-recognition to automatically identify who is in the government TV programs it broadcasts. Insurance company USAA is planning to use similar technology from Google to assess damage from car accidents and floods without sending in human insurance adjusters. The American Heart Association is using Amazon voice recognition to power a chat bot registering people for a charity walk in June.

AI software used to require thousands of processors and lots of power, so only the largest technology companies and research universities could afford to use it. An early Google system cost more than $1 million and used about 1,000 computers. Deschutes has no time for such technical feats. It invests mostly in brewing tanks, not data centers. Only when Microsoft, Amazon and Google began offering AI software over the internet in recent years did these ideas seem plausible.

Amazon is the public cloud leader right now, but each company has its strengths. Democratizing access to powerful AI software is the latest battleground, and could decide which tech giant emerges as the ultimate winner in a cloud infrastructure market worth $25 billion this year, according to researcher IDC.

"There's a new generation of applications that require a lot more intense data science and machine learning. There is a race for who is going to provide the tools for that," said Diego Oppenheimer, chief executive officer of Algorithmia Inc., a startup that runs a marketplace for algorithms that do some of the same things as Microsoft, Amazon and Google's technology.

If the tools become widespread, they could transform work as more automation lets companies get more done with the same human work force.

C-SPAN, which runs three TV stations and five web channels, previously used a combination of closed-caption transcripts and manpower to determine when a new speaker started talking and who it was. It was so time-consuming, the network only tagged about half of the events it broadcast. C-SPAN began toying with Amazon's image-recognition cloud service the same day it launched, said Alan Cloutier, technical manager for the network's archives.

Now the network is using it to match all speakers against a database it maintains of 99,000 government officials. C-SPAN plans to enter all the data into a system that will let users search its website for things like Bernie Sanders' healthcare speeches or all the times Devin Nunes mentions Russia.
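
C-SPAN has not published its exact integration, but the building block is Amazon Rekognition's face search against a pre-indexed collection, which looks roughly like the sketch below (Python with boto3; the collection name, image file and threshold are placeholders).

```python
# Searching a video frame against a pre-indexed face collection with
# Amazon Rekognition via boto3. The collection name, image file and
# threshold are placeholders; this is not C-SPAN's actual code.
import boto3

rekognition = boto3.client("rekognition")

with open("frame.jpg", "rb") as f:
    resp = rekognition.search_faces_by_image(
        CollectionId="government-officials",  # faces indexed beforehand
        Image={"Bytes": f.read()},
        FaceMatchThreshold=90,
        MaxFaces=5,
    )

for match in resp["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```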

As companies try to better analyze, optimize and predict everything from sales cycles to product development, they are trying AI techniques like deep learning, a type of machine learning that's produced impressive results in recent years. IDC expects spending on such cognitive systems and AI to grow 55 percent a year for the next five years. The cloud-based portion of that should grow even faster, IDC analyst David Schubmehl said.

"In the fullness of time deep learning will be one of the most popular workloads on EC2," said Matt Wood, Amazon Web Services' general manager for deep learning and AI, referring to its flagship cloud service, Elastic Compute Cloud.

Pinterest Inc. uses Amazon's image-recognition service to let users take a picture of an item -- say a friend's shoes -- and see similar footwear. Schools in India and Tacoma, Washington, are using Microsoft's Azure Machine Learning to predict which students may drop out, and farmers in India are using it to figure out when to plant peanut crops, based on monsoon data. Johnson & Johnson is using Google's Jobs machine-learning algorithm to comb through candidates' skills, preferences, seniority and location to match job seekers to the right roles.

Google is late to the public cloud business and is using its AI experience and massive computational resources to catch up. A new "Advanced Solutions Lab" lets outside companies participate in training sessions with machine-learning experts that Google runs for its own staff. USAA was first to participate, tapping Google engineers to help construct software for the financial-services company. Heather Cox, USAA's chief technology officer, plans a multi-year deal with Google.

The three leaders in the public cloud today have also made capabilities like speech and image recognition available to customers who can design apps that hook into these AI features -- Microsoft offers 25 different ones.

"You can build software that is cognitive -- that can sense emotion and understand your intent, recognize speech or whats in an image -- and we provide all of that in the cloud so customers can use it as part of their software," said Microsoft vice president Joseph Sirosh.

Amazon, in November, introduced similar tools. Rekognition tells users what's in an image, Polly converts text to human-like speech and Lex -- based on the company's popular Alexa service -- uses speech and text recognition for building conversational bots. It plans more this year.

Chris Nicholson, CEO of AI company Skymind Inc., isn't sure how large the market really is for AI in the cloud. The massive data sets some companies want to use are still mostly stored in house, and it's expensive and time-consuming to move them to the cloud. "It's easier to bring the AI algorithms to the data than the other way round," he said.

Amazon's Wood disagrees, noting healthy demand for the company's Snowball appliance for transferring large amounts of information to its data centers. Interest was so high that in November Amazon introduced an 18-wheeler truck called Snowmobile that can move 100 petabytes of data.

Microsoft's Sirosh said the cloud can be powerful for companies that don't want to invest in the processing power to crunch the data needed for AI-based apps.

Take Norwegian power company eSmart Systems AS, which developed drones that photograph power lines. The company wrote its own algorithm to scan the images for locations that need repair. But it rents the massive computing power needed to run the software from Microsoft's Azure cloud service, CEO Knut Johansen said.

As the market grows and competition intensifies, each vendor will play to their strengths.

"Google has the most credibility based on tools they have; Microsoft is the one that will actually be able to convince the enterprises to do it; and Amazon has the advantage in that most corporate data in the cloud is in AWS," said Algorithmia's Oppenheimer. "It's anybody's game."

Read the original:

Beer, bots and broadcasts: companies start using AI in the cloud ... - Information Management

Think Tank: Will AI Save Humanity? – WWD

There is a lot of fear surrounding artificial intelligence. Some are related to the horror perpetuated in dystopian sci-fi films while others have deep concerns over the impact on the job market.

But I see the adoption of AI as being just as significant as the discovery of fire or the first domestication of crops and animals. We no longer need so much time spent on X; therefore, we can evolve to Y.

It will be an evolutionary process that is simply too hard to fathom now.

Here, I present five ways that AI will not only make our lives better, but make us better human beings too.

1. AI will allow us to be more human

How many of us have sat at a computer and felt more like an appendage to the machine than a human using a tool? I'll admit I have questioned quite a few times in my life whether the standard desk job was natural or proper for a human. Over the next year or two we will see AI sweeping in and removing the machine-like functions from our day-to-day jobs. Suddenly, humans will be challenged to focus on the more human side of our capabilities: things like creativity, strategy and inspiration.

In fact, it will be interesting to see a shift where parents start urging their children to move into more creative fields in order to secure safe jobs. Technical fields will of course still exist, but those gifted individuals will also be challenged to use their know-how creatively or in new ways, producing even more advanced use cases.

2. AI will make us more aware

Many industries have been drowning in data. We have become experts at collecting and storing figures, but have fallen short on truly utilizing our databases at scale and in real time. AI comes in and suddenly we have years of data turned into easy-to-communicate, actionable insights and even auto-execution in things like digital marketing. We went from flying blind to being perfectly aware of our reality.

For the fashion industry, this means our marketing initiatives will have a higher success rate, but for things like the medical world, environmental studies and so on, the impact is more powerful. What if a machine was monitoring our health and could immediately be aware of our ailment and even immediately administer the cure? What if this reduced costs and medical misdiagnosis? What if this freed up the medical community to focus on more research and faster, better treatments?

3. AI will make us more genuine

In a future where AI acts as a partner to help us become more aware of the truth and more aware of reality, it will be more and more difficult for disinterest to exist in the workplace. Humans will need to move into disciplines they genuinely connect with and are passionate about in order to remain relevant professionally. Why? Well, the machine-like jobs will begin to disappear, data will be real-time and things will constantly be evolving, so in order to stay on top of the game there will need to be a self-taught component.

It will be hard to fake the level of interest needed to meaningfully contribute at that point. This may be a hard adjustment for some, but there is already an undercurrent, or an intuitive feeling that this shift is taking place. Most of us are already reaching for a more genuine existence when we think of our careers.

4. AI will free up our collective brain power

AI is ultimately going to replace a lot of our machine-like tasks, therefore freeing up our collective time. This time will naturally need to be invested elsewhere. Historically, when shifts like this have happened across cultures we witness advancements in arts and technology. I do not think that this wave will be different, though this new industrial revolution will not be isolated to one country or culture, but in many ways, will be global.

This is the first time such a thing has happened at such a scale. Will this shift inspire a global wave of introspection? Could we be on the brink of a global renaissance?

5. AI will allow us to overcome our most pressing issues

All of which brings us to four simple words: our world will evolve. Just like our ancestors moving from hunter-gatherers into more permanent settlements, we are now moving into a new organizational structure where global, real-time data is at our fingertips.

Our most talented minds will be able to work more quickly and focus on things at a higher level. Are we witnessing the next major step in human evolution? Will we embrace our ability to be more aware, more genuine and ultimately more connected? I can only think that, if we do, we will see some incredible things in our lifetime.

If we can overcome fears and anxieties, we can pull together artificial intelligence and human intelligence that could overcome any global obstacle. Whether it is climate change, disease or poverty, we can find a solution together. More than ever, for the human race, anything is now possible.

Courtney Connell is the marketing director at luxury lingerie brand Cosabella, where she is working to change the brand's direct-to-consumer and wholesale efforts with artificial intelligence.

Read the rest here:

Think Tank: Will AI Save Humanity? - WWD

A.I. can’t solve this: The coronavirus could be highlighting just how overhyped the industry is – CNBC

Monitors display a video showing facial recognition software in use at the headquarters of the artificial intelligence company Megvii, in Beijing, May 10, 2018. Beijing is putting billions of dollars behind facial recognition and other technologies to track and control its citizens.

Gilles Sabrié | The New York Times

The world is facing its biggest health crisis in decades, but one of the world's most promising technologies, artificial intelligence (AI), isn't playing the major role some may have hoped for.

Renowned AI labs at the likes of DeepMind, OpenAI, Facebook AI Research, and Microsoft have remained relatively quiet as the coronavirus has spread around the world.

"It's fascinating how quiet it is," said Neil Lawrence, the former director of machine learning at Amazon Cambridge.

"This (pandemic) is showing what bulls--t most AI is. It's great and it will be useful one day but it's not surprising in a pandemic that we fall back on tried and tested techniques."

Those techniques include good, old-fashioned statistics and mathematical models. The latter are used to create epidemiological models, which predict how a disease will spread through a population. Right now, these are far more useful than fields of AI like reinforcement learning and natural-language processing.
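
To make the contrast concrete, the epidemiological models Lawrence is referring to can be as simple as the classic SIR model, which tracks the susceptible, infected and recovered fractions of a population with just two parameters. The sketch below is a minimal, self-contained illustration with arbitrary parameter values, not any lab's production model.

# Minimal SIR epidemic model. S + I + R = 1 (fractions of the population);
# beta is the transmission rate, gamma the recovery rate, R0 = beta / gamma.
def sir(beta=0.3, gamma=0.1, i0=0.001, days=160, dt=1.0):
    s, i, r = 1.0 - i0, i0, 0.0
    history = []
    for step in range(int(days / dt)):
        new_infections = beta * s * i * dt  # mass-action contact term
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((step * dt, s, i, r))
    return history

if __name__ == "__main__":
    for day, s, i, r in sir()[::20]:
        print(f"day {day:5.0f}  S={s:.3f}  I={i:.3f}  R={r:.3f}")

With these example values R0 = 3, so the infected fraction peaks at roughly 30% of the population before declining; fitting such parameters to noisy real-world case counts is exactly the "tried and tested" statistical work Lawrence describes.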

Of course, there are a few useful AI projects happening here and there.

In March, DeepMind announced that it had used a machine-learning technique called "free modelling" to detail the structures of six proteins associated with SARS-CoV-2, the coronavirus that causes the Covid-19 disease. Elsewhere, Israeli start-up Aidoc is using AI imaging to flag abnormalities in the lungs, and a U.K. start-up founded by Viagra co-inventor David Brown is using AI to look for Covid-19 drug treatments.

Verena Rieser, a computer science professor at Heriot-Watt University, pointed out that autonomous robots can be used to help disinfect hospitals and AI tutors can support parents with the burden of home schooling. She also said "AI companions" can help with self-isolation, especially for the elderly.

"At the periphery you can imagine it doing some stuff with CCTV," said Lawrence, adding that cameras could be used to collect data on what percentage of people are wearing masks.

Separately, a facial recognition system built by U.K. firm SCC has also been adapted to spot coronavirus sufferers instead of terrorists. In Oxford, England, Exscientia is screening more than 15,000 drugs to see how effective they are as coronavirus treatments. The work is being done in partnership with Diamond Light Source, the U.K.'s national synchrotron.

But AI's role in this pandemic is likely to be more nuanced than some may have anticipated. AI isn't about to get us out of the woods any time soon.

"It's kind of indicating how hyped AI was," said Lawrence, who is now a professor of machine learning at the University of Cambridge. "The maturity of techniques is equivalent to the noughties internet."

AI researchers rely on vast amounts of nicely labeled data to train their algorithms, but right now there isn't enough reliable coronavirus data to do that.

"AI learns from large amounts of data which has been manually labeled a time consuming and expensive task," said Catherine Breslin, a machine learning consultant who used to work on Amazon Alexa.

"It also takes a lot of time to build, test and deploy AI in the real world. When the world changes, as it has done, the challenges with AI are going to be collecting enough data to learn from, and being able to build and deploy the technology quickly enough to have an impact."

Breslin agrees that AI technologies have a role to play. "However, they won't be a silver bullet," she said, adding that while they might not directly bring an end to the virus, they can make people's lives easier and more fun while they're in lockdown.

The AI community is thinking long and hard about how it can make itself more useful.

Last week, Facebook AI announced a number of partnerships with academics across the U.S.

Meanwhile, DeepMind's polymath leader Demis Hassabis is helping the Royal Society, the world's oldest independent scientific academy, on a new multidisciplinary project called DELVE (Data Evaluation and Learning for Viral Epidemics). Lawrence is also contributing.

See original here:

A.I. can't solve this: The coronavirus could be highlighting just how overhyped the industry is - CNBC

Google’s AI-powered Assistant is coming to millions more Android phones – TNW

Last year, when Google first showed off its clever Assistant chatbot for searching the web, playing music and video from your preferred apps and controlling your smart home appliances, it was exclusive to the company's Pixel phone, Home speaker and the Allo messaging app.

If you haven't tried it yet, you'll be glad to know that Assistant is rolling out to many more devices; Google says it'll make the bot available to all phones running Android 6.0 and newer, as long as they're running Google Play Services.


One of the first handsets to get it is the new LG G6; starting this week, it'll roll out to English-speaking users in the US, Australia, Canada and the UK, as well as German speakers in Germany. More languages and countries will be covered over the coming year.

With that, Assistant is set to become a household name. In addition to mobile devices, the bot is also available on Android Wear 2.0-based smartwatches and is coming soon to TVs and cars. Today's announcement will likely stand Google in good stead as it takes on Amazon's Alexa, which is also quickly gaining ground and expanding its list of capabilities.

The Google Assistant is coming to more Android phones on The Keyword

Originally posted here:

Google's AI-powered Assistant is coming to millions more Android phones - TNW

Meet The AI Designed To Help Humans, Not Replace Them – Forbes

ASAPP founder Gustavo Sapoznik developed software that trains customer-service reps to be radically more productive, winning the young startup an $800 million valuation.

If you've ever felt your blood boil after sitting on hold for 40 minutes before reaching an agent . . . who then puts you back on hold, consider that it's often even worse on the other end of the line. A customer-service representative for JetBlue, for instance, might have to flip rapidly among a dozen or more computer programs just to link your frequent-flier number to a specific itinerary.

"Imagine that cognitive load, while you have someone screaming at you or complaining about some serious problem, and you're swiveling between 20 screens to see which one you need to be able to help this person," says Gustavo Sapoznik, 34, the founder and CEO of ASAPP, a New York City-based developer of AI-powered customer-service software.

Sapoznik remembers just such a scene while shadowing a call-center agent at a very large company (he won't name names), watching the worker navigate a "Frankenstack" patchwork of software, entering a caller's information into six different billing systems before locating it. That was an eye-opening moment.

The problem has only gotten worse during the pandemic. Call centers for banks, finance companies, airlines and service companies are being overrun. Call volumes for ASAPP's customers have spiked between 200% and 900% since the crisis began, according to Sapoznik. Making call centers work isn't the sexiest use of cutting-edge AI, but it's a lucrative one.


According to estimates from Forrester Research, global revenues for call centers are around $15 billion a year. In all, ASAPP has raised $260 million at a recent valuation of $800 million, per data from PitchBook. Silicon Valley heavy hitters including Kleiner Perkins chairman John Doerr and former Cisco CEO John Chambers are on ASAPP's board, along with Dave Strohm of Greylock and March Capital's Jamie Montgomery. Clients include JetBlue, Sprint and satellite TV provider Dish, all of whom sign up for multiyear contracts contributing to ASAPP's estimated $40 million in revenue, according to startup tracker Growjo.

ASAPP has drawn this investor interest by flipping AI on its head. For years engineers have perfected artificial intelligence to perform repetitive tasks better than humans. Rather than having people train AI systems to replace them, ASAPP makes AI that trains people to be radically more productive.

"Pure automation capabilities are [used] out of an imperative to reduce costs, but at the expense of customer experience. They've been around for 20 or 30 years but they haven't really solved much of the problem," Sapoznik says. ASAPP's thinking: if we can automate half of this thing away, we can get to the same place by making people twice as productive.

The company is a standout on Forbes' second annual AI 50 list of up-and-coming companies to watch, rated highly for its use of artificial intelligence as a core attribute by an expert panel of judges. Its focus on using AI to keep humans in the loop is also what sets ASAPP apart, although it's competing in the same call-center sandbox as fellow AI 50 listees Observe.ai of San Francisco and Cresta, which is chaired by AI legend Sebastian Thrun, the Stanford professor who greenlit Google's self-driving car program.

ASAPP's focus is natural-language processing and converting speech to text using proprietary technology developed by a group led by a founding member of the speech team for Apple's Siri. Its software then displays suggested responses or relevant resources on a call-center agent's screen, minimizing the need to toggle between applications. Sapoznik and his engineers also studied the most effective human representatives, using machine learning to replicate their expertise in ASAPP's software. That software then coaches call-center staff on effective ways to respond to customer queries and tracks down critical information. If a caller asks how to cancel a flight, for example, ASAPP software automatically pulls up helpful documents for the agent to browse. If a customer reads out a 16-digit account number, it's instantly transcribed and displayed on the agent's screen for easy reference.
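
As a flavor of one small piece of that kind of pipeline, here is a hedged sketch of spotting a 16-digit account number in a transcribed utterance and normalizing it for display. This is illustrative only, not ASAPP's code; the pattern and function name are invented for the example.

import re
from typing import Optional

# Callers read long numbers in chunks ("4111 1111 ..."), so the pattern
# allows an optional space or dash between consecutive digits.
ACCOUNT_RE = re.compile(r"\b(?:\d[ -]?){15}\d\b")

def extract_account_number(transcript: str) -> Optional[str]:
    """Return the first 16-digit account number in a transcript, or None."""
    match = ACCOUNT_RE.search(transcript)
    if match is None:
        return None
    return re.sub(r"[ -]", "", match.group())  # strip separators for display

print(extract_account_number(
    "sure, my card number is 4111 1111 1111 1111, and the flight is tomorrow"
))  # -> 4111111111111111

The real product layers speech recognition and learned response suggestion on top; the point of the sketch is simply that once speech becomes text, surfacing the critical detail for the agent is a tractable problem.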

When things go right, companies using ASAPP technology see the number of calls successfully handled per hour increase by anywhere from 40% to more than 150%. That can mean lower stress for call-center workers, which in turn reduces the high turnover associated with that line of work.

A licensed pilot with a fondness for classical music who studied math at the University of Chicago, Sapoznik first applied his coding skills to his family's real estate and financial business in Miami. "I'd been doing some work in investments where you build machine-learning product capabilities to trade the markets. The impact there is that there's a number that goes up or goes down," he says. Merely making money didn't excite him.

Sapoznik hopes that optimizing call centers is just a start for ASAPP, which he founded in 2014. He's actively searching for similar gigantic-size business opportunities "with brokenness and tons of interesting data." He thinks ASAPP can do that because it's built like a research organization: 80% of its 300 employees are researchers or engineers.

"The exciting thing about ASAPP is not so much what they're going after now, but whether or not they can go beyond that," says Forrester analyst Kjell Carlsson. "They, like so many of us, see the incredible potential of [using] natural language processing for augmented intelligence."

Summarizing ASAPP's potential, Sapoznik draws on his experience as a pilot: in aviation, automation has steadily transformed the cockpit. "It's increased safety from a pretty dramatic perspective, and it hasn't gotten rid of pilots yet," he says. "It's just taken away chunks of their workloads."


Go here to see the original:

Meet The AI Designed To Help Humans, Not Replace Them - Forbes

AI.Reverie wins contract to improve navigation capabilities for USAF – Airforce Technology

]]> AI.Reverie has secured SBIR Phase 2 contract from AFWERX. Credit: Markus Spiske on Unsplash.


AI.Reverie has secured a Phase 2 Small Business Innovation Research (SBIR) contract from AFWERX for the US Air Force (USAF).

Under the $1.5m contract, AI.Reverie will build artificial intelligence (AI) algorithms and improve navigation capabilities supporting the 7th Bomb Wing at Dyess Air Force Base (AFB).

The company will use synthetic data to train vision algorithms for navigation and improve their accuracy, working through the USAF's Rapid Capabilities Office.

Synthetic data, meaning computer-generated imagery, is cheaper and faster to produce than hand-labelled photos, easing the limitations associated with real data.
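
The appeal is that a rendered scene comes with pixel-perfect labels for free. The toy sketch below, assuming the Pillow imaging library, draws a crude "target" at a random position and emits its bounding box as the annotation; a platform like AI.Reverie's renders photorealistic 3D scenes instead, but the labeling principle is the same.

import random
from PIL import Image, ImageDraw

def make_synthetic_sample(size=256):
    """Render one labeled image: returns (image, bounding box).

    Because we place the object ourselves, the ground-truth label
    is known exactly, with no hand annotation needed."""
    bg = tuple(random.randint(40, 120) for _ in range(3))  # random "terrain"
    img = Image.new("RGB", (size, size), bg)
    draw = ImageDraw.Draw(img)

    # Drop a randomly sized, randomly placed target into the scene.
    w, h = random.randint(20, 60), random.randint(10, 30)
    x, y = random.randint(0, size - w), random.randint(0, size - h)
    draw.rectangle([x, y, x + w, y + h], fill=(200, 200, 210))

    return img, (x, y, x + w, y + h)  # (left, top, right, bottom)

# A hundred labeled samples with zero manual labeling effort.
dataset = [make_synthetic_sample() for _ in range(100)]
print(dataset[0][1])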

The technology underpins vision algorithms needed to save lives during operations.

The Phase 2 SBIR contract follows AI.Reverie's co-publication with IQT lab CosmiQ Works, which highlighted the value of synthetic data in training computer vision algorithms.

Furthermore, the research partners released RarePlanes, an open dataset of real and synthetic overhead imagery, for academic and commercial use.

USAF Major Anthony Bunker said: "As the world has gotten smaller, the ability to navigate based on visual terrain features has become an ever-increasing challenge.

"Computer vision algorithms can be trained to recognise these worldwide terrain features by ingesting large amounts of diverse data.

"We are excited to collaborate with AI.Reverie to improve navigation capabilities given the company's ability to generate fully annotated data at scale with its synthetic data platform."

In May this year, AI.Reverie and Green Revolution Cooling (GRC) secured an AFWERX SBIR Phase 1 contract from the USAF to enhance computer vision models for the US Department of Defense (DoD).

See the article here:

AI.Reverie wins contract to improve navigation capabilities for USAF - Airforce Technology

Lunar Rover Footage Upscaled With AI Is as Close as You’ll Get to the Experience of Driving on the Moon – Gizmodo

The last time astronauts walked on the moon was in December of 1972, decades before high-definition video cameras were available. They relied on grainy, low-res analog film to record their adventures, which makes it hard for viewers to feel connected to what's going on. But using modern AI techniques to upscale classic NASA footage and increase its frame rate suddenly makes it feel like you're actually on the moon.

The YouTube channel Dutchsteammachine has recently uploaded footage from the Apollo 16 mission that looks like nothing you've ever seen before, unless you were an actual Apollo astronaut. Originally captured on 16-millimeter film at just 12 frames per second, footage of the lunar rover heading to Station 4, located on the rim of the moon's Shorty Crater, was increased to a resolution of 4K and interpolated so that it now runs at 60 frames per second using the DAIN artificial intelligence platform.
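
DAIN itself is a learned, depth-aware network, but the basic idea of frame-rate interpolation can be shown with the crudest possible stand-in: synthesizing in-between frames by cross-fading neighbors. The sketch below is that naive version only; the linear blend produces ghosting on moving objects, which DAIN's motion and depth estimation is designed to avoid.

import numpy as np

def naive_interpolate(frames, factor=5):
    """Upsample a (T, H, W, C) clip in time by cross-fading neighbors.

    12 fps with factor 5 approaches 60 fps. A learned method like DAIN
    instead estimates per-pixel motion and depth and warps the frames,
    avoiding the ghosting this linear blend produces."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for k in range(factor):
            t = k / factor
            out.append(((1 - t) * a + t * b).astype(frames.dtype))
    out.append(frames[-1])
    return np.stack(out)

# Tiny demo clip: 12 random 4x4 "frames" become 56 (11 * 5 + 1).
clip = np.random.randint(0, 256, (12, 4, 4, 3), dtype=np.uint8)
print(naive_interpolate(clip).shape)  # (56, 4, 4, 3)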

Most of us immediately turn off the motion-smoothing options on a new TV, but here's a demonstration of how, when done properly, interpolation can dramatically change the feeling of what you're watching. Even without immersive VR goggles, you genuinely feel like you're riding shotgun on the lunar rover.

The footage has been synced to the original audio from this particular mission, which also serves to humanize the astronauts if you listen along. Oftentimes, when bundled up in their thick spacesuits, the Apollo astronauts seem like characters from a science fiction movie. But listening to their interactions and their narration of what they're experiencing during this mission, they feel human again, like a couple of friends out on a casual Sunday afternoon drive, even though that drive is taking place over 238,000 miles from Earth.


See the rest here:

Lunar Rover Footage Upscaled With AI Is as Close as You'll Get to the Experience of Driving on the Moon - Gizmodo

Ai Definition and Meaning – Bible Dictionary

AI

a'-i (`ay, written always with the definite article, ha-`ay, probably meaning "the ruin," kindred root, `awah):

(1) A town of central Palestine, in the tribe of Benjamin, near and just east of Bethel (Genesis 12:8). It is identified with the modern Haiyan, just south of the village Der Diwan (Conder in HDB; Delitzsch in Commentary on Genesis 12:8) or with a mound, El-Tell, to the north of the modern village (Davis, Dict. Biblical). The name first appears in the earliest journey of Abraham through Palestine (Genesis 12:8), where its location is given as east of Bethel, and near the altar which Abraham built between the two places. It is given similar mention as he returns from his sojourn in Egypt (Genesis 13:3). In both of these occurrences the King James Version has the form Hai, including the article in transliterating. The most conspicuous mention of Ai is in the narrative of the Conquest. As a consequence of the sin of Achan in appropriating articles from the devoted spoil of Jericho, the Israelites were routed in the attack upon the town; but after confession and expiation, a second assault was successful, the city was taken and burned, and left a heap of ruins, the inhabitants, in number twelve thousand, were put to death, the king captured, hanged and buried under a heap of stones at the gate of the ruined city, only the cattle being kept as spoil by the people (Joshua 7; 8). The town had not been rebuilt when Jos was written (Joshua 8:28). The fall of Ai gave the Israelites entrance to the heart of Canaan, where at once they became established, Bethel and other towns in the vicinity seeming to have yielded without a struggle. Ai was rebuilt at some later period, and is mentioned by Isa (Isaiah 10:28) in his vivid description of the approach of the Assyrian army, the feminine form (`ayyath) being used. Its place in the order of march, as just beyond Michmash from Jerusalem, corresponds with the identification given above. It is mentioned also in post-exilic times by Ezra 2:28 and Nehemiah 7:32, (and in Nehemiah 11:31 as, `ayya'), identified in each case by the grouping with Bethel.

(2) The Ai of Jeremiah 49:3 is an Ammonite town, the text probably being a corruption of `ar; or ha-`ir, "the city" (BDB).

Edward Mack

Visit link:

Ai Definition and Meaning - Bible Dictionary