3 ways AI is changing the game for recruiters and talent managers – Forbes

AI transforms the nature of work, but doesn't change the jobs to be done.

From voice-activated smart speakers like Google Home to the spam filter on our work emails, AI has infiltrated our daily lives. Depending on who you talk to, AI will either enable us to do our jobs better or make them completely redundant. The reality is that AI transforms the nature of work, but doesn't change the jobs to be done. The aspects that make us inherently human (critical reasoning, communication and empathy) will still be vital attributes in the future of work.

If you give a computer a problem, it learns from its interactions with the problem to identify a solution faster than humans can. But if you ask a computer to look at two paintings and say which is more interesting, it cannot. Unlike people, artificial intelligence is not able to think abstractly and emotionally.

By supplementing human intelligence and creativity with technology that reduces menial processes, there is a great opportunity to enable recruiters, not replace them. McKinsey research shows that over two thirds of businesses (69%) believe AI brings value to their Human Resources function.

Here are three ways AI improves recruitment practices:

1. Reducing unconscious bias

People have an unintentional tendency to make decisions based on their underlying beliefs, experiences and feelings; it's how we make sense of the world around us. And recruiting is no different. In fact, there's bias in something as straightforward as the words we choose.

Research shows that job descriptions that use descriptive words like "support" and "understanding" are biased towards female applicants, whereas "competitive" and "lead" are biased towards males. When we use these loaded words, we're limiting the pool of candidates who will apply for an open role, making the recruiting process biased and affecting hiring outcomes. AI-enabled tools such as Textio can help recruiters identify biased wording in role descriptions. Removing these words and making descriptions neutral and inclusive can lead to 42% more applications.
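As a rough illustration of how this kind of screening can work, the sketch below checks a job description against small lists of gender-coded words. The word lists are invented for the example and far smaller than the lexicons a tool like Textio actually uses.

```python
# A minimal sketch of gendered-language screening. The word lists here are
# illustrative stand-ins, not Textio's actual lexicon.
GENDER_CODED = {
    "feminine-coded": {"support", "understanding", "collaborative", "nurture"},
    "masculine-coded": {"competitive", "lead", "dominant", "aggressive"},
}

def flag_coded_words(job_description: str) -> dict:
    """Return the gender-coded words found in a job description."""
    words = {w.strip(".,;:!?").lower() for w in job_description.split()}
    return {tone: sorted(words & lexicon) for tone, lexicon in GENDER_CODED.items()}

ad = "We need a competitive self-starter to lead and support our sales team."
print(flag_coded_words(ad))
# {'feminine-coded': ['support'], 'masculine-coded': ['competitive', 'lead']}
```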

Unconscious bias can extend beyond our choice of words to the decisions we make about candidates. Unintentionally, recruiters and hiring managers can decide to interview someone based on the university they attended or even where they are from, or view them as a cultural fit based on their interview answers. But decisions based on these familiarities disregard important factors like a candidate's previous work experience and skills. When AI is used to select the shortlist for interviews, it can circumvent bias that would otherwise be introduced by manually scanning resumes.

While AI can reduce bias, this is only true if the programs themselves are designed carefully. Machine learning algorithms are subject to the potentially biased programming choices of the people who build them and the data sets they're given. While this technology is still being fine-tuned, we need to focus on finding the balance between artificial intelligence and human intelligence. We shouldn't rely solely on one or the other, but instead use them to complement each other.

2. Improving recruitment for hiring managers, recruiters and candidates

It goes without saying that recruitment is a people-first function. Candidates want to speak to a recruiter or hiring manager and form an authentic connection, which they won't be able to get from interacting with a machine.

Using AI, recruiters can remove tedious and time-consuming processes from their day, freeing more time to focus on engaging candidates as part of the assessment process.

XOR is a good example of this. The platform enables pre-screening of applications and qualifications, plus automatic interview scheduling. By taking these administrative tasks out of a recruiter's day, it frees recruiters to optimise their time and focus on finding the best fit for the role.

AI also helps create an engaging and personalised candidate experience. AI can be leveraged to nurture talent pools by serving relevant content to candidates based on their previous applications. At different stages of the process, AI can ask candidates qualifying questions, learn what types of roles they would be interested in, and serve them content that assists in their application.

But AI does have a different impact on the candidate experience depending on the stage of the recruitment process at which it is implemented. Some candidates prefer interacting with a chatbot at the start of the application process, as they feel more comfortable asking general questions about salary and job location. For delivery firm Yodel, implementing chatbots at the initial stage of the application process resulted in a decrease in applicant drop-off rates: now only 8% of applicants choose not to proceed with their application, compared to a previous drop-off rate of 50-60%.

When it comes to more meaningful discussions, such as how the role aligns with a candidate's career goals and how they can progress within the company, human interaction is highly valued. Considering when and how you use AI to enhance the recruitment experience is key to getting the best results.

3. Identifying the best candidate for a role

At its core, recruitment is about finding the best person for a role. During the screening process, recruiters can use AI to identify key candidates by mapping the traits and characteristics of previous high-performing employees in the same role to find a match. This means recruiters are able to fill open roles more quickly and ensure that new hires are prepared to contribute to their new workplace.

PredictiveHire is one of these tools. It uses AI to run initial screening of job applications, making the process faster and more objective by pulling data and trends from a company's previous high-performing employees and scanning them against candidate applications. With 88% accuracy, PredictiveHire identifies the traits inherent to a company's high performers so recruiters can progress matching candidates to the interview stage.
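The mechanic described here can be pictured as a similarity score between a candidate's trait profile and a profile built from past high performers. The sketch below is an invented toy, not PredictiveHire's method; the traits, numbers, and choice of cosine similarity are assumptions for illustration.

```python
import numpy as np

# Invented toy version of trait-matching screening; not PredictiveHire's
# actual method. Trait names, values, and the metric are assumptions.
high_performers = np.array([
    [0.9, 0.7, 0.8],   # e.g., problem solving, communication, persistence
    [0.8, 0.9, 0.7],
])
centroid = high_performers.mean(axis=0)   # the "high-performer profile"

def match_score(candidate: np.ndarray) -> float:
    """Cosine similarity between a candidate's trait vector and the profile."""
    return float(candidate @ centroid /
                 (np.linalg.norm(candidate) * np.linalg.norm(centroid)))

print(round(match_score(np.array([0.6, 0.9, 0.5])), 3))  # ~0.971
```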

Undoubtedly, we will continue to see more exciting applications of AI in the next few years. The talent search process can certainly be streamlined and improved by incorporating AI. For recruiters, it is about finding the right balance in marrying AI and human intelligence to make the hiring process what it should be: seamless and engaging.

AI enhanced content coming to future Android TVs – Android Authority

Whenever an event like IFA rolls around, the artificial intelligence buzzword emerges to dazzle prospective customers and investors. However, the number of actually impressive use cases for AI is increasing. TCL, one of the industry's biggest TV brands, showcased the AI capabilities of its second-generation AiPQ Engine onstage at IFA. Get ready for Android TV and other smart TVs with AI enhancements in the near future.

TCL's little chip leverages machine learning to recognize parts of video content, such as landscape backgrounds or faces, to ensure accurate skin mapping. The AI processor can also adjust audio playback based on the scene content or music, and it can raise or lower the volume based on ambient sounds in your living room. TCL also envisions this being used to dynamically upscale 4K content using super-resolution enhancements.

The bottom line is that this AI display processor can detect and enhance both audio and visual content dynamically, rather than relying on adaptive presets and standard settings.

It looks pretty nifty, especially when combined with TCL's other TV innovations. These include QLED and mini LED display technology, hands-free voice controls for the living room, and pop-up cameras for making calls and chatting on social media. Keep an eye out for future TCL Android TVs sporting these enhanced AI capabilities.

‘Why not change the world?’: Grant will fast-track AI tools for screening high-risk COVID cases – Health Imaging

While many existing algorithms fail to account for comorbidities during screening, Yan said his team will incorporate such information, including scan data to assess lung function, demographic information, vital signs and laboratory blood tests.

Many imaging societies, including the American College of Radiology, have urged physicians to avoid using CT as a first-line tool to screen patients during the pandemic. Other countries, including hard-hit Northern Italy, however, have leaned heavily on the modality.

Yan's project is among the latest in a long line of efforts harnessing the power of AI to spot people at higher risk from the novel virus.

"It is tremendously important to me and my team that we can contribute our knowledge and skills to fight the COVID-19 pandemic," Yan said. "It is our way to answer 'Why not change the world?', the unofficial Rensselaer motto."

Massachusetts General Hospital is also partnering with RPI to bring this project to fruition.

Protecting privacy in an AI-driven world – Brookings Institution

Our world is undergoing an information Big Bang, in which the universe of data doubles every two years and quintillions of bytes of data are generated every day. For decades, Moore's Law on the doubling of computing power every 18-24 months has driven the growth of information technology. Now, as billions of smartphones and other devices collect and transmit data over high-speed global networks, store it in ever-larger data centers, and analyze it using increasingly powerful and sophisticated software, Metcalfe's Law comes into play. It treats the value of a network as a function of the square of the number of nodes, meaning that network effects exponentially compound this historical growth in information. As 5G networks and eventually quantum computing deploy, this data explosion will grow even faster and bigger.
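Stated as formulas, the two rules of thumb this paragraph leans on look like this (a simplification for orientation; both "laws" are empirical observations rather than physical laws):

```latex
% Moore's Law: capacity C doubles every T years; Metcalfe's Law: the value V
% of a network grows with the square of its n nodes.
\[
  C(t) = C_0 \cdot 2^{\,t/T} \quad (\text{Moore: } T \approx 1.5\text{--}2\ \text{years}),
  \qquad
  V(n) \propto n^{2} \quad (\text{Metcalfe: } n\ \text{nodes})
\]
```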

The impact of big data is commonly described in terms of three Vs: volume, variety, and velocity. More data makes analysis more powerful and more granular. Variety adds to this power and enables new and unanticipated inferences and predictions. And velocity facilitates analysis as well as sharing in real time. Streams of data from mobile phones and other online devices expand the volume, variety, and velocity of information about every facet of our lives and put privacy into the spotlight as a global public policy issue.

Artificial intelligence likely will accelerate this trend. Much of the most privacy-sensitive data analysis today, such as search algorithms, recommendation engines, and adtech networks, is driven by machine learning and decisions by algorithms. As artificial intelligence evolves, it magnifies the ability to use personal information in ways that can intrude on privacy interests by raising analysis of personal information to new levels of power and speed.

Facial recognition systems offer a preview of the privacy issues that emerge. With the benefit of rich databases of digital photographs available via social media, websites, driver's license registries, surveillance cameras, and many other sources, machine recognition of faces has progressed rapidly from fuzzy images of cats to rapid (though still imperfect) recognition of individual humans. Facial recognition systems are being deployed in cities and airports around America. However, China's use of facial recognition as a tool of authoritarian control in Xinjiang and elsewhere has awakened opposition to this expansion and prompted calls for a ban on the use of facial recognition. Owing to these concerns, the cities of Oakland, Berkeley, and San Francisco in California, as well as Brookline, Cambridge, Northampton, and Somerville in Massachusetts, have adopted bans on the technology. California, New Hampshire, and Oregon have all enacted legislation banning use of facial recognition with police body cameras.

This policy brief explores the intersection between AI and the current privacy debate. As Congress considers comprehensive privacy legislation to fill growing gaps in the current checkerboard of federal and state privacy laws, it will need to consider whether and how to address the use of personal information in artificial intelligence systems. In this brief, I discuss some potential concerns regarding artificial intelligence and privacy, including discrimination, ethical use, and human control, as well as the policy options under discussion.

The challenge for Congress is to pass privacy legislation that protects individuals against any adverse effects from the use of personal information in AI, but without unduly restricting AI development or ensnaring privacy legislation in complex social and political thickets. The discussion of AI in the context of the privacy debate often brings up the limitations and failures of AI systems, such as predictive policing that could disproportionately affect minorities, or Amazon's failed experiment with a hiring algorithm that replicated the company's existing disproportionately male workforce. Both raise significant issues, but privacy legislation is complicated enough even without packing in all the social and political issues that can arise from uses of information. To evaluate the effect of AI on privacy, it is necessary to distinguish between data issues that are endemic to all AI, like the incidence of false positives and negatives or overfitting to patterns, and those that are specific to use of personal information.

The privacy legislative proposals that involve these issues do not address artificial intelligence by name. Rather, they refer to "automated decisions" (borrowed from EU data protection law) or "algorithmic decisions" (used in this discussion). This language shifts the focus from the use of AI as such to the use of personal data in AI, and to the impact this use may have on individuals. The debate centers in particular on algorithmic bias and the potential for algorithms to produce unlawful or undesired discrimination in the decisions to which the algorithms relate. These are major concerns for civil rights and consumer organizations that represent populations that suffer undue discrimination.

Addressing algorithmic discrimination presents basic questions about the scope of privacy legislation. First, to what extent can or should legislation address issues of algorithmic bias? Discrimination is not self-evidently a privacy issue, since it presents broad social issues that persist even without the collection and use of personal information and fall under the domain of various civil rights laws. Moreover, opening these laws up for debate could effectively open a Pandora's box because of the charged political issues they touch on and the multiple congressional committees with jurisdiction over various such issues. Even so, discrimination is based on personal attributes such as skin color, sexual identity, and national origin. Use of personal information about these attributes, either explicitly or, more likely and less obviously, via proxies, for automated decision-making that is against the interests of the individual involved thus implicates privacy interests in controlling how information is used.

Second, protecting such privacy interests in the context of AI will require a change in the paradigm of privacy regulation. Most existing privacy laws, as well as current Federal Trade Commission enforcement against unfair and deceptive practices, are rooted in a model of consumer choice based on notice-and-choice (also referred to as notice-and-consent). Consumers encounter this approach in the barrage of notifications and banners linked to lengthy and uninformative privacy policies and terms and conditions that we ostensibly consent to but seldom read. This charade of consent has made it obvious that notice-and-choice has become meaningless. For many AI applications, such as the smart traffic signals and other sensors needed to support self-driving cars, it will become utterly impossible.

Although almost all bills on Capitol Hill still rely on the notice-and-choice model to some degree, key congressional leaders as well as privacy stakeholders have expressed a desire to change this model by shifting the burden of protecting individual privacy from consumers to the businesses that collect data. In place of consumer choice, their model focuses on business conduct by regulating companies' processing of data: what they collect and how they can use and share it. Addressing data processing that results in algorithmic discrimination can fit within this model.

A model focused on data collection and processing may affect AI and algorithmic discrimination in several ways.

In addition to these provisions of general applicability, which may affect algorithmic decisions indirectly, a number of proposals specifically address the subject.

The responses to AI currently under discussion in privacy legislation take two main forms. The first targets discrimination directly. A group of 26 civil rights and consumer organizations wrote a joint letter advocating to prohibit or monitor use of personal information with discriminatory impacts on people of color, women, religious minorities, members of the LGBTQ+ community, persons with disabilities, persons living on low incomes, immigrants, and other vulnerable populations. The Lawyers' Committee for Civil Rights Under Law and Free Press Action have incorporated this principle into model legislation aimed at data discrimination affecting economic opportunity, public accommodations, or voter suppression. This model is substantially reflected in the Consumer Online Privacy Rights Act, introduced in the waning days of the 2019 congressional session by Senate Commerce Committee ranking member Maria Cantwell (D-Wash.), which includes a similar provision restricting the processing of personal information that discriminates against or classifies individuals on the basis of protected attributes such as race, gender, or sexual orientation. The Republican draft counterproposal addresses the potential for discriminatory use of personal information by calling on the Federal Trade Commission to cooperate with agencies that enforce discrimination laws and to conduct a study.

This approach to algorithmic discrimination implicates debates over private rights of action in privacy legislation. The possibility of such individual litigation is a key point of divergence between Democrats aligned with consumer and privacy advocates on one hand, and Republicans aligned with business interests on the other. The former argue that private lawsuits are a needed force multiplier for federal and state enforcement, while the latter express concern that class action lawsuits, in particular, burden business with litigation over trivial issues. In the case of many of the kinds of discrimination enumerated in algorithmic discrimination proposals, existing federal, state, and local civil rights laws enable individuals to bring claims for discrimination. Any federal preemption or limitation on private rights of action in federal privacy legislation should not impair these laws.

The second approach addresses risk more obliquely, with accountability measures designed to identify discrimination in the processing of personal data. Numerous organizations and companies, as well as several legislators, propose such accountability in various forms.

A sense of fairness suggests such a safety valve should be available for algorithmic decisions that have a material impact on individuals' lives. Explainability requires (1) identifying algorithmic decisions, (2) deconstructing specific decisions, and (3) establishing a channel by which an individual can seek an explanation. Reverse-engineering algorithms based on machine learning can be difficult, even impossible, and the difficulty increases as machine learning becomes more sophisticated. Explainability therefore entails a significant regulatory burden and constraint on the use of algorithmic decision-making and, in this light, should be concentrated in its application, as the EU has done (at least in principle) with its "legal effects" or "similarly significant effects" threshold. As understanding increases about the comparative strengths of human and machine capabilities, having a human in the loop for decisions that affect people's lives offers a way to combine the power of machines with human judgment and empathy.

Because of the difficulties of foreseeing machine learning outcomes as well as reverse-engineering algorithmic decisions, no single measure can be completely effective in avoiding perverse effects. Thus, where algorithmic decisions are consequential, it makes sense to combine measures. Advance measures such as transparency and risk assessment, combined with the retrospective checks of audits and human review of decisions, could help identify and address unfair results; together, these measures complement each other and add up to more than the sum of the parts. Risk assessments, transparency, explainability, and audits would also strengthen existing remedies for actionable discrimination by providing documentary evidence that could be used in litigation. Not all algorithmic decision-making is consequential, however, so these requirements should vary according to the objective risk.

The window for this Congress to pass comprehensive privacy legislation is narrowing. While the Commerce Committees in both houses of Congress have been working on a bipartisan basis throughout 2019 and have put out discussion drafts, they have yet to reach agreement on a bill. Meanwhile, the California Consumer Privacy Act went into effect on Jan. 1, 2020, impeachment and war powers have crowded out other issues, and the presidential election is going into full swing.

In whatever window remains to pass privacy legislation before the 2020 election, the treatment of algorithmic decision-making is a substantively and politically challenging issue that will need a workable resolution. For a number of civil rights, consumer, and other civil society groups, establishing protections against discriminatory algorithmic decision-making is an essential part of legislation. In turn, it will be important to Democrats in Congress. At a minimum, some affirmation that algorithmic discrimination based on personal information is subject to existing civil rights and nondiscrimination laws, as well as some additional accountability measures, will be essential to the passage of privacy legislation.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to the Brookings Institution's Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon and Intel provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Elon Musk Reminds Us of the Possible Dangers of Unregulated AI – Futurism

The Machines Will Win

Late Friday night, Elon Musk tweeted a photo reigniting the debate over AI safety. The tongue-in-cheek post contained a picture of a gambling addiction ad stating "In the end the machines will win," not so obviously referring to gambling machines. On a more serious note, Musk said that the danger AI poses is more of a risk than the threat posed by North Korea.

In an accompanying tweet, Musk elaborated on the need for regulation in the development of artificially intelligent systems. This echoes his remarks earlier this month, when he said of AI: "I think anything that represents a risk to the public deserves at least insight from the government, because one of the mandates of the government is the public well-being."

From scanning the comments on the tweets, it seems that most people agree with Musk's assessment, with varying degrees of snark. One user, Daniel Pedraza, expressed a need for adaptability in any regulatory efforts: "[We] need a framework that's adaptable. No single fixed set of rules, laws, or principles will be good for governing AI. [The] field is changing and adapting continually, and any fixed set of rules that is incorporated risks being ineffective quite quickly."

Many experts are leery of developing AI too quickly. The possible threats it could pose may sound like science fiction, but they could ultimately prove to be valid concerns.

Experts like Stephen Hawking have long warned about the potential for AI to destroy humanity. In a 2014 interview, the renowned physicist stated that "the development of artificial intelligence could spell the end of the human race." He also sees the proliferation of automation as a detrimental force to the middle class. Another expert, Michael Vassar, chief science officer of MetaMed Research, stated: "If greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order."

It's clear, at least in the scientific community, that unfettered development of AI may not be in humanity's best interest. Efforts are already underway to formulate rules to ensure the development of ethically aligned AI. The Institute of Electrical and Electronics Engineers has presented the first draft of guidelines which it hopes will steer developers in the correct direction.

Additionally, the biggest names in tech are also coming together to self-regulate before government steps in. Researchers and scientists from large tech companies like Google, Amazon, Microsoft, IBM, and Facebook have already initiated discussions to ensure that AI is a benefit to humanity and not a threat.

Artificial intelligence has a long way to go before it can get anywhere near advanced enough to pose a threat, but progress is moving forward by leaps and bounds. One expert, Ray Kurzweil, predicts that computers will be smarter than humans by 2045, a paradigm shift known as the Singularity. However, he does not think that this is anything to fear. Perhaps tech companies' self-policing will be enough to ensure those fears are unfounded, or perhaps the government's hand will ultimately be needed. Whichever way you feel, it's not too early to begin having these conversations. In the meantime, though, try not to worry too much, unless, of course, you're a competitive gamer.

CFPB Highlights the Growing Role of Artificial Intelligence in the Delivery of Financial Services – JD Supra

The Consumer Financial Protection Bureau ("CFPB") published guidance on July 7, 2020, highlighting the potential use of artificial intelligence (AI) in the delivery of financial services, particularly in credit underwriting models. In addition to providing an overview of the ways in which AI is being used by financial institutions, the publication addresses: (1) industry uncertainty about how AI fits into the existing regulatory framework, especially for credit underwriting; and (2) the tools that the CFPB has been using to promote innovation, facilitate compliance, and reduce regulatory uncertainty.

As the publication notes, financial institutions are starting to deploy AI across a range of functions: as virtual assistants that can fulfill customer requests, in models to detect fraud or other potentially illegal activity, and as compliance monitoring tools. Credit underwriting is one area in which AI may have a profound impact. Credit underwriting models built on AI have the potential to expand credit access by permitting lenders to evaluate the creditworthiness of some of the millions of consumers who are unscorable using traditional underwriting systems. These AI-infused models and technologies typically allow lenders to evaluate more information about credit applicants, going beyond what lenders could assess using traditional consumer reporting agency reports. In turn, consideration of such information may lead to more efficient credit decisions and potentially lower the cost of credit. On the other hand, AI may create or amplify risks of unlawful discrimination, lack of transparency, and privacy concerns. Further, bias may be found in the source data or model construction, which can lead to inaccurate predictions. Thus, in considering the implementation of AI, ensuring that the innovation is consistent with consumer protections will be critical.

Despite AI's potential benefits, industry uncertainty about how AI fits into the existing regulatory compliance framework may be slowing its adoption, especially for credit underwriting. One vital issue is how complex AI models square with the adverse action notice requirements of the Equal Credit Opportunity Act (ECOA) and the Fair Credit Reporting Act (FCRA). ECOA and FCRA require creditors to provide consumers with the main reasons for a denial of credit or other adverse action. While these notice provisions serve important anti-discrimination, accuracy, and educational purposes, industry stakeholders may have questions about how institutions can comply with these requirements if the reasons driving an AI model's decision are based on complex interrelationships. To alleviate this concern, the publication provides specific examples of the ways in which creditors can comply with ECOA and FCRA when issuing adverse action notices based on AI models.
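To make the compliance problem concrete, here is one sketch (not the CFPB's prescribed method; the feature names, weights, and baseline are invented) of how a lender with a simple linear scoring model might derive the principal reasons for a denial: rank features by how much each one pulled the applicant's score below that of an average applicant.

```python
import numpy as np

# Illustrative only: derive "principal reasons" for an adverse action from a
# linear scoring model. All names and numbers are hypothetical.
feature_names = ["payment_history", "utilization", "account_age", "recent_inquiries"]
weights = np.array([0.9, -0.7, 0.4, -0.5])     # model coefficients
applicant = np.array([0.2, 0.8, 0.1, 0.9])     # normalized applicant inputs
population_mean = np.array([0.6, 0.4, 0.5, 0.3])

# How much each feature moved this applicant's score vs. an average applicant
contributions = weights * (applicant - population_mean)
reasons = sorted(zip(feature_names, contributions), key=lambda kv: kv[1])

print("Top adverse action reasons:")
for name, c in reasons[:2]:
    print(f"  {name}: {c:+.2f}")
```

With a complex, nonlinear AI model there is no such direct decomposition, which is exactly the uncertainty the CFPB publication tries to address.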

In addition to concluding that the existing regulatory framework has built-in flexibility that can be compatible with AI algorithms, the publication goes on to outline the various tools that the Bureau uses to promote innovation, facilitate compliance, and reduce regulatory uncertainty, including:

In particular, the first two policies (the Trial Disclosure Policy and the Compliance Assistance Sandbox) provide for a legal safe harbor that could reduce regulatory uncertainty in the area of AI and adverse action notices. The third policy discusses the ways in which stakeholders can obtain No-Action Letters from the CFPB, which can effectively provide increased regulatory certainty through a statement that the CFPB will not bring a supervisory or enforcement action against a company for providing a product or service under certain facts and circumstances.

This latest publication is a good sign for industry participants as it reaffirms previous guidance published by the CFPB which shows that the Bureau is committed to helping spur innovation consistent with consumer protections. By working together, industry stakeholders and the Bureau may be able to facilitate the use of this promising technology to expand access to credit and benefit consumers.

Military Deception: AI’s Killer App? – War on the Rocks

This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the first question (parts a. and b.), which asks how artificial intelligence will affect the character and/or the nature of war, and what might happen if the United States fails to develop robust AI capabilities that address national security issues.

In the 1983 film WarGames, Professor Falken bursts into the war room at NORAD to warn, "What you see on these screens up here is a fantasy, a computer-enhanced hallucination. Those blips are not real missiles, they're phantoms!" The Soviet nuclear attack onscreen, he explained, was instead a simulation created by WOPR, an artificial intelligence of Falken's own invention.

WOPR's simulation now seems more prescient than most other 20th-century predictions about how artificial intelligence, or AI, would change the nature of warfare. Contrary to the promise that AI would deliver an omniscient view of everything happening in the battlespace, the goal of U.S. military planners for decades, it now appears that technologies of misdirection are winning.

Military deception, in short, could prove to be AI's killer app.

At the turn of this century, Admiral Bill Owens predicted that U.S. commanders would soon be able to see everything of military significance in the combat zone. In the 1990s, one military leader echoed that view, promising that in the first quarter of the 21st century it would become possible to "find, fix or track, and target anything that moves on the surface of the earth." Two decades and considerable progress in most areas of information technology have failed to realize these visions, but predictions that perfect battlespace knowledge is a near-term inevitability persist. A recent Foreign Affairs essay contends that in a world that is becoming "one giant sensor," hiding and penetrating, never easy in warfare, will be far more difficult, if not impossible. It claims that once additional technologies such as quantum sensors are fielded, there will be "nowhere to hide."

Conventional wisdom has long held that advances in information technology would inevitably advantage finders at the expense of hiders. But that view seems to have been based more on wishful thinking than technical assessment. The immense potential of AI for those who want to thwart would-be finders could offset, if not exceed, its utility for enabling them. Finders, in turn, will have to contend with both understanding reality and recognizing what is fake, in a world where faking is much easier.

The value of military deception is the subject of one of the oldest and most contentious debates among strategists. Sun Tzu famously decreed that "all warfare is based on deception," but Carl von Clausewitz dismissed military deception as a desperate measure, a last resort for those who had run out of better options. In theory, military deception is extremely attractive. One influential study noted that, all things being equal, the advantage in a deception lies with the deceiver, because he knows the truth and can assume that the adversary is eagerly searching for its indicators.

If deception is so advantageous, why doesn't it dominate the practice of warfare already? A major reason is that historically, military deception was planned and carried out in a haphazard, unsystematic way. During World War II, for example, British deception planners engaged in their work much in the manner of college students perpetrating a hoax, but they still accomplished feats such as convincing the Germans to expect the Allied invasion of France at Pas-de-Calais rather than Normandy. Despite such triumphs, military commanders have often hesitated to gamble on the uncertain risk-benefit tradeoff of deception plans, as these require investments of effort and resources that would otherwise be applied against the enemy in a more direct fashion. If the enemy sees through the deception, it ends up being worse than useless.

Deception via Algorithm

What's new is that researchers have invented machine learning systems that can optimize deception. The disturbing new phenomenon called deepfakes is the most prominent example. These are synthetic artifacts (such as images) created by computer systems that compete with themselves and self-improve. In these generative adversarial networks, a generator produces fake examples and a discriminator attempts to identify them; each refines itself based on the other's outputs. This technique produces photorealistic deepfakes of imaginary people, but it can be adapted to generate seemingly real sensor signatures of critical military targets.
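The generator-discriminator loop described above fits in a few lines of code. Below is a minimal sketch in PyTorch, trained on a toy 2-D Gaussian rather than images or sensor signatures; the architecture, learning rates, and data are illustrative assumptions, not any production deepfake system.

```python
import torch
import torch.nn as nn

# Minimal generative adversarial network: G fakes samples, D tries to tell
# them from real ones, and each improves against the other.
torch.manual_seed(0)
G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" data
    fake = G(torch.randn(64, 2))

    # Discriminator step: label real samples 1, generated samples 0
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call fakes real
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The mean of generated samples should drift toward the real mean [2.0, -1.0]
print(G(torch.randn(500, 2)).mean(dim=0))
```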

Generative adversarial networks can also produce novel forms of disinformation. Take, for instance, the image of unrecognizable objects that went viral earlier this year (fig. 1). The image resembles an indoor scene, but upon closer inspection it contains no recognizable items. It is neither an adversarial example (an image of something that machine learning systems misidentify) nor a deepfake, though it was created using a similar technique. The picture does not make any sense to either humans or machines.

This kind of ambiguity-increasing deception could be a boon for militaries with something to hide. Could they design such nonsensical images with AI and paint them onto the battlespace using decoys, fake signal traffic, and careful arrangements of genuine hardware? This approach could render multi-billion-dollar sensor systems useless because the data they collect would be incomprehensible to both AI and human analysts. Proposed schemes for deepfake detection would probably be of little help, since these require knowledge of real examples in order to pinpoint subtle statistical differences in the fakes. Adversaries will minimize their opponents' opportunities to collect real examples, for instance by introducing spurious deepfake artifacts into their genuine signals traffic.

Rather than lifting the fog of war, AI and machine learning may enable the creation of "fog of war machines": automated deception planners designed to exacerbate knowledge quality problems.

Figure 1: This bizarre image generated by a generative adversarial network resembles a real scene at first glance but contains no recognizable objects.

Deception via Sensors and Inadequate Algorithms

Meanwhile, the combined use of AI and sensors to enhance situational awareness could make new kinds of military deception possible. AI systems will be fed data by a huge number of sensors everything from space-based synthetic-aperture radar to cameras on drones to selfies posted on social media. Most of that data will be irrelevant, noisy, or disinformation. Detecting many kinds of adversary targets is hard, and indications of such detection will often be rare and ambiguous. AI and machine learning will be essential to ferret them out fast enough and use the subtle clues received by multiple sensors to estimate the locations of potential targets.

Using AI to see everything requires solving a multisource-multitarget information fusion problem, that is, combining information collected from multiple sources to estimate the tracks of multiple targets on an unprecedented scale. Unfortunately, designing algorithms to do this is far from a solved problem, and there are theoretical reasons to believe it will be hard to go far beyond the much-discussed limitations of deep learning. The systems used today, which are only just starting to incorporate machine learning, work fairly well in permissive environments with low noise and limited clutter, but their performance degrades rapidly in more challenging environments. While AI should improve the robustness of multisource-multitarget information fusion, any means of information fusion is limited by the assumptions built into it, and wrong assumptions will lead to wrong conclusions even in the hands of human-machine teams or superintelligent AI.
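A single-target toy shows how baked-in assumptions drive fusion results. The sketch below combines two noisy position measurements by inverse-variance weighting, which is the optimal rule only if the assumed Gaussian noise model is correct; the sensor names and noise levels are invented, and real multisource-multitarget fusion is vastly harder.

```python
import numpy as np

# Fuse two noisy observations of one target's position by inverse-variance
# weighting. Optimal ONLY under the assumed independent Gaussian noise model;
# if an adversary violates that assumption, the "optimal" answer is wrong.
truth = 10.0
radar = truth + np.random.normal(0, 2.0)      # sensor A, sigma = 2.0 (assumed)
camera = truth + np.random.normal(0, 0.5)     # sensor B, sigma = 0.5 (assumed)

w_radar, w_camera = 1 / 2.0**2, 1 / 0.5**2
fused = (w_radar * radar + w_camera * camera) / (w_radar + w_camera)
fused_sigma = (w_radar + w_camera) ** -0.5

print(f"radar={radar:.2f} camera={camera:.2f} fused={fused:.2f} ±{fused_sigma:.2f}")
```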

Moreover, some analysts, backed by some empirical evidence, contend that the approaches typically used today for multisource-multitarget information fusion are unsound: these algorithms may not estimate the correct target state even if they are implemented perfectly and have high-quality data. The intrinsic difficulty of information fusion demands the use of approximation techniques that will sometimes find wrong answers. This creates a potentially rich attack surface for adversaries. Fog of war machines might be able to exploit the flaws in these approximation algorithms to deceive would-be finders.

Neither Offense- nor Defense-Dominant

Thus, AI seems poised to increase the advantages hiders have always enjoyed in military deception. Using data from their own operations, they can model their own forces comprehensively and then use this knowledge to build a fog of war machine. Finders, meanwhile, are forced to rely upon noisy, incomplete, and possibly mendacious data to construct their own tracking algorithms.

If technological progress boosts deception, it will have unpredictable effects. In some circumstances, improved deception benefits attackers; in others, it bolsters defenders. And while effective deception can impel an attacker to misdirect his blows, it does nothing to shield the defender from those that do land. Rather than shifting the offense-defense balance, AI might inaugurate something qualitatively different: a deception-dominant world in which countries can no longer gauge that balance.

That's a formula for a more jittery world. Even if AI-enhanced military intelligence, surveillance, and reconnaissance proves effective, states that are aware they don't know what the enemy is hiding are likely to feel insecure. Even earnest, mutual efforts to increase transparency and build trust would be difficult, because neither side could discount the possibility that its adversary was deceiving it with the high-tech equivalent of a Potemkin village. That implies more vigilance, more uncertainty, more resource consumption, and more readiness-fatigue. As Paul Bracken observed, the thing about deception is that it is hard to prove it will really work, but technology ensures that we will increasingly need to assume that it will.

Edward Geist is a policy researcher and Marjory Blumenthal is a senior policy researcher at the RAND Corporation. Geist received a Smith Richardson Strategy and Policy Fellowship to write a book on artificial intelligence and nuclear warfare.

Image: U.S. Navy (Photo by Mass Communication Specialist 1st Class Carlos Gomez)

Adobe Doubles Down On Academia To Get Smart About AI And Algos – AdExchanger

Adobe is looking to get schooled on AI and data science.

While many technology giants foster relationships with academics by offering them lucrative part-time consultancy positions, Adobe is pursuing a different tack: dishing out $50,000 no-strings-attached grants to professors and doctoral students working on projects of joint interest.

"What academia provides is more the advanced mathematical algorithms and the advanced research that's gone into other related areas but hasn't been applied to our field," said Anil Kamath, Adobe's VP of technology.

Artificial Intelligentsia

Adobe was not an early promoter of AI products, unlike other major technology players such as Google with its automated-insights pattern-recognition tools, IBM with Watson, and Salesforce with Einstein.

But Adobe's research grant program, which has dished out 40 grants for a total of $2 million in the past four years, is bringing algorithmic AI into the company through academic work.

Adobe is also doing outreach at events. A top priority at its fourth annual Data Science Symposium held in San Jose last week was to identify AI and machine learning research proposals.

"We want to take the areas of focus for data scientists working on AI and machine learning and funnel them to real-world digital marketing problems," Kamath said.

For instance, Rutgers University computer science professor Shan Muthukrishnan developed an algorithm that takes the hundreds of dimensions of audience data coming into a cloud (cookies, browser data, device IDs, audience demographic data points, location, et al.) and learns to pluck potentially actionable marketing trends from the raw data stream.

Adobe doesn't own Muthukrishnan's research; the grant is considered a hands-off gift to the university, and the work that comes out of the research process is open to public and peer review.

But Adobe does provide the data, and in doing so is able to point the professor's research at thorny product issues facing the company.

Productizing Research

What is Adobe getting out of this? It doesn't own the research or the algorithms being developed, and researchers from Oracle or Salesforce could likewise read the theses and academic journal papers that result from Adobe research grants.

One way Adobe capitalizes on its grants is by connecting major customers with academic researchers to help them solve their big data problems.

Alice Li worked with Adobe as part of her doctoral research in 2012 and more recently won a grant from Adobe to work on an attribution model for an online jewelry retailer and Adobe client, meant to bridge the gap between last-touch and multitouch attribution.

The jeweler ended up deploying the attribution model, and continues to use it as part of its work with Adobe.

"The company is judging the marginal value of their marketing channels based on what they're told by those marketing channels," Li said. "Google tells them paid search is very effective, Facebook tells them social is very effective, but academic research will be objective and rigorous."

And although the grants don't give Adobe intellectual property rights or new software, they do get productized through human capital, namely interns and cross-employed researchers.

This past year Adobe awarded a grant to a Stanford professor and graduate student working on sequential recommendations, specifically: How should a platform lead users through video tutorial sequences based on factors like whether that person is on a free version of a product or already a paid subscriber? And how likely is the user to churn and abandon the platform entirely?
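One simple way to frame that sequencing question is as a multi-armed bandit: try tutorials, observe completions, and steer users toward what works. The sketch below is an invented epsilon-greedy toy, far simpler than the research described; the tutorial names and completion rates are made up.

```python
import random

# Epsilon-greedy bandit for picking the next tutorial to show. All names
# and "true" completion rates are hypothetical.
tutorials = ["basics", "masking", "color-grading"]
true_completion_rate = {"basics": 0.6, "masking": 0.3, "color-grading": 0.5}
counts = {t: 0 for t in tutorials}
rewards = {t: 0.0 for t in tutorials}

random.seed(0)
for _ in range(5000):
    if random.random() < 0.1:                  # explore a random tutorial
        choice = random.choice(tutorials)
    else:                                      # exploit the best estimate so far
        choice = max(tutorials,
                     key=lambda t: rewards[t] / counts[t] if counts[t] else 1.0)
    completed = random.random() < true_completion_rate[choice]
    counts[choice] += 1
    rewards[choice] += completed

# Estimated completion rates converge toward the hidden true rates
print({t: round(rewards[t] / counts[t], 2) for t in tutorials})
```

A real system would condition on context such as free-versus-paid status and churn risk, which is what makes the research problem harder than this toy.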

The graduate student is employed over the summer by Adobe, where, surprise, surprise, he'll be working on its video tutorial sequencing.

"Usually the PhD students working in that area have a chance to work with us," Kamath said. "And that's where we see some of the best results translating research work and putting it into product."

Talent Funnel

But even more important than capitalizing on research to inform product development is the chance to secure long-term talent.

"A part of it is getting academia and researchers to think about and work on problems that are relevant to us," Kamath said. "[But] another big part is recruiting these machine learning and data science students, who are really competitively sought after."

It's good for students to consider industry concerns, said Stanford professor Ramesh Johari, who's worked on multiple Adobe research grants, including the sequential learning algorithm.

"Students can benefit by being aware of the connections between the algorithmic work they're doing and actual problems people face," he said.

He's seen students receive their master's or PhD and go on to corporate tech development in order to develop the work they did in school. Adobe researchers have also served as co-authors on academic projects and sat on thesis defense committees.

The advantages of academic outreach aren't immediate and are hard to quantify with a dollar sign, aside from the batch of $50,000 checks the company awards each year, of course. But the long-term advantages are clear.

"Adobe is really well served by work intersecting with the academic community," Johari said.

The Facial Recognition Company That Scraped Facebook And Instagram Photos Is Developing Surveillance Cameras – BuzzFeed News

Clearview AI, the secretive company that's built a database of billions of photos scraped without permission from social media and the web, has been testing its facial recognition software on surveillance cameras and augmented reality glasses, according to documents seen by BuzzFeed News.

Clearview, which claims its software can match a picture of any individual to photos of them that have been posted online, has quietly been working on a surveillance camera with facial recognition capabilities. That device is being developed under a division called Insight Camera, which has been tested by at least two potential clients, according to documents.

On its website, which was taken offline after BuzzFeed News requested comment from a Clearview spokesperson, Insight said it offers the smartest security camera, now in limited preview to select retail, banking, and residential buildings.

Insight Camera's main site had no obvious connection to Clearview, but BuzzFeed News was able to link it to the facial recognition company by comparing the code from Insight's and Clearview's respective log-in pages, which shared numerous references to Clearview's servers. This shared code also mentioned something called Fastlane, a "checkin app."
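The linking technique described here is essentially an overlap check on the hostnames each page's source code references. A minimal sketch, using invented HTML stand-ins rather than the actual Insight or Clearview pages:

```python
import re

# Extract the hostnames a page's source references and look for overlap.
# The HTML snippets below are hypothetical stand-ins for the real pages.
def referenced_hosts(html: str) -> set:
    return set(re.findall(r"https?://([\w.-]+)", html))

insight_page = '<script src="https://api.example-backend.com/app.js"></script>'
clearview_page = '<img src="https://api.example-backend.com/logo.png">'

shared = referenced_hosts(insight_page) & referenced_hosts(clearview_page)
print("Shared server references:", shared)
```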

Clearview CEO Hoan Ton-That and a company spokesperson did not respond to multiple requests for comment about Insight or its work in experimenting with physical devices. After BuzzFeed News reached out to inquire about Insight Camera, the entity's website disappeared.

Despite publicly claiming it is working with law enforcement agencies alone, Clearview has been aggressively pushing its technology into the private sector. As BuzzFeed News first reported, Clearview documents indicated more than 2,200 public and private entities have been credentialed to use its facial recognition software, including Macy's, Kohl's, the National Basketball Association, and Bank of America.

Clearview has never publicly mentioned Insight Camera. A list of organizations credentialed to use its app, viewed by BuzzFeed News, showed Clearview had identified two entities experimenting with its surveillance cameras in a category called has_security_camera_app.

Those two organizations, the United Federation of Teachers (UFT) and New York City real estate firm Rudin Management, deployed Insight Camera in trials, BuzzFeed News confirmed. In a statement, UFT, a labor union that represents teachers in New York City public schools, said the technology was successful in helping security personnel identify individuals who had made threats against employees so they could be prevented from entering one of its offices.

"We did not access the larger Clearview database," a spokesperson for UFT told BuzzFeed News. "Instead, we used Insight Camera in a self-contained, closed system that relied exclusively on images generated on site."

UFT did not say how many photos were in that closed system, which it maintained is separate from the database of more than 3 billion photos that Clearview AI says it has scraped from millions of sites, including Facebook, Instagram, and YouTube. Clearview's desktop software and mobile app allow users to run static photos through a facial recognition system that matches people to existing media in a few seconds, but Insight Camera, according to those who used it, attempted to flag individuals of interest using facial recognition on a live video feed.

A spokesperson for Rudin Management, which has a portfolio of 18 residential and 16 commercial office buildings as well as two condominiums in New York City, confirmed to BuzzFeed News that it had tested Insight cameras.

"We beta test many products to see if they would be additive to our portfolio and tenants," the spokesperson said. "In this case we decided it was not, and we do not currently use the software."

BuzzFeed News discovered Insight after analyzing a copy of Clearviews web app, which is discoverable to the public, and determining that it contained code for a security_camera app. Entities that had access to that security camera app appear to have been able to log in to the Insight Camera website, which was registered last April.

A BuzzFeed News analysis of the Insight Camera site found that it was almost a perfect clone of the code found at Clearview AI's web page. Though there were some aesthetic differences between the two sites, both appeared to share the same code to communicate with Clearview's servers.

Although Clearview has recently stated its services are intended for law enforcement, the company has maintained significant interest in the private sector. As BuzzFeed News reported previously, Ton-That had entered his company in a retail technology accelerator in the summer of 2018, before claiming that the company would focus on law enforcement.

A presentation from the company's early pitches to investors, recently reviewed by BuzzFeed News, suggests that in early 2018 the company wasn't focused on law enforcement at all. On one slide, the company named four industries in which it was testing its technology: banking, retail, insurance, and oil. The only mention of government or public entities is in reference to a pilot at an unnamed "major federal agency."

"Banking: The world's largest bank selected Clearview to provide security background checks for its annual shareholders meeting," the company wrote on one of its slides. "Retail: Manhattan's top food retailer has hired CV to provide facial-recognition hardware & software for its supermarket chain."

Privacy advocate Evan Greer, deputy director of digital rights activist group Fight for the Future, said that brick-and-mortar stores are seen as community spaces and that one of the most attractive applications for Clearview in the private sector would be screening people as they enter a store to see if they have a criminal record. She remained skeptical of Clearview's technology.

"They're claiming that this technology can do all kinds of stuff and institutions are easily dazzled by that," Greer said. "But it's relatively new technology for applications like this and it's totally untested. We know that there are better ways to keep people safe that don't violate their rights."

Clearview has also been actively experimenting with wearables with the help of Vuzix, a Rochester, New York-based manufacturer of augmented reality glasses. Clearview data reviewed by BuzzFeed News showed accounts associated with Vuzix ran nearly 300 searches, some as recently as November. Matt Margolis, Vuzix's director of business development, acknowledged that his company had sent the startup sets of its augmented reality glasses for testing, noting Clearview was one of a few facial recognition developers it had partnered with.

"It's not something anybody is buying off the shelf, but I can't deny that it's in development, though it's not something we're selling today," Margolis told BuzzFeed News. "We do have a number of other partners that use facial recognition, but they don't do the same thing that Clearview is doing. They're not using photos that are crawled off the web."

Clearview's link to Vuzix was first reported by Gizmodo. The company's interest in smart glasses was first reported by the New York Times.

Vuzix, which counts Intel as a shareholder, initially focused on entertainment and gaming, before moving into the defense and homeland security markets, according to a financial filing from last year. On its company blog in February, Vuzix cited the sci-fi film RoboCop, where officers used smartglasses with live facial recognition, as an inspiration, and noted that countries including Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, and the United Arab Emirates already screen crowds to match faces against a massive database.

BuzzFeed News previously reported that Clearview AI had provided its facial recognition technology to entities in Saudi Arabia and the UAE, two countries known for their human rights violations. The company previously did not respond to questions about entities that have used its software.

Last week, in an email to BuzzFeed News, Clearview attorney Tor Ekeland said, "There are numerous inaccuracies in this illegally obtained information. As there is an ongoing Federal investigation, we have no further comment."

Margolis, who has seen demos of Clearview, acknowledged that a wearable with facial recognition could be abused, with "a lot of negative possibilities," but noted that systems are only as good as the biometric information on which they rely. He said that Clearview's technology was accurate on the tests he had seen and called the billions of photos that the company ingested from the web "part of the public domain."

"Tech used the right way is the real goal...to keep people safe," he said. "You want to find the wrongdoers. It's not a bad thing for society."

Code from Clearview AI's app analyzed by BuzzFeed News also suggested the startup had experimented with tech from RealWear, a Vancouver, Washington-based augmented reality glasses manufacturer. The code included instructions to new users to scan a Clearview QR code to pair its app with a RealWear device. Data viewed by BuzzFeed News showed that accounts associated with RealWear had run more than 70 searches as recently as last month.

In an interview, RealWear CEO Andy Lowery said he had never heard of Clearview before, but found that his company sold the startup a few devices about a year ago. He told BuzzFeed News that RealWear doesn't market or sell in any significant way to police forces, and compared his company to a phone manufacturer like Samsung in that it could not control what applications developers built or put on its devices.

Lowery could not explain why Clearview's data showed that accounts associated with RealWear had been running searches with the facial recognition technology, but didn't rule out one of his 115 employees trying the software.

"I haven't seen any evidence that they're working with us in any sort of way," he said. "I don't even see them selling or reselling anything with our devices."

See the rest here:

The Facial Recognition Company That Scraped Facebook And Instagram Photos Is Developing Surveillance Cameras - BuzzFeed News

Rise of AI | the most exciting conference for Artificial …

WE LOVE AI

We personally work and live with Artificial Intelligence every day. AI is already consuming our personal lives; we are simply not always aware of it. It is therefore extremely important for us to understand the status of Artificial Intelligence. We have created a platform to share our vision and learnings about Artificial Intelligence. Together we discuss the implications of Rise of AI for our human life, companies, society and politics.

We have selected speakers who have a clear message, a mission and vision of the future. Each speaker knows their field of Artificial Intelligence inside out. We also have workshops with our AI Topic Leaders, where you can dive deeper into one specific topic and share your knowledge with us. For 2018 we will offer you two stages: Artificial Intelligence Vision and Applied Artificial Intelligence.

We have limited the conference to 500 people in order to keep it intimate and build an environment where each participant can freely share their thoughts and opinions on the future.

Read more from the original source:

Rise of AI | the most exciting conference for Artificial ...

Carrboro startup Tanjo to leverage its AI platform to help with NC’s reopening – WRAL Tech Wire

CARRBORO – A Carrboro artificial intelligence (AI) startup is leveraging its technology platform to help business and community leaders navigate North Carolina's COVID-19 reopening.

Carrboro-based Tanjo is teaming up with the Digital Health Institute for Transformation (DHIT) to build an engine that uses machine learning and advanced analytics to ingest huge amounts of national and regional data and then provide actionable insights.

"Successfully reopening our economy without risking the destruction of the health of our communities is the central challenge we are attempting to overcome," said Michael Levy, president of DHIT, in a statement. "More reliable, local data-driven expert guidance enabled by smart AI is critical to allow the safe and successful reopening of our communities and businesses."

Consider the breadth of intelligence: health and epidemiological data, labor and economic data, occupational data, consumer behavior and attitudinal data, and environmental data.

Tanjo, founded by serial entrepreneur Richard Boyd in 2017, said it is designing a dashboard to give stakeholders real-time intelligence and predictive modeling on population health risk, consumer sentiment and community resiliency.

Users will be able to view the risk to their business and county, as well as simulate the impact of implementing evidence-based recommendations, enabling them to make informed decisions.

As part of the 2020 COVID-19 Recovery Act, the North Carolina Policy Collaboratory in late July awarded the DHIT a grant to research, validate, and build a simulation platform for North Carolina's business and community leaders.

DHIT and Tanjo entered into a formal strategic partnership in November 2019, pre-COVID-19.

The seven NC counties chosen for the initial pilot are: Ashe, Buncombe, Gates, Mecklenburg, New Hanover, Robeson, and Wake.

The overall project is a collaboration between Tanjo, DHIT, the Institute for Convergent Sciences and Innovate Carolina at the University of North Carolina, Chapel Hill, and the NC Chamber Foundation, among other key stakeholders.

If you are a community organization or business located in the counties listed above and are interested in being a beta tester for this initiative, contact communityconfidence@dhitglobal.org.

See the rest here:

Carrboro startup Tanjo to leverage its AI platform to help with NC's reopening - WRAL Tech Wire

‘T-Minus AI’: A look at the intersection of geopolitics and autonomy – C4ISRNet

China has a national plan for it. Russia says it will determine the ruler of the world. The United States is investing heavily to develop it.

The race is on to create, control and weaponize artificial intelligence.

In Michael Kanaan's book T-Minus AI: Humanity's Countdown to Artificial Intelligence and the New Pursuit of Global Power, set for release Aug. 25, the realities of AI are laid out for the reader from a human-oriented perspective. Such technology, often shrouded in mystery and misunderstood, is made easy to comprehend through a discussion of the global implications of developing AI. Kanaan is one of the Air Force's AI leaders.

The following excerpt, edited for length and clarity, introduces how, in late 2017, the conversation about artificial intelligence changed forever.

It was a Friday morning, Sept. 1, 2017, and not yet dawn when I stepped out of Reagan National Airport and followed my bag into the back of a waiting SUV. After flying east all night from San Francisco to D.C., I still had two hours before a Pentagon briefing with Lt. Gen. VeraLinn "Dash" Jamieson. She was the deputy chief of staff for U.S. Air Force intelligence and the country's most senior Air Force intelligence officer, a three-star officer responsible for a staff of 30,000 and an overall budget of $55 billion.

As the Air Force lead officer for artificial intelligence and machine learning, I'd been reporting directly to Jamieson for over two years. The briefing that morning was to discuss the commitments we'd just received from two of Silicon Valley's most prominent AI companies. After months of collective effort, the new agreements were significant steps forward. They were also crucial proof that the long history of cooperation between the American public and private sectors could reasonably be expected to continue. With the world marching steadfastly into the promising but unsettled fields of AI, it was becoming critical that Americans do so, if not entirely in harmony, then at least to the sounds of the same beat.

My apartment was only a short ride away. I was looking forward to a hot shower and strong coffee. But as the SUV pulled out of the terminal and into the morning darkness, a message alert pinged from my phone. It was a text from the general. Short and to the point, as usual. "See Putin comments re AI."

A quick web search pulled up a quote already posting to news feeds everywhere. At a televised symposium broadcast throughout Russia only an hour earlier, President Vladimir Putin had crafted a sound bite making headlines around the globe. His unambiguous three sentences translated to: "Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."

As the driver accelerated up the Interstate 395 ramp toward the city, a heavy rain started to fall, hitting hard against the car's metal surfaces. Far off, through the window on my right, the dome of the Capitol building glistened in white light beyond the blurred, dark space of the Potomac River. Playing at background volume over the front speakers, a National Public Radio newscaster was describing a 3-mile-wide asteroid named Florence. Streaking past our planet that morning, the massive rock would be little more than 4 million miles away at its closest point: tremendously far by human standards, but breathtakingly near by the infinite scales of space. It was the largest object NASA had ever tracked to pass so closely by our planet. On only a slightly different trajectory, it would have altered Earth's entire landscape. And, like for the dinosaurs before us, it would have changed everything. It would have changed life. A perfect metaphor, I thought, impeccably timed to coincide with Putin's comments about AI.

I looked back at his words. The message they carried rang like an alarm I didn't need to hear, but the motivation behind them wasn't so clear. Former KGB officers speak carefully and only for calculated reasons. Putin is no exception. His words matter, always. And so does his purpose. But what was it here? Just to offer a commentary or forecast? No. Not his style. A call to action, then, to energize his own population? Perhaps. But, more than that, this was a statement to other statesmen, a confirmation that he and his government were awake and aware that a deep, sophisticated effort was underway to accomplish a new world order.

Only a month earlier, China had released a massive three-part strategy aimed at achieving very clear benchmarks of advances in AI. First, by 2020, China planned to match the highest levels of AI technology and application capabilities in the U.S. or anywhere else in the world. Second, by 2025, it intended to capture a verifiable lead over all countries in the development and production of core AI technologies, including voice- and visual-recognition systems. Last, by 2030, China intends to dominantly lead all countries in all aspects and related fields of AI. To be the sole leader, the world's unquestioned and controlling epicenter of AI. Period. That is China's declared national plan.

With the Chinese government's newly published AI agenda available for the world to see, Putin's words resolved any ambiguity about its implication. True to his style, his message was clear and concise. Whoever becomes the leader will become the ruler of the world.

Straightforward, I thought. And he's right. But focused administrations around the globe already know the profound potential of AI. The Chinese clearly do; it's driving their domestic and foreign agendas. And the Saudis, the European Union nations, the U.K., and the Canadians: they know it, too. And private enterprise is certainly focused in, from Google, Facebook, Amazon, Apple and Microsoft to their Chinese state-controlled counterparts Baidu, Alibaba, Tencent and the telecom giant Huawei.

AI technologies have been methodically evolving since the 1960s, but over most of those years, the advances were sporadic and relatively slow. From the earliest days, private funding and government support for AI research ebbed and flowed in direct relation to the successes and failures of the latest predictions and promises. At the lowest points of progress, when little was being accomplished, investment capital dried up. And when it did, efforts slowed. It was the usual interdependent circle of cause and effect. Twice, during the late '70s and then again during the late '80s and early '90s, the pace of progress all but stopped. Those years became known as the "AI winters."

But, in the last 10 to 15 years, a number of major breakthroughs, in machine learning in particular, again propelled AI out of the dark and into another invigorated stage. A new momentum emerged, and an unmistakable race started to take shape. Insightful governments and industry leaders began doing everything possible to stay within reach of the lead, positioning themselves for any possible path to the front.

Now, for all to hear, Putin had just declared everything at stake. Without any room for misunderstanding, he equated AI superiority to global supremacy, to a strength akin to economic or even nuclear domination. He said it for public consumption, but it was rife with political purpose. "Whoever becomes the leader in this sphere will become the ruler of the world."

Those words would undoubtedly add another level of urgency to the day's meetings. That was certain. I redirected the driver to the Pentagon and looked down at my phone to answer the general's text. "Landed. Saw quote. On my way in."

The shower would have to wait.

***

In the months that followed, Putin's now-infamous few sentences proved impactful across continents, industries and governments. His comments provided the additional, final push that accelerated the planet's sense of seriousness about AI and propelled most everyone into a higher gear forward. Public and private enterprises around the globe reassessed their focuses and levels of commitment. Governments and industries that had previously dedicated only minimal percentages of their research and defense budgets to the new technology suddenly saw things differently. It quickly became unacceptable to slow-walk AI efforts and protocols, and no longer defensible to incubate AI innovations for longer than the shortest time necessary.

Now, not long after, the pace of the race has quickened to a full sprint. National strategies and demonstrable use have become the measurements that matter. Rollouts have become requisite. To accomplish them, agendas are more focused, aggressive and well funded. Sooner than many expected, AI is proving itself a dominant force of economic, political and cultural influence, and is poised to transform much of what we know and much of what we do. China, Russia and others are utilizing AI in ways the world needs to recognize. That's not to say all efforts and iterations in the West are without criticism. They're not. But if this new technology causes or contributes to a shift in power from the West to the East, everyone will be affected. Everything will change.

The future is here, and the world ahead looks far different than ever before.

No longer just science fiction or fantastic speculation, artificial intelligence is real. It's here, all around us, and it has already become an integral and influential part of our lives. Although we've taken only our first few steps into this new frontier of technological innovation, AI is providing us powerful new methods of conducting our affairs and accomplishing our goals. We use these new tools every day, usually without choice and often without even realizing it: from applications that streamline our personal lives and social activities to business programs and practices that enable new ways of acquiring a competitive advantage. I've learned a lot about the common misperceptions and misgivings people have when trying to understand AI. Most conversations about artificial intelligence either begin or end with one or more of the same few questions.

Although the answers to those questions merit long discussions and are open to differing opinions, they should at least be manageable and factually accurate. The topics shouldn't be too difficult to discuss or debate, not conversationally or even at policymaking or political levels. Unfortunately, they generally are.

But the conversational disconnects that usually occur aren't because of some complex technical details or confusing computer issues. Instead, it's usually, simply, because of the same old obstacles that too often stand in the way of many other conversations. Regardless of the topic, and even when it matters most, we too frequently speak below, above, around or past one another, especially when we don't have an equal amount of information, a shared base of knowledge or a common set of experiences. In those instances, we make too many assumptions, allow too many things to go without saying and use too many words that hold different meanings for different people. In short, too many confusions are never clarified and too many more are created. As a consequence, we're doomed for frustration and failure from the start, inevitably unable to understand one another and incapable of appreciating each other's perspectives and talking points. My goal throughout this book is to avoid those pitfalls.

The best way to start is to first address the most common misperceptions of all, the ones we tend to bring with us into the AI conversation. The first of these is the assumption that AI is unavoidably destined, sooner or later, to develop its own consciousness and its own autonomous, evil intent. For that idea, we can thank science fiction and the entertainment industry. Make no mistake, I'm an ardent fan of science fiction, both on screen and in books. Without any doubt, the sci-fi genre has given us fine works of imagination, insight and art. Many great fiction writers and filmmakers are extremely knowledgeable about technology and conscientiously concerned about our future. Time and again they've proven themselves true visionaries, and we're unquestionably better off for their work. They spark our curiosity, ignite our imaginations, increase our appetite for knowledge, and encourage our interests in science and societal issues.

But when it comes to their scientific portrayals of artificial intelligence, our most popular authors and screenwriters have too often generated an array of exotic fears by focusing our attention on distant, dystopian possibilities instead of present-day realities. Science fiction that depicts AI usually aligns a computer's intelligence with consciousness, and then frightens us by portraying future worlds in which AI isn't only conscious, but also evil-minded and intent, self-motivated even, to overtake and destroy us. To create drama, there has to be conflict, and the humans in these stories are almost always overwhelmed and outmatched, naturally unable to compete against the machines' vastly superior intelligence and mechanical strength. Iconic movies like 2001: A Space Odyssey, The Matrix, The Terminator, Ex Machina, and I, Robot, along with television series such as Westworld and Black Mirror, have turned our underlying fears and suspicions into deep-seated and bleak expectations.

Even today, commercial companies that offer AI products and consumer services routinely have to fight our distrust of intelligent machines as a basic, necessary part of their regular marketing efforts. Just think of all the television commercials for AI-enabled products we now see, and consider how many of them are focused first on trying to put us at ease by casting a polite and gentle glow to the figurative, artificial face of their AI, even when that face has absolutely nothing to do with the services their products actually provide.

AI is an extremely powerful tool, and it has immense implications we must consider and evaluate carefully. It's a very sharp instrument that shouldn't be callously wielded or casually accepted, especially when it's in the wrong hands or when it's used for intentionally intrusive or oppressive purposes. These are serious issues, and there are significant steps we must take to ensure AI is properly designed and implemented. Fortunately, and contrary to what many people think, it's not necessary to have a background in computer science, mathematics or engineering in order to very meaningfully understand AI and its technological implications. With just a basic comprehension of a few fundamental concepts behind today's computers and related sciences, it's entirely possible to connect the relevant dots and understand the overall picture.

Creating tools to facilitate our lives is the strength of humankind. It's what we do. Given enough time, it was arguable, perhaps even inevitable, that we would create the ultimate tool: artificial intelligence itself. But what exactly does it mean that we've accomplished that task? And how is AI even possible? In large part, the answers lie in the history of ourselves and of our own biological intelligence. It turns out that artificially replicating what we know about the human thought process, at least as best we can, is a highly effective blueprint for creating something similar in a machine. It's our own evolution and our own history that teach us the fundamentals that make it all possible.

Read the rest here:

'T-Minus AI': A look at the intersection of geopolitics and autonomy - C4ISRNet

The ‘Skynet’ Gambit – AI At The Brink – Seeking Alpha

"The deployment of full artificial intelligence could well mean the end of the human race." - Stephen Hawking

"He can know his heart, but he don't want to. Rightly so. Best not to look in there. It ain't the heart of a creature that is bound in the way that God has set for it. You can find meanness in the least of creatures, but when God made man the devil was at his elbow. A creature that can do anything. Make a machine. And a machine to make the machine. And evil that can run itself a thousand years, no need to tend it." - Cormac McCarthy, Blood Meridian: Or the Evening Redness in the West

Let me declare at the outset that this article has been tough to write. I am by birthright an American, an optimist and a true believer in our innovative genius and its power to drive better lives for us and the world around us. I've grown up in the mellow sunshine of Moore's law, and lived firsthand in a world of unfettered innovation and creativity. That is why it is so difficult to write the following sentence:

It's time for federal regulation of AI and IoT technologies.

I say that reluctantly but with growing certainty. I have come to believe that we share a moral obligation to act now in order to protect our children and grandchildren. We need to take this moment, wake up, and listen to the voices warning us that the confluence of technologies powering the AI revolution is advancing so rapidly that it poses a clear and present danger to our lives and well-being.

So this article is about why I have come to feel that way and why I think you should join me in that feeling. Obviously, this has financial implications. Since you are a tech investor, you almost certainly invested in one or more of the companies - like Nvidia (NASDAQ:NVDA), Google (NASDAQ:GOOG) (NASDAQ:GOOGL), and Baidu (NASDAQ:BIDU) - that are profiting from driving the breakneck advances we are seeing in AI base technologies and the myriad of embedded use cases that make the technology so seductive. Indeed, if we look at the entire tech industry ecosystem, from chips through applications and beyond them to their customers that are transforming their business through their use, we can hardly ignore the implications of this present circumstance.

So why? How did we get to this moment? Like me, you've probably been aware of the warnings of well-known luminaries like Elon Musk, Bill Gates, Stephen Hawking and many others, and, like me, you have probably noted their commentary but moved on to consider the next investment opportunity. Personally, being the optimist that I am, I certainly respected those arguments but believed even more strongly that we would innovate ourselves out of the danger zone. So why the change? Two words - one name - Bruce Schneier.

If you have been interested in the fields of cryptology and computer security, you have no doubt heard his name. Now with IBM (NYSE:IBM) as its chief spokesperson on security, he is a noted author and contributor to current thinking on the entire gamut of issues that confront us in this new era of the cloud, IoT, and Internet-based threats to personal privacy and computer system integrity. Mr. Schneier's seminal talk at the recent RSA conference brought it all into focus for me, and I encourage you to watch it. I will briefly recap his argument and then work out some of the consequences that flow from it. So here goes.

Schneier's case begins by identifying the problem - the rise of the cyber-physical system. He points out how our day-to-day reality is being subverted as IoT literally stands the world on its head, dematerializing and virtualizing our physical environment. What used to be dumb is now smart. Things that used to be discrete and disconnected are now networked and interconnected in subtle and powerful ways. This is the conceptual linkage that really connected the dots for me. As he puts it in his security blog:

We're building a world-size robot, and we don't even realize it. [...] The world-size robot is distributed. It doesn't have a singular body, and parts of it are controlled in different ways by different people. It doesn't have a central brain, and it has nothing even remotely resembling a consciousness. It doesn't have a single goal or focus. It's not even something we deliberately designed. It's something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world. This world-size robot is actually more than the Internet of Things. [...] And while it's still not very smart, it'll get smarter. It'll get more powerful and more capable through all the interconnections we're building. It'll also get much more dangerous.

More powerful, indeed. It is at this point where AI and related technologies enter the equation to build a host of managers, agents, bots, natural language interfaces, and other facilities that allow us to leverage the immense scale and reach of our IoT devices - devices that, summed altogether, encompass our physical world and exert enormous power for good and, in the wrong hands, for evil.

Surely, we can manage this? Well, no, says Schneier - not the way we are going about it now. The problem is, as he cogently points out, our business model for building software and systems is notoriously callous when it comes to security. Our "fail fast, fix fast," minimum-market-requirements-for-version-1-shipment protocol is famous for delivering product that comes with a "hack me first" invitation that is all too often accepted. So what's the difference, you may ask? We've been muddling along with this problem for years. We dig ourselves into trouble, we dig ourselves out. Fail fast, fix fast. Life goes on. Let's go make some money.

Or maybe it doesn't. The IoT phenomenon is leading us headlong into deployment of literally billions of sensors embedded deep into our most personal physical surroundings, connecting us to system entities and actors, nefarious and benign, that now have access to intimate data about our lives. Bad as that is, it's not the worst thing. This same access gives these bad actors the potential to control the machines that provide life-sustaining services to us. It's one thing to have your credit card data hacked; it's entirely another thing to have a bad actor in control of, say, the power grid, an operating theater robot, your car, or the engine of the airplane you're riding in. Our very lives depend on the integrity of these machines. Do we need to emphasize this point? Fail fast, fix fast does not belong in this world.

So if the prospect of a body-count stat on the next after-action report from some future hack doesn't alarm you, how about this scenario: What if it wasn't a hack? What if it was an unforeseen interaction of otherwise benign AIs that we are relying on to run the system in question? Can we be sure to fully understand the entire capability of an AI that is, say, balancing the second-to-second demands of the power grid?

One thing we can count on - the AI that we are building now will be smarter and more capable tomorrow. How smart is the AI we're building? How good is it? Scary good. So let's let Musk answer the question. How smart are these machines we're building? "[They'll be] smarter than us. They'll do everything better than us," he says. So what's the problem? You're not going to like the answer.

We won't know that the AI has a problem until the AI breaks, and even then we may not know why it broke. The intrinsic nature of the cognitive software we are building with deep neural nets is that a decision is the product of interactions with thousands and possibly millions of previous decisions from lower levels in the training data, and those decision criteria may well have already been changed as feedback loops communicate learning upstream and down. The system very possibly can't tell us "why." Indeed, the smarter the AI is, the less likely it may be able to answer the why question.
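To make that concrete, here is a minimal, hypothetical sketch (not from Schneier or this article) of why "why" is so hard to extract from a neural net. The toy model below uses random weights as a stand-in for a trained network; the best we can do to explain one of its scores is to nudge each input and watch the output move, which yields a local sensitivity, not a human-readable reason.

import numpy as np

# A toy two-layer network with fixed random weights stands in for a trained model.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(1, 8))

def predict(x):
    # tanh hidden layer, sigmoid output: a score in [0, 1]
    h = np.tanh(W1 @ x)
    return float(1 / (1 + np.exp(-(W2 @ h))))

x = np.array([0.2, -1.3, 0.7, 0.05])  # one hypothetical input record
score = predict(x)

# The closest we get to "why": perturb each input feature, measure the effect.
eps = 1e-4
sensitivity = [(predict(x + eps * np.eye(4)[i]) - score) / eps for i in range(4)]
print(score, sensitivity)

Each number in sensitivity says only how the score shifts for a tiny change in one feature at this one input; it says nothing about the millions of weight interactions that produced the decision, which is exactly the opacity described above.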

Hard as it is, we really need to understand the scale of the systems we are building. Think about autonomous cars as one, rather small, example. Worldwide, the industry built 88 million cars and light trucks in 2016, and another 26 million medium and heavy trucks. Sometime in the 2025 to 2030 time frame, all of them will be autonomous. With the rise of the driving-as-a-service model, there may not be as many new vehicles being produced, but the numbers will still be huge, and fleet sizes will grow every year as older vehicles are replaced. What are the odds that the AI that runs these vehicles performs flawlessly? Can we expect perfection? Our very lives depend on it. God forbid a successful hack into this platform!

Beyond that, what if perfection will kill us? Ultimately, these machines may require our guidance to make moral decisions. Question: You and your spouse are in a car that is in the center lane of the three-lane freeway operating at the 70 mph speed limit. A motorcyclist is directly left of you; to the right, a family of five in an autonomous minivan. Enter a drunk, driving an old pickup the wrong way at high speed, weaving through the three lanes directly in your path. Should your car evade to the left lane and risk the life of the motorcyclist? One would hope our vehicle wouldn't move right and put the family of five at risk. Should it be programmed to conduct a "first, do no harm" policy, which would avoid a swerve into either lane and would simply brake as hard as possible in the center lane and hope for the best?

Whatever the scenario, the AIs we develop and deploy, however rich and deep the learning data they have been exposed to, will confront situations that they haven't encountered before. In the dire example above and in more mundane conundrums, who ultimately sets the policy that must be adhered to? Should the developer? How about the user (in cases where this is practical)? Or should we have a common policy that must be adhered to by all? For sure, any policy implemented in our driving scenario above will save lives and perform better than any human driver. Even so, in vehicles, airplanes, SCADA systems, chemical plants and myriad other AIs inhabiting devices operating in innately hazardous operating regimes, will it be sufficient to let their in extremis actions be opaque and unknowable? Surely not, but will the AI as developed always give us the control to change it?

Finally, we must consider a factor that is certainly related to scale but is uniquely and qualitatively different - the network. How freely and ubiquitously should these AIs interconnect? Taken on its face, the decision seems to have been made. The very term, Internet of Things, seems to imply an interconnection policy that is as freewheeling and chaotic as our Internet of people. Is this what We, the People want? Should some AIs - say our nuclear reactors or more generally our SCADA systems - operate with limited or no network connection? Seems likely, but how much further should we go? Who makes the decision?

Beyond such basic questions come the larger issues brought on by the reality of network power. Let's consider the issue of learning and add to that the power of vast network scale in our new cyber-physical world. The word seems so simple, so innocuous. How could learning be a bad thing? AI-powered IoT systems must be connected to deliver the value we need from them. Our autonomous vehicles, terrestrial and airborne, for example, will be in constant communication with nearby traffic, improving our safety by step-functions.

So how does the fleet learn? Let's take the example from above. Whatever the result, the incident forensics will be sent to the cloud, where developers will presumably incorporate the new data in the master learning set. How will the new master be tested? How long? How rigorously? What will be the re-deployment model? Will the new, improved version of the AI be proprietary and not shared with the other vehicle manufacturers, leaving their customers at a safety disadvantage? These are questions that demand government purview.

Certainly, there is no unanimous consensus here regarding the threat of AI. Andrew Ng of Baidu/Stanford disagrees that AI will be a threat to us in the foreseeable future. So does Mark Zuckerberg. But these disagreements are only with the overt existential threat - i.e. that a future AI may kill us. More broadly, though, there is very little disagreement that our AI/IoT-powered future poses broad economic and sociopolitical issues that could literally rip our societies apart. What issues? How about the massive loss of jobs and livelihood of perhaps the majority of our population over the course of the next 20 years? As is nicely summarized in this recent NY Times article, AI will almost certainly exacerbate the already difficult problem we have with income disparities. Beyond that, the global consequences of the AI revolution could generate a dangerous dependency dynamic among countries other than the US and China that do not own AI IP.

We could go on and on, but hopefully the issue is clear. Through the development and implementation of increasingly capable AI-powered IoT systems, we are embarking upon a voyage into an exciting but dangerous future state which we can barely imagine from our current vantage point. Now is the time to step back and assess where we are and what we need to do going forward. Schneier's prescription for the problem is that the tech industry must get in front of this issue and drive a workable consensus among industry stakeholders and governmental authorities and regulatory bodies about the problem, its causes and potential effects, and most importantly, a reasonable solution to the problems that protects the public while allowing the industry room to innovate and build.

There is no turning back, but we owe it to ourselves and our posterity to do our utmost to get it right. As technologists we are inherently self-interested in protecting and nurturing the opportunity we all have in this exciting new realm. This is natural and understandable. Our singular focus on agility and innovation has brought the world many benefits and will bring many more. But we are not alone and it would be completely irresponsible to insist that we are the only stakeholder in the outcomes we are engineering.

This decision - to engage and attempt to manage the design of the new and evolving regulatory regime - has enormous implications. There is undoubtedly risk. Poor or heavy-handed regulation could well exact a tremendous opportunity cost. One could well imagine a world in which Nvidia's GPU business is severely affected by regulatory inspection and delay, for example. But that is the very reason we need to engage now. The economic leverage that AI provides in every sector of our economy leads us inescapably to economic and wealth-building scenarios beyond anything the world has seen before. As participants and investors, we must do what we can to protect this opportunity to build unprecedented levels of wealth for our country and ourselves. Schneier argues that we are best serving our self-interest by engaging government now rather than burying our heads in the sand waiting for the inevitable backlash that will come when (not if!) these massive systems fail catastrophically in the future.

Schneier has got the right idea. We need to broaden the conversation, lead the search for solutions, and communicate the message to the many non-tech constituencies - including all levels of government - that there is an exciting future ahead but that future must include appropriate regulations that protect the American people and indeed the entire human race.

We won't get a second chance to get this right.

Disclosure: I/we have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

See the original post here:

The 'Skynet' Gambit - AI At The Brink - Seeking Alpha

AI-powered government finances: making the most of data and machines – Global Government Forum

Photo by Karolina Grabowska via Pexels

Governments are paying growing attention to the potential of artificial intelligence (the simulation of human intelligence processes by machines) to enhance what they do.

To explore how public authorities are approaching the use of AI for tasks related to public finances, Global Government Fintech (the sister title of Global Government Forum) convened an international panel on 4 October 2022 for a webinar titled "How can AI help public authorities save money and deliver better outcomes?".

The discussion, organised in partnership with SAS and Intel, highlighted how AI is already helping departments to deliver results, but also that AI remains very much an emerging and, to many, rather nebulous field with many hurdles to clear before widespread use. "Discussions of artificial intelligence often bring up connotations of an Orwellian nature, dystopian futures, Frankenstein," said Peter Kerstens, advisor, technological innovation & cyber security at the European Commission's Financial Services Department. "That is really a challenge for positive adoption and fair use of artificial intelligence because people are apprehensive about it."

Like most technology-based areas, it is a field that is also moving very quickly. "If the last class you took in data science was three years ago, it's already dated," cautioned Steve Keller, acting director of data strategy at the US Treasury's Bureau of the Fiscal Service, in his own opening remarks.

Kerstens began by describing the very name "artificial intelligence" as a big problem, asserting that AI is neither artificial nor is it particularly intelligent, at least not in the way that humans are intelligent.

"A better way to think about artificial intelligence and machine learning is self-learning, high-capacity data processing and data analytics, and the application of mathematical and statistical methodologies to data," he explained. "That is, of course, not a very appealing name, but that is what it is. But the self-learning or self-empowering element is very important in AI, because you have to look at it in comparison to traditional data processing."

Continuing this theme of caution, he further explained: "Like all technology, AI enhances human and organisational capability for the better, but potentially also for the worse. So, it really depends on what use you make of that tool. You can make very positive use of it. But you can also make very negative uses of it. And that's why governance of your artificial intelligence and machine learning, and potentially rules and ethics, are important."

For financial regulators, AI is proving useful to help process the vast amounts of data and reports that companies must submit. "It goes beyond human capability, or you have to put lots and lots of people onto it to process just the incoming information," he said.

Read more: Biden sets out AI Bill of Rights to protect citizens from threats from automated systems

Kerstens then mentioned AI's potential for law enforcement. Monitoring the vast volumes of money moving through the financial system for fraud, sanctions and money laundering requires very powerful systems. But this is also risky because it comes very close to mass surveillance, he said. "So, if you apply artificial intelligence or machine learning engines onto all of these flows, you really get into this dystopian future of Big Brother."

Kerstens also touched on AI's use in understanding macroeconomic developments. "Typically, macroeconomic policy assessment is very politically driven, and this blurs the objectivity of the assessment. AI assessment is much more independent, because it just looks at the data without any preconceived notions and draws conclusions, including conclusions that may not necessarily be very desirable," he said.

The US Treasury's Keller described the ultimate aim of AI as being to improve decision accuracy, forecasting and speed: "trying to use data to make scientific decisions." This includes, he continued, "testing and verifying our assumptions with data to help make sure that we don't break things, but also help us ask important questions."

He provided four AI use areas for the Bureau of the Fiscal Service: Treasury warrants (authorisations that a payment be made); fraud detection; monitoring; and entity resolution.

In the first area, he said the focus was "turning bills into literally a dataset": the bureau has experimented with using natural language processing to turn written legislation into "coherent, machine-readable data that has account codes and budgeted dollars for those account codes." In the second area, he said the focus was checking people are who they say they are ("and how we detect that at scale"); in the third area, uses include monitoring whether people are using services correctly.
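As a concrete illustration of that first use area, here is a minimal, hypothetical sketch of pulling account codes and budgeted dollars out of legislative text. The bill excerpt, the account-code format and the pattern are all invented for this example; the bureau's actual pipeline and schema are not described in this article.

import re

# Hypothetical appropriations text; the account-code format is invented.
bill_text = """
For necessary expenses of the Widget Safety Program (account 020-5500-113),
$12,400,000 is appropriated. For grants under account 020-5500-207,
$3,150,000 is appropriated.
"""

# Pair each account code with the dollar amount that follows it.
pattern = re.compile(r"account\s+(\d{3}-\d{4}-\d{3})\W+\$([\d,]+)")
appropriations = [(code, int(amount.replace(",", "")))
                  for code, amount in pattern.findall(bill_text)]
print(appropriations)  # [('020-5500-113', 12400000), ('020-5500-207', 3150000)]

A production system would need real NLP (sentence parsing, entity linking, handling of amendments and cross-references) rather than a single pattern, which is precisely why the bureau frames this as an experiment.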

"We're collecting data from so many elements, and often in large public-sector areas, the left hand doesn't talk to the right hand," he said, in the context of entity resolution. "We often need to find a way to connect these two up in such a way that we are looking at the same entity so that we can share data in the long run. So, data can be brought together and utilised by data scientists or eventually to create AI that would help these other three things to happen."
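Entity resolution itself is a well-established technique. The sketch below is a deliberately simple, hypothetical illustration of the idea (normalise names, then fuzzy-match) using only the Python standard library; the records and threshold are invented, and real systems use far richer blocking, scoring and review workflows.

from difflib import SequenceMatcher

# Two hypothetical agency records that refer to the same vendor.
payments_record = {"name": "ACME Widgets, Inc.", "city": "Raleigh"}
grants_record = {"name": "Acme Widgets Incorporated", "city": "Raleigh"}

def normalize(name):
    # Lowercase, strip punctuation and common legal suffixes before comparing.
    name = name.lower().replace(",", "").replace(".", "")
    for suffix in (" incorporated", " inc", " llc"):
        name = name.removesuffix(suffix)  # requires Python 3.9+
    return name.strip()

def same_entity(a, b, threshold=0.9):
    # Fuzzy name similarity plus an exact check on a second field.
    score = SequenceMatcher(None, normalize(a["name"]), normalize(b["name"])).ratio()
    return score >= threshold and a["city"] == b["city"]

print(same_entity(payments_record, grants_record))  # True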

Read more: Artificial intelligence in the public sector: an engine for innovation in government, if we get it right

Keller also raised ethical, upskilling and cultural considerations. If people start buying IT products that have AI organically within them, or are building them, questions should arise such as: "Are we doing it ethically? Do we have analytics standards? How are we testing? Are we actually getting value from the product? Or is it a total risk?"

He concluded his opening remarks by outlining how the bureau was building an internal data ecosystem, including a data governance council, data analytics lab, high-value use case compendium and data university.

The Centre for Data Ethics & Innovation (CDEI), which is part of the UK Department for Digital, Culture, Media and Sport, was established three years ago to drive responsible innovation across the public sector.

A huge focus is around supporting teams to think about governance approaches, the centre's deputy director, Sam Cannicott, explained. "How do they develop and deploy technology in a responsible way? How do they have mechanisms for identifying and then addressing some of the ethical questions that these technologies raise?"

The CDEI has worked with a varied cross-section of the public sector, including the Ministry of Defence (to explore responsible AI use in defence); police forces; and the Department for Education and local authorities to explore the use of data analytics in children's social care. "These are all really sensitive, often controversial, areas, but also where data can help inform decision-making," he said.

Read more: Canada to create top official to police artificial intelligence under new data law

The CDEI does not prescribe what should be done. Instead it helps different teams to think through these questions themselves.

Ultimately, the questions are complex, Cannicott said. While lots of teams might seek an easy answer, "[to] be told what you're doing is fine," it's often more complicated, particularly when we look at how you develop a system, then deploy it, and continue to monitor and evaluate. "So, we support teams to think about the whole lifecycle process."

The CDEI's current work programme is focused on three areas: building an effective AI assurance ecosystem (including exploring standards and impact assessments, as well as risk assessments that might be undertaken before a technology is deployed); responsible data access, including a focus on privacy-enhancing technologies; and transparency (the CDEI has been working with the Central Digital and Data Office to develop the UK's first public sector algorithmic transparency standard).

This is underpinned by a public attitudes function to ensure citizens' views inform the CDEI's work, which is important when it comes to the critical challenge of trust.

Dr Joseph Castle, adviser on strategic relationships and open source technologies at SAS, described how public authorities around the globe are using AI across a diverse set of fields, ranging from areas such as infrastructure and transport through to healthcare.

In government finance, he said, authorities are using analytics and AI to assess policy, risk, fraud and improper payments.

Castle, who previously worked for more than 20 years in various US federal government roles, provided two examples of SAS's work in the public sector: with Italy's Ministry of Economics and Finance (MEF), and with Belgium's Federal Public Service Finance.

In the Italian example, he said MEF used analytics to calculate risk on financial guarantees, providing up-to-date reporting for improved systematic liquidity and risk management during COVID-19; work with the Belgian ministry, meanwhile, has been on using analytics and AI to predict the impact of new tax rules.

"The most recent focus for public entities has been on AI research and governance, leading to a better understanding of AI technology itself and responsible innovation," he said. "Public sector AI maturation allows for improved service, reduced costs and trusted outcomes."

Australia's National Artificial Intelligence Centre launched in December 2021. It aims to accelerate positive AI adoption and innovation to benefit businesses and communities.

Stela Solar, who is the centre's director, described AI's ability to scale as incredibly powerful. But, she said, it is incredibly important that organisations exploring and using AI tools do so responsibly and inclusively.

In opening remarks reflecting the centre's focus, she proposed three factors that would be important to help maximise AI's impact beyond government.

The first, she said, is that more should be done to connect businesses with research- and innovation-based organisations. A national listening tour organised by the centre had found, she said, low awareness of AI's capabilities. "Unless we empower every business to be connected to those opportunities, we won't really succeed," she warned.

Her second point focused on small- and medium-sized businesses. "Much of the guidance that exists is really targeted at large enterprises to experience, create and adopt AI," she said. "But small and medium business is really struggling in this area, which is ironic, as AI really presents as a great equaliser opportunity because it can deal with scale and take action at scale. It can really uplift the impact that small and medium businesses can have."

Her third point focused on community understanding, which she described as a critical factor in accelerating the uptake of AI technologies. This includes achieving engagement from diverse perspectives in how AI is shaped, created [and] implemented.

Topics including trust in AI systems, the risk of bias and overcoming scepticism were addressed further during the webinar's Q&A.

In terms of trust, what goes into any AI tool affects what comes out. "How reliable they are [AI systems] depends on how good and how unbiased the dataset was," Kerstens said. "Does it have known biases or something that is a proxy for biases? For example, sometimes people use addresses. People's addresses, especially in countries where you have very diverse populations, and where different population groups and different racial or religious groups live in particular areas, can be a proxy for religious affiliation, or for race. If you're not careful, your artificial intelligence engine is going to build in these biases, and therefore it's going to be biased."

"It's not just about bias within AI, it's bias in the data," said Castle, emphasising the importance of responsible innovation across the analytics lifecycle.
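Kerstens' proxy-variable point can be made concrete in a few lines of code. The following hypothetical sketch (invented records, standard-library Python only) checks whether a postcode "leaks" a protected attribute: if guessing the majority group within each postcode beats the overall base rate, the postcode is acting as a proxy, and a model trained on it can become biased even though the protected attribute was never included.

from collections import Counter, defaultdict

# Hypothetical records: postcode district and a protected attribute.
records = [("NW1", "group_a"), ("NW1", "group_a"), ("NW1", "group_b"),
           ("SE5", "group_b"), ("SE5", "group_b"), ("SE5", "group_a"),
           ("E14", "group_a"), ("E14", "group_a"), ("E14", "group_a")]

by_postcode = defaultdict(Counter)
for postcode, group in records:
    by_postcode[postcode][group] += 1

# Accuracy of always guessing the overall majority group...
base_rate = max(Counter(g for _, g in records).values()) / len(records)
# ...versus guessing the majority group within each postcode.
proxy_rate = sum(c.most_common(1)[0][1] for c in by_postcode.values()) / len(records)
print(f"base accuracy {base_rate:.2f} vs with postcode {proxy_rate:.2f}")

On this toy data the postcode lifts accuracy from 0.67 to 0.78, a sign that it carries protected information and deserves scrutiny before being fed to a model.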

Read more: Brazils national AI strategy is unachievable, government study finds

Solar provided a further dimension, adding that organisations can often find themselves working with substantial gaps in data (which she referred to as "data deserts"). "It's actually been impressive to see some of the grassroots efforts across communities to gather datasets to increase representation and diversity in data," she said, giving examples from Queensland and New South Wales where, respectively, communities had provided data to help shape and steer investments and fill gaps in elderly health data.

On this theme she said that co-design of AI systems with the communities who the technology serves or affects "will go a long way to address some of the biases and also will go a long way into the question of what should be done and what shouldn't be done."

Scepticism about the use of AI from policymakers, particularly those who are not technologists, was discussed as a common challenge.

"Sometimes there's a push to use these technologies because they can be seen as a way to save money," observed Cannicott. There is also nervousness because some have seen where things have gone wrong, and they don't want to be to blame.

He emphasised the importance of experimentation, governance (having "really clear accountability and decision-making frameworks to walk through the ethical challenges that might come up and how you might address them") and public engagement.

"Some polling we did fairly recently suggested that around half of people don't think the data that government collects from them is used for their benefit," he said. "There's quite a bit of a trust gap there, [so] decision makers [have] to start demonstrating that they are able to use data in a way that benefits people's lives."

Keller emphasised the importance of incorporating recourse into AI systems. "If I build a system that detects fraud, and flag somebody as a villain and they're not, we need to give them an easy route to appeal that process," he said.
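One simple way to honour Keller's point is to make the appeal route part of the data structure itself, so a fraud flag cannot exist without one. This is a hypothetical design sketch, not the bureau's actual system; every name and field in it is invented.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FraudFlag:
    account_id: str
    model_score: float
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    status: str = "flagged"  # flagged -> under_appeal -> upheld | overturned
    appeal_contact: str = "appeals@example.gov"  # placeholder address
    appeal_reason: str = ""

    def open_appeal(self, reason: str) -> None:
        # A human reviewer, not the model, resolves the appeal from here.
        self.status = "under_appeal"
        self.appeal_reason = reason

flag = FraudFlag(account_id="A-1042", model_score=0.93)
flag.open_appeal("Payee disputes the match; identity documents attached.")
print(flag.status)  # under_appeal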

AI is often a purely technical conversation. But, when it comes to government use of AI, policy and politics inevitably get entwined.

To develop artificial intelligence, you need vast amounts of data. "Europeans tend to look at personal data protection in a different way than people in the US do," pointed out Kerstens.

Organisational leaders driven by doctrines could struggle to accept a role for AI. "If you run an organisation or a governmental entity based on politics, artificial intelligence isn't something you're going to like very much, because it is the data speaking to you," he continued. "They do like artificial intelligence and data when the data confirms a doctrinal or political view. But if the data does not support [their] view, they'll dismiss it."

Public sector agencies also need to be savvy about the AI solutions they are buying. "Increasingly, public-sector organisations are being sold off-the-shelf tools. And actually, that's quite a dangerous space to be in," said Cannicott. "Because, for example, if you [look at] children's social care (different geographies, different populations), there's all sorts of different factors in that data. If you're not clear on where the data is coming from to build those tools initially, then you probably shouldn't be using that technology. That's also where testing and experimentation is very important."

There is clearly momentum building behind AI. But an overriding theme from the webinar was the extent to which many remain in the dark or deeply sceptical.

Often Ive seen AI be implemented by someone whos very passionate, and it stays as this hobby experiment and project, said Solar, emphasising the importance of developing a base-level understanding of AI across all levels of an organisation. For it really to get the momentum across the organisation and to be rolled out into full production, with all the benefits that it can bring, you really need to bring along the policy decision-makers, the leaders the entire organisational chain, she said.

Kerstens concluded by emphasising that the story of AIs growing deployment across the public sector (and beyond) remains in the early chapters. AI is very powerful. Its just very early days, he said. But what people are most afraid of is that they dont understand how the artificial intelligence engine thinks. We should focus on productive, useful applications and not the nefarious ones.

AI's advocates will be hoping that fewer people, over time, come to compare it to the tale of Frankenstein.

The Global Government Fintech webinar "How can AI help public authorities save money and deliver better outcomes?" was held on 4 October 2022, with the support of knowledge partners SAS and Intel. You can watch the 75-minute webinar via our dedicated event page.

The Evolution of Godless Practices: Eugenics, Infanticide, and Transhumanism – The Epoch Times

Commentary

There is a straight line that runs from eugenics through infanticide to transhumanism. All three are the devil's work in the material world.

Let us explore the issue.

From biblical times to the present, a battle between good and evil has been waged. On one side, there are those who believe in a higher power and that mankind was made in the image of God, which is an immutable constant not to be trifled with or corrupted by human beings. Another way of saying that is that people in this camp believe that human nature itself has been immutable and constant through the millennia and that only by looking to God can mankind improve his condition.

On the other side are those who deny the existence of God and believe that mankind is the supreme intelligence in control of human destiny, is malleable and can be shaped through science, and can evolve to a superior form of man through planning and experimentation by natural leaders over time. A corollary for the people in this camp is that there are no moral or religious constraints on their practices intended to evolve mankind toward some conceived (and ever-changing) future vision.

Many in the first camp would characterize the ongoing battle as good versus evil. Mankind has been arguing about, defining, and redefining what it means to be good for millennia. The ancients defined good in terms of normal versus different, knowledge versus ignorance, and later right versus wrong in the context of defining laws and justice. Societies and governments were organized around these concepts.

The concept of evil has also been defined by many cultures through the ages. The dictionary now defines evil as "morally wrong or bad; immoral; wicked." Most Americans (and most people in general) have an innate understanding of what is evil on a personal level. Some refer to that understanding as a conscience. Even the people in the second camp who believe that mankind is the supreme being have vestiges of consciences informed by religion and experience.

Governments have been organized to reflect, monitor, and control the cultural norms of the people governed. Logic dictates that those governments implement and enforce those cultural norms from differing philosophical frameworks, shaped partly by their respective religious philosophies and by the various ideologies that have been developed and tested over the centuries, such as monarchical rule, Marxism, fascism, socialism, communism, corporate capitalism, patriarchy, oligarchy, philosopher kings, etc.

The totalitarian ideologies developed and tested during the 20th century (particularly fascism and communism) denied the existence of God so that the governments that exercised those ideologies would be unconstrained by religious and moral boundaries in their pursuit of developing and grooming their version of modern man.

Examples of those perversions of mankind include the following:

The Nazis were obsessed with racial theories that resulted in a pseudoscientific racial classification system in which Aryans (people of German and Nordic descent) were considered the master race at the apex of the human pyramid, destined to rule the world (on Nazi terms, of course). Jews were deemed by the Nazis to be on the lowest level of the hierarchy.

Nazi society was organized around these concepts to develop and promote those with the purest Aryan blood at the expense of those with lesser classifications in the hierarchy. For example, German boys were educated, inculcated, and groomed in Nazi principles through the Hitler Youth program. Girls were brainwashed through two parallel programs: Young Girls was an organization for girls aged 10 to 14, while the League of German Girls was for girls aged 14 to 18, with the latter being focused on comradeship, domestic duties, and motherhood.

"Soviet Man" was to be the ultimate proof that communism worked and that mankind could evolve for the better (as defined by the communists) without God's guidance. The Russian communists attempted to shape the individual consciousness, character, and social practices in order to get the people to conform to the Marxist view of the perfect citizen.

The Soviets experimented with telepathic research, cybernetic simulations, and mass hypnotism over television to control the minds of their citizenry. This was a forerunner of the mass formation psychosis being exploited by the left and globalists these days.

The Chinese Communist Party (CCP) has controlled education in China for decades to politically indoctrinate students at all levels according to CCP ideology, principles, history, racial theories, global objectives, etc., and, most importantly, to condition everyone to acquiesce to the CCP's control of all aspects of Chinese society.

CCP leader Xi Jinping said in 2019: "We need to strengthen political guidance for young people, guide them to voluntarily insist on the Party's leadership, to listen to the Party and follow the Party."

And the CCP is leading the world in implementing social controls to monitor compliance with CCP directives by all Chinese citizens. Educate, monitor, control, and discipline: the perfect world with perfectly compliant citizens envisioned for all by the Chinese communists.

Theories initiated in the 19th century bore spoiled fruit in the 20th and 21st centuries. Karl Marx's theories (Marxism) begat the Communist Manifesto, whose godless adherents continue to plague the world today. Charles Darwin's theory of evolution begat eugenics, which the National Institutes of Health's Genome Research Institute defines as "an immoral and pseudoscientific theory that claims it is possible to perfect people and groups through genetics and the scientific laws of inheritance." John Dewey's theory of progressive education similarly haunts U.S. public education to this very day.

Eugenics was/is particularly evil. Its adherents included the Nazis and Americans associated with the Population Society, the Committee on Eugenics, which studied selective and restrictive human breeding, and the American Eugenics Society.

President Franklin D. Roosevelt even promoted Nazi sympathizer and eugenicist Frederick Osborn to positions in government, including chairman of the Civilian Advisory Committee on Selective Service, chairman of the Army Committee on Welfare and Recreation, and chief of the Morale Branch of the War Department.

The Nazis used eugenics to justify the sterilization and ultimately the elimination of "undesirables," including Jews, homosexuals, gypsies, Slavs, and others. Eugenics theories led directly to the Nazi genocide that killed millions in Europe in the 1930s and 1940s.

Eugenics was also the basis for implementing sterilization laws in the United States in over 30 states, with some of those laws persisting on the books until the 1980s, according to the NIH Genome Research Institute. Over 60,000 people deemed to be "idiots," "imbeciles," "promiscuous" (females), or "feebleminded" were sterilized in the United States in the 20th century.

Eugenics also heavily influenced Margaret Sanger. A eugenicist and racist, Sanger founded the Birth Control League (1921) and its successor, Planned Parenthood (1942). She supported several eugenics initiatives: sterilization of people with mental and physical disabilities, segregation of "undesirable" criminals (for example, prostitutes, paupers, drug addicts, and the unemployed) in concentration camps, and mandatory birth control training for mothers with serious disease (choice was not part of the equation).

Another of Sanger's initiatives was the Negro Project, in which predominantly black neighborhoods were targeted for birth control programs. Its purpose was evident, as she later disclosed in a private letter in December 1939: "We don't want word to go out that we want to exterminate the Negro population."

Planned Parenthood was, of course, one of the main organizations behind the Roe v. Wade decision in 1973. The U.S. Supreme Court decision claimed that a woman's right to an abortion was implicit in the right to privacy protected by the 14th Amendment of the U.S. Constitution. As a selling point, its advocates first claimed that abortions should be performed only in rare instances, such as rape, incest, or to save the mother's life.

Once the federal government funded Planned Parenthood's abortion clinics, abortion advocates incentivized by federal money pushed the boundaries of acceptable abortions from within the first trimester to, eventually, the evil practice of partial-birth abortions, in which a child being birthed is halted halfway and killed with scissors to the neck. Thus, simple therapeutic abortions have evolved into infanticide, the barbaric killing of children.

It should be noted that Adolf Hitler's Nazis were rightfully accused of committing genocide by killing 6 million Jews during the Holocaust. Similarly, communist China has been accused of committing genocide against 2 million Uyghurs in Xinjiang. However, these numbers pale in comparison with 63 million, the number of children aborted in the United States since Roe v. Wade in 1973. That number includes over 19 million black babies. That is real genocide, whose underpinning philosophy is evil, as eugenics has become infanticide.

The newest effort by the godless camp to meddle with the natural course of humanity is the transhumanism movement, which seeks to accelerate human evolution through advanced technologies. It is a segue from eugenics because it aims to enhance the human species through the addition of advanced biological and physical (mechanical, bio-mechanical) technologies or, as Britannica puts it, to "augment or increase human sensory reception, emotive ability, or cognitive capacity as well as radically improve human health and extend human life spans."

In short, the goal is to create super-humans who will live forever. And just like eugenics, there is no natural selection involved, but rather a selective implementation by and for those willing to pay the cost (and to be experimented upon). And there is no limit to the experimentation and no ethical constraints on applying the technologies.

God versus man. Good versus evil. The eternal struggle.

First came eugenics, then came abortion on demand and infanticide, and now there is the new horizon posed by transhumanism. That is evolution of an evil kind! None of these have moral, ethical, or religious constraints; all are arbitrary, as determined by those holding political power.

The Biden administration has cleared the decks for transhumanism through the signing on Sept. 12 of the Executive Order on Advancing Biotechnology and Biomanufacturing Innovation for a Sustainable, Safe, and Secure American Bioeconomy. Into the Brave New World we go, with much trepidation!

Views expressed in this article are the opinions of the author and do not necessarily reflect the views of The Epoch Times.

Stu Cvrk retired as a captain after serving 30 years in the U.S. Navy in a variety of active and reserve capacities, with considerable operational experience in the Middle East and the Western Pacific. An oceanographer and systems analyst by education and experience, Cvrk is a graduate of the U.S. Naval Academy, where he received a classical liberal education that serves as the key foundation for his political commentary.

Evolution of the joint IAEA, IARC, and WHO cancer control assessments (imPACT Reviews) – IARC

5 October 2022

A new Policy Review by researchers from the International Agency for Research on Cancer (IARC), the International Atomic Energy Agency (IAEA), the World Health Organization (WHO), and partner institutions presents the evolution of the IAEA, IARC, and WHO joint advisory service to help countries assess national capacities and the readiness of the health system to plan and implement cancer control strategies. These assessments are known as integrated missions of the Programme of Action for Cancer Therapy (imPACT) Reviews. The Policy Review was published in The Lancet Oncology.

The researchers describe the methodology of imPACT Reviews and present several country case studies. imPACT Reviews consist of a standardized assessment of the different aspects of cancer control (prevention, early detection, diagnosis and treatment, palliation of symptoms, and survivorship) as well as cancer surveillance and governance. Each agency is responsible for specific topics; the IARC assessment covers cancer surveillance and early detection.

The joint imPACT Reviews programme supports national health authorities in planning an integrated and specific approach to cancer control and measuring progress in implementation. Since the programme began in 2005, 111 imPACT Reviews have been implemented in 96 countries and more than 800 experts have been deployed.

Veljkovikj I, Ilbawi AM, Roitberg F, Luciani S, Barango P, Corbex M, et al. Evolution of the joint International Atomic Energy Agency (IAEA), International Agency for Research on Cancer (IARC), and WHO cancer control assessments (imPACT Reviews). Lancet Oncol. Published online 26 September 2022. https://doi.org/10.1016/S1470-2045(22)00387-4

Bitcoin’s role in the evolution of money with University of Exeter’s Dr. Jack Rogers – CoinGeek

Is Bitcoin just another kind of money? An undergraduate module dedicated to Bitcoin at the University of Exeter sheds light on the question by delving into the history of money. The module, called "Bitcoin, Money and Trust," was launched in 2018 after high demand from students to learn about Bitcoin.

Dr. Jack Rogers, a senior lecturer in economics at the University of Exeter, launched the Bitcoin module as a precursor to an MSc Fintech course which he leads. He says participation increased tenfold in recent years, from fewer than 50 students when it launched to 700 this year.

Despite the impressive turnout, Jack believes that "probably the big number is partly driven for the wrong reasons: all the various hype and sense that you could get rich from this."

On this week's episode of CoinGeek Conversations, Charles Miller talks to Jack about the University of Exeter's Bitcoin teaching, the evolution of money and the role Bitcoin plays.

Jack points to the emergence of central banking as a step change in history. He quotes Felix Martin, the author of Money: The Unauthorized Biography, whom he recalls speaking of a "compromise power structure" between the central bank and the government. "For the first time in centuries, a decoupling between money and the state is happening before our eyes," Charles suggests. Jack agrees, saying a new technology that allows people to potentially pay each other without using existing fiat-based systems has indeed raised fundamental questions. He refers to the author's view on cryptocurrency and how it has led to a disruption in the payment system, which central banks have controlled for a long time.

Jack believes the disruption in central banking was coming eventually: "I think this stuff, central bank digital currencies and things that you see now, maybe it was coming anyway... I think the emergence of Bitcoin and all the hype and everything has kind of brought that forward."

Based on Jack's comments, it's safe to say that the future of money will depend on the outcome of the competition between blockchain-based payment systems. For now, he admits no one is certain as to where Bitcoin is heading.

"One of my students did a great dissertation on this, speculating that in 20 years' time, will there be loads of different types of money? What does it look like? I mean, no-one can really say," Jack said.

Dr. Jack Rogers is co-authoring a textbook alongside Brendan Lee and Neil Smith. The book will be out by the end of 2023.

Hear the whole of Dr. Jack Rogers' interview in this week's CoinGeek Conversations podcast or catch up with other recent episodes:

You can also watch the podcast video on YouTube.

Please subscribe to CoinGeek Conversations; this is part of the podcast's ninth season. If you're new to it, there are plenty of previous episodes to catch up with.

Here's how to find them:

Search for CoinGeek Conversations wherever you get your podcasts

Subscribe on iTunes

Listen on Spotify

Visit the CoinGeek Conversations website

Watch on the CoinGeek Conversations YouTube playlist

New to Bitcoin? Check out CoinGeek's Bitcoin for Beginners section, the ultimate resource guide to learn more about Bitcoin, as originally envisioned by Satoshi Nakamoto, and blockchain.

Ariana Grande’s Beauty Evolution Is Something You Have To See – The List

Centered on a fictional high school for creative teenagers, "Victorious" is arguably one of the most entertaining, funny, and stylish shows for kids and teens from the early 2010s. Ariana Grande rose to fame for portraying Cat Valentine on the show at only 16 years old (via Alexa Answers). Grande eventually starred in the spinoff, "Sam & Cat," where her loveable character moved up from a supporting role to one of the main characters. On these shows, the ditsy yet sweet Cat rocked bold red hair (via YouTube), and Grande became known for that daring hairstyle.

During this era of Grande's career, she often flaunted glossy lips, rosy cheeks, and makeup that made her eyes look bigger and wider, per YouTube, matching her character's fun-loving yet somewhat clueless personality. However, this look didn't last forever for the pop star, and Grande shared in an interview that she's more than just that playful redhead. Grande said, "The red was Cat, and that was very much a character, and it was very much a portion of my life that I love and I am so grateful for... and I think fondly of that, but again, is not me" (via YouTube).

Lady Gaga’s Beauty Evolution Is Something You Have To See – The List

The late 2010s brought about another memorable change in style for Lady Gaga, which began when she was cast as wannabe singer Ally in the hit 2018 reboot of "A Star is Born." Her red carpet looks for this era were suitably sleek and feminine, with a notable example being her 2019 Golden Globes ensemble. Matching her hair to her ballgown, Gaga looked classically beautiful, especially with the subtle shimmer on her eyelids and her nude lip (via Allure).

In 2022, Gaga gradually started to bring back some of the theatricality from her early "Mother Monster" days. Photographed on the Chromatica Ball tour, she wore heavy eyeliner and bold lips that were signature Gaga and hearkened back to her wilder looks from 2009 and 2010. The rest of her tour outfits were equally bold, combining black, latex, and lots of leather (via Variety).

From heavy bangs and embellished eyes to pared-back glossy lips and ballgowns, Gaga is a true performer through and through, no matter the occasion.

Ed Jones on Pace’s evolution and being part of the family business – Mediaweek

Pace was established in 1964 as the in-house advertising department of Heath's Motors, a Ford dealership.

Since those foundational years, the Geelong-based agency has grown, developed, and evolved into the modern and diverse company it is today. It is one of the longest-serving agencies in the country that is also family owned and operated.

Account director Ed Jones spoke to Mediaweek about Pace's long-standing partnerships with clients, being part of the family business, and the agency's future in sustainability.

Although his family ran the agency, Jones's journey to his role wasn't direct: he initially studied medicine before pursuing aviation and then doing private consultations and marketing strategies in the field.

"Then I ended up finding myself enjoying everything here at the agency. It's a very diverse place, and my grandfather started it up."

"It was never my plan to go into marketing, advertising, events, or anything like that. But funnily enough, I just stumbled across happiness, which was very unpredicted," he said.

Unlike most, Jones dipped his toe into the media industry growing up. "I used to get dragged in here during school holidays because of the family tie-ins. I was brought into the art department or whatever I could be useful in during school holidays, so call it child labour," he joked.

Jones shared that he officially began working at Pace 13 years ago, temporarily, while studying medicine. He said: "Before long, I had found myself intertwined with a number of our key accounts, and it was just fun. So, I stuck around."

Pace is led by Jones' uncle Nicholas Heath, who has been at the agency's helm for over two decades. He noted that during the 1990s, Heath focused on sustainability and long-term planning for the agency.

"He (Heath) was trying to get the agency to a point where you didn't have to think where next week's wages were coming from. With retail, automotive and other clients at that time, work was seasonal," Jones explained.

"You had four extremely painful peak workloads a year and often some lulls in between, so cash flow was not part of the operation," he added.

Jones said that his uncle examined retainer-based models and long-term contracts. He also noted that Heath landed key accounts in sectors such as healthcare, government, and tourism, which gave the agency plenty of continued momentum.

Jones continued: "The agency will be 60 years old in 2024 and has just been going through the most excruciatingly painful but most rapid period of growth in the agency's history. It's pretty gung-ho at the moment."

Jones shared that he often acts as his uncle's right-hand man to their team of 24, along with two recent full-time additions and eight regular subcontractors.

Pace has four facets as an agency (marketing, advertising and media, events, and digital) that work collaboratively.

"It's probably the most diverse and comprehensive range of services all under one roof. It's done and dusted in there," Jones said.

Jones noted that many clients come to Pace with complex problems that require a consolidated team to fix the issue and deliver, working as an integrated team with the client's organisation.

"That broad skill set is super useful for our strengths. It is pretty rare," he said.

Jones noted that over the years they have used strategic recruitment to build a solid team of hardworking, passionate people who bring intelligence, integrity and capability.

"We're able to solve very complex things quite nimbly for small or large organisations. We're finding that many small and large organisations with complex problems need people like us. It's busy, which is a good thing."

Jones shared the agency still has some of its foundation clients to this day, in addition to maintaining long-standing client partnerships, noting their average client tenure is over ten years.

"Of course, we have projects and things that come and go at a much shorter timeline than that, but our client relationships last long," he said.

Jones noted the agency's previous work with the likes of Shell, Alcoa, Viva Energy, and Haymes Paint, as well as healthcare clients and ASX-listed companies.

"Some clients may have reshaped or changed along the way because we're talking decades. But we have many clients that have been around a long time, to the point where they have more changeover than we do, and we end up being the basket of knowledge," he said.

Jones continued: "There have been some accounts where we have held the baton as they've gone through change and disruption. So, we've been the one consistent part along the way, which is rare."

"When you've got clients you work with that long, you get to know them very personally and professionally. It makes for a good, strong working relationship. You do better work because you care," he added.

Jones remained tight-lipped on Pace's recent wins but noted that over the decades the agency has worked on more than a thousand projects, many of which are part of the agency's history.

Jones shared that they often re-win old clients and projects they've previously worked with. He explained that sometimes such relationships are forced to change.

Jones said: "You find that you recross paths, and that to us is a win when you end up working with some of the more sexy, reputable brands. They give us a bit of bragging rights."

Jones added that this is often an opportunity for Pace to "reprove ourselves" or reimagine things for the brand. He noted: "Because it's not a brand-new client from scratch, often it's rediscovering an existing relationship that's got some sort of history to the place."

Pace did well during the pandemic, according to Jones; in fact, it did so overwhelmingly well that it almost became hard to keep up.

"We're about getting the job done right, to a high standard, collecting the odd award along the way, but it's not why we do it."

"You adapt. You are whatever is required on the day when clients have a downturn or there's some economic decline," he said, referring to COVID and the 2008 GFC.

"At the end of the day, we are a value-for-money based agency, and we're always a fair price tag and, hand on heart, we're going to do what we said we're going to do."

Jones added: "So being reliable and predictable to a degree, but also providing high-quality output and the agency's diversity. We can go wherever the flavour is and adapt as we need, as opposed to just being a digital or niche agency."

"In an economic downturn, we thrive because we know how to cut money and create opportunity," Jones added.

Sustainability is an area the agency has its eyes set on expanding into as it looks ahead. Jones said: "It's not just flavour of the month, but the whole sustainability sector, particularly as everyone hits for net zero. There's a lot of stuff we're doing in that kind of space."

"We also do a lot of economic development work and in tourism. We've taken it upon ourselves to provide a bit of leadership in that space," Jones added, highlighting a recent state government event led by Pace on zero-emission vehicles.

Jones noted that the event showcased plenty of technological innovation their existing clients are interested in as they adapt and reshape their footprints.

Jones also noted that the agency has been keeping busy with the demand for content as businesses catch up post-pandemic. He said: "We cannot produce enough. At the moment, we think we're on top of it and getting asked to do more."

"The flavour is shifting from clicks and basic metrics against engagement to the quality of attention we're generating and what that means," he said.

Pace re-joined as a member of the IMAA earlier this year at the suggestion of the agency's head of growth, Simon Larcey.

Jones noted that, as a member, the agency has enjoyed a range of benefits from the industry body, including industry awareness, staff training and industry representation.

Jones added: "Having a network and shoulders to tap on for various resources or requirements or knowledge is always good."

Top image: Ed Jones
