Can Warfighters Remain the Masters of AI? – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the second question (part b.) on the types of AI expertise and skill sets the national security workforce needs.

The Department of Defense is engaging in a dangerous experiment: It is investing heavily in artificial intelligence (AI) equipment and technologies while simultaneously underinvesting in the preparation its workforce will need to understand their implementation. In a previous article, Michael Horowitz and Lauren Kahn called this an AI literacy gap. America's AI workforce, in uniform or out, is not prepared to use this fast-advancing area of technology. Are we educating the masters of artificial intelligence, or are we training its servants?

The U.S. government as a whole, and the military services in particular, are flush with AI mania. One could be forgiven for thinking that dominance in AI is today's preeminent military competition. It is certainly true that advances in technology, including AI, will be drivers of national success in the future. However, we are currently in the boost phase of excitement about AI. From our perspective as cutting-edge practitioners and educators in the field of statistics as applied to military problems, it is almost certain that expectations for AI in the mid-term will not be completely met. This interplay between inflated expectations, technical realities, and eventual productive systems is reflected in the business world as well and is described as part of the Gartner Hype Cycle.

Figure 1: Gartner Hype Cycle. Notably, AI technologies are on the upswing of hype. How will the Department of Defense position itself, with a particular eye toward manpower, to survive the inevitable crash?

The current hype about AI is high and is likely to be followed by a crash. This is not a statement about the technology or even the U.S. government so much as human nature. The more promising the technology, the harder the eventual crash will be, before entering the productive phase.

As an example of the disconnect between technologies and manpower, the Defense Department recently added "data scientist" to its job series descriptions, although a universally accepted definition of what a data scientist is remains elusive. Our working definition of a data scientist is someone who sits at the intersection of the disciplines of statistics and computer science. On the back side of the curve, the glacial pace of Defense Department budgetary programming means that current AI initiatives will be around for the long haul, and that means there will need to be a cadre of individuals with the requisite education to see us through the hype cycle's "trough of disillusionment."

At the same time, the Navy in particular is shedding its AI-competent manpower at an alarming rate. By "AI-competent manpower" we mean operationally experienced officers with the requisite statistical, computer programming, and data skills to bridge advanced computing research into combat-relevant, data-driven decisions.

We have observed several trends that support this assertion. The number of Navy officers directly involved in operations and eligible for command at sea (unrestricted line officers) taking the Naval Postgraduate School's operations analysis curriculum (mathematics applied to military problems, focusing on statistics, optimization, and associated disciplines) has decreased dramatically in the past 10 years. For example, the last U.S. naval aviator to graduate with the operations analysis subspecialty did so in 2014. The Navy's assessment division (OPNAV N81), the sponsor for the operations analysis community, has also recognized this trend and directed the creation of a tailored 18-month program for unrestricted line officers, with the objective of getting more analytical talent into the fleet. Other Navy communities, such as information warfare, are only now recognizing the need for officers educated in the programming, statistical, and analytical skills needed to fully develop AI for naval warfare, and are beginning to send one or two officers annually to earn operations research degrees. We are personally aware of at least two cases where flag officers became directly involved in the detailing of officers with an operations research or systems analysis specialty. What is interesting about these cases is that these officers are considered unpromotable in their unrestricted line communities of origin; that is, they spent career time on in-depth education and are frequently penalized for it.

We write in the context of our roles as professionals, as well as retired naval officers and frequent commenters on defense policy. As such, it is our firm opinion that the Navy's future with artificial intelligence rests critically on the natural intelligence that enables and guides it.

First, It's About Perspective

The true challenges to AI lie in policy, not technology. What is the impact of AI, and what is the right historical parallel? Many organizations, both in and out of government, reason that AI is a big "computery" thing, so it should go with all of the other big computery things, which frequently means it gets categorized as subordinate to the IT department. Although IT infrastructure is a necessary component of artificial intelligence, we think this categorization is a mistake. It is clear to us that in the coming era, the difference between "warrior" and "computer guy" may become blurred to the point of non-existence. An excellent historical example is that of Capt. Joe Rochefort, who was considered derisively at the time to be what we might now call a computer geek but who, in retrospect, was one of the architects of victory at Midway and by extension the entire Pacific theater.

We think that a useful historical parallel to draw with the broad introduction of AI into the service is the introduction of nuclear power to the Navy some 65 years ago. It would have been an unthinkable folly to stop the education of nuclear-qualified engineers while introducing the USS Nautilus and USS Enterprise to the fleet. This is, in so many words, the Navy's current strategy toward technical education for its officers at the dawn of naval AI.

Similarly, while there are many offices working on AI in the Navy, there most likely needs to be a single strong personality, like Hyman Rickover for the nuclear Navy and Wayne E. Meyer for Aegis, who will unify these efforts. This individual will need to understand the statistical science behind machine learning, have a realistic appreciation for its potential operational applications, and have the authority to cultivate the necessary manpower to develop and use those applications.

Next, It's Manpower

Echoing other writers in these pages, it may seem paradoxical that the most important component of building better thinking machines is better thinking humans. However, the writing is on the wall for both industry and government: The irreplaceable element of success is to have the right people in critical jobs. The competition for professionals in statistics and AI is tight and expected to become tighter. Simply put, the military will not be able to compete for existing talent on the open market. Nor can the open market provide people who understand the naval applications of the science. As with nuclear power, in order for the Navy to successfully harness AI, it needs sailors who are educated to become its masters, not trained as its servants.

There is a shortage of people working in the fields of applied mathematics, specifically AI, and nobody will truly know how the systems developed now will react when eventually deployed and tested in situ on board actual ships conducting actual operations. The ultimate judge of the Navy's success in AI will be the crews that man the ships on which it is installed. It is our observation that the technical depth of these crews is decreasing as time progresses. This is why it is critical that the services, particularly the Navy, grow their own and build a cadre of professionals with the requisite education and experience (and who happen to be deployable). Aviators, information warfare officers, submariners, and surface officers should be inspired to obtain the technical, professional, and tactical analytical skills to best apply future AI at sea. One cannot help recalling a historical analogy: learning how best to apply new radar technology during the nighttime Battle of Cape Esperance in October 1942. In this battle, Adm. Norman Scott and his battle staff were not aboard the ship with the best radar capabilities, which resulted in confusion as to enemy locations and friendly identification. Better knowledge of this new technology might have resulted in its more efficient employment.

What will inspire officers to gain the skills to serve as masters of AI and subsequently resist seduction by the private sector, remaining in the service instead? Organizational recognition of their value, through promotion within their operational fields and opportunities to perform at a higher level much faster than they would find in the outside market. This is, sadly, not the current practice in naval personnel management. Paradoxically, time spent away from warfare communities gaining advanced education in areas such as those needed to be a master of AI is currently seen as dead time at best, and a career killer at worst. In the near future, the use of advanced algorithms to guide warfare and operational decisions will no longer be a subspecialty but rather an integral part of the warfighting mission of the Navy. Accordingly, moving away from the educational quota system derived from subspecialty requirements is a solid first step. In its place should be a Navy educational board that selects due-course officers for specific educational programs that will shape the Navy's future, not meet the requirements of current staffs.

When the Navy introduced nuclear engineering, it established a nuclear engineering school to meet its manpower requirements. When the Navy introduced the Aegis combat system, it established a dedicated Aegis school to meet its manpower requirements. The difference between these historical examples and AI is that AI does not need the same physical safeguards as radioactive materials and high-power radars. The Navy currently has the ability to better prepare its AI workforce through multiple institutions and methods, both military and civilian, including the Naval Postgraduate School, civilian institutions, and fellowships. Programs exist in these institutions that provide the programming, mathematics, and computer science skills needed to gain a deep appreciation for AI technology. Better incentivizing and using the tools already in place will allow sailors to use AI science for warfighting advantage. Where possible, the Navy should partner with industry and outside academic institutions to augment military experience with the lessons being learned commercially, resulting in a technical education with an operational purpose.

AI technology is maturing, and the educational programs exist. The critical element is the sailors who are going to be its masters for integration and deployment. These challenges will be solved internally by policy, not externally with technology. It will ultimately be those policies that determine the success of the fleet.

Harrison Schramm is a non-resident senior fellow at the Center for Strategic and Budgetary Assessments. While on active duty, he was a helicopter pilot and operations research analyst. He enjoys professional accreditation from INFORMS, the American Statistical Association, and the Royal (U.K.) Statistical Society. He is the 2018 recipient of the Clayton Thomas Prize for contributions to the profession of operations research and the 2014 Richard H. Barchi Prize. As a pilot, Schramm was awarded the Naval Helicopter Association's Aircrew of the Year (2004).

Jeff Kline is a professor of practice in the Naval Postgraduate School Department of Operations Research. Kline supports applied analytical research in maritime operations and security, tactical analysis, risk assessment, and future force composition studies. He has served on the U.S. chief of naval operations' Fleet Design Advisory Board and several naval study board committees of the National Academies. His faculty awards include the Superior Civilian Service Medal, the 2019 J. Steinhardt Award for Lifetime Achievement in Military Operations Research, the 2011 Institute for Operations Research and the Management Sciences (INFORMS) Award for Teaching of OR Practice, the 2007 Hamming Award for interdisciplinary research, and the 2007 Wayne E. Meyer Award for Excellence in Systems Engineering Research.

Image: Naval Postgraduate School (Photo by Javier Chagoya)

The Human-Powered Companies That Make AI Work – Forbes

Machine learning models require human labor for data labeling

The hidden secret of artificial intelligence is that much of it is actually powered by humans. Well, to be specific, the supervised learning algorithms that have gained much of the attention recently are dependent on humans to provide well-labeled training data. Since machines have to first be taught, and they can't teach themselves (yet), it falls upon humans to do this training. This is the secret Achilles' heel of AI: the need for humans to teach machines the things that they are not yet able to do on their own.

Machine learning is what powers today's AI systems. Organizations are implementing one or more of the seven patterns of AI, including computer vision, natural language processing, predictive analytics, autonomous systems, pattern and anomaly detection, goal-driven systems, and hyperpersonalization, across a wide range of applications. However, in order for these systems to create accurate generalizations, they must be trained on data. The more advanced forms of machine learning, especially deep learning neural networks, require significant volumes of data to create models with the desired levels of accuracy. It goes without saying, then, that the machine learning data needs to be clean, accurate, complete, and well-labeled so the resulting machine learning models are accurate. While it has always been the case that "garbage in, garbage out" holds in computing, it is especially the case with regard to machine learning data.
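
The original article contains no code, but a minimal sketch makes the dependency concrete. The example below (ours, assuming scikit-learn and NumPy are available) trains the same classifier twice, once on clean human-provided labels and once with 30% of those labels corrupted, to show how directly model accuracy tracks label quality:

```python
# Illustrative sketch: a supervised model is only as good as its labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# X stands in for raw data; y stands in for human-provided labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy with clean labels:", clean_model.score(X_test, y_test))

# Flip 30% of the training labels to simulate sloppy annotation.
rng = np.random.default_rng(0)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.3
noisy[flip] = 1 - noisy[flip]

noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)
print("accuracy with noisy labels:", noisy_model.score(X_test, y_test))
```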

According to analyst firm Cognilytica, over 80% of AI project time is spent preparing and labeling data for use in machine learning projects:

Percentage of time allocated to machine learning tasks (Source: Cognilytica)

(Disclosure: I'm a principal analyst at Cognilytica.)

Fully one quarter of this time is spent providing the necessary labels on data so that supervised machine learning approaches will actually achieve their learning objectives. Customers have the data, but they don't have the resources to label large data sets, nor do they have a mechanism to ensure accuracy and quality. Raw labor is easy to come by, but it's much harder to guarantee any level of quality from a random, mostly transient labor force. Third-party managed labeling solution providers address this gap by providing the labor force to do the labeling, combined with expertise in large-scale data labeling efforts and an infrastructure for managing labeling workloads and achieving desired quality levels.
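
One common quality mechanism in managed labeling (described here as a generic technique, not any particular vendor's pipeline) is consensus labeling: send each item to several annotators, accept the majority label when agreement clears a threshold, and escalate the rest for expert review. A minimal sketch:

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.66):
    """Majority-vote QC over multiple annotators' labels for one item.

    Returns (label, agreement) when agreement clears the threshold,
    or (None, agreement) to flag the item for expert review.
    """
    counts = Counter(annotations)
    label, votes = counts.most_common(1)[0]
    agreement = votes / len(annotations)
    return (label, agreement) if agreement >= min_agreement else (None, agreement)

# Three annotators agree, one dissents: label is accepted.
print(consensus_label(["cat", "cat", "cat", "dog"]))  # ('cat', 0.75)
# An even split falls below the threshold: item is escalated.
print(consensus_label(["cat", "dog", "cat", "dog"]))  # (None, 0.5)
```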

According to a recent report from research firm Cognilytica, over 35 companies are currently engaged in providing human labor to add labels and annotation to data to power supervised learning algorithms. Some of these firms use general, crowdsourced approaches to data labeling, while others bring their own, managed and trained labor pools that can address a wide range of general and domain-specific data labeling needs.

As detailed in the Cognilytica report, the tasks for data labeling and annotation depend highly on the sort of data to be labeled for machine learning purposes and the specific learning task that is needed, with the primary use cases falling into a handful of major categories.

These labeling tasks are becoming increasingly complicated and domain-specific as machine learning models are developed to handle ever more use cases. For example, innovative medical technology companies are building machine learning models that can identify all manner of concerns within medical images, such as clots, fractures, tumors, and obstructions. Building these models requires first training machine learning algorithms to identify those issues within images. Training the algorithms requires lots of data that has been labeled with the specific areas of concern identified. Accomplishing that labeling task requires some level of knowledge of how to identify a particular issue and how to appropriately label it. This is not a task for the random, off-the-street individual. It requires some amount of domain expertise.

Consequently, labeling firms have evolved to provide more domain-specific capabilities and have expanded the footprint of their offerings. As machine learning starts to be applied to ever more specific areas, the need for this sort of domain-specific data labeling will only increase. According to the Cognilytica report, demand for data labeling services from third parties will grow from $1.7 billion in 2019 to over $4.1 billion by 2024. This is a significant market, much larger than most might be aware of.

Increasingly, machines are doing the work of data labeling as well. Data labeling providers are applying machine learning to their own labeling efforts to perform some of the labeling, run quality-control checks on human labor, and optimize the labeling process. These firms use machine learning inference to identify data types, spot things that don't match the structure of a data column, detect potential data quality or formatting issues, and provide recommendations to users on how they could clean the data. In this way, machine learning is helping the process of improving machine learning. AI applied to AI. Quite interesting.
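
As a rough illustration of the kind of check being described (our assumption of how such a tool might work, not any specific product's logic), a pipeline can infer a column's dominant type from its values and flag non-conforming entries for cleaning:

```python
def infer_column_type(values):
    """Guess a column's dominant type and flag entries that don't conform."""
    def kind(v):
        try:
            float(v)
            return "number"
        except (TypeError, ValueError):
            return "text"

    kinds = [kind(v) for v in values]
    dominant = max(set(kinds), key=kinds.count)
    outliers = [v for v, k in zip(values, kinds) if k != dominant]
    return dominant, outliers

column = ["3.2", "4.8", "n/a", "5.1", "7"]
dominant, outliers = infer_column_type(column)
print(dominant, outliers)  # number ['n/a'] -> recommend cleaning 'n/a'
```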

For the foreseeable future, the need for human-based data labeling for machine learning will not diminish. If anything, the use of machine learning continues to grow into new domains that require new knowledge to be built and learned by systems. That requires well-labeled data in those new domains, which in turn requires the services of the hidden army of human laborers making AI work as well as it does today.

56% of marketers think AI will negatively impact branding in 2020, study says – Marketing Dive

Dive Brief:

As the use of AI expands into a growing array of marketing functions, Bynder's study suggests marketers are concerned with how the technology will impact creativity and branding. Brand building is a top priority for marketers in 2020, following a period when many turned their focus to driving short-term performance lifts.

However, marketers' concerns over automation do not seem to be impacting investments, as most are still ramping up their tech stacks and partnerships with martech companies.

"Marketing organizations readily adopted technology for analytics, digital channels and other functions that clearly benefit from automation, said Andrew Hally, SVP of global marketing at Bynder, in a statement. "The challenge ahead is to harness emerging technologies like AI to maintain creative excellence while satisfying business demand for growing volumes and faster delivery."

The Bynder report follows a December study by the Advertising Research Foundation that highlighted how differing approaches to data cause tension on marketing teams. That report revealed that the way researchers, creatives, and strategists approach research and data is preventing creative efforts from reaching their full potential. Only 65% of creatives and strategists believe research and data are important for the creative process, while 84% of researchers found them to be key, according to the report. These varying perspectives illustrate that technology can cause issues among marketing teams, despite being foundational to modern-day marketing.

How Microsoft runs its $40M ‘AI for Health’ initiative – TechCrunch

Last week, Microsoft announced the latest news in its ongoing AI for Good program: a $40M effort to apply data science and AI to difficult and comparatively under-studied conditions like tuberculosis, SIDS, and leprosy. How does one responsibly parachute into such complex ecosystems as a tech provider, and what is the process for vetting recipients of the company's funds and services?

Tasked with administering this philanthropic endeavor is John Kahan, chief data analytics officer and AI lead in the AI for Good program. I spoke with him shortly after the announcement to better understand his and Microsoft's approach to entering areas where they have never trod as a company, and where the opportunity lies for both them and their new partners.

Kahan, a Microsoft veteran of many years, is helping to define the program as it develops, he explained at the start of our interview.

John Kahan: About a year ago, they announced my role in conjunction with expanding AI for Good from being really a grants-oriented program, where we gave money away, to a program where we use data science to help literally infuse AI and data to drive change around the world. It is 100% philanthropic; we don't do anything that's commercial-related.

TechCrunch: This kind of research is still a very broad field, though. How do you decide what constitutes a worthwhile investment of resources?

Why EU will find it difficult to legislate on AI – EUobserver

Artificial intelligence (AI), especially machine learning, is a technology that is spreading rapidly around the world.

AI will become a standard tool to help steer cars, improve medical care, and automate decision-making within public authorities. Although intelligent technologies are drivers of innovation and growth, their global proliferation is already leaving serious harm in its wake.

Last month, a leaked white paper showed that the European Union is considering putting a temporary ban on facial recognition technologies in public spaces until the potential risks are better understood.

But many AI technologies, in addition to facial recognition, warrant more concern, especially from European policymakers.

More and more experts have scrutinised the threat that 'deep fake' technologies may pose to democracy by enabling artificial disinformation; or consider the Apple Card, which reportedly granted much higher credit limits to husbands than to their wives, even though they share assets.

Global companies, governments, and international organisations have reacted to these worrying trends by creating AI ethics boards, charters, committees, guidelines, etcetera, all to address the problems this technology presents, and Europe is no exception.

The European Commission set up a High-Level Expert Group on AI to draft guidelines on ethical AI.

Unfortunately, an ethical debate alone will not help to remedy the destruction caused by the rapid spread of AI into diverse facets of life.

The latest example of this shortcoming is Microsoft, one of the largest producers of AI-driven services in the world.

Microsoft, which has often tried to set itself apart from its Big Tech counterparts as a moral leader, has recently taken heat for its substantial investment in facial recognition software used for surveillance purposes.

"AnyVision" is allegedly being used by Israel to track Palestinians in the West Bank. Although investing in this technology goes directly against Microsoft's own declared ethical principles on facial recognition, there is no redress.

It goes to show that governing AI, especially exported technologies or those deployed across borders, through ethical principles does not work.

The case with Microsoft is only a drop in the bucket.

Numerous cases will continue to pop up or be uncovered in the coming years, in all corners of the globe, given a functioning and free press, of course.

This problem is especially prominent with facial recognition software, as the European debate reflects. Developed in Big Tech, facial recognition products have been procured by customs and migration authorities, police forces, security services, militaries, and more.

This is true in many regions of the world: in America and the UK, as well as in several states in Africa and Asia.

Promising more effective and accurate methods to keep the peace, law enforcement agencies have adopted the use of AI to super-charge their capabilities.

This comes with specific dangers, though, as shown in numerous reports from advocacy groups and watchdogs saying that the technologies are flawed and deliver disproportionately more false matches for women and people with darker skin tones.

If law enforcement agencies know that these technologies have the potential to be more harmful to subjects who are more often vulnerable and marginalised, then there should be adequate standards for implementing facial recognition in such sensitive areas.

Ethical guidelines, whether from Big Tech or from international stakeholders, are not sufficient to safeguard citizens from invasive, biased, or harmful practices of police or security forces.

Although these problems have surrounded AI technologies in previous years, this has not yet resulted in successful regulation to make AI "good" or "ethical", terms that mean well but are incredibly hard to define, especially on an international level.

This is why, even though actors from the private sector, government, academia, and civil society have all been calling for ethical guidelines in AI development, these discussions remain vague, open to interpretation, non-universal, and most importantly, unenforceable.

In order to stop the faster-is-better paradigm of AI development and remedy some of the societal harm already caused, we need to establish rules for the use of AI that are reliable and enforceable.

And arguments founded in ethics are not strong enough to do so; ethical principles fail to address these harms in a concrete way.

As long as we lack rules that work, we should at least use guidelines that already exist to protect vulnerable societies to the best of our abilities. This is where the international human rights legal framework could be instrumental.

We should be discussing these undue harms as violations of human rights, utilising international legal frameworks and language that has far-reaching consensus across different nations and cultural contexts, is grounded in consistent rhetoric, and is in theory enforceable.

AI development needs to promote and respect the human rights of individuals everywhere, not continue to harm society at a growing pace and scale.

There should be baseline standards for AI technologies that are compliant with human rights.

Documents like the Universal Declaration of Human Rights and the UN Guiding Principles, which steer private-sector behaviour in human rights-compliant ways, need to set the bar internationally.

This is where the EU could lead by example.

By refocusing on these existing conventions and principles, Microsoft's investment in AnyVision, for example, would be seen as not only a direct violation of its internal principles, but also as a violation of the UN Guiding Principles, forcing the international community to scrutinise the company's business activities more deeply and systematically, ideally leading to redress.

Faster is not better. The fast development and dissemination of AI systems has led to unprecedented and irreversible damage to individuals all over the world. AI does, indeed, hold huge potential to revolutionise and enhance products and services, and this potential should be harnessed in a way that benefits everyone.

AI Predicts Coronavirus Could Infect 2.5 Billion And Kill 53 Million. Doctors Say That's Not Credible, And Here's Why – Forbes

An AI-powered simulation run by a technology executive says that Coronavirus could infect as many as 2.5 billion people within 45 days and kill as many as 52.9 million of them. Fortunately, however, conditions of infection and detection are changing, which in turn changes incredibly important factors that the AI isn't aware of.

And that probably means we're safer than we think.

Probably being the operative word.

A new Coronavirus tracker app, with data on infections, deaths, and survival

Rational or not, fear of Coronavirus has spread around the world.

Facebook friends in Nevada are buying gas masks. Surgical-quality masks are selling out in Vancouver, Canada, where many Chinese have recently immigrated. United and other airlines have canceled flights to China, and a cruise ship with thousands of passengers is quarantined off the coast of Italy after medical professionals discovered one infected passenger.

A new site that tracks Coronavirus infections globally says we are currently at 24,566 infected, 493 dead, and 916 recovered.

All this prompted James Ross, co-founder of fintech startup HedgeChatter, to build a model for estimating the total global reach of Coronavirus.

"I started with day over day growth, he told me, using publicly available data released by China. [I then] took that data and dumped it into an AI neural net using a RNN [recurrent neural network] model and ran the simulation ten million times. That output dictated the forecast for the following day. Once the following days output was published, I grabbed that data, added it to the training data, and re-ran ten million times.

The results so far have successfully predicted the following day's publicly released numbers within 3%, Ross says.
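
Ross hasn't published his network, so the sketch below is only a stand-in: it reproduces his walk-forward protocol (train on everything observed, forecast the next day, fold in the new figure, retrain) but swaps the RNN for a simple log-linear growth fit. All numbers are illustrative, not the actual Chinese case data.

```python
import numpy as np

def walk_forward_forecast(cases):
    """Fit log-linear growth to all data seen so far; predict tomorrow's total.

    A toy stand-in for Ross's retrain-daily RNN loop: same walk-forward
    protocol, far simpler model. `cases` is cumulative case counts by day.
    """
    days = np.arange(len(cases))
    slope, intercept = np.polyfit(days, np.log(cases), 1)
    return float(np.exp(intercept + slope * len(cases)))

# Illustrative (not actual) early-outbreak cumulative counts:
observed = [2700.0, 4400.0, 6000.0, 7700.0, 9800.0]
for day in range(5, 8):
    prediction = walk_forward_forecast(observed)
    print(f"day {day}: predicted ~{prediction:,.0f} cumulative cases")
    observed.append(prediction)  # in reality, append the published figure instead
```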

The results were shocking. Horrific, even.

Coronavirus predictions via a neural net, assuming conditions don't change. Note: doctors say conditions will change, and are changing.

From 50,000 infections and 1,000 deaths after a week to 208,000 infections and almost 4,400 deaths after two weeks, the numbers keep growing as each infected person infects others in turn.

In 30 days, the model says, two million could die. And in just 15 more days, the death toll skyrockets.

But there is good news.

The model doesn't know every factor, as Ross acknowledges.

And multiple doctors and medical professionals say the good news is that the conditions and data fed into the neural network are changing. As those conditions change, the results will change massively.

One important change: the mortality rate.

"If a high proportion of infected persons are asymptomatic, or develop only mild symptoms, these patients may not be reported and the actual number of persons infected in China may be much higher than reported," says Professor Eyal Leshem at Sheba Medical Center in Israel. "This may also mean that the mortality rate (currently estimated at 2% of infected persons) may be much lower."

Wider infection doesn't sound like good news, but if it means that the death rate is only 0.5% or even 0.1%, Coronavirus is all of a sudden a much less significant problem.
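
The arithmetic behind that point is straightforward: the naive death rate divides reported deaths by confirmed cases, so every undetected mild case pushes the true rate down. Using the tracker figures quoted above (the underreporting factors below are assumptions for illustration):

```python
# Figures quoted earlier in the article; underreporting factors are assumptions.
deaths, confirmed = 493, 24_566
print(f"naive death rate: {deaths / confirmed:.1%}")  # ~2.0%

# If mild or asymptomatic cases mean true infections are 4x or 20x higher:
for factor in (4, 20):
    print(f"{factor}x underreporting -> {deaths / (confirmed * factor):.2%}")
    # 4x -> ~0.50%, 20x -> ~0.10%
```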

Also, now that the alarm has gone out, behavior changes.

And that changes the spread of the disease.

"Effective containment of this outbreak in China and prevention of spread to other countries is expected to result in a much lower number infected and deaths than estimated," Leshem says.

Dr. Amesh A. Adalja, a senior scholar at Johns Hopkins Center for Health Security, agrees.

"The death rate is falling as we understand that the majority of cases are not severe, and once testing is done on larger groups of the population, not just hospitalized patients, we will see that the breadth of illness argues against this being a severe pandemic."

That's one of the key factors: who are medical doctors seeing? What data are we not getting?

"The reported death rate early in an outbreak is usually inflated because we investigate the sickest people first and many of them die, giving a skewed picture," says Brian Labus, an assistant professor at the UNLV school of public health. "The projections seem unrealistically high. Flu infected about 8% of the population over 7-8 months last year; this model has one-third of Earth's population being infected in 6 weeks."
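
Labus's sanity check is easy to reproduce; assuming a world population of roughly 7.8 billion (a 2020 estimate, not a figure from the article):

```python
predicted_infected = 2.5e9
world_population = 7.8e9  # rough 2020 estimate, assumed
print(f"{predicted_infected / world_population:.0%} of humanity in ~45 days")  # ~32%
```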

All these factors combined create potentially large changes in both the rate of infection and mortality, and even small changes have huge impacts on computer forecasts, says Dr. Jack Regan, CEO and founder of LexaGene, which makes automated diagnostic equipment.

"Small changes in transmissibility, case fatality rate, etc., can have big changes in total worldwide mortality rate."

Even so, we're not completely out of the woods yet.

"To date, with every passing day, we have only seen an increase in the number of cases and total deaths, Regan says. As each sick individual appears to be infecting more than one other - the rate of spread seems to be increasing (i.e. accelerating), making it even more difficult to contain. It appears clear that this disease will continue to spread, and arguably - is unlikely to be contained and as such may very well balloon into a worldwide pandemic.

In other words, despite all medical efforts, Coronavirus is likely to go global.

But, thanks to all those medical efforts, it's unlikely to be as deadly as predicted.

It's worth noting, after all, that the common flu, which has been around forever and is blamed for killing 50 million people after World War I, is still around. So far this season, the flu has infected 19 million people, caused 180,000 hospitalizations, and killed 10,000 ... just in the United States.

And no one's buying masks, closing borders, or stopping flights for that.

As for the technologist who created the AI-driven model in the first place? No one would be happier if its predictions turn out to be just bad dreams.

"Although AI and neural nets can be used to solve for and/or predict many things, there are always additional variables which need to be added to fine-tune the models," Ross told me. "Hopefully governments will understand that additional proactive action today will result in less reactive action tomorrow."

Those Three Clever Dogs Trained To Drive A Car Provide Valuable Lessons For AI Self-Driving Cars – Forbes

Perhaps this dog would prefer driving the car, just like three dogs that were trained to do so.

We've all seen dogs traveling in cars, including how they like to peek out an open window and enjoy the fur-fluffing breeze and dwell in the cacophony of scents that blow along in the flavorful wind.

The life of a dog!

Dogs have also frequently been used as living props in commercials for cars, pretending in some cases to drive a car, such as in the Subaru "Barkleys" advertising campaign that initially launched on TV in 2018 and continued in 2019, proclaiming that Subaru cars were officially "dog tested and dog approved."

Cute, clever, and memorable.

What you might not know or might not remember is that there were three dogs that were trained to drive a car and had their moment of unveiling in December 2012, when they were showcased driving a car on an outdoor track (the video posted to YouTube has amassed millions of views).

Yes, three dogs named Monty, Ginny, and Porter were destined to become the first true car drivers on behalf of the entire canine family.

Monty at the time was an 18-month-old giant schnauzer cross, while the slightly younger Ginny at one year of age was a beardie whippet cross, and Porter was a youthful 10-month-old beardie.

All three were the brave astronauts of their era and were chosen to not land on the moon but be the first to actively drive a car, doing so with their very own paws.

I suppose we ought to call them dog-o-nauts.

You might be wondering whether it was all faked.

I can guess that some might certainly think so, especially those that already believe that the 1969 moon landing was faked, and thus dogs driving a car would presumably be a piece of cake to fake in comparison.

The dog driving feat was not faked.

Well, let's put it this way: the effort was truthful in the sense that the dogs were indeed able to drive a car, albeit with some notable constraints involved.

Let's consider some of the caveats:

Specially Equipped Driving Controls

First, the car was equipped with specialized driving controls to allow the dogs to work the driving actions needed to steer the car, use the gas, shift gears, and apply the brakes of the vehicle.

The front paws of the dog driver were able to reach the steering wheel and gear-stick, while the back paws used extension levers to reach the accelerator and brake pedals. When a dog sat in the driver's seat, it did so on its haunches.

Of course, I don't think any of us would quibble about the use of specialized driving controls. I hope that establishing physical mechanisms to operate the driving controls would seem quite reasonable and not out of sorts per se.

We should willingly concede that having such accouterments is perfectly okay, since it's not access to the controls that determines driving acumen but rather the ability to appropriately use those controls that is the core consideration.

By the way, the fact that they operated the gear shift is something of a mind-blowing nature, particularly when you consider that most of today's teenage drivers have never worked a stick shift and have always used only an automatic transmission.

Dogs surpass teenage drivers in the gear-stick realm, it seems.

Specialized Training On How To Drive

Secondly, as another caveat, the dogs were given about 8 weeks of training on how to drive a car.

I don't believe you can carp about the training time; realize that teenagers oftentimes receive weeks or even months of driver training prior to being able to drive a car on their own.

When you think about it, an 8-week or roughly two-month time frame to train a dog on nearly any complex task is remarkably short and illustrates how smart these dogs were.

One does wonder how many treats were given out during that training period, but I digress.

Focused On Distinct Driving Behaviors

Thirdly, the dogs learned ten distinct behaviors for purposes of driving.

For example, one behavior consisted of shifting the car into gear. Another behavior involved applying the brakes. And so on.

You might ponder this aspect for a moment.

How many distinct tasks are involved in the physical act of driving a car?

After some reflection, you'll realize that in some ways the act of driving a car is extremely simple.

You need to steer, turning the wheel either to the left or right, or keeping it straight ahead. In addition, you need to be able to use the accelerator, pressing either lightly or strongly, and you need to use the brake, again pressing either lightly or strongly. Plus, we'll toss into the mix the need to shift gears.

In short, driving a car does not involve an exhaustive or especially complicated myriad of actions.

It makes sense that we've inexorably distilled car driving into a small set of simple chores.

Early versions of cars had many convoluted tasks that had to be manually undertaken. Over time, the automakers aimed to make car driving so simple that anyone could do it.

This aided the widespread adoption of cars by the populace as a whole and led to the blossoming of the automotive industry by being able to sell a car to essentially anyone.

Driving On Command

Fourth, and the most crucial of the caveats, the dogs were commanded by a trainer during the driving act.

I hate to say it, but this caveat is the one that regrettably undermines the wonderment and imagery of the dogs driving a car.

Sorry.

A trainer stood outside the car and yelled commands to the dogs, telling them to shift gears or to steer to the right, etc.

Okay, let's all agree that the dogs were actively driving the car, working the controls of the car, and serving as the captain of the ship in that they alone were responsible for the car as it proceeded along the outdoor track. They were even wearing seat-belts, for gosh sake.

That's quite amazing!

On the other hand, they were only responding to the commands being uttered toward them.

Thus, the dogs weren't driving the car in the sense that they were presumably not gauging the roadway scenery nor mentally calculating what driving actions to undertake.

It would be somewhat akin to putting a blindfolded human driver into the driver's seat and asking them to drive, with you sitting next to the driver and telling them what actions to take.

Yes, technically, the person would be the driver of the car, though I believe we'd all agree they weren't driving in the purest sense of the meaning of driving.

By and large, driving a car in its fullest definition consists of being able to assess the scene around the vehicle and render life-or-death judgments about what driving actions to take. Those mental judgments are then translated into our physical manipulation of the driving controls, such as opting to hit the gas or slam on the brakes.

One must presume that the dogs were not capable of the full driving act and were instead like the blindfolded human driver who merely reacts to commands given to them.

Does this mean that those dogs weren't driving the car?

I suppose it depends upon how strict you want to be about the definition of driving.

If you are a stickler, you would likely cry foul and assert that the dogs were not driving a car.

If you are someone with a bit more leniency, you probably would concede that the dogs were driving a car, and then, under your breath and with a wee bit of a smile, mutter that they were determinedly and doggedly driving that car.

Perhaps we shouldn't be overly dogmatic about it.

You might also be wondering whether a dog could really, in fact, drive a car, doing so in the fuller sense of driving, if the dog perchance was given sufficient training to do so.

In other words, would a dog have the mental capacity to grasp the roadway status and be able to convert that into suitable driving actions, upon which the dog would then work the driving controls?

At this juncture in the evolution of dogs, one would generally have to say no, namely that a dog would not be able to drive a car in a generalized way.

That being said, it would potentially be feasible to train a dog to drive a car in a constrained environment whereby the roadway scenery was restricted, and the dog did not need to broadly undertake a wholly unconstrained driving task.

Before I dig more deeply into this topic herein, please do not try placing your beloved dog into the driver's seat of your car and forcing it to drive.

Essentially, I'm imploring you: don't try this at home.

I mention this warning because I don't want people to suddenly get excited about tossing their dog into the driver's seat to see what happens.

Bad idea.

Dont do it.

As mentioned, the three driving dogs were specially trained, and drove only on a closed-off outdoor track, doing so under the strict supervision of their human trainers and with all kinds of safety precautions being undertaken.

The whole matter was accomplished by the Royal New Zealand Society for the Prevention of Cruelty to Animals (SPCA), done as a publicity stunt that aimed to increase the adoption of neglected or forgotten dogs.

It was a heartwarming effort with a decent basis; please don't extrapolate the matter into any unbecoming and likely dangerous attempts at replication.

Speaking of shifting gears, one might wonder whether the dogs that drove a car might provide other insights to us humans.

Here's today's question: What lessons, if any, can be learned from dogs driving cars that could be useful for the advent of AI-based true self-driving cars?

Let's unpack the matter and see.

The Levels Of Self-Driving Cars

It is important to clarify what I mean when referring to true self-driving cars.

True self-driving cars are ones where the AI drives the car entirely on its own, without any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don't yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that's been arising lately: in spite of those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Spiritual-Moral Values

For Level 4 and Level 5 true self-driving vehicles, there wont be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

If that's the case, it seems like there's no opportunity for dogs to drive cars.

Yes, that's true, namely that if humans aren't driving cars then there seems little need or basis to ask dogs to drive cars.

But that's not what we can learn from the effort to teach dogs to drive a car.

Let's tackle some interesting facets that arose when dogs were tasked with driving a car:

Humans Giving Commands

First, recall that the dogs, while sitting at the steering wheel, were responding to commands given to them.

In a manner of speaking (pun intended), you could suggest that we humans will be giving commands to the AI driving systems that are at the wheel of true self-driving cars.

Using Natural Language Processing (NLP), akin to how you converse with Alexa or Siri, as a passenger in a self-driving car you will instruct the AI about various aspects of the driving.

In theory, though, you won't be telling the AI to hit the gas or pound on the brakes. Presumably, the AI driving system will be adept enough to handle all of the everyday driving aspects involved, and it's not your place to offer commands about doing the driving chore.

Instead, you'll tell the AI where you want to go.

You might divert the journey by suddenly telling the AI that you are hungry and want to swing through a local McDonald's or Taco Bell.

You might explain to the AI that it can drive leisurely and take you through the scenic part of town, since you aren't in a hurry and are a tourist in the town or city.

In some ways, you can impact the driving task, perhaps telling the AI that you are carsick and want it to slow down or not take curves so fast.
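
To make the interaction model concrete, here is a deliberately toy sketch (entirely hypothetical; no actual vehicle API works this way) of how passenger utterances might map to high-level ride adjustments rather than low-level driving controls:

```python
def parse_passenger_command(utterance):
    """Map a passenger utterance to a high-level ride adjustment.

    A naive keyword matcher standing in for real NLP; note that
    nothing here touches steering, throttle, or brakes directly.
    """
    text = utterance.lower()
    if "hungry" in text or "drive-through" in text:
        return {"intent": "reroute", "stop": "nearest fast food"}
    if "scenic" in text or "no hurry" in text:
        return {"intent": "route_preference", "style": "scenic"}
    if "carsick" in text or "slow down" in text:
        return {"intent": "comfort", "max_speed": "reduced"}
    return {"intent": "unknown", "action": "ask passenger to rephrase"}

print(parse_passenger_command("I'm hungry, swing through a drive-through"))
print(parse_passenger_command("Take the scenic route, I'm in no hurry"))
```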

There are numerous open questions, as yet unresolved, about the interaction between human passengers and AI driving systems (see my detailed discussion at this link here).

For example, if you tell the AI to "follow that car," similar to what happens in movies or when you are trying to chase after someone, should the AI obediently do so, or should it question why you want to follow the other car?

We presumably don't want AI self-driving cars that are stalking others.

Analytics Insight Magazine Names ‘The 10 Most Innovative Global AI Executives’ – Business Wire

SAN JOSE, Calif. & HYDERABAD, India--(BUSINESS WIRE)--Analytics Insight Magazine, a brand of Stravium Intelligence, has named 'The 10 Most Innovative Global AI Executives' in its January issue.

The magazine issue features ten seasoned disruptors who have significantly contributed towards the AI-driven transformation of their respective organizations and industries. These foresighted innovators are driving the next generation of intelligent offerings across current business landscapes globally. Here are the AI Executives who made the list:

Featuring as the Cover Story is Gary Fowler, who serves as the CEO, President, and Co-founder of GSD Venture Studios. Previously, he co-founded top CIS accelerators GVA and SKOLKOVO Startup Academy, where the majority of participating companies achieved success soon after their launch. Most recently, Gary co-founded Yva.ai with David Yang, one of Russia's most famous entrepreneurs.

The issue further includes:

Lane Mendelsohn: President of Vantagepoint AI, Lane is an experienced executive with a demonstrated history of working in the computer software industry. He is skilled in Business Planning, Analytical Skills, Sales, Enterprise Software, and E-commerce.

Chethan KR: Chethan is the CEO of SynctacticAI. He is an entrepreneur with 13+ years of experience in the IT and software development industry. He sets the vision for his company's platform, derives growth strategies, and establishes partnerships with industries.

Christopher Rudolf: Christopher is the Founder and CEO of Volv Global and has over 30 years of experience as a technology entrepreneur and business advisor, working with many blue-chip organisations to solve their critical global-scale data problems.

Kalyan Sridhar: Kalyan Sridhar is the Managing Director at PTC, responsible for managing its operations in India, Sri Lanka and Bangladesh. He has 28 years of experience in senior executive roles spanning Sales, Business Development, Business Operations and Channel Sales in the IT industry.

Kashyap Kompella: Kashyap serves as the CEO and Chief Analyst of rpa2ai Research, and has 20 years of experience as an Industry Analyst, Hands-on Technologist, Management Consultant and M&A Advisor to leading companies and startups across sectors.

Kumardev Chatterjee: Kumardev is the Co-founder and CEO of Unmanned Life and also serves as Founder and Chairman of the European Young Innovators Forum. He holds an MSc. in Computer Science from University College London.

Niladri Dutta: Niladri, CEO at Tardid Technologies, has a background in protocol stacks, large transactional applications, and analytics. He ensures that the company has a long-term strategy in place that keeps the team and customers excited all the time.

Sarath SSVS: Sarath serves as the CEO and Founder of two AI-driven companies, SeeknShop.IO and IntentBI. He is a seasoned innovator with over 13 years of experience in machine learning, data science, and product management.

Prithvijit Roy: Prithvijit is the Founder and CEO of BRIDGEi2i Analytics Solutions. His specialties are business analytics, big data, data mining, shared services, knowledge process outsourcing (KPO), analytics consulting services, and managed analytics services, among others.

The disruptive wave of AI has made significant impacts across multiple industries. AI breakthroughs have even influenced the vision and practices of top industry executives and pushed them toward becoming innovative AI pioneers in their own space. Ushering in the new era, more and more leaders are spurring innovation and spearheading the transformation journey to translate their revamped vision into best AI practices.

Read the detailed coverage here. For more information, please visit https://www.analyticsinsight.net.

About Analytics Insight

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by AI, big data and analytics companies across the globe. The Analytics Insight Magazine features opinions and views from top leaders and executives in the industry who share their journey, experiences, success stories, and knowledge to grow profitable businesses.

To set up an interview or advertise your brand, contact info@analyticsinsight.net.

Fintech workforce to expand 19% by 2030 thanks to AI, Cambridge University predicts – Finextra

In a recent report, the Cambridge Centre for Alternative Finance (CCAF) and the World Economic Forum (WEF) found that rather than being a single instrument for blanket application across the industry, AI is better viewed as a toolkit used to tinker and build services in an abundance of ways to achieve a variety of objectives.

Using data collected in a global survey during 2019, the report analysed a sample of 151 fintechs and incumbents across 33 countries to paint a rich picture of how artificial intelligence is being developed and deployed within the financial services sector.

While 77% of respondents noted that they expect AI to become an essential business driver across the financial services industry in the near term, the report found that the ways incumbents and fintechs are leveraging AI technologies differ in a number of ways.

A higher share of fintechs tend to be creating AI-based products and services, employing autonomous decision-making systems, and relying on cloud-based systems.

Incumbents, by contrast, appear to focus on harnessing AI to improve existing products, which might explain why AI appears to have a higher positive impact on fintechs' profitability.

30% of the fintechs surveyed indicated a substantial increase in profit as a result of AI, while only 7% of incumbents indicated such profitability.

While incumbents tend to leverage AI capabilities to foster process innovation within existing products and systems, fintechs are setting a wider trend of selling AI-enabled products as a service.

This approach presents a distinct new value proposition for firms (largely fintechs at this stage) to achieve two-fold economies of scale.

The firms can leverage both the prong of training AI and the prong of servicing new business areas to offer superior services with unique selling points. The report refers to this as an "AI flywheel," where business innovation becomes a self-reinforcing cycle.

Another key difference is that while incumbents expect AI technologies to replace almost 9% of jobs within their organisation by 2030, fintechs forecast that AI will expand their workforce by 19%.

Reductions are expected to be most numerous within investment management, with an anticipated net decrease of 24% over the next 10 years. The report predicts that, in line with these figures, 37,700 new fintech roles will be created within the pool of firms in the surveyed sample.
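
A quick back-of-envelope check ties those two figures together; if the 19% expansion applies uniformly across the surveyed fintechs, the 37,700 new roles imply the aggregate workforce size below (our inference, not a number stated in the article):

```python
# Back-of-envelope check on the cited figures (our inference, not the report's).
growth_rate = 0.19   # fintechs' forecast workforce expansion by 2030
new_roles = 37_700   # predicted new fintech roles across the surveyed sample

implied_workforce = new_roles / growth_rate
print(f"implied current fintech workforce in sample: ~{implied_workforce:,.0f}")
# ~198,421 people, if the 19% expansion applied uniformly
```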

The report also highlights a topic that holds particular currency at present: the quality of and access to data, and the talent required to interpret that data. "Regardless of how innovative an AI technology is, its ability to deliver real economic value is contingent upon the data it consumes," the report says.

This concern is of huge importance for sustainable finance, as firms look increasingly toward AI technologies to drive investment returns in line with ESG policy.

The report says that responses illustrate that AI-enabled impact assessment and sustainable investing appear to possess the highest correlation with high AI-induced returns; however, real-world adoption may still be thwarted by data-related issues and a lack of algorithmic explainability.

Given the central role AI is increasingly playing within the financial services industry, the FCA and the Bank of England recently established the AI Public Private Forum (AIPPF) to explore the technical and public policy issues surrounding the adoption of AI and machine learning across the banking system.

Finextra Research and ResponsibleRisk will be focusing on sustainable finance in investment and asset management at the second SustainableFinance.Live Co-Creation Workshop in March 2020.

Register your interest for the event, where you will be able to discuss the demand for sustainability, the challenges that lie ahead for sustainable investment and how firms across financial services and technology can achieve the UN's Sustainable Development Goals by 2030.

Read more here:

Fintech workforce to expand 19% by 2030 thanks to AI, Cambridge University predicts - Finextra

The age of AI: supercharging Europe’s tech transformation – Euronews

Business Planet travels to Dublin to see how one innovative firm's AI tech is offering children immersive play and learning experiences through voice recognition technology.

Machines that make learning child's play

Irish company SoapBox Labs says it has developed the world's most accurate and safe voice recognition technology for children. Founded in 2013, the company uses artificial intelligence tailored to help kids play and learn more immersively using their own voices.

"While people are probably familiar with speech recognition for adults, in voice assistance or in smart speakers, our technology has been built specifically for children, and thats modelling their voices and speech behaviours using state of the art AI technology,explains SoapBox Labs CEO, Patricia Scanlon.

The company's technology has attracted the tech giants. The firm is already working with Microsoft to boost child literacy and tackle problems surrounding child data privacy.

Its technology was also recently chosen by Reach Every Reader, a US-based literacy project backed by Mark Zuckerberg and his wife Priscilla Chan through their Chan Zuckerberg Initiative.

Going forward, the company aims to be the leading provider of voice tech and so-called kid-tech in both smart toys and education.

"We already have 27 global commercial licenses and we've scaled the platform into multiple languages, so our main markets now are education, and that's literacy and language learning globally, and now we are moving into off-line voice enabling for smart toys and robotics."

SoapBox Labs received a grant of nearly 1.5 million euros from the European Innovation Council (EIC) Accelerator. The Accelerator seeks to give top-class innovators and entrepreneurs the financial rocket fuel they need to take off, while also putting them on the right path to starting and running a successful business.

"We can provide up to 2.5 million euros in grant funding, up to 15 million euros in equity funding, obviously networking, coaching and then leveraging in other private investors, explains Professor Mark Ferguson, the Chair of the European Innovation Council Advisory Board. He adds: "Were looking for people with game changing ideas, that are going to create world-beating companies, great for Europeans, [with a] really dedicated team, fantastic technology, thats going to disrupt and create good markets.

Despite the benefits and opportunities connected with the rise of such game-changing technology, the advent of Artificial Intelligence has some people concerned. With that in mind, the European Commission presented its approach to AI and robotics in 2018 with its Communication on AI for Europe. This seeks to recognise the strategic importance of these technologies as key drivers of economic development and business, while also emphasising that AI needs to be developed in a responsible way in accordance with EU values.

In the next few weeks, the European Commission will present its latest proposals for further developing the EU's approach to artificial intelligence.

"So the EIC Accelerator is different from all of the other accelerators that are out there. Essentially, we are interested in companies with a high risk, a risk where perhaps the private investor may not yet go. And those companies need to be really disruptive. They need to be disrupting existing markets, or have a completely new product, and we are willing to invest there. We will want to crowd in subsequent investment. We'll want to de-risk it, and we'll want to make sure there's a very secure business plan, and we'll want to make sure the technology and the science is good, but we are willing to take a higher risk than others, and that's the real difference."

"Well, there are various things. First of all, there is a grant application, you can get up to 2.5 million euros, and equity up to 15 million euros. So that's the investment part of the portfolio. Then, of course, there is networking and consultancy, so you can get help from a whole range of consultants across Europe who will provide coaching in whatever area the company needs, and then there's networking with larger corporates. So that's essentially what the EIC is doing. And its real objective is to invest in disruptive technologies that will be fantastic businesses for Europe, that will hopefully solve a lot of societal problems and that will grow in scale to be large companies. I mean, I often paraphrase it by saying, those companies are solving relevant problems in responsible ways."

"I think you have to have a good business plan. You have to have the good technology. You have to have the right team, so that team can take the business and grow it. You need to know what you're asking for, your plans. So that's basically the kind of tips. There's no restriction. I mean, this is about building the businesses for Europe. We want Europeans to get the success out of these companies, both in their economy and in terms of what the companies are doing. And we want those companies to be world-leading... so my strong recommendation is: read the form, answer the questions, feel free to network and talk with people. Make an application. You will get feedback, and that's what I would advise people to do."

Read this article:

The age of AI: supercharging Europe's tech transformation - Euronews

Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes Around The World – BuzzFeed News

Facebook confirmed to BuzzFeed News that it has sent a cease-and-desist letter to Clearview AI, asking the company to stop using information from Facebook and Instagram.

As legal pressures and US lawmaker scrutiny mount, Clearview AI, the facial recognition company that claims to have a database of more than 3 billion photos scraped from websites and social media, is looking to grow around the world.

A document obtained via a public records request reveals that Clearview has been touting a rapid international expansion to prospective clients using a map that highlights how it either has expanded, or plans to expand, to at least 22 more countries, some of which have committed human rights abuses.

The document, part of a presentation given to the North Miami Beach Police Department in November 2019, includes the United Arab Emirates, a country historically hostile to political dissidents, and Qatar and Singapore, the penal codes of which criminalize homosexuality.

Clearview CEO Hoan Ton-That declined to explain whether Clearview is currently working in these countries or hopes to work in them. He did confirm that the company, which had previously claimed that it was working with 600 law enforcement agencies, has relationships with two countries on the map.

"Clearview is focused on doing business in USA and Canada," Ton-That said. "Many countries from around the world have expressed interest in Clearview."

Albert Fox Cahn, a fellow at New York University and the executive director of the Surveillance Technology Oversight Project, told BuzzFeed News that he was disturbed by the possibility that Clearview may be taking its technology abroad.

"It's deeply alarming that they would sell this technology in countries with such a terrible human rights track record, enabling potentially authoritarian behavior by other nations," he said.

Clearview has made headlines in recent weeks for a facial recognition technology that it claims includes a growing database of some 3 billion photos scraped from social media sites like Instagram, Twitter, YouTube, and Facebook, and for misrepresenting its work with law enforcement by falsely claiming a role in the arrest of a terrorism suspect. The company, which has received cease-and-desist orders from Twitter, YouTube, and Facebook, argues that it has a First Amendment right to harvest data from social media.

"There is also a First Amendment right to public information," Ton-That told CBS News Wednesday. "So the way we have built our system is to only take publicly available information and index it that way."

Cahn dismissed Ton-That's argument, describing it as "more about public relations than it is about the law."

"No court has ever found the First Amendment gives a constitutional right to use publicly available information for facial recognition," Cahn said. "Just because Clearview may have a right to scrape some of this data, that doesn't mean that they have an immunity from lawsuits from those of us whose information is being sold without our consent."

Scott Drury, a lawyer representing a plaintiff suing Clearview in Illinois for violating a state law on biometric data collection, agreed. "Clearview's conduct violates citizens' constitutional rights in numerous ways, including by interfering with citizens' right to access the courts," he told BuzzFeed News. "The issue is not limited to scraping records, but rather whether a private company may scrape records with the intent of performing biometric scans and selling that data to the government."

Potentially more problematic is Clearview's inclusion of nine European Union countries (among them Italy, Greece, and the Netherlands) on its expansion map. These countries have strict privacy protections under the General Data Protection Regulation (GDPR), a 2016 law that requires businesses to protect the personal data and privacy of EU citizens. Joseph Jerome, a policy counsel for the Center for Democracy and Technology, said it was unclear whether Clearview AI's technology would violate the GDPR.

Jerome said that GDPR protects any information that could be used to identify a person biometric data included but that the EU made exceptions for law enforcement and national security. Clearview also highlighted other non-EU European countries on its map that it hoped to do business with, including the United Kingdom and Ukraine.

Beyond the map, which also points to plans to expand to Brazil, Colombia, and Nigeria, Clearview has boasted about its exploits abroad. Its website has a large testimonial from a detective constable in the sex crimes unit of a Canadian law enforcement agency, who claims that Clearview is "hands-down the best thing that has happened to victim identification in the last 10 years." When asked, Ton-That declined to identify the detective or the agency they serve.

Clearview and Ton-That have on occasion exaggerated the company's business relationships, and the presentation sent to North Miami Beach has a few misrepresentations, including two examples in which it suggested that it was used in the investigation of crimes in New York. An NYPD spokesperson previously denied that the department has any relationship with the company and said that the software was not used in either investigation.

Clearview AI has also encouraged law enforcement to test its facial recognition tool in unusual situations, such as identifying dead bodies. The presentation shows graphic images of a dead man and mugshots of a person whom Clearview claimed matched the deceased victim.

Clearview AI has been aggressively promoting its service to US law enforcement. It has suggested that police officers run wild with the tool, encouraging them to test it on friends, family, and celebrities. Emails obtained via a public record request show the company challenging police in Appleton, Wisconsin, to run 100 searches a week.

"Investigators who do 100+ Clearview searches have the best chances of successfully solving crimes with Clearview in our experience," the email said. "It's the best way to thoroughly test the technology. You never know when a search will turn up a match."

There are currently no federal laws that restrict facial recognition or scraping biometric data from the internet. On Thursday, the House Committee on Homeland Security will hold a hearing to examine the Department of Homeland Security's use of facial recognition technology. Ton-That has previously said Clearview is working with DHS.

On Wednesday, Facebook told BuzzFeed News that it had sent multiple letters to Clearview AI to clarify the social network's policies and request information about what the startup was doing. In those letters, Facebook, which owns Instagram, asked that Clearview cease and desist from using any data, images, or media from its social networking sites. Facebook board member Peter Thiel is an investor in Clearview.

"Scraping people's information violates our policies, which is why we've demanded that Clearview stop accessing or using information from Facebook or Instagram," a Facebook spokesperson said. A spokesperson for Thiel did not immediately respond to a request for comment.

Update, Feb. 6, 2020, 12:28 a.m. ET: The House Committee on Homeland Security will hold the hearing on facial recognition. An earlier version of this post misstated the committee.

Read more from the original source:

Clearview AI Wants To Sell Its Facial Recognition Software To Authoritarian Regimes Around The World - BuzzFeed News

Why AI Is The Perfect Drinking Buddy For The Alcoholic Beverage Industry – Analytics India Magazine

The use of AI-driven processes to increase efficiency in the F&B market is no longer an anomaly. A host of breweries and distilleries have incorporated the technology to not only develop flavour profiles faster, but also for other functions, including packaging, marketing, as well as to ensure they meet all food-safety regulations.

Although the intention is not to replace the brewmaster or distiller, the technology becomes a thrilling learning experiment that equips them with multiple data points that could help them come up with innovative ideas.

Here is a list of companies that have successfully blended technology into their beverages to make a heady cocktail:

IntelligentX claims to be the world's first company to use AI algorithms and machine learning to create innovative beers that adapt to users' taste preferences. Based on customer feedback, the recipes for its brews go through multiple iterations to generate various combinations. IntelligentX currently has four varieties: Black AI, Golden AI, Pale AI, and Amber AI.

How does it work?

Codes printed on the cans direct customers to the Facebook Messenger app, where they are asked to give feedback on the beer they tried by answering a series of 10 questions. The data points gathered are then fed into an AI algorithm to spot trends and inform the overall brewing process. Using that feedback, the AI also learns to ask better questions each time, in order to get better outcomes.

Although the insights gathered give brewmasters a window into understanding customer preferences better, the final decision on whether to heed the AI's recommendations and create a fresh brew rests with them. What is certain is that, without technological intervention, such a large collection of data would be not only difficult but also extremely time-consuming to process.
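
To make the loop concrete, here is a minimal Python sketch of the kind of feedback aggregation described above. The beer varieties are taken from the article, but the question fields, scores and thresholds are invented for illustration and do not reflect IntelligentX's actual algorithm.

```python
from collections import defaultdict
from statistics import mean

# Each response: (beer variety, trait asked about, score from 1 to 5).
responses = [
    ("Black AI", "bitterness", 4), ("Black AI", "bitterness", 2),
    ("Golden AI", "sweetness", 5), ("Black AI", "sweetness", 3),
]

def summarise(responses):
    """Average the 1-5 scores per (variety, trait) so trends are visible."""
    buckets = defaultdict(list)
    for variety, trait, score in responses:
        buckets[(variety, trait)].append(score)
    return {key: mean(scores) for key, scores in buckets.items()}

def suggest_tweaks(summary, low=2.5, high=4.0):
    """Flag traits drifting out of range; a human brewer makes the final call."""
    tweaks = []
    for (variety, trait), avg in summary.items():
        if avg < low:
            tweaks.append(f"{variety}: consider dialling back {trait}")
        elif avg > high:
            tweaks.append(f"{variety}: {trait} is landing well, keep it")
    return tweaks

print(suggest_tweaks(summarise(responses)))
```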

Multi-award-winning Swedish distillery Mackmyra Whisky collaborated with Microsoft and Finnish tech company Fourkind to create the world's first AI-generated whiskey. Built with Microsoft Azure and Machine Learning Studio, Fourkind's AI solution was fed Mackmyra's existing recipes and customer feedback data, and used them to create thousands of different recipes.

Following this, the distillery's master blender, Angela D'Orazio, used her experience to review which ingredients would work well together, filtering the recipes down to the more desirable combinations. As this process was repeated multiple times over, the AI algorithm picked up on which combinations worked best and, using machine learning, began producing more desirable mixes. Eventually, D'Orazio filtered the list down to five recipes, finally arriving at recipe number 36, which ultimately became the world's first AI-generated whiskey to go into production.
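
A toy version of this generate-score-filter loop is sketched below. The ingredients, the stand-in scoring model and its weights are all invented; Fourkind's production pipeline on Azure has not been published, so this illustrates only the overall shape of the process.

```python
import random

INGREDIENTS = ["sherry cask", "bourbon cask", "peated malt", "birch sap"]

def generate_recipes(n):
    """Propose n random blends (ingredient fractions summing to 1)."""
    recipes = []
    for _ in range(n):
        weights = [random.random() for _ in INGREDIENTS]
        total = sum(weights)
        recipes.append({ing: w / total for ing, w in zip(INGREDIENTS, weights)})
    return recipes

def predicted_rating(recipe):
    """Stand-in for a model trained on past recipes and customer feedback."""
    return 3.0 + 1.5 * recipe["sherry cask"] - abs(recipe["peated malt"] - 0.2)

candidates = generate_recipes(5000)
top_five = sorted(candidates, key=predicted_rating, reverse=True)[:5]
for recipe in top_five:  # a human master blender reviews only these
    print({k: round(v, 2) for k, v in recipe.items()},
          round(predicted_rating(recipe), 2))
```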

This AI-generated but human-curated whiskey has opened the doors to new and innovative combinations that would otherwise never have been discovered. Monikered "Intelligens", the first batch of this blend was launched in September 2019.

Carlsberg, the Copenhagen-based brewery, started a multimillion-dollar project in 2017 to analyse the different flavours in its beer using AI. Unlike IntelligentX, which uses customer feedback to improve its brew, Carlsberg has accomplished this by developing a taste-sensing platform that helps identify the differential elements of the flavours.

Under the ongoing Beer Fingerprinting Project, 1000 different beer samples are created each day. With the help of advanced sensors, the flavour fingerprint of each sample is determined. Following this, different yeasts are analysed to map the flavours and help make a distinction between them. Thus, the data collected by this AI-powered system could potentially be used to develop new varieties of brews.

Launched in collaboration with Microsoft, Aarhus University and the Technical University of Denmark, the project marked a shift from conventional practices that did not involve any technology.

AB InBev, the brewer of Budweiser and Corona, has also jumped on the AI bandwagon to shake up its business. The company has invested in a slew of initiatives to improve how it brews beer. The Beer Garage is one such initiative: sitting at the intersection of the startup ecosystem and the AB InBev business, it focuses on developing technology-driven solutions. ZX Ventures, another offshoot of its larger business, was launched in 2015 with the objective of creating new products that address consumer needs.

Anchored around these enterprises, AB InBev is using machine learning capabilities to stay ahead of the curve across three broad areas of its business.

Sugar Creek Brewing (SCB), a maker of Belgian-inspired ales, has begun integrating AI and IoT into its brewing process to improve both the quality of the beer and its manufacturing process. It started when a significant problem came to light at the packaging stage.

When the beer was loaded into bottles, it was observed that the fill levels were inconsistent. Another problem was the excessive foaming inside the bottles, which spiked the oxygen levels in the beer, something known to ruin the flavour and reduce the beer's shelf life.

After SCB partnered with IBM, the tech giant installed a camera at SCB's warehouse to take pictures of the beer as it crossed the bottling line. IBM's team of engineers combined the images with other data collected during the packaging operations and uploaded it all to the cloud. At this point, brewers at SCB also provided specific criteria which they had found to be useful, and Watson algorithms were then left to interpret the large amount of data quickly and solve the problem. Having been losing more than $30,000 a month in beer spillage, SCB found a solution by building AI and IoT into its brewing processes.
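
The core of such a system can be surprisingly simple. The sketch below flags bottles whose camera-estimated fill level deviates sharply from the batch average; the measurements and threshold are placeholders, and IBM has not published the production pipeline, so treat this as an illustration of the idea rather than the actual Watson solution.

```python
from statistics import mean, stdev

# Fill levels (mm) estimated from camera frames on the bottling line.
fill_levels_mm = [302, 301, 299, 303, 288, 300, 311, 298]

def flag_outliers(levels, z_cut=1.5):
    """Return indices of bottles whose fill deviates > z_cut std devs."""
    mu, sigma = mean(levels), stdev(levels)
    return [i for i, x in enumerate(levels) if abs(x - mu) > z_cut * sigma]

print(flag_outliers(fill_levels_mm))  # -> [4, 6]: divert these before packing
```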

See the original post:

Why AI Is The Perfect Drinking Buddy For The Alcoholic Beverage Industry - Analytics India Magazine

Twitter says AI tweet recommendations helped it add millions of users – The Verge

Twitter had 152 million daily users during the final months of 2019, and it says the latest spike was thanks in part to improved machine learning models that put more relevant tweets in people's timelines and notifications. The figure was released in Twitter's Q4 2019 earnings report this morning.

Daily users grew from 145 million in the prior quarter and 126 million during the same period a year earlier. Twitter says this was primarily driven by product improvements, such as the increased relevance of what people are seeing in their main timeline and their notifications.

By default, Twitter shows users an algorithmic timeline that highlights what it thinks they'll be most interested in; for users following few accounts, it also surfaces likes and replies by the people they follow, giving them more to scroll through. Twitter's notifications will also highlight tweets that are being liked by people you follow, even if you missed that tweet on your timeline.
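
For readers curious what "increased relevance" means mechanically, here is a hedged sketch of an engagement-ranked feed: score each candidate tweet, then sort. The feature names and weights are invented; Twitter's real models are vastly larger and are not public.

```python
import math

def relevance(tweet, follows_author, hours_old):
    """Toy logistic score mixing social proximity, engagement and freshness."""
    x = (1.2 * follows_author
         + 0.8 * math.log1p(tweet["likes_from_your_network"])
         - 0.15 * hours_old)
    return 1.0 / (1.0 + math.exp(-x))  # squash to a 0-1 relevance score

candidates = [
    {"id": 1, "likes_from_your_network": 12},
    {"id": 2, "likes_from_your_network": 0},
]
ranked = sorted(candidates,
                key=lambda t: relevance(t, follows_author=True, hours_old=3.0),
                reverse=True)
print([t["id"] for t in ranked])  # most "relevant" tweet first
```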

Twitter has continually been trying to allay concerns about its user growth. The service's monthly user count shrank for a full year going into 2019, leading it to stop reporting that figure altogether. Instead, it now shares daily users, a metric that looks much rosier.

Compared to many of its peers, though, Twitter still has an enormous amount of room to grow. Snapchat, for comparison, reported 218 million daily users during its final quarter of 2019. Facebook reported 1.66 billion daily users over the same time period.

Twitter also announced a revenue milestone this quarter: it brought in more than $1 billion in quarterly revenue for the first time. The total was just over the milestone, at $1.01 billion for its final quarter, up from $909 million in the same quarter the prior year.

Last quarter, Twitter said that its ad revenue took a hit due to bugs that limited its ability to target ads and share advertising data with partners. At the time, the company said it had taken steps to remediate the issue, but it didn't say whether it was resolved. In this quarter's update, Twitter says it has since shipped remediations to those issues.

See original here:

Twitter says AI tweet recommendations helped it add millions of users - The Verge

THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satis – Business Insider India

The insurance sector has fallen behind the curve of financial services innovation - and that's left hundreds of billions in potential cost savings on the table.

The most valuable area in which insurers can innovate is the use of artificial intelligence (AI): It's estimated that AI can drive cost savings of $390 billion across insurers' front, middle, and back offices by 2030, according to a report by Autonomous NEXT seen by Business Insider Intelligence. The front office is the most lucrative area to target for AI-driven cost savings, with $168 billion up for grabs by 2030.

There are three main aspects of the front office that stand to benefit most from AI. First, chatbots and automated questionnaires can help insurers make customer service more efficient and improve customer satisfaction. Second, AI can help insurers offer more personalized policies for their customers. Finally, by streamlining the claims management process, insurers can increase their efficiency.

In the AI in Insurance Report, Business Insider Intelligence will examine AI solutions across key areas of the front office - customer service, personalization, and claims management - to illustrate how the technology can significantly enhance the customer experience and cut costs along the value chain. We will look at companies that have accomplished these goals to illustrate what insurers should focus on when implementing AI, and offer recommendations on how to ensure successful AI adoption.

The companies mentioned in this report are: IBM, Lemonade, Lloyd's of London, Next Insurance, Planck, PolicyPal, Root, Tractable, and Zurich Insurance Group.

See the original post here:

THE AI IN INSURANCE REPORT: How forward-thinking insurers are using AI to slash costs and boost customer satis - Business Insider India

AI startup Cresta launches from stealth with millions from Greylock and a16z – TechCrunch

As Silicon Valley's entrepreneurs cluster around the worldview that artificial intelligence is poised to change how we work, investors are deciding which use cases make the most sense to pump money into right now. One focus has been the relentless communication between companies and customers that takes place at call centers.

Call center tech has spawned dozens if not hundreds of AI startups, many of which have focused on automating services and using robotic voices to point customers somewhere they can spend money. There has been a lot of progress, but not all of those products have delivered. Cresta is more focused on using AI suggestions to help human contact center workers make the most of an individual call or chat session and lean on what's worked well for past interactions that were deemed successful.

"I think that there will always be very basic boring stuff that can be automated, like frequently asked questions and 'Oh, what's the status of my order?'," CEO Zayd Enam says. "But there's always the role of the person that's building the relationship between the company and the customer, and that's a really strategic role for companies in the modern age."

Udacity co-founder Sebastian Thrun is the startup's board chairman and is listed as a co-founder. Enam met Thrun during his PhD research at Stanford, which focused on workplace productivity. Cresta is launching from stealth and announcing that it has raised $21 million in funding from investors including Greylock Partners and Andreessen Horowitz. The company recently closed a $15 million Series A round.

Cresta wants to use AI to school customer service workers and salespeople on how to close the deal.

There's quite a lot of turnover in contact center jobs, and that can leave companies reticent to spend a ton of time investing in each employee's training. Naturally, there are some inherent issues: the workers interacting with an individual customer might not have the experience necessary to suggest a solution that a more seasoned colleague would. In terms of live feedback, for many, fumbling through paper scripts at their desk can be about as good as it gets. Cresta is hoping that by tapping improvements in natural language processing, its software can help alleviate some stress for contact center workers and help them move conversations in the direction of selling something else for their company.
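
A bare-bones illustration of the "lean on what's worked before" idea follows: match the live customer message against past successful chats and surface the agent reply that followed. Real systems like Cresta's presumably use learned embeddings; this sketch uses simple word overlap, and all the example data is invented.

```python
def overlap(a, b):
    """Crude similarity: fraction of shared words between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

# (customer message, agent reply that led to a successful outcome)
past_successes = [
    ("my order never arrived", "I'm sorry about that, let me track it for you."),
    ("can I upgrade my plan", "Absolutely, the annual plan would save you 20%."),
]

def suggest_reply(live_message):
    """Return the past reply whose triggering message best matches the live one."""
    best = max(past_successes, key=lambda pair: overlap(live_message, pair[0]))
    return best[1]

print(suggest_reply("I want to upgrade my current plan"))
```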

Cresta is entering a field where there's already quite a bit of interest from established software giants. Salesforce, Google and Twilio all operate AI-driven products for contact centers. Even with substantial competition, Enam believes Cresta's team of 30 can offer its customers a lot more individual attention.

"We're one of the few technical teams where we're just obsessed with the customer, to the point where it's normal for people on our team to fly to the customer and live by a call center in an Airbnb for a week," Enam said. "When Greylock led the Series A, they had heard that and said that's what gave them so much conviction that we were the team to solve the problem."

Sun Microsystems co-founder Andy Bechtolsheim, Mark Leslie and Vivi Nevo are also investors in Cresta.

Read the original post:

AI startup Cresta launches from stealth with millions from Greylock and a16z - TechCrunch

Two named to run Harvard chemistry department in wake of academic’s arrest on criminal charge in China case – The Boston Globe

"This was Professor Lieber's final year as Chemistry and Chemical Biology (CCB) chair, and our office was in the process of considering potential successors," Stubbs said. "Both Professors Dan Kahne and Ted Betley are held in very high regard by their colleagues, and both were strongly recommended for the chair role."

Stubbs said that given "the urgency of attending to the students and postdoctoral scholars in the Lieber group, the need to draw the CCB community together in mutual support, and the ongoing demands of administration of a large and complex department, we decided that co-chairmanship was a good approach. I am very grateful to Professors Betley and Kahne for their willingness to step into these roles on such short notice, and I look forward to working in partnership with them."

The interim chairs boast impressive credentials.

Betley runs Harvard's Betley Research Group, which works in the field of synthetic inorganic chemistry to design "new complexes capable of activating unreactive chemical bonds," says Betley's biography on Harvard's website. "We design catalysts comprised of first-row transition elements where precise control of the molecular electronic structure leads to reactivity in organometallic catalysis and small molecule activation."

The bio says Betley "has been recognized by the Technology Review as one of the top 35 US technological innovators, as well as by the NSF, DOE, and DOD with Early Career Awards."

Kahne's lab at Harvard, the Kahne Research Group, is interested in the problem of antibiotic resistance, his online bio says. "To develop new approaches to treat resistant bacterial infections, we focus on the protein machines that assemble the outer membrane that protects Gram-negative bacteria from toxic molecules."

Federal prosecutors, meanwhile, are focused on Lieber, who's charged with making false statements to federal authorities. He's currently free on $1 million bond and hasn't yet entered a plea.

The case stretches back to 2011, when a professor at a leading Chinese university e-mailed a contract to Lieber. He told Lieber he had been recommended for a global recruitment program, part of the communist government's Thousand Talents Plan to lure high-level scientific talent and, in some cases, reward them for stealing proprietary information, federal investigators have said.

A few days later, Lieber traveled to China's Wuhan University of Technology to sign a long-term agreement. When the terms were finalized, he would be paid $50,000 a month, $158,000 in living expenses, and $1.5 million to establish a research lab at the Chinese university, according to legal filings.

But Lieber kept that secret from Harvard, according to federal prosecutors, and when questioned by Defense Department investigators in 2018, denied he had ever participated in the Thousand Talents program.

Harvard officials last week said Lieber, who was arrested at his university office, had been placed on paid administrative leave.

"The charges brought by the U.S. government against Professor Lieber are extremely serious," the university said in a prior statement. "Harvard is cooperating with federal authorities, including the National Institutes of Health, and is initiating its own review of the alleged misconduct."

Tonya Alanez of the Globe Staff contributed to this report.

Travis Andersen can be reached at travis.andersen@globe.com. Follow him on Twitter @TAGlobe.

The rest is here:
Two named to run Harvard chemistry department in wake of academic's arrest on criminal charge in China case - The Boston Globe

Can negative emission technologies overcome climate catastrophe? | News – Chemistry World

Humanity is running out of time to deal with the climate crisis. The UN's Intergovernmental Panel on Climate Change says that we need to limit atmospheric carbon dioxide to less than 450 parts per million in order to have a chance of keeping average global surface temperatures from rising more than 1.5°C by the end of the century. That's been identified as the safety margin that would avoid irreversible, adverse climate change effects.

But CO2 levels keep rising, and the amount in the atmosphere topped 415 ppm in 2019. Given current trends in emissions, it seems likely that we will surpass the 450 ppm threshold within 13 to 15 years, according to Klaus Lackner, director of Arizona State University's Center for Negative Carbon Emissions.

Even to meet the more modest goals of the Paris Agreement, we would need to eliminate net emissions in industrialised countries by the middle of the century, a target that many scientists see as unrealistic.

"We have the technology to do this for most emissions, but for some, like agricultural methane and aviation, we can't," says Stephen Pacala, an ecologist at Princeton University who chaired a committee for the US National Academies of Sciences, Engineering, and Medicine that issued a report on negative emissions technologies (Nets) back in 2018.

Nets offer the hope of not just stopping new emissions, but reclaiming some of the CO2 that has already been released into the atmosphere. While every effort must be made to reduce such emissions, in some cases Nets may be less expensive and disruptive, according to the report by Pacala's committee.

"We will probably overshoot the target even if we work as hard as we can to reduce emissions, so we need to think about balancing the books," says Lackner. "Either we stop using fossil fuels altogether, or for every tonne of carbon we release, we also sequester one tonne."

Before the industrial revolution, the planet's carbon cycle was mostly in balance. CO2 dissolves in and out of the surface of the oceans, and is sucked out of the atmosphere by photosynthetic plants and algae, which then release it again when they die. Every year these two carbon sinks exchange around 367 billion tonnes of CO2 each with the atmosphere, according to Britton Stephens, who studies the carbon cycle at the National Center for Atmospheric Research in Boulder, Colorado. Slowly, large amounts of carbon also get locked away long-term at the bottom of oceans and in fossil fuels.

But, as humans began burning those fossil fuels, that long-dormant carbon went back into the cycle faster than the land and ocean could remove it. "The system is inherently in balance," Stephens explains. "But that little bit extra matters: the coal and oil we dug up is screwing up that balance."

Humans are pumping around 36 billion tonnes of CO2 into the atmosphere each year, with another 5.5 billion coming from land use changes like deforestation. The oceans and terrestrial plants have sped up their carbon sequestration in response, sucking up about 9 billion and 14 billion tonnes respectively, leaving around 18 billion tonnes piling up in the atmosphere every year.
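
Those flux figures can be sanity-checked with back-of-envelope arithmetic, using the numbers quoted above (all in billions of tonnes of CO2 per year):

```python
# Net accumulation is simply human emissions minus the extra uptake.
fossil, land_use = 36.0, 5.5        # human emissions (Gt CO2/yr)
ocean_sink, land_sink = 9.0, 14.0   # additional uptake in response (Gt CO2/yr)
net_accumulation = fossil + land_use - ocean_sink - land_sink
print(net_accumulation)  # -> 18.5, i.e. "around 18 billion tonnes" per year
```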

The hope is that Nets, in combination with reductions in new releases, will help us to drive that 18 billion tonne annual net increase in CO2 emissions down to zero, and then beyond. This would begin lowering global atmospheric CO2 concentrations to closer to where they were before the industrial revolution.

"We haven't cleaned up the litter from the past 200 years," says Lackner. "We have to go backwards and clean up not just the future, but the past."

The job will be huge. Not only will the carbon dioxide already in the atmosphere need to be removed, but as that concentration starts falling, the carbon previously sequestered by the ocean and land biomass will begin coming back out, as the system strives to remain in equilibrium. In order to bring the concentration of CO2 in the atmosphere down by 100 ppm, Lackner estimates that about 40 billion tonnes of CO2 would need to be removed from the air every year for 40 years. That figure is roughly equal to the total amount of carbon dioxide humanity releases every year.

There are several different Nets that could help to turn back the clock on Earth's carbon emissions. The simplest, and cheapest, of these is not really a technology at all: it's just changing our land-use practices to reduce deforestation, and planting more trees to expand existing forests.

As those new trees grow, they will take up CO2 from the atmosphere and store it for decades. Reforestation and afforestation efforts have the potential to make a significant contribution to negative emissions, but the process is land-hungry, according to Pacala.

Some experts, however, argue that there is enough land available to begin major reforestation efforts without needing to encroach on farmland or cities. A study led by researchers at the Swiss university ETH Zurich, published last year, found there is room for almost an extra 1 billion hectares of tree cover on the planet using currently available land. They estimated that these extra trees could ultimately capture two-thirds of human-made carbon emissions since the industrial revolution, and store more than 750 billion tonnes of CO2, around a quarter of what's currently in the atmosphere.

Countries and environmental groups are already moving forward with reforestation efforts. The UN's Trillion Tree Campaign supports tree-planting efforts around the world, and claims that 13.6 billion trees have already been planted on its watch. In Canada alone, Prime Minister Justin Trudeau pledged during the last election campaign to plant 2 billion trees in the country over the next 10 years.

Most scientists, however, don't think that planting trees can provide all of the negative emissions that the world needs. For example, many experts say the ETH Zurich study did not take into account the CO2 that will be released by the oceans as the atmospheric concentration drops, so they say even a billion more hectares of trees would be insufficient.

"I don't think photosynthesis can deliver what we need without a huge impact on the land," says Lackner. "We'd need to almost double existing forests if we want to hide that much carbon."

There are other options, however, that might be effective in concert with expanding forests.

One option, known as Biomass Energy with Carbon Capture and Storage (Beccs), aims to extract bioenergy from biomass and capture and store the carbon. The process involves using the same land to grow forests over and over again, burning the wood to provide energy while capturing and sequestering the carbon elsewhere.

"It's a mechanism for transferring CO2 that was originally in the air to underground," explains Chris Rayner, an organic chemist at the University of Leeds in the UK. "Exactly the opposite of what we've been doing for the past 200 years."

The basic process for biomass carbon capture is similar to the carbon capture and storage (CCS) units that are being demonstrated at scale at coal-fired power plants around the world, which mainly use amine chemistry to remove CO2 from the gas stream of the plant. The main difference is that biomass emissions contain fewer impurities, like sulfur, so desulfurisation is not required before the releases go to the capture unit. Although there are successful examples of coal CCS at scale, so far there is no full-scale demonstration of CCS for biomass energy.
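
For context, conventional amine scrubbing captures CO2 by reacting it with an amine solvent to form a carbamate salt, broadly CO2 + 2 RNH2 <=> RNHCOO- + RNH3+, with the solvent then regenerated by heating. That is standard textbook chemistry rather than a description of C-Capture's solvent, which, as noted below, is explicitly not amine-based.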

Rayner, however, is hopeful that will soon change. A company he helped to found in Leeds, called C-Capture, is working with the Drax power plant in northern England to prove that Beccs works. Underlying C-Capture's technology is an entirely new chemistry developed by Rayner and his colleagues, which is not based on amines.

"We went back to first principles and developed a new chemistry which has better performance, reduced toxicity, and is compatible with a wider array of building materials," Rayner says.

He is not yet ready to share the details of that chemistry, but confirms that a small demonstration unit has been operating at Drax since February 2019, capturing about one tonne of CO2 daily. The project will soon expand to about 100 tonnes per day, with the ultimate goal of reaching 10,000 tonnes each day at full scale. This could enable Drax to become the world's first negative emissions power station, the company says.

The IPCC estimates that globally, Beccs could potentially remove around 10 billion tonnes of CO2 from the atmosphere each year. But Stephens is less optimistic that the technology will play a big role in decarbonisation. The emissions associated with harvesting, processing and transporting the wood mean that there are very few places where there would be a net benefit.

"It doesn't seem to have as big a potential as reforestation, or just not cutting the trees down in the first place," he says.

A third promising negative emissions technology is direct air capture: sucking CO2 out of the atmosphere without any biological intermediary. Under this method, huge industrial scrubbers push air over the same chemical sorbents used in CCS systems to catch CO2, which can then be concentrated for storage. While the process is currently energy-intensive and expensive, its one advantage over CCS systems is that it can be done anywhere.

"One of the nice features of air capture is the plant can go where you want the CO2 to end up, not where it is produced," Lackner explains. Air capture plants can also be sited to leverage renewable energy sources: for example, in areas where wind does the work of moving the air for you, or in places where solar energy can provide much of the electricity required.

Once the CO2 is captured, there are many options for dealing with it. It can be stored in underground reservoirs, turned into natural gas, sold as a raw material to fill fire extinguishers or make fizzy drinks, or injected into basalt formations where it mineralises into rock.

One company based in Zurich, Climeworks, is already operating commercial direct air capture plants in several countries across Europe, including Switzerland, Italy, Germany, the Netherlands and Iceland. The company has a variety of corporate and private customers, and offers a monthly subscription for those who want to decrease their carbon footprint. "They're interested in reducing their emissions directly, they want to do more than just offset them," says Climeworks spokesperson Louise Charles.

Depending on the method of carbon sequestration chosen, the potential for direct air capture is huge. Geological storage, for example, could provide almost limitless long-term storage, according to the Royal Society and the Royal Academy of Engineering. The capture units themselves are modular, allowing as many to be used as necessary for any given project. The only significant limiting factor on the technology is the cost: since the market for CO2 as a raw material is small, there is no real commercial driving force for deploying the technology at large scales.

For Climeworks, the cost to capture one tonne of CO2 is currently around $600. That's still too high for widespread adoption of the technology, according to Lackner. The target is $100 per tonne, and the cost of direct capture is expected to come down fairly quickly in the future, as the technology improves and the price of renewable energy falls. Even so, the price tag on capturing all the excess CO2 that humanity produces at $100 per tonne would still be $1.8 trillion every year; global GDP is around $80 trillion.
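
The arithmetic behind that figure is straightforward:

```python
# Cost of capturing the annual excess at the $100/tonne target price.
excess_tonnes_per_year = 18e9   # ~18 Gt CO2 accumulating annually (see above)
cost_per_tonne = 100            # target price; Climeworks is ~$600/t today
annual_bill = excess_tonnes_per_year * cost_per_tonne
print(f"${annual_bill / 1e12:.1f} trillion per year")  # -> $1.8 trillion
```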

However, once the price reaches about $100 per tonne, Lackner says it becomes feasible to treat carbon capture as a simple waste management issue. In the same way that we pay to have our rubbish taken away, he and others predict that carbon capture through tree planting, biomass energy plants, or direct air capture will become a service industry.

"For household waste we pay someone to deliver a service, rather than make a product," Lackner tells Chemistry World. "The same will have to happen with CO2."

Visit link:
Can negative emission technologies overcome climate catastrophe? | News - Chemistry World

My Chemical Romance and the evolution of emo – Louder

When The Daily Mail waged a typically putrid and ill-informed campaign against My Chemical Romance and the dangers of emo music in 2008, it was the first time many people had ever been confronted by the term. Much guffawing and many puzzled looks were exchanged around the country by so-called normal folk. What was this emo music that My Chemical Romance were the leading lights of?

The irony, to anyone au fait with the roots of this music, is that when MCR were tagged as the genre's figureheads, it totally changed the definition of what emo actually was. The tag emo, derived from the emotional hardcore of the mid-80s punk scene, bears little or no resemblance to Gerard Way and co. From Rites Of Spring's meek and melody-heavy tunes, the Descendents' geeky, lovelorn buzzsaw punk or Fugazi's discordant, socially conscious and freeform ire, the inspiration for emo was radically different from the self-loathing horror punk it's now associated with.

It was established as a genuine movement and sub-genre during the 90s as a slew of bands took the sound of hardcore and stripped it of all the bullish machismo that had become the norm, instead infusing it with an honesty and sensitivity that had never been heard before. Jawbox, Far, Nada Surf, Gameface, Garrison and more all existed deep within the underground, pulling in a more introspective, thoughtful college audience that eschewed the glue-sniffing, phlegm-gobbing aesthetic of traditional punk rock. These were bands who were influenced as much by The Smiths as they were by Black Flag; ironic, given that MCR openly admitted that those two groups had a huge influence on their sound.

What they didn't do was sell records, ensuring that emo was still an unheard-of, word-of-mouth movement in the main. That was until the turn of the millennium, as the globe-straddling commercial behemoth of nu metal began to run out of ideas and its fans were forced to search elsewhere for an antidote to its creative decline.

Those seduced by the heavier elements soon found sanctuary in the nascent metalcore movement and the reimagining of thrash that bands such as Lamb Of God and Trivium delivered. But for those who related to early nu metal's wounded lyrical honesty and forward-thinking sonic approach, the void was filled by a group of post-hardcore acts, led by Glassjaw, At The Drive-In and And You Will Know Us By The Trail Of Dead. They began to actually infiltrate MTV and mainstream culture while being confusingly monikered as emo, post-rock and screamo at various times. Clearly, emo was still impossible to pin down to an actual sound.

It was the success of Jimmy Eat World, Thursday, Taking Back Sunday and British acts Funeral For A Friend and Hundred Reasons that offered emo a clearly defined sound and look. Skinny jeans, fringes and classic American apparel were married to chiming guitars, whisper-to-shriek vocals and a melding of anthemic choruses with indie-esque punk.

This is where MCR come in. Having toured with the aforementioned Thursday and Taking Back Sunday here in the UK, it was easy to pigeonhole them alongside their peers, yet they were radically different to those bands. The only real comparisons would be AFI and Alkaline Trio, two bands that ignored heartbreak and introspection and instead concentrated on a black-hearted, gothic-heavy, macabre sound that was strongly influenced by the Misfits' B-movie schlock punk.

In fact, Gerard Way himself stated bluntly that MCR never felt part of or identified with the scene. "Basically, it's never been an accurate way to describe us," he told American college website The Maine Campus. "I think emo is fucking garbage; it's bullshit. I think there's bands that we unfortunately get lumped in with that are considered emo and by default that starts to make us emo."

Of course, once MCR broke, the look and sound of emo were defined by their every action. Despite the band being vocally anti-violence and anti-suicide, themes of self-harm, depression and distress became inextricably linked with their sound and image. They were followed by countless also-rans trying to pull the exact same trick. Now every band that adds even a touch of melancholy to their music, from Black Veil Brides to Bring Me The Horizon, is sneeringly referred to with the tag.

For better or worse, the change in emo's DNA is all due to the massive impact of My Chemical Romance.

My Chemical Romance head out on tour later this year. Check out full dates below:

Jun 18: Milton Keynes, Stadium MK, UK
Jun 20: Milton Keynes, Stadium MK, UK
Jun 21: Milton Keynes, Stadium MK, UK
Sep 09: Detroit, Little Caesars Arena, MI
Sep 11: St Paul, Xcel Energy Center, MN
Sep 12: Chicago, Riot Fest, IL
Sep 14: Toronto, Scotiabank Arena, ON
Sep 15: Boston, TD Garden, MA
Sep 17: Brooklyn, Barclays Center, NY
Sep 18: Philadelphia, Wells Fargo Center, PA
Sep 20: Atlanta, Music Midtown, GA
Sep 22: Newark, Prudential Center, NJ
Sep 26: Sunrise, BB&T Center, FL
Sep 29: Houston, Toyota Center, TX
Sep 30: Dallas, American Airlines Center, TX
Oct 02: Denver, Pepsi Center, CO
Oct 04: Tacoma, Tacoma Dome, WA
Oct 06: Oakland, Oakland Arena, CA
Oct 08: Los Angeles, The Forum, CA
Oct 10: Sacramento, Aftershock, CA
Oct 11: Las Vegas, T-Mobile Arena, NV

See the article here:
My Chemical Romance and the evolution of emo - Louder

UW-Madison Chemistry Department receives Board of Regents' 2020 Diversity Award – The Badger Herald

The University of Wisconsin-Madison Department of Chemistry received the 2020 Diversity Award from the University of Wisconsin System Board of Regents.

According to a news release by UW-Madison Communications, the UW-Madison Chemistry Department received this award in recognition of its programs designed to help underrepresented groups in the field excel in the chemistry graduate program.

The news release said one of the main underrepresented groups in the doctoral programs is women, many of whom receive undergraduate degrees but do not move on to graduate study.

Chair of the Chemistry Department Judith Burstyn said in the release that the award celebrates the department's efforts toward creating more diversity, and that this is just the beginning of those efforts.

"These achievements are the result of everyone in the department who works tirelessly to build diversity through the creation of key programs and mentorship of students," Burstyn said in the news release.

A few of the UW-Madison Chemistry Department's key programs include Chemistry Opportunities and Research Experiences for Undergraduates. According to the programs' websites, they serve as opportunities in the department to reach underrepresented populations and allow them to explore the doctoral programs.

One of the key programs designed for graduate student success is Catalyst. According to its website, Catalyst is a mentoring program geared toward helping first-generation, low-income and underrepresented student populations.

"The program consists of a peer-mentoring scaffold and a professional development seminar series that helps create a sense of belonging and connection between participating first-year students and their peers, department, campus, and the Madison community," the Catalyst website said.

According to the news release, the UW System Office of Academic and Student Affairs chooses the recipients of the award. Other recipients across the state include an associate professor from the UW-Milwaukee School of Architecture and Urban Planning and UW-Stout's Fostering Success program.

The news release said the Board of Regents awards each recipient $7,500, and the chemistry department will put this money towards further development of its existing programs to reach underrepresented students.

The three recipients of the award will be honored at the 12th annual Regents Diversity Awards at the Board of Regents meeting in Madison Feb. 7.

"Everyone benefits from diverse perspectives," Burstyn said in the news release. "We recognized that need in our department and have worked to find effective solutions."

More here:
UW-Madison Chemistry Department receives Board of Regents' 2020 Diversity Award - The Badger Herald

Changing Science, Changing Scientists: How Technology Has Changed The Role of an Analytical Chemist – Technology Networks

It's an exciting day when a new piece of kit arrives in the lab. Between postdocs planning assays, PhD students wondering what will happen if they break it and lab managers wondering how they can fit it on the benchtop, technology advancements are something that affect every person involved in science. But certain advances in technology don't just promise new capabilities; they threaten to change, from the ground up, how research is conducted in a lab group or company.

Andrew Anderson, vice president of innovation and informatics strategy at industry software solutions provider ACD/Labs, has had a unique vantage point from which to watch how technology affects the analytical chemistry field. In this interview, I ask Andrew how the day-to-day life of a chemist has been changed by technologies like AI and automation, and how budding chemists can get their skillset up to scratch to handle the changing face of analytical science.

Ruairi Mackenzie (RM): How are technological advancements changing the current job specifications for a research scientist?

Andrew Anderson (AA): It's a great question. If we had talked five years ago, I would have a different vision than I have today. In the pharmaceutical industry there are some good examples, particularly around commercializing Katalyst D2D. As you may recall, we worked collaboratively with one major pharmaceutical company and, since then, several others. In order to describe the changes I've seen in scientists' job roles in these companies, I'll talk about what I'm used to, particularly in chemistry. If you think about the pharmaceutical industry, traditionally therapeutics are made using small molecule technology and they matriculate through a drug discovery and development process, ultimately into commercialization.

If you go back five years, what we saw, particularly in discovery, was reliance on an external ecosystem of suppliers and contract research organizations, contract development and manufacturing organizations. Those ecosystems are healthy and vibrant. But what we are also seeing now is a resurgence, in my view of the market, of internal investment. This is based on our market immersion in working with these different perspectives and clients. What I see is a shift back to investment in core infrastructure, like technology platforms, robotics and automation. I won't go into the reasons behind that resurgence, but I do think that it feels like the scientists working on really pivotal projects want to have a shift back into a more balanced portfolio between internal and external work.

I could surmise or assume that there are several reasons for that. One might be the ability to collaborate at the project team level. I don't think that the technology that supports collaboration has fully replaced the level of, we'll call it quality, face-to-face collaboration. Certainly there are efforts to use technology to make geographically disparate collaborators work more effectively together. But when you look at the level of cross-functional effort that goes on to work through the drug discovery and development and commercialization process, those folks at times need to work very closely together. That's one reason. The other is, from my perspective, advances in experimentation technology. You can have in-house staff that are today using far more productive technology than they were in the past.

Early adopters of this new technology have realized productivity gains in their drug discovery and development processes. As an example, utilizing high throughput experimentation technology, you can produce materials at a rate faster than you could in the past. Now there are drawbacks, but certainly the high throughput experimentation paradigm is yielding benefits. We do see an uplifted interest in investing in automation, and so you have in-house capabilities that are highly productive. You're certainly going to continue to leverage externalized resources for work that doesn't require automation. But I think balancing between the two is really important, and we do see that senior leaders within these organizations are also recognizing the value of that balance. I do see an uplift in hiring of internal staff in parts of the world where major pharmaceutical R&D operations are being performed.

What I see is a lot of chemists moving into those organizations. Now, the chemist of today is a different chemist than the chemist of even five years ago.

RM: How has the role of a chemist today changed from five years ago?

AA: Let's pretend you're a medicinal chemist in a pharmaceutical organization. Your responsibility is to determine how to optimize a particular lead compound and make it into something that could be nominated for candidacy to clinical development. In the old world, your responsibility was largely focused around a fume hood, and your work was often devoted to design work: understanding what molecule to make based on things like structure-activity relationships. You might even do modeling to determine how a particular molecule fits into a particular drug target, and optimize the molecule based on how it should fit.

Now, a changing factor is artificial intelligence, where you have machines prescribing what to make. That is a realistic future, where you're using in silico tools to augment the scientist's decision-making around what to make next.

Following a machine-augmented approach to defining what to make, you then decide how to make it. You'll want to make these materials and subject them to physical assays, in vitro assays, etc. You would then use tools to help prescribe the process of making a particular material. If you have heard anything about the innovation in retrosynthetic analysis and reaction prediction, that is an area where today's scientists will use those technologies. What that implies is that if machine learning or artificial intelligence applications are prescribing both what to make and how to make it, that process can presumably be very fast, given the speed of computers and other factors.
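As a rough illustration of that design loop, here is a minimal Python sketch in which toy stand-ins for an activity predictor and a retrosynthesis planner suggest the next compound and a route to it. Both functions, and the candidate SMILES list, are invented placeholders for this article, not any vendor's actual models or API.

```python
# Toy stand-ins for the "what to make" and "how to make it" steps.
# A real system would call a trained activity model and a
# retrosynthesis planner; these placeholders only show the loop.

def score_activity(smiles: str) -> float:
    """Placeholder activity predictor (heuristic, not a real model)."""
    return len(set(smiles)) / len(smiles)

def plan_route(smiles: str) -> list[str]:
    """Placeholder retrosynthesis planner returning notional routes."""
    return [f"precursor_A -> {smiles}", f"precursor_B -> {smiles}"]

candidates = ["CCO", "c1ccccc1O", "CC(=O)Nc1ccc(O)cc1"]  # SMILES strings

# Rank candidates by predicted score, then plan a route for the best one.
ranked = sorted(candidates, key=score_activity, reverse=True)
best = ranked[0]
print(f"Suggested next compound: {best}")
print("Proposed route:", plan_route(best)[0])
```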

The bottleneck then moves to how to make things in parallel and at high throughput, and the next technological innovation is high-throughput experimentation, where you're able to produce more materials faster than you could in the past. Historically, you'd work on one reaction or a small set of reactions at a time. In today's paradigm, or maybe the short-term future's paradigm, depending on which company you're talking to, you can use automation tools to produce up to 1,500 molecules at a time. The rate of going through that traditional trial-and-error process to arrive at a drug development candidate is much faster if you combine artificial intelligence for design, artificial intelligence for reaction planning, and automation tools for high-throughput and parallel experimentation.

The final thing you'll want to have is what I'd call no-loss-fidelity decision support interfaces. A lot of companies are also investing in looking across all of the data generated during those discrete unit operations in the scientific process, from design to execution, to be able to present that data in a holistic fashion to decision makers.

From my perspective, what that means for the scientist is that, in addition to their chemistry knowledge, their biology knowledge, and their pharmaceutical knowledge, they also need to be able to deal with a lot of data. Part of their job transitions from being a chemist to being almost like a data scientist or a data engineer.

RM: Does that mean that today's analytical chemist will spend less time in the fume hood, or will they be expected to spend the same amount of time in the fume hood and analyze data on top of that?

AA: I would say that in the future there is no fume hood. What I mean by that is, instead of interfacing with what you'd classically visualize as a fume hood with reaction flasks and the like, in the future paradigm, or even the current paradigm, you're walking up to robots that are inside the fume hood or glove box and effectively providing machine instructions to the robots, which go and do the work for you. The transition is that the scientists aren't touching materials any more. You're essentially providing machines with instructions across the set of unit operations you would execute during the process. That's certainly different from what you would have done even three years ago as a traditional chemist. You're really interfacing with robotics and digital software interfaces.
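To make that hand-off concrete, here is a hypothetical sketch of such an instruction list in Python: a sequence of unit operations handed to a robotic deck instead of performed by hand. The operation names and parameters are an invented schema, not any real robot's command set.

```python
# Hypothetical instruction list for a robotic deck: unit operations
# expressed as data rather than hands-on manipulation.
instructions = [
    {"op": "dispense", "reagent": "aryl_halide", "volume_ul": 50, "well": "A1"},
    {"op": "dispense", "reagent": "boronic_acid", "volume_ul": 50, "well": "A1"},
    {"op": "heat", "target_c": 80, "duration_min": 120},
    {"op": "stir", "rpm": 600, "duration_min": 120},
    {"op": "sample", "well": "A1", "destination": "hplc_vial_001"},
]

for step in instructions:
    # A real driver would translate each step into the robot's native
    # commands; here we only show the chemist-to-machine hand-off.
    print(f"Executing {step['op']}: {step}")
```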

RM: How can software help chemists in this new role?

AA: It's the transcription and translation between systems. Currently there is a significant amount of human effort; we talked about this data engineering need to transcribe information from one system to another. A simple example: if I've executed a reaction with an automated reactor system, I would say the majority of analysis that is performed is off the deck. What I mean by that is that the reaction deck has a robotic arm that dispenses materials into containers; those containers serve as reaction vessels, and those reaction vessels are subjected to different environmental conditions, like heating, stirring, or pressurization.

At the conclusion of the experiment, or even during it, you'll want to perform some sort of analysis to determine how the reaction is going. Oftentimes, what that means is the robot will sample, either at the end of the experiment or during it, and create analysis samples. The analysis equipment is usually separate from the reaction equipment.

I need to make sure that the data I generate from the analytical experiment is somehow associated, through the sample provenance, with my reaction experiment. That is indeed one of the challenges right now: interfacing between these systems. What we strongly advocate is making software do that work of transcription and translation for you. What we work on is helping our customers interface the reaction equipment and the analysis equipment by creating digital representations of that sample provenance, and then formatting those digital representations so that they can be consumed by, for example, analysis equipment.
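A minimal sketch of what such a digital provenance record might look like, assuming an invented schema: each analysis sample carries a pointer back to the reaction experiment and vessel it was drawn from, so downstream data can be re-associated.

```python
# A made-up sample-provenance record; field names are illustrative.
from dataclasses import dataclass

@dataclass
class SampleProvenance:
    sample_id: str      # identifier travelling with the HPLC vial
    reaction_id: str    # reaction experiment the sample came from
    vessel: str         # position on the reaction deck, e.g. a well
    timepoint_min: int  # when during the reaction it was sampled

record = SampleProvenance("S-0001", "RXN-042", "A1", 120)
# Serialized (to JSON, CSV, etc.), this record is what the analysis
# equipment's software could consume to keep the link intact.
print(record)
```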

A practical example: if I've sampled a 96-well plate's worth of reactions at the end of the experiment, I'm going to drop those samples into 96 HPLC vials and then walk over and load those vials onto the instrument. What I need are identifiers that associate each HPLC vial with its position in the 96-well plate, so that I know which reaction the sample belongs to.
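In the same vein, here is a small sketch of that plate-to-vial mapping: generate the 96 well positions (A1 through H12) and assign each a vial identifier. The naming scheme is hypothetical.

```python
# Map each well of a 96-well plate to a vial identifier.
from itertools import product

rows = "ABCDEFGH"
cols = range(1, 13)

well_to_vial = {
    f"{r}{c}": f"VIAL-{i:03d}"
    for i, (r, c) in enumerate(product(rows, cols), start=1)
}

print(well_to_vial["A1"])   # VIAL-001
print(well_to_vial["H12"])  # VIAL-096
```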

Within our Katalyst application we have identifiers for the reaction plate, and we map those identifiers to the sample plate that you would load onto the system. Furthermore, we prepare a sequence file for those samples, in which the sample identifier relationships are accounted for, whether in a comment field, in the name of the file, or in one of a variety of other ways to make that association. Then, once the data is acquired, Katalyst reads the sample identifier and makes the association to the appropriate reaction information.
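Building on the mapping sketch above, a hedged illustration of preparing such a sequence file: one row per vial, with the reaction-plate identifier embedded in the sample name so the association survives the trip through the instrument software. The column names are invented; each instrument vendor defines its own sequence format, and this is not Katalyst's actual output.

```python
# Write a notional sequence file carrying sample-identifier links.
# Assumes the well_to_vial mapping from the previous sketch.
import csv

with open("sequence.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["vial", "sample_name", "comment"])
    for well, vial in well_to_vial.items():
        writer.writerow([vial, f"RXN-042_{well}", f"plate RXN-042, well {well}"])
```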

What that gives you is a software experience that has all of the reaction information, like what reagents I added and what product I made, with all the analytical data associated with that reaction information. Now I'm able to walk up to a software interface that has all of that information in one place. Traditionally, what scientists would have to do after this whole experiment is take data from the analytical software package and data from the reaction system and make the associations themselves. That can be quite time-consuming work. We have reduced that work practically to zero.
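As a toy illustration of that final association, assuming pandas and two invented tables, the reaction metadata and the acquired analytical results can be joined on the shared sample name to give the everything-in-one-place view described here.

```python
# Join reaction metadata to acquired results on the sample name.
import pandas as pd

reactions = pd.DataFrame({
    "sample_name": ["RXN-042_A1", "RXN-042_A2"],
    "reagents": ["aryl halide + boronic acid", "aryl halide + amine"],
})
results = pd.DataFrame({
    "sample_name": ["RXN-042_A1", "RXN-042_A2"],
    "purity_pct": [92.5, 71.3],
})

print(reactions.merge(results, on="sample_name"))
```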

RM: Will software advances mean that scientists don't need all this data-handling training, or will it just take a lot of the manual labor of data handling out of the equation?

AA: There are two schools of thought, from my perspective. The first is that you build tightly integrated monolithic systems. There are certain companies that build these very high-end platforms: robotics coupled with software, all perfectly integrated. While those are great, and in that paradigm you see less data engineering, because these are monoliths, they're not modular.

There's a consequence: if the scientific experiment you're performing doesn't fit into the platform, it won't be supported by the platform. I'll give you an example to illustrate the point. Say you had a type of chemistry that required a pressure level the platform couldn't support. Now you're relegated back to doing the fume hood chemistry you would do traditionally, because your platform doesn't support it. The breadth of experiments you can perform with those monolithic platforms is limited. The analogy I like to use is that it's like having a house where all you want to do is move the couch, but you have to rebuild the house to do so. Monoliths, while efficient for their intended scope, are very difficult to change when the scope changes.

Another trend we see is modular automation. If you need to change a particular element or unit operation in your automated process, there are plenty of options for that particular unit. It's the data integration that becomes a burden.

What we try to do is offer the ability to integrate or change different components of the platform, using software and integration tools, to reduce the risk of creating monoliths; you make the platform modular a priori. You do that with effective software integration, and your data gets integrated. As opposed to hard-coding, building software that operates equipment as a monolith, we're effectively using the software that already exists in each modular component and providing instruction lists between them. That instruction list can be human-delivered or software-delivered; it depends on the modular component's application programming interface and what it can receive and support.
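In code, that adapter idea can be as simple as a function that reformats one component's output into the shape the next component accepts, rather than hard-wiring both into one monolith. Both record formats below are invented for illustration.

```python
# A thin adapter between two modular components with different schemas.

def reactor_output_to_hplc_input(reactor_record: dict) -> dict:
    """Translate a (hypothetical) reactor record into a (hypothetical)
    HPLC submission record."""
    return {
        "SampleName": reactor_record["sample_id"],
        "VialPosition": reactor_record["vial"],
        "Method": "default_gradient",
    }

record = {"sample_id": "RXN-042_A1", "vial": "VIAL-001"}
print(reactor_output_to_hplc_input(record))
```

Swapping out either instrument then means rewriting one small adapter, not the whole platform, which is the modularity the interview is describing.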

The point is, I don't think you'll ever be able to do entirely without an understanding of data engineering, because of that need for modularity. We certainly want to reduce the burden of manual transcription between systems, and we facilitate an automated, or at least very convenient and efficient, mechanism by which you can translate information, by automatically reformatting data. If we can reformat that data using software, it greatly reduces the burden on a scientist to transcribe information from one system to another.

RM: Do you have any other advice for new chemists coming into a field that has changed so rapidly in the last five years?

AA: I would say that experience in dealing with predictive applications is an important skill set to acquire. The second thing is that being able to deal with data, using some of the more modern data processing and analysis tools, is equally important. Finally, from my perspective, because we're talking about high-throughput and parallel experimentation, I can't help but think that a good understanding of statistics is an important skill set to acquire. The reason is that if you have access to highly scalable reaction equipment, the ability to ensure you're conducting an effective statistical design of experiments, so that you capture as many variables as possible with the minimum set of experiments, is a really important knowledge set to have. If you can execute 1,536 experiments in parallel, it's probably a good idea to maximize the amount of information you glean from those 1,536 experiments, and one way to do that is to use the mathematics of statistical design of experiments. By the way, a lot of AI and machine learning builds on those same statistics, so you get double the benefit: not just in your experimental design but also in the way you analyze data.
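For a flavor of what that looks like in practice, here is a minimal design-of-experiments sketch: a two-level full factorial over four invented reaction variables, enumerated with itertools.product. Real workflows would typically use fractional or optimal designs to cover more factors in far fewer runs, but the principle is the same.

```python
# Two-level full factorial design over four reaction variables.
from itertools import product

factors = {
    "temperature_c": [25, 80],
    "catalyst_mol_pct": [1, 5],
    "solvent": ["DMF", "THF"],
    "time_h": [2, 12],
}

# Every combination of factor levels: 2**4 = 16 runs.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(f"{len(runs)} runs")
print(runs[0])
```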

Andrew Anderson was speaking to Ruairi J Mackenzie, Science Writer for Technology Networks.
