Microsoft Introduces Lobe: A Free Machine Learning Application That Allows You To Create AI Models Without Coding – MarkTechPost

Microsoft has released Lobe, a free desktop application that lets Windows and Mac users create customized AI models without writing any code. Several customers are already using the app for tracking tourist activity around coral reefs, the company said.

Lobe is available on Windows and Mac as a desktop app. At present it supports only image classification, assigning a single label to each image. Microsoft says new releases supporting other types of neural networks will follow in the near future.

To create an AI model in Lobe, a user first imports a collection of images to serve as the training dataset. Lobe analyzes the input images and sifts through a built-in library of neural network architectures to find the most suitable model for the dataset. It then trains that model on the provided data, producing an AI model optimized to scan images for the user's specific object or action.
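
Tools like Lobe automate what a practitioner would otherwise script by hand: picking a pretrained architecture, attaching a classifier head, and training it on labeled images. As a rough illustration only (this is not Lobe's implementation, and the folder layout and model choice are assumptions for the example), a minimal transfer-learning image classifier might look like this:

```python
# Minimal transfer-learning sketch of the kind of pipeline a no-code tool automates.
# Assumes images are organized in folders named after their labels, e.g. data/reef/, data/no_reef/.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "data", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # reuse pretrained features; train only the classifier head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```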

AutoML is a technology that automates much of the machine learning development workflow, reducing development costs. Microsoft has made AutoML features available to enterprises in its Azure public cloud, but the existing AI tools in Azure target mainly advanced projects. Because Lobe is free, easy to access, and convenient to use, it can support simpler use cases that the existing AI tools did not adequately address.

The Nature Conservancy, a nonprofit environmental organization, used Lobe to create an AI model that analyzes pictures taken by tourists in the Caribbean to identify where and when visitors interact with coral reefs. A Seattle auto marketing firm, Sincro LLC, has developed an AI model that scans vehicle images in online ads to filter out pictures that are less appealing to customers.

GitHub: https://github.com/lobe

Website: https://lobe.ai/


Continue reading here:
Microsoft Introduces Lobe: A Free Machine Learning Application That Allows You To Create AI Models Without Coding - MarkTechPost

5 Emerging AI And Machine Learning Trends To Watch In 2021 – CRN: Technology news for channel partners and solution providers

Artificial Intelligence and machine learning have been hot topics in 2020 as AI and ML technologies increasingly find their way into everything from advanced quantum computing systems and leading-edge medical diagnostic systems to consumer electronics and smart personal assistants.

Revenue generated by AI hardware, software and services is expected to reach $156.5 billion worldwide this year, according to market researcher IDC, up 12.3 percent from 2019.

But it can be easy to lose sight of the forest for the trees when it comes to trends in the development and use of AI and ML technologies. As we approach the end of a turbulent 2020, here's a big-picture look at five key AI and machine learning trends, not just in the types of applications they are finding their way into, but also in how they are being developed and the ways they are being used.

The Growing Role Of AI And Machine Learning In Hyperautomation

Hyperautomation, an IT mega-trend identified by market research firm Gartner, is the idea that most anything within an organization that can be automated, such as legacy business processes, should be automated. The pandemic has accelerated adoption of the concept, which is also known as digital process automation and intelligent process automation.

AI and machine learning are key components and major drivers of hyperautomation (along with other technologies like robotic process automation tools). To be successful, hyperautomation initiatives cannot rely on static packaged software. Automated business processes must be able to adapt to changing circumstances and respond to unexpected situations.

That's where AI, machine learning models and deep learning technology come in, using learning algorithms and models, along with data generated by the automated system, to allow the system to automatically improve over time and respond to changing business processes and requirements. (Deep learning is a subset of machine learning that utilizes neural network algorithms to learn from large volumes of data.)

Bringing Discipline To AI Development Through AI Engineering

Only about 53 percent of AI projects successfully make it from prototype to full production, according to Gartner research. When trying to deploy newly developed AI systems and machine learning models, businesses and organizations often struggle with system maintainability, scalability and governance, and AI initiatives often fail to generate the hoped-for returns.

Businesses and organizations are coming to understand that a robust AI engineering strategy will improve the performance, scalability, interpretability and reliability of AI models and deliver the full value of AI investments, according to Gartner's list of Top Strategic Technology Trends for 2021.

Developing a disciplined AI engineering process is key. AI engineering incorporates elements of DataOps, ModelOps and DevOps and makes AI a part of the mainstream DevOps process, rather than a set of specialized and isolated projects, according to Gartner.

Increased Use Of AI For Cybersecurity Applications

Artificial intelligence and machine learning technology is increasingly finding its way into cybersecurity systems for both corporate systems and home security.

Developers of cybersecurity systems are in a never-ending race to update their technology to keep pace with constantly evolving threats from malware, ransomware, DDoS attacks and more. AI and machine learning technology can be employed to help identify threats, including variants of earlier threats.

AI-powered cybersecurity tools also can collect data from a company's own transactional systems, communications networks, digital activity and websites, as well as from external public sources, and utilize AI algorithms to recognize patterns and identify threatening activity, such as detecting suspicious IP addresses and potential data breaches.
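
As a toy sketch of the pattern-recognition step described above, an unsupervised anomaly detector can be trained on routine activity and then asked to flag outliers; the feature names and numbers below are invented for illustration and are not drawn from any particular security product.

```python
# Hedged illustration: flag unusual network activity with an unsupervised detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Each row: [requests per minute, bytes transferred, distinct IPs contacted]
normal_traffic = rng.normal(loc=[60, 5e4, 8], scale=[10, 1e4, 2], size=(500, 3))
suspicious = np.array([[400, 9e5, 120]])  # burst consistent with data exfiltration

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

print(detector.predict(suspicious))  # -1 means the sample is flagged as anomalous
```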

AI use in home security systems today is largely limited to systems integrated with consumer video cameras and intruder alarm systems integrated with a voice assistant, according to research firm IHS Markit. But IHS says AI use will expand to create smart homes where the system learns the ways, habits and preferences of its occupants, improving its ability to identify intruders.

The Intersection Of AI/ML and IoT

The Internet of Things has been a fast-growing area in recent years with market researcher Transforma Insights forecasting that the global IoT market will grow to 24.1 billion devices in 2030, generating $1.5 trillion in revenue.

The use of AI/ML is increasingly intertwined with IoT. AI, machine learning and deep learning, for example, are already being employed to make IoT devices and services smarter and more secure. But the benefits flow both ways, given that AI and ML require large volumes of data to operate successfully, which is exactly what networks of IoT sensors and devices provide.

In an industrial setting, for example, IoT networks throughout a manufacturing plant can collect operational and performance data, which is then analyzed by AI systems to improve production system performance, boost efficiency and predict when machines will require maintenance.
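
To make the predictive-maintenance idea concrete, here is a small, hedged sketch: a classifier trained on synthetic sensor readings to flag machines likely to need service. The features, thresholds, and data are assumptions for the example, not taken from a real plant.

```python
# Illustrative predictive-maintenance sketch on synthetic IoT sensor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
vibration = rng.normal(1.0, 0.3, n)          # mm/s RMS
temperature = rng.normal(70.0, 8.0, n)       # degrees C
hours_since_service = rng.uniform(0, 5000, n)

# Toy failure rule: risk grows with vibration, heat, and time since last service
risk = 2.0 * vibration + 0.05 * temperature + 0.002 * hours_since_service
needs_maintenance = (risk + rng.normal(0, 1, n) > 14).astype(int)

X = np.column_stack([vibration, temperature, hours_since_service])
X_train, X_test, y_train, y_test = train_test_split(
    X, needs_maintenance, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```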

What some are calling the Artificial Intelligence of Things (AIoT) could redefine industrial automation.

Persistent Ethical Questions Around AI Technology

Earlier this year, as protests against racial injustice were at their peak, several leading IT vendors, including Microsoft, IBM and Amazon, announced that they would limit the use of their AI-based facial recognition technology by police departments until there are federal laws regulating the technology's use, according to a Washington Post story.

That has put the spotlight on a range of ethical questions around the increasing use of artificial intelligence technology. That includes the obvious misuse of AI for deepfake misinformation efforts and for cyberattacks. But it also includes grayer areas such as the use of AI by governments and law enforcement organizations for surveillance and related activities and the use of AI by businesses for marketing and customer relationship applications.

That's all before delving into the even deeper questions about the potential use of AI in systems that could replace human workers altogether.

A December 2019 Forbes article said the first step here is asking the necessary questions, and we've begun to do that. In some applications, federal regulation and legislation may be needed, as with the use of AI technology for law enforcement.

In business, Gartner recommends the creation of external AI ethics boards to prevent AI dangers that could jeopardize a company's brand, draw regulatory actions, lead to boycotts or destroy business value. Such a board, including representatives of a company's customers, can provide guidance about the potential impact of AI development projects and improve transparency and accountability around those projects.

Read more here:
5 Emerging AI And Machine Learning Trends To Watch In 2021 - CRN: Technology news for channel partners and solution providers

Vanderbilt trans-institutional team shows how next-gen wearable sensor algorithms powered by machine learning could be key to preventing injuries that…

A trans-institutional team of Vanderbilt engineering, data science and clinical researchers has developed a novel approach for monitoring bone stress in recreational and professional athletes, with the goal of anticipating and preventing injury. Using machine learning and biomechanical modeling techniques, the researchers built multisensory algorithms that combine data from lightweight, low-profile wearable sensors in shoes to estimate forces on the tibia, or shin bone, a common place for runners' stress fractures.

The research builds off the researchers' 2019 study, which found that commercially available wearables do not accurately monitor stress fracture risks. Karl Zelik, assistant professor of mechanical engineering, biomedical engineering and physical medicine and rehabilitation, sought to develop a better technique to solve this problem. "Today's wearables measure ground reaction forces, how hard the foot impacts or pushes against the ground, to assess injury risks like stress fractures to the leg," Zelik said. "While it may seem intuitive to runners and clinicians that the force under your foot causes loading on your leg bones, most of your bone loading is actually from muscle contractions. It's this repetitive loading on the bone that causes wear and tear and increases injury risk to bones, including the tibia."

The article, "Combining wearable sensor signals, machine learning and biomechanics to estimate tibial bone force and damage during running," was published online in the journal Human Movement Science on Oct. 22.

The algorithms produce bone force data that is up to four times more accurate than that from available wearables, and the study found that traditional wearable metrics based on how hard the foot hits the ground may be no more accurate for monitoring tibial bone load than counting steps with a pedometer.

Bones naturally heal themselves, but if the rate of microdamage from repeated bone loading outpaces the rate of tissue healing, there is an increased risk of a stress fracture that can put a runner out of commission for two to three months. "Small changes in bone load equate to exponential differences in bone microdamage," said Emily Matijevich, a graduate student and the director of the Center for Rehabilitation Engineering and Assistive Technology Motion Analysis Lab. "We have found that 10 percent errors in force estimates cause 100 percent errors in damage estimates. Largely over- or under-estimating the bone damage that results from running has severe consequences for athletes trying to understand their injury risk over time. This highlights why it is so important for us to develop more accurate techniques to monitor bone load and design next-generation wearables. The ultimate goal of this tech is to better understand overuse injury risk factors and then prompt runners to take rest days or modify training before an injury occurs."
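
A back-of-the-envelope calculation shows how a 10 percent force error can become a 100 percent damage error. Bone microdamage per loading cycle is often modeled as a steep power law of the applied force; the exponent below is inferred from the quoted numbers purely for illustration and is not taken from the paper.

```latex
% Assumption for exposition: damage per cycle scales as a power law of force,
% D \propto F^{b}. If a 10% force overestimate doubles the damage estimate, then
\[
  (1.10)^{b} = 2
  \quad\Longrightarrow\quad
  b = \frac{\ln 2}{\ln 1.10} \approx 7.3,
\]
% a steep exponent, which is why small errors in estimated bone load blow up
% into much larger errors in estimated microdamage.
```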

The machine learning algorithm leverages Least Absolute Shrinkage and Selection Operator (LASSO) regression, using a small group of sensors to generate highly accurate bone load estimates, with average errors of less than three percent, while simultaneously identifying the most valuable sensor inputs, said Peter Volgyesi, a research scientist at the Vanderbilt Institute for Software Integrated Systems. "I enjoyed being part of the team. This is a highly practical application of machine learning, markedly demonstrating the power of interdisciplinary collaboration with real-life broader impact."
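
For readers unfamiliar with the method Volgyesi names, the sketch below shows LASSO regression selecting a few informative inputs out of many while fitting a force estimate; the features and data are synthetic stand-ins, not the Vanderbilt team's dataset or code.

```python
# LASSO sketch: estimate a target force from candidate sensor features while
# zeroing out the least useful inputs. Synthetic data, for illustration only.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000
features = rng.normal(size=(n, 6))            # e.g., peak acceleration, contact time, ...
true_coefs = np.array([3.0, 0.0, 1.5, 0.0, 0.0, 2.0])   # only three inputs matter
tibial_force = features @ true_coefs + rng.normal(scale=0.5, size=n)

X_train, X_test, y_train, y_test = train_test_split(
    features, tibial_force, random_state=0)

model = Lasso(alpha=0.1).fit(X_train, y_train)
print("learned coefficients:", np.round(model.coef_, 2))  # near-zero weights get dropped
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
```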

This research represents a major leap forward in health monitoring capabilities. This innovation is one of the first examples of a wearable technology that is both practical to wear in daily life and can accurately monitor forces on and microdamage to musculoskeletal tissues. The team has begun applying similar techniques to monitor low back loading and injury risks, designed for people in occupations that require repetitive lifting and bending. These wearables could track the efficacy of post-injury rehab or inform return-to-play or return-to-work decisions.

"We are excited about the potential for this kind of wearable technology to improve assessment, treatment and prevention of other injuries like Achilles tendonitis, heel stress fractures or low back strains," said Matijevich, the paper's corresponding author. The group has filed multiple patents on their invention and is in discussions with wearable tech companies to commercialize these innovations.

This research was funded by National Institutes of Health grant R01EB028105 and the Vanderbilt University Discovery Grant program.

Go here to see the original:
Vanderbilt trans-institutional team shows how next-gen wearable sensor algorithms powered by machine learning could be key to preventing injuries that...

Using Machine Learning in Financial Services and the regulatory implications – Lexology

Financial services firms have been increasingly incorporating Artificial Intelligence (AI) into their strategies to drive operational and cost efficiencies. Firms must ensure effective governance of any use of AI. The Financial Conduct Authority (FCA) is active in this area, currently collaborating with The Alan Turing Institute to examine a potential framework for transparency in the use of AI in financial markets.

In simple terms, AI involves algorithms that can make human-like decisions, often on the basis of large volumes of data, but typically at a much faster and more efficient rate. In 2019, the FCA and the Bank of England (BoE) issued a survey to almost 300 firms, including banks, credit brokers, e-money institutions, financial market infrastructure firms, investment managers, insurers, non-bank lenders and principal trading firms, to understand the extent to which they were using Machine Learning (ML), a sub-category of AI. While AI is a broad concept, ML involves a methodology whereby a computer programme learns to recognise patterns of data without being explicitly programmed.
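
To make the distinction concrete for non-technical readers, the toy sketch below shows what "learning patterns from data rather than being explicitly programmed" looks like in practice: a classifier fitted to labeled historical transactions instead of hand-written rules. The features, numbers, and labels are invented for illustration and do not come from the FCA/BoE survey.

```python
# Toy example: learn to flag suspicious transactions from labeled history
# instead of coding explicit rules. All numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
# Features per customer: [average transfer amount in GBP thousands, transfers in past 24h]
legitimate = np.column_stack([rng.exponential(2.0, 500), rng.poisson(3, 500)])
flagged = np.column_stack([rng.exponential(20.0, 50), rng.poisson(15, 50)])

X = np.vstack([legitimate, flagged])
y = np.array([0] * 500 + [1] * 50)   # labels from past investigations

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[45.0, 20]]))     # large, frequent activity -> likely flagged (1)
```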

The key findings included:

The use cases for ML identified by the FCA and BoE were largely focused around the following areas:

Anti-money laundering and countering the financing of terrorism

Financial institutions have to analyse customer data continuously from a wide range of sources in order to comply with their AML obligations. The FCA and BoE found that ML was being used at several stages within the process to:

Customer engagement

Firms were increasingly using Chatbots, which enable customers to contact firms without having to go through human agents via call centres or customer support. Chatbots can reduce the time and resources needed to resolve consumer queries.

ML can facilitate faster identification of user intent and recommend associated content which can help address consumers' issues. For more complex matters which cannot be addressed by the Chatbot, the ML will transfer the consumer to a human agent who should be better placed to deal with the query.

Sales and trading

The FCA and BoE reported that ML use cases in sales and trading broadly fell under three categories ranging from client-facing to pricing and execution:

Insurance pricing

The majority of respondents in the insurance sector used ML to price general insurance products, including motor, marine, flight, building and contents insurance. In particular, ML applications were used for:

Insurance claims management

Of the respondents in the general insurance sector, 83% used ML for claims management in the following scenarios:

Asset management

ML currently appears to provide only a supporting role in the asset management sector. Systems are often used to provide suggestions to fund management (which apply equally to portfolio decision-making or execution only trades):

All of these applications have back-up systems and human-in-the-loop safeguards. They are aimed at providing fund managers with suggestions, with a human in charge of the decision making and trade execution.

Regulatory obligations

Although there is no overarching legal framework which governs the use of AI in financial services, Principle 3 of the FCA's Principles for Business makes clear that firms must take reasonable care to organise and control their affairs responsibly and effectively, with adequate risk management systems. If regulated activities being conducted by firms are increasingly dependent on ML or, more broadly, AI, firms will need to ensure that there is effective governance around the use of AI and that systems and controls adequately ensure that the use of ML and AI is not causing harm to consumers or the markets.

There are a number of risks in adopting AI, for example, algorithmic bias caused by insufficient or inaccurate data (note that the main barrier to widespread adoption of AI is the availability of data) and lack of training of systems and AI users, which could lead to poor decisions being made. It is therefore imperative that firms fully understand the design of the ML, have stress-tested the technology prior to its roll-out in business areas and have effective quality assurance and system feedback measures in place to detect and prevent poor outcomes.

Clear records should be kept of the data used by the ML, the decision making around the use of ML and how systems are trained and tested. Ultimately, firms should be able to explain how the ML reached a particular decision.

Where firms outsource to AI service providers, they retain the regulatory risk if things go wrong. As such, the regulated firm should ensure that it carries out sufficient due diligence on the service provider, that it understands the underlying decision-making process of the service provider's AI and that, where the AI services are important in the context of the firm's regulated business, the contract includes adequate monitoring and oversight mechanisms as well as appropriate termination provisions.

The FCA announced in July 2019 that it is working with The Alan Turing Institute on a year-long collaboration on AI transparency, in which they will propose a high-level framework for thinking about transparency needs concerning uses of AI in financial markets. The Alan Turing Institute has already completed a project on explainable AI with the Information Commissioner in the context of data protection. A recent blog published by the FCA stated:

"the need or desire to access information about a given AI system may be motivated by a variety of reasons … there are a diverse range of concerns that may be addressed through transparency measures. … one important function of transparency is to demonstrate trustworthiness which, in turn, is a key factor for the adoption and public acceptance of AI systems … transparency may [also] enable customers to understand and, where appropriate, challenge the basis of particular outcomes."

Read the original post:
Using Machine Learning in Financial Services and the regulatory implications - Lexology

Duke researchers to monitor brain injury with machine learning – Duke Department of Neurology

Duke neurologists and electrical engineers are teaming up in an ambitious effort to develop a better way to monitor brain health for all patients in the ICU. Dubbed Neurologic Injury Monitoring and Real-time Output Display, the method will use machine learning and continuous electroencephalogram (EEG) data along with other clinical information to assist providers with assessment of brain injury and brain health.

Current practices for monitoring brain-injured patients include regular clinical exam assessments made every few hours around the clock. However, many patients are not able to follow commands or participate in the physical exam, so doctors can only examine gross responses to loud noises, pinches and noxious stimulation as well as rudimentary brain stem reflexes.

"Not only are these exams often limited in their scope, imaging only provides a snapshot of the brain at the time the images are taken," said Brad Kolls, MD, PhD, MMCI, associate professor of neurology at Duke University School of Medicine and principal investigator on the new research study.

The new approach will leverage continuous brainwave activity along with other clinical information from the medical record and standard bedside monitoring to allow a more comprehensive assessment of the state of the brain. Kolls and Leslie Collins, professor of electrical and computer engineering at Duke, hope to improve the care of brain-injured patients by correlating this data with outcomes. This will allow clinicians to optimize brain function and personalize recovery.

With extensive experience in combining machine learning applications with biological signals, Collins will use unsupervised learning such as topic modeling and automated feature extraction to delve into the novel dataset.

"We have promising results from using this approach to analyze data taken from sleeping patients," said Collins. "We're excited to be able to change the care, and potentially the outcomes, of patients with brain injury."

The program is sponsored by CortiCare Inc., a leading provider of electroencephalography services to hospitals in the U.S. and internationally. CortiCare has funded this multi-year research agreement supporting the program and intends to commercialize the work once completed. The program is expected to run until the fall of 2022.

Read the original here:
Duke researchers to monitor brain injury with machine learning - Duke Department of Neurology

Zaloni Named to Now Tech: Machine Learning Data Catalogs Report, Announced as a Finalist for the NC Tech Awards, and Releases Arena 6.1 – PR Web

"From controlling data sprawl to eliminating data bottlenecks, we believe Arena's bird's-eye view across the entire data supply chain allows our clients to reduce IT costs, accelerate time to analytics, and achieve better AI and ML outcomes." - Susan Cook, CEO, Zaloni

RESEARCH TRIANGLE PARK, N.C. (PRWEB) October 28, 2020

Zaloni, an award-winning leader in data management, today announced its inclusion in a recent Forrester report, titled "Now Tech: Machine Learning Data Catalogs (MLDC), Q4 2020." Forrester, a global research and advisory firm for business and technology leaders, listed Zaloni as a midsize vendor in the MLDC Market in the report.

As defined by Forrester, "A machine learning data catalog (MLDC) discovers, profiles, interprets and applies semantics and data policies to data and metadata using machine learning to enable data governance and DataOps, helping analysts, data scientists, and data consumers turn data into business outcomes." Having a secure MLDC foundation is vital for key technology trends: internet of things (IoT), blockchain, AI, and intelligent security.

As a concluding remark, the Forrester report notes: "MLDCs will force organizations to address the unique processes and requirements of different data roles. Unlike other data management solutions that seek to process and automate the management of data within systems, MLDCs are workbenches for data consumption and delivery across engineers, stewards, and analyst roles."

"For us, to be named a vendor in the MLDC Market by Forrester is a huge accomplishment," said Zaloni CEO Susan Cook. "At Zaloni, we are passionate about making our clients' lives easier with our end-to-end DataOps platform, Arena. From controlling data sprawl to eliminating data bottlenecks, we believe Arena's bird's-eye view across the entire data supply chain allows our clients to reduce IT costs, accelerate time to analytics, and achieve better AI and ML outcomes."

To receive a complimentary copy of the report, visit: https://www.zaloni.com/resources/briefs-papers/forrester-ml-data-catalogs-2020/.

Zaloni Named NC Tech Award Finalist for Artificial Intelligence and Machine Learning

In addition to the inclusion in the Forrester report, Zaloni has recently been named a finalist for the NC Tech Association's award for Best Use of Technology: Artificial Intelligence & Machine Learning for the 2020 year. The NC Tech Association recognizes North Carolina-based companies that are making an impact with technology in the state and beyond. Zaloni is looking forward to the NC Tech Award Virtual Beacons Ceremony, where the winners will be announced for all categories.

Zaloni's Arena 6.1 Release Extends Augmented Data Management

Zaloni released the latest version of the Arena platform, Arena 6.1. This release adds new features and enhancements that build upon the 6.0 release's focus on DataOps optimization and an augmented data catalog. The latest release also refines the streamlined user interface to improve user experience and productivity, and it provides a new feature for importing and exporting metadata through Microsoft Excel.

Traditionally, Microsoft Excel has been a popular tool for managing and exchanging metadata outside of a governance and catalog tool. To jumpstart the process of building a catalog, Arena allows users to add and update catalog entities by uploading Microsoft Excel worksheets containing entity metadata, helping to incorporate data catalog updates into existing business processes and workflows with the tools users already know and use.

Zaloni to Present DataOps for Improved AI & ML Outcomes at ODSC Virtual Conference

Zaloni is participating in the ODSC West Virtual Conference this week. Solutions Engineer Cody Rich will be presenting Wednesday, October 28th, at 3:30 PM PDT. Cody's presentation will consist of a live Arena demo. This demo will walk viewers through our unified DataOps platform that bridges the gap between data engineers, stewards, analysts, and data scientists while optimizing the end-to-end data supply chain to process and deliver secure, trusted data rapidly. In addition to the presentation, Zaloni staff will be hosting a booth in the conference's Exhibitor Hall. If you are interested in learning more about Zaloni and our DataOps-driven solutions with the Arena platform, make sure to visit us on Wednesday, October 28th, or Thursday, October 29th.

About Zaloni
At Zaloni, we believe in the unrealized power of data. Our DataOps software platform, Arena, streamlines data pipelines through an active catalog, automated control, and self-service consumption to reduce IT costs, accelerate analytics, and standardize security. We work with the world's leading companies, delivering exceptional data governance built on an extensible, machine-learning platform that both improves and safeguards enterprises' data assets. To find out more visit http://www.zaloni.com.

Media Contact: Annie Bishop, abishop@zaloni.com


Continued here:
Zaloni Named to Now Tech: Machine Learning Data Catalogs Report, Announced as a Finalist for the NC Tech Awards, and Releases Arena 6.1 - PR Web

Survey: Machine learning will (eventually) help win the war against financial crime – Compliance Week

It is still early days for many institutions, but what is clear is the anti-money laundering (AML) function is on the runway to using ML to fight financial crime. The benefits of ML are indisputable, though financial institutions (FIs) vary in levels of adoption.

Guidehouse and Compliance Week tapped into the ICA's network of 150,000-plus global regulatory and financial compliance professionals for the survey, which canvassed 364 compliance professionals, including 229 employed by financial institutions (63 percent of all respondents), to determine the degree to which FIs are using ML. It highlights the intended and realized program benefits of ML implementation; top enterprise risks and pitfalls in adopting and integrating ML to fight financial crime; and satisfaction with results post-implementation. The results also offer insights into what kinds of impediments are holding organizations back from full buy-in.

About a quarter of all surveyed respondents (24 percent) reported working at major FIs with assets at or over $40 billion; this cohort, hereafter referred to as large FIs, represents the bleeding edge of ML in AML. More than half (58 percent) have dedicated budgets for ML, and 39 percent are frontrunners in the industry, having developed or fully bought in on ML products already.

Nearly two-thirds (62 percent) of all respondents are individuals working in AML/Bank Secrecy Act (BSA) or compliance roles; this cohort, hereafter referred to as industry stakeholders, represents the population of users in the process of operationalizing ML in fighting financial crime at their respective institutions.

If large FIs are on the front line in ML adoption, then industry stakeholders are taking up the rearguard. Unlike respondents in the large FIs cohort, the majority of professionals in the industry stakeholders cohort are refraining from taking action steps around ML projects focused on fighting financial crime at this time. Nearly a third (32 percent) are abstaining from talking about ML at all at their institutions; another third (33 percent) are just talking about it, i.e., they have no dedicated budget, proof of concept, or products under development just yet.

Nonetheless, there is nearly universal interest in ML among large FIs: 80 percent say they hope to reduce risk with its help, and 61 percent report they have realized this benefit already, demonstrating a compelling ROI.

While large FIs are confident in testing the ML waters, many remain judicious in how much they are willing to spend. Dedicated budgets for ML in AML remain conservative; nearly two-thirds of large FIs (61 percent) budgeted $1 million or less, pre-pandemic, toward implementing ML solutions in AML. The most frequently occurring response, at just over one-third, was a budget of less than $500,000 (34 percent).

Working with modest budgets, large FIs are relying on their own bandwidth and expertise to build ML technology: 71 percent are building their own in-house solution, eschewing any off-the-shelf technology, and more than half (54 percent) are training internal staff rather than hiring outside consultants.

"With the larger banks, there's just a tendency to look inward first. I'm a big proponent of leveraging commercially available products," says Tim Mueller, partner in the Financial Crimes practice at Guidehouse. Mueller predicts vendor solutions will become more popular as the external market matures and better options become available. "I think that's the only way for this to work down-market," he adds.

A key driver of ML in the AML function has been the allure of enabling a real-time and continuous Know Your Customer (KYC) process. More than half of all surveyed respondents (55 percent) state improving KYC is the top perceived benefit to their organizations in operationalizing ML to fight financial crime, including 54 percent of large FIs and 59 percent of industry stakeholders.

This trend suggests the challenges associated with the KYC process modestly outweigh competing AML priorities as those most in need of an efficiency upgrade. From customer due diligence (CDD) to customer risk-ranking to enhanced due diligence (EDD) to managing increased regulatory scrutiny, the demands of KYC are both laborious and time-intensive. Banks want to harness a way to work smarter, not harder. ML technology may provide a viable means.

"ML is getting applied in the areas of greatest pain for financial institutions," notes Mueller, referring to respondents' apparent keenness to improve the KYC process. "There's the area of greatest pain, and that usually represents the area of greatest potential." When asked which additional areas have the greatest potential, Mueller cites transaction monitoring and customer risk rating.

The truth, however, is each area of the AML program is part of a larger puzzle; the pieces interconnect. For instance, an alert generated by a transaction-monitoring system about a potentially suspicious customer is not done in a vacuum, but rather is based on the adequacy of the FIs customer risk assessment processes. Because of the cyclical nature of an AML program, applying ML to one area could potentially translate into a holistic improvement to the program overall.

"It's really important to remember this: The area of pain is EDD and CDD, and the area of potential is AML transaction monitoring, and making sure you've got the right alerts. Guess what? The alerts are based on the CDD and EDD. They are interdependent," points out Salvatore LaScala, partner in the AML division of Guidehouse.

While ML takes considerable time to implement and fine-tune (a typical runway is 6-12 months, Mueller says), a reduction of risk can be realized relatively quickly.

For organizations that have implemented ML to fight financial crime, reducing risk is overwhelmingly the key benefit realized. Nearly two-thirds (61 percent) of large FIs state their companies have realized the benefit of reducing risk since deploying ML to fight financial crime. What is somewhat puzzling, however, is only 44 percent of large FIs state they have realized efficiency gains.

A similar incongruity is found among the industry stakeholders: 61 percent state they have effectively reduced risk, but only 51 percent indicate they have achieved efficiency gains.

If the adoption of ML has increased institutions' effectiveness at reducing risk in AML, why does it appear efficiency gains are lagging? Shouldn't effectiveness and efficiency go hand in hand?

Mueller says no. Effectiveness comes first. From the perspective of an AML professional working at an FI, "You spend a lot of money implementing machine learning and AI," Mueller explains. "You spend a lot of time. You have a lot of SMEs (subject matter experts) dedicated to making sure it's working correctly. You get it implemented; then you must watch it work; then you have to improve it over time. You're not always going to see efficiency gains right away."

LaScala says, "While FIs have made tremendous effectiveness and efficiency strides in leveraging machine learning for transaction monitoring, we believe that they will enjoy even greater success leveraging it for customer due diligence in the future. As the technology evolves, we expect that FIs will be able to review potentially high-risk customer populations more frequently and comprehensively, with less effort."

Fifty-one percent of respondents at large FIs and 45 percent of industry stakeholders cite only partial satisfaction with the results of deploying ML. This reaction may be an indicator that the use of ML in this capacity/area is still emerging.

"There has been an increase in the number of false matches in name-screening and transaction monitoring cases that end as risk-irrelevant," noted an AML associate working at a large commercial bank headquartered in Europe that conducts business in the Middle East.

"No clear results," remarked a chief AML officer working in wealth management at a small FI that is headquartered and conducts business in Europe.

"ML is good. However, it is not efficient in full coverage," another AML associate, who indicated s/he does not work at an FI, said. "Manpower is still needed for several products of compliance such as enhanced due diligence."

While the lukewarm endorsement of ML from respondents does not surprise Mueller, it does disappoint him. "I do think there are significant gains to be had there both from an effectiveness and an efficiency perspective," Mueller maintains. He believes the lack of satisfaction from users may result from unrealistic expectations and poor communication at the outset of development.

"If people are starting more with [the mindset of], 'Hey, this is our strategy, we're ready to go, let's launch into this,' then leadership will expect big things right out of the gate, and that's hard to accomplish with anything, much less with something that's so data-driven and that takes so long to develop," Mueller says. "Instead they need to start with a small project and achieve success. Then the strategy can be defined using that success as a starting point."

"FIs will continue to increase investment and reliance on ML to bolster their financial crime prevention and detection efforts," LaScala adds. "We believe that these advanced technologies will ultimately become widely adopted so long as they are transparent and can be explained to the regulator. In fact, someday not far off, systems deploying ML might actually be a regulatory expectation."

Excerpt from:
Survey: Machine learning will (eventually) help win the war against financial crime - Compliance Week

The security threat of adversarial machine learning is real – TechTalks

The Adversarial ML Threat Matrix provides guidelines that help detect and prevent attacks on machine learning systems.

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

With machine learning becoming increasingly popular, one thing that has been worrying experts is the security threats the technology will entail. We are still exploring the possibilities: The breakdown of autonomous driving systems? Inconspicuous theft of sensitive data from deep neural networks? Failure of deep learning-based biometric authentication? Subtle bypass of content moderation algorithms?

Meanwhile, machine learning algorithms have already found their way into critical fields such as finance, health care, and transportation, where security failures can have severe repercussions.

Parallel to the increased adoption of machine learning algorithms in different domains, there has been growing interest in adversarial machine learning, the field of research that explores ways learning algorithms can be compromised.

And now, we finally have a framework to detect and respond to adversarial attacks against machine learning systems. Called the Adversarial ML Threat Matrix, the framework is the result of a joint effort between AI researchers at 13 organizations, including Microsoft, IBM, Nvidia, and MITRE.

While still in early stages, the ML Threat Matrix provides a consolidated view of how malicious actors can take advantage of weaknesses in machine learning algorithms to target organizations that use them. And its key message is that the threat of adversarial machine learning is real and organizations should act now to secure their AI systems.

The Adversarial ML Threat Matrix is presented in the style of ATT&CK, a tried-and-tested framework developed by MITRE to deal with cyber-threats in enterprise networks. ATT&CK provides a table that summarizes different adversarial tactics and the types of techniques that threat actors perform in each area.

Since its inception, ATT&CK has become a popular guide for cybersecurity experts and threat analysts to find weaknesses and speculate on possible attacks. The ATT&CK format of the Adversarial ML Threat Matrix makes it easier for security analysts to understand the threats of machine learning systems. It is also an accessible document for machine learning engineers who might not be deeply acquainted with cybersecurity operations.

"Many industries are undergoing digital transformation and will likely adopt machine learning technology as part of service/product offerings, including making high-stakes decisions," Pin-Yu Chen, AI researcher at IBM, told TechTalks in written comments. "The notion of 'system' has evolved and become more complicated with the adoption of machine learning and deep learning."

For instance, Chen says, an automated financial loan application recommendation can change from a transparent rule-based system to a black-box neural network-oriented system, which could have considerable implications on how the system can be attacked and secured.

"The adversarial threat matrix analysis (i.e., the study) bridges the gap by offering a holistic view of security in emerging ML-based systems, as well as illustrating their causes from traditional means and new risks induced by ML," Chen says.

The Adversarial ML Threat Matrix combines known and documented tactics and techniques used in attacking digital infrastructure with methods that are unique to machine learning systems. Like the original ATT&CK table, each column represents one tactic (or area of activity) such as reconnaissance or model evasion, and each cell represents a specific technique.

For instance, to attack a machine learning system, a malicious actor must first gather information about the underlying model (reconnaissance column). This can be done through the gathering of open-source information (arXiv papers, GitHub repositories, press releases, etc.) or through experimentation with the application programming interface that exposes the model.
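
As a tiny illustration of the matrix's shape, the snippet below arranges only the tactics and techniques mentioned in this article into the column-and-cell structure described above; the real Adversarial ML Threat Matrix contains many more of both.

```python
# Partial, illustrative slice of the tactic -> techniques structure (not the full matrix).
threat_matrix = {
    "reconnaissance": [
        "gather open-source information (arXiv papers, GitHub repos, press releases)",
        "experiment with the public API that exposes the target model",
    ],
    "model evasion": [
        "craft inputs that the deployed model misclassifies",
    ],
}

for tactic, techniques in threat_matrix.items():
    print(tactic)
    for technique in techniques:
        print("  -", technique)
```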

Each new type of technology comes with its unique security and privacy implications. For instance, the advent of web applications with database backends introduced the concept of SQL injection. Browser scripting languages such as JavaScript ushered in cross-site scripting attacks. The internet of things (IoT) introduced new ways to create botnets and conduct distributed denial of service (DDoS) attacks. Smartphones and mobile apps create new attack vectors for malicious actors and spying agencies.

The security landscape has evolved and continues to develop to address each of these threats. We have anti-malware software, web application firewalls, intrusion detection and prevention systems, DDoS protection solutions, and many more tools to fend off these threats.

For instance, security tools can scan binary executables for the digital fingerprints of malicious payloads, and static analysis can find vulnerabilities in software code. Many platforms such as GitHub and Google App Store already have integrated many of these tools and do a good job at finding security holes in the software they house.

But in adversarial attacks, malicious behavior and vulnerabilities are deeply embedded in the thousands and millions of parameters of deep neural networks, which is both hard to find and beyond the capabilities of current security tools.
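
To illustrate what such an embedded weakness looks like in practice, here is a minimal sketch of a gradient-based evasion attack in the style of the fast gradient sign method; the tiny untrained model and random "image" are placeholders, since real attacks target trained production models.

```python
# Minimal FGSM-style evasion sketch: nudge an input along the loss gradient so the
# model's prediction can flip. Model is untrained and input is random, for brevity.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

image = tf.random.uniform((1, 28, 28))   # stand-in for a real input
label = tf.constant([3])                 # its "true" class
epsilon = 0.1                            # perturbation budget

with tf.GradientTape() as tape:
    tape.watch(image)
    loss = loss_fn(label, model(image))
gradient = tape.gradient(loss, image)

adversarial_image = image + epsilon * tf.sign(gradient)  # small, targeted nudge
print("original prediction:   ", int(tf.argmax(model(image), axis=1)[0]))
print("adversarial prediction:", int(tf.argmax(model(adversarial_image), axis=1)[0]))
```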

"Traditional software security usually does not involve the machine learning component because it's a new piece in the growing system," Chen says, adding that "adopting machine learning into the security landscape gives new insights and risk assessment."

The Adversarial ML Threat Matrix comes with a set of case studies of attacks that involve traditional security vulnerabilities, adversarial machine learning, and combinations of both. What's important is that, contrary to the popular belief that adversarial attacks are limited to lab environments, the case studies show that production machine learning systems can be, and have been, compromised with adversarial attacks.

For instance, in one case study, the security team at Microsoft Azure used open-source data to gather information about a target machine learning model. They then used a valid account in the server to obtain the machine learning model and its training data. They used this information to find adversarial vulnerabilities in the model and develop attacks against the API that exposed its functionality to the public.

Other case studies show how attackers can compromise various aspects of the machine learning pipeline and the software stack to conduct data poisoning attacks, bypass spam detectors, or force AI systems to reveal confidential information.

The matrix and these case studies can guide analysts in finding weak spots in their software and can guide security tool vendors in creating new tools to protect machine learning systems.

"Inspecting a single dimension (machine learning vs traditional software security) only provides an incomplete security analysis of the system as a whole," Chen says. "Like the old saying goes: security is only as strong as its weakest link."

Unfortunately, developers and adopters of machine learning algorithms are not taking the necessary measures to make their models robust against adversarial attacks.

"The current development pipeline is merely ensuring a model trained on a training set can generalize well to a test set, while neglecting the fact that the model is often overconfident about unseen (out-of-distribution) data or maliciously embedded Trojan patterns in the training set, which offers unintended avenues to evasion attacks and backdoor attacks that an adversary can leverage to control or misguide the deployed model," Chen says. "In my view, similar to car model development and manufacturing, a comprehensive in-house collision test for different adversarial threats on an AI model should be the new norm to practice to better understand and mitigate potential security risks."

In his work at IBM Research, Chen has helped develop various methods to detect and patch adversarial vulnerabilities in machine learning models. With the advent of the Adversarial ML Threat Matrix, the efforts of Chen and other AI and security researchers will put developers in a better position to create secure and robust machine learning systems.

"My hope is that with this study, the model developers and machine learning researchers can pay more attention to the security (robustness) aspect of the model and look beyond a single performance metric such as accuracy," Chen says.

Read the original:
The security threat of adversarial machine learning is real - TechTalks

Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models – MarkTechPost

Artificial intelligence (AI) and machine learning (ML) are the digital world's trendsetters in recent times. Although ML models can make accurate predictions, the logic behind the predictions remains unclear to users. A lack of evaluation and selection criteria makes it difficult for the end-user to select the most appropriate interpretation technique.

How do we extract insights from the models? Which features should be prioritized while making predictions, and why? These questions remain prevalent. Interpretable Machine Learning (IML) is an outcome of the questions mentioned above. IML is a layer in ML models that helps human beings understand the procedure and logic behind a machine learning model's inner workings.

Ioannis Mollas, Nick Bassiliades, and Grigorios Tsoumakas have introduced a new methodology to make IML more reliable and understandable for end-users. Altruist, a meta-learning method, aims to help the end-user choose an appropriate technique based on feature importance by providing interpretations through logic-based argumentation.

The meta-learning methodology is composed of the following components:

Paper: https://arxiv.org/pdf/2010.07650.pdf

Github: https://github.com/iamollas/Altruist



See the original post:
Altruist: A New Method To Explain Interpretable Machine Learning Through Local Interpretations of Predictive Models - MarkTechPost

ATL Special Report Podcast: Tactical Use Cases And Machine Learning With Lexis+ – Above the Law

Welcome back listeners to this exclusive Above the Law Lexis+ Special Report Podcast: Introducing a New Era in Legal Research, brought to you by LexisNexis. This is the second episode in our special series.

Join us once again as LexisNexis Chief Product Officer for North America Jeff Pfeifer (@JeffPfeifer) and Evolve the Law Contributing Editor Ian Connett (@QuantumJurist) dive deeper into Lexis+, sharing tactical use cases, new tools like brief analysis and Ravel view utilizing data visualization, and how Jeff's engineering team at Lexis Labs took Google machine learning technology to law school to provide Lexis+ users with the ultimate legal research experience.

This is the second episode of our special four-part series. You can listen to our first episode with Jeff Pfeifer here for more on Lexis+. We hope you enjoy this special report featuring Jeff Pfeifer and will stay tuned for the next episodes in the series.

Links and Resources from this Episode

Review and Subscribe

If you like what you hear, please leave a review by clicking here

Subscribe to the podcast on your favorite player to get the latest episodes.

More here:
ATL Special Report Podcast: Tactical Use Cases And Machine Learning With Lexis+ - Above the Law

Efficient audits with machine learning and Slither-simil – Security Boulevard

by Sina Pilehchiha, Concordia University

Trail of Bits has manually curated a wealth of data, years of security assessment reports, and now we're exploring how to use this data to make the smart contract auditing process more efficient with Slither-simil.

Based on accumulated knowledge embedded in previous audits, we set out to detect similar vulnerable code snippets in new clients' codebases. Specifically, we explored machine learning (ML) approaches to automatically improve on the performance of Slither, our static analyzer for Solidity, and make life a bit easier for both auditors and clients.

Currently, human auditors with expert knowledge of Solidity and its security nuances scan and assess Solidity source code to discover vulnerabilities and potential threats at different granularity levels. In our experiment, we explored how much we could automate security assessments to:

Slither-simil, the statistical addition to Slither, is a code similarity measurement tool that uses state-of-the-art machine learning to detect similar Solidity functions. When it began as an experiment last year under the codename crytic-pred, it was used to vectorize Solidity source code snippets and measure the similarity between them. This year, were taking it to the next level and applying it directly to vulnerable code.

Slither-simil currently uses its own representation of Solidity code, SlithIR (Slither Intermediate Representation), to encode Solidity snippets at the granularity level of functions. We thought function-level analysis was a good place to start our research since it's not too coarse (like the file level) and not too detailed (like the statement or line level).

Figure 1: A high-level view of the process workflow of Slither-simil.

In the process workflow of Slither-simil, we first manually collected vulnerabilities from the previous archived security assessments and transferred them to a vulnerability database. Note that these are the vulnerabilities auditors had to find with no automation.

After that, we compiled previous clients codebases and matched the functions they contained with our vulnerability database via an automated function extraction and normalization script. By the end of this process, our vulnerabilities were normalized SlithIR tokens as input to our ML system.

Here's how we used Slither to transform a Solidity function to the intermediate representation SlithIR, then further tokenized and normalized it to be an input to Slither-simil:

Figure 2: A complete Solidity function from the contract TurtleToken.sol.

Figure 3: The same function with its SlithIR expressions printed out.

First, we converted every statement or expression into its SlithIR correspondent, then tokenized the SlithIR sub-expressions and further normalized them so more similar matches would occur despite superficial differences between the tokens of this function and the vulnerability database.

Figure 4: Normalized SlithIR tokens of the previous expressions.

After obtaining the final form of token representations for this function, we compared its structure to that of the vulnerable functions in our vulnerability database. Due to the modularity of Slither-simil, we used various ML architectures to measure the similarity between any number of functions.

Figure 5: Using Slither-simil to test a function from a smart contract with an array of other Solidity contracts.

Let's take a look at the function transferFrom from the ETQuality.sol smart contract to see how its structure resembled our query function:

Figure 6: Function transferFrom from the ETQuality.sol smart contract.

Comparing the statements in the two functions, we can easily see that they both contain, in the same order, a binary comparison operation (>= and <=), the same type of operand comparison, and another similar assignment operation with an internal call statement and an instance of returning a true value.

As the similarity score goes lower toward 0, these sorts of structural similarities are observed less often; in the other direction, the two functions become more alike, so two functions with a similarity score of 1.0 are identical to each other.

Research on automatic vulnerability discovery in Solidity has taken off in the past two years, and tools like Vulcan and SmartEmbed, which use ML approaches to discovering vulnerabilities in smart contracts, are showing promising results.

However, all the current related approaches focus on vulnerabilities already detectable by static analyzers like Slither and Mythril, while our experiment focused on the vulnerabilities these tools were not able to identify, specifically, those undetected by Slither.

Much of the academic research of the past five years has focused on taking ML concepts (usually from the field of natural language processing) and using them in a development or code analysis context, typically referred to as code intelligence. Based on previous, related work in this research area, we aim to bridge the semantic gap between the performance of a human auditor and an ML detection system to discover vulnerabilities, thus complementing the work of Trail of Bits human auditors with automated approaches (i.e., Machine Programming, or MP).

We still face the challenge of data scarcity concerning the scale of smart contracts available for analysis and the frequency of interesting vulnerabilities appearing in them. We can focus on the ML model because it's sexy, but it doesn't do much good for us in the case of Solidity, where even the language itself is very young, and we need to tread carefully in how we treat the amount of data we have at our disposal.

Archiving previous client data was a job in itself since we had to deal with the different solc versions to compile each project separately. For someone with limited experience in that area this was a challenge, and I learned a lot along the way. (The most important takeaway of my summer internship is that if you're doing machine learning, you will not realize how major a bottleneck the data collection and cleaning phases are unless you have to do them.)

Figure 7: Distribution of 89 vulnerabilities found among 10 security assessments.

The pie chart shows how 89 vulnerabilities were distributed among the 10 client security assessments we surveyed. We documented both the notable vulnerabilities and those that were not discoverable by Slither.

This past summer we resumed the development of Slither-simil and SlithIR with two goals in mind:

We implemented the baseline text-based model with FastText to be compared with an improved model with a tangibly significant difference in results; e.g., one not working on software complexity metrics, but focusing solely on graph-based models, as they are the most promising ones right now.

For this, we have proposed a slew of techniques to try out with the Solidity language at the highest abstraction level, namely, source code.

To develop ML models, we considered both supervised and unsupervised learning methods. First, we developed a baseline unsupervised model based on tokenizing source code functions and embedding them in a Euclidean space (Figure 8) to measure and quantify the distance (i.e., dissimilarity) between different tokens. Since functions are constituted from tokens, we just added up the differences to get the (dis)similarity between any two different snippets of any size.
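
A stripped-down sketch of that baseline idea follows: map tokens to vectors, aggregate each function's tokens into one vector, and compare functions by vector distance. The toy embeddings and token names are stand-ins for vectors learned from SlithIR tokens, and the aggregation here (a mean plus cosine similarity) is one simple variant for illustration, not necessarily the exact scheme Slither-simil uses.

```python
# Toy token-embedding similarity sketch (random embeddings stand in for learned ones).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["binary(>=)", "binary(<=)", "assignment", "internal_call", "return(True)"]
embedding = {tok: rng.normal(size=16) for tok in vocab}   # 16-d toy vectors per token

def function_vector(tokens):
    """Aggregate a function's token embeddings into a single vector (simple mean)."""
    return np.mean([embedding[t] for t in tokens], axis=0)

query = ["binary(>=)", "assignment", "internal_call", "return(True)"]
candidate = ["binary(<=)", "assignment", "internal_call", "return(True)"]

a, b = function_vector(query), function_vector(candidate)
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(round(similarity, 3))   # closer to 1.0 means more structurally similar
```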

The diagram below shows the SlithIR tokens from a set of training Solidity data spherized in a three-dimensional Euclidean space, with similar tokens closer to each other in vector distance. Each purple dot shows one token.

Figure 8: Embedding space containing SlithIR tokens from a set of training Solidity data

We are currently developing a proprietary database consisting of our previous clients' publicly available vulnerable smart contracts, along with references from papers and other audits. Together they'll form one unified, comprehensive database of Solidity vulnerabilities for querying, and for later training and testing of newer models.

We're also working on other unsupervised and supervised models, using data labeled by static analyzers like Slither and Mythril. We're examining deep learning models with much more expressivity for modeling source code; specifically, graph-based models utilizing abstract syntax trees and control flow graphs.

And we're looking forward to checking Slither-simil's performance on new audit tasks to see how it improves our assurance team's productivity (e.g., in triaging and finding the low-hanging fruit more quickly). We're also going to test it on Mainnet once it gets a bit more mature and automatically scalable.

You can try Slither-simil now from this GitHub PR; for end users, it's a simple CLI tool.

Slither-simil is a powerful tool with the potential to measure the similarity between function snippets of any size written in Solidity. We are continuing to develop it, and based on current results and recent related research, we hope to see impactful real-world results before the end of the year.

Finally, I'd like to thank my supervisors Gustavo, Michael, Josselin, Stefan, Dan, and everyone else at Trail of Bits, who made this the most extraordinary internship experience I've ever had.

This is a Security Bloggers Network syndicated blog from Trail of Bits Blog authored by Nol Ponthieux. Read the original post at: https://blog.trailofbits.com/2020/10/23/efficient-audits-with-machine-learning-and-slither-simil/

Visit link:
Efficient audits with machine learning and Slither-simil - Security Boulevard

Every Thing You Need to Know About Quantum Computers – Analytics Insight

Quantum computers are machines that use the properties of quantum physics to store data and perform calculations based on the probability of an object's state before it is measured. This can be extremely advantageous for certain tasks where they could vastly outperform even the best supercomputers.

Quantum computers can process massive and complex datasets more efficiently than classical computers. They use the fundamentals of quantum mechanics to speed up the process of solving complex calculations. Often, these computations incorporate a seemingly unlimited number of variables, and the potential applications span industries from genomics to finance.

Classic computers, which include smartphones and laptops, carry out logical operations using the definite position of a physical state. They encode information in binary bits that can be either 0s or 1s. In quantum computing, operations instead use the quantum state of an object to produce the basic unit of memory, called a quantum bit or qubit. Qubits are made using physical systems, such as the spin of an electron or the orientation of a photon. These systems can be in many different arrangements all at once, a property known as quantum superposition. Qubits can also be inextricably linked together using a phenomenon called quantum entanglement. The result is that a series of qubits can represent different things simultaneously. These states are the undefined properties of an object before they've been detected, such as the spin of an electron or the polarization of a photon.

Instead of having a clear position, unmeasured quantum states occur in a mixed superposition and can be entangled with the states of other objects, so that their final measurement outcomes are mathematically related even though the individual values are not yet determined. The complex mathematics behind these unsettled, entangled states can be plugged into special algorithms to make short work of problems that would take a classical computer a long time to work out.
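For readers who want to see the underlying math, here is a minimal numpy sketch of the state-vector picture of superposition and entanglement. It simply simulates the linear algebra on a classical machine and says nothing about how physical qubits are built:

```python
import numpy as np

# Basis states |0> and |1> as 2-dimensional vectors.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# A qubit in equal superposition: (|0> + |1>) / sqrt(2).
plus = (zero + one) / np.sqrt(2)
print("P(0) =", abs(plus[0]) ** 2, " P(1) =", abs(plus[1]) ** 2)  # 0.5 each

# A two-qubit entangled Bell state: (|00> + |11>) / sqrt(2).
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)
for outcome, amplitude in zip(["00", "01", "10", "11"], bell):
    # 00 and 11 each occur with probability 0.5; the two qubits always agree.
    print(outcome, abs(amplitude) ** 2)
```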

American physicist and Nobel laureate Richard Feynman gave a note about quantum computers as early as 1959. He stated that when electronic components begin to reach microscopic scales, effects predicted by quantum mechanics occur, which might be exploited in the design of more powerful computers.

During the 1980s and 1990s, the theory of quantum computers advanced considerably beyond Feynman's early speculation. In 1985, David Deutsch of the University of Oxford described the construction of quantum logic gates for a universal quantum computer. In 1994, Peter Shor of AT&T devised an algorithm to factor numbers with a quantum computer that would require as few as six qubits. Later, in 1998, Isaac Chuang of Los Alamos National Laboratory, Neil Gershenfeld of the Massachusetts Institute of Technology (MIT) and Mark Kubinec of the University of California created the first quantum computer with 2 qubits, which could be loaded with data and output a solution.

Recently, physicist David Wineland and his colleagues at the US National Institute of Standards and Technology (NIST) announced that they have created a 4-qubit quantum computer by entangling four ionized beryllium atoms using an electromagnetic trap. Today, quantum computing is poised to upend entire industries, from telecommunications and cybersecurity to advanced manufacturing, finance, medicine and beyond.

There are three primary types of quantum computing. Each type differs by the amount of processing power (qubits) needed and the number of possible applications, as well as the time required to become commercially viable.

Quantum annealing is best for solving optimization problems. Researchers are trying to find the best and most efficient possible configuration among many possible combinations of variables.

Volkswagen recently conducted a quantum experiment to optimize traffic flows in the overcrowded city of Beijing, China. The experiment was run in partnership with Google and D-Wave Systems. The Canadian company D-Wave developed the quantum annealer, though it is difficult to tell whether it exhibits any real quantum behavior so far. The algorithm was able to successfully reduce traffic by choosing the ideal path for each vehicle.

Quantum simulations explore specific problems in quantum physics that are beyond the capacity of classical systems. Simulating complex quantum phenomena could be one of the most important applications of quantum computing. One area that is particularly promising for simulation is modeling the effect of a chemical stimulation on a large number of subatomic particles, also known as quantum chemistry.

Universal quantum computers are the most powerful and most generally applicable, but also the hardest to build. Remarkably, a universal quantum computer would likely make use of over 100,000 qubits, and some estimates put the figure at one million qubits. Disappointingly, the most qubits we can access now is just 128. The basic idea behind the universal quantum computer is that you could direct the machine at any massively complex computation and get a quick solution. This includes solving the aforementioned annealing equations, simulating quantum phenomena, and more.

Here is the original post:
Every Thing You Need to Know About Quantum Computers - Analytics Insight

Reimagining the laser: new ideas from quantum theory could herald a revolution – The Conversation AU

Lasers were created 60 years ago this year, when three different laser devices were unveiled by independent laboratories in the United States. A few years later, one of these inventors called the unusual light sources "a solution seeking a problem". Today, the laser has been applied to countless problems in science, medicine and everyday technologies, with a market of more than US$11 billion per year.

A crucial difference between lasers and traditional sources of light is the temporal coherence of the light beam, or just coherence. The coherence of a beam can be measured by a number C, which takes into account the fact light is both a wave and a particle.

Read more: Explainer: what is wave-particle duality

From even before lasers were created, physicists thought they knew exactly how coherent a laser could be. Now, two new studies (one by myself and colleagues in Australia, the other by a team of American physicists) have shown C can be much greater than was previously thought possible.

The coherence C is roughly the number of photons (particles of light) emitted consecutively into the beam with the same phase (all waving together). For typical lasers, C is very large. Billions of photons are emitted into the beam, all waving together.

This high degree of coherence is what makes lasers suitable for high-precision applications. For example, in many quantum computers, we will need a highly coherent beam of light at a specific frequency to control a large number of qubits over a long period of time. Future quantum computers may need light sources with even greater coherence.

Read more: Explainer: quantum computation and communication technology

Physicists have long thought the maximum possible coherence of a laser was governed by an iron rule known as the Schawlow-Townes limit. It is named after the two American physicists who derived it theoretically in 1958 and went on to win Nobel prizes for their laser research. They stated that the coherence C of the beam cannot be greater than the square of N, the number of energy-excitations inside the laser itself. (These excitations could be photons, or they could be atoms in an excited state, for example.)

Now, however, two theory papers have appeared that overturn the Schawlow-Townes limit by reimagining the laser. Basically, Schawlow and Townes made assumptions about how energy is added to the laser (gain) and how it is released to form the beam (loss).

The assumptions made sense at the time, and still apply to lasers built today, but they are not required by quantum mechanics. With the amazing advances that have occurred in quantum technology in the past decade or so, our imagination need not be limited by standard assumptions.

The first paper, published this week in Nature Physics, is by my group at Griffith University and a collaborator at Macquarie University. We introduced a new model, which differs from a standard laser in both gain and loss processes, for which the coherence C is as big as N to the fourth power.

In a laser containing as many photons as a regular laser, this would allow C to be much bigger than before. Moreover, we show a laser of this kind could in principle be built using the technology of superconducting qubits and circuits which is used in the currently most successful quantum computers.

Read more: Why are scientists so excited about a recently claimed quantum computing milestone?

The second paper, by a team at the University of Pittsburgh, has not yet been published in a peer-reviewed journal but recently appeared on the physics preprint archive. These authors use a somewhat different approach, and end up with a model in which C increases like N to the third power. This group also propose building their laser using superconducting devices.

It is important to note that, in both cases, the laser would not produce a beam of visible light, but rather microwaves. But, as the authors of this second paper note explicitly, this is exactly the type of source required for superconducting quantum computing.

The standard limit is that C is proportional to N², the Pittsburgh group achieved C proportional to N³, and our model has C proportional to N⁴. Could some other model achieve an even higher coherence?

No, at least not if the laser beam has the ideal coherence properties we expect from a laser beam. This is another of the results proven in our Nature Physics paper. Coherence proportional to the fourth power of the number of photons is the best that quantum mechanics allows, and we believe it is physically achievable.
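Putting the scalings described above side by side (here C is the beam coherence and N the number of energy excitations stored in the laser; the labels are ours for readability, not the papers' notation):

```latex
C_{\text{Schawlow--Townes}} \lesssim N^{2}, \qquad
C_{\text{Pittsburgh model}} \propto N^{3}, \qquad
C_{\text{Heisenberg limit}} \propto N^{4}
```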

An ultimate limit that surpasses what is achievable with standard methods is known as a Heisenberg limit. This is because it is related to Heisenberg's uncertainty principle.

Read more: Explainer: Heisenberg's Uncertainty Principle

A Heisenberg-limited laser, as we call it, would not be just a revolution in the design and performance of lasers. It also requires a fundamental rethinking of what a laser is: not restricted to the current kinds of devices, but any device which turns inputs with little coherence into an output of very high coherence.

It is the nature of revolutions that it is impossible to tell whether they will succeed when they begin. But if this one does, and standard lasers are supplanted by Heisenberg-limited lasers, at least in some applications, then these two papers will be remembered as the first shots.

Original post:
Reimagining the laser: new ideas from quantum theory could herald a revolution - The Conversation AU

NIST Hones In On NatSec Quantum Research – Breaking Defense

WASHINGTON: NIST's Quantum Economic Development Consortium has just launched a new research committee focused on national security applications of quantum science, with an eye to identifying specific uses, standards and enabling technologies, says Celia Merzbacher, QEDC's deputy director.

Besides quantum computing, other applications that are particularly valuable for the national security side of things include: sensing, position, navigation and timing (PNT); and locating underground facilities, she told the Genius Machines summit sponsored by DefenseOne and NextGov yesterday.

The QEDC was launched under the National Institutes of Standards and Technology, which is overseen by the Commerce Department, to implement the December 2018 National Quantum Initiative Act. The law directed NIST to convene a consortium to identify the future measurement, standards, cybersecurity and other needs that will support the development of a quantum information science and technology industry, explains a NIST press release.

QEDC on Sept. 16 announced its steering committee members: Boeing, ColdQuanta, Google, IBM, QC Ware and Zapata Computing, as well as NIST and the Department of Energy. The consortium now boasts almost 200 members, from small startups to major defense contractors Lockheed Martin, Raytheon, BAE Systems, L3Harris and Honeywell. It also includes all of DoEs national laboratories and 32 universities from across the country.

Since its establishment, Merzbacher said, QEDC has been building a consortium of stakeholders from across the research and innovation system, but really focused on bringing together the various industries that are going to be really critical to achieving the economic impact that quantum sort of promises. It's in a very early stage. But nevertheless, there are all kinds of technologies that are going to have to advance and come together to make it possible to have quantum computing, and quantum communications and networks, and security and sensing.

As Breaking D readers know, quantum science is one of DoDs modernization priorities spearheaded by the Office of Research and Engineering. Over the past several years, long-simmering interest within the national security community in quantum science has been elevated due to concerns about Chinese achievements. DoD and Intelligence Community experts further worry that quantum computing could make it almost impossible to crack encrypted communications used by adversaries, terrorists and every-day criminals.

Air Force Research Laboratory (AFRL), for example, in June held a Quantum Collider to help speed ongoing research by commercial labs on enabling technologies into the hands of operators, awarding $5.25 million to 23 small businesses.

DARPA, in particular, has been working on quantum computing and quantum encryption for some two decades. In May DARPA chose seven university and industry teams for the first phase of its Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program designed to figure out how to rapidly advance quantum computing by using hybrid machines that combine intermediate-sized quantum devices with classical systems.

It's important to bear in mind that quantum computing and current or classical computing are sort of two different animals and they have very different strengths, Merzbacher explained. For the time being, it's very likely that the two will be somehow brought together in a hybrid way, so that when you have a computing problem, you'll be able to run a program that sort of parses that, and runs the part that runs well on a classical computer on the classical processor, and the part that runs well on a quantum computer on the quantum processor, and then kind of brings it back together to present the solution.

That's not so different from the way high-performance computing brought together graphical processors, GPUs, and CPUs to do the same thing, she elaborated. So hybrid computing is likely to be the new high-performance computing.

That said, and despite its vast promise, quantum science remains in the very early stages of research. Indeed, Mark Lewis, who now serves as DoD's acting deputy director for Research and Engineering, cautioned back in May that there is a lot of hype around quantum science.

Thus, the National Science Foundation last year created the Quantum Leap Challenge Institute program to fund the establishment of quantum research centers around the country, said Thyaga Nandagopal, deputy division director of the National Science Foundation's Directorate for Computer & Information Science and Engineering (CISE) Computing and Communication Foundations (CCF) Division.

NSF established three such interdisciplinary centers last year.

It is now taking proposals from interested universities for three more centers, with bids due Feb. 1, 2021. Further, Nandagopal said, his group is now looking at the possibility of funding a kind of grand challenge that would showcase research that unifies sensing, communication and computation together.

We are in some sense hopeful that what QEDC and NSF and DoE are all trying to do with these massive investments that they're making right now is to compress the timeframe to maybe 10, 15 years rather than waiting 30 years for useful applications to emerge. The hope, he said, is that by 2030 or 2035, there may be desktop quantum computers and we'll all be using a quantum network to do all financial transactions that we can be highly confident can't be broken.

There's such a broad set of possibilities. The quantum science that agencies like NSF are investing in and helping to happen is really the input and foundation; the sort of field of many flowers from which we get to harvest and create things that have practical application, Merzbacher enthused.

In the case of quantum information science that ranges from the ability to have super-sensitive sensors that could be huge improvements over our current technology for measuring brain imaging; or for finding things underground for defense and military purposes; perhaps for navigation; for making sure the energy grid is safe and secure; for driverless vehicles which don't even exist today and are going to demand all kinds of capabilities and assurance that quantum might help to provide; to computing capabilities to allow drug companies to discover drugs, and various sorts of chemical modeling to be done that would make possible all kinds of new materials and structures and devices that could be put to use in ways that we can't even really imagine.

It's really a tool, in some ways, that will open sort of the imagination of innovators to create things that will have value that we can't predict, Merzbacher summed up. So that's why it's really a good time, and important, for the government to step in and help prime the pump, and help support the costs of these initial research and development phases that will then make those tools available to all sorts of industries.

Here is the original post:
NIST Hones In On NatSec Quantum Research - Breaking Defense

Quantum Inspired Algorithm Going Back To The Source – Hackaday

Recently, [Jabrils] set out to accomplish a difficult task: porting a quantum-inspired algorithm to run on a (simulated) quantum computer. Algorithms are often inspired by all sorts of natural phenomena. For example, a solution to the traveling salesman problem models ants and their pheromone trails. Another famous example is neural nets, which are inspired by the neurons in your brain. However, attempting to run a machine learning algorithm on your neurons, even with the assistance of pen and paper, would be a nearly impossible exercise.

The quantum-inspired algorithm in question is known as the wave function collapse algorithm. In a nutshell, you have a cube of voxels, a graph of nodes, or simply a grid of tiles, as well as a list of detailed rules to determine the state of a node or tile. At the start of the algorithm, each node or point is considered to be in a state of superposition, which means it is considered to be in every possible state. Looking at the list of rules, the algorithm then begins to collapse the states. Unlike on a quantum computer, superposition is not an intrinsic part of a classical computer, so this solving must be done iteratively. In order to reduce possible conflicts and contradictions later down the line, the nodes with the least entropy (the smallest number of possible states) are solved first. At first, random states are assigned, with the changes propagating through the system. This process is continued until the waveform is ultimately collapsed to a stable state or a contradiction is reached.
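As a concrete illustration, here is a minimal, self-contained Python sketch of the tile-based variant described above; the sample pattern, tile letters, and helper names are made up for this example and are not [Jabrils]'s code. Each cell starts with every tile as a possibility, the lowest-entropy cell is collapsed to a weighted random choice, and the constraints are propagated until the grid settles or a contradiction is reached:

```python
import random
from collections import deque

def learn_rules(sample):
    """Infer tile frequencies and legal neighbour pairs from a sample grid."""
    h, w = len(sample), len(sample[0])
    weights, rules = {}, set()            # rules holds (tile_a, tile_b, dx, dy)
    for y in range(h):
        for x in range(w):
            t = sample[y][x]
            weights[t] = weights.get(t, 0) + 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    rules.add((t, sample[ny][nx], dx, dy))
    return weights, rules

def collapse(width, height, weights, rules, seed=0):
    """Collapse a width x height grid; returns the grid, or None on contradiction."""
    rng = random.Random(seed)
    grid = [[set(weights) for _ in range(width)] for _ in range(height)]

    def propagate(x, y):
        queue = deque([(x, y)])
        while queue:
            cx, cy = queue.popleft()
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = cx + dx, cy + dy
                if not (0 <= nx < width and 0 <= ny < height):
                    continue
                allowed = {nt for nt in grid[ny][nx]
                           if any((ct, nt, dx, dy) in rules for ct in grid[cy][cx])}
                if allowed != grid[ny][nx]:
                    grid[ny][nx] = allowed
                    queue.append((nx, ny))

    while True:
        # Pick the uncollapsed cell with the fewest remaining options (lowest entropy).
        open_cells = [(len(c), x, y) for y, row in enumerate(grid)
                      for x, c in enumerate(row) if len(c) > 1]
        if not open_cells:
            break
        _, x, y = min(open_cells)
        options = list(grid[y][x])
        grid[y][x] = {rng.choices(options, [weights[t] for t in options])[0]}
        propagate(x, y)
        if any(len(c) == 0 for row in grid for c in row):
            return None                   # contradiction reached
    return [[next(iter(c)) for c in row] for row in grid]

# A tiny made-up sample level: 'S' border tiles surrounding 'L' interior tiles.
sample = ["SSSS",
          "SLLS",
          "SLLS",
          "SSSS"]
w, r = learn_rules(sample)
out = collapse(8, 8, w, r, seed=42)
if out:
    print("\n".join("".join(row) for row in out))
```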

What's interesting is that the ruleset doesn't need to be hand-coded; it can be inferred from an example. A classic use case of this algorithm is 2D pixel-art level design. Given a small sample level, the algorithm churns away and produces similar but wholly unique output. This makes it easy to generate thousands of unique and beautiful levels from a single source image; however, it comes at a price. Even a small level can take hours to fully collapse. In theory, a quantum computer should be able to do this much faster, since, after all, it was the inspiration for this algorithm in the first place.

[Jabrils] spent weeks trying to get things running but ultimately didn't succeed. However, his efforts give us a peek into the world of quantum computing and this amazing algorithm. We look forward to hearing more about this project from [Jabrils], who is continuing to work on it in his spare time. Maybe give it a shot yourself by learning the basics of quantum computing.

View post:
Quantum Inspired Algorithm Going Back To The Source - Hackaday

MIT Lincoln Laboratory Creates The First Trapped-Ion Quantum Chip With Integrated Photonics – Forbes

MIT Lincoln Laboratory's new quantum chip with integrated photonics

Most experts agree that quantum computing is still in an experimental era. The current state of quantum technology has been compared to the same stage that classical computing was in during the late 1930s.

Quantum computing uses various computation technologies, such as superconducting, trapped ion, photonics, silicon-based, and others. It will likely be a decade or more before a useful fault-tolerant quantum machine is possible. However, a team of researchers at MIT Lincoln Laboratory has developed a vital step to advance the evolution of trapped-ion quantum computers and quantum sensors.

Most everyone knows that classical computers perform calculations using bits (binary digits) to represent either a one or zero. In quantum computers, a qubit (quantum bit) is the fundamental unit of information. Like classical bits, it can represent a one or zero. Still, a qubit can also be a superposition of both values when in a quantum state.

Superconducting qubits, used by IBM and several others, are the most commonly used technology. Even so, trapped-ion qubits are the most mature qubit technology; it dates back to the 1990s and its first use in atomic clocks. Honeywell and IonQ are the most prominent commercial users of trapped-ion qubits.

Trapped-Ion quantum computers

Depiction of external lasers and optical equipment in a quantum computer

Honeywell and IonQ both create trapped-ion qubits using an isotope of the rare-earth metal ytterbium. In its chip using integrated photonics, MIT used strontium, an alkaline-earth metal. The process to create ions is essentially the same. Precision lasers remove an outer electron from an atom to form a positively charged ion. Then, lasers are used like tweezers to move ions into position. Once in position, oscillating voltage fields hold the ions in place. One main advantage of ions lies in the fact that they are natural rather than fabricated. All trapped-ion qubits are identical. A trapped-ion qubit created on Earth would be the perfect twin of one created on another planet.

Dr. Robert Niffenegger, a member of the Trapped Ion and Photonics Group at MIT Lincoln Laboratory, led the experiments and is first author on the Nature paper. He explained why strontium was used for the MIT chip instead of ytterbium, the ion of choice for Honeywell and IonQ. "The photonics developed for the ion trap are the first to be compatible with violet and blue wavelengths," he said. "Traditional photonics materials have very high loss in the blue, violet and UV. Strontium ions were used instead of ytterbium because strontium ions do not need UV light for optical control."

This figure shows lasers in Honeywell's powerful Model zero trapped-ion quantum computer. Parallel operating zones are a key differentiating feature of its advanced QCCD trapped-ion system.

All the manipulation of ions takes place inside a vacuum chamber containing a trapped-ion quantum processor chip. The chamber protects the ions from the environment and prevents collisions with air molecules. In addition to creating ions and moving them into position, lasers perform the necessary quantum operations on each qubit. Because lasers and optical components are large, they are by necessity located outside the vacuum chamber. Mirrors and other optical equipment steer and focus external laser beams through the vacuum chamber windows and onto the ions.

The largest number of trapped-ion qubits being used in a quantum computer today is 32. For quantum computers to be truly useful, millions of qubits are needed. Of course, that means many thousands of lasers will also be required to control and measure the millions of ion qubits. The problem becomes even larger when two types of ions are used, such as ytterbium and barium in Honeywell's machine. The current method of controlling lasers makes it challenging to build trapped-ion quantum computers beyond a few hundred qubits.

Fiber optics couple laser light directly into the MIT ion-trap chip. When in use, the chip is cooled to cryogenic temperatures in a vacuum chamber, and waveguides on the chip deliver the light to an ion trapped right above the chip's surface for performing quantum computation.

Rather than resorting to optics and bouncing lasers off mirrors to aim beams into the vacuum chamber, MIT researchers have developed another method. They have figured out how to use optical fibers and photonics to carry laser pulses directly into the chamber and focus them on individual ions on the chip.

A trapped-ion strontium quantum computer needs lasers of six different frequencies. Each frequency corresponds to a different color, ranging from near-ultraviolet to near-infrared. Each color performs a different operation on an ion qubit. The MIT press release describes the new development this way: "Lincoln Laboratory researchers have developed a compact way to deliver laser light to trapped ions. In the Nature paper, the researchers describe a fiber-optic block that plugs into the ion-trap chip, coupling light to optical waveguides fabricated in the chip itself. Through these waveguides, multiple wavelengths [colors] of light can be routed through the chip and released to hit the ions above it."

Light is coupled to the MIT integrated photonic trap chip via optical fibers, which enter the cryogenic vacuum chamber through a fiber feedthrough.

In other words, rather than using external mirrors to shine lasers into the vacuum chamber, MIT researchers used multiple optical fibers and photonic waveguides instead. A block equipped with four optic fibers delivering a range of colors was mounted on the quantum chip's underside. According to Niffenegger, "Getting the fiber block array aligned to the waveguides on the chip and applying the epoxy felt like performing surgery. It was a very delicate process. We had about half a micron of tolerance, and it needed to survive cool down to 4 Kelvin."

I asked Dr. Niffenegger his thoughts about the long-term implications of his team's development. His reply was interesting.

"I think many people in the quantum computing field think that the board is set and all of the leading technologies at play are well defined. I think our demonstration, together with other work integrating control of trapped ion qubits, could tip the game on its head and surprise some people that maybe the rules arent what they thought.But really I just hope that it spurs more out of the box ideas that could enable quantum computing technologies to break through towards practical applications.

Read this article:
MIT Lincoln Laboratory Creates The First Trapped-Ion Quantum Chip With Integrated Photonics - Forbes

Global Quantum Computing Market 2020 COVID-19 Updated Analysis By Product (Simulation, Optimization, Sampling); By Application (Defense, Banking &…

Global Quantum Computing Market Report Covers Market Dynamics, Market Size, And Latest Trends Amid The COVID-19 Pandemic

To obtain a complete summary of the Quantum Computing market, one only has to read every detail in the report to grasp the key present and future trends it records. The Quantum Computing market report lays out all the relevant factors, including growth benefits, product sales, customer demand, economic flexibility, various applications, and the entire market segmentation, in a well-patterned format.

Click here for the free sample copy of the Quantum Computing Market report

On a global scale, the Quantum Computing market is shown to have crossed the profit bar due to the inclusion of endless strategies like government regulations, specific industrial policies, product expenditure analysis, and future events. The focus on the dominating players Magiq Technologies Inc., 1QB Information Technologies Inc., D-Wave Systems Inc., Intel Corporation, Nippon Telegraph And Telephone Corporation (NTT), Cambridge Quantum Computing Ltd, Fujitsu, International Business Machines Corporation (IBM), Evolutionq Inc, Hewlett Packard Enterprise (HP), QxBranch, LLC, Google Inc., Toshiba Corporation, Station Q Microsoft Corporation, University Landscape, Northrop Grumman Corporation, Accenture, Quantum Circuits, Inc, Rigetti Computing, Hitachi Ltd, QC Ware Corp. of the Quantum Computing market gives an idea about the growth enhancement being experienced on the global platform.

Key Insights encompassed in the Quantum Computing market report

Latest technological advancements in the Quantum Computing market
Pricing analysis and market strategies followed by market players to enhance global Quantum Computing market growth
Regional development status of the Quantum Computing market and the impact of COVID-19 in different regions
Detailing of the supply-demand chain, market valuation, drivers, and more

The Quantum Computing market report provides not only the clients but also all the other entrepreneurs with the market statistics, applications, product type, end-users, topological growth, market funds, and others in a diamond-like transparent format. The topological bifurcation North America (United States, Canada and Mexico), Europe (Germany, UK, France, Italy, Russia and Turkey etc.), Asia-Pacific (China, Japan, Korea, India, Australia, Indonesia, Thailand, Philippines, Malaysia and Vietnam), South America (Brazil, Argentina, Columbia etc.), Middle East and Africa (Saudi Arabia, UAE, Egypt, Nigeria and South Africa) is important in order to study the overall market growth and development. The current report beams some light on the futuristic scopes and the alterations needed in the industrial and government strategy for the benefit of the global market.

Read the detailed index of the full research study at: https://www.marketdataanalytics.biz/global-quantum-computing-market-report-2020-by-key-players-73563.html

An Overview About the Table of Contents:

Global Quantum Computing Market Overview
Target Audience for the Quantum Computing Market
Economic Impact on the Quantum Computing Market
Global Quantum Computing Market Forecast
Business Competition by Manufacturers
Production, Revenue (Value) by Region
Production, Revenue (Value), Price Trend by Type
Market Analysis by Application
Cost Analysis
Industrial Chain, Sourcing Strategy, and Downstream Buyers
Marketing Strategy Analysis, Distributors/Traders
Market Effect Factors Analysis

This informative report provides some of the vital details about the Quantum Computing market regarding segmentation {Simulation, Optimization, Sampling}; {Defense, Banking & Finance, Energy & Power, Chemicals, Healthcare & Pharmaceuticals, Others} such as application in various sectors, product type bifurcations, supply and demand statistics, and growth factors, which are commonly required for the potential positive growth and development.

Key questions answered by the report:

What are the major trends that are constantly influencing the growth of the Quantum Computing market?
Which are the prominent regions that offer immense prospects for players in the Quantum Computing market?
What are the business strategies adopted by key players to sustain in the global Quantum Computing market?
What is the expected size and growth rate of the global Quantum Computing market during the forecast period?
What are the factors impacting the growth of the global Quantum Computing market?
What are the challenges and threats faced by key players in the Quantum Computing market?

Enquire here to get customization and check the discount for the report: https://www.marketdataanalytics.biz/global-quantum-computing-market-report-2020-by-key-players-73563.html#inquiry-for-buying

Along with the market bifurcations, there is detailing about strategic means inculcated by the dominant players so as to carve out a name for themselves in the market. With a solitary click, the entire interface is displayed with the Quantum Computing market details mentioned in a brief and smooth-tongued format for all the laymen and business entrepreneurs present across the world.

Why Choose Market Data Analytics reports?

Our analysts use the latest market research techniques to create the report
Market reports are curated using the latest market research and analytical tools
Customization of the report is possible as per the requirement
Our team comprises expert and highly trained analysts
Quick, responsive customer support for domestic and international clients

Original post:
Global Quantum Computing Market 2020 COVID-19 Updated Analysis By Product (Simulation, Optimization, Sampling); By Application (Defense, Banking &...

A Measured Approach to Regulating Fast-Changing Tech – Harvard Business Review

Executive Summary

Innovations driving what many refer to as the Fourth Industrial Revolution are as varied as the enterprises affected. Industries and their supply chains are already being revolutionized by several emerging technologies, including 5G networks, artificial intelligence, and advanced robotics, all of which make possible new products and services that are both better and cheaper than current offerings. Unfortunately, not every application of transformational technology is as obviously beneficial to individuals or society as a whole. But rather than panic, regulators will need to step back, and balance costs and benefits rationally.

Amid the economic upheaval caused by Covid-19, technology-driven disruption continues to transform nearly every business at an accelerating pace, from entertainment to shopping to how we work and go to school. Though the crisis may be temporary, many changes in consumer behavior are likely permanent.

Well before the pandemic, however, industries and their supply chains were already being revolutionized by several emerging technologies, including 5G networks, artificial intelligence, and advanced robotics, all of which make possible new products and services that are both better and cheaper than current offerings. That kind of big bang disruption can quickly and repeatedly rewrite the rules of engagement for incumbents and new entrants alike. But is the world changing too fast? And, if so, are governments capable of regulating the pace and trajectory of disruption?

The answers to those questions vary by industry, of course. That's because the innovations driving what many refer to as the Fourth Industrial Revolution are as varied as the enterprises affected. In my recent book, Pivot to the Future, my co-authors and I identified ten transformative technologies with the greatest potential to generate new value for consumers, which is the only measure of progress that really matters. They are: extended reality, cloud computing, 3D printing, advanced human-computer interactions, quantum computing, edge and fog computing, artificial intelligence, the Internet of Things, blockchain, and smart robotics.

Some of these disruptors, such as blockchain, robotics, 3D printing and the Internet of Things, are already in early commercial use. For others, the potential applications may be even more compelling, though the business cases for reaching them are less obvious. Today, for example, only the least risk-averse investors are funding development in virtual reality, edge computing, and new user interface technologies that interpret and respond to brainwaves.

Complicating both investment and adoption of transformative technologies is the fact that the applications with the biggest potential to change the world will almost certainly be built on unanticipated combinations of several novel and mature innovations. Think of the way ride-sharing services require existing GPS services, mobile networks, and devices, or how video conferencing relies on home broadband networks and high-definition displays. Looking at just a few of the most exciting examples of things to come makes clear just how unusual the next generation of disruptive combinations will be, and how widespread their potential impact on business as usual.

Unfortunately, not every application of transformational technology is as obviously beneficial to individuals or society as a whole. Every one of the emerging technologies we identified (and plenty of those already in mainstream use) come with potential negative side effects that may, in some cases, outweigh the benefits. Often, these costs are both hard to predict and difficult to measure.

As disruption accelerates, so too does anxiety about its unintended consequences, feeding what futurist Alvin Toffler first referred to half a century ago as Future Shock. Tech boosters and critics alike are increasingly appealing to governments to intervene, both to promote the most promising innovations and, at the same time, to solve messy social and political conflicts aggravated by the technology revolution.

On the plus side, governments continue to support research and development of emerging technologies, serving as trial users of the most novel applications. The White House, for example, recently committed over $1 billion for continued exploration of leading-edge innovation in artificial intelligence and quantum computing. The Federal Communications Commission has just concluded one of its most successful auctions yet for mobile radio frequencies, clearing bandwidth once considered useless for commercial use but now seen as central to nationwide 5G deployments. Palantir, a data analytics company that works closely with governments to assess terrorism and other complex risks, has just filed for a public offering that values the start-up at over $40 billion.

At the same time, a regulatory backlash against technology continues to gain momentum, with concerns about surveillance, the digital divide, privacy, and disinformation leading lawmakers to consider restricting or even banning some of the most popular applications. And the increasingly strategic importance of continued innovation to global competitiveness and national security has fueled increasingly nasty trade disputes, including some between the U.S., China, and the European Union.

Together with ongoing antitrust inquiries into the competitive behavior of leading technology providers, these negative reactions underscore what author Adam Thierer sees as the growing prevalence of techno-panics: generalized fears about personal autonomy, the fate of democratic government, and perhaps even apocalyptic outcomes from letting some emerging technologies run free.

Disruptive innovation is not a panacea, but nor is it a poison. As technology transforms more industries and becomes the dominant driver of the global economy, it is inevitable both that users will grow more ambivalent, and, as a result, that regulators will become more involved. If, as a popular metaphor of the 1990s had it, the digital economy began as a lawless frontier akin to the American West, it's no surprise that as settlements grow socially complex and economically powerful, the law will continue to play catch up, likely for better and for worse.

But rather than panic, regulators need to step back and balance costs and benefits rationally. That's the only way we'll achieve the exciting promise of today's transformational technologies, but still avoid the dystopias.

Go here to read the rest:
A Measured Approach to Regulating Fast-Changing Tech - Harvard Business Review

Caledonian Braves can ‘breathe a bit easier’ after first win says boss Ricky Waddell – MSN UK

Boss Ricky Waddell believes the pressure has eased on his Caledonian Braves stars after their first win of the Lowland League season.

Jack Smith, David Winters and David Sinclair earned the Braves the three points against Dalbeattie Star, with Steven Degnan's strike for the visitors proving to be a mere consolation.

With a huge clash with current league champions Kelty Hearts this weekend, Waddell believes it was crucial to get off the mark last night after an impressive display.

He told Lanarkshire Live Sport: "We really set the tone of the game early on; that was important for us.

"We had a couple of sticky moments but I felt we were comfortable at half-time going in at 2-0.

"We didn't give away many chances and rode the storm after they got their goal before David's free-kick wraps things up for us.

"It gives us a bit of breathing space after a tough start to the season.

"We are getting bodies back, hopefully, in time for Saturday and we are getting back to where we were before the injuries in pre-season.

"If we go into Kelty off the back of four defeats you are starting to think it's going to be a tough task.

"I feel the boys can breathe a bit easier. Dalbeattie Star is a dangerous game, they take points off people and always come with a plan.

"The win takes away the thought that it has been a really bad start to the season.

"What has happened is that we have improved every game and it gives the players and myself a bit of a lift going into the Kelty game.

"I've been at clubs where you're struggling for a win and that becomes a habit. That's broken right away for us and we can concentrate on progressing."

View original post here:
Caledonian Braves can 'breathe a bit easier' after first win says boss Ricky Waddell - MSN UK

Where Sabrina the Teenage Witch cast are now – hit TV shows and unrecognisable star – Mirror Online

Forget the pointy black hat and long nose, there was only one witch every 90s teenager wanted to be - Sabrina.

With one quick point, she could have the perfect outfit, fix her homework or embarrass her school enemy.

Sabrina the Teenage Witch first aired in 1996 and ran for seven years, following Sabrina Spellman, her cat Salem, aunts Zelda and Hilda, on/off flame Harvey and many more memorable characters.

But what became of the family, the teachers and the boys after the show came to an end in 2003?

Let's take a trip through the closet to the Other Realm and find out what they've all been up to...

As the star of the show, it's not surprising that Melissa Joan Hart enjoyed a successful TV career post Sabrina.

The former Clarissa Explains It All actress became a household name thanks to her magical role.

After Sabrina wrapped in 2003, Melissa appeared in Nine Dead and Robot Chicken, and starred in Melissa & Joey, which ran for five years.

Over the years she's spoken openly about her past drug issues, but the mum-of-three has been enjoying a new lease of life after losing weight and loving her time with hubby Mark Wilkerson.

On her Twitter, she says her "favourite role" has been being mum to her boys Mason, Brady and Tucker.

Whimsical Aunt Hilda was a definite favourite on the show - and her love story with Will the train conductor was heartbreaking stuff.

Actress Caroline Rhea took on a lesser role when Sabrina went to college, so the actress had time to host a talk show.

She went on to star in Christmas with the Kranks, The Perfect Man and Love N' Dancing.

She also voiced Linda Flynn-Fletcher on Disney's animated comedy Phineas and Ferb for more than 100 episodes.

Caroline has also starred in a series of TV movies including A Christmas in Tennessee and she's currently filming Sydney to the Max.

The other - slightly more sensible - half of the sibling duo, we all remember Aunt Zelda well.

Beth had already starred in The Bonfire of the Vanities, Hearts Afire and The 5 Mrs Buchanans before starring as Zelda.

As well as parts on Under the Dome and Lost, Beth Broderick has taken on stage roles.

She was reunited with Melissa Joan Hart in 2014 when she made two guest appearances as Dr. Ellen Radier in Melissa & Joey.

She also went on to star in Timber Falls and episodes of Cold Case and Castle.

Beth has two things in the pipeline, Something About Her and Law of Attraction.

Harvey, Harvey, Harvey. How could we forget.

Now a musician, Nate surprised fans when he shared a photo of himself in 2015.

Gone is the windswept, chiselled and clean-shaven look that captured Sabrina Spellman's heart, now sporting a pair of glasses and slight stubble - and not quite as much hair.

He hasn't given up acting altogether, having gone on to star in Lovely & Amazing, Survival Island and episodes of The Tony Danza Show, Fantasy Island and Touched by An Angel.

However, he's had to take up other jobs to pay the bills.

Back in 2018 he tweeted: "I'm currently a maintenance man, a janitor, a carpenter, and do whatever random jobs I can get to pay the bills."

He also enjoys improv and writes songs.

The bully of the show, Libby was a thorn in Sabrina's side until her exit in season four.

Since leaving the show, Jenna Leigh Green has enjoyed a successful career, even keeping it in the world of magic with a role in Wicked as it toured North America.

The star also lent her voice to Extreme Ghostbusters and starred in Dharma & Greg, ER, Cold Case, You Again, Quantico and Bones.

She also took to the stage in Tonya and Nancy: A Rock Opera.

Jenny's time on the show was short lived, but her dreams of other worlds hidden behind closets led to her accidentally going through the Spellman's secret entrance.

After she left the sitcom after just one season, she went on to make appearances on the likes of The Outer Limits and Da Vinci's Inquest.

She starred in Cold Squad in 2005 and then in Le coeur a ses raisons in 2006, which seems to have been her final acting appearance.

Michelle moved to Europe, which may explain why there's not much on record after 2006.

But it sounds like she's tried her hand at lots of different things, as her Twitter bio reads: "Mom. Cuelr. Tech. Music. Design. Fashion. Film, TV & Theater. J'actress of all trades."

Little known fact about Michelle - she starred in the 1996 Sabrina movie as Marnie Littlefield.

Valerie Birkhead was Sabrina and Harvey's best friend at high school, and she remained on the show until season four.

After leaving, she starred in the likes of Bring It On, The Other Guys and both Horrible Bosses and its sequel.

Lindsay also pops up in The West Wing, Grosses Pointe and The Stones.

She can also be seen as Emily in Matthew Perry's sitcom The Odd Couple.

Detention! Not really, but if Mr Kraft was here, it'd happen at the drop of a hat.

Veteran actor Martin Mull was the man behind the role, and he has enjoyed a career spanning six decades - including the likes of Clue, Roseanne and Mary Hartman, Mary Hartman.

He appeared as George Perry on Community, and has also been seen as Russell the pharmacist on Two and a Half Men. The star also starred in Arrested Development, Dads, Veep and Brooklyn Nine-Nine.

OK, we're definitely #TeamHarvey, but Josh gave him a run for his money at the coffee shop.

David Lascher - who previously appeared in Nickelodeon series Hey Dude - took on the role, although his short romance with Sabrina only really came to fruition in season six.

Like Beth, he was reunited with Melissa Joan Hart during Melissa & Joey when he made a guest appearance as Charlie. He's also starred in Blossom with Joey Lawrence, who played Joey in the show.

Sarcastic and hell-bent on global domination, we couldn't get through this without bringing up the Spellman family pet.

He was voiced by Nick Bakay, who was also a writer on the show and its cartoon spin-off Sabrina, the Animated Series.

Amongst his credits are the likes of The King of Queens, Paul Blart: Mall Cop (and its upcoming sequel) and The Adventures of Baxter and McGuire.

He also popped up in In Living Color and Coach, as well as The Simpsons.

He lives in the Hollywood Hills with his wife, Robin, who he married in 1994.

Soleil was Sabrina's roommate Roxie King. The former child star rose to fame in Punky Brewster and, following her role on Sabrina, voiced characters in The Proud Family, Bratz, Planet Sheen and Robot Chicken.

She also appeared in an episode of Friends, in which she dated Joey but kept hitting him.

In 2007 she launched The Little Seed, an environmentally conscious children's boutique, but it closed in 2012 and now runs as an online business.

She's also released a number of books, including Happy Chaos: From Punky to Parenting and My Perfectly Imperfect Adventures in Between, and the party-planning guide Let's Get This Party Started.

Sabrina's other roommate Morgan was played by Elisa Donovan, who joined the show after starring in Clueless as Amber.

She's gone on to appear in Judging Amy, NCIS, The Lake and In Gayle We Trust.

Like many of her other Sabrina co-stars, she's also had a guest role on Melissa & Joey.

Alimi appeared as the Quizmaster Albert in Sabrina The Teenage Witch.

He's known for playing FBI agent David Sinclair on Numb3rs and he's starred in Queen Sugar, Lucifer, Criminal Minds and The Catch.

His most recent big role is as Marcel Dumas on Queen of the South.

Alimi, who lives in New York, is married and has two children.

Read more:
Where Sabrina the Teenage Witch cast are now - hit TV shows and unrecognisable star - Mirror Online