Global machine learning as a service market is expected to grow with a CAGR of 38.5% over the forecast period from 2018-2024 – Yahoo Finance

The report on the global machine learning as a service market provides qualitative and quantitative analysis for the period from 2016 to 2024. The report predicts the global machine learning as a service market to grow with a CAGR of 38.5% over the forecast period from 2018-2024.

New York, Feb. 20, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Machine Learning as a Service Market: Global Industry Analysis, Trends, Market Size, and Forecasts up to 2024" - https://www.reportlinker.com/p05751673/?utm_source=GNW. The study on machine learning as a service market covers the analysis of the leading geographies such as North America, Europe, Asia-Pacific, and RoW for the period of 2016 to 2024.

The report on machine learning as a service market is a comprehensive study and presentation of drivers, restraints, opportunities, demand factors, market size, forecasts, and trends in the global machine learning as a service market over the period of 2016 to 2024. Moreover, the report is a collective presentation of primary and secondary research findings.

Porter's five forces model in the report provides insights into the competitive rivalry, supplier and buyer positions in the market, and opportunities for new entrants in the global machine learning as a service market over the period of 2016 to 2024. Further, the IGR Growth Matrix given in the report brings an insight into the investment areas that existing or new market players can consider.

Report Findings
1) Drivers
- Increasing use of cloud technologies
- Statistical analysis with reduced time and cost
- Growing adoption of cloud-based systems
2) Restraints
- Less skilled personnel
3) Opportunities
- Technological advancement

Research Methodology

A) Primary Research
Our primary research involves extensive interviews and analysis of the opinions provided by the primary respondents. The primary research starts with identifying and approaching the primary respondents; the primary respondents approached include:
1. Key Opinion Leaders associated with Infinium Global Research
2. Internal and external subject matter experts
3. Professionals and participants from the industry

Our primary research respondents typically include:
1. Executives working with leading companies in the market under review
2. Product/brand/marketing managers
3. CXO-level executives
4. Regional/zonal/country managers
5. Vice President-level executives

B) Secondary Research
Secondary research involves extensive exploration of the secondary sources of information available in both the public domain and paid sources. At Infinium Global Research, each research study is based on over 500 hours of secondary research accompanied by primary research. The information obtained through the secondary sources is validated through cross-checks against various data sources.

The secondary sources of data typically include:
1. Company reports and publications
2. Government/institutional publications
3. Trade and association journals
4. Databases such as WTO, OECD, and World Bank, among others
5. Websites and publications by research agencies

Segment Covered
The global machine learning as a service market is segmented on the basis of component, application, and end user.

The Global Machine Learning As a Service Market by Component
- Software
- Services

The Global Machine Learning As a Service Market by Application
- Marketing & Advertising
- Fraud Detection & Risk Management
- Predictive Analytics
- Augmented & Virtual Reality
- Security & Surveillance
- Others

The Global Machine Learning As a Service Market by End User
- Retail
- Manufacturing
- BFSI
- Healthcare & Life Sciences
- Telecom
- Others

Company Profiles
- IBM
- PREDICTRON LABS
- H2O.ai
- Google LLC
- Crunchbase Inc.
- Microsoft
- Yottamine Analytics, LLC
- Fair Isaac Corporation
- BigML, Inc.
- Amazon Web Services, Inc.

What does this report deliver?
1. Comprehensive analysis of the global as well as regional markets of the machine learning as a service market.
2. Complete coverage of all the segments in the machine learning as a service market to analyze the trends, developments in the global market, and forecast of market size up to 2024.
3. Comprehensive analysis of the companies operating in the global machine learning as a service market. The company profile includes analysis of product portfolio, revenue, SWOT analysis, and latest developments of the company.
4. The IGR Growth Matrix presents an analysis of the product segments and geographies that market players should focus on to invest, consolidate, expand and/or diversify.

Read the full report: https://www.reportlinker.com/p05751673/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

Clare: clare@reportlinker.com
US: (339)-368-6001
Intl: +1 339-368-6001

Read more:
Global machine learning as a service market is expected to grow with a CAGR of 38.5% over the forecast period from 2018-2024 - Yahoo Finance

Grok combines Machine Learning and the Human Brain to build smarter AIOps – Diginomica

A few weeks ago I wrote a piece here about Moogsoft, which has been making waves in the service assurance space by applying artificial intelligence and machine learning to the arcane task of keeping critical IT up and running and lessening the business impact of service interruptions. It's a hot area for startups, and I've since gotten article pitches from several other AIOps firms at varying levels of development.

The most intriguing of these is a company called Grok, which was formed by a partnership between Avik Partners and Numenta, a pioneering AI research firm co-founded by Jeff Hawkins and Donna Dubinsky, who are famous for having started two classic mobile computing companies, Palm and Handspring. Avik is a company formed by brothers Casey and Josh Kindiger, two veteran entrepreneurs who have successfully started and grown multiple technology companies in service assurance and automation over the past two decades, most recently Resolve Systems.

Josh Kindiger told me in a telephone interview how the partnership came about:

Numenta is primarily a research entity started by Jeff and Donna about 15 years ago to support Jeff's ideas about the intersection of neuroscience and data science. About five years ago, they developed an algorithm called HTM and a product called Grok for AWS, which monitors servers on a network for anomalies. They weren't interested in developing a company around it, but we came along and saw a way to link our deep domain experience in the service management and automation areas with their technology. So, we licensed the name and the technology and built part of our Grok AIOps platform around it.

Jeff Hawkins has spent most of his post-Palm and Handspring years trying to figure out how the human brain works and then reverse engineering that knowledge into structures that machines can replicate. His model or theory, called hierarchical temporal memory (HTM), was originally described in his 2004 book On Intelligence, written with Sandra Blakeslee. HTM is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. For a little light reading, I recommend a peer-reviewed paper called "A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex."

Grok AIOps also uses traditional machine learning, alongside HTM. Said Kindiger:

When I came in, the focus was purely on anomaly detection, and I immediately engaged with a lot of my old customers--large Fortune 500 companies, very large service providers--and quickly found out that while anomaly detection was extremely important, that first signal wasn't going to be enough. So, we transformed Grok into a platform. And essentially what we do is we apply the correct algorithm, whether it's HTM or something else, to the proper streams: events, logs and performance metrics. Grok can enable predictive, self-healing operations within minutes.

The Grok AIOps platform uses multiple layers of intelligence to identify issues and support their resolution:

Anomaly detection

The HTM algorithm has proven exceptionally good at detecting and predicting anomalies and reducing noise, often by up to 90%, by providing the critical context needed to identify incidents before they happen. It can detect anomalies in signals beyond low and high thresholds, such as signal frequency changes that reflect changes in the behavior of the underlying systems. Said Kindiger:

We believe HTM is the leading anomaly detection engine in the market. In fact, it has consistently been the best performing anomaly detection algorithm in the industry resulting in less noise, less false positives and more accurate detection. It is not only best at detecting an anomaly with the smallest amount of noise but it also scales, which is the biggest challenge.
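Numenta's HTM implementation is its own; for readers who want a feel for what continuous anomaly scoring on a metric stream means in practice, here is a minimal, hypothetical sketch using a rolling-statistics detector rather than HTM. The class name, window size, and threshold are illustrative assumptions, not anything Grok exposes.

```python
from collections import deque
import math

class RollingAnomalyDetector:
    """Toy streaming detector: flags values far outside recent behavior.

    An illustrative stand-in for continuous anomaly scoring, not the HTM
    algorithm used by Grok.
    """

    def __init__(self, window=100, threshold=4.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold  # how many std-devs count as anomalous

    def score(self, value):
        if len(self.window) < 10:          # not enough history yet
            self.window.append(value)
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
        std = math.sqrt(var) or 1e-9
        z = abs(value - mean) / std
        self.window.append(value)
        return z

    def is_anomaly(self, value):
        return self.score(value) > self.threshold


# Example: a CPU-utilisation stream with a sudden spike at the end
detector = RollingAnomalyDetector(window=50, threshold=4.0)
stream = [20 + (i % 5) for i in range(200)] + [95]
flags = [detector.is_anomaly(v) for v in stream]
print(flags[-1])  # True: the spike stands out from recent behaviour
```

A real AIOps pipeline would run a detector like this (or HTM) per metric across thousands of streams and feed the scores into the clustering layers described below.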

Anomaly clustering

To help reduce noise, Grok clusters anomalies that belong together through the same event or cause.

Event and log clustering

Grok ingests all the events and logs from the integrated monitors and then applies event and log clustering algorithms to them, including pattern recognition and dynamic time warping, which also reduce noise.
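Dynamic time warping is a standard way to measure similarity between two sequences that unfold at different speeds, which is why it helps group metric or event bursts that describe the same incident. The following is a minimal textbook DTW distance for illustration only; it is not Grok's implementation.

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two numeric sequences."""
    n, m = len(a), len(b)
    inf = float("inf")
    # dp[i][j] = cost of best alignment of a[:i] with b[:j]
    dp = [[inf] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # step both
    return dp[n][m]


# Two bursts with the same shape but different pacing are closer to each
# other than to an unrelated flat signal, so they would cluster together.
burst_a = [0, 1, 5, 9, 5, 1, 0]
burst_b = [0, 0, 1, 5, 9, 9, 5, 1, 0]
unrelated = [3, 3, 3, 3, 3, 3, 3]
print(dtw_distance(burst_a, burst_b) < dtw_distance(burst_a, unrelated))  # True
```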

IT operations have become almost impossible for humans alone to manage. Many companies struggle to meet the high demand due to increased cloud complexity. Distributed apps make it difficult to track where problems occur during an IT incident. Every minute of downtime directly impacts the bottom line.

In this environment, the relatively new solution to reduce this burden of IT management, dubbed AIOps, looks like a much needed lifeline to stay afloat. AIOps translates to "Algorithmic IT Operations" and its premise is that algorithms, not humans or traditional statistics, will help to make smarter IT decisions and help ensure application efficiency. AIOps platforms reduce the need for human intervention by using ML to set alerts and automation to resolve issues. Over time, AIOps platforms can learn patterns of behavior within distributed cloud systems and predict disasters before they happen.

Grok detects latent issues with cloud apps and services and triggers automations to troubleshoot these problems before requiring further human intervention. Its technology is solid, its owners have lots of experience in the service assurance and automation spaces, and who can resist the story of the first commercial use of an algorithm modeled on the human brain?

More here:
Grok combines Machine Learning and the Human Brain to build smarter AIOps - Diginomica

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 – The Register

Microsoft has announced a new application, Dynamics 365 Project Operations, as well as additional AI-driven features for its Dynamics 365 range.

If you are averse to buzzwords, look away now. Microsoft Business Applications President James Phillips announced the new features in a post which promises AI-driven insights, a holistic 360-degree view of a customer, personalized customer experiences across every touchpoint, and real-time actionable insights.

Dynamics 365 is Microsoft's cloud-based suite of business applications covering sales, marketing, customer service, field service, human resources, finance, supply chain management and more. There are even mixed reality offerings for product visualisation and remote assistance.

Dynamics is a growing business for Microsoft, thanks in part to integration with Office 365, even though some of the applications are quirky and awkward to use in places. Licensing is complex too and can be expensive.

Keeping up with what is new is a challenge. If you have a few hours to spare, you could read the 546-page 2019 Release Wave 2 [PDF] document, for features which have mostly been delivered, or the 405-page 2020 Release Wave 1 [PDF], about what is coming from April to September this year.

Many of the new features are small tweaks, but the company is also putting its energy into connecting data, both from internal business sources and from third parties, to drive AI analytics.

The updated Dynamics 365 Customer Insights includes data sources such as demographics and interests, firmographics, market trends, and product and service usage data, says Phillips. AI is also used in new forecasting features in Dynamics 365 Sales and in Dynamics 365 Finance Insights, coming in preview in May.

The company is also introducing a new application, Dynamics 365 Project Operations, with general availability promised for October 1, 2020. This looks like a business-oriented take on project management, with the ability to generate quotes, track progress, allocate resources, and generate invoices.

Microsoft already offers project management through its Project products, though this is part of Office rather than Dynamics. What can you do with Project Operations that you could not do before with a combination of Project and Dynamics 365?

There is not a lot of detail in the overview, but rest assured that it has AI-powered business insights and seamless interoperability with Microsoft Teams, so it must be great, right? More will no doubt be revealed at the May Business Applications Summit in Dallas, Texas.

The rest is here:
Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 - The Register

Lifespan: The New Science Behind Anti-Aging and Longevity that Can Help You Live to 100 – Thrive Global

Is aging a disease? David Sinclair, PhD, a professor of genetics at Harvard Medical School and one of the world's top experts on aging and longevity, thinks so.

His new book Lifespan: Why We Age - and Why We Don't Have To covers the latest research on longevity and anti-aging therapies. I was excited to read this book after listening to Sinclair on a podcast.

Sinclair believes that aging is a disease - one that is treatable within our lifetimes. According to Sinclair, there is a singular reason why we age: a loss of information. The most important loss occurs in the epigenome, the expression of genetic code that instructs newly divided cells what they should be.

Aging is like the accumulation of scratches on a DVD, so the information can no longer be read correctly. Every time there's a radical adjustment to the epigenome, e.g., after DNA damage from the sun, a cell's identity is changed. This loss of epigenetic information, Sinclair proposes, is why we age.

Scientists have discovered longevity genes that have shown the ability to extend lifespan in many organisms. These include sirtuins, mTOR (the target of rapamycin), and AMPK.

There are natural ways to activate these longevity genes: high-intensity exercise, intermittent fasting, low-protein diets, and exposure to hot and cold temperatures. These stressors, a phenomenon known as hormesis, turn on genes that prompt the rest of the system to survive a little longer.

Researchers are studying molecules that activate longevity genes: rapamycin, metformin, resveratrol, and NAD boosters. Resveratrol is a natural molecule found in red wine that activates sirtuins and has increased lifespan in mice by 20 percent. NAD supplementation has been shown to restore fertility in mice that have gone through "mousopause."

Sinclair believes these innovations will let us live longer and have less disease. He predicts that humans could live to 150 years of age in the near future, with average life expectancy rising from around 80 now to 110 or higher.

The best ways to activate your longevity genes:
- Be hungry more often: skip breakfast, fast periodically for longer periods, get lean
- Avoid excessive carbs (sugar, pasta, breads) and processed oils and foods in general
- Do resistance training: lift weights, build muscle
- Expose your body to hot, cold, and other stressors regularly.

See the rest here:
Lifespan: The New Science Behind Anti-Aging and Longevity that Can Help You Live to 100 - Thrive Global

Machine learning and clinical insights: building the best model – Healthcare IT News

At HIMSS20 next month, two machine learning experts will show how machine learning algorithms are evolving to handle complex physiological data and drive more detailed clinical insights.

During surgery and other critical care procedures, continuous monitoring of blood pressure to detect and avoid the onset of arterial hypotension is crucial. New machine learning technology developed by Edwards Lifesciences has proven to be an effective means of doing this.

In the prodromal stage of hemodynamic instability, which is characterized by subtle, complex changes in different physiologic variables, unique dynamic arterial waveform "signatures" are formed; detecting them requires machine learning and complex feature extraction techniques.

Feras Hatib, director of research and development for algorithms and signal processing at Edwards Lifesciences, explained that his team developed a technology that can predict, in real time and continuously, upcoming hypotension in acute-care patients using the arterial pressure waveform.

"We used an arterial pressure signal to create hemodynamic features from that waveform, and we try to assess the state of the patient by analyzing those signals," said Hatib, who is scheduled to speak about his work at HIMSS20.

His team's success offers real-world evidence as to how advanced analytics can be used to inform clinical practice by training and validating machine learning algorithms using complex physiological data.

Machine learning approaches were applied to arterial waveforms to develop an algorithm that observes subtle signs to predict hypotension episodes.
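Edwards has not published its feature set or tooling here, but the general recipe the article describes - summarize short windows of the arterial pressure waveform into features, label each window by whether a hypotensive event followed, and train a supervised classifier - can be sketched roughly as follows. The feature names, window length, and synthetic data are assumptions made for illustration, with scikit-learn standing in for whatever the team actually uses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

def waveform_features(window):
    """Illustrative summary features for one window of arterial pressure samples."""
    window = np.asarray(window, dtype=float)
    return [
        window.mean(),                 # mean arterial pressure proxy
        window.std(),                  # beat-to-beat variability
        window.max() - window.min(),   # pulse pressure proxy
        np.percentile(window, 10),     # low-end excursion
    ]

# Hypothetical training set: each row is a short window of pressure samples,
# labeled 1 if a hypotensive episode followed soon afterwards.
rng = np.random.default_rng(0)
stable = rng.normal(90, 5, size=(500, 200))        # stable patients
declining = rng.normal(75, 10, size=(500, 200))    # drifting toward hypotension
X = np.array([waveform_features(w) for w in np.vstack([stable, declining])])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```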

In addition, real-world evidence and advanced data analytics were leveraged to quantify the association between hypotension exposure duration for various thresholds and critically ill sepsis patient morbidity and mortality outcomes.

"This technology has been in Europe for at least three years, and it has been used on thousands of patients, and has been available in the US for about a year now," he noted.

Hatib noted similar machine learning models could provide physicians and specialists with information that will help prevent re-admissions or guide other treatment options, or help prevent things like delirium - current areas of active development.

"In addition to blood pressure, machine learning could find a great use in the ICU, in predicting sepsis, which is critical for patient survival," he noted. "Being able to process that data in the ICU or in the emergency department, that would be a critical area to use these machine learning analytics models."

Hatib pointed out that the way in which data is annotated - in his case, defining what is hypotension and what is not - is essential in building the machine learning model.

"The way you label the data, and what data you include in the training is critical," he said. "Even if you have thousands of patients and include the wrong data, that isn't going to help - it's a little bit of an art to finding the right data to put into the model."

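To make the labeling point concrete: hypotension episodes are commonly defined against a mean arterial pressure (MAP) threshold held for some minimum duration, and changing either choice changes what the model is taught to predict. The rule below is a hypothetical example, not the definition Edwards uses.

```python
def label_hypotension(map_values, threshold=65, min_consecutive=3):
    """Label each MAP reading (e.g. one per minute) as part of a hypotensive
    episode if it stays below `threshold` mmHg for at least `min_consecutive`
    consecutive readings. Both numbers are illustrative choices.
    """
    labels = [False] * len(map_values)
    run_start = None
    for i, value in enumerate(map_values + [float("inf")]):  # sentinel ends last run
        if value < threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_consecutive:
                for j in range(run_start, i):
                    labels[j] = True
            run_start = None
    return labels


readings = [78, 72, 64, 63, 62, 70, 75, 60, 74]
print(label_hypotension(readings))
# [False, False, True, True, True, False, False, False, False]
```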
On the clinical side, it's important to tell the clinician what the issue is - in this case, what is causing the hypotension.

"You need to provide them the reasons that could be causing the hypotension - this is why we complemented the technology with a secondary screen telling the clinician what physiologically is causing the hypotension," he explained. "Helping them decide what to do about it was a critical factor."

Hatib said in the future machine learning will be everywhere, because scientists and universities across the globe are hard at work developing machine learning models to predict clinical conditions.

"The next big step I see is going toward using these ML techniques where the machine takes care of the patient and the clinician is only an observer," he said.

Feras Hatib, along with Sibyl Munson of Boston Strategic Partners, will share some machine learning best practices during HIMSS20 in a session, "Building a Machine Learning Model to Drive Clinical Insights." It's scheduled for Wednesday, March 11, from 8:30-9:30 a.m. in room W304A.

Read more:
Machine learning and clinical insights: building the best model - Healthcare IT News

Machine Learning Is No Place To Move Fast And Break Things – Forbes

It is much easier to apologize than it is to get permission.

The hacking culture has been the lifeblood of software engineering since long before the "move fast and break things" mantra became ubiquitous among tech startups [1, 2]. Computer industry leaders from Chris Lattner [3] to Bill Gates recount breaking and reassembling radios and other gadgets in their youth, ultimately being drawn to computers for their hackability. Silicon Valley itself may have never become the world's innovation hotbed if it were not for the hacker dojo started by Gordon French and Fred Moore, the Homebrew Computer Club.

Computer programmers still strive to move fast and iterate, developing and deploying reliable, robust software by following industry-proven processes such as test-driven development and the Agile methodology. In a perfect world, programmers could follow these practices to the letter and ship pristine software. Yet time is money. Aggressive, business-driven deadlines pass before coders can properly finish developing software ahead of releases. Add to this the modern practices of rapid releases and hot-fixing (updating features on the fly [4]), and the bar for deployable software is even lower. A company like Apple even prides itself on releasing phone hardware with missing software features: Deep Fusion image processing arrived in an iOS update months after the newest iPhone was released [5].

Software delivery becoming faster is a sign of progress; software is still eating the world [6]. But it's also subject to abuse: rapid software processes are used to ship fixes and complete new features, but are also used to ship incomplete software that will be fixed later. Tesla has emerged as a poster child, with over-the-air updates that can improve driving performance and battery capacity, or hinder them by mistake [7]. Naive consumers laud Tesla for the tech-savvy, software-first approach they're bringing to the old-school automobile industry. Yet industry professionals criticize Tesla for its recklessness: A/B testing [8] an 1,800 kg vehicle on the road is slightly riskier than experimenting with a new feature on Facebook.

Add Tesla Autopilot and machine learning algorithms into the mix, and this becomes significantly more problematic. Machine learning systems are by definition probabilistic and stochastic - predicting, reacting, and learning in a live environment - not to mention riddled with corner cases to test and vulnerable to unforeseen scenarios.

Massive progress in software systems has enabled engineers to move fast and iterate, for better or for worse. Now, with massive progress in machine learning systems (or Software 2.0 [9]), it's seamless for engineers to build and deploy decision-making systems that involve humans, machines, and the environment.

A current danger is that the toolset of the engineer is being made widely available, but the theoretical guarantees and the evolution of the right processes are not yet being deployed. So while deep learning has the appearance of an engineering profession, it is missing some of the theoretical checks, and practitioners run the risk of falling flat upon their faces.

In his recent book Rebooting AI [10], Gary Marcus draws a thought-provoking analogy between deep learning and pharmacology: deep learning models are more like drugs than traditional software systems. Biological systems are so complex that it is rare for the actions of medicine to be completely understood and predictable. Theories of how drugs work can be vague, and actionable results come from experimentation. While traditional software systems are deterministic and debuggable (and thus robust), drugs and deep learning models are developed via experimentation and deployed without fundamental understanding and guarantees. Too often the AI research process is to experiment first, then justify the results. It should be hypothesis-driven, with scientific rigor and thorough testing processes.

What we're missing is an engineering discipline with principles of analysis and design.

Before there was civil engineering, there were buildings that fell to the ground in unforeseen ways. Without proven engineering practices for deep learning (and machine learning at large), we run the same risk.

Taking this to the extreme is not advised either. Consider the shift in spacecraft engineering over the last decade: operational efficiencies and the move-fast culture have been essential to the success of SpaceX and other startups such as Astrobotic, Rocket Lab, Capella, and Planet. NASA cannot keep up with the pace of innovation; rather, it collaborates with and supports the space startup ecosystem. Nonetheless, machine learning engineers can learn a thing or two from an organization that has an incredible track record of deploying novel tech in massive coordination with human lives at stake.

Grace Hopper advocated for moving fast: "That brings me to the most important piece of advice that I can give to all of you: if you've got a good idea, and it's a contribution, I want you to go ahead and DO IT. It is much easier to apologize than it is to get permission." Her motivations and intent hopefully have not been lost on engineers and scientists.

[1] Facebook Cofounder Mark Zuckerberg's "prime directive to his developers and team", from a 2009 interview with Business Insider, "Mark Zuckerberg On Innovation".

[2] xkcd

[3] Chris Lattner is the inventor of LLVM and Swift. Recently on the AI podcast, he and Lex Fridman had a phenomenal discussion:

[4] Hotfix: A software patch that is applied to a "hot" system; i.e., a fix to a deployed system already in use. These are typically issues that cannot wait for the next release cycle, so a hotfix is made quickly and outside normal development and testing processes.

[5]

[6]

[7]

[8] A/B testing is an experimental process to compare two or more variants of a product, intervention, etc. This is very common in software products when considering, e.g., colors of a button in an app.
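For the curious, a bare-bones A/B comparison reduces to asking whether the difference in conversion rates between two variants is larger than chance would explain; a two-proportion z-test is one common way to check. The numbers below are invented for illustration.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical button-colour experiment: 520/10,000 vs 610/10,000 conversions
z = two_proportion_z(520, 10_000, 610, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests the difference is unlikely to be noise
```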

[9] Software 2.0 was coined by renowned AI research engineer Andrej Karpathy, who is now the Director of AI at Tesla.

[10]

[11]

View original post here:
Machine Learning Is No Place To Move Fast And Break Things - Forbes

Deploying Machine Learning to Handle Influx of IoT Data – Analytics Insight

The Internet of Things is gradually penetrating every aspect of our lives. With the growth in numbers of internet-connected sensors built into cars, planes, trains, and buildings, we can say it is everywhere. Be it smart thermostats or smart coffee makers, IoT devices are marching ahead into mainstream adoption.

But these devices are far from perfect. Currently, a lot of manual input is required to achieve optimal functionality; there is not a lot of intelligence built in. You must set your alarm, tell your coffee maker when to start brewing, and manually set schedules for your thermostat, all independently and precisely.

These machines rarely communicate with each other, and you are left playing the role of master orchestrator, a labor-intensive job.

Every time the IoT sensors gather data, there has to be someone at the backend to classify the data, process it, and ensure information is sent back to the device for decision making. If the data set is massive, how could an analyst handle the influx? Driverless cars, for instance, have to make rapid decisions when on autopilot, and relying on humans is completely out of the picture. Here, machine learning comes into play.

Tapping into that data to extract useful information is a challenge that's starting to be met using the pattern-matching abilities of machine learning. Firms are increasingly feeding data collected by Internet of Things (IoT) sensors situated everywhere from farmers' fields to train tracks into machine-learning models and using the resulting information to improve their business processes, products, and services.

In this regard, one of the most significant leaders is Siemens, whose Internet of Trains project has enabled it to move from simply selling trains and infrastructure to offering a guarantee its trains will arrive on time.

Through this project, the company has embedded sensors in trains and tracks in selected locations in Spain, Russia, and Thailand, and then used the data to train machine-learning models to spot tell-tale signs that tracks or trains may be failing. Having granular insights into which parts of the rail network are most likely to fail, and when, has allowed repairs to be targeted where they are most needed - a process called predictive maintenance. That, in turn, has allowed Siemens to start selling what it calls "outcome as a service" - a guarantee that trains will arrive on time close to 100 percent of the time.

Besides, Thyssenkrupp, which runs 1.1 million elevators worldwide, is one of the earliest firms to pair IoT sensor data with machine learning models; it has been feeding data collected by internet-connected sensors throughout its elevators into trained machine-learning models for several years. Such models provide real-time updates on the status of elevators and predict which are likely to fail and when, allowing the company to target maintenance where it's needed, reducing elevator outages and saving money on unnecessary servicing. Similarly, Rolls-Royce collects more than 70 trillion data points from its engines, feeding that data into machine-learning systems that predict when maintenance is required.
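Stripped of the domain details, the predictive-maintenance pattern described above is supervised classification over sensor histories: summarize recent readings per asset, label which assets later failed, train a model, and rank the fleet by predicted risk. The sketch below uses synthetic data and scikit-learn purely for illustration; it is not Siemens' or Thyssenkrupp's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-asset features summarising the last week of sensor data:
# [mean vibration, max temperature, error-log count]
rng = np.random.default_rng(1)
healthy = np.column_stack([rng.normal(0.2, 0.05, 800),
                           rng.normal(60, 5, 800),
                           rng.poisson(1, 800)])
failing = np.column_stack([rng.normal(0.5, 0.1, 200),
                           rng.normal(75, 8, 200),
                           rng.poisson(6, 200)])
X = np.vstack([healthy, failing])
y = np.array([0] * 800 + [1] * 200)   # 1 = failed within the next 30 days

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank assets by failure probability so maintenance is targeted where needed
fleet = np.array([[0.21, 61, 0], [0.48, 74, 7], [0.30, 66, 2]])
for asset_id, p in enumerate(model.predict_proba(fleet)[:, 1]):
    print(f"asset {asset_id}: predicted failure risk {p:.2f}")
```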

In a recent report, IDC analysts Andrea Minonne, Marta Muñoz, and Andrea Siviero say that applying artificial intelligence - the wider field of study that encompasses machine learning - to IoT data is already delivering proven benefits for firms.

"Given the huge amount of data IoT-connected devices collect and analyze, AI finds fertile ground across IoT deployments and use cases, taking analytics to the level of uncovering insights to help lower operational costs, provide better customer service and support, and create product and service innovation," they say.

According to IDC, the most common use cases for machine learning and IoT data will be predictive maintenance, followed by analyzing CCTV surveillance, smart home applications, in-store contextualized marketing and intelligent transportation systems.

That said, companies using AI and IoT today are outliers, with many firms neither collecting large amounts of data nor using it to train machine-learning models to extract useful information.

"We're definitely still in the very early stages," says Mark Hung, research VP at analyst firm Gartner.

"Historically, in a lot of these use cases - in the industrial space, smart cities, in agriculture - people have either not been gathering data or have gathered a large trove of data and not really acted on it," Hung says. "It's only fairly recently that people understand the value of that data and are finding out what's the best way to extract that value."

The IDC analysts agree that most firms are yet to exploit IoT data using machine learning, pointing out that a large portion of IoT users are struggling to go beyond mere data collection due to a lack of analytics skills, security concerns, or simply because they don't have a forward-looking strategic vision.

The reason machine learning is currently so prominent is because of advances over the past decade in the field of deep learning, a subset of ML. These breakthroughs were applied to areas from computer vision to speech and language recognition, allowing computers to see the world around them and understand human speech at a level of accuracy not previously possible.

Machine learning uses different approaches for harnessing trainable mathematical models to analyze data, and for all the headlines ML receives, it's also only one of many different methods available for interrogating data, and not necessarily the best option.

Dan Bieler, principal analyst at Forrester, says: "We need to recognize that AI is currently being hyped quite a bit. You need to look very carefully at whether it'd generate the benefits you're looking for - whether it'd create the value that justifies the investment in machine learning."

Visit link:
Deploying Machine Learning to Handle Influx of IoT Data - Analytics Insight

ReversingLabs Releases First Threat Intelligence Platform with Explainable Machine Learning to Automate Incident Response Processes with Verified…

Advances to ReversingLabs Titanium Platform Deliver Transparent and Trusted Malware Insights that Address Security Skills Gap

CAMBRIDGE, Mass., Feb. 18, 2020 (GLOBE NEWSWIRE) -- ReversingLabs, a leading provider of explainable threat intelligence solutions, today announced new and enhanced capabilities for its Titanium Platform, including new machine learning algorithm models, explainable classification and out-of-the-box security information and event management (SIEM) plug-ins, security, orchestration, automation and response (SOAR) playbooks, and MITRE ATT&CK Framework support. Introducing a new level of threat intelligence, the Titanium Platform now delivers explainable insights and verification that better support humans in the incident response decision-making process. ReversingLabs has been named as an ML-Based Machine Learning Binary Analysis Sample Provider within Gartner's 2019 Emerging Technologies and Trends Impact Radar: Security.1 ReversingLabs will showcase its new Titanium Platform at RSA 2020, February 24-28 in San Francisco, Moscone Center, Booth #3311 in the South Expo.

"As digital initiatives continue to gain momentum, companies are exposed to an increasing number of threat vectors fueled by a staggering volume of data that contains countless malware-infected files and objects, demanding new requirements from the IT teams that support them," said Mario Vuksan, CEO and Co-founder, ReversingLabs. "It's no wonder security operations teams struggle to manage incident response. Combine the complexity of threats with blind black-box detection engine verdicts, and a lack of analyst experience, skill and time, and teams are crippled by their inability to effectively understand and take action against these increased risks. The current and future threat landscape requires a different approach to threat intelligence and detection that automates time-intensive threat research efforts with the level of detail analysts need to better understand events, improve productivity and refine their skills."

According to Gartner's Emerging Technologies and Trends Impact Radar: Security, ML-based file analysis has grown at 35 percent over the past year in security technology products, with endpoint products being first movers to adopt this new technology.2

Black Box to Glass Box Verdicts
Because signature-, AI- and machine learning-based threat classifications from black-box detection engines come with little to no context, security analysts are left in the dark as to why a verdict was determined, negatively impacting their ability to verify threats, take informed action and extend critical job skills. That lack of context and transparency propelled ReversingLabs to develop a new glass-box approach to threat intelligence and detection designed to better inform human understanding first. Security operations teams using ReversingLabs Titanium Platform with patent-pending Explainable Machine Learning can automatically inspect, unpack, and classify threats as before, but with the added capability of verifying these threats in context with transparent, easy-to-understand results. By applying new machine learning algorithms to identify threat indicators, ReversingLabs enables security teams to more quickly and accurately identify and classify unknown threats.
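ReversingLabs has not disclosed how its explanations are generated, but the difference between a black-box verdict and a "glass box" one can be illustrated with a simple interpretable model: report the malicious/benign call together with the indicators that drove it. The feature names and data below are invented for illustration and are not ReversingLabs' indicators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["packed", "imports_crypto_api", "writes_autorun_key",
                 "entropy", "signed_certificate"]

# Hypothetical static-analysis indicators per file (one row = one file)
X = np.array([
    [1, 1, 1, 7.8, 0],
    [1, 0, 1, 7.5, 0],
    [0, 0, 0, 4.2, 1],
    [0, 1, 0, 5.0, 1],
    [1, 1, 0, 7.9, 0],
    [0, 0, 0, 3.9, 1],
])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = malicious

clf = LogisticRegression().fit(X, y)

sample = np.array([[1, 1, 1, 7.6, 0]])
verdict = clf.predict(sample)[0]
# Per-feature contribution to the decision: coefficient * feature value
contributions = clf.coef_[0] * sample[0]
top = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))[:3]
print("verdict:", "malicious" if verdict else "benign")
for name, weight in top:
    print(f"  indicator {name}: contribution {weight:+.2f}")
```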

Key Features
Available now with Explainable Machine Learning, ReversingLabs' platform inspires confidence in threat detection verdicts among security operations teams through a transparent and context-aware diagnosis, automating manual threat research with results humans can interpret to take informed action on zero-day threats, while simultaneously fueling continuous education and the upskilling of analysts. ReversingLabs' Explainable Machine Learning is based on machine learning-based binary file analysis, providing high-speed analysis, feature extraction and classification that can be used to enhance telemetry provided to incident response analysts. Key features of ReversingLabs' updated platform include:

"Effective machine learning results depend on having the right volume, structure, and quality of data to convert information into a relevant finding," said Vijay Doradla, Chief Business Officer at SparkCognition. "With access to ReversingLabs' extensive cloud repository, we have the breadth, depth, and scale of data necessary to train our machine learning models. Accurate classification and detection of threats fuels the machine learning-driven predictive security model leveraged in our DeepArmor next-generation endpoint protection platform."

1, 2 Gartner, Emerging Technologies and Trends Impact Radar: Security, Lawrence Pingree, et al, 13 November 2019

About ReversingLabs
ReversingLabs helps Security Operations Center (SOC) teams identify, detect and respond to the latest attacks, advanced persistent threats and polymorphic malware by providing explainable threat intelligence into destructive files and objects. ReversingLabs technology is used by the world's most advanced security vendors and deployed across all industries searching for a better way to get at the root of the web, mobile, email, cloud, app development and supply chain threat problem, of which files and objects have become major risk contributors.

ReversingLabs Titanium Platform provides broad integration support with more than 4,000 unique file and object formats, and speeds detection of malicious objects through automated static analysis, prioritizing the highest risks with actionable detail in only 0.005 seconds. With unmatched breadth and privacy, the platform accurately detects threats through explainable machine learning models, leveraging the largest repository of malware in the industry, containing more than 10 billion files and objects. Delivering transparency and trust, thousands of human-readable indicators explain why a classification and threat verdict was determined, while integrating at scale across the enterprise with connectors that support existing SIEM, SOAR, threat intelligence platform and sandbox investments, reducing incident response time for SOC analysts, while providing high-priority and detailed threat information for hunters to take quick action. Learn more at https://www.reversinglabs.com, or connect on LinkedIn or Twitter.

Media Contact: Jennifer Balinski, Guyer Group, jennifer.balinski@guyergroup.com

Go here to read the rest:
ReversingLabs Releases First Threat Intelligence Platform with Explainable Machine Learning to Automate Incident Response Processes with Verified...

Brian Burch Joins zvelo as Head of Artificial Intelligence and Machine Learning to Drive New Growth Initiatives – Benzinga

GREENWOOD VILLAGE, Colo., Feb. 17, 2020 /PRNewswire-PRWeb/ --Driven by a passion for learning and all things data science, Brian Burch has cultivated an exemplary career in building solutions which solve business problems across multiple industries including cybersecurity, financial services, retail, telecommunications, and aerospace. In addition to having a strong technical background across a broad range of vertical markets, Brian brings deep expertise in the areas of Artificial Intelligence and Machine Learning (AI/ML), Software Engineering, and Product Management.

"We are excited about Brian Burch joining the zvelo leadership team," explains zvelo CEO, Jeff Finn. "zvelo is quickly gaining momentum with tremendous growth opportunities built upon the zveloAI platform. Brian brings an impressive background in AI/ML and data science to further zvelo's leadership for URL classification, objectionable and malicious detection and his passion aligns perfectly with zvelo's mission to improve internet safety and security."

From large organizations like CenturyLink and Regions Bank to successful startups like StorePerform Technologies and Cognilytics, Brian has a proven history of leveraging his vast experience in key leadership roles to advance business goals through a fully-immersed, hands-on approach.

"I'm especially excited about combining zvelo's strong web categorization technologies with the latest advances in AI/ML to identify malicious websites, phishing URLs, and malware distribution infrastructure, and play a key role in supporting the mission to make the internet safer for everyone," stated Burch.

About zvelo, Inc.
zvelo is a leading provider of web content classification and objectionable, malicious and threat detection services, with a mission of making the Internet safer and more secure. zvelo combines advanced artificial intelligence-based contextual categorization with sophisticated malicious and phishing detection capabilities that customers integrate into network and endpoint security, URL and DNS filtering, brand safety, contextual targeting, and other applications where data quality, accuracy, and detection rates are critical.

Learn more at: https://www.zvelo.com

Corporate Information: zvelo, Inc. 8350 East Crescent Parkway, Suite 450 Greenwood Village, CO 80111 Phone: (720) 897-8113 zvelo.com or pr@zvelo.com

SOURCE zvelo

More here:
Brian Burch Joins zvelo as Head of Artificial Intelligence and Machine Learning to Drive New Growth Initiatives - Benzinga

Global Machine Learning in Automobile Market Insight Growth Analysis on Volume, Revenue and Forecast to 2019-2025 – News Parents

An advanced report on the Machine Learning in Automobile Market, added by Upmarketresearch.com, offers details on current and future growth trends pertaining to the business, besides information on myriad regions across the geographical landscape of the Machine Learning in Automobile market. The report also expands on comprehensive details regarding the supply and demand analysis, participation by major industry players, and market share growth statistics of the business sphere.

Download Free Sample Copy of Machine Learning in Automobile Market Report: https://www.upmarketresearch.com/home/requested_sample/106492

This research report on Machine Learning in Automobile Market entails an exhaustive analysis of this business space, along with a succinct overview of its various market segments. The study sums up the market scenario offering a basic overview of the Machine Learning in Automobile market with respect to its present position and the industry size, based on revenue and volume. The research also highlights important insights pertaining to the regional ambit of the market as well as the key organizations with an authoritative status in the Machine Learning in Automobile market.

Elucidating the top pointers from the Machine Learning in Automobile market report:
A detailed scrutiny of the regional terrain of the Machine Learning in Automobile market: The study broadly exemplifies the regional hierarchy of this market, while categorizing the same into United States, China, Europe, Japan, Southeast Asia & India. The research report documents data concerning the market share held by each nation, along with potential growth prospects based on the geographical analysis. The study anticipates the growth rate which each regional segment would cover over the estimated timeframe.

To Gain Full Access with Complete ToC of The Report, Visit https://www.upmarketresearch.com/buy/machine-learning-in-automobile-market-research-report-2019

Uncovering the competitive outlook of the Machine Learning in Automobile market: The comprehensive Machine Learning in Automobile market study embraces a meticulously developed competitive examination of this business space. According to the study, the key players include:
- Allerin
- Intellias Ltd
- NVIDIA Corporation
- Xevo
- Kopernikus Automotive
- Blippar
- Alphabet Inc
- Intel
- IBM
- Microsoft
Data pertaining to production facilities owned by market majors, industry share, and the regions served are appropriately detailed in the study. The research integrates data regarding the producers' product range, top product applications, and product specifications. Gross margins and pricing models of key market contenders are also depicted in the report.

Ask for Discount on Machine Learning in Automobile Market Report at: https://www.upmarketresearch.com/home/request_for_discount/106492

Other takeaways from the report that will impact the remuneration scale of the Machine Learning in Automobile market: The Machine Learning in Automobile market study appraises the product spectrum of this vertical with all-embracing details. Based on the report, the Machine Learning in Automobile market, in terms of product terrain, is classified into:
- Supervised Learning
- Unsupervised Learning
- Semi-Supervised Learning
- Reinforced Learning
Insights about the market share captured by each product type segment, profit valuation, and production growth data are also contained within the report. The study covers an elaborate analysis of the market's application landscape, which has been widely fragmented into:
- AI Cloud Services
- Automotive Insurance
- Car Manufacturing
- Driver Monitoring
- Others
Insights about each application's market share, product demand predictions based on each application, and the application-wise growth rate during the forthcoming years have been included in the Machine Learning in Automobile market report. Other key facts tackling aspects like the market concentration rate and raw material processing rate are illustrated in the report. The report evaluates the market's recent price trends and projects growth prospects for the industry. A precise summary of tendencies in marketing approach, market positioning, and marketing channel development is discussed in the report. The study also unveils data with regards to the producers and distributors, downstream buyers, and manufacturing cost structure of the Machine Learning in Automobile market.

Customize Report and Inquiry for The Machine Learning in Automobile Market Report: https://www.upmarketresearch.com/home/enquiry_before_buying/106492

Some of the Major Highlights of the TOC:
Executive Summary
- Global Machine Learning in Automobile Production Growth Rate Comparison by Types (2014-2025)
- Global Machine Learning in Automobile Consumption Comparison by Applications (2014-2025)
- Global Machine Learning in Automobile Revenue (2014-2025)
- Global Machine Learning in Automobile Production (2014-2025)
- North America Machine Learning in Automobile Status and Prospect (2014-2025)
- Europe Machine Learning in Automobile Status and Prospect (2014-2025)
- China Machine Learning in Automobile Status and Prospect (2014-2025)
- Japan Machine Learning in Automobile Status and Prospect (2014-2025)
- Southeast Asia Machine Learning in Automobile Status and Prospect (2014-2025)
- India Machine Learning in Automobile Status and Prospect (2014-2025)

Manufacturing Cost Structure Analysis
- Raw Material and Suppliers
- Manufacturing Cost Structure Analysis of Machine Learning in Automobile
- Manufacturing Process Analysis of Machine Learning in Automobile
- Industry Chain Structure of Machine Learning in Automobile

Development and Manufacturing Plants Analysis of Machine Learning in Automobile
- Capacity and Commercial Production Date
- Global Machine Learning in Automobile Manufacturing Plants Distribution
- Major Manufacturers Technology Source and Market Position of Machine Learning in Automobile
- Recent Development and Expansion Plans

Key Figures of Major Manufacturers
- Machine Learning in Automobile Production and Capacity Analysis
- Machine Learning in Automobile Revenue Analysis
- Machine Learning in Automobile Price Analysis
- Market Concentration Degree

About UpMarketResearch:
Up Market Research (https://www.upmarketresearch.com) is a leading distributor of market research reports with more than 800 global clients. As a market research company, we take pride in equipping our clients with insights and data that hold the power to truly make a difference to their business. Our mission is singular and well-defined: we want to help our clients envisage their business environment so that they are able to make informed, strategic and therefore successful decisions for themselves.

Contact Info - UpMarketResearch
Name: Alex Mathews
Email: [emailprotected]
Website: https://www.upmarketresearch.com
Address: 500 East E Street, Ontario, CA 91764, United States.

Originally posted here:
Global Machine Learning in Automobile Market Insight Growth Analysis on Volume, Revenue and Forecast to 2019-2025 - News Parents

Algorithms and bias, explained – Vox.com

Humans are error-prone and biased, but that doesn't mean that algorithms are necessarily better. Still, the tech is already making important decisions about your life and potentially ruling over which political advertisements you see, how your application to your dream job is screened, how police officers are deployed in your neighborhood, and even predicting your home's risk of fire.

But these systems can be biased based on who builds them, how they're developed, and how they're ultimately used. This is commonly known as algorithmic bias. It's tough to figure out exactly how systems might be susceptible to algorithmic bias, especially since this technology often operates in a corporate black box. We frequently don't know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works.

Typically, you only know the end result: how it has affected you, if you're even aware that AI or an algorithm was used in the first place. Did you get the job? Did you see that Donald Trump ad on your Facebook timeline? Did a facial recognition system identify you? That makes addressing the biases of artificial intelligence tricky, but even more important to understand.

When thinking about machine learning tools (machine learning is a type of artificial intelligence), it's better to think about the idea of training. This involves exposing a computer to a bunch of data - any kind of data - and then that computer learns to make judgments, or predictions, about the information it processes based on the patterns it notices.

For instance, in a very simplified example, let's say you wanted to train your computer system to recognize whether an object is a book, based on a few factors, like its texture, weight, and dimensions. A human might be able to do this, but a computer could do it more quickly.

To train the system, you show the computer metrics attributed to a lot of different objects. You give the computer system the metrics for every object, and tell the computer when the objects are books and when they're not. After continuously testing and refining, the system is supposed to learn what indicates a book and, hopefully, be able to predict in the future whether an object is a book, depending on those metrics, without human assistance.
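With an off-the-shelf library, that training loop fits in a few lines. The numbers below are invented purely for illustration: each row is an object's texture score, weight in grams, and longest dimension in centimeters, and the label marks whether it is a book.

```python
from sklearn.tree import DecisionTreeClassifier

# [texture score, weight (g), longest dimension (cm)] - invented examples
objects = [
    [0.8, 350, 23],   # paperback
    [0.7, 900, 28],   # hardcover
    [0.9, 420, 21],   # paperback
    [0.1, 180, 11],   # phone
    [0.3, 1200, 35],  # laptop
    [0.2, 30, 8],     # coaster
]
is_book = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier().fit(objects, is_book)

# Predict for a new object the system has never seen
print(model.predict([[0.75, 500, 24]]))  # [1] - looks like a book
```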

That sounds relatively straightforward. And it might be, if your first batch of data was classified correctly and included a good range of metrics featuring lots of different types of books. However, these systems are often applied to situations that have much more serious consequences than this task, and in scenarios where there isn't necessarily an objective answer. Often, the data on which many of these decision-making systems are trained or checked is not complete, balanced, or selected appropriately, and that can be a major source - although certainly not the only source - of algorithmic bias.

Nicol Turner-Lee, a Center for Technology Innovation fellow at the Brookings Institution think tank, explains that we can think about algorithmic bias in two primary ways: accuracy and impact. An AI can have different accuracy rates for different demographic groups. Similarly, an algorithm can make vastly different decisions when applied to different populations.

Importantly, when you think of data, you might think of formal studies in which demographics and representation are carefully considered, limitations are weighed, and then the results are peer-reviewed. That's not necessarily the case with the AI-based systems that might be used to make a decision about you. Let's take one source of data everyone has access to: the internet. One study found that, by teaching an artificial intelligence to crawl through the internet - just reading what humans have already written - the system would produce prejudices against black people and women.

Another example of how training data can produce sexism in an algorithm occurred a few years ago, when Amazon tried to use AI to build a résumé-screening tool. According to Reuters, the company's hope was that technology could make the process of sorting through job applications more efficient. It built a screening algorithm using résumés the company had collected for a decade, but those résumés tended to come from men. That meant the system, in the end, learned to discriminate against women. It also ended up factoring in proxies for gender, like whether an applicant went to a women's college. (Amazon says the tool was never used and that it was nonfunctional for several reasons.)

Amid discussions of algorithmic biases, companies using AI might say they're taking precautions, taking steps to use more representative training data and regularly auditing their systems for unintended bias and disparate impact against certain groups. But Lily Hu, a doctoral candidate at Harvard in applied mathematics and philosophy who studies AI fairness, says those aren't assurances that your system will perform fairly in the future.

"You don't have any guarantees because your algorithm performs fairly on your old dataset," Hu told Recode. "That's just a fundamental problem of machine learning. Machine learning works on old data [and] on training data. And it doesn't work on new data, because we haven't collected that data yet."

Still, shouldn't we just make more representative datasets? That might be part of the solution, though it's worth noting that not all efforts aimed at building better data sets are ethical. And it's not just about the data. As Karen Hao of the MIT Tech Review explains, AI could also be designed to frame a problem in a fundamentally problematic way. For instance, an algorithm designed to determine creditworthiness that's programmed to maximize profit could ultimately decide to give out predatory, subprime loans.

Here's another thing to keep in mind: just because a tool is tested for bias against one group - which assumes that the engineers checking for bias actually understand how bias manifests and operates - doesn't mean it is tested for bias against another type of group. This is also true when an algorithm is considering several types of identity factors at the same time: a tool may be deemed fairly accurate on white women, for instance, but that doesn't necessarily mean it works with black women.
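Auditing for that kind of gap is conceptually simple, even if deciding what to do about it is not: break the evaluation data down by group (including intersections of groups) and compare error rates. Below is a minimal sketch with made-up numbers, assuming you already have predictions, true labels, and a group label for each person.

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy per group label; gaps between groups signal disparate performance."""
    hits, totals = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit: group labels can also encode intersections of identities
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["white woman", "white woman", "white woman", "black woman",
          "black woman", "black woman", "white woman", "black woman"]
print(accuracy_by_group(y_true, y_pred, groups))
# {'white woman': 1.0, 'black woman': 0.5} - the tool is not equally accurate
```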

In some cases, it might be impossible to find training data free of bias. Take historical data produced by the United States criminal justice system. It's hard to imagine that data produced by an institution rife with systemic racism could be used to build out an effective and fair tool. As researchers at New York University and the AI Now Institute outline, predictive policing tools can be fed "dirty data," including policing patterns that reflect police departments' conscious and implicit biases, as well as police corruption.

So you might have the data to build an algorithm. But who designs it, and who decides how it's deployed? Who gets to decide what level of accuracy and inaccuracy for different groups is acceptable? Who gets to decide which applications of AI are ethical and which aren't?

While there isn't a wide range of studies on the demographics of the artificial intelligence field, we do know that AI tends to be dominated by men. And the high-tech sector, more broadly, tends to overrepresent white people and underrepresent black and Latinx people, according to the Equal Employment Opportunity Commission.

Turner-Lee emphasizes that we need to think about who gets a seat at the table when these systems are proposed, since those people ultimately shape the discussion about ethical deployments of their technology.

But there's also a broader question of what questions artificial intelligence can help us answer. Hu, the Harvard researcher, argues that for many systems, the question of building a fair system is essentially nonsensical, because those systems try to answer social questions that don't necessarily have an objective answer. For instance, Hu says algorithms that claim to predict a person's recidivism don't ultimately address the ethical question of whether someone deserves parole.

"There's not an objective way to answer that question," Hu says. "When you then insert an AI system, an algorithmic system, [or] a computer, that doesn't change the fundamental context of the problem, which is that the problem has no objective answer. It's fundamentally a question of what our values are, and what the purpose of the criminal justice system is."

With that in mind, some algorithms probably shouldn't exist, or at least they shouldn't come with such a high risk of abuse. Just because a technology is accurate doesn't make it fair or ethical. For instance, the Chinese government has used artificial intelligence to track and racially profile its largely Muslim Uighur minority, about 1 million of whom are believed to be living in internment camps.

One of the reasons algorithmic bias can seem so opaque is that, on our own, we usually can't tell when it's happening (or whether an algorithm is even in the mix). That was one of the reasons the controversy over a husband and wife who both applied for an Apple Card and got widely different credit limits attracted so much attention, Turner-Lee says. It was a rare instance in which two people who, at least apparently, were exposed to the same algorithm could easily compare notes. The details of this case still aren't clear, though the company's credit card is now being investigated by regulators.

But opportunities for consumers to make apples-to-apples comparisons of algorithmic results are rare, and that's part of why advocates are demanding more transparency about how systems work and how accurate they are. Ultimately, it's probably not a problem we can solve on the individual level. Even if we do understand that algorithms can be biased, that doesn't mean companies will be forthright in allowing outsiders to study their artificial intelligence. That's created a challenge for those pushing for more equitable technological systems. How can you critique an algorithm (a sort of black box) if you don't have true access to its inner workings or the capacity to test a good number of its decisions?

Companies will claim to be accurate, overall, but won't always reveal their training data (remember, that's the data that the artificial intelligence trains on before evaluating new data, like, say, your job application). Many don't appear to be subjecting themselves to audit by a third-party evaluator or publicly sharing how their systems fare when applied to different demographic groups. Some researchers, such as Joy Buolamwini and Timnit Gebru, say that sharing this demographic information about both the data used to train and the data used to check artificial intelligence should be a baseline definition of transparency.

We will likely need new laws to regulate artificial intelligence, and some lawmakers are catching up on the issue. There's a bill that would force companies to check their AI systems for bias through the Federal Trade Commission (FTC). And legislation has also been proposed to regulate facial recognition, and even to ban the technology from federally assisted public housing.

But Turner-Lee emphasizes that new legislation doesn't mean existing laws or agencies don't have the power to look over these tools, even if there's some uncertainty. For instance, the FTC oversees "deceptive acts and practices," which could give the agency authority over some AI-based tools.

The Equal Employment Opportunity Commission, which investigates employment discrimination, is reportedly looking into at least two cases involving algorithmic discrimination. At the same time, the White House is encouraging federal agencies that are figuring out how to regulate artificial intelligence to keep technological innovation in mind. That raises the challenge of whether the government is prepared to study and govern this technology, and figure out how existing laws apply.

"You have a group of people that really understand it very well, and that would be technologists," Turner-Lee cautions, "and a group of people who don't really understand it at all, or have minimal understanding, and that would be policymakers."

That's not to say there aren't technical efforts to de-bias flawed artificial intelligence, but it's important to keep in mind that the technology won't be a solution to fundamental challenges of fairness and discrimination. And, as the examples we've gone through indicate, there's no guarantee companies building or using this tech will make sure it's not discriminatory, especially without a legal mandate to do so. It would seem it's up to us, collectively, to push the government to rein in the tech and to make sure it helps us more than it might already be harming us.

Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.

See the original post:
Algorithms and bias, explained - Vox.com

PhD in Machine Learning and Computer Vision for Smart Maintenance of Road Infrastructure job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY…

About the project

The vision of the Norwegian Public Roads Administration (NPRA, Norw.: Statens vegvesen) is to contribute to national goals for the transportation system. These goals are safety, promoting added value in society, and promoting change towards lower global emissions. The road system in Norway is large and complex, and the geography of Norway raises a range of challenges with respect to maintenance, which will be given priority over new large road investments in the coming years.

The Norwegian National Transport Plan is aimed towards promoting mobility, traffic safety, climatic and environmental conditions. To ensure a high-quality road infrastructure it is important to choose effective maintenance actions within the areas of operations, maintenance and rehabilitation. In particular, the development of new technology and new digital concepts is essential to enable more efficient monitoring and analysis of road traffic and road network conditions.

There is a technological shift taking place towards a more digitalized society. This technological shift has the potential to contribute to the overall goals of safety, low emissions and increased resource efficiency. NTNU's vision is "Knowledge for a Better World", and the university actively pursues this goal across education, research and innovation. In the area of transportation, NTNU conducts extensive activity in several relevant engineering fields connected to infrastructure, maintenance and digitalization.

NPRA has established a research and development project with the title "Smarter maintenance". This project on road maintenance and infrastructure will involve close cooperation between the areas of research expertise in civil, transport and structural engineering, technology, digitalization and maintenance, and economics. This cooperation is organized within three thematic areas: (1) Condition registration, data analysis and modelling; (2) Big data, artificial intelligence, strategic analysis and planning; and (3) Maintenance, social economics and innovation. There are both a substantial need and many opportunities for innovation in this research program, which will bring together 7 PhD candidates across several engineering and cognate fields. Together, they will seek to solve specific challenges connected to the maintenance of transportation infrastructure.

These positions will be grouped into research clusters that will ensure close cooperation between PhD-candidates, supervisors, NPRA-experts and master/bachelor students.

We are seeking motivated candidates to work in a multidisciplinary and innovative setting of national and international importance.

About the position

We have a vacancy for a PhD position at the Department of Computer Science. The work will be carried out in close collaboration with domain experts from the Norwegian Public Roads Administration (NPRA) and the position will be affiliated with the Norwegian Open AI Lab (NAIL).

The candidate will perform research on next-generation AI and computer vision methods related to maintenance of road infrastructure. Key research topics that will be investigated in this PhD project are:

The position reports to Professor Frank Lindseth.

Main duties and responsibilities

Qualification requirements

Essential requirements:

The PhD position's main objective is to qualify for work in research positions. The qualification requirement is completion of a master's degree or second degree (equivalent to 120 credits) with a strong academic background in Computer Science or equivalent education, with a grade of B or better in terms of NTNU's grading scale. Applicants with no letter grades from previous studies must have an equally good academic foundation. Applicants who are unable to meet these criteria may be considered only if they can document that they are particularly suitable candidates for education leading to a PhD degree. Key qualifications are:

Candidates completing their MSc degree in Spring 2020 are encouraged to apply. The position is also open as an integrated PhD for NTNU students starting the final year of their Master's degree in Autumn 2020.

Desirable qualifications:

The appointment is to be made in accordance with the regulations in force concerning State Employees and Civil Servants and national guidelines for appointment as PhD, postdoctor and research assistant.

NTNU is committed to following evaluation criteria for research quality according to The San Francisco Declaration on Research Assessment - DORA.

Personal characteristics

In the evaluation of which candidate is best qualified, emphasis will be placed on education, experience and personal suitability, as well as motivation, in terms of the qualification requirements specified in the advertisement.

We offer

Salary and conditions

PhD candidates are remunerated in code 1017, and are normally remunerated at a gross salary of NOK 479 600 per year before tax. From the salary, 2% is deducted as a contribution to the Norwegian Public Service Pension Fund.

The period of employment is 3 years without required duties. Appointment to a PhD position requires admission to the PhD programme in Computer Science.

As a PhD candidate, you undertake to participate in an organized PhD programme during the employment period. A condition of appointment is that you are in fact qualified for admission to the PhD programme within three months.

The engagement is to be made in accordance with the regulations in force concerning State Employees and Civil Servants, and the acts relating to Control of the Export of Strategic Goods, Services and Technology. Candidates who, by assessment of the application and attachments, are seen to conflict with the criteria in the latter act will be prohibited from recruitment to NTNU. After the appointment you must assume that there may be changes in the area of work.

General information

A good work environment is characterized by diversity. We encourage qualified candidates to apply, regardless of their gender, functional capacity or cultural background. Under the Freedom of Information Act (offentleglova), information about the applicant may be made public even if the applicant has requested not to have their name entered on the list of applicants.

The national labour force must reflect the composition of the population to the greatest possible extent, and NTNU wants to increase the proportion of women in its scientific posts. Women are encouraged to apply. Furthermore, Trondheim offers great opportunities for education (including international schools) and possibilities to enjoy nature, culture and family life (http://trondheim.com/). Having a population of 200,000, Trondheim is a small city by international standards with low crime rates and little pollution. It also has easy access to beautiful countryside with mountains and a dramatic coastline.

Questions about the position can be directed to Professor Frank Lindseth, phone number +47 928 09 372, e-mail frankl@ntnu.no

The application must contain:

Publications and other academic works that the applicant would like to be considered in the evaluation must accompany the application. Joint works will be considered. If it is difficult to identify the individual applicant's contribution to joint works, the applicant must include a brief description of their contribution.

Please submit your application electronically via jobbnorge.no with your CV, diplomas and certificates. Applications submitted elsewhere will not be considered. A Diploma Supplement must be attached for European master's diplomas obtained outside Norway. Chinese applicants are required to provide confirmation of their master's diploma from China Credentials Verification (CHSI, http://www.chsi.com.cn/en/).

Applicants invited for interview must include certified copies of transcripts and reference letters.

Please refer to the application number 2020/5928 when applying.

Application deadline: 07.03.2020

NTNU - knowledge for a better world

The Norwegian University of Science and Technology (NTNU) creates knowledge for a better world and solutions that can change everyday life.

Faculty of Information Technology and Electrical Engineering

The Faculty of Information Technology and Electrical Engineering is Norway's largest university environment in ICT, electrical engineering and mathematical sciences. Our aim is to contribute to a smart, secure and sustainable future. We emphasize high international quality in research, education, innovation, dissemination and outreach. The Faculty consists of seven departments and the Faculty Administration.

Deadline: 7th March 2020. Employer: NTNU - Norwegian University of Science and Technology. Municipality: Trondheim. Scope: Fulltime. Duration: Temporary. Place of service: Trondheim.

More:
PhD in Machine Learning and Computer Vision for Smart Maintenance of Road Infrastructure job with NORWEGIAN UNIVERSITY OF SCIENCE & TECHNOLOGY...

Lecturer/Senior Lecturer in Artificial Intelligence and / or Machine learning job with UNIVERSITY OF BRISTOL | 196709 – Times Higher Education (THE)

Lecturer/Senior Lecturer in Artificial Intelligence and / or Machine learning

Job number: ACAD104467. Division/School: School of Computer Science, Electrical and Electronic Engineering and Engineering Maths. Contract type: Open Ended. Working pattern: Full time. Salary: £38,017 - £59,135 per annum. Closing date for applications: 11-Mar-2020.

The Department of Computer Science, University of Bristol, is seeking to appoint a number of Lecturers (analogous to Assistant Professor) or Senior Lecturers in Artificial Intelligence and / or Machine learning, the level of appointment depending on the experience of the successful candidate.

You will be expected to both deliver outstanding teaching and undertake internationally leading research, as well as carrying out appropriate administrative tasks. There are opportunities to play a significant role in shaping and leading Bristol's activities in AI and Data Science, including the new UKRI Centre for Doctoral Training in Interactive AI. Teaching responsibility will cover areas including: data-driven computer science, machine learning and artificial intelligence, for advanced undergraduates as well as postgraduates.

The Department of Computer Science is an international centre of excellence in the foundations and applications of computing, ranked 4th in the UK for research intensity by the 2014 REF. The Department is already home to significant activity in Artificial Intelligence, Machine Learning and Data Science, both within the Intelligent System Laboratory research group and in closely associated neighbouring research groups in Computer Vision and Robotics. The University of Bristol is a leading institution among the UK's Russell Group Universities and is regularly placed among the top-ranking institutions in global league tables.

We are located in the centre of Bristol, consistently recognised as one of the UK's most liveable cities.

Informal enquires are welcome and can be directed to: Prof. Seth Bullock, Head of the Computer Science department (seth.bullock@bristol.ac.uk), and Prof. Peter Flach, Professor of Artificial Intelligence (peter.flach@bristol.ac.uk).

The posts are being offered on a full-time, open-ended contract. A recruitment supplement scheme is available, up to £5K.

The closing date for applications is 23:59 on Wednesday 11th March, and interviews are expected to take place in the week commencing 6th April.

We welcome applications from all members of our community and are particularly encouraging those from diverse groups, such as members of the LGBT+ and BAME communities, to join us.

Read more here:
Lecturer/Senior Lecturer in Artificial Intelligence and / or Machine learning job with UNIVERSITY OF BRISTOL | 196709 - Times Higher Education (THE)

iMerit Leads off 2020 With New AI Innovation Initiatives and Funding – GlobeNewswire

LOS GATOS, Calif., Feb. 18, 2020 (GLOBE NEWSWIRE) -- via NetworkWire -- iMerit, a leading data annotation and enrichment company, is headed into 2020 with expansion plans, new innovation and new funding for its human-in-the-loop AI technology platform. The company has attracted $20 million in Series B funding led by CDC Group, the UK's leading publicly-owned impact investor. This investment, which also includes participation from existing investors, will be used to continue innovation for the company's proprietary AI platform that delivers 100% quality control and over 98% accuracy.

The funding will also be used to expand its advanced workforce from 3,000 employees across the US, India and Bhutan to 10,000 global employees by 2023. It is the latest sign that iMerit's high-quality datasets for artificial intelligence (AI) and machine learning are leading the industry and achieving the highest security certification. The company's data annotation and enrichment specialists work across nine secure centers globally. They provide solutions across multiple markets including automotive, healthcare, e-commerce, finance, media and entertainment, and government. iMerit has been growing at over 100% for 3 years, has been cash positive for the last 2 years, and is continuing to differentiate from the rest of the market.

"This investment validates our belief that the growth in artificial intelligence and machine learning is best serviced by a full-time, specialist workforce that continuously learns and grows with the technology," says iMerit CEO and founder Radha R. Basu, "and CDC Group shares this belief. This new funding will enable iMerit to continue to provide enterprise-scale and quality to a large client base in a fast-growing and evolving market."

"Our investment in iMerit underlines our commitment to back companies that are creating skilled jobs, particularly for women, in countries where they are most needed," says Nick O'Donohoe, CEO, CDC Group. "Advances in AI technology are normally seen as a threat to jobs. iMerit has demonstrated that the opposite is true. The technology sector has an incredibly important role to play in supporting the UN's Sustainable Development Goals and in that regard iMerit is a true pioneer."

iMerit's contributions to global AI initiatives in 2020 will include:

"CDC's mission and iMerit's journey align very well," says DD Ganguly, President of iMerit USA. "Working with an organization, like CDC, that prioritizes an advanced, inclusive and gender-balanced workforce is perfect for iMerit. The collaboration will enable iMerit to continue to build a specialized, profitable, high growth business, with a customizable and agile technology platform, that will foster strong customer loyalty in a cutting-edge sector."

About iMerit
iMerit's Artificial Intelligence and Machine Learning platform powers advanced algorithms in Machine Learning, Computer Vision, Natural Language Understanding, e-Commerce, Augmented Reality and Data Analytics. It works on data for transformative technologies such as advancing cancer cell research, optimizing crop yields and training driverless cars to understand their environment. The company drives social and economic change by tapping into an under-resourced talent pool and creating digital inclusion. The team consists of 3,000 full-time staff, with more than 50% being women. The company's initial investors are Omidyar Network, Michael and Susan Dell Foundation, and Khosla Impact. For more information, visit: www.imerit.net.

About CDC Group
CDC Group is the world's first impact investor with over 70 years of experience of successfully supporting the sustainable, long-term growth of businesses in Africa and South Asia. CDC is a key advocate for the adoption of renewable energy in Africa and South Africa in the fight against climate change and a UK champion of the UN's Sustainable Development Goals, the global blueprint to achieve a better and more sustainable future for us all. The company has investments in over 1,200 businesses in emerging economies and a total portfolio value of £5.8bn. This year CDC will invest over $1.5bn in companies in Africa and Asia with a focus on fighting climate change, empowering women and creating new jobs and opportunities for millions of people. CDC is funded by the UK government and all proceeds from its investments are reinvested to improve the lives of millions of people in Africa and South Asia. CDC's expertise makes it the perfect partner for private investors looking to devote capital to making a measurable environmental and social impact in countries most in need of investment. CDC provides flexible capital in all its forms, including equity, debt, mezzanine and guarantees, to meet businesses' needs. It can invest across all sectors, but prioritizes those that help further development, such as infrastructure, financial institutions, manufacturing, and construction. Find out more at www.cdcgroup.com.

Media Contact: Andrea Heuer at Consort Partners, San Francisco, andreah@consortpartners.com

For further information please contact Andrew Murray-Watson, 123 Victoria Street, London, SW1E 6DE. M. +44 (0) 7515 695232, amurray-watson@cdcgroup.com

Read the original post:
iMerit Leads off 2020 With New AI Innovation Initiatives and Funding - GlobeNewswire

Apple Patents ML Based Navigation System For Its Maps, But Why? – Analytics India Magazine

Apple thinks it can improve location accuracy by applying machine learning to Kalman estimation filters, a just-published patent application reveals. Kalman filters are popularly used in GPS and robotic motion-tracking applications. And now Apple wants to use machine learning along with Kalman filters to bring the accuracy of positioning down to centimetre level.

According to the patent application, Apple proposes:

The device, say an iPhone, would generate a machine learning model, for example, by comparing GNSS position estimates (or estimated measurement errors) with corresponding reference position estimates (where the reference positions correspond to ground truth data).

In one or more implementations, the ground truth data may be better (e.g., significantly better) than what a mobile device alone can perform in most non-aided mode(s) of operation. For example, a mobile phone in a car may be significantly better aided than a pedestrian device, because the motion model for a vehicle is more constrained, and has aiding data in the form of maps and sensor inputs.

Tall buildings and tree cover can prevent positioning systems from accurately locating the user. So Apple wants to generate machine learning models on the device that would predict the user's location based on the model's training as well as a reference position.
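
The patent describes the approach only at a high level. Purely as an illustrative sketch of the general idea (not Apple's actual method), the snippet below runs a one-dimensional Kalman filter over noisy position measurements and then applies a small correction model that would, in practice, be learned offline against ground-truth reference positions; all variable names and numbers here are hypothetical.

```python
import numpy as np

def kalman_update(x_est, p_est, z, r, q=1e-3):
    """One predict/update step of a 1-D Kalman filter.
    x_est, p_est: prior position estimate and its variance
    z, r: new GNSS measurement and its assumed variance
    q: process noise for a simple constant-position model"""
    p_pred = p_est + q                    # predict
    k = p_pred / (p_pred + r)             # Kalman gain
    x_new = x_est + k * (z - x_est)       # update with the measurement
    return x_new, (1.0 - k) * p_pred

# Stand-in for a learned correction: offline, residuals between filter output
# and reference (ground-truth) positions would be fit against context features
# such as an estimated multipath level. Here the "model" is a fixed linear map.
correction_weights = np.array([0.0, -0.8])      # [bias, weight on multipath feature]

def corrected_estimate(x_kalman, multipath_level):
    features = np.array([1.0, multipath_level])
    return x_kalman + correction_weights @ features

x, p = 0.0, 1.0
for z, mp in [(1.2, 0.1), (0.9, 0.5), (1.4, 0.9)]:   # (measurement, feature) pairs
    x, p = kalman_update(x, p, z, r=0.5)
    print(round(corrected_estimate(x, mp), 3))
```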

Today even Apple's rivals praise the company for what it has done for the electronics industry. In an in-depth CNBC interview earlier this year, Huawei's founder and CEO Ren Zhengfei spoke about how Apple has revolutionised the era of the Internet. In its ascent, however, Apple has put many traditional companies to dust.

According to a 2018 CNBC report, there has been a dramatic decline in worldwide shipments of cameras. A chart from Statista illustrates this fall, which also coincides with the ascent of the Apple iPhone and its ever-improving camera.

So, the companies that outsource their GPS-improvement services will be watching the new ML-based GPS patent closely, or might even be rushing to build something of their own. However, this might not be the case in this modern era of mega collaborations.

Last month we saw one of the biggest corporate crossovers of the 21st century, when the tech giants Amazon, Apple and Google, along with others, announced their plans to develop compatible smart home products together.

Gone are the days when companies built everything up from scratch (with the exception of Tesla). If your rival company is good at something you are not, then you either buy a startup that works solely on that technology or join hands with the rival. So, Apple's patent to improve GPS in the upcoming 5G era might receive a warm welcome.

Of course, there will always be a debate about whether one should patent widely used technology, which can hand over enormous leverage to a single entity.

That said, the last two years have seen increased attention to patents on ML-based techniques. Last year it was Google, which was in the news for patenting machine learning techniques such as batch normalisation. Companies like Google and Apple have been leading the AI race for quite some time. It may also simply be routine for these companies to patent their innovations, with the new-found attention to ML patents driven by the rising popularity of AI globally.

At the end of the day, it comes down to whether you should risk years' worth of intellectual property to a potential patent troll, or safeguard it through patenting and then democratise the technology for the masses. For many years it has been the latter, and we will have to wait and see whether machine learning-based patents prove an exception going forward.

Read more from the original source:
Apple Patents ML Based Navigation System For Its Maps, But Why? - Analytics India Magazine

Quantum computing – Wikipedia

Study of a model of computation

Quantum computing is the use of quantum-mechanical phenomena such as superposition and entanglement to perform computation. A quantum computer is used to perform such computation, which can be implemented theoretically or physically.[1]:I-5 There are currently two main approaches to physically implementing a quantum computer: analog and digital. Analog approaches are further divided into quantum simulation, quantum annealing, and adiabatic quantum computation. Digital quantum computers use quantum logic gates to do computation. Both approaches use quantum bits or qubits.[1]:213

Qubits are fundamental to quantum computing and are somewhat analogous to bits in a classical computer. Qubits can be in a 1 or 0 quantum state. But they can also be in a superposition of the 1 and 0 states. However, when qubits are measured the result is always either a 0 or a 1; the probabilities of the two outcomes depend on the quantum state they were in.
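
As a concrete illustration of that measurement rule, the following sketch (not part of the Wikipedia article) represents a single qubit as a two-entry complex state vector and samples measurement outcomes; by the Born rule, each outcome's probability is the squared magnitude of its amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)

# |psi> = a|0> + b|1>; here an equal superposition, as produced by a Hadamard gate on |0>.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

probs = np.abs(psi) ** 2                    # Born rule: [P(0), P(1)]
samples = rng.choice([0, 1], size=1000, p=probs)
print(probs)                                # -> [0.5 0.5]
print(np.bincount(samples))                 # each outcome occurs roughly 500 times
```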

Quantum computing began in the early 1980s, when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine.[2] Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things that a classical computer could not.[3][4] In 1994, Peter Shor developed a quantum algorithm for factoring integers that had the potential to decrypt all secured communications.[5]

Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream".[6] On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), published a paper in which they claimed to have achieved quantum supremacy.[7] While some have disputed this claim, it is still a significant milestone in the history of quantum computing.[8]

The field of quantum computing is a subfield of quantum information science, which includes quantum cryptography and quantum communication.

The prevailing model of quantum computation describes the computation in terms of a network of quantum logic gates. What follows is a brief treatment of the subject based upon Chapter 4 of Nielsen and Chuang.[9]

A memory consisting of n bits of information has 2^n possible states. A vector representing all memory states thus has 2^n entries (one for each state). This vector should be viewed as a probability vector and represents the fact that the memory is to be found in a particular state.

In the classical view, one entry would have a value of 1 (i.e. a 100% probability of being in this state) and all other entries would be zero. In quantum mechanics, probability vectors are generalized to density operators. This is the technically rigorous mathematical foundation for quantum logic gates, but the intermediate quantum state vector formalism is usually introduced first because it is conceptually simpler. This article focuses on the quantum state vector formalism for simplicity.

We begin by considering a simple memory consisting of only one bit. This memory may be found in one of two states: the zero state or the one state. We may represent the state of this memory using Dirac notation, so that the zero state is written |0⟩ and the one state is written |1⟩; a quantum memory may then be found in any superposition of these two states.

The state of this one-qubit quantum memory can be manipulated by applying quantum logic gates, analogous to how classical memory can be manipulated with classical logic gates. One important gate for both classical and quantum computation is the NOT gate, which can be represented by a matrix that swaps the two basis states, mapping |0⟩ to |1⟩ and |1⟩ to |0⟩.

The mathematics of single qubit gates can be extended to operate on multi-qubit quantum memories in two important ways. One way is simply to select a qubit and apply that gate to the target qubit whilst leaving the remainder of the memory unaffected. Another way is to apply the gate to its target only if another part of the memory is in a desired state. These two choices can be illustrated using another example. The possible (classical) states of a two-qubit quantum memory are |00⟩, |01⟩, |10⟩ and |11⟩; the controlled-NOT (CNOT) gate, for example, flips the second (target) qubit only when the first (control) qubit is |1⟩.
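
A small numpy sketch (an illustration added here, not the article's own notation) shows both extensions, using the standard matrix forms of the NOT (Pauli-X) and CNOT gates acting on two-qubit state vectors ordered |00⟩, |01⟩, |10⟩, |11⟩:

```python
import numpy as np

NOT = np.array([[0, 1],
                [1, 0]])          # single-qubit NOT (Pauli-X)
I2 = np.eye(2, dtype=int)

# Way 1: apply NOT to the first qubit only, leaving the second untouched.
X_on_first = np.kron(NOT, I2)

# Way 2: CNOT flips the second (target) qubit only if the first (control) qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket_10 = np.array([0, 0, 1, 0])   # the basis state |10>
print(X_on_first @ ket_10)        # -> [1 0 0 0], i.e. |00>
print(CNOT @ ket_10)              # -> [0 0 0 1], i.e. |11>
```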

In summary, a quantum computation can be described as a network of quantum logic gates and measurements. Any measurement can be deferred to the end of a quantum computation, though this deferment may come at a computational cost. Because of this possibility of deferring a measurement, most quantum circuits depict a network consisting only of quantum logic gates and no measurements. More information can be found in the following articles: universal quantum computer, Shor's algorithm, Grover's algorithm, DeutschJozsa algorithm, amplitude amplification, quantum Fourier transform, quantum gate, quantum adiabatic algorithm and quantum error correction.

Any quantum computation can be represented as a network of quantum logic gates from a fairly small family of gates. A choice of gate family that enables this construction is known as a universal gate set. One common such set includes all single-qubit gates as well as the CNOT gate from above. This means any quantum computation can be performed by executing a sequence of single-qubit gates together with CNOT gates. Though this gate set is infinite, it can be replaced with a finite gate set by appealing to the Solovay-Kitaev theorem.

Integer factorization, which underpins the security of public key cryptographic systems, is believed to be computationally infeasible with an ordinary computer for large integers if they are the product of few prime numbers (e.g., products of two 300-digit primes).[10] By comparison, a quantum computer could efficiently solve this problem using Shor's algorithm to find its factors. This ability would allow a quantum computer to break many of the cryptographic systems in use today, in the sense that there would be a polynomial time (in the number of digits of the integer) algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers or the discrete logarithm problem, both of which can be solved by Shor's algorithm. In particular, the RSA, DiffieHellman, and elliptic curve DiffieHellman algorithms could be broken. These are used to protect secure Web pages, encrypted email, and many other types of data. Breaking these would have significant ramifications for electronic privacy and security.

However, other cryptographic algorithms do not appear to be broken by those algorithms.[11][12] Some public-key algorithms are based on problems other than the integer factorization and discrete logarithm problems to which Shor's algorithm applies, like the McEliece cryptosystem based on a problem in coding theory.[11][13] Lattice-based cryptosystems are also not known to be broken by quantum computers, and finding a polynomial time algorithm for solving the dihedral hidden subgroup problem, which would break many lattice based cryptosystems, is a well-studied open problem.[14] It has been proven that applying Grover's algorithm to break a symmetric (secret key) algorithm by brute force requires time equal to roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case,[15] meaning that symmetric key lengths are effectively halved: AES-256 would have the same security against an attack using Grover's algorithm that AES-128 has against classical brute-force search (see Key size).
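
The "effectively halved" claim can be made concrete with a back-of-the-envelope count of cipher invocations (an illustration of the arithmetic, not a statement about any real attack tooling):

```python
def brute_force_invocations(key_bits, grover=False):
    """Rough number of cipher invocations for an exhaustive key search.
    Grover's algorithm needs ~2**(n/2) quantum queries instead of ~2**n guesses."""
    return 2 ** (key_bits // 2) if grover else 2 ** key_bits

print(brute_force_invocations(128))                # classical attack on AES-128: 2**128
print(brute_force_invocations(256, grover=True))   # Grover attack on AES-256: also 2**128
```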

Quantum cryptography could potentially fulfill some of the functions of public key cryptography. Quantum-based cryptographic systems could, therefore, be more secure than traditional systems against quantum hacking.[16]

Besides factorization and discrete logarithms, quantum algorithms offering a more than polynomial speedup over the best known classical algorithm have been found for several problems,[17] including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell's equation. No mathematical proof has been found that shows that an equally fast classical algorithm cannot be discovered, although this is considered unlikely.[18] However, quantum computers offer polynomial speedup for some problems. The most well-known example of this is quantum database search, which can be solved by Grover's algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case, the advantage is not only provable but also optimal: it has been shown that Grover's algorithm gives the maximal possible probability of finding the desired element for any number of oracle lookups. Several other examples of provable quantum speedups for query problems have subsequently been discovered, such as for finding collisions in two-to-one functions and evaluating NAND trees.

Problems that can be addressed with Grover's algorithm have the following properties:

For problems with all these properties, the running time of Grover's algorithm on a quantum computer will scale as the square root of the number of inputs (or elements in the database), as opposed to the linear scaling of classical algorithms. A general class of problems to which Grover's algorithm can be applied[19] is the Boolean satisfiability problem. In this instance, the database through which the algorithm is iterating is that of all possible answers. An example (and possible) application of this is a password cracker that attempts to guess the password or secret key for an encrypted file or system. Symmetric ciphers such as Triple DES and AES are particularly vulnerable to this kind of attack.[citation needed] This application of quantum computing is a major interest of government agencies.[20]
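
That square-root scaling can be seen directly in a small classical simulation of Grover's algorithm on the full state vector (feasible only for a handful of qubits, since the vector has 2^n entries); the marked item is found after roughly (π/4)·√N iterations rather than the ~N/2 guesses expected from classical trial and error:

```python
import numpy as np

def grover_search(n_qubits, marked):
    """Simulate Grover search for one marked index over N = 2**n_qubits items."""
    N = 2 ** n_qubits
    n_iters = int(round(np.pi / 4 * np.sqrt(N)))     # ~ sqrt(N) iterations
    state = np.full(N, 1 / np.sqrt(N))               # uniform superposition
    for _ in range(n_iters):
        state[marked] *= -1                          # oracle: flip the marked amplitude
        state = 2 * state.mean() - state             # diffusion: inversion about the mean
    return int(np.argmax(np.abs(state) ** 2)), n_iters

print(grover_search(n_qubits=10, marked=123))        # -> (123, 25): found in ~25 steps, not ~512
```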

Since chemistry and nanotechnology rely on understanding quantum systems, and such systems are impossible to simulate in an efficient manner classically, many believe quantum simulation will be one of the most important applications of quantum computing.[21] Quantum simulation could also be used to simulate the behavior of atoms and particles at unusual conditions such as the reactions inside a collider.[22]

Quantum annealing or Adiabatic quantum computation relies on the adiabatic theorem to undertake calculations. A system is placed in the ground state for a simple Hamiltonian, which is slowly evolved to a more complicated Hamiltonian whose ground state represents the solution to the problem in question. The adiabatic theorem states that if the evolution is slow enough the system will stay in its ground state at all times through the process.

The Quantum algorithm for linear systems of equations or "HHL Algorithm", named after its discoverers Harrow, Hassidim, and Lloyd, is expected to provide speedup over classical counterparts.[23]

John Preskill has introduced the term quantum supremacy to refer to the hypothetical speedup advantage that a quantum computer would have over a classical computer in a certain field.[24] Google announced in 2017 that it expected to achieve quantum supremacy by the end of the year, though that did not happen. IBM said in 2018 that the best classical computers will be beaten on some practical task within about five years and views the quantum supremacy test only as a potential future benchmark.[25] Although skeptics like Gil Kalai doubt that quantum supremacy will ever be achieved,[26][27] in October 2019, a Sycamore processor created in conjunction with Google AI Quantum was reported to have achieved quantum supremacy,[28] with calculations more than 3,000,000 times as fast as those of Summit, generally considered the world's fastest computer.[29] Bill Unruh doubted the practicality of quantum computers in a paper published back in 1994.[30] Paul Davies argued that a 400-qubit computer would even come into conflict with the cosmological information bound implied by the holographic principle.[31]

There are a number of technical challenges in building a large-scale quantum computer.[32] David DiVincenzo listed the following requirements for a practical quantum computer:[33]

Sourcing parts for quantum computers is very difficult: Quantum computers need Helium-3, a nuclear research byproduct, and special cables that are only made by a single company in Japan.[34]

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist. Examples include the quantum gates, and the lattice vibrations and background thermonuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is effectively non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems, in particular the transverse relaxation time T2 (for NMR and MRI technology, also called the dephasing time), typically range between nanoseconds and seconds at low temperature.[35] Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[36]

As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.[37]

These issues are more difficult for optical approaches as the timescales are orders of magnitude shorter and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

As described in the Quantum threshold theorem, if the error rate is small enough, it is thought to be possible to use quantum error correction to suppress errors and decoherence. This allows the total calculation time to be longer than the decoherence time if the error correction scheme can correct errors faster than decoherence introduces them. An often cited figure for the required error rate in each gate for fault-tolerant computation is 10^-3, assuming the noise is depolarizing.

Meeting this scalability condition is possible for a wide range of systems. However, the use of error correction brings with it the cost of a greatly increased number of required qubits. The number required to factor integers using Shor's algorithm is still polynomial, and thought to be between L and L^2, where L is the number of bits in the number to be factored; error correction algorithms would inflate this figure by an additional factor of L. For a 1000-bit number, this implies a need for about 10^4 bits without error correction.[38] With error correction, the figure would rise to about 10^7 bits. Computation time is about L^2 or about 10^7 steps and at 1 MHz, about 10 seconds.

A very different approach to the stability-decoherence problem is to create a topological quantum computer with anyons, quasi-particles used as threads and relying on braid theory to form stable logic gates.[39][40]

Physicist Mikhail Dyakonov has expressed skepticism of quantum computing as follows:

There are a number of quantum computing models, distinguished by the basic elements in which the computation is decomposed. The four main models of practical importance are:

The quantum Turing machine is theoretically important but the direct implementation of this model is not pursued. All four models of computation have been shown to be equivalent; each can simulate the other with no more than polynomial overhead.

For physically implementing a quantum computer, many different candidates are being pursued, among them (distinguished by the physical system used to realize the qubits):

A large number of candidates demonstrates that the topic, in spite of rapid progress, is still in its infancy. There is also a vast amount of flexibility.

The class of problems that can be efficiently solved by quantum computers is called BQP, for "bounded error, quantum, polynomial time". Quantum computers only run probabilistic algorithms, so BQP on quantum computers is the counterpart of BPP ("bounded error, probabilistic, polynomial time") on classical computers. It is defined as the set of problems solvable with a polynomial-time algorithm, whose probability of error is bounded away from one half.[61] A quantum computer is said to "solve" a problem if, for every instance, its answer will be right with high probability. If that solution runs in polynomial time, then that problem is in BQP.

BQP is contained in the complexity class #P (or more precisely in the associated class of decision problems P^#P),[62] which is a subclass of PSPACE.

BQP is suspected to be disjoint from NP-complete and a strict superset of P, but that is not known. Both integer factorization and discrete log are in BQP. Both of these problems are NP problems suspected to be outside BPP, and hence outside P. Both are suspected to not be NP-complete. There is a common misconception that quantum computers can solve NP-complete problems in polynomial time. That is not known to be true, and is generally suspected to be false.[62]

The capacity of a quantum computer to accelerate classical algorithms has rigid limits: upper bounds on the complexity of quantum computation. The overwhelming part of classical calculations cannot be accelerated on a quantum computer.[63] A similar fact prevails for particular computational tasks, like the search problem, for which Grover's algorithm is optimal.[64]

Bohmian Mechanics is a non-local hidden variable interpretation of quantum mechanics. It has been shown that a non-local hidden variable quantum computer could implement a search of an N-item database in at most O(∛N) steps. This is slightly faster than the O(√N) steps taken by Grover's algorithm. Neither search method will allow quantum computers to solve NP-Complete problems in polynomial time.[65]

Although quantum computers may be faster than classical computers for some problem types, those described above cannot solve any problem that classical computers cannot already solve. A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the ChurchTuring thesis.[66] It has been speculated that theories of quantum gravity, such as M-theory or loop quantum gravity, may allow even faster computers to be built. Currently, defining computation in such theories is an open problem due to the problem of time, i.e., there currently exists no obvious way to describe what it means for an observer to submit input to a computer and later receive output.[67][68]

Visit link:
Quantum computing - Wikipedia

Explainer: What is a quantum computer? – MIT Technology Review

This is the first in a series of explainers on quantum technology. The other two are on quantum communication and post-quantum cryptography.

A quantum computer harnesses some of the almost-mystical phenomena of quantum mechanics to deliver huge leaps forward in processing power. Quantum machines promise to outstrip even the most capable of today's (and tomorrow's) supercomputers.

They won't wipe out conventional computers, though. Using a classical machine will still be the easiest and most economical solution for tackling most problems. But quantum computers promise to power exciting advances in various fields, from materials science to pharmaceuticals research. Companies are already experimenting with them to develop things like lighter and more powerful batteries for electric cars, and to help create novel drugs.

The secret to a quantum computer's power lies in its ability to generate and manipulate quantum bits, or qubits.

What is a qubit?

Today's computers use bits: a stream of electrical or optical pulses representing 1s or 0s. Everything from your tweets and e-mails to your iTunes songs and YouTube videos is essentially a long string of these binary digits.

Quantum computers, on the other hand, use qubits, which are typically subatomic particles such as electrons or photons. Generating and managing qubits is a scientific and engineering challenge. Some companies, such as IBM, Google, and Rigetti Computing, use superconducting circuits cooled to temperatures colder than deep space. Others, like IonQ, trap individual atoms in electromagnetic fields on a silicon chip in ultra-high-vacuum chambers. In both cases, the goal is to isolate the qubits in a controlled quantum state.

Qubits have some quirky quantum properties that mean a connected group of them can provide way more processing power than the same number of binary bits. One of those properties is known as superposition and another is called entanglement.

Qubits can represent numerous possible combinations of 1 and 0 at the same time. This ability to simultaneously be in multiple states is called superposition. To put qubits into superposition, researchers manipulate them using precision lasers or microwave beams.

Thanks to this counterintuitive phenomenon, a quantum computer with several qubits in superposition can crunch through a vast number of potential outcomes simultaneously. The final result of a calculation emerges only once the qubits are measured, which immediately causes their quantum state to collapse to either 1 or 0.

Researchers can generate pairs of qubits that are entangled, which means the two members of a pair exist in a single quantum state. Changing the state of one of the qubits will instantaneously change the state of the other one in a predictable way. This happens even if they are separated by very long distances.

Nobody really knows quite how or why entanglement works. It even baffled Einstein, who famously described it as "spooky action at a distance." But it's key to the power of quantum computers. In a conventional computer, doubling the number of bits doubles its processing power. But thanks to entanglement, adding extra qubits to a quantum machine produces an exponential increase in its number-crunching ability.
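
One way to make that contrast concrete is to look at what it costs to simulate qubits classically: a full n-qubit state vector has 2^n complex amplitudes, so the memory needed doubles with every added qubit (a rough illustration, assuming 16 bytes per amplitude):

```python
BYTES_PER_AMPLITUDE = 16           # one double-precision complex number

for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n            # entries in an n-qubit state vector
    memory_bytes = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n} qubits: {amplitudes:,} amplitudes, {memory_bytes:,} bytes to simulate classically")
# 50 qubits already needs about 16 pebibytes of classical memory just to hold the state.
```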

Quantum computers harness entangled qubits in a kind of quantum daisy chain to work their magic. The machines' ability to speed up calculations using specially designed quantum algorithms is why there's so much buzz about their potential.

That's the good news. The bad news is that quantum machines are way more error-prone than classical computers because of decoherence.

The interaction of qubits with their environment in ways that cause their quantum behavior to decay and ultimately disappear is called decoherence. Their quantum state is extremely fragile. The slightest vibration or change in temperature (disturbances known as "noise" in quantum-speak) can cause them to tumble out of superposition before their job has been properly done. That's why researchers do their best to protect qubits from the outside world in those supercooled fridges and vacuum chambers.

But despite their efforts, noise still causes lots of errors to creep into calculations. Smart quantum algorithms can compensate for some of these, and adding more qubits also helps. However, it will likely take thousands of standard qubits to create a single, highly reliable one, known as a logical qubit. This will sap a lot of a quantum computer's computational capacity.

And there's the rub: so far, researchers haven't been able to generate more than 128 standard qubits (see our qubit counter here). So we're still many years away from getting quantum computers that will be broadly useful.

That hasn't dented pioneers' hopes of being the first to demonstrate quantum supremacy.

What is quantum supremacy?

It's the point at which a quantum computer can complete a mathematical calculation that is demonstrably beyond the reach of even the most powerful supercomputer.

It's still unclear exactly how many qubits will be needed to achieve this because researchers keep finding new algorithms to boost the performance of classical machines, and supercomputing hardware keeps getting better. But researchers and companies are working hard to claim the title, running tests against some of the world's most powerful supercomputers.

There's plenty of debate in the research world about just how significant achieving this milestone will be. Rather than wait for supremacy to be declared, companies are already starting to experiment with quantum computers made by companies like IBM, Rigetti, and D-Wave, a Canadian firm. Chinese firms like Alibaba are also offering access to quantum machines. Some businesses are buying quantum computers, while others are using ones made available through cloud computing services.

Where is a quantum computer likely to be most useful first?

One of the most promising applications of quantum computers is for simulating the behavior of matter down to the molecular level. Auto manufacturers like Volkswagen and Daimler are using quantum computers to simulate the chemical composition of electric-vehicle batteries to help find new ways to improve their performance. And pharmaceutical companies are leveraging them to analyze and compare compounds that could lead to the creation of new drugs.

The machines are also great for optimization problems because they can crunch through vast numbers of potential solutions extremely fast. Airbus, for instance, is using them to help calculate the most fuel-efficient ascent and descent paths for aircraft. And Volkswagen has unveiled a service that calculates the optimal routes for buses and taxis in cities in order to minimize congestion. Some researchers also think the machines could be used to accelerate artificial intelligence.

It could take quite a few years for quantum computers to achieve their full potential. Universities and businesses working on them are facing a shortage of skilled researchers in the field and a lack of suppliers of some key components. But if these exotic new computing machines live up to their promise, they could transform entire industries and turbocharge global innovation.

Read the original here:
Explainer: What is a quantum computer? - MIT Technology Review

What Is Quantum Computing? The Next Era of Computational …

When you first stumble across the term quantum computer, you might pass it off as some far-flung science fiction concept rather than a serious current news item.

But with the phrase being thrown around with increasing frequency, it's understandable to wonder exactly what quantum computers are, and just as understandable to be at a loss as to where to dive in. Here's the rundown on what quantum computers are, why there's so much buzz around them, and what they might mean for you.

All computing relies on bits, the smallest unit of information that is encoded as an on state or an off state, more commonly referred to as a 1 or a 0, in some physical medium or another.

Most of the time, a bit takes the physical form of an electrical signal traveling over the circuits in the computer's motherboard. By stringing multiple bits together, we can represent more complex and useful things like text, music, and more.

The two key differences between quantum bits and classical bits (from the computers we use today) are the physical form the bits take and, correspondingly, the nature of data encoded in them. The electrical bits of a classical computer can only exist in one state at a time, either 1 or 0.

Quantum bits (or qubits) are made of subatomic particles, namely individual photons or electrons. Because these subatomic particles conform more to the rules of quantum mechanics than classical mechanics, they exhibit the bizarre properties of quantum particles. The most salient of these properties for computer scientists is superposition. This is the idea that a particle can exist in multiple states simultaneously, at least until that state is measured and collapses into a single state. By harnessing this superposition property, computer scientists can make qubits encode a 1 and a 0 at the same time.

The other quantum mechanical quirk that makes quantum computers tick is entanglement, a linking of two quantum particles or, in this case, two qubits. When the two particles are entangled, the change in state of one particle will alter the state of its partner in a predictable way, which comes in handy when it comes time to get a quantum computer to calculate the answer to the problem you feed it.

A quantum computer's qubits start in their 1-and-0 hybrid state as the computer initially starts crunching through a problem. When the solution is found, the qubits in superposition collapse to the correct orientation of stable 1s and 0s for returning the solution.

Aside from the fact that they are far beyond the reach of all but the most elite research teams (and will likely stay that way for a while), most of us don't have much use for quantum computers. They don't offer any real advantage over classical computers for the kinds of tasks we do most of the time.

However, even the most formidable classical supercomputers have a hard time cracking certain problems due to their inherent computational complexity. This is because some calculations can only be achieved by brute force, guessing until the answer is found. They end up with so many possible solutions that it would take thousands of years for all the world's supercomputers combined to find the correct one.

The superposition property exhibited by qubits can allow quantum computers to cut this guessing time down precipitously. Classical computing's laborious trial-and-error computations can only ever make one guess at a time, while the dual 1-and-0 state of a quantum computer's qubits lets it make multiple guesses at the same time.

So, what kind of problems require all this time-consuming guesswork calculation? One example is simulating atomic structures, especially when they interact chemically with those of other atoms. With a quantum computer powering the atomic modeling, researchers in material science could create new compounds for use in engineering and manufacturing. Quantum computers are well suited to simulating similarly intricate systems like economic market forces, astrophysical dynamics, or genetic mutation patterns in organisms, to name only a few.

Amidst all these generally inoffensive applications of this emerging technology, though, there are also some uses of quantum computers that raise serious concerns. By far the most frequently cited harm is the potential for quantum computers to break some of the strongest encryption algorithms currently in use.

In the hands of an aggressive foreign adversary, quantum computers could compromise a broad swath of otherwise secure internet traffic, leaving sensitive communications open to widespread surveillance. Work is underway to mature encryption schemes based on problems that remain hard even for quantum computers, but these post-quantum ciphers are not all ready for prime time, nor are they widely adopted at present.
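
The concrete target here is public-key cryptography such as RSA, whose security rests on the classical difficulty of factoring large integers; Shor's algorithm factors them efficiently on a sufficiently large, error-corrected quantum computer. The toy Python sketch below shows the classical approach that scales so badly; the numbers are tiny and purely illustrative:

```python
# Toy illustration, not an attack: RSA-style keys are safe classically because
# factoring a large semiprime by trial division (or any known classical method)
# takes astronomically long. Shor's algorithm would do the same job efficiently
# on a large enough quantum computer.
def trial_division(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(trial_division(3233))   # (53, 61): trivial here, hopeless at 2048-bit scale
```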

A little over a decade ago, the actual fabrication of quantum computers was barely getting started. Starting in the 2010s, though, development of functioning prototypes took off. Several companies have assembled working quantum computers in the past few years, with IBM going so far as to let researchers and hobbyists run their own programs on its machines via the cloud.

Despite the strides that companies like IBM have undoubtedly made toward functioning prototypes, quantum computers are still in their infancy. The machines research teams have built so far require a great deal of overhead for error correction: for every qubit that actually performs a calculation, there are several dozen whose job is to compensate for its mistakes. The aggregate of all these qubits makes up what is called a logical qubit.

Long story short, industry and academic titans have gotten quantum computers to work, but they do so very inefficiently.

Fierce competition among quantum computing researchers is still raging, between big and small players alike. Those with working quantum computers include the traditionally dominant tech companies one would expect: IBM, Intel, Microsoft, and Google.

As exacting and costly a venture as building a quantum computer is, a surprising number of smaller companies and even startups are rising to the challenge.

The comparatively lean D-Wave Systems has spurred many advances in the field and proved it was not out of contention by answering Google's momentous announcement with news of a major deal with Los Alamos National Laboratory. Smaller competitors like Rigetti Computing are also in the running to establish themselves as quantum computing innovators.

Depending on who you ask, you'll get a different frontrunner for the most powerful quantum computer. Google certainly made its case recently by claiming quantum supremacy, a milestone that Google itself more or less defined: the point at which a quantum computer first outperforms a classical computer at some computation. Google's 54-qubit Sycamore prototype broke that barrier by zipping through, in just under three and a half minutes, a problem that Google estimated would take the mightiest classical supercomputer 10,000 years to churn through.

Not to be outdone, D-Wave boasts that the devices it will soon supply to Los Alamos weigh in at 5,000 qubits apiece, although the quality of D-Wave's qubits has been called into question before. IBM hasn't made the same kind of splash as Google and D-Wave over the last couple of years, but it shouldn't be counted out either, especially given its track record of slow and steady accomplishments.

Put simply, the race for the worlds most powerful quantum computer is as wide open as it ever was.

The short answer to this is not really, at least for the near-term future. Quantum computers require an immense amount of equipment and finely tuned environments to operate; the leading architecture requires cooling to mere degrees above absolute zero, meaning these machines are nowhere near practical for ordinary consumers to own.

But as the explosion of cloud computing has proven, you don't need to own a specialized computer to harness its capabilities. As mentioned above, IBM already offers daring technophiles the chance to run programs on a small subset of its Q System One's qubits. In time, IBM and its competitors will likely sell compute time on more robust quantum computers to those interested in applying them to otherwise inscrutable problems.
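
For a sense of what "running a program" on such a cloud service involves, here is a minimal sketch assuming the open-source Qiskit SDK that IBM publishes for its quantum cloud. It only builds and prints a small two-qubit circuit locally; submitting it to real hardware would additionally require an IBM Quantum account and backend configuration, which is omitted here:

```python
# A minimal sketch, assuming IBM's open-source Qiskit SDK (pip install qiskit).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])  # read both qubits into classical bits

print(qc.draw())            # ASCII drawing of the circuit
```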

But if you aren't researching the kinds of exceptionally tricky problems that quantum computers aim to solve, you probably won't interact with them much. In fact, quantum computers are in some cases worse at the sort of tasks we use computers for every day, purely because they are so hyper-specialized. Unless you are an academic running the kind of modeling where quantum computing thrives, you'll likely never get your hands on one, and never need to.

See more here:
What Is Quantum Computing? The Next Era of Computational ...

What Is Quantum Computing? A Super-Easy Explanation For Anyone

It's fascinating to think about the power in our pocket: today's smartphones have the computing power of a military computer from 50 years ago that was the size of an entire room. However, even with the phenomenal strides we have made in technology and classical computers since the onset of the computer revolution, there remain problems that classical computers just can't solve. Many believe quantum computers are the answer.

The Limits of Classical Computers

Now that we have made the switching and memory units of computers, known as transistors, almost as small as an atom, we need to find an entirely new way of thinking about and building computers. Even though a classical computer helps us do many amazing things, under the hood it's really just a calculator that uses a sequence of bits (values of 0 and 1 representing two states, think of an on/off switch) to make sense of, and decisions about, the data we input, following a prearranged set of instructions. Quantum computers are not intended to replace classical computers; they are expected to be a different tool we will use to solve complex problems that are beyond the capabilities of a classical computer.

Basically, as we enter a big-data world in which the information we need to store keeps growing, we need more ones and zeros, and more transistors to process them. For the most part, classical computers are limited to doing one thing at a time, so the more complex the problem, the longer it takes. A problem that requires more power and time than today's computers can accommodate is called an intractable problem. These are the problems quantum computers are predicted to solve.
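
To see how quickly brute-force problems become intractable, the back-of-the-envelope sketch below doubles the search space with every added bit; the rate of one billion candidates per second is an assumed figure chosen purely for illustration:

```python
# Each extra bit doubles the search space, so brute force quickly outgrows any
# classical machine. RATE is an assumption, not a benchmark.
RATE = 1e9                      # assumed guesses per second
SECONDS_PER_YEAR = 3.15e7

for n_bits in (40, 64, 128, 256):
    years = 2 ** n_bits / RATE / SECONDS_PER_YEAR
    print(f"{n_bits:3d} bits -> ~{years:.2e} years")
```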

The Power of Quantum Computers

When you enter the world of atomic and subatomic particles, things begin to behave in unexpected ways. In fact, these particles can exist in more than one state at a time. It's this ability that quantum computers take advantage of.

Instead of the bits that conventional computers use, a quantum computer uses quantum bits, known as qubits. To illustrate the difference, imagine a sphere. A bit can sit only at one of the sphere's two poles, but a qubit can exist at any point on the sphere. This means that a computer using qubits can store an enormous amount of information and use less energy doing so than a classical computer. By entering this quantum realm of computing, where the traditional laws of physics no longer apply, we will be able to create processors that are significantly faster (a million or more times) than the ones we use today. Sounds fantastic, but the challenge is that quantum computing is also incredibly complex.
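
The sphere in this analogy is the Bloch sphere. As a hedged illustration, the NumPy sketch below turns a point on that sphere, given by two angles, into the pair of amplitudes that define a qubit's state; a classical bit would be restricted to the two poles:

```python
import numpy as np

# A qubit's state corresponds to a point on the Bloch sphere, parameterized by
# two angles (theta, phi); a classical bit sits only at theta = 0 or theta = pi.
def qubit_from_angles(theta, phi):
    alpha = np.cos(theta / 2)                    # amplitude of |0>
    beta = np.exp(1j * phi) * np.sin(theta / 2)  # amplitude of |1>
    return np.array([alpha, beta])

psi = qubit_from_angles(np.pi / 2, 0.0)          # equator: equal superposition
print(np.abs(psi) ** 2)                          # [0.5 0.5]
print(np.isclose(np.sum(np.abs(psi) ** 2), 1.0)) # amplitudes stay normalized
```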

The pressure is on the computer industry to find ways to make computing more efficient, since we are approaching the limits of energy efficiency using classical methods. By 2040, according to a report by the Semiconductor Industry Association, we will no longer have the capability to power all of the machines around the world. That's precisely why the computer industry is racing to make quantum computers work on a commercial scale. It is no small feat, but one that could pay extraordinary dividends.

How Our World Will Change with Quantum Computing

It's difficult to predict how quantum computing will change our world, simply because there will be applications in every industry. We're venturing into an entirely new realm of physics, and there will be solutions and uses we haven't even thought of yet. But when you consider how much classical computers revolutionized our world with a relatively simple use of bits and two options of 0 or 1, you can imagine the extraordinary possibilities when you have the processing power of qubits, which can perform millions of calculations at the same moment.

What we do know is that it will be game-changing for every industry and will have a huge impact on the way we do business, invent new medicines and materials, safeguard our data, explore space, and predict weather events and climate change. It's no coincidence that some of the world's most influential companies, such as IBM and Google, as well as the world's governments, are investing in quantum computing technology. They expect quantum computing to change our world because it will allow us to solve problems and achieve efficiencies that aren't possible today. In another post, I dig deeper into how quantum computing will change our world.

Read the original post:
What Is Quantum Computing? A Super-Easy Explanation For Anyone

The $600 quantum computer that could spell the end for conventional encryption – BetaNews

Concerns that quantum computing could place current encryption techniques at risk have been around for some time.

But now cybersecurity startup Active Cypher has built a password-hacking quantum computer to demonstrate that the dangers are very real.

Using easily available parts costing just $600, Active Cypher's founder and CTO, Dan Gleason, created a portable quantum computer dubbed QUBY (named after qubits, the basic unit of quantum information). QUBY runs recently open-sourced quantum algorithms inside a quantum emulator that can execute cryptographic cracking routines. Calculations that would otherwise have taken years on conventional computers are performed in seconds on QUBY.

Gleason explains, "After years of foreseeing this danger and trying to warn the cybersecurity community that current cybersecurity protocols were not up to par, I decided to take a week and move my theory to prototype. I hope that QUBY can increase awareness of how the cyberthreats of quantum computing are not reserved to billion-dollar state-sponsored projects, but can be seen on a much smaller, localized scale."

The concern is that quantum computing will lead to the sunset of AES-256 (the current encryption standard), meaning all encrypted files could one day be decrypted. "The disruption that will come about from that will be on an unprecedented, global scale. It's going to be massive," says Gleason. Modelled after the SADM, a man-portable nuclear weapon deployed in the 1960s, QUBY was downsized so that it fits in a backpack and is therefore untraceable. Low-level "neighborhood hackers" have already been using portable devices that can surreptitiously swipe credit card information from an unsuspecting passerby. Quantum-emulating devices will open the door to significantly more cyberthreats.

In response to the threat, Active Cypher has developed advanced dynamic cyphering encryption that is built to be quantum resilient. Gleason explains: "Our encryption is not based on solving a mathematical problem. It's based on a very large, random key which is used in creating the obfuscated cyphertext, without any key information within the cyphertext, and is thus impossible to derive through prime factorization or traditional brute-force attempts, which use the cyphertext to extract key information from patterns derived from the key material."

Active Cypher's completely random cyphertext cannot be deciphered even by large quantum computers, since the only way to crack the key is to try every possible combination, which would produce every possible version of the plaintext with no way of knowing which one is correct. "In other words, you'll find a greater chance of finding a specific grain of sand in a desert than cracking this open," says Gleason.
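
The details of Active Cypher's scheme are not spelled out here, so the sketch below is only a generic one-time-pad-style XOR illustration of the general idea Gleason describes: ciphertext driven by a large, truly random key, with no mathematical structure for factoring-based or pattern-based attacks, classical or quantum, to exploit.

```python
import secrets

# Generic one-time-pad-style illustration (not Active Cypher's product): a truly
# random key as long as the message leaves an attacker nothing to do but guess
# the entire key, and every guess yields some plausible-looking plaintext.
message = b"wire $1M to account 42"
key = secrets.token_bytes(len(message))          # large random key
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))

print(ciphertext.hex())
print(recovered)                                 # b'wire $1M to account 42'
```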

Active Cypher showcased QUBY in early February at Ready -- an internal Microsoft conference held in Seattle. The prototype will also be presented at RSA in San Francisco later this month.

See more here:
The $600 quantum computer that could spell the end for conventional encryption - BetaNews