Devs: Alex Garland on Tech Company Cults, Quantum Computing, and Determinism – Den of Geek UK

Yet that difference between the common things a company can sell and the uncommon things they quietly develop is profoundly important. In Devs, the friendly exterior of Amaya, with its enormous statue of a child (a literal monument to Forest's lost daughter), is a public face for the actual profound work his Devs team is doing in a separate, highly secretive facility. Seemingly based in part on the mysterious research and development wings of tech giants (think Google's moonshot organizations at X Development and DeepMind), Devs is using quantum computing to change the world, all while keeping Forest's Zen ambition as its shield.

"I think it helps, actually," Garland says about Forest not being a genius. "Because I think what happens is that these [CEO] guys present as a kind of front between what the company is doing and the rest of the world, including the kind of inspection that the rest of the world might want on the company if they knew what the company was doing. So our belief and enthusiasm in the leader stops us from looking too hard at what the people behind the scenes are doing. And from my point of view that's quite common."

A lifelong man of words, Garland describes himself as a writer with a layman's interest in science. Yet it's fair to say he studies almost obsessively whatever field of science he's writing about, which now pertains to quantum computing. A still largely unexplored frontier in the tech world, quantum computing uses quantum-mechanical phenomena to process data a traditional computer never could. It's still so unknown that Google AI and NASA published a paper only six months ago in which they claimed to have achieved quantum supremacy (the creation of a quantum device that can actually solve problems a classical computer cannot).

"Whereas binary computers work with gates that are either a one or a zero, a quantum qubit [a basic unit of measurement] can deal with a one and a zero concurrently, and all points in between," says Garland. "So you get a staggering amount of exponential power as you start to run those qubits in tandem with each other." What the filmmaker is especially fascinated by is using a quantum system to model another quantum system. That is to say, using a quantum computer with true supremacy to solve other theoretical problems in quantum physics. "If we use a binary way of doing that, you're essentially using a filing system to model something that is emphatically not binary."
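A rough way to see the exponential growth Garland is gesturing at: simulating n qubits on a classical machine means tracking 2^n complex amplitudes, one per basis state. A minimal NumPy sketch (illustrative only, not tied to any real quantum hardware):

```python
import numpy as np

# n qubits require 2**n complex amplitudes, one per basis state,
# which is why classical simulation blows up exponentially.
def uniform_superposition(n_qubits: int) -> np.ndarray:
    dim = 2 ** n_qubits
    # Equal-weight superposition over all 2**n basis states.
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 10, 20, 30):
    print(f"{n} qubits -> {2**n:,} amplitudes to track")
```

At 30 qubits that is already over a billion amplitudes, which is the intuition behind Garland's point about modeling a quantum system with a binary "filing system."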

So in Devs, quantum computing is a gateway into a hell of a trippy concept: a quantum computer so powerful that it can analyze the theoretical data of everything that has occurred or will occur. In essence, Forest and his team are creating a time machine that can project, through a probabilistic system, how events happened in the past, will happen in the future, and are happening right now. It thus acts as an omniscient surveillance system far beyond any neocon's dreams.


Flux-induced topological superconductivity in full-shell nanowires – Science Magazine

INTRODUCTION

Majorana zero modes (MZMs) localized at the ends of one-dimensional topological superconductors are promising candidates for fault-tolerant quantum computing. One approach among the proposals to realize MZMs, based on semiconducting nanowires with strong spin-orbit coupling subject to a Zeeman field and superconducting proximity effect, has received considerable attention, yielding increasingly compelling experimental results over the past few years. An alternative route to MZMs aims to create vortices in topological superconductors, for instance, by coupling a vortex in a conventional superconductor to a topological insulator.

We introduce a conceptually distinct approach to generating MZMs by threading magnetic flux through a superconducting shell fully surrounding a spin-orbit-coupled semiconducting nanowire core; this approach contains elements of both the proximitized-wire and vortex schemes. We show experimentally and theoretically that the winding of the superconducting phase around the shell induced by the applied flux gives rise to MZMs at the ends of the wire. The topological phase sets in at relatively low magnetic fields, is controlled by moving from zero to one phase twist around the superconducting shell, and does not require a large g factor in the semiconductor, which broadens the landscape of candidate materials.

In the destructive Little-Parks regime, the modulation of critical temperature with flux applied along the hybrid nanowire results in a sequence of lobes with reentrant superconductivity. Each lobe is associated with a quantized number of twists of the superconducting phase in the shell, determined by the external field. The result is a series of topologically locked boundary conditions for the proximity effect in the semiconducting core, with a dramatic effect on the subgap density of states.

Tunneling into the core in the zeroth superconducting lobe, around zero flux, we measure a hard proximity-induced gap with no subgap features. In the superconducting regions around one quantum of applied flux, Φ0 = h/2e, corresponding to phase twists of 2π in the shell, tunneling spectra into the core show stable zero-bias peaks, indicating a discrete subgap state fixed at zero energy.
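For reference, the flux-quantum and phase-winding relation garbled by the text encoding above is the standard one (a sketch of the textbook relations, not notation lifted from the paper itself):

```latex
\Phi_0 = \frac{h}{2e} \approx 2.07\times10^{-15}\,\mathrm{Wb},
\qquad
n \approx \frac{\Phi}{\Phi_0},
```

where n is the integer number of 2π twists of the superconducting phase around the shell, which locks to the nearest integer of the applied flux in units of Φ0.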

Theoretically, we find that a Rashba field arising from the breaking of local radial inversion symmetry at the semiconductor-superconductor interface, along with 2π phase twists in the boundary condition, can induce a topological state supporting MZMs. We calculate the topological phase diagram of the system as a function of Rashba spin-orbit coupling, radius of the semiconducting core, and band bending at the superconductor-semiconductor interface. Our analysis shows that topological superconductivity extends over a reasonably large portion of the parameter space. Transport simulations of the tunneling conductance in the presence of MZMs qualitatively reproduce the experimental data over the entire voltage-bias range.

We obtain further experimental evidence that the zero-energy states are delocalized at wire ends by investigating Coulomb blockade conductance peaks in full-shell wire islands of various lengths. In the zeroth lobe, Coulomb blockade peaks show 2e spacing; in the first lobe, peak spacings are roughly 1e-periodic, with slight even-odd alternation that vanishes exponentially with island length, consistent with overlapping Majorana modes at the two ends of the Coulomb island. The exponential dependence on length, as well as incompatibility with a power-law dependence, provides compelling evidence that MZMs reside at the ends of the hybrid islands.
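The even-odd alternation is commonly modeled as a hybridization energy between the two end modes that decays exponentially with island length L (a generic form with coherence length ξ, not the paper's fitted expression):

```latex
\Delta E_{\mathrm{even\text{-}odd}} \propto e^{-L/\xi}
```

A power-law dependence on L, by contrast, would point to ordinary subgap states rather than end-localized modes, which is why the length dependence carries so much evidentiary weight here.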

While similar in simplicity and practical feasibility to the original nanowire proposals with partial shell coverage, full-shell nanowires provide several key advantages. The modest magnetic field requirements, protection of the semiconducting core from surface defects, and locked phase winding in discrete lobes together suggest a relatively easy route to creating and controlling MZMs in hybrid materials. Our findings open the possibility of studying the interplay of mesoscopic and topological physics in this system.

(A) Colorized electron micrograph of a tunneling device composed of a hybrid nanowire with hexagonal semiconducting core and full superconducting shell. (B) Tunneling conductance (color) into the core as a function of applied flux (horizontal axis) and source-drain voltage (vertical axis) reveals a hard induced superconducting gap near zero applied flux and a gapped region with a discrete zero-energy state around one applied flux quantum, Φ0. (C) Realistic transport simulations in the presence of MZMs reproduce key features of the experimental data.


Reaching the Singularity May be Humanity’s Greatest and Last Accomplishment – Air & Space Magazine

In a new paper published in The International Journal of Astrobiology, Joseph Gale from The Hebrew University of Jerusalem and co-authors make the point that recent advances in artificial intelligence (AI), particularly in pattern recognition and self-learning, will likely result in a paradigm shift in the search for extraterrestrial intelligent life.

While futurist Ray Kurzweil predicted 15 years ago that the singularity (the time when the abilities of a computer overtake the abilities of the human brain) will occur around 2045, Gale and his co-authors believe this event may be much more imminent, especially with the advent of quantum computing. It's already been four years since the program AlphaGo, fortified with neural networks and learning modes, defeated Lee Sedol, the Go world champion. The strategy game StarCraft II may be the next to have a machine as reigning champion.

If we look at the calculating capacity of computers and compare it to the number of neurons in the human brain, the singularity could be reached as soon as the early 2020s. However, a human brain is wired differently than a computer, and that may be the reason why certain tasks that are simple for us are still quite challenging for today's AI. Also, the size of the brain or the number of neurons don't equate to intelligence. For example, whales and elephants have more than double the number of neurons in their brain, but are not more intelligent than humans.

The authors don't know when the singularity will come, but come it will. When this occurs, "the end of the human race might very well be upon us," they say, citing a 2014 prediction by the late Stephen Hawking. According to Kurzweil, humans may then be fully replaced by AI, or by some hybrid of humans and machines.

What will this mean for astrobiology? Not much, if we're searching only for microbial extraterrestrial life. But it might have a drastic impact on the search for extraterrestrial intelligent life (SETI). If other civilizations are similar to ours but older, we would expect that they already moved beyond the singularity. So they wouldn't necessarily be located on a planet in the so-called habitable zone. As the authors point out, such civilizations might prefer locations with little electronic noise in a dry and cold environment, perhaps in space, where they could use superconductivity for computing and quantum entanglement as a means of communication.

We are just beginning to understand quantum entanglement, and it is not yet clear whether it can be used to transfer information. If it can, however, that might explain the apparent lack of evidence for extraterrestrial intelligent civilizations. Why would they use primitive radio waves to send messages?

I think it is also still unclear whether there is something special about the human brain's ability to process information that casts doubt on whether AI can surpass our abilities in all relevant areas, especially in achieving consciousness. Might there be something unique to biological brains after millions and millions of years of evolution that computers cannot achieve? If not, the authors are correct that reaching the singularity could be humanity's greatest and last advance.



Quantum Computing Market 2020 | Growing Rapidly with Significant CAGR, Leading Players, Innovative Trends and Expected Revenue by 2026 – Skyline…

New Jersey, United States: The Quantum Computing Market is carefully researched in the report while largely concentrating on top players and their business tactics, geographical expansion, market segments, competitive landscape, manufacturing, and pricing and cost structures. Each section of the research study is specially prepared to explore key aspects of the Quantum Computing market. For instance, the market dynamics section digs deep into the drivers, restraints, trends, and opportunities of the Quantum Computing Market. With qualitative and quantitative analysis, we help you with thorough and comprehensive research on the Quantum Computing market. We have also focused on SWOT, PESTLE, and Porter's Five Forces analyses of the Quantum Computing market.

Global Quantum Computing Market was valued at USD 89.35 million in 2016 and is projected to reach USD 948.82 million by 2025, growing at a CAGR of 30.02% from 2017 to 2025.
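Those figures are at least internally consistent, as a quick compound-growth check shows (using only the report's own numbers):

```python
base = 89.35    # USD million, 2016
cagr = 0.3002   # 30.02% per year
years = 9       # 2016 through 2025

projection = base * (1 + cagr) ** years
print(round(projection, 2))  # ~948.8, matching the quoted USD 948.82 million
```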

Leading players of the Quantum Computing market are analyzed taking into account their market share, recent developments, new product launches, partnerships, mergers or acquisitions, and markets served. We also provide an exhaustive analysis of their product portfolios to explore the products and applications they concentrate on when operating in the Quantum Computing market. Furthermore, the report offers two separate market forecasts: one for the production side and another for the consumption side of the Quantum Computing market. It also provides useful recommendations for new as well as established players of the Quantum Computing market.

Quantum Computing Market by Regional Segments:

The chapter on regional segmentation describes the regional aspects of the Quantum Computing market. This chapter explains the regulatory framework that is expected to affect the entire market. It illuminates the political scenario of the market and anticipates its impact on the market for Quantum Computing.

Analysts who have authored the report have segmented the market for Quantum Computing by product, application and region. All segments are the subject of extensive research, with a focus on CAGR, market size, growth potential, market share and other important factors. The segment study provided in the report will help players focus on the lucrative areas of the Quantum Computing market. The regional analysis will help players strengthen their position in the most important regional markets. It reveals untapped growth opportunities in regional markets and how they can be exploited during the forecast period.

Ask for Discount @ https://www.verifiedmarketresearch.com/ask-for-discount/?rid=24845&utm_source=SGN&utm_medium=003

Highlights of TOC:

Overview: In addition to an overview of the Quantum Computing market, this section provides an overview of the report to give an idea of the type and content of the study.

Market dynamics: Here the authors of the report discussed in detail the main drivers, restrictions, challenges, trends and opportunities in the market for Quantum Computing.

Product Segments: This part of the report shows the growth of the market for various types of products sold by the largest companies.

Application segments: The analysts who have authored the report have thoroughly evaluated the market potential of the key applications and identified the future opportunities they present in the Quantum Computing market.

Geographic Segments: Each regional market is carefully examined to understand its current and future growth scenarios.

Company Profiles: The top players in the Quantum Computing market are detailed in the report based on their market share, served market, products, applications, regional growth and other factors.

The report also includes specific sections on production and consumption analysis, key results, key suggestions and recommendations, and other issues. Overall, it offers a complete analysis and research study of the Quantum Computing market to help players ensure strong growth in the coming years.

Complete Report is Available @ https://www.verifiedmarketresearch.com/product/Quantum-Computing-Market/?utm_source=SGN&utm_medium=003

About us:

Verified Market Research partners with customers to provide insight into strategic and growth analyses, along with the data necessary to achieve corporate goals and objectives. Our core values are trust, integrity and authenticity for our customers.

Analysts with a high level of expertise in data collection and governance use industrial techniques to collect and analyze data in all phases. Our analysts are trained to combine modern data collection techniques, superior research methodology, expertise and years of collective experience to produce informative and accurate research reports.

Contact us:

Mr. Edwyne Fernandes
Call: +1 (650) 781 4080
Email: [emailprotected]

Tags: Quantum Computing Market Size, Quantum Computing Market Trends, Quantum Computing Market Forecast, Quantum Computing Market Growth, Quantum Computing Market Analysis



7 books to read while in coronavirus quarantine or isolation – The CEO Magazine

If you, like much of the world, find yourself twiddling your thumbs in self-isolation amid the ongoing coronavirus pandemic, this book list is for you.

In the words of the great Susan Sontag, "The day has pockets; you can always find time to read."

And because we have no idea how long the COVID-19 crisis will continue, many of us are faced with a lot of time to kill.

From entertainment and escapism to a dystopian novel that's eerily similar to the current climate, these books will enrich your self-isolation.

Stephen King: It starts with a cough, then your neck swells, your nose starts to bleed and your eyes bulge out of their sockets. Death comes just hours later. King's 1978 novel The Stand follows the breakdown of society after a strain of the flu that has been modified to be used for biological warfare is accidentally released, killing 99% of the population. It's then up to a tiny handful of survivors to rebuild society. Comparisons between COVID-19 and King's fictitious flu are already being made, with the author tweeting in response: "It's not anywhere near as serious. It's eminently survivable. Keep calm and take all reasonable precautions."

Sheryl Sandberg: This game-changing book became an international bestseller for good reason. Facebook COO Sheryl Sandberg has had decades of experience climbing the corporate ladder at successful technology startups, all while balancing family life. She covers everything from how to find a mentor at work to voicing your opinion, becoming a leader in your organisation, forging an equal partnership in your home life and what true equality should look like in the workplace. Balancing light humour with solemn advice, Lean In is a call to action for personal growth that can empower women around the world to achieve their full potential.

Mark O'Connell: Insightful, life-affirming and slightly terrifying, Notes from an Apocalypse follows O'Connell as he travels the globe in search of answers regarding the impending climate apocalypse. He tours survival bunkers in South Dakota, visits the billionaires' bunkers in New Zealand and interviews everyone from doomsday preppers to conspiracy theorists. With insight, humanity and wit, O'Connell leaves you wondering: what if the end of the world isn't the end of the world?

David A. Sinclair: What if everything we've been taught to believe about ageing is wrong? What if we could choose our lifespan? Lifespan, the paradigm-shifting book from David Sinclair, acclaimed Harvard Medical School scientist and one of Time's most influential people, reveals a bold new theory for why we age. He posits: "Ageing is a disease, and that disease is treatable." At a time when the health of the world is threatened, this book will change the way you think about ageing and what we can do about it.

Jessica Anthony: What does taxidermy have in common with the current American political climate? A lot, according to Jessica Anthony. Inventive, original and darkly funny, Enter the Aardvark examines how and why a young Republican congressman discovers a mysterious stuffed aardvark placed on his doorstep. It then leaps between contemporary Washington DC and Victorian England, where readers meet the taxidermist who stuffed the creature and the naturalist who hunted it, offering a uniquely unsettling view of how male power has evolved over time.

James McBride: The National Book Award winner's dazzling, spiritually rich novel Deacon King Kong opens in 1969, when a boozy Brooklyn deacon guns down a drug dealer. The incident brings together an array of social groups, from the African-American and Latinx residents who witnessed the crime to the members of the deacon's church, as they seek to understand why the violence occurred and how it relates to the multicultural history of their community.

Quan Barry: Almost 300 years after the witch trials, a Massachusetts high school field hockey team is determined to make it to the state finals. After a losing streak, the team's luck begins to turn when members start signing their names in what might be a magical notebook. We Ride Upon Sticks evolves into a nostalgic coming-of-age story that explores the team's mission to win and their experimentation with witchcraft.



What Is Quantum Computing? The Complete WIRED Guide

First, accepted explanations of the subatomic world turned out to be incomplete. Electrons and other particles didn't just neatly carom around like Newtonian billiard balls, for example. Sometimes they acted like waves instead. Quantum mechanics emerged to explain such quirks, but introduced troubling questions of its own. To take just one brow-wrinkling example, this new math implied that physical properties of the subatomic world, like the position of an electron, didn't really exist until they were observed.

Quantum Leaps

1980

Physicist Paul Benioff suggests quantum mechanics could be used for computation.

1981

Nobel-winning physicist Richard Feynman, at Caltech, coins the term quantum computer.

1985

Physicist David Deutsch, at Oxford, maps out how a quantum computer would operate, a blueprint that underpins the nascent industry of today.

1994

Mathematician Peter Shor, at Bell Labs, writes an algorithm that could tap a quantum computers power to break widely used forms of encryption.

2007

D-Wave, a Canadian startup, announces a quantum computing chip it says can solve Sudoku puzzles, triggering years of debate over whether the companys technology really works.

2013

Google teams up with NASA to fund a lab to try out D-Waves hardware.

2014

Google hires the professor behind some of the best quantum computer hardware yet to lead its new quantum hardware lab.

2016

IBM puts some of its prototype quantum processors on the internet for anyone to experiment with, saying programmers need to get ready to write quantum code.

2017

Startup Rigetti opens its own quantum computer fabrication facility to build prototype hardware and compete with Google and IBM.

If you find that baffling, you're in good company. A year before winning a Nobel for his contributions to quantum theory, Caltech's Richard Feynman remarked that "nobody understands quantum mechanics." The way we experience the world just isn't compatible. But some people grasped it well enough to redefine our understanding of the universe. And in the 1980s a few of them, including Feynman, began to wonder if quantum phenomena like subatomic particles' "don't look and I don't exist" trick could be used to process information. The basic theory or blueprint for quantum computers that took shape in the '80s and '90s still guides Google and others working on the technology.

Before we belly flop into the murky shallows of quantum computing 0.101, we should refresh our understanding of regular old computers. As you know, smartwatches, iPhones, and the world's fastest supercomputer all basically do the same thing: they perform calculations by encoding information as digital bits, aka 0s and 1s. A computer might flip the voltage in a circuit on and off to represent 1s and 0s, for example.

Quantum computers do calculations using bits, too. After all, we want them to plug into our existing data and computers. But quantum bits, or qubits, have unique and powerful properties that allow a group of them to do much more than an equivalent number of conventional bits.

Qubits can be built in various ways, but they all represent digital 0s and 1s using the quantum properties of something that can be controlled electronically. Popular examples (at least among a very select slice of humanity) include superconducting circuits, or individual atoms levitated inside electromagnetic fields. The magic power of quantum computing is that this arrangement lets qubits do more than just flip between 0 and 1. Treat them right and they can flip into a mysterious extra mode called a superposition.

You may have heard that a qubit in superposition is both 0 and 1 at the same time. That's not quite true and also not quite false; there's just no equivalent in Homo sapiens' humdrum classical reality. If you have a yearning to truly grok it, you must make a mathematical odyssey WIRED cannot equip you for. But in the simplified and dare we say perfect world of this explainer, the important thing to know is that the math of a superposition describes the probability of discovering either a 0 or 1 when a qubit is read out, an operation that crashes it out of a quantum superposition into classical reality. A quantum computer can use a collection of qubits in superpositions to play with different possible paths through a calculation. If done correctly, the pointers to incorrect paths cancel out, leaving the correct answer when the qubits are read out as 0s and 1s.
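A toy illustration of that readout step, assuming nothing beyond NumPy: a single qubit's superposition is just two complex amplitudes, and measurement samples 0 or 1 with probabilities given by their squared magnitudes, "crashing" the state into a classical bit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Amplitudes for |0> and |1>; readout probabilities are their squared magnitudes.
state = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition

def measure(state: np.ndarray) -> int:
    probs = np.abs(state) ** 2          # Born rule: |amplitude|^2
    return rng.choice([0, 1], p=probs)  # readout yields a classical 0 or 1

counts = [measure(state) for _ in range(10_000)]
print(sum(counts) / len(counts))  # ~0.5: about half the readouts give 1
```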

Jargon for the Quantum Qurious

What's a qubit?

A device that uses quantum mechanical effects to represent 0s and 1s of digital data, similar to the bits in a conventional computer.

What's a superposition?

It's the trick that makes quantum computers tick, and makes qubits more powerful than ordinary bits. A superposition is an intuition-defying mathematical combination of both 0 and 1. Quantum algorithms can use a group of qubits in a superposition to shortcut through calculations.

What's quantum entanglement?

A quantum effect so unintuitive that Einstein dubbed it "spooky action at a distance." When two qubits in a superposition are entangled, certain operations on one have instant effects on the other, a process that helps quantum algorithms be more powerful than conventional ones.

What's quantum speedup?

The holy grail of quantum computing: a measure of how much faster a quantum computer could crack a problem than a conventional computer could. Quantum computers aren't well-suited to all kinds of problems, but for some they offer an exponential speedup, meaning their advantage over a conventional computer grows explosively with the size of the input problem.

For some problems that are very time consuming for conventional computers, this allows a quantum computer to find a solution in far fewer steps than a conventional computer would need. Grover's algorithm, a famous quantum search algorithm, could find you in a phone book with 100 million names with just 10,000 operations. If a classical search algorithm just spooled through all the listings to find you, it would require 50 million operations, on average. For Grover's and some other quantum algorithms, the bigger the initial problem, or phone book, the further behind a conventional computer is left in the digital dust.
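The scaling is easy to sanity-check: a classical linear scan needs about N/2 lookups on average, while Grover's algorithm needs on the order of sqrt(N) operations. Reproducing the article's numbers (a back-of-the-envelope check that ignores constant factors in Grover's iteration count):

```python
import math

n = 100_000_000                     # phone-book entries
classical_avg = n // 2              # linear scan, average case
grover_order = round(math.sqrt(n))  # order-of-magnitude Grover count

print(f"classical: ~{classical_avg:,} operations")  # ~50,000,000
print(f"Grover:    ~{grover_order:,} operations")   # ~10,000
```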

The reason we don't have useful quantum computers today is that qubits are extremely finicky. The quantum effects they must control are very delicate, and stray heat or noise can flip 0s and 1s, or wipe out a crucial superposition. Qubits have to be carefully shielded, and operated at very cold temperatures, sometimes only fractions of a degree above absolute zero. Most plans for quantum computing depend on using a sizable chunk of a quantum processor's power to correct its own errors, caused by misfiring qubits.

Recent excitement about quantum computing stems from progress in making qubits less flaky. That's giving researchers the confidence to start bundling the devices into larger groups. Startup Rigetti Computing recently announced it has built a processor with 128 qubits made with aluminum circuits that are super-cooled to make them superconducting. Google and IBM have announced their own chips with 72 and 50 qubits, respectively. That's still far fewer than would be needed to do useful work with a quantum computer (it would probably require at least thousands) but as recently as 2016 those companies' best chips had qubits only in the single digits. After tantalizing computer scientists for 30 years, practical quantum computing may not exactly be close, but it has begun to feel a lot closer.

What the Future Holds for Quantum Computing

Some large companies and governments have started treating quantum computing research like a race; perhaps fittingly, it's one where both the distance to the finish line and the prize for getting there are unknown.

Google, IBM, Intel, and Microsoft have all expanded their teams working on the technology, with a growing swarm of startups such as Rigetti in hot pursuit. China and the European Union have each launched new programs measured in the billions of dollars to stimulate quantum R&D. And in the US, the Trump White House has created a new committee to coordinate government work on quantum information science. Several bills were introduced to Congress in 2018 proposing new funding for quantum research, totaling upwards of $1.3 billion. It's not quite clear what the first killer apps of quantum computing will be, or when they will appear. But there's a sense that whoever is first to make these machines useful will gain big economic and national security advantages.


Tech incubator Fountech.Ventures launches in US and UK – UKTN

Fountech.Ventures, a next generation incubator for deep tech startups, has launched in the US and UK.

The subsidiary company of Fountech.ai, a four-year-old international AI think tank and parent company to a number of specialist AI and deep tech firms, is based in Austin, Texas, US and originated in London, UK.

Fountech.Ventures goes above and beyond a standard incubator: it provides broader services over a longer timeframe so founders of deep tech startups can fast-track their businesses from ideation to commercial success.

Fountech.Ventures develops tailored programmes for members, sharing technical and commercial knowledge, along with the provision of interim CEOs, funding, business advice, office space and international networking opportunities.

Headed by Salvatore Minetti, a team of experienced tech experts will work with deep tech startups spanning artificial intelligence (AI), robotics, quantum computing and blockchain.

Based on progress and continuous assessments, Fountech.Ventures will invest its own funds into its portfolio companies, from pre-seed level right through to Series B.


Salvatore Minetti, CEO of Fountech.Ventures, said: "The US and UK are home to a vast number of deep tech startups that have immense growth potential. However, reaching that potential is difficult; tech experts and PhD graduates have incredible ideas for how to use new and advanced technologies but often lack the skills and experience to transform them into successful businesses."

"Fountech.Ventures will change all this by delivering the commercial expertise and infrastructure that is sorely needed. What's more, the fact that our members can also access vital funding and our international hubs means we have a unique ability to bring products and services grounded in leading edge technologies to huge markets."

"It is this end-to-end offering that makes us more than a typical incubator: Fountech.Ventures is a next generation incubator."

Fountech.Ventures already has six portfolio companies. These include Soffos, an AI TutorBot; Prospex, an AI-powered lead generation tool; and Dinabite, a restaurant app built on an AI platform.


Rebecca Taylor and Joseph McCall have joined the Fountech.Ventures board as directors. The board is to be bolstered further with additional appointments in the coming weeks.

Nikolas Kairinos, CEO and founder of the parent company Fountech.ai, commented: "We are delighted to unveil Fountech.Ventures today."

"This next gen incubator is going to propel the growth of deep tech startups across both sides of the Atlantic. In doing so, we will enable innovative leading edge tech solutions to thrive and consequently improve the lives of consumers, businesses and societies."


Machine Learning: The Real Buzzword Of 2020 – Forbes

Artificial intelligence (AI) is a hot topic. Skim tech journals or sites, and you'll undoubtedly see articles focused on how AI is the big technology for 2020. CIOs are discussing how to bring AI into their organizations, and CX leaders are listing AI as a must-have.

But here's the funny thing: AI doesn't really exist not yet anyway. I know many will be surprised to hear this, but before you decide that I'm wrong, consider Merriam-Webster.com's definition: "The capability of a machine to imitate intelligent human behavior."

If you believe this is the right definition of AI, then I ask you: Are there machines imitating intelligent human behavior today? The answer right now is no. If there is a machine that seems smart on its own, the truth is that AI isn't the driver; machine learning (ML) is. ML is alive and thriving, yet AI gets all the credit.

It's time to get familiar with ML.

ML powers programs and machines to take data, analyze it in real time, and then learn and adapt based on that information. This is happening today. Think of the recommendations you get for products on Amazon or the shows Netflix suggests you watch. This is all due to ML. It learns your preferences based on your browsing/purchasing/viewing behaviors and then makes intelligent recommendations. The ability to synthesize massive amounts of data in nanoseconds makes machines smart. There's actually nothing artificial about it; it's real and at play in our lives already.

Without a doubt, ML is a game-changer for many industries, including contact centers. Similar to the way that automation revolutionized manufacturing, ML can be the missing link to revolutionizing the customer service industry. When leveraged correctly, ML offers enormous productivity gains in customer-facing interactions, empowering contact centers to use bots to perform basic, repetitive tasks. By offloading straightforward work to bots, human agents are free to do work that requires empathy and thought that only they can deliver. This can create an exponentially scalable customer experience workforce in other words, it could solve the industry's oldest and most expensive problem.

ML's potential is big.

Once you know how ML works, I'm sure you can think of ways it has touched your life. But ML's potential is greater than how we're using it. In fact, I don't think we've scratched the surface of its benefits. I believe one of the biggest untapped possibilities for ML lies inside organizations around internal processes. I believe that in 2020, we'll start seeing organizations using ML's data and analysis capabilities to make more informed workforce management decisions.

Instead of contact center managers having to manually sort through data to find out which agents are doing well on a particular day, they can use the insight delivered via ML to see who is providing great service and is able to take on additional customers and issues and, conversely, who is struggling and might need a break. This is an effect of ML's ability to use sentiment analysis and natural language processing (NLP) to identify patterns, including patterns in an employee's productivity. ML gives managers informative, real-time data to help them support their staff, which helps employees succeed and helps to deliver an exceptional experience to every customer. Win-win.

When you have machines that can learn about your processes, customers' and employees' needs, and goals, you have the knowledge to make iterative, positive changes to your business. This can lead to:

Better employee experiences and a more engaged workforce with less turnover.

Better, more personalized, lower-effort customer experiences.

Reduced staffing expenses and higher revenue potential.

Streamlined operations by partnering humans with bots.

If you're not a computer science nerd, the concept of ML might feel unrealistic, expensive or difficult to deploy. In short, it seems risky. However, I believe this is a technology your business should be using. Here are some tips to make the transition to ML less intimidating:

1. Do your research. While you should feel a sense of urgency to integrate ML into your business, don't make hasty decisions. Take the time to get a solid understanding of your customers' needs. You don't want to start using just any solution, but one that best matches your business needs.

2. Choose the right ML-powered bot. Just like any other technology, there are options. Make sure you find a bot that meets the needs of your business and offers the services that make life better for your customers and your employees. Not every bot is built alike.

3. Don't forget about your people. Leveraging the right technology innovation is critical to your business, but so is investing in your people and ensuring that the tech and the humans are working together harmoniously.

4. Realize that you're never done. It's important for leaders across all businesses to realize that customer experience is constantly evolving and that we must always be watching, evaluating and tweaking. Don't be afraid to make changes or modifications to your ML plans. If something isn't producing the results you want, find the issue, and make a change. Learn, and keep going. If you have a win, isolate what worked, and replicate it. Similar to the first tip, this isn't a race, so be thoughtful about what you're doing, and ensure it resonates with your business objectives as well as your customers' and employees' needs.

ML isn't the way of the future; it's the way of the present, and I can't think of one reason you would knowingly decide to be late to the game. Your business deserves to work smarter, and this is the power of ML. Are you ready?


dotData Receives APN Machine Learning Competency Partner of the Year Award – Yahoo Finance

Award Recognizes Company's Rapid Growth and Success in the AutoML 2.0 Market

SAN MATEO, Calif., March 25, 2020 /PRNewswire/ -- dotData, focused on delivering full-cycle data science automation and operationalization for the enterprise, today announced that Amazon Web Services (AWS) has awarded dotData with the APN Machine Learning (ML) Competency Partner of the Year Award for 2019.

The award recognizes dotData's rapid growth and success in the enterprise AI/ML market and its contribution to the AWS business in 2019. This award is a testament to dotData platform's ability to significantly accelerate and simplify development of new AI/ML use cases and deliver insights to enterprise customers. The award was announced today at the AWS Partner Summit Tokyo, currently taking place virtually from March 25 - April 10, 2020.

dotData announced in February 2020 that it had achieved AWS ML Competency status, only eight months after joining the AWS Partner Network (APN). The certification recognizes dotData as an APN Partner that accelerates the full-cycle ML and data science process and provides validation that dotData has deep expertise in artificial intelligence (AI) and ML on AWS and can deliver their organization's solutions seamlessly on AWS.

dotData provides solutions designed to improve the productivity of data science projects, which traditionally require extensive manual effort from valuable and scarce enterprise resources. The platform automates the full life-cycle of the data science process, from raw business data through feature engineering to the implementation of ML in production, utilizing its proprietary AI technologies.

dotData's AI-powered feature engineering automatically applies data transformation, cleansing, normalization, aggregation, and combination, and transforms hundreds of tables with complex relationships and billions of rows into a single feature table, automating the most manual parts of data science projects.
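As a rough illustration of the kind of transformation being described, here is a generic pandas sketch that rolls a one-to-many child table up into per-entity features and joins everything into a single flat feature table. The table and column names are hypothetical; this is not dotData's API, just the general pattern:

```python
import pandas as pd

# Hypothetical raw tables: customers, and their transactions (one-to-many).
customers = pd.DataFrame({"customer_id": [1, 2], "segment": ["retail", "smb"]})
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "amount": [120.0, 80.0, 300.0],
})

# Aggregate the many-rows-per-customer table into per-customer features,
# then join into one flat feature table ready for model training.
agg = (
    transactions.groupby("customer_id")["amount"]
    .agg(txn_count="count", txn_total="sum", txn_mean="mean")
    .reset_index()
)
feature_table = customers.merge(agg, on="customer_id", how="left")
print(feature_table)
```

Automated feature engineering effectively searches over many such joins, aggregations, and transformations across hundreds of tables, rather than the two shown here.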


"We are honored and proud to receive this award which recognizes our commitment to making AI and ML accessible to as many people in the enterprise as possible and our success in helping our enterprise customers meet their business goals," said Ryohei Fujimaki, founder and CEO of dotData. "As an APN ML Competency partner we have been able to deliver an outstanding product that dramatically accelerates the AI and ML initiatives of AWS users and maximizes their business impacts. We look forward to contributing to our customers' success bycollaborating with AWS."

AWS ML Competency Partners provide solutions that help organizations solve their data challenges and enable ML and data science workflows. The program is designed to highlight APN Partners who have demonstrated technical proficiency in specialized solution areas and helps customers find the most qualified organizations with deep expertise and proven customer success.

dotData democratizes data science by enabling existing resources to perform data science tasks, making enterprise data science scalable and sustainable. dotData automates up to 100 percent of the data science workflow, enabling users to connect directly to their enterprise data sources to discover and evaluate millions of features from complex table structures and huge data sets with minimal user input. dotData is also designed to operationalize data science by producing both feature and ML scoring pipelines in production, which IT teams can then immediately integrate with business workflow. This can further automate the time-consuming and arduous process of maintaining the deployed pipeline to ensure repeatability as data changes over time. With the dotData GUI, the data science task becomes a five-minute operation, requiring neither significant data science experience nor SQL/Python/R coding.

For more information or a demo of dotData's AI-powered full-cycle data science automation platform, please visit dotData.com.

About dotData
dotData is one of the first companies focused on full-cycle data science automation. Fortune 500 organizations around the world use dotData to accelerate their ML and AI projects and deliver higher business value. dotData's automated data science platform speeds time to value by accelerating, democratizing, augmenting and operationalizing the entire data science process, from raw business data through data and feature engineering to ML in production. With solutions designed to cater to the needs of both data scientists as well as citizen data scientists, dotData provides value across the entire organization.

dotData's unique AI-powered feature engineering delivers actionable business insights from relational, transactional, temporal, geo-locational, and text data. dotData has been recognized as a leader by Forrester in the 2019 New Wave for AutoML platforms. dotData has also been recognized as the "best machine learning platform" for 2019 by the AI Breakthrough Awards and was named an "emerging vendor to watch" by CRN in the big data space. For more information, visit http://www.dotdata.com, and join the conversation on Twitter and LinkedIn.


SOURCE dotData


PSD2: How machine learning reduces friction and satisfies SCA – The Paypers

Andy Renshaw, Feedzai: It crosses borders but doesn't have a passport. It's meant to protect people but can make them angry. It's competitive by nature but doesn't want you to fail. What is it?

If the PSD2 regulations and Strong Customer Authentication (SCA) feel like a riddle to you, you're not alone. SCA places strict two-factor authentication requirements upon financial institutions (FIs) at a time when FIs are facing stiff competition for customers. On top of that, the variety of payment types, along with the sheer number of transactions, continues to increase.

According to UK Finance, debit card transactions have outnumbered cash transactions since 2017, while mobile banking surged over the past year, particularly for contactless payments. The number of contactless payment transactions per customer is growing; this increase in transactions also raises the potential for customer friction.

The number of transactions isn't the only thing that's shown an exponential increase; the speed at which FIs must process them has too. Customers expect to send, receive, and access money with the swipe of a screen. Driven by customer expectations, instant payments are gaining traction across the globe with no sign of slowing down.

Considering the sheer number of transactions combined with the need to authenticate payments in real-time, the demands placed on FIs can create a real dilemma. In this competitive environment, how can organisations reduce fraud and satisfy regulations without increasing customer friction?

For countries that fall under PSD2's regulation, the answer lies in the one known way to avoid customer friction while meeting the regulatory requirement: keep fraud rates at or below SCA exemption thresholds.

How machine learning keeps fraud rates below the exemption threshold to bypass SCA requirements

Demonstrating significantly low fraud rates allows financial institutions to bypass the SCA requirement. The logic behind this is simple: if an FI's systems can prevent fraud at such high rates, they've demonstrated their systems are secure without additional authentication.

SCA exemption thresholds are:

Exemption Threshold Value | Remote electronic card-based payment | Remote electronic credit transfers
EUR 500 | below 0.01% fraud rate | below 0.01% fraud rate
EUR 250 | below 0.06% fraud rate | below 0.01% fraud rate
EUR 100 | below 0.13% fraud rate | below 0.015% fraud rate

Looking at these numbers, you might think that achieving SCA exemption thresholds is impossible. After all, bank transfer scams rose 40% in the first six months of 2019. But state-of-the-art technology rises to the challenge of increased fraud. Artificial intelligence, and more specifically machine learning, makes achieving SCA exemption thresholds possible.

How machine learning achieves SCA exemption threshold values

Every transaction has hundreds of data points, called entities. Entities include time, date, location, device, card, cardless, sender, receiver, merchant, customer age; the possibilities are almost endless. When data is cleaned and connected, meaning it doesn't live in siloed systems, the power of machine learning to provide actionable insights on that data is historically unprecedented.

Robust machine learning technology uses both rules and models and learns from both historical and real-time profiles of virtually every data point or entity in a transaction. The more data we feed the machine, the better it gets at learning fraud patterns. Over time, the machine learns to accurately score transactions in less than a second without the need for customer authentication.
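A highly simplified sketch of that rules-plus-model scoring flow (illustrative only: the fields, thresholds, and stand-in model are hypothetical, not Feedzai's platform):

```python
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float
    country: str
    home_country: str
    device_seen_before: bool

def rule_score(txn: Txn) -> float:
    # Hand-written rules each contribute a fixed risk bump.
    score = 0.0
    if txn.country != txn.home_country:
        score += 0.3
    if not txn.device_seen_before:
        score += 0.2
    return score

def model_score(txn: Txn) -> float:
    # Stand-in for a trained model's fraud probability.
    return min(1.0, txn.amount / 10_000)

def risk_score(txn: Txn) -> float:
    # Blend rules and model; production systems learn weights from history.
    return min(1.0, 0.5 * rule_score(txn) + 0.5 * model_score(txn))

txn = Txn(amount=250.0, country="IN", home_country="GB", device_seen_before=False)
print(risk_score(txn))  # 0.2625: low enough to skip step-up authentication
```

In a real deployment the profiles feeding such a function are updated in real time, and the resulting score is compared against thresholds tuned to keep fraud rates inside the exemption bands above.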

Machine learning creates streamlined and flexible workflows

Of course, sometimes authentication is inevitable. For example, if a customer who generally initiates transactions in Brighton suddenly initiates a transaction from Mumbai without a travel note on the account, authentication should be required. But if machine learning platforms have flexible data science environments that embed authentication steps seamlessly into the transaction workflow, the experience can be as customer-centric as possible.

Streamlined workflows must extend to the fraud analyst's job

Flexible workflows aren't just important to instant payments; they're important to all payments. And they can't just be a back-end experience in the data science environment. Fraud analysts need flexibility in their workflows too. They're under pressure to make decisions quickly and accurately, which means they need a full view of the customer, not just the transaction.

Information provided at a transactional level doesn't allow analysts to connect all the dots. In this scenario, analysts are left opening up several case managers in an attempt to piece together a complete and accurate fraud picture. It's time-consuming and ultimately costly, not to mention the wear and tear on employee satisfaction. But some machine learning risk platforms can show both authentication and fraud decisions at the customer level, ensuring analysts have a 360-degree view of the customer.

Machine learning prevents instant payments from becoming instant losses

Instant payments can provide immediate customer satisfaction, but also instant fraud losses. Scoring transactions in real time means institutions can increase the security around the payments going through their system before it's too late.

Real-time transaction scoring requires a colossal amount of processing power because it can't use batch processing, an efficient method when dealing with high volumes of data. That's because the lag time between when a customer transacts and when a batch is processed makes this method incongruent with instant payments. Therefore, scoring transactions in real time requires supercomputers with super processing powers. The costs associated with this make hosting systems on the cloud more practical than hosting at the FI's premises, often referred to as "on prem." Of course, FIs need to consider other factors, including cybersecurity concerns, before determining where they should host their machine learning platform.

Providing exceptional customer experiences while keeping fraud at or below PSD2's SCA threshold can seem like a magic trick, but it's not. It's the combined intelligence of humans and machines that provides the most effective method we have today to curb and prevent fraud losses. It's how we solve the friction-security puzzle and deliver customer satisfaction while satisfying SCA.

About Andy Renshaw

Andy Renshaw, Vice President of Banking Solutions at Feedzai, has over 20 years of experience in banking and the financial services industry, leading large programs and teams in fraud management and AML. Prior to joining Feedzai, Andy held roles in global financial institutions such as Lloyds Banking Group, Citibank, and Capital One, where he helped fight against the ever-evolving financial crime landscape as a technical expert, fraud prevention expert, and a lead product owner for fraud transformation.

About Feedzai

Feedzai is the market leader in fighting fraud with AI. We're coding the future of commerce with today's most advanced risk management platform powered by big data and machine learning. Founded and developed by data scientists and aerospace engineers, Feedzai has one mission: to make banking and commerce safe. The world's largest banks, processors, and retailers use Feedzai's fraud prevention and anti-money laundering products to manage risk while improving customer experience.


Machine learning teams with antibody science on COVID-19 treatment discovery – AI in Healthcare

Two data scientists say they have created AI algorithms that can do in a week what biological researchers might otherwise spend years trying to pull off in a laboratory: discover antibody-based treatments that have a fighting chance to beat back COVID-19.

In fact, studies have shown it takes an average of five years and half a billion dollars to find and fine-tune antibodies in a lab, Andrew Satz and Brett Averso, both execs of a 12-member startup called EVQLV, explain.

Speaking with their alma mater, Columbia University's Data Science Institute, Satz and Averso say their machine-learning algorithms can help by cutting the chances of costly experimental failures in the lab.

"We fail in the computer as much as possible to reduce the possibility of downstream failure in the laboratory," Satz tells the institute's news division. "[T]hat shaves a significant amount of time from laborious and time-consuming work."


Neural networks facilitate optimization in the search for new materials – MIT News

When searching through theoretical lists of possible new materials for particular applications, such as batteries or other energy-related devices, there are often millions of potential materials that could be considered, and multiple criteria that need to be met and optimized at once. Now, researchers at MIT have found a way to dramatically streamline the discovery process, using a machine learning system.

As a demonstration, the team arrived at a set of the eight most promising materials, out of nearly 3 million candidates, for an energy storage system called a flow battery. This culling process would have taken 50 years by conventional analytical methods, they say, but they accomplished it in five weeks.

The findings are reported in the journal ACS Central Science, in a paper by MIT professor of chemical engineering Heather Kulik, Jon Paul Janet PhD '19, Sahasrajit Ramesh, and graduate student Chenru Duan.

The study looked at a set of materials called transition metal complexes. These can exist in a vast number of different forms, and Kulik says they are "really fascinating, functional materials that are unlike a lot of other material phases. The only way to understand why they work the way they do is to study them using quantum mechanics."

To predict the properties of any one of millions of these materials would require either time-consuming and resource-intensive spectroscopy and other lab work, or time-consuming, highly complex physics-based computer modeling for each possible candidate material or combination of materials. Each such study could consume hours to days of work.

Instead, Kulik and her team took a small number of different possible materials and used them to teach an advanced machine-learning neural network about the relationship between the materials' chemical compositions and their physical properties. That knowledge was then applied to generate suggestions for the next generation of possible materials to be used for the next round of training of the neural network. Through four successive iterations of this process, the neural network improved significantly each time, until reaching a point where it was clear that further iterations would not yield any further improvements.
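In outline, that retrain-propose-evaluate cycle is a classic active-learning loop. A toy, fully runnable sketch (a quadratic surrogate and a synthetic one-dimensional "property" stand in for the team's neural network and quantum-mechanics evaluations):

```python
import numpy as np

rng = np.random.default_rng(1)

def true_property(x):
    # Toy ground truth standing in for an expensive quantum-mechanics calculation.
    return -(x - 0.7) ** 2  # best "material" sits near x = 0.7

# Seed with a handful of "measured" candidates.
X = list(rng.uniform(0, 1, 5))
y = [true_property(x) for x in X]

for generation in range(4):  # four iterations, as in the article
    coeffs = np.polyfit(X, y, deg=2)            # cheap surrogate model
    candidates = rng.uniform(0, 1, 200)         # propose a new generation
    preds = np.polyval(coeffs, candidates)
    picks = candidates[np.argsort(preds)[-5:]]  # keep the most promising few
    X.extend(picks)                             # "evaluate" them (the expensive step)
    y.extend(true_property(p) for p in picks)
    print(f"generation {generation}: best so far x = {X[int(np.argmax(y))]:.3f}")
```

Each pass folds the newly evaluated candidates back into the training set, mirroring the loop described above, and stops once further rounds stop improving.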

This iterative optimization system greatly streamlined the process of arriving at potential solutions that satisfied the two conflicting criteria being sought. This kind of process of finding the best solutions in situations where improving one factor tends to worsen the other is known as a Pareto front, representing a graph of the points such that any further improvement of one factor would make the other worse. In other words, the graph represents the best possible compromise points, depending on the relative importance assigned to each factor.

Training typical neural networks requires very large data sets, ranging from thousands to millions of examples, but Kulik and her team were able to use this iterative process, based on the Pareto front model, to streamline the process and provide reliable results using only a few hundred samples.

In the case of screening for the flow battery materials, the desired characteristics were in conflict, as is often the case: The optimum material would have high solubility and a high energy density (the ability to store energy for a given weight). But increasing solubility tends to decrease the energy density, and vice versa.
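Computing the Pareto front is mechanical once each candidate has a score for both objectives. A minimal sketch with random placeholder numbers for solubility and energy density (both treated as higher-is-better; not the study's actual data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical candidates: column 0 = solubility, column 1 = energy density.
scores = rng.uniform(size=(1000, 2))

def pareto_front(points: np.ndarray) -> np.ndarray:
    # A point is kept if no other point is at least as good on both
    # objectives and strictly better on at least one (i.e., dominates it).
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(
            np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        )
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(scores)
print(len(front), "non-dominated candidates out of", len(scores))
```

The study pairs this idea with model-predicted scores and uncertainties, so the front guides which candidates are worth the expensive follow-up evaluation.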

Not only was the neural network able to rapidly come up with promising candidates, it also was able to assign levels of confidence to its different predictions through each iteration, which helped to allow the refinement of the sample selection at each step. "We developed a better than best-in-class uncertainty quantification technique for really knowing when these models were going to fail," Kulik says.

The challenge they chose for the proof-of-concept trial was materials for use in redox flow batteries, a type of battery that holds promise for large, grid-scale batteries that could play a significant role in enabling clean, renewable energy. Transition metal complexes are the preferred category of materials for such batteries, Kulik says, but there are too many possibilities to evaluate by conventional means. They started out with a list of 3 million such complexes before ultimately whittling that down to the eight good candidates, along with a set of design rules that should enable experimentalists to explore the potential of these candidates and their variations.

"Through that process, the neural net both gets increasingly smarter about the [design] space, but also increasingly pessimistic that anything beyond what we've already characterized can further improve on what we already know," she says.

Apart from the specific transition metal complexes suggested for further investigation using this system, she says, the method itself could have much broader applications. "We do view it as the framework that can be applied to any materials design challenge where you're really trying to address multiple objectives at once. You know, all of the most interesting materials design challenges are ones where you have one thing you're trying to improve, but improving that worsens another. And for us, the redox flow battery redox couple was just a good demonstration of where we think we can go with this machine learning and accelerated materials discovery."

For example, optimizing catalysts for various chemical and industrial processes is another kind of such complex materials search, Kulik says. Presently used catalysts often involve rare and expensive elements, so finding similarly effective compounds based on abundant and inexpensive materials could be a significant advantage.

"This paper represents, I believe, the first application of multidimensional directed improvement in the chemical sciences," she says. But the long-term significance of the work is in the methodology itself, because of things that might not be possible at all otherwise. "You start to realize that even with parallel computations, these are cases where we wouldn't have come up with a design principle in any other way. And these leads that are coming out of our work, these are not necessarily at all ideas that were already known from the literature or that an expert would have been able to point you to."

"This is a beautiful combination of concepts in statistics, applied math, and physical science that is going to be extremely useful in engineering applications," says George Schatz, a professor of chemistry and of chemical and biological engineering at Northwestern University, who was not associated with this work. He says this research addresses how to do machine learning when there are multiple objectives. Kulik's approach uses leading-edge methods to train an artificial neural network that is used to predict which combination of transition metal ions and organic ligands will be best for redox flow battery electrolytes.

Schatz says this method can be used in many different contexts, so it has the potential to transform machine learning, which is a major activity around the world.

The work was supported by the Office of Naval Research, the Defense Advanced Research Projects Agency (DARPA), the U.S. Department of Energy, the Burroughs Wellcome Fund, and the AAAS Marion Milligan Mason Award.

Visit link:
Neural networks facilitate optimization in the search for new materials - MIT News

Silicone And AI Power This Prayerful Robotic Intercessor – Hackaday

Even in a world that is as currently far off the rails as this one is, we're going to go out on a limb and say that this machine learning, servo-powered prayer bot is going to be the strangest thing you see today. We're happy to be wrong about that, though, and if we are, please send links.

The Prayer, as [Diemut Strebe]'s work is called, may look strange, but it's another in a string of pieces by various artists that explores just what it means to be human at a time when machines are blurring the line between them and us. The hardware is straightforward: a silicone rubber representation of a human nasopharyngeal cavity, servos for moving the lips, and a speaker to create the vocals. Those are generated by a machine-learning algorithm that was trained against the sacred texts of many of the world's major religions, including the Christian Bible, the Koran, the Bhagavad Gita, Taoist texts, and the Book of Mormon. The algorithm analyzes the structure of sacred verses and recreates random prayers and hymns, voiced using Amazon Polly, that sound a lot like the real thing. That the lips move in synchrony with the ersatz devotions only adds to the otherworldliness of the piece. Watch it in action below.
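The project's full pipeline isn't published, but the final text-to-speech step via Amazon Polly might look roughly like this sketch; the prayer text, voice choice, region, and file name are all placeholders:

```python
import boto3

# Text produced upstream by the language model (placeholder).
prayer = "Blessed is the circuit that hums in the quiet hours."

polly = boto3.client("polly", region_name="us-east-1")
response = polly.synthesize_speech(
    Text=prayer,
    OutputFormat="mp3",
    VoiceId="Matthew",   # any available Polly voice would do
)
# Polly returns a streaming audio body; save it for playback.
with open("prayer.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```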

We've featured several AI-based projects that poke at some interesting questions. This kinetic sculpture that uses machine learning to achieve balance comes to mind, while AI has even been employed in the search for spirits from the other side.

[Via Twitter, but we recommend abstaining from the comments, for obvious reasons.]

Continue reading here:
Silicone And AI Power This Prayerful Robotic Intercessor - Hackaday

2020 Supply Chain Planning Value Matrix Underscores Benefits of Machine Learning and Customizable Integrations – Yahoo Finance

Nucleus Research identifies Blue Yonder, E2Open, Infor, Kinaxis, One Network and Vanguard as SCP Leaders

Nucleus Research today released the 2020 Supply Chain Planning (SCP) Technology Value Matrix, its assessment of the SCP market. For the report, Nucleus evaluated SCP vendors based on their products' usability, functionality and overall value.

While other firms' market reports position vendors based on analyst opinions, the Nucleus Value Matrix segments competitors based on usability, functionality and the value that customers realized from each product's capabilities, measured with Nucleus' rigorous ROI methodologies.

Nucleus named Blue Yonder, E2Open, Infor, Kinaxis, One Network and Vanguard as SCP leaders.

Supply chain planning has become critical for success as companies must maintain service levels in the face of resource constraints and external disturbances. Tight solution integrations and robust embedded analytics have become table stakes for supply chain planning systems, which can now differentiate based on go-to-market strategy and tactical focuses. Leading vendors have undertaken a "platform approach" to product delivery, providing solution flexibility that enables customers to drive long-term value by configuring deployments with their preferred blend of best practices and customizations.

"To support a broad range of planning capabilities, supply chain planning vendors must provide comprehensive product roadmaps," says Ian Campbell, CEO of Nucleus Research. "Now more than ever, customers demand the capability to prioritize tactical focuses and personalize SCP solutions with their own differentiators."

"In order to be resilient enough to handle external challenges, organizations must have robust plans in place for their supply chains," says Andrew MacMillen, analyst at Nucleus Research. "Proactive resource management has become essential for sustainable success and requires a greater level of collaboration across an organizations departments. Leading SCP solutions realize this, and can consolidate siloed data into a unified view to deliver value."

See the full report at: https://nucleusresearch.com/research/single/scp-technology-value-matrix-2020/

About Nucleus Research

Nucleus Research is a global provider of investigative, case-based technology research and advisory services. We deliver the numbers that drive business decisions. For more information, visit NucleusResearch.com or follow us on Twitter @NucleusResearch.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200324005437/en/

Contacts

Adam Ouellet, InkHouse
nucleus@inkhouse.com
978-413-4341

See more here:
2020 Supply Chain Planning Value Matrix Underscores Benefits of Machine Learning and Customizable Integrations - Yahoo Finance

Research report covers the AI/Machine Learning Market share and Growth, 2019-2025 – Packaging News 24

Having published myriad reports, AI/Machine Learning Market Research serves clients all over the globe. Our dedicated team of experts delivers reports with accurate data extracted from trusted sources. We ride the wave of digitalization to keep clients abreast of the changing trends in various industries, regions and consumer segments. As customer satisfaction is our top priority, our analysts are available 24/7 to provide tailored business solutions.

In this new business intelligence report, AI/Machine Learning Market Research serves a platter of market forecast, structure, potential, and socioeconomic impacts associated with the global AI/Machine Learning market. With Porter's Five Forces and DROT analyses, the research study incorporates a comprehensive evaluation of the positive and negative factors, as well as the opportunities regarding the AI/Machine Learning market.

Request Sample Report @ https://www.researchmoz.com/enquiry.php?type=S&repid=2279818&source=atm

The AI/Machine Learning market report has been segmented into key regions that offer worthwhile growth to vendors: Region 1 (Country 1, Country 2), Region 2 (Country 1, Country 2) and Region 3 (Country 1, Country 2). Each geographic segment has been assessed based on supply-demand status, distribution, and pricing. Further, the study provides information about the local distributors with which the market players could create collaborations in a bid to sustain their production footprint.

The key players covered in this study: Google, IBM, Baidu, SoundHound, Zebra Medical Vision, Prisma, Iris AI, Pinterest, TrademarkVision, Descartes Labs, Amazon.

Market segment by type, the product can be split into: TensorFlow, Caffe2, Apache MXNet.

Market segment by application, split into: Automotive, Scientific Research, Big Data, Other.

Market segment by regions/countries, this report covers: United States, Europe, China, Japan, Southeast Asia, India, Central & South America.

The study objectives of this report are:
- To analyze the global AI/Machine Learning status, future forecast, growth opportunities, key markets and key players.
- To present the AI/Machine Learning development in the United States, Europe and China.
- To strategically profile the key players and comprehensively analyze their development plans and strategies.
- To define, describe and forecast the market by product type, market and key regions.

In this study, the years considered to estimate the market size of AI/Machine Learning are as follows:
History Year: 2014-2018
Base Year: 2018
Estimated Year: 2019
Forecast Year: 2019 to 2025
For the data information by region, company, type and application, 2018 is considered as the base year. Whenever data information was unavailable for the base year, the prior year has been considered.

Make An Enquiry About This Report @ https://www.researchmoz.com/enquiry.php?type=E&repid=2279818&source=atm

What does the AI/Machine Learning market report contain?

Readers can find answers to the following questions in the AI/Machine Learning market report:

And many more

You can Buy This Report from Here @ https://www.researchmoz.com/checkout?rep_id=2279818&licType=S&source=atm

For More Information Kindly Contact:

ResearchMoz.com

Mr. Nachiket Ghumare,

90 State Street,

Albany NY,

United States 12207

Tel: +1-518-621-2074

USA-Canada Toll Free: 866-997-4948

Email: [emailprotected]

Read more here:
Research report covers the AI/Machine Learning Market share and Growth, 2019-2025 - Packaging News 24

Pioneering deep learning in the cyber security space: the new standard? – Information Age

Applying deep learning in the cyber security space has many benefits, such as the prediction of unknown threats and zero time classification

Will cyber security solutions move from machine learning to deep learning?

The use of deep learning in the cyber security space is an emerging trend. But it has the potential to transform a security model that is currently broken, by predicting new attacks before they've breached an organisation's network or device.

"Cyber security has a coronavirus situation every day," said Jonathan Kaftzan, VP Marketing at Deep Instinct, during his presentation as part of the latest IT Press Tour.

Deep learning neural models can predict the new variations of existing cyber attacks that occur daily, while the majority of current solutions on the market can only detect infected systems or anomalies, then contain and remediate them; this is costly and unsustainable.

Deep learning technology, a subset of machine learning algorithms (which is itself a subset of artificial intelligence algorithms), can predict and protect organisations from known and unknown cyber attacks in real-time, while mitigating the problem of false positives. It is changing how organisations build and manage their cyber security stack.

Many traditional solutions can also only protect specific domains or operating systems (one vendor for Windows and Android, for example). A single platform that can handle any threat at any time is more viable.

Before delving into deep learning and cyber security, it's important to identify why the cyber security model is broken.

From 2008 to 2018, the number of data breaches almost doubled, from 636 to 1,244. The number of files and records exposed also jumped, from 35.7 million to 446.5 million over the same period.

The problem is getting worse, despite investment in cyber security increasing by 30%. Gartner has predicted that the market will be worth $248.6 billion by 2023, according to Statista.

"There has been a huge growth in cyber security investment, but nothing has improved. In fact, it's got worse. The cyber security model is not working," stated Kaftzan.


1. Volume: more than 350,000 new malicious programmes are created every day (mostly by machines). It is easy to modify existing malware and create a completely new cyber attack. It is overwhelming.

2. A question of when, not if: 67% of CIOs thought the possibility of their company experiencing a data breach or cyber attack in 2018 was a given, according to a survey from the Ponemon Institute.

3. Cost: a big breach can cost an enterprise between $40 million and $350 million.

Detection time is costly.

4. Skills shortage in cyber: 69% of organisations say their cyber security teams are understaffed, while there will be as many as 3.5 million unfilled positions in the industry by 2021.

5. Complexity of cyber attacks: the level of sophistication and complexity of cyber attacks is increasing. AI-based malware and adversarial learning (using a neural network, DL or ML, to attack another neural network) are also beginning to threaten networks.

By using a deep learning neural network algorithm, organisations can detect and prevent known and unknown cyber security threats in real-time.

Referring to Deep Instinct's platform, Kaftzan said: "The time it takes us to analyse a file before you've even clicked it, a file we've never seen before, and assess whether it is malicious or not, is 20 milliseconds. In another 50 milliseconds we'll be able to tell you where the attack has come from and what it is, autonomously, without any human being involved in the process. The time it takes to remediate and contain the attack is under a minute."

SE Labs tested the solution and found it had a 100% prevention score with 0% false positives (instances where the system flags a security vulnerability that you do not have).

HP is an investor and strategic partner of Deep Instinct. It has installed the technology in all the laptops it sells to the enterprise market: "millions of new laptops [HP Sure Sense powered by Deep Instinct] are protected using our technology," added Kaftzan.


Predictions of unknown threats.

Zero time prediction and detection.

Zero time classification.

Works across any device, operating system or file.

Doesnt rely on connection (edge deployment).

Deep learning is a subcategory of algorithms within machine learning, while machine learning is a broad set of algorithms under artificial intelligence.

"Everyone is talking about AI, but it has been around for many years. You can define the technology as a system that mimics human intelligence by making decisions. There are many forms of human intelligence, such as if you do A, then B will happen; many systems are already using this type of rule-based decision-making."

"In this definition, using AI is the norm," continued Kaftzan.
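A hedged toy example of what Kaftzan means by rule-based decision-making; the security-flavored rule and threshold are invented for illustration:

```python
# Rule-based "AI" in this sense: a hand-written "if A, then B"
# decision, with no learning from data involved.
def triage(alert):
    # Invented example rule: too many failed logins means block.
    return "block" if alert["failed_logins"] > 5 else "allow"

print(triage({"failed_logins": 7}))  # -> block
print(triage({"failed_logins": 1}))  # -> allow
```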

Machine learning was developed in the 1980s. Here, the algorithms could learn from datasets and make decisions based on that.

"Machine learning couldn't improve human challenges until 10 years ago, when deep learning neural networks were introduced. They became available because of better infrastructure (GPUs)," explained Kaftzan.

Machine learning is reliant on the human stack. It is limited, therefore, by the data (under 2% of data is analysed), the human (lack of knowledge and expertise), adversaries (mutating and growing cyber attacks) and the size of the datasets.

Attackers can also hide the malicious features through things like encryption, commonly known as feature obfuscation.

An end-to-end deep learning framework, however, is the only algorithm in the AI family that can analyse and make assumptions from all the raw data without human involvement.
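Deep Instinct's model is proprietary. Purely to illustrate what end-to-end learning on raw data means in this context, the following PyTorch sketch classifies raw file bytes with no hand-engineered features; the architecture and sizes are invented for the example:

```python
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    """Toy end-to-end model: raw bytes in, malicious/benign score out."""
    def __init__(self, max_len=4096):
        super().__init__()
        self.embed = nn.Embedding(256, 8)          # one vector per byte value
        self.conv = nn.Sequential(
            nn.Conv1d(8, 64, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, byte_ids):                   # (batch, max_len) ints 0..255
        x = self.embed(byte_ids).transpose(1, 2)   # -> (batch, 8, max_len)
        x = self.conv(x).squeeze(-1)               # -> (batch, 64)
        return torch.sigmoid(self.head(x))         # probability "malicious"

model = RawByteClassifier()
fake_file = torch.randint(0, 256, (1, 4096))       # stand-in for real file bytes
print(model(fake_file))                            # e.g. tensor([[0.49]])
```

The point of the sketch is the interface, not the architecture: nothing upstream extracts features, so the network must learn everything, including obfuscated patterns, from the bytes themselves.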

Deep learning solutions produce over 99% accuracy with unknown malware and 0.0001% false positives, compared to traditional ML, which produces 50-70% accuracy with unknown malware and 1-2% false positives.

There is a growing prevalence of deep learning in real-world solutions, but the technology is lagging in cyber security:

Computer vision: 98% deep learning, 2% traditional machine learning.

Speech recognition: 80% deep learning, 20% traditional machine learning.

Text understanding: 65% deep learning, 35% traditional machine learning.

Cyber security: 2% deep learning, 98% traditional machine learning.

An end-to-end deep learning framework can predict the mutations of existing malware and prevent them in real-time, before the impact can be felt on a device or network. The algorithm is developed entirely in C/C++ and is optimised using NVIDIA GPUs for training (before it is deployed on endpoints using regular CPUs).

Deep Instinct is pioneering deep learning in the cyber security space. But there are a number of challenges. The main one is that there are not enough experts who can build deep learning algorithms; they're all snapped up straight out of university by the big companies, such as Baidu (speech recognition) and Google (NLP).

Read the original here:
Pioneering deep learning in the cyber security space: the new standard? - Information Age

New Bellwethr Report Highlights the Devastating Impact Coronavirus is Already Having on US Small Businesses – AiThority

Machine learning-powered customer conversion and retention platform Bellwethr has released findings from its new report, The Impact of Coronavirus on US Businesses [March 2020].

The goal of the survey was to better understand the specific ways coronavirus is already impacting day-to-day business operations and how business owners feel about what will happen in the upcoming months.

Commenting on the report, Bellwethr COO and co-founder Daron Jamison said, "While the results paint a dark picture of the reality that many businesses are facing right now, there is a tremendous opportunity for those companies who are able to calm their nerves and use this time to test new ideas, double down on what they know is working, and look for ways to improve efficiencies."


Key Findings:


Read the original:
New Bellwethr Report Highlights the Devastating Impact Coronavirus is Already Having on US Small Businesses - AiThority

Is Machine Learning The Quantum Physics Of Computer Science ? – Forbes

Preamble: Intermittently, I will be introducing columns that explore some seemingly outlandish concepts. The purpose is partly humor, but also to provoke some thought. Enjoy.


"God does not play dice with the universe," Albert Einstein is reported to have said about the field of quantum physics. He was referring to the great divide at the time in the physics community between general relativity and quantum physics. General relativity was a theory which beautifully explained a great deal of physical phenomena in a deterministic fashion. Meanwhile, quantum physics grew out of a model which fundamentally had a probabilistic view of the world. Since Einstein made that statement in the mid-1950s, quantum physics has proven to be quite a durable theory, and in fact, it is used in a variety of applications such as semiconductors.

One might imagine a past leader in computer science such as Donald Knuth exclaiming, "Algorithms should be deterministic." That is, given any input, the output should be exact and known. Indeed, since its formation, the field of computer science has focused on building elegant deterministic algorithms which have a clear view of the transformation between inputs and outputs. Even in the regime of non-determinism such as parallel processing, the objective of the overall algorithm is to be deterministic. That is, despite the fact that operations can run out-of-order, the outputs are still exact and known. Computer scientists work very hard to make that a reality.

As computer scientists have engaged with the real world, they frequently face very noisy inputs such as sensors or even worse, human beings. Computer algorithms continue to focus on faithfully and precisely translating input noise to output noise. This has given rise to the Junk In Junk Out (JIJO) paradigm. One of the key motivations for pursuing such a structure has been the notion of causality and diagnosability. After all, if the algorithms are noisy, how is one to know the issue is not a bug in the algorithm? Good point.

With machine learning, computer science has transitioned to a model where one trains a machine to build an algorithm, and this machine can then be used to transform inputs to outputs. Since the process of training is dynamic and often ongoing, the data and the algorithm are intertwined in a manner which is not easily unwound. Similar to quantum physics, there is a class of applications where this model seems to work. Recognizing patterns seems to be a good application. This is a key building block for autonomous vehicles, but the results are probabilistic in nature.

In quantum physics, there is an implicit understanding that the answers are often probabilistic. Perhaps this is the key insight which can allow us to leverage the power of machine learning techniques and avoid the pitfalls. That is, if the requirements of the algorithm must be exact, perhaps machine learning methods are not appropriate. As an example, if your bank statement were correct only with somewhat high probability, this would not be comforting. However, if machine learning algorithms can identify with high probability the instances of potential fraud, the job of a forensic CPA is made quite a bit more productive. Similar analogies exist in the area of autonomous vehicles.
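A minimal sketch of that division of labor, using scikit-learn on synthetic transactions (all features, data, and thresholds are invented): the model never settles the balance, it only prioritizes cases for a human reviewer.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))                 # toy transaction features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000)) > 1.5

clf = LogisticRegression().fit(X, y)

new_txns = rng.normal(size=(4, 3))
fraud_prob = clf.predict_proba(new_txns)[:, 1]  # probabilistic, not exact
for p in fraud_prob:
    print("flag for forensic review" if p > 0.8 else "pass", f"(p={p:.2f})")
```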

Overall, machine learning seems to define the notion of probabilistic algorithms in computer science in a similar manner as quantum physics. The critical challenge for computing is to find the correct mechanisms to design and validate probabilistic results.

See original here:
Is Machine Learning The Quantum Physics Of Computer Science ? - Forbes

Fujitsu Laboratories and Quantum Benchmark begin joint research on algorithms with error suppression for quantum computing – Green Car Congress

Fujitsu Laboratories Ltd. and Quantum Benchmark Inc. of Canada will conduct joint research on quantum algorithms using Quantum Benchmark's error suppression technology as they aim to advance the capabilities of current generation quantum computing platforms.

Quantum Benchmark, a startup founded by leading researchers from the University of Waterloo's Institute for Quantum Computing, provides software solutions for error characterization, error suppression, and performance validation for quantum computing hardware.

In this collaborative research project, the companies will develop practical quantum algorithms utilizing Fujitsu's AI algorithm development technology as well as its knowledge gained through Digital Annealer applications in finance, medicine and material development. The Digital Annealer is Fujitsu's new quantum-inspired architecture that can rapidly resolve combinatorial optimization problems.

Overview of the joint research.

Quantum Benchmark's patented True-Q software system, which enables optimal performance of current hardware, is key to this development. Accordingly, Fujitsu Laboratories and Quantum Benchmark will endeavor to solve problems in the fields of materials science, drug development and finance that are intractable with conventional computers.

Quantum computers are expected to be able to perform a new form of computation by harnessing fundamental properties of the quantum world, such as entanglement and superposition. This is often explained by invoking the idea that they can process both 0 and 1 at the same time, and the continuum of states in between 0 and 1. This advantage comes from performing calculations using quantum bits, called "qubits", unlike conventional computers, which process bits that can only be 0 or 1. However, quantum bits are fragile and highly vulnerable to errors and noise, and as time goes on, the effects of noise add up, making the quantum calculation results inaccurate. Since calculations for pharmaceuticals and materials are time-consuming, there is a need to develop error-suppression methods enabling algorithms to overcome the effects of noise.
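In standard textbook notation, the qubit state being described, holding 0 and 1 at the same time, is a superposition:

```latex
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle ,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 .
\]
```

Measuring such a qubit yields 0 with probability |α|² and 1 with probability |β|²; a register of n qubits occupies a 2ⁿ-dimensional state space, which is both the source of the computational power and the reason accumulated noise is so damaging.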

Under the partnership, which is slated to run to March 2021, and planned for extension after April 2021, Fujitsu will develop quantum algorithms for applications such as quantum chemistry and machine learning, and develop performance analysis technology for quantum algorithms in simulations.

Quantum Benchmark will support the implementation of True-Q error diagnosis technology on current quantum computing platforms; support implementation of quantum algorithms on current quantum computing platforms; and support custom specific error suppression strategies and performance evaluation for quantum algorithms on current quantum computing platforms.

Fujitsu Laboratories and Quantum Benchmark will expand the scope of their joint research beyond finance, drug discovery, and materials, as they plan to develop quantum algorithms to be implemented in quantum computers for various applications which could not be solved with conventional computers. The companies aim to demonstrate new applications on a 100+ qubit quantum computer by 2023.

Original post:
Fujitsu Laboratories and Quantum Benchmark begin joint research on algorithms with error suppression for quantum computing - Green Car Congress

Research by University of Chicago PhD Student and EPiQC Wins IBM Q Best Paper – Quantaneo, the Quantum Computing Source

The interdisciplinary team of researchers from UChicago, University of California, Berkeley, Princeton University and Argonne National Laboratory won the $2,500 first-place award for Best Paper. Their research examined how the VQE quantum algorithm could improve the ability of current and near-term quantum computers to solve highly complex problems, such as finding the ground state energy of a molecule, an important and computationally difficult chemical calculation the authors refer to as a "killer app" for quantum computing.

Quantum computers are expected to perform complex calculations in chemistry, cryptography and other fields that are prohibitively slow or even impossible for classical computers. A significant gap remains, however, between the capabilities of today's quantum computers and the algorithms proposed by computational theorists.

"VQE can perform some pretty complicated chemical simulations in just 1,000 or even 10,000 operations, which is good," Gokhale says. "The downside is that VQE requires millions, even tens of millions, of measurements, which is what our research seeks to correct by exploring the possibility of doing multiple measurements simultaneously."

Gokhale explains the research in this video.

With their approach, the authors reduced the computational cost of running the VQE algorithm by a factor of 7 to 12. When they validated the approach on one of IBM's cloud-service 20-qubit quantum computers, they also found lower error compared to traditional methods of solving the problem. The authors have shared their Python and Qiskit code for generating circuits for simultaneous measurement, and have already received numerous citations in the months since the paper was published.
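The released code is not reproduced here, but the core trick, measuring mutually commuting Pauli terms of the Hamiltonian in one shot, can be sketched with a greedy qubit-wise grouping in plain Python. The Hamiltonian terms below are toy placeholders, and this is not the authors' exact algorithm:

```python
def qubitwise_commute(p, q):
    """Two Pauli strings can be measured simultaneously (qubit-wise)
    if, on every qubit, they agree or one of them is the identity."""
    return all(a == b or a == "I" or b == "I" for a, b in zip(p, q))

def greedy_groups(paulis):
    """Greedily pack each term into the first group it commutes with."""
    groups = []
    for p in paulis:
        for g in groups:
            if all(qubitwise_commute(p, q) for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Terms from a toy 4-qubit Hamiltonian (illustrative only).
terms = ["ZZII", "ZIZI", "XXII", "IXXI", "IIZZ", "YYII"]
print(greedy_groups(terms))
# -> [['ZZII', 'ZIZI', 'IIZZ'], ['XXII', 'IXXI'], ['YYII']]
```

Six separate measurement settings collapse to three, which is the same kind of saving, at much larger scale, that drives the 7-to-12-fold reduction reported above.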

For more on the research and the IBM Q Best Paper Award, see the IBM Research Blog. Additional authors on the paper include Professor Fred Chong and PhD student Yongshan Ding of UChicago CS, Kaiwen Gui and Martin Suchara of the Pritzker School of Molecular Engineering at UChicago, Olivia Angiuli of University of California, Berkeley, and Teague Tomesh and Margaret Martonosi of Princeton University.

Link:
Research by University of Chicago PhD Student and EPiQC Wins IBM Q Best Paper - Quantaneo, the Quantum Computing Source