Information teleported between two computer chips for the first time – New Atlas

Scientists at the University of Bristol and the Technical University of Denmark have achieved quantum teleportation between two computer chips for the first time. The team managed to send information from one chip to another instantly without them being physically or electronically connected, in a feat that opens the door for quantum computers and quantum internet.

This kind of teleportation is made possible by a phenomenon called quantum entanglement, where two particles become so entwined with each other that they can communicate over long distances. Changing the properties of one particle will cause the other to instantly change too, no matter how much space separates the two of them. In essence, information is being teleported between them.

Hypothetically, there's no limit to the distance over which quantum teleportation can operate, and that raises some strange implications that puzzled even Einstein himself. Our current understanding of physics says that nothing can travel faster than the speed of light, and yet, with quantum teleportation, information appears to break that speed limit. Einstein dubbed it "spooky action at a distance."

Harnessing this phenomenon could clearly be beneficial, and the new study helps bring that closer to reality. The team generated pairs of entangled photons on the chips, and then made a quantum measurement of one. This observation changes the state of the photon, and those changes are then instantly applied to the partner photon in the other chip.

"We were able to demonstrate a high-quality entanglement link across two chips in the lab, where photons on either chip share a single quantum state," says Dan Llewellyn, co-author of the study. "Each chip was then fully programmed to perform a range of demonstrations which utilize the entanglement. The flagship demonstration was a two-chip teleportation experiment, whereby the individual quantum state of a particle is transmitted across the two chips after a quantum measurement is performed. This measurement utilizes the strange behavior of quantum physics, which simultaneously collapses the entanglement link and transfers the particle state to another particle already on the receiver chip."
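
For readers who want the mechanics behind that quote, here is the standard textbook teleportation identity (generic notation, not taken from the Bristol paper): the sender's unknown state, written against a shared entangled pair, decomposes into the four possible outcomes of the joint measurement.

```latex
% Unknown state |psi>_1 = a|0> + b|1>; shared pair |Phi+>_23 = (|00> + |11>)/sqrt(2).
\[
|\psi\rangle_1 \otimes |\Phi^+\rangle_{23}
= \tfrac{1}{2}\Big[
   |\Phi^+\rangle_{12}\,(\alpha|0\rangle + \beta|1\rangle)_3
 + |\Phi^-\rangle_{12}\,(\alpha|0\rangle - \beta|1\rangle)_3
 + |\Psi^+\rangle_{12}\,(\beta|0\rangle + \alpha|1\rangle)_3
 + |\Psi^-\rangle_{12}\,(\alpha|1\rangle - \beta|0\rangle)_3
\Big]
\]
```

A Bell measurement on qubits 1 and 2 therefore leaves qubit 3, sitting on the receiver chip, in the original state up to a simple, known correction; the receiver still needs the two classical measurement bits to apply that correction.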

The team reported a teleportation success rate of 91 percent, and managed to perform some other functions that will be important for quantum computing. That includes entanglement swapping (where states can be passed between particles that have never directly interacted via a mediator), and entangling as many as four photons together.

Information has been teleported over much longer distances before: first across a room, then 25 km (15.5 mi), then 100 km (62 mi), and eventually over 1,200 km (746 mi) via satellite. It's also been done between different parts of a single computer chip before, but teleporting between two different chips is a major breakthrough for quantum computing.

The research was published in the journal Nature Physics.

Source: University of Bristol

The Impact of Quantum Computing on Banking will be gigantic says Deltec Bank, Bahamas – Quantaneo, the Quantum Computing Source

However, even with that progression, there are still jobs that classical computers are not powerful enough to do. The answer looks set to come from quantum computing. In this post, we will look at what quantum computing is and how it could revolutionize a long-standing industry such as banking.

What is Quantum Computing?

Quantum computers are expected to be a new kind of technology that can solve complex problems well beyond the capabilities of traditional systems. If you take an everyday problem like climate change, the intricacies of solving it are incredibly complex. A standard computer does not have the power or ability to even get close to genuinely understanding everything that is going on. The main reason is the endless amounts of data that computers need to process to generate an accurate decision.

A quantum computer is often loosely likened to a supercomputer. It promises highly advanced processing power that can take masses of variables into account, helping predict weather patterns and natural disasters in the case of climate change.

Brief Technical Summary

A typical computer stores information in what are known as bits. The quantum computing equivalent is the qubit. Qubits have certain properties that mean a connected group of them can provide far more processing power than the same number of binary bits. In short, where binary bits store 1s and 0s to handle a task, qubits can represent numerous possible combinations of these simultaneously.
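
In standard textbook notation (not specific to any vendor's hardware), the state of n qubits is a weighted superposition over every classical bit string, which is where the "numerous combinations simultaneously" intuition comes from:

```latex
\[
|\psi\rangle \;=\; \sum_{x \in \{0,1\}^{n}} \alpha_x\,|x\rangle,
\qquad \sum_{x} |\alpha_x|^{2} = 1,
\]
```

so even 50 qubits carry on the order of 2^50 (about 10^15) complex amplitudes in their description, far beyond what a 50-bit classical register holds.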

Practical Example

An example of this could be running a travel agency. Let's say three people, Jenny, Anna and Steve, need to move from one place to another. For that purpose, there are two taxis, and the problem you want to solve is who gets into which taxi. However, we know that Jenny and Anna are friends, Jenny and Steve are enemies, and Anna and Steve are enemies.

The aim would be to maximize the number of friend pairs and minimize the enemy pairs sharing the same taxi. A classical computer would have to represent each possible assignment with bits and check them one at a time before arriving at a solution. A quantum computer, however, can use qubits to represent all the candidate assignments at once, letting it home in on the best one in a handful of operations rather than an exhaustive search.

The difference is that a traditional computer has to perform more and more calculations as the data scales up, whereas a quantum computer's workload grows far more slowly for problems of this kind.
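
As a point of comparison, here is a minimal sketch of the classical, brute-force approach to this toy problem (the scoring convention is our own illustration): enumerate every taxi assignment and score friend pairs against enemy pairs.

```python
from itertools import product

# Relationships from the example: +1 when friends share a taxi, -1 when enemies do.
PEOPLE = ["Jenny", "Anna", "Steve"]
RELATION = {("Jenny", "Anna"): 1, ("Jenny", "Steve"): -1, ("Anna", "Steve"): -1}

def score(assignment):
    """Friend pairs minus enemy pairs that end up in the same taxi."""
    return sum(value for (a, b), value in RELATION.items()
               if assignment[a] == assignment[b])

# Classical brute force: all 2**3 = 8 assignments, checked one by one.
best = max((dict(zip(PEOPLE, taxis)) for taxis in product([0, 1], repeat=len(PEOPLE))),
           key=score)
print(best, score(best))   # e.g. Jenny and Anna share one taxi, Steve rides alone
```

Every extra passenger doubles the number of assignments to check, which is exactly the scaling problem the quantum pitch is aimed at.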

In the real world, one industry that could heavily benefit from this technology and processing power is banking.

Quantum Computing in Banking

In an article from Banco Bilbao Vizcaya Argentaria (BBVA) from October 2019, it was suggested that this kind of quantum computing power might fundamentally change the face of banking in time.

Encryption of personal data is critical to banking, with RSA-2048 being used at the highest levels. For a classical computer to find the key and decrypt the data would take on the order of 10^34 steps. To put that into context, a processor capable of a trillion operations per second would still take 317 billion years to resolve the problem. Realistically, that makes decryption impossible.

However, a quantum computer could solve the decryption in just 10^7 steps. If the computer were running at a million operations per second, this calculation would only take 10 seconds to complete. The potential of quantum computing in this context is quite amazing. That said, we are still a long way off having enough processing power to reach those heights, but experts are working on it.
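
Taking those quantum-side figures at face value, the 10-second claim is just steps divided by rate:

```latex
\[
t \;=\; \frac{N_{\text{steps}}}{\text{rate}}
  \;=\; \frac{10^{7}\ \text{steps}}{10^{6}\ \text{steps/s}}
  \;=\; 10\ \text{s}.
\]
```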

Barclays

Researchers at Barclays Bank, in collaboration with IBM, have created a proof-of-concept quantum-optimized application. The solution revolves around the transaction settlement process. Settlement works on a transaction-by-transaction basis: trades are pushed into a queue and settled in batches. During a processing window, as many trades as possible from the queue are settled.

Trades are complex by nature according to Lee Braine, director of research and engineering at Barclays. Traders can tap into funds before the transaction has been cleared. They are settled if funding is available or if there is some sort of credit collateral facility.

To put the complexity in context, a small number of trades could, in theory, be settled in your head. However, as you get up to 10 or 20 transactions, you might need to use a pen and paper. Any more than that and we move into classical computing. But as we get to hundreds of trades, even those machines begin to run into limitations.

A bit like the travel agency example we gave earlier, a quantum computer could work through masses of complex, interdependent aspects of trading. Using a seven-qubit system, the team could identify problem instances of sufficient complexity that the same calculations would need about 200 traditional computers.
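
To make the settlement-batching idea concrete, here is a toy classical sketch (trade values and the funding limit are invented for illustration): pick the batch of queued trades that settles the most value without exceeding available funds, a combinatorial search whose cost doubles with every trade added to the queue.

```python
from itertools import combinations

# Hypothetical queue: (trade id, cash needed to settle, value settled).
trades = [("t1", 40, 50), ("t2", 30, 45), ("t3", 25, 20), ("t4", 50, 70)]
available_funds = 80

best_batch, best_value = (), 0
# Brute force over every possible batch: 2**len(trades) candidates.
for size in range(len(trades) + 1):
    for batch in combinations(trades, size):
        cost = sum(cash for _, cash, _ in batch)
        value = sum(val for _, _, val in batch)
        if cost <= available_funds and value > best_value:
            best_batch, best_value = batch, value

print([tid for tid, _, _ in best_batch], best_value)   # best feasible batch and its value
```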

JP Morgan

Using an IBM machine, researchers at JP Morgan have demonstrated that they could simulate the future value of a financial product. They are testing the use of quantum computers to speed up intensive pricing calculations that would take traditional machines hours to compute. As portfolios become larger, the algorithms grow in complexity and could reach a point where they are impossible to calculate in practice.

The research by the team has shown that a commercial-grade quantum computer can run the same calculations in a matter of seconds.
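
For a sense of what those intensive pricing calculations look like classically, here is a bare-bones Monte Carlo estimate of a simple option's value (all parameters are illustrative); the quantum pitch, via amplitude estimation, is that far fewer samples are needed for the same accuracy.

```python
import math
import random

# Illustrative European call option under geometric Brownian motion.
spot, strike, rate, vol, maturity = 100.0, 105.0, 0.01, 0.2, 1.0
paths = 100_000

random.seed(0)
total_payoff = 0.0
for _ in range(paths):
    z = random.gauss(0.0, 1.0)                       # one random market scenario
    s_t = spot * math.exp((rate - 0.5 * vol**2) * maturity
                          + vol * math.sqrt(maturity) * z)
    total_payoff += max(s_t - strike, 0.0)

# Discounted average payoff; the error shrinks only as 1/sqrt(paths) classically.
price = math.exp(-rate * maturity) * total_payoff / paths
print(round(price, 2))
```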

Summary

According to Deltec Bank, Bahamas, banks are successfully testing quantum computers to solve problems that were previously very resource-intensive or impossible to complete. Although the technology is still some years away from changing the way banks calculate financial models, due to complex hardware requirements, it is important to start testing now.

IBM themselves have stated they are a while away from a perfect solution, with big breakthroughs still required, but the time will certainly come.

Science stories that shaped 2019 – Telegraph India

This was the year of quantum physics, which redefined the kilogram and the computer. It was also the year of teamwork. Hundreds of scientists across the globe worked together to do the seemingly impossible: capture an image of a black hole. A global collaboration of scientists journeyed into the heart of the Arctic to measure how the climate is changing in this crucial spot. This was also the year we lost a portion of the Amazon rainforest to a fire fuelled by greed.

First image of a black hole

After more than a decade at work, the Event Horizon Telescope, a large telescope array consisting of a global network of radio telescopes, stunned the world by capturing the first direct image of a black hole, situated at the centre of the Messier 87 galaxy, 54 million light years away. The image shows a circular silhouette outlined by emission from hot gas swirling around it, lending credibility to Einstein's theory of general relativity near black holes.

Evidence of black holes from which nothing, not even light, can escape has existed for aeons. And astronomers have long observed the effects of these mysterious phenomena on their surroundings. Because of the lack of light, it was believed that you could not snap an image of these caverns in space.

Polarstern breaks ice

The German icebreaker ship RV Polarstern is right now stuck in the midst of the frozen Arctic sea at the North Pole. It's on a mission known as the Multidisciplinary drifting Observatory for the Study of Arctic Climate (MOSAiC), the largest climate-change research expedition to the central Arctic. This region, one of the most inaccessible places on our planet, is critical to Earth's climate, and it's essential to study it thoroughly.

During the year-long expedition (September 2019 to September 2020) that has taken 20 years to organise, over 600 researchers will rotate on and off the ship, supported by many more in research institutes across the world. The data harvested should give us an accurate picture of ice or its absence near the North Pole and is expected to silence climate change sceptics forever.

Googles quantum claim

Google claims to have reached a long-sought breakthrough called "quantum supremacy" that allows computers to calculate at inconceivable speeds. While some scientists are cautious about the implications, major tech companies in the US and China are investing heavily in quantum computing. IBM, a Google competitor, described the term quantum supremacy as misleading and proposed another metric, "quantum volume".

Denisovan discoveries

A jawbone of a 160,000-year-old Denisovan, a hominin group that existed alongside Neanderthals and disappeared about 50,000 years ago, was recently discovered on the Tibetan Plateau. This is the first time a fossil of this species has been found outside the Denisova Cave in Siberia, confirming the theory that these relatives of modern humans once lived across much of central and eastern Asia. The find also suggests Denisovans may have evolved genetic adaptations to high altitudes, which Tibetans inherited thanks to interbreeding between Denisovans and modern humans.

Crispr in clinical trials

Crispr/Cas9, a gene editing technique akin to molecular scissors that can snip, repair or insert genes into DNA, went into a spate of clinical trials. The technique holds the promise of curing nearly 6,000 known genetic diseases. There is already clinical evidence that it has cured two patients in the US, one suffering from beta thalassaemia and the other from sickle cell disease.

Crash course on the moon

The race to land on the moon is back in vogue. While China's Chang'e-4 lander touched down smoothly on the moon's far side in January, probes sent by the Israeli organisation SpaceIL and the Indian Space Research Organisation crash-landed. China plans to launch another lunar lander next year. The European Space Agency, Russia and Nasa hope to follow in its footsteps.

Kilogram, redefined

In the biggest overhaul of the International System of Units, four units (the kilogram, kelvin, ampere and mole) were redefined in terms of constants of nature. The new definition anchors the value of the kilogram to the Planck constant, an unvarying and infinitesimal number at the heart of quantum physics. Previously, the kilogram was defined as the mass of a specific object (stored in a Paris vault) that represented the mass of one litre of pure water at its freezing point.

Amazon ablaze

The Amazon rainforest, the world's largest carbon sink, was irreversibly damaged after settlers allegedly set fire to it, with tacit support from the Brazilian government. Data released by Brazil's National Institute for Space Research shows that from January to July, fires consumed 4.6 million acres of the Brazilian part of the Amazon rainforest. The nation's right-wing President, Jair Bolsonaro, wants to facilitate the interests of industries in the forest, uncaring of the worldwide environmental concern.

2020 Will be a Banner Year for AI Custom Chipsets and Heterogenous Computing; Quantum Computing Remains on the Far Horizon – Business Wire

OYSTER BAY, N.Y.--(BUSINESS WIRE)--The year 2020 will be an exciting one for the Artificial Intelligence (AI) chipset market. In 2020 alone, more than 1.4 million cloud AI chipsets and 330 million edge AI chipsets are forecasted to be shipped, generating a total revenue of US$9 billion, states global tech market advisory firm, ABI Research.

In its new whitepaper, 54 Technology Trends to Watch in 2020, ABI Research's analysts have identified 35 trends that will shape the technology landscape and 19 others that, although attracting huge amounts of speculation and commentary, look less likely to move the needle over the next twelve months. "After a tumultuous 2019 that was beset by many challenges, both integral to technology markets and derived from global market dynamics, 2020 looks set to be equally challenging," says Stuart Carlaw, Chief Research Officer at ABI Research.

What will happen in 2020:

More custom AI chipsets will be launched: We've already seen the launch of new custom AI chipsets by major vendors and new startups alike. "From Cerebras Systems' world's-largest chipset to Alibaba's custom cloud AI inference chipset, the AI chipset industry has been hugely impacted by the desire to reduce energy consumption, achieve higher performance, and, in the case of China, minimize the influence of Western suppliers in their supply chain," says Lian Jye Su, AI & Machine Learning Principal Analyst at ABI Research. "2020 will be an exciting year for AI chipsets. Several stealth startups are likely to launch programmable chipsets for data centers, while the emergence of new AI applications in edge devices will give rise to more Application Specific Integrated Circuits (ASICs) dedicated to edge AI inference workloads."

Heterogeneous computing will emerge as the key to supporting future AI networks: Existing Artificial Intelligence (AI) applications and networks are currently serviced by different processing architectures, be that Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), CPUs, Digital Signal Processors (DSPs), or hardware accelerators, each used to its strength depending on the use case addressed. However, the next generation of AI and Machine Learning (ML) frameworks will be multimodal by nature and may require heterogeneous computing resources for their operations. "The leading players, including Intel, NVIDIA, Xilinx, and Qualcomm, will introduce new chipset types topped by hardware accelerators to address the new use cases," says Su. "Vendors of these chips will move away from offering proprietary software stacks and will start to adopt open Software Development Kits (SDKs) and Application Programming Interface (API) approaches to their tools in order to simplify the technology complexity for their developers and help them focus on building efficient algorithms for the new AI and ML applications."

What wont happen in 2020:

Quantum computing: "Despite claims from Google of achieving quantum supremacy, the tech industry is still far away from the democratization of quantum computing technology," Su says. "Existing vendors, such as IBM and D-Wave, will continue to enhance their existing quantum computing systems, but the developer community remains small and the benefits brought by these systems will still be limited to selected industries, such as the military, national laboratories, and aerospace agencies. Like other nascent processing technologies, such as photonic and neuromorphic chipsets, quantum computing systems in their current form still require a very stringent operating environment, a lot of maintenance, and custom adjustment, and are definitely not even remotely ready for large-scale commercial deployments," Su concludes.

For more trends that won't happen in 2020, and the 35 trends that will, download the 54 Technology Trends to Watch in 2020 whitepaper.

About ABI Research

ABI Research provides strategic guidance to visionaries, delivering actionable intelligence on the transformative technologies that are dramatically reshaping industries, economies, and workforces across the world. ABI Research's global team of analysts publish groundbreaking studies often years ahead of other technology advisory firms, empowering our clients to stay ahead of their markets and their competitors.

For more information about ABI Researchs services, contact us at +1.516.624.2500 in the Americas, +44.203.326.0140 in Europe, +65.6592.0290 in Asia-Pacific or visit http://www.abiresearch.com.

The 20 technologies that defined the first 20 years of the 21st Century – The Independent

The early 2000s were not a good time for technology. After entering the new millennium amid the impotent panic of the Y2K bug, it wasn't long before the Dotcom Bubble was bursting all the hopes of a new internet-based era.

Fortunately the recovery was swift and within a few years brand new technologies were emerging that would transform culture, politics and the economy.

They have brought with them new ways of connecting, consuming and getting around, while also raising fresh Doomsday concerns. As we enter a new decade of the 21st Century, we've rounded up the best and worst of the technologies that have taken us here, while offering some clue of where we might be going.

There was nothing much really new about the iPhone: there had been phones before, there had been computers before, there had been phones combined into computers before. There was also a lot that wasn't good about it: it was slow, its internet connection barely functioned, and it would be two years before it could even take a video.

But as the foremost smartphone it heralded a revolution in the way people communicate, listen, watch and create. There has been no aspect of life that hasn't been changed by the technologies bundled up in the iPhone: an ever-present and always-on internet connection, a camera that never leaves your side, a computer with mighty processing power that can be plucked out of your pocket.

Steve Jobs unveiled the first iPhone on 9 January, 2007 (Reuters)

The 2000s have, so far, been the era of mobile computers and social networking changing the shape of our cultural, political and social climate. All of those huge changes, for better or worse, are bound up in that tiny phone. AG

Though few people noticed, online social networks actually began at the end of the last century. The first was Six Degrees in 1997, which was named after the theory that everyone on the planet is separated by only six other people. It included features that became popular with subsequent iterations of the form, including profiles and friend lists, but it never really took off.

It wasn't until Friends Reunited and MySpace in the early 2000s that social networks achieved mainstream success, though even these seem insignificant when compared to Facebook.

Not only did Mark Zuckerberg's creation muscle its way to a monopoly in terms of social networks, it also swallowed up any nascent competitors in a space that came to be known as social media. First there was Instagram in 2012, for a modest $1 billion, and then came WhatsApp in 2014 for $19bn.

Between all of its apps, Facebook now reaches more than 2 billion people every day. It has come to define the way we communicate and heralded a new era of hyper-connectedness, while also profoundly shaping the internet as we know it. In doing so, Facebook has not only consigned the site Six Degrees to the history books, it has also re-written the theory itself, cutting it down to just three-and-a-half degrees of separation. AC

At the start of this century, the complete reinvention of the entire economic system wasn't something many people were talking about. But then the 2007-08 financial crisis happened. As mortgages defaulted, companies collapsed, and governments bailed out the banks to the tune of trillions of dollars, people began to wonder if there might be a better way.

One person or group believed they had the answer. Satoshi Nakamoto's true identity may still be a mystery, but their creation of a new electronic cash system called bitcoin in 2009 could have implications far beyond just currency. The underlying blockchain technology, an immutable and unhackable online ledger, could potentially transform everything from healthcare to real estate.

Bitcoin is yet to take off as a mainstream form of payment or transform the global economy like it might have promised, but we are barely a decade into the great cryptocurrency experiment. It has inspired thousands of imitators, including those currently being developed by Facebook and China, and it may be another 10 years before its true potential is finally realised. AC

"Alright, so here we are, in front of the, er, elephants. And the cool thing about these guys is that they have really, really, really long trunks. And that's cool."

It may have been an inauspicious start, but these words would go on to fundamentally transform the way people consume media in the 21st century. It was 23 April, 2005, and Jawed Karim had just uploaded the first ever video to YouTube, a video-sharing website he had helped create.

[Image gallery: the most popular YouTube channels, including T-Series, PewDiePie, 5-Minute Crafts, KondZilla, SET India, Justin Bieber, WWE, Cocomelon, Dude Perfect, HolaSoyGerman, Ed Sheeran, Badabun, Eminem, Whindersson Nunes and Ariana Grande]

Just over a year later, Google bought the site for $1.65 billion and the fortunes of Karim, his co-founders, and countless future content creators were changed forever.

There are now hundreds of hours of video published to YouTube every minute and it all started with that 18-second clip at the zoo. AC

Arthur C Clarke famously quipped that any sufficiently advanced technology is indistinguishable from magic. But there is surely nothing more like magic and no magic more powerful than the fact that the 21st century has brought the ability to instantly connect to information and people at the other side of the world.

First, at the beginning of the century, came 3G, and then 10 years or so later came 4G. Every decade of this century has been marked by new advances in the speed and reliability of mobile data connections.

And those mobile data connections have helped re-write the world that relies on them. Just about every other major breakthrough in technology that came through the 2000s (social media, instant photo sharing, citizen journalism and everything else) relied on having data connections everywhere.

5G, which has ostensibly already rolled out but is yet to make its full impact, is likely to be similarly transformative through the decade to come, if its evangelists are to be believed.

Debates have raged about whether this constant connectivity (and the distractions and dangers it has brought) has really driven us apart. But that too is surely testament to its power. AG

Many of technology's biggest developments in the 2000s haven't really been about technology at all: piracy and then streaming changed how we make and consume culture entirely, and social media has turned politics on its head. Nowhere is that more clear than in the gig economy and the apps and websites like Uber, Deliveroo and Airbnb that power it, which claim to be tech businesses but are really new ways of buying and selling labour.

The real revolution of the gig economy was not the technology that powers these apps: there is little difference between calling for a cab and summoning an Uber, really. Nor was it what the companies like to suggest, that they have opened up a new and inspiring way of working that allows anyone to clock on whenever they log on.

Instead, it was the beginning of a process of changing the way that people work and relate to those who fulfil services for them. It is likely that we have not seen the end of the kinds of profound changes that these companies have made to working conditions or the ways that those workers have fought back. AG

Virtual reality has been the future before: ever since the first stereoscopes, people have been excited about the possibility of disappearing into other worlds that appear before their eyes. But it has never quite arrived.

But in the more recent years of the 2000s it started to look a bit more meaningful. Virtual reality headsets have been pushed out by many of the world's biggest companies, and consumer computers are finally powerful enough to generate believable worlds that people are happy to spend their time in.

In recent years, much of the focus has turned to augmented reality rather than virtual reality. That technology allows information to be overlaid on top of the real world, rather than putting people into an entirely virtual world. If it comes off (if it is not confined to failed experiments like Google Glass), it could change the way we interact with the world, potentially giving us information all of the time, and could even do away with things like smartphones as our primary way of connecting with technology. AG

Quantum computing has not really happened yet. A few months ago, researchers announced that they had achieved quantum supremacy by doing an operation that would not be possible on a traditional computer, but it was a largely useless, very specific operation that didn't really change anything in itself.

Already, however, the promise and the threat of quantum computing is changing the world. It looks set to upend all of our assumptions about computers, allowing them to be unimaginably fast and do work never thought possible. It could unlock new kinds of health research and scientific understanding; it could also literally unlock encryption, which currently relies on impossible calculations that could quickly become very possible with quantum computers.

A new era of computing could bring about a 'quantum apocalypse' (iStock)

It isn't clear when it will arrive, of course; like other potentially revolutionary technologies, it could take a very long time or never arrive at all. But it is sitting there in the future, ready to turn everything on its head and, as researchers rush to understand it, it is already changing the world. AG

No vision of the future would be complete without the ability to speak to and control your home. And now it seems like we are finally living in it.

Through the 2000s, just about everything came to be hooked up to the internet: you could buy smart kettles, internet-enabled doorbells, and a video camera for every room in your house. And to control them came microphones and speakers that you put in your house and could talk to.

But as the smart home and the voice assistants that power it have soared in popularity, they have been beset by concerns, too. Is giving over control of your home to internet-enabled devices safe, when those devices can break down or be seized by hackers? Should we be allowing internet giants like Amazon and Google to put microphones in our homes? As we enter the new decade, it looks like our homes are set to be defined not by the capabilities the technology gives us but by who we want to have power over it. AG

Before there was Spotify, there was Napster, and before people were watching movies on Netflix, they were downloading them through The Pirate Bay. Piracy has been one step ahead of legal ways to consume media, but in doing so it has led the way for new platforms that now dominate our online lives.

Streaming has not only changed the way we listen to music and watch films, it has also given rise to new ways to create content. Live streaming video games on Twitch is one of the fastest growing mediums, while live video broadcasts through Facebook, Twitter and YouTube give people instant access to everything from street protests to rocket launches.

The Pirate Bay's latest venture into streaming comes despite battling takedown attempts by authorities for more than a decade (Reuters)

20 technologies that could change your life in the next decade – Economic Times

The decade that's knocking on our doors now, the 2020s, is likely to be a time when science fiction manifests itself in our homes and roads and skies as viable, everyday technologies. Cars that can drive themselves. Meat that is derived from plants. Robots that can be fantastic companions, both in bed and outside.

Implanting kidneys that can be 3-D printed using your own biomaterial. Using gene editing to eradicate diseases, increase crop yield or fix genetic disorders in human beings. Inserting a swarm of nanobots that can cruise through your blood stream and monitor parameters or unblock arteries. Zipping between Delhi and New York on a hypersonic jet. All of this is likely to become possible or substantially closer to becoming a reality in the next 10 years.

Ideas that have been the staple of science fiction for decades (artificial intelligence, universal translators, sex robots, autonomous cars, gene editing and quantum computing) are at the cusp of maturity now. Many are ready to move out of labs and enter the mainstream. Expect the next decade to witness breakout years for the world of technology.

Read on:

The 2020s: A new decade promising miraculous tech innovations

Universal translators: End of language barrier

Climate interventions: Clearing the air from carbon

Personalised learning: Pedagogy gets a reboot with AI

Made in a Printer: 3-D printing going to be a new reality

Digital money: End of cash is near, cashless currencies are in vogue

Singularity: An era where machines will out-think humans

Mach militaries: Redefining warfare in the 2020s

5G & Beyond: Ushering a truly connected world

Technology: Solving the problem of clean water

Quantum computing : Beyond the power of classical computing

Nanotechnology: From science fiction to reality

Power Saver: Energy-storage may be the key to maximise power generation

Secret code: Gene editing could prove to be a game-changer

Love in the time of Robots: The rise of sexbots and artificial human beings

Wheels of the future: Flying cars, hyperloops and e-highways will transform how people travel

New skies, old fears: The good, bad & ugly of drones

Artificial creativity: Computer programs could soon churn out books, movies and music

Meat alternatives: Alternative meat market is expected to grow 10 times by 2029

Intelligent robots & cyborg warriors will lead the charge in battle

Why we first need to focus on the ethical challenges of artificial intelligence

It's time to reflect honestly on our motivations for innovation

India's vital role in new space age

Plastic waste: Environment-friendly packaging technologies will gain traction

Top 5: Scientific Breakthroughs That Made 2019 an Unforgettable Year of Human Progress – The Weather Channel

Facial reconstruction of A. anamensis by John Gurche, using a 38-lakh-year-old (3.8-million-year-old) hominin cranium.

From discovering cures for life-threatening diseases to exploring outer space, from unearthing new facts about human history to making incredible strides in artificial intelligence, humanity achieved exceptional breakthroughs in the field of science and technology in 2019.

As the year comes to an end, it is time to look back at some of those glorious scientific revolutions that will shape our future. Here are our picks for the most significant scientific advancements of 2019:

5. Hello Sun? Earthlings are going beyond your influence!

A simulated landing process of Chang'e-4 lunar probe at the Beijing Aerospace Control Center on Jan. 3, 2019.

Launched in January 2006, the interplanetary space probe New Horizons from the US space agency NASA steered past the Kuiper Belt object 486958 Arrokoth (then nicknamed Ultima Thule) on January 1, 2019. The Kuiper Belt is the region beyond the known planetary system of the solar system, and this was the farthest flyby ever conducted by any human-made spacecraft.

Also this year, on November 4, NASA's Voyager 2 reached the interstellar medium, the space between star systems well beyond the influence of our solar system. Voyager 1 had earlier achieved this feat in 2012. Voyager 2, its twin, was launched back in 1977.

Also, China's moon mission, Chang'e 4, successfully made a soft landing on the far side of the Moon, becoming the first-ever mission to do so. Named after the Chinese moon goddess, the mission is attempting to determine the age and composition of the Moon's unexplored region.

4. Quantum leap in computing

Representational image

Of all the progress made in computing research in 2019, the biggest breakthrough was perhaps the realisation of quantum computing.

Right in the first month of 2019, technology giant IBM unveiled Q System One, the first quantum computer outside a research lab, bringing a rather abstract concept into the public imagination. Unlike the bits of information in the computers we use, a quantum computer uses quantum bits, or qubits, enabling an exponential rise in the amount of data it can process and store.

Further, a team of researchers from Australia and Singapore developed a quantum-powered machine that can accurately simulate future outcomes arising from different sets of alternatives. Meanwhile, another study at Yale University showed that we can catch a qubit mid-way through a quantum jump and alter its outcome. This was a significant leap in fine-tuning quantum systems, as their outcomes need not be completely random and abrupt.

While other research also helped in conceptualising quantum drives with immense storage capacity, the biggest news was from Google. The search giant confirmed in October that it had achieved quantum supremacy. To put things in perspective, researchers at Google claim that the quantum computer solved in three minutes a problem that would have taken 10,000 years even for a supercomputer.

3. Revolutionary research in medical science

Representational image

Medical researchers are always striving to push the envelope of human resilience and efficiency. The year 2019 saw progress on both these fronts, with the development of potential cures for multiple life-threatening diseases and gene-editing promising to be more effective than ever.

This year, twin drugs were developed for Ebola and were found to be effective in nearly 90% of the cases, making the seemingly incurable condition treatable. Researchers also discovered potential cures for bubble boy disease, a condition where babies are born without disease-fighting immune cells, for cystic fibrosis, a painful, debilitating lung disease, as well as for pancreatic cancer.

Moreover, after decades, HIV research finally yielded some fruitful results this year with patients positively responding to treatments. After a long gap of 12 years from the day the first patient was cured of HIV infection that causes AIDS, another patient was cured in March 2019. Researchers had been relentlessly trying to replicate the treatment that cured the infection for the first time in 2007.

Furthermore, using CRISPR gene-editing technology, scientists have found potential treatments for cancer patients, even those for whom the standard procedure was not successful. In October, researchers produced scientific evidence that new gene-editing technology has the potential to correct up to 89% of genetic defects like sickle cell anaemia.

2. Imaging the faraway invisible wonder

Image of the black hole at the center of galaxy M87

Named the top scientific breakthrough of 2019 by the journal Science, this incredible photograph of a black hole was taken using eight radio telescopes around the world to form a virtual instrument that is said to be the size of the Earth itself.

The first-ever image of a black hole, released on April 10 this year, was taken by the Event Horizon Telescope (EHT) collaboration team. The gravity of a black hole is so strong that even light cannot escape its pull, and to capture an image of something that does not emit light is no easy task.

EHT imaged the silhouette (or shadow) of the massive black hole at the centre of the galaxy M87, which is located 55 million light-years from Earth. The black hole has enormous mass, a whopping 6,500 million times the mass of the Sun. The image shows a ring of light coming from the gas falling into the event horizon (the boundary from beyond which nothing can escape) of the black hole.
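
A back-of-the-envelope check of the scale involved, using the quoted mass (rounded constants, our own arithmetic): the radius of that event horizon follows from the Schwarzschild formula.

```latex
\[
r_s = \frac{2GM}{c^{2}}
    \approx \frac{2 \times (6.67\times10^{-11}) \times (6.5\times10^{9} \times 2.0\times10^{30}\ \text{kg})}
                 {(3.0\times10^{8}\ \text{m/s})^{2}}
    \approx 1.9\times10^{13}\ \text{m},
\]
```

roughly 130 times the Earth-Sun distance, which is why it took a virtual telescope the size of the Earth to resolve the silhouette from 55 million light-years away.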

1. Retracing the origins of humans

Craniofacial reconstruction process of Rakhigarhi cemetery individuals (BR02 and BR36).

Humankind's fascination with the question 'Where did we come from?' has persisted over centuries. Yet, some of the biggest breakthroughs in answering this question were made this year, starting with the discovery of a previously-unknown species of ancient humans. Named Homo luzonensis, this small-bodied bipedal species was discovered in the Philippines and is said to have lived on the island of Luzon 50,000 to 67,000 years ago.

In May, researchers deciphered a four-decade-old mystery by identifying a 160,000-year-old human jawbone found on the Tibetan Plateau nearly 40 years ago. The fossil was of a Denisovan, an enigmatic relative of modern humans whose populations ranged across Asia until some 50,000 years ago. The discovery, made despite the absence of DNA in the jaw, helped scientists understand this species better. In September, another group of researchers further refined the picture of Denisovans, whose traces still linger in the DNA of a few modern humans.

In August, descriptions of the nearly 38-lakh-year-old (3.8-million-year-old) remains of a skull belonging to a bipedal ancestor of humans baffled the world. This skull proved that two of our ancestor species, A. anamensis and A. afarensis, may have overlapped for at least 100,000 years. This evidence that the two species existed on a similar timescale busts the long-held belief that human evolution follows a single lineage, i.e. one species coming after the other.

In October, in a first-of-its-kind attempt, scientists generated an accurate facial representation of people from the Indus Valley Civilisation. Another important study showed that the ancestral homeland of every human alive today traces back to a region south of the Zambezi River in northern Botswana. Building on previous genetic evolution studies, the researchers used ethnolinguistic and geographic frequency distribution data from the genomes of over 1,000 southern Africans to trace back the origin of modern humans.

Exponential growth continues

India has also contributed immensely in all scientific domains over the past few years and is now only behind China and the US in terms of the number of published research studies. Building exponentially on the success of previous decades, scientists around the world have made immense contributions from improving our daily life to understanding the mysteries of the universe.

With so much exciting research pouring in from all corners of the world, it isn't easy to even keep track of the incredible pace at which science is progressing. While we have tried to cover a few iconic annual scientific highlights in this article, there are thousands of other important discoveries, studies and achievements that shaped science in 2019.

And as yet another potential-filled year dawns on our planet, The Weather Channel India will keep you tuned in about all the exciting news, updates and breakthroughs from the world of science.

So for your daily dose of weather, environment, space and science stories, stay tuned to weather.com and stay curious!

CMSWire’s Top 10 AI and Machine Learning Articles of 2019 – CMSWire

Would you believe me if I told you artificial intelligence (AI) wrote this article?

With 2020 on the horizon, and with all the progress made in AI and machine learning (ML) already, it probably wouldn't surprise you if that were indeed the case, which is bad news for writers like me (or not).

As we transition into a new year, it's worth noting that 73% of global consumers say they are open to businesses using AI if it makes life easier, and 83% of businesses say that AI is a strategic priority for their businesses already. If that's not a recipe for even more progress in 2020 and beyond, then my name isn't CMSWire-Bot-927.

Today, we're looking back at the AI and ML articles which resonated with CMSWire's audience in 2019. Strap yourself in, because this list is about to blast you into the future.

ML and, more broadly, AI have become the tech industry's most important trends over the past 18 months. And despite the hype and, to some extent, fear surrounding the technology, many businesses are now embracing AI at an impressive speed.

Despite this progress, many of the pilot schemes are still highly experimental, and some organizations are struggling to understand how they can really embrace the technology.

As the business world grapples with the potential of AI and machine learning, new ethical challenges arise on a regular basis related to its use.

One area where tensions are being played out is in talent management: a struggle between relying on human expertise or in deferring decisions to machines so as to better understand employee needs, skills and career potential.

Marketing technology has evolved rapidly over the past decade, with one of the most exciting developments being the creation of publicly-available, cost-effective cognitive APIs by companies like Microsoft, IBM, Alphabet, Amazon and others. These APIs make it possible for businesses and organizations to tap into AI and ML technology for both customer-facing solutions as well as internal operations.

The workplace chatbots are coming! The workplace chatbots are coming!

OK, well, they're already here. And in a few years, there will be even more. According to Gartner, by 2021 the daily use of virtual assistants in the workplace will climb to 25%. That will be up from less than 2% this year. Gartner also identified a workplace chatbot landscape of more than 1,000 vendors, so choosing a workplace chatbot won't be easy. IT leaders need to determine the capabilities they need from such a platform in the short term and select a vendor on that basis, according to Gartner.

High-quality metadata plays an outsized role in improving enterprise search results. But convincing people to consistently apply quality metadata has been an uphill battle for most companies. One solution that has been around for a long time now is to automate metadata's creation, using rules-based content auto-classification products.

Although enterprise interest in bots seems to be at an all-time high, Gartner reports that 68% of customer service leaders believe bots and virtual assistants will become even more important in the next two years. As bots are called upon to perform a greater range of tasks, chatbots will increasingly rely on back-office bots to find information and complete transactions on behalf of customers.

If digital workplaces are being disrupted by the ongoing development of AI-driven apps, by 2021 those disruptors could end up in their turn being disrupted. The emergence of a new form of AI, or a second wave of AI, known as augmented AI, is so significant that Gartner predicts by 2021 it will be creating up to $2.9 trillion of business value and 6.2 billion hours of worker productivity globally.

AI and ML took center stage at IBM Think this year; the show's major AI announcements served as a reminder that the company has some of the most differentiated and competitive services for implementing AI in enterprise operational processes in the market. But if Big Blue is to win the AI race against AWS, Microsoft and Google Cloud in 2019 and beyond, it must improve its developer strategy and strengthen its communications, especially in areas such as trusted AI and governance.

Sentiment analysis is the kind of tool a marketer dreams about. By gauging the public's opinion of an event or product through analysis of data on a scale no human could achieve, it gives your team the ability to figure out what people really think. Backed by a growing body of innovative research, sentiment-analysis tools have the ability to dramatically improve your ROI, yet many companies are overlooking them.
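
As a purely illustrative sketch of the underlying idea (a real sentiment-analysis tool would use trained models rather than this hypothetical word list), a lexicon-based scorer simply counts positive and negative words in each mention:

```python
# Toy lexicon-based sentiment scorer; illustrative only, not a production tool.
POSITIVE = {"love", "great", "excellent", "fast", "recommend"}
NEGATIVE = {"hate", "awful", "slow", "broken", "refund"}

def sentiment(text: str) -> int:
    """Crude score: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

mentions = [
    "love the new release and the great battery life",
    "checkout is broken and support is awful",
]
for m in mentions:
    print(sentiment(m), m)   # positive score for the first, negative for the second
```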

Pop quiz: Can you define the differences between AI and automation?

I won't judge you if the answer is no. There's a blurry line between AI and automation, with the terms often used interchangeably, even in tech-forward professions. But there's a very real difference between the two, and it's one that's becoming ever more critical for organizations to understand.

Can machine learning take over the role of investors? – TechHQ

As we dive deeper into the Fourth Industrial Revolution, there is no disputing how technology serves as a catalyst for growth and innovation for many businesses across a range of functions and industries.

But one technology that is steadily gaining prominence across organizations is machine learning (ML).

In the simplest terms, ML is the science of getting computers to learn and act the way humans do without being explicitly programmed. It is a form of artificial intelligence (AI) and entails feeding a machine data, enabling the program to learn autonomously and improve its accuracy in analyzing that data.
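
In code, "feeding a machine data" can be as small as the sketch below (toy numbers, and it assumes scikit-learn is available): the model is never given an explicit rule, it infers one from labelled examples and then applies it to new cases.

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: [hours of account activity, complaints filed] -> churned (1) or stayed (0).
X = [[10, 0], [12, 1], [2, 5], [1, 4], [8, 1], [0, 6]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                          # "learning": parameters are fit to the examples
print(model.predict([[3, 4], [9, 0]]))   # the learned rule applied to unseen customers
```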

The proliferation of technology means AI is now commonplace in our daily lives, with its presence in a panoply of things, such as driverless vehicles, facial recognition devices, and in the customer service industry.

Currently, asset managers are exploring the potential that AI/ML systems can bring to the finance industry; close to 60 percent of managers predict that ML will have a medium-to-large impact across businesses.

ML's ability to analyze large data sets and continuously self-develop through trial and error translates to increased speed and better performance in data analysis for financial firms.

For instance, according to the Harvard Business Review, ML can spot potentially outperforming equities by identifying new patterns in existing data sets, and by examining the collected responses of CEOs in the quarterly earnings calls of S&P 500 companies over the past 20 years.

Following this, ML can then formulate a review of good and bad stocks, thus providing organizations with valuable insights to drive important business decisions. This data also paves the way for the system to assess the trustworthiness of forecasts from specific company leaders and compare the performance of competitors in the industry.

Besides that, ML also has the capacity to analyze various forms of data, including sound and images. In the past, such formats of information were challenging for computers to analyze, but today's ML algorithms can process images faster and better than humans.

For example, analysts use GPS locations from mobile devices to map foot traffic at retail hubs, or refer to point-of-sale data to trace revenues during major holiday seasons. Hence, data analysts can leverage this technological advancement to identify trends and new areas for investment.

It is evident that ML is full of potential, but it still has some big shoes to fill if it were to replace the role of an investor.

Nishant Kumar aptly explained this in Bloomberg: "Financial data is very noisy, markets are not stationary and powerful tools require deep understanding and talent that's hard to get." One quantitative analyst, or quant, estimates the failure rate in live tests at about 90 percent. Man AHL, a quant unit of Man Group, needed three years of work to gain enough confidence in a machine-learning strategy to devote client money to it. It later extended its use to four of its main money pools.

In other words, human talent and supervision are still essential to developing the right algorithm and in exercising sound investment judgment. After all, the purpose of a machine is to automate repetitive tasks. In this context, ML may seek out correlations of data without understanding their underlying rationale.

One ML expert said his team spends days evaluating whether patterns found by ML are sensible, predictive, consistent, and additive. Even if a pattern meets all four criteria, it may not bear much significance in supporting profitable investment decisions.

The bottom line is that ML can streamline data analysis, but it cannot replace human judgment. Thus, active equity managers should invest in ML systems to remain competitive in this "innovate or die" era. Financial firms that successfully recruit professionals with the right data skills and sharp investment judgment stand to be at the forefront of the digital economy.

Are We Overly Infatuated With Deep Learning? – Forbes

Deep Learning

One of the factors often credited for this latest boom in artificial intelligence (AI) investment, research, and related cognitive technologies is the emergence of deep learning neural networks as an evolution of machine learning algorithms, together with the large volumes of big data and computing power that make deep learning a practical reality. Deep learning has been extremely popular and has shown real ability to solve many machine learning problems, but it is just one approach to machine learning (ML): one that has proven capable across a wide range of problem areas, yet is still only one of many practical approaches. Increasingly, we're starting to see news and research showing the limits of deep learning's capabilities, as well as some of the downsides to the deep learning approach. So is people's enthusiasm for AI tied to their enthusiasm for deep learning, and is deep learning really able to deliver on many of its promises?

The Origins of Deep Learning

AI researchers have struggled to understand how the brain learns from the very beginnings of the development of the field of artificial intelligence. It comes as no surprise that since the brain is primarily a collection of interconnected neurons, AI researchers sought to recreate the way the brain is structured through artificial neurons, and connections of those neurons in artificial neural networks. All the way back in 1943, Walter Pitts and Warren McCulloch described the first thresholded logic unit, an attempt to mimic the way biological neurons worked. The Pitts and McCulloch model was just a proof of concept, but Frank Rosenblatt picked up on the idea in 1957 with the development of the Perceptron, which took the concept to its logical extent. While primitive by today's standards, the Perceptron was still capable of remarkable feats - being able to recognize written numbers and letters, and even distinguish male from female faces. That was over 60 years ago!

Rosenblatt was so enthusiastic in 1959 about the Perceptron's promise that he remarked at the time that "the perceptron is the embryo of an electronic computer that [we expect] will be able to walk, talk, see, write, reproduce itself and be conscious of its existence." Sound familiar? However, the enthusiasm didn't last. AI researcher Marvin Minsky noted how sensitive the perceptron was to small changes in images, and also how easily it could be fooled. Maybe the perceptron wasn't really that smart at all. Minsky and fellow AI researcher Seymour Papert basically took apart the whole perceptron idea in their book Perceptrons, and made the claim that perceptrons, and neural networks like them, are fundamentally flawed in their inability to handle certain kinds of problems, notably non-linear functions. That is to say, it was easy to train a neural network like a perceptron to put data into classifications, such as male/female, or types of numbers. For these simple neural networks, you can graph a bunch of data and draw a line, and say things on one side of the line are in one category and things on the other side are in a different category, thereby classifying them. But there is a whole class of problems where you can't draw lines like this, such as speech recognition or many forms of decision-making. These are nonlinear functions, which Minsky and Papert proved perceptrons incapable of solving.
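
To make the linear-separability limitation concrete, here is a minimal Python sketch (my own illustration, not from the Forbes piece) of the classic perceptron learning rule: it learns the linearly separable AND function, but no amount of training lets it reproduce XOR. The data, learning rate, and epoch count are arbitrary choices for demonstration.

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=0.1):
    """Classic perceptron rule: w <- w + lr * (target - prediction) * x."""
    w = np.zeros(X.shape[1] + 1)                 # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append a constant bias input
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi
    return (Xb @ w > 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])   # linearly separable
y_xor = np.array([0, 1, 1, 0])   # not linearly separable

print("AND learned:", train_perceptron(X, y_and))  # matches y_and
print("XOR learned:", train_perceptron(X, y_xor))  # never matches y_xor exactly
```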

During this period, while neural network approaches to ML settled into being an afterthought in AI, other approaches to ML were in the limelight, including knowledge graphs, decision trees, genetic algorithms, similarity models, and other methods. In fact, during this period, IBM's purpose-built Deep Blue AI computer defeated Garry Kasparov in a chess match, the first computer to do so, using a brute-force alpha-beta search algorithm (so-called Good Old-Fashioned AI [GOFAI]) rather than new-fangled deep learning approaches. Yet even this approach to learning didn't go far, as some said the system wasn't really intelligent at all.

Yet the neural network story doesn't end here. In 1986, AI researcher Geoff Hinton, along with David Rumelhart and Ronald Williams, published a research paper entitled "Learning representations by back-propagating errors." In this paper, Hinton and his co-authors detailed how you can use many hidden layers of neurons to get around the problems faced by perceptrons. With sufficient data and computing power, these layers can be trained to identify specific features in the data sets they classify and, as a group, can learn nonlinear functions, a result related to what is known as the universal approximation theorem. The approach works by backpropagating errors from higher layers of the network to lower ones (backprop), expediting training. Now, if you have enough layers, enough data to train those layers, and sufficient computing power to calculate all the interconnections, you can train a neural network to identify and classify almost anything. Researcher Yann LeCun developed LeNet-5 at AT&T Bell Labs in 1998, recognizing handwritten digits on checks using an iteration of this approach known as Convolutional Neural Networks (CNNs), and researchers such as Yoshua Bengio and Jürgen Schmidhuber further advanced the field.
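
As a rough illustration of the idea (again my own sketch, not code from the article), the snippet below trains a tiny one-hidden-layer network with plain backpropagation on the XOR problem that defeats a single perceptron. The layer size, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer of 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 0.5

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                      # forward pass through the hidden layer
    out = sigmoid(h @ W2 + b2)                    # network output
    delta_out = (out - y) * out * (1 - out)       # error signal at the output
    delta_h = (delta_out @ W2.T) * h * (1 - h)    # error backpropagated to the hidden layer
    W2 -= lr * h.T @ delta_out;  b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_h;    b1 -= lr * delta_h.sum(axis=0)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]]
```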

Yet, just as things go in AI, research stalled when these early neural networks couldn't scale. Surprisingly, very little development happened until 2006, when Hinton re-emerged onto the scene with the ideas of unsupervised pre-training and deep belief nets. The idea here is to have a simple two-layer network whose parameters are trained in an unsupervised way, and then stack new layers on top of it, training just that layer's parameters. Repeat for dozens, hundreds, even thousands of layers. Eventually you get a deep network with many layers that can learn and understand something complex. This is what deep learning is all about: using lots of layers of trained neural nets to learn just about anything, at least within certain constraints.

In 2010, Stanford researcher Fei-Fei Li published the release of ImageNet, a large database of millions of labeled images. The images were labeled with a hierarchy of classifications, from broad categories such as animal or vehicle down to very granular levels, such as husky or trimaran. The ImageNet database was paired with an annual competition called the Large Scale Visual Recognition Challenge (LSVRC) to see which computer vision system had the lowest number of classification and recognition errors. In 2012, Geoff Hinton, Alex Krizhevsky, and Ilya Sutskever submitted their AlexNet entry, which had almost half the error rate of all previous winning entries. What made their approach win was that they moved from using ordinary computers with CPUs to specialized graphical processing units (GPUs) that could train much larger models in reasonable amounts of time. They also introduced now-standard deep learning methods such as dropout, to reduce a problem called overfitting (when the network is trained too tightly on the example data and can't generalize to broader data), and the rectified linear activation unit (ReLU), to speed training. After the success of their entry, it seems everyone took notice, and deep learning was off to the races.

Deep Learnings Shortcomings

The fuel that keeps the deep learning fires roaring is data and compute power. Specifically, large volumes of well-labeled data sets are needed to train deep learning networks. The more layers, the greater the learning power, but to have layers you need data that is already well labeled to train those layers. Since deep neural networks are primarily a bunch of calculations that all have to be done at the same time, you need a lot of raw computing power, and specifically numerical computing power. Imagine you're tuning a million knobs at the same time to find the optimal combination that will make the system learn, based on millions of pieces of data being fed into the system. This is why neural networks were not possible in the 1950s, but are today: we finally have lots of data and lots of computing power to handle that data.

Deep learning is being applied successfully in a wide range of situations, such as natural language processing, computer vision, machine translation, bioinformatics, gaming, and many other applications where classification, pattern matching, and this automatically tuned deep neural network approach work well. However, these same strengths come with a number of disadvantages.

The most notable of these disadvantages is that, since deep learning consists of many layers, each with many interconnected nodes, each configured with different weights and other parameters, there is no way to inspect a deep learning network and understand how any particular decision, clustering, or classification is actually made. It's a black box, which means deep learning networks are inherently unexplainable. As many have written on the topic of Explainable AI (XAI), systems used to make decisions of significance need explainability to satisfy issues of trust, compliance, verifiability, and understandability. While DARPA and others are working on ways to explain deep learning neural networks, the lack of explainability remains a significant drawback for many.

The second disadvantage is that deep learning networks are really great at classification and clustering of information, but not so good at other decision-making or learning scenarios. Not every learning situation is one of classifying something into a category or grouping information together into a cluster. Sometimes you have to deduce what to do based on what you've learned before. Deduction and reasoning are not a forte of deep learning networks.

As mentioned earlier, deep learning is also very data and resource hungry. One measure of a neural network's complexity is the number of parameters that need to be learned and tuned. For deep learning neural networks, there can be hundreds of millions of parameters. Training models requires a significant amount of data to adjust these parameters. For example, a speech recognition neural net often requires terabytes of clean, labeled data to train on. The lack of a sufficient, clean, labeled data set would hinder the development of a deep neural net for that problem domain. And even if you have the data, you need to crunch on it to generate the model, which takes a significant amount of time and processing power.

Another challenge of deep learning is that the models produced are very specific to a problem domain. If a model is trained on a certain dataset of cats, it will only recognize those cats and can't be used to generalize to animals in general or to identify non-cats. While this is not a problem unique to deep learning approaches to machine learning, it can be particularly troublesome when factoring in the overfitting problem mentioned above. Deep learning neural nets can be so tightly constrained (fitted) to the training data that, for example, even small perturbations in the images can lead to wildly inaccurate classifications. There are well-known examples of turtles being misrecognized as guns, or polar bears being misrecognized as other animals, due to just small changes in the image data. Clearly, if you're using such a network in mission-critical situations, those mistakes would be significant.

Machine Learning is not (just) Deep Learning

Enterprises looking at using cognitive technologies in their business need to look at the whole picture. Machine learning is not just one approach, but rather a collection of different approaches of various types that are applicable in different scenarios. Some machine learning algorithms are very simple, using small amounts of data and an understandable logic or deduction path that's very suitable for particular situations, while others are very complex and use lots of data and processing power to handle more complicated situations. The key thing to realize is that deep learning isn't all of machine learning, let alone AI. Even Geoff Hinton, the "Einstein" of deep learning, is starting to rethink core elements of deep learning and its limitations.

The key for organizations is to understand which machine learning methods are most viable for which problem areas, and how to plan, develop, deploy, and manage that machine learning approach in practice. Since AI use in the enterprise is still gaining adoption, especially these more advanced cognitive approaches, best practices on how to employ cognitive technologies successfully are still maturing.

Read the original:

Are We Overly Infatuated With Deep Learning? - Forbes

Lazarus Effect: Sixteen-year study by Dr. Leis resulted in recovery from West Nile for Dr. Bush – Northside Sun

To her long list of medical accomplishments, Jackson ob-gyn Dr. Freda McKissic Bush can now add celebrated research subject.

She recently became the subject of a scientific paper entitled: Lazarus Effect of High Dose Corticosteroids in a Patient with West Nile Virus Encephalitis: A Coincidence or a Clue?

Published in Frontiers of Medicine, the article is co-authored by neurologists Dr. Art Leis of Methodist Rehabilitation Center (MRC) and Dr. David Sinclair of Mississippi Baptist Medical Center.

While Bush's recovery is the focus of the treatise, it's a bit of poetic license to compare her experience with that of the Biblical Lazarus. She was not raised from the dead.

But the 71-year-old retired physician said she was "on my way out." "I was going to die, it was as simple as that," she said.

Bush was suffering from West Nile virus (WNV) encephalitis, a swelling of brain tissue that is one of three neuro-invasive forms of WNV infection.

In the years since WNV first arrived in the United States in 1999, the Centers for Disease Control and Prevention (CDC) has recommended treating infections with only standard supportive care.

But during 16 years of studying neuro-invasive forms of the disease, Leis came to favor prescribing high-dose steroids for the most severe forms of the disease.

The approach seems to quell the body's inflammatory immune attack on healthy tissue. But Leis was cautious about using it at first. "It is counter-intuitive to weaken the immune system when a patient has encephalitis," Leis said.

To be on the safe side, Leis initially delayed steroid treatment until at least two weeks from the onset of WNV. By 2018, he'd begun to rethink that timeline.

The CDC had found that WNV rapidly cleared the body in people with normal immune systems. And Leis' own experiences with immune-suppressive treatments had not raised any red flags.

"To my knowledge we don't have any cases where treatment with high-dose steroids initiated acute worsening that would suggest the virus had spread," Leis said.

When he was consulted on Bushs case in July 2018, Leis believed her condition demanded an aggressive approach, as did Sinclair.

"She was in the ICU in a semi-comatose state for a while," Sinclair said. "It was very clear her whole brain was involved and the risk of disability at that point was extremely high, if not death."

Leis and Sinclair say they sought to publicize Bush's case because they believe scientific scrutiny of the approach is needed.

"We're trying to look at deciding whether patients will improve spontaneously or if steroids are helpful," Sinclair said. "This really needs to be studied in a manner where we have a control group who receives the current standard of care intervention."

In the meantime, Leis will continue to contribute as a scientist who has long been on the front lines of WNV research.

In 2002, he and fellow MRC scientist Dr. Dobrivoje Stokic were the first in the world to link WNV to a polio-like paralysis. And over the years, MRC has been a valuable resource for physicians treating West Nile virus infection, as well as a support group site for survivors and their families.

Leis knows well the lifelong impact WNV infection can have. In his office is a five-drawer file cabinet full of patient data, as well as thank-you notes from people he's helped.

"For those who have the more severe forms of West Nile virus infection, over half have persistent or delayed symptoms, such as severe, disabling fatigue, persistent headaches, sleep disruptions and trouble concentrating," Leis said.

Some even experience disruption of their autonomic reflex system, which controls everything from blood pressure, cardiac rhythm, sweating, and bowel and bladder control to gastrointestinal motility.

Recently, Leis obtained disease-specific privileges at several metro Jackson hospitals, which will make it easier for him to consult with acute care physicians.

And Sinclair, for one, believes there's no one better than Leis to help guide the care of WNV patients.

"I think I'm one of a dozen young neurologists in the state who look toward his expertise in the area of neuro-virology," Sinclair said. "He has cared for the most West Nile virus patients and dealt with the most severe consequences of that illness. He's someone I could turn to for advice on cases like that."

Like most long-married couples, Lee and Freda McKissic Bush are deeply aware of each other's moods.

So on the morning of July 17, 2018, Lee quickly realized something was amiss with his normally talkative wife.

The retired ob-gyn barely whispered yes or no to his questions. And she did not look well.

"I stopped getting ready for work and started paying attention to her," he said. "Her torso was really burning up."

After trying unsuccessfully to reach Freda's doctor, Lee decided to rush her to the emergency room at Mississippi Baptist Medical Center in Jackson.

Much later she would ask him: "Why didn't you call an ambulance?"

"Too slow," he said.

Freda was well-known at Baptist, having delivered babies there since 1987. But when she arrived that morning, none of the staff had ever seen her like this: nearly unconscious and going downhill fast.

Her medical team quickly began a litany of tests. But it would be almost a week before Lee learned the source of his wife's suffering.

She was diagnosed with West Nile virus encephalitis, a life-threatening form of the mosquito-borne disease.

What's worse, Lee was being told the condition had no treatment. "They said we'd have to wait and see what happens. And I said: That's my wife you're talking about."

The couple had gotten married in 1969 after only three months of dating. In the years since, they'd reared four children, balanced two demanding careers and supported causes they believed in.

What loomed ahead was more opportunities to give back, as well as time to spend with their 11 grandchildren and two great-grandchildren.

Surely, this wasn't to be the end of a union between two people who'd shared so much, including the challenges of each growing up the fifth of nine children.

An engineer by training and a successful businessman, Lee went into problem-solving mode to save his wife.

"He took matters into his own hands and said: Somebody has got to tell me something," Freda said.

After some networking, website searches and phone calls, Lee learned one of the nation's foremost West Nile virus researchers worked just down the street at Methodist Rehabilitation Center.

Lee arranged a meeting with Dr. Leis, a senior scientist with MRC's Center for Neuroscience and Neurological Recovery. And he'll never forget seeing him for the first time.

"He walked into this huge waiting room at Baptist and I said: Here comes my angel doctor. He said: Angel? Why did you call me that? I said: Because you are my angel. He said: I don't know if you know this, but my first name is Angel."

Leis advocated treating Freda with high-dose steroids, but he warned it could be risky.

"He said steroids will stop her brain from swelling, but it will also stop her immune system, which is kind of dangerous," Lee said.

But Leis said he was willing to take the chance because he knew Lee would provide the close observation Freda would need during steroid therapy.

When anyone would urge Lee to leave his wife's side for a well-deserved respite, he'd say: "I'll leave when she leaves."

"He made a decision that his job was taking care of me," Freda said. "He wouldn't go to work, and he slept in a chair.

"When I realized how much dedication he had given to me, I boohooed. I was overwhelmed. I tell people if I thought I loved him before, it doesn't compare to now."

As Freda began to regain consciousness, she didn't know who she was or where she was, but she was awake, Lee said.

And he made it his mission to keep her roused. "I am playing spiritual music, dancing around and walking around her bed praying," he said.

Baptist staff offered spiritual support, too. "Every doctor who came by said we're praying for her," Lee said. "They would pat me on the shoulder and leave out."

The Bushes' adult children also came through for their parents. While their son took over for his dad at NCS Trash and Garbage, his three sisters rotated two-week caregiving shifts. And one of Freda's sisters traveled from Washington, D.C., to lend a helping hand.

Freda spent three weeks at Baptist, including 15 days in ICU. Next came another 24 days at MRC, working on skills to regain her independence.

It wasn't easy for the accomplished physician to acknowledge her deficits.

"I spent a lot of time crying," she said. "I had already retired from medicine, but I was still very active. I was on a lot of boards and was working with the Medical Institute for Sexual Health in Austin. And to think now I couldn't do anything, couldn't even remember. I spent a lot of time crying because I wasn't me."

Lee, on the other hand, never lost hope.

"She is my miracle in slow motion," he said. "The best thing for me was seeing in her eyes she was getting better and all the miracle steps in the right direction. When she could look at me and smile and say, I love you. Those were the nuances that kept me going."

Today, Freda continues to progress. And while she's chafing for more independence (she and Lee laugh that they've never spent so much time together), she's grateful for his commitment.

"I have to give credit to the Lord," she said. "He put us together, and he kept us together. I say I'm so sorry I got West Nile, but I'm so glad God gave me Lee Bush."

More here:

Lazarus Effect: Sixteen-year study by Dr. Leis resulted in recovery from West Nile for Dr. Bush - Northside Sun

Quantum computing could be the next big security breakthrough – ITProPortal

The majority of cybersecurity professionals believe quantum computing will develop faster than other security technologies, but for them that's cause for concern.

According to a new report by the Neustar International Security Council (NISC), almost three quarters (74 per cent) are keeping a close eye on the tech, while 21 per cent are doing experiments of their own. To tackle the potential coming crisis, a third (35 per cent) are already developing a quantum strategy, while just 16 per cent aren't yet thinking about it.

The vast majority believe quantum computing could become a problem for encryption within five years. Just seven per cent believe quantum supremacy will never happen.

At the same time, almost all CISOs, CSOs, CTOs and other security directors are feeling excitement over the potential positive changes quantum computing may bring.

"At the moment, we rely on encryption, which is possible to crack in theory, but impossible to crack in practice, precisely because it would take so long to do so, over timescales of trillions or even quadrillions of years," said Rodney Joffe, Chairman of NISC and Security CTO at Neustar.

"Without the protective shield of encryption, a quantum computer in the hands of a malicious actor could launch a cyberattack unlike anything we've ever seen."

According to Joffe, the cybersecurity community is already hard at work, researching quantum-proof cryptography.

"IT experts of every stripe will need to work to rebuild the algorithms, strategies, and systems that form our approach to cybersecurity," Joffe concluded.

More:

Quantum computing could be the next big security breakthrough - ITProPortal

Quantum computing leaps ahead in 2019 with new power and speed – CNET

A close-up view of the IBM Q quantum computer. The processor is in the silver-colored cylinder.

Quantum computers are getting a lot more real. No, you won't be playing Call of Duty on one anytime soon. But Google, Amazon, Microsoft, Rigetti Computing and IBM all made important advances in 2019 that could help bring computers governed by the weird laws of atomic-scale physics into your life in other ways.

Google's declaration of quantum supremacy was the most headline-grabbing moment in the field. The achievement -- more limited than the grand term might suggest -- demonstrated that quantum computers could someday tackle computing problems beyond the reach of conventional "classical" computers.

Proving quantum computing progress is crucial. We're still several breakthroughs away from realizing the full vision of quantum computing. Qubits, the tiny stores of data that quantum computers use, need to be improved. So do the finicky control systems used to program and read quantum computer results. Still, today's results help justify tomorrow's research funding to sustain the technology when the flashes of hype inevitably fizzle.


Quantum computers will live in data centers, not on your desk, when they're commercialized. They'll still be able to improve many aspects of your life, though. Money in your retirement account might grow a little faster and your packages might be delivered a little sooner as quantum computers find new ways to optimize businesses. Your electric-car battery might be a little lighter and new drugs might help you live a little longer after quantum computers unlock new molecular-level designs. Traffic may be a little lighter from better simulations.

But Google's quantum supremacy step was just one of many needed to fulfill quantum computing's promise.

"We're going to get there in cycles. We're going to have a lot of dark ages in which nothing happens for a long time," said Forrester analyst Brian Hopkins. "One day that new thing will really change the world."


Classical computers, which include everything from today's smartwatches to supercomputers that occupy entire buildings, store data as bits that represent either a 1 or a 0. Quantum computers use a different approach called qubits that can represent a combination of 1 and 0 through an idea called superposition.


The states of multiple qubits can be linked, letting quantum computers explore lots of possible solutions to a problem at once. With each new qubit added, a quantum computer can explore double the number of possible solutions, an exponential increase not possible with classical machines.
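
One way to see that doubling (this sketch is my own, not from the CNET article) is to count the complex amplitudes a classical simulator has to track for a general n-qubit state: the number is 2^n, which is why naively simulating even a few dozen qubits becomes impractical. The memory estimate below assumes a dense state vector with 16 bytes per amplitude.

```python
import numpy as np

# A general n-qubit state is a vector of 2**n complex amplitudes.
# Each added qubit doubles the number of amplitudes to track.
for n in (1, 2, 10, 30, 53):
    amplitudes = 2 ** n
    memory_gb = amplitudes * 16 / 1e9   # 16 bytes per complex128 amplitude
    print(f"{n:>2} qubits -> {amplitudes:>20,d} amplitudes (~{memory_gb:.3g} GB)")

# Building the state explicitly for a small case: two qubits, each in an
# equal superposition, give 4 equally weighted basis states.
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # single-qubit superposition of 0 and 1
state = np.kron(plus, plus)                # two-qubit state
print(state)                               # 4 amplitudes, each 0.5
```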

Quantum computers, however, are finicky. It's hard to get qubits to remain stable long enough to return useful results. The act of communicating with qubits can perturb them. Engineers hope to add error correction techniques so quantum computers can tackle a much broader range of problems.

Plenty of people are quantum computing skeptics. Even some fans of the technology acknowledge we're years away from high-powered quantum computers. But already, quantum computing is a real business. Samsung, Daimler, Honda, JP Morgan Chase and Barclays are all quantum computing customers. Spending on quantum computers should reach hundreds of millions of dollars in the 2020s, and tens of billions in the 2030s, according to forecasts by Deloitte, a consultancy. China, Europe, the United States and Japan have sunk billions of dollars into investment plans. Ford and Microsoft say traffic simulation technology for quantum computers, adapted to run on classical machines, already is showing utility.

Right now quantum computers are used mostly in research. But applications with mainstream results are likely coming. The power of quantum computers is expected to allow for the creation of new materials, chemical processes and medicines by giving insight into the physics of molecules. Quantum computers will also help for greater optimization of financial investments, delivery routes and flights by crunching the numbers in situations with a large number of possible courses of action.

They'll also be used for cracking today's encryption, an idea spy agencies love, even if you might be concerned about losing your privacy or some snoop getting your password. New cryptography adapted for a quantum computing future is already underway.

Another promising application is artificial intelligence, though that may be years in the future.

"Eventually we'll be able to reinvent machine learning," Forrester's Hopkinssaid. But it'll take years of steady work in quantum computing beyond the progress of 2019. "The transformative benefits are real and big, but they are still more sci-fi and theory than they are reality."

Read the rest here:

Quantum computing leaps ahead in 2019 with new power and speed - CNET

Could quantum computing be the key to cracking congestion? – SmartCitiesWorld

The technology has helped to improve congestion by 73 per cent in scenario-testing

Ford and Microsoft are using quantum-inspired computing technology to reduce traffic congestion. Through a joint research pilot, scientists have used the technology to simulate thousands of vehicles and their impact on congestion in the US city of Seattle.

Ford said it is still early in the project but encouraging progress has been made and it is further expanding its partnership with the tech giant.

The companies teamed up in 2018 to develop new quantum approaches, running on classical computers already available, to help reduce Seattle's traffic congestion.

Writing in a blog post on Medium.com, Dr Ken Washington, chief technology officer at Ford Motor Company, explained that during rush hour numerous drivers request the shortest possible routes at the same time, but current navigation services handle these requests "in a vacuum": they do not take into consideration the number of similar incoming requests, including areas where other drivers are all planning to share the same route segments, when delivering results.

What is required is a more balanced routing system that could manage all the various route requests from drivers and provide optimised route suggestions, reducing the number of vehicles on a particular road.


Traditional computers don't have the computational power to do this but, as Washington explained, in a quantum computer information is processed by a quantum bit (or qubit), which can simultaneously exist "in two different states" before it gets measured.

"This ultimately enables a quantum computer to process information with a faster speed," he wrote. "Attempts to simulate some specific features of a quantum computer on non-quantum hardware have led to quantum-inspired technology: powerful algorithms that mimic certain quantum behaviours and run on specialised conventional hardware. That enables organisations to start realising some benefits before fully scaled quantum hardware becomes available."

Working with Microsoft, Ford tested several different possibilities, including a scenario involving as many as 5,000 vehicles, each with 10 different route choices available to them, simultaneously requesting routes across Metro Seattle. It reports that in 20 seconds, balanced routing suggestions were delivered to the vehicles, resulting in a 73 per cent improvement in total congestion when compared to selfish routing.

The average commute time, meanwhile, was also cut by eight per cent, representing an annual reduction of more than 55,000 hours across this simulated fleet.
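
To give a feel for the selfish-versus-balanced distinction (a toy model of my own; the roads, cost functions, and fleet size are invented and have nothing to do with Ford's or Microsoft's actual simulation), "in a vacuum" routing sends every driver down the nominally fastest road, while a balanced assignment spreads the fleet to minimize average travel time:

```python
# Toy model: 100 cars choose between two roads whose travel time grows with load.
cars = 100

def avg_time(n_a):
    """Average travel time if n_a cars take road A and the rest take road B."""
    n_b = cars - n_a
    time_a = 10 + 0.5 * n_a   # road A: fast when empty, congests quickly
    time_b = 15 + 0.2 * n_b   # road B: slower when empty, congests slowly
    return (n_a * time_a + n_b * time_b) / cars

selfish = avg_time(cars)                               # everyone sent to the "fastest" road A
balanced = min(avg_time(n) for n in range(cars + 1))   # centrally balanced split
print(f"selfish: {selfish:.1f} min, balanced: {balanced:.1f} min")
```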

Based on these results, Ford is expanding its partnership with Microsoft to further improve the algorithm and understand its effectiveness in more real-world scenarios.

"For example, will this method still deliver similar results when some streets are known to be closed, if route options aren't equal for all drivers, or if some drivers decide to not follow suggested routes?" wrote Washington. "These and more are all variables we'll need to test for to ensure balanced routing can truly deliver tangible improvements for cities."


View post:

Could quantum computing be the key to cracking congestion? - SmartCitiesWorld

Quantum computing will be the smartphone of the 2020s, says Bank of America strategist – MarketWatch

When asked what invention will be as revolutionary in the 2020s as smartphones were in the 2010s, Bank of America strategist Haim Israel said, without hesitation: quantum computing.

At the bank's annual year-ahead event last week in New York, Israel qualified his prediction, arguing in an interview with MarketWatch that the timing of the smartphone's arrival on the scene in the mid-2000s, and its massive impact on the American business landscape in the 2010s, doesn't line up neatly with quantum-computing breakthroughs, which are only now being seen, just a few weeks before the start of the 2020s.

The iPhone debuted back in 2007, enabling its real impact to be felt in the 2010s, he said, while the first business applications for quantum computing won't be seen until toward the end of the coming decade.

But, Israel argued, when all is said and done, quantum computing could be an even more radical technology in terms of its impact on businesses than the smartphone has been. "This is going to be a revolution," he said.

Quantum computing is a nascent technology based on quantum theory in physics, which explains the behavior of particles at the subatomic level and states that, until observed, these particles can exist in different places at the same time. While normal computers store information in ones and zeros, quantum computers are not limited by the binary nature of current data processing and so can provide exponentially more computing power.

"Quantum things can be in multiple places at the same time," Chris Monroe, a University of Maryland physicist and founder of IonQ, told the Associated Press. "The rules are very simple, they're just confounding."

In October, Alphabet Inc. subsidiary Google claimed to have achieved a breakthrough by using a quantum computer to complete a calculation in 200 seconds on a 53-qubit quantum computing chip, a task it calculated would take the fastest current supercomputer 10,000 years. Earlier this month, Amazon.com Inc. announced its intention to collaborate with experts to develop quantum computing technologies that can be used in conjunction with its cloud computing services. International Business Machines Corp. and Microsoft Corp. are also developing quantum computing technology.

Israel argued these tools will revolutionize several industries, including health care, the internet of things and cybersecurity. He said that pharmaceutical companies are most likely to be the first commercial users of these devices, given the explosion of data created by health care research.

"Pharma companies are right now subject to Moore's law in reverse," he said. They are seeing the cost of drug development doubling every nine years, as the amount of data on the human body becomes ever more onerous to process. Data on genomics doubles every 50 days, he added, arguing that only quantum computers will be able to solve the pharmaceutical industry's big-data problem.

Quantum computing will also have a major impact on cybersecurity, an issue that affects nearly every major corporation today. Cybersecurity currently relies on cryptographic algorithms, but quantum computing's ability to solve the underlying problems in a fraction of the time a normal computer takes will render current cybersecurity methods obsolete.

In the future, even robust cryptographic algorithms will be substantially weakened by quantum computing, while others will no longer be secure at all, according to Swaroop Sham, senior product marketing manager at Okta.

For investors, Israel said, it is key to realize that the first one or two companies to develop commercially applicable quantum computing will be richly rewarded with access to untold amounts of data, and that will only make their software and services more valuable to potential customers, in a virtuous circle.

"What we've learned this decade is that whoever controls the data will win big time," he said.

Read more:

Quantum computing will be the smartphone of the 2020s, says Bank of America strategist - MarketWatch

ProBeat: AWS and Azure are generating uneasy excitement in quantum computing – VentureBeat

Quantum is having a moment. In October, Google claimed to have achieved a quantum supremacy milestone. In November, Microsoft announced Azure Quantum, a cloud service that lets you tap into quantum hardware providers Honeywell, IonQ, or QCI. Last week, AWS announced Amazon Braket, a cloud service that lets you tap into quantum hardware providers D-Wave, IonQ, and Rigetti. At the Q2B 2019 quantum computing conference this week, I got a pulse for how the nascent industry is feeling.

Binary digits (bits) are the basic units of information in classical computing, while quantum bits (qubits) are the basic units in quantum computing. Bits are always in a state of 0 or 1, while qubits can be in a state of 0, 1, or a superposition of the two. Quantum computing leverages qubits to perform computations that would be much more difficult for a classical computer. Potential applications are so vast and wide (from basic optimization problems to machine learning to all sorts of modeling) that interested industries span finance, chemistry, aerospace, cryptography, and more. But it's still so early that the industry is nowhere close to reaching consensus on what the transistor for qubits should look like.

Currently, your cloud quantum computing options are limited to single hardware providers, such as those from D-Wave and IBM. Amazon and Microsoft want to change that.

Enterprises and researchers interested in testing and experimenting with quantum are excited because they will be able to use different quantum processors via the same service, at least in theory. They're uneasy, however, because the quantum processors are so fundamentally different that it's not clear how easy it will be to switch between them. D-Wave uses quantum annealing, Honeywell and IonQ use ion trap devices, and Rigetti and QCI use superconducting chips. Even the technologies that are the same have completely different architectures.

Entrepreneurs and enthusiasts are hopeful that Amazon and Microsoft will make it easier to interface with the various quantum hardware technologies. They're uneasy, however, because Amazon and Microsoft have not shared pricing and technical details. Plus, some of the quantum providers offer their own cloud services, so it will be difficult to suss out when it makes more sense to work with them directly.

The hardware providers themselves are excited because they get exposure to massive customer bases. Amazon and Microsoft are the world's biggest and second-biggest cloud providers, respectively. They're uneasy, however, because the tech giants are really just middlemen, which of course poses its own problems of costs and reliance.

At least right now, it looks like this will be the new normal. Even hardware providers that haven't announced they are partnering with Amazon and/or Microsoft, like Xanadu, are in talks to do just that.

Overall at the event, excitement trumped uneasiness. If you're participating in a domain as nascent as quantum, you must be optimistic. The news this quarter all happened very quickly, but there is still a long road ahead. After all, these cloud services have only been announced. They still have to become available, gain exposure, pick up traction, become practical, prove useful, and so on.

The devil is in the details. How much are these cloud services for quantum going to cost? Amazon and Microsoft haven't said. When exactly will they be available in preview or in beta? Amazon and Microsoft haven't said. How will switching between different quantum processors work in practice? Amazon and Microsoft haven't said.

One thing is clear. Everyone at the event was talking about the impact of the two biggest cloud providers offering quantum hardware from different companies. The clear winners? Amazon and Microsoft.

ProBeat is a column in which Emil rants about whatever crosses him that week.

Read the rest here:

ProBeat: AWS and Azure are generating uneasy excitement in quantum computing - VentureBeat

Quantum Computers Are the Ultimate Paper Tiger – The National Interest Online

Google announced this fall, to much fanfare, that it had demonstrated quantum supremacy: that is, it performed a specific quantum computation far faster than the best classical computers could achieve. IBM promptly critiqued the claim, saying that its own classical supercomputer could perform the computation at nearly the same speed with far greater fidelity and, therefore, that the Google announcement should be taken with a large dose of skepticism.

This wasn't the first time someone cast doubt on quantum computing. Last year, Michel Dyakonov, a theoretical physicist at the University of Montpellier in France, offered a slew of technical reasons why practical quantum supercomputers will never be built, in an article in IEEE Spectrum, the flagship journal of electrical and computer engineering.

So how can you make sense of what is going on?

As someone who has worked on quantum computing for many years, I believe that due to the inevitability of random errors in the hardware, useful quantum computers are unlikely to ever be built.

What's a quantum computer?

To understand why, you need to understand how quantum computers work, since they're fundamentally different from classical computers.

A classical computer uses 0s and 1s to store data. These numbers could be voltages on different points in a circuit. But a quantum computer works on quantum bits, also known as qubits. You can picture them as waves that are associated with amplitude and phase.

Qubits have special properties: They can exist in superposition, where they are both 0 and 1 at the same time, and they may be entangled, so that they share physical properties even though they may be separated by large distances. It's a behavior that does not exist in the world of classical physics. The superposition vanishes when the experimenter interacts with the quantum state.

Due to superposition, a quantum computer with 100 qubits can represent 2^100 solutions simultaneously. For certain problems, this exponential parallelism can be harnessed to create a tremendous speed advantage. Some code-breaking problems could be solved exponentially faster on a quantum machine, for example.

There is another, narrower approach to quantum computing called quantum annealing, where qubits are used to speed up optimization problems. D-Wave Systems, based in Canada, has built optimization systems that use qubits for this purpose, but critics also claim that these systems are no better than classical computers.

Regardless, companies and countries are investing massive amounts of money in quantum computing. China has developed a new quantum research facility worth US$10 billion, while the European Union has developed a €1 billion ($1.1 billion) quantum master plan. The United States' National Quantum Initiative Act provides $1.2 billion to promote quantum information science over a five-year period.

Breaking encryption algorithms is a powerful motivating factor for many countries if they could do it successfully, it would give them an enormous intelligence advantage. But these investments are also promoting fundamental research in physics.

Many companies are pushing to build quantum computers, including Intel and Microsoft in addition to Google and IBM. These companies are trying to build hardware that replicates the circuit model of classical computers. However, current experimental systems have less than 100 qubits. To achieve useful computational performance, you probably need machines with hundreds of thousands of qubits.

Noise and error correction

The mathematics that underpins quantum algorithms is well established, but daunting engineering challenges remain.

For computers to function properly, they must correct all small random errors. In a quantum computer, such errors arise from non-ideal circuit elements and the interaction of the qubits with the environment around them. For these reasons the qubits can lose coherency in a fraction of a second and, therefore, the computation must be completed in even less time. If random errors, which are inevitable in any physical system, are not corrected, the computer's results will be worthless.

In classical computers, small noise is corrected by taking advantage of a concept known as thresholding. It works like the rounding of numbers. Thus, in the transmission of integers where it is known that the error is less than 0.5, if what is received is 3.45, the received value can be corrected to 3.

Further errors can be corrected by introducing redundancy. Thus, if 0 and 1 are transmitted as 000 and 111, then at most one bit error during transmission can be corrected easily: A received 001 would be interpreted as 0, and a received 101 would be interpreted as 1.
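
A minimal sketch of that classical repetition-code idea (my own illustration, not from the article): encode each bit three times and decode by majority vote, which corrects any single flipped bit per block.

```python
def encode(bit):
    """Repeat each bit three times: 0 -> [0, 0, 0], 1 -> [1, 1, 1]."""
    return [bit] * 3

def decode(block):
    """Majority vote over the three received bits; corrects one flipped bit."""
    return 1 if sum(block) >= 2 else 0

print(decode([0, 0, 1]))  # 000 was sent, last bit flipped by noise -> decoded as 0
print(decode([1, 0, 1]))  # 111 was sent, middle bit flipped -> decoded as 1
```

As the next paragraph notes, quantum codes have to achieve a similar effect without being able to copy an unknown qubit, which is part of what makes them so much harder to engineer.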

Quantum error correction codes are a generalization of the classical ones, but there are crucial differences. For one, the unknown qubits cannot be copied to incorporate redundancy as an error correction technique. Furthermore, errors present within the incoming data before the error-correction coding is introduced cannot be corrected.

Quantum cryptography

While the problem of noise is a serious challenge in the implementation of quantum computers, it isn't so in quantum cryptography, where people are dealing with single qubits, and single qubits can remain isolated from the environment for a significant amount of time. Using quantum cryptography, two users can exchange the very large numbers known as keys, which secure data, without anyone being able to break the key exchange system. Such key exchange could help secure communications between satellites and naval ships. But the actual encryption algorithm used after the key is exchanged remains classical, and therefore the encryption is theoretically no stronger than classical methods.

Quantum cryptography is being commercially used in a limited sense for high-value banking transactions. But because the two parties must be authenticated using classical protocols, and since a chain is only as strong as its weakest link, it's not that different from existing systems. Banks are still using a classical-based authentication process, which itself could be used to exchange keys without loss of overall security.

Quantum cryptography technology must shift its focus to quantum transmission of information if it's going to become significantly more secure than existing cryptography techniques.

Commercial-scale quantum computing challenges

While quantum cryptography holds some promise if the problems of quantum transmission can be solved, I doubt the same holds true for generalized quantum computing. Error correction, which is fundamental to a multi-purpose computer, is such a significant challenge in quantum computers that I don't believe they'll ever be built at a commercial scale.


Subhash Kak, Regents Professor of Electrical and Computer Engineering, Oklahoma State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image: Reuters

See more here:

Quantum Computers Are the Ultimate Paper Tiger - The National Interest Online

Quantum expert Robert Sutor explains the basics of Quantum Computing – Packt Hub

What if we could do chemistry inside a computer instead of in a test tube or beaker in the laboratory? What if running a new experiment was as simple as running an app and having it completed in a few seconds?

For this to really work, we would want it to happen with complete fidelity. The atoms and molecules as modeled in the computer should behave exactly like they do in the test tube. The chemical reactions that happen in the physical world would have precise computational analogs. We would need a completely accurate simulation.

If we could do this at scale, we might be able to compute the molecules we want and need.

These might be for new materials for shampoos or even alloys for cars and airplanes. Perhaps we could more efficiently discover medicines that are customized to your exact physiology. Maybe we could get a better insight into how proteins fold, thereby understanding their function, and possibly creating custom enzymes to positively change our body chemistry.

Is this plausible? We have massive supercomputers that can run all kinds of simulations. Can we model molecules in the above ways today?

This article is an excerpt from the book Dancing with Qubits written by Robert Sutor. Robert helps you understand how quantum computing works and delves into the math behind it with this quantum computing textbook.

Let's start with C8H10N4O2, 1,3,7-trimethylxanthine.

This is a very fancy name for a molecule that millions of people around the world enjoy every day: caffeine. An 8-ounce cup of coffee contains approximately 95 mg of caffeine, and this translates to roughly 2.95 × 10^20 molecules. Written out, this is

295,000,000,000,000,000,000 molecules.

A 12-ounce can of a popular cola drink has 32 mg of caffeine, the diet version has 42 mg, and energy drinks often have about 77 mg.
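
As a quick sanity check of that molecule count (a back-of-the-envelope sketch of my own, not from the book excerpt), you can reproduce the 2.95 × 10^20 figure from the 95 mg dose using caffeine's molar mass of roughly 194.19 g/mol and Avogadro's number:

```python
AVOGADRO = 6.022e23            # molecules per mole
MOLAR_MASS_CAFFEINE = 194.19   # grams per mole for C8H10N4O2

grams = 95e-3                              # 95 mg of caffeine in an 8-ounce coffee
molecules = (grams / MOLAR_MASS_CAFFEINE) * AVOGADRO
print(f"{molecules:.3g}")                  # roughly 2.95e+20 molecules
```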

These numbers are large because we are counting physical objects in our universe, which we know is very big. Scientists estimate, for example, that there are between 10^49 and 10^50 atoms in our planet alone.

To put these values in context, one thousand = 10^3, one million = 10^6, one billion = 10^9, and so on. A gigabyte of storage is one billion bytes, and a terabyte is 10^12 bytes.

Getting back to the question I posed at the beginning of this section, can we model caffeine exactly on a computer? We don't have to model the huge number of caffeine molecules in a cup of coffee, but can we fully represent a single molecule at a single instant?

Caffeine is a small molecule and contains protons, neutrons, and electrons. But if we just look at the energy configuration that determines the structure of the molecule and the bonds that hold it all together, the amount of information needed to describe this is staggering. The number of bits, the 0s and 1s, required is approximately 10^48:

1,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000.

And this is just one molecule! Yet somehow nature manages to deal quite effectively with all this information. It handles everything from the single caffeine molecule, to all those in your coffee, tea, or soft drink, to every other molecule that makes up you and the world around you.

How does it do this? We don't know! Of course, there are theories, and these live at the intersection of physics and philosophy. However, we do not need to understand it fully to try to harness its capabilities.

We have no hope of providing enough traditional storage to hold this much information. Our dream of exact representation appears to be dashed. This is what Richard Feynman meant in his quote: "Nature isn't classical."

However, 160 qubits (quantum bits) could hold 2^160 ≈ 1.46 × 10^48 bits while the qubits were involved in a computation. To be clear, I'm not saying how we would get all the data into those qubits, and I'm also not saying how many more we would need to do something interesting with the information. It does give us hope, however.
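
A one-line check of the arithmetic behind that claim (my own sketch): 2^160 comfortably exceeds the ~10^48 bits estimated above.

```python
# 160 qubits correspond to a state space of 2**160 basis states.
print(2 ** 160)             # 1461501637330902918203684832716283019655932542976
print(f"{2 ** 160:.3e}")    # about 1.462e+48
print(2 ** 160 > 10 ** 48)  # True
```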

In the classical case, we will never fully represent the caffeine molecule. In the future, with enough very high-quality qubits in a powerful quantum computing system, we may be able to perform chemistry on a computer.

I can write a little app on a classical computer that can simulate a coin flip. This might be for my phone or laptop.

Instead of heads or tails, let's use 1 and 0. The routine, which I call R, starts with one of those values and randomly returns one or the other. That is, 50% of the time it returns 1 and 50% of the time it returns 0. We have no knowledge whatsoever of how R does what it does.

When you see R, think "random." This is called a fair flip. It is not weighted to slightly prefer one result over the other. Whether we can produce a truly random result on a classical computer is another question. Let's assume our app is fair.

If I apply R to 1, half the time I expect 1 and the other half 0. The same is true if I apply R to 0. I'll call these applications R(1) and R(0), respectively.

If I look at the result of R(1) or R(0), there is no way to tell if I started with 1 or 0. This is just like a secret coin flip where I can't tell whether I began with heads or tails just by looking at how the coin has landed. By "secret coin flip," I mean that someone else has flipped it and I can see the result, but I have no knowledge of the mechanics of the flip itself or the starting state of the coin.

If R(1) and R(0) are randomly 1 and 0, what happens when I apply R twice?

I write this as R(R(1)) and R(R(0)). It's the same answer: a random result with an equal split. The same thing happens no matter how many times we apply R. The result is random, and we can't reverse things to learn the initial value.
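
Here is a minimal sketch of the classical routine R described in the excerpt (my own illustration): it ignores its input entirely, which is exactly why no amount of re-applying it lets you recover the starting value.

```python
import random

def R(bit):
    """Fair classical flip: returns 0 or 1 with equal probability, regardless of input."""
    return random.randint(0, 1)

# Applying R once or many times gives the same 50/50 behavior,
# and the output carries no trace of the initial value.
print(R(1), R(0), R(R(1)), R(R(0)))
```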

There is a catch, though. Quantum computing gives us a flip-like operation, which the book calls H, that can be undone: apply H twice, without looking in between, and you get back the value you started with. In other words, you are not allowed to look at the result of what H does if you want to reverse its effect. If you apply H to 0 or 1, peek at the result, and apply H again to that, it is the same as if you had used R. If you observe what is going on in the quantum case at the wrong time, you are right back at strictly classical behavior.

To summarize using the coin language: if you flip a quantum coin and then don't look at it, flipping it again will return the heads or tails you started with. If you do look, you get classical randomness.
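
The quantum coin flip here presumably corresponds to the Hadamard gate; the numpy sketch below (my own, not from the book excerpt) shows that applying it twice returns the original state exactly, whereas "peeking" - sampling the intermediate superposition - collapses it and destroys that reversibility.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
zero = np.array([1.0, 0.0])                     # the state |0>

once = H @ zero              # equal superposition: amplitudes [0.707, 0.707]
twice = H @ once             # back to |0> exactly, because H is its own inverse
print(np.round(once, 3), np.round(twice, 3))

# "Peeking" means measuring: the superposition collapses to 0 or 1 at random,
# and applying H afterwards no longer returns the original state reliably.
probs = np.abs(once) ** 2
measured = np.random.choice([0, 1], p=probs)
collapsed = np.eye(2)[measured]
print(measured, np.round(H @ collapsed, 3))
```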

A second area where quantum is different is in how we can work with simultaneous values. Your phone or laptop uses bytes as individual units of memory or storage. That's where we get phrases like megabyte, which means one million bytes of information.

A byte is further broken down into eight bits, which we've seen before. Each bit can be a 0 or 1. Doing the math, each byte can represent 2^8 = 256 different numbers composed of eight 0s or 1s, but it can only hold one value at a time. Eight qubits can represent all 256 values at the same time.

This is through superposition, but also through entanglement, the way we can tightly tie together the behavior of two or more qubits. This is what gives us the (literally) exponential growth in the amount of working memory.
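
As a small illustration of that tying-together (my own sketch, not from the excerpt), the two-qubit Bell state has only two nonzero amplitudes, for 00 and 11, so measuring one qubit immediately fixes what the other will read, even though each bit on its own looks random.

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2): the two qubits are entangled.
bell = np.zeros(4)
bell[0b00] = 1 / np.sqrt(2)   # amplitude of |00>
bell[0b11] = 1 / np.sqrt(2)   # amplitude of |11>

# Sample joint measurement outcomes: only 00 and 11 ever appear,
# so the two bits always agree even though each is individually random.
probs = bell ** 2
outcomes = np.random.choice(4, size=10, p=probs)
print([format(o, "02b") for o in outcomes])
```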

Artificial intelligence and one of its subsets, machine learning, are extremely broad collections of data-driven techniques and models. They are used to help find patterns in information, learn from the information, and automatically perform more intelligently. They also give humans help and insight that might have been difficult to get otherwise.

Here is a way to start thinking about how quantum computing might be applicable to large, complicated, computation-intensive systems of processes such as those found in AI and elsewhere. These three cases are in some sense the small, medium, and large ways quantum computing might complement classical techniques:

As I write this, quantum computers are not big data machines. This means you cannot take millions of records of information and provide them as input to a quantum calculation. Instead, quantum may be able to help where the number of inputs is modest but the computations blow up as you start examining relationships or dependencies in the data.

In the future, however, quantum computers may be able to input, output, and process much more data. Even if it is just theoretical now, it makes sense to ask if there are quantum algorithms that can be useful in AI someday.

To summarize, we explored how quantum computing works and different applications of artificial intelligence in quantum computing.

Get this quantum computing book Dancing with Qubits by Robert Sutor today where he has explored the inner workings of quantum computing. The book entails some sophisticated mathematical exposition and is therefore best suited for those with a healthy interest in mathematics, physics, engineering, and computer science.


Read the rest here:

Quantum expert Robert Sutor explains the basics of Quantum Computing - Packt Hub

Will quantum computing overwhelm existing security tech in the near future? – Help Net Security

More than half (54%) of cybersecurity professionals have expressed concerns that quantum computing will outpace the development of other security tech, according to a research from Neustar.

Keeping a watchful eye on developments, 74% of organizations admitted to paying close attention to the technology's evolution, with 21% already experimenting with their own quantum computing strategies.

A further 35% of experts claimed to be in the process of developing a quantum strategy, while just 16% said they were not yet thinking about it. This shift in focus comes as the vast majority (73%) of cybersecurity professionals expect advances in quantum computing to overcome legacy technologies, such as encryption, within the next five years.

Almost all respondents (93%) believe the next-generation computers will overwhelm existing security technology, with just 7% under the impression that true quantum supremacy will never happen.

Despite expressing concerns that other technologies will be overshadowed, 87% of CISOs, CSOs, CTOs and security directors are excited about the potential positive impact of quantum computing. The remaining 13% were more cautious and under the impression that the technology would create more harm than good.

At the moment, we rely on encryption, which is possible to crack in theory, but impossible to crack in practice, precisely because it would take so long to do so, over timescales of trillions or even quadrillions of years, said Rodney Joffe, Chairman of NISC and Security CTO at Neustar.

Without the protective shield of encryption, a quantum computer in the hands of a malicious actor could launch a cyberattack unlike anything we've ever seen.

For both today's major attacks, and also the small-scale, targeted threats that we are seeing more frequently, it is vital that IT professionals begin responding to quantum immediately.

The security community has already launched a research effort into quantum-proof cryptography, but information professionals at every organization holding sensitive data should have quantum on their radar.

Quantum computing's ability to solve our great scientific and technological challenges will also be its ability to disrupt everything we know about computer security. Ultimately, IT experts of every stripe will need to work to rebuild the algorithms, strategies, and systems that form our approach to cybersecurity, added Joffe.
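To put those timescales in context, here is a rough back-of-the-envelope calculation. The figures are illustrative assumptions (a 128-bit key space and a classical attacker testing one billion keys per second), not numbers from the Neustar report.

KEY_BITS = 128
GUESSES_PER_SECOND = 1e9              # assumed classical attacker speed
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

key_space = 2 ** KEY_BITS             # about 3.4e38 possible keys
seconds_to_exhaust = key_space / GUESSES_PER_SECOND
years_to_exhaust = seconds_to_exhaust / SECONDS_PER_YEAR

print(f"Keys to try:      {key_space:.2e}")
print(f"Years to exhaust: {years_to_exhaust:.2e}")   # on the order of 1e22 years

Even granting an attacker vastly more hardware, classical brute force stays hopeless; the concern with quantum computing is that some of its algorithms attack the underlying mathematics rather than simply guessing faster.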

The report also highlighted a steep two-year increase on the International Cyber Benchmarks Index. The index, calculated based on changes in the cybersecurity landscape (including the impact of cyberattacks and the changing level of threat), reached its highest score yet in November 2019 at 28.2. In November 2017, the benchmark sat at just 10.1, an 18-point increase over the last couple of years.

During September and October 2019, security professionals ranked system compromise as the greatest threat to their organizations (22%), with DDoS attacks and ransomware following very closely behind (21%).

Go here to read the rest:

Will quantum computing overwhelm existing security tech in the near future? - Help Net Security

How quantum computing is set to impact the finance industry – IT Brief New Zealand

Attempting to explain quantum computing by comparing quantum and classical computing is like comparing the world wide web to a typewriter: there's simply next to no comparison.

That's not to say the typewriter doesn't have its own essential and commercially unique uses. It's just not the same.

However, explaining the enormous impact quantum computing could have, if it is successfully rolled out and becomes globally accessible, is a bit easier.

Archer Materials Limited (ASX:AXE) CEO Dr Mohammad Choucair outlined the impact quantum computing could have on the finance industry.

In an address to shareholders and academics, Dr Choucair outlined that the global financial assets market is estimated to be worth trillions, and I'm sure it comes as no surprise that any capability to optimise one's investment portfolio or capitalise on market volatility would be of great value to banks, governments and everyone in the audience.

Traders currently use algorithms to understand and, to a degree, predict the value movement in these markets. An accessible and operating quantum chip would provide immeasurable improvements to these algorithms, along with the machine learning that underpins them.
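As a toy illustration of why portfolio optimisation invites better algorithms, quantum or otherwise, the Python sketch below brute-forces every subset of a small asset list under a budget limit. The asset names, costs and expected returns are invented for the example; nothing here comes from Archer or the article.

from itertools import combinations

# Hypothetical figures: name -> (cost, expected return)
assets = {
    "A": (4, 0.9),
    "B": (3, 0.7),
    "C": (5, 1.1),
    "D": (2, 0.4),
    "E": (6, 1.3),
}
budget = 10

best_value, best_pick = 0.0, ()
for r in range(len(assets) + 1):
    for pick in combinations(assets, r):
        cost = sum(assets[a][0] for a in pick)
        value = sum(assets[a][1] for a in pick)
        if cost <= budget and value > best_value:
            best_value, best_pick = value, pick

print(f"Checked {2 ** len(assets)} portfolios; best: {best_pick} (return {best_value:.1f})")

With n assets there are 2^n candidate portfolios, so exhaustive search stops being feasible long before real-world portfolio sizes; that combinatorial wall is where quantum and quantum-inspired optimisation hope to help.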

Archer is a materials technology-focused company that integrates the materials pulled from the ground with converging materials-based technologies that have the capability to impact global industries.

It could have an enormous impact on the computing and electric vehicle industries.

The potential for global consumer and business accessibility to quantum computing is the key differentiator between Archer Materials Ltd. and some of the other players in the market.

The company's 12CQ qubit, invented by Dr Choucair, is potentially capable of storing quantum information at room temperature.

As a result of this, the 12CQ chip could be thrown onto the motherboard of the everyday laptop, or tablet if you're tech-savvy, and operate in coexistence with a classical CPU.

This doesn't mean the everyday user can now go and live out a real-world, real-time simulation of The Matrix.

But it does mean that the laptop you have in your new, European leather tote could potentially perform extremely complex calculations to protect digital financial and communication transactions.

To head the progress of the 12CQ Project, Archer hired Dr Martin Fuechsle, a quantum physicist, who is by no means new to the high-performing Australian quantum tech industry.

In fact, Dr Fuechsle invented the world's first single-atom transistor and offers over 10 years' experience in the design, fabrication and integration of quantum devices.

Archer has moved quickly over the last 12 months and landed some significant 12CQ milestones, including the first-stage assembly of the nanoscale qubit processor chip, along with the accurate positioning of the qubit componentry with nanoscale precision.

Both of these are key success factors for the commercial and technological readiness of the room-temperature chip.

Most recently, Archer announced the successful and scalable assembly of qubit array components of the 12CQ room-temperature qubit processor. Commenting on the success, Dr Choucair announced: This excellent achievement advances our chip technology development towards a minimum viable product and strengthens our commercial readiness by providing credibility to the claim of 12CQ chips being potentially scalable.

To build an array of a few qubits in less than a year means we are well and truly on track in our development roadmap taking us into 2020.

The Archer team has commercial agreements in place with the University of Sydney to access the facilities it needs to build chip prototypes at the Research and Prototype Foundry, within the world-class, $150 million purpose-built Sydney Nanoscience Hub facility.

Continue reading here:

How quantum computing is set to impact the finance industry - IT Brief New Zealand