Honeywell Claims to Have Built the "Most Powerful" Quantum Computer – Interesting Engineering

The race to build the best and fastest quantum computer continues, but it's no longer just Google AI and IBM in the running: Honeywell has joined in too.

Entering in style, Honeywell made the bold statement that "By the middle of 2020, we're releasing the most powerful quantum computer yet."


Google AI and IBM have been in the race for a while now. Just last October, Google claimed to have achieved "quantum supremacy" by creating a quantum computer that could solve a problem that would have taken the world's most powerful supercomputer 10,000 years to figure out.

Immediately after, IBM refuted Google's statement.

Perhaps it's now time for both Google and IBM to move aside and let a third contender join in on the fun. North Carolina-based multinational conglomerate Honeywell has claimed that its quantum computer has twice the power of the best quantum computer that currently exists.

It's an interesting statement to make given there isn't yet a universally accepted standard for the power of a quantum computer.

Honeywell's quantum computer is supposedly extremely stable. Instead of depending on the superconducting chips that Google AI and IBM use, Honeywell's computer relies on ion traps, a technology that holds individual ions in place with electromagnetic fields and moves them around using laser pulses.

It's these ion traps that Honeywell claims will make its quantum computer far more scalable.

We're yet to see a commercially available quantum computer. However, by using qubits instead of bits, these technologies hold real potential to revolutionize computing by solving enormously long and complicated numerical problems.

Despite its rather large claim, Honeywell has yet to reveal the computer itself; as the company stated, we'll just have to wait until the middle of 2020.

More here:
Honeywell Claims to Have Built the "Most Powerful" Quantum Computer - Interesting Engineering

Quantum computing, AI, China, and synthetics highlighted in 2020 Tech Trends report – VentureBeat

The world's tech industry will be shaped by China, artificial intelligence, cancel culture, and a number of other trends, according to the Future Today Institute's 2020 Tech Trends Report.

Now in its 13th year, the document is put together by the Future Today Institute and director Amy Webb, who is also a professor at New York University's Stern School of Business. The report attempts to recognize connections between tech and future uncertainties, such as the outcome of the 2020 U.S. presidential election and the spread of epidemics like coronavirus.

Among the major trends in the report: the 2020s will be the "synthetic decade."

"Soon, we will produce designer molecules in a range of host cells on demand and at scale, which will lead to transformational improvements in vaccine production, tissue production and medical treatments. Scientists will start to build entire human chromosomes, and they will design programmable proteins," the report reads.

Augmentation of senses like hearing and sight, social media scaremongering, new ways to measure trust, and China's role in the growth of AI are also listed among the key takeaways.

Artificial intelligence is again the first item highlighted on the list. The technology, which Webb says is sparking a third wave of computing, comes with positives, like the role AlphaFold can play in discovering cures to diseases, and negatives, like its current impact on the criminal justice system.

Tech giants like Amazon, Facebook, Google, and Microsoft in the United States and Tencent and Baidu in China continue to deliver the greatest impact. Webb predicts how these companies will shape the world in her 2019 book The Big Nine.

"Those nine companies drive the majority of research, funding, government involvement and consumer-grade applications of AI. University researchers and labs rely on these companies for data, tools and funding," the report reads. "Big Nine A.I. companies also wield huge influence over A.I. mergers and acquisitions, funding A.I. startups and supporting the next generation of developers."

Synthetic data, a military-tech industrial complex, and systems made to recognize people were also listed among AI trends.

Visit the Future Today Institute website to read the full report, which states whether a trend requires immediate action. Trends by industry are also highlighted.

Webb urges readers to digest the 366-page report in multiple sittings rather than trying to read it all at once. She typically debuts the report with a presentation to thousands at the SXSW conference in Austin, Texas, but the conference was cancelled due to coronavirus.

Read this article:
Quantum computing, AI, China, and synthetics highlighted in 2020 Tech Trends report - VentureBeat

IDC Survey Finds Optimism That Quantum Computing Will Result in Competitive Advantage – HPCwire

FRAMINGHAM, Mass., March 11, 2020 - A recent International Data Corporation (IDC) survey of IT and business personnel responsible for quantum computing adoption found that improved AI capabilities, accelerated business intelligence, and increased productivity and efficiency were the top expectations of organizations currently investing in cloud-based quantum computing technologies.

Initial survey findings indicate that while cloud-based quantum computing is a young market, and allocated funds for quantum computing initiatives are limited (0-2% of IT budgets), end-users are optimistic that early investment will result in a competitive advantage. The manufacturing, financial services, and security industries are currently leading the way by experimenting with more potential use cases, developing advanced prototypes, and being further along in their implementation status.

Complex technology, skillset limitations, lack of available resources, and cost deter some organizations from investing in quantum computing technology. These factors, combined with broad interdisciplinary interest, have forced quantum computing vendors to develop technology that addresses multiple end-user needs and skill levels. The result has been increased availability of cloud-based quantum computing technology that is more easily accessible and user friendly for new end users. Currently, the preferred types of quantum computing technologies employed across industries include quantum algorithms, cloud-based quantum computing, quantum networks, and hybrid quantum computing.

"Quantum computing is the future industry and infrastructure disruptor for organizations looking to use large amounts of data, artificial intelligence, and machine learning to accelerate real-time business intelligence and innovate product development. Many organizations from many industries are already experimenting with its potential," said Heather West, senior research analyst, Infrastructure Systems, Platforms, and Technology at IDC. "IDC's quantum computing survey provides insight into the demand side of cloud-based quantum computing, including preferred technologies and end-user investment and implementation strategies. These insights should guide the product and service offerings being developed by quantum computing vendors, independent software vendors, and industry partners."

The IDC Special Study, Quantum Computing Adoption Trends: 2020 Survey Findings (IDC #US46049620), provides insights into near-term cloud-based quantum computing investment sentiments as well as end-user cloud-based quantum computing adoption trends that will shape the future of the quantum computing industry. The study reports on findings from IDC's 2020 Quantum Computing End-User Perception and Adoption Trends Survey, which gathered insights from a multitude of sources, including surveys of 520 IT and business users worldwide and in-depth interviews with current quantum computing end users.

The special study is part of IDC's Quantum Computing Special Report series, which also includes end-user insights from a study of 2,700 European organizations and secondary research focusing on quantum computing use cases. Additional findings can be found in the following IDC reports: The Rise of Quantum Computing: A Qualitative Perspective (IDC #US45652919), European Quantum Computing End-User Sentiment: In Search of Business Impact (IDC #EUR146014220) and European Quantum Computing Use Cases Handbook, 2020 (IDC #EUR146014420).

About IDC

International Data Corporation (IDC) is a provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,100 analysts worldwide, IDC offers global, regional, and local expertise on technology and industry opportunities and trends in over 110 countries. IDC's analysis and insight help IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly owned subsidiary of International Data Group (IDG), the world's leading tech media, data and marketing services company. To learn more about IDC, please visit www.idc.com. Follow IDC on Twitter at @IDC and on LinkedIn. Subscribe to the IDC Blog for industry news and insights: http://bit.ly/IDCBlog_Subscribe.

Source: International Data Corporation

View post:
IDC Survey Finds Optimism That Quantum Computing Will Result in Competitive Advantage - HPCwire

Quantum Computing for Everyone – The Startup – Medium

Qubits can dramatically outperform bits in several computing problems, such as database searches and factoring (which, as we will discuss soon, may break your Internet encryption).

An important thing to realize is that qubits can hold much more information than bits can. One bit holds the same amount of information as one qubit: each can hold only one value. However, four bits are needed to store the same amount of information as two qubits. A two-qubit system in equal superposition holds values for four states, which a classical computer would need at least four bits to represent. Eight bits are needed to store the same amount of information as three qubits, since a three-qubit system can store eight states: 000, 001, 010, 011, 100, 101, 110, and 111. This pattern continues.

The graph below provides a visual for the computing power of qubits. The x-axis represents the number of qubits used to hold a certain amount of information. The blue line's y-value represents the number of bits needed to hold the same amount of information as that number of qubits, or 2 to the power of x. The red line's y-value represents the number of qubits themselves (y = x).

Imagine the exponential speedup quantum computing can provide! A gigabyte (8E+09 bits) worth of information can be represented with log(8E+09)/log(2) = 33 (rounded up from 32.9) qubits.
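The arithmetic above is easy to check. Below is a minimal Python sketch (not from the original article) that reproduces this framing: an n-qubit register in equal superposition spans 2^n basis states, so enumerating the same information classically takes 2^n bits, and going the other way takes roughly log2 of the bit count.

```python
import math

def bits_equivalent(num_qubits: int) -> int:
    # Under the article's framing, n qubits in superposition span 2**n
    # basis states, which a classical register needs 2**n bits to enumerate.
    return 2 ** num_qubits

def qubits_needed(num_bits: float) -> int:
    # Inverse direction: how many qubits span at least num_bits states.
    return math.ceil(math.log2(num_bits))

for n in (1, 2, 3, 10):
    print(f"{n} qubit(s) ~ {bits_equivalent(n)} classical bits")

# The gigabyte example from the text: ceil(log2(8e9)) = 33 qubits.
print(qubits_needed(8e9))
```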

Quantum computers are also great at factoring numbers, which leads us to RSA encryption. The security protocol that secures Medium, and probably any other website you've been on, is known as RSA encryption. It relies on the fact that, with current computing resources, it would take a very, very long time to factor the very large number m used in the key (hundreds of digits long in practice), whose only nontrivial factorization is p times q, where both p and q are large prime numbers. However, dividing m by p or q is computationally much easier, and since m divided by q returns p and vice versa, it provides a quick key verification system.
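To make the asymmetry concrete, here is a small Python sketch using toy numbers only (nowhere near real RSA key sizes): multiplying two primes is instantaneous, while recovering them from their product by naive trial division already takes noticeable work, and the gap explodes as the numbers grow.

```python
import time

# Two small primes stand in for RSA's much larger p and q (illustration only).
p, q = 1_000_003, 1_000_033
m = p * q                      # multiplying is effectively instantaneous

def trial_factor(n: int) -> int:
    # Naive trial division: workable for a 13-digit number,
    # utterly hopeless at the hundreds of digits RSA actually uses.
    if n % 2 == 0:
        return 2
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n

start = time.perf_counter()
factor = trial_factor(m)
print(factor, m // factor, f"found in {time.perf_counter() - start:.3f}s")
```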

A quantum algorithm called Shor's algorithm has shown exponential speedup in factoring numbers, which could one day break RSA encryption. But don't buy into the hype yet: as of this writing, the largest number factored by a quantum computer running Shor's algorithm is 21 (into 3 and 7). The hardware has not yet been developed for quantum computers to factor 30-digit numbers, or even 10-digit numbers. Even if quantum computers one day do break RSA encryption, a new security protocol called BB84, which relies on quantum properties, is believed to be safe from quantum computers.

So will quantum computers ever completely replace the classical PC? Not in the foreseeable future.

Quantum computing, while developing very rapidly, is still in its infancy, with research conducted semi-competitively by large corporations such as Google, Microsoft, and IBM. Much of the hardware needed to accelerate quantum computing is not yet available. There are several obstacles to a quantum future, a major one being addressing gate errors and maintaining the integrity of a qubit's state.

However, given the amount of innovation that has happened in the past few years, it seems inevitable that quantum computing will make huge strides during our lifetimes. In addition, complexity theory has shown that there are several cases where classical computers perform better than quantum computers. IBM's quantum computer developers state that quantum computing will probably never completely eliminate classical computers. Instead, in the future we may see a hybrid chip that relies on quantum transistors for certain tasks and classical transistors for others, depending on which is more appropriate.

Excerpt from:
Quantum Computing for Everyone - The Startup - Medium

Army Project Touts New Error Correction Method That May be Key Step Toward Quantum Computing – HPCwire

RESEARCH TRIANGLE PARK, N.C., March 12, 2020 An Army project devised a novel approach for quantum error correction that could provide a key step toward practical quantum computers, sensors and distributed quantum information that would enable the military to potentially solve previously intractable problems or deploy sensors with higher magnetic and electric field sensitivities.

The approach, developed by researchers at Massachusetts Institute of Technology with Army funding, could mitigate certain types of the random fluctuations, or noise, that are a longstanding barrier to quantum computing. These random fluctuations can eradicate the data stored in such devices.

The Army-funded research, published in Physical Review Letters, involves identifying the kinds of noise that are the most likely, rather than casting a broad net to try to catch all possible sources of disturbance.

"The team learned that we can reduce the overhead for certain types of error correction on small-scale quantum systems," said Dr. Sara Gamble, program manager for the Army Research Office, an element of U.S. Army Combat Capabilities Development Command's Army Research Laboratory. "This has the potential to enable increased capabilities in targeted quantum information science applications for the DOD."

The specific quantum system the research team is working with consists of carbon nuclei near a particular kind of defect in a diamond crystal called a nitrogen vacancy center. These defects behave like single, isolated electrons, and their presence enables the control of the nearby carbon nuclei.

But the team found that the overwhelming majority of the noise affecting these nuclei came from one single source: random fluctuations in the nearby defects themselves. This noise source can be accurately modeled, and suppressing its effects could have a major impact, as other sources of noise are relatively insignificant.

The team determined that the noise comes from one central defect, or one central electron, that has a tendency to hop around at random. It jitters. That jitter, in turn, is felt by all the nearby nuclei in a predictable way that can be corrected. The ability to apply this targeted correction successfully is the central breakthrough of this research.

The work so far is theoretical, but the team is actively working on a lab demonstration of this principle in action.

If the demonstration works as expected, this research could become an important component of near- and far-term quantum-based technologies of various kinds, including quantum computers and sensors.

ARL is pursuing research into silicon vacancy quantum systems, which share similarities with the nitrogen vacancy center quantum systems considered by the MIT team. While silicon vacancy and nitrogen vacancy centers have different optical properties, and many basic research questions remain open regarding which type(s) of application each may ultimately be best suited for, the error correction approach developed here has the potential to impact both types of systems and, as a result, accelerate progress at the lab.

About U.S. Army CCDC Army Research Laboratory

CCDC Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command. As the Army's corporate research laboratory, ARL discovers, innovates and transitions science and technology to ensure dominant strategic land power. Through collaboration across the command's core technical competencies, CCDC leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more lethal to win the nation's wars and come home safely. CCDC is a major subordinate command of the U.S. Army Futures Command.

Source: U.S. Army CCDC Army Research Laboratory Public Affairs

See the article here:
Army Project Touts New Error Correction Method That May be Key Step Toward Quantum Computing - HPCwire

Top AI Announcements Of The Week: TensorFlow Quantum And More – Analytics India Magazine

AI is one of the most happening domains in the world right now; it would take a lifetime to skim through all the machine learning research papers released to date. As AI keeps itself in the news through new releases of frameworks, regulations and breakthroughs, we can only hope to capture the best of the lot.

So, here we have compiled a list of the most exciting AI announcements from the past week:

Late last year, Google locked horns with IBM in the race for quantum supremacy. Though the news has centered on how good their quantum computers are, not much has been said about implementation. Now, Google has brought two of its most powerful frameworks, TensorFlow and Cirq, together and released TensorFlow Quantum, an open-source library for the rapid prototyping of quantum ML models.

The Google AI team joined hands with the University of Waterloo, X, and Volkswagen to announce the release of TensorFlow Quantum (TFQ).

TFQ is designed to provide developers with the tools necessary to help the quantum computing and machine learning research communities control and model quantum systems.

The team at Google has also released a TFQ white paper with a review of quantum applications, and each example can be run in-browser via Colab from the research repository.

A key feature of TensorFlow Quantum is the ability to simultaneously train and execute many quantum circuits. This is achieved through TensorFlow's ability to parallelize computation across a cluster of computers, and the ability to simulate relatively large quantum circuits on multi-core computers.
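As a rough illustration of what that looks like in practice, here is a hedged sketch using the cirq and tensorflow_quantum packages (API as of the initial TFQ release; treat the exact calls as an assumption rather than official documentation): a batch of circuits is packed into a tensor of strings and fed through a parametrized-quantum-circuit layer, which is what lets many circuits be simulated and trained together.

```python
import cirq
import sympy
import tensorflow_quantum as tfq

qubit = cirq.GridQubit(0, 0)
theta = sympy.Symbol('theta')

# A one-parameter circuit whose rotation angle is a trainable weight.
model_circuit = cirq.Circuit(cirq.rx(theta)(qubit))

# TFQ batches circuits as a tensor of serialized circuits;
# here, eight empty input circuits form the batch.
inputs = tfq.convert_to_tensor([cirq.Circuit()] * 8)

# The PQC layer simulates the parametrized circuit for the whole batch
# and returns the expectation value of Z for each element.
expectations = tfq.layers.PQC(model_circuit, cirq.Z(qubit))(inputs)
print(expectations.shape)  # (8, 1)
```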

As the devastating news of COVID-19 keeps rising at an alarming rate, AI researchers have given us something to smile about. DeepMind, one of the premier AI research labs in the world, announced last week that it is releasing structure predictions of several proteins related to the virus, which may aid ongoing research into COVID-19. The lab used the latest version of its AlphaFold system to find these structures. AlphaFold is one of the biggest innovations to have come from DeepMind, and it is exhilarating to see it applied to something so critical.

As the pursuit of human-level intelligence in machines intensifies, language modeling will keep surfacing until the very end. One, human language is innately sophisticated; two, training language models from scratch is exhausting and compute-intensive.

The last couple of years have witnessed a flurry of mega releases from the likes of NVIDIA, Microsoft and especially Google. As BERT topped the charts through many of its variants, Google has now announced ELECTRA.

ELECTRA offers the benefits of BERT with more efficient learning. Google also claims that this novel pre-training method outperforms existing techniques given the same compute budget.

The gains are particularly strong for small models; for example, a model trained on one GPU for four days outperformed GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark.

China has been the nation worst hit by COVID-19. However, two of the biggest AI breakthroughs have come from Chinese soil. Last month, Baidu announced how its toolkit reduces prediction time. Last week, another Chinese giant, Alibaba, announced that its new AI system detects the coronavirus from patients' CT scans with 96% accuracy. Alibaba's founder Jack Ma has fueled his team's vaccine development efforts with a $2.15M donation.

Facebook AI has released its in-house feature for converting a two-dimensional photo into a short video-like rendering that gives a more realistic, three-dimensional view of the object in the picture. The system infers the 3D structure of any image, whether it is a new shot just taken on an Android or iOS device with a standard single camera, or a decades-old image recently uploaded to a phone or laptop.

The feature had previously been available only on high-end phones with a dual-lens portrait mode, but it will now be available on every mobile device, even those with a single, rear-facing camera. To bring this new visual format to more people, the researchers at Facebook used state-of-the-art ML techniques to produce 3D photos from virtually any standard 2D picture.

One significant implication of this feature can be an improved understanding of 3D scenes that can help robots navigate and interact with the physical world.

As the whole world focused on the race to quantum supremacy between Google and IBM, Honeywell has quietly been building what it claims is the most powerful quantum computer yet. And it plans to release it by the middle of 2020.

"Thanks to a breakthrough in technology, we're on track to release a quantum computer with a quantum volume of at least 64, twice that of the next alternative in the industry. There are a number of industries that will be profoundly impacted by the advancement and ultimate application of at-scale quantum computing," said Tony Uttley, President of Honeywell Quantum Solutions, in the official press release.

The outbreak of COVID-19 has created panic globally, and rightfully so. Many flagship conferences have either been cancelled or moved to a virtual environment.

Nvidia's flagship GPU Technology Conference (GTC), which was supposed to take place in San Francisco in the last week of March, was cancelled due to fears of the COVID-19 coronavirus.

Google Cloud has also cancelled its upcoming event, Google Cloud Next '20, which was slated to take place April 6-8 at the Moscone Center in San Francisco. "Due to the growing concern around the coronavirus (COVID-19), and in alignment with the best practices laid out by the CDC, WHO and other relevant entities, Google Cloud has decided to reimagine Google Cloud Next '20," the company stated on its website.

ICLR 2020, one of the most popular conferences for ML researchers, has also announced that it is cancelling its physical conference this year due to growing concerns about COVID-19 and shifting the event to a fully virtual conference.

The ICLR organizers also issued a statement saying that all accepted papers at the virtual conference will be presented via pre-recorded videos.


View original post here:
Top AI Announcements Of The Week: TensorFlow Quantum And More - Analytics India Magazine

An Invention To Change Quantum Computing Technology Has Been Realized – Somag News

Scientists from Australia have managed to control the nucleus of a single atom using only electric fields. The idea, proposed by Nobel laureate Nicolaas Bloembergen in 1961, has now come true.

Scientists from the University of New South Wales (UNSW) in Australia managed to control the nucleus of a single atom using electric fields. The scientists say their discovery could change the evolution of quantum computers.

Controlling the spin of an atom without any oscillating magnetic field could open up a very wide range of applications. Magnetic fields created with large coils and high currents have many uses today, but that technology requires a lot of space. Electric fields, by contrast, can be produced at the tip of a single electrode, and using them to control atomic nuclei would make it much easier to control atoms embedded in nanoelectronic devices.

Andrea Morello, a professor at UNSW, says the discovery could replace nuclear magnetic resonance, which is widely used in fields such as medicine, chemistry and mining. Morello uses a pool table to explain the difference between controlling nuclear spins with magnetic and electric fields. "Magnetic resonance is like trying to move a particular ball on a pool table by lifting and shaking the entire table. We aim to move one ball, but we move everything else along with it. With the electric resonance breakthrough, we move the billiard cue to hit exactly the ball we want," Morello said.

Andrea Morello and his team had been working on a different problem when they unwittingly found a way to control nuclear spins with electric fields, the approach Nobel laureate Nicolaas Bloembergen proposed in 1961.

"I have been working on spin resonance for 20 years of my life, but honestly, I had never heard of the idea of nuclear electric resonance. We rediscovered this effect by accident," Morello said. The theory had been almost forgotten before Morello and his team made the discovery of nuclear electric resonance.

The researchers wanted to perform nuclear magnetic resonance using antimony, which has a large nuclear spin. By doing this, the scientists wanted to explore the boundary between the quantum world and the classical world. However, after starting the experiment, they noticed something was wrong. The nucleus of the antimony did not respond at certain frequencies but gave strong reactions at others. The researchers realized that this was electric resonance rather than magnetic resonance.

They then produced a device consisting of an antimony atom and a special antenna optimized to create a high-frequency magnetic field to control the nucleus of the atom. When high power was applied to the antenna, they found that the atomic nucleus could be moved.

The scientists, having managed to control the atomic nucleus with an electric field, used a computer model to understand exactly how the electric field affects the spin of the nucleus. In the model, the electric field distorted the atomic bonds around the nucleus, causing the nucleus to reorient.

The scientists think their discovery of nuclear electric resonance will open a huge horizon of applications. The discovery could be used to create highly sensitive electromagnetic field sensors and to aid the development of quantum computers.

Link:
An Invention To Change Quantum Computing Technology Has Been Realized - Somag News

Researchers Gain Control Over Transparency With Tuning Optical Resonators – SciTechDaily

In the quantum realm, under some circumstances and with the right interference patterns, light can pass through opaque media.

This feature of light is more than a mathematical trick; optical quantum memory, optical storage and other systems that depend on interactions of just a few photons at a time rely on the process, called electromagnetically induced transparency, also known as EIT.

Because of its usefulness in existing and emerging quantum and optical technologies, researchers are interested in the ability to manipulate EIT without the introduction of an outside influence, such as additional photons that could perturb the already delicate system. Now, researchers at the McKelvey School of Engineering at Washington University in St. Louis have devised a fully contained optical resonator system that can be used to turn transparency on and off, allowing for a measure of control that has implications across a wide variety of applications.

The group published the results of the research, conducted in the lab of Lan Yang, the Edwin H. & Florence G. Skinner Professor in the Preston M. Green Department of Electrical & Systems Engineering, in a paper titled "Electromagnetically Induced Transparency at a Chiral Exceptional Point" in the January 13, 2020, issue of Nature Physics.

An optical resonator system is analogous to an electronic resonant circuit but uses photons instead of electrons. Resonators come in different shapes, but they all involve reflective material that captures light for a period of time as it bounces back and forth between or around its surface. These components are found in anything from lasers to high precision measuring devices.

For their research, Yang's team used a type of resonator known as a whispering gallery mode resonator (WGMR). It operates in a manner similar to the whispering gallery at St. Paul's Cathedral, where a person on one side of the room can hear a person whispering on the other side. What the cathedral does with sound, however, WGMRs do with light, trapping light as it reflects and bounces along the curved perimeter.

In an idealized system, a fiber optic line intersects with a resonator, a ring made of silica, at a tangent. When a photon in the line meets the resonator, it swoops in, reflecting and propagating along the ring, exiting into the fiber in the same direction it was initially headed.

Reality, however, is rarely so neat.

"Fabrication in high-quality resonators is not perfect," Yang said. "There is always some defect, or dust, that scatters the light." What actually happens is that some of the scattered light changes direction, leaving the resonator and traveling back in the direction whence it came. The scattering effects disperse the light, and it doesn't exit the system.

Imagine a box around the system: If the light entered the box from the left, then exited out the right side, the box would appear transparent. But if the light that entered was scattered and didnt make it out, the box would seem opaque.

Because manufacturing imperfections in resonators are inconsistent and unpredictable, so too is the resulting transparency. Light that enters such systems scatters and ultimately loses its strength; it is absorbed into the resonator, rendering the system opaque.

In the system devised by co-first authors Changqing Wang, a PhD candidate, and Xuefeng Jiang, a researcher in Yang's lab, there are two WGMRs indirectly coupled by a fiber optic line. The first resonator is higher in quality, having just one imperfection. Wang added a tiny pointed material that acts like a nanoparticle to the high-quality resonator. By moving the makeshift particle, Wang was able to tune it, controlling the way the light inside scatters.

Importantly, he was also able to tune the resonator to what's known as an exceptional point, a point at which one and only one state can exist. In this case, the state is the direction of light in the resonator: clockwise or counterclockwise.

For the experiment, researchers directed light toward a pair of indirectly coupled resonators from the left (see illustration). The lightwave entered the first resonator, which was tuned to ensure light traveled clockwise. The light bounced around the perimeter, then exited, continuing along the fiber to the second, lower-quality resonator.

There, the light was scattered by the resonator's imperfections, and some of it began traveling counterclockwise along the perimeter. The light wave then returned to the fiber, but headed back toward the first resonator.

Critically, the researchers not only used the nanoparticle in the first resonator to make the light waves move clockwise, they also tuned it so that, as the light waves propagated back and forth between resonators, a special interference pattern would form. As a result of that pattern, the light in the resonators was canceled out, so to speak, allowing the light traveling along the fiber to eke by, rendering the system transparent.

It would be as if someone shined a light on a brick wall and no light got through. But then another person shined a second flashlight on the same spot and, all of a sudden, that spot in the wall became transparent.

One of the more important and interesting functions of EIT is its ability to create slow light. The speed of light in a vacuum is always constant, 300,000,000 meters per second, but its effective speed can change based on the properties of the medium through which it moves.

"With EIT, people have slowed light down to eight meters per second," Wang said. "That can have significant influence on the storage of light information. If light is slowed down, we have enough time to use the encoded information for optical quantum computing or optical communication." If engineers can better control EIT, they can more reliably depend on slow light for these applications.

Manipulating EIT could also be used in the development of long-distance communication. A tuning resonator can be indirectly coupled to another resonator kilometers away along the same fiber optic cable. "You could change the transmitted light down the line," Yang said.

This could be critical for, among other things, quantum encryption.

Reference: "Electromagnetically induced transparency at a chiral exceptional point" by Changqing Wang, Xuefeng Jiang, Guangming Zhao, Mengzhen Zhang, Chia Wei Hsu, Bo Peng, A. Douglas Stone, Liang Jiang and Lan Yang, 13 January 2020, Nature Physics. DOI: 10.1038/s41567-019-0746-7

The research team also included collaborators at Yale University, University of Chicago and the University of Southern California.

See the original post here:
Researchers Gain Control Over Transparency With Tuning Optical Resonators - SciTechDaily

NIST Works on the Industries of the Future in Buildings from the Past – Nextgov

The president's budget request for fiscal 2021 proposed $738 million to fund the National Institute of Standards and Technology, a dramatic reduction from the more than $1 billion in enacted funds allocated for the agency this fiscal year.

The House Science, Space and Technology Committee's Research and Technology Subcommittee on Wednesday held a hearing to home in on NIST's reauthorization, but instead of focusing on relevant budget considerations, lawmakers had other plans.

"We're disappointed by the president's destructive budget request, which proposes over a 30% cut to NIST programs," Subcommittee Chairwoman Rep. Haley Stevens, D-Mich., said at the top of the hearing. "But today, I don't want to dwell on a proposal that we know Congress is going to reject ... today I would like this committee to focus on improving NIST and getting the agency the tools it needs to do better, to do its job."

Per Stevens' suggestion, Under Secretary of Commerce for Standards and Technology and NIST Director Walter Copan reflected on some of the agency's dire needs and offered updates and his view on a range of its ongoing programs and efforts.

NIST's Facilities Are in Bad Shape

President Trump's budget proposal for fiscal 2021 requests only $60 million in funds for facility construction, down from the $118 million enacted for fiscal 2020, and comes at a time when the agency's workspaces need upgrades.

"Indeed the condition of NIST facilities are challenging," Copan explained. "Over 55% of NIST's facilities are considered in poor to critical condition per [Commerce Department] standards, and so it does provide some significant challenges for us."

Some of the agency's decades-old facilities and infrastructure are deteriorating, and Copan added that he'd recently heard NIST's deferred maintenance backlog has hit more than $775 million. If lawmakers or the public venture out to visit some of the agency's facilities, "you'll see the good, the bad, and the embarrassingly bad," he said. Those conditions are a testament to the resilience and the commitment of NIST's people, who can work in sometimes challenging, outdated environments, Copan said.

The director noted that some creative solutions have already been proposed to address the issue, including the development of a federal capital revolving fund. The agency is also looking creatively at combining maintenance with lease options for some of its facilities, in hopes that it can move more rapidly by cycling officials out of laboratories to launch rebuilding and renovation processes.

"It's one of my top priorities as the NIST director to have our NIST people work in 21st-century facilities that we can be proud of and that enable the important work of NIST for the nation," Copan said.

Advancing Efforts in Artificial Intelligence and Quantum Computing

The president's budget request placed a sharp focus on industries of the future, which will be powered by many emerging technologies, particularly quantum computing and AI.

During the hearing and in his written testimony, Copan highlighted some of NIST's work in both areas. The agency has helped shape an entire generation of quantum science over the last century, and a significant portion of quantum scientists from around the globe have trained at the agency's facilities. Some of NIST's more recent quantum achievements include supporting the development of a quantum logic clock and helping steer advancements in quantum simulation. Following a recent mandate from the Trump administration, the agency is also in the midst of instituting the Quantum Economic Development Consortium, or QEDC, which aims to advance industry collaboration to expand the nation's leadership in quantum research and development.

"Looking forward, over the coming years NIST will focus a portion of its quantum research portfolio on the grand challenge of quantum networking," Copan's written testimony said. "Serving as the basis for secure and highly efficient quantum information transmission that links together multiple quantum devices and sensors, quantum networks will be a key element in the long-term evolution of quantum technologies."

Though there were cuts across many areas, the president's budget request also proposed a doubling of NIST's funding in artificial intelligence, and Copan said the technology is already broadly applied across all of the agency's laboratories to help improve productivity.

Going forward, and with increased funding, he laid out some of the agency's top priorities, noting that "there's much work to be done in developing tools to provide insights into artificial intelligence programs, and there is also important work to be done in standardization, so that the United States can lead the world in the application of [AI] in a trustworthy and ethical manner."

Standardization to Help the U.S. Lead in 5G

Rep. Frank Lucas, R-Okla., asked Copan to weigh in on the moves China is making across the fifth-generation wireless technology landscape, and the moves the U.S. needs to make to lead, not just compete, in that specific area.

"We have entered in the United States, as we know, a hyper-competitive environment with China as a lead in activities related to standardization," Copan responded.

The director said that officials see, in some ways, that the standardization process "has been weaponized," and that the free market economy represented by the United States now needs to lead in more effective internal coordination and incentivize industry to participate in the standards process. Though U.S. officials have already seen the rules of fair play bent or outright broken by other players, NIST and others need to help improve information sharing across American standards-focused stakeholders, which could, in turn, accelerate adoption of the emerging technology.

"We want the best technologies in the world to win and we want the United States to continue to be the leader in not only delivering those technologies, but securing the intellectual properties behind them and translating those into market value," he said.

See the original post here:
NIST Works on the Industries of the Future in Buildings from the Past - Nextgov

Deltec Bank, Bahamas Quantum Computing Will have Positive Impacts on Portfolio Optimization, Risk Analysis, Asset Pricing, and Trading Strategies -…

Quantum computing is expected to be fully integrated with the financial sector within five to ten years. These machines, sometimes loosely referred to as supercomputers, are capable of highly advanced processing, taking in massive amounts of data and solving a problem in a fraction of the time the best traditional computer on the market would need.

Traditional Computer vs. Quantum Computing

A typical computer today stores information in the form of bits, represented in binary (0s and 1s). In quantum computing, the basic units are known as qubits. A qubit takes in similar input but, rather than reducing it to a definite 0 or 1, can hold a superposition of both, which lets data be processed across far more states at once and opens up almost immeasurable gains in computational speed.

Quantum Computing in Banking

Let's examine personal encryption in banking, for example. Using a security format called RSA-2048, traditional computers would need roughly 10^34 steps to break the security algorithm. Even with a processor capable of performing a trillion calculations per second, those steps translate to around 317 trillion years to break the secure code. While it is possible, it is not practical enough for a cyber-criminal to make it worthwhile.

A quantum computer, on the other hand, would be able to resolve this problem in about 10^7 steps. With a basic quantum computer running at one million calculations per second, this translates to ten seconds.
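Those headline figures follow from straightforward arithmetic, sketched below in Python (the step counts and machine speeds are the article's round numbers, not measurements):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

# Classical estimate: ~1e34 steps at 1e12 steps per second.
classical_years = (10**34 / 10**12) / SECONDS_PER_YEAR
print(f"classical: ~{classical_years:.2e} years")   # roughly 3e14, i.e. hundreds of trillions of years

# Quantum estimate: ~1e7 steps at 1e6 steps per second.
print(f"quantum: ~{10**7 / 10**6:.0f} seconds")     # 10 seconds
```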

While this example centered on breaking complex security, many other use cases can emerge from the use of quantum computing.

Trade Transaction Settlements

Researchers at Barclays bank have been working on a proof of concept for the transaction settlement process. Because settlements can only be worked on a transaction-by-transaction basis, they easily queue up, only to be released in batches. When a processing window opens, as many trades as possible are settled.

Trades are complex by their very nature, and traders can end up tapping into funds before a transaction has cleared. Trades will only be settled if the funds are available or if a collateral credit facility has been arranged.

While you could probably handle a small number of trades in your head, you would need to rely on a computer after about 10-20 transactions. The same can be said of our current computational power, which is nearing the point where it needs more and more time to resolve hundreds of trades at a time.

With a seven-qubit system, quantum computing would be able to run a far greater number of complex trades in the time it takes a traditional system to complete its trades. It would take the equivalent of about two hundred traditional computers to match that speed.

Simulating a Future Product Valuation

Researchers at JP Morgan have been working on a concept that simulates the future value of a financial product. The team is testing quantum computers on the complex, compute-intensive pricing calculations that normally take traditional computers hours to complete. This is a problem because newer algorithms add greater complexity each year, to the point where the calculations are becoming impractical to perform.

The research team has discovered that using quantum computing resulted in finding a resolution to the problem in mere seconds.

Final Thoughts

Banks are already running successful tests with quantum computing to resolve extremely resource-intensive calculations for financial problem scenarios. From trading to fraud detection to AML, this is a technology not to be overlooked.

According to Deltec Bank, Bahamas, the positive impact quantum computing will have on portfolio optimization, risk analysis, asset pricing, and trading strategies is just the tip of the iceberg of what this technology could provide.

Disclaimer: The author of this text, Robin Trehan, has an undergraduate degree in economics, a master's in international business and finance, and an MBA in electronic business. Trehan is Senior VP at Deltec International, http://www.deltecbank.com. The views, thoughts, and opinions expressed in this text are solely the views of the author, and do not necessarily reflect the views of Deltec International Group, its subsidiaries and/or employees.

About Deltec Bank

Headquartered in The Bahamas, Deltec is an independent financial services group that delivers bespoke solutions to meet clients' unique needs. The Deltec group of companies includes Deltec Bank & Trust Limited, Deltec Fund Services Limited, Deltec Investment Advisers Limited, Deltec Securities Ltd. and Long Cay Captive Management.

Media Contact
Company Name: Deltec International Group
Contact Person: Media Manager
Email: Send Email
Phone: 242 302 4100
Country: Bahamas
Website: https://www.deltecbank.com/

Here is the original post:
Deltec Bank, Bahamas Quantum Computing Will have Positive Impacts on Portfolio Optimization, Risk Analysis, Asset Pricing, and Trading Strategies -...

Alibaba using machine learning to fight coronavirus with AI – Gigabit Magazine – Technology News, Magazine and Website

Chinese ecommerce giant Alibaba has announced a breakthrough in natural language processing (NLP) through machine learning.

NLP is a key technology in the field of speech technologies such as machine translation and automatic speech recognition. The company's DAMO Academy, a global research program, has made a breakthrough in machine reading techniques with applications in the fight against coronavirus.

Alibaba not only topped the GLUE Benchmark rankings, a table measuring the performance of competing NLP models, despite competition from the likes of Google, Facebook and Microsoft, but also beat the human baselines, signifying that its model could even outperform a human at understanding language. Applications include sentiment analysis, textual entailment (i.e. determining whether one sentence logically follows from another) and question answering.


With the solution already deployed in technologies ranging from AI chatbots to search engines, it is now finding use in the analysis of healthcare records by centers for disease control in cities across China.

"We are excited to achieve a new breakthrough in driving research of the NLP development," said Si Luo, head of NLP Research at Alibaba DAMO Academy. "Not only is NLP a core technology underpinning Alibaba's various businesses, which serve hundreds of millions of customers, but it also becomes a critical technology now in fighting the coronavirus. We hope we can continue to leverage our leading technologies and contribute to the community during this difficult time."

Other AI initiatives put forth by the company for use in containing the coronavirus epidemic include technology to assist in the diagnosis of the virus. The company also made its Alibaba Cloud computing platform free for research organisations seeking to sequence the virus genome.

Read more:
Alibaba using machine learning to fight coronavirus with AI - Gigabit Magazine - Technology News, Magazine and Website

Doing machine learning the right way – MIT News

The work of MIT computer scientist Aleksander Madry is fueled by one core mission: doing machine learning the right way.

Madry's research centers largely on making machine learning, a type of artificial intelligence, more accurate, efficient, and robust against errors. In his classroom and beyond, he also worries about questions of ethical computing as we approach an age where artificial intelligence will have great impact on many sectors of society.

"I want society to truly embrace machine learning," says Madry, a recently tenured professor in the Department of Electrical Engineering and Computer Science. "To do that, we need to figure out how to train models that people can use safely, reliably, and in a way that they understand."

Interestingly, his work with machine learning dates back only a couple of years, to shortly after he joined MIT in 2015. In that time, his research group has published several critical papers demonstrating that certain models can be easily tricked into producing inaccurate results and showing how to make them more robust.

In the end, he aims to make each model's decisions more interpretable by humans, so researchers can peer inside to see where things went awry. At the same time, he wants to enable nonexperts to deploy the improved models in the real world for, say, helping diagnose disease or control driverless cars.

"It's not just about trying to crack open the machine-learning black box. I want to open it up, see how it works, and pack it back up, so people can use it without needing to understand what's going on inside," he says.

For the love of algorithms

Madry was born in Wroclaw, Poland, where he attended the University of Wroclaw as an undergraduate in the mid-2000s. While he harbored interest in computer science and physics, "I actually never thought I'd become a scientist," he says.

An avid video gamer, Madry initially enrolled in the computer science program with intentions of programming his own games. But in joining friends in a few classes in theoretical computer science and, in particular, theory of algorithms, he fell in love with the material. Algorithm theory aims to find efficient optimization procedures for solving computational problems, which requires tackling difficult mathematical questions. "I realized I enjoy thinking deeply about something and trying to figure it out," says Madry, who wound up double-majoring in physics and computer science.

When it came to delving deeper into algorithms in graduate school, he went to his first choice: MIT. Here, he worked under both Michel X. Goemans, who was a major figure in applied math and algorithm optimization, and Jonathan A. Kelner, who had just arrived at MIT as a junior faculty member working in that field. For his PhD dissertation, Madry developed algorithms that solved a number of longstanding problems in graph algorithms, earning the 2011 George M. Sprowls Doctoral Dissertation Award for the best MIT doctoral thesis in computer science.

After his PhD, Madry spent a year as a postdoc at Microsoft Research New England, before teaching for three years at the Swiss Federal Institute of Technology Lausanne, which Madry calls "the Swiss version of MIT." But his alma mater kept calling him back: "MIT has the thrilling energy I was missing. It's in my DNA."

Getting adversarial

Shortly after joining MIT, Madry found himself swept up in a novel science: machine learning. In particular, he focused on understanding the re-emerging paradigm of deep learning, an artificial-intelligence approach that uses multiple computing layers to extract high-level features from raw input, such as using pixel-level data to classify images. MIT's campus was, at the time, buzzing with new innovations in the domain.

But that begged the question: Was machine learning all hype or solid science? "It seemed to work, but no one actually understood how and why," Madry says.

Answering that question set his group on a long journey, running experiment after experiment on deep-learning models to understand the underlying principles. A major milestone in this journey was an influential paper they published in 2018, developing a methodology for making machine-learning models more resistant to adversarial examples. Adversarial examples are slight perturbations to input data that are imperceptible to humans, such as changing the color of one pixel in an image, yet cause a model to make inaccurate predictions. They illuminate a major shortcoming of existing machine-learning tools.
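To give a flavor of how such perturbations are constructed, here is a minimal NumPy sketch of a fast-gradient-sign-style attack on a toy logistic-regression "classifier" (random weights, purely illustrative). It is not the projected-gradient-descent adversarial training method from the group's 2018 paper, just the basic idea that a tiny, bounded nudge to every input feature can swing a model's prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained image classifier: logistic regression
# on a flattened 8x8 "image". Weights are random, purely for illustration.
w = rng.normal(size=64)
b = 0.0
x = rng.normal(size=64)                        # the original "image"

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))  # probability of class 1

# FGSM-style step: assuming the true label is 1, nudge every pixel by
# epsilon in the direction that lowers the class-1 score (raises the loss).
# For this linear model that direction is simply -sign(w).
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print("clean score:      ", round(predict(x), 3))
print("adversarial score:", round(predict(x_adv), 3))
print("max pixel change: ", np.abs(x_adv - x).max())   # bounded by epsilon
```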

Continuing this line of work, Madry's group showed that the existence of these mysterious adversarial examples may contribute to how machine-learning models make decisions. In particular, models designed to differentiate images of, say, cats and dogs make decisions based on features that do not align with how humans make classifications. Simply changing these features can make the model consistently misclassify cats as dogs, without changing anything in the image that's really meaningful to humans.

Results indicated that some models, which might be used to, say, identify abnormalities in medical images or help autonomous cars identify objects in the road, aren't exactly up to snuff. "People often think these models are superhuman, but they didn't actually solve the classification problem we intend them to solve," Madry says. "And their complete vulnerability to adversarial examples was a manifestation of that fact. That was an eye-opening finding."

That's why Madry seeks to make machine-learning models more interpretable to humans. New models he's developed show how much certain pixels in the images the system is trained on can influence the system's predictions. Researchers can then tweak the models to focus on pixel clusters more closely correlated with identifiable features, such as an animal's snout, ears, and tail. In the end, that will help make the models more humanlike, or superhumanlike, in their decisions. To further this work, Madry and his colleagues recently founded the MIT Center for Deployable Machine Learning, a collaborative research effort within the MIT Quest for Intelligence that is working toward building machine-learning tools ready for real-world deployment.
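A hedged sketch of the underlying idea, per-pixel influence on a prediction, is shown below; it uses simple finite differences on a toy black-box model rather than the specific attribution methods in Madry's papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Black-box stand-in for a trained model: any function mapping an image to a score.
W1, W2 = rng.normal(size=(16, 64)), rng.normal(size=16)
def score(x):
    return float(np.tanh(W1 @ x) @ W2)

x = rng.normal(size=64)                      # flattened 8x8 "image"

# Finite-difference saliency: how much does wiggling each pixel move the score?
eps = 1e-4
saliency = np.array([
    (score(x + eps * np.eye(64)[i]) - score(x - eps * np.eye(64)[i])) / (2 * eps)
    for i in range(64)
])

# The most influential "pixels" are the ones a researcher would inspect,
# or steer the model toward, as described above.
top = np.argsort(np.abs(saliency))[-5:][::-1]
print("most influential pixel indices:", top)
```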

"We want machine learning not just as a toy, but as something you can use in, say, an autonomous car, or health care. Right now, we don't understand enough to have sufficient confidence in it for those critical applications," Madry says.

Shaping education and policy

Madry views artificial intelligence and decision making (AI+D is one of the three new academic units in the Department of Electrical Engineering and Computer Science) as the interface of computing that's going to have the biggest impact on society.

In that regard, he makes sure to expose his students to the human aspect of computing. In part, that means considering the consequences of what they're building. Often, he says, students will be overly ambitious in creating new technologies, but they haven't thought through potential ramifications for individuals and society. "Building something cool isn't a good enough reason to build something," Madry says. "It's about thinking about not if we can build something, but if we should build something."

Madry has also been engaging in conversations about laws and policies to help regulate machine learning. A point of these discussions, he says, is to better understand the costs and benefits of unleashing machine-learning technologies on society.

"Sometimes we overestimate the power of machine learning, thinking it will be our salvation. Sometimes we underestimate the cost it may have on society," Madry says. "To do machine learning right, there's still a lot left to figure out."

Read the rest here:
Doing machine learning the right way - MIT News

Management Styles And Machine Learning: A Case Of Life Imitating Art – Forbes

Oscar Wilde coined a great phrase in saying that life imitates art far more than art imitates life. What he meant by that is that through art, we appreciate life more. Art is an expression of life, and it helps us better understand ourselves and our surroundings. In business, learning how life imitates art, and even how art imitates life, is an intriguing way to capitalize on the value they both can bring.

When looking at organizations, there is a lot of art and life in how the business is run. Leadership styles range from specific and precise to open and adaptable, often based on a manager's own personal life experience. An example on the art side of the equation is machine learning; it's something we have created as an expression of what it means to be human. Both have common benefits that can propel a business forward.

The Art Of Machine Learning

Although it's been around for many years, it's only in the past 10 years that machine learning has become more mainstream. In the past, technology used traditional programming, where programmers hard-coded rigid sets of instructions that tried to accommodate every possible scenario. Outputs were predetermined and could not respond to new scenarios without additional coding. Not only does this require continuous updates, costing lost time and money, but it can also result in outputs that are inaccurate for problems the program could not predict.

Computer programming has advanced to where programs are now capable of learning and evolving on their own. With traditional programming, an input was run through a program and created an output. Now, with machine learning, an input is run through a trainable data model, so the output evolves as the model is continually learning and adapting, much in the same way a human brain does.

Take a game of checkers. Using traditional programming, it is unlikely that the computer will ever beat the programmer, who is the trainer of that software. Traditional programming limits the game to a set of rules based on that programmer's knowledge of the game. With machine learning, by contrast, the program is based on a self-improving model that is not only able to make decisions for each move, but also evolves the program, learning which moves are most likely to lead to a win.

After several matches, the program will learn from its "experience" in prior matches how to play even better than a human. In the same way humans learn which moves to take and which to avoid, the program also keeps track of that information and eventually becomes a better player.
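As a loose illustration of that self-improving loop (not any particular checkers engine), a program can keep a learned value for every board position it has visited and nudge those values toward the eventual result of each game. The positions, moves and learning rate below are hypothetical placeholders.

    import random
    from collections import defaultdict

    values = defaultdict(float)   # board position -> learned estimate of how good it is
    LEARNING_RATE = 0.1

    def choose_move(legal_moves):
        # Mostly pick the move whose resulting position has the best learned value,
        # with occasional random exploration so new lines of play still get tried.
        if random.random() < 0.1:
            return random.choice(legal_moves)
        return max(legal_moves, key=lambda move: values[move.resulting_position])

    def learn_from_game(positions_visited, result):
        # result is +1 for a win and -1 for a loss; every position seen in the game
        # is nudged toward the outcome, so winning lines score higher in future games.
        for position in positions_visited:
            values[position] += LEARNING_RATE * (result - values[position])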

A Different Take On Management Styles

Now let's consider management styles as part of life. In the business world, you typically find two types of managers: those who give explicit instructions that need to be followed in order to accomplish a goal, and those who provide the goal without detailed instructions, but offer guidance and counseling as needed to get there.

In the first scenario, the worker needs only to apply the instructions received to get the work done, which makes the manager happy with the result. But the results are limited and don't take into account unexpected variables and changes that happen as a part of the business. In this micromanagement style, the worker often has to go back to the manager for additional instructions, costing both time and money, similar to traditional programming.

In the second scenario, the worker is expected to find their own path to achieving the goal by learning, trying and testing different options. From that experience, the worker modifies their process based on the results of their efforts, much like machine learning. The process is much more flexible and accommodating, as it's able to be adjusted easily, on the fly. Like machine learning, this strategic style of management provides flexibility and autonomy, saving time and producing much better results.

Leverage Common Benefits For Optimal Results

Both machine learning and strategic management styles have some common benefits, if done well. One benefit is scalability. It's nearly impossible to scale an organization with a micromanagement style. As the company grows, managers will have increasingly less time to spend with workers. The same is true of traditional programming. Unless the program can learn and change on its own, it will never be able to scale to keep pace with the business.

Another common benefit is the ability to outsmart the competition. Companies that embrace machine learning's intelligent algorithms and better analytical power will have a leg up on those organizations that do not. They can take advantage of the automated learning capabilities built into machine learning. In the same way, those companies that take advantage of a strategic management style over micromanaging will enable workers to be self-sufficient and contribute the full power of their wisdom.

Oscar Wilde was surprisingly prophetic when he talked about life imitating art. The best organizations are those that leverage the commonalities of both life and art: of machine learning and strategic management styles. As life can be made fuller by imitating art, art and life together help organizations realize their greatest potential.

More:
Management Styles And Machine Learning: A Case Of Life Imitating Art - Forbes

Navigating the New Landscape of AI Platforms – Harvard Business Review

Executive Summary

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tooling for AI systems than they do building the AI systems themselves. Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling, and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies.

Nearly two years ago, Seattle Sport Sciences, a company that provides data to soccer club executives, coaches, trainers and players to improve training, made a hard turn into AI. It began developing a system that tracks ball physics and player movements from video feeds. To build it, the company needed to label millions of video frames to teach computer algorithms what to look for. It started out by hiring a small team to sit in front of computer screens, identifying players and balls on each frame. But it quickly realized that it needed a software platform in order to scale. Soon, its expensive data science team was spending most of its time building a platform to handle massive amounts of data.

These are heady days when every CEO can see or at least sense opportunities for machine-learning systems to transform their business. Nearly every company has processes suited for machine learning, which is really just a way of teaching computers to recognize patterns and make decisions based on those patterns, often faster and more accurately than humans. Is that a dog on the road in front of me? Apply the brakes. Is that a tumor on that X-ray? Alert the doctor. Is that a weed in the field? Spray it with herbicide.

What only insiders generally know is that data scientists, once hired, spend more time building and maintaining the tools for AI systems than they do building the systems themselves. A recent survey of 500 companies by the firm Algorithmia found that expensive teams spend less than a quarter of their time training and iterating machine-learning models, which is their primary job function.

Now, though, new tools are emerging to ease the entry into this era of technological innovation. Unified platforms that bring the work of collecting, labelling and feeding data into supervised learning models, or that help build the models themselves, promise to standardize workflows in the way that Salesforce and Hubspot have for managing customer relationships. Some of these platforms automate complex tasks using integrated machine-learning algorithms, making the work easier still. This frees up data scientists to spend time building the actual structures they were hired to create, and puts AI within reach of even small- and medium-sized companies, like Seattle Sports Science.

Frustrated that its data science team was spinning its wheels, Seattle Sports Sciences' AI architect John Milton finally found a commercial solution that did the job. "I wish I had realized that we needed those tools," said Milton. He hadn't factored the infrastructure into their original budget, and having to go back to senior management and ask for it wasn't a pleasant experience for anyone.

The AI giants, Google, Amazon, Microsoft and Apple, among others, have steadily released tools to the public, many of them free, including vast libraries of code that engineers can compile into deep-learning models. Facebook's powerful object-recognition tool, Detectron, has become one of the most widely adopted open-source projects since its release in 2018. But using those tools can still be a challenge, because they don't necessarily work together. This means data science teams have to build connections between each tool to get them to do the job a company needs.

The newest leap on the horizon addresses this pain point. New platforms are now allowing engineers to plug in components without worrying about the connections.

For example, Determined AI and Paperspace sell platforms for managing the machine-learning workflow. Determined AI's platform includes automated elements to help data scientists find the best architecture for neural networks, while Paperspace comes with access to dedicated GPUs in the cloud.

"If companies don't have access to a unified platform, they're saying, 'Here's this open source thing that does hyperparameter tuning. Here's this other thing that does distributed training,' and they are literally gluing them all together," said Evan Sparks, cofounder of Determined AI. "The way they're doing it is really with duct tape."

Labelbox is a training data platform, or TDP, for managing the labeling of data so that data science teams can work efficiently with annotation teams across the globe. (The author of this article is the company's co-founder.) It gives companies the ability to track their data, spot and fix bias in the data, and optimize the quality of their training data before feeding it into their machine-learning models.

It's the solution that Seattle Sports Sciences uses. John Deere uses the platform to label images of individual plants, so that smart tractors can spot weeds and deliver pesticide precisely, saving money and sparing the environment unnecessary chemicals.

Meanwhile, companies no longer need to hire experienced researchers to write machine-learning algorithms, the steam engines of today. They can find them for free or license them from companies who have solved similar problems before.

Algorithmia, which helps companies deploy, serve and scale their machine-learning models, operates an algorithm marketplace so data science teams don't duplicate other people's effort by building their own. Users can search through the 7,000 different algorithms on the company's platform and license one or upload their own.

Companies can even buy complete off-the-shelf deep learning models ready for implementation.

Fritz.ai, for example, offers a number of pre-trained models that can detect objects in videos or transfer artwork styles from one image to another, all of which run locally on mobile devices. The company's premium services include creating custom models and more automation features for managing and tweaking models.

And while companies can use a TDP to label training data, they can also find pre-labeled datasets, many for free, that are general enough to solve many problems.

Soon, companies will even offer machine-learning as a service: Customers will simply upload data and an objective and be able to access a trained model through an API.
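In rough terms, such a service could look something like the sketch below; the endpoint, request fields and response format are entirely hypothetical, since no specific vendor's API is being described.

    import requests

    # Hypothetical machine-learning-as-a-service workflow: upload data and an
    # objective, then query the resulting model over an API.
    BASE = "https://api.example-mlaas.com/v1"   # placeholder endpoint

    job = requests.post(
        f"{BASE}/models",
        files={"data": open("sales_history.csv", "rb")},    # placeholder training file
        data={"objective": "predict:monthly_revenue"},
        timeout=30,
    ).json()

    prediction = requests.post(
        f"{BASE}/models/{job['model_id']}/predict",
        json={"rows": [{"region": "EMEA", "month": "2020-04"}]},
        timeout=30,
    ).json()
    print(prediction)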

In the late 18th century, Maudslay's lathe led to standardized screw threads and, in turn, to interchangeable parts, which spread the industrial revolution far and wide. Machine-learning tools will do the same for AI, and, as a result of these advances, companies are able to implement machine learning with fewer data scientists and less-senior data science teams. That's important given the looming machine-learning human-resources crunch: according to a 2019 Dun & Bradstreet report, 40 percent of respondents from Forbes Global 2000 organizations say they are adding more AI-related jobs. And the number of AI-related job listings on the recruitment portal Indeed.com jumped 29 percent from May 2018 to May 2019. Most of that demand is for supervised-learning engineers.

But C-suite executives need to understand the need for those tools and budget accordingly. Just as Seattle Sports Sciences learned, it's better to familiarize yourself with the full machine-learning workflow and identify necessary tooling before embarking on a project.

That tooling can be expensive, whether the decision is to build or to buy. As is often the case with key business infrastructure, there are hidden costs to building. Buying a solution might look more expensive up front, but it is often cheaper in the long run.

Once you've identified the necessary infrastructure, survey the market to see what solutions are out there and build the cost of that infrastructure into your budget. Don't fall for a hard sell. The industry is young, both in terms of the time that it's been around and the age of its entrepreneurs. The ones who are in it out of passion are idealistic and mission driven. They believe they are democratizing an incredibly powerful new technology.

The AI tooling industry is facing more than enough demand. If you sense someone is chasing dollars, be wary. The serious players are eager to share their knowledge and help guide business leaders toward success. Successes benefit everyone.

See the rest here:
Navigating the New Landscape of AI Platforms - Harvard Business Review

AI-powered honeypots: Machine learning may help improve intrusion detection – The Daily Swig

John Leyden, 09 March 2020 at 15:50 UTC (Updated: 09 March 2020 at 16:04 UTC)

Forget crowdsourcing, here's crooksourcing

Computer scientists in the US are working to apply machine learning techniques in order to develop more effective honeypot-style cyber defenses.

So-called deception technology refers to traps or decoy systems that are strategically placed around networks.

These decoy systems are designed to act as a honeypot so that, once an attacker has penetrated a network, they will attempt to attack them, setting off security alerts in the process.

Deception technology is not a new concept. Companies including Illusive Networks and Attivo have been working in the field for several years.

Now, however, researchers from the University of Texas at Dallas (UT Dallas) are aiming to take the concept one step further.

The DeepDig (DEcEPtion DIGging) technique plants traps and decoys onto real systems before applying machine learning techniques in order to gain a deeper understanding of attackers' behavior.

The technique is designed to use cyber-attacks as free sources of live training data for machine learning-based intrusion detection systems.
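The underlying idea, using interactions with decoys as labelled attack data for an incrementally trained detector, might look roughly like the sketch below. This is a generic illustration under assumed session features, not the DeepDig code itself.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    # Incremental classifier: can be updated each time new attack traffic is captured.
    detector = SGDClassifier(loss="log_loss")

    def features(session):
        # Hypothetical numeric features summarising one observed session.
        return [session["requests_per_min"], session["bytes_out"], session["touched_decoy"]]

    def update_detector(sessions):
        X = np.array([features(s) for s in sessions], dtype=float)
        # Sessions that interacted with a decoy are labelled as attacks (1), others benign (0).
        y = np.array([1 if s["touched_decoy"] else 0 for s in sessions])
        detector.partial_fit(X, y, classes=[0, 1])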

Somewhat ironically, the prototype technology enlists attackers as free penetration testers.

Dr Kevin Hamlen, endowed professor of computer science at UT Dallas, explained: "Companies like Illusive Networks, Attivo, and many others create network topologies intended to be confusing to adversaries, making it harder for them to find real assets to attack."

The shortcoming of existing approaches, Dr Hamlen told The Daily Swig, is that such deceptions do not learn from attacks.

"While the defense remains relatively static, the adversary learns over time how to distinguish honeypots from a real asset, leading to an asymmetric game that the adversary eventually wins with high probability," he said.

In contrast, DeepDig turns real assets into traps that learn from attacks using artificial intelligence and data mining.

Turning real assets into a form of honeypot has numerous advantages, according to Dr Hamlen.

"Even the most skilled adversary cannot avoid interacting with the trap because the trap is within the real asset that is the adversary's target, not a separate machine or software process," he said.

"This leads to a symmetric game in which the defense continually learns and gets better at stopping even the most stealthy adversaries."

The research, which has applications in the field of web security, was presented in a paper (PDF) entitled "Improving Intrusion Detectors by Crook-Sourcing" at the recent Computer Security Applications Conference in Puerto Rico.

The research was funded by the US federal government. The algorithms and evaluation data developed so far have been publicly released to accompany the research paper.

It's hoped that the research might eventually find its way into commercially available products, but this is still some time off and the technology is still only at the prototype stage.

"In practice, companies typically partner with a university that conducted the research they're interested in to build a full product," a UT Dallas spokesman explained. "Dr Hamlen's project is not yet at that stage."


See the article here:
AI-powered honeypots: Machine learning may help improve intrusion detection - The Daily Swig

RIT professor explores the art and science of statistical machine learning – RIT University News Services

Statistical machine learning is at the core of modern-day advances in artificial intelligence, but a Rochester Institute of Technology professor argues that applying it correctly requires equal parts science and art. Professor Ernest Fokou of RIT's School of Mathematical Sciences emphasized the human element of statistical machine learning in his primer on the field that graced the cover of a recent edition of Notices of the American Mathematical Society.

"One of the most important commodities in your life is common sense," said Fokou. "Mathematics is beautiful, but mathematics is your servant. When you sit down and design a model, data can be very stubborn. We design models with assumptions of what the data will show or look like, but the data never looks exactly like what you expect. You may have a nice central tenet, but there's always something that's going to require your human intervention. That's where the art comes in. After you run all these statistical techniques, when it comes down to drawing the final conclusion, you need your common sense."

Statistical machine learning is a field that combines mathematics, probability, statistics, computer science, cognitive neuroscience and psychology to create models that learn from data and make predictions about the world. One of its earliest applications was when the United States Postal Service used it to accurately learn and recognize handwritten letters and digits to autonomously sort letters. Today, we see it applied in a variety of settings, from facial recognition technology on smartphones to self-driving cars.
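A toy version of that digit-recognition task, using scikit-learn's small bundled digits dataset as a stand-in for the postal data, shows how little code the basic supervised-learning loop needs:

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # 8x8 images of handwritten digits, flattened into 64 features each.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))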

Researchers have developed many different learning machines and statistical models that can be applied to a given problem, but there is no one-size-fits-all method that works well for all situations. Fokou said selecting the appropriate method requires mathematical and statistical rigor along with practical knowledge. His paper explains the central concepts and approaches, which he hopes will get more people involved in the field and harvesting its potential.

"Statistical machine learning is the main tool behind artificial intelligence," said Fokou. "It's allowing us to construct extensions of the human being so our lives, transportation, agriculture, medicine and education can all be better. Thanks to statistical machine learning, you can understand the processes by which people learn and slowly and steadily help humanity access a higher level."

This year, Fokou has been on sabbatical traveling the world exploring new frontiers in statistical machine learning. Fokou's full article is available on the AMS website.

See more here:
RIT professor explores the art and science of statistical machine learning - RIT University News Services

How is AI and machine learning benefiting the healthcare industry? – Health Europa

In order to help build increasingly effective care pathways in healthcare, modern artificial intelligence technologies must be adopted and embraced. Events such as the AI & Machine Learning Convention are essential in providing medical experts around the UK access to the latest technologies, products and services that are revolutionising the future of care pathways in the healthcare industry.

AI has the potential to save the lives of current and future patients and is something that is starting to be seen throughout healthcare services across the UK. Looking at diagnostics alone, there have been large-scale developments in rapid image recognition, symptom checking and risk stratification.

AI can also be used to personalise health screening and treatments for cancer, not only benefiting the patient but clinicians too, enabling them to make the best use of their skills, informing decisions and saving time.

The potential impact AI will have on the NHS is clear, so much so that NHS England is setting up a national artificial intelligence laboratory to enhance the care of patients and research.

The Health Secretary, Matt Hancock, commented that AI had enormous power to improve care, save lives and ensure that doctors had more time to spend with patients, so he pledged £250m to boost the role of AI within the health service.

The AI and Machine Learning Convention is part of Mediweek, the largest healthcare event in the UK. As a new feature of the Medical Imaging Convention and the Oncology Convention, the AI and Machine Learning expo offers an effective CPD-accredited education programme.

Hosting over 50 professional-led seminars, the lineup includes leading artificial intelligence and machine learning experts such as NHS England's Dr Minai Bakhai, Faculty of Clinical Informatics Professor Jeremy Wyatt, and Professor Claudia Pagliari from the University of Edinburgh.

Other speakers in the seminar programme come from leading organisations such as the University of Oxford, King's College London, and the School of Medicine at the University of Nottingham.

The event takes place at the National Exhibition Centre, Birmingham, on 17 and 18 March 2020. Tickets to the AI and Machine Learning Convention are free and gain you access to the other seven shows within MediWeek.

Health Europa is proud to partner with the AI and Machine Learning Convention; click here to get your tickets.


Read the original:
How is AI and machine learning benefiting the healthcare industry? - Health Europa

PayMyTuition Develops AI and Machine Learning Technology to Settle Real-Time Cross-Border Tuition Payments for Educational Institutions – PRNewswire

TORONTO and JERSEY CITY, N.J., March 10, 2020 /PRNewswire/ -- While educational institutions are trying to evolve and become more adapted to the digital age, colleges and universities have still lagged when it comes to improved processes for cross-border tuition payments. Fortunately, PayMyTuition, a leading provider of technology-driven global payment processing solutions for international tuition payments, announced today its solution to this problem. By way of their newly developed artificial intelligence (AI) and machine learning technology, the PayMyTuition platform solution can now enable colleges and universities to settle international tuition payments in real-time.

"Today, we have the ability to make digital payments instantly from our smart-phones, but until now, to make international tuition payments, both students and educational institutions experience a high level of friction within the customer experience, manual reconciliation processes, and delays in the availability of funds to the institution, hindering students from immediate enrollment access," said Arif Harji, Chief Market Strategist at MTFX Group. "PayMyTuition AI and machine learning technology was developed specifically for educational institutions, providing them an alternative solution that can remove all the friction and restrictions that exist within current offerings, while enabling real-time settlement for the first time."

In the always-on digital environment that we live in, customers expect optimal convenience and digital solutions across the entire payment ecosystem, and the element of real-time settlement has, until now, been lacking.

PayMyTuition enables educational institution student information systems to optimize payment processing methods, giving students payment methods and timing flexibility. This technology will help institutions to reduce costs, prevent errors and improve overall speed with the ability of real-time settlement. The utilization of AI and machine learning technology within the platform will also provide institutions with rich, complete data, including student information and payment statuses, which they didn't have visibility into before, making end-to-end payment transactions simple and transparent.

PayMyTuition's real-time cross-border tuition payment solution is an industry first and can be seamlessly integrated, by way of their real-time API, into most student information systems, including Banner, Colleague, PeopleSoft, Workday and Jenzabar.

The company is expanding rapidly, with plans to enable 30 educational institutions across North America with real-time tuition settlement in the next 60 days. PayMyTuition will continue working with customers across the globe to be able to provide unparalleled customer experience to all students, while significant efficiencies are delivered to the institution, now, all in real-time.

For more information, visit http://www.paymytuition.com.

About PayMyTuition by MTFX

PayMyTuition is part of the MTFX Group of Companies, a foreign exchange and global payments solution provider with a track record of 23+ years, facilitating payments for over 8,000 corporate and institutional clients across North America.

Media Contact: Crystal Reize, PayMyTuition, [emailprotected]


SOURCE PayMyTuition

http://www.paymytuition.com

Read more:
PayMyTuition Develops AI and Machine Learning Technology to Settle Real-Time Cross-Border Tuition Payments for Educational Institutions - PRNewswire

What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps – The Register

Achieving production-level governance with machine-learning projects currently presents unique challenges. A new space of tools and practices is emerging under the name MLOps. The space is analogous to DevOps but tailored to the practices and workflows of machine learning.

Machine learning models make predictions for new data based on the data they have been trained on. Managing this data in a way that can be safely used in live environments is challenging, and it is one of the key reasons why 80 per cent of data science projects never make it to production (an estimate from Gartner).

It is essential that the data is clean, correct, and safe to use without any privacy or bias issues. Real-world data can also continuously change, so inputs and predictions have to be monitored for any shifts that may be problematic for the model. These are complex challenges that are distinct from those found in traditional DevOps.

DevOps practices are centred on the build and release process and continuous integration. Traditional development builds are packages of executable artifacts compiled from source code. Non-code supporting data in these builds tends to be limited to relatively small static config files. In essence, traditional DevOps is geared to building programs consisting of sets of explicitly defined rules that give specific outputs in response to specific inputs.

In contrast, machine-learning models make predictions by indirectly capturing patterns from data, not by formulating all the rules. A characteristic machine-learning problem involves making new predictions based on known data, such as predicting the price of a house using known house prices and details such as the number of bedrooms, square footage, and location. Machine-learning builds run a pipeline that extracts patterns from data and creates a weighted machine-learning model artifact. This makes these builds far more complex and the whole data science workflow more experimental. As a result, a key part of the MLOps challenge is supporting multi-step machine learning model builds that involve large data volumes and varying parameters.
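A stripped-down version of such a build step might record the training parameters and a fingerprint of the training data next to the weighted model artifact, so a deployed model version can be traced back to what produced it. The file names, columns and parameters below are illustrative only.

    import hashlib
    import json
    import joblib
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor

    PARAMS = {"n_estimators": 200, "learning_rate": 0.05}

    # Placeholder dataset with numeric features (bedrooms, square footage, ...) and a price column.
    data = pd.read_csv("house_prices.csv")
    data_hash = hashlib.sha256(pd.util.hash_pandas_object(data).values.tobytes()).hexdigest()

    model = GradientBoostingRegressor(**PARAMS)
    model.fit(data.drop(columns="price"), data["price"])

    joblib.dump(model, "model.joblib")                       # the weighted model artifact
    with open("model_meta.json", "w") as f:
        json.dump({"params": PARAMS, "training_data_sha256": data_hash}, f)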

To run projects safely in live environments, we need to be able to monitor for problem situations and see how to fix things when they go wrong. There are pretty standard DevOps practices for how to record code builds in order to go back to old versions. But MLOps does not yet have standardisation on how to record and go back to the data that was used to train a version of a model.

There are also special MLOps challenges to face in the live environment. There are largely agreed DevOps approaches for monitoring for error codes or an increase in latency. But it's a different challenge to monitor for bad predictions. You may not have any direct way of knowing whether a prediction is good, and may have to instead monitor indirect signals such as customer behaviour (conversions, rate of customers leaving the site, any feedback submitted). It can also be hard to know in advance how well your training data represents your live data. For example, it might match well at a general level but there could be specific kinds of exceptions. This risk can be mitigated with careful monitoring and cautious management of the rollout of new versions.
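One common indirect check is to compare the distribution of a live input feature against the distribution seen at training time and flag a shift, for example with a two-sample Kolmogorov-Smirnov test; the threshold and numbers below are placeholders.

    from scipy.stats import ks_2samp

    def check_feature_drift(training_values, live_values, alpha=0.01):
        # Two-sample Kolmogorov-Smirnov test on a single numeric feature.
        statistic, p_value = ks_2samp(training_values, live_values)
        if p_value < alpha:
            print(f"possible drift: KS statistic={statistic:.3f}, p={p_value:.4f}")
        return p_value

    # Example: square footage seen during training vs. in recent live requests.
    check_feature_drift([900, 1200, 1500, 1100, 950], [2400, 2600, 2500, 2700, 2300])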

The effort involved in solving MLOps challenges can be reduced by leveraging a platform and applying it to the particular case. Many organisations face a choice of whether to use an off-the-shelf machine-learning platform or try to put an in-house platform together themselves by assembling open-source components.

Some machine-learning platforms are part of a cloud provider's offering, such as AWS SageMaker or AzureML. This may or may not appeal, depending on the cloud strategy of the organisation. Other platforms are not cloud-specific and instead offer self-install or a custom hosted solution (e.g., Databricks MLflow).

Instead of choosing a platform, organisations can choose to assemble their own. This may be a preferred route when requirements are too niche to fit a current platform, such as needing integrations to other in-house systems or if data has to be stored in a particular location or format. Choosing to assemble an in-house platform requires learning to navigate the ML tool landscape. This landscape is complex, with different tools specialising in different niches, and in some cases there are competing tools approaching similar problems in different ways (see the Linux Foundation's LF AI project for a visualization or categorised lists from the Institute for Ethical AI).

The Linux Foundation's diagram of MLOps tools

For organisations using Kubernetes, the Kubeflow project presents an interesting option, as it aims to curate a set of open-source tools and make them work well together on Kubernetes. The project is led by Google, and top contributors (as listed by IBM) include IBM, Cisco, Caicloud, Amazon, and Microsoft, as well as ML tooling provider Seldon, Chinese tech giant NetEase, Japanese tech conglomerate NTT, and hardware giant Intel.

Challenges around reproducibility and monitoring of machine learning systems are governance problems. They need to be addressed in order to be confident that a production system can be maintained and that any challenges from auditors or customers can be answered. For many projects these are not the only challenges, as customers might reasonably expect to be able to ask why a prediction concerning them was made. In some cases this may also be a legal requirement, as the European Union's General Data Protection Regulation states that a "data subject" has a right to "meaningful information about the logic involved" in any automated decision that relates to them.

Explainability is a data science problem in itself. Modelling techniques can be divided into black-box and white-box, depending on whether the method can naturally be inspected to provide insight into the reasons for particular predictions. With black-box models, such as proprietary neural networks, the options for interpreting results are more restricted and more difficult to use than the options for interpreting a white-box linear model. In highly regulated industries, it can be impossible for AI projects to move forward without supporting explainability. For example, medical diagnosis systems may need to be highly interpretable so that they can be investigated when things go wrong or so that the model can aid a human doctor. This can mean that projects are restricted to working with models that admit of acceptable interpretability. Making black-box models more interpretable is a fast-growth area, with new techniques rapidly becoming available.
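At the white-box end of that spectrum, the reasons for a prediction can be read straight off the model. The synthetic example below fits a linear model and inspects its coefficients, which is roughly the kind of inspection a proprietary neural network does not offer.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # stand-ins for features such as bedrooms, sqft, distance
    y = 3.0 * X[:, 0] + 0.5 * X[:, 1] - 2.0 * X[:, 2] + rng.normal(scale=0.1, size=200)

    model = LinearRegression().fit(X, y)
    for name, coef in zip(["bedrooms", "sqft", "distance"], model.coef_):
        # Each coefficient states directly how much that feature moves the prediction.
        print(f"{name}: {coef:+.2f}")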

The MLOps scene is evolving as machine learning becomes more widely adopted, and we learn more about what counts as best practice for different use cases. Different organisations have different machine learning use cases and therefore differing needs. As the field evolves we'll likely see greater standardisation, and even the more challenging use cases will become better supported.

Ryan Dawson is a core member of the Seldon open-source team, providing tooling for machine-learning deployments to Kubernetes. He has spent 10 years working in the Java development scene in London across a variety of industries.

Bringing DevOps principles to machine learning throws up some unique challenges, not least very different workflows and artifacts. Ryan will dive into this topic in May at Continuous Lifecycle London 2020, a conference organized by The Register's mothership, Situation Publishing.

You can find out more, and book tickets, right here.


The rest is here:
What would machine learning look like if you mixed in DevOps? Wonder no more, we lift the lid on MLOps - The Register

An implant uses machine learning to give amputees control over prosthetic hands – MIT Technology Review

Researchers have been working to make mind-controlled prosthetics a reality for at least a decade. In theory, an artificial hand that amputees could control with their mind could restore their ability to carry out all sorts of daily tasks, and dramatically improve their standard of living.

However, until now scientists have faced a major barrier: they haven't been able to access nerve signals that are strong or stable enough to send to the bionic limb. Although it's possible to get this sort of signal using a brain-machine interface, the procedure to implant one is invasive and costly. And the nerve signals carried by the peripheral nerves that fan out from the brain and spinal cord are too small.

A new implant gets around this problem by using machine learning to amplify these signals. A study, published in Science Translational Medicine today, found that it worked for four amputees for almost a year. It gave them fine control of their prosthetic hands and let them pick up miniature play bricks, grasp items like soda cans, and play Rock, Paper, Scissors.


It's the first time researchers have recorded millivolt signals from a nerve, far stronger than in any previous study.

The strength of this signal allowed the researchers to train algorithms to translate them into movements. "The first time we switched it on, it worked immediately," says Paul Cederna, a biomechanics professor at the University of Michigan, who co-led the study. "There was no gap between thought and movement."

The procedure for the implant requires one of the amputee's peripheral nerves to be cut and stitched up to the muscle. The site heals, developing nerves and blood vessels over three months. Electrodes are then implanted into these sites, allowing a nerve signal to be recorded and passed on to a prosthetic hand in real time. The signals are turned into movements using machine-learning algorithms (the same types that are used for brain-machine interfaces).

Amputees wearing the prosthetic hand were able to control each individual finger and swivel their thumbs, regardless of how recently they had lost their limb. Their nerve signals were recorded for a few minutes to calibrate the algorithms to their individual signals, but after that each implant worked straight away, without any need to recalibrate during the 300 days of testing, according to study co-leader Cynthia Chestek, an associate professor in biomedical engineering at the University of Michigan.
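In spirit, that calibration step resembles fitting a simple decoder on a short labelled recording and then applying it to new signal windows. The sketch below uses random numbers as stand-ins for real recordings and is not the study's actual decoding algorithm.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Each row: features of one recorded signal window (e.g. mean amplitude per channel);
    # labels name the intended movement. Both are random placeholders here.
    calibration_windows = np.random.rand(120, 8)
    calibration_labels = np.random.choice(["rest", "index_flex", "thumb_swivel"], size=120)

    decoder = KNeighborsClassifier(n_neighbors=5).fit(calibration_windows, calibration_labels)

    # At run time, each incoming signal window is decoded into a movement command.
    new_window = np.random.rand(1, 8)
    print(decoder.predict(new_window))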

It's just a proof-of-concept study, so it requires further testing to validate the results. The researchers are recruiting amputees for an ongoing clinical trial, funded by DARPA and the National Institutes of Health.

Read the original post:
An implant uses machine learning to give amputees control over prosthetic hands - MIT Technology Review