Dell Technologies HPC Community Interview: Bob Wisniewski, Intel’s Chief HPC Architect, Talks Aurora and Getting to Exascale – insideHPC

Intel is the prime vendor for the first US exascale supercomputer, the Aurora system, scheduled for delivery in 2021 at Argonne National Lab. The late Rich Brueckner of insideHPC caught up with Intel's senior principal engineer and chief architect for HPC, Robert Wisniewski, to learn more.

insideHPC: Bob, we know each other mostly through your work with Intel software and the OpenHPC Project. This is a very different kind of role for you, isn't it?

Wisniewski: Thank you. Yes, I'm in a larger role, one that requires me to wear both a software hat and a hardware hat, covering the whole system. I'm currently the chief architect for HPC at Intel. I am also the technical lead for the Aurora Supercomputer at Argonne National Lab, as well as principal investigator.

insideHPC: Congratulations! That will broaden the discussion here now that the Aurora Supercomputer is just around the corner.

Wisniewski: Absolutely.

insideHPC: Let's start at the beginning. Can you describe your role as the chief architect for HPC?

Intel's Bob Wisniewski

Wisniewski: There's two parts to it. One, I'm playing the role of PI, the principal investigator, for Aurora. That's a specific role relative to Intel's contract with Argonne National Lab. Plus, I'm the overall technical lead, which means I am responsible for the technical direction. Large projects like Aurora must meet technical and schedule milestones. We typically start with our architectural point design, but as the project progresses we learn, and products do not necessarily mature as planned, so we continue exploring technologies and make changes as we go. We meet weekly and review where we are. Technically, we interact and collaborate very closely with Argonne to review schedules and discuss technical information on either the performance or functional aspects as this information becomes available. We continue to modify our point design to make sure our current design is going to meet their needs. We work closely with Argonne, which has been a great partner.

insideHPC: What about your overall role as HPC architect?

Wisniewski: Part of the role entails working with partners to better understand how they can deliver HPC capabilities. One way we do this is through POC (proof of concept) projects, which have been successful. In the broader role, I'm working to make sure that the products coming out of Intel are well designed so they can be used by our OEMs in their systems. This is something Intel made a shift to seven to 10 years ago, when we started thinking from a system perspective and making sure the technologies we were designing, manufacturing, building, and providing to our OEMs would fit well into the overall systems they were building.

The close partnerships we have with our OEMs make for a more efficient ecosystem. It comes down to understanding the needs of our OEMs and making sure we're designing products that meet those needs. Creating a vision for the future and ensuring it meets the needs of the HPC computing market and our OEMs is a broad view of what my role now involves.

insideHPC: As far as your future heterogeneous (CPU-GPU) architectures and things like oneAPI, are you sharing blueprints with OEMs to enable innovation?

Wisniewski: Yes, we have a solutions group that works to understand OEM needs and helps take that knowledge back into Intel. Doing this effectively is what you might call co-design, though I guess that's an overused word. Intel Select Solutions offers OEMs easy and quick-to-deploy infrastructures optimized for a variety of applications, like AI, analytics clusters and HPC.

insideHPC: Bob, you're coming into this role at a time when Intel is in the process of changing its HPC focus from general-purpose CPUs to heterogeneous architectures. Is that where you're taking us?

Wisniewski: Yes, I think that's a great observation. We're recognizing that HPC is expanding to include AI. But it's not just AI; it is big data and edge, too. Many of the large scientific instruments are turning out huge amounts of data that need to be analyzed in real time. And big data is no longer limited to the scientific instruments; it's all the weather stations and all the smart city sensors generating massive amounts of data. As a result, HPC is facing a broader challenge, and Intel realizes that a single hardware solution is not going to be right for everybody.

Intel is scheduled to deliver the Aurora supercomputer, the first U.S. exascale system, to Argonne National Laboratory in 2021, incorporating Intel Optane DC Persistent Memory, Intel's Xe compute architecture and the Intel oneAPI programming framework, among other technologies. (Credit: Argonne National Laboratory)

At the same time, of course, we continue to actively support our CPU architecture. That's the workhorse in everybody's HPC system. But as you know, with Aurora, we're extending our capabilities to GPUs (Intel's future Xe architecture), and we will provide both the graphical line as well as the compute line to meet customers' needs.

insideHPC: But that doesn't come without its challenges. So you're providing all these heterogeneous solutions and that's good, right? Well, it's good that it meets the customers' needs, but does that make it harder to program? And for the end customers and OEMs who are going to be providing these solutions, do they need a full staff of programmers to rewrite all the code?

Wisniewski: This is where oneAPI comes in. It is a cross-industry, open, standards-based unified programming model. We believe that heterogeneity is valuable to customers. We want to provide the solution that customers need, but we also want to provide a productive and performant way to leverage all the different solutions. The vision behind oneAPI is that regardless of which architecture you decide to utilize, be it from multiple vendors or Intel, you have a single, common, cohesive way of programming them. That's the vision. Now, there will be challenges, and as a technical person I don't want it to be presented as a simple panacea. oneAPI provides a common framework for writing code so a single code base can be portable and re-used across a diverse set of architectures.

So oneAPI empowers end customers to be much more efficient about how they're utilizing their resources by enabling greater code re-use while allowing for architecture-specific tuning. A lot of developers remain challenged to achieve enough parallelism to leverage today's architectures, and now we are throwing heterogeneity their way. So their challenge is not just multiple cores; it's heterogeneous compute elements as well. Determining which code can be parallelized and how to do that, and now which code can be off-loaded, has increased the complexity of developing applications for today's and tomorrow's architectures. oneAPI is going to help us and the community address those challenges and make it easier.

insideHPC: Beyond what Intel is doing, what is the vision for the oneAPI ecosystem?

Wisniewski: To promote compatibility and enable developer productivity and innovation, the oneAPI specification builds upon industry standards and provides an open, cross-platform developer stack. It includes a cross-architecture language: Data Parallel C++, which is based on ISO C++ and Khronos SYCL. The oneAPI industry initiative aims to encourage collaboration on the oneAPI specification and compatible implementations across the ecosystem. Already, more than 30 companies and leading research organizations [software.intel.com/en-us/oneapi/reviews] support the oneAPI concept, and adoption is expected to grow.

Intel's oneAPI product is a reference implementation of the specification for Intel architecture and consists of a base toolkit and several domain-specific toolkits, including one for HPC. The components in the core oneAPI product are the ones with general applicability, for example, the Intel compiler along with multiple libraries and tools. For HPC users of oneAPI, there is an HPC toolkit, which includes components such as the OpenMP and Fortran runtimes, the Intel MPI library, and all the things an HPC user would need to maximize the performance and capabilities of Intel hardware. oneAPI allows a more productive environment across all of HPC and even beyond HPC, in areas like edge, cloud, and enterprise computing, although I like to think of edge, cloud, AI and HPC all coming together. oneAPI will have components that allow developers to leverage heterogeneity across various environments and architectures (CPU, GPU, FPGA and specialized accelerators).

Overall, this oneAPI cross-architecture programming approach will help ensure code works well on the next generations of innovative architectures. It also opens the door to flexibility in choosing the best architecture for a particular solution or workload's needs in performance, cost, and efficiency.

We envision oneAPI as an industry standard that will encourage broad developer engagement and collaboration, while having multi-vendor adoption and support.

insideHPC: So for our readers, what's the call to action for oneAPI? Is it time to download and start playing around with it? What would you say?

Wisniewski: Absolutely. Download the oneAPI specification at oneapi.com. Developers and researchers can also directly download the Intel oneAPI toolkits [software.intel.com/oneapi], and test code and workloads for free across a variety of Intel architectures using the Intel DevCloud for oneAPI [intelsoftwaresites.secure.force.com/devcloud/oneapi]. And there are multiple communities forming around oneAPI. It is absolutely our intent that this becomes a broad ecosystem, like the Linux model. The goal is really to make this pervasive, and it will be more powerful as more and more people use it.

insideHPC: I understand you wrote a book recently. Can you tell me more?

Wisniewski: The book is called Operating Systems for Supercomputers and High-Performance Computing. It was written together with my fellow editors Balazs Gerofi, Yutaka Ishikawa and Rolf Riesen.

The book came about from a collaboration with our customer at RIKEN. At some point we started comparing the different versions of multikernels, a new operating system direction that a lot of people in the high-end HPC capability class are pursuing, and discussing how they differ from traditional operating system kernels.

We started talking about how it might be valuable if we did a comparative retrospective on these efforts. We thought we could accomplish two things.

First, we could have a little fun looking at the high-end operating systems community and how it evolved over the past three decades. It takes years to write an OS, and you learn hard lessons along the way. We decided to include lessons learned so that future OS developers can benefit from them.

Second, we wanted to provide insight as to why things work the way they do. The people that developed the OS thought long and hard about their designs. But sometimes you miss things. In each chapter, we included a section dedicated to lessons learned, so that readers could gain insight from the developers who spent the hard effort to build the OS. The book was a tremendous amount of work, but I had a fabulous set of co-editors and it was a lot of fun.

insideHPC: I wanted to wrap up and ask you more about community engagement. You are very generous with your time, attending industry events on a regular basis, multiple times a year. Why is that so important to you and the company to go out and engage at these events?

Wisniewski: I really enjoy going to these events and interacting with people. The events that I like going to most are the ones that have savvy audiences asking tough questions or discussing challenges they face. For example, when technical leaders share challenges, I can work with colleagues back at Intel to address them, and that in turn changes our future architectures to be better [co-]designed to meet the needs of our customers.

Dr. Robert W. Wisniewski is an ACM Distinguished Scientist, IEEE Senior Member, and the Chief Architect for High Performance Computing and a Senior Principal Engineer at Intel Corporation. He is the lead architect and PI for A21, the supercomputer targeted to be the first exascale machine in the US when delivered in 2021. He is also the lead architect for Intel's cohesive and comprehensive software stack that was used to seed OpenHPC, and serves on the OpenHPC governance board as chairman. He has published over 77 papers in the area of high-performance computing, computer systems, and system performance, filed over 56 patents, and given over 64 external invited presentations. Before coming to Intel, he was the chief software architect for Blue Gene Research and manager of the Blue Gene and Exascale Research Software Team at the IBM T.J. Watson Research Facility, where he was an IBM Master Inventor and led the software effort on Blue Gene/Q, which was the most powerful computer in the world in June 2012 and occupied four of the top 10 positions on the Top500 list.

Check Point unearths critical SigRed bug in Windows DNS – ComputerWeekly.com

All versions of Windows Server from 2003 to 2019 are affected by a newly identified vulnerability, dubbed SigRed, in Windows DNS, the domain name system service provided by Microsoft in Windows operating systems.

Uncovered by Check Point researcher Sagi Tzadik and first reported to Microsoft by Check Point through a disclosure programme on 19 May 2020, the CVE-2020-1350 vulnerability is being patched in July's Patch Tuesday update from Microsoft. It has been assigned a CVSS score of 10, the highest possible.

The SigRed vulnerability exists in the way the Windows DNS server parses an incoming DNS query, and how it parses a response to a forwarded DNS query. If an attacker can successfully trigger it with a malicious DNS query, they can trigger a heap-based buffer overflow, which will in turn let them take control of the server and gain domain administrator rights. This makes it possible for them to intercept and manipulate email and network traffic, compromise services and harvest credentials, among other things.
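Check Point's public analysis traces the flaw to an integer overflow: a crafted SIG record can push a computed buffer size past 64KB, wrapping a 16-bit value. A toy sketch of that general bug class (illustrative Python only, not Microsoft's actual dns.exe code; the function and figures are hypothetical):

```python
# Toy illustration of a 16-bit size truncation, the bug class Check Point
# describes for SigRed. NOT the real parsing code.
def alloc_size_16bit(record_size: int, overhead: int) -> int:
    """Return an allocation size as 16-bit arithmetic would compute it."""
    return (record_size + overhead) & 0xFFFF  # silently wraps past 65,535

needed = 65400 + 200                      # crafted record pushed past 64 KB
allocated = alloc_size_16bit(65400, 200)  # wraps to 64 (65,600 mod 65,536)
# Copying `needed` (65,600) bytes into a buffer sized for `allocated`
# (64) bytes is a classic heap-based buffer overflow.
```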

Critically, SigRed is wormable, meaning that a single exploit can cause a chain reaction, allowing attacks to spread through a network without any action on the part of the user; in effect, one single compromised machine becomes a super-spreader.

A DNS server breach is a critical issue. Most of the time, it puts the attacker just one inch away from breaching the entire organisation. There are only a handful of these vulnerability types ever released. Every organisation, big or small, using Microsoft infrastructure is at major security risk if this flaw is left unpatched, said Omri Herscovici, leader of Check Point's vulnerability research team.

The risk would be a complete breach of the entire corporate network. This vulnerability has been in Microsoft code for more than 17 years, so if we found it, it is not impossible to assume that someone else already found it as well.

Check Point is strongly advising Windows users to patch their affected servers as soon as possible; as previously noted, a fix is being made available today (14 July) as part of the latest Patch Tuesday update.

Herscovici said the likelihood of SigRed being exploited at some point in the next week was very high, as his team had been able to find all of the primitives required to take advantage of it, suggesting it would be easy for a determined hacker to do the same.

Furthermore, our findings show us all that no matter how secure we think we are, there are always more security issues out there waiting to be discovered. We're calling the vulnerability SigRed, and we believe it should be top priority for remedying. This isn't just another vulnerability: patch now to stop the next cyber pandemic, he said.

Besides applying the patch immediately, Check Point detailed a workaround to block the attack. In CMD, type: reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters" /v TcpReceivePacketSize /t REG_DWORD /d 0xFF00 /f, then restart the service with net stop DNS && net start DNS.

Japan has long accepted COVID’s airborne spread, and scientists say ventilation is key – CBS News

Tokyo: Under pressure from the scientific community, the World Health Organization acknowledged last week the airborne transmission of "micro-droplets" as a possible third cause of COVID-19 infections. To many researchers in Japan, the admission felt anti-climactic.

This densely populated country has operated for months on the assumption that tiny, "aerosolized" particles in crowded settings are turbo-charging the spread of the new coronavirus.

Very few diseases, among them tuberculosis, chicken pox and measles, have been deemed transmissible through aerosols. Most are spread only through direct contact with infected persons or their bodily fluids, or contaminated surfaces.

Still the WHO has refused to confirm aerosols as a major source of new coronavirus infections, saying more evidence is needed. But scientists are keeping the pressure on.

"If the WHO recognizes what we did in Japan, then maybe in other parts of the world, they will change (their antiviral procedures)," said Shin-Ichi Tanabe, a professor in the architecture department of Japan's prestigious Waseda University. He was one of the 239 international scientists who co-wrote an open letter to the WHO urging the United Nations agency to revise its guidelines on how to stop the virus spreading.

Large droplets expelled through the nose and mouth tend to fall to the ground quickly, explained Makoto Tsubokura, who runs the Computational Fluid Dynamics lab at Kobe University. For these larger respiratory particles, social distancing and face masks are considered adequate safeguards. But in rooms with dry, stale air, Tsubokura said his research showed that people coughing, sneezing, and even talking and singing, emit tiny particles that defy gravity, able to hang in the air for many hours or even days, and travel the length of a room.

The key defense against aerosols, Tsubokura said, is diluting the amount of virus in the air by opening windows and doors and ensuring HVAC systems circulate fresh air. In open-plan offices, he said, partitions must be high enough to prevent direct contact with large droplets, but low enough to avoid creating a cloud of virus-heavy air (55 inches, or roughly head height). Small desk fans, he said, can also help diffuse airborne viral density.
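The dilution principle Tsubokura describes can be sketched with a simple well-mixed-room model (a textbook simplification, not his CFD work; the function name and figures below are hypothetical):

```python
# Minimal well-mixed-room sketch: at steady state, aerosol concentration
# scales inversely with the ventilation rate. Numbers are illustrative.
def steady_state_concentration(source_rate, air_changes_per_hour, room_volume_m3):
    """Particles per m^3 for a constant source in a well-mixed room."""
    return source_rate / (air_changes_per_hour * room_volume_m3)

stale = steady_state_concentration(1000.0, 1.0, 50.0)  # closed room, ~1 air change/h
aired = steady_state_concentration(1000.0, 3.0, 50.0)  # windows open, ~3 air changes/h
# Tripling the fresh-air supply cuts the steady-state concentration to a third.
```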

To the Japanese, the latest WHO admission did at least vindicate a strategy that the country adopted in February, when residents were told to avoid "the three Cs": cramped spaces, crowded areas and close conversation.

After a lull, new infections primarily among younger residents in Tokyo have resurged recently, topping 200 for four straight days, before falling back down to 119 on Monday.

Alarmingly, new cases are cropping up not just in notoriously cramped and crowded nightlife spots, but also within homes and workplaces, prompting the national government to consider asking businesses to shut down again in the greater metro region. Authorities are anxious to prevent a corresponding surge in serious cases and deaths, which, thus far, have remained low.

Tsubokura, who also serves as the lead researcher for government institute RIKEN, has run simulations on Japan's new Fugaku supercomputer studying how to guard against airborne transmission inside subways, offices, schools, hospitals, and other public spaces.

His computer model of riders on Tokyo's congested Yamanote train line (see the animation at 7:15 minutes in this video) illustrated how air flow stagnates on packed trains with closed windows, in contrast to free-flowing air on carriages with few passengers and open windows. He suggests keeping windows open at all times to mitigate risks when trains fill up.

But Japan's infamously congested trains, he argues, probably aren't as risky as his model suggests. "It is very crowded, and the air is bad," Tsubokura said. "But nobody is speaking, and everyone is wearing a mask. The risk is not that high."

Even riding on a crowded subway train, if windows are kept open as they are in Japan these days, "is much safer than a pub, restaurant or gym," said Waseda University's Tanabe.

Masking noses and mouths is all the more important, he said, because his research shows men touch their faces up to 40 times an hour. (He said women, more likely to wear makeup, are less face-touchy.)

"Non-woven (surgical) masks are high-performance, but cloth also works it's much better than nothing," he said. "The only way to avoid leaks (of droplets) is to tightly fit the mask."

Mask-wearing and ventilation directives are helping the Japanese reopen concert halls, baseball stadiums and other venues. As of last Friday, such venues are permitted to admit up to 5,000 patrons.

Tanabe will be relying on Japan's new Fugaku supercomputer, recently declared the world's fastest, to plot optimal ventilation system efficiency.

"It's like predicting a typhoon," he said, noting that forecasting both extreme weather and air flow through crowded trains rely on the same equations to calculate fluid dynamics.

In an article to be published in the September issue of the scientific journal Environment International, as schools and other public facilities struggle to reopen, Tanabe and other experts argue that safeguarding indoor spaces can be done relatively simply and cheaply, by avoiding crowding and maintaining the flow of fresh air.

Japan supercomputer finds ways to nix airborne virus at work and on trains – The Japan Times

Supercomputer-driven models simulated in Japan have suggested that operating commuter trains with windows open and limiting the number of passengers may help reduce the risk of novel coronavirus infection, as scientists warn the virus may spread in the air.

In an open letter published Monday, 239 scientists in 32 countries outlined evidence they say shows floating virus particles can infect people who breathe them in.

The World Health Organization (WHO) acknowledged emerging evidence of airborne transmission, but said it was not definitive.

Even if the coronavirus is airborne, questions remain about how many infections occur through that route. How concentrated the virus is in the air may also decide contagion risks, said Professor Yuki Furuse of Kyoto University.

In the open letter, scientists urged improvements to ventilation and the avoidance of crowded, enclosed environments, recommendations Japan broadly adopted months ago, according to Shin-ichi Tanabe, one of the co-authors of the letter.

In Japan, the committee for COVID-19 countermeasures insisted on the 3Cs at an early stage, said Tanabe, a professor at Waseda University in Tokyo, referring to Japan's public campaign to avoid closed spaces, crowded places and close-contact settings. This was ahead of the world.

As the nation tamed the pandemic, with over 19,000 confirmed cases and 977 deaths so far, economy minister Yasutoshi Nishimura credited its success to the 3Cs and its cluster-tracing strategy.

The recent study by Japanese research giant Riken, which used the world's fastest supercomputer, the Fugaku, to simulate how the virus travels in the air in various environments, recommended several ways to lower infection risks in public settings.

Makoto Tsubokura, the study's lead researcher, said that opening windows on commuter trains can increase ventilation two- to threefold, lowering the concentration of ambient microbes.

But to achieve adequate ventilation, there needs to be space between passengers, the simulations showed, representing a drastic change from the custom of packing commuter trains tightly, for which the nation is notorious.

Other findings advised the installation of partitions in offices and classrooms, while hospital beds should be surrounded by curtains that touch the ceiling.

New supercomputer at the University of Aberdeen supports genomics research – Scientific Computing World

Researchers at the University of Aberdeen will now benefit from a new supercomputer named 'Maxwell' which is supporting ground-breaking research at the University's Centre for Genome-Enabled Biology and Medicine (CGEBM). The system also provides a centralised HPC system for the whole University with applications in medicine, biological sciences, engineering, chemistry, maths and computing science.

Dean Phillips, assistant director of digital and information services at the University of Aberdeen, says: 'Aberdeen is a research-intensive university and we've already seen an increase of 50 per cent in registered users of our Maxwell HPC cluster. Having our own HPC system helps the University to attract new researchers and research funding and expand on existing programmes of research and teaching. It is highly beneficial for our researchers to have on-site access to HPC infrastructure, particularly when securing start-up funds. OCF's Remote Admin Service is an extension of our team and really helps to ensure the smooth day-to-day running of our HPC cluster, dealing with support issues and user requests and keeping on top of software and security updates.'

University researchers will use Maxwell to rapidly analyse complex genomics datasets from known and novel organisms and help researchers to revolutionise the study of the Earth's biodiversity and complex ecosystems important to health and disease, agriculture or the environment. It is estimated that only around 1 per cent of the Earth's biodiversity is easily culturable in a laboratory, and there is little knowledge of most living organisms on the planet.

Dr Elaina Collie-Duguid, manager of the Centre for Genome-Enabled Biology & Medicine at the University of Aberdeen, comments: 'Genomics is a dynamic discipline that rapidly evolves into new applications and approaches to interrogate complex systems. The new HPC cluster, with its expanded capacity and advanced GPU capabilities, enables us to use new analysis methods and work at a much quicker rate than before. It really is an exciting time for genomics, which is revolutionising the study of organisms and complex ecosystems to address issues of global importance, and HPC is a critical tool for analysis of these data.'

With the use of HPC, researchers can analyse microbiomes associated with a diverse array of ecosystems, such as the human gut, fish important to Scottish aquaculture, glaciers, deep-sea sediments, soil and bioreactors for the production of sustainable and environmentally friendly biofuels. These studies could provide a new understanding of important and diverse biological processes such as antimicrobial drug resistance; pathogen detection, evolution and virulence; mechanisms of drug efficacy and toxicity; development; inflammation; tumorigenesis; nutrition and satiety; and degradation of hydrocarbons.

Scotia Biologics is working with the University's CGEBM, using Maxwell's capacity to speed up its existing pipeline and generate a more comprehensive dataset using genomics compared with the traditional methods typically used in its field.

The new HPC system is also being used to teach graduates and post-graduate students in specialist subjects such as AI and bioinformatics, fields important to modern research and STEM careers, providing them with a unique opportunity to access HPC capacity. With 300 users, the cluster is providing a centralised HPC system to support all researchers and post-graduate students across the University.

The new HPC system is designed, integrated and managed by high performance compute, storage, cloud and AI integrator OCF. Russell Slack, managing director of OCF, comments: 'The new HPC cluster helps the University remain ahead in a fiercely competitive market. It attracts researchers, students and grants to its facility. Aberdeen's investment in its HPC is a credit to its foresight about the importance of HPC in research that impacts people and everyday lives.'

Keith Charlton, CEO of Scotia Biologics, states: 'As part of our drive to introduce new services to offer to the life sciences sector, Scotia is developing phage display library capabilities based around a growing number of animal species. With access to Maxwell, we've been able to quickly generate a large volume of data relatively inexpensively whilst significantly advancing our R&D programme.'

Changing System Architectures And The Complexities Of Apple’s Butterfly Approach To ISAs – Hackaday

Apple computers will be moving away from Intel chips to Apple's own ARM-based design. An interesting thing about Apple as a company is that it has never felt the need to tie itself to a particular system architecture or ISA. Whereas a company like Microsoft mostly tied its fortunes to Intel's x86 architecture, and IBM, Sun, HP and other giants preferred vertical integration, Apple is currently moving towards its fifth system architecture for its computers since the company was formed.

What makes this latest change possibly unique, however, is that instead of Apple relying on an external supplier for CPUs and peripheral ICs, they are now targeting a vertical integration approach. Although the ARM ISA is licensed to Apple by Arm Holdings, the Apple Silicon design used in Apple's ARM processors is their own, created by Apple's own engineers and manufactured by foundries at Apple's behest.

In this article I would like to take a look back at Apple's architectural decisions over the decades and how they made Apple's move towards vertical integration practically a certainty.

The 1970s was definitely the era when computing was brought to living rooms around the USA, with the Commodore PET, Tandy TRS-80 and Apple II microcomputers defining the late 1970s. Only about a year before the Apple II's release, the newly formed partnership between Steve Wozniak and Steve Jobs had produced the Apple I computer. The latter was sold as a bare, assembled PCB for $666.66 ($2,995 in 2019), with about 200 units sold.

Like the Apple I, the Apple II and the Commodore PET were all based on the MOS 6502 MPU (microprocessor unit), which was essentially a cheaper and faster version of Motorola's 6800 MPU, with the Zilog Z80 being the other popular MPU option. What made the Apple II different was Wozniak's engineering shortcuts to reduce hardware costs, using various tricks to save separate DRAM refresh circuitry and remove the need for separate video RAM. According to Wozniak in a May 1977 Byte interview, "[..] a personal computer should be small, reliable, convenient to use, and inexpensive."

With the Apple III, Apple saw the need to provide backward compatibility with the Apple II, which was made easy because the Apple III retained the same 6502 MPU and a compatible hardware architecture. Apple's engineers did, however, put in limitations that prevented the emulated Apple II system from accessing more than a fraction of the Apple III's RAM and other hardware.

With the ill-fated Apple Lisa (1983) and the much more successful Apple Macintosh (1984), Apple transitioned to the Motorola 68000 (m68k) architecture. The Macintosh was the first system to feature what would become the classic Mac OS series of operating systems, at the time imaginatively titled System 1. As the first step into the brave new world of 32-bit, GUI-based, mouse-driven desktops, it had no focus on backward compatibility. It also cost well over $6,000 when adjusted for inflation.

The reign of m68k-based Macintosh systems lasted until the release of the Macintosh LC 580 in 1995. That system featured a Motorola 68LC040 running at 33 MHz. That particular CPU in the LC 580 featured a bug that caused incorrect operation when used with a software FPU emulator. Although a fixed version of the 68LC040 was introduced in mid-1995, this was too late to prevent many LC 580s from shipping with the flawed CPU.

The year before the LC 580 was released, the first Power Macintosh system had already been released, after a few years of Apple working together with IBM on the PowerPC range of chips. The reason for this shift was mostly the anemic performance of the CISC m68k architecture, with Apple worried about the industry's move to the much better performing RISC architectures from IBM (POWER), MIPS, Sun (SPARC) and HP (PA-RISC). This left Apple little choice but to seek an alternative to the m68k platform.

The development of what came to be known as the Power Macintosh series of systems began in 1988, with Apple briefly flirting with the idea of making its own RISC CPU, to the point where it bought a Cray-1 supercomputer to assist in the design efforts. Ultimately the project was cancelled due to a lack of expertise in this area, forcing Apple to look for possible partners.

Apple looked at the available RISC offerings from Sun, MIPS, Intel (i860) and ARM, as well as Motorola's 88110 (the 88000 RISC architecture). All but Motorola's offering were initially rejected: Sun lacked the capacity to produce enough CPUs, MIPS had ties with Microsoft, Intel's i860 was too complex, and IBM seemed unlikely to license its POWER1 core to third parties. Along the way, Apple did take a 43% stake in ARM and would use an ARM processor in its Newton personal digital assistant.

Under the Jaguar project moniker, a system was developed around the Motorola 88110, but the project was cancelled when Apple's product division president, Jean-Louis Gassée, left the company. Renewed doubt about the 88110 led to a meeting being arranged between Apple and IBM representatives, with the idea of merging the POWER1's seven chips into a single-chip solution. With Motorola also present at this meeting, it was agreed to set up an alliance that would result in the PowerPC 601 chip.

Apple's System 7 OS was rewritten to use PowerPC instructions instead of m68k ones, allowing it to run on what would become the first PowerPC-based Macintosh, the Power Macintosh 6100. Because of the higher performance of PowerPC relative to m68k at the time, the Mac 68k emulator utility that came with all PowerPC Macs was sufficient to provide backward compatibility. Later versions used dynamic recompilation for even better performance.
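Dynamic recompilation of this kind can be sketched in a few lines: rather than re-interpreting each guest instruction on every pass, a block of guest code is translated once into host code and the result is cached for reuse. Below is a minimal, purely illustrative sketch in Python; the toy instruction set and class names are invented and are not real m68k opcodes or emulator internals.

```python
# Toy sketch of dynamic recompilation: blocks of guest "instructions"
# are translated once into a host (Python) function, cached by block
# address, and reused on later executions -- the trick behind later
# versions of the Mac 68k emulator. The instruction set is invented.

def translate(block):
    """Compile a block of guest ops into one host function."""
    steps = []
    for op, arg in block:
        if op == "add":
            steps.append(lambda r, a=arg: r + a)
        elif op == "mul":
            steps.append(lambda r, a=arg: r * a)
    def compiled(reg):
        for step in steps:
            reg = step(reg)
        return reg
    return compiled

class DynRec:
    def __init__(self, program):
        self.program = program   # {block_addr: [(op, arg), ...]}
        self.cache = {}          # translated blocks, keyed by address

    def run_block(self, addr, reg):
        if addr not in self.cache:           # translate on first visit
            self.cache[addr] = translate(self.program[addr])
        return self.cache[addr](reg)         # reuse thereafter

prog = {0x100: [("add", 3), ("mul", 2)], 0x200: [("add", 1)]}
cpu = DynRec(prog)
r = cpu.run_block(0x100, 5)   # (5 + 3) * 2 = 16
r = cpu.run_block(0x200, r)   # 16 + 1 = 17
print(r)                      # 17
```

In a real emulator the win comes from the cached translation running at native speed on every subsequent visit to the same block, which is why dynamically recompiling versions outpaced the original interpreting emulator.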

The PowerPC era is perhaps the most distinct of all Apple designs, with the colorful all-in-one iMac G3 and the Power Macintosh G3, Power Mac G4 and Power Mac G5 still being easily recognized computers that distinguished Apple systems from PCs. Unfortunately, by the time of the G4 and G5 series of PowerPC CPUs, their performance had fallen behind that of Intel's and AMD's x86-based offerings. Although Intel made a costly mistake with its Netburst (Pentium 4) architecture during the so-called MHz wars, this didn't prevent PowerPC from falling further and further behind.

The Power Mac G5, with its water-cooled G5 CPUs, struggled to keep up with the competition and had poor performance-per-watt numbers. Frustrations between IBM and Apple about whether to focus on PowerPC or IBM's evolution of its server CPUs, called POWER, did not help here. This led Apple to the obvious conclusion: the future was CISC, with Intel x86. With the introduction of the first Intel-based Macs in 2006, Apple's fourth architectural transition had commenced.

As with the transition from m68k to PowerPC back in the early '90s, a utility similar to the Mac 68k emulator was used, called Rosetta. This dynamic binary translator supports translating G3, G4 and AltiVec instructions, but not G5 ones. It also comes with a host of other compromises and performance limitations: for example, it supports neither applications for Mac OS 9 and older (Classic Mac OS) nor Java applications.

The main difference between the Mac 68k emulator and Rosetta is that the former ran in kernel space and the latter runs in user space, meaning that Rosetta is both less effective and less efficient due to the overhead of task switching. These compromises also led Apple to introduce the universal binary format, also known as a fat binary or multi-architecture binary, in which the same executable can contain binary code for more than one architecture, such as PowerPC and x86.
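The fat binary container itself is simple: a big-endian header counts the architectures, followed by one record per contained slice, as declared in Apple's <mach-o/fat.h>. The sketch below builds a synthetic two-architecture header and parses it back; the helper name and the sample offsets and sizes are invented for illustration.

```python
import struct

# The universal ("fat") binary layout from Apple's <mach-o/fat.h>:
# a big-endian fat_header (magic, arch count) followed by one
# 20-byte fat_arch record per contained architecture. Here we build
# a synthetic two-architecture header and parse it back -- no real
# binary needed.

FAT_MAGIC = 0xCAFEBABE
CPU_NAMES = {7: "i386", 18: "ppc", 0x01000007: "x86_64", 0x0100000C: "arm64"}

def list_architectures(blob):
    magic, nfat = struct.unpack_from(">II", blob, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat):
        cputype, cpusubtype, offset, size, align = struct.unpack_from(
            ">iiIII", blob, 8 + i * 20)   # each fat_arch is 20 bytes
        archs.append(CPU_NAMES.get(cputype, hex(cputype)))
    return archs

# Synthetic PowerPC + i386 universal header, as shipped in the 2006 era.
blob = struct.pack(">II", FAT_MAGIC, 2)
blob += struct.pack(">iiIII", 18, 0, 4096, 1000, 12)   # ppc slice
blob += struct.pack(">iiIII", 7, 0, 8192, 1200, 12)    # i386 slice
print(list_architectures(blob))   # ['ppc', 'i386']
```

The loader simply picks the slice matching the running CPU, which is why one file can serve both architectures at native speed, at the cost of a larger download.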

A rare few of us may have missed the recent WWDC announcement at which Apple made it official that it will be switching to the ARM system architecture, abandoning Intel after fourteen years. The real reasons behind this change will have to wait, for obvious reasons, but it was telling when Apple acquired P.A. Semi, a fabless semiconductor company, in 2008. Ever since Apple began to produce ARM SoCs for its iPhones instead of sourcing them from other companies, rumors have spread.

As the performance of this Apple Silicon in iPhones and iPads began to match and exceed that of desktop Intel systems in benchmarks, many felt that it was only a matter of time before Apple would make an announcement like this. There is also the lingering issue of Intel not having had a significant processor product refresh since introducing Skylake in 2015.

So there we are, then. It is 1994 and Apple has just announced that it will transition from m68k CISC to its own (ARM-based?) RISC architecture. Only it is 26 years later, and Apple is transitioning from x86 CISC to its own ARM-based RISC architecture, seemingly completing a process that started back in the late 1980s at Apple.

As for the user experience during this transition, it's effectively a repeat of the PowerPC-to-Intel transition from 2006 onward, with Rosetta 2 (Rosetta Harder?) handling (some) binary translation tasks for applications that do not yet have a native ARM port, and universal binaries (v2.0) for the other applications. Over the next decade or so Apple will find itself straddling the divide between x86 and ARM before it can presumably settle into its new, vertically integrated home after nearly half a century of flitting between foreign system architectures.

Read the original post:

Changing System Architectures And The Complexities Of Apple's Butterfly Approach To ISAs - Hackaday

EEENEW debuts APC, an Android/Apple Phone Computer, the world’s first hot-swap smartphone and Windows tablet – PR Web

APC, the world's first smartphone & Windows tablet, revealed!

HONG KONG (PRWEB) July 13, 2020

Nowadays, daily smartphone usage is frequent and important, which will rapidly boost demand for mobile computing in the coming years. Many smartphone users are eager for a versatile desktop mode for multi-window or multi-task use, to replace heavy computers when on the go. Smartphones becoming mobile computers is coming true: with Samsung DeX, or the desktop mode of Android Q and later, users are ready for smartphone desktop-mode usage.

In the past, a conventional laptop or tablet's screen, keyboard, and touchpad could only be used by that device itself, not by external mobile devices or smartphones. A company called EEENEW has built an entirely new type of tablet that can be hot-swapped between a smartphone desktop mode and a PC Windows mode within the same device. It is named APC+, which stands for Android/Apple Phone Computer; APC+ also represents Advanced Phone Computer or Advanced Personal Computer. It has built-in hardware for switching between Windows and smartphone desktop mode, which is truly convenient for work efficiency or smartphone gaming on the go.

Before the APC was introduced to the world, the traditional way to use a smartphone's desktop mode was to connect the smartphone to a dock, an external monitor, and a keyboard and mouse. This required many peripheral connections, and it was not possible to hot-swap between smartphone desktop mode and PC mode. With the rise of the APC, that awkwardness has changed: the tablet's touchscreen, keyboard, and touchpad can be used by either the smartphone or the Windows side at the user's request.

Unlike imitation products, the APC provides a real desktop mode for smartphones and has video-in USB-C and HDMI ports, and the APC+ provides a true tablet PC inside. It is compatible with the Nintendo Switch as well as Samsung DeX, EMUI, OnePlus, TNT, LG, and Asus smartphones, plus Windows 10 and Linux - all work with the APC tablet!

The APC will change computing history. It has hardware hot-swapping that lets a smartphone become a desktop computer, with Intel hardware and Windows built in. The APC is a super-efficient tablet for computer and smartphone users.

To learn more, check out the links below. APC is about to launch - don't miss out on your super early bird offer.

https://eeenew.com/
https://apc.eeenew.com/


Read the original:

EEENEW debuts APC, an Android/Apple Phone Computer, the world's first hot-swap smartphone and Windows tablet - PR Web

Global Supercomputer Market 2020 Research with COVID-19 After Effects and Industry Progression till 2027 – Cole of Duty

Global Supercomputer Market by Fior Markets specializes in market strategy, market orientation, expert opinion, and knowledgeable information on the global market. The report is a combination of pivotal insights including the competitive landscape; global, regional, and country-level market size; market players; market growth analysis; market share; opportunity analysis; recent developments; and segmentation growth. The report also covers other thoughtful insights and facts such as historical data, sales, revenue, and global market share of Supercomputer, product scope, market overview, opportunities, driving forces, and market risks. The report segregates the market size, status, and forecast of the 2020-2027 market by segments and applications/end businesses.

NOTE: Our analysts monitoring the situation across the globe explain that the market will generate remunerative prospects for producers after the COVID-19 crisis. The report aims to provide an additional illustration of the latest scenario, the economic slowdown, and the impact of COVID-19 on the overall industry.

DOWNLOAD FREE SAMPLE REPORT: https://www.fiormarkets.com/report-detail/418092/request-sample

One of the important factors that makes this report worth buying is its extensive overview of the competitive landscape of the industry. The report comprises upstream raw materials and downstream demand analysis. The most notable players in the market are examined. The report provides a detailed perspective on the trends observed in the market and the main areas with growth potential. The study predicts the growth of the global Supercomputer market size, market share, demand, trends, and gross sales. Key players are studied with information such as associated companies, downstream buyers, upstream suppliers, market position, historical background, and top competitors based on revenue, along with sales contact information.

REQUEST FOR CUSTOMIZATION: https://www.fiormarkets.com/enquiry/request-customization/418092

The major players covered in the report are: NVIDIA Corp., Fujitsu Ltd., Hewlett Packard Enterprise Co., Lenovo Group Ltd., Dell Technologies Inc., International Business Machines Corp., Huawei Investment & Holding Co. Ltd., Dawning Information Industry Co. Ltd., NEC Technologies India Private Limited, Atos SE, and Cray Inc., among others.

The global Supercomputer market has been analyzed, and a proper study of the market has been done on the basis of all the regions of the world. The regions listed in the report include: North America, Europe, Asia Pacific, South America, and the Middle East and Africa.

Moreover, the report studies the value, volume trends, and pricing history of the market. It then covers the sales volume, price, revenue, gross margin, manufacturers, suppliers, distributors, intermediaries, customers, historical growth, and future perspectives in the global Supercomputer market. The study inherently projects this industry space to post modest proceeds by the end of the forecast duration.

BROWSE COMPLETE REPORT AND TABLE OF CONTENTS: https://www.fiormarkets.com/report/supercomputer-market-by-operating-system-windows-linux-unix-418092.html

It Includes Analysis of The Following:

Market Overview: The section covers sector size, market size, detailed insights, and growth analysis by segmentation

Competitive Illustration: The report includes a list of major companies/competitors and their competition data, which helps users determine their current position in the market and maintain or increase their market share.

Country-Wise Analysis: This section offers information on sales growth at the country level for the Supercomputer market in these regions.

Challenges and Future Outlook: Provides the challenges and future outlook for the Supercomputer market.

This report will be beneficial for any new business establishment or business looking to upgrade and make impactful changes. The overall report is a comprehensive document that covers all the aspects of a market study and provides a concise conclusion for its readers. For the purpose of this study, the global Supercomputer market has been segmented on the basis of the recommendations and regions covered in this report.

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team ([emailprotected]), who will ensure that you get a report that suits your needs.

See the original post:

Global Supercomputer Market 2020 Research with COVID-19 After Effects and Industry Progression till 2027 - Cole of Duty

Supercomputer Industry Market Size, Growth Opportunities, Trends by Manufacturers, Regions, Application & Forecast to 2025 – Cole of Duty

The latest trending report, Global Supercomputer Industry Market to 2025, available at MarketStudyReport.com, is an informative study covering the market with detailed analysis. The report will assist readers with better understanding and decision making.

The Supercomputer Industry market report is an in-depth analysis of this business space. The major trends that define the Supercomputer Industry market over the analysis timeframe are stated in the report, along with additional pointers such as industry policies and regional industry layout. The report also elaborates on the impact of existing market trends on investors.

Request a sample Report of Supercomputer Industry Market at: https://www.marketstudyreport.com/request-a-sample/2769152?utm_source=coleofduty.com&utm_medium=AN

COVID-19 surfaced in late 2019 and has since become a full-blown crisis worldwide. Over fifty key countries have declared a national emergency to combat the coronavirus. With cases spreading, and the epicentre of the outbreak shifting to Europe, North America, India and Latin America, life in these regions has been upended the way it had been in Asia earlier in the crisis. As the coronavirus pandemic has worsened, the entertainment industry has been upended along with almost every other facet of life. As experts work toward a better understanding, the world shudders in fear of the unknown, a worry that has rocked global financial markets, leading to daily volatility in the U.S. stock markets.

Other information included in the Supercomputer Industry market report is advantages and disadvantages of products offered by different industry players. The report enlists a summary of the competitive scenario as well as a granular assessment of downstream buyers and raw materials.

Revealing a gist of the competitive landscape of Supercomputer Industry market:

Ask for a Discount on the Supercomputer Industry Market Report at: https://www.marketstudyreport.com/check-for-discount/2769152?utm_source=coleofduty.com&utm_medium=AN

An outlook of the Supercomputer Industry market regional scope:

Additional takeaways from the Supercomputer Industry market report:

This report considers the below mentioned key questions:

Q.1. What are some of the most favorable, high-growth prospects for the global Supercomputer Industry market?

Q.2. Which products segments will grow at a faster rate throughout the forecast period and why?

Q.3. Which geography will grow at a faster rate and why?

Q.4. What are the major factors impacting market prospects? What are the driving factors, restraints, and challenges in this Supercomputer Industry market?

Q.5. What are the challenges and competitive threats to the market?

Q.6. What are the evolving trends in this Supercomputer Industry market and reasons behind their emergence?

Q.7. What are some of the changing customer demands in the Supercomputer Industry market?

For More Details On this Report: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-supercomputer-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Reports:

1. COVID-19 Outbreak-Global Switch Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020. Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-switch-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

2. COVID-19 Outbreak-Global Tritium Light Sources Industry Market Report-Development Trends, Threats, Opportunities and Competitive Landscape in 2020. Read More: https://www.marketstudyreport.com/reports/covid-19-outbreak-global-tritium-light-sources-industry-market-report-development-trends-threats-opportunities-and-competitive-landscape-in-2020

Related Report : https://www.marketwatch.com/press-release/steam-boiler-market-share-historical-growth-analysis-opportunities-and-forecast-to-2025-2020-07-09?tesla=y

Contact Us: Corporate Sales, Market Study Report LLC | Phone: 1-302-273-0910 | Toll Free: 1-866-764-2150 | Email: [emailprotected]

Excerpt from:

Supercomputer Industry Market Size, Growth Opportunities, Trends by Manufacturers, Regions, Application & Forecast to 2025 - Cole of Duty

Tech News: Neuromorphic computing and the brain-on-a-chip in your pocket – IOL

By Louis Fourie Jul 10, 2020


JOHANNESBURG - The human brain is relatively small, uses about 20 watts of power, and can accomplish an amazing number of complex tasks. In contrast, the machine learning algorithms that are growing in popularity need large, powerful computers and data centres that consume megawatts of electricity.

Artificial intelligence (AI) produces astounding achievements: recognising images with greater accuracy than humans, holding natural conversations, beating humans at sophisticated games, and driving vehicles in heavy traffic.

AI is indeed a disruptive power of the Fourth Industrial Revolution, currently driving advances in everything from medicine to weather prediction. However, all of these advances require enormous amounts of computing power and electricity to develop, train, and run the algorithms.

According to Elon Musk, the computing power and electricity consumption of AI machines double every three to four months, making them a major concern for environmentalists.

But it seems that we can learn something from nature in our endeavour to address the high consumption of electricity and the resultant contribution to the climate crisis by AI and powerful machines.

A branch of computer chip design focuses on mimicking the biological brain to create super-efficient neuromorphic chips that will bring AI from the powerful and energy-hungry machines right to our pocket.

Neuromorphic computing

Neuromorphic computing is the next generation of AI and entails very-large-scale integration (VLSI) systems containing electronic analog circuits that mimic the neuro-biological architectures present in the biological nervous system.

This form of AI has more in common with human cognition than with conventional computer logic.

In November 2017, Intel Labs introduced Loihi, a fifth-generation self-learning neuromorphic research test chip containing some 130 000 neurons, to provide a functional system for researchers to implement spiking neural networks (SNNs) that emulate natural neural networks in biological brains.

Each neuron in an SNN can fire, or spike, independently and send pulsed signals with encoded information to other neurons, simulating the natural learning process by dynamically remapping the synapses between the artificial neurons in response to stimuli.
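This fire-and-reset behaviour can be illustrated with a leaky integrate-and-fire neuron, the standard textbook abstraction of a spiking neuron (not Loihi's actual, more elaborate neuron model); the parameters below are arbitrary:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential
# leaks toward rest, accumulates incoming current, and emits a spike
# (then resets) when it crosses threshold -- the basic unit of a
# spiking neural network. A textbook abstraction, not Intel's Loihi.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    v, spikes = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current    # leak, then integrate input
        if v >= threshold:        # fire ...
            spikes.append(t)
            v = 0.0               # ... and reset
    return spikes

# Constant weak input: the neuron spikes periodically, encoding the
# input's strength in its firing rate rather than in a single value.
print(simulate_lif([0.3] * 20))   # -> [3, 7, 11, 15, 19]
```

Information thus travels as the timing and rate of discrete pulses rather than as continuous activation values, which is what makes SNN hardware so power-frugal between spikes.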

MIT & memristors

About a month ago, engineers at the Massachusetts Institute of Technology (MIT) published a paper in the prestigious journal Nature Nanotechnology announcing that they had designed a brain-on-a-chip consisting of thousands of artificial brain synapses known as memristors.

A memristor is a silicon-based electronic memory device that mimics the information-transmitting synapses in the human brain to carry out complex computational tasks. The neuromorphic chip, smaller than a piece of confetti, is so powerful that a small portable device could now easily handle convoluted computational tasks currently carried out by today's supercomputers.

Artificial neural networks are nothing new; until now, however, synapse networks existed only as software. MIT has built real neural-network hardware that makes small, portable AI systems possible, cutting the power consumption of AI networks by about 95 percent.

Just imagine connecting a small neuromorphic device to a camera in your car and having it recognise lights and objects and make a decision immediately, without having to connect to the Internet. This is exactly what this new energy-efficient MIT chip will make possible, on-site and in real time.

Memristors, or memory resistors, are an essential component of neuromorphic computing. In a neuromorphic device, a memristor takes the place of the transistor in a circuit, but it behaves more like a brain synapse (the junction between two neurons): the synapse receives signals from one neuron in the form of ions and sends an equivalent signal on to the following neuron.

The computers in our phones and laptops currently use separate digital components for processing and memory, so information is continuously transferred between those components. The new MIT chip instead computes all the inputs in parallel within the memory, using analog circuits, in a way similar to how the human brain works, which significantly reduces the amount of data that needs to be transferred and saves a great deal of electricity.
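Numerically, this in-memory computation amounts to a matrix-vector product carried out by physics: each cross-point in a memristor crossbar stores a conductance (the weight), input voltages drive the rows, and by Ohm's and Kirchhoff's laws the current summed on each column is already the weighted sum. A toy digital simulation of that analog behaviour (all values invented, not MIT's device data):

```python
# Toy model of a memristor crossbar doing a matrix-vector product in
# place: each cross-point stores a conductance G[i][j] (the "weight"),
# a voltage v[i] drives each row, and the current collected on column
# j is I[j] = sum_i v[i] * G[i][j] by Ohm's and Kirchhoff's laws.
# The numbers below are illustrative, not measured device data.

def crossbar_output(G, v):
    rows, cols = len(G), len(G[0])
    return [sum(v[i] * G[i][j] for i in range(rows)) for j in range(cols)]

G = [[0.2, 0.5],     # conductances programmed into the array
     [0.4, 0.1],
     [0.3, 0.6]]
v = [1.0, 0.5, 2.0]  # input voltages applied to the rows

print(crossbar_output(G, v))   # column currents, approx. [1.0, 1.75]
```

The multiply-accumulate that dominates neural-network inference thus happens in one physical step where the weights are stored, with no data shuttled to a separate processor.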

Since memristors are not binary like the transistors in a conventional circuit, but can take on many values, they can carry out a far wider range of operations. This means that memristors could enable small portable devices that do not rely on supercomputers, or even on connections to the Internet and cloud processing.

To overcome the challenges of reliability and scalability, the MIT researchers used a new kind of silicon-based, alloyed memristor. Until now, the ions flowing in memristors made from unalloyed material scattered easily as the components got smaller, leading to inferior fidelity and computational reliability; reproduced images were often of poorer quality.

However, an alloy of conventional silver with silicidable copper (a silicide being a compound of silicon with a more electropositive element) stabilises the flow of ions between the electrodes, allowing the number of memristors on a small chip to be scaled up without sacrificing quality or functionality. After numerous rounds of storing and reproducing a visual task, the resulting images were much crisper and clearer than with existing memristor designs using unalloyed elements.

The MIT researchers are not the first to create chips to carry out processing in memory to reduce power consumption of neural nets.

However, it is the first time the approach has been used to run the powerful convolutional neural networks popular in image-based AI applications. This will certainly open the possibility of using more complex convolutional neural networks for image and video classification in the Internet of Things in the future. Although much work still needs to be done, the new MIT chip also opens up opportunities to build more AI into devices such as smartphones, household appliances, Internet of Things devices, and self-driving cars, where powerful low-power AI chips are needed.

Companies & chips

MIT is not the only institution working on making AI more suitable for smaller devices. Apple has already integrated its Neural Engine into the iPhone X to power its facial recognition technology. Amazon is developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also working on the energy efficiency of their chips, since they are increasingly building advanced capabilities like machine learning into them. At the beginning of this year, Arm unveiled new chips capable of AI tasks such as translation, facial recognition, and the detection of faces in images. Even Qualcomm's new Snapdragon mobile chips are heavily focused on AI.

Going even further, IBM and Intel are developing neuromorphic chips. IBM's TrueNorth and Intel's Loihi can run powerful machine learning tasks on a fraction of the power of conventional chips.

The cost of AI and machine learning is also declining dramatically. The cost to train an image-recognition algorithm decreased from around R17 000 in 2017 to about R170 in 2019.

The cost of running such an algorithm decreased even more. The cost to classify a billion images was R17 000 in 2017, but just R0.51 in 2019.

There is little doubt that as neuromorphic chips advance further in the years to come, the trends of miniaturization, increased performance, less power consumption, and much lower AI costs will continue.

Perhaps it will not be too long before we carry some serious AI, or artificial brains, in our pockets that can outperform current supercomputers, just as our cellphones are more powerful than the supercomputers of many years ago. AI will be in our pockets, as well as in numerous other devices. It will increasingly be part of our lives, making decisions on our behalf, guiding us, and automating many current tasks.

The Fourth Industrial Revolution is fundamentally changing engineering and making things possible that we could only dream of before.

Professor Louis C H Fourie is a futurist and technology strategist.

BUSINESS REPORT

Go here to read the rest:

Tech News: Neuromorphic computing and the brain-on-a-chip in your pocket - IOL

Give IBM your unused computing power to help cure coronavirus and cancer – CNET

Your idle Android phone could be performing calculations that help cure diseases.

When Sawyer Thompson was just 12 years old, he discovered his father Brett unconscious in their Washington, DC area home. Sawyer called an ambulance and Brett was rushed to the hospital, where the family learned the worst: He had brain cancer. After a year of surgeries, radiation and chemotherapy, Brett's cancer is in remission. But Sawyer wanted to do more to fight against cancer, and is tapping his interest in tech to make a bigger difference.

Like many young people, Sawyer -- who built his first computer at age 9, and started a business called ZOYA building machines for locals -- took to the internet. A Google search on "how to help cure cancer" led him to the IBM World Community Grid app, and gave him a way to make a difference from home.


The IBM World Community Grid app uses "volunteer computing" -- a type of distributed computing in which you donate your computer's unused resources to a research project. Basically, with the app, your computer, phone or tablet can run virtual experiments in the background while you aren't using it -- experiments that would normally take years of expensive trial and error using laboratory computers alone. The crowdsourcing approach lets anyone participate in important research, with no time, money or expertise required.
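The distribution model behind this is straightforward: a large job of independent calculations is split into work units, each unit is computed by whichever volunteer device happens to be idle, and the partial results are merged in any order. A minimal single-machine sketch, where the task (a sum of squares) is a hypothetical stand-in for the real genetic comparisons:

```python
# Minimal sketch of volunteer-style distributed computing: a big job
# is split into independent work units, each unit can be computed by
# any idle "volunteer" device, and the coordinator merges the results
# in any order. The task here (summing squares) is a stand-in for the
# far larger data comparisons run by real grids.

def make_work_units(n_items, unit_size):
    """Split the job [0, n_items) into independent chunks."""
    return [range(lo, min(lo + unit_size, n_items))
            for lo in range(0, n_items, unit_size)]

def volunteer_compute(unit):
    """What one volunteer device does with one unit while idle."""
    return sum(x * x for x in unit)

units = make_work_units(1000, 64)                   # 16 units of <= 64 items
partials = [volunteer_compute(u) for u in units]    # done by many devices
total = sum(partials)                               # coordinator merges

print(len(units), total)
```

Because the units are independent, they can be handed out to hundreds of thousands of devices at once and retried freely when a volunteer goes offline, which is what lets a grid compress years of single-machine computing into months.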

"I've always wanted to find a way to help people with computers," Sawyer said. "World Community Grid allows anyone to help cure cancer, find cures for COVID-19 and study rainfall in Africa. It's really cool."

As people are still largely stuck at home due to the coronavirus pandemic, finding ways to volunteer that don't require an in-person commitment or a donation can be difficult. But volunteer computing initiatives like World Community Grid provide opportunities to help.

Last year, Sawyer created a website called Help Sawyer Fight Cancer to share his dad's story and urge people to sign up for the app. He set an "audacious goal" of getting 100 years of cancer research processing time donated before his dad's birthday in September. Two other users on another team, nicknamed Old Chap in the UK and the Little Mermaid in Copenhagen, came across the project. Their team joined Sawyer's, and within a few months more than 80 people around the world helped him cross the 100-year mark.

Soon after that, Old Chap received a cancer diagnosis of his own. And Sawyer, now age 14, decided to shoot for 1,000 years of research processing time, instead of just 100.

"I changed the goal not just for my dad, but for Old Chap and anyone else who finds themself unexpectedly on this journey," Sawyer said. "It's honestly been crazy. At first I never thought we'd reach 100 years, and here we are trekking our way to 1,000 years."

The team's computers have already performed about 1 million calculations -- contributing more than 450 years worth of computing, had a single PC been crunching the same numbers.

"Other forms of donating to researchers involve money," Sawyer said. "But this is 100% free and requires no effort at all."

Sawyer Thompson, right, started using IBM's Community Grid app to donate his unused computing power to cancer research after his father Brett's brain cancer diagnosis.

Volunteer computing has been around since the 1990s, and such efforts are typically organized by academic and research organizations. IBM launched the World Community Grid as part of the company's social responsibility work in 2004. The app currently has more than 785,000 volunteers who donate their unused computing power to any of seven projects, focused on healthcare research on cancer, COVID-19, bacteria, tuberculosis and AIDS, or environmental research on rainfall in sub-Saharan Africa.

"World Community Grid is essentially a way to crowdsource big scientific problems, and enlist the help of volunteers to solve challenges in health and environmental research," said Juan Hindo, an IBM Corporate Social Responsibility manager and leader of the World Community Grid team.

The Mapping Cancer Markers project identifies indicators of cancer and studies how to personalize treatment plans. Researchers have millions of different tissue samples -- from healthy people, from people with different types of cancer, from those who have passed and from those who are still patients.

The Mapping Cancer Markers project in IBM World Community Grid.

"They're essentially doing a massive data comparison exercise to compare the genetic profile of all these people in the hope of identifying factors that can say, for example, people with aggressive type of cancer X are more likely to have these biomarkers," Hindo said.

Processing these millions of data points requires a lot of computing power, Hindo said. That's where volunteers step in.
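
As a toy illustration of the comparison exercise Hindo describes (not the project's actual method), the sketch below takes binary biomarker profiles for two cohorts and ranks markers by how much more frequent they are in the aggressive group. The gene names are purely illustrative:

```python
from collections import Counter

def biomarker_enrichment(aggressive, indolent):
    """Rank biomarkers by how much more common they are in the
    aggressive cohort than the indolent one. Each profile is a set
    of marker names present in a tissue sample -- a tiny stand-in
    for the massive data comparison distributed across volunteers."""
    agg = Counter(m for profile in aggressive for m in profile)
    ind = Counter(m for profile in indolent for m in profile)
    markers = set(agg) | set(ind)
    score = {m: agg[m] / len(aggressive) - ind[m] / len(indolent)
             for m in markers}
    return sorted(score, key=score.get, reverse=True)

# Illustrative cohorts: TP53 appears in every aggressive sample.
aggressive = [{"BRCA1", "TP53"}, {"TP53", "KRAS"}, {"TP53"}]
indolent = [{"BRCA1"}, {"KRAS"}, set()]
print(biomarker_enrichment(aggressive, indolent)[0])  # TP53
```

The real project does this comparison across millions of samples and marker combinations, which is why the workload is split into small units each volunteer device can handle.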

"Rather than trying to find a supercomputer or get more funding for computing capacity, [the researchers] bring us millions of calculations, and we distribute them out to our massive community of volunteers," she added. "They're not scientists or techies, and they don't need any skills or expertise to solve this problem."

Once the app is installed on a volunteer's computer or Android device, it can run a calculation any time the device isn't being fully used.

"By crowdsourcing this and running it out over our volunteer community, the researchers get to do this in a fraction of the time," Hindo said. "We hear from our volunteers over and over again that they feel like they're a part of a scientific process that they wouldn't otherwise be able to contribute to."

You can join the World Community Grid through IBM's website by entering an email address and creating a password, and then selecting which of the active projects you'd like to put your computing power toward. Then, you download the app on your computer or Android device (it's not on iOS).

Once you've joined the program and installed the app, everything works seamlessly, Hindo said. The app will figure out if you have any spare computing power and if so, will take on some calculations and send results back.

You can donate your unused computing power to one of several different projects on the World Community Grid app.

The app only runs if your device is plugged in and charged to at least 90 percent. The Android app will only download calculations or upload results when connected to Wi-Fi, so it won't eat up your data, Hindo said. The ideal use case is when you're charging your phone or computer overnight.
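
Those rules can be condensed into a small eligibility check. This is an illustrative sketch of the conditions as described, not IBM's actual code; note the Wi-Fi rule governs data transfer on Android rather than computation itself:

```python
def ok_to_compute(plugged_in, battery_pct):
    """Run calculations only on mains power with a mostly full battery."""
    return plugged_in and battery_pct >= 90

def ok_to_transfer(is_android, on_wifi):
    """On Android, only move work units over Wi-Fi so mobile data is spared."""
    return on_wifi if is_android else True

# Charging overnight on Wi-Fi: the ideal case described above.
print(ok_to_compute(True, 95) and ok_to_transfer(True, True))  # True
print(ok_to_compute(False, 100))  # False: on battery, never run
```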

When you open the app, you can find out what types of calculations your device has been working on.

In terms of security, the app uses one folder where downloaded and uploaded data goes, but doesn't touch any other data on your device, Hindo said. On the other end, the data you receive from researchers doesn't include any personally identifiable information, she added. However, anything you post in the community forums may become available to third party search engines online, according to the app's terms of service.

Researchers keep IBM and volunteers up to date on how they're using the data and calculations, what results they're finding and where they are publishing those discoveries, Hindo said. World Community Grid is also an open data project, which means all findings are made publicly available so the wider scientific community can benefit from volunteers' work.

The projects have yielded many papers published in scientific journals, Hindo said. For example, in 2014, scientists from a World Community Grid project aiming to fight childhood cancer announced the discovery of seven compounds that can destroy neuroblastoma cancer cells without any apparent side effects, marking a move toward new treatments.

"I want people to feel empowered that they can do something productive -- it's a fairly unique way of supporting a cause they care about, like cancer research," Hindo said. "Everyone's familiar with ways of volunteering your time or donating your money, and this is a different type of volunteerism -- all it takes is for you to download the app."

Continue reading here:

Give IBM your unused computing power to help cure coronavirus and cancer - CNET

New OCF Supercomputer at the University of Aberdeen Supports Ground Breaking Genomics Research – HPCwire

July 7, 2020 -- Researchers at the University of Aberdeen are benefitting from an investment in High Performance Computing (HPC). The new HPC cluster, called Maxwell, is supporting groundbreaking research at the University's Centre for Genome-Enabled Biology and Medicine (CGEBM) and provides a centralized HPC system for the whole University, with applications in medicine, biological sciences, engineering, chemistry, maths and computing science. The new HPC system is designed, integrated and managed by high performance compute, storage, cloud and AI integrator OCF. The supercomputer is part of the University's expansion to improve facilities for staff and students.

With Maxwell, the University's CGEBM is able to rapidly analyze complex genomics datasets from known and novel organisms and help researchers to revolutionize the study of the Earth's biodiversity and complex ecosystems important to health and disease, agriculture or the environment. It is estimated that only around 1 percent of the Earth's biodiversity is easily culturable in a laboratory, and little is known about most living organisms on the planet.

With the use of HPC, University researchers can analyze microbiomes associated with a diverse array of ecosystems, such as the human gut, fish important to Scottish aquaculture, glaciers, deep-sea sediments, soil and bioreactors for the production of sustainable and environmentally friendly biofuels. These state-of-the-art studies provide new understanding of important and diverse biological processes such as antimicrobial drug resistance; pathogen detection, evolution and virulence; mechanisms of drug efficacy and toxicity; development; inflammation; tumorigenesis; nutrition and satiety; and degradation of hydrocarbons.

Scotia Biologics, an SME research company, is working with the University's CGEBM, using Maxwell's capacity to speed up its existing pipeline and generate a more comprehensive dataset using genomics than the traditional methods typically used in its field.

The new HPC system is also being used to teach graduates and post-graduate students in specialist subjects such as AI and bioinformatics, fields important to modern research and STEM careers, providing them with a unique opportunity to access HPC capacity. With 300 users, the cluster is providing a centralized HPC system to support all researchers and post-graduate students across the University.

With twenty times more storage than the University's previous HPC system, Maxwell comprises four Lenovo ThinkSystem SD530 servers, 40 compute nodes, a ThinkParQ-supported BeeGFS parallel file system hosted on Lenovo servers and storage, and NVIDIA GPUs. OCF is also providing an open-source software stack and its OCF Remote HPC Admin Managed Service to support the in-house HPC team.

Dean Phillips, Assistant Director, Digital and Information Services at the University of Aberdeen, says: "Aberdeen is a research-intensive university and we've already seen an increase of 50 percent in registered users of our Maxwell HPC cluster. Having our own HPC system helps the University to attract new researchers and research funding, and to expand on existing programs of research and teaching. It is highly beneficial for our researchers to have on-site access to HPC infrastructure, particularly when securing start-up funds."

Phillips continues: "OCF's Remote Admin Service is an extension of our team and really helps to ensure the smooth day-to-day running of our HPC cluster, dealing with support issues and user requests, and keeping on top of software and security updates."

Dr Elaina Collie-Duguid, Manager of the Centre for Genome-Enabled Biology & Medicine at the University of Aberdeen, says: "Genomics is a dynamic discipline that rapidly evolves into new applications and approaches to interrogate complex systems. The new HPC cluster, with its expanded capacity and advanced GPU capabilities, enables us to use new analysis methods and work at a much quicker rate than before. It really is an exciting time for genomics, which is revolutionizing the study of organisms and complex ecosystems to address issues of global importance, and HPC is a critical tool for analysis of these data."

Russell Slack, Managing Director of OCF, comments: "The new HPC cluster helps the University remain ahead in a fiercely competitive market. It attracts researchers, students and grants to its facility. Aberdeen's investment in HPC is a credit to its foresight about the importance of HPC in research that impacts people and everyday lives."

Keith Charlton, CEO of Scotia Biologics, says: "As part of our drive to introduce new services for the life sciences sector, Scotia is developing phage display library capabilities based around a growing number of animal species. With access to Maxwell, we've been able to quickly generate a large volume of data relatively inexpensively whilst significantly advancing our R&D program."

Source: University of Aberdeen

See the original post here:

New OCF Supercomputer at the University of Aberdeen Supports Ground Breaking Genomics Research - HPCwire

German Climate Computing Centre Orders Atos Supercomputer That Will Boost Computing Power by 5X – HPCwire

PARIS, France, June 22, 2020 -- Atos has signed a new five-year contract with the German Climate Computing Centre (DKRZ) to supply a supercomputer based on its latest BullSequana XH2000 technology, increasing DKRZ's computing power fivefold compared with the currently operating high-performance computer, Mistral, which was provided by Atos in 2015. The new system will be available at DKRZ from mid-2021.

BullSequana to accelerate and deliver more precise forecasting

Just as a new, more powerful telescope provides more detailed images from space, a more powerful supercomputer allows for more detailed simulations and thus deeper insights into climate events. This significant increase in computing power will enable researchers at DKRZ to use more regionally detailed climate and earth system models in the future, to include more processes in calculations, to simulate longer time periods, and to more accurately capture natural climate variability using ensemble calculations, thus reducing uncertainties. This is accompanied by a strong increase in the data that is calculated and then stored and evaluated. The BullSequana is an efficient computing and data management solution, essential for climate modelling and the resulting data volumes, to promote environmental research and deliver more reliable, detailed results.

Prof. Thomas Ludwig, CEO at DKRZ, says: "Our high-performance computer is the heart around which our services for science are grouped. We're really happy to be working with Atos again. With the new system, our users will be able to gain new insights into the climate system and deliver even more detailed results. This concerns basic research, but also more applied fields of research such as improved current climate projections. This way, we help gain fundamental insights for climate change adaptation."

Damien Déclat, Group VP, Head of HPC, AI & Quantum Business Operations at Atos, explains: "With our strong expertise and experience, we have been able to successfully design the DKRZ solution, integrating the BullSequana XH2000 system's best-of-breed components efficiently to optimize DKRZ's production workloads. We look forward to continuing this joint effort to anticipate the next phases, as well as to adapt applications and requirements to the next processor generation and other accelerating components."

Atos is a specialist in the provision of leading technologies for some of the world's leading centers in the weather forecasting and climate community, such as the European Centre for Medium-Range Weather Forecasts and the French meteorological service Météo-France, and has worked closely with them to optimize applications, explore and anticipate new technologies, and pursue increased efficiency and reduced TCO.

Technical specifications

The Atos solution is based on its BullSequana XH2000 supercomputer and will be one of the first equipped with the next generation of AMD EPYC x86 processors. The interconnect uses NVIDIA Mellanox InfiniBand HDR 200G technology, and the data storage solution relies on DDN equipment. The final system will consist of around 3,000 compute nodes with a total peak performance of 16 petaflops, 800 terabytes of main memory and a 120-petabyte storage system.
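
Back-of-the-envelope arithmetic on those figures gives a sense of the machine's per-node balance. The node count is stated as approximate, so these are rough numbers:

```python
peak_pflops = 16   # total peak performance
nodes = 3000       # approximate node count
memory_tb = 800    # total main memory

# Rough per-node figures implied by the totals above.
per_node_tflops = peak_pflops * 1000 / nodes  # 16,000 TFLOPS / 3,000 nodes
per_node_mem_gb = memory_tb * 1000 / nodes    # 800,000 GB / 3,000 nodes

print(round(per_node_tflops, 1), "TFLOPS and",
      round(per_node_mem_gb), "GB per node")  # 5.3 TFLOPS and 267 GB
```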

Financing

The new system is worth 32.5 million euros, which is being provided by the Helmholtz Association of German Research Centres, the Max Planck Society and the Free and Hanseatic City of Hamburg.

About DKRZ

The German Climate Computing Center (Deutsches Klimarechenzentrum, DKRZ) is a central service center for German climate and earth system research. Its high performance computers, data storage and services form the central research infrastructure for simulation-based climate science in Germany. Apart from providing computing power, data storage capacity and technical support for models and simulations in climate research, DKRZ offers its scientific users an extensive portfolio of tailor-made services. It maintains and develops application software relevant to climate research and supports its users in matters of data processing. Finally, DKRZ also participates in national and international joint projects and cooperations with the aim of improving the infrastructure for climate modeling.

About Atos

Atos is a global leader in digital transformation with 110,000 employees in 73 countries and annual revenue of 12 billion euros. The European number one in Cloud, Cybersecurity and High-Performance Computing, the Group provides end-to-end Orchestrated Hybrid Cloud, Big Data, Business Applications and Digital Workplace solutions. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and operates under the brands Atos, Atos|Syntel, and Unify. Atos is a SE (Societas Europaea), listed on the CAC40 Paris stock index.

Source: Atos

More:

German Climate Computing Centre Orders Atos Supercomputer That Will Boost Computing Power by 5X - HPCwire

RIKEN Physicists Develop Pseudo-2D Architecture for Quantum Computers that is Simple and Scalable – HPCwire

June 22, 2020 -- A simple pseudo-2D architecture for connecting qubits, the building blocks of quantum computers, has been devised by RIKEN physicists. This promises to make it easier to construct larger quantum computers.

Quantum computers are anticipated to solve certain problems overwhelmingly faster than conventional computers, but despite rapid progress in recent years, the technology is still in its infancy. "We're still in the late 1940s or early 1950s, if we compare the development of quantum computers with that of conventional computers," notes Jaw-Shen Tsai of the RIKEN Center for Emergent Matter Science and the Tokyo University of Science.

One bottleneck to developing larger quantum computers is the problem of how to arrange qubits in such a way that they can both interact with their neighbors and be readily accessed by external circuits and devices. Conventional 2D networks suffer from the problem that, as the number of qubits increases, qubits buried deep inside the networks become difficult to access.

To overcome this problem, large companies such as Google and IBM have been exploring complex 3D architectures. "It's kind of a brute-force approach," says Tsai. "It's hard to do and it's not clear how scalable it is," he adds.

Tsai and his team have been exploring a different tack from the big companies. "It's very hard for research institutes like RIKEN to compete with these guys if we play the same game," Tsai says. "So we tried to do something different and solve the problem they aren't solving."

Now, after about three years of work, Tsai and his co-workers have come up with a quasi-2D architecture that has many advantages over 3D ones.

Their architecture is basically a square array of qubits deformed in such a way that all the qubits are arranged in two rows (Fig. 1), a "bilinear array with cross wiring," as Tsai calls it. Since all the qubits lie on the edges, it is easy to access them.

The deformation means that some wires cross each other, but the team overcame this problem by using airbridges so that one wire passes over the other one, much like a bridge at the intersection of two roads allows traffic to flow without interruption. Tests showed that there was minimal crosstalk between wires.

The scheme is much easier to construct than 3D ones since it is simpler and can be made using conventional semiconductor fabrication methods. It also reduces the number of wires that cross each other. And importantly, it is easy to scale up.
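
The access argument is easy to quantify with a rough sketch (an illustration of the idea described above, not the team's model): in a conventional n-by-n grid, only boundary qubits are directly reachable by external wiring, while the bilinear layout leaves every qubit on an edge:

```python
def buried_qubits_square(n):
    """Qubits in the interior of an n-by-n grid, i.e. those not on
    the boundary and therefore hard to reach with control wiring."""
    return max(n - 2, 0) ** 2

def buried_qubits_bilinear(n):
    """In the bilinear layout, every qubit sits in one of two rows,
    so all of them lie on an edge and are directly accessible."""
    return 0

# For the 10x10 array the team plans to build, a conventional square
# layout would bury 64 of the 100 qubits; the bilinear layout, none.
print(buried_qubits_square(10), buried_qubits_bilinear(10))  # 64 0
```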

The team now plans to use the architecture to make a 10×10 array of qubits.

About RIKEN

RIKEN is Japan's largest comprehensive research institution, renowned for high-quality research in a diverse range of scientific disciplines. Founded in 1917 as a private research foundation in Tokyo, RIKEN has grown rapidly in size and scope, today encompassing a network of world-class research centers and institutes across Japan.

Source: RIKEN

Here is the original post:

RIKEN Physicists Develop Pseudo-2D Architecture for Quantum Computers that is Simple and Scalable - HPCwire

Tech company uses quantum computers to help shipping and trucking industries – FreightWaves

Ed Heinbockel, president and chief executive officer of SavantX, said he's excited about how a powerful new generation of quantum computers can bring practical solutions to industries such as trucking and cargo transport.

"With quantum computing, I'm very keen on this, because I'm a firm believer that it's a step-change technology," Heinbockel said. "It's going to rewrite the way that we live and the way we work."

Heinbockel referred to recent breakthroughs such as Google's demonstration of "quantum supremacy," in which a programmable quantum processor solved a problem that no classical computer could feasibly solve.

In October 2019, Google's quantum processor, named Sycamore, performed a computation in 200 seconds that would have taken the world's fastest supercomputer 10,000 years to solve, according to Google.

Jackson, Wyoming-based SavantX also recently formed a partnership with D-Wave Systems Inc., a Burnaby, Canada-based company that develops and offers quantum computing systems, software and services.

With D-Wave's quantum services, SavantX has begun offering its Hyper Optimization Nodal Efficiency (HONE) technology, which solves optimization problems, to customers such as the Pier 300 container terminal project at the Port of Los Angeles.

The project, which began last year, is a partnership between SavantX, Blume Global and Fenix Marine Services. The project's goal is to optimize the spacing and placement of shipping containers to better integrate with inbound trucks and freight trains. The Pier 300 site handles 1.2 million container lifts per year.

"With Pier 300, when do you need trucks at the pier, and when and how do you get them scheduled optimally?" Heinbockel said. "So the appointing part of it is very important, and that is a facet of HONE technology."

Heinbockel added, "We're very excited about the Pier 300 project, because HONE is a generalized technology. Then it's a question of what other systems we can optimize. In all modes of transportation, the winners are going to be those that can minimize the energy in the systems; energy reduction. That's all about optimization."

Heinbockel co-founded SavantX in 2015 with David Ostby, the company's chief science officer. SavantX offers data collection and visualization tools for industries ranging from healthcare to nuclear energy to transportation.

Heinbockel also recently announced that SavantX will be relocating its corporate research headquarters to Santa Fe, New Mexico. The new center, which could eventually include 100 employees, will be focused on the company's HONE technology and customizing it for individual clients.

Heinbockel said SavantX has been talking to trucking, transportation and aviation companies about how HONE can help solve issues such as driver retention and optimizing schedules.

"One of the problems I've been hearing consistently from trucking companies is that they hire somebody. The HR department tells the new employee we'll have you home every Thursday night," Heinbockel said. "Then you get onto a Friday night or Saturday, and [the driver] is still not home."

Heinbockel said that if quantum computing and HONE can be used to help trucking companies with driver retention, it will make a lot of companies happy.

Heinbockel said cross-border operations could use HONE to understand what the flow patterns are like for commercial trucks crossing through different ports at various times of the day.

"You would optimize your trucking flow based on when those lax periods were at those various ports. Or you could ask yourself, is it cheaper for me to send a truck 100 miles out of the way to another port, knowing that it can get right through that port without having to sit for two or three hours in a queue," Heinbockel said.
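
The routing question Heinbockel poses is a classic cost trade-off, and a toy version fits in a few lines. This sketch is purely illustrative: the detour and queue costs are invented assumptions, and HONE itself runs on quantum annealing hardware rather than a loop like this:

```python
def best_port(ports, detour_cost_per_mile=2.0, queue_cost_per_hour=75.0):
    """Pick the port with the lowest total cost of detour miles plus
    expected queue time. `ports` maps name -> (extra_miles, queue_hours);
    both per-unit costs are hypothetical numbers for illustration."""
    def cost(name):
        miles, queue_hours = ports[name]
        return miles * detour_cost_per_mile + queue_hours * queue_cost_per_hour
    return min(ports, key=cost)

ports = {
    "nearest": (0, 3.0),      # no detour, three hours sitting in the queue
    "alternate": (100, 0.0),  # 100 extra miles, drives straight through
}
print(best_port(ports))  # alternate: a $200 detour beats $225 of queueing
```

Real optimizers face the same structure at vastly larger scale, with thousands of trucks, time-varying queues and interacting schedules, which is where specialized hardware becomes attractive.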

Click for more FreightWaves articles by Noi Mahoney.

Original post:

Tech company uses quantum computers to help shipping and trucking industries - FreightWaves

COLUMN: Future Shock — COVID-19 Channel Upheaval – CRN: Technology news for channel partners and solution providers

In the 1970 best-seller "Future Shock," Alvin Toffler wrote about the enormous structural change that was taking place as a result of the shift from an industrial to a super-industrial society. The state of future shock is the perfect metaphor for the technology upheaval that is ripping through the channel in the wake of the COVID-19 pandemic.

Forget super-industrial. The new future shock may well be the equivalent of a supercomputer for every home, given the structural changes in the global workforce. The pandemic has exposed the fault lines in IT budgets and strategies, which are now shifting at a blinding pace to provide employees the computing power and support they need.

So what does this future shock mean to solution providers? That's the question Senior Editor Kyle Alspach takes on in this month's cover story, "The New Channel Normal." The deep dive on the pandemic impact, which includes data from the COVID-19 Channel Impact Study by our sister business unit IPED, shows that the solution providers that are thriving are rapidly changing what they sell and how they sell it.

The old channel playbook has been thrown out the window. Solution providers that do the same thing they were doing before the pandemic outbreak are going to find themselves grappling with the famous definition of insanity: doing the same thing over and over again and expecting a different result.

The bottom line is customers are speedily moving to pay-per-use cloud services and anytime, anyplace and anywhere business models. Thats good news for solution providers with an end-to-end suite of recurring revenue managed IT services.

If you want a good example of a company that gets it and is moving at a blinding pace to help customers move to the new world order, then look no further than Anexinet, No. 212 on the 2020 CRN Solution Provider 500. Anexinet CEO Todd Pittman is one of the leaders who has put his Blue Bell, Pa., company at the forefront of the post-pandemic super-industrial era. That means closing a blockbuster virtual sales deal with a national energy company for a new mobile and web app.

"We've revamped our approach with our customers," Pittman said, calling the virtually delivered project a major success that Anexinet is now replicating with two other customers. "Frankly, [the stakeholders] at our first customer were raving fans."

It's no small matter that Anexinet, a Hewlett Packard Enterprise Platinum partner, is also betting big on HPE's GreenLake pay-per-use platform. "Everybody wants to ensure that they have the capital required to keep their business operating through this uncertain time. And so I think that will continue to drive more conversations around leveraging the cloud, pay-as-you-go models, GreenLake," Pittman said.

The future shock, by the way, also applies to vendors. HPE CEO Antonio Neri, for one, is doubling down on an edge-to-cloud Platform-as-a-Service strategy and accelerating HPE's Everything-as-a-Service model in the wake of the pandemic.

In Toffler's amazingly prescient vision of the information era, citizens are, for the most part, inextricably linked to their homes, doing their own manufacturing and consumption from those "electronic cottages."

That's the world we find ourselves living in now. Those solution providers that are able to absorb this kind of future shock are going to thrive. Those that don't will disappear into the past.

See the article here:

COLUMN: Future Shock -- COVID-19 Channel Upheaval - CRN: Technology news for channel partners and solution providers

4th World Intelligence Congress to be held online – PRNewswire

In contrast with previous WICs, the event will be held online this year. Utilizing such smart technologies as artificial intelligence, augmented and virtual reality, the congress will bring together state leaders, experts and entrepreneurs from around the world in real-time. Together, they will discuss the development of AI and the building of a community with a shared future for mankind. The WIC aims to offer an international platform for creating better lives through the development of emerging industries in the new era.

During the congress, a wide range of innovative forums, exhibitions and competitions will be held online, such as the 2020 World Intelligence Driving Challenge and Haihe Entrepreneurial Talent Competition. All these activities will center around the theme of "Intelligent New Era: Innovation, Energization and Ecology," highlighting the WIC's role in advancing the application of AI in socio-economic development.

The host city Tianjin has vigorously promoted the development of intelligent industry in recent years. Numerous achievements have been made in the city in the field of science and technology, including the Tianhe-1 supercomputer, which is among the fastest in the world, the "PK" operating system, which represents a mainstream trend in related technology roadmaps, and "Brain Talker," the world's first chip designed specifically for use in brain-computer interfaces. In addition, the pilot zone of China's Internet of Vehicles has been approved in the city.

As the birthplace of modern industry in China, Tianjin boasts a solid foundation for industrial development. With the coming of the new era, the national strategy of coordinated development in the Beijing-Tianjin-Hebei region has presented new opportunities for the city. Standing at the forefront of reform and opening-up, Tianjin has established both a national innovation demonstration zone and a free trade zone. As such, there is great room for it to develop intelligent technology and the digital economy. In recent years, Tianjin has launched a targeted action plan, invested tens of billions of yuan in special funds, pooled the strength of universities and research institutions, and improved policies to attract more professional personnel. Through such measures, the city is positioning itself to become a vanguard of AI development, with intelligent technology being applied to transport, public services and daily life. The intelligent industry has also created new opportunities for young people looking for a job or starting their own business.

As one amongst many cities looking to transform, Tianjin epitomizes China's efforts to advance the development of AI, replace old growth drivers with new ones, and promote high-quality development. In fact, AI has also played a prominent role in China's fight against COVID-19.

As a new round of technological revolution takes place, holding the WIC is in line with global demand. The event is expected to create a platform for exchanges, cooperation, win-win outcomes and mutual benefits, as well as drive the sound development of a new generation of AI. We wish the congress a huge success, and hope that AI can better benefit the people of all countries.

China Mosaic http://www.china.org.cn/video/node_7230027.htm

4th World Intelligence Congress to be held onlinehttp://www.china.org.cn/video/2020-06/22/content_76189084.htm

SOURCE China.org.cn

http://www.china.org.cn

See the rest here:

4th World Intelligence Congress to be held online - PRNewswire

Definition from WhatIs.com – whatis.techtarget.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of about $60 million today. It could handle three million floating-point operations per second (flops).

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at that time. IBM's Watson is famous for having used cognitive computing to beat champion Ken Jennings on "Jeopardy!," a popular quiz show.

Year   Supercomputer        Peak speed (Rmax)   Location
2016   Sunway TaihuLight    93.01 PFLOPS        Wuxi, China
2013   NUDT Tianhe-2        33.86 PFLOPS        Guangzhou, China
2012   Cray Titan           17.59 PFLOPS        Oak Ridge, U.S.
2012   IBM Sequoia          17.17 PFLOPS        Livermore, U.S.
2011   Fujitsu K computer   10.51 PFLOPS        Kobe, Japan
2010   Tianhe-1A            2.566 PFLOPS        Tianjin, China
2009   Cray Jaguar          1.759 PFLOPS        Oak Ridge, U.S.
2008   IBM Roadrunner       1.026 PFLOPS        Los Alamos, U.S.

(Roadrunner later reached 1.105 PFLOPS after an upgrade.)

In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.
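
That last point is the crux of Beowulf-style clustering: the application itself must split the work across processors. As an illustrative stand-in for cluster-level parallelism (a real Beowulf application would typically use MPI across nodes), the sketch below splits a sum across worker processes with Python's multiprocessing module:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Each worker handles one slice of the problem, mirroring how a
    Beowulf-style application must explicitly divide its workload."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 1_000_000, 4
    step = n // workers
    # Partition [0, n) into equal contiguous chunks, one per worker.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total == sum(i * i for i in range(n)))  # True
```

The same pattern -- partition, compute locally, combine -- is what cluster applications implement over a network instead of over local processes.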

Read more:

Definition from WhatIs.com - whatis.techtarget.com

Top 10 Supercomputers

If someone says "supercomputer," your mind may jump to Deep Blue, and you wouldn't be alone. IBM's silicon chess wizard defeated grandmaster Garry Kasparov in 1997, cementing it as one of the most famous computers of all time (some controversy around the win helped, too). For years, Deep Blue was the public face of supercomputers, but it's hardly the only all-powerful artificial thinker on the planet. In fact, IBM took Deep Blue apart shortly after the historic win! More recently, IBM made supercomputing history with Watson, which defeated "Jeopardy!" champions Ken Jennings and Brad Rutter in a special match.

Brilliant as they were, neither Deep Blue nor Watson would be able to match the computational muscle of the systems on the November 2013 TOP500 list. TOP500 calls itself a list of "the 500 most powerful commercially available computer systems known to us." The supercomputers on this list are a throwback to the early computers of the 1950s -- which took up entire rooms -- except modern computers are using racks upon racks of cutting-edge hardware to produce petaflops of processing power.

Your home computer probably runs on four processor cores. Most of today's supercomputers use hundreds of thousands of cores, and the top entry has more than 3 million.

TOP500 currently relies on the Linpack benchmark, which feeds a computer a series of linear equations to measure its processing performance, although an alternative testing method is in the works. The November 2013 list sees China's Tianhe-2 on top of the world. Every six months, TOP500 releases a list, and a few new computers rise into the ranks of the world's fastest. Here are the champions as of early 2014. Read on to see how they're putting their electronic mettle to work.
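Linpack's core task can be mimicked in a few lines: build a random dense linear system, time the solve, and divide the nominal (2/3)n³ flop count of an LU-based solve by the elapsed time. A rough single-machine sketch using NumPy (nothing like the real HPL benchmark, which runs distributed across thousands of nodes):

```python
import time
import numpy as np

def toy_linpack(n=1000):
    """Solve a dense n-by-n linear system and estimate the flop rate.
    An LU-based solve costs roughly (2/3) * n**3 floating-point operations."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(a, b)
    elapsed = time.perf_counter() - start
    gflops = (2 / 3) * n**3 / elapsed / 1e9
    residual = float(np.linalg.norm(a @ x - b))  # sanity check on the answer
    return gflops, residual

gflops, residual = toy_linpack()
print(f"{gflops:.2f} GFLOPS, residual {residual:.2e}")
```

A laptop will report a handful of gigaflops this way; the machines on the TOP500 list report tens of petaflops, roughly a million times more.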

Read more here:

Top 10 Supercomputers

Honeywell Says It Has Built The World's Most Powerful Quantum Computer – Forbes

Honeywell says its new quantum computer is at least twice as powerful as any other machine.

In the race to the future of quantum computing, Honeywell has just secured a fresh lead.

The North Carolina-based conglomerate announced Thursday that it has produced the world's fastest quantum computer, at least twice as powerful as the existing machines operated by IBM and Google.

The machine, located in a 1,500-square-foot high-security facility in Boulder, Colorado, consists of a stainless steel chamber about the size of a basketball that is cooled by liquid helium to a temperature just above absolute zero, the point at which atomic motion all but stops. Within that chamber, individual atoms floating above a computer chip are targeted with lasers to perform calculations.

While researchers have studied the potential of quantum computing for decades, that is, the possibility of building machines able to complete calculations beyond the limits of classical computers and supercomputers, the field has until recently been confined to research groups at tech companies such as IBM and Google.

But in the past year, the race between those companies to claim supremacy and find commercial uses for quantum computing has heated up. Honeywell's machine has achieved a Quantum Volume of 64, a metric devised by IBM that measures a machine's capability and error rates but is also difficult to decipher (and, as quantum computing expert Scott Aaronson wrote in March, potentially possible to game). By comparison, IBM announced in January that it had achieved a Quantum Volume of 32 with its newest machine, Raleigh.
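For rough intuition: IBM defines Quantum Volume as 2^n, where n is the width (and depth) of the largest "square" random circuit a machine can run with acceptable fidelity, so the headline numbers translate into modest circuit sizes. (This is only the shape of the metric; the full definition also involves statistical heavy-output tests.)

```python
import math

def qv_width(quantum_volume):
    """Width/depth of the largest square circuit implied by a Quantum Volume,
    using IBM's definition QV = 2**n."""
    return int(math.log2(quantum_volume))

print(qv_width(64))  # Honeywell's machine: 6-qubit square circuits
print(qv_width(32))  # IBM's Raleigh: 5-qubit square circuits
```

Because the scale is exponential, a jump from 32 to 64 means one more qubit's worth of reliably usable circuit, not twice the raw hardware.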

Google has also spent significant resources on developing its quantum capabilities, and in October it said it had developed a machine that completed, in just 200 seconds, a calculation that would have taken a supercomputer 10,000 years. (IBM disputed Google's claim, saying the calculation would have taken only 2.5 days to complete.)

Honeywell has been working toward this goal for the past decade, since it began developing cryogenics and laser tools. In the past five years, the company assembled a team of more than 100 technologists dedicated entirely to building the machine, and in March, Honeywell announced it would reach the milestone within three months, a goal it met even as the Covid-19 pandemic turned its workforce upside down and forced some employees to work remotely. "We had to completely redesign how we work in the facilities, had to limit who was coming on the site, and put in place physical barriers," says Tony Uttley, president of Honeywell Quantum Solutions. "All of that happened at the same time we were planning on being on this race."

The advancement also means that Honeywell is opening its computer to companies looking to execute their own unimaginably large calculations, a service that can cost about $10,000 an hour, says Uttley. While it won't disclose how many customers it has, Honeywell did say it has a contract with JPMorgan Chase, whose own quantum experts will use the machine for gargantuan tasks such as building fraud-detection models. Companies without in-house quantum experts can submit queries through intermediary quantum firms Zapata Computing and Cambridge Quantum Computing.

With greater access to the technology, Uttley says, quantum computers are nearing the point where they graduate from objects of fascination to tools for solving problems like climate change and pharmaceutical development. Going forward, Uttley says Honeywell plans to increase its machine's Quantum Volume by a factor of 10 every year for the next five years, reaching a figure of 640,000, a capability far beyond anything achieved before.

Read this article:

Honeywell Says It Has Built The World's Most Powerful Quantum Computer - Forbes