Silicon Valley Chip Makers Add New Twist to Moore’s Law – Toolbox

With the demands of artificial intelligence outpacing Moore's Law, a pair of Silicon Valley chip designers are rethinking architectural approaches for machine-learning applications.

In a sector where smaller has been beautiful for decades, can bigger really be better for lowering a chip's workload latency?

Designers at the two companies, Cerebras Systems and Xilinx, used the Hot Chips symposium in Palo Alto, California, last week as the backdrop for their largest product releases, each of which aims to pack more processing power into chips for compute-intensive technologies.

To do it, the companies are bucking the trend named for Gordon Moore, the former chief executive of Intel, who observed five decades ago that transistor densities on a microchip double about every two years. While the prediction has guided R&D teams in the intervening years, the limits of physical space on a single chip now are leading designers to explore novel solutions.
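Moore's observation compounds quickly. A minimal sketch of the arithmetic (the two-year doubling period is the figure cited above; the function name is illustrative):

```python
# Back-of-the-envelope sketch of Moore's observation: transistor
# density doubling roughly every two years.
def density_multiplier(years, doubling_period_years=2):
    """Growth factor implied by one doubling per period."""
    return 2 ** (years / doubling_period_years)

# Over the five decades since the observation, that implies a factor
# of 2**25, i.e. roughly 33.5 million times the original density.
print(round(density_multiplier(50)))  # 33554432
```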

Xilinx, based in San Jose, is touting its new Virtex UltraScale+ field-programmable gate array, or FPGA. It packs nine million logic cells into a system-on-chip platform that can access memory at 1.5 terabits per second. Built on a 16-nanometer process, the chip's 35 billion transistors deliver 1.6 times the logic density of its previous iteration, the Virtex UltraScale.

To make their designs work, Cerebras and Xilinx both stepped back from the sector's drive to shorten the distances between transistors. From 10 micrometers in 1971, designers have shrunk those lengths to infinitesimal distances to fit more integrated circuits on their chips.

Following Moore's Law, Korea's Samsung and Taiwan's TSMC began producing 5nm chips earlier this year, and both companies are working on 3nm designs that could hit the market in 2021. Shortening the distance between circuits shaves processing time, but it also raises issues around quality control in manufacturing and cooling in operation.
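The shrink from 10 micrometers to 5 nanometers can be framed as a halving cadence. A rough sketch, using the dates and nodes cited in the text (the end year is an assumption; the cadence is a derived estimate, not a figure from the article):

```python
import math

# Implied halving cadence of process feature size, from 10 micrometers
# (1971) down to 5 nanometers (assumed circa 2020).
start_nm, end_nm = 10_000, 5              # 10 µm = 10,000 nm
years = 2020 - 1971
halvings = math.log2(start_nm / end_nm)   # ≈ 10.97 halvings
print(round(years / halvings, 1))         # ≈ 4.5 years per halving
```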

The huge size of the Cerebras chip is aimed mostly at cloud-services providers like Amazon Web Services, Microsoft and Google, which rent processing power and storage to corporations and government agencies. While Cerebras says select customers are using its chip, it has yet to release details about when the Wafer Scale Engine will be available on the open market.

Xilinx scaled down its latest FPGA from the 20nm standard. Doing so allowed it to forge 2,000 user input-output connections and achieve a per-second transceiver bandwidth of 4.5 terabits. With it, users can implement advanced SoC design architectures or prototype their own, the company says.

To improve time to market, Xilinx offers co-validation that lets users integrate and customize hardware and software designs before physical parts become available. The feature is part of a development platform for the FPGA that includes debugging and visibility tools. The company plans to bring the UltraScale+ to market next year.

According to OpenAI, a research organization that works to guide the development of artificial intelligence, the computational resources used to train the most advanced machine-learning algorithms grew by 300,000 times between 2012 and 2018.

The rate means chip makers must redouble their efforts, both traditional and unconventional, to keep pace with demand.
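The 300,000-fold figure implies a doubling cadence far faster than Moore's two years. A hedged sketch of that arithmetic, assuming the growth spans exactly six years (OpenAI's own analysis, which uses a finer-grained window, reports a doubling period of about 3.4 months):

```python
import math

# Implied doubling period of training compute, given a 300,000x
# increase between 2012 and 2018 (assumed to be exactly six years).
growth = 300_000
months = 6 * 12
doublings = math.log2(growth)        # ≈ 18.2 doublings
print(round(months / doublings, 1))  # ≈ 4.0 months per doubling
```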
