Moore's law

Observation on the growth of integrated circuit capacity

Moore's law is the observation that the number of transistors in a dense integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

The observation is named after Gordon Moore, the co-founder of Fairchild Semiconductor and Intel (and former CEO of the latter), who in 1965 posited a doubling every year in the number of components per integrated circuit,[a] and projected this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years, a compound annual growth rate (CAGR) of 41%. While Moore did not use empirical evidence in forecasting that the historical trend would continue, his prediction has held since 1975 and has since become known as a "law".
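As a quick arithmetic check (a sketch added here, not part of the source): a doubling every two years corresponds to an annual growth factor of 2^(1/2) ≈ 1.41, i.e. a CAGR of about 41%.

```python
# Convert a doubling period (in years) into the equivalent compound annual
# growth rate (CAGR).
def cagr_from_doubling_period(years_to_double: float) -> float:
    return 2 ** (1 / years_to_double) - 1

print(f"{cagr_from_doubling_period(2):.0%}")  # ~41%: Moore's 1975 forecast
print(f"{cagr_from_doubling_period(1):.0%}")  # 100%: Moore's original 1965 rate
```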

Moore's prediction has been used in the semiconductor industry to guide long-term planning and to set targets for research and development, thus functioning to some extent as a self-fulfilling prophecy. Advancements in digital electronics, such as the reduction in quality-adjusted microprocessor prices, the increase in memory capacity (RAM and flash), the improvement of sensors, and even the number and size of pixels in digital cameras, are strongly linked to Moore's law. These ongoing changes in digital electronics have been a driving force of technological and social change, productivity, and economic growth.

Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law.

In 1959, Douglas Engelbart studied the projected downscaling of integrated circuit (IC) size, publishing his results in the article "Microelectronics, and the Art of Similitude".[2][3][4] Engelbart presented his findings at the 1960 International Solid-State Circuits Conference, where Moore was present in the audience.[5]

In 1965, Gordon Moore, who at the time was working as the director of research and development at Fairchild Semiconductor, was asked to contribute to the thirty-fifth anniversary issue of Electronics magazine with a prediction on the future of the semiconductor components industry over the next ten years. His response was a brief article entitled "Cramming more components onto integrated circuits".[1][6][b] Within his editorial, he speculated that by 1975 it would be possible to contain as many as 65,000 components on a single quarter-square-inch (~1.6 square-centimeter) semiconductor.

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years.[1]

Moore posited a log-linear relationship between device complexity (higher circuit density at reduced cost) and time.[9][10] In a 2015 interview, Moore noted of the 1965 article: "...I just did a wild extrapolation saying it's going to continue to double every year for the next 10 years."[11] One historian of the law cites Stigler's law of eponymy to note that the regular doubling of components was already known to many working in the field.[10]

In 1974, Robert H. Dennard at IBM recognized the rapid pace of MOSFET scaling and formulated what became known as Dennard scaling, which states that as MOS transistors get smaller, their power density stays constant, so that power use remains in proportion to area.[12][13] Evidence from the semiconductor industry shows that this inverse relationship between power density and areal density broke down in the mid-2000s.[14]

At the 1975 IEEE International Electron Devices Meeting, Moore revised his forecast rate,[15][16] predicting semiconductor complexity would continue to double annually until about 1980, after which it would decrease to a rate of doubling approximately every two years.[16][17][18] He outlined the factors contributing to this exponential behavior: larger die sizes, finer minimum dimensions, and what he called "circuit and device cleverness".[9][10]

Shortly after 1975, Caltech professor Carver Mead popularized the term "Moore's law".[19][20] Moore's law eventually came to be widely accepted as a goal for the semiconductor industry, and it was cited by competitive semiconductor manufacturers as they strove to increase processing power. Moore viewed his eponymous law as surprising and optimistic: "Moore's law is a violation of Murphy's law. Everything gets better and better."[21] The observation was even seen as a self-fulfilling prophecy.[22][23]

The doubling period is often misquoted as 18 months because of a prediction by Moore's colleague, Intel executive David House. In 1975, House noted that Moore's revised law of doubling transistor count every two years in turn implied that computer chip performance would roughly double every 18 months,[24] with no increase in power consumption.[25] Moore's law predicted that transistor count would double every two years due to shrinking transistor dimensions and other improvements, while Dennard scaling predicted that power consumption per unit area would remain constant as dimensions shrank. Combining these effects, House deduced that chip performance would roughly double every 18 months, and, because of Dennard scaling, this increased performance would not be accompanied by increased power; in other words, the energy efficiency of silicon-based computer chips would roughly double every 18 months. Dennard scaling ended in the 2000s.[14] Koomey later showed that a similar rate of efficiency improvement predated silicon chips and Moore's law, in technologies such as vacuum tubes.
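House's figure can be reproduced with back-of-the-envelope arithmetic; the sketch below uses assumed round numbers (a 24-month doubling and a 1.4x frequency gain per generation), not House's actual calculation.

```python
import math

# Back-of-the-envelope reconstruction of House's 18-month figure (assumed inputs):
# transistor count doubles every 24 months, and Dennard scaling adds roughly
# 1.4x clock frequency per generation, so performance grows ~2.8x per 24 months.
generation_months = 24
perf_gain_per_generation = 2 * 1.4  # 2x transistors * 1.4x frequency

months_to_double = generation_months * math.log(2) / math.log(perf_gain_per_generation)
print(f"performance doubles every ~{months_to_double:.0f} months")  # ~16 months
```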

Microprocessor architects report that since around 2010, semiconductor advancement has slowed industry-wide below the pace predicted by Moore's law.[14] Brian Krzanich, the former CEO of Intel, cited Moore's 1975 revision as a precedent for the current deceleration, which results from technical challenges and is "a natural part of the history of Moore's law".[26][27][28] The rate of improvement in physical dimensions known as Dennard scaling also ended in the mid-2000s. As a result, much of the semiconductor industry has shifted its focus to the needs of major computing applications rather than semiconductor scaling.[22][29][14] Nevertheless, leading semiconductor manufacturers TSMC and Samsung Electronics have claimed to keep pace with Moore's law[30][31][32][33][34][35] with 10 nm and 7 nm nodes in mass production[30][31] and 5 nm nodes in risk production as of 2019.[36][37]

As the cost of computer power to the consumer falls, the cost for producers to fulfill Moore's law follows an opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. Rising manufacturing costs are an important consideration for the sustaining of Moore's law.[38] This led to the formulation of Moore's second law, also called Rock's law, which is that the capital cost of a semiconductor fabrication plant also increases exponentially over time.[39][40]

Numerous innovations by scientists and engineers have sustained Moore's law since the beginning of the IC era. Some of the key innovations are listed below, as examples of breakthroughs that have advanced integrated circuit and semiconductor device fabrication technology, allowing transistor counts to grow by more than seven orders of magnitude in less than five decades.
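As a rough consistency check (author's arithmetic, not from the source), a doubling every two years sustained over five decades does indeed yield more than seven orders of magnitude:

```python
import math

# Doubling every two years, sustained for five decades:
doublings = 50 / 2
growth = 2 ** doublings
print(f"{growth:.1e}x growth")                           # ~3.4e+07x
print(f"{math.log10(growth):.1f} orders of magnitude")   # ~7.5
```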

Computer industry technology road maps predicted in 2001 that Moore's law would continue for several generations of semiconductor chips.[64]

One of the key challenges of engineering future nanoscale transistors is the design of gates. As device dimensions shrink, controlling the current flow in the thin channel becomes more difficult. Modern nanoscale transistors typically take the form of multi-gate MOSFETs, with the FinFET being the most common nanoscale transistor. The FinFET has gate dielectric on three sides of the channel. In comparison, the gate-all-around MOSFET (GAAFET) structure offers even better gate control.

Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, below the pace predicted by Moore's law.[14] Brian Krzanich, the former CEO of Intel, announced, "Our cadence today is closer to two and a half years than two."[96] Intel stated in 2015 that improvements in MOSFET devices have slowed, starting at the 22 nm feature width around 2012, and continuing at 14 nm.[97]

The physical limits to transistor scaling have been reached due to source-to-drain leakage, limited gate metals, and limited options for channel material. Other approaches are being investigated that do not rely on physical scaling. These include the electron spin state (spintronics), tunnel junctions, and advanced confinement of channel materials via nanowire geometry.[98] Spin-based logic and memory options are being actively developed in labs.[99][100]

The vast majority of current transistors on ICs are composed principally of doped silicon and its alloys. As silicon is fabricated into single-nanometer transistors, short-channel effects degrade the material properties that make silicon function well as a transistor. Several non-silicon substitutes for the fabrication of nanoscale transistors are described below.

One proposed material is indium gallium arsenide, or InGaAs. Compared to their silicon and germanium counterparts, InGaAs transistors are more promising for future high-speed, low-power logic applications. Because of the intrinsic characteristics of III-V compound semiconductors, quantum well and tunnel effect transistors based on InGaAs have been proposed as alternatives to more traditional MOSFET designs.

Biological computing research shows that biological material has superior information density and energy efficiency compared to silicon-based computing.[108]

Various forms of graphene are being studied for graphene electronics; for example, graphene nanoribbon (GNR) transistors have shown great promise since their appearance in publications in 2008. (Bulk graphene has a band gap of zero and thus cannot be used in transistors because of its constant conductivity, an inability to turn off. The zigzag edges of the nanoribbons introduce localized energy states in the conduction and valence bands and thus a bandgap that enables switching when fabricated as a transistor. As an example, a typical GNR of width 10 nm has a desirable bandgap energy of 0.4 eV.[109][110]) More research will need to be performed, however, on sub-50 nm graphene layers, as resistivity increases and thus electron mobility decreases.[109]

In April 2005, Gordon Moore stated in an interview that the projection cannot be sustained indefinitely: "It can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens." He also noted that transistors eventually would reach the limits of miniaturization at atomic levels:

In terms of size [of transistors] you can see that we're approaching the size of atoms which is a fundamental barrier, but it'll be two or three generations before we get that far, but that's as far out as we've ever been able to see. We have another 10 to 20 years before we reach a fundamental limit. By then they'll be able to make bigger chips and have transistor budgets in the billions.[111]

In 2016 the International Technology Roadmap for Semiconductors, after using Moore's law to drive the industry since 1998, produced its final roadmap. It no longer centered its research and development plan on Moore's law. Instead, it outlined what might be called the More than Moore strategy, in which the needs of applications drive chip development rather than a focus on semiconductor scaling. Application drivers range from smartphones to AI to data centers.[112]

In 2016, IEEE began a road-mapping initiative, "Rebooting Computing", which produced the International Roadmap for Devices and Systems (IRDS).[113]

Most forecasters, including Gordon Moore,[114] expect Moore's law will end by around 2025.[115][112][116] Although Moore's law will eventually reach a physical limit, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.[117][118] Nvidia CEO Jensen Huang declared Moore's law dead in 2022;[119] several days later, Intel CEO Pat Gelsinger countered that Moore's law is not dead.[120]

Digital electronics have contributed to world economic growth in the late twentieth and early twenty-first centuries.[121] The primary driving force of economic growth is the growth of productivity,[122] and Moore's law factors into productivity. Moore (1995) expected that "the rate of technological progress is going to be controlled from financial realities".[123] The reverse could and did occur around the late 1990s, however, with economists reporting that "Productivity growth is the key economic indicator of innovation."[124] Moore's law describes a driving force of technological and social change, productivity, and economic growth.[125][126][122]

An acceleration in the rate of semiconductor progress contributed to a surge in U.S. productivity growth,[127][128][129] which reached 3.4% per year in 1997–2004, outpacing the 1.6% per year during both 1972–1996 and 2005–2013.[130] As economist Richard G. Anderson notes, "Numerous studies have traced the cause of the productivity acceleration to technological innovations in the production of semiconductors that sharply reduced the prices of such components and of the products that contain them (as well as expanding the capabilities of such products)."[131]

The primary negative implication of Moore's law is that obsolescence pushes society up against the Limits to Growth. As technologies continue to rapidly "improve", they render predecessor technologies obsolete. In situations in which security and survivability of hardware or data are paramount, or in which resources are limited, rapid obsolescence often poses obstacles to smooth or continued operations.[132]

Because of the intensive resource footprint and toxic materials used in the production of computers, obsolescence leads to serious harmful environmental impacts. Americans throw out 400,000 cell phones every day,[133] but companies see this high level of obsolescence as an opportunity to generate regular sales of expensive new equipment, rather than having customers retain one device for a longer period, leading the industry to use planned obsolescence as a profit centre.[134]

An alternative source of improved performance is in microarchitecture techniques exploiting the growth of available transistor count. Out-of-order execution and on-chip caching and prefetching reduce the memory latency bottleneck at the expense of using more transistors and increasing the processor complexity. These increases are described empirically by Pollack's Rule, which states that performance increases due to microarchitecture techniques approximate the square root of the complexity (number of transistors or the area) of a processor.[135]
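A minimal illustration of what Pollack's Rule implies (the numbers are examples, not measurements): doubling the transistor budget of a single core buys only about a 41% performance gain, which helps explain the industry's turn toward multiple cores.

```python
import math

def pollack_speedup(transistor_ratio: float) -> float:
    """Pollack's Rule: single-core performance scales roughly as the square
    root of the transistor (or area) budget devoted to the core."""
    return math.sqrt(transistor_ratio)

print(f"{pollack_speedup(2):.2f}x")  # ~1.41x from doubling one core's transistors
print(f"{pollack_speedup(4):.2f}x")  # ~2.00x from quadrupling them
```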

For years, processor makers delivered increases in clock rates and instruction-level parallelism, so that single-threaded code executed faster on newer processors with no modification.[136] Now, to manage CPU power dissipation, processor makers favor multi-core chip designs, and software has to be written in a multi-threaded manner to take full advantage of the hardware. Many multi-threaded development paradigms introduce overhead and will not see a linear increase in speed versus the number of processors. This is particularly true while accessing shared or dependent resources, due to lock contention. This effect becomes more noticeable as the number of processors increases. There are cases where a roughly 45% increase in processor transistors has translated to only a roughly 10–20% increase in processing power.[137]
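One standard way to model this sub-linear scaling, though the source does not name it, is Amdahl's law: if any fraction of the work is serial, the speedup from adding cores saturates. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup with `cores` processors when only
    `parallel_fraction` of the work can run in parallel."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

# With 90% of the work parallelizable, 8 cores give well under 8x:
print(f"{amdahl_speedup(0.9, 8):.2f}x")     # ~4.71x
print(f"{amdahl_speedup(0.9, 1024):.2f}x")  # ~9.91x, saturating near 10x
```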

On the other hand, manufacturers are adding specialized processing units to deal with features such as graphics, video, and cryptography. For example, Intel's Parallel JavaScript extension adds support not only for multiple cores, but also for the other non-general processing features of their chips, as part of the migration of client-side scripting toward HTML5.[138]

Moore's law has affected the performance of other technologies significantly: Michael S. Malone wrote of a Moore's War following the apparent success of shock and awe in the early days of the Iraq War. Progress in the development of guided weapons depends on electronic technology.[139] Improvements in circuit density and low-power operation associated with Moore's law also have contributed to the development of technologies including mobile telephones[140] and 3-D printing.[141]

Several measures of digital technology are improving at exponential rates related to Moore's law, including the size, cost, density, and speed of components. Moore wrote only about the density of components, "a component being a transistor, resistor, diode or capacitor",[123] at minimum cost.

Transistors per integrated circuit: The most popular formulation is of the doubling of the number of transistors on ICs every two years. At the end of the 1970s, Moore's law became known as the limit for the number of transistors on the most complex chips. The trend has held into the present: as of 2017, the commercially available processor possessing the highest number of transistors was the 48-core Centriq, with over 18 billion transistors.[142]

Density at minimum cost per transistor: This is the formulation given in Moore's 1965 paper.[1] It is not just about the density of transistors that can be achieved, but about the density of transistors at which the cost per transistor is the lowest.[143] As more transistors are put on a chip, the cost to make each transistor decreases, but the chance that the chip will not work due to a defect increases. In 1965, Moore examined the density of transistors at which cost is minimized, and observed that, as transistors were made smaller through advances in photolithography, this number would increase at "a rate of roughly a factor of two per year".[1]
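A toy model makes the trade-off concrete; the functional forms and numbers below are illustrative assumptions, not Moore's actual data. Spreading a fixed die cost over more components cuts the cost per component, but yield decays with complexity, so the cost curve has a minimum at an intermediate density:

```python
import math

# Toy model of Moore's 1965 minimum-cost analysis (assumed functional forms).
# Cost per component = die cost / (components * yield), where yield decays
# as more components are crammed onto the die.
def cost_per_component(n_components: float, die_cost: float = 100.0,
                       defect_scale: float = 50_000.0) -> float:
    yield_fraction = math.exp(-n_components / defect_scale)  # assumed yield model
    return die_cost / (n_components * yield_fraction)

# Sweep integration levels; the cheapest point sits at an intermediate density.
best = min((cost_per_component(n), n) for n in range(1_000, 200_000, 1_000))
print(f"minimum cost near {best[1]} components per die")  # ~50,000 in this model
```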

Dennard scaling: This posits that power usage would decrease in proportion to the area (both voltage and current being proportional to length) of transistors. Combined with Moore's law, performance per watt would grow at roughly the same rate as transistor density, doubling every 1–2 years. According to Dennard scaling, transistor dimensions would be scaled by 30% (0.7x) every technology generation, thus reducing their area by 50%. This would reduce the delay by 30% (0.7x) and therefore increase operating frequency by about 40% (1.4x). Finally, to keep the electric field constant, voltage would be reduced by 30%, reducing energy by 65% and power (at 1.4x frequency) by 50%.[c] Therefore, in every technology generation transistor density would double, circuits would become 40% faster, while power consumption (with twice the number of transistors) would stay the same.[144] Dennard scaling came to an end in 2005–2010, due to leakage currents.[14]
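These percentages can be checked against the standard dynamic-power relation P ∝ C·V²·f (a consistency check added here, not part of the source):

```python
# Consistency check of the classic Dennard scaling numbers, using P ~ C * V^2 * f.
k = 0.7            # linear dimensions, capacitance, and voltage all scale by 0.7x
freq_gain = 1 / k  # delay falls by 0.7x, so frequency rises ~1.4x

energy = k * k**2             # C * V^2 -> 0.34, i.e. ~65% less energy per switch
power = energy * freq_gain    # -> 0.49, i.e. ~50% less power per transistor
chip_power = power * 2        # twice as many transistors in the same area
print(f"energy {energy:.2f}, power {power:.2f}, chip power {chip_power:.2f}")
# energy 0.34, power 0.49, chip power 0.98 -- roughly constant, as stated
```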

The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance. Since around 2005–2007, Dennard scaling has ended, so even though Moore's law continued for several years after that, it has not yielded dividends in improved performance.[12][145] The primary reason cited for the breakdown is that at small sizes, current leakage poses greater challenges and also causes the chip to heat up, which creates a threat of thermal runaway and therefore further increases energy costs.[12][145][14]

The breakdown of Dennard scaling prompted a greater focus on multicore processors, but the gains offered by switching to more cores are lower than the gains that would be achieved had Dennard scaling continued.[146][147] In another departure from Dennard scaling, Intel microprocessors adopted a non-planar tri-gate FinFET at 22 nm in 2012 that is faster and consumes less power than a conventional planar transistor.[148] The rate of performance improvement for single-core microprocessors has slowed significantly.[149] Single-core performance was improving by 52% per year in 1986–2003 and 23% per year in 2003–2011, but slowed to just 7% per year in 2011–2018.[149]

Quality-adjusted price of IT equipment: The price of information technology (IT), computers and peripheral equipment, adjusted for quality and inflation, declined 16% per year on average over the five decades from 1959 to 2009.[150][151] The pace accelerated, however, to 23% per year in 1995–1999, triggered by faster IT innovation,[124] and later slowed to 2% per year in 2010–2013.[150][152]

While quality-adjusted microprocessor price improvement continues,[153] the rate of improvement likewise varies, and is not linear on a log scale. Microprocessor price improvement accelerated during the late 1990s, reaching 60% per year (halving every nine months) versus the typical 30% improvement rate (halving every two years) during the years earlier and later.[154][155] Laptop microprocessors in particular improved 25–35% per year in 2004–2010, and slowed to 15–25% per year in 2010–2013.[156]
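The halving times quoted above follow directly from the annual rates; a quick conversion (author's arithmetic):

```python
import math

def halving_time_months(annual_decline: float) -> float:
    """Months for price to halve, given a constant annual fractional decline."""
    return 12 * math.log(0.5) / math.log(1 - annual_decline)

print(f"{halving_time_months(0.30):.0f} months")  # ~23 months, roughly two years
print(f"{halving_time_months(0.60):.0f} months")  # ~9 months
```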

The number of transistors per chip cannot explain quality-adjusted microprocessor prices fully.[154][157][158] Moore's 1995 paper does not limit Moore's law to strict linearity or to transistor count, "The definition of 'Moore's Law' has come to refer to almost anything related to the semiconductor industry that on a semi-log plot approximates a straight line. I hesitate to review its origins and by doing so restrict its definition."[123]

Hard disk drive areal density: A similar prediction (sometimes called Kryder's law) was made in 2005 for hard disk drive areal density.[159] The prediction was later viewed as over-optimistic. Several decades of rapid progress in areal density slowed around 2010, from 30–100% per year to 10–15% per year, because of noise related to smaller grain size of the disk media, thermal stability, and writability using available magnetic fields.[160][161]

Fiber-optic capacity: The number of bits per second that can be sent down an optical fiber increases exponentially, faster than Moore's law; this trend is known as Keck's law, in honor of Donald Keck.[162]

Network capacity: According to Gerald Butters,[163][164] the former head of Lucent's Optical Networking Group at Bell Labs, there is another version, called Butters' Law of Photonics,[165] a formulation that deliberately parallels Moore's law. Butters' law says that the amount of data coming out of an optical fiber is doubling every nine months.[166] Thus, the cost of transmitting a bit over an optical network decreases by half every nine months. The availability of wavelength-division multiplexing (sometimes called WDM) increased the capacity that could be placed on a single fiber by as much as a factor of 100. Optical networking and dense wavelength-division multiplexing (DWDM) is rapidly bringing down the cost of networking, and further progress seems assured. As a result, the wholesale price of data traffic collapsed in the dot-com bubble. Nielsen's Law says that the bandwidth available to users increases by 50% annually.[167]
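For comparison, the growth laws mentioned in this section imply quite different doubling times (illustrative arithmetic only):

```python
import math

# Implied doubling times, in months, for the growth laws mentioned above.
moore = 24                                  # transistor count: 2x per two years
butters = 9                                 # fiber data rate: 2x per nine months
nielsen = 12 * math.log(2) / math.log(1.5)  # user bandwidth: +50% per year
print(f"Moore {moore} mo, Butters {butters} mo, Nielsen {nielsen:.0f} mo")
# Nielsen's 50% per year works out to a doubling roughly every 21 months.
```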

Pixels per dollar: Similarly, Barry Hendy of Kodak Australia has plotted pixels per dollar as a basic measure of value for a digital camera, demonstrating the historical linearity (on a log scale) of this market and the opportunity to predict the future trend of digital camera price, LCD and LED screens, and resolution.[168][169][170][171]

The great Moore's law compensator (TGMLC), also known as Wirth's law, is generally referred to as software bloat: the principle that successive generations of computer software increase in size and complexity, thereby offsetting the performance gains predicted by Moore's law. In a 2008 article in InfoWorld, Randall C. Kennedy,[172] formerly of Intel, introduced this term using successive versions of Microsoft Office between 2000 and 2007 as his premise. Despite the gains in computational performance during this time period according to Moore's law, Office 2007 performed the same task at half the speed on a prototypical year 2007 computer as compared to Office 2000 on a year 2000 computer.

Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years, if sufficient space were made available.[173] He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on-demand for library patrons or other institutions. He did not foresee the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media. Automated, potentially lossless digital technologies allowed vast increases in the rapidity of information growth in an era that now sometimes is called the Information Age.

Carlson curve is a term coined by The Economist[174] to describe the biotechnological equivalent of Moore's law, and is named after author Rob Carlson.[175] Carlson accurately predicted that the doubling time of DNA sequencing technologies (measured by cost and performance) would be at least as fast as Moore's law.[176] Carlson Curves illustrate the rapid (in some cases hyperexponential) decreases in cost, and increases in performance, of a variety of technologies, including DNA sequencing, DNA synthesis, and a range of physical and computational tools used in protein expression and in determining protein structures.

Eroom's law is a pharmaceutical drug development observation which was deliberately written as Moore's Law spelled backwards in order to contrast it with the exponential advancements of other forms of technology (such as transistors) over time. It states that the cost of developing a new drug roughly doubles every nine years.

The experience curve effect says that each doubling of the cumulative production of virtually any product or service is accompanied by an approximately constant percentage reduction in the unit cost. The first documented qualitative description of this dates from 1885.[177][178] A power curve was used to describe this phenomenon in a 1936 discussion of the cost of airplanes.[179]

Edholm's law: Phil Edholm observed that the bandwidth of telecommunication networks (including the Internet) is doubling every 18 months.[180] The bandwidth of online communication networks has risen from bits per second to terabits per second. The rapid rise in online bandwidth is largely due to the same MOSFET scaling that enables Moore's law, as telecommunications networks are built from MOSFETs.[181]

Haitz's law predicts that the brightness of LEDs increases as their manufacturing cost goes down.

Swanson's law is the observation that the price of solar photovoltaic modules tends to drop 20 percent for every doubling of cumulative shipped volume. At present rates, costs go down 75% about every 10 years.
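The two formulations are roughly consistent; the sketch below infers the implied volume-doubling rate, which is an assumption, as the text does not state it:

```python
import math

# Swanson's law: module price falls ~20% for each doubling of cumulative volume.
# Doublings needed for the quoted 75% cost reduction per decade:
doublings_per_decade = math.log(0.25) / math.log(0.8)
print(f"{doublings_per_decade:.1f} doublings per decade")              # ~6.2
print(f"one doubling every ~{120 / doublings_per_decade:.0f} months")  # ~19
```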
