

Look at the phone in your hand you can thank the state for that – The Guardian

Who are the visionaries who drive human progress? The answer, as we all know, is the geeks, the free spirits and the crazy dreamers who thumb their noses at authority: the Peter Thiels and the Mark Zuckerbergs of the world; the likes of Steve Jobs and Travis Kalanick; the giants with an uncompromising vision and an iron will, as though they have stepped fresh from the pages of one of Ayn Rand's novels.

"Innovation," Steve Jobs once said, "distinguishes between a leader and a follower." Now, if ever there were a prototypical follower, it would have to be the government. After all, why else would nearly all the innovative companies of our times hail from the United States, where the state is much smaller than in Europe?

Media outlets including the Economist and the Financial Times never tire of telling us that government's role is to create the right preconditions: good education, solid infrastructure, attractive tax incentives for innovative businesses. But no more than that. The idea that the cogs in the government machine could divine the next big thing is, they insist, an illusion.

Take the driving force behind the digital revolution, also known as Moore's law. Back in 1965, the chip designer Gordon Moore was already predicting that processor speeds would accelerate exponentially. He foresaw such wonders as home computers, as well as portable communications equipment and perhaps even automatic controls for automobiles.

And just look at us now! Moore's law clearly is the golden rule of private innovation, unbridled capitalism, and the invisible hand driving us to ever loftier heights. There's no other explanation, right? Not quite.

For years, Moore's law has been almost single-handedly upheld by a Dutch company, one that made it big thanks to massive subsidisation by the Dutch government. No, this is not a joke: the fundamental force behind the internet, the modern computer and the driverless car is a government beneficiary from socialist Holland.

Our story begins on 1 April 1984 in a shed knocked together on an isolated lot in Veldhoven, a town in the south of the Netherlands. This is where a small startup called ASML first saw the light of day. Employing a couple of dozen techies, it was a collaborative venture between Philips and ASM International set up to produce hi-tech lithography systems: in plain English, machines that draw minuscule lines on chips.

Fast-forward 25 years, and ASML is a major corporation employing more than 13,000 engineers at 70 locations in 16 countries. With a turnover of over €5.9bn (£5.2bn) and earnings of €1.2bn, it is one of the most successful Dutch companies, ever. It controls over 80% of the chip machine market – the global market, mind you.

In point of fact, the company is the most powerful force upholding Moore's law. For them, this law is not a prediction: it's a target. The iPhone, Google's search engine, the kitty clips – it would all be unthinkable without those crazy Dutch dreamers from Veldhoven.

Naturally, you'll be wondering who was behind this paragon of innovation. The story told by the company itself fits the familiar mould, of a handful of revolutionaries who got together and turned the world upside down. "It was a matter of hard work, sweat and pure determination against almost insurmountable odds," explains ASML in its corporate history. "It is a story of individuals who together achieved greatness."

Government isn't just there to administer life-support to failing markets. Without it, many would not even exist

There's one protagonist you never find mentioned in these sorts of stories: government. But dive deep into the archives of newspapers and annual reports back to the early 90s and another side to this story emerges.

From the get-go, ASML was receiving government handouts. By the fistful. When in 1986 a crisis in the worldwide chip industry brought ASML to its knees, and while several big competitors toppled, the chip machine-maker from the south of Holland got a leg-up from its national government. "Competitors who had survived the crisis no longer had enough funds to develop the next big thing," explains the company's site. So while its rivals licked their wounds, ASML shot into the lead. Is ASML an anomaly in the history of innovation? Not quite.

A few years ago the economist Mariana Mazzucato published a fascinating book debunking a whole series of myths about innovation. Her thesis is summed up in the title The Entrepreneurial State.

Radical innovation, Mazzucato reveals, almost always starts with the government. Take the iPhone, the epitome of modern technological progress. Literally every single sliver of technology that makes the iPhone a smartphone instead of a stupidphone – internet, GPS, touchscreen, battery, hard drive, voice recognition – was developed by researchers on the government payroll.

Why, then, do nearly all the innovative companies of our times come from the US? The answer is simple. Because it is home to the biggest venture capitalist in the world: the government of the United States of America.

These days there is a widespread political belief that governments should only step in when markets fail. Yet, as Mazzucato convincingly demonstrates, government can actually generate whole new markets. Silicon Valley, if you look back, started out as subsidy central. The true secret of the success of Silicon Valley, or of the bio- and nanotechnology sectors, Mazzucato points out, is that venture investors surfed on a big wave of government investments.

True innovation takes at least 10 to 15 years, whereas the longest that private venture capitalists are routinely willing to wait is five years. They don't join the game until all the riskiest plays have already been made by governments. In the case of biotechnology, nanotechnology and the internet, venture investors didn't jump on the bandwagon until after 15 to 20 years. Venture capitalists are not willing to venture enough.

The relationship between government and the market is mutual and necessary. Apple may not have invented the internet, GPS, touchscreens, batteries, hard drives and voice recognition; but then again, Washington was never very likely to make iPhones. There's not much point to radical innovations if no one turns them into products.

To dismiss the government as a bumbling slowpoke, however, won't get us anywhere. Because it's not the invisible hand of the market but the conspicuous hand of the state that first points the way. Government isn't there just to administer life support to failing markets. Without the government, many of those markets would not even exist.

The most daunting challenges of our times, from climate change to the ageing population, demand an entrepreneurial state unafraid to take a gamble. Rather than wait around for the market, government needs to have vision and be decisive – to take to heart Steve Jobs' motto: stay hungry, stay foolish.

Utopia for Realists: And How We Can Get There is available from the Guardian bookshop

This article was translated from Dutch by Elizabeth Manton

See the rest here:

Look at the phone in your hand you can thank the state for that – The Guardian

Ethernet Getting Back On The Moore’s Law Track – The Next Platform

July 10, 2017 Timothy Prickett Morgan

It would be ideal if we lived in a universe where it was possible to increase the capacity of compute, storage, and networking at the same pace so as to keep all three elements expanding in balance. The irony is that over the past two decades, when the industry needed networking to advance the most, Ethernet got a little stuck in the mud.

But Ethernet has pulled out of its boots, left them in the swamp, and is back to being barefoot again on much more solid ground where it can run faster. The move from 10 Gb/sec to 40 Gb/sec was slow and costly, and if it were not for the hyperscalers and their intense bandwidth hunger we might not even be at 100 Gb/sec Layer 2 and Layer 3 switching, much less standing at the transition to 200 Gb/sec and looking ahead to the not-too-distant future when 400 Gb/sec will be available.

Bandwidth has come along just at the right moment, when advances in CPU throughput are stalling as raw core performance did a decade ago, and as new adjunct processing capabilities, embodied in GPUs, FPGAs, and various kinds of specialized processors, are coming to market to get compute back on the Moore's Law track. Storage, thanks to flash and persistent flash-like and DRAM-like memories such as 3D XPoint from Intel and Micron Technology, is also undergoing an evolution. It is a fun time to be a system architect, but perhaps only because we know that, with these advanced networking options, bandwidth is not going to be a bottleneck.

The innovation that is allowing Ethernet to not leap ahead so much as jump to where it should have already been is PAM-4 signaling. The typical non-return to zero, or NRZ, modulation used with Ethernet switching hardware, cabling, and server adapters can encode one bit on a signal. With pulse amplitude modulation, or PAM, multiple levels of signaling can be encoded, so multiple bits can be encoded in the signal. With PAM-4, there are four levels of signaling which allow for two bits of data to be encoded at the same time on the signal, which doubles the effective bandwidth of a signal without increasing the clock rate. And looking ahead down the road, there is a possibility of stuffing even more bits in the wire using higher levels of PAM, and the whiteboards of the networking world are sketching out how to do three bits per signal with PAM-8 encoding and four bits per signal with PAM-16 encoding.
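
To make the arithmetic concrete, here is a minimal sketch of how the PAM order maps to bits per symbol and to effective lane bandwidth. It is an illustration only, not anything from Mellanox: the 25 GBd symbol rate and the helper names are assumptions, and encoding overhead is ignored.

```python
import math

def bits_per_symbol(pam_levels: int) -> int:
    # log2 of the number of amplitude levels: NRZ (effectively PAM-2) carries
    # 1 bit per symbol, PAM-4 carries 2, PAM-8 carries 3, PAM-16 carries 4.
    return int(math.log2(pam_levels))

def lane_rate_gbps(baud_gbd: float, pam_levels: int) -> float:
    # Effective lane data rate: symbol rate times bits per symbol,
    # ignoring forward error correction and encoding overhead.
    return baud_gbd * bits_per_symbol(pam_levels)

for levels in (2, 4, 8, 16):
    print(f"PAM-{levels}: {bits_per_symbol(levels)} bit(s)/symbol, "
          f"{lane_rate_gbps(25, levels):.0f} Gb/sec on a 25 GBd lane")
```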

With 40 Gb/sec Ethernet, we originally had 10 Gb/sec lanes aggregated. This was not a very energy efficient way to do 40 Gb/sec, and it was even worse for early 100 Gb/sec Ethernet aggregation gear, which ganged up ten 10 Gb/sec lanes. When the hyperscalers nudged the industry along in July 2014 to backcast this 25 GHz (well, really 28 GHz before encoding) to 25 Gb/sec and 50 Gb/sec Ethernet switching with backwards compatibility to run 10 Gb/sec and 40 Gb/sec, the industry did it. So we got to affordable 100 Gb/sec switching with four lanes running at 25 Gb/sec, and there were even cheaper 25 Gb/sec and 50 Gb/sec options for situations where bandwidth needs were not as high, and at a much better cost. (Generally, you got 2.5X the bandwidth for 1.5X to 1.8X the cost, depending on the switch configuration.)

With the 200 Gb/sec Spectrum-2 Ethernet switching that Mellanox Technologies is rolling out, and that other switch makers are going to adopt, the signaling is still running at 25 GHz effective, but with the Spectrum-2 gear Mellanox has just unveiled, it is layering on PAM-4 modulation to double pump the wires, so it delivers 50 Gb/sec per lane even though it is still running at the same speed as 100 Gb/sec Ethernet lanes. And to reach 400 Gb/sec with Spectrum-2 gear, Mellanox is planning to widen out to eight lanes running at this 25 GHz (effective) while layering on PAM-4 modulation to get 50 Gb/sec effective per lane. At some point, the lane speed will have to increase to 50 GHz, but with PAM-8 modulation the switching at eight lanes could be doubled again to 800 Gb/sec, and with PAM-16 you could hit 1.6 Tb/sec. Adding in the 50 GHz real signaling here would get us to 3.2 Tb/sec – something that still probably seems like a dream and that is probably also very far into the future.
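
As a rough check on the lane math above, the sketch below multiplies lanes by symbol rate and by bits per symbol for the generations discussed. The figures are nominal, pre-encoding rates chosen for illustration, not vendor specifications.

```python
# Nominal port speed = lanes x symbol rate (GBd) x bits per symbol.
configs = [
    ("40 Gb/sec  (4 lanes, 10 GBd, NRZ)",   4, 10.0, 1),
    ("100 Gb/sec (4 lanes, 25 GBd, NRZ)",   4, 25.0, 1),
    ("200 Gb/sec (4 lanes, 25 GBd, PAM-4)", 4, 25.0, 2),
    ("400 Gb/sec (8 lanes, 25 GBd, PAM-4)", 8, 25.0, 2),
]
for label, lanes, baud, bits in configs:
    print(f"{label}: {lanes * baud * bits:.0f} Gb/sec nominal")
```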

"This all sounds a lot easier in theory than it will be to actually engineer," Kevin Deierling, vice president of marketing at Mellanox, tells The Next Platform. "You can go to PAM-8 and you can go to PAM-16, but when you do that, you are starting to shrink the signal and it gets harder and harder to discriminate from one level in the signal to the next. Your signal-to-noise ratio goes away because you are shrinking your signal. Some folks are saying let's go to PAM-8 modulation, and other folks are saying that they need to use faster signaling rates like 50 GHz. I think we will see a combination of both."

The sweet thing about using PAM-4 to get to 200 Gb/sec switching is that the same SFP28 and QSFP28 adapters and cables that were used for 100 Gb/sec switching (and that are used for the 200 Gb/sec Quantum HDR InfiniBand that was launched by Mellanox last year and that will start shipping later this year) are used for the doubled up Ethernet speed bump. You need better copper cables for Spectrum-2 because the signal-to-noise ratio is shrinking, and similarly the optical transceivers need to be tweaked for the same reason. But the form factors for the adapters and switch ports remain the same.

With the 400 Gb/sec Spectrum-2 switching, the adapters have new wider form factors, with Mellanox supporting the QSFP-DD (short for double density) option instead of the OSFP (short for Octal Small Form Factor) option for optical ports. Deierling says Mellanox will let the market decide and support whatever it wants – one, the other, or both – but it is starting with QSFP-DD.

The Spectrum-2 ASIC can deliver 6.4 Tb/sec of aggregate switching bandwidth, and it can be carved up in a bunch of ways, including 16 ports at 400 Gb/sec, 32 ports at 200 Gb/sec, 64 ports at 100 Gb/sec (using splitter cables), and 128 ports running at 25 Gb/sec or 50 Gb/sec (again, using splitter cables). The Spectrum-2 chip can handle up to 9.52 billion packets per second, and has enough on-chip SRAM to handle access control lists (ACLs) that span up to 512,000 entries; with one of the 200 Gb/sec ports and a special FPGA accelerator that is designed to act as an interface to a chunk of external DRAM next to the chip, the Spectrum-2 can handle up to 2 million additional routes on the ACL – making it what Deierling says is the first internet-scale Ethernet switch based on a commodity ASIC that is suitable for hyperscaler-class customers who want to do Layer 3 routing on a box at the datacenter scale.
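
A quick way to see where those port counts come from is to divide the aggregate switching bandwidth by the per-port speed. The snippet below is just that arithmetic, nothing more; the 128-port 25 Gb/sec configuration cited above is presumably bounded by physical ports and splitter fan-out rather than by aggregate bandwidth.

```python
AGGREGATE_GBPS = 6_400  # Spectrum-2 aggregate switching bandwidth cited above

for port_speed in (400, 200, 100, 50):
    print(f"{AGGREGATE_GBPS // port_speed:>3} ports at {port_speed} Gb/sec")
```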

As for latency, which is something that everyone is always concerned with, the port-to-port hop on the Spectrum-2 switch is around 300 nanoseconds, and this is about as low as the Ethernet protocol, which imposes a lot of overhead, can go, according to Deierling. The SwitchX-2 and Quantum InfiniBand ASICs from Mellanox can push latencies down to 100 nanoseconds or a tiny bit lower, but that is where InfiniBand hits a wall.

At any rate, Mellanox reckons that Spectrum-2 has the advantage in switching capacity, with somewhere between 1.6X and 1.8X the aggregate switching bandwidth compared to its competition and without packet loss and somewhere on the order of 1.5X to 1.7X lower latency, too.

At the moment, Mellanox is peddling four different configurations of its Spectrum-2 switches, which are shown below:

The Spectrum-2 switches are being made available in two different form factors, two full width devices and two half width devices. The SN3700 has a straight 32 ports running at 200 Gb/sec for flat, Clos style networks, while the SN3410 has 48 ports running at 50 Gb/sec with eight uplinks running at 200 Gb/sec for more standard three tiered networks used in the enterprise and sometimes on the edges of the datacenter at hyperscalers. The SN3100 is a half-width switch that has 16 ports running at 200 Gb/sec, and the SN3200 has 16 ports running at 400 Gb/sec.

It is interesting that there is not a full width SN series switch with 400 Gb/sec ports. This is intentionally so and based on the expected deployment scenarios. In scenarios where a very high bandwidth switch is needed to create a storage cluster or a hyperconverged storage platform, 16 ports in a rack is enough and two switches at 16 ports provides redundant paths between compute and storage or hyperconverged compute-storage nodes to prevent outages.

There is even a scenario in which, using the VMS Wizard software for the Spectrum-2 switch, a quad of the 200 Gb/sec and 400 Gb/sec switches plus 64 of the SN3410 devices can be converted into a virtual modular switch that supports up to 3,072 ports in a single management domain. Take a look:

This Virtual Modular Switch is about 25 percent less expensive than actual modular switches with the same port count and lower bandwidth and higher latency.

Programmability is a big issue with networking these days, and the Spectrum-2 devices will be fully programmable and support both a homegrown compiler and scripting stack created by Mellanox as well as the P4 compiler that was created by Barefoot Networks for its Tofino Ethernet switch ASICs and that is being standardized upon by some hyperscalers. Mellanox expects hyperscalers to want to do a lot of their own programming, but that most enterprise customers will simply run the protocols and routines that Mellanox itself codes for the machines. The point is, when a new protocol or extension comes along, Spectrum-2 will be able to adopt it and customers will not have to wait until new silicon comes out. The industry waited far too long for VXLAN to be supported in chips, and that will not happen again.

As for pricing, the more bandwidth you get, the more you pay, but the cost per bit keeps coming down and will for the 200 Gb/sec and 400 Gb/sec speeds embodied in the Spectrum-2 lineup. Pricing depends on volumes and on the cabling, of course, but here is how it generally looks. With the jump from 40 Gb/sec to 100 Gb/sec switching (based on the 25G standard), customers got a 2.5X bandwidth boost for somewhere between 1.5X and 1.8X the price – somewhere around a 20 percent to 30 percent price/performance benefit. Today, almost two years later, 100 Gb/sec ports are at price parity with 40 Gb/sec ports back then, and Deierling says that a 100 Gb/sec port costs around $300 for a very high volume hyperscaler and something like $600 per port for a typical enterprise customer. The jump to 200 Gb/sec will follow a similar pattern. Customers moving from 100 Gb/sec to 200 Gb/sec switches (moving from Spectrum to Spectrum-2 devices in the Mellanox lineup) will get 2X the bandwidth for 1.5X the cost. Similarly, those jumping from 100 Gb/sec to 400 Gb/sec will get 4X the bandwidth per port for 3X the cost.
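
One way to read those generational jumps is as a change in cost per bit: divide the price multiple by the bandwidth multiple. The sketch below is a back-of-the-envelope reading of the figures quoted above, not Mellanox's own pricing model, and it lands in the same general range as the 20 to 30 percent benefit cited for the 100 Gb/sec transition.

```python
def cost_per_bit_change(bw_multiple: float, price_multiple: float) -> float:
    # Fractional change in cost per bit for a generational jump
    # (negative means cheaper per bit).
    return price_multiple / bw_multiple - 1.0

jumps = [
    ("40G -> 100G (1.5X price)", 2.5, 1.5),
    ("40G -> 100G (1.8X price)", 2.5, 1.8),
    ("100G -> 200G",             2.0, 1.5),
    ("100G -> 400G",             4.0, 3.0),
]
for label, bw, price in jumps:
    print(f"{label}: {cost_per_bit_change(bw, price):+.0%} cost per bit")
```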

Over time, we expect that there will be price parity between 100 Gb/sec pricing today and 200 Gb/sec pricing, perhaps two years hence, and that the premium for 400 Gb/sec will be more like 50 percent than 100 percent. But those are just guesses. A lot depends on what happens in the enterprise. What we do know is that enterprises are increasingly being forced by their applications and the latency demands of their end user applications to deploy the kind of fat tree networks that are common at HPC centers and hyperscalers and they are moving away from the over-subscribed, tiered networks of the past where they could skimp on the switch devices and hope the latencies were not too bad.

Categories: Cloud, Connect, Enterprise, Hyperscale

Tags: Ethernet, Mellanox, PAM-4, Spectrum-2


See original here:

Ethernet Getting Back On The Moore’s Law Track – The Next Platform

Chart in Focus: AMD’s Moore’s Law Plus Concept – Market Realist

AMD Seeks to Gain Market Share from Intel and NVIDIA PART 3 OF 22

In the past, Advanced Micro Devices (AMD) had suffered from delayed product launches because it used highly specialized semiconductor nodes that it had built. As a result, any process difficulties and yield issues were specific to the company.

In 2012, AMD spun off its manufacturing unit, Global Foundries, and became a fabless company. This helped it address the problem of process technologies and yield difficulties. In 2016, the company launched its first product on the 14nm (nanometer) process node, bringing it on par with its competitors Intel (INTC) and NVIDIA (NVDA) in terms of process technology.

However, Moore's law is slowing. Moore's law states that transistors shrink and their number on a chip doubles roughly every two years, thereby improving performance and reducing cost and power consumption.

As Moore's law slows, companies are looking for innovative ways to power the next generation of computing capability. AMD developed the concept of Moore's Law Plus to drive future innovation.

At its 2017 Financial Analyst Day, AMD's chief technology officer, Mark Papermaster, explained that its semiconductor technology alone cannot address the company's future computing needs. As a result, AMD has adopted a three-pronged approach to drive future generation chip development: integrate hardware, software support, and design from a system perspective.

AMD is improving its core architecture to integrate with other hardware. The company developed Infinity Fabric, which connects multiple chips efficiently and provides greater control with respect to power and security.

AMD is advancing its packaging from the current MCM (multichip modules) and 2.5D packaging to 3D packaging.

The company has collaborated with industry participants like IBM (IBM) and Xilinx (XLNX) in developing an industry-standard interconnect like CCIX (Cache Coherent Interconnect for Accelerators). CCIX would provide high-performance connectivity and rack scale for different accelerators and server processors.

AMD is also looking to optimize the physical design of its chips by making them denser and more power-efficient.

AMD is supporting its hardware with very high-performance software solutions. Instead of locking the software, AMD has adopted an open computing platform that allows users to download and upload information for free.

The company is using C/C++ Compiler and advanced frameworks like ROCm (Radeon Open Compute platform) to support hardware.

Semiconductor and software technology cannot deliver the desired computing power in isolation. All these technologies are integrated into a system design. For instance, AMD's Radeon Instinct Initiative would integrate the Ryzen CPU (central processing unit), the Vega GPU (graphics processing unit), HBM2 (high-bandwidth memory), and ROCm to deliver machine learning and heterogeneous computing systems.

AMD has recently acquired wireless millimeter-wave interconnect technology that it plans to use in developing wireless VR (virtual reality) headsets.

Next, we'll look at AMD's new CPU architecture.

Here is the original post:

Chart in Focus: AMD’s Moore’s Law Plus Concept – Market Realist

Could this 2D materials innovation push Moore’s law into sub-5nm gate lengths? – Electropages (blog)

In a major technological development, a material-device-circuit level co-optimisation of field-effect transistors (FETs) based on 2D materials for high-performance logic applications scaled beyond the 10nm technology node has been presented.

It is the result of collaborative work between Imec, the nanoelectronics and digital technology innovation centre, and scientists from KU Leuven in Belgium and Pisa University in Italy. In addition, Imec has also created designs which are thought to allow the use of mono-layer 2D materials to facilitate Moore's law below a 5nm gate length.

Scientists believe 2D materials, which are formed from two-dimensional crystals, may be able to create a transistor with a channel thickness down to the level of single atoms and gate lengths of a few nanometers.

A key technology driver that allowed the chip industry to progress Moore's Law and to produce increasingly powerful devices was the continued scaling of gate lengths.

In order to counter the resulting negative short-channel effects, chip manufacturers have already moved from planar transistors to FinFETs. They are now introducing other transistor architectures such as nanowire FETs. This material breakthrough goes beyond existing practices.

In order to fit FETs based on 2D materials into the scaling roadmap it is essential to understand how their characteristics relate to their behavior in digital circuits. In a recent paper published in Scientific Reports the Imec scientists and their colleagues explained how to choose materials, design the devices and optimise performance to create circuits meeting the requirements for sub-10nm high-performance logic chips. Their findings demonstrate the need to use 2D materials with anisotropic characteristics, meaning they are stronger along their length than laterally and also have a smaller effective mass in the transport direction.

Using one such material, monolayer black phosphorus, the researchers presented device designs which they say could pave the way to extending Moore's law into sub-5nm gate lengths.

These designs reveal that for sub-5nm gate lengths, 2D electrostatics arising from gate stack design become more of a challenge than direct source-to-drain tunneling.

These results are very encouraging because in the case of 3D semiconductors, such as Si, scaling gate length so aggressively is practically impossible.

Paul Whytock is European Editor for Electropages. He has reported extensively on the electronics industry in Europe, the United States and the Far East for over twenty years. Prior to entering journalism he worked as a design engineer with Ford Motor Company at locations in England, Germany, Holland and Belgium.


Here is the original post:

Could this 2D materials innovation push Moore’s law into sub-5nm gate lengths? – Electropages (blog)

NVIDIA White Paper Projects MCM-GPU Future Will Outrun Moore’s … – Hot Hardware


Hot Hardware: It's not too often we get the feeling that some of the technology that we regularly use is reaching its upper limit, but there comes a time when new ideas need to …

See the rest here:

NVIDIA White Paper Projects MCM-GPU Future Will Outrun Moore’s … – Hot Hardware

Here is how Nvidia can sidestep Moore’s Law in GPU design – PC Gamer

Nvidia is fast approaching a technical wall in GPU design where it will no longer be able to shove more transistors into a GPU die to increase performance at the same rate customers have grown accustomed to. Simply put, as Moore’s Law slows down, the number of transistors per die no longer grows at historical rates, Nvidia notes. The solution to this problem could lie in switching to a multi-chip module GPU design.

Researchers from Nvidia, Arizona State University, the University of Texas, and the Barcelona Supercomputing Center have published a paper outlining the benefits of multi-chip module GPUs. It is a design that is working for AMD with its Ryzen CPUs, and likewise Nvidia believes it could benefit GPUs as well.

“Specifically, we propose partitioning GPUs into easily manufacturable basic GPU Modules (GPMs), and integrating them on package using high bandwidth and power efficient signaling technologies,” Nvidia says.

Without either switching to a multi-chip module design or coming up with an alternative solution, Nvidia warns that the performance curve of single monolithic GPUs as currently constructed will ultimately plateau. Beyond the technical challenge of cramming more transistors into smaller spaces, there is also the cost to consider, both in terms of technical research and reduced die yields.

Whether or not an MCM design is ultimately the answer, Nvidia thinks it is at least worth exploring. One thing that Nvidia mentions in its paper is that it’s difficult to scale GPU workloads on multi-GPU systems, even if they scale well on a single GPU.

“This is due to multiple unsolved challenges related to work partitioning, load balancing, and data sharing across the slow on-board interconnection network. However, due to recent advances in packaging and signaling technologies, package-level integration provides a promising integration tier that lies between the existing on-chip and on-board integration technologies,” Nvidia says.

What Nvidia proposes is connecting multiple GPU modules using advanced, high-speed input/output protocols to efficiently communicate with each other. This would allow for less complex (and presumably cheaper) GPU modules compared to a monolithic design. It is a sort of strength in numbers approach.

Nvidia’s team of researchers used an in-house simulator to evaluate their designs. What they did was build two virtual GPUs, each with 256 streaming multiprocessors (SMs). One was based on the current monolithic design and the other used an MCM design.

The simulator showed the MCM design performed within 10 percent of the monolithic GPU. It also showed that the MCM design would be nearly 27 percent faster than an SLI setup with similar specs. And when optimized, the MCM design can achieve a 45.5 percent speedup compared to the largest implementable monolithic GPU, which would have 128 SMs.
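
To keep those percentages straight, here is an illustrative normalisation of the simulated results against the largest buildable 128-SM monolithic GPU. The arithmetic is mine, not Nvidia's, and simply restates the figures quoted above.

```python
# Normalise the simulated results to the 128-SM monolithic baseline.
monolithic_128 = 1.00
mcm_256 = monolithic_128 * 1.455   # 45.5 percent speedup over the 128-SM part
sli_256 = mcm_256 / 1.27           # the MCM is roughly 27 percent faster than SLI
ideal_256 = mcm_256 / 0.90         # the MCM lands within about 10 percent of the
                                   # hypothetical 256-SM monolithic GPU

print(f"128-SM monolithic        : {monolithic_128:.2f}x")
print(f"256-SM multi-GPU (SLI)   : {sli_256:.2f}x")
print(f"256-SM MCM               : {mcm_256:.2f}x")
print(f"256-SM monolithic (ideal): {ideal_256:.2f}x")
```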

Much of this is hypothetical, not just in the simulation but also the examples used. A 256 SM chip just isn't possible at the moment; Nvidia labels it as “unbuildable.” To put that into perspective, Nvidia’s GeForce GTX 1080 Ti sports 28 SMs.

It remains to be seen what Nvidia will do for the next couple of generations, though a move to MCM GPUs seems almost inevitable. The question is, which company will get there first? It is believed that AMD’s Navi GPU architecture off in the distance could utilize an MCM GPU design as well, especially now that AMD has the tech in place with Zen (Ryzen, Threadripper, Naples, Epyc).

For now, you can dive into Nvidia’s white paper (PDF) for all of the gritty details.

Here is the original post:

Here is how Nvidia can sidestep Moore’s Law in GPU design – PC Gamer

Moore’s Law end shakes industry – EE Times Asia – Eetasia.com (press release)

At the 50th anniversary of the Alan Turing award, panellists revealed that the expected death of Moore’s Law would change the semiconductor and computer industries.

A basket of silicon, systems and software technologies will continue progress, but not at the same pace, they said. With no clear replacement for CMOS scaling, semiconductor and systems industries may be reshaped into vertical silos, they added.

Moore's Law said transistor density doubles every 18 months, something we maintained for 25 years, but it began slowing down to every two to three years around 2000-2005, and more recently we're seeing doubling about every four years, so we're reaching the end of semiconductor technology as we know it, said John Hennessy, former president of Stanford University and author of a key text book on microprocessors.

Figure 1: Hennessy: We’re reaching the end of semiconductor technology as we know it.

Dennard scaling, a related observation that energy requirements scale as silicon shrinks, already has been non-operational for 10 to 15 years, creating an era of dark silicon where we quickly turned to multicore processors, Hennessy added.

Moore's Law is really an observation about economics, not a law of physics. The question is whether we can find another aspect of physics that has a return on investment like CMOS, said Margaret Martonosi, a systems specialist at Princeton.

Insofar as Moore's Law is about a rate [of density scaling], it is dead because I think we are at the end of a predictable rate and in a few generations we'll hit the limits of physics, said Doug Burger, a distinguished engineer working on FPGA accelerators at Microsoft's Azure cloud service.

Figure 2: Margaret Martonosi wrote two textbooks on power-aware computers.

Moore's Law gave us a free ride and that's just about over, so we are entering a wild, messy time and it sounds like a lot of fun, Burger said.

I think we still have a few more years of CMOS scaling, said Norm Jouppi, a veteran microprocessor designer and lead of the team behind Google's TPU accelerator. Some apps will continue to see performance speed ups for the next decade but for others they will come more slowly, he said.

Jouppi quipped that the industry is in denial about Moore's Law, like the vendor in the Monty Python dead-parrot sketch, who insists a bird is not dead, it's just resting.

Read the original here:

Moore’s Law end shakes industry – EE Times Asia – Eetasia.com (press release)

Boffins create 3D CPU architecture to stretch Moore’s Law more – The INQUIRER

RESEARCHERS CLAIM to have developed a new ‘3D chip’ that could be the answer to some of the bandwidth issues plaguing the current generation of chips.

The prototype, built by a team of researchers from Stanford and MIT, manages to combine memory, processor and sensors onto a single discrete unit made of carbon nanotubes, with resistive RAM (RRAM) squished over the top.

The 3D computer architecture is, the team claims, “the most complex nano-electronic system ever made with emerging nano-technologies”.

Carbon has a higher tolerance to heat than silicon, and so using the carbon nanotubes means that the chip can stand up to higher temperatures than a regular chip – especially now the wafers are getting so ridiculously thin.

Related: Researchers shift processing to memory to make things faster

The research, funded by the Defense Advanced Research Projects Agency (DARPA) and the US National Sanitation Foundation (!), is making good headway, but it is, for want of a better phrase, not 'backwards compatible' and so it could be a while before we see anything in the shops that uses the same technology.

"The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today's logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM," said Philip Wong from the Stanford team.

The work has come on at a phenomenal pace, with a University of Wisconsin-Madison team first perfecting the nanotube system to overtake speeds possible in silicon chips as recently as last September.

At that time, experts estimated the chips could take a current up to 1.9 times that of a conventional silicon dewberry. Ultimately it's thought that figure will increase to five times the speed, with a fifth of the energy, but again, no time frames.

See original here:

Boffins create 3D CPU architecture to stretch Moore’s Law more – The INQUIRER

What Moore’s Law has to teach us about WanaCrypt0r – SC Magazine UK

Kirsten Bay, president and CEO, Cyber adAPT

WannaCrypt0r – the malware that held data to ransom on a global scale – was a powerful illustration of what happens when cyber-security loopholes are not effectively closed. Exploiting a weakness in Microsoft's Windows operating system, the cryptoworm spread between PCs like wildfire, encrypting data and demanding Bitcoin payment in exchange for its return.

It is fair to say the attack took most cyber-security professionals by surprise. But was it really so unfathomable and, more importantly, how can we ensure such attacks are not repeated?

The answer to these questions lies in a theory proposed by Intel co-founder, Gordon Moore, in the 1960s: the processing power of computers doubles every two years.

Having dominated computing for the last 52 years, Moore's Law is now looking set to run out of steam, and the reason behind this has much to teach us about cyber-security now, and in the future.

Keeping up with the hackers

According to Europol chief Rob Wainwright, the best way to stop WanaCrypt0r infecting PCs and corporate networks is simple: installing a Microsoft patch on all machines.

Yet as the attack has shown, keeping security systems up to date is challenging. Microsoft, after all, had already released the MS17-010 patch before the ransomware hit, but failure of individual users and businesses to update promptly meant 150 countries were still affected.

The hard truth is: security breaches are not just increasing; they are inevitable – especially in large organisations where networks support multiple devices that all run different software. And considering the scale of the biggest organisations affected – the UK's National Health Service and FedEx – it is easy to see how PCs running outdated systems, like Windows 7, were overlooked.

The key conclusion we can draw from this latest breach is that our tendency to focus on protecting specific networks or devices is a serious error. And this is where Moore’s Law comes in

From chip-power to the cloud

When Moore first made his observation, technology was different – computing power was determined by how many transistors a dense integrated circuit, or chip, could hold. After noting that the transistor to chip ratio was doubling every two years (a revised estimate made in 1975), he predicted that processing capability would grow at the same rate, and so Moore's Law was born.

Although the theory has been verified by more than half a century of multiplying transistors and shrinking chips, empirical support for it is dwindling. Indeed, in 2015, Moore himself said he saw the law dying in the next decade or so.

The reason for this is that computing capability is no longer tied to hardware. The advent of cloud computing means software, data and extra processing capacity can now be accessed over the internet without increasing the number of transistors in a device.

Thus, when we apply the same argument to cyber-security, the problem is clear: current measures are trying to protect limited networks and specific devices, but networks are now edgeless and used by myriad devices. In other words, the idea of patching every single device linked to the network is unrealistic and we are trying to keep a gate closed that is simply too wide.

Outside in: building internal defences

To outpace the hackers, we must learn from the failings of Moore’s Law and take a lateral security perspective that extends beyond individual devices.

CISOs need to adopt a detection-led approach that focuses on preventing attacks after hackers have breached networks by monitoring for and removing suspicious users. In doing so, they can ensure their cyber-security measures are fit for the 21st century, rather than embarking on an endless mission to update every device each time a threat is identified. And with such defences in place, security professionals could stop the next ransomware attack from spreading so quickly, or at all.

The demise of Moore’s law teaches us that modern security cannot afford to view networks as silos. With the cloud constantly creating new connections, there are no more perimeters to protect, which means keeping systems safe requires defences that can identify hackers after they have made their way in.

By deploying a detection-led method, CISOs can use the lessons of the past to secure networks at all times, and ensure they are positioned to thwart the next WanaCrypt0r-style attack in its early stages.

Contributed by Kirsten Bay, president and CEO, Cyber adAPT

*Note: The views expressed in this blog are those of the author and do not necessarily reflect the views of SC Media or Haymarket Media.

Read the original:

What Moore’s Law has to teach us about WanaCrypt0r – SC Magazine UK

Cadence, Synopsys: Monster Chips from Nvidia, Intel Bode Well, Says RBC – Barron’s


Barron's: Giant chips from Nvidia and Intel packed with tons of transistors are a good sign that the chip industry rule of thumb, Moore's Law, is alive and well, says RBC's Mitch Steves, and that should be good business for Synopsys and Cadence, vendors of the …

Read more from the original source:

Cadence, Synopsys: Monster Chips from Nvidia, Intel Bode Well, Says RBC – Barron’s

Nvidia researching Multi-Chip-Module GPUs to keep Moore’s law alive – Neowin

A team consisting of researchers from Nvidia, Arizona State University, the University of Texas, and the Barcelona Supercomputing Centre have published a paper (PDF) studying ways of bypassing the recent deceleration in the pace of advancement of transistor density.

To avoid the performance ceiling monolithic GPUs will ultimately reach, they propose the manufacture of basic GPU Modules (GPMs) that will be integrated on a single package using high bandwidth and power-efficient signaling technologies, in order to create Multi-Chip-Module (MCM) GPU designs.

The researchers used Nvidia's in-house GPU simulator to evaluate their designs. According to their findings, MCM GPUs can greatly assist in increasing the number of Streaming Multiprocessors (SMs), which vastly speeds up many types of applications. Utilizing the simpler GPM building blocks and advanced interconnects, they simulated a 256 SM chip that achieves a 45.5% speedup over the largest possible monolithic GPU with 128 SMs. In addition, their design performs 26.8% better than a discrete multi-GPU with the same number of SMs, and is within 10% of the performance of a hypothetical monolithic GPU with 256 SMs that cannot be built based on today's technology roadmap.

Source and Images via Hexus

Visit link:

Nvidia researching Multi-Chip-Module GPUs to keep Moore’s law alive – Neowin

Death of Moore’s Law could be cool – Fudzilla

HP Labs thinks it is the best thing to happen in computing

While Moore’s Law is slowly winding up, Hewlett-Packard Labs is not exactly mourning.

Hewlett-Packard Labs boffin Stanley Williams has penned a report exploring the end of Moore's Law which says it could be the best thing that has happened for computing.

He wrote that confronting the end of an epoch should enable a new era of creativity by encouraging computer scientists to invent biologically inspired devices, circuits, and architectures implemented using recently emerging technologies.

Williams argues that: “The effort to scale silicon CMOS overwhelmingly dominated the intellectual and financial capital investments in industry, government, and academia, starving investigations across broad segments of computer science and locking in one dominant model for computers, the von Neumann architecture.”

Three alternatives are already being developed at Hewlett Packard Enterprise: neuromorphic computing, photonic computing, and Memory-Driven Computing.

“All three technologies have been successfully tested in prototype devices, but MDC is at centre stage.”

Follow this link:

Death of Moore’s Law could be cool – Fudzilla

TOP500 Meanderings: Sluggish Performance Growth May Portend Slowing HPC Market – TOP500 News

For all the supercomputing trends revealed on recent TOP500 lists, the most worrisome is the decline in performance growth that has taken place over the last several years – worrisome not only because performance is the lifeblood of the HPC industry, but also because there is no definitive cause of the slowdown.

TOP500 aggregate performance (blue), top system performance (red), and last system performance (orange). Credit Erich Strohmaier

That said, there are a few smoking guns worth considering. An obvious one is Moore's Law, or rather the purported slowing of Moore's Law. Performance increases in supercomputer hardware rely on a combination of getting access to more powerful computer chips and putting more of them into a system. The latter explains why aggregate performance on the TOP500 list historically grew somewhat faster than the rate of Moore's Law.

But this no longer appears to be the case. Since 2013 or thereabouts, the annual aggregate performance increase on the TOP500 list has fallen not just below its historical rate of growth, but the Moores Law rate as well. As you can see from the chart below, performance growth has had its ups and downs over the years, but the recent dip appears to indicate a new trend.

TOP500 rate of performance increase. Credit Erich Strohmaier

So if Moore's Law is slowing, why don't users just order bigger systems with more servers? Well, they are – system core counts are certainly rising – but there are a number of disincentives to simply throwing more servers at the problem. A major limitation is power.

And although power data on the list is sketchier than performance data, there is a clear trend toward increased energy usage. For example, over the last 10 years, the supercomputer with the largest power draw increased from around 2.0 MW in 2007 (ASC Purple) to more than 17.8 MW in 2017 (Tianhe-2). In fact, three of the largest systems today use more than 10 MW. The systems in the middle of the list appear to be sucking more energy as well, although the increase is not so pronounced as it is for the biggest systems.
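
As a quick sanity check on that trend, the two data points above imply a compound annual growth rate in top-system power draw of roughly 24 percent; a short sketch of the arithmetic, using only the figures cited in this article:

```python
# Compound annual growth of the largest system's power draw, 2007 to 2017.
p_2007, p_2017, years = 2.0, 17.8, 10  # megawatts (ASC Purple, Tianhe-2)
cagr = (p_2017 / p_2007) ** (1 / years) - 1
print(f"Top-system power draw grew about {cagr:.0%} per year")
```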

Theres nothing inherently wrong with building supercomputers that chew through tens of megawatts of electricity. But given the cost of power, there just wont be very many of them. The nominal goal of building the first exascale supercomputers in the 20 to 30 MW range ensures there will be only a handful of such machines in the world.

The problem with using additional electricity is not just that it costs more, and thus there is less money to spend on buying more hardware, but once you grow beyond the power budget of your datacenter, you're stuck. At that point, you either have to build a bigger facility, wait until the hardware becomes more energy efficient, or burst some of your workload to the cloud. All of those scenarios lead to the slower performance growth we see on the TOP500.

It also leads to reduced system turnover, which is another recent trend that appears to have clearly established itself. Looking at the chart below, the time an average system spends on the list has tripled since 2008, and is about double the historical average. It's almost certain that this means users are hanging on to their existing systems for longer periods of time.

TOP500 average lifetime of a system on the list. Credit: Erich Strohmaier

None of this bodes well for supercomputer makers. Cray, the purest HPC company in the world, has been seeing some of the effects of stretching out system procurements. Since 2016 at least, the company has experienced a contraction in the number of systems they are able to bid on (although they've been able to compensate to some degree with better win rates). Cray's recent forays into cloud computing and AI are two ways they are looking to establish revenue streams that are not reliant on traditional HPC system sales.

Analyst firms Intersect360 Research and Hyperion (formerly IDC) remain bullish about the HPC market, although compared to a few years ago their growth projections have been shaved back. Hyperion is forecasting a 5.8 percent compound annual growth rate (CAGR) for HPC servers over the next five years, but that's a full point and a half lower than the 7.3 percent CAGR they were talking about in 2012. Meanwhile, Intersect360 Research is currently projecting a 4.7 percent CAGR for server hardware, while in 2010 they were forecasting a 7.0 percent growth rate (although that included everything, not just servers).

The demand for greater computing power from both researchers and commercial users appears to be intact, which makes the slowdown in performance growth all the more troubling. This same phenomenon appears to be part of what is behind the current trend toward more diverse architectures and heterogeneity. The most popular new processors – GPUs, Xeon Phis, and to a lesser extent, FPGAs – all exhibit better performance per watt characteristics than the multicore CPUs they nominally replace. The interest in the ARM architecture is along these same lines.

Of course, all of these processors will be subject to the erosion of Moore's Law. So unless a more fundamental technology or architectural approach emerges to change the power-performance calculus, slower growth will persist. That won't wipe out HPC usage, any more than the flat growth of enterprise computing wiped out businesses. It will just be the new normal until something else comes along.

See more here:

TOP500 Meanderings: Sluggish Performance Growth May Portend Slowing HPC Market – TOP500 News

9 Things You Didn’t Know About Intel Corporation – Madison.com

Comedian Conan O’Brien once mocked semiconductor giant Intel’s (NASDAQ: INTC) cubicle-farm offices for their soulless, gray aesthetic. The company subsequently chose to spruce up its drab digs, but after hearing a story like that, you may not think of Intel as a company whose history is replete with fascinating facts and stories of innovation. However, the chipmaking giant’s past, present, and future are far more dynamic and interesting than you might expect.

Here are nine things about Intel that you may not know.

The company’s founders at first wanted to name Intel “Moore Noyce,” a combination of their last names. However, when colleagues complained that it sounded like “more noise,” they switched to the first parts of the words “integrated electronics,” and Intel was born.

Intel’s chips helped power the first-ever live video from space to Earth in 1995. Astronauts aboard the Space Shuttle Endeavour streamed a live feed, including photographic images and annotations, to ground control at Houston’s Johnson Space Center.

Intel is famous for its five-chimed jingle. Officially known as the "Intel Bong" — insert joke here — it was composed by Austrian music producer Walter Werzowa and made its debut in 1994. The iconic jingle was so heavily incorporated into Intel's marketing efforts that it was at one point estimated to have been played somewhere in the world once every five minutes at its peak. It's been played over 1 billion times in total.

Observers of the semiconductor market probably know Moore's Law, which predicts that semiconductors' performance tends to double every two years. However, it's less well known that this foundational concept is named after Intel's co-founder, Gordon Moore, who popularized the idea.

Intel absolutely dominates its two core markets, holding more than a 90% share in the market for PC and server microprocessors. Competitors such as Advanced Micro Devices try to catch up but are rarely able to gain any ground because of Intel's massive research and development budget. For its fiscal 2016, Intel spent $12.7 billion on R&D alone, roughly triple AMD's entire 2016 revenue of $4.2 billion. That's a staggering difference in scale.

Even though many consumers would have a hard time distinguishing Intel's chips from others inside today's computers, Intel enjoys worldwide brand recognition. Global brand consultancy Interbrand ranked Intel as having the world's 14th most valuable brand in 2016, with an estimated brand value of $36.9 billion, situating the company between far more consumer-facing companies Disney (13th) and Facebook (15th).

In addition to building one of tech's most iconic companies, Intel's top executives have left a far broader imprint on tech and business over the years. For example, Intel co-founder Bob Noyce mentored a young Steve Jobs, who mentioned Noyce by name in his famous Stanford commencement speech. And former Intel CEO Andy Grove, another longtime Jobs mentor, wrote several best-selling books that are today required reading at business schools around the world, including Only the Paranoid Survive and High Output Management.

The company’s legion of fans also extends into the scientific community. In 1987, researchers at the CERGA Observatory named an asteroid “Intel 8080” in the chipmaker’s honor. The name comes from the company’s 8080 chip, which is widely credited with enabling the personal-computing revolution to take off.

Intel and other semiconductor producers like it have maintained Moore's Law by continually shrinking transistors and increasing the number they can fit in their chips. For context, the aforementioned Intel 8080 contained 6,000 transistors, a major breakthrough at the time. Today, PC microprocessors pack in 2.6 billion transistors, a testament to Intel's ability to continually innovate.
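
Those two transistor counts let you back out the average doubling period yourself. The sketch below does the arithmetic; the 8080's 1974 introduction year is an assumption added for the calculation and does not come from the article.

```python
import math

# Infer the average doubling period from the transistor counts cited above.
year_0, n_0 = 1974, 6_000            # Intel 8080 (launch year assumed)
year_1, n_1 = 2017, 2_600_000_000    # modern PC microprocessor

doublings = math.log2(n_1 / n_0)
span = year_1 - year_0
print(f"{doublings:.1f} doublings in {span} years "
      f"-> one doubling roughly every {span / doublings:.1f} years")
```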

10 stocks we like better than Intel

When investing geniuses David and Tom Gardner have a stock tip, it can pay to listen. After all, the newsletter they have run for over a decade, Motley Fool Stock Advisor, has tripled the market.*

David and Tom just revealed what they believe are the 10 best stocks for investors to buy right now… and Intel wasn’t one of them! That’s right — they think these 10 stocks are even better buys.

*Stock Advisor returns as of June 5, 2017

Andrew Tonner has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Facebook and Walt Disney. The Motley Fool recommends Intel. The Motley Fool has a disclosure policy.

See the rest here:

9 Things You Didn’t Know About Intel Corporation – Madison.com

The information age is over, welcome to the machine learning age – VentureBeat

I first used a computer to do real work in 1985.

I was in college in the Twin Cities, and I remember using the DOS version of Word and later upgraded to the first version of Windows. People used to scoff at the massive gray machines in the computer lab but secretly they suspected something was happening.

It was. You could say the information age started in 1965, when Gordon Moore invented Moore's Law (a prediction that transistor counts would double every year, later revised to every 18 months). It was all about computing power escalation, and he was right about the coming revolution. Some would argue the information age started long before then, when electricity replaced steam power. Or maybe it was when the library system in the U.S. started to expand in the '30s.

Who knows? My theory: it started when everyone had access to information on a personal computer. That was essentially what happened around 1985 for me and a bit before that in high school. (Insert your own theory here about the Apple II ushering in the information age in 1977. I'd argue that was a little too much of a hobbyist machine.)

We can agree on one thing. We know that information is everywhere. That's a given. Now, prepare for another shift.

In their book Machine, Platform, Crowd: Harnessing Our Digital Future, economic gurus Andrew McAfee and Erik Brynjolfsson suggest that we're now in the machine learning age. They point to another momentous occasion that might be as significant as Moore's Law. In March of last year, an AI finally beat a world champion player in Go, winning four out of five games.

Of course, pinpointing the start of the machine learning age is also difficult. Beating Go was a milestone, but my adult-age kids have been relying on GPS in their phones for years. They don't know how to read normal maps, and if they didn't have a phone, they would get lost. They are already relying on a machine that essentially replaces human reasoning. I haven't looked up showtimes for a movie theater in a browser for several years now. I leave that to Siri on my iPhone. I've been using an Amazon Echo speaker to control the thermostat in my home since 2015.

In their book, McAfee and Brynjolfsson make an interesting point about this radical shift. For anyone working in the field of artificial intelligence, leaving the information age behind, we know that this will be a crowdsourced endeavor. It's more than creating an account on Kickstarter. AI comes alive when it has access to the data generated by thousands or millions of users. The more data it has, the better it will be. To beat the Go champion, Google DeepMind used a database of actual human-to-human games. AI cannot exist without crowdsourced data. We see this with chatbots and voicebots. The best bots know how to adapt to the user, know how to use previous discussions as the basis for improved AI.

Even the term machine learning has an implication about crowdsourcing. The machine learns from the crowd, typically by gathering data. We see this play out more vibrantly with autonomous cars than with any other machine learning paradigm. Cars analyze thousands of data points using sensors that watch how people drive on the road. A Tesla Model S is constantly crowdsourcing. Now that GM is testing the self-driving Bolt on real roads, it's clear the entire project is a way to make sure the cars understand all of the real-world variables.

The irony here? The machine age is still human-powered. In the book, the authors explain how the transition from steam power to electric power took a long time. People scoffed at the idea of using electric motors and not a complex system of gears and pulleys. Not everyone was on board. Not everyone saw the value. As we experiment with AI, test and retest the algorithms, and deploy bots into the home and workplace, it's important to always keep in mind that the machines will only improve as the crowdsourced data improves.

We're still in full control. For now.

See the original post here:

The information age is over, welcome to the machine learning age – VentureBeat

We Are At The Dawn of a New Era of Innovation. Will You Still Be Able to Compete? – Inc.com

I recently appeared as a guest on Wharton Professor David Robertson's radio show, Innovation Navigation. David is an old pro and recently published an excellent new book on innovation, The Power of Little Ideas, so it was an interesting, wide-ranging discussion that covered a lot of ground.

One of the subjects we touched on was the new era of innovation. For the past few decades, firms have innovated within well-understood paradigms, Moore's Law being the most famous but by no means the only one. This made innovation relatively simple, because we were fairly sure of where technology was going.

Today, however, Moore's Law is nearing its theoretical limits, as are lithium-ion batteries. Other technologies, such as the internal combustion engine, will be replaced by new paradigms. So the next few decades are likely to look a whole lot more like the 50s and 60s than the 90s or the aughts: value will shift from developing applications back to fundamental technologies.

As Thomas Kuhn explained in The Structure of Scientific Revolutions, we normally work within well-established paradigms because they are useful for establishing the rules of the game. Specialists within a particular field can speak a common language, advance the field within well-understood parameters and apply their knowledge to solve problems.

For example, Moore's Law established a stable trend of doubling computing power about every 18 months. That made it possible for technology companies to know how much computing power they would have to work with in the coming years and to predict, with a fairly high level of accuracy, what they would be able to do with it.
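
To make the planning point concrete, here is a minimal sketch (my own illustration, not from the article) of the kind of back-of-envelope projection a product team could run, assuming a clean 18-month doubling period:

# Rough Moore's Law projection: relative computing power available
# after a given number of years, assuming capacity doubles every
# 18 months (1.5 years). Purely illustrative.
def projected_capacity(years, doubling_period_years=1.5, base=1.0):
    return base * 2 ** (years / doubling_period_years)

# A product planned three years out can budget for roughly 4x
# today's computing power; six years out, roughly 16x.
for years in (1.5, 3, 6):
    print(f"{years} years out: ~{projected_capacity(years):.0f}x today's power")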

Yet today, chip manufacturing has advanced to the point where, in a few short years, it will be theoretically impossible to fit more transistors on a silicon wafer. There are nascent technologies, such as quantum computing and neuromorphic chips, that could replace traditional architectures, but they are not nearly as well understood.

Computing is just one area reaching its theoretical limits. We also need next-generation batteries to power our devices, electric cars and the grid. At the same time, new technologies such as genomics, nanotechnology and robotics are becoming ascendant, and even the scientific method is being called into question.

Over the past few decades, technology and innovation have mostly been associated with the computer industry. As noted above, Moore's Law has enabled firms to bring out a steady stream of devices and services that improve so quickly that they become virtually obsolete in just a few years. Clearly, these improvements have made our lives better.

Still, as Robert Gordon points out in The Rise and Fall of American Growth, because advancement has been contained so narrowly within a single field, productivity gains have been meager compared to earlier technological revolutions, such as indoor plumbing, electricity and the internal combustion engine.

There are indications that's beginning to change. These days, the world of bits is beginning to invade the world of atoms. More powerful computers are being used for genetic engineering and to design new materials. Robots, both physical and virtual, are replacing human labor in many jobs, including high-value work in medicine, law and creative tasks.

Yet again, these technologies are still fairly new and not nearly as well understood as traditional technologies. Unlike computer programming, you can’t take a course in nanotechnology, genetic engineering or machine learning at your local community college. In many cases, the cost of the equipment and expertise to create these technologies is prohibitive for most organizations.

In the 1950s and 60s, technological advancement brought increased scale to enterprises. Not only did mass production, distribution and marketing require more capital, but improved information and communication technologies made the management of a large enterprise far more feasible than ever before.

So it would stand to reason that this new era of innovation would lead to a similar trend. Only a handful of companies, like IBM, Microsoft and Google in the tech space, and corporate giants like Boeing and Procter & Gamble in more conventional categories, can afford to invest billions of dollars in fundamental research.

Yet something else seems to be happening. Cloud technologies and open data initiatives are democratizing scientific research. Consider the Cancer Genome Atlas, a program that sequences the DNA inside tumors and makes it available on the Internet. It allows researchers at small labs to access the same data as major institutions. More recently, the Materials Genome Initiative was established to do much the same for manufacturing.

In fact, today there are a wide variety of ways for small businesses to access world-class scientific research. From government initiatives like the manufacturing hubs and Argonne Design Works to incubator, accelerator and partnership programs at major corporations, the opportunities are endless for those who are willing to explore and engage.

Indeed, many large firms that I've talked to have come to see themselves as essentially utility companies, providing fundamental technology and letting smaller firms and startups explore thousands of new business models.

Innovation has come to be seen as largely a matter of agility and adaptation. Small, nimble players can adapt to changing conditions much faster than industry giants. That gives them an advantage over large, bureaucratic firms in bringing new applications to market. When technologies are well understood, much of the value is generated through the interface with the end user.

Consider Steve Jobs's development of the iPod. Although he knew that his vision of "1,000 songs in your pocket" was unachievable with available technology, he also knew that it would only be a matter of time before someone developed a hard drive with the specifications he required. When they did, he pounced, built an amazing product and a great business.

He was able to do that for two reasons. First, because the newer, more powerful hard drives worked exactly like the old ones and fit easily into Apple’s design process. Second, because the technology was so well understood, the vendor had little ability to extract large margins, even for cutting edge technology.

Yet as I explain in my book, Mapping Innovation, over the next few decades much of the value will shift back to fundamental technologies because they are not well understood, but will be essential for increasing the capability of products and services. They will require highly specialized expertise and will not fit so seamlessly into existing architectures. Rather than agility, exploration will emerge as a key competitive trait.

In short, the ones that will win in this new era will not be those with a capacity to disrupt, but those that are willing to tackle grand challenges and probe new horizons.

View post:

We Are At The Dawn of a New Era of Innovation. Will You Still Be Able to Compete? – Inc.com

Plotting a Moore’s Law for Flexible Electronics – IEEE Spectrum

Photo: IMEC Near Field Communicator: There are 1,700 transistors on the flexible chip in this NFC transmitter.

At a meeting in midtown Manhattan, Kris Myny picks up what looks like an ordinary paper business card and, with little fanfare, holds it to his smartphone. The details of the card appear almost immediately on the screen inside a custom app.

It's a simple demonstration, but Myny thinks it heralds an exciting future for flexible circuitry. In January, he began a five-year project at the nanoelectronics research institute Imec in Leuven, Belgium, to demonstrate that thin-film electronics has significant potential outside the realm of display electronics. In fact, he hopes that the project, funded with a €1.5 million grant from the European Research Council (ERC), could demonstrate that there is a path for the mass production of denser and denser flexible circuits: in other words, a Moore's Law for bendable ICs.

Five years ago, Myny and his colleagues reported that they had used organic thin-film transistors to build an 8-bit microprocessor on flexible plastic. In the years since, the group has turned its focus to IGZO, a metal-oxide semiconductor that is a mixture of indium, gallium, zinc, and oxygen. Thin-film transistors based on this substance can move charge significantly faster than their organic counterparts do; at the same time, the transistors can still be built at or around room temperature, an important requirement when attempting to fabricate electronics directly onto plastic and other materials that can be easily deformed or damaged by heat.

To build that business card, Myny and his colleagues engineered a flexible chip containing more than 1,700 thin-film IGZO transistors. What sets the chip apart from other efforts is its ability to comply with the ISO 14443-A Near Field Communication (NFC) standard. For flexible circuitry, this is a demanding set of requirements, Myny says, as it requires logic gates that are fast enough to work with the 13.56-megahertz standard carrier frequency.

Adding to the challenge is that while IGZO is an effective n-type semiconductor, allowing electrons to flow easily, it is not a particularly good p-type material; there is no comparable material that excels at permitting the flow of holes, the absence of electrons that are treated as positive charges. Today's logic uses both p- and n-type devices; the complementary pairing helps control power consumption by preventing the flow of current when transistors are not in the act of switching. With just n-type devices to work with, Myny and his colleagues have to devise a different kind of circuitry.
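
To see why an all-n-type technology forces a different circuit style, here is a toy Python model (my illustration, not Imec's actual design) contrasting an idealized complementary inverter with a resistor-load, n-type-only inverter of the kind used in pseudo-NMOS logic:

# Toy model of static power in two inverter styles. Illustrative only.
def complementary_inverter(a):
    # Exactly one of the p-/n-type pair conducts at a time, so there is
    # never a steady path from supply to ground.
    out = 0 if a else 1
    static_current = False
    return out, static_current

def n_only_inverter(a):
    # n-type transistor with a resistive pull-up (pseudo-NMOS style):
    # when the input is high the transistor pulls the output low, and
    # current flows continuously through the pull-up.
    out = 0 if a else 1
    static_current = bool(a)
    return out, static_current

for a in (0, 1):
    print(f"in={a}  complementary={complementary_inverter(a)}  n-only={n_only_inverter(a)}")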

With the ERC project, Imec aims to tackle a suite of interrelated problems in an effort to boost transistor density from 5,000 or so devices per square centimeter to 100,000. That figure isn't far from the density of thin-film transistors in conventional rigid-display backplanes today, Myny says. However, it's another matter to try to achieve that density with digital logic circuits, which require more complicated designs, and to make sure those devices are reliable and consistent when they're built on a delicate and irregular substrate.

The group also wants to prove this density is achievable outside the lab, by adapting manufacturing techniques that are already in use in display fabs. Myny says that if he and his team hit their goals, a square centimeter of fast, flexible circuitry could be built at a cost of 1 U.S. cent (assuming high-volume manufacturing). At the same time, while the density of the circuits increases, the group will also have to boost the transistor frequency and drive down power consumption to prevent overheating. The overall goal, Myny says, is to demonstrate that you can indeed make flexible circuits, that it is not science fiction but that it is going to market.

When it comes to the fabrication of complex digital circuits on flexible substrates, Imec is in my opinion the biggest player, says Niko Münzenrieder, a lecturer at the University of Sussex, in England, who specializes in flexible electronics. He notes that metal-oxide flexible circuitry is already starting to make commercial inroads, and he expects the first big applications to be in RFID and NFC technology. It's not a mature technology, he says, but it's nearly ready for everyday use.

Go here to read the rest:

Plotting a Moore’s Law for Flexible Electronics – IEEE Spectrum

What Is the Future of Computers? | Moore’s Law

Integrated circuit from an EPROM memory microchip showing the memory blocks and supporting circuitry.

In 1958, a Texas Instruments engineer named Jack Kilby cast a pattern onto the surface of an 11-millimeter-long "chip" of semiconducting germanium, creating the first ever integrated circuit. Because the circuit contained a single transistor, a sort of miniature switch, the chip could hold one "bit" of data: either a 1 or a 0, depending on the transistor's configuration.

Since then, and with unflagging consistency, engineers have managed to double the number of transistors they can fit on computer chips every two years. They do it by regularly halving the size of transistors. Today, after dozens of iterations of this doubling and halving rule, transistors measure just a few atoms across, and a typical computer chip holds 9 million of them per square millimeter. Computers with more transistors can perform more computations per second (because there are more transistors available for firing), and are therefore more powerful. The doubling of computing power every two years is known as "Moore's law," after Gordon Moore, the engineer and Intel co-founder who first noticed the trend in 1965.
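
To get a feel for how quickly that compounding adds up, here is a back-of-envelope count of two-year doublings since Kilby's 1958 chip (my own arithmetic, which ignores the messy history of real process nodes):

# Count two-year doublings since 1958 and the resulting growth factor
# in transistor count. A rough illustration of the exponential trend.
def doublings_since(year, start=1958, period_years=2):
    n = (year - start) // period_years
    return n, 2 ** n

for year in (1978, 2012):
    n, factor = doublings_since(year)
    print(f"{year}: {n} doublings, roughly {factor:,}x as many transistors")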

Moore’s law renders last year’s laptop models defunct, and it will undoubtedly make next year’s tech devices breathtakingly small and fast compared to today’s. But consumerism aside, where is the exponential growth in computing power ultimately headed? Will computers eventually outsmart humans? And will they ever stop becoming more powerful?

The singularity

Many scientists believe the exponential growth in computing power leads inevitably to a future moment when computers will attain human-level intelligence: an event known as the “singularity.” And according to some, the time is nigh.

Inventor, author and self-described "futurist" Ray Kurzweil has predicted that computers will be on par with humans within two decades. He told Time Magazine last year that engineers will successfully reverse-engineer the human brain by the mid-2020s, and that by the end of that decade, computers will be capable of human-level intelligence.

The conclusion follows from projecting Moore's law into the future. If the doubling of computing power every two years continues to hold, "then by 2030 whatever technology we're using will be sufficiently small that we can fit all the computing power that's in a human brain into a physical volume the size of a brain," explained Peter Denning, distinguished professor of computer science at the Naval Postgraduate School and an expert on innovation in computing. "Futurists believe that's what you need for artificial intelligence. At that point, the computer starts thinking for itself."

What happens next is uncertain and has been the subject of speculation since the dawn of computing.

“Once the machine thinking method has started, it would not take long to outstrip our feeble powers,” Alan Turing said in 1951 at a talk entitled “Intelligent Machinery: A heretical theory,” presented at the University of Manchester in the United Kingdom. “At some stage therefore we should have to expect the machines to take control.” The British mathematician I.J. Good hypothesized that “ultraintelligent” machines, once created, could design even better machines. “There would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make,” he wrote.

Buzz about the coming singularity has escalated to such a pitch that there's even a book coming out next month, called "Singularity Rising" (BenBella Books), by James Miller, an associate professor of economics at Smith College, about how to survive in a post-singularity world.

Brain-like processing

But not everyone puts stock in this notion of a singularity, or thinks we’ll ever reach it. “A lot of brain scientists now believe the complexity of the brain is so vast that even if we could build a computer that mimics the structure, we still don’t know if the thing we build would be able to function as a brain,” Denning told Life’s Little Mysteries. Perhaps without sensory inputs from the outside world, computers could never become self-aware.

Others argue that Moore’s law will soon start to break down, or that it has already. The argument stems from the fact that engineers can’t miniaturize transistors much more than they already have, because they’re already pushing atomic limits. “When there are only a few atoms in a transistor, you can no longer guarantee that a few atoms behave as they’re supposed to,” Denning explained. On the atomic scale, bizarre quantum effects set in. Transistors no longer maintain a single state represented by a “1” or a “0,” but instead vacillate unpredictably between the two states, rendering circuits and data storage unreliable. The other limiting factor, Denning says, is that transistors give off heat when they switch between states, and when too many transistors, regardless of their size, are crammed together onto a single silicon chip, the heat they collectively emit melts the chip.

For these reasons, some scientists say computing power is approaching its zenith. “Already we see a slowing down of Moore’s law,” the theoretical physicist Michio Kaku said in a BigThink lecture in May.

But if that’s the case, it’s news to many. Doyne Farmer, a professor of mathematics at Oxford University who studies the evolution of technology, says there is little evidence for an end to Moore’s law. “I am willing to bet that there is insufficient data to draw a conclusion that a slowing down [of Moore’s law] has been observed,” Farmer told Life’s Little Mysteries. He says computers continue to grow more powerful as they become more brain-like.

Computers can already perform individual operations orders of magnitude faster than humans can, Farmer said; meanwhile, the human brain remains far superior at parallel processing, or performing multiple operations at once. For most of the past half-century, engineers made computers faster by increasing the number of transistors in their processors, but they only recently began "parallelizing" computer processors. To work around the fact that individual processors can't be packed with extra transistors, engineers have begun upping computing power by building multi-core processors, or systems of chips that perform calculations in parallel. "This controls the heat problem, because you can slow down the clock," Denning explained. "Imagine that every time the processor's clock ticks, the transistors fire. So instead of trying to speed up the clock to run all these transistors at faster rates, you can keep the clock slow and have parallel activity on all the chips." He says Moore's law will probably continue because the number of cores in computer processors will go on doubling every two years.
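
The shift from faster clocks to more cores is easy to see in code. This minimal sketch (my example, not from the article) spreads the same independent work across several worker processes instead of trying to make one processor run faster:

# Fan independent, compute-bound tasks out across cores so each core
# can run at a modest clock rate. Python is used here only to
# illustrate the idea of parallelizing work.
from concurrent.futures import ProcessPoolExecutor

def heavy_work(n):
    # Stand-in for an independent, compute-heavy task.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    tasks = [2_000_000] * 8

    # Serial: one core grinds through the tasks one after another.
    serial = [heavy_work(n) for n in tasks]

    # Parallel: the same tasks run across multiple cores at once.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(heavy_work, tasks))

    assert serial == parallel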

And because parallelization is the key to complexity, “In a sense multi-core processors make computers work more like the brain,” Farmer told Life’s Little Mysteries.

And then there’s the future possibility of quantum computing, a relatively new field that attempts to harness the uncertainty inherent in quantum states in order to perform vastly more complex calculations than are feasible with today’s computers. Whereas conventional computers store information in bits, quantum computers store information in qubits: particles, such as atoms or photons, whose states are “entangled” with one another, so that a change to one of the particles affects the states of all the others. Through entanglement, a single operation performed on a quantum computer theoretically allows the instantaneous performance of an inconceivably huge number of calculations, and each additional particle added to the system of entangled particles doubles the performance capabilities of the computer.
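
The claim that each added particle doubles the machine's capacity is a statement about the size of the joint state space: n entangled qubits can superpose over 2^n basis states. A few lines of Python (my illustration) show how fast that grows:

# Each added qubit doubles the number of basis states the joint
# quantum state spans: n qubits -> 2**n states.
for n in (1, 2, 10, 50):
    print(f"{n} qubits span {2 ** n:,} basis states")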

If physicists manage to harness the potential of quantum computers, something they are struggling to do, Moore's law will certainly hold far into the future, they say.

Ultimate limit

If Moore’s law does hold, and computer power continues to rise exponentially (either through human ingenuity or under its own ultraintelligent steam), is there a point when the progress will be forced to stop? Physicists Lawrence Krauss and Glenn Starkman say “yes.” In 2005, they calculated that Moore’s law can only hold so long before computers actually run out of matter and energy in the universe to use as bits. Ultimately, computers will not be able to expand further; they will not be able to co-opt enough material to double their number of bits every two years, because the universe will be accelerating apart too fast for them to catch up and encompass more of it.

So, if Moore's law continues to hold as accurately as it has so far, when do Krauss and Starkman say computers must stop growing? Projections indicate that computers will encompass the entire reachable universe, turning every bit of matter and energy into part of their circuitry, in 600 years' time.
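
The 600-year figure is easier to appreciate with the arithmetic written out (my back-of-envelope restatement, not Krauss and Starkman's actual calculation): doubling every two years for 600 years means 300 doublings, a growth factor of 2^300, or roughly 10^90.

# Back-of-envelope restatement of the 600-year limit: the growth
# factor after three centuries' worth of two-year doublings.
years = 600
doublings = years // 2          # 300 doublings
factor = 2 ** doublings
print(f"{doublings} doublings -> growth factor of about 1e{len(str(factor)) - 1}")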

That might seem very soon. "Nevertheless, Moore's law is an exponential law," Starkman, a physicist at Case Western Reserve University, told Life's Little Mysteries. You can only double the number of bits so many times before you require the entire universe.

Personally, Starkman thinks Moore’s law will break down long before the ultimate computer eats the universe. In fact, he thinks computers will stop getting more powerful in about 30 years. Ultimately, there’s no telling what will happen. We might reach the singularity the point when computers become conscious, take over, and then start to self-improve. Or maybe we won’t. This month, Denning has a new paper out in the journal Communications of the ACM, called “Don’t feel bad if you can’t predict the future.” It’s about all the people who have tried to do so in the past, and failed.

This story was provided by Life's Little Mysteries, a sister site to LiveScience.

Read this article:

What Is the Future of Computers? | Moore’s Law

Moore’s Law Meet Darwin – JWN

Image: Pixabay

It is not the strongest of the species that survive, nor the most intelligent, but the one most responsive to change. We must, however, acknowledge, as it seems to me, that man with all his noble qualities … still bears in his bodily frame the indelible stamp of his lowly origin. Charles Darwin, Origin of the Species

Our species and technology are at a crossroads. Human ingenuity has created amazing machines and, for the first time in our species' existence, those machines are starting to create other, smarter machines at scale. The changes are exponential and combinatorial. It is a phenomenon with which many non-nerds are now becoming familiar, called Moore's Law.

Simply stated, we have observed over many decades that the price/performance of computation is doubling approximately every two years, what is commonly called an exponential function. The scary/exciting thing is that it is happening not only to computation power (i.e., the speed of computers, phones, etc.) but to other technologies, like the capacity of solar power, the capabilities of gene-sequencing technologies and the like. In addition, these technologies are starting to combine in the many companies of the new global entrepreneurial economy. Most worrisome is that these exponential technologies, once invented, never disappear. These genies are never placed back in the bottle. Good or evil.

But the human animal, both physically and culturally, is still evolving at Darwin's pace. Our foundational structures of governance and education (for example) have to date only been able to respond in linear ways, at best. At a more atomic level, perhaps the most linear of all of our systems is our ability as individual humans to adapt to new ideas, people and cultures. This resulting gap, called disruption by many, is growing and is unsustainable. Left unchecked, it will likely not end well.


The answer, it seems to me, is that the human species needs to figure out how to adapt, exponentially.

There are many ways of thinking that can help us move forward. Here's one:

Complex Adaptive Systems: Sustainability & Capacity before Velocity

In Alberta, we have adopted a concept for our innovation ecosystem that is based on the lessons of Complex Adaptive Systems, called the Rainforest Framework. Centered on the seminal book by Victor Hwang and Greg Horowitt, the framework and its research inform us that the key to advancing an ecosystem, small or large, is to start with culture. The authors note that if we are able to create a culture of trust, pay-it-forward and the like, and make it an explicit prerequisite of participation, then we can step on the gas with great effect. Velocity, whether linear or exponential, can only happen when individuals are acting in a way that eliminates the friction of mistrust, winner-take-all philosophies and lack of diversity.

So how does this ecosystem experiment we are conducting in Alberta relate to the broader challenges of Darwin and Moore's Law? The answer, we believe, is to change the way we understand and define our primary adaptive strategy, namely innovation. We are experimenting with a new definition of innovation that meets hyper-disruption head on. This new definition suggests that we cannot move forward as a species unless we move technology and governance together.

In short, our new definition of innovation is:

The advancement of the human condition through changes in technology matched by equal or greater advancement in social governance.

Simply advancing the technologies that make our world easier and more comfortable for an unbalanced few is not enough. As we measure technical advancement, we must also include measures of how we are progressing socially. Our ability to sustain and our capacity to absorb technological change MUST be present if we are to increase the velocity; otherwise we will continue our march toward inequality and unsustainable growth. In a previous blog I called it the Innovation of Ways versus the Innovation of Things.

We can see this everywhere in our history. The environmental movement of the 60s, for example, came on the heels of three decades of uninterrupted post-war growth; that kind of delayed social response is no longer possible in an era of exponential change. This is a critical difference: the speed of change today is exponential and combinatorial, meaning we can't wait for social movements and institutions to catch up.

Here in Alberta we are adopting a new social contract and creating a common, collective voice that is beginning to bridge economic, political and cultural silos. Our belief is that once established, we will, if we continue to embrace its philosophies, be able to push harder and move faster. We will be able to overlay a culture of entrepreneurship within this full definition of innovation and create significantly increased velocity and change.

I recently watched an excellent TED talk by Dan Pallotta called The Dreams We Haven't Dared to Dream. In his talk, Dan eloquently notes that while human ingenuity has exponentially increased the transistors on a chip for the past 40 years, we have not applied the same exponential thinking to our dreams or to human compassion. As he says, we continue to make a perverse trade-off between our future dreams and our present state of evolution.

Some describe this ethical stasis as the tyranny of the OR.

As the great Stephen Hawking said:

If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

It begins with a new definition of Innovation that matches Darwin to the relentless march of Moore's Law.

About Jim Gibson

Jim Gibson is a Calgary-based serial entrepreneur, an active leader in the Alberta innovation ecosystem, author and founder of the Rainforest Movement (www.rainforestab.ca) that seeks to move the needle forward in the culture of innovation in Alberta and Canada. His blog on innovation is here at http://thespear.co and his upcoming book, The Tip of the Spear: Our Species and Technology at a Crossroads, will be available in the fall.

Read the original here:

Moore’s Law Meet Darwin – JWN

3-in-1 device offers alternative to Moore’s law – Phys.Org

June 14, 2017, by Lisa Zyga. Illustration of the reconfigurable device with three buried gates, which can be used to create n- or p-type regions in a single semiconductor flake. Credit: Dhakras et al. 2017 IOP Publishing Ltd

In the semiconductor industry, there is currently one main strategy for improving the speed and efficiency of devices: scale down the device dimensions in order to fit more transistors onto a computer chip, in accordance with Moore’s law. However, the number of transistors on a computer chip cannot exponentially increase forever, and this is motivating researchers to look for other ways to improve semiconductor technologies.

In a new study published in Nanotechnology, a team of researchers at SUNY-Polytechnic Institute in Albany, New York, has suggested that combining multiple functions in a single semiconductor device can improve device functionality and reduce fabrication complexity, thereby providing an alternative to scaling down the device’s dimensions as the only method to improve functionality.

To demonstrate, the researchers designed and fabricated a reconfigurable device that can morph into three fundamental semiconductor devices: a p-n diode (which functions as a rectifier, for converting alternating current to direct current), a MOSFET (for switching), and a bipolar junction transistor (or BJT, for current amplification).

“We are able to demonstrate the three most important semiconductor devices (p-n diode, MOSFET, and BJT) using a single reconfigurable device,” coauthor Ji Ung Lee at the SUNY-Polytechnic Institute told Phys.org. “While these devices can be fabricated individually in modern semiconductor fabrication facilities, often requiring complex integration schemes if they are to be combined, we can form a single device that can perform the functions of all three devices.”

The multifunctional device is made of two-dimensional tungsten diselenide (WSe2), a recently discovered transition metal dichalcogenide semiconductor. This class of materials is promising for electronics applications because the bandgap is tunable by controlling the thickness, and it has a direct bandgap in single-layer form. The bandgap is one of the advantages of 2D transition metal dichalcogenides over graphene, which has zero bandgap.

In order to integrate multiple functions into a single device, the researchers developed a new doping technique. Since WSe2 is such a new material, until now there has been a lack of doping techniques. Through doping, the researchers could realize properties such as ambipolar conduction, which is the ability to conduct both electrons and holes under different conditions. The doping technique also means that all three of the functionalities are surface-conducting devices, which offers a single, straightforward way of evaluating their performance.

“Instead of using traditional semiconductor fabrication techniques that can only form fixed devices, we use gates to dope,” Lee said. “These gates can dynamically change which carriers (electrons or holes) flow through the semiconductor. This ability to change allows the reconfigurable device to perform multiple functions.

“In addition to implementing these devices, the reconfigurable device can potentially implement certain logic functions more compactly and efficiently. This is because adding gates, as we have done, can save overall area and enable more efficient computing.”

In the future, the researchers plan to further investigate the applications of these multifunctional devices.

“We hope to build complex computer circuits with fewer device elements than those using the current semiconductor fabrication process,” Lee said. “This will demonstrate the scalability of our device for the post-CMOS era.”


More information: Prathamesh Dhakras, Pratik Agnihotri, and Ji Ung Lee. “Three fundamental devices in one: a reconfigurable multifunctional device in two-dimensional WSe2.” Nanotechnology. DOI: 10.1088/1361-6528/aa7350

Journal reference: Nanotechnology

Go here to see the original:

3-in-1 device offers alternative to Moore’s law – Phys.Org

