

IBM i License Transfer Deal Comes To The Power S812 Mini – IT Jungle

March 6, 2017 Timothy Prickett Morgan

Back in the early days of the AS/400 midrange system, the processor, memory, networking, and disk and tape storage hardware embodied in the system was by far the most costly part of that system, far outweighing the cost of the systems software that ran atop it. We don't have the precise numbers at hand, but it was something like 85 percent hardware cost and 15 percent software cost.

Fast-forward a few decades, and the Moore's Law improvements in every component in the hardware mean that hardware is far less costly. But software doesn't have Moore's Law scaling; in fact, it is based on people, and they cost more every year. And so software now represents a very large portion of the overall Power Systems-IBM i setup these days. So customers are often in a position where they want newer, more powerful, and more capacious hardware but cannot inexpensively move their existing IBM i and related system program licenses over to the new iron.

IBM has not cut prices for IBM i in recent years, as far as I know, and I have to guess because it is no longer possible to get list prices for anything in an easy fashion. Even partners have to use a configurator to get pricing, and it has to be tied to a particular customer and a particular set of serial numbers on machines for this information to be disseminated. (Again, this is as far as I know.) What I do know is that the list price system on IBMLink that I used for decades is no longer there. In any event, IBM i software has gotten a little more expensive over time when gauged in U.S. dollars, and IBM is loath to cut prices. But every now and then it does something in special deals to make it a little less costly for customers with older machines to move to newer machines with regard to software pricing, and it has done it again with the new Power S812 Mini system that was announced for the IBM i and AIX operating systems back on Valentine's Day and that will be shipping on March 17.

The IBM i Power License Transfer Free promotion announced last week, like the last such deal that was announced for earlier Power8-based systems in May 2016, offers customers a waiver on the fees that Big Blue charges to move an operating system. As has been the case for many years, IBM charges $5,000 per core to move an IBM i license from an old machine to a new one. This transfer fee seems absurd, as I have pointed out before, for a low-end system where the operating system only costs $2,995 per core. Or, more precisely, as I think it costs, because that is what IBM used to charge per core in a P05 tier the last time I saw a list price on IBM i. I can see a $500 transfer fee for a license that has already been paid for, and I can make a very strong case for zero being a good fee in a world where IBM wants to get customers current. As detailed in the IBM i Processor and User Entitlement Transfer guide, IBM cushions the blow somewhat by saying that the $5,000 fee includes one year of Software Maintenance at no charge, which I think is funny for something that costs $5,000. And any Software Maintenance that you have paid for does not transfer from the old machine to the new one, also funny. But I have a warped sense of humor.

By the way, as you can see from that IBM i Processor and User Entitlement Transfer guide, the transfer fee is not a flat $5,000 across all classes of machines. That is just for a P05-class system that is transferring to another P05-class machine and within special groups organized by IBM. If you jump from Group 1 to Group 2 or Group 3 machines, the IBM i transfer fee is $18,000 per core, and from Group 4 to either Group 5 or Group 6 it costs $17,000 per core.
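To make that fee structure concrete, here is a small illustrative sketch in Python that uses only the per-core figures quoted above; the group pairs and the promo flag are simplifications for illustration, not IBM's actual pricing rules, which live in the full transfer guide.

```python
# Illustrative per-core IBM i transfer fees, using only the figures quoted
# in this article; real pricing depends on the full tier/group tables in
# the IBM i Processor and User Entitlement Transfer guide.
TRANSFER_FEE_PER_CORE = {
    ("P05", "P05"): 5000,        # small-tier to small-tier move
    ("Group 1", "Group 2"): 18000,
    ("Group 1", "Group 3"): 18000,
    ("Group 4", "Group 5"): 17000,
    ("Group 4", "Group 6"): 17000,
}

def transfer_cost(old_class, new_class, cores, promo_waiver=False):
    """Rough per-move cost; promo_waiver models the S812 deal that
    waives the fee for qualifying same-tier transfers."""
    fee = TRANSFER_FEE_PER_CORE[(old_class, new_class)]
    return 0 if promo_waiver else fee * cores

# A single-core P05-to-P05 move: $5,000 normally, $0 under the promotion.
print(transfer_cost("P05", "P05", cores=1))
print(transfer_cost("P05", "P05", cores=1, promo_waiver=True))
```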

On February 28, IBM said in an announcement to business partners that it would allow customers to transfer IBM i licenses from old machines to the new Power S812 for free, saving them the $5,000 per core charge. This is obviously a good thing, particularly if the Power S812 costs around 20 percent less than Power S822 and Power S824 machines of similar single-core, light memory configurations. Every little bit helps. But a 64 GB memory cap on IBM i setups seems a bit light, perhaps even for the single 3 GHz core that the Power S812 machine has.

To take part in the IBM i Power License Transfer fee promotion, the old machine has to have been installed for the past year or more and the new Power S812 machine has to ship between February 28 and August 31 of this year. Customers can apply this deal to up to five machines, but no more than that. As far as I know, this deal is only available in the United States and Canada, but obviously, customers all over the world should ask for the same treatment. And IBM similarly says that the transfer fee forgiveness only applies to machines moving within the same software tier as described by the guide above (not the IBM i software groups P05 through P60, which are different characterizations), but I think that anyone moving up to a higher group should at least ask for those $17,000 or $18,000 fees to be knocked down by $5,000 or abolished completely.

One more thing: Last May, when a similar IBM i license transfer deal was announced for Power S824 machines, IBM also waived the After License fee charges on Software Maintenance for customers who had let their support contracts lapse. Software Maintenance costs about 25 percent of the operating system licensing fees and is charged on an annual basis, and the After License charges can be in excess of a year's worth of Software Maintenance fees, depending on how long the contract has lapsed. This can also be a large number, and if IBM wants customers with older machines to move ahead, then it is probably wise to offer this deal again. IBM has not done so here in early 2017, but nothing prevents customers upgrading from older gear to Power8 machines of any type from asking.

Ask and ye might receive.

IBM Gives The Midrange A Valentines Day (Processor) Card

More Insight Into The Rumored Power Mini System

Geared Down, Low Cost Power IBM i Box Rumored

IBM Cuts Core And Memory Pricing On Entry Power Iron

Entry Power8 Systems Get Express Pricing, Fat Memory

Reader Feedback On Power S814 Power8 Running IBM i

Four-Core Power8 Box For Entry IBM i Shops Ships Early

IBM Wheels And Deals To Get IBM i Shops Current

IBM i Shops Pay The Power8 Hardware Premium

IBM i Runs On Two Of Five New Power8 Machines

IBM Tweaks Power Systems-IBM i Licensing Deal

More Servers Added to the IBM i License Transfer Deal

More Software Pricing Carrots for IBM i Shops

Tags: IBM i, Power S812, Power Systems



Molino Woman Kills Dog To Stop Attack – NorthEscambia.com

A Molino woman won't be charged with a crime after she shot and killed her neighbor's dog Friday morning.

The incident occurred about 9 a.m. in the 5700 block of Cedartown Road.

The woman tried to stop a dog from attacking one of her puppies. The dog turned on the woman and she shot and killed it, according to county officials. The dog did not bite her.

The owner of the dog had previously been cited numerous times by Escambia County Animal Control, officials said.


Best Buy Inc Co (BBY) Stock Dives Into the Retail Dumpster – Investorplace.com


Best Buy (NYSE:BBY) announced slightly lower same-store sales during the holiday season, and investors dumped BBY stock in a hurry.

The drop was minor, less than 1%. But it was unexpected, and it missed analyst estimates of a top-line gain of 0.5%.

For the quarter, Best Buy reported earnings of $607 million ($1.91 per share) on revenue of $13.48 billion. This compared with earnings of just $479 million ($1.40) but revenue of $13.62 billion a year earlier. More importantly, while adjusted profits of $1.95 per share easily beat estimates of $1.67, revenues fell short of analysts' expectations, also for $13.62 billion.

The revenue shortfall meant analysts threw Best Buy stock into the dumpster with retailers such as Macy's Inc. (NYSE:M), with shares down almost 5% early Wednesday, to $42.40. During the Christmas season, on Dec. 8, the shares traded as high as $49.31.

Were the analysts right, or did they just offer smart investors a bargain? Is Amazon.com, Inc. (NASDAQ:AMZN) about to kill all retailers, or is this just a case of Moore's Law in action?

Best Buy even trounced the higher earnings whisper number of $1.66 per share. But actual estimates were all over the map, with some very bearish about the company's ability to cut costs and others bullish on margins.

The Zacks Metric Model noted that BBY stock had beaten estimates for four quarters, and that shareholders had been rewarded with a 36% gain. In particular, Best Buy was posting big gains in online sales (the so-called omni-channel approach), and Zacks was expecting an upside surprise.

On the bottom line, of course, Best Buy delivered one. And considering the sizable beat, BBY shares should've rocketed higher.

But something else is at play, and that something is Moore's Law.

Moore's Law, which turned 50 in 2015, was described by Intel Corporation (NASDAQ:INTC) co-founder Gordon Moore as an expected increase in circuit density on silicon, doubling every 18 months for as far ahead as he could see, back in 1965.

But, as I have been writing for many years now, Moore's Law also turned traditional economics on its head. Moore's Law is deflationary, and the deflationary impact grows with time, as integrated circuits are incorporated into more and more things, and as the impact is compounded by its use in various ways.



Expanding the Scope of Verification – EE Journal

March 1, 2017

by Kevin Morris

Looking at the agenda for the 2017 edition of the annual DVCon, arguably the industry's premier verification conference, one sees precisely what one would expect: tutorials, keynotes, and technical sessions focused on the latest trends and techniques in the ever-sobering challenge of functional verification in the face of the relentless advance of Moore's Law.

For five decades now, our designs have approximately doubled in complexity every two years. Our brains, however, have not. Our human engineering noggins can still process just about the same amount of stuff that we could back when we left college, assuming we haven't let ourselves get too stale. That means that the gap between what we as engineers can understand and what we can design has been growing at an exponential rate for over fifty years. This gap has always presented the primary challenge for verification engineers and verification technology. Thirty years ago, we needed to verify that a few thousand transistors were toggling the right ways at the right times. Today, that number is in the billions. In order to accomplish that and span the complexity gap, we need significant leverage.
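As a rough illustration of that gap, and this is an assumption-laden sketch rather than anything from DVCon: if design complexity doubles every two years while the amount one engineer can hold in their head stays flat, the ratio between the two grows as two raised to the number of years divided by two.

```python
# Sketch of the "complexity gap": designs double every two years,
# human comprehension is assumed flat. Numbers are illustrative only.
def complexity_gap(years, doubling_period_years=2.0):
    return 2 ** (years / doubling_period_years)

for years in (10, 30, 50):
    print(f"after {years} years the gap has grown ~{complexity_gap(years):,.0f}x")
# After 50 years the gap is roughly 33 million times what it was at the start.
```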

The basic fundamentals of verification have persisted. Logic simulation has always been a mainstay, processing vectors of stimuli and expected results as fast and accurately as possible – showing us where our logic or timing has gone awry. Along the way, we started to pick up formal methods – giving us a way to prove that our functionality was correct, rather than trying to exhaustively simulate the important or likely scenarios. Parallel to those two avenues of advancement, we have been constantly struggling to optimize and accelerate the verification process. We've proceduralized verification through standards-based approaches like UVM, and we've worked to accelerate the execution of our verification processes through technologies such as FPGA-based prototyping and emulation.

Taking advantage of Moore's Law performance gains in order to accelerate the verification of our designs as they grow in complexity according to Moore's Law is, as today's kids would probably say, "kinda meta." But Moore's Law alone is not enough to keep up with Moore's Law. It's the classic perpetual-motion conundrum. There are losses in the system that prevent the process from being perfectly self-sustaining. Each technology-driven doubling of the complexity of our designs does not yield a doubling of the computation that can be achieved. We gradually accrue a deficit.

And the task of verification is constantly expanding in other dimensions as well. At first, it was enough to simply verify that our logic was correct – that the 1s, 0s, and Xs at the inputs would all propagate down to the correct results at the outputs. On top of that, we had to worry about timing and temporal effects on our logic. As time passed, it became important to verify that embedded software would function correctly on our new hardware, and that opened up an entire new world of verification complexity. Then, people got cranky about manufacturing variation and how that would impact our verification results. And we started to get curious about how things like temperature, radiation, and other environmental effects would call our verification results into question.

Today, our IoT applications span vast interconnected systems from edge devices with sensors and local compute resources through complex communication networks to cloud-based computing and storage centers and back again. We need to verify not just the function of individual components in that chain, but of the application as a whole. We need to confirm not simply that the application will function as intended – from both a hardware and software perspective – but that it is secure, robust, fault-tolerant, and stable. We need to assure that performance – throughput and latency – are within acceptable limits, and that power consumption is minimized. This problem far exceeds the scope of the current notion of verification in our industry.

Our definition of correct behavior is growing increasingly fuzzy over time as well. For example, determining whether a processed video stream "looks good" is almost impossible from a programmatic perspective. The only reliable metric we have is human eyes subjectively staring at a screen. Many other metrics for system success suffer from similar subjectivity issues. As our digital applications interact more and more directly and intimately with our human, emotional, analog world, our ability to boil verification down to a known set of zeros and ones slips ever farther from our grasp.

The increasing dominance of big data and AI-based algorithms further complicates the real-world verification picture. When the behavior of both hardware and software is too complex to model, it is far too complex to completely verify. Until some radical breakthrough occurs in the science of verification itself, we will have to be content to verify components and subsystems along fairly narrow axes and hope that confirming the quality of the flour, sugar, eggs, butter, and baking soda somehow verifies the deliciousness of the cookie.

There is no question that Moore's Law is slowly grinding to a halt. And, while that halt may give us a chance to grab a breath from the Moore's Law verification treadmill, it will by no means bring an end to our verification challenges. The fact is, if Moore's Law ends today, we can already build systems far too complex to verify. If your career is in verification, and you are competent, your job security looks pretty rosy.

But this may highlight a fundamental issue with our whole notion of verification. Verification somewhat tacitly assumes a waterfall development model. It presupposes that we design a new thing, then we verify our design, then we make and deploy the thing that we developed and verified. However, software development (and I'd argue the development of all complex hardware/software applications, such as those currently being created for IoT) follows something much more akin to agile development – where verification is a continual, ongoing process as the applications and systems evolve over time after their initial deployment.

So, let's challenge our notion of the scope and purpose of verification. Let's think about how verification serves our customers and our business interests. Let's re-evaluate our metrics for success. Let's consider how the development and deployment of products and services has changed the role of verification. Let's think about how our technological systems have begun to invert – where applications now span large numbers of diverse systems, rather than being contained within one. Moore's Law may end, but our real work in verification has just begun.

EDA. Semiconductor.


Taiwan Semiconductor Mfg. Co. Ltd. Says 5-Nano Tech to Enter Risk Production in 2019 – Motley Fool

Taiwan Semiconductor Manufacturing Company (NYSE:TSM), the largest pure-play contract chip manufacturer, reportedly said (per DigiTimes) that it intends to begin “risk production” of chips using its 5-nanometer technology in the “first half of 2019.”

It usually takes about a year from risk-production start to mass-production start, so if TSMC achieves this timeline, it should begin volume production of chips using its 5-nanometer technology in the first half of 2020.


What does this mean for TSMC investors and customers? Let’s take a closer look.

Chip manufacturers have historically tried to advance their respective manufacturing technologies at a regular pace prescribed by what is commonly referred to as Moore’s Law.

According to this “law,” the number of transistors (chips are made up of millions, if not billions, of transistors these days) that can be crammed into a given chip area doubles roughly every 24 months.

Since TSMC plans to begin mass production of its 7-nanometer technology in the first half of 2018, and mass production of its 5-nanometer technology (which should deliver a doubling of transistor density compared to its 7-nanometer technology) roughly two years later, the company is essentially following Moore’s Law (something that’s becoming much more difficult to do these days).
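A quick back-of-the-envelope check of that claim, using only the dates given here (7-nanometer volume production in the first half of 2018, 5-nanometer in the first half of 2020) and taking the stated density doubling at face value:

```python
# Back-of-the-envelope: does a 2x density step between 1H 2018 and 1H 2020
# match a "doubling roughly every 24 months" cadence?
import math

density_ratio = 2.0          # 5nm vs 7nm density gain, per the article
years_between_nodes = 2.0    # 1H 2018 -> 1H 2020

implied_doubling_period = years_between_nodes / math.log2(density_ratio)
print(f"implied doubling period: {implied_doubling_period:.1f} years")  # 2.0 years, i.e. 24 months
```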

TSMC needs to be able to deliver new manufacturing technologies at a rapid pace to satisfy the needs of its major customers. These newer technologies allow the company’s customers to cram in more features and functionality all while improving power efficiency — a clear win for performance/power sensitive applications like high-end smartphone and data center processors.

TSMC has said in the past that it aims to continue to grow its market share with each successive manufacturing technology; if the company can deliver on its stated timeline for 5-nanometer tech, then it should offer industry-leading chip density with this technology.

Investors must keep an eye on what TSMC’s key rivals in the contract chip manufacturing business – Samsung (NASDAQOTH:SSNLF) and GlobalFoundries – ultimately manage to deliver, but it seems to me that TSMC is right on track to continue to have compelling enough technology to maintain or grow market share in advanced technologies.

In the past, chipmakers have run into difficulties transitioning to newer manufacturing technologies — this stuff is getting harder with each successive generation. As good as TSMC’s recent track record has been vis-a-vis technology transitions, there’s always going to be some level of execution risk here.

Fortunately, TSMC tends to be very transparent with its investors, offering regular technology development and manufacturing ramp updates on its quarterly earnings calls. So, if there are any issues/delays, then I would expect TSMC to disclose those to investors in a timely fashion.

For what it’s worth, given the immense pressure that TSMC likely faces to keep Apple happy, I think that the odds are extremely good that we will see iPhone models launched in 2020 that will be powered by chips manufactured in TSMC’s 5-nanometer technology.

Ashraf Eassa has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Apple. The Motley Fool has the following options: long January 2018 $90 calls on Apple and short January 2018 $95 calls on Apple. The Motley Fool has a disclosure policy.


AFIS reach new levels as biometrics advance at Moore’s Law pace – SecureIDNews

Florida county solves cold case with new Automated Fingerprint Identification System

Old criminals beware. The chance that you will be identified from long-ago collected evidence is growing exponentially as biometric systems and Automated Fingerprint Identification Systems (AFIS) improve. Case in point: Pinellas County, Florida, where a man was arrested on Feb. 17, 2017 for a crime committed 25 years prior.

At the time of the 1992 sexual attack, the Tampa Bay Reporter explains, latent fingerprint evidence was collected from the scene. The prints were run through the AFIS used by the Sheriff's Office at that time, to no avail.

Fast forward to July 2016. A new AFIS from MorphoTrak had recently replaced the prior decades-old system used by the county. Latent print examiners once again processed the 25-year-old fingerprint evidence collected in the cold case. Thanks to the vastly improved matching algorithms and architecture, the new AFIS hit on a single suspect.

Several months later, the man was arrested, charged with sexual battery and taken to the jail.

In Pinellas County, the new AFIS system is returning more than 230 hits on latent print checks each month, an increase of more than 50% from the prior generation technology.

That is impressive, but is it a mere glimpse of things to come?

Biometric systems for AFIS solutions, border control, identity management, authentication, mobile ID, digital identity, etc. are advancing at a Moore's Law-style rate. Each incarnation breeds more significant advances than its predecessor, and the incarnations or generations are coming more and more rapidly.

We are in the midst of an unprecedented rise in the acceptance of biometrics. With acceptance comes investment. And this will result in massive and spiraling gains in all areas required for exponential growth: intellectual pursuit, financial investment, technical gains (chip, software, processing, cloud, et al.), government and standards interest, and more.

It is very likely that the same level of improvement seen in Pinellas County's AFIS between 1992 and 2016 will again be seen between 2016 and 2020. That next factor of advancement could take just months post-2020. And the sky is the limit from there.

New AFIS solutions hold the promise of far better identification and accuracy, streamlined human inputs and overall efficiencies, and ever-increasing processing power, storage and cross-system sharing.

One day (soon), criminals may be unable to hide.

If I had committed a past crime and left fingerprint evidence, I'd be preparing for relocation to somewhere without extradition, as the opportunity to avoid identification is shrinking rapidly.


Moore’s Law and supply chain planning systems – The 21st Century Supply Chain – Perspectives on Innovative (blog)

It was in 1965 that Dr. Gordon Moore made a prediction that changed the pace of tech. His prediction, popularly known as Moore's Law, was that the number of transistors per square inch on an integrated circuit would double every 18 months or so. As a result of the innovations attributable to the endurance of Moore's Law over the last 50+ years, we have seen significant accelerations in processing power, storage, and connectivity. These advances continue to have major implications for how companies plan their supply chains. In my nearly two decades as a supply chain professional, I have seen quite a few changes.

Let's look at some of the big shifts that have taken place in the supply chain planning space.

Early on in my career, I remember working with a large global company that had to take its interconnected global supply chain model and slice it up into distinct, independent supply chain models. This was because the processing power at the time was simply not enough to plan the supply chain in a single instance. This surgical separation of supply chains required a high degree of ingenuity: identifying the portions of the supply network with the fewest interconnections and partitioning them there. This was not the most optimal way to build a supply chain model, but they did what they could within the limitations of the technology of the day. With the advent of better processing power, they were able to consolidate these multiple instances into a single global instance, leading to a better model of their business. This is just one of many such examples.

As the hardware side of the solution benefited from Moore's Law, developers of supply chain applications, in parallel, continued to make conscious efforts to better utilize the storage, processing, and network resources available to them. This multi-pronged approach squeezed out further efficiencies and brought better scalability. Now companies are getting more adventurous with their planning and are taking planning down to the point of consumption. While there is plenty of debate within the supply chain community as to whether the data at more atomic levels is clean, trustworthy, and dense enough, and whether the extra effort needed to model down to granular levels is worth it, the fact that we are seeing technology scale to such levels of granularity is illustrative of the power of Moore's Law.

In a traditional packaged planning software deployment, the vendor sells a perpetual license for the software, helps the customer size the hardware, waits for the hardware to be set up and configured at the customer's premises, then installs the software and the middleware components needed before the software configuration can begin. This whole process can take several weeks or, in many cases, months. With Moore's Law holding its power over the decades, and the resulting gains in processing, storage, and network speeds, newer delivery models prevailed. Supply chain planning capability is now being provided in a Software as a Service (SaaS) model. Immediately upon executing the necessary contracts, customers can start accessing the software, so the project can begin in earnest. This is shifting the focus from technology enablement to business capability enablement. I remember the days when prospects approached the cloud with skepticism, specifically around the security of cloud-based systems. Now, while I still see a number of prospects asking questions around security as part of the RFP (Request for Proposal) process, it is fair to say that the security discussion in most instances turns out to be a set of quick conversations with the customer's IT teams. There is, in general, a growing acknowledgement that a SaaS vendor catering to many customers is better equipped to handle security vulnerabilities than any one company's IT organization.

One added advantage of the move to the cloud is accessibility. Until a few years ago, every RFP looking for global deployment of supply chain planning systems used to contain questions around accessibility on dial-up lines and such in developing nations. Now it is not as often that I see questions about network speed and accessibility. With tech becoming accessible across the globe and with the increasing availability of bandwidth, I am seeing fewer companies ask about access from different geographies. Instead, the questions are more geared around access from various mobile devices, which is becoming a core requirement. The SaaS model renders itself very well to such support across varied devices and form factors. SaaS is illustrative of the symbiotic progress between hardware and software delivery models powered by Moore's Law.

While there is plenty of talk about the rise of the machines and autonomous supply chains, the newer forms of planning technology are in fact helping get the best out of bringing humans and machines together, rather than making humans redundant. The previous generations of planning technology were very much waterfall-oriented, with Demand Planning followed by Supply Planning, followed by Capacity Planning, and so on. That severely undermined the role of human intelligence in supply chain planning. The well-intentioned users of such systems spend more time gathering and preparing data and piecing together information from outdated data using Excel macros and such. Also, building an S&OP capability on such underlying technology is turning out to be an expensive band-aid for several organizations.

Such batch, waterfall-oriented planning is giving way to near real-time concurrent planning supported by what-if scenarios and social collaboration. Supported by technologies such as in-memory computing, concurrent planning can happen at a scale like we have not seen before. Such advances in planning at the speed of business can also better leverage advances in IoT, machine learning, and data science. Batch-oriented supply chain planning capabilities of the previous generation are not fit to consume the real-time digital signals from smart, connected devices and course-correct as needed. Having a system that can supplement human intelligence so planners can make decisions at the speed of business can be very empowering.

Now it is becoming very realistic and affordable to represent the model of an end-to-end network of a large corporation, with all its assumptions and parameters, and simulate the response strategies to the various stimuli the supply chain receives. Linear approximations of highly non-linear supply chains are giving way to more realistic modeling of supply networks.

All in all, Moore's Law has had a major impact on supply chain planning capabilities. Significant gaps still exist between the art of the possible with a new way of concurrent planning and how many organizations run their supply chain planning processes in a batch-oriented manner today. My advice to companies embarking on supply chain transformation: the future is here! Challenge yourself on whether the old ways of planning will meet the needs of the organizations of the present day. If Moore's Law helped get unprecedented computing power right in your pocket in the form of a smartphone, what can it do for your supply chain? The possibilities are limitless. You just need to be open to exploring them!

As Vice President of Industry Strategy at Kinaxis, Madhav serves as a trusted advisor for our customers through sales and implementation, ensuring success. He also engages with our strategic customers and key industry leaders to drive thought leadership and innovation. Madhav joined Kinaxis in the summer of 2016, bringing many years of experience in Supply Chain Management across various industries. Madhav started his professional career at i2 (which was later acquired by JDA). During his 17+ year tenure at i2/JDA, Madhav played numerous roles in Customer Support, Consulting, Presales, and Product Management. During his illustrious career, he was instrumental in helping enable numerous large scale transformational supply chain opportunities. He is very passionate about Supply Chain Management and the role it plays in making the world a better place. He shares this passion with others through his engagements and writings. Madhav has a Ph.D. in Chemical Engineering from University of Florida and a B.Tech. in Chemical Engineering from Indian Institute of Technology (IIT), Madras.

More blog posts by Dr. Madhav Durbha


Chris Rowen: Neural Networks – The New Moore’s Law – Design and Reuse (press release) (blog)

In addition to being the master of ceremonies for the recent embedded neural network symposium, Chris Rowen also presented his own thoughts. Chris used to be the CTO of Tensilica, and after Cadence acquired them he became the CTO of the IP group. Last year he left to create a startup in the deep learning space, called Cognite Ventures.

Something Chris pointed out last year at the previous summit was that 99% of captured raw data are pixels (photographs and video). This dwarfs everything else such as sound and motion. Starting in 2015, there are more image sensors in the world than there are people, and the amount of data that they produce is staggering (10^10 sensors x 10^8 pixels/second = 10^18 pixels/second). Making sense of all this raw data requires computer cognition.
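The arithmetic behind that estimate is easy to check directly; the sensor count and per-sensor pixel rate below are the order-of-magnitude figures Rowen cites, treated here as assumptions.

```python
# Order-of-magnitude check of Rowen's pixel-rate estimate.
sensors = 10**10            # image sensors in the world (order of magnitude)
pixels_per_second = 10**8   # pixels produced per sensor per second

total = sensors * pixels_per_second
print(f"{total:.0e} pixels/second")  # 1e+18
```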



Large-Scale Quantum Computing Prototype on Horizon – The Next Platform

February 16, 2017 Jeffrey Burt

What supercomputers will look like in the future, post-Moore's Law, is still a bit hazy. As exascale computing comes into focus over the next several years, system vendors, universities, and government agencies are all trying to get a gauge on what will come after that. Moore's Law, which has driven the development of computing systems for more than five decades, is coming to an end as the challenge of making smaller chips loaded with more and more features becomes increasingly difficult.

While the rise of accelerators, like GPUs, FPGAs and customized ASICs, silicon photonics and faster interconnects will help drive performance to meet many of the demands of such emerging applications as artificial intelligence and machine learning, data analytics, autonomous vehicles and the Internet of Things, down the road new computing paradigms will have to be developed to address future workload challenges. Quantum computing is among the possibilities being developed as a possible solution as vendors look to map out their pathways into the future.

Intel, which has driven Moore's Law forward more successfully than any other chip maker, is now turning some of its attention to the next step in computing. CEO Brian Krzanich said last week during the company's investor event that Intel is investing a lot of time, effort, and money in both quantum computing and neuromorphic computing (developing systems that can mimic the human brain), and Mark Seager, Intel Fellow and CTO for the HPC ecosystem in the chip maker's Scalable Datacenter Solutions Group, told The Next Platform that "at Intel, we are serious about other aspects of AI like cognitive computing and neuromorphic computing. Our way of thinking about AI is more broad than just machine learning and deep learning, but having said that, the question is how the technologies required for these workloads are converging with HPC."

Quantum computing has been talked about for decades, and there have been projects pushing the idea for almost just as long. It holds out the promise of systems that are multiple times faster than current supercomputers. At the core of quantum computers are qubits, which are to quantum systems what bits are to traditional computers.

IBM last year made its quantum computing capabilities available on the IBM Cloud to give the public access to the technology and to drive innovation and new applications for it. Big Blue has been working on quantum computing technology for more than three decades. D-Wave currently is the only company to offer commercial quantum computing systems, and last month it introduced its latest version, the D-Wave 2000Q, which has 2,000 qubits, twice the number of its predecessor, and has its first customer in Temporal Defense Systems, which will use the system to address cybersecurity threats. The systems are expensive, reportedly in the $15 million range, and the number of applications that can run on them is small, though D-Wave officials told The Next Platform that the number of applications will grow over the next decade and that the company is working to encourage that growth.

Other organizations also are pushing to expand the capabilities of quantum computing. Researchers led by Prof. Winfried Hensinger, head of the Ion Quantum Technology Group at the University of Sussex in England, this month unveiled a blueprint for building a modular, large-scale, and highly scalable quantum computer, and they plan to build a prototype of the system at the university. The modular design, and a unique way of moving qubits between the modules, are at the center of what the researchers, who also come from the United States, Denmark, Japan, and Germany, are developing. Qubits take advantage of what is called superposition in quantum mechanics: the ability to have values of 1 and 0 at the same time. That ability fuels much of the promise of quantum computers being significantly faster than conventional systems.

Quantum physics is a very strange theory, predicting things like an atom being in two different places at the same time; we're harnessing these very strange effects in order to build a new type of computer. These quantum computers will change all of our lives, revolutionizing science, medicine and commerce.

The computer will be built through modules that contain an electronics layer, a cooling layer using liquid nitrogen, and piezo actuators. Each module will be lowered into a steel frame, and the modules will leverage connections created via electric fields that transmit ions from one module to the next. It's a step in another direction from the fiber optic technologies many scientists are advocating for quantum computers.

The researchers in Sussex argue that using electric fields to transport the charged atoms will offer connection speeds between the modules that are 100,000 times faster than current fiber technologies and, according to Hensinger, will "allow us to build a quantum computer of any size [and] allow us to achieve phenomenal processing powers." Each module will hold about 2,500 qubits, enabling a complete system that can contain 2 billion or more qubits.
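Taking those figures at face value, a back-of-the-envelope sketch of the implied module count looks like this; it ignores overhead such as error-correction qubits, which the article does not break out.

```python
# Rough module count implied by the article's figures.
qubits_per_module = 2_500
target_qubits = 2_000_000_000

modules_needed = target_qubits // qubits_per_module
print(f"{modules_needed:,} modules")  # 800,000 modules
```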

The blueprint and prototype will be the latest step in what is sure to be an ongoing debate about what quantum computers will look like. However, creating a modular system that can scale quickly and offers a very fast connectivity technology will help drive the discussion forward. Hensinger and his colleagues are making the blueprint public in hopes that other scientists will take in what they're developing and build off of it.

Categories: Compute

Tags: Quantum Computing



Could going beyond Moore’s Law open trillion dollar markets? – Scoop.co.nz (press release)

Press Release #2 Multicore World 2017

Could going beyond Moore's Law open trillion dollar markets for New Zealand?

"Technology is advancing at a faster rate than society's expectations," says Paul Fenwick, keynote speaker at Multicore World 2017, Wellington, February 20 – 22.

"We can go from science fiction to consumer availability, with very little in the way of discussion in between. But the questions they raise are critically important," says the Australian, one of a number of global experts at a world-leading forum on what is possible with the vastly underutilised computing processing power now available.

"Not many look at critical questions such as: What happens when self-driving vehicles cause unemployment, when medical expert systems work on behalf of insurance agencies rather than patients, and when weapon platforms make their own lethal decisions?" he says.

Conference Director Nicolas Erdody says MW17 is much more than a talk-fest.

Erdody says that 90% of all the data in the world has been generated in the past two years, a pattern that will keep repeating. "How on earth will we process these massive amounts of data, and actually make meaningful sense and use of it?" he asks.

Among the industry, academic, and research experts is Prof Michelle Simmons.

She is an Australian Research Council Laureate Fellow and Director at the Centre for Quantum Computation & Communication Technology, UNSW. She will describe the emerging field of quantum information, a response to the fact that device miniaturization will soon reach the atomic limit, set by the discreteness of matter, leading to intensified research in alternative approaches for creating logic devices of the future.

Prof Satoshi Matsuoka (Japan) will present his keynote "Flops to Bytes: Accelerating Beyond Moore's Law," and Dr John Gustafson (former Director of Intel Labs, now Visiting Professor at the National University of Singapore) will reveal a new data type called the posit, which provides a better solution for approximate computing.

In this context, New Zealander Prof Michael Kelly, Prince Philip Professor of Technology at the University of Cambridge (UK), will ask in his keynote "How Might the Manufacturability of the Hardware at Device Level Impact on Exascale Computing?"

Dr Nathan DeBardeleben of Los Alamos National Laboratory (US) will discuss how supercomputer resilience and fault tolerance are increasingly challenging areas of extreme-scale computing research as agencies and companies strive to solve the most critical problems. In his talk he will discuss how data analytics and machine learning techniques are being applied to influence the design, procurement, and operation of some of the world's largest supercomputers.

The assemblage of big brains around multicore computing and parallel programming will pose questions and answers as the world moves towards exascale computing in the next decade. "Being part of such discussions can position New Zealand technologists, entrepreneurs and scientists at the intersection of two massive global markets that will benefit this country's future growth: decision-making (estimated at $2 trillion) and food and agriculture (estimated at $5 trillion)," says Nicolas Erdody, Open Parallel CEO and MW17 Conference Organiser.

The 6th annual Multicore World, to be held at Shed 6, will discuss these and other big questions. MW17 will be three days of intensive talks, panels, and discussion in a think-tank format that allows plenty of time for one-on-one meetings.

The conference is organised by Open Parallel Ltd (New Zealand) and sponsored by MBIE, Catalyst IT, NZRise and Oracle Labs.

ENDS



Intel Corporation (NASDAQ:INTC) Realizes There Will Be A Post-Moore’s Law Era And Is Already Investing In … – Seneca Globe

Intel Corporation (NASDAQ:INTC) stock was down around 0.34% in the early session on trading volume of 44.07 million shares. Intel (INTC) declared that it realizes there will be a post-Moore's Law era and is already investing in technologies to drive computing beyond today's PCs and servers. The chipmaker is investing heavily in quantum and neuromorphic computing, said Brian Krzanich, CEO of Intel, during a question-and-answer session at the company's investor day on Thursday.

"We are investing in those edge type things that are way out there," Krzanich said. To give an idea of how far out these technologies are, Krzanich said his daughter would perhaps be running the company by then.

Researching these technologies, which are still in their infancy, is something Intel has to do to survive for many more decades. Shrinking silicon chips and cramming more features into them is becoming difficult, and Intel is already having trouble manufacturing smaller chips.

The stock showed a weekly performance of -3.23% and a monthly performance of -3.67%. Performance for the quarter was 2.42% and for the year 29.15%, while the year-to-date performance stood at -1.87%. INTC has a 14-day Average True Range of 0.57.

Cirrus Logic, Inc. (NASDAQ:CRUS) was another active mover, with the stock down around 2.13% to trade at $54.19.

In its most recent quarterly results, the company reported a current ratio of 3.90, a debt-to-equity ratio of 0.09, and a long-term debt-to-equity ratio of 0.09. The company has a gross margin of 49.10% and a trailing-twelve-month profit margin of 16.80%, and a return on investment of 12.50%.



The End of Moore's Law – Rodney Brooks

I have been working on an upcoming post about megatrends and how they drive tech. I had included the end of Moore's Law to illustrate how the end of a megatrend might also have a big influence on tech, but that section got away from me, becoming much larger than the sections on each individual current megatrend. So I decided to break it out into a separate post and publish it first. Here it is.

Moore's Law, concerning what we put on silicon wafers, is over after a solid fifty year run that completely reshaped our world. But that end unleashes lots of new opportunities.

Moore, Gordon E., "Cramming more components onto integrated circuits," Electronics, Vol. 38, No. 8, April 19, 1965.

Electronics was a trade journal that published monthly, mostly, from 1930 to 1995. Gordon Moore's four-and-a-half-page contribution in 1965 was perhaps its most influential article ever. That article not only articulated the beginnings, and it was the very beginnings, of a trend, but the existence of that articulation became a goal/law that has run the silicon-based circuit industry (which is the basis of every digital device in our world) for fifty years. Moore was a Caltech PhD, cofounder in 1957 of Fairchild Semiconductor, and head of its research and development laboratory from 1959. Fairchild had been founded to make transistors from silicon at a time when they were usually made from much slower germanium.

One can find many files on the Web that claim to be copies of the original paper, but I have noticed that some of them have the graphs redrawn and that they are sometimes slightly different from the ones that I have always taken to be the originals. Below I reproduce two figures from the original that as far as I can tell have only been copied from an original paper version of the magazine, with no manual/human cleanup.

The first one that I reproduce here is the money shot for the origin of Moore's Law. There was, however, an equally important earlier graph in the paper, which was predictive of the future yield over time of functional circuits that could be made from silicon. It had less actual data than this one, and as we'll see, that is really saying something.

This graph is about the number of components on an integrated circuit. An integrated circuit is made through a process that is like printing. Light is projected onto a thin wafer of silicon in a number of different patterns, while different gases fill the chamber in which it is held. The different gases cause different light-activated chemical processes to happen on the surface of the wafer, sometimes depositing some types of material and sometimes etching material away. With precise masks to pattern the light, and precise control over temperature and duration of exposures, a physical two dimensional electronic circuit can be printed. The circuit has transistors, resistors, and other components. Lots of them might be made on a single wafer at once, just as lots of letters are printed on a single page at once. The yield is how many of those circuits are functional; small alignment or timing errors in production can screw up some of the circuits in any given print. Then the silicon wafer is cut up into pieces, each containing one of the circuits, and each is put inside its own plastic package with little legs sticking out as the connectors; if you have looked at a circuit board made in the last forty years you have seen it populated with lots of integrated circuits.

The number of components in a single integrated circuit is important. Since the circuit is printed it involves no manual labor, unlike earlier electronics where every single component had to be placed and attached by hand. Now a complex circuit which involves multiple integrated circuits only requires hand construction (later this too was largely automated) to connect up a much smaller number of components. And as long as one has a process which gets good yield, it is constant time to build a single integrated circuit, regardless of how many components are in it. That means fewer total integrated circuits need to be connected by hand or machine. So, as Moore's paper's title references, cramming more components into a single integrated circuit is a really good idea.

The graph plots the logarithm base two of the number of components in an integrated circuit on the vertical axis against calendar years on the horizontal axis. Every notch upwards on the left doubles the number of components. So while a 3 on that axis means 2^3 = 8 components, a 13 means 2^13 = 8,192 components. That is a thousand-fold increase from 1962 to 1972.

There are two important things to note here.

The first is that he is talking about components on an integrated circuit, not just the number of transistors. Generally there are many more components than transistors, though the ratio did drop over time as different fundamental sorts of transistors were used. But in later years Moore's Law was often turned into purely a count of transistors.

The other thing is that there are only four real data points here in this graph, which he published in 1965. In 1959 the number of components is 2^0 = 1, i.e., that is not about an integrated circuit at all, just about single circuit elements; integrated circuits had not yet been invented. So this is a null data point. Then he plots four actual data points, which we assume were taken from what Fairchild could produce, for 1962, 1963, 1964, and 1965, having 2^3 = 8, 2^4 = 16, 2^5 = 32, and 2^6 = 64 components. That is a doubling every year. It is an exponential increase in the true sense of exponential.
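A tiny sketch of that extrapolation, assuming the yearly doubling Moore observed and the 1962 starting point of 8 components reconstructed from the graph:

```python
# Moore's 1965 extrapolation: components double every year.
# Starting point reconstructed from the graph: 8 components in 1962.
components = {1962: 8}
for year in range(1963, 1973):
    components[year] = components[year - 1] * 2

print(components[1965])   # 64, the last real data point in the paper
print(components[1972])   # 8192, the ten-year projection, about 1000x the 1962 value
```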

What is the mechanism for this, how can this work? It works because it is in the digital domain, the domain of yes or no, the domain of 0 or 1.

In the last half page of the four and a half page article Moore explains the limitations of his prediction, saying that for some things, like energy storage, we will not see his predicted trend. Energy takes up a certain number of atoms and their electrons to store a given amount, so you can not just arbitrarily change the number of atoms and still store the same amount of energy. Likewise if you have a half gallon milk container you can not put a gallon of milk in it.

But the fundamental digital abstraction is yes or no. A circuit element in an integrated circuit just needs to know whether a previous element said yes or no, whether there is a voltage or current there or not. In the design phase one decides above how many volts or amps, or whatever, means yes, and below how many means no. And there needs to be a good separation between those numbers, a significant no man's land compared to the maximum and minimum possible. But the magnitudes do not matter.

I like to think of it like piles of sand. Is there a pile of sand on the table or not? We might have a convention about how big a typical pile of sand is. But we can make it work if we halve the normal size of a pile of sand. We can still answer whether or not there is a pile of sand there using just half as many grains of sand in a pile.

And then we can halve the number again. And the digital abstraction of yes or no still works. And we can halve it again, and it still works. And again, and again, and again.
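A minimal sketch of that yes-or-no abstraction, with invented threshold values: as long as the yes band and the no band stay separated by a no man's land, you can keep shrinking the signal, the pile of sand, and the logic still gives the same answers.

```python
# Toy model of the digital abstraction: classify an analog level as
# yes (1), no (0), or indeterminate. Threshold fractions are invented
# for illustration and scale down along with the signal.
def classify(level, full_scale):
    yes_threshold = 0.7 * full_scale   # above this means "yes"
    no_threshold = 0.3 * full_scale    # below this means "no"
    if level >= yes_threshold:
        return 1
    if level <= no_threshold:
        return 0
    return None  # no man's land: designs must avoid landing here

# Halving the full scale (the "pile of sand") repeatedly still yields
# clean answers, because everything shrinks together.
for full_scale in (1.0, 0.5, 0.25, 0.125):
    print(classify(0.9 * full_scale, full_scale), classify(0.1 * full_scale, full_scale))
# prints "1 0" every time
```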

This is what drives Moore's Law, which in its original form said that we could expect to double the number of components on an integrated circuit every year for 10 years, from 1965 to 1975. That held up!

Variations of Moore's Law followed; they were all about doubling, but sometimes doubling different things, and usually with slightly longer time constants for the doubling. The most popular versions were doubling of the number of transistors, doubling of the switching speed of those transistors (so a computer could run twice as fast), doubling of the amount of memory on a single chip, and doubling of the secondary memory of a computer, originally on mechanically spinning disks, but for the last five years in solid state flash memory. And there were many others.

Let's get back to Moore's original law for a moment. The components on an integrated circuit are laid out on a two dimensional wafer of silicon. So to double the number of components for the same amount of silicon you need to double the number of components per unit area. That means that the size of a component, in each linear dimension of the wafer, needs to go down by a factor of the square root of two. In turn, that means that Moore was seeing the linear dimension of each component go down to one over the square root of two, about 71 percent, of what it was in a year, year over year.

But why was it limited to just a measly factor of two per year? Given the pile of sand analogy from above, why not just go to a quarter of the size of a pile of sand each year, or one sixteenth? It gets back to the yield one gets, the number of working integrated circuits, as you reduce the component size (most commonly called feature size). As the feature size gets smaller, the alignment of the projected patterns of light for each step of the process needs to get more accurate. Since the linear dimension of each component shrinks by a factor of the square root of two, approximately 1.4, the alignment needs to get better by that same factor each time the component area is halved. And because of impurities in the materials that are printed on the circuit, the material from the gasses that are circulating and that are activated by light, the gas needs to get more pure, so that there are fewer bad atoms in each component, now half the area of before. Implicit in Moore's Law, in its original form, was the idea that we could expect the production equipment to get better by about a factor of the square root of two per year, for 10 years.

For various forms of Moore's Law that came later, the time constant stretched out to 2 years, or even a little longer, for a doubling, but nevertheless the processing equipment has gotten that much better, time period over time period, again and again.

To see the magic of how this works, let's just look at 25 doublings. The equipment has to operate with things that are smaller by a factor of the square root of two raised to the 25th power, i.e., roughly 5,793 times smaller. But we can fit 2^25 times more components in a single circuit, which is 33,554,432 times more. The accuracy of our equipment has improved 5,793 times, but that has gotten a further acceleration of 5,793 times on top of the original 5,793 times due to the linear-to-area impact. That is where the payoff of Moore's Law has come from.
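Here is that arithmetic spelled out; it uses nothing beyond the doubling relationship described above.

```python
import math

doublings = 25
linear_shrink = math.sqrt(2) ** doublings   # how much finer the equipment must work
component_gain = 2 ** doublings             # how many more components fit on one circuit

print(f"{linear_shrink:,.0f}x smaller features")    # ~5,793
print(f"{component_gain:,} times more components")  # 33,554,432
```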

In his original paper Moore only dared project out, and only implicitly, that the equipment would get better every year for ten years. In reality, with somewhat slowing time constants, that has continued to happen for 50 years.

Now it is coming to an end. But not because the accuracy of the equipment needed to give good yields has stopped improving. No. Rather it is because those piles of sand we referred to above have gotten so small that they only contain a single metaphorical grain of sand. We can't split the minimal quantum of a pile into two any more.

Perhaps the most remarkable thing is Moore's foresight into how this would have an incredible impact upon the world. Here is the first sentence of his second paragraph:

Integrated circuits will lead to such wonders as home computers – or at least terminals connected to a central computer – automatic controls for automobiles, and personal portable communications equipment.

This was radical stuff in 1965. So-called minicomputers were still the size of a desk, and to be useful usually had a few peripherals such as tape units, card readers, or printers, which meant they would be hard to fit into a home kitchen of the day, even with the refrigerator, oven, and sink removed. Most people had never seen a computer and even fewer had interacted with one, and those who had, had mostly done it by dropping off a deck of punched cards, and a day later picking up a printout from what the computer had done when humans had fed the cards to the machine.

The electrical systems of cars were unbelievably simple by today's standards, with perhaps half a dozen on-off switches, and simple electromechanical devices to drive the turn indicators, windshield wipers, and the distributor which timed the firing of the spark plugs – every single function-producing piece of mechanism in auto electronics was big enough to be seen with the naked eye. And personal communications devices were rotary dial phones, one per household, firmly plugged into the wall at all times. Or handwritten letters that needed to be dropped into the mail box.

That sentence quoted above, given when it was made, is to me the bravest and most insightful prediction of technology future that we have ever seen.

By the way, the first computer made from integrated circuits was the guidance computer for the Apollo missions, one in the Command Module, and one in the Lunar Lander. The integrated circuits were made by Fairchild, Gordon Moore's company at the time. The first version had 4,100 integrated circuits, each implementing a single 3-input NOR gate. The more capable manned flight versions, which first flew in 1968, had only 2,800 integrated circuits, each implementing two 3-input NOR gates. Moore's Law had its impact on getting to the Moon, even in the Law's infancy.

In the original magazine article a cartoon appears, showing a department store salesman holding up a "Handy Home Computer."

At a fortieth anniversary celebration of Moore's Law at the Chemical Heritage Foundation in Philadelphia I asked Dr. Moore whether this cartoon had been his idea. He replied that he had nothing to do with it, and it was just there in the magazine in the middle of his article, to his surprise.

Without any evidence at all on this, my guess is that the cartoonist was reacting somewhat skeptically to the sentence quoted above. The cartoon is set in a department store; back then US department stores often had a Notions department, although this is not something of which I have any personal experience, as they are long gone (and I first set foot in the US in 1977). It seems that notions is another word for haberdashery, i.e., pins, cotton, ribbons, and generally things used for sewing. As still today, there is also a Cosmetics department. And plop in the middle of them is the Handy Home Computers department, with the salesman holding a computer in his hand.

I am guessing that the cartoonist was making fun of this idea, trying to point out the ridiculousness of it. It all came to pass in only 25 years, including being sold in department stores. Not too far from the cosmetics department. But the notions departments had all disappeared. The cartoonist was right in the short term, but blew it in the slightly longer term.

There were many variations on Moore's Law, not just his original about the number of components on a single chip.

Amongst the many there was a version of the law about how fast circuits could operate, as the smaller the transistors were the faster they could switch on and off. There were versions of the law for how much RAM memory, main memory for running computer programs, there would be and when. And there were versions of the law for how big and fast disk drives, for file storage, would be.

This tangle of versions of Moore's Law had a big impact on how technology developed. I will discuss three modes of that impact: competition, coordination, and herd mentality in computer design.

Competition

Memory chips are where data and programs are stored as they are run on a computer. Moore's Law applied to the number of bits of memory that a single chip could store, and a natural rhythm developed of that number of bits going up by a multiple of four on a regular but slightly slowing basis. By jumping over just a doubling, the cost of the silicon foundries could be depreciated over a long enough time to keep things profitable (today a silicon foundry is about a $7B capital cost!), and furthermore it made sense to double the number of memory cells in each dimension to keep the designs balanced, again pointing to a step factor of four.

In the very early days of desktop PCs memory chips had 2^14 (16,384) bits. The memory chips were called RAM (Random Access Memory – i.e., any location in memory took equally long to access; there were no slower or faster places), and a chip of this size was called a 16K chip, where K means not exactly 1,000, but instead 1,024 (which is 2^10). Many companies produced 16K RAM chips. But they all knew from Moore's Law when the market would be expecting 64K RAM chips to appear. So they knew what they had to do to not get left behind, and they knew when they had to have samples ready for engineers designing new machines so that just as the machines came out their chips would be ready to be used, having been designed in. And they could judge when it was worth getting just a little ahead of the competition at what price. Everyone knew the game (and in fact all came to a consensus agreement on when the Moore's Law clock should slow down just a little), and they all competed on operational efficiency.
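A tiny sketch of that rhythm, using only the K-means-1,024 convention and the factor-of-four step described above:

```python
kbit = 1024                     # "K" here is 2**10 = 1,024, not 1,000

capacity_bits = 16 * kbit       # a "16K" chip holds 2**14 bits
for generation in range(5):
    print(f"{capacity_bits // kbit}K bits")   # 16K, 64K, 256K, 1024K (1M), 4096K (4M)
    capacity_bits *= 4          # doubling the cells in each dimension gives 4x the capacity
```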

Coordination

Technology Review talks about this in their story on the end of Moore's Law. If you were the designer of a new computer box for a desktop machine, or any other digital machine for that matter, you could look at when you planned to hit the market and know what amount of RAM memory would take up what board space because you knew how many bits per chip would be available at that time. And you knew how much disk space would be available at what price and what physical volume (disks got smaller and smaller diameters just as they increased the total amount of storage). And you knew how fast the latest processor chip would run. And you knew what resolution display screen would be available at what price. So a couple of years ahead you could put all these numbers together and come up with what options and configurations would make sense by the exact when you were going to bring your new computer to market.

The company that sold the computers might make one or two of the critical chips for their products but mostly they bought other components from other suppliers. The clockwork certainty of Moores Law let them design a new product without having horrible surprises disrupt their flow and plans. This really let the digital revolution proceed. Everything was orderly and predictable so there were fewer blind alleys to follow. We had probably the single most sustained continuous and predictable improvement in any technology over the history of mankind.

Herd mentality in computer design

But with this good came some things that might be viewed negatively (though Im sure there are some who would argue that they were all unalloyed good). Ill take up one of these as the third thing to talk about that Moores Law had a major impact upon.

A particular form of general purpose computer design had arisen by the time that central processors could be put on a single chip (see the Intel 4004 below), and soon those processors on a chip, microprocessors as they came to be known, supported that general architecture. That architecture is known as the von Neumann architecture.

A distinguishing feature of this architecture is that there is a large RAM memory which holds both instructions and data – made from the RAM chips we talked about above under coordination. The memory is organized into consecutive indexable (or addressable) locations, each containing the same number of binary bits, or digits. The microprocessor itself has a few specialized memory cells, known as registers, and an arithmetic unit that can do additions, multiplications, divisions (more recently), etc. One of those specialized registers is called the program counter (PC), and it holds an address in RAM for the current instruction. The CPU looks at the pattern of bits in that current instruction location and decodes them into what actions it should perform. That might be an action to fetch another location in RAM and put it into one of the specialized registers (this is called a LOAD), or to send the contents the other direction (STORE), or to take the contents of two of the specialized registers, feed them to the arithmetic unit, and take their sum from the output of that unit and store it in another of the specialized registers. Then the central processing unit increments its PC and looks at the next consecutive addressable instruction. Some specialized instructions can alter the PC and make the machine go to some other part of the program, and this is known as branching. For instance, if one of the specialized registers is being used to count down how many elements of an array of consecutive values stored in RAM have been added together, right after the addition instruction there might be an instruction to decrement that counting register, and then branch back earlier in the program to do another LOAD and add if the counting register is still more than zero.

That's pretty much all there is to most digital computers. The rest is just hacks to make them go faster, while still looking essentially like this model. But note that the RAM is used in two ways by a von Neumann computer: to contain data for a program and to contain the program itself. We'll come back to this point later.
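To make the fetch-decode-execute loop concrete, here is a minimal toy simulator in Python. The instruction set is made up for illustration, not any real processor's, but it follows the model just described: one flat RAM holding both the program and the data, a program counter, one register, and a branch, summing a small array.

```python
RAM = [0] * 32

# The program itself lives in RAM, starting at address 0.  Each instruction
# is a (opcode, operand) pair; a real machine would encode these as bits.
RAM[0:9] = [
    ("LOADI", 27),  # acc <- RAM[RAM[27]]    fetch the next array element
    ("ADDM", 25),   # acc <- acc + RAM[25]   add the running total
    ("STORE", 25),  # RAM[25] <- acc         store the total back
    ("INCM", 27),   # RAM[27] <- RAM[27]+1   advance the element pointer
    ("LOAD", 26),   # acc <- RAM[26]         load the loop counter
    ("SUBI", 1),    # acc <- acc - 1         one fewer element to go
    ("STORE", 26),  # RAM[26] <- acc         store the counter back
    ("BNZ", 0),     # if acc != 0, branch back to address 0
    ("HALT", 0),
]

# The data lives in the very same RAM, further along.
RAM[20:25] = [3, 1, 4, 1, 5]   # the array to sum
RAM[25] = 0                    # the running total
RAM[26] = 5                    # loop counter: elements left
RAM[27] = 20                   # pointer to the next element

pc, acc = 0, 0                 # program counter and one general register
while True:
    op, arg = RAM[pc]          # fetch and decode the current instruction
    pc += 1                    # by default, fall through to the next instruction
    if op == "LOADI":   acc = RAM[RAM[arg]]
    elif op == "LOAD":  acc = RAM[arg]
    elif op == "ADDM":  acc += RAM[arg]
    elif op == "SUBI":  acc -= arg
    elif op == "STORE": RAM[arg] = acc
    elif op == "INCM":  RAM[arg] += 1
    elif op == "BNZ":   pc = arg if acc != 0 else pc
    elif op == "HALT":  break

print(RAM[25])                 # prints 14, the sum of the array
```

Run it and it prints 14. The point to notice is the one made above: the instructions at addresses 0 through 8 and the data at addresses 20 through 27 live in the very same RAM.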

With all the versions of Moore's Law firmly operating in support of this basic model it became very hard to break out of it. The human brain certainly doesn't work that way, so it seems that there could be powerful other ways to organize computation. But trying to change the basic organization was a dangerous thing to do, as the inexorable march of Moore's Law on the existing architecture was going to continue anyway. Trying something new would most probably set things back a few years. So brave big scale experiments like the Lisp Machine or Connection Machine, which both grew out of the MIT Artificial Intelligence Lab (and turned into at least three different companies), and Japan's fifth generation computer project (which played with two unconventional ideas, data flow and logical inference) all failed, as before long the Moore's Law doubling of conventional computers overtook the advanced capabilities of the new machines, and software could better emulate the new ideas.

Most computer architects were locked into the conventional organizations of computers that had been around for decades. They competed on changing the coding of the instructions to make execution of programs slightly more efficient per square millimeter of silicon. They competed on strategies to cache copies of larger and larger amounts of RAM memory right on the main processor chip. They competed on how to put multiple processors on a single chip and how to share the cached information from RAM across multiple processor units running at once on a single piece of silicon. And they competed on how to make the hardware more predictive of what future decisions would be in a running program so that they could precompute the right next computations before it was clear whether they would be needed or not. But, they were all locked in to fundamentally the same way of doing computation. Thirty years ago there were dozens of different detailed processor designs, but now they fall into only a small handful of families, the X86, the ARM, and the PowerPC. The X86s are mostly desktops, laptops, and cloud servers. The ARM is what we find in phones and tablets. And you probably have a PowerPC adjusting all the parameters of your cars engine.

The one glaring exception to the lock-in caused by Moore's Law is that of Graphical Processing Units, or GPUs. These are different from von Neumann machines. Driven by the desire for better performance for video and graphics, and in particular gaming, the main processor getting better and better under Moore's Law was just not enough to make real time rendering perform well as the underlying simulations got better and better. In this case a new sort of processor was developed. It was not particularly useful for general purpose computations but it was optimized very well to do additions and multiplications on streams of data, which is what is needed to render something graphically on a screen. Here was a case where a new sort of chip got added into the Moore's Law pool much later than conventional microprocessors, RAM, and disk. The new GPUs did not replace existing processors, but instead got added as partners where graphics rendering was needed. I mention GPUs here because it turns out that they are useful for another type of computation that has become very popular over the last three years, and that is being used as an argument that Moore's Law is not over. I still think it is over, and will return to GPUs in the next section.

As I pointed out earlier we cannot halve a pile of sand once we are down to piles that are only a single grain of sand. That is where we are now; we have gotten down to just about one-grain piles of sand. Gordon Moore's Law in its classical sense is over. See The Economist from March of last year for a typically thorough, accessible, and thoughtful report.

I earlier talked about the feature size of an integrated circuit and how, with every doubling, that size is divided by √2. By 1971 Gordon Moore was at Intel, and they released their first microprocessor on a single chip, the 4004, with 2,300 transistors on 12 square millimeters of silicon, with a feature size of 10 micrometers, written 10µm. That means that the smallest distinguishable aspect of any component on the chip was 1/100th of a millimeter.

Since then the feature size has regularly been reduced by a factor of √2, or reduced to about 71 percent of its previous size, doubling the number of components in a given area, on a clockwork schedule. The schedule clock has however slowed down. Back in the era of Moore's original publication the clock period was a year. Now it is a little over 2 years. In the first quarter of 2017 we are expecting to see the first commercial chips in mass market products with a feature size of 10 nanometers, written 10nm. That is 1,000 times smaller than the feature size of 1971, or 20 applications of the divide-by-√2 rule over 46 years. Sometimes the jump has been a little better than √2, and so we have actually seen 17 jumps from 10µm down to 10nm. You can see them listed in Wikipedia. In 2012 the feature size was 22nm, in 2014 it was 14nm, now in the first quarter of 2017 we are about to see 10nm shipped to end users, and it is expected that we will see 7nm in 2019 or so. There are still active areas of research working on problems that are yet to be solved to make 7nm a reality, but industry is confident that it will happen. There are predictions of 5nm by 2021, but a year ago there was still much uncertainty over whether the engineering problems necessary to do this could be solved and whether they would be economically viable in any case.
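As a rough sanity check of those numbers, assuming only the divide-by-√2 rule described above:

```python
from math import log, sqrt

start_nm, end_nm = 10_000, 10        # 10 micrometers (1971) down to 10 nanometers (2017)
shrink_per_step = sqrt(2)

steps = log(start_nm / end_nm) / log(shrink_per_step)
print(f"about {steps:.1f} applications of the divide-by-sqrt(2) rule")   # ~19.9
# In practice the named process nodes took slightly larger jumps, which is why
# only 17 of them appear between 10um and 10nm (e.g. 22nm in 2012, 14nm in 2014).
```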

Once you get down to 5nm features they are only about 20 silicon atoms wide. If you go much below this the material starts to be dominated by quantum effects and classical physical properties really start to break down. That is what I mean by only one grain of sand left in the pile.

Today's microprocessors have a few hundred square millimeters of silicon, and 5 to 10 billion transistors. They have a lot of extra circuitry these days to cache RAM, predict branches, etc., all to improve performance. But getting bigger comes with many costs as they get faster too. There is heat to be dissipated from all the energy used in switching so many signals in such a small amount of time, and the time for a signal to travel from one side of the chip to the other, ultimately limited by the speed of light (in reality, in copper it is somewhat less), starts to be significant. The speed of light is approximately 300,000 kilometers per second, or 300,000,000,000 millimeters per second. So light, or a signal, can travel 30 millimeters (just over an inch, about the size of a very large chip today) in no less than one over 10,000,000,000 seconds, i.e., no less than one ten billionth of a second.

Today's fastest processors have a clock speed of 8.760 gigahertz, which means that by the time the signal is getting to the other side of the chip, the place it came from has moved on to the next thing to do. This makes synchronization across a single microprocessor something of a nightmare, and at best a designer can know ahead of time how late different signals from different parts of the processor will be, and try to design accordingly. So rather than push clock speed further (which is also hard), and rather than make a single microprocessor bigger with more transistors to do more stuff at every clock cycle, for the last few years we have seen large chips go to multicore, with two, four, or eight independent microprocessors on a single piece of silicon.
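A back-of-the-envelope version of that timing argument, using only the speed-of-light figure above (real on-chip signals are slower still):

```python
SPEED_OF_LIGHT_MM_PER_S = 3.0e11       # ~300,000 km/s expressed in millimeters per second
CHIP_SIZE_MM = 30                      # a very large chip, as above

for clock_ghz in (3.0, 8.76):          # a typical clock and the extreme figure cited above
    period_s = 1.0 / (clock_ghz * 1e9)
    reach_mm = SPEED_OF_LIGHT_MM_PER_S * period_s
    print(f"{clock_ghz} GHz: period {period_s * 1e12:.0f} ps, "
          f"light covers at most {reach_mm:.0f} mm of the {CHIP_SIZE_MM} mm chip")
```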

Multicore has preserved the number-of-operations-done-per-second version of Moore's Law, but at the cost of a simple program not being sped up by that amount – one cannot simply smear a single program across multiple processing units. For a laptop or a smart phone that is trying to do many things at once that doesn't really matter, as there are usually enough different tasks that need to be done at once that farming them out to different cores on the same chip leads to pretty full utilization. But that will not hold, except for specialized computations, when the number of cores doubles a few more times. The speed up starts to disappear as silicon is left idle because there just aren't enough different things to do.

Despite the arguments that I presented a few paragraphs ago about why Moores Law is coming to a silicon end, many people argue that it is not, because we are finding ways around those constraints of small numbers of atoms by going to multicore and GPUs. But I think that is changing the definitions too much.

Here is a recent chart that Steve Jurvetson, cofounder of the VC firm DFJ (Draper Fisher Jurvetson), posted on his FaceBook page. He said it is an update of an earlier chart compiled by Ray Kurzweil.

In this case the left axis is a logarithmically scaled count of the number of calculations per second per constant dollar. So this expresses how much cheaper computation has gotten over time. In the 1940s there are specialized computers, such as the electromechanical machines built to break codes at Bletchley Park. By the 1950s they become general purpose, von Neumann style computers and stay that way until the last few points.

The last two points are both GPUs, the GTX 450 and the NVIDIA Titan X. Steve doesn't label the few points before that, but in every earlier version of the diagram that I can find on the Web (and there are plenty of them), the points beyond 2010 are all multicore. First dual cores, and then quad cores, such as Intel's quad core i7 (and I am typing these words on a 2.9GHz version of that chip, powering my laptop).

That GPUs are there, and that people are excited about them, is because besides graphics they happen to be very good at another very fashionable computation. Deep learning, a form of something known originally as back propagation neural networks, has had a big technological impact recently. It is what has made speech recognition so fantastically better in the last three years that Apple's Siri, Amazon's Echo, and Google Home are useful and practical programs and devices. It has also made image labeling so much better than what we had five years ago, and there is much experimentation with using networks trained on lots of road scenes as part of situational awareness for self driving cars. For deep learning there is a training phase, usually done in the cloud, on millions of examples. That produces a few million numbers which represent the network that is learned. Then when it is time to recognize a word or label an image, that input is fed into a program simulating the network by doing millions of multiplications and additions. Coincidentally GPUs just happen to be perfect for the way these networks are structured, and so we can expect more and more of them to be built into our automobiles. Lucky break for GPU manufacturers! While GPUs can do lots of computations they don't work well on just any problem. But they are great for deep learning networks and those are quickly becoming the flavor of the decade.
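For a concrete picture of those millions of multiplications and additions, here is a minimal sketch of one fully connected layer being applied to an input. The weights here are random stand-ins for the numbers a real training run would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 1024))  # learned parameters (random stand-ins here)
bias = rng.standard_normal(256)
x = rng.standard_normal(1024)               # one input, e.g. features from an image or audio frame

# One layer of inference: 256 * 1024 multiply-adds, then a simple nonlinearity (ReLU).
activations = np.maximum(weights @ x + bias, 0.0)
print(activations.shape)                    # (256,)
```

Streams of multiply-adds like this, repeated layer after layer, are exactly the kind of work GPUs were already built to do for graphics.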

While rightly claiming that we continue to see exponential growth as in the chart above, exactly what is being measured has changed. That is a bit of a sleight of hand.

And I think that change will have big implications.

I think the end of Moore's Law, as I have defined the end, will bring about a golden new era of computer architecture. No longer will architects need to cower at the relentless improvements that they know others will get due to Moore's Law. They will be able to take the time to try new ideas out in silicon, now safe in the knowledge that a conventional computer architecture will not be able to do the same thing in just two or four years in software. And the new things they do may not be about speed. They might be about making computation better in other ways.

Machine learning runtime

We are seeing this with GPUs as runtime engines for deep learning networks. But we are also seeing some more specific architectures. For instance, for about a year Google has had their own chips called Tensor Processing Units (or TPUs) that save power for deep learning networks by effectively reducing the number of significant digits that are kept around, as neural networks work quite well at low precision. Google has placed many of these chips in the computers in their server farms, or cloud, and is able to use learned networks in various search queries, at higher speed for lower electrical power consumption.
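A toy illustration of that low-precision idea, quantizing a handful of weights to 8-bit integers plus a scale factor (this is a sketch of the general technique, not Google's actual TPU arithmetic):

```python
import numpy as np

weights_fp32 = np.random.default_rng(1).standard_normal(8).astype(np.float32)

scale = np.abs(weights_fp32).max() / 127.0                  # map the observed range onto int8
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)
restored = weights_int8.astype(np.float32) * scale          # dequantize to compare

print(weights_int8)                                         # small integers, cheap to multiply
print(np.abs(weights_fp32 - restored).max())                # the rounding error traded away
```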

Special purpose silicon

Typical mobile phone chips now have four ARM processor cores on a single piece of silicon, plus some highly optimized special purpose processors on that same piece of silicon. The processors manage data flowing from cameras and optimize speech quality, and on some chips there is even a special highly optimized processor for detecting human faces. That is used in the camera application – you've probably noticed little rectangular boxes around people's faces as you are about to take a photograph – to decide what regions in an image should be most in focus and have the best exposure timing: the faces!

New general purpose approaches

We are already seeing the rise of special purpose architectures for very specific computations. But perhaps we will see more general purpose architectures, but with a different style of computation, make a comeback.

Conceivably the dataflow and logic models of the Japanese fifth generation computer project might now be worth exploring again. But as we digitalize the world the cost of bad computer security will threaten our very existence. So perhaps if things work out, the unleashed computer architects can slowly start to dig us out of our current deplorable situation.

Secure computing

We all hear about cyber hackers breaking into computers, often half a world away, or sometimes now in a computer controlling the engine, and soon everything else, of a car as it drives by. How can this happen?

Cyber hackers are creative but many ways that they get into systems are fundamentally through common programming errors in programs built on top of the von Neumann architectures we talked about before.

A common case is exploiting something known as buffer overrun. A fixed size piece of memory is reserved to hold, say, the web address that one can type into a browser, or the Google query box. If all programmers wrote very careful code and someone typed in way too many characters, those past the limit would not get stored in RAM at all. But all too often a programmer has used a coding trick that is simple, and quick to produce, that does not check for overrun, and the typed characters get put into memory way past the end of the buffer, perhaps overwriting some code that the program might jump to later. This relies on the feature of von Neumann architectures that data and programs are stored in the same memory. So, if the hacker chooses some characters whose binary codes correspond to instructions that do something malicious to the computer, say setting up an account for them with a particular password, then later, as if by magic, the hacker will have a remotely accessible account on the computer, just as many other human and program services may. Programmers shouldn't oughta make this mistake, but history shows that it happens again and again.
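Here is a toy illustration of the mechanism. Real overruns happen with unchecked native memory writes (for example in C); Python itself bounds-checks its lists, so this sketch simply models RAM as one flat list, as in the earlier von Neumann sketch, with made-up "code" cells sitting just past the buffer:

```python
RAM = ["", "", "", "", "CHECK_PASSWORD", "GRANT_ACCESS_IF_OK"]
BUFFER_START, BUFFER_SIZE = 0, 4      # cells 0-3 are the input buffer;
                                      # cells 4-5 hold "code" the program will run later

def careless_store(typed_chars):
    # No check against BUFFER_SIZE: anything past the fourth character spills
    # out of the buffer and overwrites whatever lives next in memory.
    for i, ch in enumerate(typed_chars):
        RAM[BUFFER_START + i] = ch

careless_store(["g", "o", "o", "d", "SKIP_CHECK", "GRANT_ACCESS_ALWAYS"])
print(RAM[4:6])   # ['SKIP_CHECK', 'GRANT_ACCESS_ALWAYS'] -- the attacker now owns the "code"
```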

Another common way in is that in modern web services sometimes the browser on a laptop, tablet, or smart phone, and the computers in the cloud need to pass really complex things between them. Rather than the programmer having to know in advance all those complex possible things and handle messages for them, it is set up so that one or both sides can pass little bits of source code of programs back and forth and execute them on the other computer. In this way capabilities that were never originally conceived of can start working later on in an existing system without having to update the applications. It is impossible to be sure that a piece of code won't do certain things, so if the programmer decided to give a fully general capability through this mechanism there is no way for the receiving machine to know ahead of time that the code is safe and won't do something malicious (this is a generalization of the halting problem – I could go on and on, but I won't here). So sometimes a cyber hacker can exploit this weakness and send a little bit of malicious code directly to some service that accepts code.

Beyond that, cyber hackers are always coming up with new inventive ways in – these have just been two examples to illustrate a couple of ways of how it is currently done.

It is possible to write code that protects against many of these problems, but code writing is still a very human activity, and there are just too many human-created holes that can leak, from too many code writers. One way to combat this is to have extra silicon that hides some of the low level possibilities of a von Neumann architecture from programmers, by only giving the instructions in memory a more limited set of possible actions.

This is not a new idea. Most microprocessors have some version of protection rings which let more and more untrusted code only have access to more and more limited areas of memory, even if they try to access it with normal instructions. This idea has been around a long time but it has suffered from not having a standard way to use or implement it, so most software, in an attempt to be able to run on most machines, usually only specifies two or at most three rings of protection. That is a very coarse tool and lets too much through. Perhaps now the idea will be thought about more seriously in an attempt to get better security when just making things faster is no longer practical.

Another idea, that has mostly only been implemented in software, with perhaps one or two exceptions, is called capability based security, through capability based addressing. Programs are not given direct access to regions of memory they need to use, but instead are given unforgeable cryptographically sound reference handles, along with a defined subset of things they are allowed to do with the memory. Hardware architects might now have the time to push through on making this approach completely enforceable, getting it right once in hardware so that mere human programmers pushed to get new software out on a promised release date can not screw things up.
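A minimal sketch of the capability idea, with a hypothetical interface (not any shipping hardware or library): programs never see raw addresses, only handles that name a region and the rights that come with it.

```python
import secrets

class Memory:
    def __init__(self, size):
        self._ram = bytearray(size)
        self._caps = {}                        # opaque handle -> (start, length, rights)

    def grant(self, start, length, rights):
        handle = secrets.token_hex(16)         # unguessable, so unforgeable in practice
        self._caps[handle] = (start, length, rights)
        return handle

    def write(self, handle, offset, value):
        start, length, rights = self._caps[handle]
        if "w" not in rights or not (0 <= offset < length):
            raise PermissionError("capability does not permit this write")
        self._ram[start + offset] = value

mem = Memory(1024)
buf = mem.grant(start=100, length=16, rights="rw")
mem.write(buf, 3, 42)                          # fine: inside the granted region
try:
    mem.write(buf, 99, 42)                     # past the end: the hardware analogue would trap
except PermissionError as err:
    print("blocked:", err)
```

In hardware, of course, the enforcement would not be a dictionary lookup in software but a check wired into every memory access, which is exactly the point of doing it in silicon.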

From one point of view the Lisp Machines that I talked about earlier were built on a very specific and limited version of a capability based architecture. Underneath it all, those machines were von Neumann machines, but the instructions they could execute were deliberately limited. Through the use of something called typed pointers, at the hardware level, every reference to every piece of memory came with restrictions on what instructions could do with that memory, based on the type encoded in the pointer. And memory could only be referenced by a pointer to the start of a chunk of memory of a fixed size at the time the memory was reserved. So in the buffer overrun case, a buffer for a string of characters would not allow data to be written to or read from beyond the end of it. And instructions could only be referenced from another type of pointer, a code pointer. The hardware kept the general purpose memory partitioned at a very fine grain by the type of pointers granted to it when reserved. And to a first approximation the type of a pointer could never be changed, nor could the actual address in RAM be seen by any instructions that had access to a pointer.

There have been ideas out there for a long time on how to improve security through this use of hardware restrictions on the general purpose von Neumann architecture. I have talked about a few of them here. Now I think we can expect this to become a much more compelling place for hardware architects to spend their time, as security of our computational systems becomes a major achilles heel on the smooth running of our businesses, our lives, and our society.

Quantum computers

Quantum computers are a largely experimental and, at this time, very expensive technology. From the need to cool them to physics-experiment-level ultra cold, and the expense that entails, to the confusion over how much speed up they might give over conventional silicon based computers and for what class of problem, they are a large investment, high risk research topic at this time. I won't go into all the arguments (I haven't read them all, and frankly I do not have the expertise that would make me confident in any opinion I might form) but Scott Aaronson's blog on computational complexity and quantum computation is probably the best source for those interested. Claims on speedups either achieved or hoped to be achieved on practical problems range from a factor of 1 to thousands (and I might have that upper bound wrong). In the old days just waiting 10 or 20 years would let Moore's Law get you there. Instead we have seen well over a decade of sustained investment in a technology that people are still arguing over whether it can ever work. To me this is yet more evidence that the end of Moore's Law is encouraging new investment and new explorations.

Unimaginable stuff

Even with these various innovations around, triggered by the end of Moore's Law, the best things we might see may not yet be in the common consciousness. I think the freedom to innovate, without the overhang of Moore's Law, the freedom to take time to investigate curious corners, may well lead to a new garden of Eden in computational models. Five to ten years from now we may see a completely new form of computer arrangement, in traditional silicon (not quantum), that is doing things, and doing them faster, than we can today imagine. And with a further thirty years of development those chips might be doing things that would today be indistinguishable from magic, just as today's smart phone would have seemed like utter magic to the me of 50 years ago.

Many times the popular press, or people who should know better, refer to something that is increasing a lot as exponential. Something is only truly exponential if there is a constant ratio in size between any two points in time separated by the same amount. Here the ratio is 2, for any two points a year apart. The misuse of the term exponential growth is widespread and makes me cranky.

Why the Chemical Heritage Foundation for this celebration? Both of Gordon Moores degrees (BS and PhD) were in physical chemistry!

For those who read my first blog, once again see Roy Amara's Law.

I had been a post-doc at the MIT AI Lab and loved using Lisp Machines there, but when I left and joined the faculty at Stanford in 1983 I realized that the more conventional SUN workstations being developed there and at spin-off company Sun Microsystems would win out in performance very quickly. So I built a software based Lisp system (which I called TAIL (Toy AI Language) in a nod to the naming conventions of most software at the Stanford Artificial Intelligence Lab, e.g., BAIL, FAIL, SAIL, MAIL) that ran on the early Sun workstations, which themselves used completely generic microprocessors. By mid 1984 Richard Gabriel, I, and others had started a company called Lucid in Palo Alto to compete on conventional machines with the Lisp Machine companies. We used my Lisp compiler as a stop gap, but as is often the case with software, that was still the compiler used by Lucid eight years later when it ran on 19 different makes of machines. I had moved back to MIT to join the faculty in late 1984, and eventually became the director of the Artificial Intelligence Lab there (and then CSAIL). But for eight years, while teaching computer science and developing robots by day, I also at night developed and maintained my original compiler as the work horse of Lucid Lisp. Just as the Lisp Machine companies got swept away, so too eventually did Lucid. Whereas the Lisp Machine companies got swept away by Moore's Law, Lucid got swept away as the fashion in computer languages shifted to a winner-take-all world, for many years, of C.

Full disclosure. DFJ is one of the VCs who have invested in my company Rethink Robotics.

Read the original:

The End of Moore's Law – Rodney Brooks

What's next? Going Beyond Moore's Law – SAT Press Releases – Satellite PR News (press release)


Conditioning consumers to expect certain advances in speed, battery life, and capabilities, Moore's Law has led the way for the computing industry for decades.

Because Moore's Law suggests exponential growth, it is unlikely to continue indefinitely. Software and hardware innovations will likely keep the dream of Moore's Law alive for several years to come; however, there may come a time when Moore's Law is no longer applicable due to temperature constraints. As such, a revolutionary approach to computing is required. The IEEE Rebooting Computing Initiative is dedicated to studying next-generation alternatives for the computing industry. In Engadget's recent Public Access article, Beyond Moore's Law, Tom Conte, IEEE Rebooting Computing Initiative Co-Chair and Professor in the Schools of Electrical & Computer Engineering and Computer Science at the Georgia Institute of Technology, provides an overview of next-generation alternatives that could meet the growing demand for advances in computing technology.

From cryogenic computing to quantum computing, there are a variety of alternatives to meet the expectations of consumers. Change is coming to the computing industry. Are you interested in learning more? Tom will provide insight on this topic at the annual SXSW Conference and Festival, 10-19 March, 2017. His session, Going Beyond Moores Law, is included in the IEEE Tech for Humanity Series at SXSW. For more information please see http://techforhumanity.ieee.org.


Moore’s Law And The History Of Comic Book Movies – Monkeys Fighting Robots (blog)

Back in 1965, Intel co-founder Gordon Moore made an observation that the number of transistors was doubling every year, thereby doubling the power of computers. Moore's Law, as it came to be known, would prove even more accurate than he imagined. Since Moore's observation, computing power continues to grow at an incredible rate. All this growing technology directly led to the effects of Star Wars, Terminator 2, and the CG-heavy comic book movies of today.

No other genre benefits from computing power quite like superhero movies. Every year, Disney and Warner Brothers unleash a new effects-heavy punch-fest starring a beloved character from comic book lore. The superhero trend went into overdrive in 2008 with Iron Man, but before that, Raimi's Spider-Man conquered box offices with dazzling use of CG; before that, Singer's first two X-Men movies were on top. However, things get a little murkier before the arrival of X1 in 2000, and that's where the debate begins.

Some in geekdom believe Blade is the father of modern comic book movies; others argue its Tim Burtons Batman in 1989; still, others look back at Superman: The Movie. Im here to say that theyre all wrong and right! Ill explain.

Comic books were a pulp mainstay for decades. But up through the 1970s, there were only two movies to mention.

Superman and the Mole Men (1951) – There wasn't going to be anyone else who broke the mold first. Superman was the most popular comic book of the time and already had a hit TV show. Superman and the Mole Men was an extension of the show, featuring George Reeves as the last son of Krypton.

Batman: The Movie (1966) – In the 60s, campy Batman was all the rage. Adam West filled the cape and cowl and through the course of three seasons fought the greatest hits of Batman's rogues gallery. In 1966, much like the Superman movie of the 50s, producers wisely created a feature-length episode. In it, Penguin and the United Underworld are turning people into cubes.

You will believe a man can fly. If I had to pick an actual starting point for comic book movies as mainstream money-makers, it would undoubtedly be here. Richard Donners Superman was a mega-hit at the box office. The effects look dated now (40 years, hello!) but the innovations pioneered by Star Wars just a year before helped Donner create a dazzling comic book movie like never before.

In the 70s, anti-heroes like Batman and Wolverine werent as big a thing as today. Heroes were still meant to be the best of us, not psychologically disturbed or ferocious. Superman was still king of the comic book mountain in the minds of the masses, and there was no one else who could lift the weight of the comic book universe into the mainstream like the Man of Steel.

Total Number of Comic Book Movies Up Until December 31st, 1979: 3

The 80s were slow-going for comic book films. Superman carried the torch with three sequels, each drastically worse than the one before it. But two movies made an impact. One film served as a subtle nudge, while the other became the standard bearer.

Not a hit by any stretch of the imagination, Swamp Thing from director Wes Craven holds an important place in comic book movie history. Craven, a master of horror films, even while trying to win the mainstream hearts of Hollywood execs and keep away from his usual style, still added his signature to Swamp Thing. That macabre touch created a distinction from what was the norm and played into the growing popularity of anti-heroes.

Tim Burton's Batman was a smash box office success, rocketing into the top earners of all time. Donner's Superman knocked down the door into the mainstream. But Burton's Batman went in and beat the crap out of everyone. Batman was a hype phenomenon in the days before the Internet and sites like Monkeys Fighting Robots existed. Warner Brothers unleashed a torrent of marketing that consisted of an entire magazine devoted to the film before release. Similar to leaked photos, the magazine highlighted all things about the movie.

Number of Comic Book Feature Films: 9

It's in the 1990s when things take a radical leap. After the success of Batman, Hollywood was gearing up to turn every comic book they could get their hands on into a movie. There were four more Batman films, Dolph Lundgren played The Punisher, and the Teenage Mutant Ninja Turtles continued their transition from dark comic book to a lighthearted multimedia franchise. Again, two films set the stage for things to come.

Many viewers had no idea that The Crow was a graphic novel by James OBarr. Today, most remember the movie as the final film of Brandon Lee. The Crow is all 90s grunge-goth action movie awesome that holds up well today. Director Alex Proyas, who later created the sci-fi noir film Dark City, bathed The Crow in rain and darkness, with the dark atmospheres lifting when it serves the story. The Crow continued to lengthen the path of the anti-hero.

By the late 90s, comic book movies were either Batman movies or obscure comics and graphic novels made on an average budget. Like The Crow, only the most ardent geeks even knew Blade was a comic book, but the Wesley Snipes action movie was a sleeper hit that sliced and diced its way to a strong box office performance. Blade softened the goth style of The Crow and made it sleek with fitted leather armor and a killer electronica soundtrack. Blade's slick look, attitude, and sense of humor is something that continues to grow and evolve in the majority of mainstream comic book movies.

Number of Comic Book Feature Films: 22

The first X-Men movie released in 2000 and Bryan Singers origin story for Marvels super-team was a wild success, breaking box office records like Burtons Batman 11 years earlier. Its here where I believe two things happened. Comic book movies as we knew them ended and comic book movies as we will come to know them began.

X-Men ended the era of practical comic book movies, as in, practical effects. Blade used CG to accent practical effects, while X-Men was a mix of practical and CG. And that use of CG, plus the way Singer presented the material, evolved into Raimi's Spider-Man in 2003. Spidey, the next big hit, was a CG-heavy, joke-filled popcorn flick. Sound familiar? The borderline campy attitude of Sony's first Spider-Man created a new standard for comic book movies. Just five years later, Marvel would begin its reign at the box office with a CG-heavy, joke-filled Iron Man who is arguably also an anti-hero.

Since 2000, 77 comic book movies have seen release! We dont need to get into the specifics because everyone knows whats come and whats to come. But here are the numbers.

Number of Comic Book Feature Films the 2000s: 33

Number of Comic Book Feature Films in the 2010s: 44, so far

Like Moore's Law and transistors, the number of comic book movies we can fit into a year has increased. It's leveled off some, but continues to grow, and the comic book movie trend sees no end in sight. Now consider that we've only talked about American comic book movies. Ghost in the Shell, a Japanese manga (aka comic book), and Valerian, a French comic book, are on the way to the big screen. Oh, also don't forget that there's TV, but that's another article for another time. Moore's Law will hold steady for technology. Maybe for comic book movies we can call it Lee's Law.


Unwinding Moore’s Law from Genomics with Co-Design – The Next Platform

February 8, 2017 Nicole Hemsoth

More than almost any other market or research segment, genomics is vastly outpacing Moores Law.

The continued march of new sequencing and other instruments has created a flood of data and development of the DNA analysis software stack has created a tsunami. For some, high performance genomic research can only move at the pace of innovation with custom hardware and software, co-designed and tuned for the task.

We have described efforts to build custom ASICs for sequence alignment, as well as using reprogrammable hardware for genomics research, but for centers that have defined workloads and are limited by performance constraints (with an eye on energy efficiency), the push is still on to find novel architectures to fit the bill. In most cases, efforts are focused on one aspect of DNA analysis. For instance, de novo assembly exclusively. Having hardware that is tuned (and tunable) that can match the needs of multiple genomics workloads (whole genome alignments, homology searches, etc.) is ideal.

With these requirements in mind, a research team at Stanford, led by computing pioneer Bill Dally, has taken aim at both the hardware and software inefficiencies inherent to genomics via the creation of a new hardware acceleration framework that they say can offer a 125X and a 15.6X speedup over state-of-the-art software counterparts for reference-guided and de novo assembly of third generation (long) sequencing reads, respectively. The team also reports significant efficiency improvements on pairwise sequence alignments (39,000X more energy efficient than software alone).

Over 1,300 CPU hours are required to align reads from a 54X coverage of the human genome to a reference and over 15,600 CPU hours to assemble the reads de novo. Today, it is possible to sequence genomes on rack-size, high-throughput machines at nearly 50 human genomes per day, or on portable USB-stick size sequencers that require several days per human genome.

The Stanford-based hardware accelerated framework for genomic analysis, called Darwin, has several elements that go far beyond the creation or configuring of custom or reprogrammable hardware. At the heart of the effort is Genome Alignment using Constant Memory Trace-back (GACT), an algorithm focused on long reads (more data/compute intensive to handle but providing more comprehensive results) that uses constant memory to make the compute-heavy part of the workload more efficient.

The use of this algorithmic approach has a profound hardware design implication, the team explains, because all previous hardware accelerators for genomic sequence alignment have assumed an upper-bound on the length of sequences they align or have left the trace-back step in alignment to software, thus undermining the benefits of hardware acceleration. Also critical to the effort is a filtering algorithm that cuts down on the search space for dynamic programming, called D-SOFT, which can be tuned for sensitivity.
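For readers unfamiliar with the terminology, the sketch below is not GACT (which uses a tiled, constant-memory trace-back tuned for long, noisy reads) but a minimal textbook global-alignment dynamic program in Python, just to make concrete what "alignment with trace-back" means in this discussion:

```python
def align(a, b, match=2, mismatch=-1, gap=-1):
    # Build the dynamic programming score matrix (Needleman-Wunsch style).
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Trace back from the bottom-right corner to recover the actual alignment.
    i, j, top, bottom = rows - 1, cols - 1, [], []
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            top.append(a[i-1]); bottom.append(b[j-1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            top.append(a[i-1]); bottom.append("-"); i -= 1
        else:
            top.append("-"); bottom.append(b[j-1]); j -= 1
    return "".join(reversed(top)), "".join(reversed(bottom))

print(*align("GATTACA", "GCATGCA"), sep="\n")
```

The score matrix alone grows with the product of the two sequence lengths, which is why hardware accelerators that bound sequence length, or push trace-back back to software, give up much of the benefit.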

To put this in context, keep in mind that long sequence reads improve the quality of genome assembly and can be very useful in personalized medicine because it is possible to identify variances and mutations. However, this capability comes at a price – the team notes that mean error rates can be as high as 40% in some cases, and while this error can be corrected, it takes time to do so, thus cutting down on the performance and efficiency of the process. The tunable nature of Darwin helps correct for this, and the tuning is fit to the hardware to deliver more accuracy, faster, and with less power consumption.

(Figure caption) Layout of one of the GACT processing elements. A 64 processing element array (minus the TB of memory) requires 0.27 square mm of area, with additional space for control, trace-back logic, and storage blocks. A single GACT array consumes 137mW of power.

On the hardware side, the team has already fully prototyped the concept on FPGA and performed ASIC synthesis for the GACT framework on a 45nm TSMC device. In that prototyping effort, they found pairwise alignment for sequences had a 763X jump on software-only approaches and was over 39,000X more energy efficient. The parameters of D-SOFT can be set to make it very specific even for noisy sequences at high sensitivity, and the hardware acceleration of GACT results in a 762X speedup over software.

Although D-SOFT is one of the critical elements that creates the tunability that is required for both accuracy and efficiency, it is also the bottleneck in the hardware/software design, eating up 80% of the overall runtime. The problem is not memory capacity, but access patterns, which the team expects they might address by speeding the random memory access using an approach like e-DRAM. Removing this barrier would allow the team to scale Darwins performance. Unlike other custom designs, for once, memory capacity is not a bottleneck as it uses only 120 MB for two arrays, which means far more can fit on a single chip.

Darwin handles and provides high speedup versus hand-optimized software for two distinct applications: reference-guided and de novo assembly of reads, and can work with reads with very different error rates, the team concludes, noting that Darwin is the first hardware-accelerated framework to demonstrate speedup in more than one class of applications, and in the future, it can extend to alignment applications even beyond read assembly.



Moore’s Law is running out but don’t panic – ComputerWeekly.com

Intel kicked off CES 2017 in Las Vegas with the declaration that Moores Law is still relevant as it slated its first 10nm (nanometre) processor chips for release later this year.


Despite this, engineers are facing real issues in how to continue to push system performance to cope with the growing demands of new and emerging datacentre workloads.

This isnt the first time the end of Moores Law has been proclaimed, but Intel and other chip makers have so far found new tricks for shrinking transistors to meet the goal of doubling density every two years, with a knock-on boost for compute performance.

Intel chief executive Brian Krzanich said at CES: "I've been in this industry for 34 years and I've heard the death of Moore's Law more times than anything else in my career. And I'm here today to really show you and tell you that Moore's Law is alive and well and flourishing. I believe Moore's Law will be alive well beyond my career, alive and well and kicking."

Yet the pace is slowing as Intel works at developing 7nm and 5nm technologies to follow on from 10nm. The introduction of 10nm itself has already been delayed by a year because of difficulties with the manufacturing process, and these difficulties are likely to increase as the size approaches physical limits on how small the on-chip circuitry can be made.

"I can't see them getting much beyond 5nm, and Moore's Law will then run out because we will have reached the end of the silicon era," says Ovum principal analyst Roy Illsley. Some industry observers think this will happen in the next 10 years or so.

As to what will ultimately replace silicon, such as optical processing or quantum computing, there appears no consensus so far. However, this does not mean that compute power will cease to expand, as both hardware and software in the datacentre have evolved since the days of single-chip servers and monolithic applications.

"The way apps are written has changed," says Illsley. "They are now distributed and scalable, so Moore's Law is a rather pointless metric for what a computer can do, anyway."

In fact, the industry hit a similar crisis some time ago, when Intel discovered that its single-core chips simply overheated when ever-increasing clock speeds started to approach 4GHz. The solution then was to change tack and deliver greater processing power by using the extra transistors to put multiple processor cores onto the same chip, and comparable architectural shifts will enable the industry to continue to boost processing power.

Such an approach can be seen in the growing interest in complementing conventional central processing units (CPUs) with specialised accelerators that may be better suited to handling specific tasks or workloads. A good example of this is the graphics processing unit (GPU), which has long been used to accelerate 3D graphics, but which has also found its way into high-performance compute (HPC) clusters thanks to the massively parallel architecture of a GPU which makes it excellent for performing complex calculations on large datasets.

In 2016, Nvidia launched its DGX-1 server, which sports eight of its latest Tesla GPUs with 16GB memory apiece and is aimed at applications involving deep learning and artificial intelligence (AI) accelerated analytics. "Nvidia's system can do what would have taken a whole datacentre of servers a few years ago, at a pretty competitive price," says Illsley.

Another example is the field programmable gate array (FPGA), which is essentially a chip full of logic blocks that can be configured to perform specific functions. It provides a hardware circuit that can perform those functions much faster than can be done in software, but which can be reconfigured under software control, if necessary.

One notable adopter of FPGAs is Microsoft, which uses the technology in its Azure datacentre servers to speed up Bing searches and accelerate software-defined networking (SDN).

Intel is also working on integrating FPGA circuitry into some of its Xeon server chips, which could lead to broader adoption. In 2016, the firm showed off a Xeon coupled with a discrete FPGA inside a chip package, but its goal is to get both onto a single piece of silicon.

Meanwhile, Intel prefers to push its Xeon Phi platform rather than GPU acceleration for demanding workloads. These many integrated core chips combine a large number of CPU cores (up to 72 in the latest Knights Landing silicon) which are essentially x86 cores with 512-bit vector processing extensions, so they can run much of the same code as a standard Intel processor.

However, one issue with having so many cores on one chip is getting access to data in system memory for all those cores. Intel has addressed this by integrating 16GB of high-speed memory inside each Xeon Phi chip package, close to the CPU cores.

HPE has shown a different approach with The Machine, its experimental prototype for a next-generation architecture. This has been described as memory-driven computing, and is based around the notion of a massive, global memory pool that is shared between all the processors in a system, enabling large datasets to be processed in memory.

A working version, demonstrated at HPE Discover in December 2016, saw each processor directly controlling eight dual inline memory modules (DIMMs) as a local memory pool, with a much larger global pool of memory comprising clusters of eight DIMMs connected via a memory fabric interface that also links to the processors. In the demo, all the memory was standard DRAM, but HPE intended The Machine to have a non-volatile global memory pool.

In fact, focusing on processors overlooks the fact that memory and storage are a bigger brake on performance, as even flash-based storage takes several microseconds to read a block of data, during which time the processor may execute millions of instructions. So anything that can speed memory and storage access will deliver a welcome boost to system performance, and a number of technologies are being developed, such as Intel and Microns 3D XPoint or IBMs Phase-Change Memory, which promise to be faster than flash memory, although their cost is likely to see them used at first as a cache for a larger pool of slower storage.

These are being developed alongside new I/O interfaces that aim to make it quicker and easier to move data between memory and the processor or accelerator. Examples include Nvidias NVLink 2.0 for accelerators and the Gen-Z standard that aims to deliver a high-speed fabric for connecting both memory and new storage-class memory technologies.

One thing Illsley thinks we may see in the future is systems that are optimised for specific workloads. Currently, virtually all computers are general-purpose designs that perform different tasks by running the appropriate software. But some tasks may call for a more specialised application-specific architecture to deliver the required performance, especially if AI approaches such as deep learning become more prevalent.

Moores Law, which started out as an observation and prediction on the exponential growth of transistors in integrated circuits by Intel founder Gordon Moore, has lasted five decades. We may be reaching the point where it no longer holds true for silicon chips, but whatever happens, engineers will ensure that compute power continues to expand to meet the demands thrown at it.


Call for Papers: Workshop on HPC in a post Moore’s Law World – insideHPC

The Workshop on HPC computing in a post Moores Law World has issued their Call for Papers. Held in conjunction with ISC 2017, the all-day workshop takes place June 22 in Frankfurt, Germany.

The impending end of traditional MOSFET scaling has sparked research into preserving HPC performance improvements through alternative computational models. To better shape our strategy, we need to understand where each technology is headed and where it will be in a span of 20 years. This workshop brings together experts who develop or use promising technologies to present the state of their work, and spark a discussion on the promise and detriments of each approach. This includes technologies that adhere to the traditional digital computational model, as well as new models such as neuromorphic and quantum computing models. As part of the workshop, we are accepting paper submissions. Papers will be published in the Springers Lecture Notes in Computer Science (LNCS) series. You can find the call for papers with detailed instructions and a link to the submission site here. We will also hold short panels and keynote presentations from experts in the field.

In scope for this workshop are all topics relevant to improving performance for HPC applications after MOSFET scaling (currently driven by Moore's Law) stops.

Submissions are due March 6, 2017.

See the rest here:

Call for Papers: Workshop on HPC in a post Moore’s Law World – insideHPC

Moore’s Law is dead, long live Moore’s Law – ExtremeTech

Moore's Law turns 50 this coming week, making this an opportune time to revisit Gordon Moore's classic prediction, its elevation to near-divine pronouncement over the last 50 years, and the question of what, if anything, Moore's Law can teach us about the future of computing. My colleague David Cardinal has already discussed the law itself, as well as the early evolution of the integrated circuit. To get a sense of where Moore's Law might evolve in the future, we sat down with lithographer, instructor, and gentleman scientist Dr. Christopher Mack. It might seem odd to talk about the future of Moore's Law with a scientist who half-jokingly toasted its death just a year ago, but one of the hallmarks of the Law is the way it has been reinvented several times over the past fifty years.

IBM's System/360. Photo courtesy of Wikipedia.

In a recent article, Dr. Mack argues that what we call Moore's Law is actually at least three different laws. In the first era, dubbed Moore's Law 1.0, the focus was on scaling up the number of components on a single chip. One simple example can be found in the evolution of the microprocessor itself. In the early 1980s, the vast majority of CPUs could only perform integer math on-die. If you wanted to perform floating point calculations (meaning calculations done using a decimal point), you had to buy a standalone floating point unit with its own pinout and motherboard socket (on compatible motherboards).

Some of you may also recall that in the early days of CPU cache, the cache in question was mounted to the motherboard (and sometimes upgradeable), not integrated into the CPU die. The term front-side bus (which ran from the northbridge controller to main memory and various peripherals) was originally contrasted with the back-side bus, which ran from the CPU to its cache. The integration of these components on-die didn't always cut costs; sometimes the final product was actually more expensive, but it vastly improved performance.

Digital's VAX 11/780. In many ways, the consummate CISC machine.

Moore's Law 2.0 really came into its own in the mid-1990s. Moore's Law always had a quieter partner, known as Dennard Scaling. Dennard Scaling stated that as transistors became smaller, their power density remained constant, meaning that smaller transistors required less voltage and lower current. If Moore's Law stated that we would be able to pack more transistors into the same area, Dennard Scaling ensured that those transistors would be cooler and draw less power. It was Dennard Scaling that broke in 2005, as Intel, AMD, and most other vendors turned away from emphasizing clock-based scaling in favor of adding more CPU cores and improving single-threaded performance by other means.
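A minimal sketch of the classical scaling rules may help here, using the standard dynamic-power model P = C·V²·f; the scale factor k and the idealised 1/k scalings are the assumptions of the classical Dennard formulation, not measured values:

```python
# Classical Dennard scaling: shrink linear dimensions by a factor k and scale
# supply voltage by 1/k; power density then stays constant in the ideal case.

def dennard_scale(k):
    capacitance = 1 / k   # gate capacitance shrinks with feature size
    voltage = 1 / k       # supply voltage scales down with feature size
    frequency = k         # switching speed improves as gates shrink
    area = 1 / k ** 2     # transistor footprint shrinks quadratically

    power_per_transistor = capacitance * voltage ** 2 * frequency  # ~ 1/k^2
    power_density = power_per_transistor / area                    # ~ 1.0
    return power_per_transistor, power_density

print(dennard_scale(1.4))  # ~(0.51, 1.0): cooler transistors, constant W/mm^2
```

Once supply voltage could no longer keep shrinking with each node, the power-density term stopped cancelling, which is the break in 2005 described above.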

From 2005 through 2014, Moore's Law continued, but the emphasis was on improving cost by driving down the expense of each additional transistor. Those transistors might not run more quickly than their predecessors, but they were often more power-efficient and less expensive to build. As Dr. Mack points out, much of this improvement was driven by developments in lithography tools. As silicon wafer yields soared and manufacturing outputs surged, the total cost of manufacturing (per transistor) fell, while the total cost per square millimeter fell slowly or stayed about the same.
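As a toy version of that cost argument, with entirely invented numbers: if the cost per square millimeter of finished silicon holds roughly flat while transistor density doubles at each node, the cost per transistor halves at each node:

```python
# Illustrative cost-per-transistor arithmetic; both starting figures are invented.
cost_per_mm2 = 0.10          # assumed flat wafer cost, $/mm^2
transistors_per_mm2 = 10e6   # assumed starting density, transistors/mm^2

for node in range(4):
    print(f"node {node}: ${cost_per_mm2 / transistors_per_mm2:.2e} per transistor")
    transistors_per_mm2 *= 2  # next node doubles density; wafer cost unchanged
```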

Moore's Law scaling through the classic era.

Moore's Law 3.0, then, is far more diverse and involves integrating functions and capabilities that haven't historically been seen as part of the CPU at all. Intel's on-die voltage regulator, or the further integration of power circuitry to better improve CPU idle and load characteristics, could be thought of as one application of Moore's Law 3.0, along with some of Nvidia's deep learning functions, or its push to move camera processing technology over to the same core silicon that powers other areas of the chip.

Dr. Mack points to ideas like nanorelays: tiny moving switches that may not flip as quickly as digital logic, but don't leak power at all once flipped. Whether such technologies will be integrated into future chip designs is anyone's guess, and the return on the research being poured into them is far from certain. It's entirely possible that a company might spend millions trying to better implement a design in digital logic, or adapt principles of semiconductors to other types of chip design, only to find the final product is just incrementally better than the previous part.

There's an argument against this shift in usage that goes something like this: Moore's Law, divorced from Gordon Moore's actual words, isn't Moore's Law at all. Changing the definition of Moore's Law turns it from a trustworthy scientific statement into a mealy-mouthed marketing term. Such criticisms aren't without merit. Like clock speed, core counts, transistor densities, and benchmark results, Moore's Law, in any form, is subject to distortion. I'm sympathetic to this argument; when I've called Moore's Law dead in the past, it is this original definition I've been referring to.

One criticism of this perspective, however, is that the extra layers of fudge were added a long time ago. Gordon Moore's original paper wasn't published in The New York Times for public consumption; it was a technical document meant to predict the long-term trend of observed phenomena. Modern foundries remain focused on improving density and cutting the cost per transistor (as much as is possible). But the meaning of Moore's Law quickly shifted from a simple statement about cost and density trend lines to an overarching trend that governed nearly every aspect of computing.

Even this overarching trend began to change in 2005, without any undue help from marketing departments. At first, both Intel and AMD focused on adding more cores, but this required additional support from software vendors and performance tools. More recently, both companies have focused on improving power efficiency and cutting idle power to better fit into mobile power envelopes. Intel and AMD have done amazing work pulling down idle power consumption at the platform level, but full-load CPU power consumption has fallen much more slowly and maximum CPU temperatures have skyrocketed. We now tolerate full-load temperatures of 80-95°C, compared to max temperatures of 60-70°C less than a decade ago. CPU manufacturers and foundries deserve credit for building chips that can tolerate these higher temperatures, but those changes were made because the Dennard Scaling that underlay what Dr. Mack calls Moore's Law 2.0 had already failed.

Transistor scaling continued long after IPC and clock speed had essentially flatlined.

Even an engineering-minded person can appreciate that each shift in the definition of Moore's Law accompanied a profound shift in the nature of cutting-edge compute capability. Moore's Law 1.0 gave us the mainframe and the minicomputer. Moore's Law 2.0's emphasis on per-transistor performance and cost scaling ushered in the era of the microcomputer in both its desktop and laptop incarnations. Moore's Law 3.0, with its focus on platform-level costs and total system integration, has given us the smartphone, the tablet, and the nascent wearables industry.

Twenty years ago, the pace of Moore's Law stood for faster transistors and higher clock speeds. Now it serves as shorthand for better battery life, higher boost frequencies, quicker returns to idle (0W is, in some sense, the new 1GHz), sharper screens, thinner form factors, and, yes, higher overall performance in some cases, albeit not improving as quickly as most of us would like. It endures as a concept because it stands for something much larger than the performance of a transistor or the electrical characteristics of a gate.

After 50 years, Moore's Law has become cultural shorthand for innovation itself. When Intel, or Nvidia, or Samsung refer to Moore's Law in this context, they're referring to the continuous application of decades of knowledge and ingenuity across hundreds of products. It's a way of acknowledging the tremendous collaboration that continues to occur from the fab line to the living room, the result of painstaking research aimed at bringing a platform's capabilities a little more in line with what users want. Is that marketing? You bet. But it's not just marketing.

Moore's Law is dead. Long live Moore's Law.

Read the original:

Moore’s Law is dead, long live Moore’s Law – ExtremeTech

02002-02052 (50 years): Moore’s Law, which has defined a …

Moore’s Law, Gordon Moore’s visionary prediction of continued exponential growth in semi-conductor performance, has provided the engine for innovation and the constantly increasing (and accelerating) power and resources at continually decreasing costs provided by techhnology.

Moore admits that Moore's Law has turned out to be more accurate, longer lasting and deeper in impact than he ever imagined. In fact, it has been Intel engineers, frustrated by an inability to see clearly more than 8 to 10 years into the future of their own technology, who have been the most conservative in estimating the lifespan of Moore's Law, partly because they have been the most conservative in defining it. They continue to focus on increasing the transistor count on silicon as the main driver of Moore's Law, and thus announce that it may slow or even stop by the end of the next decade, as transistors approach atomic dimensions.

Moore’s Law, however, was never a physical law. It began as an observation, that became a prediction, that has now been dismissed as a “self-fulfilling prophecy”.

However you choose to describe it, Moore's Law has always functioned as an expression of breathtaking (almost rash) optimism and as a pacesetting mechanism, informed by scientific observation, commercial competitiveness and human ingenuity, asserting that we can and should continually and exponentially improve our power to provide capability and opportunity for humankind, thus continuing to provide better, more efficient and less costly technologies.

This continued (and in fact unstoppable) flow of increased performance, power and new value has transformed vast

The world has broadened its definition of Moore's Law as our understanding of physics, materials and complexity deepens and becomes more intimate. Recently Intel suggested that an “Expanded Moore's Law” is no longer driven solely by transistor count but by the combination of three factors. The first is the traditional increase in the count of components we can put on a chip. The second is an increase in the complexity of those components. The third is an increase in the convergence of technologies we implement on a chip.

Intel and its competitors continue to leverage and balance these factors as needed to keep producing the by-now-expected-and-required doubling of performance with every new generation of technology.

(Those who go back and read Moore's original article that appeared in the April 1965 issue of Electronics magazine will notice that Moore always used the word “components”, and even today tends to talk about increasing the complexity of components, rather than focusing solely on the number of transistors on a chip.)

At a certain point, you can choose to define a chip as a network all on its own, and as such subject to Metcalfe's Law. Metcalfe's Law may in fact prove to be one of the most important enablers of the continued growth of semiconductor performance. (I use the term M², Moore times Metcalfe, to represent this additional factor.)
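The author gives no formula for this, but as a purely illustrative sketch one could compound Moore-style doubling of on-chip component counts with a Metcalfe-style value term proportional to n²; every name and number below is invented for the example:

```python
# Hypothetical "M squared" sketch: Moore-style growth in component count n,
# compounded with a Metcalfe-style value term proportional to n squared.

def m_squared(years, n0=1_000, doubling_period=2):
    n = n0 * 2 ** (years / doubling_period)  # Moore: component count doubles
    value = n ** 2                           # Metcalfe: value grows as n^2
    return n, value

for years in (0, 10, 20):
    n, value = m_squared(years)
    print(f"year {years}: {n:,.0f} components, relative value {value:,.0f}")
```

Under these assumptions, each doubling of component count quadruples the Metcalfe-style value term.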

Many scientists, including those who attended a recent science summit at DARPA, believe the exponential increase in benefits defined by Moore’s Law will neither cease nor slow in the foreseeable future.

The source of those benefits may alter, but the value of Moore's Law has now, as Moore originally hoped when he first made his famous observation, begun an unstoppable expansion beyond traditional computational spaces, one that will eventually assure new capabilities, as well as increased performance, lower cost and greater connectivity, for virtually every traditional device and service, and eventually the universal availability of transformative improvements.

It is Moore’s Law (arguably in combination with Metcalfe’s Law) which is helping us invent and extend our future. We need it to keep going. And for the reasons described above, I believe it will — certainly for the next five decades. This is the basis and the passion behind my bet.

See the rest here:

02002-02052 (50 years): Moore’s Law, which has defined a …

Gordon Moore – Wikipedia

Gordon Earle Moore (born January 3, 1929) is an American businessman, co-founder and Chairman Emeritus of Intel Corporation, and the author of Moore’s law.[3][4][5][6][7] As of January 2015, his net worth is $6.7 billion.[2]

Moore was born in San Francisco, California, and grew up in nearby Pescadero. He attended Sequoia High School in Redwood City. Initially he went to San Jose State University.[8] After two years he transferred to the University of California, Berkeley, from which he received a Bachelor of Science degree in chemistry in 1950.[9]

In September 1950, Moore matriculated at the California Institute of Technology (Caltech).[10] Moore received a Ph.D.[11] in chemistry and a minor in physics from Caltech in 1954.[9][12] Moore conducted postdoctoral research at the Applied Physics Laboratory at Johns Hopkins University from 1953 to 1956.[9]

Moore met his future wife, Betty Irene Whitaker, while attending San Jose State University.[10] Gordon and Betty were married September 9, 1950,[13] and left the next day to move to the California Institute of Technology. The couple have two sons, Kenneth and Steven.[14]

Moore joined MIT and Caltech alumnus William Shockley at the Shockley Semiconductor Laboratory division of Beckman Instruments, but left with the “traitorous eight”, when Sherman Fairchild agreed to back them and created the influential Fairchild Semiconductor corporation.[15][16]

In 1965, Gordon E. Moore was working as the director of research and development (R&D) at Fairchild Semiconductor. He was asked by Electronics Magazine to predict what was going to happen in the semiconductor components industry over the next ten years. In an article published on April 19, 1965, Moore observed that the number of components (transistors, resistors, diodes or capacitors)[17] in a dense integrated circuit had doubled approximately every year, and speculated that it would continue to do so for at least the next ten years. In 1975, he revised the forecast rate to approximately every two years.[18] Carver Mead popularized the phrase “Moore's law.” The prediction has become a target for miniaturization in the semiconductor industry, and has had widespread impact in many areas of technological change.[3][16]

In July 1968, Robert Noyce and Moore founded NM Electronics which later became Intel Corporation.[19][20] Moore served as Executive Vice President until 1975 when he became President. In April 1979, Moore became Chairman of the Board and Chief Executive Officer, holding that position until April 1987, when he became Chairman of the Board. He was named Chairman Emeritus of Intel Corporation in 1997.[21] Under Noyce, Moore, and later Andrew Grove, Intel has pioneered new technologies in the areas of computer memory, integrated circuits and microprocessor design.[20]

In 2000 Betty and Gordon Moore established the Gordon and Betty Moore Foundation, with a gift worth about $5 billion. Through the Foundation, they initially targeted environmental conservation, science, and the San Francisco Bay Area.[22]

The foundation gives extensively in the area of environmental conservation, supporting major projects in the Andes-Amazon Basin and the San Francisco Bay area, among others.[23] Moore was a director of Conservation International for some years. In 2002 he and Conservation International Senior Vice President Claude Gascon received the Order of the Golden Ark from His Royal Highness Prince Bernhard of Lippe-Biesterfeld for their outstanding contributions to nature conservation.[24]

Moore has been a member of Caltech's board of trustees since 1983, chairing it from 1993 to 2000, and is now a life trustee.[25][26][27] In 2001, Moore and his wife donated $600 million to Caltech, the largest gift ever to an institution of higher education.[28] He said that he wants the gift to be used to keep Caltech at the forefront of research and technology.[22]

On December 6, 2007, Gordon Moore and his wife donated $200 million to Caltech and the University of California for the construction of the Thirty Meter Telescope, the world's second-largest optical telescope. The telescope will have a mirror 30 meters across and be built on Mauna Kea in Hawaii. This is nearly three times the size of the current record holder, the Large Binocular Telescope.[29]

In addition, through the Foundation, Betty Moore has created the Betty Irene Moore Nursing Initiative, targeting nursing care in the San Francisco Bay Area and Greater Sacramento.[22][30]

In 2009, the Moores received the Andrew Carnegie Medal of Philanthropy.[22][31]

Gordon Moore has received many honors. He became a member of the National Academy of Engineering in 1976.[32]

In 1990, Moore was presented with the National Medal of Technology and Innovation by President George H.W. Bush, “for his seminal leadership in bringing American industry the two major postwar innovations in microelectronics – large-scale integrated memory and the microprocessor – that have fueled the information revolution.”[33]

In 1998 he was inducted as a Fellow of the Computer History Museum “for his fundamental early work in the design and production of semiconductor devices as co-founder of Fairchild and Intel.”[34]

In 2001, Moore received the Othmer Gold Medal for outstanding contributions to progress in chemistry and science.[35][36]

Moore is also a recipient of the Presidential Medal of Freedom, the United States' highest civilian honor, which he received from President George W. Bush in 2002.[37] In 2002, Moore also received the Bower Award for Business Leadership.

In 2003, he was elected a Fellow of the American Association for the Advancement of Science.

Moore was awarded the 2008 IEEE Medal of Honor for “pioneering technical roles in integrated-circuit processing, and leadership in the development of MOS memory, the microprocessor computer and the semiconductor industry.”[38] Moore was featured in the documentary film Something Ventured which premiered in 2011.

In 2009, Moore was inducted into the National Inventors Hall of Fame.

He was awarded the 2010 Dan David Prize, in its “Future” time dimension, for his work in the areas of Computers and Telecommunications.[39]

The library at the Centre for Mathematical Sciences at the University of Cambridge is named after him and his wife Betty,[40] as are the Moore Laboratories building (dedicated 1996) at Caltech and the Gordon and Betty Moore Materials Research Building at Stanford.

The Electrochemical Society presents an award in Moore's name, the Gordon E. Moore Medal for Outstanding Achievement in Solid State Science and Technology, every two years to celebrate scientists' contributions to the field of solid state science.[41] The Society of Chemical Industry (American Section) annually presents the Gordon E. Moore Medal in his honor to recognize early career success in innovation in the chemical industries.[42][43]

Moore actively pursues and enjoys any type of fishing and has extensively traveled the world catching species from black marlin to rainbow trout. He has said his conservation efforts are partly inspired by his interest in fishing.[44]

In 2011, Moore’s genome was the first human genome sequenced on Ion Torrent’s Personal Genome Machine platform, a massively parallel sequencing device. Ion Torrent’s device obtains sequence information by directly sensing ions produced by DNA polymerase synthesis using ion-sensitive field effect transistor sensors.[45]

Originally posted here:

Gordon Moore – Wikipedia

