

Home | TOP500 Supercomputer Sites

AMD Posts Transitional First Quarter Ahead of Rome Launch

On the eve of its 50th anniversary, Advanced Micro Devices (AMD) reported sales of $1.27 billion for Q1 2019, down 10 percent quarter-over-quarter and 23 percent year-over-year. Despite the drop, revenue came in above Wall Street's expectations, and AMD is continuing its push to win back datacenter market share ceded to Intel over the last […]

The post AMD Posts Transitional First Quarter Ahead of Rome Launch appeared first on HPCwire.

TAIPEI, Taiwan, May 2, 2019 - Computer and server manufacturer Inventec Enterprise Business Group (Inventec EBG) today announced the release of its P47G4 server solution, optimized for AMD deep learning technologies. The P47G4 server is one of four optimized server solutions and features a 2U, single-socket system equipped with AMD EPYC processors and up to four AMD Radeon Instinct […]

The post Inventec Collaborates with AMD to Provide Deep Learning Solutions appeared first on HPCwire.

One reason China has a good chance of hitting its ambitious goal to reach exascale computing in 2020 is that the government is funding three separate architectural paths to attain that milestone.

China Fleshes Out Exascale Design for Tianhe-3 Supercomputer was written by Michael Feldman.

Over at the IBM Blog, Rahil Garnavi writes that IBM researchers have developed new techniques in deep learning that could help unlock earlier glaucoma detection. “Earlier detection of glaucoma is critical to slowing its progression in individuals and its rise across our global population. Using deep learning to uncover valuable information in non-invasive, standard retina imaging could lay the groundwork for new and much more rapid glaucoma testing.”

The post IBM Research Applies Deep Learning for Detecting Glaucoma appeared first on insideHPC.

Researchers at the University of Pittsburgh are using XSEDE supercomputing resources to develop new materials that can capture carbon dioxide and turn it into commercially useful substances. With global climate change resulting from increasing levels of carbon dioxide in the Earth’s atmosphere, the work could have a lasting impact on our environment. “The basic idea here is that we are looking to improve the overall energetics of CO2 capture and conversion to some useful material, as opposed to putting it in the ground and just storing it someplace,” said Karl Johnson from the University of Pittsburgh. “But capture and conversion are typically different processes.”

The post Pitt Researchers using HPC to turn CO2 into Useful Products appeared first on insideHPC.

Field Programmable Gate Arrays (FPGAs) have notched some noticeable wins as a platform for machine learning, Microsoft's embrace of the technology in Azure being the most notable example.

FPGAs Open Gates in Machine Learning was written by Michael Feldman.

See the original post:

Home | TOP500 Supercomputer Sites

Super-computer | Article about Super-computer by The Free …

A computer which, among existing general-purpose computers at any given time, is superlative, often in several senses: highest computation rate, largest memory, or highest cost. Predominantly, the term refers to the fastest number crunchers, that is, machines designed to perform numerical calculations at the highest speed that the latest electronic device technology and the state of the art of computer architecture allow.

The demand for the ability to execute arithmetic operations at the highest possible rate originated in computer applications areas collectively referred to as scientific computing. Large-scale numerical simulations of physical processes are often needed in fields such as physics, structural mechanics, meteorology, and aerodynamics. A common technique is to compute an approximate numerical solution to a set of partial differential equations which mathematically describe the physical process of interest but are too complex to be solved by formal mathematical methods. This solution is obtained by first superimposing a grid on a region of space, with a set of numerical values attached to each grid point. Large-scale scientific computations of this type often require hundreds of thousands of grid points with 10 or more values attached to each point, with 10 to 500 arithmetic operations necessary to compute each updated value, and hundreds of thousands of time steps over which the computation must be repeated before a steady-state solution is reached. See Computational fluid dynamics, Numerical analysis, Simulation
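
To make the grid-based approach concrete, here is a minimal sketch (my own illustration, not from the article) of the kind of update loop such a simulation performs: a 2D grid of values is repeatedly replaced by the average of each point's neighbors (a Jacobi relaxation) until the values stop changing. The grid size, boundary value, and tolerance are illustrative assumptions.

```python
import numpy as np

def jacobi_steady_state(n=100, tol=1e-5, max_steps=100_000):
    """Relax a 2D grid toward a steady state by averaging neighboring points."""
    grid = np.zeros((n, n))
    grid[0, :] = 100.0          # fixed boundary condition (illustrative value)
    for step in range(max_steps):
        new = grid.copy()
        # each interior point becomes the mean of its four neighbors
        new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                  grid[1:-1, :-2] + grid[1:-1, 2:])
        if np.max(np.abs(new - grid)) < tol:   # change small enough: steady state
            return new, step
        grid = new
    return grid, max_steps

solution, steps = jacobi_steady_state()
print(f"reached steady state after {steps} time steps")
```

Real scientific codes differ in the equations solved and the scale, but the structure (grid of values, arithmetic update per point, repeated time steps) is the same, which is why raw arithmetic rate matters so much.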

Two lines of technological advancement have significantly contributed to what roughly amounts to a doubling of the fastest computers’ speeds every year since the early 1950s: the steady improvement in electronic device technology and the accumulation of improvements in the architectural designs of digital computers.

Computers incorporate very large-scale integrated (VLSI) circuits with tens of millions of transistors per chip for both logic and memory components. A variety of types of integrated circuitry is used in contemporary supercomputers. Several use high-speed complementary metal-oxide semiconductor (CMOS) technology. Throughout most of the history of digital computing, supercomputers generally used the highest-performance switching circuitry available at the time, which was usually the most exotic and expensive. However, many supercomputers now use the conventional, inexpensive device technology of commodity microprocessors and rely on massive parallelism for their speed. See Computer storage technology, Concurrent processing, Integrated circuits, Logic circuits

Increases in computing speed which are purely due to the architectural structure of a computer can largely be attributed to the introduction of some form of parallelism into the machine’s design: two or more operations which were performed one after the other in previous computers can now be performed simultaneously. See Computer systems architecture

Pipelining is a technique which allows several operations to be in progress in the central processing unit at once. The first form of pipelining used was instruction pipelining. Since each instruction must have the same basic sequence of steps performed, namely instruction fetch, instruction decode, operand fetch, and execution, it is feasible to construct an instruction pipeline, where each of these steps happens at a separate stage of the pipeline. The efficiency of the instruction pipeline depends on the likelihood that the program being executed allows a steady stream of instructions to be fetched from contiguous locations in memory.
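
As an illustration only (not drawn from the article), the following sketch simulates the four stages named above and shows how, once the pipeline fills, one instruction completes per cycle even though each individual instruction still passes through every stage.

```python
STAGES = ["fetch", "decode", "operand fetch", "execute"]

def pipeline_schedule(instructions):
    """Return schedule[cycle] = {stage: instruction occupying it that cycle}."""
    depth = len(STAGES)
    total_cycles = len(instructions) + depth - 1
    schedule = []
    for cycle in range(total_cycles):
        active = {}
        for s, stage in enumerate(STAGES):
            i = cycle - s                      # instruction index in this stage
            if 0 <= i < len(instructions):
                active[stage] = instructions[i]
        schedule.append(active)
    return schedule

for cycle, active in enumerate(pipeline_schedule(["I0", "I1", "I2", "I3", "I4"])):
    print(cycle, active)
# After the first three cycles the pipeline is full, and one instruction
# leaves the execute stage on every subsequent cycle.
```

A branch or a non-contiguous fetch would leave bubbles in this table, which is the efficiency caveat the paragraph above describes.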

The central processing unit nearly always has a much faster cycle time than the memory. This implies that the central processing unit is capable of processing data items faster than a memory unit can provide them. Interleaved memory is an organization of memory units which at least partially relieves this problem.
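
A minimal sketch of the idea (assumed details, not from the text): with low-order interleaving, consecutive addresses land in different banks, so a sequential stream of requests can keep several banks busy at once instead of waiting on a single one.

```python
NUM_BANKS = 4   # illustrative bank count

def bank_of(address, num_banks=NUM_BANKS):
    """Low-order interleaving: consecutive addresses rotate across banks."""
    return address % num_banks

# A sequential access pattern touches every bank in turn, so while one bank
# is still completing its slow memory cycle, the next request can already
# start in a different bank.
for addr in range(8):
    print(f"address {addr} -> bank {bank_of(addr)}")
```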

Parallelism within arithmetic and logical circuitry has been introduced in several ways. Adders, multipliers, and dividers now operate in bit-parallel mode, while the earliest machines performed bit-serial arithmetic. Independently operating parallel functional units within the central processing unit can each perform an arithmetic operation such as add, multiply, or shift. Array processing is a form of parallelism in which the instruction execution portion of a central processing unit is replicated several times and connected to its own memory device as well as to a common instruction interpretation and control unit. In this way, a single instruction can be executed at the same time on each of several execution units, each on a different set of operands. This kind of architecture is often referred to as single-instruction stream, multiple-data stream (SIMD).

Vector processing is the term applied to a form of pipelined arithmetic units which are specialized for performing arithmetic operations on vectors, which are uniform, linear arrays of data values. It can be thought of as a type of SIMD processing, since a single instruction invokes the execution of the same operation on every element of the array. See Computer programming, Programming languages
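
The SIMD/vector idea can be sketched with NumPy (an illustrative stand-in; real array and vector units do this in hardware): a single expression applies the same operation to every element of an array, in contrast to a scalar loop that handles one element per step.

```python
import numpy as np

a = np.arange(100_000, dtype=np.float64)
b = np.ones_like(a)

# Scalar style: one add per element, issued one after another.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] + b[i]

# Vector/SIMD style: one "instruction" (expression) operates on every element.
c_vector = a + b

assert np.array_equal(c_scalar, c_vector)
```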

A central processing unit can contain multiple sets of the instruction execution hardware for either scalar or vector instructions. The task of scheduling instructions which can correctly execute in parallel with one another is generally the responsibility of the compiler or special scheduling hardware in the central processing unit. Instruction-level parallelism is almost never visible to the application programmer.

Multiprocessing is a form of parallelism that has complete central processing units operating in parallel, each fetching and executing instructions independently from the others. This type of computer organization is called multiple-instruction stream, multiple-data stream (MIMD). See Multiprocessing
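
A rough sketch of the MIMD style using Python's multiprocessing module (chosen here only for illustration; a real machine would use many separate processors or nodes): each worker is a complete interpreter fetching and executing its own instructions on its own data, independently of the others.

```python
from multiprocessing import Pool

def simulate_region(region_id):
    """Each process runs this independently on its own piece of the problem."""
    total = sum(i * i for i in range(region_id * 100_000))
    return region_id, total

if __name__ == "__main__":
    with Pool(processes=4) as pool:               # four independent workers
        for region, result in pool.map(simulate_region, range(1, 5)):
            print(f"region {region}: {result}")
```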

Read more:

Super-computer | Article about Super-computer by The Free …

Supercomputer – Simple English Wikipedia, the free …

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include: shared memory, distributed memory and array. Supercomputers with shared memory are developed by using a parallel computing and pipelining concept. Supercomputers with distributed memory consist of many (about 100 to 10,000) nodes. The CRAY series of CRAY RESEARCH, the VP 2400/40, and the NEC SX-3 of HUCIS are shared memory types. The nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S, and CM-5 are distributed memory types.
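
In the distributed-memory style, nodes have no common memory and must exchange data explicitly as messages. A minimal message-passing sketch using mpi4py (an illustrative choice, not mentioned in the text; it requires an MPI runtime to launch the processes) looks like this:

```python
# Launch with an MPI runner, e.g.: mpiexec -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # each process (node) has its own rank and its own memory

if rank == 0:
    data = {"values": [1, 2, 3]}
    comm.send(data, dest=1, tag=11)      # nothing is shared: data travels as a message
elif rank == 1:
    data = comm.recv(source=0, tag=11)   # the receiving node gets its own copy
    print("rank 1 received", data)
```

A shared-memory machine, by contrast, would let both workers read and write the same array directly, which is simpler to program but harder to scale to thousands of nodes.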

An array type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (which is also an array type) were developed. Supercomputers that use a physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.

Organizations

Centers

Read more from the original source:

Supercomputer – Simple English Wikipedia, the free …

Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More

Cryptocurrency News
On the whole, cryptocurrency prices are down from our previous report on cryptos, with the market slipping on news of an exchange being hacked and a report about Bitcoin manipulation.

However, there have been two bright spots: 1) an official from the U.S. Securities and Exchange Commission (SEC) said that Ethereum is not a security, and 2) Coinbase is expanding its selection of tokens.

Let’s start with the good news.
SEC Says ETH Is Not a Security
Investors have some reason to cheer this week. A high-ranking SEC official told attendees of the Yahoo! All Markets Summit: Crypto that Ethereum and Bitcoin are not securities.

The post Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More appeared first on Profit Confidential.

Read more from the original source:

Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More

Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its.

The post Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More appeared first on Profit Confidential.

Read more from the original source:

Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Cryptocurrency News: Looking Past the Bithumb Crypto Hack

Another Crypto Hack Derails Recovery
Since our last report, hackers broke into yet another cryptocurrency exchange. This time the target was Bithumb, a Korean exchange known for high-flying prices and ultra-active traders.

While the hackers made off with approximately $31.5 million in funds, the exchange is working with relevant authorities to return the stolen tokens to their respective owners. In the event that some of the funds are still missing, the exchange will cover the losses. (Source: “Bithumb Working With Other Crypto Exchanges to Recover Hacked Funds,”.

The post Cryptocurrency News: Looking Past the Bithumb Crypto Hack appeared first on Profit Confidential.

Read the original here:

Cryptocurrency News: Looking Past the Bithumb Crypto Hack

Cryptocurrency News: Bitcoin ETFs, Andreessen Horowitz, and Contradictions in Crypto

Cryptocurrency News
This was a bloody week for cryptocurrencies. Everything was covered in red, from Ethereum (ETH) on down to the Basic Attention Token (BAT).

Some investors claim it was inevitable. Others say that price manipulation is to blame.

We think the answers are more complicated than either side has to offer, because our research reveals deep contradictions between the price of cryptos and the underlying development of blockchain projects.

For instance, a leading venture capital (VC) firm launched a $300.0-million crypto investment fund, yet liquidity continues to dry up in crypto markets.

Another example is the U.S. Securities and Exchange Commission’s.

The post Cryptocurrency News: Bitcoin ETFs, Andreessen Horowitz, and Contradictions in Crypto appeared first on Profit Confidential.

View original post here:

Cryptocurrency News: Bitcoin ETFs, Andreessen Horowitz, and Contradictions in Crypto

Cryptocurrency News: What You Need to Know This Week

Cryptocurrency News
Cryptocurrencies traded sideways since our last report on cryptos. However, I noticed something interesting when playing around with Yahoo! Finance’s cryptocurrency screener: There are profitable pockets in this market.

Incidentally, Yahoo’s screener is far superior to the one on CoinMarketCap, so if you’re looking to compare digital assets, I highly recommend it.

But let’s get back to my epiphany.

In the last month, at one point or another, most crypto assets on our favorites list saw double-digit increases. It’s true that each upswing was followed by a hard crash, but investors who rode the trend would have made a.

The post Cryptocurrency News: What You Need to Know This Week appeared first on Profit Confidential.

Read more:

Cryptocurrency News: What You Need to Know This Week

Cryptocurrency News: XRP Validators, Malta, and Practical Tokens

Cryptocurrency News & Market Summary
Investors finally saw some light at the end of the tunnel last week, with cryptos soaring across the board. No one quite knows what kicked off the rally—as it could have been any of the stories we discuss below—but the net result was positive.

Of course, prices won’t stay on this rocket ride forever. I expect to see a resurgence of volatility in short order, because the market is moving as a single unit. Everything is rising in tandem.

This tells me that investors are simply “buying the dip” rather than identifying which cryptos have enough real-world value to outlive the crash.

So if you want to know when.

The post Cryptocurrency News: XRP Validators, Malta, and Practical Tokens appeared first on Profit Confidential.

Originally posted here:

Cryptocurrency News: XRP Validators, Malta, and Practical Tokens

Cryptocurrency News: Vitalik Buterin Doesn’t Care About Bitcoin ETFs

Cryptocurrency News
While headline numbers look devastating this week, investors might take some solace in knowing that cryptocurrencies found their bottom at roughly $189.8 billion in market cap—that was the low point. Since then, investors put more than $20.0 billion back into the market.

During the rout, Ethereum broke below $300.00 and XRP fell below $0.30, marking yearly lows for both tokens. The same was true down the list of the top 100 biggest cryptos.

Altcoins took the brunt of the hit. BTC Dominance, which reveals how tightly investment is concentrated in Bitcoin, rose from 42.62% to 53.27% in just one month, showing that investors either fled altcoins at higher.

The post Cryptocurrency News: Vitalik Buterin Doesn’t Care About Bitcoin ETFs appeared first on Profit Confidential.

Read more:

Cryptocurrency News: Vitalik Buterin Doesn’t Care About Bitcoin ETFs

Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity

Cryptocurrency News
Even though the cryptocurrency news was upbeat in recent days, the market tumbled after the U.S. Securities and Exchange Commission (SEC) rejected calls for a Bitcoin (BTC) exchange-traded fund (ETF).

That news came as a blow to investors, many of whom believe the ETF would open the cryptocurrency industry up to pension funds and other institutional investors. This would create a massive tailwind for cryptos, they say.

So it only follows that a rejection of the Bitcoin ETF should send cryptos tumbling, correct? Well, maybe you can follow that logic. To me, it seems like a dramatic overreaction.

I understand that legitimizing cryptos is important. But.

The post Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity appeared first on Profit Confidential.

More:

Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity

Cryptocurrency News: Bitcoin ETF Rejection, AMD Microchip Sales, and Hedge Funds

Cryptocurrency News
Although cryptocurrency prices were heating up last week (Bitcoin, especially), regulators poured cold water on the rally by rejecting calls for a Bitcoin exchange-traded fund (ETF). This is the second time that the proposal fell on deaf ears. (More on that below.)

Crypto mining ran into similar trouble, as you can see from Advanced Micro Devices, Inc.‘s (NASDAQ:AMD) most recent quarterly earnings. However, it wasn’t all bad news. Investors should, for instance, be cheering the fact that hedge funds are ramping up their involvement in cryptocurrency markets.

Without further ado, here are those stories in greater detail.
ETF Rejection.

The post Cryptocurrency News: Bitcoin ETF Rejection, AMD Microchip Sales, and Hedge Funds appeared first on Profit Confidential.

Here is the original post:

Cryptocurrency News: Bitcoin ETF Rejection, AMD Microchip Sales, and Hedge Funds

Bitcoin Rise: Is the Recent Bitcoin Price Surge a Sign of Things to Come or Another Misdirection?

What You Need to Know About the Bitcoin Price Rise
It wasn’t that long ago that Bitcoin (BTC) dominated headlines for its massive growth, with many cryptocurrency millionaires being made. The Bitcoin price surged ever upward and many people thought the gravy train would never stop running—until it did.

Prices crashed, investors abandoned the space, and lots of people lost money. Cut to today and we’re seeing another big Bitcoin price surge; is this time any different?

I’m of a mind that investors ought to think twice before jumping back in on Bitcoin.

Bitcoin made waves when it once again crested above $5,000. Considering that it started 2019 around $3,700,.

The post Bitcoin Rise: Is the Recent Bitcoin Price Surge a Sign of Things to Come or Another Misdirection? appeared first on Profit Confidential.

Read more from the original source:

Bitcoin Rise: Is the Recent Bitcoin Price Surge a Sign of Things to Come or Another Misdirection?


Home | Alabama Supercomputer Authority

The Alabama Supercomputer Authority (ASA) is a state-funded corporation founded in 1989 for the purpose of planning, acquiring, developing, administering and operating a statewide supercomputer and related telecommunication systems.

In addition to High Performance Computing, and with the growth of the internet, ASA developed the Alabama Research and Education Network (AREN), which offers education and research clients in Alabama internet access and other related network services. ASA has further expanded its offerings with state-of-the-art application development services that include custom website design with content management system (CMS) development and custom web-based applications for data-mining, reporting, and other client needs.

Read more from the original source:

Home | Alabama Supercomputer Authority

Lawrence Livermore Labs turns on Sierra supercomputer …

Inside a classified government lab, a machine covering 7,000 square feet, with 240 computing racks and 4,320 nodes, looks like a futuristic mini city of black boxes with flashing blue and green lights.

This buzzing machine, called the Sierra supercomputer, is the third most powerful computer in the world. It was unveiled Friday at its home, the Lawrence Livermore National Laboratory (LLNL) in California, after four years in the making.

At its peak, Sierra can do 125 quadrillion calculations in a second. Its simulations are 100,000 times more realistic than anything a normal desktop computer can make. The only two supercomputers that are more powerful are China’s Sunway Taihulight in second place and IBM’s Summit in first.

“It would take 10 years to do the calculations this machine can do in one second,” said Ian Buck, vice president and general manager of accelerated computing at NVIDIA.
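
As a rough arithmetic check of that comparison (my own back-of-the-envelope figures, not from the article): 125 quadrillion operations per second is 125 × 10^15, and spreading that amount of work over ten years implies a baseline machine doing only a few hundred million operations per second.

```python
peak_ops_per_second = 125e15            # Sierra's stated peak: 125 quadrillion/s
seconds_in_ten_years = 10 * 365 * 24 * 3600
implied_baseline = peak_ops_per_second / seconds_in_ten_years
print(f"{implied_baseline:.2e} ops/s")  # roughly 4e8, i.e. ~400 million ops/s
```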

Powering such a massive electronic brain takes about 11 to 12 megawatts of energy, roughly the equivalent of what’s needed to power 12,000 homes, a relatively energy-efficient level of consumption, according to Sierra’s creators.

Right now, Sierra is partnering with medical labs to help develop cancer treatments and study traumatic brain injury before it switches to classified work.

Many of the 4,000 nuclear weapons in the government’s stockpile are aging. Once the Sierra switches to classified production in early 2019, it will focus on top secret government activities and it will use simulations to test the safety and reliability of these weapons, without setting off the weapons themselves and endangering people.

Besides assessing nuclear weapons, this supercomputer can create simulations to predict the effects of cancer, earthquakes and more. In other words, it can answer questions in 3D.


The lab and the Department of Energy worked with IBM, NVIDIA and Mellanox on this project. Talks for Sierra began in 2012, and in 2014 the project took off. Now, it’s six to ten times more powerful than its predecessor, Sequoia.

What makes Sierra notably different is NVLink, which connects Sierra’s processing units and gives it more powerful memory.

“What’s most fascinating is the scale of what it can do and the nature of the system that opens itself to the next generation workload,” said Akhtar Ali, VP of technical computing software at IBM. “Now these systems will do the kind of breakthrough science that’s pervasive right now.”

The lab also installed another new supercomputer called Lassen, which will focus on unclassified work like speeding cancer drug discovery, research in traumatic brain injury, and studying earthquakes and the climate.

Sierra’s not the last supercomputer the lab will build. They’re already planning the next one: “El Capitan,” which is expected to do more than a quintillion calculations per second, roughly 10 times more powerful than the colossal Sierra.

The lab expects to flip the switch on El Capitan sometime in the 2021 to 2023 time frame.

In case you’re wondering, the supercomputers are all named after natural landmarks in California.

And no, Lawrence Livermore National Laboratories spokesperson Jeremy Thomas says, there are no plans to use the Sierra supercomputer for bitcoin mining.

“While it would probably be great at it, mining bitcoin is definitely not part of our mission,” Thomas says.


See the rest here:

Lawrence Livermore Labs turns on Sierra supercomputer …

EKA (supercomputer) – Wikipedia

EKA is a supercomputer built by the Computational Research Laboratories (a subsidiary of Tata Sons) with technical assistance and hardware provided by Hewlett-Packard.[6]

Eka means the number One in Sanskrit.[4]

EKA uses 14,352[2] cores based on Intel quad-core Xeon processors. The primary interconnect is InfiniBand 4x DDR. EKA occupies an area of about 4,000 square feet (370 m²).[7] It was built using off-the-shelf components from Hewlett-Packard, Mellanox and Voltaire Ltd.[2] It was built within a short period of 6 weeks.[7]

At the time of its unveiling, it was the fourth-fastest supercomputer in the world and the fastest in Asia.[7] As of 16 September 2011, it was ranked 58th.[5]

More:

EKA (supercomputer) – Wikipedia

History of supercomputing – Wikipedia

The history of supercomputing goes back to the early 1920s in the United States with the IBM tabulators at Columbia University and a series of computers at Control Data Corporation (CDC), designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[1] The CDC 6600, released in 1964, is generally considered the first supercomputer.[2][3] However, some earlier computers were considered supercomputers for their day, such as the 1954 IBM NORC[4], the 1960 UNIVAC LARC[5], and the IBM 7030 Stretch[6] and the Atlas, both in 1962.

While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.

By the end of the 20th century, massively parallel supercomputers with thousands of “off-the-shelf” processors similar to those found in personal computers were constructed and broke through the teraflop computational barrier.

Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaflop performance levels.

The term “Super Computing” was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.

In 1957, a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, Minnesota. Seymour Cray left Sperry a year later to join his colleagues at CDC.[1] In 1960, Cray completed the CDC 1604, one of the first solid-state computers, and the fastest computer in the world at a time when vacuum tubes were found in most large computers.[7]

Around 1960, Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation, along with Jim Thornton, Dean Roush and about 30 other engineers, Cray completed the CDC 6600 in 1964. Cray switched from germanium to silicon transistors, built by Fairchild Semiconductor, that used the planar process. These did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and the speed of light restriction forced a very compact design with severe overheating problems, which were solved by introducing refrigeration, designed by Dean Roush.[8] Given that the 6600 outran all computers of the time by about 10 times, it was dubbed a supercomputer and defined the supercomputing market when one hundred computers were sold at $8 million each.[7][9]

The 6600 gained speed by “farming out” work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. The Minnesota FORTRAN compiler for the machine was developed by Liddiard and Mundstock at the University of Minnesota, and with it the 6600 could sustain 500 kiloflops on standard mathematical operations.[10] In 1968, Cray completed the CDC 7600, again the fastest computer in the world.[7] At 36 MHz, the 7600 had about three and a half times the clock speed of the 6600, but ran significantly faster due to other technical innovations. They sold only about 50 of the 7600s, not quite a failure. Cray left CDC in 1972 to form his own company.[7] Two years after his departure CDC delivered the STAR-100, which at 100 megaflops was three times the speed of the 7600. Along with the Texas Instruments ASC, the STAR-100 was one of the first machines to use vector processing, the idea having been inspired around 1964 by the APL programming language.[11][12]

In 1956, a team at Manchester University in the United Kingdom began development of MUSE (a name derived from “microsecond engine”) with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[13] Mu (or µ) is a prefix in the SI and other systems of units denoting a factor of 10⁻⁶ (one millionth).

At the end of 1958, Ferranti agreed to begin to collaborate with Manchester University on the project, and the computer was shortly afterwards renamed Atlas, with the joint venture under the control of Tom Kilburn. The first Atlas was officially commissioned on 7 December 1962, nearly three years before the CDC 6600 supercomputer was introduced, as one of the world’s first supercomputers. It was considered to be the most powerful computer in England and, for a very short time, one of the most powerful computers in the world, equivalent to four IBM 7094s.[14] It was said that whenever England’s Atlas went offline half of the United Kingdom’s computer capacity was lost.[14] The Atlas pioneered the use of virtual memory and paging as a way to extend its working memory by combining its 16,384 words of primary core memory with an additional 96K words of secondary drum memory.[15] Atlas also pioneered the Atlas Supervisor, “considered by many to be the first recognizable modern operating system”.[14]

Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became the most successful supercomputer in history.[12][16] The Cray-1 used integrated circuits with two gates per chip and was a vector processor which introduced a number of innovations, such as chaining, in which scalar and vector registers generate interim results that can be used immediately, without the additional memory references that would otherwise reduce computational speed.[8][17] The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines. All three floating point pipelines on the X-MP could operate simultaneously.[17]

The Cray-2, released in 1985, was a 4-processor, liquid-cooled computer totally immersed in a tank of Fluorinert, which bubbled as it operated.[8] It could perform at 1.9 gigaflops and was the world’s second-fastest supercomputer after M-13 (2.4 gigaflops)[18] until 1990, when the ETA-10G from CDC overtook both. The Cray-2 was a totally new design; it did not use chaining and had a high memory latency, but used much pipelining and was ideal for problems that required large amounts of memory.[17] The software costs in developing a supercomputer should not be underestimated, as evidenced by the fact that in the 1980s the cost for software development at Cray came to equal what was spent on hardware.[19] That trend was partly responsible for a move away from the in-house Cray Operating System to UNICOS, based on Unix.[19]

The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor.[17] In the late 1980s, Cray’s experiment on the use of gallium arsenide semiconductors in the Cray-3 did not succeed. Seymour Roger Cray began to work on a massively parallel computer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.[16][8]

The Cray-2, which set the frontiers of supercomputing in the mid-to-late 1980s, had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.

The SX-3/44R was announced by NEC Corporation in 1989 and a year later earned the fastest-in-the-world title with a 4-processor model.[20] However, Fujitsu’s Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994. It had a peak speed of 1.7 gigaflops per processor.[21][22] The Hitachi SR2201, on the other hand, obtained a peak performance of 600 gigaflops in 1996 by using 2,048 processors connected via a fast three-dimensional crossbar network.[23][24][25]

In the same timeframe the Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[26] By 1995 Cray was also shipping massively parallel systems, e.g. the Cray T3E with over 2,000 processors, using a three-dimensional torus interconnect.[27][28]

The Paragon architecture soon led to the Intel ASCI Red supercomputer in the United States, which held the top supercomputing spot to the end of the 20th century as part of the Advanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively-parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but used off-the-shelf Pentium Pro processors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpack benchmark in 1996, eventually reaching 2 teraflops.[29]

Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. The Cray C90 used 500 kilowatts of power in 1991, while by 2003 the ASCI Q used 3,000 kW while being 2,000 times faster, increasing the performance per watt roughly 300-fold (a 2,000-fold speedup for a six-fold increase in power).[30]

In 2004, the Earth Simulator supercomputer built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietary vector processors.[31]

The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on the TOP500 list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption so that a larger number of processors can be used at air cooled temperatures. It can use over 60,000 processors, with 2048 processors “per rack”, and connects them via a three-dimensional torus interconnect.[32][33]

Progress in China has been rapid, in that China placed 51st on the TOP500 list in June 2003, then 14th in November 2003, 10th in June 2004, and 5th during 2005, before gaining the top spot in 2010 with the 2.5-petaflop Tianhe-I supercomputer.[34][35]

In July 2011, the 8.1-petaflop Japanese K computer became the fastest in the world, using over 60,000 SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer is over 60 times faster than the Earth Simulator, and that the Earth Simulator ranks as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide.[36][37][38] By 2014, the Earth Simulator had dropped off the list, and by 2018 the K computer had dropped out of the top 10.

This is a list of the computers which appeared at the top of the Top500 list since 1993.[39] The “Peak speed” is given as the “Rmax” rating.

[Chart: combined performance of the 500 largest supercomputers, the fastest supercomputer, and the supercomputer in 500th place, over time.]

The CoCom and its later replacement, the Wassenaar Arrangement, legally regulated (requiring licensing, approval, and record-keeping) or banned entirely the export of high-performance computers (HPCs) to certain countries. Such controls have become harder to justify, leading to the loosening of these regulations. Some have argued these regulations were never justified.[40][41][42][43][44][45]

Read the original:

History of supercomputing – Wikipedia

Stock Exchanges to Fire Company Building Stock-Market …

WASHINGTON - Stock exchanges intend to fire the contractor they hired to build a data warehouse for all U.S. stock-market activity, the latest sign of trouble for a project designed to detect trading fraud and causes behind wild swings in prices.

People familiar with the matter said the exchanges have lost confidence in Thesys Technologies LLC, a startup hired in 2017 to build the repository, known as the Consolidated Audit Trail. The Securities and Exchange Commission told the exchanges to create the database so it would have…

See the article here:

Stock Exchanges to Fire Company Building Stock-Market …

