

Supercomputer – Wikipedia

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). As of 2017, there are supercomputers which can perform up to nearly a hundred quadrillion FLOPS,[3] i.e. performance measured in petaFLOPS (PFLOPS).[4] As of November 2017, all of the world's 500 fastest supercomputers run Linux-based operating systems.[5] Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful and more technologically advanced exascale supercomputers.[6]

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.[7]

Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), at Cray Research, and at subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran faster than their more general-purpose contemporaries. Through the 1960s they began to add increasing amounts of parallelism, with one to four processors being typical. From the 1970s, the vector computing concept, with specialized math units operating on large arrays of data, came to dominate; a notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. Since then, massively parallel supercomputers with tens of thousands of off-the-shelf processors have been the norm.[8][9]

The US has long been a leader in the supercomputer field, first through Cray's almost uninterrupted dominance, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, but since then China has become increasingly active. As of June 2016, the fastest supercomputer on the TOP500 list is the Sunway TaihuLight, in China, with a LINPACK benchmark score of 93 PFLOPS, exceeding the previous record holder, Tianhe-2, by around 59 PFLOPS. Sunway TaihuLight's emergence is also notable for its use of indigenous chips; it is the first Chinese computer to enter the TOP500 list without using hardware from the United States. As of June 2016, China, for the first time, had more computers (167) on the TOP500 list than the United States (165). However, US-built computers held ten of the top 20 positions;[10][11] as of November 2017, the U.S. has four of the top 10 and China has two.

The history of supercomputing goes back to the 1960s, with the Atlas at the University of Manchester, the IBM 7030 Stretch and a series of computers at Control Data Corporation (CDC), designed by Seymour Cray. These used innovative designs and parallelism to achieve superior computational peak performance.[12]

The Atlas was a joint venture between Ferranti and the University of Manchester and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[13] The first Atlas was officially commissioned on 7 December 1962 as one of the world's first supercomputers; it was considered the most powerful computer in the world at that time by a considerable margin, equivalent to four IBM 7094s.[14]

The CDC 6600, which Cray designed and which was released in 1964, switched from germanium to silicon transistors. Silicon transistors could run faster, and the overheating problem this introduced was solved with refrigeration,[15] which helped make the 6600 the fastest computer in the world. Given that it outperformed all other contemporary computers by about ten times, it was dubbed a supercomputer and defined the supercomputing market; about one hundred computers were sold at $8 million each.[16][17][18][19]

Cray left CDC in 1972 to form his own company, Cray Research.[17] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[20][21] The Cray-2, released in 1985, was an eight-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaFLOPS and was the world's second fastest, after the M-13 supercomputer in Moscow.[22]

In 1982, Osaka University’s LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics.[23]

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear in Japan and the United States, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor.[24][25] The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[26][27][28] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[29]

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s.

Early supercomputer architectures pioneered by Seymour Cray relied on compact designs and local parallelism to achieve superior computational performance.[12] Cray had noted that increasing processor speeds did little if the rest of the system did not also improve; the CPU would end up waiting longer for data to arrive from the offboard storage units. The CDC 6600, the first mass-produced supercomputer, solved this problem by providing ten simple computers whose only purpose was to read and write data to and from main memory, allowing the CPU to concentrate solely on processing the data. This made both the main CPU and the ten “PPU” units much simpler. As such, they were physically smaller and reduced the amount of wiring between the various parts. This reduced the electrical signaling delays and allowed the system to run at a higher clock speed. The 6600 outperformed all other machines by an average of 10 times when it was introduced.

The CDC 6600's spot as the fastest computer was eventually taken by its successor, the CDC 7600. This design was very similar to the 6600 in general organization but added instruction pipelining to further improve performance. Generally speaking, every computer instruction requires several steps to process: first the instruction is read from memory, then any required data it refers to is read, then the instruction is processed, and finally the results are written back out to memory. Each of these steps is normally accomplished by separate circuitry. In most early computers, including the 6600, each of these steps runs in turn, and while any one unit is active, the hardware handling the other parts of the process is idle. In the 7600, as soon as one instruction cleared a particular unit, that unit began processing the next instruction. Although each instruction takes the same time to complete, parts of several instructions are being processed at the same time, offering much-improved overall performance. This, combined with further packaging improvements and improvements in the electronics, made the 7600 about four to ten times as fast as the 6600.
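To make the pipelining arithmetic concrete, here is a small Python sketch of the idea; the stage and instruction counts are illustrative only, not figures from the 6600 or 7600.

```python
# Toy model of pipelining: each instruction still passes through every
# stage, but a new instruction can enter the pipeline each cycle.

def cycles_sequential(instructions: int, stages: int) -> int:
    # 6600-style: an instruction fully completes before the next one starts.
    return instructions * stages

def cycles_pipelined(instructions: int, stages: int) -> int:
    # 7600-style: fill the pipeline once, then one instruction finishes
    # every cycle after that.
    return stages + (instructions - 1)

n, s = 1000, 4  # hypothetical workload: 1000 instructions, 4 stages
print(cycles_sequential(n, s))  # 4000 cycles
print(cycles_pipelined(n, s))   # 1003 cycles, roughly a 4x speedup
```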

The 7600 was intended to be replaced by the CDC 8600, which was essentially four 7600s in a small box. However, this design ran into intractable problems and was eventually canceled in 1974 in favor of another CDC design, the CDC STAR-100. The STAR was essentially a simplified and slower version of the 7600, but it was combined with new circuits that could rapidly process sequences of math instructions. The basic idea was similar to the pipeline in the 7600 but geared entirely toward math, and in theory much faster. In practice, the STAR proved to have poor real-world performance, and ultimately only two or three were built.

Cray, meanwhile, had left CDC and formed his own company. Considering the problems with the STAR, he designed an improved version of the same basic concept but replaced the STAR’s memory-based vectors with ones that ran in large registers. Combining this with his famous packaging improvements produced the Cray-1. This completely outperformed every computer in the world, save one, and would ultimately sell about 80 units, making it one of the most successful supercomputer systems in history. Through the 1970s, 80s, and 90s a series of machines from Cray further improved on these basic concepts.

The basic concept of using a pipeline dedicated to processing large data units became known as vector processing and came to dominate the supercomputer field. A number of Japanese firms also entered the field, producing similar concepts in much smaller machines. Three main lines were produced by these companies: the Fujitsu VP, Hitachi HITAC and NEC SX series, all announced in the early 1980s and updated continually into the 1990s. CDC attempted to re-enter this market with the ETA10 but this was not very successful. Convex Computer took another route, introducing a series of much smaller vector machines aimed at smaller businesses.

The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, compared to the Cray-1's 1970s peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate faster than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.

But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that “If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?”[30] But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.[31]

Software development remained a problem, but the CM series sparked considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics processing units to the mix.[8][9]

Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[32][33][34] The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[35] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures.[36][37]

Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organised as distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[38] In another approach, a large number of processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects.[39][40] The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.[41][42]

As the price, performance and energy efficiency of general-purpose graphics processors (GPGPUs) have improved,[43] a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them.[44] However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate: while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent tuning the application towards it.[45][46] Nevertheless, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.[47][48][49]

High-performance computers have an expected life cycle of about three years before requiring an upgrade.[50]

A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, achieving better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle,[51] Deep Blue,[52] and Hydra[53] for playing chess, Gravity Pipe for astrophysics,[54] MDGRAPE-3 for protein structure computation via molecular dynamics,[55] and Deep Crack for breaking the DES cipher.[56]

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity.[57] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour, or about $3.5 million per year.
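The cost figures quoted above follow from straightforward unit arithmetic; a minimal Python sketch (the function name is ours, for illustration):

```python
# Back-of-envelope power cost: megawatts -> kilowatts, times price per kWh.
def power_cost(megawatts: float, usd_per_kwh: float):
    hourly = megawatts * 1000.0 * usd_per_kwh  # 4 MW at $0.10/kWh -> $400/hour
    yearly = hourly * 24 * 365                 # -> about $3.5 million per year
    return hourly, yearly

print(power_cost(4.0, 0.10))  # (400.0, 3504000.0)
```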

Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways.[58] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[59][60][61]

The packing of thousands of processors together inevitably generates significant heat density that needs to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[36] However, the submerged liquid cooling approach was not practical for multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[37]

In the Blue Gene system, IBM deliberately used low power processors to deal with heat density.[62] The IBM Power 775, released in 2011, has closely packed elements that require water cooling.[63] The IBM Aquasar system uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.[64][65]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, IBM's Roadrunner operated at 3.76 MFLOPS/W.[66][67] In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W.[68][69] In June 2011 the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2,097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1,375 MFLOPS/W.[70]
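For reference, the "FLOPS per watt" metric is just sustained performance divided by power draw; the sketch below uses hypothetical numbers, not figures for any machine named above.

```python
# FLOPS per watt, reported in MFLOPS/W as on the Green 500 list.
def mflops_per_watt(sustained_flops: float, power_watts: float) -> float:
    return sustained_flops / 1e6 / power_watts

# Hypothetical system: 1 PFLOPS sustained on a 500 kW power budget.
print(mflops_per_watt(1e15, 500_000))  # 2000.0 MFLOPS/W
```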

Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat,[71] the ability of the cooling systems to remove waste heat is a limiting factor.[72][73] As of 2015, many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine; designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited: the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.[74]

Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture.[75] While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux.[76]

Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux-derivative on server and I/O nodes.[77][78][79]

While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.[80]

Although most modern supercomputers use the Linux operating system, each manufacturer has its own specific Linux-derivative, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.[75][81]

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf.

In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL.
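As a concrete taste of the message-passing style described above, here is a minimal sketch using mpi4py, the common Python bindings for MPI (assumed installed); each process computes a partial sum and rank 0 recombines the results. An OpenMP or CUDA version would express the same decomposition with threads or GPU kernels instead of processes.

```python
# Minimal MPI example: run with e.g. `mpiexec -n 4 python partial_sums.py`.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id, 0..size-1
size = comm.Get_size()   # total number of processes

# Each process sums a strided slice of the data, then the partial
# results are recombined at rank 0 via a reduction over the interconnect.
local_sum = sum(range(rank, 1000, size))
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(total)  # 499500, the same as sum(range(1000)) on one machine
```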

Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.

Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing-scale performance. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations.

The fastest grid computing system is the distributed computing project Folding@home (F@h). As of October 2016, F@h reported 101 PFLOPS of x86 processing power. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.[83]

The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of distributed computing projects. As of February 2017, BOINC recorded a processing power of over 166 petaFLOPS through over 762 thousand active computers (hosts) on the network.[84]

As of October 2016, the Great Internet Mersenne Prime Search (GIMPS) achieved about 0.313 PFLOPS of distributed Mersenne prime searching through over 1.3 million computers.[85] GIMPS's Internet PrimeNet Server has supported its grid computing approach, one of the earliest and most successful[citation needed] grid computing projects, since 1997.

Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked, geographically dispersed computers performs computing tasks that demand huge processing power.[86] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries and data pre-conditioning.[86]

Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of HPC users and developers in recent years. Cloud computing attempts to provide HPC-as-a-Service exactly like other forms of services currently available in the cloud, such as Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service. HPC users may benefit from the cloud in different ways, such as scalability, on-demand resources, speed, and low cost. On the other hand, moving HPC applications to the cloud presents a set of challenges, too. Good examples of such challenges are virtualization overhead, multi-tenancy of resources, and network latency issues. Much research[87][88][89][90] is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.

Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g., a very complex weather simulation application.[91]

Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems.[91] Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[91]

In general, the speed of supercomputers is measured and benchmarked in FLOPS (floating-point operations per second), and not in terms of MIPS (million instructions per second), as is the case with general-purpose computers.[92] These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand TFLOPS (10^12 FLOPS, pronounced "teraflops"), or peta-, combined into the shorthand PFLOPS (10^15 FLOPS, pronounced "petaflops"). "Petascale" supercomputers can process one quadrillion (10^15) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range; an EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS).
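The prefix shorthand reduces to powers of ten; a tiny Python sketch of the conversions just described:

```python
# SI-prefix shorthand for FLOPS figures.
PREFIXES = {"GFLOPS": 1e9, "TFLOPS": 1e12, "PFLOPS": 1e15, "EFLOPS": 1e18}

def to_flops(value: float, unit: str) -> float:
    return value * PREFIXES[unit]

print(to_flops(93.01, "PFLOPS"))                      # 9.301e+16 raw FLOPS
print(to_flops(1, "EFLOPS") / to_flops(1, "TFLOPS"))  # 1e6: one million TFLOPS
```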

No single number can reflect the overall performance of a computer system, yet the goal of the LINPACK benchmark is to approximate how fast the computer solves numerical problems, and it is widely used in the industry.[93] The FLOPS measurement is either quoted as the theoretical floating-point performance of a processor (derived from the manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or as the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list.[94] The LINPACK benchmark typically performs LU decomposition of a large matrix.[95] LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which may, for example, require more memory bandwidth, better integer computing performance, or a high-performance I/O system to achieve high levels of performance.[93]
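As a rough sketch of what an Rmax-style figure means, the snippet below times an LU factorization with NumPy/SciPy (assumed available) and converts the standard ~(2/3)n^3 operation count into a FLOPS rate; the real LINPACK/HPL benchmark is far more involved.

```python
# Time an LU decomposition and estimate the achieved floating-point rate.
import time
import numpy as np
from scipy.linalg import lu_factor

n = 2000
a = np.random.rand(n, n)

start = time.perf_counter()
lu, piv = lu_factor(a)        # LU decomposition with partial pivoting
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n ** 3  # approximate operation count for LU
print(f"~{flops / elapsed / 1e9:.1f} GFLOPS")
```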

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the “fastest” supercomputer available at any given time.

This is a recent list of the computers which appeared at the top of the TOP500 list,[96] and the “Peak speed” is given as the “Rmax” rating.

Source: TOP500

The stages of supercomputer application may be summarized in the following table:

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat’s brain.[103]

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[104]

In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM’s abandonment of the Blue Waters petascale project.[105]

The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.[106]

Given the current speed of progress, industry experts estimate that supercomputers will reach 1 EFLOPS (10^18, 1,000 PFLOPS, or one quintillion FLOPS) by 2018. The Chinese government in particular is pushing to achieve this goal after fielding Tianhe-2, the most powerful supercomputer in the world from 2013. Using the Intel MIC multi-core processor architecture, which is Intel's response to GPU systems, SGI also plans to achieve a 500-fold increase in performance by 2018 in order to reach one EFLOPS. Samples of MIC chips with 32 cores, which combine vector processing units with a standard CPU, have become available.[107] The Indian government has also stated ambitions for an EFLOPS-range supercomputer, which they hope to complete by 2017.[108] In November 2014, it was reported that India is working on the fastest supercomputer ever, which is set to work at 132 EFLOPS.[109]

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaFLOPS (10^21, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[110][111][112] Such systems might be built around 2030.[113]

Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; in particular, integro-differential equations describing physical transport processes: the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacturing process.[114]
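The structure that makes Monte Carlo methods so amenable to this kind of hardware is visible even in a toy example: every sample runs the same algorithm on independent random data. A minimal Python sketch (estimating pi rather than particle transport):

```python
# Monte Carlo estimate of pi: count random points falling inside the
# unit quarter-circle. Every sample is independent, so the loop can be
# split across as many processors (or identical chip layers) as desired.
import random

def estimate_pi(samples: int) -> float:
    hits = sum(
        1 for _ in range(samples)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

print(estimate_pi(1_000_000))  # ~3.14
```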

High-performance supercomputers usually require large amounts of energy as well. However, Iceland may be a benchmark for the future with the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavik, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computing.[115]

Many science-fiction writers have depicted supercomputers in their works, both before and after the historical construction of such computers. Much of such fiction deals with the relations of humans with the computers they build and with the possibility of conflict eventually developing between them. Some scenarios of this nature appear on the AI-takeover page.

Examples of supercomputers in fiction include HAL-9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict and Vulcan’s Hammer.

See original here:

Supercomputer – Wikipedia

TOP500 – Official Site

Michael Feldman | May 11, 2018 10:19 CEST: At the Google I/O conference this week, CEO Sundar Pichai announced TPU 3.0, the third iteration of the company's Tensor Processing Unit, a custom-built processor for machine learning.

Michael Feldman | May 9, 2018 11:04 CEST: The National Supercomputer Centre (NSC) at Linköping University is gearing up to deploy a four-petaflop ClusterVision system, which will make it the most powerful supercomputer in Scandinavia.

Michael Feldman | May 8, 2018 11:00 CEST: Cavium has released the ThunderX2 processor for general availability, paving the way for the first generation of ARM-powered high performance computing.

Michael Feldman | May 8, 2018 07:29 CEST: At the Microsoft Build conference on Monday, the company kicked off a new cloud offering that would provide machine learning resources to cloud customers using Intel FPGA-accelerated servers.

Michael Feldman | May 4, 2018 09:08 CEST: Italian multinational Eni is putting its new HPC4 supercomputer to good use, using all 3,200 of the system's GPUs to run 100,000 oil reservoir simulations in record time.

Michael Feldman | May 2, 2018 11:29 CEST: Dell EMC has launched the PowerEdge R840 and R940xa, two new four-socket servers that offer GPU and FPGA coprocessors for accelerating machine learning, analytics, and other data-intensive workloads.

Michael Feldman | May 2, 2018 03:36 CEST: The Pawsey Supercomputing Centre announced that the Australian government is investing $70 million in the center to replace its aging supercomputers.

Michael Feldman | April 25, 2018 09:42 CEST: Research university KU Leuven has installed a new HPE supercomputer designed to run artificial intelligence workloads.

Michael Feldman | April 23, 2018 10:12 CEST: Fujitsu has performed a massive upgrade to RIKEN's RAIDEN supercomputer using NVIDIA DGX-1 servers outfitted with the latest V100 Tesla GPUs.

Michael Feldman | April 20, 2018 07:25 CEST: The Jülich Supercomputing Centre (JSC) has installed the first module of JUWELS, a supercomputer that will succeed JUQUEEN as the center's premier HPC system and pave the way for future exascale machines.

The rest is here:

TOP500 – Official Site

What is Supercomputer? Webopedia Definition


By Vangie Beal

The fastest type of computer. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculations. For example, weather forecasting requires a supercomputer. Other uses of supercomputers include animated graphics, fluid dynamic calculations, nuclear energy research, and petroleum exploration.

The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently.


Read more here:

What is Supercomputer? Webopedia Definition

What is supercomputer? – Definition from WhatIs.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of about $60 million today. It could handle three million floating-point operations per second (FLOPS).

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at that time. IBM's Watson is famous for having used cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.

Year | Supercomputer | Peak speed (Rmax) | Location
2016 | Sunway TaihuLight | 93.01 PFLOPS | Wuxi, China
2013 | NUDT Tianhe-2 | 33.86 PFLOPS | Guangzhou, China
2012 | Cray Titan | 17.59 PFLOPS | Oak Ridge, U.S.
2012 | IBM Sequoia | 17.17 PFLOPS | Livermore, U.S.
2011 | Fujitsu K computer | 10.51 PFLOPS | Kobe, Japan
2010 | Tianhe-IA | 2.566 PFLOPS | Tianjin, China
2009 | Cray Jaguar | 1.759 PFLOPS | Oak Ridge, U.S.
2008 | IBM Roadrunner | 1.026 PFLOPS (1.105 PFLOPS after upgrade) | Los Alamos, U.S.
In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.

Read the original here:

What is supercomputer? – Definition from WhatIs.com

Hedonism – Wikipedia

Hedonism is a school of thought that argues that pleasure and happiness are the primary or most important intrinsic goods and the aim of human life.[1] A hedonist strives to maximize net pleasure (pleasure minus pain); however, upon finally gaining that pleasure, happiness may remain stationary.

Ethical hedonism is the idea that all people have the right to do everything in their power to achieve the greatest amount of pleasure possible to them. It is also the idea that every person’s pleasure should far surpass their amount of pain. Ethical hedonism is said to have been started by Aristippus of Cyrene, a student of Socrates. He held the idea that pleasure is the highest good.[2]

The name derives from the Greek word for "delight" (ἡδονισμός hēdonismos, from ἡδονή hēdonē "pleasure", cognate[according to whom?] with English sweet, plus the suffix -ισμός -ismos "ism"). An extremely strong aversion to hedonism is hedonophobia.

In the original Old Babylonian version of the Epic of Gilgamesh, which was written soon after the invention of writing, Siduri gave the following advice: "Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night […] These things alone are the concern of men", which may represent the first recorded advocacy of a hedonistic philosophy.[3]

Scenes of a harper entertaining guests at a feast were common in ancient Egyptian tombs (see Harper’s Songs), and sometimes contained hedonistic elements, calling guests to submit to pleasure because they cannot be sure that they will be rewarded for good with a blissful afterlife. The following is a song attributed to the reign of one of the pharaohs around the time of the 12th dynasty, and the text was used in the eighteenth and nineteenth dynasties.[4][5]

Let thy desire flourish,
In order to let thy heart forget the beatifications for thee.
Follow thy desire, as long as thou shalt live.
Put myrrh upon thy head and clothing of fine linen upon thee,
Being anointed with genuine marvels of the gods' property.
Set an increase to thy good things;
Let not thy heart flag.
Follow thy desire and thy good.
Fulfill thy needs upon earth, after the command of thy heart,
Until there come for thee that day of mourning.

Democritus seems to be the earliest philosopher on record to have categorically embraced a hedonistic philosophy; he called the supreme goal of life “contentment” or “cheerfulness”, claiming that “joy and sorrow are the distinguishing mark of things beneficial and harmful” (DK 68 B 188).[6]

The Cyrenaics were an ultra-hedonist Greek school of philosophy founded in the 4th century BC, supposedly by Aristippus of Cyrene, although many of the principles of the school are believed to have been formalized by his grandson of the same name, Aristippus the Younger. The school was so called after Cyrene, the birthplace of Aristippus. It was one of the earliest Socratic schools. The Cyrenaics taught that the only intrinsic good is pleasure, which meant not just the absence of pain, but positively enjoyable sensations. Of these, momentary pleasures, especially physical ones, are stronger than those of anticipation or memory. They did, however, recognize the value of social obligation, and that pleasure could be gained from altruism[citation needed]. Theodorus the Atheist, a disciple of the younger Aristippus,[7] was a later exponent of hedonism and became well known for expounding atheism. The school died out within a century, and was replaced by Epicureanism.

The Cyrenaics were known for their skeptical theory of knowledge. They reduced logic to a basic doctrine concerning the criterion of truth.[8] They thought that we can know with certainty our immediate sense-experiences (for instance, that I am having a sweet sensation now) but can know nothing about the nature of the objects that cause these sensations (for instance, that the honey is sweet).[9] They also denied that we can have knowledge of what the experiences of other people are like.[10] All knowledge is immediate sensation. These sensations are motions which are purely subjective, and are painful, indifferent or pleasant, according as they are violent, tranquil or gentle.[9][11] Further they are entirely individual, and can in no way be described as constituting absolute objective knowledge. Feeling, therefore, is the only possible criterion of knowledge and of conduct.[9] Our ways of being affected are alone knowable. Thus the sole aim for everyone should be pleasure.

Cyrenaicism deduces a single, universal aim for all people which is pleasure. Furthermore, all feeling is momentary and homogeneous. It follows that past and future pleasure have no real existence for us, and that among present pleasures there is no distinction of kind.[11] Socrates had spoken of the higher pleasures of the intellect; the Cyrenaics denied the validity of this distinction and said that bodily pleasures, being more simple and more intense, were preferable.[12] Momentary pleasure, preferably of a physical kind, is the only good for humans. However some actions which give immediate pleasure can create more than their equivalent of pain. The wise person should be in control of pleasures rather than be enslaved to them, otherwise pain will result, and this requires judgement to evaluate the different pleasures of life.[13] Regard should be paid to law and custom, because even though these things have no intrinsic value on their own, violating them will lead to unpleasant penalties being imposed by others.[12] Likewise, friendship and justice are useful because of the pleasure they provide.[12] Thus the Cyrenaics believed in the hedonistic value of social obligation and altruistic behaviour.

Epicureanism is a system of philosophy based upon the teachings of Epicurus (c. 341 – c. 270 BC), founded around 307 BC. Epicurus was an atomic materialist, following in the steps of Democritus and Leucippus. His materialism led him to a general stance against superstition or the idea of divine intervention. Following Aristippus (about whom very little is known), Epicurus believed that the greatest good was to seek modest, sustainable "pleasure" in the form of a state of tranquility and freedom from fear (ataraxia) and absence of bodily pain (aponia) through knowledge of the workings of the world and the limits of our desires. The combination of these two states is supposed to constitute happiness in its highest form. Although Epicureanism is a form of hedonism, insofar as it declares pleasure as the sole intrinsic good, its conception of absence of pain as the greatest pleasure and its advocacy of a simple life make it different from "hedonism" as it is commonly understood.

In the Epicurean view, the highest pleasure (tranquility and freedom from fear) was obtained by knowledge, friendship and living a virtuous and temperate life. He lauded the enjoyment of simple pleasures, by which he meant abstaining from bodily desires, such as sex and appetites, verging on asceticism. He argued that when eating, one should not eat too richly, for it could lead to dissatisfaction later, such as the grim realization that one could not afford such delicacies in the future. Likewise, sex could lead to increased lust and dissatisfaction with the sexual partner. Epicurus did not articulate a broad system of social ethics that has survived but had a unique version of the Golden Rule.

It is impossible to live a pleasant life without living wisely and well and justly (agreeing “neither to harm nor be harmed”),[14] and it is impossible to live wisely and well and justly without living a pleasant life.[15]

Epicureanism was originally a challenge to Platonism, though later it became the main opponent of Stoicism. Epicurus and his followers shunned politics. After the death of Epicurus, his school was headed by Hermarchus; later many Epicurean societies flourished in the Late Hellenistic era and during the Roman era (such as those in Antiochia, Alexandria, Rhodes and Ercolano). The poet Lucretius is its best-known Roman proponent. By the end of the Roman Empire, having undergone Christian attack and repression, Epicureanism had all but died out; it would be resurrected in the 17th century by the atomist Pierre Gassendi, who adapted it to Christian doctrine.

Some writings by Epicurus have survived. Some scholars consider the epic poem On the Nature of Things by Lucretius to present in one unified work the core arguments and theories of Epicureanism. Many of the papyrus scrolls unearthed at the Villa of the Papyri at Herculaneum are Epicurean texts. At least some are thought to have belonged to the Epicurean Philodemus.

Yangism has been described as a form of psychological and ethical egoism. The Yangist philosophers believed in the importance of maintaining self-interest through “keeping one’s nature intact, protecting one’s uniqueness, and not letting the body be tied by other things.” Disagreeing with the Confucian virtues of li (propriety), ren (humaneness), and yi (righteousness) and the Legalist virtue of fa (law), the Yangists saw wei wo, or “everything for myself,” as the only virtue necessary for self-cultivation. Individual pleasure is considered desirable, like in hedonism, but not at the expense of the health of the individual. The Yangists saw individual well-being as the prime purpose of life, and considered anything that hindered that well-being immoral and unnecessary.

The main focus of the Yangists was on the concept of xing, or human nature, a term later incorporated by Mencius into Confucianism. The xing, according to sinologist A. C. Graham, is a person’s “proper course of development” in life. Individuals can only rationally care for their own xing, and should not naively have to support the xing of other people, even if it means opposing the emperor. In this sense, Yangism is a “direct attack” on Confucianism, by implying that the power of the emperor, defended in Confucianism, is baseless and destructive, and that state intervention is morally flawed.

The Confucian philosopher Mencius depicts Yangism as the direct opposite of Mohism, while Mohism promotes the idea of universal love and impartial caring, the Yangists acted only “for themselves,” rejecting the altruism of Mohism. He criticized the Yangists as selfish, ignoring the duty of serving the public and caring only for personal concerns. Mencius saw Confucianism as the “Middle Way” between Mohism and Yangism.

Judaism believes that mankind was created for pleasure, as God placed Adam and Eve in the Garden of Eden, Eden being the Hebrew word for "pleasure." In recent years, Rabbi Noah Weinberg articulated five different levels of pleasure; connecting with God is the highest possible pleasure.

Christian hedonism is a Christian doctrine current in some evangelical circles, particularly those of the Reformed tradition.[16] The term was first coined by Reformed Baptist theologian John Piper in his 1986 book Desiring God: "My shortest summary of it is: God is most glorified in us when we are most satisfied in him. Or: The chief end of man is to glorify God by enjoying him forever. Does Christian Hedonism make a god out of pleasure? No. It says that we all make a god out of what we take most pleasure in."[16] Piper states his term may describe the theology of Jonathan Edwards, who referred to "a future enjoyment of him [God] in heaven."[17] In the 17th century, the atomist Pierre Gassendi adapted Epicureanism to Christian doctrine.

The concept of hedonism is also found in the Hindu scriptures.[18][19]

Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of society.[20] It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th- and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism, as a view as to what is good for people, to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham's and Mill's versions of hedonism differ, and there are two somewhat basic schools of thought on hedonism:[1] quantitative hedonism, associated with Bentham, on which pleasures differ only in dimensions such as intensity and duration, and qualitative hedonism, associated with Mill, on which some ("higher") pleasures are intrinsically more valuable than others.

Contemporary proponents of hedonism include the Swedish philosopher Torbjörn Tännsjö,[21] Fred Feldman,[22] and the Spanish ethical philosopher Esperanza Guisán (who published a "Hedonist manifesto" in 1990).[23]

A dedicated contemporary hedonist philosopher and writer on the history of hedonistic thought is the Frenchman Michel Onfray. He has written two books directly on the subject (L'invention du plaisir: fragments cyrénaïques[24] and La puissance d'exister: Manifeste hédoniste).[25] He defines hedonism "as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else."[26] Onfray's philosophical project is "to define an ethical hedonism, a joyous utilitarianism, and a generalized aesthetic of sensual materialism that explores how to use the brain's and the body's capacities to their fullest extent, while restoring philosophy to a useful role in art, politics, and everyday life and decisions."[27]

Onfray's works "have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume Counter-history of Philosophy,"[27] of which three have been published. For him, "in opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance (my pleasure at the same time as the pleasure of others) presumes that we approach the subject from different angles: political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical."

For this he has “written books on each of these facets of the same world view.”[28] His philosophy aims for “micro-revolutions”, or “revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values.”[29]

The Abolitionist Society is a transhumanist group calling for the abolition of suffering in all sentient life through the use of advanced biotechnology. Their core philosophy is negative utilitarianism. David Pearce is a theorist of this perspective and he believes and promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative[30] outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.[31] A transhumanist and a vegan,[32] Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild.

In a talk David Pearce gave at the Future of Humanity Institute and at the Charity International ‘Happiness Conference’ he said “Sadly, what won’t abolish suffering, or at least not on its own, is socio-economic reform, or exponential economic growth, or technological progress in the usual sense, or any of the traditional panaceas for solving the world’s ills. Improving the external environment is admirable and important; but such improvement can’t recalibrate our hedonic treadmill above a genetically constrained ceiling. Twin studies confirm there is a [partially] heritable set-point of well-being – or ill-being – around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals. [It’s possible to lower an individual’s hedonic set-point by inflicting prolonged uncontrolled stress; but even this re-set is not as easy as it sounds: suicide-rates typically go down in wartime; and six months after a quadriplegia-inducing accident, studies[citation needed] suggest that we are typically neither more nor less unhappy than we were before the catastrophic event.] Unfortunately, attempts to build an ideal society can’t overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one’s garden. Even if everything that traditional futurists have asked for is delivered – eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc – there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors – or a New Guinea tribesman today – in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress e.g. suicide rates, bear it out. Unenhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations – sadness, anxiety, jealousy, existential angst. Their biology is part of “what it means to be human”. Subjectively unpleasant states of consciousness exist because they were genetically adaptive. Each of our core emotions had a distinct signalling role in our evolutionary past: they tended to promote behaviours that enhanced the inclusive fitness of our genes in the ancestral environment.”[33]

The Russian physicist and philosopher Victor Argonov argues that hedonism is not only a philosophical but also a verifiable scientific hypothesis. In 2014 he suggested "postulates of pleasure principle", confirmation of which would lead to a new scientific discipline, hedodynamics. Hedodynamics would be able to forecast the distant future development of human civilization and even the probable structure and psychology of other rational beings within the universe.[34] In order to build such a theory, science must discover the neural correlate of pleasure: a neurophysiological parameter unambiguously corresponding to the feeling of pleasure (hedonic tone).

According to Argonov, posthumans will be able to reprogram their motivations in an arbitrary manner (to get pleasure from any programmed activity).[35] If the pleasure-principle postulates are true, then the general direction of civilization's development is obvious: maximization of integral happiness in posthuman life (the product of life span and average happiness). Posthumans will avoid constant pleasure stimulation, because it is incompatible with the rational behavior required to prolong life. However, on average, they can become much happier than modern humans.
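Argonov's "integral happiness" can be written as a simple formula; this LaTeX rendering is our paraphrase of the text above, not notation taken from his papers:

```latex
% Integral happiness H over a lifetime of length T, where h(t) is the
% momentary happiness (hedonic tone) and \bar{h} its lifetime average:
\[ H = \int_0^T h(t)\,dt = T \cdot \bar{h} \]
```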

Many other aspects of posthuman society could be predicted by hedodynamics if the neural correlate of pleasure were discovered: for example, the optimal number of individuals, their optimal body size (whether it matters for happiness or not), and the degree of aggression.

Critics of hedonism have objected to its exclusive concentration on pleasure as valuable.

In particular, G. E. Moore offered a thought experiment in criticism of pleasure as the sole bearer of value: he imagined two worlds, one of exceeding beauty and the other a heap of filth. Neither of these worlds will be experienced by anyone. The question, then, is whether it is better for the beautiful world to exist than the heap of filth. In this Moore implied that states of affairs have value beyond conscious pleasure, which he said spoke against the validity of hedonism.[36]

In the Quran, God admonishes mankind not to love worldly pleasures, since they are related to greed and are a source of sinful habits. He also threatens those who prefer the worldly life over the hereafter with Hell.

Those who choose the worldly life and its pleasures will be given proper recompense for their deeds in this life and will not suffer any loss. Such people will receive nothing in the next life except Hell fire. Their deeds will be made devoid of all virtue and their efforts will be in vain.

“Hedonism”. Encyclopædia Britannica (11th ed.). 1911.


Hedonism – Wikipedia

Hedonism | Internet Encyclopedia of Philosophy

The term “hedonism,” from the Greek word hēdonē (ἡδονή) for pleasure, refers to several related theories about what is good for us, how we should behave, and what motivates us to behave in the way that we do. All hedonistic theories identify pleasure and pain as the only important elements of whatever phenomena they are designed to describe. If hedonistic theories identified pleasure and pain as merely two important elements, instead of the only important elements of what they are describing, then they would not be nearly as unpopular as they all are. However, the claim that pleasure and pain are the only things of ultimate importance is what makes hedonism distinctive and philosophically interesting.

Philosophical hedonists tend to focus on hedonistic theories of value, and especially of well-being (the good life for the one living it). As a theory of value, hedonism states that all and only pleasure is intrinsically valuable and all and only pain is intrinsically not valuable. Hedonists usually define pleasure and pain broadly, such that both physical and mental phenomena are included. Thus, a gentle massage and recalling a fond memory are both considered to cause pleasure and stubbing a toe and hearing about the death of a loved one are both considered to cause pain. With pleasure and pain so defined, hedonism as a theory about what is valuable for us is intuitively appealing. Indeed, its appeal is evidenced by the fact that nearly all historical and contemporary treatments of well-being allocate at least some space for discussion of hedonism. Unfortunately for hedonism, the discussions rarely endorse it and some even deplore its focus on pleasure.

This article begins by clarifying the different types of hedonistic theories and the labels they are often given. Then, hedonism’s ancient origins and its subsequent development are reviewed. The majority of this article is concerned with describing the important theoretical divisions within Prudential Hedonism and discussing the major criticisms of these approaches.

When the term “hedonism” is used in modern literature, or by non-philosophers in their everyday talk, its meaning is quite different from the meaning it takes when used in the discussions of philosophers. Non-philosophers tend to think of a hedonist as a person who seeks out pleasure for themselves without any particular regard for their own future well-being or for the well-being of others. According to non-philosophers, then, a stereotypical hedonist is someone who never misses an opportunity to indulge in the pleasures of sex, drugs, and rock ’n’ roll, even if the indulgences are likely to lead to relationship problems, health problems, regrets, or sadness for themselves or others. Philosophers commonly refer to this everyday understanding of hedonism as “Folk Hedonism.” Folk Hedonism is a rough combination of Motivational Hedonism, Hedonistic Egoism, and a reckless lack of foresight.

When philosophers discuss hedonism, they are most likely to be referring to hedonism about value, and especially the slightly more specific theory, hedonism about well-being. Hedonism as a theory about value (best referred to as Value Hedonism) holds that all and only pleasure is intrinsically valuable and all and only pain is intrinsically disvaluable. The term “intrinsically” is an important part of the definition and is best understood in contrast to the term “instrumentally.” Something is intrinsically valuable if it is valuable for its own sake. Pleasure is thought to be intrinsically valuable because, even if it did not lead to any other benefit, it would still be good to experience. Money is an example of an instrumental good; its value for us comes from what we can do with it (what we can buy with it). The fact that a copious amount of money has no value if no one ever sells anything reveals that money lacks intrinsic value. Value Hedonism reduces everything of value to pleasure. For example, a Value Hedonist would explain the instrumental value of money by describing how the things we can buy with money, such as food, shelter, and status-signifying goods, bring us pleasure or help us to avoid pain.

Hedonism as a theory about well-being (best referred to as Prudential Hedonism) is more specific than Value Hedonism because it stipulates what the value is for. Prudential Hedonism holds that all and only pleasure intrinsically makes people’s lives go better for them and all and only pain intrinsically makes their lives go worse for them. Some philosophers replace “people” with “animals” or “sentient creatures,” so as to apply Prudential Hedonism more widely. A good example of this comes from Peter Singer’s work on animals and ethics. Singer questions why some humans can see the intrinsic disvalue in human pain, but do not also accept that it is bad for sentient non-human animals to experience pain.

When Prudential Hedonists claim that happiness is what they value most, they intend happiness to be understood as a preponderance of pleasure over pain. An important distinction between Prudential Hedonism and Folk Hedonism is that Prudential Hedonists usually understand that pursuing pleasure and avoiding pain in the very short-term is not always the best strategy for achieving the best long-term balance of pleasure over pain.

Prudential Hedonism is an integral part of several derivative types of hedonistic theory, all of which have featured prominently in philosophical debates of the past. Since Prudential Hedonism plays this important role, the majority of this article is dedicated to Prudential Hedonism. First, however, the main derivative types of hedonism are briefly discussed.

Motivational Hedonism (more commonly referred to by the less descriptive label, “Psychological Hedonism”) is the theory that the desires to encounter pleasure and to avoid pain guide all of our behavior. Most accounts of Motivational Hedonism include both conscious and unconscious desires for pleasure, but emphasize the latter. Epicurus, William James, Sigmund Freud, Jeremy Bentham, John Stuart Mill, and (on one interpretation) even Charles Darwin have all argued for varieties of Motivational Hedonism. Bentham used the idea to support his theory of Hedonistic Utilitarianism (discussed below). Weak versions of Motivational Hedonism hold that the desires to seek pleasure and avoid pain often or always have some influence on our behavior. Weak versions are generally considered to be uncontroversially true and not especially useful for philosophy.

Philosophers have been more interested in strong accounts of Motivational Hedonism, which hold that all behavior is governed by the desires to encounter pleasure and to avoid pain (and only those desires). Strong accounts of Motivational Hedonism have been used to support some of the normative types of hedonism and to argue against non-hedonistic normative theories. One of the most notable mentions of Motivational Hedonism is Plato’s Ring of Gyges example in The Republic. Plato’s Socrates is discussing with Glaucon how men would react if they were to possess a ring that gives its wearer immense powers, including invisibility. Glaucon believes that a strong version of Motivational Hedonism is true, but Socrates does not. Glaucon asserts that, emboldened with the power provided by the Ring of Gyges, everyone would succumb to the inherent and ubiquitous desire to pursue their own ends at the expense of others. Socrates disagrees, arguing that good people would be able to overcome this desire because of their strong love of justice, fostered through philosophising.

Strong accounts of Motivational Hedonism currently garner very little support for similar reasons. Many examples of seemingly-pain-seeking acts performed out of a sense of duty are well-known, from the soldier who jumps on a grenade to save his comrades to that time you rescued a trapped dog only to be (predictably) bitten in the process. Introspective evidence also weighs against strong accounts of Motivational Hedonism; many of the decisions we make seem to be based on motives other than seeking pleasure and avoiding pain. Given these reasons, the burden of proof is considered to be squarely on the shoulders of anyone wishing to argue for a strong account of Motivational Hedonism.

Value Hedonism, occasionally with assistance from Motivational Hedonism, has been used to argue for specific theories of right action (theories that explain which actions are morally permissible or impermissible and why). The theory that happiness should be pursued (that pleasure should be pursued and pain should be avoided) is referred to as Normative Hedonism and sometimes Ethical Hedonism. There are two major types of Normative Hedonism, Hedonistic Egoism and Hedonistic Utilitarianism. Both types commonly use happiness (defined as pleasure minus pain) as the sole criterion for determining the moral rightness or wrongness of an action. Important variations within each of these two main types specify either the actual resulting happiness (after the act) or the predicted resulting happiness (before the act) as the moral criterion. Although both major types of Normative Hedonism have been accused of being repugnant, Hedonistic Egoism is considered the most offensive.

Hedonistic Egoism is a hedonistic version of egoism, the theory that we should, morally speaking, do whatever is most in our own interests. Hedonistic Egoism is the theory that we ought, morally speaking, to do whatever makes us happiest, that is, whatever provides us with the most net pleasure after pain is subtracted. The most repugnant feature of this theory is that one never has to ascribe any value whatsoever to the consequences for anyone other than oneself. For example, a Hedonistic Egoist who did not feel saddened by theft would be morally required to steal, even from needy orphans (if he thought he could get away with it). Would-be defenders of Hedonistic Egoism often point out that performing acts of theft, murder, treachery and the like would not make them happier overall because of the guilt, the fear of being caught, and the chance of being caught and punished. The would-be defenders tend to surrender, however, when it is pointed out that a Hedonistic Egoist is morally obliged by their own theory to pursue an unusual kind of practical education: a brief and possibly painful training period that reduces their moral emotions of sympathy and guilt. Such an education might be achieved by desensitising over-exposure to, and performance of, torture on innocents. If Hedonistic Egoists underwent such an education, their reduced capacity for sympathy and guilt would allow them to take advantage of any opportunities to perform pleasurable, but normally-guilt-inducing, actions, such as stealing from the poor.

Hedonistic Egoism is very unpopular amongst philosophers, not just for this reason, but also because it suffers from all of the objections that apply to Prudential Hedonism.

Hedonistic Utilitarianism is the theory that the right action is the one that produces (or is most likely to produce) the greatest net happiness for all concerned. Hedonistic Utilitarianism is often considered fairer than Hedonistic Egoism because the happiness of everyone involved (everyone who is affected or likely to be affected) is taken into account and given equal weight. Hedonistic Utilitarians, then, tend to advocate not stealing from needy orphans because to do so would usually leave the orphan far less happy and the (probably better-off) thief only slightly happier (assuming he felt no guilt). Despite treating all individuals equally, Hedonistic Utilitarianism is still seen as objectionable by some because it assigns no intrinsic moral value to justice, friendship, truth, or any of the many other goods that are thought by some to be irreducibly valuable. For example, a Hedonistic Utilitarian would be morally obliged to publicly execute an innocent friend of theirs if doing so was the only way to promote the greatest happiness overall. Although unlikely, such a situation might arise if a child was murdered in a small town and the lack of suspects was causing large-scale inter-ethnic violence. Some philosophers argue that executing an innocent friend is immoral precisely because it ignores the intrinsic values of justice, friendship, and possibly truth.

Hedonistic Utilitarianism is rarely endorsed by philosophers, but mainly because of its reliance on Prudential Hedonism as opposed to its utilitarian element. Non-hedonistic versions of utilitarianism are about as popular as the other leading theories of right action, especially when it is the actions of institutions that are being considered.

Perhaps the earliest written record of hedonism comes from the Cārvāka, an Indian philosophical tradition based on the Bārhaspatya sutras. The Cārvāka persisted for two thousand years (from about 600 B.C.E.). Most notably, the Cārvāka advocated scepticism and Hedonistic Egoism, the view that the right action is the one that brings the actor the most net pleasure. The Cārvāka acknowledged that some pain often accompanied, or was later caused by, sensual pleasure, but held that the pleasure was worth it.

The Cyrenaics, founded by Aristippus (c. 435-356 B.C.E.), were also sceptics and Hedonistic Egoists. Although the paucity of original texts makes it difficult to confidently state all of the justifications for the Cyrenaics’ positions, their overall stance is clear enough. The Cyrenaics believed pleasure was the ultimate good and everyone should pursue all immediate pleasures for themselves. They considered bodily pleasures better than mental pleasures, presumably because they were more vivid or trustworthy. The Cyrenaics also recommended pursuing immediate pleasures and avoiding immediate pains with scant or no regard for future consequences. Their reasoning for this is even less clear, but is most plausibly linked to their sceptical views: perhaps what we can be most sure of in this uncertain existence is our current bodily pleasures.

Epicurus (c. 341-271 B.C.E.), founder of Epicureanism, developed a Normative Hedonism in stark contrast to that of Aristippus. The Epicureanism of Epicurus is also quite the opposite to the common usage of Epicureanism; while we might like to go on a luxurious “Epicurean” holiday packed with fine dining and moderately excessive wining, Epicurus would warn us that we are only setting ourselves up for future pain. For Epicurus, happiness was the complete absence of bodily and especially mental pains, including fear of the Gods and desires for anything other than the bare necessities of life. Even with only the limited excesses of ancient Greece on offer, Epicurus advised his followers to avoid towns, and especially marketplaces, in order to limit the resulting desires for unnecessary things. Once we experience unnecessary pleasures, such as those from sex and rich food, we will then suffer from painful and hard to satisfy desires for more and better of the same. No matter how wealthy we might be, Epicurus would argue, our desires will eventually outstrip our means and interfere with our ability to live tranquil, happy lives. Epicureanism is generally egoistic, in that it encourages everyone to pursue happiness for themselves. However, Epicureans would be unlikely to commit any of the selfish acts we might expect from other egoists because Epicureans train themselves to desire only the very basics, which gives them very little reason to do anything to interfere with the affairs of others.

With the exception of a brief period discussed below, Hedonism has been generally unpopular ever since its ancient beginnings. Although criticisms of the ancient forms of hedonism were many and varied, one in particular was heavily cited. In Philebus, Plato’s Socrates and one of his many foils, Protarchus in this instance, are discussing the role of pleasure in the good life. Socrates asks Protarchus to imagine a life without much pleasure but full of the higher cognitive processes, such as knowledge, forethought and consciousness, and to compare it with a life that is the opposite. Socrates describes this opposite life as having perfect pleasure but the mental life of an oyster, pointing out that the subject of such a life would not be able to appreciate any of the pleasure within it. The harrowing thought of living the pleasurable but unthinking life of an oyster causes Protarchus to abandon his hedonistic argument. The oyster example is now easily avoided by clarifying that pleasure is best understood as being a conscious experience, so any sensation that we are not consciously aware of cannot be pleasure.

Normative and Motivational Hedonism were both at their most popular during the heyday of Empiricism in the 18th and 19th Centuries. Indeed, this is the only period during which any kind of hedonism could be considered popular at all. During this period, two Hedonistic Utilitarians, Jeremy Bentham (1748-1832) and his protégé John Stuart Mill (1806-1873), were particularly influential. Their theories are similar in many ways, but are notably distinct on the nature of pleasure.

Bentham argued for several types of hedonism, including those now referred to as Prudential Hedonism, Hedonistic Utilitarianism, and Motivational Hedonism (although his commitment to strong Motivational Hedonism eventually began to wane). Bentham argued that happiness was the ultimate good and that happiness was pleasure and the absence of pain. He acknowledged the egoistic and hedonistic nature of people’s motivation, but argued that the maximization of collective happiness was the correct criterion for moral behavior. Bentham’s greatest happiness principle states that actions are immoral if they are not the action that appears to maximise the happiness of all the people likely to be affected; only the action that appears to maximise the happiness of all the people likely to be affected is the morally right action.

Bentham devised the greatest happiness principle to justify the legal reforms he also argued for. He understood that he could not conclusively prove that the principle was the correct criterion for morally right action, but also thought that it should be accepted because it was fair and better than existing criteria for evaluating actions and legislation. Bentham thought that his Hedonic Calculus could be applied to situations to see what should, morally speaking, be done in a situation. The Hedonic Calculus is a method of counting the amount of pleasure and pain that would likely be caused by different actions. The Hedonic Calculus required a methodology for measuring pleasure, which in turn required an understanding of the nature of pleasure and specifically what aspects of pleasure were valuable for us.

Bentham’s Hedonic Calculus identifies several aspects of pleasure that contribute to its value, including certainty, propinquity, extent, intensity, and duration. The Hedonic Calculus also makes use of two future-pleasure-or-pain-related aspects of actions: fecundity and purity. Certainty refers to the likelihood that the pleasure or pain will occur. Propinquity refers to how far away (in terms of time) the pleasure or pain is. Fecundity refers to the likelihood of the pleasure or pain leading to more of the same sensation. Purity refers to the likelihood of the pleasure or pain leading to some of the opposite sensation. Extent refers to the number of people the pleasure or pain is likely to affect. Intensity refers to the felt strength of the pleasure or pain. Duration refers to how long the pleasure or pain is felt for. It should be noted that only intensity and duration have intrinsic value for an individual. Certainty, propinquity, fecundity, and purity are all instrumentally valuable for an individual because they affect the likelihood of an individual feeling future pleasure and pain. Extent is not directly valuable for an individual’s well-being because it refers to the likelihood of other people experiencing pleasure or pain.
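
Because the Hedonic Calculus is at bottom an accounting procedure, it can be sketched in code. The sketch below is a minimal illustration under stated assumptions: Bentham prescribed no formula, so the multiplicative scaling by certainty and propinquity, the signed treatment of fecundity and purity, and all of the numbers are modelling choices made here, not his own procedure.

```python
# A minimal sketch of Bentham-style hedonic scoring, for one individual.
# Every modelling choice below is an illustrative assumption, not
# Bentham's own algorithm (he never stated one).

from dataclasses import dataclass
from typing import List

@dataclass
class Episode:
    """One anticipated episode of pleasure (positive) or pain (negative)."""
    intensity: float    # felt strength; sign marks pleasure (+) or pain (-)
    duration: float     # how long the feeling lasts
    certainty: float    # probability the episode occurs, in [0, 1]
    propinquity: float  # discount for temporal distance, in (0, 1]
    fecundity: float    # expected follow-on value of the same kind (signed)
    purity: float       # expected follow-on value of the opposite kind (signed)

def hedonic_score(episodes: List[Episode]) -> float:
    """Intensity x duration carries the intrinsic value; certainty and
    propinquity scale its expected worth; fecundity and purity add the
    expected downstream effects. Extent is omitted because the score
    is computed for a single individual."""
    return sum(
        e.certainty * e.propinquity * (e.intensity * e.duration)
        + e.fecundity + e.purity
        for e in episodes
    )

# Studying for an exam: mild pain now, a likely larger pleasure later.
studying = [
    Episode(intensity=-2, duration=3, certainty=0.9, propinquity=1.0,
            fecundity=0.0, purity=0.0),   # hours of tedium tonight
    Episode(intensity=6, duration=2, certainty=0.7, propinquity=0.8,
            fecundity=1.0, purity=0.0),   # satisfaction of passing
]
print(hedonic_score(studying))  # positive, so studying comes out ahead
```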

Bentham’s inclusion of certainty, propinquity, fecundity, and purity in the Hedonic Calculus helps to differentiate his hedonism from Folk Hedonism. Folk Hedonists rarely consider how likely their actions are to lead to future pleasure or pain, focussing instead on the pursuit of immediate pleasure and the avoidance of immediate pain. So while Folk Hedonists would be unlikely to study for an exam, anyone using Bentham’s Hedonic Calculus would consider the future happiness benefits to themselves (and possibly others) of passing the exam and then promptly begin studying.

Most importantly for Bentham’s Hedonic Calculus, the pleasure from different sources is always measured against these criteria in the same way, that is to say that no additional value is afforded to pleasures from particularly moral, clean, or culturally-sophisticated sources. For example, Bentham held that pleasure from the parlor game push-pin was just as valuable for us as pleasure from music and poetry. Since Bentham’s theory of Prudential Hedonism focuses on the quantity of the pleasure, rather than the source-derived quality of it, it is best described as a type of Quantitative Hedonism.

Bentham’s indifferent stance on the source of pleasures led to others disparaging his hedonism as a “philosophy of swine.” Even his student, John Stuart Mill, questioned whether we should believe that a satisfied pig leads a better life than a dissatisfied human, or that a satisfied fool leads a better life than a dissatisfied Socrates: results that Bentham’s Quantitative Hedonism seems to endorse.

Like Bentham, Mill endorsed the varieties of hedonism now referred to as Prudential Hedonism, Hedonistic Utilitarianism, and Motivational Hedonism. Mill also thought happiness, defined as pleasure and the avoidance of pain, was the highest good. Where Mill’s hedonism differs from Bentham’s is in his understanding of the nature of pleasure. Mill argued that pleasures could vary in quality, being either higher or lower pleasures. Mill employed the distinction between higher and lower pleasures in an attempt to avoid the criticism that his hedonism was just another “philosophy of swine.” Lower pleasures are those associated with the body, which we share with other animals, such as pleasure from quenching thirst or having sex. Higher pleasures are those associated with the mind, which were thought to be unique to humans, such as pleasure from listening to opera, acting virtuously, and philosophising. Mill justified this distinction by arguing that those who have experienced both types of pleasure realise that higher pleasures are much more valuable. He dismissed challenges to this claim by asserting that those who disagreed lacked either the experience of higher pleasures or the capacity for such experiences. For Mill, higher pleasures were not different from lower pleasures by mere degree; they were different in kind. Since Mill’s theory of Prudential Hedonism focuses on the quality of the pleasure, rather than the amount of it, it is best described as a type of Qualitative Hedonism.

George Edward Moore (1873-1958) was instrumental in bringing hedonism’s brief heyday to an end. Moore’s criticisms of hedonism in general, and of Mill’s hedonism in particular, were frequently cited as good reasons to reject hedonism even decades after his death. Indeed, since G. E. Moore, hedonism has been viewed by most philosophers as being an initially intuitive and interesting family of theories, but also one that is flawed on closer inspection. Moore was a pluralist about value and argued persuasively against the Value Hedonists’ central claim that all and only pleasure is the bearer of intrinsic value. Moore’s most damaging objection against hedonism was his heap of filth example. Moore himself thought the heap of filth example thoroughly refuted what he saw as the only potentially viable form of Prudential Hedonism: that conscious pleasure is the only thing that positively contributes to well-being. Moore used the heap of filth example to argue that Prudential Hedonism is false because pleasure is not the only thing of value.

In the heap of filth example, Moore asks the reader to imagine two worlds, one of which is exceedingly beautiful and the other a disgusting heap of filth. Moore then instructs the reader to imagine that no one would ever experience either world and asks if it is better for the beautiful world to exist than the filthy one. As Moore expected, his contemporaries tended to agree that it would be better if the beautiful world existed. Relying on this agreement, Moore infers that the beautiful world is more valuable than the heap of filth and, therefore, that beauty must be valuable. Moore then concluded that all of the potentially viable theories of Prudential Hedonism (those that value only conscious pleasures) must be false because something, namely beauty, is valuable even when no conscious pleasure can be derived from it.

Moore’s heap of filth example has rarely been used to object to Prudential Hedonism since the 1970s because it is not directly relevant to Prudential Hedonism (it evaluates worlds and not lives). Moore’s other objections to Prudential Hedonism also went out of favor around the same time. The demise of these arguments was partly due to mounting objections against them, but mainly because arguments more suited to the task of refuting Prudential Hedonism were developed. These arguments are discussed after the contemporary varieties of hedonism are introduced below.

Several contemporary varieties of hedonism have been defended, although usually by just a handful of philosophers or fewer at any one time. Other varieties of hedonism are also theoretically available but have received little or no discussion. Contemporary varieties of Prudential Hedonism can be grouped based on how they define pleasure and pain, as is done below. In addition to providing different notions of what pleasure and pain are, contemporary varieties of Prudential Hedonism also disagree about what aspect or aspects of pleasure are valuable for well-being (and the opposite for pain).

The most well-known disagreement about what aspects of pleasure are valuable occurs between Quantitative and Qualitative Hedonists. Quantitative Hedonists argue that how valuable pleasure is for well-being depends on only the amount of pleasure, and so they are only concerned with dimensions of pleasure such as duration and intensity. Quantitative Hedonism is often accused of over-valuing animalistic, simple, and debauched pleasures.

Qualitative Hedonists argue that, in addition to the dimensions related to the amount of pleasure, one or more dimensions of quality can have an impact on how pleasure affects well-being. The quality dimensions might be based on how cognitive or bodily the pleasure is (as it was for Mill), the moral status of the source of the pleasure, or some other non-amount-related dimension. Qualitative Hedonism is criticised by some for smuggling values other than pleasure into well-being by misleadingly labelling them as dimensions of pleasure. How these qualities are chosen for inclusion is also criticised by some as arbitrary or ad hoc, because the inclusion of these dimensions of pleasure is often in direct response to objections that Quantitative Hedonism cannot easily deal with. That is to say, the inclusion of these dimensions is often accused of being an exercise in plastering over holes, rather than deducing corollary conclusions from existing theoretical premises. Others have argued that any dimensions of quality can be better explained in terms of dimensions of quantity. For example, they might claim that moral pleasures are no higher in quality than immoral pleasures, but that moral pleasures are instrumentally more valuable because they are likely to lead to more moments of pleasure or fewer moments of pain in the future.

Hedonists also have differing views about how the value of pleasure compares with the value of pain. This is not a practical disagreement about how best to measure pleasure and pain, but rather a theoretical disagreement about comparative value, such as whether pain is worse for us than an equivalent amount of pleasure is good for us. The default position is that one unit of pleasure (sometimes referred to as a Hedon) is equivalent but opposite in value to one unit of pain (sometimes referred to as a Dolor). Several Hedonistic Utilitarians have argued that reduction of pain should be seen as more important than increasing pleasure, sometimes for the Epicurean reason that pain seems worse for us than an equivalent amount of pleasure is good for us. Imagine that a magical genie offers to play a game with you. The game consists of flipping a fair coin. If the coin lands on heads, you immediately feel a burst of very intense pleasure; if it lands on tails, you immediately feel a burst of very intense pain. Is it in your best interests to play the game?
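
The genie’s game can be settled with a line of expected-value arithmetic. Writing h for the intensity of the pleasurable burst and assuming pain counts w times as heavily as an equivalent pleasure (the weighting w is introduced here for illustration), one flip of the fair coin has expected value

\[ E[V] = \tfrac{1}{2}h - \tfrac{1}{2}wh = \tfrac{h}{2}(1 - w). \]

On the default view (w = 1, a Hedon exactly offsets a Dolor) the expectation is zero and playing is a matter of indifference; on the Epicurean-leaning view (w > 1) the expectation is negative and it is against your interests to play.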

Another area of disagreement between some Hedonists is whether pleasure is entirely internal to a person or if it includes external elements. Internalism about pleasure is the thesis that, whatever pleasure is, it is always and only inside a person. Externalism about pleasure, on the other hand, is the thesis that pleasure is more than just a state of an individual (that is, that a necessary component of pleasure lies outside of the individual). Externalists about pleasure might, for example, describe pleasure as a function that mediates between our minds and the environment, such that every instance of pleasure has one or more integral environmental components. The vast majority of historic and contemporary versions of Prudential Hedonism consider pleasure to be an internal mental state.

Perhaps the least known disagreement about what aspects of pleasure make it valuable is the debate about whether we have to be conscious of pleasure for it to be valuable. The standard position is that pleasure is a conscious mental state, or at least that any pleasure a person is not conscious of does not intrinsically improve their well-being.

The most common definition of pleasure is that it is a sensation, something that we identify through our senses or that we feel. Psychologists claim that we have at least ten senses, including the familiar five (sight, hearing, smell, taste, and touch), but also movement, balance, and several sub-senses of touch, including heat, cold, pressure, and pain. New senses get added to the list when it is understood that some independent physical process underpins their functioning. The most widely-used examples of pleasurable sensations are the pleasures of eating, drinking, listening to music, and having sex. Use of these examples has done little to help Hedonism avoid its debauched reputation.

It is also commonly recognised that our senses are physical processes that usually involve a mental component, such as the tickling feeling when someone blows gently on the back of your neck. If a sensation is something we identify through our sense organs, however, it is not entirely clear how to account for abstract pleasures. This is because abstract pleasures, such as a feeling of accomplishment for a job well done, do not seem to be experienced through any of the senses in the standard lists. Some Hedonists have attempted to resolve this problem by arguing for the existence of an independent pleasure sense and by defining sensation as something that we feel (regardless of whether it has been mediated by sense organs).

Most Hedonists who describe pleasure as a sensation will be Quantitative Hedonists and will argue that the pleasure from the different senses is the same. Qualitative Hedonists, in comparison, can use the framework of the senses to help differentiate between qualities of pleasure. For example, a Qualitative Hedonist might argue that pleasurable sensations from touch and movement are always lower quality than the others.

Hedonists have also defined pleasure as intrinsically valuable experience, that is to say any experiences that we find intrinsically valuable either are, or include, instances of pleasure. According to this definition, the reason that listening to music and eating a fine meal are both intrinsically pleasurable is that those experiences include an element of pleasure (along with the other elements specific to each activity, such as the experience of the texture of the food and the melody of the music). By itself, this definition enables Hedonists to make an argument that is close to perfectly circular. Defining pleasure as intrinsically valuable experience and well-being as all and only experiences that are intrinsically valuable allows a Hedonist to all but stipulate that Prudential Hedonism is the correct theory of well-being. Where defining pleasure as intrinsically valuable experience is not circular is in its stipulation that only experiences matter for well-being. Some well-known objections to this idea are discussed below.

Another problem with defining pleasure as intrinsically valuable experience is that the definition does not tell us very much about what pleasure is or how it can be identified. For example, knowing that pleasure is intrinsically valuable experience would not help someone to work out if a particular experience was intrinsically or just instrumentally valuable. Hedonists have attempted to respond to this problem by explaining how to find out whether an experience is intrinsically valuable.

One method is to ask yourself if you would like the experience to continue for its own sake (rather than because of what it might lead to). Wanting an experience to continue for its own sake reveals that you find it to be intrinsically valuable. While still making a coherent theory of well-being, defining intrinsically valuable experiences as those you want to perpetuate makes the theory much less hedonistic. The fact that what a person wants is the main criterion for something having intrinsic value makes this kind of theory more in line with preference satisfaction theories of well-being. The central claim of preference satisfaction theories of well-being is that some variant of getting what one wants, or should want, under certain conditions is the only thing that intrinsically improves one’s well-being.

Another method of fleshing out the definition of pleasure as intrinsically valuable experience is to describe how intrinsically valuable experiences feel. This method remains a hedonistic one, but seems to fall back into defining pleasure as a sensation.

It has also been argued that what makes an experience intrinsically valuable is that you like or enjoy it for its own sake. Hedonists arguing for this definition of pleasure usually take pains to position their definition in between the realms of sensation and preference satisfaction. They argue that since we can like or enjoy some experiences without concurrently wanting them or feeling any particular sensation, liking is distinct from both sensation and preference satisfaction. Liking and enjoyment are also difficult terms to define in more detail, but they are certainly easier to recognise than the rather opaque “intrinsically valuable experience.”

Merely defining pleasure as intrinsically valuable experience and intrinsically valuable experiences as those that we like or enjoy still lacks enough detail to be very useful for contemplating well-being. A potential method for making this theory more useful would be to draw on the cognitive sciences to investigate if there is a specific neurological function for liking or enjoying. Cognitive science has not reached the point where anything definitive can be said about this, but a few neuroscientists have experimental evidence that liking and wanting (at least with regard to food) are neurologically distinct processes in rats and have argued that it should be the same for humans. The same scientists have wondered if the same processes govern all of our liking and wanting, but this question remains unresolved.

Most Hedonists who describe pleasure as intrinsically valuable experience believe that pleasure is internal and conscious. Hedonists who define pleasure in this way may be either Quantitative or Qualitative Hedonists, depending on whether they think that quality is a relevant dimension of how intrinsically valuable we find certain experiences.

One of the most recent developments in modern hedonism is the rise of defining pleasure as a pro-attitude, a positive psychological stance toward some object. Any account of Prudential Hedonism that defines pleasure as a pro-attitude is referred to as Attitudinal Hedonism because it is a person’s attitude that dictates whether anything has intrinsic value. Positive psychological stances include approving of something, thinking it is good, and being pleased about it. The object of the positive psychological stance could be a physical object, such as a painting one is observing, but it could also be a thought, such as “my country is not at war,” or even a sensation. An example of a pro-attitude towards a sensation could be being pleased about the fact that an ice cream tastes so delicious.

Fred Feldman, the leading proponent of Attitudinal Hedonism, argues that the sensation of pleasure has only instrumental value: it only brings about value if you also have a positive psychological stance toward that sensation. In addition to his basic Intrinsic Attitudinal Hedonism, which is a form of Quantitative Hedonism, Feldman has also developed many variants that are types of Qualitative Hedonism. One example is Desert-Adjusted Intrinsic Attitudinal Hedonism, which reduces the intrinsic value a pro-attitude has for our well-being based on the quality of deservedness (that is, on the extent to which the particular object deserves a pro-attitude or not). Desert-Adjusted Intrinsic Attitudinal Hedonism might stipulate, for example, that sensations of pleasure arising from adulterous behavior do not deserve approval, and so assign them no value.
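
The shape of the desert adjustment can be made concrete with a toy formalisation. The function below is an illustration constructed here, not Feldman’s own formalism; the desert factor and its range from 0 (the object deserves no pro-attitude) to 1 (it fully deserves one) are assumptions made for the example.

```python
# A toy model of Desert-Adjusted Intrinsic Attitudinal Hedonism.
# The desert factor and its [0, 1] range are illustrative assumptions,
# not Feldman's own formalism.

def attitudinal_value(intensity: float, desert: float) -> float:
    """Well-being contribution of one pro-attitude.

    intensity: strength of the positive psychological stance.
    desert: extent to which the object deserves that stance, in [0, 1].
    """
    return intensity * desert

# Equally intense pleasure-attitudes toward differently deserving objects:
print(attitudinal_value(5.0, 1.0))  # pleased at a friend's success -> 5.0
print(attitudinal_value(5.0, 0.0))  # pleased at adulterous pleasure -> 0.0
```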

Defining pleasure as a pro-attitude, while maintaining that all sensations of pleasure have no intrinsic value, makes Attitudinal Hedonism less obviously hedonistic than the versions that define pleasure as a sensation. Indeed, defining pleasure as a pro-attitude runs the risk of creating a preference satisfaction account of well-being, because being pleased about something without feeling any pleasure seems hard to distinguish from having a preference for that thing.

The most common argument against Prudential Hedonism is that pleasure is not the only thing that intrinsically contributes to well-being. Living in reality, finding meaning in life, producing noteworthy achievements, building and maintaining friendships, achieving perfection in certain domains, and living in accordance with religious or moral laws are just some of the other things thought to intrinsically add value to our lives. When presented with these apparently valuable aspects of life, Hedonists usually attempt to explain their apparent value in terms of pleasure. A Hedonist would argue, for example, that friendship is not valuable in and of itself; rather, it is valuable to the extent that it brings us pleasure. Furthermore, to answer why we might help a friend even when it harms us, a Hedonist will argue that the prospect of future pleasure from receiving reciprocal favors from our friend, rather than the value of friendship itself, should motivate us to help in this way.

Those who object to Prudential Hedonism on the grounds that pleasure is not the only source of intrinsic value use two main strategies. In the first strategy, objectors make arguments that some specific value cannot be reduced to pleasure. In the second strategy, objectors cite very long lists of apparently intrinsically valuable aspects of life and then challenge hedonists with the prolonged and arduous task of trying to explain how the value of all of them can be explained solely by reference to pleasure and the avoidance of pain. This second strategy gives good reason to be a pluralist about value because the odds seem to be against any monistic theory of value, such as Prudential Hedonism. The first strategy, however, has the ability to show that Prudential Hedonism is false, rather than being just unlikely to be the best theory of well-being.

The most widely cited argument for pleasure not being the only source of intrinsic value is based on Robert Nozick’s experience machine thought experiment. Nozick’s experience machine thought experiment was designed to show that more than just our experiences matter to us, because living in reality also matters to us. This argument has proven to be so convincing that nearly every single book on ethics that discusses hedonism rejects it using only this argument, or this one and one other.

In the thought experiment, Nozick asks us to imagine that we have the choice of plugging in to a fantastic machine that flawlessly provides an amazing mix of experiences. Importantly, this machine can provide these experiences in a way that, once plugged in to the machine, no one can tell that their experiences are not real. Disregarding considerations about responsibilities to others and the problems that would arise if everyone plugged in, would you plug in to the machine for life? The vast majority of people reject the choice to live a much more pleasurable life in the machine, mostly because they agree with Nozick that living in reality seems to be important for our well-being. Opinions differ on what exactly about living in reality is so much better for us than the additional pleasure of living in the experience machine, but the most common response is that a life that is not lived in reality is pointless or meaningless.

Since this argument has been used so extensively (from the mid 1970s onwards) to dismiss Prudential Hedonism, several attempts have been made to refute it. Most commonly, Hedonists argue that living an experience machine life would be better than living a real life and that most people are simply mistaken to not want to plug in. Some go further and try to explain why so many people choose not to plug in. Such explanations often point out that the most obvious reasons for not wanting to plug in can be explained in terms of expected pleasure and avoidance of pain. For example, it might be argued that we expect to get pleasure from spending time with our real friends and family, but we do not expect to get as much pleasure from the fake friends or family we might have in the experience machine. These kinds of attempts to refute the experience machine objection do little to persuade non-Hedonists that they have made the wrong choice.

A more promising line of defence for the Prudential Hedonists is to provide evidence that there is a particular psychological bias that affects most people’s choice in the experience machine thought experiment. A reversal of Nozick’s thought experiment has been argued to reveal just such a bias. Imagine that a credible source tells you that you are actually in an experience machine right now. You have no idea what reality would be like. Given the choice between having your memory of this conversation wiped and going to reality, what would be best for you to choose? Empirical evidence on this choice shows that most people would choose to stay in the experience machine. Comparing this result with how people respond to Nozick’s experience machine thought experiment reveals the following: in Nozick’s experience machine thought experiment people tend to choose a real and familiar life over a more pleasurable life, and in the reversed experience machine thought experiment people tend to choose a familiar life over a real life. Familiarity seems to matter more than reality, undermining the strength of Nozick’s original argument. The bias thought to be responsible for this difference is the status quo bias: an irrational preference for the familiar or for things to stay as they are.

Regardless of whether Nozick’s experience machine thought experiment is as decisive a refutation of Prudential Hedonism as it is often thought to be, the wider argument (that living in reality is valuable for our well-being) is still a problem for Prudential Hedonists. That our actions have real consequences, that our friends are real, and that our experiences are genuine seem to matter for most of us regardless of considerations of pleasure. Unfortunately, we lack a trusted methodology for discerning if these things should matter to us. Perhaps the best method for identifying intrinsically valuable aspects of lives is to compare lives that are equal in pleasure and all other important ways, except that one aspect of one of the lives is increased. Using this methodology, however, seems certain to lead to an artificial pluralist conclusion about what has value. This is because any increase in a potentially valuable aspect of our lives will be viewed as a free bonus. And most people will choose the life with the free bonus just in case it has intrinsic value, not necessarily because they think it does have intrinsic value.

The main traditional line of criticism against Prudential Hedonism is that not all pleasure is valuable for well-being, or at least that some pleasures are less valuable than others because of non-amount-related factors. Some versions of this criticism are much easier for Prudential Hedonists to deal with than others depending on where the allegedly disvaluable aspect of the pleasure resides. If the disvaluable aspect is experienced with the pleasure itself, then both Qualitative and Quantitative varieties of Prudential Hedonism have sufficient answers to these problems. If, however, the disvaluable aspect of the pleasure is never experienced, then all types of Prudential Hedonism struggle to explain why the allegedly disvaluable aspect is irrelevant.

Examples of the easier criticisms to deal with are that Prudential Hedonism values, or at least overvalues, perverse and base pleasures. These kinds of criticisms tend to have had more sway in the past and doubtless encouraged Mill to develop his Qualitative Hedonism. In response to the charge that Prudential Hedonism mistakenly values pleasure from sadistic torture, sating hunger, copulating, listening to opera, and philosophising all equally, Qualitative Hedonists can simply deny that it does. Since pleasure from sadistic torture will normally be experienced as containing the quality of sadism (just as the pleasure from listening to good opera is experienced as containing the quality of acoustic excellence), the Qualitative Hedonist can plausibly claim to be aware of the difference in quality and allocate less value to perverse or base pleasures accordingly.

Prudential Hedonists need not relinquish the Quantitative aspect of their theory in order to deal with these criticisms, however. Quantitative Hedonists can simply point out that moral or cultural values are not necessarily relevant to well-being, because the investigation of well-being aims to understand what the good life for the one living it is and what intrinsically makes their life go better for them. A Quantitative Hedonist can simply respond that a sadist who gets sadistic pleasure from torturing someone does improve their own well-being (assuming that the sadist never feels any negative emotions or gets into any other trouble as a result). Similarly, a Quantitative Hedonist can argue that if someone genuinely gets a lot of pleasure from porcine company and wallowing in the mud, but finds opera thoroughly dull, then we have good reason to think that having to live in a pig sty would be better for her well-being than forcing her to listen to opera.

Much more problematic for both Quantitative and Qualitative Hedonists, however, are the more modern versions of the criticism that not all pleasure is valuable. The modern versions of this criticism tend to use examples in which the disvaluable aspect of the pleasure is never experienced by the person whose well-being is being evaluated. The best example of these modern criticisms is a thought experiment devised by Shelly Kagan. Kagan’s deceived businessman thought experiment is widely thought to show that pleasures of a certain kind, namely false pleasures, are worth much less than true pleasures.

Kagan asks us to imagine the life of a very successful businessman who took great pleasure in being respected by his colleagues, well-liked by his friends, and loved by his wife and children until the day he died. Then Kagan asks us to compare this life with one of equal length and the same amount of pleasure (experienced as coming from exactly the same sources), except that in each case the businessman is mistaken about how those around him really feel. This second (deceived) businessman experiences just as much pleasure from the respect of his colleagues and the love of his family as the first businessman. The only difference is that the second businessman has many false beliefs. Specifically, the deceived businessman’s colleagues actually think he is useless, his wife doesn’t really love him, and his children are only nice to him so that he will keep giving them money. Given that the deceived businessman never knew of any of these deceptions and that his experiences were never negatively impacted by the deceptions indirectly, which life do you think is better?

Nearly everyone thinks that the deceived businessman has a worse life. This is a problem for Prudential Hedonists because the pleasure is quantitatively equal in each life, so they should be equally good for the one living it. Qualitative Hedonism does not seem to be able to avoid this criticism either because the falsity of the pleasures experienced by the deceived businessman is a dimension of the pleasure that he never becomes aware of. Theoretically, an externalist and qualitative version of Attitudinal Hedonism could include the falsity dimension of an instance of pleasure even if the falsity dimension never impacts the consciousness of the person. However, the resulting definition of pleasure bears little resemblance to what we commonly understand pleasure to be and also seems to be ad hoc in its inclusion of the truth dimension but not others. A dedicated Prudential Hedonist of any variety can always stubbornly stick to the claim that the lives of the two businessmen are of equal value, but that will do little to convince the vast majority to take Prudential Hedonism more seriously.

Another major line of criticism used against Prudential Hedonists is that they have yet to come up with a meaningful definition of pleasure that unifies the seemingly disparate array of pleasures while remaining recognisable as pleasure. Some definitions lack sufficient detail to be informative about what pleasure actually is, or why it is valuable, and those that do offer enough detail to be meaningful are faced with two difficult tasks.

The first obstacle for a useful definition of pleasure for hedonism is to unify all of the diverse pleasures in a reasonable way. Phenomenologically, the pleasure from reading a good book is very different to the pleasure from bungee jumping, and both of these pleasures are very different to the pleasure of having sex. This obstacle is insurmountable for most versions of Quantitative Hedonism because it makes the value gained from different pleasures impossible to compare. Not being able to compare different types of pleasure results in being unable to say if a life is better than another in most even vaguely realistic cases. Furthermore, not being able to compare lives means that Quantitative Hedonism could not usefully guide behavior, since it cannot instruct us on which life to aim for.

Attempts to resolve the problem of unifying the different pleasures while remaining within a framework of Quantitative Hedonism usually involve pointing out something that is constant in all of the disparate pleasures and defining that particular thing as pleasure. When pleasure is defined as a strict sensation, this strategy fails because introspection reveals that no such sensation exists. Pleasure defined as the experience of liking or as a pro-attitude does much better at unifying all of the diverse pleasures. However, defining pleasure in these ways makes the task of filling in the details of the theory a fine balancing act. Liking or pro-attitudes must be described in such a way that they are neither solely a sensation nor best described as a preference satisfaction theory. And they must perform this balancing act while still describing a scientifically plausible and conceptually coherent account of pleasure. Most attempts to define pleasure as liking or pro-attitudes seem to disagree with either the folk conception of what pleasure is or any of the plausible scientific conceptions of how pleasure functions.

Most varieties of Qualitative Hedonism do better at dealing with the problem of diverse pleasures because they can evaluate different pleasures according to their distinct qualities. Qualitative Hedonists still need a coherent method for comparing the different pleasures with each other in order to be more than just an abstract theory of well-being, however. And, it is difficult to construct such a methodology in a way that avoids counter examples, while still describing a scientifically plausible and conceptually coherent account of pleasure.

The second obstacle is creating a definition of pleasure that retains at least some of the core properties of the common understanding of the term “pleasure.” As mentioned, many of the potential adjustments to the main definitions of pleasure are useful for avoiding one or more of the many objections against Prudential Hedonism. The problem with this strategy is that the more adjustments that are made, the more apparent it becomes that the definition of pleasure is not recognisable as the pleasure that gave Hedonism its distinctive intuitive plausibility in the first place. When an instance of pleasure is defined simply as when someone feels good, its intrinsic value for well-being is intuitively obvious. However, when the definition of pleasure is stretched, so as to more effectively argue that all valuable experiences are pleasurable, it becomes much less recognisable as the concept of pleasure we use in day-to-day life and its intrinsic value becomes much less intuitive.

The future of hedonism seems bleak. The considerable number and strength of the arguments against Prudential Hedonism’s central principle (that pleasure and only pleasure intrinsically contributes positively to well-being, and the opposite for pain) seem insurmountable. Hedonists have been creative in their definitions of pleasure so as to avoid these objections, but more often than not find themselves defending a theory that is not particularly hedonistic, not particularly realistic, or both.

Perhaps the only hope that Hedonists of all types can have for the future is that advances in cognitive science will lead to a better understanding of how pleasure works in the brain and how biases affect our judgements about thought experiments. If our improved understanding in these areas confirms a particular theory about what pleasure is and also provides reasons to doubt some of the widespread judgements about the thought experiments that make the vast majority of philosophers reject hedonism, then hedonism might experience at least a partial revival. The good news for Hedonists is that at least some emerging theories and results from cognitive science do appear to support some aspects of hedonism.

Dan Weijers, Victoria University of Wellington, New Zealand. Email: danweijers@gmail.com


hedonism | Philosophy & Definition | Britannica.com

Hedonism, in ethics, a general term for all theories of conduct in which the criterion is pleasure of one kind or another. The word is derived from the Greek hedone (pleasure), from hedys (sweet or pleasant).

Hedonistic theories of conduct have been held from the earliest times. They have been regularly misrepresented by their critics because of a simple misconception, namely, the assumption that the pleasure upheld by the hedonist is necessarily purely physical in its origins. This assumption is in most cases a complete perversion of the truth. Practically all hedonists recognize the existence of pleasures derived from fame and reputation, from friendship and sympathy, from knowledge and art. Most have urged that physical pleasures are not only ephemeral in themselves but also involve, either as prior conditions or as consequences, such pains as to discount any greater intensity that they may have while they last.

The earliest and most extreme form of hedonism is that of the Cyrenaics as stated by Aristippus, who argued that the goal of a good life should be the sentient pleasure of the moment. Since, as Protagoras maintained, knowledge is solely of momentary sensations, it is useless to try to calculate future pleasures and to balance pains against them. The true art of life is to crowd as much enjoyment as possible into each moment.

No school has been more subject to the misconception noted above than the Epicurean. Epicureanism is completely different from Cyrenaicism. For Epicurus pleasure was indeed the supreme good, but his interpretation of this maxim was profoundly influenced by the Socratic doctrine of prudence and Aristotle's conception of the best life. The true hedonist would aim at a life of enduring pleasure, but this would be obtainable only under the guidance of reason. Self-control in the choice and limitation of pleasures, with a view to reducing pain to a minimum, was indispensable. This view informed the Epicurean maxim "Of all this, the beginning, and the greatest good, is prudence." This negative side of Epicureanism developed to such an extent that some members of the school found the ideal life rather in indifference to pain than in positive enjoyment.

In the late 18th century Jeremy Bentham revived hedonism both as a psychological and as a moral theory under the umbrella of utilitarianism. Individuals have no goal other than the greatest pleasure, thus each person ought to pursue the greatest pleasure. It would seem to follow that each person inevitably always does what he or she ought. Bentham sought the solution to this paradox on different occasions in two incompatible directions. Sometimes he says that the act which one does is the act which one thinks will give the most pleasure, whereas the act which one ought to do is the act which really will provide the most pleasure. In short, calculation is salvation, while sin is shortsightedness. Alternatively he suggests that the act which one does is that which will give one the most pleasure, whereas the act one ought to do is that which will give all those affected by it the most pleasure.

The psychological doctrine that a human's only aim is pleasure was effectively attacked by Joseph Butler. He pointed out that each desire has its own specific object and that pleasure comes as a welcome addition or bonus when the desire achieves its object. Hence the paradox that the best way to get pleasure is to forget it and to pursue wholeheartedly other objects. Butler, however, went too far in maintaining that pleasure cannot be pursued as an end. Normally, indeed, when one is hungry or curious or lonely, there is desire to eat, to know, or to have company. These are not desires for pleasure. But one can also eat sweets when one is not hungry, for the sake of the pleasure that they give.

Moral hedonism has been attacked since Socrates, though moralists sometimes have gone to the extreme of holding that humans never have a duty to bring about pleasure. It may seem odd to say that a human has a duty to pursue pleasure, but the pleasures of others certainly seem to count among the factors relevant in making a moral decision. One particular criticism which may be added to those usually urged against hedonists is that whereas they claim to simplify ethical problems by introducing a single standard, namely pleasure, in fact they have a double standard. As Bentham said, "Nature has placed mankind under the governance of two sovereign masters, pain and pleasure." Hedonists tend to treat pleasure and pain as if they were, like heat and cold, degrees on a single scale, when they are really different in kind.

More:

hedonism | Philosophy & Definition | Britannica.com

Clothing Optional Resorts, Negril, Jamaica | Hedonism II


Original post:

Clothing Optional Resorts, Negril, Jamaica | Hedonism II

Home Hedonism Wines


Dear Hedonist, at the moment our site is only available in English. However, our team has someone available who can reply to you in Portuguese. Please do not hesitate to contact our specialist, Miguel, directly.

Dear Hedonists, our website is currently available only in English. However, our team is at your disposal to reply to you in French. Do not hesitate to contact Maxime, our French-speaking specialist, directly.

Read more:

Home Hedonism Wines

Home | TOP500 Supercomputer Sites

Michael Feldman | May 2, 2018 11:29 CEST Dell EMC has launched the PowerEdge R840 and R940xa, two new four-socket servers that offer GPU and FPGA coprocessors for accelerating machine learning, analytics, and other data-intensive workloads.
Michael Feldman | May 2, 2018 03:36 CEST The Pawsey Supercomputing Centre announced that the Australian government is investing $70 million in the center to replace its aging supercomputers.
Michael Feldman | April 25, 2018 09:42 CEST Research university KU Leuven has installed a new HPE supercomputer designed to run artificial intelligence workloads.
Michael Feldman | April 23, 2018 10:12 CEST Fujitsu has performed a massive upgrade to RIKEN's RAIDEN supercomputer using NVIDIA DGX-1 servers outfitted with the latest V100 Tesla GPUs.
Michael Feldman | April 20, 2018 07:25 CEST The Jülich Supercomputing Centre (JSC) has installed the first module of JUWELS, a supercomputer that will succeed JUQUEEN as the center's premier HPC system and pave the way for future exascale machines.
Michael Feldman | April 13, 2018 08:28 CEST Intel announced Fujitsu and Dell EMC will offer servers with Intel's Arria 10 GX field programmable gate array (FPGA) accelerators, along with a supporting software stack.
Michael Feldman | April 11, 2018 09:34 CEST SenseTime, a China-based artificial intelligence company, has raised $600 million in series C funding, bringing its valuation to over $4.5 billion according to investors tracking the startup.
Michael Feldman | April 11, 2018 00:49 CEST Scientists at the Atos Quantum Laboratory say they have incorporated quantum noise into the workings of the Quantum Learning Machine (QLM) platform the company offers to researchers.
Michael Feldman | April 6, 2018 09:54 CEST IBM is expanding its strategy to commercialize quantum computing, adding eight startup companies to its network of organizations interested in applying the technology.
Michael Feldman | April 4, 2018 09:04 CEST Paderborn University has selected a Cray CS500 cluster accelerated by FPGAs as the first phase of its Noctua multi-petaflop supercomputer.
More News

See original here:

Home | TOP500 Supercomputer Sites

What is supercomputer? – Definition from WhatIs.com

A supercomputer is a computer that performs at or near the currently highest operational rate for computers. Traditionally, supercomputers have been used for scientific and engineering applications that must handle very large databases or do a great amount of computation (or both). Although advances like multi-core processors and GPGPUs (general-purpose graphics processing units) have enabled powerful machines for personal use (see: desktop supercomputer, GPU supercomputer), by definition, a supercomputer is exceptional in terms of performance.

At any given time, there are a few well-publicized supercomputers that operate at extremely high speeds relative to all other computers. The term is also sometimes applied to far slower (but still impressively fast) computers. The largest, most powerful supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).
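
The two approaches differ in how processors see memory: SMP threads share one address space, while MPP nodes exchange messages between separate memories. Below is a minimal sketch of the SMP style in C using OpenMP, where every thread works on a single shared array (the array size and contents are made-up illustration values, not from the source); the message-passing style is sketched in the Beowulf clustering section further down.

```c
/* A minimal sketch of the shared-memory (SMP) style of parallelism:
   all threads see the same array and cooperate on one reduction.
   Compile with: gcc -fopenmp sum.c */
#include <stdio.h>
#include <omp.h>

int main(void) {
    enum { N = 1000000 };           /* made-up problem size */
    static double a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;

    double sum = 0.0;
    /* Each thread sums a slice of the shared array; OpenMP combines
       the partial results into one total. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) sum += a[i];

    printf("sum = %.0f (max threads: %d)\n", sum, omp_get_max_threads());
    return 0;
}
```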

As of June 2016, the fastest supercomputer in the world was the Sunway TaihuLight, in the city of Wuxi in China. A few statistics on TaihuLight: it scores 93.01 PFLOPS on the LINPACK benchmark (125.4 PFLOPS peak), runs 10,649,600 CPU cores on 40,960 Chinese-designed SW26010 processors, and draws about 15.4 MW of power.

The first commercially successful supercomputer, the CDC (Control Data Corporation) 6600, was designed by Seymour Cray. Released in 1964, the CDC 6600 had a single CPU and cost $8 million, the equivalent of $60 million today. The CDC 6600 could handle three million floating point operations per second (flops).

Cray went on to found a supercomputer company under his name in 1972. Although the company has changed hands a number of times, it is still in operation. In September 2008, Cray and Microsoft launched the CX1, a $25,000 personal supercomputer aimed at markets such as aerospace, automotive, academic, financial services and life sciences.

IBM has been a keen competitor. The company's Roadrunner, once the top-ranked supercomputer, was twice as fast as IBM's Blue Gene and six times as fast as any other supercomputer at that time. IBM's Watson is famous for having adopted cognitive computing to beat champion Ken Jennings on Jeopardy!, a popular quiz show.

Year | Supercomputer | Peak speed (Rmax) | Location
2016 | Sunway TaihuLight | 93.01 PFLOPS | Wuxi, China
2013 | NUDT Tianhe-2 | 33.86 PFLOPS | Guangzhou, China
2012 | Cray Titan | 17.59 PFLOPS | Oak Ridge, U.S.
2012 | IBM Sequoia | 17.17 PFLOPS | Livermore, U.S.
2011 | Fujitsu K computer | 10.51 PFLOPS | Kobe, Japan
2010 | Tianhe-IA | 2.566 PFLOPS | Tianjin, China
2009 | Cray Jaguar | 1.759 PFLOPS | Oak Ridge, U.S.
2008 | IBM Roadrunner | 1.026 PFLOPS (later upgraded to 1.105 PFLOPS) | Los Alamos, U.S.

In the United States, some supercomputer centers are interconnected on an Internet backbone known as vBNS or NSFNet. This network is the foundation for an evolving network infrastructure known as the National Technology Grid. Internet2 is a university-led project that is part of this initiative.

At the lower end of supercomputing, clustering takes more of a build-it-yourself approach to supercomputing. The Beowulf Project offers guidance on how to put together a number of off-the-shelf personal computer processors, using Linux operating systems, and interconnecting the processors with Fast Ethernet. Applications must be written to manage the parallel processing.
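
As an illustration of the message-passing style such cluster applications use, here is a minimal MPI sketch in C: each node owns its slice of the work and only explicit messages cross the network. The integral being computed and the interval count are arbitrary choices for illustration, not anything prescribed by the Beowulf Project.

```c
/* A minimal sketch of a Beowulf-style message-passing program:
   every process computes a partial sum and rank 0 collects the
   total over the network.
   Compile with: mpicc pi.c, run with: mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Estimate pi by integrating 4/(1+x^2) over [0,1]; each process
       handles an interleaved subset of the intervals. */
    const long n = 10000000;
    const double h = 1.0 / n;
    double local = 0.0;
    for (long i = rank; i < n; i += size) {
        double x = h * (i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    double pi;
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("pi ~= %.10f\n", pi);

    MPI_Finalize();
    return 0;
}
```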

Read more:

What is supercomputer? – Definition from WhatIs.com

Supercomputer – Simple English Wikipedia, the free …

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include: shared memory, distributed memory and array. Supercomputers with shared memory are developed by using parallel computing and pipelining concepts. Supercomputers with distributed memory consist of many (about 100~10000) nodes. The CRAY series of Cray Research, the VP 2400/40, and the NEC SX-3 of HUCIS are shared memory types. The nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S, and CM-5 are distributed memory types.

An array type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (which is also an array type) were developed. Supercomputers that use a physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.


Here is the original post:

Supercomputer – Simple English Wikipedia, the free …

Home | Alabama Supercomputer Authority

The Alabama Supercomputer Authority (ASA) is a state-funded corporation founded in 1989 for the purpose of planning, acquiring, developing, administering and operating a statewide supercomputer and related telecommunication systems.

In addition to High Performance Computing, and with the growth of the internet, ASA developed the Alabama Research and Education Network (AREN), which offers education and research clients in Alabama internet access and other related network services. ASA has further expanded its offerings with state-of-the-art application development services that include custom website design with content management system (CMS) development and custom web-based applications for data-mining, reporting, and other client needs.

More:

Home | Alabama Supercomputer Authority


TOP500 – Wikipedia

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL,[1] a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers. In the most recent list (June 2017), the Chinese Sunway TaihuLight is the world’s most powerful supercomputer, reaching 93.015 petaFLOPS on the LINPACK benchmarks.
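
For context on what an HPL score means: the benchmark times the solution of one dense n-by-n linear system Ax = b and divides the standard LU-factorization operation count, (2/3)n^3 + 2n^2, by the wall-clock time. A small C sketch of that bookkeeping follows; the sample problem size and timing are made-up numbers, not results from any listed machine.

```c
/* Convert an HPL run (problem size n, elapsed seconds) into the
   GFLOPS figure the TOP500 list reports. */
#include <stdio.h>

double hpl_gflops(double n, double seconds) {
    /* Standard operation count for LU factorization plus the
       triangular solve, as used by the HPL benchmark. */
    double flops = (2.0 / 3.0) * n * n * n + 2.0 * n * n;
    return flops / seconds / 1e9;
}

int main(void) {
    /* Hypothetical run: n = 100000 unknowns solved in 400 seconds */
    printf("%.1f GFLOPS\n", hpl_gflops(100000.0, 400.0));
    return 0;
}
```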

The TOP500 list is compiled by Jack Dongarra of the University of Tennessee, Knoxville, Erich Strohmaier and Horst Simon of the National Energy Research Scientific Computing Center (NERSC) and Lawrence Berkeley National Laboratory (LBNL), and from 1993 until his death in 2014, Hans Meuer of the University of Mannheim, Germany.

[Chart: combined performance of the 500 largest supercomputers, the fastest supercomputer, and the supercomputer in 500th place, over time]

In the early 1990s, a new definition of supercomputer was needed to produce meaningful statistics. After experimenting with metrics based on processor count in 1992, the idea arose at the University of Mannheim to use a detailed listing of installed systems as the basis. In early 1993, Jack Dongarra was persuaded to join the project with his LINPACK benchmarks. A first test version was produced in May 1993, partly based on data available on the Internet from several sources.[2][3]

The information from those sources was used for the first two lists. Since June 1993, the TOP500 has been produced twice a year based only on site and vendor submissions.

Since 1993, performance of the #1 ranked position has grown steadily in accordance with Moore's law, doubling roughly every 14 months. As of November 2014, Tianhe-2 was fastest with an Rpeak[6] of 54.9024 PFLOPS. For comparison, this is over 419,102 times faster than the Connection Machine CM-5/1024 (1,024 cores), which was the fastest system in November 1993 (twenty-one years prior) with an Rpeak of 131.0 GFLOPS.[7]
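
The 14-month doubling claim can be checked directly from the two data points in this paragraph; a short sketch:

```c
/* Verify the doubling rate implied by Tianhe-2 (54.9024 PFLOPS, 2014)
   versus the CM-5/1024 (131.0 GFLOPS, 1993), twenty-one years apart.
   Compile with: gcc check.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double ratio = 54.9024e15 / 131.0e9;   /* ~419,102x speedup */
    double months = 21.0 * 12.0;           /* twenty-one years */
    double doublings = log2(ratio);        /* ~18.7 doublings */
    printf("speedup: %.0fx, doubling time: %.1f months\n",
           ratio, months / doublings);     /* ~13.5 months */
    return 0;
}
```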

In June 2016, a Chinese computer took the top spot based on SW26010 processors, a new and radically modified model in the Sunway (or ShenWei) line.

As of November 2016, TOP500 supercomputers are all 64-bit, mostly based on x86-64 CPUs (the Intel EM64T and AMD AMD64 instruction set architectures), with a few exceptions, all based on reduced instruction set computing (RISC) architectures. These include 22 supercomputers based on the Power Architecture used by IBM POWER microprocessors and seven based on SPARC (all with Fujitsu-designed SPARC chips, one of which made the top spot in 2011 without a GPU and is currently ranked seventh). Two seemingly related Chinese designs make up the remainder: the ShenWei-based system (ranked 11th in 2011, 158th in November 2016) and the Sunway SW26010-based system ranked 1st in 2016. (Another non-US design is the PEZY-SC, though it is an accelerator paired with Intel Xeon processors.) Before the ascendance of 32-bit x86 and later 64-bit x86-64 in the early 2000s, a variety of RISC processor families made up most TOP500 supercomputers, including SPARC, MIPS, PA-RISC, and Alpha.

In recent years heterogeneous computing, mostly using Nvidia's graphics processing units (GPUs) as coprocessors, has become a popular way to reach better performance per watt and higher absolute performance; it is almost required for good performance and for making the top spot (or the top 10), with some exceptions, such as the SPARC computer mentioned above, which uses no coprocessors. An x86-based coprocessor, the Xeon Phi, has also been used.

All the fastest supercomputers in the decade since the Earth Simulator supercomputer have used operating systems based on Linux. Since November 2017, all the listed supercomputers (100% of the performance share) use an operating system based on the Linux kernel.[8][9]

Since November 2015, no computer on the list runs Windows. In November 2014, the Windows Azure[10] cloud computer was no longer on the list of fastest supercomputers (its best rank was 165th in 2012), leaving the Shanghai Supercomputer Center's Magic Cube as the only Windows-based supercomputer on the list, until it too dropped off. It had been ranked 436th in its last appearance, on the list released in June 2015; its best rank was 11th in 2008.[11]

It has been well over a decade since MIPS-based systems (that is, MIPS used for the host CPUs) dropped entirely off the list,[12] but the Gyoukou supercomputer that jumped to 4th place in November 2017 (after a huge upgrade) uses MIPS as a small part of its coprocessors. Each of its 2,048-core coprocessors is paired with eight 6-core MIPS chips, so that they "no longer require to rely on an external Intel Xeon E5 host processor",[13] making the supercomputer much more energy efficient than the rest of the top 10: it is 5th on the Green500, other such ZettaScaler-2.2-based systems take the first three spots on that list, and Piz Daint is the only other top-10 system on it, just barely making 10th.[14] At 19.86 million cores, Gyoukou is by far the biggest system, with almost double the core count of the 1st-ranked Chinese manycore system.

[Table: number of TOP500 computers in each listed country, by number of systems as of June 2016][20]

In November 2014, it was announced that the United States was developing two new supercomputers to exceed China's Tianhe-2 and take its place as the world's fastest supercomputer. The two computers, Sierra and Summit, will each exceed Tianhe-2's 55 peak petaflops. Summit, the more powerful of the two, will deliver 150 to 300 peak petaflops.[22] On 10 April 2015, US government agencies banned Nvidia from selling chips to supercomputing centers in China as "acting contrary to the national security… interests of the United States",[23] and banned Intel Corporation from providing Xeon chips to China because of their use, according to the US, in nuclear weapons research, to which US export control law bars US companies from contributing: "The Department of Commerce refused, saying it was concerned about nuclear research being done with the machine."[24]

On 29 July 2015, President Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale (1000 petaflop) system and funding research into post-semiconductor computing.[25]

In June 2016, Japanese firm Fujitsu announced at the International Supercomputing Conference that its future exascale supercomputer will feature processors of its own design that implement the ARMv8 architecture. The Flagship2020 program, by Fujitsu for RIKEN, plans to break the exaflops barrier by 2020 (and "it looks like China and France have a chance to do so and that the United States is content for the moment at least to wait until 2023 to break through the exaflops barrier."[26]). These processors will also implement extensions to the ARMv8 architecture, equivalent to HPC-ACE2, that Fujitsu is developing with ARM Holdings.[26]

Inspur, based in Jinan, China, is one of the largest HPC system manufacturers. As of May 2017, Inspur had become the third manufacturer to build a 64-way system, a feat previously achieved only by IBM and HP. The company has registered over $10B in revenue and has provided a number of HPC systems to countries outside China, such as Sudan, Zimbabwe, Saudi Arabia, and Venezuela. Inspur was also a major technology partner behind both of the Chinese supercomputers, Tianhe-2 and TaihuLight, that held the top two positions of the November 2016 TOP500 list. In May 2017, Inspur and Supermicro released a few platforms aimed at GPU-based HPC, such as the SR-AI and AGX-2.[27]

Some major systems are not on the list. The largest example is the NCSA's Blue Waters, which publicly announced the decision not to participate in the list[28] because its operators do not feel it accurately indicates the ability of any system to do useful work.[29] Other organizations decide not to list systems for security and/or commercial competitiveness reasons. Purpose-built machines that are not capable of running the benchmark, or that simply do not run it, such as the RIKEN MDGRAPE-3 and MDGRAPE-4, are also not included.

IBM Roadrunner[30] is no longer on the list (nor is any other system using the Cell coprocessor, or the PowerXCell used in Roadrunner), but it is an example of a computer that would easily be included had it not been decommissioned, as it is faster than the one ranked 500th.[31]

Conversely, computers such as Microsoft Azure[32] have dropped off the list only because their stated performance numbers are no longer high enough, even though, in principle, such computers could have been upgraded to run faster (or not) without the change being reported.

All Itanium-based systems (including the one that reached second rank in 2004[33])[34] and all (non-SIMD-style) vector-processor systems (NEC-based, such as the Earth Simulator, which was fastest in 2002[35]) have also fallen off the list. Similarly, the Sun Starfire computers that once occupied many spots have been overtaken.

The last non-Linux computers on the list, two AIX systems running on POWER7 (ranked 494th and 495th in July 2017,[36] originally 86th and 85th), dropped off the list in November 2017.

Read the rest here:

TOP500 – Wikipedia

Home | TOP500 Supercomputer Sites

Michael Feldman | April 24, 2018 11:11 CEST After four years of development, the US Department of Energy (DOE) is releasing the Energy Exascale Earth System Model (E3SM), a computational platform for performing high-resolution simulations of the weather and other earth systems.
Michael Feldman | April 3, 2018 07:04 CEST Japan's foremost supercomputing authority, Satoshi Matsuoka, has become the new director of the RIKEN Center for Computational Science (R-CCS), the organization that oversees the K computer and its upcoming exascale successor, the Post-K supercomputer.
Michael Feldman | March 30, 2018 09:46 CEST One of the more practical applications unveiled this week at the GPU Technology Conference was Project Clara, a medical imaging supercomputer that marries the graphics power of NVIDIA's chips with its deep learning capabilities.
More News

Read the original post:

Home | TOP500 Supercomputer Sites

History of supercomputing – Wikipedia

The history of supercomputing goes back to the early 1920s in the United States, with the IBM tabulators at Columbia University, and to a series of computers at Control Data Corporation (CDC) designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance.[1] The CDC 6600, released in 1964, is generally considered the first supercomputer.[2][3] However, some earlier computers were considered supercomputers for their day, such as the 1960 UNIVAC LARC,[4] the 1954 IBM NORC,[5] and the IBM 7030 Stretch[6] and the Atlas, both from 1962.

While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and in Japan, setting new computational performance records.

By the end of the 20th century, massively parallel supercomputers with thousands of “off-the-shelf” processors similar to those found in personal computers were constructed and broke through the teraflop computational barrier.

Progress in the first decade of the 21st century was dramatic and supercomputers with over 60,000 processors appeared, reaching petaflop performance levels.

The term “Super Computing” was first used in the New York World in 1929 to refer to large custom-built tabulators that IBM had made for Columbia University.

In 1957 a group of engineers left Sperry Corporation to form Control Data Corporation (CDC) in Minneapolis, MN. Seymour Cray left Sperry a year later to join his colleagues at CDC.[1] In 1960 Cray completed the CDC 1604, one of the first solid-state computers and the fastest computer in the world at a time when vacuum tubes were found in most large computers.[7]

Around 1960 Cray decided to design a computer that would be the fastest in the world by a large margin. After four years of experimentation, along with Jim Thornton, Dean Roush, and about 30 other engineers, Cray completed the CDC 6600 in 1964. Cray switched from germanium to silicon transistors, built by Fairchild Semiconductor, that used the planar process. These did not have the drawbacks of the mesa silicon transistors. He ran them very fast, and the speed-of-light restriction forced a very compact design with severe overheating problems, which were solved by introducing refrigeration designed by Dean Roush.[8] Given that the 6600 outran all computers of the time by about 10 times, it was dubbed a supercomputer and defined the supercomputing market when one hundred computers were sold at $8 million each.[7][9]

The 6600 gained speed by "farming out" work to peripheral computing elements, freeing the CPU (Central Processing Unit) to process actual data. The Minnesota FORTRAN compiler for the machine was developed by Liddiard and Mundstock at the University of Minnesota, and with it the 6600 could sustain 500 kiloflops on standard mathematical operations.[10] In 1968 Cray completed the CDC 7600, again the fastest computer in the world.[7] At 36 MHz, the 7600 had about three and a half times the clock speed of the 6600, but ran significantly faster due to other technical innovations. They sold only about 50 of the 7600s, not quite a failure. Cray left CDC in 1972 to form his own company.[7] Two years after his departure CDC delivered the STAR-100, which at 100 megaflops was three times the speed of the 7600. Along with the Texas Instruments ASC, the STAR-100 was one of the first machines to use vector processing, the idea having been inspired around 1964 by the APL programming language.[11][12]

In 1956, a team at Manchester University in the United Kingdom began development of MUSE, a name derived from "microsecond engine", with the aim of eventually building a computer that could operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[13] Mu (or µ) is a prefix in the SI and other systems of units denoting a factor of 10⁻⁶ (one millionth).

At the end of 1958 Ferranti agreed to collaborate with Manchester University on the project, and the computer was shortly afterwards renamed Atlas, with the joint venture under the control of Tom Kilburn. The first Atlas was officially commissioned on 7 December 1962, nearly three years before the Cray-designed CDC 6600 supercomputer was introduced, as one of the world's first supercomputers. It was considered the most powerful computer in England, and for a very short time was considered one of the most powerful computers in the world, equivalent to four IBM 7094s.[14] It was said that whenever England's Atlas went offline, half of the United Kingdom's computer capacity was lost.[14] The Atlas pioneered the use of virtual memory and paging as a way to extend its working memory by combining its 16,384 words of primary core memory with an additional 96K words of secondary drum memory.[15] Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognizable modern operating system".[14]
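
To make the paging idea concrete, here is an illustrative C sketch, not Atlas's actual hardware mechanism: a page table presents the small core memory and the large drum as one virtual store, and a miss stands in for a core-drum transfer. The page counts follow the word counts above (Atlas used 512-word pages); the trivial replacement policy is an assumption for brevity.

```c
#include <stdio.h>

#define PAGE_WORDS  512     /* Atlas page size, in words */
#define VIRT_PAGES  224     /* ~(16,384 + 96K) words / 512 */
#define CORE_FRAMES 32      /* 16,384 words of core / 512 */

typedef struct {
    int in_core;   /* 1 if the page currently occupies a core frame */
    int frame;     /* core frame number, valid only if in_core */
} page_entry;

static page_entry table[VIRT_PAGES];

/* Translate a virtual word address to a core address; a miss
   stands in for fetching the page from drum. */
long translate(long vaddr) {
    long page = vaddr / PAGE_WORDS, offset = vaddr % PAGE_WORDS;
    if (!table[page].in_core) {
        printf("page %ld fault: fetch from drum\n", page);
        table[page].in_core = 1;   /* toy policy: always use frame 0 */
        table[page].frame = 0;
    }
    return (long)table[page].frame * PAGE_WORDS + offset;
}

int main(void) {
    printf("core addr: %ld\n", translate(1000));  /* miss, then map */
    printf("core addr: %ld\n", translate(1001));  /* now a hit */
    return 0;
}
```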

Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became the most successful supercomputer in history.[12][16] The Cray-1 used integrated circuits with two gates per chip and was a vector processor that introduced a number of innovations, such as chaining, in which scalar and vector registers generate interim results that can be used immediately, without the additional memory references that reduce computational speed.[8][17] The Cray X-MP (designed by Steve Chen) was released in 1982 as a 105 MHz shared-memory parallel vector processor with better chaining support and multiple memory pipelines. All three floating point pipelines on the X-MP could operate simultaneously.[17]

The Cray-2, released in 1985, was a 4-processor liquid-cooled computer totally immersed in a tank of Fluorinert, which bubbled as it operated.[8] It could perform at 1.9 gigaflops and was the world's second-fastest supercomputer after the M-13 (2.4 gigaflops)[18] until 1990, when the ETA-10G from CDC overtook both. The Cray-2 was a totally new design; it did not use chaining and had a high memory latency, but used much pipelining and was ideal for problems that required large amounts of memory.[17] The software costs of developing a supercomputer should not be underestimated, as evidenced by the fact that in the 1980s the cost of software development at Cray came to equal what was spent on hardware.[19] That trend was partly responsible for a move away from the in-house Cray Operating System to UNICOS, based on Unix.[19]

The Cray Y-MP, also designed by Steve Chen, was released in 1988 as an improvement of the X-MP and could have eight vector processors at 167 MHz with a peak performance of 333 megaflops per processor.[17] In the late 1980s, Cray's experiment with gallium arsenide semiconductors in the Cray-3 did not succeed. Cray began to work on a massively parallel computer in the early 1990s, but died in a car accident in 1996 before it could be completed. Cray Research did, however, produce such computers.[16][8]

The Cray-2, which set the frontiers of supercomputing in the mid-to-late 1980s, had only 8 processors. In the 1990s, supercomputers with thousands of processors began to appear. Another development at the end of the 1980s was the arrival of Japanese supercomputers, some of which were modeled after the Cray-1.

The SX-3/44R was announced by NEC Corporation in 1989, and a year later it earned the fastest-in-the-world title with a 4-processor model.[20] However, Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaflops per processor.[21][22] The Hitachi SR2201, on the other hand, obtained a peak performance of 600 gigaflops in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[23][24][25]

In the same timeframe the Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes and communicate via the Message Passing Interface.[26] By 1995 Cray was also shipping massively parallel systems, e.g. the Cray T3E with over 2,000 processors, using a three-dimensional torus interconnect.[27][28]

The Paragon architecture soon led to the Intel ASCI Red supercomputer in the United States, which held the top supercomputing spot until the end of the 20th century as part of the Advanced Simulation and Computing Initiative. This was also a mesh-based MIMD massively-parallel system with over 9,000 compute nodes and well over 12 terabytes of disk storage, but it used off-the-shelf Pentium Pro processors that could be found in everyday personal computers. ASCI Red was the first system ever to break through the 1 teraflop barrier on the MP-Linpack benchmark, in 1996, eventually reaching 2 teraflops.[29]

Significant progress was made in the first decade of the 21st century. The efficiency of supercomputers continued to increase, but not dramatically so. The Cray C90 used 500 kilowatts of power in 1991, while by 2003 the ASCI Q used 3,000 kW while being 2,000 times faster, increasing the performance per watt 300-fold.[30]
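
A quick sketch verifying the 300-fold figure from the two power and speed numbers just quoted:

```c
/* ASCI Q drew 6x the power of the Cray C90 but was 2,000x faster,
   so performance per watt improved by 2000 / (3000 kW / 500 kW). */
#include <stdio.h>

int main(void) {
    double speedup = 2000.0;
    double power_ratio = 3000.0 / 500.0;  /* kW, ASCI Q vs Cray C90 */
    printf("perf/watt gain: ~%.0fx\n", speedup / power_ratio);  /* ~333 */
    return 0;
}
```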

In 2004, the Earth Simulator supercomputer built by NEC at the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) reached 35.9 teraflops, using 640 nodes, each with eight proprietary vector processors.[31]

The IBM Blue Gene supercomputer architecture found widespread use in the early part of the 21st century, and 27 of the computers on the TOP500 list used that architecture. The Blue Gene approach is somewhat different in that it trades processor speed for low power consumption so that a larger number of processors can be used at air-cooled temperatures. It can use over 60,000 processors, with 2048 processors per rack, and connects them via a three-dimensional torus interconnect.[32][33]
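
The torus interconnect mentioned here amounts to modular arithmetic on node coordinates: every node has six neighbors, and coordinates wrap around at the edges. A minimal C sketch with made-up dimensions (not Blue Gene's actual geometry):

```c
#include <stdio.h>

#define DX 8   /* illustrative torus dimensions */
#define DY 8
#define DZ 8

/* wrap-around (modular) coordinate arithmetic */
static int wrap(int c, int dim) { return (c + dim) % dim; }

/* linear rank of the node at torus coordinates (x, y, z) */
static int rank_of(int x, int y, int z) {
    return (wrap(z, DZ) * DY + wrap(y, DY)) * DX + wrap(x, DX);
}

int main(void) {
    int x = 0, y = 7, z = 3;  /* a node on the y-edge of the torus */
    printf("+x neighbor: %d\n", rank_of(x + 1, y, z));
    printf("+y neighbor: %d\n", rank_of(x, y + 1, z)); /* wraps to y=0 */
    printf("-x neighbor: %d\n", rank_of(x - 1, y, z)); /* wraps to x=7 */
    return 0;
}
```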

Progress in China has been rapid: China placed 51st on the TOP500 list in June 2003, 14th in November 2003, 10th in June 2004, and 5th during 2005, before gaining the top spot in 2010 with the 2.5 petaflop Tianhe-I supercomputer.[34][35]

In July 2011, the 8.1 petaflop Japanese K computer became the fastest in the world, using over 60,000 SPARC64 VIIIfx processors housed in over 600 cabinets. The fact that the K computer is over 60 times faster than the Earth Simulator, and that the Earth Simulator ranked as the 68th system in the world seven years after holding the top spot, demonstrates both the rapid increase in top performance and the widespread growth of supercomputing technology worldwide.[36][37][38]

This is a list of the computers which appeared at the top of the TOP500 list since 1993.[39] The "Peak speed" is given as the "Rmax" rating.

[Chart: combined performance of the 500 largest supercomputers, the fastest supercomputer, and the supercomputer in 500th place, over time]

The CoCom and its later replacement, the Wassenaar Arrangement, legally regulated the export of high-performance computers (HPCs) to certain countries, requiring licensing, approval, and record-keeping, or banning such exports entirely. Such controls have become harder to justify, leading to the loosening of these regulations. Some have argued these regulations were never justified.[40][41][42][43][44][45]

Read more:

History of supercomputing – Wikipedia


