Pittsburgh Supercomputing Center – Sidebar

Pittsburgh Supercomputing Center

PSC is a joint effort of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania, and private industry, and is a leading partner in XSEDE (Extreme Science and Engineering Discovery Environment), the National Science Foundation cyberinfrastructure program.

© 2017 Pittsburgh Supercomputing Center

300 S. Craig Street, Pittsburgh, PA 15213 | Phone: 412.268.4960 | Fax: 412.268.5832


23 Years Of Supercomputer Evolution – Tom’s Hardware

Eventually the ASCI Red was dethroned by a supercomputer specifically designed to replace it; the ASCI White. This new supercomputer was installed in the heart of Lawrence Livermore National Laboratory. At half strength, the system became operational in November 2000, and was completed in June 2001.

Unlike ASCI Red, which was built by Intel, ASCI White was IBM’s chance to shine. ASCI White derived its power from 8,192 IBM Power3 processors clocked at 375 MHz. It also represented a new trend among supercomputers: the cluster architecture, in which a collection of individual nodes is connected together to work as a single system. Today, clustering is used by 85 percent of the supercomputers listed on the TOP500.

ASCI White actually comprised 512 RS/6000 SP servers, each containing 16 CPUs. Each CPU was capable of 1.5 GFlops of processing power, which made ASCI White theoretically capable of reaching 12.3 TFlops. Its real-world performance was considerably lower, reaching only 7.2 TFlops under Linpack (improved to 7.3 TFlops in 2003).
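
The quoted peak figure can be checked directly from the component counts given above; here is a quick back-of-the-envelope verification (the numbers come from the text, not from any other source):

```python
# Verify ASCI White's theoretical peak from the figures quoted above.
servers = 512          # RS/6000 SP servers
cpus_per_server = 16   # Power3 CPUs per server
gflops_per_cpu = 1.5   # peak GFlops per 375 MHz Power3 CPU

peak_tflops = servers * cpus_per_server * gflops_per_cpu / 1000
print(f"Theoretical peak: {peak_tflops:.1f} TFlops")  # Theoretical peak: 12.3 TFlops
```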

ASCI White required 3,000 kW of power to operate, with an additional 3,000 kW consumed by the cooling system.


List of fictional computers – Wikipedia

Computers have often been used as fictional objects in literature, movies and in other forms of media. Fictional computers tend to be considerably more sophisticated than anything yet devised in the real world.

This is a list of computers that have appeared in notable works of fiction. The work may be about the computer, or the computer may be an important element of the story. Only static computers are included. Robots and other fictional computers that are described as existing in a mobile or humanlike form are discussed in a separate list of fictional robots and androids.



Tianhe-I – Wikipedia

Tianhe-1 and Tianhe-1A
Status: Tianhe-1 operational 29 October 2009; Tianhe-1A operational 28 October 2010
Sponsor: National University of Defense Technology
Operator: National Supercomputing Center
Location: National Supercomputing Center, Tianjin, People’s Republic of China
Operating system: Linux[1]
Storage: 96 TB (98,304 GB) for Tianhe-1; 262 TB for Tianhe-1A
Speed: Tianhe-1: 563 teraFLOPS (Rmax), 1,206.2 teraFLOPS (Rpeak); Tianhe-1A: 2,566.0 teraFLOPS (Rmax), 4,701.0 teraFLOPS (Rpeak)
Ranking: TOP500, 2nd, June 2011 (Tianhe-1A)
Purpose: Petroleum exploration, aircraft simulation
Source: top500.org

Tianhe-I, Tianhe-1, or TH-1 (Chinese: 天河一号; literally “Sky River Number One”)[2] is a supercomputer capable of an Rmax (maximal achieved Linpack performance) of 2.5 petaFLOPS. Located at the National Supercomputing Center in Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and is one of the few petascale supercomputers in the world.[3][4]

In October 2010, an upgraded version of the machine (Tianhe-1A) overtook ORNL’s Jaguar to become the world’s fastest supercomputer, with a peak computing rate of 2.507 petaFLOPS.[5][6] In June 2011 the Tianhe-1A was overtaken by the K computer as the world’s fastest supercomputer, which was also subsequently superseded.[7]

Both the original Tianhe-1 and Tianhe-1A use a Linux-based operating system.[8][9]

On 12 August 2015, the 186,368-core Tianhe-1 felt the impact of the powerful Tianjin explosions and went offline for some time. Xinhua reported that “the office building of Chinese supercomputer Tianhe-1, one of the world’s fastest supercomputers, suffered damage.” Sources at Tianhe-1 told Xinhua the computer was not damaged, but some of its operations were shut down as a precaution.[10] Operation resumed on 17 August 2015.[11]

Tianhe-1 was developed by the Chinese National University of Defense Technology (NUDT) in Changsha, Hunan. It was first revealed to the public on 29 October 2009, and was immediately ranked as the world’s fifth fastest supercomputer in the TOP500 list released at the 2009 Supercomputing Conference (SC09) held in Portland, Oregon, on 16 November 2009. Tianhe achieved a speed of 563 teraflops in its first Top 500 test and had a peak performance of 1.2 petaflops. Thus at startup, the system had an efficiency of 46%.[12][13] Originally, Tianhe-1 was powered by 4,096 Intel Xeon E5540 processors and 1,024 Intel Xeon E5450 processors, with 5,120 AMD graphics processing units (GPUs), which were made up of 2,560 dual-GPU ATI Radeon HD 4870 X2 graphics cards.[14][15]
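
The 46% efficiency figure is simply the ratio of the achieved Linpack result to the theoretical peak, using the two numbers quoted above:

```python
# Linpack efficiency = Rmax (achieved) / Rpeak (theoretical peak),
# using Tianhe-1's first TOP500 figures quoted in the text.
rmax_tflops = 563.0
rpeak_tflops = 1206.2
efficiency = rmax_tflops / rpeak_tflops
print(f"Efficiency: {efficiency:.0%}")  # Efficiency: 47%
```

The article’s 46% comes from truncating rather than rounding the same ratio.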

In October 2010, Tianhe-1A, an upgraded supercomputer, was unveiled at HPC 2010 China.[16] It is now equipped with 14,336 Xeon X5670 processors and 7,168 Nvidia Tesla M2050 general purpose GPUs. 2,048 FeiTeng 1000 SPARC-based processors are also installed in the system, but their computing power was not counted into the machine’s official Linpack statistics as of October 2010.[17] Tianhe-1A has a theoretical peak performance of 4.701 petaflops.[18] NVIDIA suggests that it would have taken “50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone.” The current heterogeneous system consumes 4.04 megawatts, compared to over 12 megawatts had it been built only with CPUs.[19]

The Tianhe-1A system is composed of 112 computer cabinets, 12 storage cabinets, 6 communications cabinets, and 8 I/O cabinets. Each computer cabinet is composed of four frames, with each frame containing eight blades, plus a 16-port switching board. Each blade is composed of two computer nodes, with each computer node containing two Xeon X5670 6-core processors and one Nvidia M2050 GPU processor.[20] The system has 3,584 blades in total, containing 7,168 GPUs and 14,336 CPUs, managed by the SLURM job scheduler.[21] The total disk storage of the system is 2 petabytes, implemented as a Lustre clustered file system,[2] and the total memory size of the system is 262 terabytes.[17]
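
The cabinet/frame/blade/node hierarchy above rolls up to exactly the stated totals; a small check using only the counts given in the text:

```python
# Roll up Tianhe-1A's hardware hierarchy to confirm the stated totals.
cabinets = 112
frames_per_cabinet = 4
blades_per_frame = 8
nodes_per_blade = 2
cpus_per_node = 2    # Xeon X5670 six-core processors
gpus_per_node = 1    # Nvidia M2050

blades = cabinets * frames_per_cabinet * blades_per_frame
nodes = blades * nodes_per_blade
print(blades, nodes * gpus_per_node, nodes * cpus_per_node)
# 3584 7168 14336 -- the blade, GPU, and CPU counts quoted above
```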

Another significant reason for the increased performance of the upgraded Tianhe-1A system is the NUDT-designed proprietary high-speed interconnect, called Arch, which runs at 160 Gbit/s, twice the bandwidth of InfiniBand.[17]

The system also used the Chinese made FeiTeng-1000 central processing unit.[22] The FeiTeng-1000 processor is used both on service nodes and to enhance the system interconnect.[22][23]

The supercomputer is installed at the National Supercomputing Center, Tianjin, and is used to carry out computations for petroleum exploration and aircraft design.[13] It is an “open access” computer, meaning it provides services for other countries.[24] The supercomputer will be available to international clients.[25]

The computer cost $88 million to build. Approximately $20 million is spent annually for electricity and operating expenses. Approximately 200 workers are employed in its operation.

Tianhe-1A was ranked as the world’s fastest supercomputer in the TOP500 list[26][27] until July 2011, when the K computer overtook it.

In June 2011, scientists at the Institute of Process Engineering (IPE) at the Chinese Academy of Sciences (CAS) announced a record-breaking scientific simulation on the Tianhe-1A supercomputer that furthers their research in solar energy. CAS-IPE scientists ran a complex molecular dynamics simulation on all 7,168 NVIDIA Tesla GPUs to achieve a performance of 1.87 petaflops (about the same performance as 130,000 laptops).[28]

The Tianhe-1A supercomputer was temporarily shut down after the National Supercomputing Center of Tianjin was damaged by the nearby explosion. The computer itself was not damaged and remains operational.[29]


What is the world’s fastest supercomputer used for …

For most of us, a computer probably seems fast enough if it’s able to run “LEGO Lord of the Rings” or a YouTube video of an English bulldog on a skateboard without slowing to a crawl. But for scientists who need to work on really complicated problems, the mere 158 billion calculations per second that a PC with an i7 processor can perform isn’t nearly enough [sources: Peckham, ORNL, Kolawole].

That’s why researchers are so excited about the Tennessee-based Oak Ridge National Laboratory (ORNL)’s new toy, the Cray Titan supercomputer. When it was unveiled in October 2012, the Titan claimed the title of world’s fastest computer, which had been held by the IBM Sequoia Blue Gene/Q machine at the Lawrence Livermore National Laboratory in California for just six months [sources: Burt, Johnston].

How fast is the Titan? Its theoretical top speed is 27 petaflops, which doesn’t sound that impressive unless you know that it means 27,000 trillion calculations per second [source: ORNL]. That’s hundreds of thousands of times faster than your top-of-the-line PC. Unlike your PC, though, Titan won’t fit on a desktop; it occupies a space the size of a basketball court [source: Kolawole].
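
The “hundreds of thousands of times faster” claim checks out against the two figures the article gives:

```python
# Compare Titan's peak to the ~158 GFlops PC figure quoted earlier.
titan_flops = 27e15   # 27 petaflops = 27,000 trillion calculations/second
pc_flops = 158e9      # ~158 billion calculations/second (i7-class PC)
ratio = titan_flops / pc_flops
print(f"~{ratio:,.0f}x faster")  # ~170,886x faster
```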

Titan’s incredible speed makes it a fantastic tool for tackling really complicated problems that involve gigantic amounts of data. Researchers plan to use it to run detailed simulations of the Earth’s climate, which may yield ideas on how to lessen global warming. They also may use it to help design super-efficient internal combustion engines and solar panels, and to run biological simulations that will help speed the testing of new drugs. On the pure science level, Titan could help scientists simulate the breaking of the bonds that hold molecules together, giving them new insights into one of the most important processes in nature [sources: ORNL, Kolawole].

But the Titan is important not just because it’s incredibly fast, but because it pioneers a new sort of supercomputer design that could spawn a generation of even speedier machines. For years, scientists have achieved higher and higher speeds simply by building machines with thousands and thousands of central processing units, or CPUs, in them, and then breaking the calculations they want to perform into smaller pieces that could be parceled out to all of those CPUs [source: ORNL]. The drawback of that approach is that all those CPU chips require enormous amounts of electricity. The Titan, however, pairs each of its 18,688 CPUs with a graphics processing unit, or GPU, the sort of chip used in hot-rod gaming PCs, to accelerate the computations. GPUs don’t draw as much juice as CPUs, so the result is a machine that’s faster than its predecessors but also a lot more energy efficient [sources: ORNL, Kolawole].

Researchers see the Titan as blazing the way toward exascale-class computers, that is, machines a thousand or more times as fast as the most powerful supercomputers today [sources: Kolawole, Goodwin and Zacharia].


NVIDIA Boosts IQ of Self-Driving Cars With World’s First …

CES: Accelerating the race to autonomous cars, NVIDIA today launched NVIDIA DRIVE PX 2, the world’s most powerful engine for in-vehicle artificial intelligence.

NVIDIA DRIVE PX 2 allows the automotive industry to use artificial intelligence to tackle the complexities inherent in autonomous driving. It utilizes deep learning on NVIDIA’s most advanced GPUs for 360-degree situational awareness around the car, to determine precisely where the car is and to compute a safe, comfortable trajectory.

“Drivers deal with an infinitely complex world,” said Jen-Hsun Huang, co-founder and CEO, NVIDIA. “Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars.

“NVIDIA’s GPU is central to advances in deep learning and supercomputing. We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs — providing a powerful force for a better future.”

24 Trillion Deep Learning Operations per Second

Created to address the needs of NVIDIA’s automotive partners for an open development platform, DRIVE PX 2 provides unprecedented amounts of processing power for deep learning, equivalent to that of 150 MacBook Pros.

Its two next-generation Tegra processors plus two next-generation discrete GPUs, based on the Pascal architecture, deliver up to 24 trillion deep learning operations per second; these specialized instructions accelerate the math used in deep learning network inference. That’s over 10 times more computational horsepower than the previous-generation product.

DRIVE PX 2’s deep learning capabilities enable it to quickly learn how to address the challenges of everyday driving, such as unexpected road debris, erratic drivers and construction zones. Deep learning also addresses numerous problem areas where traditional computer vision techniques are insufficient — such as poor weather conditions like rain, snow and fog, and difficult lighting conditions like sunrise, sunset and extreme darkness.

For general purpose floating point operations, DRIVE PX 2’s multi-precision GPU architecture is capable of up to 8 trillion operations per second. That’s over four times more than the previous-generation product. This enables partners to address the full breadth of autonomous driving algorithms, including sensor fusion, localization and path planning. It also provides high-precision compute when needed for layers of deep learning networks.

Deep Learning in Self-Driving Cars

Self-driving cars use a broad spectrum of sensors to understand their surroundings. DRIVE PX 2 can process the inputs of 12 video cameras, plus lidar, radar and ultrasonic sensors. It fuses them to accurately detect objects, identify them, determine where the car is relative to the world around it, and then calculate its optimal path for safe travel.
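
The detect-fuse-plan loop described above can be sketched in miniature. This is a toy illustration only; every name below is invented for the example and bears no relation to the actual DriveWorks API:

```python
# Toy sketch of the fuse -> detect -> plan loop described above.
# All names here are invented for illustration; DriveWorks' real API differs.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    distance_m: float   # fused range estimate

def fuse(camera_hits, radar_ranges):
    """Pair each camera label with the matching radar range reading."""
    return [Detection(label, rng) for label, rng in zip(camera_hits, radar_ranges)]

def plan_path(detections, braking_threshold_m=30.0):
    """Brake if any fused object is closer than the threshold."""
    nearest = min(d.distance_m for d in detections)
    return "brake" if nearest < braking_threshold_m else "cruise"

objects = fuse(["pedestrian", "car"], [12.5, 80.0])
print(plan_path(objects))  # brake (pedestrian fused at 12.5 m)
```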

This complex work is facilitated by NVIDIA DriveWorks, a suite of software tools, libraries and modules that accelerates development and testing of autonomous vehicles. DriveWorks enables sensor calibration, acquisition of surround data, synchronization, recording and then processing streams of sensor data through a complex pipeline of algorithms running on all of the DRIVE PX 2’s specialized and general-purpose processors. Software modules are included for every aspect of the autonomous driving pipeline, from object detection, classification and segmentation to map localization and path planning.

End-to-End Solution for Deep Learning

NVIDIA delivers an end-to-end solution — consisting of NVIDIA DIGITS and DRIVE PX 2 — for both training a deep neural network, as well as deploying the output of that network in a car.

DIGITS is a tool for developing, training and visualizing deep neural networks that can run on any NVIDIA GPU-based system — from PCs and supercomputers to Amazon Web Services and the recently announced Facebook Big Sur Open Rack-compatible hardware. The trained neural net model runs on NVIDIA DRIVE PX 2 within the car.

Strong Market Adoption

Since NVIDIA delivered the first-generation DRIVE PX last summer, more than 50 automakers, tier 1 suppliers, developers and research institutions have adopted NVIDIA’s AI platform for autonomous driving development. They are praising its performance, capabilities and ease of development.

“Using NVIDIA’s DIGITS deep learning platform, in less than four hours we achieved over 96 percent accuracy using Ruhr University Bochum’s traffic sign database. While others invested years of development to achieve similar levels of perception with classical computer vision algorithms, we have been able to do it at the speed of light.” — Matthias Rudolph, director of Architecture Driver Assistance Systems at Audi

“BMW is exploring the use of deep learning for a wide range of automotive use cases, from autonomous driving to quality inspection in manufacturing. The ability to rapidly train deep neural networks on vast amounts of data is critical. Using an NVIDIA GPU cluster equipped with NVIDIA DIGITS, we are achieving excellent results.” — Uwe Higgen, head of BMW Group Technology Office USA

“Due to deep learning, we brought the vehicle’s environment perception a significant step closer to human performance and exceed the performance of classic computer vision.” — Ralf G. Herrtwich, director of Vehicle Automation at Daimler

“Deep learning on NVIDIA DIGITS has allowed for a 30X enhancement in training pedestrian detection algorithms, which are being further tested and developed as we move them onto NVIDIA DRIVE PX.” — Dragos Maciuca, technical director of Ford Research and Innovation Center

The DRIVE PX 2 development engine will be generally available in the fourth quarter of 2016. Availability to early access development partners will be in the second quarter.

Keep Current on NVIDIA

Subscribe to the NVIDIA blog, follow us on Facebook, Google+, Twitter, LinkedIn and Instagram, and view NVIDIA videos on YouTube and images on Flickr.

About NVIDIA

Since 1993, NVIDIA (NASDAQ: NVDA) has pioneered the art and science of visual computing. With a singular focus on this field, the company offers specialized platforms for the gaming, automotive, data center and professional visualization markets. Its products, services and software power amazing new experiences in virtual reality, artificial intelligence and autonomous cars. More information at http://nvidianews.nvidia.com/.

Certain statements in this press release including, but not limited to, statements as to: the features, benefits and performance of NVIDIA DRIVE PX 2 and NVIDIA DriveWorks; the effects of modern artificial intelligence and GPU breakthroughs; NVIDIA’s GPU being central to advances in deep learning and supercomputing; the effects of leveraging deep learning and supercomputing advances; the benefits and impact of autonomous cars; the abilities of deep learning; the features of NVIDIA DIGITS; and the availability of the DRIVE PX 2 development platform are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; our reliance on third parties to manufacture, assemble, package and test our products; the impact of technological development and competition; development of new products and technologies or enhancements to our existing product and technologies; market acceptance of our products or our partners’ products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of our products or technologies when integrated into systems; as well as other factors detailed from time to time in the reports NVIDIA files with the Securities and Exchange Commission, or SEC, including its Form 10-Q for the fiscal period ended October 25, 2015. Copies of reports filed with the SEC are posted on the company’s website and are available from NVIDIA without charge. These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

© 2016 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, Tegra, NVIDIA DIGITS, NVIDIA DRIVE, NVIDIA DriveWorks and Pascal are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability and specifications are subject to change without notice.


Supercomputer | Article about supercomputer by The Free …

supercomputer, a state-of-the-art, extremely powerful computer capable of manipulating massive amounts of data in a relatively short time. Supercomputers are very expensive and are employed for specialized scientific and engineering applications that must handle very large databases or do a great amount of computation, among them meteorology, animated graphics, fluid dynamic calculations, nuclear energy research and weapon simulation, and petroleum exploration. There are two approaches to the design of supercomputers. One, called massively parallel processing (MPP), is to chain together thousands of commercially available microprocessors utilizing parallel processing techniques. A variant of this, called a Beowulf cluster, or cluster computing, employs large numbers of personal computers interconnected by a local area network and running programs written for parallel processing. The other approach, called vector processing, is to develop specialized hardware to solve complex calculations.
This technique was employed (2002) in the Earth Simulator, a Japanese supercomputer with 640 nodes composed of 5104 specialized processors to execute 35.6 trillion mathematical operations per second; it is used to analyze earthquake and weather patterns and climate change, including global warming. Operating systems for supercomputers, formerly largely Unix-based, are now typically Linux-based.

Advances in supercomputing have regularly resulted in new supercomputers that significantly exceed the capabilities of those that are only a year older; by 2012 the fastest supercomputer was more than 250,000 times faster than the fastest in 1993 in terms of the number of calculations per second it could complete. Although calculation speed is the standard for measuring supercomputer power, it is not, however, an accurate indicator of everyday performance; most supercomputers are not fully utilized when running programs. Supercomputers can require significant amounts of electrical power, and many use water and refrigeration for cooling, but some are air-cooled and use no more power than the average home. In 2003 scientists at Virginia Tech assembled a relatively low-cost supercomputer using 1,100 dual-processor Apple Macintoshes; it was ranked at the time as the third fastest machine in the world.

A computer which is among those with the highest speed, largest functional size, biggest physical dimensions, or greatest monetary cost in any given period of time.

A computer which, among existing general-purpose computers at any given time, is superlative, often in several senses: highest computation rate, largest memory, or highest cost. Predominantly, the term refers to the fastest number crunchers, that is, machines designed to perform numerical calculations at the highest speed that the latest electronic device technology and the state of the art of computer architecture allow.

The demand for the ability to execute arithmetic operations at the highest possible rate originated in computer applications areas collectively referred to as scientific computing. Large-scale numerical simulations of physical processes are often needed in fields such as physics, structural mechanics, meteorology, and aerodynamics. A common technique is to compute an approximate numerical solution to a set of partial differential equations which mathematically describe the physical process of interest but are too complex to be solved by formal mathematical methods. This solution is obtained by first superimposing a grid on a region of space, with a set of numerical values attached to each grid point. Large-scale scientific computations of this type often require hundreds of thousands of grid points with 10 or more values attached to each point, with 10 to 500 arithmetic operations necessary to compute each updated value, and hundreds of thousands of time steps over which the computation must be repeated before a steady-state solution is reached. See Computational fluid dynamics, Numerical analysis, Simulation
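
As a minimal sketch of the grid technique described above, here is an explicit finite-difference update of the 1-D heat equation. Real scientific codes use far larger grids, many values per point, and 3-D domains, but the update structure is the same:

```python
# Minimal example of the grid-based technique described above:
# explicit finite-difference time-stepping of the 1-D heat equation
# u_t = alpha * u_xx toward a steady-state solution.
n_points = 11
u = [0.0] * n_points
u[n_points // 2] = 100.0      # initial hot spot in the middle of the grid
alpha_dt_dx2 = 0.25           # alpha * dt / dx^2, chosen for stability (<= 0.5)

for step in range(100):       # repeat the update over many time steps
    u = [u[i] if i in (0, n_points - 1)          # fixed boundary values
         else u[i] + alpha_dt_dx2 * (u[i-1] - 2*u[i] + u[i+1])
         for i in range(n_points)]

print([round(x, 1) for x in u])   # heat has diffused outward from the center
```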

Two lines of technological advancement have significantly contributed to what roughly amounts to a doubling of the fastest computers’ speeds every year since the early 1950s: the steady improvement in electronic device technology and the accumulation of improvements in the architectural designs of digital computers.
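
A doubling every year compounds quickly, and it is consistent with the more-than-250,000-fold 1993-to-2012 speedup quoted earlier:

```python
# Yearly doubling from 1993 to 2012 is 19 doublings.
doublings = 2012 - 1993
print(2 ** doublings)  # 524288
# The observed ">250,000x" figure quoted earlier thus corresponds to
# slightly less than one full doubling per year over that span.
```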

Computers incorporate very large-scale integrated (VLSI) circuits with tens of millions of transistors per chip for both logic and memory components. A variety of types of integrated circuitry is used in contemporary supercomputers. Several use high-speed complementary metal-oxide-semiconductor (CMOS) technology. Throughout most of the history of digital computing, supercomputers generally used the highest-performance switching circuitry available at the time, which was usually the most exotic and expensive. However, many supercomputers now use the conventional, inexpensive device technology of commodity microprocessors and rely on massive parallelism for their speed. See Computer storage technology, Concurrent processing, Integrated circuits, Logic circuits

Increases in computing speed which are purely due to the architectural structure of a computer can largely be attributed to the introduction of some form of parallelism into the machine’s design: two or more operations which were performed one after the other in previous computers can now be performed simultaneously. See Computer systems architecture

Pipelining is a technique which allows several operations to be in progress in the central processing unit at once. The first form of pipelining used was instruction pipelining. Since each instruction must have the same basic sequence of steps performed, namely instruction fetch, instruction decode, operand fetch, and execution, it is feasible to construct an instruction pipeline, where each of these steps happens at a separate stage of the pipeline. The efficiency of the instruction pipeline depends on the likelihood that the program being executed allows a steady stream of instructions to be fetched from contiguous locations in memory.
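
A back-of-the-envelope model shows why the four-stage pipeline described above pays off (assuming, for illustration, one cycle per stage and an uninterrupted instruction stream with no stalls):

```python
# Effect of a 4-stage instruction pipeline (fetch, decode, operand
# fetch, execute), assuming one cycle per stage and no stalls.
stages = 4
instructions = 1000

unpipelined_cycles = instructions * stages       # one instruction at a time
pipelined_cycles = stages + (instructions - 1)   # fill once, then 1 per cycle
print(unpipelined_cycles, pipelined_cycles)      # 4000 1003
print(f"speedup ~{unpipelined_cycles / pipelined_cycles:.1f}x")  # speedup ~4.0x
```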

The central processing unit nearly always has a much faster cycle time than the memory. This implies that the central processing unit is capable of processing data items faster than a memory unit can provide them. Interleaved memory is an organization of memory units which at least partially relieves this problem.
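
Interleaving typically amounts to a modulo mapping of addresses onto banks, so a sequential access stream touches each bank only every n-th reference, hiding the slow memory cycle time. A toy model (the bank count is chosen arbitrarily for illustration):

```python
# Toy model of interleaved memory: consecutive addresses map to
# different banks, spreading a sequential access stream across them.
n_banks = 4
addresses = range(8)          # a sequential access stream
banks = [addr % n_banks for addr in addresses]
print(banks)  # [0, 1, 2, 3, 0, 1, 2, 3]
```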

Parallelism within arithmetic and logical circuitry has been introduced in several ways. Adders, multipliers, and dividers now operate in bit-parallel mode, while the earliest machines performed bit-serial arithmetic. Independently operating parallel functional units within the central processing unit can each perform an arithmetic operation such as add, multiply, or shift. Array processing is a form of parallelism in which the instruction execution portion of a central processing unit is replicated several times and connected to its own memory device as well as to a common instruction interpretation and control unit. In this way, a single instruction can be executed at the same time on each of several execution units, each on a different set of operands. This kind of architecture is often referred to as single-instruction stream, multiple-data stream (SIMD).

Vector processing is the term applied to a form of pipelined arithmetic units which are specialized for performing arithmetic operations on vectors, which are uniform, linear arrays of data values. It can be thought of as a type of SIMD processing, since a single instruction invokes the execution of the same operation on every element of the array. See Computer programming, Programming languages
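
In miniature, vector processing means one operation applied uniformly across whole linear arrays; here is a pure-Python stand-in for what a vector unit does in hardware:

```python
# One "vector instruction" in spirit: the same add applied to every
# element of two uniform, linear arrays (a pure-Python stand-in for
# what a hardware vector unit does in a single pipelined operation).
a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
c = [x + y for x, y in zip(a, b)]
print(c)  # [11.0, 22.0, 33.0, 44.0]
```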

A central processing unit can contain multiple sets of the instruction execution hardware for either scalar or vector instructions. The task of scheduling instructions which can correctly execute in parallel with one another is generally the responsibility of the compiler or special scheduling hardware in the central processing unit. Instruction-level parallelism is almost never visible to the application programmer.

Multiprocessing is a form of parallelism that has complete central processing units operating in parallel, each fetching and executing instructions independently from the others. This type of computer organization is called multiple-instruction stream, multiple-data stream (MIMD). See Multiprocessing

A less serious definition, reported from about 1990 at the University of New South Wales, states that a supercomputer is any computer that can outperform IBM’s current fastest, thus making it impossible for IBM to ever produce a supercomputer.


Homepage | Ohio Supercomputer Center

Chancellor John Carey joined Pankaj Shah, executive director of OSC and OARnet, and Professor Thomas Beck, chair of the Statewide Users Group, on April 9 to dedicate the center’s newest system, the HP/Intel Xeon Phi Ruby Cluster. The system highlights a new direction in system acquisition through a partnership with two significant “condo” partners.


Raspberry Pi Supercomputer Guide Steps

Return to http://www.soton.ac.uk/~sjc/raspberrypi

View video at: http://www.youtube.com/watch?v=Jq5nrHz9I94

Prof Simon Cox

Computational Engineering and Design Research Group

Faculty of Engineering and the Environment

University of Southampton, SO17 1BJ, UK.

V0.2: 8th September 2012

V0.3: 30th November 2012 [Updated with less direct linking to MPICH2 downloads]

V0.4: 9th January 2013 [Updated step 33]

First steps to get machine up

1. Get image from


I originally used: 2012-08-16-wheezy-raspbian.zip

Updated 30/11/12: 2012-10-28-wheezy-raspbian.zip

My advice is to check the downloads page on raspberrypi.org and use the latest version.

2. Use win32 disk imager to put image onto an SD Card (or on a Mac e.g. Disk Utility/ dd)


You will use the Write option to copy the image file from your computer onto the card

3. Boot on Pi

4. Expand the image to fill the card using the option on screen when you first boot. If you don’t do this on first boot, then you need to use

$ sudo raspi-config


5. Log in and change the password


$ passwd

6. Log out and check that you typed it all OK (!)

$ exit

7. Log back in again with your new password

Building MPI so we can run code on multiple nodes

8. Refresh your list of packages in your cache

$ sudo apt-get update

9. I am just doing this out of habit; note that we are doing no more than refreshing the package list (a full upgrade would be via sudo apt-get upgrade).

10. Get Fortran: after all, what is scientific programming without Fortran as a possibility?

$ sudo apt-get install gfortran

11. Read about MPI on the Pi. This is an excellent post to read just to show you are going to make it by the end, but don’t type or get anything just yet; we are going to build everything ourselves:


A few things to note here:

a) Since we put Fortran in, we are good to go without excluding anything

b) The packages here are for armel, and we need armhf in this case, so we are going to build MPI ourselves

12. Read a bit more before you begin:


Note: As the version of MPICH2 updates, you are better to go to:


and get the latest Installer’s Guide.

We are going to follow the steps from 2.2 (from the Quick Start Section) in the guide.

13. Make a directory to put the sources in

$ mkdir /home/pi/mpich2

$ cd ~/mpich2

14. Get MPI sources from Argonne.

$ wget http://www.mcs.anl.gov/research/projects/mpich2/downloads/tarballs/1.4.1p1/mpich2-1.4.1p1.tar.gz

[Note that as the MPI source updates, you can navigate to:

http://www.mpich.org/downloads/ to get the latest stable release version for MPICH2]

15. Unpack them.

$ tar xfz mpich2-1.4.1p1.tar.gz

[Note: You will need to update this as the version of MPICH2 increments]

16. Make yourself a place to put the compiled stuff. This will also make it easier to figure out what you have newly added to your system. Also, you may end up building this a few times.

$ sudo mkdir /home/rpimpi/

$ sudo mkdir /home/rpimpi/mpich2-install

[I just chose ‘rpimpi’ to replace the ‘you’ in the Argonne guide, and I did the directory creation in two steps]

17. Make a build directory (so we keep the source directory clean of build things)

$ mkdir /home/pi/mpich_build

18. Change to the BUILD directory

$ cd /home/pi/mpich_build

19. Now we are going to configure the build

$ sudo /home/pi/mpich2/mpich2-1.4.1p1/configure --prefix=/home/rpimpi/mpich2-install

[Note: You will need to update this as the version of MPICH2 increments]

Make a cup of tea

20. Make the files

$ sudo make

Make another cup of tea

21. Install the files

$ sudo make install

Make another cup of tea; it will finish.

22. Add the place that you put the install to your path

$ export PATH=$PATH:/home/rpimpi/mpich2-install/bin

Note: to put this on your path permanently, you will need to edit .profile

$ nano ~/.profile

and add at the bottom these two lines:

# Add MPI to path

export PATH=$PATH:/home/rpimpi/mpich2-install/bin
23. Check whether things installed correctly

$ which mpicc

$ which mpiexec

24. Change directory back to home and create somewhere to do your tests

$ cd ~

$ mkdir mpi_testing

$ cd mpi_testing

25. Now we can test whether MPI works for you on a single node

$ mpiexec -f machinefile -n 1 hostname

where machinefile contains a list of IP addresses (in this case just one) for the machines

a) Get your IP address

$ ifconfig

b) Put this into a single file called machinefile

26. $ nano machinefile

Add a single line containing your Pi’s IP address (the one ifconfig reported in step 25).
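For reference, the machinefile format is simply one IP address per line; the address below is a made-up example (for this single-node test it would be the address ifconfig reported):

```
192.168.1.161
```

Later, when you add more Pis to the cluster, the machinefile grows to one line per node.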


What Are the Uses of a Supercomputer? | eHow


Today’s supercomputers not only perform calculation after calculation with blazing speed; they also process vast amounts of data in parallel by distributing computing chores to thousands of CPUs. Supercomputers are found at work in research facilities, government agencies and businesses, performing mathematical calculations as well as collecting, collating, categorizing and analyzing data.
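The distribute-and-collect pattern described above can be sketched in miniature with Python’s standard multiprocessing module; the worker function and chunk sizes below are invented purely for illustration:

```python
from multiprocessing import Pool

def crunch(chunk):
    # Stand-in for a heavy numerical kernel: sum of squares over one chunk.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the input into chunks and farm them out to a pool of worker
    # processes, much as a cluster distributes chores across its nodes.
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with Pool(processes=4) as pool:
        partials = pool.map(crunch, chunks)
    # Combine the partial results; this matches the serial computation.
    print(sum(partials) == sum(x * x for x in data))  # prints True
```

On a real supercomputer the same shape appears at vastly larger scale, with processes on separate machines playing the role of the pool workers.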

Your local weatherman bases his forecasts on data supplied by supercomputers run by NOAA, the National Oceanic and Atmospheric Administration. NOAA’s systems perform database operations and mathematical and statistical calculations on huge amounts of data gathered from across the nation and around the world. The processing power of supercomputers helps climatologists predict not only the likelihood of rain in your neighborhood, but also the paths of hurricanes and the probability of tornado strikes.

Like the weather, scientific research depends upon the number-crunching ability of supercomputers. For example, astronomers at NASA analyze data streaming from satellites orbiting Earth, from ground-based optical and radio telescopes, and from probes exploring the solar system. Researchers at the European Organization for Nuclear Research, or CERN, found the Higgs boson by analyzing the massive amounts of data generated by the Large Hadron Collider.

The National Security Agency and similar government intelligence agencies all over the world use supercomputers to monitor communications between private citizens, suspected terrorist organizations and potentially hostile governments. The NSA needs the numerical processing power of supercomputers to keep ahead of increasingly sophisticated encryption of Internet, cell phone, email and satellite transmissions, as well as old-fashioned radio communications. In addition, the NSA uses supercomputers to find patterns in both written and spoken communication that might alert officials to potential threats or suspicious activity.

Supercomputers are also used to extract information from raw data gathered in data farms on the ground or in the cloud. For example, businesses can analyze data collected from their cash registers to help control inventory or spot market trends. Life insurance companies use supercomputers to minimize their actuarial risks. Likewise, companies that provide health insurance reduce costs and customer premiums by using supercomputers to statistically analyze the benefits of different treatment options.



Raspberry Pi at Southampton

The steps to make a Raspberry Pi supercomputer can be downloaded here [9th Jan 2013 update]: Raspberry Pi Supercomputer (PDF).

You can also follow the steps yourself here [9th Jan 2013 update]: Raspberry Pi Supercomputer (html).

The press release (11th Sept 2012) for our Raspberry Pi Supercomputer with Lego is here: Press Release University Page

The press release is also here (PDF): Press Release (PDF).

Pictures are here – including Raspberry Pi and Lego: Press Release (More Pictures).

We wrote up our work as a scientific journal publication where you can find further technical details on the build, motivation for the project and benchmarking.

The reference to the paper is:

Simon J. Cox, James T. Cox, Richard P. Boardman, Steven J. Johnston, Mark Scott, Neil S. O’Brien

Iridis-pi: a low-cost, compact demonstration cluster

Cluster Computing

June 2013

DOI: 10.1007/s10586-013-0282-7

These are some links you may find helpful:


Usain Bolt's diet, a super computer's palate and more: Reading About Eating

Usain Bolt

Jamaica’s Usain Bolt celebrates winning gold in the men’s 200-meter final at the World Athletics Championships in the Luzhniki stadium in Moscow, Russia, Aug. 2013. Bolt reportedly ate a lot of McDonald’s en route to becoming one of the world’s fastest humans. (AP Photo/Martin Meissner, File) (Martin Meissner)

An ongoing digest of the food stories we’re consuming at NOLA.com | The Times-Picayune.

The Diet of Champions– “[T]hey lived on a diet of McDonalds, but that did not stop Ryan Lochte winning 4 Olympic medals for the USA, and Usain Bolt winning 3 Olympic medals for Jamaica, becoming the then fastest Olympian over 100m in the process.” (Decibel h/t Digg)

IBM’s AI computer has come up with some pretty incredible food pairings– “Knowledge that might’ve taken a lifetime for a Michelin-starred chef to attain can now be accessed instantly from your tablet.” (Mike Murphy/Quartz)

How Peter Chang stopped running and started empire building– Chang was once America’s most famous elusive chef. “Chang’s triumphal return to Northern Virginia generated so much excitement that Changians — as his devoted pack members call themselves — briefly crashed the Arlington restaurant’s Web site before the place could open its doors.” (Tim Carman/The Washington Post)


Supermicro Debuts New 720TB 4U 90x 3.5" Top-Load Hot-Swap, SAS3 12Gb/s HDD SuperStorage JBOD Platform @ NAB Show 2015

LAS VEGAS, April 13, 2015 /PRNewswire/ –Super Micro Computer, Inc. (NASDAQ: SMCI), a global leader in high-performance, high-efficiency server, storage technology, and green computing debuts its new extreme density 4U 90x hot-swap bay SuperStorage platform alongside a wide range of high performance server, storage and networking solutions for the Broadcast Media Industry this week at NAB (National Association of Broadcasters) Show in Las Vegas, Nevada. The new 4U JBOD storage chassis supports up to 90x 3.5″ 8TB HDDs for 720TB of SAS3, 12Gb/s performance in an easily serviceable top-load hot-swap architecture. Dual hot-swap expander modules featuring 4x Mini-SAS HD ports maximize throughput and failover redundancy for media streaming, nearline and archive storage applications. With 4K 60p Ultra High-Definition hitting mainstream and 8K 120p UHD on the horizon, all aspects of broadcast media production, from capture, editing, and CG/VFX, to encoding/transcoding, streaming and archiving will drive demand for the most extreme levels of compute, storage and network performance.

“Supermicro Green Computing solutions address the most extreme digital workload challenges facing the Broadcast industry with maximum performance, density, and energy efficiency,” said Charles Liang, President and CEO of Supermicro. “Solutions such as our new 4U 90 top-load hot-swap 3.5″ HDD SuperStorage or 1U 12x 3.5″ direct attached 10GbE storage are unrivaled in density, performance and scalability for uncompressed media. And our 1U 4x GPU systems deliver the raw compute power for visual effects, rendering and transcoding. As the entertainment industry shifts to UHD media, Supermicro has exactly the best compute, storage, and network solutions available for studios to achieve lowest TCO through optimum performance per watt, per square foot, per dollar.”

Supermicro NAB Show 2015 highlights include Blackmagic Design, DaVinci Resolve running on the Blackmagic Design certified, GPU accelerated SYS-7048GR-TR, and full range of server, blade, storage and networking solutions optimized for digital content creation, production, management, and distribution.

Visit Supermicro at NAB Show 2015, April 13-16 in the Las Vegas Convention Center, Lower South Hall, Booth SL15705. For more information on Supermicro’s complete range of high performance, high-efficiency Server, Blade, Storage and Networking solutions, visit http://www.supermicro.com.

Follow Supermicro on Facebook and Twitter to receive their latest news and announcements.

About Super Micro Computer, Inc.

Supermicro (NASDAQ: SMCI), the leading innovator in high-performance, high-efficiency server technology is a premier provider of advanced server Building Block Solutions for Data Center, Cloud Computing, Enterprise IT, Hadoop/Big Data, HPC and Embedded Systems worldwide. Supermicro is committed to protecting the environment through its “We Keep IT Green” initiative and provides customers with the most energy-efficient, environmentally-friendly solutions available on the market.

Supermicro, Building Block Solutions and We Keep IT Green are trademarks and/or registered trademarks of Super Micro Computer, Inc.

All other brands, names and trademarks are the property of their respective owners.




REPORT: This billionaire hedge funder is quietly bankrolling Ted Cruz's campaign

Hedge fund magnate Robert Mercer, a reported billionaire, is the “main donor” bankrolling the super PACs supporting Sen. Ted Cruz’s (R-Texas) presidential campaign, The New York Times reported Friday.

According to The Times, Mercer, “a reclusive Long Islander who started at I.B.M. and made his fortune using computer patterns to outsmart the stock market, emerged this week as a key bankroller” of Cruz’s “surprisingly fast campaign start.”

Cruz made headlines on Wednesday when the super PACs reported raising what Bloomberg described as a stunning $31 million in just a week.

“Cruz’s presidential effort is getting into the shock-and-awe fundraising business,” Bloomberg’s Mark Halperin wrote. “Even in the context of a presidential campaign cycle in which the major party nominees are expected to raise more than $1.5 billion, Cruz’s haul is eye-popping, one that instantly raises the stakes in the Republican fundraising contest.”

Cruz’s campaign was not expected to be so well funded, especially because the firebrand senator has repeatedly railed against the Republican establishment and “crony capitalism.” Other presidential contenders like former Florida Gov. Jeb Bush (R), Wisconsin Gov. Scott Walker (R), and Sen. Marco Rubio (R-Florida) appear to be closer to establishment-friendly GOP donors.

Trevor Potter, a Republican campaign finance lawyer, told The Times that donors like Mercer are changing the landscape of electoral politics.

“It just takes a random billionaire to change a race and maybe change the country,” Potter said. “That’s what’s so radically different now.”

The paper reported that Mercer, who declined to comment, has not said anything publicly about his financial support for Cruz or why he’s backing the senator’s presidential campaign, which launched on March 23. According to The Daily Beast, Mercer’s hedge fund, Renaissance Technologies, “recently faced an unflattering congressional investigation, the results of which indicated that it used complex and unorthodox financial structures to dramatically lower its tax burden.”

Cruz’s campaign, required to be independent of the super PACs, previously told Business Insider that it was more than excited by the report indicating the committees were flush with money.

“We are thrilled by the report!” his spokeswoman said.


Supercomputer passes Turing test


More than a third of Royal Society testers were fooled by a supercomputer into thinking that it was a 13-year-old boy.

Five machines were tested at the Royal Society in central London to see if they could fool people into thinking they were humans during text-based conversations.

The test was devised in 1950 by computer science pioneer and World War II code breaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was thinking.

Until this event, no computer had passed the Turing test, which requires 30 percent of human interrogators to be duped during a series of five-minute keyboard conversations.

Eugene Goostman, a computer program developed to simulate a 13-year-old boy, managed to convince 33 percent of the judges that it was human, the university said.

Professor Kevin Warwick, from the University of Reading, said: “In the field of artificial intelligence there is no more iconic and controversial milestone than the Turing test.

“It is fitting that such an important landmark has been reached at the Royal Society in London, the home of British science and the scene of many great advances in human understanding over the centuries. This milestone will go down in history as one of the most exciting.”

The successful machine was created by Russian-born Vladimir Veselov, who lives in the United States, and Ukrainian Eugene Demchenko who lives in Russia.
