Scientists Say New Quantum Material Could “Download” Your Brain

A new type of quantum material can directly measure neural activity and translate it into electrical signals for a computer.

Computer Brain

Scientists say they’ve developed a new “quantum material” that could one day transfer information directly from human brains to a computer.

The research is in early stages, but it invokes ideas like uploading brains to the cloud or hooking people up to a computer to track deep health metrics — concepts that until now existed solely in science fiction.

Quantum Interface

The new quantum material, described in research published Wednesday in the journal Nature Communications, is a “nickelate lattice” that the scientists say could directly translate the brain’s electrochemical signals into electrical activity that could be interpreted by a computer.

“We can confidently say that this material is a potential pathway to building a computing device that would store and transfer memories,” Purdue University engineer Shriram Ramanathan told ScienceBlog.

Running Diagnostics

Right now, the new material can only detect the activity of some neurotransmitters — so we can’t yet upload a whole brain or anything like that. But if the tech progresses, the researchers hypothesize that it could be used to detect neurological diseases, or perhaps even store memories.

“Imagine putting an electronic device in the brain, so that when natural brain functions start deteriorating, a person could still retrieve memories from that device,” Ramanathan said.

READ MORE: New Quantum Material Could Warn Of Neurological Disease [ScienceBlog]

More on brain-computer interface: This Neural Implant Accesses Your Brain Through the Jugular Vein

The post Scientists Say New Quantum Material Could “Download” Your Brain appeared first on Futurism.

NASA Is Funding the Development of 18 Bizarre New Projects

Through the NASA Innovative Advanced Concepts (NIAC) program, NASA funds projects that go

Nurturing the Bizarre

NASA isn’t afraid to take a chance on the weird. In fact, it has a program designed for that specific purpose, called NASA Innovative Advanced Concepts (NIAC) — and on Wednesday, the agency announced 18 bizarre new projects receiving funding through the program.

“Our NIAC program nurtures visionary ideas that could transform future NASA missions by investing in revolutionary technologies,” NASA exec Jim Reuter said in a press release. “We look to America’s innovators to help us push the boundaries of space exploration with new technology.”

Sci-Fi to Sci-Fact

The 18 newly funded projects are divided into two groups: Phase I and Phase II.

The 12 recipients of the Phase I awards will each receive approximately $125,000 to fund nine months’ worth of feasibility studies for their concepts. These include a project to beam power through Venus’ atmosphere to support long-term missions, a spacesuit with self-healing skin, and floating microprobes inspired by spiders.

The six Phase II recipients, meanwhile, will each receive up to $500,000 to support two-year studies dedicated to fine-tuning their concepts and investigating potential ways to implement the technologies, which include a flexible telescope, a neutrino detector, and materials for solar surfing.

“NIAC is about going to the edge of science fiction, but not over,” Jason Derleth, NIAC program executive, said in the press release. “We are supporting high impact technology concepts that could change how we explore within the solar system and beyond.”

READ MORE: NASA Invests in Potentially Revolutionary Tech Concepts [Jet Propulsion Laboratory]

More on bizarre NASA plans: New NASA Plan for Mars Is Moderately-Terrifying-Sounding, Also, Completely-Awesome: Robotic. Bees.

The post NASA Is Funding the Development of 18 Bizarre New Projects appeared first on Futurism.

Report: Tesla Doc Is Playing Down Injuries to Block Workers’ Comp

Former Tesla and clinic employees share how doctors blocked workers' compensation claims and put injured people back to work to avoid payouts.

Here’s A Band-Aid

Tesla’s on-site clinic, Access Omnicare, has allegedly been downplaying workers’ injuries to keep the electric automaker off the hook for workers’ compensation.

Several former Tesla employees who were injured on the job, along with former employees of Access Omnicare, told Reveal News that the clinic was minimizing worker injuries so that the automaker wouldn’t have to pay workers’ comp, suggesting that the barely profitable car company is willing to do whatever it takes to stay out of the red and avoid negative press.

Back To Work

Reveal, which is a project by the Center for Investigative Reporting, described cases in which employees suffered electrocution, broken bones, and mold-related rashes while working in a Tesla factory — only for Omnicare to deny that the injuries warranted time off work.

The clinic’s top doctor “wanted to make certain that we were doing what Tesla wanted so badly,” former Omnicare operations manager Yvette Bonnet told Reveal. “He got the priorities messed up. It’s supposed to be patients first.”

Missing Paperwork

Meanwhile, employees who requested the paperwork to file for workers’ comp were repeatedly ignored, according to Reveal.

“I just knew after the third or fourth time that they weren’t going to do anything about it,” a former employee whose back was crushed under a falling Model X hatchback told Reveal. “I was very frustrated. I was upset.”

The automaker is on the hook for up to $750,000 in medical payments per workers’ comp claim, according to Reveal’s reporting.

Meanwhile, both Tesla CEO Elon Musk and Laurie Shelby, the company’s VP of safety, have publicly praised Access Omnicare, Reveal found. Musk even recently announced plans to extend it to other plants, “so that we have really immediate first-class health care available right on the spot when people need it.”

READ MORE: How Tesla and its doctor made sure injured employees didn’t get workers’ comp [Reveal News]

More on Tesla: Video Shows Tesla Autopilot Steering Toward Highway Barriers

The post Report: Tesla Doc Is Playing Down Injuries to Block Workers’ Comp appeared first on Futurism.

Infertile Couple Gives Birth to “Three-Parent Baby”

A Greek couple just gave birth to a three-parent baby, the first conceived as part of a clinical trial to treat infertility.

Happy Birthday

On Tuesday, a couple gave birth to what researchers are calling a “three-parent baby” — giving new hope to infertile couples across the globe.

After four cycles of in vitro fertilization failed to result in a pregnancy, the Greek couple enrolled in a clinical trial for mitochondrial replacement therapy (MRT) — meaning doctors placed the nucleus from the mother’s egg into a donor egg that had its nucleus removed. Then they fertilized the egg with sperm from the father and implanted it into the mother.

Due to this procedure, the six-pound baby boy has DNA from both his mother and father, as well as a tiny bit from the woman who donated the egg.

Greek Life

The Greek baby wasn’t the first “three-parent baby” born after his parents underwent MRT — that honor goes to the offspring of a Jordanian woman who gave birth in 2016.

However, in her case and others that followed it, doctors used the technique to prevent a baby from inheriting a parent’s genetic defect. This marked the first time a couple used MRT as part of a clinical trial to treat infertility.

“Our excellent collaboration and this exceptional result will help countless women to realise their dream of becoming mothers with their own genetic material,” Nuno Costa-Borges, co-founder of Embryotools, one of the companies behind the trial, said in a statement.

READ MORE: Baby with DNA from three people born in Greece [The Guardian]

More on three-parent babies: An Infertile Couple Is Now Pregnant With a “Three-Parent Baby”

The post Infertile Couple Gives Birth to “Three-Parent Baby” appeared first on Futurism.

MIT Prof: If We Live in a Simulation, Are We Players or NPCs?

An MIT scientist asks whether we're protagonists in a simulated reality or so-called NPCs who exist to round out a player character's experience. 

Simulation Hypothesis

Futurism readers may recognize Rizwan Virk as the MIT researcher touting a new book arguing that we’re likely living in a game-like computer simulation.

Now, in a new interview with Vox, Virk goes even further, probing whether we’re protagonists in the simulation or so-called “non-player characters” who are presumably included to round out a player character’s experience.

Great Simulation

Virk speculated about whether we’re players or side characters when Vox writer Sean Illing asked a question likely pondered by anyone who’s seen “The Matrix”: If you were living in a simulation, would you actually want to know?

“Probably the most important question related to this is whether we are NPCs (non-player characters) or PCs (player characters) in the video game,” Virk told Vox. “If we are PCs, then that means we are just playing a character inside the video game of life, which I call the Great Simulation.”

More Frightening

It’s a line of inquiry that cuts to the core of the simulation hypothesis: If the universe is essentially a video game, who built it — and why?

“The question is, are all of us NPCs in a simulation, and what is the purpose of that simulation?” Virk asked. “A knowledge of the fact that we’re in a simulation, and the goals of the simulation and the goals of our character, I think, would still be interesting to many people.”

READ MORE: Are we living in a computer simulation? I don’t know. Probably. [Vox]

More on the simulation hypothesis: Famous Hacker Thinks We’re Living in Simulation, Wants to Escape

The post MIT Prof: If We Live in a Simulation, Are We Players or NPCs? appeared first on Futurism.

Supercomputer – Simple English Wikipedia, the free …

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include shared memory, distributed memory, and array machines. Shared-memory supercomputers are developed using parallel computing and pipelining concepts. Distributed-memory supercomputers consist of many (about 100–10,000) nodes. The CRAY series from Cray Research, the VP 2400/40, and the NEC SX-3 are shared-memory types. The nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S, and CM-5 are distributed-memory types.

An array-type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (also an array type) were developed. Supercomputers that use physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.

Home | TOP500 Supercomputer Sites

AWS Adds More Epyc Compute To EC2

For the first decade that Amazon Web Services was in operation, its Elastic Compute Cloud (EC2) raw compute was available in precisely one flavor: Intel Xeon.

AWS Adds More Epyc Compute To EC2 was written by Timothy Prickett Morgan.

BOULDER, Colo., March 27, 2019. ColdQuanta, Inc., a leading developer of ultracold-atom quantum technology, today announced that its board of directors has appointed Robert “Bo” Ewald as president and chief executive officer. Ewald is well-known in high technology, having previously been president of supercomputing leader Cray Research, CEO of Silicon Graphics, and for the […]

The post ColdQuanta Appoints Robert Bo Ewald as President and Chief Executive Officer appeared first on HPCwire.

March 27, 2019. As part of its mission to educate and engage the public in science, CERN is launching the Science Gateway, a new facility dedicated to scientific education and outreach. The Science Gateway will be hosted in a new, iconic building on CERN’s Meyrin site, designed by the world-renowned Renzo Piano Building Workshop architects. Construction is planned […]

The post CERN to Unveil Its New Science Gateway Project appeared first on HPCwire.

March 27, 2019. The San Diego Supercomputer Center (SDSC) at UC San Diego and Sylabs.io recently hosted the first-ever Singularity User Group meeting, attracting users and developers from around the nation and beyond who wanted to learn more about the latest developments in an open source project known as Singularity. Now in use on SDSC’s Comet supercomputer, […]

The post SDSC and Sylabs Spread the Word on Singularity appeared first on HPCwire.

The San Diego Supercomputer Center (SDSC) at UC San Diego and Sylabs.io recently hosted the first-ever Singularity User Group meeting, attracting users and developers from around the nation and beyond who wanted to learn more about the latest developments in an open source project known as Singularity. Now in use on SDSC’s Comet supercomputer, Singularity has quickly become an essential tool in improving the productivity of researchers by simplifying the development and portability challenges of working with complex scientific software.

The post SDSC and Sylabs Gather for Singularity User Group appeared first on insideHPC.

Today quantum computing startup ColdQuanta announced the appointment of Robert “Bo” Ewald as president and chief executive officer. Ewald is well-known in high technology, having previously been president of supercomputing leader Cray Research, CEO of Silicon Graphics, and for the past six years, president of quantum computing company D-Wave International. “With his experience at Cray, SGI and D-Wave, Bo has successfully navigated companies through the bleeding edge of technology several times before. I am thrilled to have Bo take ColdQuanta’s helm.”

The post Bo Ewald joins quantum computing firm ColdQuanta as CEO appeared first on insideHPC.

Sean Hefty and Venkata Krishnan from Intel gave this talk at the OpenFabrics Workshop in Austin. “Advances in Smart NIC/FPGA with integrated network interface allow acceleration of application-specific computation to be performed alongside communication. Participants will learn about the potential for Smart NIC/FPGA application acceleration and will have the opportunity to contribute application expertise and domain knowledge to a discussion of how Smart NIC/FPGA acceleration technology can bring individual applications into the Exascale era.”

The post Video: Enabling Applications to Exploit SmartNICs and FPGAs appeared first on insideHPC.

The conversational AI created by IBM called Project Debater is designed to have a formal debate with a person.

IBM Project Debater Speaks To The Future Of AI was written by Paul Teich.

It is no secret that Intel has been working to get its Cascade Lake processors, the second generation of its Xeon SP family, to market as early as possible this year and to ramp sales at the same time that X86 server rival AMD is expected to get its second generation Rome Epyc processors into the field.

A First Peek At Cascade Lake Xeons Ahead Of Launch was written by Timothy Prickett Morgan.

What is a Supercomputer? – Definition from Techopedia

Supercomputers are primarily designed to be used in enterprises and organizations that require massive computing power. A supercomputer incorporates architectural and operational principles from parallel and grid processing, where a process is simultaneously executed on thousands of processors or is distributed among them. Although supercomputers house thousands of processors and require substantial floor space, they contain most of the key components of a typical computer, including processors, peripheral devices, connectors, an operating system and applications.

As of 2013, IBM’s Sequoia was the fastest supercomputer to date. It has more than 98,000 processors that allow it to process at a speed of 16,000 trillion calculations per second.
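As a sanity check on those figures, 16,000 trillion calculations per second is 16 petaflops, which works out to roughly 163 gigaflops per processor at the quoted count. A quick sketch (the 98,000 figure is the article’s approximation, not Sequoia’s exact node count):

```python
# Back-of-the-envelope check of the Sequoia figures quoted above.
total_flops = 16_000e12            # "16,000 trillion calculations per second"
petaflops = total_flops / 1e15     # the same number expressed in petaflops
processors = 98_000                # "more than 98,000 processors" (approximate)
per_processor_gflops = total_flops / processors / 1e9

print(petaflops)                   # 16.0
print(round(per_processor_gflops, 1))
```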

EKA (supercomputer) – Wikipedia

EKA is a supercomputer built by the Computational Research Laboratories (a subsidiary of Tata Sons) with technical assistance and hardware provided by Hewlett-Packard.[6]

Eka means the number One in Sanskrit.[4]

EKA uses 14,352[2] cores based on Intel quad-core Xeon processors. The primary interconnect is InfiniBand 4x DDR. EKA occupies about 4,000 square feet (370 m²).[7] It was built using off-the-shelf components from Hewlett-Packard, Mellanox and Voltaire Ltd.[2] It was built in a short period of six weeks.[7]

At the time of its unveiling, it was the fourth-fastest supercomputer in the world and the fastest in Asia.[7] As of 16 September 2011, it is ranked at 58.[5]

IBM Blue Gene – Wikipedia

Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the PFLOPS (petaFLOPS) range, with low power consumption.

The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. Blue Gene systems have often led the TOP500[1] and Green500[2] rankings of the most powerful and most power efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]

As of 2015, IBM appears to have ended development of the Blue Gene family,[5] though no public announcement has been made. IBM’s continuing efforts in the supercomputer arena seem to be concentrated around OpenPOWER, using accelerators such as FPGAs and GPUs to combat the end of Moore’s law.[6]

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[7] The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T.J. Watson Research Center and led by William R. Pulleyblank.[8]

At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: The 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other; and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS.[1] It thereby overtook NEC’s Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL[9] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL Blue Gene/L installation held the first position in the TOP500 list for 3.5 years, until in June 2008 it was overtaken by IBM’s Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the 1 PetaFLOPS mark. The system was built at IBM’s plant in Rochester, Minnesota.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L achieved 207.3 TFLOPS on a quantum chemical application (Qbox).[10] At Supercomputing 2006,[11] Blue Gene/L was awarded the winning prize in all HPC Challenge Classes of awards.[12] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[13]

The name Blue Gene comes from what it was originally designed to do, help biologists understand the processes of protein folding and gene development.[14] “Blue” is a traditional moniker that IBM uses for many of its products and the company itself. The original Blue Gene design was renamed “Blue Gene/C” and eventually Cyclops64. The “L” in Blue Gene/L comes from “Light” as that design’s original name was “Blue Light”. The “P” version was designed to be a petascale design. “Q” is just the letter after “P”. There is no Blue Gene/R.[15]

The Blue Gene/L supercomputer was unique in the following aspects:[16]

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline double-precision Floating Point Unit (FPU), a cache sub-system with built-in DRAM controller and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.
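The 5.6 GFLOPS node figure follows directly from the clock rate, assuming the usual fused multiply-add accounting: each 440 core’s dual-pipeline FPU can retire two fused multiply-adds (four floating-point operations) per cycle. A quick sketch of that arithmetic:

```python
clock_hz = 700e6        # PowerPC 440 clock, per the description above
flops_per_cycle = 4     # dual-pipeline FPU: 2 fused multiply-adds = 4 flops/cycle
cores_per_node = 2      # two embedded processors per node ASIC
node_peak_gflops = clock_hz * flops_per_cycle * cores_per_node / 1e9
print(node_peak_gflops)   # 5.6
```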

Compute nodes were packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board. There were 32 node boards per cabinet/rack.[17] By the integration of all essential sub-systems on a single chip, and the use of low-power logic, each Compute or I/O node dissipated low power (about 17 watts, including DRAMs). This allowed aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits of electrical power supply and air cooling. The performance metrics, in terms of FLOPS per watt, FLOPS per m2 of floorspace and FLOPS per unit cost, allowed scaling up to very high performance. With so many nodes, component failures were inevitable. The system was able to electrically isolate faulty components, down to a granularity of half a rack (512 compute nodes), to allow the machine to continue to run.
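Those packaging numbers multiply out to the 1,024-node rack described above, and (with the 5.6 GFLOPS node peak) to roughly 5.7 TFLOPS per rack; a sketch:

```python
nodes_per_card = 2
cards_per_board = 16      # plus up to 2 I/O nodes per board, not counted here
boards_per_rack = 32
compute_nodes_per_rack = nodes_per_card * cards_per_board * boards_per_rack
rack_peak_tflops = compute_nodes_per_rack * 5.6 / 1000   # 5.6 GFLOPS per node
print(compute_nodes_per_rack)   # 1024
print(round(rack_peak_tflops, 2))
```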

Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which run the Linux operating system, provided communication to storage and external hosts via an Ethernet network. The I/O nodes handled filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provided access to any node for configuration, booting and diagnostics. To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2⁵ = 32 nodes. To run a program on Blue Gene/L, a partition of the computer had to be reserved first. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use.
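The partition-size rule (a power of two, with at least 32 nodes) is easy to express with a standard bit trick; a hypothetical validity check, not code from the actual Blue Gene control system:

```python
def is_valid_partition(nodes: int) -> bool:
    """Blue Gene/L partitions had to be a power of two with at least 32 nodes."""
    # n & (n - 1) == 0 holds exactly when n is a power of two (for n > 0).
    return nodes >= 32 and (nodes & (nodes - 1)) == 0

print(is_valid_partition(512))   # True: 2**9 nodes
print(is_valid_partition(48))    # False: not a power of two
print(is_valid_partition(16))    # False: below the 32-node minimum
```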

Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time on each node in co-processor mode, or one process per CPU in virtual node mode. Programmers needed to implement green threads in order to simulate local concurrency. Application development was usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby[18] and Python[19] have been ported to the compute nodes.

In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers and designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory’s Leadership Computing Facility.[20]

The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a “compute node”. A single compute node has a peak performance of 13.6 GFLOPS. 32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[21] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007-2008.[2]
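The quoted 13.6 GFLOPS per Blue Gene/P node again falls out of clock rate times flops per cycle, assuming the same fused multiply-add accounting as on Blue Gene/L:

```python
clock_hz = 850e6          # PowerPC 450 clock
flops_per_cycle = 4       # double-precision FPU: 2 fused multiply-adds per cycle
cores_per_node = 4
node_peak_gflops = clock_hz * flops_per_cycle * cores_per_node / 1e9
print(node_peak_gflops)   # 13.6
```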

The following is an incomplete list of Blue Gene/P installations. Per November 2009, the TOP500 list contained 15 Blue Gene/P installations of 2-racks (2048 nodes, 8192 processor cores, 23.86 TFLOPS Linpack) and larger.[1]

The third supercomputer design in the Blue Gene series, Blue Gene/Q has a peak performance of 20 Petaflops,[37] reaching LINPACK benchmarks performance of 17 Petaflops. Blue Gene/Q continues to expand and enhance the Blue Gene/L and /P architectures.

The Blue Gene/Q Compute chip is an 18-core chip. The 64-bit A2 processor cores are 4-way simultaneously multithreaded, and run at 1.6 GHz. Each processor core has a SIMD quad-vector double-precision floating point unit (IBM QPX). 16 processor cores are used for computing, and a 17th core for operating system assist functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is used as a redundant spare to increase manufacturing yield; the spared-out core is shut down in functional operation. The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations.[38] L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM’s copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core).[39]
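The 204.8 GFLOPS chip figure is consistent with the description above, assuming each of the 16 compute cores issues one 4-wide QPX fused multiply-add per cycle (8 flops) at 1.6 GHz; dividing by the quoted ~55 W also gives a chip-level efficiency a bit above the system-level Green500 numbers cited later:

```python
clock_hz = 1.6e9
flops_per_cycle = 4 * 2     # 4-wide QPX SIMD x 2 (fused multiply-add)
compute_cores = 16          # the 17th and 18th cores do not contribute to peak
chip_peak_gflops = clock_hz * flops_per_cycle * compute_cores / 1e9
chip_gflops_per_watt = chip_peak_gflops / 55   # ~55 W quoted above
print(chip_peak_gflops)     # 204.8
print(round(chip_gflops_per_watt, 2))
```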

A Q32[40] compute drawer contains 32 compute cards, each water cooled.[41] A “midplane” (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4x4x4x4x2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores and 16 TB RAM.[41]
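The torus dimensions multiply out to the node and core counts given above; a small check:

```python
from math import prod

torus_dims = (4, 4, 4, 4, 2)                 # 5D torus within a midplane
nodes_per_midplane = prod(torus_dims)        # 512 compute nodes
nodes_per_rack = 2 * nodes_per_midplane      # two midplanes per rack -> 1024
user_cores_per_rack = nodes_per_rack * 16    # 16 user cores per node -> 16,384
print(nodes_per_midplane, nodes_per_rack, user_cores_per_rack)
```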

Separate I/O drawers, placed at the top of a rack or in a separate rack, are air cooled and contain 8 compute cards and 8 PCIe expansion slots for Infiniband or 10 Gigabit Ethernet networking.[41]

At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2.1 GFLOPS/W.[2]

In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500 [3] and Green500.[2]

The following is an incomplete list of Blue Gene/Q installations. Per June 2012, the TOP500 list contained 20 Blue Gene/Q installations of 1/2-rack (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger.[1] At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 Green 500 list.[2]

Record-breaking science applications have been run on the BG/Q, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run,[61] while the Cardioid code,[62][63] which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOP/s (originally 11 PFLOP/s) on Sequoia, 72% of the machine’s nominal peak performance.[64]

ORNL Launches Summit Supercomputer | ORNL

OAK RIDGE, Tenn., June 8, 2018. The U.S. Department of Energy’s Oak Ridge National Laboratory today unveiled Summit as the world’s most powerful and smartest scientific supercomputer.

With a peak performance of 200,000 trillion calculations per second, or 200 petaflops, Summit will be eight times more powerful than ORNL’s previous top-ranked system, Titan. For certain scientific applications, Summit will also be capable of more than three billion billion mixed-precision calculations per second, or 3.3 exaops. Summit will provide unprecedented computing power for research in energy, advanced materials and artificial intelligence (AI), among other domains, enabling scientific discoveries that were previously impractical or impossible.

“Today's launch of the Summit supercomputer demonstrates the strength of American leadership in scientific innovation and technology development. It's going to have a profound impact in energy research, scientific discovery, economic competitiveness and national security,” said Secretary of Energy Rick Perry. “I am truly excited by the potential of Summit, as it moves the nation one step closer to the goal of delivering an exascale supercomputing system by 2021. Summit will empower scientists to address a wide range of new challenges, accelerate discovery, spur innovation and, above all, benefit the American people.”

The IBM AC922 system consists of 4,608 compute servers, each containing two 22-core IBM Power9 processors and six NVIDIA Tesla V100 graphics processing unit accelerators, interconnected with dual-rail Mellanox EDR 100Gb/s InfiniBand. Summit also possesses more than 10 petabytes of memory paired with fast, high-bandwidth pathways for efficient data movement. The combination of cutting-edge hardware and robust data subsystems marks an evolution of the hybrid CPU-GPU architecture successfully pioneered by the 27-petaflops Titan in 2012.
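The node counts above imply the machine's aggregate totals. A rough sketch, where the ~7.8 TFLOPS of FP64 per Tesla V100 (NVIDIA's published figure for the SXM2 part) is an assumption not stated in the press release, and CPU flops are ignored:

```python
# Aggregate hardware totals for the Summit configuration described above.
# Assumption: ~7.8 TFLOPS FP64 per Tesla V100 (SXM2); CPU flops ignored.
nodes = 4608
cpus = nodes * 2              # 9,216 Power9 processors
gpus = nodes * 6              # 27,648 Tesla V100 GPUs

gpu_peak_pflops = gpus * 7.8 / 1000
print(cpus, gpus, f"~{gpu_peak_pflops:.0f} PFLOPS")  # ~216 PFLOPS from GPUs,
# broadly consistent with the 200-petaflop peak quoted in the release
```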

ORNL researchers have figured out how to harness the power and intelligence of Summit's state-of-the-art architecture to successfully run the world's first exascale scientific calculation. A team of scientists led by ORNL's Dan Jacobson and Wayne Joubert has leveraged the intelligence of the machine to run a 1.88 exaops comparative genomics calculation relevant to research in bioenergy and human health. The mixed precision exaops calculation produced identical results to more time-consuming 64-bit calculations previously run on Titan.

“From its genesis 75 years ago, ORNL has a history and culture of solving large and difficult problems with national scope and impact,” ORNL Director Thomas Zacharia said. “ORNL scientists were among the scientific teams that achieved the first gigaflops calculations in 1988, the first teraflops calculations in 1998, the first petaflops calculations in 2008 and now the first exaops calculations in 2018. The pioneering research of ORNL scientists and engineers has played a pivotal role in our nation's history and continues to shape our future. We look forward to welcoming the scientific user community to Summit as we pursue another 75 years of leadership in science.”

In addition to scientific modeling and simulation, Summit offers unparalleled opportunities for the integration of AI and scientific discovery, enabling researchers to apply techniques like machine learning and deep learning to problems in human health, high-energy physics, materials discovery and other areas. Summit allows DOE and ORNL to respond to the White House Artificial Intelligence for America initiative.

“Summit takes accelerated computing to the next level, with more computing power, more memory, an enormous high-performance file system and fast data paths to tie it all together. That means researchers will be able to get more accurate results faster,” said Jeff Nichols, ORNL associate laboratory director for computing and computational sciences. “Summit's AI-optimized hardware also gives researchers an incredible platform for analyzing massive datasets and creating intelligent software to accelerate the pace of discovery.”

Summit moves the nation one step closer to the goal of developing and delivering a fully capable exascale computing ecosystem for broad scientific use by 2021.

Summit will be open to select projects this year while ORNL and IBM work through the acceptance process for the machine. In 2019, the bulk of access to the IBM system will go to research teams selected through DOE's Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program.

In anticipation of Summits launch, researchers have been preparing applications for its next-generation architecture, with many ready to make effective use of the system on day one. Among the early science projects slated to run on Summit:

Astrophysics

Exploding stars, known as supernovas, supply researchers with clues about how heavy elements, including the gold in jewelry and the iron in blood, seeded the universe.

The highly scalable FLASH code models this process at multiple scales, from the nuclear level to the large-scale hydrodynamics of a star's final moments. On Summit, FLASH will go much further than previously possible, simulating supernova scenarios several thousand times longer and tracking about 12 times more elements than past projects.

“It's at least a hundred times more computation than we've been able to do on earlier machines,” said ORNL computational astrophysicist Bronson Messer. “The sheer size of Summit will allow us to make very high-resolution models.”

Materials

Developing the next generation of materials, including compounds for energy storage, conversion and production, depends on subatomic understanding of material behavior. QMCPACK, a quantum Monte Carlo application, simulates these interactions using first-principles calculations.

Up to now, researchers have only been able to simulate tens of atoms because of QMCPACK's high computational cost. Summit, however, can support materials composed of hundreds of atoms, a jump that aids the search for a more practical superconductor: a material that can transmit electricity with no energy loss.
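Part of why Monte Carlo methods are so computationally hungry is that their statistical error shrinks only as 1/sqrt(N) in the number of samples, so each extra digit of accuracy costs roughly 100x more work. A generic Monte Carlo sketch illustrating the idea (estimating pi by random sampling, not QMCPACK's actual first-principles algorithm):

```python
import random

# Generic Monte Carlo integration: sample random points in the unit square
# and count how many fall inside the quarter circle. The estimate's error
# shrinks only as 1/sqrt(samples) — the root of Monte Carlo's high cost.
def estimate_pi(samples, seed=42):
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / samples

print(estimate_pi(100_000))  # close to 3.14159; more samples, less noise
```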

“Summit's large, on-node memory is very important for increasing the range of complexity in materials and physical phenomena,” said ORNL staff scientist Paul Kent. “Additionally, the much more powerful nodes are really going to help us extend the range of our simulations.”

Cancer Surveillance

One of the keys to combating cancer is developing tools that can automatically extract, analyze and sort existing health data to reveal previously hidden relationships between disease factors such as genes, biological markers and environment. Paired with unstructured data such as text-based reports and medical images, machine learning algorithms scaled on Summit will help supply medical researchers with a comprehensive view of the U.S. cancer population at a level of detail typically obtained only for clinical trial patients.

This cancer surveillance project is part of the CANcer Distributed Learning Environment, or CANDLE, a joint initiative between DOE and the National Cancer Institute.

“Essentially, we are training computers to read documents and abstract information using large volumes of data,” ORNL researcher Gina Tourassi said. “Summit enables us to explore much more complex models in a time-efficient way so we can identify the ones that are most effective.”

Systems Biology

Applying machine learning and AI to genetic and biomedical datasets offers the potential to accelerate understanding of human health and disease outcomes.

Using a mix of AI techniques on Summit, researchers will be able to identify patterns in the function, cooperation and evolution of human proteins and cellular systems. These patterns can collectively give rise to clinical phenotypes, observable traits of diseases such as Alzheimer's, heart disease or addiction, and inform the drug discovery process.

Through a strategic partnership project between ORNL and the U.S. Department of Veterans Affairs, researchers are combining clinical and genomic data with machine learning and Summits advanced architecture to understand the genetic factors that contribute to conditions such as opioid addiction.

“The complexity of humans as a biological system is incredible,” said ORNL computational biologist Dan Jacobson. “Summit is enabling a whole new range of science that was simply not possible before it arrived.”

Summit is part of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL. UT-Battelle manages ORNL for the Department of Energy's Office of Science, the single largest supporter of basic research in the physical sciences in the United States. DOE's Office of Science is working to address some of the most pressing challenges of our time. For more information, please visit http://science.energy.gov.

Image: https://www.ornl.gov/sites/default/files/2018-P01537.jpg

Caption: Oak Ridge National Laboratory launches Summit supercomputer.

Photos, b-roll and additional resources are available at http://olcf.ornl.gov/summit.

Access Summit Flickr Photos at https://flic.kr/s/aHsmmTwKLg.

Videos of Summit available at https://www.dropbox.com/sh/fy76ppz7cvjblia/AAC0m93xBWk4poM-rRwJbiZza?dl=0.

Continued here:

ORNL Launches Summit Supercomputer | ORNL

SCP-866 – SCP Foundation

Item #: SCP-866

Object Class: Euclid

Special Containment Procedures: SCP-866 is to be contained in situ in the HPC Center of the University of ████ in ████, ████. The floor containing SCP-866 is to be permanently sealed off to all but authorized SCP personnel. At least two SCP personnel are to monitor the diesel backup generators at all times, as a complete power failure could lead to unquantifiable loss of personnel and civilian casualties, unquantifiable loss of equipment, complete loss of acquired experimental data and, in the worst case, [DATA EXPUNGED]. Access to the input terminals is allowed only with permission of Level 4 staff. At least two guards are to be stationed in the room containing SCP-866 to prevent any individual from entering SCP-866 beyond the input terminals. Unauthorized access attempts are to be logged, but due to the location of containment, extreme measures should be avoided if possible.

Description: SCP-866 is a ████-series supercomputer constructed in 20██. Its anomalous properties were discovered when the system proved capable of running computation jobs with more processors than were physically available. Subsequent attempts to determine the reason for this behavior have failed, but have caused university employees to disappear. See Addendum 1.1a for details. Foundation operatives determined the system has non-Euclidean geometry in the computation node rack topology, possibly a polydimensional n-hypercube structure. This, however, accounts only for the speed of the anomalous computations, not for their occurrence. An attempt to disconnect SCP-866 from the power supply resulted in immediate [DATA EXPUNGED], resulting in displacements and disappearances, including the entire recovery team. See [REDACTED] for additional information. In situ containment measures have been devised.

Addendum 1: SCP-866 has been successfully used by Foundation staff for large-scale simulations and computations. At this time, the limit, if any, to SCP-866 computational capacity is not known. Access to the machine can be made remotely by anyone possessing a student or staff account for the University System. Addition of a [REDACTED] prevents non-Foundation access.

Addendum 1.1a: ██ of the university employees have since been discovered. Prof. ████ was found in the building's basement by janitorial staff. Analysis of the remains has shown that his death occurred roughly at the same time as the attempt to disconnect SCP-866 from the power supply. He was found embe[REDACTED]oom wall. The position of the body suggests Prof. ████ was initially alive while in the basement; the words "[illegible] [illegible] died to a rounding error" were written in his own blood. Radar scans of the building's concrete walls are ongoing, but have failed to find anything of note. Research assistant Dr. ████ has been found at Lagrangian point L3 through an unrelated observation regarding [REDACTED].

Addendum 2: An analysis of currently running jobs shows that less than 5% of tasks were submitted by Foundation personnel. This fraction could not be increased by submitting more jobs, suggesting a non-linear relation between job size and machine resources. Attempts to identify the nature of the other jobs have so far proven unsuccessful. The largest jobs observed to date, still running, are the "TSTWRLD1" through "TSTWRLD4" series submitted by "ao000002", each taking 20% of total machine resources. Further analysis required.

Addendum 3: Log recovered after the failed attempt to disconnect SCP-866 from the power supply.

Addendum 4: Investigation Log of TSTWRLD2 program

Update: Activity logs have recorded the following output:

Further investigation required. Priority [REDACTED].

See the article here:

SCP-866 – SCP Foundation

Supercomputer – Simple English Wikipedia, the free …

A supercomputer is a computer with great speed and memory. This kind of computer can do jobs faster than any other computer of its generation. They are usually thousands of times faster than ordinary personal computers made at that time. Supercomputers can do arithmetic jobs very fast, so they are used for weather forecasting, code-breaking, genetic analysis and other jobs that need many calculations. When new computers of all classes become more powerful, new ordinary computers are made with powers that only supercomputers had in the past, while new supercomputers continue to outclass them.

Electrical engineers make supercomputers that link many thousands of microprocessors.

Supercomputer types include: shared memory, distributed memory and array. Shared-memory supercomputers are built using parallel computing and pipelining concepts. Distributed-memory supercomputers consist of many (about 100~10000) nodes. The CRAY series from Cray Research, the Fujitsu VP 2400/40 and the NEC SX-3 are shared-memory types. The nCube 3, iPSC/860, AP 1000, NCR 3700, Paragon XP/S and CM-5 are distributed-memory types.
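The shared-memory vs. distributed-memory distinction can be illustrated with a toy example. A minimal Python sketch (stand-ins for real supercomputer code, not how these machines are actually programmed): shared-memory workers update one common variable behind a lock, while distributed-memory workers keep local data and exchange messages:

```python
from multiprocessing import Process, Queue, Value

def shared_worker(total, chunk):
    # Shared-memory model: all workers see the same variable, so updates
    # must be serialized with a lock.
    with total.get_lock():
        total.value += sum(chunk)

def distributed_worker(queue, chunk):
    # Distributed-memory model: no shared state; each worker sums its own
    # local data and sends the partial result as a message.
    queue.put(sum(chunk))

if __name__ == "__main__":
    data = list(range(1, 101))
    chunks = [data[:50], data[50:]]

    total = Value("i", 0)
    procs = [Process(target=shared_worker, args=(total, c)) for c in chunks]
    for p in procs: p.start()
    for p in procs: p.join()
    print("shared-memory sum:", total.value)       # 5050

    q = Queue()
    procs = [Process(target=distributed_worker, args=(q, c)) for c in chunks]
    for p in procs: p.start()
    partial = sum(q.get() for _ in chunks)
    for p in procs: p.join()
    print("distributed-memory sum:", partial)      # 5050
```

Both models compute the same answer; the difference is whether workers coordinate through one address space or through explicit messages, which is exactly the architectural split described above.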

An array-type computer named ILLIAC started working in 1972. Later, the CF-11, CM-2, and the MasPar MP-2 (which is also an array type) were developed. Supercomputers that use physically separated memory as one shared memory include the T3D, KSR1, and Tera Computer.

Organizations

Centers

Read the rest here:

Supercomputer – Simple English Wikipedia, the free …

Home | TOP500 Supercomputer Sites

Hyperion Research Invites Submissions for HPC Innovation Excellence Awards

ST. PAUL, Minn., March 5, 2019: Hyperion Research today invited members of the worldwide high performance computing (HPC) community to submit entries for the next round of HPC Innovation Excellence Awards. Awards will be presented during Hyperion Research's popular HPC market update breakfast that happens each year during the ISC European supercomputing conference in […]

The post Hyperion Research Invites Submissions for HPC Innovation Excellence Awards appeared first on HPCwire.

SUNNYVALE, Calif. & YOKNEAM, Israel, Mar. 5, 2019: Mellanox Technologies, Ltd., a leading supplier of optical transceivers and high-performance, end-to-end smart interconnect solutions for data center servers and storage systems, today announced the introduction of new 100G, 200G, and 400G technologies and will be demonstrating these at the Optical Fiber Conference (OFC 2019) in […]

The post Mellanox Showcases Live System Demonstrations of LinkX 200G & 400G Cables & Transceivers appeared first on HPCwire.

When it comes to making swift pivots to keep pace with the newest architectural innovations, organizations like weather and climate prediction-focused NOAA have major constraints.

NOAA Faces Winds of Architectural Change was written by Nicole Hemsoth.

The Dell EMC Community Meeting has published their preliminary speaker agenda. The event takes place March 25-27 in Austin, Texas. The Dell HPC Community is a worldwide technical forum that facilitates the exchange of ideas among researchers, computer scientists, executives, developers, and engineers and promotes the advancement of innovative, powerful HPC solutions. The vision of the […]

The post Agenda Posted for Dell EMC Community Event in Austin appeared first on insideHPC.

March 5, 2019: Federal cloud security expert Martin Rieger joins Penguin Computing, a leading provider of high-performance computing (HPC), artificial intelligence (AI), enterprise data center and cloud solutions, as Information Systems Security Manager (ISSM), helping to expand the cyber security and risk management capabilities Penguin Computing is able to bring to its government clients. In his role […]

The post Penguin Computing Appoints Martin Rieger as Information Systems Security Manager Focused on Federal Cloud Services appeared first on HPCwire.

In this video, Gilad Shainer from the InfiniBand Trade Association describes how InfiniBand offers the optimal interconnect technology for AI, HPC, and exascale. "Through AI, you need the biggest pipes in order to move those giant amounts of data in order to create those AI software algorithms. That's one thing. Latency is important because you need to drive things faster. RDMA is one of the key technologies that enables increasing the efficiency of moving data, reducing CPU overhead. And by the way, now, all of the AI frameworks that exist out there support RDMA as a default element within the framework itself."

The post Video: Why InfiniBand is the Way Forward for AI and Exascale appeared first on insideHPC.

When Accenture Federal Services researched how current AI technologies could be used by the U.S. federal government, Accenture documented nearly 100 use cases for AI adoption. Artificial intelligence is making a difference to government right now. For more information on how to get involved in this important and growing sector, take advantage of the resources outlined in this excerpt from an insideHPC Guide.

The post Exploring Today's AI Resources: A Portal to a Growing Sector appeared first on insideHPC.

A group of researchers at Sandia National Laboratories have developed a tool that can cross-train standard convolutional neural networks (CNN) to a spiking neural model that can be used on neuromorphic processors.

One Step Closer to Deep Learning on Neuromorphic Hardware was written by Michael Feldman.

The Air Force Research Laboratory (AFRL) has officially inaugurated its four newest supercomputers in a ribbon-cutting ceremony at the AFRL DoD Supercomputing Resource Center (DSRC).

U.S. Air Force Adds to Supercomputer Arsenal was written by Michael Feldman.

See original here:

Home | TOP500 Supercomputer Sites

Home | Alabama Supercomputer Authority

The Alabama Supercomputer Authority (ASA) is a state-funded corporation founded in 1989 for the purpose of planning, acquiring, developing, administering and operating a statewide supercomputer and related telecommunication systems.

In addition to High Performance Computing, and with the growth of the internet, ASA developed the Alabama Research and Education Network (AREN), which offers education and research clients in Alabama internet access and other related network services. ASA has further expanded its offerings with state-of-the-art application development services that include custom website design with content management system (CMS) development and custom web-based applications for data-mining, reporting, and other client needs.

Continue reading here:

Home | Alabama Supercomputer Authority

EKA (supercomputer) – Wikipedia

EKA is a supercomputer built by the Computational Research Laboratories (a subsidiary of Tata Sons) with technical assistance and hardware provided by Hewlett-Packard.[6]

Eka means "one" in Sanskrit.[4]

EKA uses 14,352[2] cores based on quad-core Intel Xeon processors. The primary interconnect is InfiniBand 4x DDR. EKA occupies about 4,000 square feet (370 m2) of floor space.[7] It was built using off-the-shelf components from Hewlett-Packard, Mellanox and Voltaire Ltd.[2] It was built in just six weeks.[7]

At the time of its unveiling, it was the fourth-fastest supercomputer in the world and the fastest in Asia.[7] As of 16 September 2011, it was ranked 58th.[5]

Read the original here:

EKA (supercomputer) – Wikipedia

Just 19 Percent of Americans Trust Self-Driving Cars With Kids

A new survey by AAA shows that most Americans distrust self-driving cars. In the past two years, public trust in the emerging technology has gone down.

Poor Turnout

While tech companies like Waymo, Uber, and Tesla race to be the first to build a fully-autonomous vehicle, the public is left eating their dust.

About 71 percent of Americans say that they don’t trust self-driving cars, according to a new American Automobile Association (AAA) survey. That’s roughly the same percentage as in last year’s survey, but eight points higher than in 2017, according to Bloomberg. And just 19 percent say they’d put their children or family members into an autonomous vehicle.

Overall, the data is a striking sign of public distrust of self-driving cars.

Track Record

Autonomous vehicles, unlike some other emerging technologies, have suffered very public setbacks, including when an Uber vehicle struck and killed a pedestrian a year ago.

“It’s possible that the sustained level of fear is rooted in a heightened focus, whether good or bad, on incidents involving these types of vehicles,” said AAA director of automotive engineering Greg Brannon in a statement obtained by Bloomberg. “Also it could simply be due to a fear of the unknown.”

Uphill Battle

The AAA survey found that Americans are more accepting of autonomous vehicle tech in limited-use cases. For example, 53 percent of survey respondents were okay with self-driving trams or shuttles being used in areas like theme parks, while 44 percent accepted the idea of autonomous food-delivery bots.

Self-driving car companies are currently engaging in public relations efforts to earn people’s trust, Bloomberg reports. But if these AAA numbers are any indication, there’s a long way to go.

READ MORE: Americans Still Fear Self-Driving Cars [Bloomberg]

More on autonomous vehicles: Exclusive: A Waymo One Rider’s Experiences Highlight Autonomous Rideshare’s Shortcomings

The post Just 19 Percent of Americans Trust Self-Driving Cars With Kids appeared first on Futurism.

View original post here:

Just 19 Percent of Americans Trust Self-Driving Cars With Kids

Elon Musk: $47,000 Model Y SUV “Will Ride Like a Sports Car”

A Familiar Car

First, it was supposed to feature Model-X-style “falcon wing” doors, and then it didn’t. It was supposed to be built in the Shanghai factory, but that didn’t work out either.

Tesla finally unveiled its fifth production car, the Model Y, at its design studio outside of Los Angeles Thursday evening.

“It has the functionality of an SUV, but it will ride like a sports car,” Tesla CEO Elon Musk said during the event. “So this thing will be really tight on corners.”

Bigger than the 3, Smaller Than the X

Yes, acceleration is still zippy: zero to 60 in 3.5 seconds.

But the vehicle is less than revolutionary. It’s arguably the company’s second crossover sport utility vehicle, after the Model X, and it borrows heavily from the company’s successful Model 3. In fact, 75 percent of its parts are the same, according to CEO Elon Musk.

The rear of the Y is slightly elevated for a roomier cargo space. A long-range model will feature seven seats, just like the Model X, despite being slightly smaller. Range: still 300 miles with the Long Range battery pack, thanks to its aerodynamic shape.

It will also be “feature complete,” according to Musk, meaning the Model Y will one day be capable of the “full self-driving” that he says “will be able to do basically anything just with software upgrades.”

10 Percent Cheaper

As expected, the Model Y is ten percent bigger and costs roughly ten percent more than the Model 3: the first Model Y — the Long Range model — will be released in the fall of 2020 and will sell for $47,000. A dual-motor all-wheel drive version and a performance version will sell for $51,000 and $60,000, respectively.

If you want to save a buck and get the ten-percent-cheaper-than-the-Model-3 version, you’ll have to wait: a Standard Range (230 miles) model will go on sale in 2021 for just $39,000.

Overall, the Model Y seems like a compromise: it’s not a radical shift, but it seems carefully designed to land with a certain type of consumer — and, if Musk is to be believed, without sacrificing Tesla’s carefully-cultivated “cool factor.”

Investors seemed slightly underwhelmed, too — the company’s stock reportedly slid up to five percent after the announcement.

READ MORE:  Tesla unveils Model Y electric SUV with 300 miles range and 7-seats [Electrek]

More on the Model Y: Elon Musk: Tesla Will Unveil Model Y Next Week

The post Elon Musk: $47,000 Model Y SUV “Will Ride Like a Sports Car” appeared first on Futurism.

Read more here:

Elon Musk: $47,000 Model Y SUV “Will Ride Like a Sports Car”

Samsung Is Working on Phone With “Invisible” Camera Behind Screen

A Samsung exec has shared new details on the company's efforts to create a full-screen phone, one with the camera embedded beneath the display.

Punch It

Just last month, South Korean tech giant Samsung unveiled the Galaxy S10, a phone with just a single hole punched in the screen to accommodate its front-facing camera.

On Thursday, a Samsung exec shared new details on the company’s intentions to create a “perfect full-screen” phone, with an “invisible” camera behind the screen to eliminate the need for any visible holes or sensors — confirming that one of the biggest players in tech sees edge-to-edge screens as the future of mobile devices.

Hidden Tech

During a press briefing covered by Yonhap News Agency, Samsung’s Mobile Communication R&D Group Display Vice President Yang Byung-duk said the company’s goal is to create a phone with a screen that covers the entire front of the device — but consumers shouldn’t expect it in the immediate future.

“Though it wouldn’t be possible to make (a full-screen smartphone) in the next 1-2 years,” Yang said, “the technology can move forward to the point where the camera hole will be invisible, while not affecting the camera’s function in any way.”

Quest for Perfection

This isn’t Samsung’s first mention of an uninterrupted full-screen phone — as pointed out by The Verge, the company discussed its ambitions to put the front-facing camera under a future device’s screen during a presentation in October.

That presentation included a few additional details on how the camera in a full-screen phone would work.

Essentially, the entire screen would serve as a display whenever the front-facing camera wasn’t in use. When in use, however, the screen would become transparent, allowing the camera to see through so you could snap the perfect selfie — and based on Yang’s comments, that new innovation could be just a few years away.

READ MORE: Samsung Seeks Shift to Full Screen in New Smartphones [Yonhap News Agency]

More on Samsung: Samsung Just Revealed a $1,980 Folding Smartphone

The post Samsung Is Working on Phone With “Invisible” Camera Behind Screen appeared first on Futurism.

More:

Samsung Is Working on Phone With “Invisible” Camera Behind Screen

Special Announcement: Futurism Media and Singularity University

Futurism acquired by Singularity University

So, Readers –

As always, we’ve got some news about the future. Except this time, it’s about us.

We’re about to enter the next chapter of Futurism, one that will usher in a new era for this site. It’ll come with new ways we’ll be able to deliver on everything you’ve grown to read, watch, subscribe to, and love about what we do here. And also, more in volume of what we do, with larger ambitions, and ultimately, a higher level of quality with which we’re able to bring those ambitions to fruition.

As of today, Futurism Media is proud to announce that we’re joining operations with Singularity University. In other words: They bought us, they own us, and quite frankly, we’re excited about the deal.

It’s an excitement and an occasion we share in with you, our community of readers — aspiring and working technologists, scientists, engineers, academics, and fans, who carried us to where we are, who helped make this independent media company what it is today. We’ve always been humbled by your support, and we’ve worked to reciprocate it by publishing one of the most crucial independent technology and science digital digests, every day, full stop.

What this changes for you? Nothing. Really. Except: More of what you’ve come to count on Futurism.com to deliver every time you’ve read our stories, opened our emails, swiped up on our ‘Gram, watched our videos, dropped in on our events, clicked through a Byte, and so on. This partnership represents the sum total of the work you’ve engaged with, and the start of a new chapter in which we’ll be able to deliver on more of the above.

That means increased coverage of the emergent, cutting-edge innovation and scientific developments changing the world, and the key characters and narratives shaping them (or being shaped by them). It means an expanded, in-depth feature publishing program, arriving this Spring (it’s rad, and it’s gonna blow your socks off). It means more breaking news reporting and analysis. It means original media products you haven’t seen from us before — new verticals, microsites, other ways for you to get in the mix with our coverage. And yes, by working in concert with Singularity University, we’re going to have a pretty decent competitive advantage: Direct access to the characters and personas shaping our future, the people, ideas, and innovations right at the frontier of exponential growth technologies. Our branded content team, Futurism Creative, will also continue to produce guideline-abiding, cutting-edge, thoughtful and engaging content for our partners, and for the partners of SU, too. And finally, our Futurism Studios division will continue to push the envelope of feature-length narrative storytelling of the science fiction (and science fact) stories of that future.

Will this change our journalism? Not in the slightest. We’ll still be operating as an independent, objective news outlet, without interference from our partners, who will continue to hold us to the same ethics and accountability standards we’ve held ourselves to these last few years. There might be more appearances from the folks at SU in our work (not that SU’s extensive network of notable alumni and board members hasn’t made appearances around these parts before), but by no means will SU be shoehorning itself into what we do here.

Yet: Where the opportunity exists, we’ll absolutely seize on the chance to co-create and catalyze action together to shape the technology and science stories on the horizon, to say nothing of that future itself. We’ll continue to make quality the primary concern — and they’re here to support that mandate, and augment this team with additional resources to accomplish it. If even the appearance of a conflict presents itself, as always, we’ll default to disclosure. But it’d be absurd of us not to take advantage of the immense base of knowledge our new partners in Mountain View have on offer (an apt comparison here would be, say, Harvard Business Review to H.B.S. or M.I.T. and our contemporaries at the MIT Technology Review).

We’ve been circling this partnership for a while; they, fans of ours, and us, fans of theirs. The original mandate of Futurism as written by our C.E.O. Alex Klokus was to increase the rate of human adaptability towards the future through delivering on the news of where that future is headed. Singularity University concerns itself with educating the world on the exponential growth technologies changing our lives. It’s a perfect merging of interests. Where exponential growth technologies are concerned: One only need look as far as the way online advertising and social platforms changed the economics of media to see this. To find a home with a growing institution that will prove increasingly vital to the growing global community they’ve already established in spades is the best possible outcome. And no, we didn’t get crazy-rich or anything. But we did galvanize the future (and all its possibilities) for everyone at this company, and our ability to keep serving you, our readers.

We’re immensely proud of the scrappy, tight team here, and especially of you, our community of readers and partners we’ve grown with these last few years. We’re proud of the product we’ve created, especially last year, when we steered away from reliance on social media platforms for an audience and reconfigured our editorial strategy around driving you directly to Futurism.com daily, prioritizing quality, topicality, reliability, and on-site presentation (shocker: it worked). Now, we’re proud to be able to do more, better, of what we’ve always done here:

Tell the stories of tomorrow, today. On behalf of the entire Brooklyn-based Futurism team, thanks for being along for the ride so far, and on behalf of the new Futurism x Singularity University family, here’s to more of where that came from.

The future, as ever, is looking bright. We can’t wait to tell you about it.

– Foster Kamer
Director of Content

James Del
Publisher

Sarah Marquart
Director of Strategic Operations

Geoff Clark
President of Futurism Studios

The post Special Announcement: Futurism Media and Singularity University appeared first on Futurism.

Visit link:

Special Announcement: Futurism Media and Singularity University