Apple announces a new MacBook Pro and says it has finally fixed the broken keyboard problem – CNBC

The MacBook Pro's keys are deeper and have more travel, so they feel better than the shallow keys on earlier models.

Todd Haselton | CNBC

Apple announced on Wednesday a 16-inch MacBook Pro that replaces the company's high-end 15-inch MacBook Pro laptops.

But here's what matters most: Apple completely redesigned the keyboard for the first time since 2015.

And that's big news because previous MacBook keyboards that used a so-called butterfly switch were prone to lots of problems, like jamming keys and endless typos. On a super-nice and super-expensive computer like the MacBook Pro, the keyboard was often just a sore point in an otherwise excellent laptop. Assuming that the keyboard problems have been fixed and that the new design finds its way to other MacBook models, the company's laptop woes could finally be over.

Here's what you need to know about the new 16-inch MacBook Pro, which starts at $2,399.99 and will be available for preorder Wednesday. It will be in stores by the end of the week.

Apple's 16-inch MacBook Pro has a brand new keyboard, and it's nice.

Todd Haselton | CNBC

Apple was panned by industry experts and fans alike for spending four years making a MacBook keyboard that malfunctioned for many customers. Well-known blogger and Apple pundit John Gruber said in March that the previous generation keyboards were "the worst products in Apple history" and that they were "doing lasting harm to the reputation of the MacBook brand."

The Wall Street Journal's Joanna Stern wrote a review of the third generation butterfly keyboard in March, which was supposed to fix some of the issues in earlier versions. It was titled "Appl still hasn't fixd its MacBook kyboad Problm." Her column prompted Apple to issue an apology.

You get the idea.

With the 16-inch MacBook Pro, Apple has a new scissor-style design that moves back toward its previous keyboards. In my early tests, it's pretty great. But problems with the earlier keyboards didn't always crop up immediately. For example, it took a few weeks for the keyboard to break on Apple's redesigned MacBook Air that I bought last year. Only time will tell.

Apple's 16-inch MacBook Pro

Todd Haselton | CNBC

Still, Apple is standing by this redesign. The company claims it's the best keyboard it has ever made for a MacBook. It used the Magic Keyboard for desktop Macs, which people love, as the inspiration for the new design.

The keys have more travel, meaning you can feel more when you push down on them. Earlier models felt like you were tapping on a flat surface at times. Also, Apple put rubber domes under the keys that help hold them in place and keep them sturdy. And it relied on user studies and testing to build a keyboard people liked.

Apple's 16-inch MacBook Pro

Todd Haselton | CNBC

Finally, Apple added a physical escape key back to the MacBook Pro keyboard by shortening the "Touch Bar" at the top, which is a touchscreen that lets you access shortcuts inside apps and other things. I know. It seems silly the escape key was removed in the first place.

Apple's 16-inch MacBook Pro

Todd Haselton | CNBC

There's a lot more to the story here. Apple's MacBook Pros cater to people who need a lot of power and really good screens for things like photo editing and video editing. The display is really important to those folks, so Apple made it larger while keeping the device a similar size to the 15-inch MacBook Pro. That means the bezels on the side of the screen are 34% smaller than before. It's a sharp screen, it gets nice and bright and it's generally just gorgeous. Apple said it's the best display in any MacBook ever. Tough to disagree.

Apple's 16-inch MacBook Pro

Todd Haselton | CNBC

Apple added new ninth-generation Intel Core processors and upgraded the graphics to the latest AMD 5300 and 5500 chips, depending on the model you buy. Translation: it's faster at the kind of work artistic professionals do.

Apple also increased the power while extending battery life, which is impressive. This year's model gets 11 hours of battery life, an hour longer than last year's MacBook Pro. It's definitely heavy at 4.3 pounds, but only 0.3 pounds heavier than last year's model. I prefer my less powerful MacBook Air, which is a heck of a lot lighter at 2.75 pounds.

It's big!

Todd Haselton | CNBC

Apple is also including a faster USB-C charger that can juice up the 16-inch MacBook Pro in just 2.5 hours. That's pretty good given the size of the battery. It means you won't need to sit at an outlet for half a day just to get a full charge.

Aside from the keyboard, my favorite feature is the new speakers. They sound better than anything I've ever heard out of a laptop before. I demoed them while listening to Miles Davis and loved how it felt like I was sitting inside the music, thanks to Dolby Atmos support. It's almost as if you're sitting in front of a really good speaker rather than a laptop. It's weird to hear something this good. It worked really well for surround sound when I watched a clip of the Apple TV+ show "Dickinson," too.

The internal microphone also was improved. Apple claims it's good enough to record a podcast without using any accessories, but I haven't tested that yet.

Watching the Apple TV+ show Dickinson on the new 16-inch MacBook Pro.

Todd Haselton | CNBC

Some other things didn't change. The trackpad is the same (it's still really good), you still get four USB-C ports and the FaceTime cameras haven't changed from the last model. That's kind of a bummer, since I still wish Apple would just add a really high-quality camera for FaceTime.

The screen color accuracy is great for photo editing. Here: my dog Mabel.

Todd Haselton | CNBC

Anyway, despite all of this, it definitely feels like a huge and heavy laptop compared with my MacBook Air. It's for professionals who need this sort of power and who are willing to pay a lot for a MacBook Pro. It is not for people like me who like to travel light. I assume, though Apple hasn't confirmed, this keyboard will eventually make it to the MacBook Air and other MacBook models.

Apple's 16-inch MacBook Pro

Todd Haselton | CNBC

Apple will sell two models of the 16-inch MacBook Pro that you can upgrade with additional storage, which ultimately increases the price drastically. For example, you can add up to 8TB of storage in it, which is more storage than any other laptop I've ever come across.

The starting configuration costs $2,399.99 and comes with a 6-core Intel processor, 512GB of storage and AMD Radeon 5300 graphics. A $2,799 model will ship with an 8-core processor, 1 terabyte of storage and AMD 5500 graphics. They're available to order on Wednesday and will be in stores by the end of the week.

Read the original here:

Apple announces a new MacBook Pro and says it has finally fixed the broken keyboard problem - CNBC

Nvidia’s Jetson Xavier NX Is ‘World’s Smallest Supercomputer’ For AI – CRN: The Biggest Tech News For Partners And The IT Channel

Nvidia said its new Jetson Xavier NX is the world's smallest supercomputer for artificial intelligence applications at the edge, giving robotics and embedded computing companies the ability to deliver "server-class performance" in a 10-watt power envelope.

The $399 Jetson Xavier NX, revealed Wednesday during the GPU powerhouse's Nvidia GTC conference in Washington, D.C., is the smallest form factor in Nvidia's Jetson computing board lineup, measuring just 70-by-45 millimeters, roughly as tall as a Lego figurine. The company also announced that it has achieved the fastest results across five benchmarks in the MLPerf Inference Suite.

[Related: Nvidia Reveals EGX Edge Supercomputing Platform For AI, IoT And 5G]

The computing board comes with 384 CUDA cores and 48 tensor cores, allowing it to deliver up to 21 Tera Operations Per Second, or TOPS, a common way to measure performance in high-performance systems-on-chip. Thanks to Nvidia's engineering and design, the Jetson Xavier NX provides up to 15 times higher performance than its Jetson TX2 in a smaller form factor with the same power draw.
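As a rough sanity check on those figures, here is a minimal sketch of the efficiency math using only the numbers quoted in the announcement; the TX2 baseline is inferred from the claimed 15x speedup, so treat it as an approximation:

```python
# Back-of-the-envelope efficiency math for the Jetson Xavier NX,
# using only the figures quoted in Nvidia's announcement.
NX_TOPS = 21          # peak tera-operations per second (quoted)
NX_WATTS = 15         # upper power envelope; a 10-watt mode is also offered
SPEEDUP_VS_TX2 = 15   # "up to 15 times higher performance" than Jetson TX2

tx2_tops = NX_TOPS / SPEEDUP_VS_TX2  # implied TX2 throughput at the same power
print(f"Xavier NX: {NX_TOPS / NX_WATTS:.1f} TOPS per watt at {NX_WATTS} W")
print(f"Implied Jetson TX2 baseline: {tx2_tops:.1f} TOPS")
```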

The Santa Clara, Calif.-based company said the Jetson Xavier NX is made for performance-hungry devices that are "constrained by size, weight, power budgets or cost," such as commercial robots, drones, high-resolution factory sensors, portable medical devices and industrial IoT systems.

"AI has become the enabling technology for modern robotics and embedded devices that will transform industries," Deepu Talla, vice president and general manager of edge computing at Nvidia, said in a statement. "Many of these devices, based on small form factors and lower power, were constrained from adding more AI features. Jetson Xavier NX lets our customers and partners dramatically increase AI capabilities without increasing the size or power consumption of the device."

Like Nvidia's other Jetson products, Jetson Xavier NX runs on the chipmaker's CUDA-X AI software architecture that the company said can speed up development and lower costs. It's also supported by the company's JetPack software development kit, which provides a "complete AI software stack."

Lee Ritholz, director and chief architect of applied AI at Lockheed Martin, said that Nvidia's embedded Jetson products help accelerate research, development and deployment of AI solutions on Lockheed Martin's platforms.

"With Jetson Xavier NXs exceptional performance, small form factor and low power, we will be able to do more processing in real time at the edge than ever before," he said in a statement.

In addition to its 384 CUDA cores and 48 Tensor Cores, the Jetson Xavier NX comes with Nvidia's Deep Learning Accelerator, up to a six-core Carmel Arm CPU, up to six CSI cameras, 12 lanes for the MIPI CSI-2 camera serial interface, 8 GB of 128-bit LPDDR4x memory, gigabit Ethernet and Ubuntu-based Linux. It comes with options for 10-watt and 15-watt power envelopes.

See the article here:

Nvidia's Jetson Xavier NX Is 'World's Smallest Supercomputer' For AI - CRN: The Biggest Tech News For Partners And The IT Channel

16,000 core supercomputer completes best galaxy simulation video ever – TweakTown

The most detailed large-scale simulation yet has been released, showing the universe from just after the Big Bang all the way to the present day.

Scientists have been struggling with the creation of accurate simulations of cosmic-level events due to the limitations of computing power. The computational limitations forced scientists to choose between large-scale designs and fine detail. But now, scientists from Germany and the United States have completed and released the most detailed large-scale simulation of a galaxy forming.

The simulation is called TNG50 and is a state-of-the-art simulation of the formation of a galaxy similar in mass to our neighboring galaxy Andromeda. The video shows the formation of a single massive galaxy, with cosmic gas becoming denser and denser over the course of billions of years. The Hazel Hen supercomputer, located in Stuttgart, created the simulation over the course of a year using 16,000 computational cores. The result is an extremely detailed cosmic visualization that spans 230 million light-years in diameter and contains more than 20 billion particles representing dark matter, stars, cosmic gas, magnetic fields, and supermassive black holes.


See the original post here:

16,000 core supercomputer completes best galaxy simulation video ever - TweakTown

Simulated BCS rankings have 2 SEC teams in top 4 after Week 11 – Saturday Down South

Dave Holcomb | 16 hours ago

Towards the end of the BCS era, everyone hated the computer system that decided who should play for the national championship.

But what if we combined the two eras? What if we still had the College Football Playoff with two teams vying for a championship at the end of the year, but the teams were determined by a supercomputer as opposed to a committee?

BCSKnowHow.com looks at that every week. On Tuesday, the website tweeted how the BCS would rank the potential playoff teams following Week 11.

On those rankings, two SEC teams, LSU and Alabama, were ranked in the top 4. Joining them in that group were Ohio State and Clemson.

If the committee's rankings Tuesday night mirror these, there might be a mutiny outside the SEC. Alabama was No. 3 last week and lost at home for the first time in more than four years. Granted, it was to the top team in the country, but the Crimson Tide, at one point, trailed by 20.

Regardless, in the computer rankings, Alabama would be ranked just ahead of Oregon and Georgia. Minnesota would slot in behind them at No. 7, making one of the biggest jumps of the week, moving just in front of Penn State at No. 8.

Here are the entire playoff rankings from BCS Know How as of Tuesday:

Continued here:

Simulated BCS rankings have 2 SEC teams in top 4 after Week 11 - Saturday Down South

Solving a Riddle That Would Provide the World With Entirely Clean, Renewable Energy – SciTechDaily

Scientists from Trinity College Dublin have taken a giant stride towards solving a riddle that would provide the world with entirely renewable, clean energy from which water would be the only waste product.

Reducing humanity's carbon dioxide (CO2) emissions is arguably the greatest challenge facing 21st-century civilization, especially given the ever-increasing global population and the heightened energy demands that come with it.

The Trinity team behind the latest breakthrough combined chemistry smarts with very powerful computers to find one of the holy grails of catalysis.

One beacon of hope is the idea that we could use renewable electricity to split water (H2O) to produce energy-rich hydrogen (H2), which could then be stored and used in fuel cells. This is an especially interesting prospect in a situation where wind and solar energy sources produce electricity to split water, as this would allow us to store energy for use when those renewable sources are not available.

The essential problem, however, is that water is very stable and requires a great deal of energy to break up. A particularly major hurdle to clear is the energy or overpotential associated with the production of oxygen, which is the bottleneck reaction in splitting water to produce H2.
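For readers who want the underlying chemistry, these are the standard half-reactions involved, spelled out here for reference since the article itself does not show them:

```latex
% Standard water-splitting half-reactions (added for reference):
\begin{align*}
\text{Anode (oxygen evolution, the bottleneck):}\quad
  2\,\mathrm{H_2O} &\rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode (hydrogen evolution):}\quad
  4\,\mathrm{H^+} + 4\,e^- &\rightarrow 2\,\mathrm{H_2} \\
\text{Overall:}\quad
  2\,\mathrm{H_2O} &\rightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
% The thermodynamic minimum is 1.23 V; the overpotential \eta is the extra
% voltage a real catalyst needs on top of that: \eta = E_{\mathrm{applied}} - 1.23\,\mathrm{V}.
```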

Although certain elements are effective at splitting water, such as Ruthenium or Iridium (two of the so-called noble metals of the periodic table), these are prohibitively expensive for commercialization. Other, cheaper options tend to suffer in terms of their efficiency and/or their robustness. In fact, at present, nobody has discovered catalysts that are cost-effective, highly active and robust for significant periods of time.

So, how do you solve such a riddle? Stop before you imagine lab coats, glasses, beakers, and funny smells; this work was done entirely through a computer.

By bringing together chemists and theoretical physicists, the Trinity team behind the latest breakthrough combined chemistry smarts with very powerful computers to find one of the holy grails of catalysis.

The team, led by Professor Max García-Melchor, made a crucial discovery when investigating molecules that produce oxygen: Science had been underestimating the activity of some of the more reactive catalysts and, as a result, the dreaded overpotential hurdle now seems easier to clear. Furthermore, in refining a long-accepted theoretical model used to predict the efficiency of water splitting catalysts, they have made it immeasurably easier for people (or super-computers) to search for the elusive "green bullet" catalyst.

Lead author Michael Craig, Trinity, is excited to put this insight to use. He said: "We know what we need to optimize now, so it is just a case of finding the right combinations."

Professor Max García-Melchor and Ph.D. candidate Michael Craig, Trinity College Dublin, searching for the "green bullet" catalyst. Credit: Trinity College Dublin

The team now aims to use artificial intelligence to put a large number of earth-abundant metals and ligands (which glue them together to generate the catalysts) in a melting pot before assessing which of the near-infinite combinations yield the greatest promise.

In combination, what once looked like an empty canvas now looks more like a paint-by-numbers as the team has established fundamental principles for the design of ideal catalysts.

Professor Max García-Melchor added: "Given the increasingly pressing need to find green energy solutions it is no surprise that scientists have, for some time, been hunting for a magical catalyst that would allow us to split water electrochemically in a cost-effective, reliable way. However, it is no exaggeration to say that before now such a hunt was akin to looking for a needle in a haystack. We are not over the finishing line yet, but we have significantly reduced the size of the haystack and we are convinced that artificial intelligence will help us hoover up plenty of the remaining hay."

He also stressed: "This research is hugely exciting for a number of reasons and it would be incredible to play a role in making the world a more sustainable place. Additionally, this shows what can happen when researchers from different disciplines come together to apply their expertise to try to solve a problem that affects each and every one of us."

###

Professor Max García-Melchor is an Ussher Assistant Professor in Chemistry at Trinity and senior author on the landmark research that has just been published in a leading international journal, Nature Communications.

Reference: "Universal scaling relations for the rational design of molecular water oxidation catalysts with near-zero overpotential" by Michael John Craig, Gabriel Coulter, Eoin Dolan, Joaquín Soriano-López, Eric Mates-Torres, Wolfgang Schmitt and Max García-Melchor, 8 November 2019, Nature Communications. DOI: 10.1038/s41467-019-12994-w

Collaborating authors include Gabriel Coulter, formerly of Trinity and now studying for an MSc at the University of Cambridge; Eoin Dolan, formerly of Trinity and now completing an Erasmus Mundus joint MSc degree in Paris; Dr Joaquín Soriano-López, MSCA-Edge fellow in Trinity's School of Chemistry; Eric Mates, PhD candidate in Trinity's School of Chemistry; and Professor Wolfgang Schmitt from Trinity's School of Chemistry.

The research has been supported by Science Foundation Ireland and the Irish Centre for High-End Computing (ICHEC), where the team is benefiting from 4,500,000 CPU hours at Ireland's state-of-the-art super-computer facility.

See the original post:

Solving a Riddle That Would Provide the World With Entirely Clean, Renewable Energy - SciTechDaily

Supercomputer predicts Championship table and Reading FC fans will be livid – Get Reading

We are now well into the Championship campaign and a thrilling season is on the cards following a dramatic first few months.

Leaders West Brom are just six points ahead of seventh-placed Sheffield Wednesday, with the play-off and automatic promotion picture shifting every week after the latest batch of matches.

Meanwhile, the bottom of the table has managerless Stoke City struggling to find their footing, while Middlesbrough and Huddersfield Town are among the heavy-hitters floundering at the wrong end of the standings.

With things being extremely tight, it is very difficult to predict the final outcomes, but that is exactly what talkSPORT has done with its latest supercomputer, which issues the final verdict for all 24 clubs.

In the end, it's good news for Leeds United and West Brom, both of whom earn promotion to the Premier League, leaving Fulham, Swansea City, Preston and Nottingham Forest battling it out in the play-offs.

But it's bad news for Reading who have once again been written off and predicted for relegation.

A run of two wins in three matches has seen them slowly move up the league but that's still not enough to convince the outsiders it won't be a dismal season.

Royals, Barnsley and Luton have all been predicted to face the drop into the third tier of English football.

You can see the full table below.

1 - Leeds United

2 - West Brom

-------------------

3 - Fulham

4 - Swansea City

5 - Preston

6 - Nottingham Forest

---------------------------

7 - Bristol City

8 - Brentford

9 - Sheffield Wednesday

10 - QPR

11 - Charlton

12 - Cardiff City

13 - Derby County

14 - Blackburn Rovers

15 - Stoke City

16 - Hull City

17 - Birmingham City

18 - Huddersfield Town

19 - Middlesbrough

20 - Millwall

21 - Wigan Athletic

------------------------

22 - Luton Town

23 - Reading

24 - Barnsley


Read the rest here:

Supercomputer predicts Championship table and Reading FC fans will be livid - Get Reading

Using AI to Split Water Electrochemically in a Cost-Effective, Reliable Way – AZoCleantech

Written by AZoCleantech, Nov 11 2019

Researchers from Trinity College Dublin have taken a massive stride towards finding an answer to a puzzle that would offer the world a totally renewable, clean energy from which the sole waste product would be water.

Left to right, Professor Max Garcia-Melchor and PhD Candidate Michael Craig, Trinity College Dublin. Image Credit: Trinity College Dublin.

Decreasing human-induced carbon dioxide (CO2) emissions is debatably the paramount challenge facing 21st-century civilization, particularly given the constantly growing global population and the intensified energy demands that accompany it.

One ray of hope is the concept that renewable electricity could be used to split water (H2O) to create energy-rich hydrogen (H2), which could be subsequently stored and used in fuel cells. This is a particularly stimulating perspective in a situation where solar and wind energy sources generate electricity to split water, as this would enable the storage of energy for use when those renewable sources are not available.

The vital issue, however, is that water is highly stable and necessitates a considerable amount of energy to split up. A specifically major obstacle to overcome is the energy or overpotential related to the formation of oxygen, which is the bottleneck reaction in splitting water to generate H2.

Although some elements like Iridium or Ruthenium (two of the noble metals of the periodic table) hold the potential to split water, these are exorbitantly expensive for commercialization. Other, inexpensive options tend to be less robust and/or have low efficiency. At the moment, no one has found catalysts that are robust, cost-effective, and highly active for substantial lengths of time.

So, how does one solve such a puzzle? Without any need for glasses, lab coats, beakers, and odd smells, this work was performed completely with a computer.

By uniting theoretical physicists and chemists, the Trinity team behind the latest innovation combined chemistry expertise with highly powerful computers to discover one of the holy grails of catalysis.

The team, headed by Professor Max García-Melchor, made a critical discovery while examining molecules that synthesize oxygen: Science had been undervaluing the activity of some of the more reactive catalysts. Consequently, the dreaded overpotential obstacle now seems easier to overcome.

By improving a long-accepted theoretical model used to estimate the efficiency of water splitting catalysts, they have made it far easier for people (or super-computers) to locate the mysterious "green bullet" catalyst.

Michael Craig from Trinity College Dublin, who was the lead author, is eager to implement this insight.

"We know what we need to optimise now, so it is just a case of finding the right combinations."

Michael Craig, Study Lead Author, Trinity College Dublin

The team intends to use artificial intelligence (AI) to place a large number of earth-abundant metals and ligands (which stick together to produce the catalysts) in a melting pot before evaluating which of the near-infinite combinations hold the greatest potential.

Together, what earlier appeared like an empty canvas currently appears more like paint-by-numbers as the researchers have defined fundamental principles for designing perfect catalysts.

"Given the increasingly pressing need to find green energy solutions it is no surprise that scientists have, for some time, been hunting for a magical catalyst that would allow us to split water electrochemically in a cost-effective, reliable way."

Max García-Melchor, Study Senior Author and Ussher Assistant Professor in Chemistry, Trinity College Dublin

Professor García-Melchor continued, "However, it is no exaggeration to say that before now such a hunt was akin to looking for a needle in a haystack. We are not over the finishing line yet, but we have significantly reduced the size of the haystack and we are convinced that artificial intelligence will help us hoover up plenty of the remaining hay."

"This research is hugely exciting for a number of reasons and it would be incredible to play a role in making the world a more sustainable place. Additionally, this shows what can happen when researchers from different disciplines come together to apply their expertise to try to solve a problem that affects each and every one of us."

Max García-Melchor, Study Senior Author and Ussher Assistant Professor in Chemistry, Trinity College Dublin

Max García-Melchor, who is an Ussher Assistant Professor in Chemistry at Trinity, is the senior author on the breakthrough research that has recently been published in Nature Communications, a leading international journal.

Authors collaborating on this research include Gabriel Coulter, formerly of Trinity and now studying for an MSc at the University of Cambridge; Eoin Dolan, formerly of Trinity and currently completing an Erasmus Mundus joint MSc degree in Paris; Dr Joaquín Soriano-López, MSCA-Edge fellow in Trinity's School of Chemistry; Eric Mates, PhD candidate in Trinity's School of Chemistry; and Professor Wolfgang Schmitt from Trinity's School of Chemistry.

The study has been supported by Science Foundation Ireland and the Irish Centre for High-End Computing (ICHEC), where the team is profiting from 4,500,000 CPU hours at Ireland's high-tech super-computer facility.

Source: https://www.tcd.ie

Go here to read the rest:

Using AI to Split Water Electrochemically in a Cost-Effective, Reliable Way - AZoCleantech

Anyone who wrote off Sheffield United as a direct, functional team might want to watch the goal they scored at… – The Athletic

Dele Alli is far from alone in being surprised at just how well Sheffield United have adapted to life in the Premier League.

In recent weeks, a team that has taken four points off last season's Europa League finalists and been unlucky not to bag the same from clashes with the two teams who competed in last season's Champions League showpiece have become used to the plaudits.

For wing-back George Baldock, however, the words of praise from England international Alli in a post-match chat between the two former Milton Keynes Dons team-mates made an afternoon featuring his maiden top-flight goal feel even more special.

"I played for a long time with Dele Alli and we had a brief chat afterwards," said the unlikely goalscoring hero for Chris Wilder's side following their 1-1 draw with Tottenham Hotspur.

"He congratulated us. He said he'd watched us a few times this season on TV and been surprised at how well we...

Here is the original post:

Anyone who wrote off Sheffield United as a direct, functional team might want to watch the goal they scored at... - The Athletic

Calvin Harris shows off his jaw-dropping studio and reveals that he already has the new Mac Pro – MusicRadar

It seems that, if you're Calvin Harris, there's no piece of gear that you can't get your hands on - even Apple's so-far-unreleased new Mac Pro.

Harris was all over Instagram this weekend, posting a series of videos showcasing what we assume to be his amazing studio and revealing new music. One of these, which you can watch above, clearly features the newly-designed Mac Pro, which is listed on the Apple website as "Coming in Autumn."

One would assume that Harris is one of a number of high-profile creatives who've got early access to the new machine. It's demanding professionals, after all, who represent Apple's target market for the new super-computer.

Harris's Instagram splurge also confirms that he's an avid synth collector - check out the line-up of instruments in the video below - and that at least some of his new music has a distinctly acid housey vibe to it. You can hear a suitably squelchy bassline, and there's even a smiley face hanging on the wall.

As such, we wonder if Harris might soon be getting his hands on Behringer's TD-3? As one of the biggest electronic music producers in the world, we're sure he could swing a pre-release version of that as well...

Read the original here:

Calvin Harris shows off his jaw-dropping studio and reveals that he already has the new Mac Pro - MusicRadar

What TV Scandal Did NBC Forbid Quantum Leap to Feature on the Series? – CBR – Comic Book Resources

TV URBAN LEGEND: Quantum Leap was forbidden to do an episode about the TV quiz show scandals of the 1950s.

Quantum Leap was a critically acclaimed television drama that aired from 1989-1993 on NBC that starred Scott Bakula as Dr. Sam Beckett, a scientist who cracked the secret to time travel, but found himself leaping from year to year without control over where he would leap next (he would displace an actual person from the time period every time he leaped). The only thing he could do was hope that eventually he would leap back to his time period. He figured out that the way to trigger another leap was to fix something that had gone wrong in the life of the person he leaped into. He was aided on his quest by his colleague, Al, who used a super-computer to calculate the best bet for what Sam was meant to fix during each leap (as I recently noted in a tweet, Al sometimes had WAAAAAY too much information available to him about the lives involved in the leaps). Al appeared to Sam as a hologram that only he could see and hear.

The show addressed a number of controversial topics over the years, but there were a few topics that NBC ruled off limits for the series. Some of them were logistical issues, where the show just didn't have the budget to get a topic done right. Others, though, were just too sensitive for NBC and they forbade the show from doing episodes on the topic.

On the Quantum Leap podcast, David Campiti, the creator of Innovation Comics, who did licensed Quantum Leap comic books, explained that he was given the Series Bible when he got the license and saw that one of the things that the show could not touch was the quiz show scandal of the 1950s.

The quiz show scandal was when a number of popular game shows of the 1950s were revealed to have been rigging their results to get the most interesting winners, with the NBC series Twenty-One specifically engineering wins for a popular contestant named Charles Van Doren...

This was a dark time for television and things got so bad that the government nearly stepped in and took over control of broadcast television outright, but luckily, things didn't go that route. However, game shows have been governed by very strict rules ever since (which is why some competition reality shows like Big Brother make it clear that they are NOT game shows, so that they don't have to play by the strict game show rules established in the wake of the quiz show scandal).

Of course, since Campiti was not restricted by the rules of the television series, he quickly devoted a Quantum Leap comic book story to the quiz show scandal.

Here's Sam leaping in...

and here, he learns why he is there...

The legend is...

STATUS: True

Thanks to my pal, Loren, for suggesting this one! And thanks to David Campiti and the Quantum Leap podcast for the information!

Be sure to check out my archive of TV Legends Revealed for more urban legends about the world of TV.

Feel free (heck, I implore you!) to write in with your suggestions for future installments! My e-mail address is bcronin@legendsrevealed.com.


Read the rest here:

What TV Scandal Did NBC Forbid Quantum Leap to Feature on the Series? - CBR - Comic Book Resources

Google says ‘quantum supremacy’ achieved in new age super computer – Fox Business

FOX Business' Hillary Vaughn on Google CEO Sundar Pichai and Ivanka Trump teaming up to create more IT jobs.

SAN FRANCISCO (AP) Google says it has achieved a breakthrough in quantum computing research.

It says an experimental quantum processor has completed a calculation in just a few minutes that would take a traditional supercomputer thousands of years.

The results of its study appear in the scientific journal Nature. Google says it has achieved quantum supremacy, which means the quantum computer did something a conventional computer could never do.

"For those of us working in science and technology, its the hello world moment weve been waiting forthe most meaningful milestone to date in the quest to make quantum computing a reality," Google CEOSundar Pichai wrote in a blog post announcing the breakthrough."But we have a long way to go between todays lab experiments and tomorrows practical applications; it will be many years before we can implement a broader set of real-world applications," he continued.

Competitor IBM is disputing that Google achieved the benchmark, saying Google underestimated the conventional supercomputer.

Quantum computing is an advanced computing technology that is still at a relatively early stage of development.

Link:

Google says 'quantum supremacy' achieved in new age super computer - Fox Business

Milwaukee School of Engineering Supercomputer to Support AI Education and Research – Campus Technology

High-Performance Computing

A new supercomputer at the Milwaukee School of Engineering will serve both research and instruction in artificial intelligence and deep learning for the university's computer science program. MSOE partnered with Microway to build the custom cluster, an NVIDIA DGX POD-based machine designed to support modern AI development. The DGX POD reference architecture is based on NVIDIA's DGX SATURN V AI supercomputer used for internal research and development for autonomous vehicles, robotics, graphics and more.

Technical specs include:

MSOE students will be able to access the supercomputer via web browser and "start a DGX-1 or NVIDIA T4 GPU deep learning session with the click of a button" with no need to understand command line interfaces and workload managers, according to a news announcement. "Unlike many university programs in which students' access to supercomputers is usually limited to graduate students in computer labs, this configuration gives undergraduate students at MSOE supercomputer access in the classroom, enabling training of the next AI workforce."


The rest is here:

Milwaukee School of Engineering Supercomputer to Support AI Education and Research - Campus Technology

New Cray Supercomputer Brings Advanced AI Capabilities to the High-Performance Computing Center Stuttgart – Yahoo Finance

German HPC Center Prepares for the Exascale Era; Responds to Growing Demand for Converged Solutions Combining AI and HPC

SEATTLE, Oct. 24, 2019 (GLOBE NEWSWIRE) -- Global supercomputer leader Cray, a Hewlett Packard Enterprise company (HPE), today announced that the High-Performance Computing Center of the University of Stuttgart (HLRS) in Germany has selected a new Cray CS-Storm GPU-accelerated supercomputer to advance its computing infrastructure in response to user demand for processing-intensive applications like machine learning and deep learning. The new Cray system is tailored for artificial intelligence (AI) and includes the Cray Urika-CS AI and Analytics suite, enabling HLRS to accelerate AI workloads, arm users to address complex computing problems and process more data with higher accuracy of AI models in engineering, automotive, energy, and environmental industries and academia.

"As we extend our service portfolio with AI, we require an infrastructure that can support the convergence of traditional high-performance computing applications and AI workloads to better support our users and customers," said Prof. Dr. Michael Resch, director at HLRS. "We've found success working with our current Cray Urika-GX system for data analytics, and we are now at a point where AI and deep learning have become even more important as a set of methods and workflows for the HPC community. Our researchers will use the new CS-Storm system to power AI applications to achieve much faster results and gain new insights into traditional types of simulation results."

Supercomputer users at HLRS are increasingly asking for access to systems containing AI acceleration capabilities. With the GPU-accelerated CS-Storm system and Urika-CS AI and Analytics suite, which leverages popular machine intelligence frameworks like TensorFlow and PyTorch, HLRS can provide machine learning and deep learning services to its leading teaching and training programs, global partners and R&D. The Urika-CS AI and Analytics suite includes Cray's Hyperparameter Optimization (HPO) and Cray Programming Environment Deep Learning Plugin, arming system users with the full potential of deep learning and advancing the services HLRS offers to its users interested in data analytics, machine learning and related fields.

"The future will be driven by the convergence of modeling and simulation with AI and analytics and we're honored to be working with HLRS to further their AI initiatives by providing advanced computing technology for the Center's engineering and HPC training and research endeavors," said Peter Ungaro, president and CEO at Cray, a Hewlett Packard Enterprise company. "HLRS has the opportunity to apply AI to improve and scale data analysis for the benefit of its core research areas, such as looking at trends in industrial HPC usage, creating models of car collisions, and visualizing black holes. The Cray CS-Storm combined with the unique Cray-CS AI and Analytics suite will allow HLRS to better tackle converged AI and simulation workloads in the exascale era."

In addition to the Cray CS-Storm architecture and Cray-CS AI and Analytics suite, the system will feature NVIDIA V100 Tensor Core GPUs and Intel Xeon Scalable processors.

"The convergence of AI and scientific computing has accelerated the pace of scientific progress and is helping solve the world's most challenging problems," said Paresh Kharya, Director of Product Management and Marketing at NVIDIA. "Our work with Cray and HLRS on their new GPU-accelerated system will result in a modern HPC infrastructure that addresses the demands of the Center's research community to combine simulation with the power of AI to advance science, find cures for disease, and develop new forms of energy."

The system is scheduled for delivery to HLRS in November 2019.

About Cray Inc. Cray, a Hewlett Packard Enterprise company, combines computation and creativity so visionaries can keep asking questions that challenge the limits of possibility. Drawing on more than 45 years of experience, Cray develops the world's most advanced supercomputers, pushing the boundaries of performance, efficiency and scalability. Cray continues to innovate today at the convergence of data and discovery, offering a comprehensive portfolio of supercomputers, high-performance storage, data analytics and artificial intelligence solutions. Go to http://www.cray.com for more information.


CRAY and Urika are registered trademarks of Cray Inc. in the United States and other countries, and CS-Storm is a trademark of Cray Inc. Other product and service names mentioned herein are the trademarks of their respective owners.

Cray Media: Diana Brodskiy, 415/306-6199, pr@cray.com

Continue reading here:

New Cray Supercomputer Brings Advanced AI Capabilities to the High-Performance Computing Center Stuttgart - Yahoo Finance

AMD CPUs Will Power UKs Next-Generation ARCHER2 Supercomputer – The Next Platform

AMD has picked up yet another big supercomputer win with the selection of its second-generation Epyc processors, aka Rome, as the compute engine for the ARCHER2 system to be installed at the University of Edinburgh next year. The UK Research and Innovation (UKRI) announced the selection earlier this week, along with additional details on the makeup of system hardware.

According to the announcement, when ARCHER2 is up and running in 2020, it will deliver a peak performance of around 28 petaflops, more than 10 times that of the UK's current ARCHER supercomputer housed at EPCC, the University of Edinburgh's supercomputing center. ARCHER, which stands for Advanced Research Computing High End Resource, has filled the role of the UK National Supercomputing Service since it came online in 2013.

The now six-year-old ARCHER is a Cray XC30 machine comprised of 4,920 dual-socket nodes, powered by 12-core, 2.7 GHz Intel Ivy Bridge Xeon E5 v2 processors, yielding a total of 118,080 cores and rated at a peak theoretical performance of 2.55 petaflops across all those nodes. Most of the nodes are outfitted with 64 GB of memory, with a handful of large-memory nodes equipped with 128 GB, yielding a total capacity of 307.5 TB. Cray's Aries interconnect, as the XC system name implies, is employed to lash together the nodes.

The upcoming ARCHER2 will also be a Cray (now owned by Hewlett Packard Enterprise) machine, in this case based on the company's Shasta platform. It will consist of 5,848 nodes laced together with the 100 Gb/sec Slingshot HPC variant of Ethernet, which is based on Cray's homegrown Rosetta switch ASIC and deployed in a 3D dragonfly topology.

Although that's only about a thousand more nodes than its predecessor, each ARCHER2 node will be equipped with two 64-core AMD Rome CPUs running at 2.25 GHz, for a grand total of 748,544 cores. It looks like ARCHER2 is not using the Epyc 7H12, the HPC variant of the Rome chip launched in September, which runs its base clocks at 2.6 GHz but has a slightly lower turbo boost speed of 3.3 GHz; that chip requires direct liquid cooling on the socket because it is revving at 280 watts, which cannot be moved quickly off the CPU by fans blowing air in the server chassis.

Even though the ARCHER2 machine will only have about six times the core count, each of those Rome cores is nearly twice as powerful as the Ivy Bridge ones in ARCHER from a peak double precision flops perspective. That's actually pretty remarkable when you consider that the nominal clock frequency on these particular Rome chips is 450 MHz slower than that of the Xeon E5 v2 counterparts in ARCHER. Having 6.3X the number of cores helps, and really, it is the only benefit we are getting out of Moore's Law. The Rome cores also execute twice as many double-precision flops per cycle as the Ivy Bridge cores (16 versus eight, thanks to their pair of 256-bit fused multiply-add units), so this accounts for the rest of the per-core increase.
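Those peak numbers are easy to sanity-check. Here is a minimal sketch using the node counts and clocks quoted above; the flops-per-cycle figures are the standard ones for each microarchitecture, and the small shortfall against the quoted ~28 petaflops presumably reflects rounding or boost clocks:

```python
# Peak double-precision flops for each machine, from the figures in the text.
def peak_pflops(nodes, cores_per_node, ghz, dp_flops_per_cycle):
    """Peak DP petaflops for a homogeneous CPU cluster."""
    return nodes * cores_per_node * ghz * dp_flops_per_cycle / 1e6  # Gflops -> PF

archer = peak_pflops(4920, 24, 2.7, 8)      # Ivy Bridge: 8 DP flops/cycle
archer2 = peak_pflops(5848, 128, 2.25, 16)  # Rome: 16 DP flops/cycle (two 256-bit FMAs)
print(f"ARCHER:  {archer:.2f} PF peak")     # ~2.55 PF, matching the article
print(f"ARCHER2: {archer2:.2f} PF peak, {archer2 / archer:.1f}x ARCHER")  # ~26.9 PF
```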

ARCHER2's total system memory is 1.57 PB, which is more than five times larger than that of ARCHER, but given the 10X peak performance discrepancy, the second-generation machine will have to manage with about half the number of bytes per double-precision flop. Fortunately, those bytes are moving a lot faster now, thanks to the eight-memory-controller design of the Epyc processors. The system also has a 1.1 PB all-flash Lustre burst buffer front-ending a 14.5 PB Lustre parallel disk file system to keep the data moving steadily into and out of the system. All of this will be crammed into 23 Shasta cabinets, which have water cooling in the racks.
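The bytes-per-flop claim checks out against the capacities quoted above; a quick sketch:

```python
# Memory capacity per unit of peak compute, using the figures above.
archer_ratio = 307.5e12 / 2.55e15   # ARCHER:  ~0.12 bytes per peak DP flop
archer2_ratio = 1.57e15 / 28e15     # ARCHER2: ~0.056 bytes per peak DP flop
print(f"ARCHER2 has {archer2_ratio / archer_ratio:.0%} of ARCHER's bytes per flop")
# -> roughly half, as the article says
```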

In fact, as we reported in August in our deep dive on the Rome architecture, these processors can deliver up to 410 GB/sec of memory bandwidth if all the DIMM slots are populated. That works out to about 45 percent more bandwidth than what can be achieved with Intel's six-channel Cascade Lake Xeon SP, a processor that can deliver a comparable number of flops.

The reason we are dwelling on this particular metric is that when we spoke with EPCC center director Mark Parsons in March, he specifically referenced memory bandwidth as an important criterion for the selection of the CPU that would be powering ARCHER2, telling us that the better the balance between memory bandwidth and flops, the more attractive the processor is.

Of course, none of these peak numbers matter much to users, who are more interested in real-world application performance. In that regard, ARCHER2 is expected to provide over 11X the application throughput of ARCHER, on average, based on five of the most heavily used codes at EPCC. Specifically, their evaluation, presumably based on early hardware, revealed the following application speedups compared to the 2.5 petaflops ARCHER:

As the announcement pointed out, that level of performance puts ARCHER2 in the upper echelons of CPU-only supercomputers. (Currently, the top CPU-powered system is the 38.7 petaflops Frontera system at the Texas Advanced Computing Center.) It should be noted that ARCHER2 will, however, include a collaboration platform with four compute nodes containing a total of 16 AMD GPUs, so technically it's not a pure CPU machine.

ARCHER2 will be installed in the same machine room at EPCC as ARCHER, so when they swap machines, there will be a period without HPC service. The plan is to pull the plug on ARCHER on February 18, 2020 and have ARCHER2 up and running on May 6. Subsequent to that, the new system will undergo a 30-day stress test, during which access may be limited.

This is all good news for AMD, of course, which has been capturing HPC business at a breakneck pace over the last several months. That's largely been due to the attractive performance (and likely price-performance) offered by the Rome silicon compared to what Intel is currently offering.

Some recent notable AMD wins include a 24-petaflop supercomputer named Hawk, which is headed to the High-Performance Computing Center of the University of Stuttgart (HLRS) later this year, as well as a 7.5-petaflops system at the IT Center for Science, CSC, in Finland. Add to that a couple of large Rome-powered academic systems, including a 5.9-petaflops machine for the national Norwegian e-infrastructure provider Uninett Sigma2 and another system of the same size to be deployed at Indiana University. The US Department of Defense has jumped on the AMD bandwagon as well, with a trio of Rome-based supercomputers for the Air Force and Army.

All of these systems are expected to roll out in 2019 and 2020. And until Intel is able to counter the Rome juggernaut with its upcoming 10 nanometer Ice Lake Xeon processors in 2020, we fully expect to see AMD continue to rack up HPC wins at the expense of its larger competitor.

The ARCHER2 contract was worth £79 million, which translates to about $102 million at current exchange rates. The original ARCHER system cost £43 million, which converted to about $70 million at the time. So the ARCHER2 machine will cost about 1.46X as much and deliver about 11X the peak theoretical performance over an eight year span of time. First of all, that is a very long time to wait to do an upgrade for an HPC center, so clearly EPCC was waiting for a chance to get a really big jump in price/performance, and by the way, at 28 petaflops, that is considerably higher than the 20 petaflops to 25 petaflops that EPCC was expecting back in March when the requisition was announced.

That original ARCHER system cost around $27,450 per peak teraflops back in 2012, which was on par with all-CPU systems but considerably more expensive, on a cost per teraflops basis, than the emerging accelerated systems of the time. (We did an analysis of the cost of the highest end, upper echelon supercomputers over time back in April 2018.) The ARCHER2 system is coming in at around $3,642 per teraflops, which is a huge improvement of 7.5X in bang for the buck, but the US Department of Energy is going to pay another order of magnitude less: something on the order of $335 per teraflops for the Frontier accelerated system at Oak Ridge National Laboratory and the El Capitan accelerated system at Lawrence Livermore National Laboratory when they are accepted in around 2022 and 2023. Both have AMD CPUs, and Frontier will also use AMD GPUs for compute; El Capitan has not yet decided on its GPU. The current Summit and Sierra systems at those very same labs, which mix IBM Power9 processors with Nvidia Tesla V100 GPU accelerators, cost a little more than $1,000 per teraflops.
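The cost-per-teraflops arithmetic is straightforward; a quick sketch reproducing the figures above:

```python
# Dollars per peak teraflops, as computed in the text.
machines = {
    "ARCHER (2013)":  (70e6, 2_550),    # ~$70M at the time, 2.55 PF peak
    "ARCHER2 (2020)": (102e6, 28_000),  # ~$102M at current rates, ~28 PF peak
}
for name, (usd, tflops) in machines.items():
    print(f"{name}: ${usd / tflops:,.0f} per peak teraflops")
# ~$27,450 vs ~$3,640 (modulo rounding), a ~7.5x improvement in bang for the
# buck; the article cites ~$1,000/TF for Summit/Sierra and ~$335/TF expected
# for the Frontier and El Capitan accelerated systems.
```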

Our point is, all-CPU systems are necessary, particularly for labs with diverse workloads, and they come at a premium compared to labs that use accelerators and have ported their codes to them.

Continued here:

AMD CPUs Will Power UKs Next-Generation ARCHER2 Supercomputer - The Next Platform

Surprising Discovery Made When Supercomputer Simulations Explore Magnetic Reconnection – SciTechDaily

Collision of two magnetized plasma plumes showing Biermann battery-mediated reconnection. Credit: Jackson Matteucci and Will Fox

Magnetic reconnection, a process in which magnetic field lines tear and come back together, releasing large amounts of kinetic energy, occurs throughout the universe. The process gives rise to auroras, solar flares and geomagnetic storms that can disrupt cell phone service and electric grids on Earth. A major challenge in the study of magnetic reconnection, however, is bridging the gap between these large-scale astrophysical scenarios and small-scale experiments that can be done in a lab.

Researchers have now overcome this barrier through a combination of clever experiments and cutting-edge simulations. In doing so, they have uncovered a previously unknown role for a universal process called the Biermann battery effect, which turns out to impact magnetic reconnection in unexpected ways.

The Biermann battery effect, a possible seed for the magnetic fields pervading our universe, generates an electric current that produces these fields. The surprise findings, made through computer simulations, show the effect can play a significant role in the reconnection occurring when the Earths magnetosphere interacts with astrophysical plasmas. The effect first generates magnetic field lines, but then reverses roles and cuts them like scissors slicing a rubber band. The sliced fields then reconnect away from the original reconnection point.

The simulations modeled the results of experiments in China that studied high-energy-density plasmas, matter under extreme states of pressure. The experiments used lasers to blast a pair of plasma bubbles from a solid metal target. Simulations of the three-dimensional plasma (see image at the top of the page) traced the expansion of the bubbles and the magnetic fields that the Biermann effect created, tracking the collision of the fields to produce magnetic reconnection. Researchers performed these simulations on the Titan supercomputer at the U.S. Department of Energy's Oak Ridge Leadership Computing Facility at Oak Ridge National Laboratory.

"The results provide a new platform for replicating the reconnection observed in astrophysical plasmas in the laboratory," said Jackson Matteucci, a graduate student in the Plasma Physics program at the Princeton Plasma Physics Laboratory who led the research.

"By bridging the traditional gap between laboratory experiments and astrophysical processes, these results open a new chapter in efforts to understand the universe."

###

Funding provided in part by the National Defense Science and Engineering Graduate Fellowship Program.

Abstract:

3-D magnetic reconnection in laser-driven plasmas: novel modeling provides insight into laboratory and astrophysical current sheets
9:30 AM-12:30 PM, Thursday, October 24, 2019
Room: Floridian Ballroom CD

Visit link:

Surprising Discovery Made When Supercomputer Simulations Explore Magnetic Reconnection - SciTechDaily

Bitcoin and cryptocurrencies had a very bad day – TechCrunch

The price of Bitcoin and other cryptocurrencies tanked today, continuing a months-long slide that has seen the value of the digital currency slide by more than $2,000 from highs of above $10,000 earlier in the year.

Investors are still speculating about the cause of the crash, but before today, cryptocurrency bulls had hoped that $8,000 would be the new floor for Bitcoin.

No longer. Today the price of Bitcoin dropped to $7,448.75, down from around $8,000 earlier in the day.

Investors aren't sure what's behind the crash, but Bitcoin's commentariat pointed to two likely culprits.

One was the underwhelming performance of Facebook's chief executive Mark Zuckerberg in testimony before Congress on the Libra cryptocurrency that his company is leading the charge to create.

However, an underwhelming performance from Zuckerberg and the potential fate of Libra, which cryptocurrency purists have scoffed at anyway, may be less concerning for the Bitcoin crowd than developments happening in Google's quantum computing research labs around the world.

Earlier today, Google declared "quantum supremacy," indicating that it had solved a problem using quantum computing that a supercomputer would have taken years to solve. That's great news for theoretical physicists and quantum computing aficionados, but less good for investors who've put their faith (and billions of dollars) into a system of record whose value depends on its inability to be cracked by computing power.

When news of Google's achievement first began trickling out in late September (thanks to reporting by the Financial Times), Bitcoin experts dismissed the notion that it would cause problems for the cryptocurrency.

"We still don't even know if it's possible to scale quantum computers; quite possible that adding qbits will have an exponential cost," wrote early Bitcoin developer Peter Todd on Twitter.

The comments, flagged by CoinTelegraph, seem to indicate that the economic cost of cracking Bitcoin's cryptography is far beyond the means of even Alphabet's multibillion-dollar budgets.

Still, it has been a dark few months for cryptocurrencies after steadily surging throughout the year. The real test, of course, of the viability of Bitcoin and the other cryptographically secured transaction mechanisms floating around the tech world these days is whether anyone will build viable products on their open architectures.

Aside from a few flash-in-the-pan fads, the jury is very much still out on what the verdict will be.

That uncertainty affects more than just Bitcoin, and, indeed, the rest of the market also tumbled, as Coindesk pricing charts indicate.

Read more:

Bitcoin and cryptocurrencies had a very bad day - TechCrunch

Google claims to have developed a quantum computer which is BILLIONS of times faster than our most advanced – The Sun

GOOGLE says it has developed a quantum computer billions of times faster than any other technology.

The US giant claims it took 200 seconds to carry out a task that would have taken a supercomputer around 10,000 years.
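The "billions of times faster" claim follows directly from those two numbers; a quick sketch of the arithmetic:

```python
# How "billions of times faster" falls out of the two quoted figures.
seconds_per_year = 365.25 * 24 * 3600
classical_estimate = 10_000 * seconds_per_year  # supercomputer: ~10,000 years
sycamore_runtime = 200                          # quantum processor: 200 seconds
print(f"Speedup: {classical_estimate / sycamore_runtime:.1e}")  # ~1.6e9
```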


Experts called it a "phenomenally significant" breakthrough.

In conventional computers, a unit of information or bit can have a value of 0 or 1.

But quantum bits can be both 0 and 1, allowing multiple computations at once.
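In the standard notation, which the article does not show, a qubit's state is a weighted blend of both values, with complex amplitudes as the weights:

```latex
% A qubit in superposition (standard notation, added for illustration):
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
% Measurement yields 0 with probability |\alpha|^2 and 1 with probability
% |\beta|^2; a register of n qubits carries amplitudes over all 2^n patterns.
```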

University College London expert Dr Ciarán Gilligan-Lee said: "It is the first baby step on a long journey where we can fully harness the power of quantum mechanics that will eclipse anything our current laptops or even supercomputers can do.

"It is a huge technological and scientific milestone."

Quantum computing promises to revolutionise the way PCs crunch data.

They could perform important work like designing super-materials, speeding up package deliveries and creating new drugs for deadly diseases at high speed.

Scientists have spent decades trying to achieve "quantum supremacy", a landmark that Google now claims to have conquered.

The phrase basically means the moment at which a quantum computer is able to do something that a classical computer can't.

The search giant worked for more than a decade to produce its own quantum computing chip, called Sycamore.

"Our machine performed the target computation in 200 seconds," Google researchers saidin a blog post about the work.

"From measurements in our experiment we determined that it would take the world's fastest supercomputer 10,000 years to produce a similar output."

Google carried out its research at a lab in Santa Barbara, California.

The findings were published in the journal Nature.

Dr Luke Fleet, a senior editor at Nature, said quantum machines are still years, if not decades, away.

He said: "This breakthrough result marks the dawn of a new type of computing. It allows you to compute things faster, not just a little, but a lot faster!

"This is transformative, as people will be able to compute things that they previously thought impossible."

Follow this link:

Google claims to have developed a quantum computer which is BILLIONS of times faster than our most advanced - The Sun

talkSPORT Super Computer predicts where every club will finish in the 2019/20 Premier League table *October – talkSPORT.com

We're now two months into the Premier League season and, quite frankly, it's even more chaotic than we imagined.

Sure, unbeaten Liverpool and title holders Manchester City look far and away the best teams, but elsewhere we're seeing unpredictable results and performances from every team.

Newcastle United looked a smart and solid outfit in their victory over Tottenham, but against Leicester City they looked incapable of battling against relegation to the Championship.

Top four contenders Arsenal have looked excellent one minute and laughable the next in several of their matches, while Manchester United are also alternating between the sublime and the ridiculous.

It's making for interesting viewing, with teams like Leicester now seeing a real opening to break into the top six. But just how will it all play out across the season?

We booted up the talkSPORT Super Computer to find out just what is going to happen.

You can see the results and the predicted Premier League table below.

[Predicted Premier League table published as a series of images; credits: Getty Images, AFP]

Saturday is GameDay on talkSPORT and talkSPORT 2 as we become your go-to destination for all the Premier League action.

We'll bring you LIVE commentary of Premier League games across all three time slots on Saturday, 12.30pm, 3pm and 5.30pm, delivering award-winning coverage to more GameDay listeners than ever.

See original here:

talkSPORT Super Computer predicts where every club will finish in the 2019/20 Premier League table *October - talkSPORT.com

India is on its way to become a supercomputer power – Quartz India

India's recent Chandrayaan-2 mission, which almost soft-landed a probe on the moon, had a palpable zeal which, as prime minister Narendra Modi pointed out, will be felt in other realms of the knowledge society.

With renewed aspirations to excel in science, engineering, and business, the time is ripe for India to invest in the kinds of infrastructure that will help achieve these goals, supercomputers among them.

Developed, and almost-developed, countries have begun investing heavily in high-performance computing to boost their economies and tackle the next generation of social problems.

With their unique ability to simulate the real world by processing massive amounts of data, supercomputers have made cars and planes safer, and fuel more efficient and environmentally friendly. They help in the extraction of new sources of oil and gas, the development of alternative energy sources, and the advancement of medical science.

Supercomputers have also allowed weather forecasters to accurately predict severe storms, enabling better mitigation planning, and warning systems. They are increasingly being deployed by financial services, manufacturing and internet companies, and in vital infrastructure systems such as water supply networks, energy grids, and transportation.

Future applications of artificial intelligence (AI), running at any moderate degree of scale, will depend on supercomputing.

Thanks to the potential of HPC, countries like the US, China, France, Germany, Japan, and Russia have created national-level supercomputing strategies and are investing substantial resources in these programmes. These are the nations with which India has to compete in its bid to become a centre for scientific and business excellence.

Yet the Top500 list of the world's fastest supercomputers counts fewer than five in the country.

A pertinent question here is whether it makes economic sense for India to invest in expensive technology like supercomputers. Can't we make do with something more frugal? After all, we launched our Mangalyaan Mars Orbiter Mission with a budget of $73 million, and we almost made it to the moon's south pole, where no country has ever gone before, for less than $150 million.

India is not typically considered a pioneer or leader when it comes to adopting newer technologies. While it has the largest number of IT professionals in the world, it is a laggard in adopting innovation.

By harnessing the power of supercomputing, there is an opportunity to reverse this trend. India has reached a stage where it has the will and wherewithal to provide better lives to its citizens. It wants to enhance the impact of its welfare programmes by formulating the right schemes for the right beneficiaries in the right parts of the country. It wants to improve its prediction of cyclones and droughts and better plan infrastructure for its fast-expanding cities.

To realise these goals, India can no longer afford to ignore supercomputers. It needs the capacity to solve complex scientific problems which have real-life implications. It needs its workforce to have the skills to participate and lead in new innovations across various academic and industrial sectors.

To do all of this, a country needs the appropriate infrastructure, digital as well as physical. Case in point: China's Jiangsu province.

In the province, the supercomputer Sunway TaihuLight performs a range of tasks, including climate science, weather forecasting, and earth-system modelling, helping ships avoid rough seas, farmers improve their yields, and offshore drillers operate safely. TaihuLight has already led to an increase in profits and a reduction in expenses that justify its $270 million cost.

In the US, too, supercomputers are radically transforming the healthcare system. The Centers for Disease Control and Prevention (CDC) has used supercomputers to create a far more detailed model of the hepatitis C virus, a major cause of liver disease that accounts for $9 billion in healthcare costs in the US alone.

Using supercomputers, researchers have now developed a model that comprehensively simulates the human heart down to the cellular level; it could lead to a substantial reduction in heart disease, which costs the US around $200 billion each year.

On Aug. 14, 2017, the SpaceX CRS-12 rocket was launched from the Kennedy Space Center to send a Dragon spacecraft to the International Space Station (ISS) National Lab. Aboard the Dragon was a Hewlett Packard Enterprise (HPE) supercomputer, called the Spaceborne Computer, part of a year-long experiment by HPE and NASA to run a supercomputer system in space.

The goal is for the system to operate seamlessly in the harsh conditions of space for one year, roughly the amount of time it would take to travel to Mars.

If India truly wants to become a knowledge-driven, multi-trillion-dollar economy, able to support cutting-edge science that benefits its economy, its society and the businesses that operate within it, investment in supercomputing is a necessity.

Without it, India risks being surpassed on the global stage by other nations and will consequently miss the huge benefits that come from having this vitally important technology at the disposal of Indias best and brightest minds. The Modi government has big ambitions for India and supercomputing can help make them a reality.

This piece is published in collaboration with India Economic Summit. We welcome your comments at ideas.india@qz.com.

See the original post:

India is on its way to become a supercomputer power - Quartz India

Why build your own cancer-sniffing neural network when this 1.3 exaflop supercomputer can do it for you? – The Register

The world's fastest deep learning supercomputer is being used to develop algorithms that can help researchers automatically design neural networks for cancer research, according to Oak Ridge National Laboratory.

The World Health Organisation estimates that by 2025, the number of diagnosed new cases of cancer will reach 21.5 million a year, compared to the current number of roughly 18 million. Researchers at Oak Ridge National Laboratory (ORNL) and Stony Brook University reckon that this means doctors will have to analyse about 200 million biopsy scans per year.

Neural networks could help ease their workloads, however, so that doctors can focus more on patient care. There have been several studies describing how computer vision models can be trained to diagnose cancerous cells in the lung or prostate. Although these systems seem promising, they're time-consuming and expensive to build.

The team at ORNL, a federally funded research facility under the US Department of Energy, wants to change that. It has developed software that automatically spits out new neural network architectures for analysing cancer scans, so that engineers don't have to spend as much time designing the models themselves.

Known as MENNDL, the Python-based framework uses an evolutionary algorithm and neural architecture search to piece together building blocks of neural networks into new designs. Millions of new models can be generated within hours before the best one is chosen, according to a paper released on arXiv.

"The end result is a convolutional neural architecture that can look for seven different types of cancers within a pathology image," Robert Patton, first author of the study and a researcher at ORNL, told The Register.

The software is computationally intensive to run and requires a deep learning supercomputer like Summit. The ORNL machine contains 4,608 nodes, each with two IBM POWER9 CPUs and six Nvidia Volta GPUs. MENNDL can achieve 1.3 exaflops, where an exaflop is a quintillion (10^18) floating point operations per second, when running mixed-precision operations on a total of 9,216 CPUs and 27,648 GPUs.
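Those processor totals follow directly from the node count; a quick sanity check using the figures above:

```python
# Summit's aggregate processor counts, from the per-node figures above.
nodes = 4_608
cpus = nodes * 2  # two IBM POWER9 CPUs per node
gpus = nodes * 6  # six Nvidia Volta GPUs per node

print(cpus, gpus)  # 9216 27648 -- the totals used for the 1.3-exaflop figure
```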

Although millions of potential architectures are created, the best one is chosen based on the neural network's size, how computationally intensive it is to train, and its accuracy at detecting tumors in medical scans.
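The paper describes the search only at a high level; the sketch below is a purely illustrative Python rendering of an evolutionary architecture search with a fitness function balancing those three criteria. Every helper in it (the toy `random_architecture`, `mutate` and `evaluate`) is a hypothetical stand-in, not MENNDL's actual code.

```python
import random

# Toy stand-ins for MENNDL's real components: an "architecture" here is
# just a list of layer widths, and evaluation is simulated with random
# numbers instead of training on pathology patches.
def random_architecture():
    return [random.choice([16, 32, 64, 128]) for _ in range(random.randint(2, 6))]

def mutate(arch):
    child = list(arch)
    child[random.randrange(len(child))] = random.choice([16, 32, 64, 128])
    return child

def evaluate(arch):
    return random.random()  # pretend accuracy on the validation patches

def fitness(arch):
    # Balance the criteria the article names: accuracy, network size,
    # and how expensive the network is to train (proxied here by depth).
    return evaluate(arch) - 0.001 * sum(arch) - 0.01 * len(arch)

def evolve(generations=10, population_size=50, keep=5):
    population = [random_architecture() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # rank candidates
        parents = population[:keep]                 # keep the fittest
        offspring = [mutate(random.choice(parents))
                     for _ in range(population_size - keep)]
        population = parents + offspring            # next generation
    return max(population, key=fitness)

print(evolve())
```

In the real system it is the evaluation step, training each candidate on actual data, that consumes supercomputer-scale hardware; the search logic itself is comparatively cheap.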

The images in the training dataset are split into patches; 86,000 patches were manually annotated to classify the tumors, of which 64,381 contained benign cells and 21,773 contained cancerous ones. The images cover seven different cancer types: breast, colon, lung, pancreas, prostate, skin, and pelvic.

"The seven different cancer types are considered to be a single data set. As a result, MENNDL starts with some initial set of architectures, and then evolves that set toward a single network architecture that is capable of identifying seven different types," said Patton.

The winning model achieved an accuracy score of 0.839, where 1 is a perfect score, and could zip through 7,033 patches per second. For comparison, a hand-designed convolutional neural network known as Inception is slightly more accurate at 0.899 but can only analyse 433 patches per second.

"Currently, the best networks were still too slow, creating a backlog of images that needed to be analyzed. Using MENNDL, a network was created that was 16x faster and capable of keeping up with the image generation so that no backlog would be created," said Patton.

In other words, the network built by MENNDL performs comparably to a hand-built design while processing cancer scans far faster, roughly 16 times the throughput (7,033 versus 433 patches per second). The researchers believe it can bring the rate of image analysis up to the rate of image collection.

The software is still a proof of concept, however. "It is important to note that the goal of MENNDL is not to produce a fully trained model, a network that can immediately be deployed on a particular problem, but to optimize the network design to perform well on a particular dataset. The resulting network design can then be trained for a longer period of time on the dataset to produce a fully trained model," the paper said.

"Our goal with MENNDL is to not only create novel neural architectures for different applications but also to create datasets of neural architectures that could be studied to better understand how neural structures differ from one application to the next. This would give AI researchers greater insights into how neural networks operate," Scott concluded.

More:

Why build your own cancer-sniffing neural network when this 1.3 exaflop supercomputer can do it for you? - The Register