Expo 2020 Dubai: UK pavilion to champion AI and space exploration – Verdict

When you think of testbed locations for artificial intelligence (AI), you might think of the US, Japan or China, but the UK also deserves a spot on that list.

A briefing note by McKinsey Global Institute highlights that AI could add an incremental 16 per cent in economic gains to global output by 2030. The gains could be as high as 22 per cent in the UK, as it is regarded as more AI-ready compared to the global average. Indeed, the 2019 Government AI Readiness Index compiled by Oxford Insights and the International Development Research Centre (IDRC) saw the UK come second only to Singapore.

Highlighting a drive for AI and space exploration, the UK's pavilion for Expo 2020 Dubai will showcase innovations in culture, education, tourism and business.

"Every time the UK takes part in a World Expo, we try to be different," says Laura Faulkner, UK commissioner general and project director for the UK pavilion. "We have a strong creative and cultural sector within the UK, so we start with an open mind when we put out the design brief and end up with never-before-seen, thought-provoking ideas."

The UK's Department for International Trade (DIT) has chosen branding agency Avantgarde and British designer Es Devlin's design for a Poem Pavilion to represent innovation at the expo. The UK's theme for Expo 2020 is 'Innovating for a Shared Future'.

Devlin's design for the UK pavilion, which will be located in the Opportunity District, features a 20-metre-high structure consisting of rows of slats protruding outwards to form a conical shape. The facade of the structure will feature an LED display of poetry created through AI, with words contributed both by visitors to the expo and by a machine-learning system.

The design concept is inspired by one of the final projects of late English physicist Stephen Hawking, Breakthrough Message. The project saw Hawking and his colleagues launch a global competition in 2015, inviting people worldwide to consider what message the human race should communicate to alien civilisations in space.

"The pavilion is just one aspect of our participation," says Faulkner. "We plan to pose a series of questions, framing our participation around these. We will be asking, in the future, what will we wear? What will we eat? How will we create? How will we travel? How will we learn? Through these questions and conversations with the youth, government and thought-leaders, we want to look at what the future looks like for the planet."

Construction of the pavilion is being overseen by marketing firm Pico Group, with UK construction firm McLaren building the 3,417 square-metre, two-storey structure. Foundation works for the structure have been started, and Faulkner says that the UK pavilion is on track for delivery in May or June 2020. The news agency MEED estimates the cost of construction to be $18m.

The building will feature cross-laminated timber on a concrete structure. Much of the pavilion will be manufactured and assembled off-site.

The UK pavilion will not remain standing at the expo site following the conclusion of the event. Faulkner explains that a decommissioning strategy for the physical structure is being planned.

DIT has tasked UK-based independent environmental consultancy Resource Futures to lead a team to explore decommissioning possibilities.


In accordance with the expo's target for the site to be restored following the event and for each country pavilion to redeploy, recycle or return 75 per cent of its construction materials to the manufacturer, Resource Futures and collaborators will focus on ensuring that much of the material is diverted from landfill.

Promoting higher education is a key part of the dialogue at the UK pavilion, and pavilion founding partner De Montfort University (DMU) Leicester sees Expo 2020 as an opportunity to advance the global discussion about innovation.

Simon Bradbury, pro vice-chancellor dean of the faculty of arts, design and humanities at DMU, says: "We see [this] involvement as an opportunity to offer our students an unparalleled experience, whether that is going to Dubai to experience the festival, having their work in the spotlight [in front of] an audience of millions, or getting a behind-the-scenes look at how such an event is run."

Speaking to the networking potential, he notes: "The expo will allow us to make connections across the world, to be at the forefront of innovation and enterprise."

London-headquartered HSBC is the other founding partner of the pavilion. "We will be promoting the power of international connectivity at Expo 2020," says Abdulfattah Sharaf, group general manager, chief executive officer UAE and head of international for HSBC Bank Middle East.

The UK ultimately hopes to provoke insightful and forward-looking conversations at the expo.

"We are not coming to the expo with a bilateral intention to broadcast something about the UK. We are coming to the expo because the whole world is in one place. We want to engage in multilateral conversation and build lifelong partnerships," says Faulkner.


Astronaut Snoopy Rides Aboard the International Space Station – Science Times

(Photo: Snoopygrams) Astronaut Snoopy's balloon float in the Macy's Thanksgiving Parade is a part of NASA's celebration of the first moon landing.

Astronaut Snoopy is no longer just a comic book character and mascot. Last Nov. 28, the "world-famous astronaut" took flight and went aboard the International Space Station -- just in time for the Macy's Thanksgiving Parade, with a Snoopy plush doll donning an orange astronaut suit complete with the NASA logo floating aboard the orbiting laboratory.

Who is Astronaut Snoopy?

For those who are not familiar with comic book characters, Snoopy is the famous beagle from the comic strip Peanuts, created by Charles Schulz in the 1950s. Snoopy, throughout the entirety of Peanuts, has had a number of personas, including Astronaut Snoopy, which first appeared in the Macy's Thanksgiving Parade back in 1969 following the launch of the Apollo 10 command and lunar modules, which were named Charlie Brown and Snoopy.

Astronaut Snoopy has been NASA's mascot since 1968, a product of a 51-year partnership between the space agency and Peanuts, first to promote safety in human spaceflight and, in recent years, to use the famous cartoon beagle to promote science, technology, engineering, and mathematics and encourage children to be interested in those fields.

Why is Astronaut Snoopy's Appearance in the Macy's Thanksgiving Parade a Big Deal?

Astronaut Snoopy went on board the International Space Station as a part of NBC's television coverage of the celebration and was welcomed by NASA crew members Jessica Meir and Christina Koch -- the astronauts who carried out the first all-woman spacewalk (READ: Christina Koch and Jessica Meir Execute First All-Woman Spacewalk). In the broadcast, Meir narrates, "Snoopy has been along for space rides since the Apollo era. Today, he's floating in the Macy's parade and here in space."

Back on Earth, Astronaut Snoopy's 49-foot-tall helium balloon for the parade dons a bright orange spacesuit complete with NASA's logo and red-soled boots. This is patterned after the Orion Crew Survival System, which will be worn by astronauts on the Artemis missions to the moon and, eventually, Mars.

The 8-inch Astronaut Snoopy plushie on the ISS, which wears the same bright orange suit, was launched to the space station aboard the Northrop Grumman Cygnus cargo spacecraft last month, along with other supplies for the astronauts in the space station (READ: Space Oven is Ready for a Test Cook-Off).

However, Astronaut Snoopy's inclusion in the launch was not announced until Nov. 2, by Peanuts Worldwide, the company which handles the comic strip.

Astronaut Snoopy's presence in the Macy's Thanksgiving Parade is a part of the celebration of the first moon landing's 50th anniversary. Of course, if Snoopy is present, his best buddy Woodstock will surely be attending. The Macy's Thanksgiving Parade also featured a balloon float of Woodstock using a telescope, probably looking at Snoopy's adventures in the ISS.

The partnership between NASA and Peanuts continues not only during the Thanksgiving parade but also through the launch of a new line of toys via McDonald's Happy Meal and a new cartoon series called "Snoopy in Space," which will be available on Apple TV+. The animated series aims to engage children in space exploration and activities.

Lastly, Astronaut Snoopy's presence on the ISS reminds people of the upcoming anniversary of the first-ever crew to reside on board, in November 2000. The Astronaut Snoopy plush doll was created by Hallmark.


Doubting The AI Mystics: Dramatic Predictions About AI Obscure Its Concrete Benefits – Forbes

Digital Human Brain Covered with Networks

Artificial intelligence is advancing rapidly. In a few decades machines will achieve superintelligence and become self-improving. Soon after that happens we will launch a thousand ships into space. These probes will land on distant planets, moons, asteroids, and comets. Using AI and terabytes of code, they will then nanoassemble local particles into living organisms. Each probe will, in fact, contain the information needed to create an entire ecosystem. Thanks to AI and advanced biotechnology, the species in each place will be tailored to their particular plot of rock. People will thrive in low temperatures, dim light, high radiation, and weak gravity. Humanity will become an incredibly elastic concept. In time our distant progeny will build megastructures that surround stars and capture most of their energy. Then the power of entire galaxies will be harnessed. Then life and AI, long a common entity by this point, will construct a galaxy-sized computer. It will take a mind that large about a hundred-thousand years to have a thought. But those thoughts will pierce the veil of reality. They will grasp things as they really are. All will be one. This is our destiny.

Then again, maybe not.

There are, of course, innumerable reasons to reject this fantastic tale out of hand. Here's a quick and dirty one built around Copernicus's discovery that we are not the center of the universe. Most times, places, people, and things are average. But if sentient beings from Earth are destined to spend eons multiplying and spreading across the heavens, then those of us alive today are special. We are among the very few of our kind to live in our cosmic infancy, confined in our planetary cradle. Because we probably are not special, we probably are not at an extreme tip of the human timeline; we're likely somewhere in the broad middle. Perhaps a hundred-billion modern humans have existed, across a span of around 50,000 years. To claim in the teeth of these figures that our species is on the cusp of spending millions of years spreading trillions of individuals across this galaxy and others, you must engage in some wishful thinking. You must embrace the notion that we today are, in a sense, back at the center of the universe.

It is in any case more fashionable to speculate about imminent catastrophes. Technology again looms large. In the gray goo scenario, runaway self-replicating nanobots consume all of the Earth's biomass. Thinking along similar lines, philosopher Nick Bostrom imagines an AI-enhanced paperclip machine that, ruthlessly following its prime directive to make paperclips, liquidates mankind and converts the planet into a giant paperclip mill. Elon Musk, when he discusses this hypothetical, replaces paperclips with strawberries, so that he can worry about strawberry fields forever. What Bostrom and Musk are driving at is the fear that an advanced AI being will not share our values. We might accidentally give it a bad aim (e.g., paperclips at all costs). Or it might start setting its own aims. As Stephen Hawking noted shortly before his death, a machine that sees your intelligence the way you see a snail's might decide it has no need for you. Instead of using AI to colonize distant planets, we will use it to destroy ourselves.

When someone mentions AI these days, she is usually referring to deep neural networks. Such networks are far from the only form of AI, but they have been the source of most of the recent successes in the field. A deep neural network can recognize a complex pattern without relying on a large body of pre-set rules. It does this with algorithms that loosely mimic how a human brain tunes neural pathways.

The neurons, or units, in a deep neural network are layered. The first layer is an input layer that breaks incoming data into pieces. In a network that looks at black-and-white images, for instance, each of the first layer's units might link to a single pixel. Each input unit in this network will translate its pixel's grayscale brightness into a number. It might turn a white pixel into zero, a black pixel into one, and a gray pixel into some fraction in between. These numbers will then pass to the next layer of units. Each of the units there will generate a weighted sum of the values coming in from several of the previous layer's units. The next layer will do the same thing to that second layer, and so on through many layers more. The deeper the layer, the more pixels accounted for in each weighted sum.

An early-layer unit will produce a high weighted sum (it will fire, like a neuron does) for a pattern as simple as a black pixel above a white pixel. A middle-layer unit will fire only when given a more complex pattern, like a line or a curve. An end-layer unit will fire only when the pattern (or, rather, the weighted sums of many other weighted sums) presented to it resembles a chair or a bonfire or a giraffe. At the end of the network is an output layer. If one of the units in this layer reliably fires only when the network has been fed an image with a giraffe in it, the network can be said to recognize giraffes.

A deep neural network is not born recognizing objects. The network just described would have to learn from pre-labeled examples. At first the network would produce random outputs. Each time the network did this, however, the correct answers for the labeled image would be run backward through the network. An algorithm would be used, in other words, to move the network's unit weighting functions closer to what they would need to be to recognize a given object. The more samples a network is fed, the more finely tuned and accurate it becomes.
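
To make the mechanics concrete, here is a deliberately tiny sketch in Python of the two ideas just described: units that compute weighted sums layer by layer, and a feedback step that nudges the weights toward the correct answer. The layer sizes, the sigmoid activation, and the learning rate are illustrative assumptions, not the design of any production image recognizer.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A tiny two-layer network: 4 "pixels" in, 3 hidden units, 1 output unit.
W1 = rng.normal(size=(4, 3))   # weights feeding the hidden layer
W2 = rng.normal(size=(3, 1))   # weights feeding the output unit

def forward(pixels):
    hidden = sigmoid(pixels @ W1)   # each hidden unit forms a weighted sum of the inputs
    output = sigmoid(hidden @ W2)   # the output unit forms a weighted sum of the hidden units
    return hidden, output

# One labeled example: a pattern of grayscale values and its correct answer (1 = "giraffe").
x = np.array([0.0, 1.0, 0.2, 0.9])
y = 1.0

# Feedback loop: run the error backward and nudge the weights toward the right answer.
for _ in range(1000):
    hidden, out = forward(x)
    delta_out = (out - y) * out * (1 - out)                 # how to adjust the output unit
    delta_hidden = (delta_out * W2[:, 0]) * hidden * (1 - hidden)
    W2 -= 0.5 * np.outer(hidden, delta_out)                 # corrections flow backward
    W1 -= 0.5 * np.outer(x, delta_hidden)

print(forward(x)[1])   # after tuning, the output fires close to 1 for this pattern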

Some deep neural networks do not need spoon-fed examples. Say you want a program equipped with such networks to play chess. Give it the rules of the game, instruct it to seek points, and tell it that a checkmate is worth a hundred points. Then have it use a Monte Carlo method to randomly simulate games. Through trial and error, the program will stumble on moves that lead to a checkmate, and then on moves that lead to moves that lead to a checkmate, and so on. Over time the program will assign value to moves that simply tend to lead toward a checkmate. It will do this by constantly adjusting its networks' unit weighting functions; it will just use points instead of correctly labeled images. Once the networks are trained, the program can win discrete contests in much the way it learned to play in the first place. At each of its turns, the program will simulate games for each potential move it is considering. It will then choose the move that does best in the simulations. Thanks to constant fine-tuning, even these in-game simulations will get better and better.
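
The simulation half of that recipe can be boiled down to a toy sketch: for each candidate move, play out many random games and prefer the move whose playouts score best. A self-contained race-to-21 game stands in for chess here, and the sketch deliberately omits the trained networks that guide and refine the real program's simulations; it illustrates bare Monte Carlo move selection only.

import random

TARGET = 21   # toy game: players alternately add 1, 2 or 3; whoever reaches exactly 21 wins

def legal_moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

def random_playout(total, my_turn):
    """Finish the game with random moves for both sides; return 1 if 'we' win, else 0."""
    while True:
        total += random.choice(legal_moves(total))
        if total == TARGET:
            return 1 if my_turn else 0
        my_turn = not my_turn

def choose_move(total, n_sims=2000):
    """Monte Carlo move selection: simulate random games after each candidate move
    and pick the move whose playouts win most often."""
    best_move, best_score = None, -1.0
    for move in legal_moves(total):
        if total + move == TARGET:
            return move                      # immediate win, no simulation needed
        wins = sum(random_playout(total + move, my_turn=False) for _ in range(n_sims))
        if wins / n_sims > best_score:
            best_move, best_score = move, wins / n_sims
    return best_move

print(choose_move(10))   # an estimate of the most promising next move from a total of 10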

There is a chess program that operates more or less this way. It is called AlphaZero, and at present it is the best chess player on the planet. Unlike other chess supercomputers, it has never seen a game between humans. It learned to play by spending just a few hours simulating moves with itself. In 2017 it played a hundred games against Stockfish 8, one of the best chess programs to that point. Stockfish 8 examined 70 million moves per second. AlphaZero examined only 80,000. AlphaZero won 28 games, drew 72, and lost zero. It sometimes made baffling moves (to humans) that turned out to be masterstrokes. AlphaZero is not just a chess genius; it is an alien chess genius.

AlphaZero is at the cutting edge of AI, and it is very impressive. But its success is not a sign that AI will take us to the stars (or enslave us) any time soon. In Artificial Intelligence: A Guide For Thinking Humans, computer scientist Melanie Mitchell makes the case for AI sobriety. AI currently excels, she notes, only when there are clear rules, straightforward reward functions (for example, rewards for points gained or for winning), and relatively few possible actions (moves). Take IBM's Watson program. In 2011 it crushed the best human competitors on the quiz show Jeopardy!, leading IBM executives to declare that its successors would soon be making legal arguments and medical diagnoses. It has not worked out that way. "Real-world questions and answers in real-world domains," Mitchell explains, "have neither the simple short structure of Jeopardy! clues nor their well-defined responses."

Even in the narrow domains that most suit it, AI is brittle. A program that is a chess grandmaster cannot compete on a board with a slightly different configuration of squares or pieces. Unlike humans, Mitchell observes, none of these programs can transfer anything it has learned about one game to help it learn a different game. Because the programs cannot generalize or abstract from what they know, they can function only within the exact parameters in which they have been trained.

A related point is that current AI does not understand even basic aspects of how the world works. Consider this sentence: "The city council refused the demonstrators a permit because they feared violence." Who feared violence, the city council or the demonstrators? Using what she knows about bureaucrats, protestors, and riots, a human can spot at once that the fear resides in the city council. When AI-driven language-processing programs are asked this kind of question, however, their responses are little better than random guesses. "When AI can't determine what 'it' refers to in a sentence," Mitchell writes, quoting computer scientist Oren Etzioni, "it's hard to believe that it will take over the world."

And it is not accurate to say, as many journalists do, that a program like AlphaZero learns by itself. Humans must painstakingly decide how many layers a network should have, how much incoming data should link to each input unit, how fast data should aggregate as it passes through the layers, how much each unit weighting function should change in response to feedback, and much else. These settings and designs, adds Mitchell, "must typically be decided anew for each task a network is trained on." It is hard to see nefarious unsupervised AI on the horizon.

The doom camp (AI will murder us) and the rapture camp (it will take us into the mind of God) share a common premise. Both groups extrapolate from past trends of exponential progress. Moore's law, which is not really a law but an observation, says that the number of transistors we can fit on a computer chip doubles every two years or so. This enables computer processing speeds to increase at an exponential rate. The futurist Ray Kurzweil asserts that this trend of accelerating improvement stretches back to the emergence of life, the appearance of eukaryotic cells, and the Cambrian Explosion. Looking forward, Kurzweil sees an AI singularity, the rise of self-improving machine superintelligence, on the trendline around 2045.

The political scientist Philip Tetlock has looked closely at whether experts are any good at predicting the future. The short answer is that they're terrible at it. But they're not hopeless. Borrowing an analogy from Isaiah Berlin, Tetlock divides thinkers into hedgehogs and foxes. A hedgehog knows one big thing, whereas a fox knows many small things. A hedgehog tries to fit what he sees into a sweeping theory. A fox is skeptical of such theories. He looks for facts that will show he is wrong. A hedgehog gives answers and says "moreover" a lot. A fox asks questions and says "however" a lot. Tetlock has found that foxes are better forecasters than hedgehogs. The more distant the subject of the prediction, the more the hedgehog's performance lags.

Using a theory of exponential growth to predict an impending AI singularity is classic hedgehog thinking. It is a bit like basing a prediction about human extinction on nothing more than the Copernican principle. Kurzweil's vision of the future is clever and provocative, but it is also hollow. It is almost as if huge obstacles to general AI will soon be overcome because the theory says so, rather than because the scientists on the ground will perform the necessary miracles. Gordon Moore himself acknowledges that his law will not hold much longer. (Quantum computers might pick up the baton. We'll see.) Regardless, increased processing capacity might be just a small piece of what's needed for the next big leaps in machine thinking.

When at Thanksgiving dinner you see Aunt Jane sigh after Uncle Bob tells a blue joke, you can form an understanding of what Jane thinks about what Bob thinks. For that matter, you get the joke, and you can imagine analogous jokes that would also annoy Jane. You can infer that your cousin Mary, who normally likes such jokes but is not laughing now, is probably still angry at Bob for spilling the gravy earlier. You know that although you can't see Bob's feet, they exist, under the table. No deep neural network can do any of this, and it's not at all clear that more layers or faster chips or larger training sets will close the gap. We probably need further advances that we have only just begun to contemplate. "Enabling machines to form humanlike conceptual abstractions," Mitchell declares, "is still an almost completely unsolved problem."

There has been some concern lately about the demise of the corporate laboratory. Mitchell gives the impression that, at least in the technology sector, the corporate basic-research division is alive and well. Over the course of her narrative, labs at Google, Microsoft, Facebook, and Uber make major breakthroughs in computer image recognition, decision making, and translation. In 2013, for example, researchers at Google trained a network to create vectors among a vast array of words. A vector set of this sort enables a language-processing program to define and use a word based on the other words with which it tends to appear. The researchers put their vector set online for public use. Google is in some ways the protagonist of Mitchell's story. It is now an "applied AI company," in Mitchell's words, that has placed machine thinking at the center of diverse products, services, and blue-sky research.
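
As a toy illustration of the idea behind those vectors (defining a word by the company it keeps), the sketch below builds crude co-occurrence vectors from a four-sentence corpus and compares them with cosine similarity. It is a miniature of the general technique under obvious simplifications, not the 2013 Google system, which trained far richer vectors on vastly more text.

import numpy as np

# Tiny corpus; real vector sets are trained on billions of words.
sentences = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks fell on the news",
    "markets fell after the news",
]
vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}

# Co-occurrence counts within a sentence serve as crude word vectors.
vectors = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    words = s.split()
    for w in words:
        for c in words:
            if w != c:
                vectors[index[w], index[c]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Words used in similar contexts end up with similar vectors.
print(cosine(vectors[index["cat"]], vectors[index["dog"]]))      # relatively high
print(cosine(vectors[index["cat"]], vectors[index["markets"]]))  # noticeably lower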

Google has hired Ray Kurzweil, a move that might be taken as an implicit endorsement of his views. It is pleasing to think that many Google engineers earnestly want to bring on the singularity. The grand theory may be illusory, but the treasures produced in pursuit of it will be real.


AWS wants to reinvent the supercomputer, starting with the network – ZDNet


Amazon Web Services wants to reinvent high performance computing (HPC) and, according to VP of AWS global infrastructure Peter DeSantis, it all starts with the network.

Speaking at his Monday Night Live keynote, DeSantis said AWS has been working for the last decade to make supercomputing in the cloud a possibility.

"Over the past year we've seen this goal become reality," he said.

According to DeSantis, there's no precise definition of an HPC workload, but he said the one constant is that it is way too big to fit on a single server.

"What really differentiates HPC workloads is the need for high performance networking so those servers can work together to solve problems," he said, talking on the eve of AWS re:Invent about what the focus of the cloud giant's annual Las Vegas get together will be.

"Do I care about HPC? I hope so, because HPC impacts literally every aspect of our lives the big, hard problems in science and engineering."

See also: How higher-ed researchers leverage supercomputers in the fight for funding (TechRepublic)

DeSantis explained that typically in supercomputing, each server works out a portion of the problem, and then all the servers share the results with each other.

"This information exchange allows the servers to continue doing their work," he said. "The need for tight coordination puts significant pressure on the network."

To scale these HPC workloads effectively, DeSantis said a high-performance, low-latency network is required.
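
As a minimal sketch of that compute-then-exchange pattern (my own illustration, not AWS code), here is how it looks with the MPI message-passing library, where each process works on its own slice of a problem and a collective operation combines the partial results over the network:

# Run with e.g.: mpirun -n 4 python partial_sums.py (requires an MPI installation and mpi4py)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # which server/process am I?
size = comm.Get_size()      # how many are working together?

# Each process handles its own portion of the problem...
n = 1_000_000
local = sum(range(rank, n, size))

# ...then all processes exchange results over the network so everyone can carry on.
total = comm.allreduce(local, op=MPI.SUM)

if rank == 0:
    print(total)            # equals sum(range(n)), computed cooperatively

The allreduce step is exactly the kind of tight, latency-sensitive coordination DeSantis is describing: every process waits on the network before it can continue.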

"If you look really closely at a modern supercomputer, it really is a cluster of servers with a purpose-built, dedicated network network provides specialised capabilities to help run HPC applications efficiently," he said.

In touting the cloud as the best place to run HPC workloads, DeSantis said the "other" problem with physical supercomputers is they're custom built, which he said means they're expensive and take years to procure and stand up.

"One of the benefits of cloud computing is elasticity," he continued.

Another problem AWS wants to fix with supercomputing in the cloud is the democratisation element, with DeSantis saying one issue is getting access to a supercomputer.

"Usually only the high-value applications have access to the supercomputer," he said.

"With more access to low-cost supercomputing we could have safer cars we could have more accurate forecasting, we could have better treatment for diseases, and we can unleash innovation by giving everybody [access].

"If we want to reinvent high performance computing, we have to reinvent supercomputers."

AWS also wants to reinvent machine learning infrastructure.

"Machine learning is quickly becoming an integral part of every application," DeSantis said.

However, the optimal infrastructure for the two components of machine learning -- training and inference -- is very different.

"A good machine learning dataset is big, and they're getting bigger and training involves doing multiple passes through your training data," De Santis said.

"We're excited in investments we've made in HPC and it's helping us with machine learning."

Earlier on Monday, Formula 1 announced it had partnered with AWS to carry out simulations that it says have resulted in the car design for the 2021 racing season, touting the completion of a Computational Fluid Dynamics (CFD) project that simulates the aerodynamics of cars while racing.

The CFD project used over 1,150 compute cores to run detailed simulations comprising over 550 million data points that model the impact of one car's aerodynamic wake on another.

Asha Barbaschow travelled to re:Invent as a guest of AWS.


Idaho’s Third Supercomputer Coming to Collaborative Computing Center – HPCwire

IDAHO FALLS, Idaho, Dec. 5, 2019 – A powerful new supercomputer arrived this week at Idaho National Laboratory's Collaborative Computing Center. The machine has the power to run complex modeling and simulation applications, which are essential to developing next-generation nuclear technologies.

Named after a central Idaho mountain range, Sawtooth arrives in December and will be available to users early next year. The $19.2 million system ranks #37 on the 2019 Top 500 list of the fastest supercomputers in the world. That is the highest ranking reached by an INL supercomputer. Of 102 new systems added to the list in the past six months, only three were faster than Sawtooth.

It will be able to crunch much more complex mathematical calculations at approximately six times the speed of Falcon and Lemhi, INL's current systems.

The boost in computing power will enable researchers at INL and elsewhere to simulate new fuels and reactor designs, greatly reducing the time, resources and funding needed to transition advanced nuclear technologies from the concept phase into the marketplace.

Supercomputing reduces the need to build physical experiments to test every hypothesis, as was the process used to develop the majority of technologies used in currently operating reactors. By using simulations to predict how new fuels and designs will perform in a reactor environment, engineers can select only the most promising technologies for the real-world experiments, saving time and money.

INL's ability to model new nuclear technologies has become increasingly important as nations strive to meet growing energy needs while minimizing emissions. Today, there are about 450 nuclear power reactors operating in 30 countries plus Taiwan. These reactors produce approximately 10% of the world's electricity and 60% of America's carbon-free electricity. According to the World Nuclear Association, 15 countries are currently building about 50 power reactors.

John Wagner, the associate laboratory director for INL's Nuclear Science and Technology directorate, said Sawtooth plays an important role in developing and deploying advanced nuclear technologies and is a key capability for the National Reactor Innovation Center (NRIC).

In August, the U.S. Department of Energy designated INL to lead NRIC, which was established to provide developers the resources to test, demonstrate and assess performance of new nuclear technologies, critical steps that must be completed before they are available commercially.

"With advanced modeling and simulation and the computing power now available, we expect to be able to dramatically shorten the time it takes to test, manufacture and commercialize new nuclear technologies," Wagner said. "Other industries and organizations, such as aerospace, have relied on modeling and simulation to bring new technologies to market much faster without compromising safety and performance."

Sawtooth is funded by the DOE's Office of Nuclear Energy through the Nuclear Science User Facilities program. It will provide computer access to researchers at INL, other national laboratories, industry and universities. Idaho's three research universities will be able to access Sawtooth and INL's other supercomputers remotely via the Idaho Regional Optical Network (IRON), an ultra-high-speed fiber optic network.

"This system represents a significant increase in computing resources supporting nuclear energy research and development and will be the primary system for DOE's nuclear energy modeling and simulation activities," said Eric Whiting, INL's division director for Advanced Scientific Computing. "It will help guide the future of nuclear energy."

Sawtooth, with its nearly 100,000 processors, is being installed in the new 67,000-square-foot Collaborative Computing Center, which opened in October. The new facility was designed to be the heart of modeling and simulation work for INL as well as provide floor space, power and cooling for systems such as Sawtooth. Falcon and Lemhi, the lab's current supercomputing systems, also are slated to move to this new facility.

About INL

INL is one of the U.S. Department of Energy's (DOE's) national laboratories. The laboratory performs work in each of DOE's strategic goal areas: energy, national security, science and environment. INL is the nation's center for nuclear energy research and development. Day-to-day management and operation of the laboratory is the responsibility of Battelle Energy Alliance. See more INL news at www.inl.gov. Follow @INL on Twitter or visit our Facebook page at www.facebook.com/IdahoNationalLaboratory.

Source: Idaho National Laboratory


The rise and fall of the PlayStation supercomputers – The Verge

Dozens of PlayStation 3s sit in a refrigerated shipping container on the University of Massachusetts Dartmouth's campus, sucking up energy and investigating astrophysics. It's a popular stop for tours trying to sell the school to prospective first-year students and their parents, and it's one of the few living legacies of a weird science chapter in PlayStation's history.

Those squat boxes, hulking on entertainment systems or dust-covered in the back of a closet, were once coveted by researchers who used the consoles to build supercomputers. With the racks of machines, the scientists were suddenly capable of contemplating the physics of black holes, processing drone footage, or winning cryptography contests. It only lasted a few years before tech moved on, becoming smaller and more efficient. But for that short moment, some of the most powerful computers in the world could be hacked together with code, wire, and gaming consoles.

Researchers had been messing with the idea of using graphics processors to boost their computing power for years. The idea is that the same power that made it possible to render Shadow of the Colossus' grim storytelling was also capable of doing massive calculations if researchers could configure the machines the right way. If they could link them together, suddenly, those consoles or computers started to be far more than the sum of their parts. This was cluster computing, and it wasn't unique to PlayStations; plenty of researchers were trying to harness computers to work as a team, trying to get them to solve increasingly complicated problems.

The game consoles entered the supercomputing scene in 2002 when Sony released a kit called Linux for the PlayStation 2. "It made it accessible," Craig Steffen said. "They built the bridges, so that you could write the code, and it would work." Steffen is now a senior research scientist at the National Center for Supercomputing Applications (NCSA). In 2002, he had just joined the group and started working on a project with the goal of buying a bunch of PS2s and using the Linux kits to hook them (and their Emotion Engine central processing units) together into something resembling a supercomputer.

They hooked up between 60 and 70 PlayStation 2s, wrote some code, and built out a library. "It worked okay, it didn't work superbly well," Steffen said. There were technical issues with the memory: two specific bugs that his team had no control over.

"Every time you ran this thing, it would cause the kernel on whatever machine you ran it on to kind of go into this weird unstable state and it would have to be rebooted, which was a bummer," Steffen said.

They shut the project down relatively quickly and moved on to other questions at the NCSA. Steffen still keeps one of the old PS2s on his desk as a memento of the program.

But that's not where PlayStation's supercomputing adventures met their end. The PS3 entered the scene in late 2006 with powerful hardware and an easier way to load Linux onto the devices. Researchers would still need to link the systems together, but suddenly, it was possible for them to imagine linking together all of those devices into something that was a game-changer instead of just a proof-of-concept prototype.

That's certainly what black hole researcher Gaurav Khanna was imagining over at UMass Dartmouth. "Doing pure period simulation work on black holes doesn't really typically attract a lot of funding, it's just because it doesn't have too much relevance to society," Khanna said.

Money was tight, and it was getting tighter. So Khanna and his colleagues were brainstorming, trying to think of solutions. One of the people in his department was an avid gamer and mentioned the PS3's Cell processor, which was made by IBM. A similar kind of chip was being used to build advanced supercomputers. "So we got kind of interested in it, you know, is this something interesting that we could misuse to do science?" Khanna says.

Inspired by the specs of Sony's new machine, the astrophysicist started buying up PS3s and building his own supercomputer. It took Khanna several months to get the code into shape and months more to clean up his program into working order. He started with eight, but by the time he was done, he had his own supercomputer, pieced together out of 176 consoles and ready to run his experiments, with no jockeying for space or paying other researchers to run his simulations of black holes. Suddenly, he could run complex computer models or win cryptography competitions at a fraction of the cost of a more typical supercomputer.

Around the same time, other researchers were having similar ideas. A group in North Carolina also built a PS3 supercomputer in 2007, and a few years later, at the Air Force Research Laboratory in New York, computer scientist Mark Barnell started working on a similar project called the Condor Cluster.

The timing wasn't great. Barnell's team proposed the project in 2009, just as Sony was shifting toward the pared-back PS3 slim, which didn't have the capability to run Linux, unlike the original PS3. After a hack, Sony even issued a firmware update that pulled OpenOS, the system that allowed people to run Linux, from existing PS3 systems. That made finding useful consoles even harder. The Air Force had to convince Sony to sell it the un-updated PS3s that the company was pulling from shelves, which, at the time, were sitting in a warehouse outside Chicago. It took many meetings, but eventually, the Air Force got what it was looking for, and in 2010, the project had its big debut.

Running on more than 1,700 PS3s that were connected by five miles of wire, the Condor Cluster was huge, dwarfing Khanna's project, and it was used to process images from surveillance drones. During its heyday, it was the 35th fastest supercomputer in the world.

But none of this lasted long. Even while these projects were being built, supercomputers were advancing, becoming more powerful. At the same time, gaming consoles were simplifying, making them less useful to science. The PlayStation 4 outsold both the original PlayStation and the Wii, nearing the best-selling status currently held by the PS2. But for researchers, it was nearly useless. Like the slimmer version of the PlayStation 3 released before it, the PS4 can't easily be turned into a cog for a supercomputing machine. "There's nothing novel about the PlayStation 4, it's just a regular old PC," Khanna says. "We weren't really motivated to do anything with the PlayStation 4."

The era of the PlayStation supercomputer was over.

The one at UMass Dartmouth is still working, humming with life in that refrigerated shipping container on campus. The UMass Dartmouth machine is smaller than it was at its peak of about 400 PlayStation 3s. Parts of it have been cut out and repurposed. Some are still working together in smaller supercomputers at other schools; others have broken down or been lost to time. Khanna has since moved on to trying to link smaller, more efficient devices together into his next-generation supercomputer. He says the Nvidia Shield devices he's working with now are about 50 times more efficient than the already-efficient PS3.

It's the Air Force's supercluster of super consoles that had the most star-studded afterlife. When the program ended about four years ago, some consoles were donated to other programs, including Khanna's. But many of the old consoles were sold off as old inventory, and a few hundred were snapped up by people working with the TV show Person of Interest. In a ripped-from-the-headlines move, the consoles made their silver screen debut in the show's season 5 premiere, playing, wait for it, a supercomputer made of PlayStation 3s.

"It's all Hollywood," Barnell said of the script, "but the hardware is actually our equipment."

Correction, 7:05 PM ET: Supercomputer projects needed the original PS3, not the PS3 Slim, because Sony had removed Linux support from the console in response to hacks which later led to a class-action settlement. This article originally stated that it was because the PS3 Slim was less powerful. We regret the error.


Premier League table: talkSPORT Super Computer predicts where every club will finish in 2019/20 campaign – talkSPORT.com

With managers falling on their swords left, right, and centre, the Premier League is proving to be just as exciting as ever.

The troubling seasons the likes of Tottenham, Manchester United, and Arsenal have had, along with the success of Leicester and Sheffield United mean this could end up being an incredible era-defining year in the English top-flight.


And the rest of the campaign certainly seems set to be enthralling with December seeing Manchester and Merseyside derbies along with more humdingers you can hear live on the talkSPORT Network.

But how will it all end?

We booted up the talkSPORT Super Computer to find out just what is going to happen.

You can see the results and the predicted Premier League table below


Saturday is GameDay on talkSPORT as we bring you THREE live Premier League commentaries across our network


A Success on Arm for HPC: We Found a Fujitsu A64fx Wafer – AnandTech

When speaking about Arm in the enterprise space, the main angle for discussion is on the CPU side. Having a high-performance SoC at the heart of the server has been a key goal for many years, and we have had players such as Amazon, Ampere, Marvell, Qualcomm, Huawei, and others play for the server market. The other angle to attack is for co-processors and accelerators. Here we have one main participant: Fujitsu. We covered the A64FX when the design was disclosed at Hot Chips last year, with its super high cache bandwidth, and it will be available on a simple PCIe card. The main end-point for a lot of these cards will be the Fugaku / Post-K supercomputer in Japan, where we expect it to hit one of the top numbers on the TOP500 supercomputer list next year.

After the design disclosure last year at Hot Chips, at Supercomputing 2018 we saw an individual chip on display. This year at Supercomputing 2019, we found a wafer.

I just wanted to post some photos. Enjoy.

The A64FX is the main recipient of the Arm Scalable Vector Extensions, new to Arm v8.2, which in this instance gives 48 computing cores with a 512-bit wide SIMD powered by 32 GiB of HBM2. Inside the chip is a custom network, and externally the chip is connected via a Tofu interconnect (6D/Torus), and the chip provides 2.7 TFLOPs of DGEMM performance. The chip itself is built on TSMC 7nm and has 8.786 billion transistors, but only 594 pins. Peak memory bandwidth is 1 TB/s.
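
As a back-of-the-envelope check on where that headline figure comes from: 48 cores, each issuing 512-bit FP64 vectors, multiply out to roughly the quoted number. The clock speed and the assumption of two fused multiply-add pipelines per core are not given in the article, so treat this as a rough sanity check rather than Fujitsu's official breakdown:

# Back-of-the-envelope peak FP64 throughput for a 48-core, 512-bit SIMD chip.
cores = 48
lanes = 512 // 64            # FP64 values per 512-bit vector
pipes = 2                    # assumed: two SIMD pipelines per core
flops_per_fma = 2            # a fused multiply-add counts as two floating-point operations
clock_ghz = 1.8              # assumed clock; not stated in the article

tflops = cores * lanes * pipes * flops_per_fma * clock_ghz / 1000
print(round(tflops, 2))      # ~2.76, in the same ballpark as the quoted 2.7 TFLOPs DGEMM figure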

The chip is built for both high performance, high throughput, and high performance per watt, supporting FP64 through to INT8. The L1 data cache is designed for sustained throughput, and power management is tightly controlled on chip. Either way you slice it, this chip is mightily impressive. We even saw HPE deploy two of these chips in a single half-width node.


Why India Needed A 100PF Supercomputer To Help With Weather Forecasting – Analytics India Magazine

India has been dealing with the changing climatic conditions and needs reliable forecasts for extreme weather conditions like droughts, floods, cyclones, lightning and air quality, among others. On November 28, India announced that over the next 2 years it plans to augment its existing supercomputing capacity to 100 petaflops for accurate weather forecasting. This announcement was made at the 4-day workshop on Prediction skill of extreme Precipitation events and tropical cyclones: Present Status and future prospect (IP4) and Annual Climate Change.

Why use Supercomputers?

With 70% of India's livelihoods still depending on agriculture, increasing the accuracy of weather prediction becomes essential. To understand how supercomputers help in weather prediction, we have to understand a little about how weather forecasting works.

Weather forecasting uses what we call weather forecast models. These models are the closest thing to a time machine for meteorologists. A weather forecasting model is a computer programme which simulates what atmospheric conditions could look like in the foreseeable future. These models solve groups of mathematical equations that govern the climatic conditions, and these equations approximate the atmospheric changes before they take place. There are two types: statistical and dynamic models. The statistical models haven't been providing reliable results, so a dynamic model has been developed for Indian conditions.

However, to run such dynamic models and provide such forecasting, enhanced supercomputing power is necessary.

Every hour, weather satellites, weather balloons, ocean buoys, and surface weather stations around the world record billions of data points. This large volume of data is stored, processed and analysed, and that's where supercomputers come in. The meteorologists could solve the governing equations by themselves, but the equations are so complex that it would take months to solve them by hand. In contrast, supercomputers can solve them in a matter of hours. This process of using a model's equations to forecast the weather conditions numerically is called Numerical Weather Prediction.
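
To make "solving the governing equations numerically" concrete, here is a deliberately tiny sketch: a one-dimensional advection equation stepped forward with a finite-difference scheme. It illustrates the numerical idea only, not any agency's operational model, and the grid, wind speed and time step are arbitrary choices:

import numpy as np

# Toy "atmosphere": a single quantity carried along by a constant wind, governed by
# the 1D advection equation du/dt + c * du/dx = 0 and stepped forward on a grid.
nx, dx, dt, c = 100, 1.0, 0.5, 1.0                 # grid points, spacing, time step, wind speed
u = np.exp(-0.01 * (np.arange(nx) - 20.0) ** 2)    # initial blob of, say, moisture

for _ in range(50):                                 # march the state forward in time
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])  # upwind finite-difference step

print(int(u.argmax()))   # the blob's peak has drifted downwind by roughly c * t = 25 grid points

Real models do the same kind of time-stepping over the whole globe, in three dimensions, for many interacting variables, which is why the amount of computing power matters so much.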

Some of the world's most famous weather forecast and climate monitoring models include:

When the supercomputer gives the output, the forecaster takes this information into consideration along with their knowledge of weather processes, personal experience and familiarity with nature's unpredictability to issue the forecast.

Now, what is this 100 PF Supercomputer that India is planning to use?

First, 100 PF stands for 100 petaflops; just as there are units for measuring the speed of a device, a supercomputer's speed is measured in FLOPS (floating-point operations per second).

A fun way to understand what a PF is: if a 1 PF machine took 1 second to prove a complex mathematical theorem, then you would take 31,688,765 years to solve it.
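
The figure follows from simple arithmetic: a 1 PF machine performs 10^15 floating-point operations every second, so doing that work by hand, at one calculation per second (the assumption implicit in the comparison), would take about 10^15 seconds. The exact number of years depends only on the year length used:

operations = 10 ** 15                  # what a 1 PF machine does in a single second
seconds_per_year = 365.25 * 24 * 3600
print(operations / seconds_per_year)   # roughly 31.7 million years at one calculation per second;
                                       # divide by 1,000 to get the ~31,689-year figure quoted
                                       # for a 1 TF system further down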

These supercomputers are usually hard to manufacture and have an operating span of 5 years because of the extreme temperature conditions they operate in. They also need a large facility to house them because of their size. The US supercomputer Summit, which holds the top position in the supercomputer rankings with a capacity of about 148.6 PF, spreads over 5,600 sq ft, which is about the size of two tennis courts.

Regarding India, in 2018 the French company Atos won the ₹4,500 crore tender for manufacturing these supercomputers, with the other competitors being Lenovo, HP and NetWeb Technologies. Atos has a 3-year contract with India to manufacture 70 HPCs under the National Supercomputing Mission.

Over the last ten years, India has successfully upgraded its supercomputing capacity facility. Below is a list of some of the acquisitions of High-Performance Computing (HPC) systems over the years:

(A TF speed unit works the same way: to match what a 1 TF computer system can do in just one second, you'd have to perform one calculation every second for 31,688.77 years.)

Both Pratyush and Mihir were inaugurated in 2018. With Pratyush and Mihir, India moved into the top 30 (from 368th position) in the Top 500 list of HPC facilities in the world. The facility also places India in 4th position, after Japan, the UK and the USA, in terms of HPC resources for the weather and climate community.

Some notable predictions and uses of supercomputing power in India:

Dr M Rajeevan, the Union Secretary for Earth Sciences, said this week that in 2019 the government predicted five cyclones accurately. India is currently using supercomputers with a combined capacity of 6.8 PF. One can imagine how powerful the services of a 100 PF supercomputer would be, because even with this year's accurate weather predictions, some improvement is needed. Therefore, to provide precise predictions at high resolution, more supercomputing power is required. It will benefit not only India but also the neighbouring countries.



The Snapdragon 865: The new chip where everything really is new and improved – Android Central

Another year means another new version of Qualcomm's high-end Snapdragon mobile chip. It's always a good thing to see and even though it never measures up to Apple's A-series processors in raw computing numbers (and it doesn't need to because raw numbers usually don't mean anything), you know Qualcomm is going to bring its A-game. There will always be a thing or two that turns out to be a significant upgrade from last year's model.

This year, what's a significant upgrade is best described as everything. Qualcomm spent the year fighting in court and staving off buyout attempts and building a chip where everything is newer, better, stronger, and faster.

Qualcomm Snapdragon 865: Top 4 best things (and 1 bad)

The Kryo CPU cores are more powerful, yet use the same arrangement that means the Snapdragon can have good battery life. The GPU is insane and is actually built with a mind for high-end gaming first and foremost. The Camera ISP can shoot 8K video or a 200-megapixel photo. Yes, 200-megapixels. And we haven't even mentioned the new AI capabilities. This thing is for real.

CPU cores and their arrangements aren't exciting to most of us. Qualcomm has used the same basics in its Snapdragon series for a while. You'll find a combination of low-power cores, moderate-power cores, and one big honking battery-bleeding group that's sickeningly powerful and kicks in when it's needed and sleeps when it's not. This works great and nothing here has changed. What did change in the CPU was a 25% across-the-board improvement in processing power on a 7-nanometer die, so it's not going to kill your battery before lunch unless you are trying to push everything to the limit.

In 2009 the HTC Hero was released with the Qualcomm MSM7200A chip. It was the most powerful phone you could buy. 10 years later and we have super-computer chips going into our next phone.

The new Adreno GPU is a gaming-first hunk of silicon that's not only optimized for desktop-class titles (which you're going to need if you really want to put an ARM CPU in a Windows 10 laptop) but is built so that Qualcomm can work with game developers to optimize the driver and let you download it from the Play Store. That's amazing, and my favorite part of the whole announcement.

More: Qualcomm delivering Adreno 650 GPU updates via the Play Store is a huge deal for Android gaming

The Camera ISP (Image Signal Processor) in current-generation Snapdragon processors is one of the finest available. Companies like Google or Huawei may depend on AI to make great photos, but with the right camera hardware, the Qualcomm Spectra ISP can do a great job, too. And starting in 2020, it can do it in 8K videography, 200-MP photos, incorporate Dolby Vision or HDR10, and a handful of other things that the storage controller will never be able to keep up with and a measly 512GB of storage on the highest-end phones won't be able to hold. But that's not the point: Qualcomm can do it, so now it's time for other companies to step up so it can happen.

The X55 modem is going to be in every phone using the Snapdragon 865. That means world-class LTE performance, both Sub-6 and mmWave 5G, and it will be able to use both Stand-Alone and Non-Stand Alone configurations in single or dual SIM modes. The RF stuff doesn't stop there, though.

The Snapdragon 865 also has Wi-Fi 6 with Qualcomm's patented fast connect setup that saves even more battery power when using the right Wi-Fi gear, and a new phase of the aptX codec designed for voice calls that allows super-wideband transfer over Bluetooth so your Bluetooth headphones don't make it sound like you're in a sewer when you make a call.

Imagine if your phone could "hear" the difference when talking to it at home or talking to it in the car. That's contextual awareness.

Finally, as if this isn't quite enough, Qualcomm has ramped up its Hexagon Tensor Accelerator package that's built for AI. It can process at an amazing 15 trillion operations per second, which means if you are a developer who wants to integrate AI into your software, the engine that can do it has enough power. Qualcomm specifically says that real-time translations and a Sensing Hub package that can be contextually aware of its surroundings are part of the 5th generation of its AI engine and I can't wait to see what it can do in the real world.

Most people aren't going to know they have a fancy upgraded chip inside their phone or care about it as long as they can do the things they bought a smartphone for. The new Snapdragon 865 brings that and so much more that the people who can and do care will also be plenty happy with Qualcomm's offering for 2020. I know I can't wait.


Super Computer model rates Newcastle relegation probability and chances of beating Sheffield United – The Mag

Interesting overview of Newcastle United for the season and tonight's game at Sheffield United.

The super computer model predictions are based on the FiveThirtyEight revision to the Soccer Power Index, which is a rating mechanism for football teams which takes account of over half a million matches, and is based on Opta's play-by-play data.

They have analysed all Premier League matches this midweek, including the game at Bramall Lane.

Their computer model gives Sheffield United a 49% chance of an away win, it is 27% for a draw and a 24% possibility of a Newcastle win.

They also have predictions as to how the final Premier League table will look: for winning the title it is now Man City 27% and Liverpool 70%, with the rest basically nowhere and Leicester next highest at 2%.

Interesting to see how the computer model rates each club's percentage probability of relegation:

70% Norwich

65% Watford

26% Newcastle United

25% West Ham

24% Southampton

21% Villa

18% Brighton

16% Bournemouth

10% Burnley

8% Sheff Utd

8% Everton

4% Palace

3% Arsenal

2% Wolves


Regulating Nanotechnology-Enabled Health Products in EU – The National Law Review

Since 1996, Carla Hutton has monitored, researched, and written about regulatory and legislative issues that may potentially affect Bergeson & Campbell, P.C. (B&C) clients. She is responsible for creating a number of monthly and quarterly regulatory updates for B&C's clients, as well as other documents, such as chemical-specific global assessments of regulatory developments and trends. She authors memoranda for B&C clients on regulatory and legislative developments, providing information that is focused, timely and applicable to client initiatives. These tasks have proven invaluable to many clients, keeping them aware and abreast of developing issues so that they can respond in kind and prepare for the future of their business.

Ms. Hutton brings a wealth of experience and judgment to her work in federal, state, and international chemical regulatory and legislative issues, including green chemistry, nanotechnology, the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), the Toxic Substances Control Act (TSCA), Proposition 65, and the Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) program.


Nanotechnologies: almost there, what's next? – SciTech Europa

In June, SciTech Europa Quarterly travelled to Bucharest, Romania, to attend the 2019 instalment of the EuroNanoForum, a conference that brought together scientists, industrialists and policy makers to discuss cross-sectorial challenges focusing on both the industrial application of research results and future strategic research priorities in the area of nanotechnology and advanced materials in Horizon 2020 and beyond.

Amongst the numerous high-level speakers at the event, the Director of the European Commission's Joint Research Centre (JRC) Geel site and Director of JRC Directorate F: Health, Consumers & Reference Material, Dr Elke Anklam, delivered an interesting presentation during a plenary session entitled 'Almost there, what's next?', which was designed to promote discussion of the status and achievements in the nanotechnology and advanced materials areas during Horizon 2020, now in its penultimate year.

SciTech Europa Quarterly caught up with Anklam after the event to find out more about how the European Commission is supporting innovation in nanotechnologies, as well as about some of the ways in which nanotechnologies can benefit European society.

Nanotechnology underpins a wide range of new products and technologies that have a huge potential to improve our daily lives, as well as creating jobs. Nanotechnology is also an important source of innovation worldwide and Europe is of course very active in this area. However, it is not without its challenges. The need for high tech instruments and facilities that require high-end interdisciplinary expertise can often impede the translation of innovations into real products and technologies.

The European Commission implements concrete actions to tackle this, including making research and technology infrastructures available to benefit research institutions and SMEs and help them bring their innovations to the market.

DG RTD tools such as Open Innovation Test Beds, Pilot Lines, the European Strategic Forum of Research Infrastructures Roadmaps (ESFRI), and research infrastructure projects are already in place to support innovation. The JRC offers access to more than 38 of its research infrastructures to external users, including the Nanobiotechnology Laboratory, which has state-of-the-art instrumentation to perform interdisciplinary studies aiming to characterise nanomaterials, micro/nanoplastics, nanomedicines, and advanced (bio)nanomaterials, enabling researchers to complete their studies using facilities that are not available in their institutions.

Besides giving access to infrastructures, the JRC co-ordinates the European Technology Transfer Offices Circle (TTO Circle), a network aiming to bring together major public research organisations to share best practices, knowledge, and expertise, perform joint activities, and develop a common approach towards international standards for the professionalisation of technology transfer. Such an initiative creates an interesting European ecosystem that favours the technology transfer from innovation towards the market.

To summarise, creating a sustainable innovation ecosystem, including research infrastructures, technology transfer, and an active interface between research, policy, and regulators, is necessary to be globally competitive in nanotechnologies. In addition, it is important not only to develop the regulatory landscape supporting the new technologies, but also to gain consumers' confidence and acceptance in order to make the products successful, safe, and sustainable.

It is true that China is becoming the world leader in chemicals production. The Chinese demand for chemicals is growing very steadily and will continue to do so. We should not underestimate the competitiveness challenges for the European chemicals industry, for example when looking at key factors like energy prices, labour costs or the regulatory and tax burden. However, this is only one side of the story. European sales of chemicals have increased by more than 50% in 20 years. We are a very successful exporter of chemicals. Europe is internationally competitive in this sector, in particular thanks to very strong R&D and a skilled and talented workforce. The single market for chemicals has been a competitive asset for the chemicals industry in Europe. In the future, Europe's position will depend on whether we manage to ensure a global level playing field for our companies as well as a regulatory and policy framework that is conducive to sustainable innovation. Europe's ambition is to be a world leader in sustainability and the digital economy. This will open new avenues for industrial competitiveness. The chemicals industry is a key player in this transition. Europe can remain a world leader in sustainable chemicals if we fully embrace the opportunities offered by digitisation, decarbonisation and the circular economy. Achieving this ambition will require joint efforts from all actors, at all levels, including the industry itself.

Sometimes when starting new fields there is a tendency to fragment and, in a way, to re-invent the wheel. In many cases, one can profit from lessons learned in other fields or from the results of finalised projects. There are many ways to promote tangible output from Horizon Europe by taking benefit from running or past framework programmes. One example is data management infrastructures that ensure the availability and maintenance of the data produced by projects, e.g. on the behaviour, physicochemical properties, fate and (eco)toxicological properties of nanomaterials, following the FAIR principles: Findable, Accessible, Interoperable and Reusable.

There are already several H2020 projects implementing the harmonised logging of laboratory data or developing databases to store data related to nanomaterials analysis from different projects; part of this is freely accessible via the EU Observatory for Nanomaterials (EUON), hosted by the European Chemicals Agency (ECHA). More efforts could still be made to communicate project results and to translate them into practical applications. Knowledge also needs to be transferred to the appropriate stakeholders, including by training other scientists and regulators, involving more SMEs, and supporting spin-offs. It is also important to build future investigations on the knowledge already achieved, and to share results amongst scientists to avoid duplication of work.
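
To make the FAIR idea concrete, here is a minimal sketch of what a harmonised, machine-readable record of a single nanomaterial measurement could look like. The schema, field names and identifiers are purely illustrative assumptions, not the actual EUON or ECHA data model:

```python
import json

# Hypothetical example of a FAIR-style record for one nanomaterial measurement.
# Field names, identifiers and URLs are illustrative only, not a real schema.
record = {
    "identifier": "doi:10.0000/example-nanomaterial-0001",  # Findable: persistent identifier
    "title": "Hydrodynamic size of example TiO2 nanoparticles",
    "access_url": "https://example.org/datasets/0001",      # Accessible: retrievable via a standard protocol
    "format": "text/csv",                                    # Interoperable: open, common format
    "licence": "CC-BY-4.0",                                  # Reusable: clear reuse terms
    "material": {"name": "titanium dioxide", "cas": "13463-67-7"},
    "measurement": {"property": "hydrodynamic diameter",
                    "method": "dynamic light scattering",
                    "value": 85.0, "unit": "nm"},
    "provenance": {"project": "example H2020 project", "date": "2019-06-15"},
}

print(json.dumps(record, indent=2))
```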

This is indeed a very good point. The EU is interested in developing technological leadership and deploying technologies. This is beneficial for the environment and the wellbeing (health) of EU citizens, as well as for job creation and economic growth. Moreover, these technologies can also be deployed in emerging/developing nations. There is indeed a need to make these technologies available to those nations at a reasonable cost without the need to repeat the path that we travelled at high environmental cost.

Knowledge transfer, training, and international collaboration are very often key aspects of open-access research infrastructure initiatives, showing that nanotechnologies for the EU and their transfer (including training) to developing countries are not in competition, but rather two aspects of the same goal. In this respect, the JRC just launched a training and capacity building initiative in the nanotechnology area for research institutions and SMEs from H2020 associated countries. The goal is to provide a week of hands-on training on state-of-the-art techniques and instrumentation that would enable teams of researchers from one or several institutions from different associated countries to build new capacities and knowledge to favour their integration and collaboration within the EU scientific community.

The Malta Initiative (supported by DG RTD) is an excellent example of how a concerted action by the EU Member States and the Commission can stir and lead sustainable innovation by setting priorities, addressing regulatory needs, and collaborating with other major players at the global level.

The adoption of a Safe Innovation Approach seems to be a reasonable way to maintain good safety standards while preserving innovative technologies. This includes the development and implementation of tools for Safe by Design as well as the opening of fora where regulators and industry can share information. This way, regulators can become aware in advance of upcoming (nano-)innovations and have the time to address potential issues beforehand and design appropriate regulations.

Dr Elke Anklam

Director

Directorate F: Health, Consumers & Reference Material

Joint Research Centre (JRC)

European Commission

Tweet @ElkeAnklam

Tweet @EU_ScienceHub

https://ec.europa.eu/jrc/en

Link:

Nanotechnologies: almost there, what's next? - SciTech Europa

PCJCCI conducts seminar on nanotechnology and its prospects for business community – Daily Times

The ever-changing and competitive economic landscape has made it essential for businesses across geographies and industries to embrace emerging technologies if they want to set themselves apart.

This was stated by Prof Dr Fazal Ahmad Khalid, chairman of the Punjab Higher Education Commission (PHEC) and an industrial researcher, educationist and engineer with diverse experience in the socioeconomic development of Pakistan, who was a key speaker at a capacity-building seminar of the Pakistan-China Joint Chamber of Commerce and Industry (PCJCCI) held on Monday at the chamber's premises.

Standing Committee on Capacity Building Chairman Dr Iqbal Qureshi, Standing Committee on International Trade Chairman Farooq Sherwani and PCJCCI senior member Daud Ahmed also spoke on the occasion. The main theme of the seminar was nanotechnology and its prospects for the business community.

Prof Fazal Khalid spoke about the technology, its advantages, and the approach to and application of this disruptive technology around the world, noting that in Pakistan awareness is limited and focused on incorporating the Sustainable Development Goals (SDGs) into industry. In addition, he stressed the need to change university curricula to produce hi-tech graduates, and added that there is a dire need to upgrade automation processes.

Read more here:

PCJCCI conducts seminar on nanotechnology and its prospects for business community - Daily Times

Honey "Sandwich" Could Help Fight Infection – Technology Networks

Meshes are used to help promote soft tissue healing inside the body following surgery and are common in operations such as hernia repair.

However, they carry with them an increased risk of infection, as bacteria are able to take hold inside the body by forming a biofilm on the surface of the mesh.

Skin and soft tissue infections are the most common bacterial infections, accounting for around 10% of hospital admissions, and a significant proportion of these are secondary infections following surgery.

Currently, any infection is treated with antibiotics, but the emergence of antibiotic-resistant strains, or superbugs, means scientists are on the hunt for alternatives.

Sandwiching eight nano-layers of Manuka honey (with a negative charge) between eight layers of a polymer (with a positive charge), the international team of scientists and engineers led by Dr Piergiorgio Gentile at Newcastle University, UK, and Dr Elena Mancuso, at Ulster University, showed it is possible to create an electrostatic nanocoating on the mesh which in the lab inhibits bacteria for up to three weeks as the honey is slowly released.

Publishing their findings today in the academic journal Frontiers in Bioengineering and Biotechnology, the team says the study highlights the potential benefits of infusing medical implants with honey.

Dr Piergiorgio Gentile, lead author and a biomedical engineer at Newcastle University, explains: "Mesh is implanted inside the body to provide stability while the internal tissues heal but, unfortunately, it also provides the perfect surface for bacteria to grow on. Once the bacteria form a biofilm on the surface, it's very difficult to treat the infection. By sandwiching the honey in a multilayer coating on the mesh surface and slowly releasing it, the aim is to inhibit the growth of the bacteria and stop the infection before it even starts.

"These results are really very exciting. Honey has been used to treat infected wounds for thousands of years but this is the first time it has been shown to be effective at fighting infection in cells from inside the body."

Dr Mancuso, a lecturer within the Nanotechnology and Integrated Bioengineering Centre (NIBEC) at Ulster University, adds: "Although numerous antibiotic-based coatings, constructed through layered approaches and intended for the development of antibacterial implants, have been investigated so far, it has been found that the effect of antibiotics may decrease with time, since antibiotic-resistant bacteria may potentially develop."

Honey has been used to treat infected wounds since ancient times, and thousands of years before the discovery of bacteria.

Most honey is believed to have some bacteria killing properties because it contains chemicals that produce hydrogen peroxide.

However, in 1991 a New Zealand study showed that when you remove the hydrogen peroxide from a range of honeys, Manuka - made from nectar collected by bees that forage on the wild Manuka tree - was the only type that kept its ability to kill bacteria. This is due to the presence of a unique ingredient, now identified as methylglyoxal, which has specific antimicrobial properties.

Using medical-grade Manuka honey, the team used Layer-by-Layer assembly technology to create alternating layers of negatively-charged honey and positively-charged conventional biocompatible polymer to modify the surface of an electrospun membrane, with each layer just 10-20 nanometers thick.

Tested in vitro on different soft tissue cell lines for biocompatibility, the functionalised meshes were exposed to a range of common bacterial infections such as MRSA, Staphylococcus and E. coli.

"Too little honey and it won't be enough to fight the infection, but too much honey can kill the cells," explains Dr Gentile. "By creating this 16-layered charged sandwich we were able to make sure the honey was released in a controlled way over two to three weeks, which should give the wound time to heal free of infection."

Dr Mancuso adds: "With our study we have demonstrated the promising combination of a naturally-derived antibacterial agent with a nanotechnology approach, which may be translated to the design and development of novel medical devices with advanced functionality."
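
Taking the figures quoted above at face value (sixteen alternating layers of roughly 10-20 nm each, with the honey released over two to three weeks), a rough back-of-the-envelope calculation gives a sense of the scale involved; the even-erosion assumption here is illustrative, not something the authors state:

```python
# Back-of-the-envelope figures taken from the article: 16 alternating layers
# (8 honey + 8 polymer), each roughly 10-20 nm thick, released over 2-3 weeks.
layers = 16
layer_thickness_nm = (10, 20)   # per-layer thickness range quoted above
release_days = (14, 21)         # "two to three weeks"

total_nm = tuple(layers * t for t in layer_thickness_nm)
print(f"Total coating thickness: {total_nm[0]}-{total_nm[1]} nm "
      f"(~{total_nm[0] / 1000:.2f}-{total_nm[1] / 1000:.2f} micrometres)")

# Assuming (purely for illustration) that the eight honey layers erode roughly
# evenly over the release period, the average rate is a few nanometres per day.
honey_nm = tuple(8 * t for t in layer_thickness_nm)
rates = [h / d for h, d in zip(honey_nm, reversed(release_days))]
print(f"Average honey release rate: roughly {min(rates):.1f}-{max(rates):.1f} nm per day")
```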

Reference: Potential of Manuka Honey as a Natural Polyelectrolyte to Develop Biomimetic Nanostructured Meshes With Antimicrobial Properties. Elena Mancuso, Chiara Tonda-Turo, Chiara Ceresa, Virginia Pensabene, Simon D. Connell, Letizia Fracchia and Piergiorgio Gentile. Front. Bioeng. Biotechnol., 04 December 2019. https://doi.org/10.3389/fbioe.2019.00344

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.

More here:

Honey "Sandwich" Could Help Fight Infection - Technology Networks

Nanotechnology in Medical Devices Size, Growth, Analysis Of Key- players Types And Application, Outlook 2025 – BoundWatch

The upcoming market report contains data for the historical year 2015; the base year of calculation is 2016 and the forecast period is 2017 to 2024. A free sample of the report is available at: https://marketreports.co/global-nanotechnology-in-medical-devices-market-insights-forecast-to-2025/165494/#Free-Sample-Report

The report offers information of the market segmentation by type, application, and regions in general. The report highlights the development policies and plans, government regulations, manufacturing processes, and cost structures. It also covers technical data, manufacturing plants analysis, and raw material sources analysis as well as explains which product has the highest penetration, their profit margins, and R&D status.

Read Detailed Index of full Research Study at @ https://marketreports.co/global-nanotechnology-in-medical-devices-market-insights-forecast-to-2025/165494/

The Top Key players Of Global Nanotechnology in Medical Devices Market:

Types of Global Nanotechnology in Medical Devices Market:

Applications Of Global Nanotechnology in Medical Devices Market:

Regional Segmentation for Nanotechnology in Medical Devices market:

Table of Contents (TOC) at a glance: Overview of the market, including the Definition, Specifications, and Classification of Nanotechnology in Medical Devices, Size, Features, Scope, and Applications.

Product Cost and Pricing Analysis: Manufacturing Cost Structure, Raw Material and Suppliers' Costs, Manufacturing Process, Industry Chain Structure.

Market Demand and Supply Analysis, which includes Capacity and Commercial Production Date, Manufacturing Plants Distribution, R&D Status and Technology Source, and Raw Materials Sources Analysis; plus the forces that drive the market.

In the end, the report covers the precisely studied and evaluated data of the global market players and their scope in the market, using a number of analytical tools. Analytical tools such as investment return analysis, SWOT analysis, and feasibility studies are used to analyse the growth of the key global players in the Nanotechnology in Medical Devices market.

Buying enquiries can be submitted here: https://marketreports.co/global-nanotechnology-in-medical-devices-market-insights-forecast-to-2025/165494/#Buying-Enquiry

Customization of the Report: This report can be customized to meet the client's requirements. Please connect with our sales team, who will ensure that you get a report that suits your needs.

Go here to see the original:

Nanotechnology in Medical Devices Size, Growth, Analysis Of Key- players Types And Application, Outlook 2025 - BoundWatch

Global Nanotechnology in Medical Devices Market 2019 by Manufacturers, Regions, Type and Application, Forecast to 2025 – The Industry Reporter

The Global Nanotechnology in Medical Devices Market report study includes an elaborative summary of the Nanotechnology in Medical Devices market that provides in-depth knowledge of its various segmentations. The research report presents a detailed analysis based on thorough research of the overall market, particularly on questions that border on market size, growth scenario, potential opportunities, operating landscape, trend analysis, and competitive analysis. The information includes company profiles, annual turnover, the types of products and services each provides, and income generation, giving businesses direction in taking important steps. The report delivers pinpoint analysis of varying competition dynamics and keeps readers ahead of competitors such as 3M, Dentsply International, Mitsui Chemicals, Stryker, AAP Implantate, Affymetrix, PerkinElmer, St. Jude Medical, Smith & Nephew, and Starkey Hearing Technologies.

View Sample Report @www.marketresearchstore.com/report/global-nanotechnology-in-medical-devices-market-2019-by-495636#RequestSample

The main objective of the report is to help the user understand the Nanotechnology in Medical Devices market in terms of its definition, classification, market potential, latest trends, and the challenges the market is facing. In-depth research and studies were carried out while preparing the report, and readers will find it very beneficial for understanding the market in detail. The aspects and information are presented using figures, bar graphs, pie diagrams, and other visual representations, which strengthens the pictorial representation and helps readers grasp the industry facts much better.

This research report covers the world's crucial regions and their market share, size (volume) and trends, including product profit, price, value, production, capacity, capacity utilization, supply, demand and industry growth rate.

Geographically, this report covers all the major manufacturers from India, China, the USA, the UK, and Japan. The past, present and forecast overview of the Nanotechnology in Medical Devices market is presented in this report.

The study is segmented by the following product types: Biochip, Implant Materials, Medical Textiles, Wound Dressing, Cardiac Rhythm Management Devices, and Hearing Aid.

Major applications/end-user industries are as follows: Therapeutic, Diagnostic, and Research.

Nanotechnology in Medical Devices Market Report Highlights:

1) The report provides a detailed analysis of current and future market trends to identify investment opportunities
2) In-depth company profiles of key players and upcoming prominent players
3) Global Nanotechnology in Medical Devices Market trends (drivers, constraints, opportunities, threats, challenges, investment opportunities, and recommendations)
4) Strategic recommendations in key business segments based on the market estimations
5) The research methodologies being used by individual organizations driving the Nanotechnology in Medical Devices market

Research Parameter/ Research Methodology

Primary Research:

The primary sources involve industry experts from the Global Nanotechnology in Medical Devices industry, including management organizations, processing organizations, and analytics service providers across the industry's value chain. All primary sources were interviewed to gather and authenticate qualitative and quantitative information and to determine future prospects.

In the extensive primary research process undertaken for this study, the primary sources (industry experts such as CEOs, vice presidents, marketing directors, technology and innovation directors, founders, and related key executives from various key companies and organizations in the Global Nanotechnology in Medical Devices industry) have been interviewed to obtain and verify both qualitative and quantitative aspects of this research study.

Secondary Research:

Secondary research provided crucial information about the industry value chain, the total pool of key players, and application areas. It also assisted in market segmentation according to industry trends down to the bottom-most level, geographical markets, and key developments from both market- and technology-oriented perspectives.

Inquiry for Buying Report: http://www.marketresearchstore.com/report/global-nanotechnology-in-medical-devices-market-2019-by-495636#InquiryForBuying

Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions, such as North America, Europe or Asia. If you have any special requirements, please let us know and we will offer you the report as you want it.

More:

Global Nanotechnology in Medical Devices Market 2019 by Manufacturers, Regions, Type and Application, Forecast to 2025 - The Industry Reporter

SIF November Review: Oxford Instruments’ Tiny Tech Measures A 65% Gain | Roland Head – Stockopedia

This week's month-end review for November is a little late, as I've been away for a couple of weeks enjoying some late-season sun.

I seem to have dodged a nasty spell of wet weather up here in the north, but unfortunately one of the stocks in my rules-based SIF portfolio did take a bath.

Pawnbroker H&T Group fell by about 15% while I was away, after the company said the FCA had carried out a review of its high-cost short term credit (HCSTC) business - often known as payday lending.

The review followed the introduction of new FCA rules in November 2018. This is particularly disappointing as in August the company said that minimal changes would be required to achieve compliance. It now seems this is no longer the case.

If you haven't seen it already, I'd recommend a look at Graham Neary's excellent analysis of this situation in SCVR on 18 November. I agree with his conclusion that the impact should be manageable.

I don't intend to take any portfolio action as a result of this news and continue to hold HAT stock. Aside from this issue, HAT appears to be trading well. Indeed, broker forecasts for 2019 and 2020 have risen over the last month:

Brokers (including the company's house broker) appear to remain confident of a big step up in earnings next year:

(Original coverage: 12/02/2019)

Back in February, I added nanotechnology firm Oxford Instruments (LON:OXIG) to the SIF folio. The holding is now nine months old, so it's time to review it to see if it continues to satisfy my screening criteria, or if it should be sold.

This nanotechnology firm designs and manufactures equipment that can fabricate, analyse and manipulate matter at the atomic and molecular level. Sales are split almost equally between academic and industrial customers. Although I don't understand much about the firm's technology, I can tell you that its customers operate in most major industrial and scientific sectors (you can find out more here).

Nine months ago, Oxford Instruments had a market cap of £530m and was a member of the FTSE SmallCap index.

Read more:

SIF November Review: Oxford Instruments' Tiny Tech Measures A 65% Gain | Roland Head - Stockopedia

Forget Fitbit or Apple Watch. This technology hopes to take personal fitness to the next level. – WUSA9.com

FREDERICK, Md. Big things really do come in small packages, even ones as small as a sunflower seed. But the potential solution to our exercise and fitness woes comes from a very common issue.

"When we got married, we had great body shape," Dr. Xiaonao Liu exclaimed. "But after we gave birth to our son, both of us gained 20 pounds."

Dr. Liu touts her experience as a former nanomaterial fabrication specialist at the California Institute of Technology and a research assistant professor at Zhejiang University in China. She co-founded Nanobiofab with Dr. Ruoting Yang, an expert in biomarker research and artificial intelligence.

The goal for Nanobiofab is to innovate beyond what's already available on the market. Sure, Apple Watches and Fitbit devices already do something similar. But they can't provide that extra personal, physical information in real time.

What makes this nanotechnology unique is that it's calibrated to your body while embedded in your clothes. According to Dr. Liu, the patented material is then used to track fat-burning in real time.

RELATED: Google buys Fitbit for $2.1B, stepping back into wearables

RELATED: Can this $35 fitness tracker compare to a Fitbit or Apple Watch?

Users would follow all the sensor data on an app downloaded onto their phones.

"This will keep track of your heart rate, steps, but then also track how your body is operating," Dr. Liu said. "If you wear it on different parts of your body you can even know which kind of exercises can increase certain parts."

Nanobiofab plans to officially start launching products next year, with a wearable band as the first thing available.

Eventually, the company hopes to progress to shirts, shorts, and other fabrics.

RELATED: 'Blessing from God' | Apple Watch may have saved Florida teen's life

RELATED: Sleep and fitness trackers drop to $50 for mom

See the rest here:

Forget Fitbit or Apple Watch. This technology hopes to take personal fitness to the next level. - WUSA9.com

Four from Johns Hopkins named to American Association for the Advancement of Science – The Hub at Johns Hopkins

By Hub staff report

Four Johns Hopkins faculty members have been elected as fellows of the American Association for the Advancement of Science, a lifetime distinction that recognizes their outstanding contributions to science and technology.

Image caption: The Johns Hopkins faculty named 2019 AAAS fellows are (top row, from left) Mary Armanios, David Gracias, (bottom row, from left) Colin Norman, and Elizabeth Platz.

Fellows are elected each year by the member-run governing body of AAAS. The 2019 group, which includes 443 fellows, will receive official certificates and rosette pins during a ceremony in February.

The distinction is unrelated to the annual list of fellows announced each spring by the American Academy of Arts and Sciences.

The AAAS fellows from Johns Hopkins University and their areas of expertise are:

Armanios is a professor of oncology at the Johns Hopkins School of Medicine. Her research focuses on dysfunction of telomeres, the protective ends of chromosomes. When telomeres are abnormally short, cells cannot divide effectively, which contributes to the development of cancers and of syndromes such as dyskeratosis congenita, a premature aging syndrome. She is the clinical director of the Telomere Clinic at Johns Hopkins.

Gracias is a professor and director of graduate studies in the Department of Chemical and Biomolecular Engineering at the Whiting School of Engineering. His research focuses on the design, development, and characterization of miniaturized devices and intelligent materials and systems. At Johns Hopkins, he has collaborated extensively with clinicians to apply the capabilities of micro and nanotechnology to modern medicine. He was elected fellow for distinguished research contributions to three-dimensional micro and nanoscale assembly.

Norman is an astrophysicist in the Department of Physics and Astronomy in the Krieger School of Arts and Sciences and an astronomer in the Space Telescope Science Institute. He is the founding co-director of the Institute for Planets and Life, which combines the interdisciplinary expertise of scientists from Johns Hopkins University, the Johns Hopkins Applied Physics Laboratory, and the Space Telescope Science Institute. He specializes in the study of molecular clouds, star formation, plasma astrophysics, and the formation and structure of galaxies, among other fields.

Platz is a cancer epidemiologist and deputy chair of the Department of Epidemiology at the Bloomberg School of Public Health. She specializes in research on prostate and colon cancers and studies the association of genetic, epigenetic, and lifestyle factors that affect the incidence and recurrence of these cancers. She has co-led the Cancer Prevention and Control Program at the Sidney Kimmel Comprehensive Cancer Center at Johns Hopkins since 2008.

See the original post:

Four from Johns Hopkins named to American Association for the Advancement of Science - The Hub at Johns Hopkins