What's the Best Human Brain Alternative for Hungry Zombies? – Gizmodo Australia

Let's say you're a zombie. You're lumbering around, doing your zombie-mumble, and just ten feet ahead you see a living human being. Your first impulse, of course, is to head over there and eat their brain. And you're about to do just that, when suddenly you feel a pang of something like shame. You remember, dimly, being a human yourself. You remember how you might've felt, if an undead weirdo got to gnawing on your skull. You're at an impasse: at once desperate for brain meat and reluctant to kill for it. So you head to your zombie psychologist and start explaining the situation, and your zombie psychologist starts grinning, which annoys you at first (I mean, you're baring your soul to this guy) until he explains what's on his mind. Turns out, he's been toying with an idea: a pilot program for conscience-stricken zombies. Instead of human brains, they'll be fed stuff that looks and tastes just like brains, thereby sparing them the obligation to kill. The only thing they need to work out is: what would be an acceptable substitute for human brains? For this week's Giz Asks, we reached out to a number of brain experts to find out.

Associate Professor, Neurobiology, Harvard Medical School

The brain is of course composed primarily of lipids, and so it is perfectly reasonable to assume that it is brain lipids that zombies really crave. But why human brains and not, say, mouse brains? Lipidomic analysis reveals that human brains are unusually enriched in a compound called sphingomyelin (relative to brains from rodents), and so it is further reasonable to assume that what zombies want is actually lots of sphingomyelin. So where to get it? Eggs. Eggs are packed with sphingomyelin. Furthermore, eggs also have the advantage of having a white outer cortex and a lipid-rich centre, just like the human brain, so they seem a reasonable substitute all around.

Chair and Professor of Neurology at the David Geffen School of Medicine at UCLA and Co-Director UCLA Broad Stem Cell Centre

A food-based substitute would require a fair amount of work, because you'd have to get a sort of fatty, proteinaceous slop together as a mimic for the brain. A thick macaroni and cheese might work, with a larger noodle like ziti or rigatoni and no tang, meaning a thick white cheese, as opposed to cheddar.

The brain sandwich, made from cow brains, was an unusual delicacy in St. Louis for years. When I lived there, I saw what it looked like as they fried it, and it's hard to imagine any other organ meat could substitute for the real thing. Kidney and liver are too firm and too structured; most foods we eat, or could think about eating, are also too firm, and not fatty enough.

A brain from another animal might work, though it would have to be an animal with an advanced brain, that is, one with the folds we see when we look at the brain's surface (which are called gyri and sulci). Those are what distinguish higher mammals from lower mammals. They also make the human brain this particularly characteristic thing in terms of substance and texture and appearance. So an animal brain, to sub for a human brain, would need to have those features. That would mean anything from, say, a dog or cat on up; those both have gyri and sulci, whereas rodents and rabbits, for example, do not.

Assistant Professor of Brain Science, Psychiatry and Human Behaviour at Brown University

I think my zombie would be a vegan. The thing that I have found to be the closest in texture to the brain is tofu (not the firm kind). People are often surprised by that fact, because it's really soft; you can put your finger through it easily.

Broadly, I study the kind of complex planning and decision making that is localised to the front of the brain, the prefrontal cortex. This area is also one of the most likely to be injured if you hit your head, because your very soft brain bounces around inside your skull. Our lab typically does a demo for Brain Week and other events that lets people feel tofu, and then shake it around in a container and see what happens to it. Shake it around in some water (mimicking some of the protections that our brain has in the cerebrospinal fluid that it floats in) and the tofu does much better (which is why it's packaged in water!).

Unfortunately, tofu doesn't mimic all the wonderful folding the brain has that lets us pack so many brain cells into a tight space. A sheet of paper crumpled up is best to show that capacity, but paper is probably much less tasty than tofu (to humans anyway, I don't know about zombies!).

Professor, Systems Biology, George Mason University

My proposal is: a literal pound of flesh. Many people have too much of it; it's very similar to the brain in texture; it has a lot of cholesterol, which is important, because, in my opinion at least, zombies would crave exactly that. Also, adipose tissue is very rich with various kinds of growth hormones and other kinds of bioactive stuff. If you could develop some kind of device that would transfer the flesh to the zombies, people might even be grateful: they wouldn't have to get liposuction.

Senior Lecturer, Medical Biotechnology, Deakin University

The best thing to do would be to make small versions of a brain from stem cells, called organoids. These are almost, but not quite, brains. You grow them in an artificial 3D environment that mimics the properties of the central nervous tissue, and allow them to develop networks of neural cells in a structured way. They're used for research into drugs and diseases and so on, but would probably be an acceptable meat-free snack for an ethically conscious zombie plague.

Professor in Neurology and Professor of Biomedical Engineering at Duke University

If I were a vegetarian zombie, I would try to make a brain substitute using the major components of the brain: carbohydrates, proteins, and cells. The major carbohydrate component is hyaluronic acid (which is found in many beauty products, and can be purchased in bulk). Though by itself it does not form a solid, only a very viscous liquid, it can be combined with other materials that do form a solid. For example, seaweed has a carbohydrate named alginate that does form gels when combined with calcium. So, a blend of hyaluronic acid and alginate with calcium can yield a material that has the mechanics of the brain. For the protein component, eggs, beans, soy, and quinoa all can be good choices. To get the texture right, the calcium can be added while stirring to generate chunks. If it is OK to eat other animals, then I would buy pig brains, which are often discarded. Pig organs are close to the same size as humans' and have even been used for transplantation due to similarities in physiology/biochemistry. That would be the simplest choice.

Associate Professor, Psychology and Neuroscience, George Mason University

Whenever I eat cauliflower, I think of the cerebellum, or little brain. It is tucked away behind the cerebrum, or main part of the brain. The cerebellum is small, but it is where about 80 per cent of the entire brain's neurons are found! Most of the cerebellum's neurons, or grey matter, are found on its outer surface. They are tightly packed together in little folds called folia. The neurons in the folia are connected to each other by nerve fibres, also known as white matter. When the cerebellum is cut in half, the white matter appears as this beautiful network of branches called the arbor vitae, or tree of life. It really does look just like a head of cauliflower!

Professor, Psychology and Neuroscience, Trinity College

The brain is actually quite soft and squishy. Fortunately for us it normally floats in a pool of cerebrospinal fluid that serves as a cushiony packing material protecting the delicate brain from the hard skull. But the brain is so soft it can easily become injured without the head striking any object. If there is enough rotational or acceleration/deceleration motion for the brain to hit the skull, the tips of the brain can be bruised and individual cells can be stretched or sheared from their connections. This can happen, for example, in motor vehicle accidents or shaken baby syndrome, where the head is thrown very quickly forwards and then backwards.

The consistency I think the brain comes closest to is a gelatin. But I would recommend that our zombie make the gelatin with milk rather than water. This will give it a closer consistency to a brain, the colour will be more opaque like a real brain, and it will provide more of the much needed protein the zombie craves. There are even commercially made gelatin molds if the zombie is able to access stores or online shopping.

Another option would be a soft tofu. This might be a great option for a zombie who is a vegetarian or vegan. There is plenty of protein, but it will be much harder to mould into the right shape. Sadly, most zombies are not portrayed as having the fine motor skills needed to create a brain shape from scratch, so the tofu would just have to be eaten as is.

On a side note, if our zombie truly finds that nothing satisfies like a real brain, they could certainly consider becoming a neurosurgeon who specialises in therapeutic surgeries, like temporal lobe resections. In this case, a small portion of the temporal lobe of the brain is removed to relieve a person of intractable epilepsy. This might allow for a chance to satisfy their craving while providing benefit to the person involved.


Protein Folding and Evolution: Information, Function and …

The structure/function link for proteins has for a long time served as a convenient paradigm. Increasing evidence suggests however that the order/disorder landscape in proteins is far more complex than hitherto imagined, and is an ongoing product of an exquisitely tuned evolutionary process. Furthermore, the field has been dominated by a restricted mindset resulting in neglected areas and unconventional concepts on the periphery of the main topic that we feel deserve to be addressed. Following on from the introduction of the concept of structural capacity and its inextricable origins in ribosome evolution, we wish to illustrate how transitions involving order/disorder rearrangements are involved not only in the selection of new folds and thus functions, but also play a role in the unavoidably associated increase in dysfunction and disease. The underlying theme will be the emergence of the role of information transfer between the genetic code repository and protein structure/function.

The number of potential protein folds and hence structures is immense. It is a sobering fact that the number of known folds is infinitesimally small compared to those potentially available. Current techniques, strongly influenced by mainstream structural biology approaches such as X-ray crystallography, have described few of the protein structures that could potentially exist. The concept of evolutionary selection as an ongoing process, and the acceptance that proteins are dynamic structures, lead to a reorganization of how the protein structure/function paradigm is viewed. The immense impact of molecular modelling, powered not only by huge computational capacities but also by machine learning and AI (Artificial Intelligence) algorithms, is revolutionizing the whole field. Our aim, therefore, is to collate a series of original articles that address the relationship between protein structure and function from both an evolutionary and dynamic point of view. We aim to create a platform for launching new models based on original, innovative ideas or experimental approaches. The final goal is thus to seek and exploit hitherto unexplored facets of protein structure/function with the practical aim of facilitating protein engineering and expanding our knowledge of the origins and direction of life processes, with important impact on health issues.

Within the broad scope of protein evolution, we are seeking contributions on the following topics although this list is neither exhaustive nor exclusive:

The evolution of protein folds and function;
The role of order/disorder transitions in protein evolution;
The consequences of ribosomal evolution;
Information theory, protein structure/function and evolution;
Evolution, protein folding and disease;
The inevitable link between evolved function and disease;
The evolution of information transfer: from the genetic code to protein structure;
Moonlighting proteins: disorder and order in multi-functional proteins;
Protein Engineering and design: how to harness disorder?

Articles may be either data-based novel observations, innovative technologies that grant new insights into protein dynamics, or theoretical approaches that provide new testable models. We will be particularly attentive to fringe ideas, independent of how unconventional they may be, and welcome contributions from young, unestablished scientists.

Dr. Ashley Buckle is founder of the structural biology and protein engineering company PTNG Consulting, and holds patents in the field. All other Topic Editors declare no competing interests.

Keywords: protein evolution, protein folding, protein engineering, intrinsically disordered proteins, protein folding evolution and disease

Important Note: All contributions to this Research Topic must be within the scope of the section and journal to which they are submitted, as defined in their mission statements. Frontiers reserves the right to guide an out-of-scope manuscript to a more suitable section or journal at any stage of peer review.


Rewriting the Rules of Vaccine Design With DNA Origami – Technology Networks

By folding DNA into a virus-like structure, MIT researchers have designed HIV-like particles that provoke a strong immune response from human immune cells grown in a lab dish. Such particles might eventually be used as an HIV vaccine.

The DNA particles, which closely mimic the size and shape of viruses, are coated with HIV proteins, or antigens, arranged in precise patterns designed to provoke a strong immune response. The researchers are now working on adapting this approach to develop a potential vaccine for SARS-CoV-2, and they anticipate it could work for a wide variety of viral diseases.

"The rough design rules that are starting to come out of this work should be generically applicable across disease antigens and diseases," says Darrell Irvine, who is the Underwood-Prescott Professor with appointments in the departments of Biological Engineering and Materials Science and Engineering; an associate director of MIT's Koch Institute for Integrative Cancer Research; and a member of the Ragon Institute of MGH, MIT, and Harvard.

Irvine and Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard, are the senior authors of the study, which appears today in Nature Nanotechnology. The paper's lead authors are former MIT postdocs Rémi Veneziano and Tyson Moyer.

DNA design

Because DNA molecules are highly programmable, scientists have been working since the 1980s on methods to design DNA molecules that could be used for drug delivery and many other applications, most recently using a technique called DNA origami that was invented in 2006 by Paul Rothemund of Caltech.

In 2016, Bathe's lab developed an algorithm that can automatically design and build arbitrary three-dimensional virus-like shapes using DNA origami. This method offers precise control over the structure of synthetic DNA, allowing researchers to attach a variety of molecules, such as viral antigens, at specific locations.

"The DNA structure is like a pegboard where the antigens can be attached at any position," Bathe says. "These virus-like particles have now enabled us to reveal fundamental molecular principles of immune cell recognition for the first time."

Natural viruses are nanoparticles with antigens arrayed on the particle surface, and it is thought that the immune system (especially B cells) has evolved to efficiently recognize such particulate antigens. Vaccines are now being developed to mimic natural viral structures, and such nanoparticle vaccines are believed to be very effective at producing a B cell immune response because they are the right size to be carried to the lymphatic vessels, which send them directly to B cells waiting in the lymph nodes. The particles are also the right size to interact with B cells and can present a dense array of viral particles.

However, determining the right particle size, spacing between antigens, and number of antigens per particle to optimally stimulate B cells (which bind to target antigens through their B cell receptors) has been a challenge. Bathe and Irvine set out to use these DNA scaffolds to mimic such viral and vaccine particle structures, in hopes of discovering the best particle designs for B cell activation.

"There is a lot of interest in the use of virus-like particle structures, where you take a vaccine antigen and array it on the surface of a particle, to drive optimal B-cell responses," Irvine says. "However, the rules for how to design that display are really not well-understood."

Other researchers have tried to create subunit vaccines using other kinds of synthetic particles, such as polymers, liposomes, or self-assembling proteins, but with those materials, it is not possible to control the placement of viral proteins as precisely as with DNA origami.

For this study, the researchers designed icosahedral particles with a size and shape similar to those of a typical virus. They attached an engineered HIV antigen related to the gp120 protein to the scaffold at a variety of distances and densities. To their surprise, they found that the vaccines that produced the strongest B cell responses were not necessarily those that packed the antigens as closely as possible on the scaffold surface.

"It is often assumed that the higher the antigen density, the better, with the idea that bringing B cell receptors as close together as possible is what drives signaling. However, the experimental result, which was very clear, was that actually the closest possible spacing we could make was not the best. And as you widen the distance between two antigens, signaling increased," Irvine says.

The findings from this study have the potential to guide HIV vaccine development, as the HIV antigen used in these studies is currently being tested in a clinical trial in humans, using a protein nanoparticle scaffold.

Based on their data, the MIT researchers worked with Jayajit Das, a professor of immunology and microbiology at Ohio State University, to develop a model to explain why greater distances between antigens produce better results. When antigens bind to receptors on the surface of B cells, the activated receptors crosslink with each other inside the cell, enhancing their response. However, the model suggests that if the antigens are too close together, this response is diminished.

Beyond HIV

In recent months, Bathe's lab has created a variant of this vaccine with the Aaron Schmidt and Daniel Lingwood labs at the Ragon Institute, in which they swapped out the HIV antigens for a protein found on the surface of the SARS-CoV-2 virus. They are now testing whether this vaccine will produce an effective response against the coronavirus SARS-CoV-2 in isolated B cells, and in mice.

"Our platform technology allows you to easily swap out different subunit antigens and peptides from different types of viruses to test whether they may potentially be functional as vaccines," Bathe says.

Because this approach allows for antigens from different viruses to be carried on the same DNA scaffold, it could be possible to design variants that target multiple types of coronaviruses, including past and potentially future variants that may emerge, the researchers say.

Reference: Veneziano et al. (2020). Role of nanoscale antigen organization on B-cell activation probed using DNA origami. Nature Nanotechnology. DOI: 10.1038/s41565-020-0719-0.

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.


Games, not con-calls, may help build strong remote teams – Livemint

In the book Gamification by Design, Gabe Zichermann writes that gamification is "75% psychology and 25% technology." Simply put, gamification is incorporating game elements like points, badges, leaderboards and competition into other activities to encourage engagement. And the popularity of gaming is increasing, especially during the lockdown. A study by MarketsandMarkets states the gamification market is projected to grow in size from $9.1 billion in 2020 to $30.7 billion by 2025, at a compound annual growth rate (CAGR) of 27.4% during the forecast period.

If you are considering introducing gamification at work, there are three essential factors that must be maintained, as Ethan Mollick and Nancy Rothbard suggest in their paper, "Mandatory Fun: Consent, Gamification and the Impact of Games at Work". First, consent. Employees need to be looped in and made aware of the fact that they are playing a game. Second, legitimation. They must understand the rules of the game. Third, a sense of individual agency: they need to believe the game is fair.
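To make these mechanics concrete, here is a minimal sketch, in Python, of how points, badges, a leaderboard and an explicit opt-in step might fit together. It is purely illustrative: the class names, point values and badge thresholds are hypothetical and are not drawn from Zichermann's book, the Mollick and Rothbard paper, or any product mentioned in this article.

```python
# Minimal, hypothetical sketch of gamification mechanics: points, badges,
# a leaderboard, and explicit opt-in (consent). All values are illustrative.
from dataclasses import dataclass, field

BADGE_THRESHOLDS = {"Bronze": 100, "Silver": 500, "Gold": 1000}  # hypothetical

@dataclass
class Player:
    name: str
    opted_in: bool = False          # consent: players must explicitly opt in
    points: int = 0
    badges: list = field(default_factory=list)

class Game:
    def __init__(self):
        self.players = {}

    def opt_in(self, name):
        # Consent and legitimation: the player joins knowingly, rules are visible.
        self.players[name] = Player(name=name, opted_in=True)

    def award(self, name, points):
        player = self.players.get(name)
        if player is None or not player.opted_in:
            raise ValueError(f"{name} has not opted in to the game")
        player.points += points
        # Badges are granted when point thresholds are crossed.
        for badge, threshold in BADGE_THRESHOLDS.items():
            if player.points >= threshold and badge not in player.badges:
                player.badges.append(badge)

    def leaderboard(self):
        # Individual agency: the same scoring rules apply to everyone.
        return sorted(self.players.values(), key=lambda p: p.points, reverse=True)

game = Game()
game.opt_in("Asha")
game.opt_in("Ben")
game.award("Asha", 120)   # e.g. completed a simulated client meeting
game.award("Ben", 80)     # e.g. answered product questions correctly
for p in game.leaderboard():
    print(p.name, p.points, p.badges)
```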

As a community, gamers are often stereotyped as slackers who are not serious. Hence, linking games to a core business function, or to an older and more gender-balanced workforce, is often not seen as a fit.

In her TED talk, "Gaming Can Make a Better World", game designer Jane McGonigal focuses on World of Warcraft's highly motivated gamers, who spend 22 hours a week on average playing the game of strategy and problem-solving. She also draws attention to the nuances of motivation and feelings that games can arouse: a sense of urgency, fear, competitiveness and a sense of deep, undivided focus. She goes on to explain the larger implications of this, where, at the Institute for the Future, she and her colleagues develop games like The World Without Oil. The players sign up and are provided real-life information: data feeds about real-time oil prices, food supplies, and simulated riot situations that set up the game universe.

The pilot rolled out in 2007 with 1,700 players, most of whom, she claimed, adopted the oil-conserving practices they picked up in the game in their real lives as well.

As covid-19 renders certain work practices redundant, it may do some good to rethink and explore the world of gamification, as it can help ensure cohesiveness, productivity and sustainability measures for the long term.

SAP, for example, used a gamification app to motivate sales professionals. The app simulated client meetings and incorporated real examples and data on customer needs. While playing the game, sales professionals had to answer client questions accurately. They earned badges and competed against each other, and hence were better prepared to tackle complex sales meetings with clients. It also provided sales professionals with a better understanding of what to expect and helped them succeed in their meetings.

Gamification could also lead to community-as-a-service (CaaS) being utilized in the now virtual workspace. The post-covid-19 world should not merely treat success as the fulcrum; its priority instead has to be cooperation and collaboration.

Gaming environments should be able to gauge the level of skill the employee holds at the moment in order to assign the "perfect task" to test their skills, while also raising the difficulty just enough that they can stretch and improve in slightly harder terrain. All this while playing as a team and helping the group's collective progress.

The University of Washington tried submitting one of its projects to the powers of the collective brain. A team of highly qualified scientists had worked on the problem of protein folding for nearly a decade, as part of a research effort to understand, prevent and treat diseases like HIV/AIDS, cancer and Alzheimer's. They could not, however, make as much progress as they wanted, and decided to try incorporating gamification.

In 2011, they created a protein-folding puzzle game called Foldit and invited the general public to play it online. About 47,000 people volunteered for this challenge and solved the problem within a record time of 10 days.

As with any form of engagement, there are ethics to be followed in gamification as well. The games should work on nudges rather than manipulation. Employees should be prodded, not coerced, to choose one form of working over another. The social architecture should push for collective good rather than drive a profiteering venture that exploits the goodwill of employees by keeping part of the agenda covert.

Maintaining full transparency and ensuring that employees opt in explicitly to the game, with full knowledge of its data management and consent procedures, is the most desirable and sustainable way of boosting employee morale and performance at the workplace. Write to us at businessoflife@livemint.com



Flourishing Biopharmaceutical Industry to Boost Demand for Protein a Resins, High Adoption Expected in North America and Europe – Press Release -…

Protein A resin market players are on an innovation and approval spree with regard to manufacturing biotherapeutics, so as to have an edge over their counterparts. These innovations and approvals are expected to stand the protein A resin market in good stead in the years to come.

This press release was originally distributed by SBWire

Valley Cottage, NY -- (SBWIRE) -- 07/06/2020 -- According to a recently published report by Future Market Insights (FMI), the global protein A resins market size is expected to reach approximately US$ 800 Mn by 2025, and register an annual average growth rate of 8%. This growth is largely dependent on factors such as rising government funding for cancer treatment study, increasing research on monoclonal antibodies, and rising pipeline of monoclonal antibodies to be commercialized.

Market players are focused on offering advanced protein A resin products, such as next-generation resins for clinical trials, which are expected to cater to evolving demand from the biotechnology and pharmaceutical industries. However, widespread availability of alternative purification methods such as crystallization, ultrafiltration, capillary electrophoresis, and high-pressure folding is expected to create a hindrance to market growth.

"Mounting demand for affinity chromatography processes is the direct result of significant growth of biotech and meditech industries, especially in the development of enzymes, antibodies, and protein-based drugs & therapies. Furthermore, rising number of procedures for separating biochemical mixtures along with the increasing demand for affinity chromatography remain the primary growth-driving factors of the protein A resin market," says FMI analyst.

Download a Sample Report with Table of Contents and Figures: https://www.futuremarketinsights.com/reports/sample/rep-gb-1720

Key Takeaways - Protein A Resin Market Study

As per the FMI's study, natural protein A resin accounted for approximately 65% of the total market revenue share in 2019. On the other hand, demand for recombinant protein A resins is expected to witness a significant CAGR over the forecast period.

The latest advancements in recombinant technology emerge as a key market growth influencer, due to customization and greater yields of protein A resin depending on the specific demands of customers.

Adoption of agarose-based matrix for protein A resin accounts for around 85% of the total market value, and is expected to grow at an impressive CAGR through 2029.

Demand for glass or silica-based matrix will continue to move on an upward swing, as it offers numerous benefits including high thermal and chemical stability, less toxicity, neutral pH, and high surface area, in addition to easy availability. These advantages are poised to create lucrative growth opportunities for market players.

Adoption of glass or silica-based protein in biomedical and pharmaceutical industries will contribute to positive growth prospects of the market.

Rising incidence of chronic diseases such as cancer, enhanced productivity in the biopharmaceutical industry, and expanding testing services in clinics are factors boosting demand for antibody purification in the protein A resins market.

According to FMI, adoption of antibody purification holds a major revenue share of the protein A resins market. Additionally, demand from clinical research laboratories is expected to grow at a significant pace and result in increased market share.

Opportunities Abound in Developed Markets

North America and Europe continue to maintain their lead in the global protein A resins market, while high opportunities are expected in developing countries of Asia Pacific. The FMI study finds that China and South Korea are the major contributors, owing to increasing number of market players, growing ecological research, and high expenditure on R&D activities. Moreover, improving pharmaceutical industry along with rising government spending will further accelerate the regional market growth.

For information on the research approach used in the report, request methodology@ https://www.futuremarketinsights.com/askus/rep-gb-1720

More Valuable Insights on Protein A Resin Market

FMI's research study on the protein A resin market is segmented into:

Product (Natural and Recombinant)
Application (Immunoprecipitation (IP) and Antibody Purification)
End Users (Biopharmaceutical Manufacturers, Clinical Research Laboratories, and Academic Institutes)
Matrix (Glass or Silica Based, Agarose-based, and Organic Polymer Based)
Region (North America, Latin America, Western Europe, Eastern Europe, Asia Pacific excluding Japan, South Korea, and Japan, Middle East & Africa, Japan, South Korea, and China)

For more information on this press release visit: http://www.sbwire.com/press-releases/flourishing-biopharmaceutical-industry-to-boost-demand-for-protein-a-resins-high-adoption-expected-in-north-america-and-europe-1294793.htm


Is The Goal-Driven Systems Pattern The Key To Artificial General Intelligence (AGI)? – Forbes

Goal-driven systems

Since the beginnings of artificial intelligence, researchers have long sought to test the intelligence of machine systems by having them play games against humans. It is often thought that one of the hallmarks of human intelligence is the ability to think creatively, consider various possibilities, and keep a long-term goal in mind while making short-term decisions. If computers can play difficult games just as well as humans, then surely they can handle even more complicated tasks. From early checkers-playing bots developed in the 1950s to today's deep learning-powered bots that can beat even the best players in the world at games like chess, Go and DOTA, the idea of machines that can find solutions to puzzles is as old as AI itself, if not older.

As such, it makes sense that one of the core patterns of AI that organizations develop is the goal-driven systems pattern. Like the other patterns of AI, this form of artificial intelligence is used to solve a common set of problems that would otherwise require human cognitive power. In this particular pattern, the challenge that machines address is the need to find the optimal solution to a problem. The problem might be finding a path through a maze, optimizing a supply chain, or optimizing driving routes and idle time. Regardless of the specific need, the power we're looking for here is the idea of learning through trial and error, and determining the best way to solve something, even if it's not the most obvious.

Reinforcement learning and learning through trial-and-error

One of the most intriguing, but least used, forms of machine learning is reinforcement learning. As opposed to supervised learning approaches in which machines learn by being trained by humans with well-labeled data, or unsupervised learning approaches in which machines try to learn through discovery of clusters of information and other groupings, reinforcement learning attempts to learn through trial-and-error, using environmental feedback and general goals to iterate towards success.

Without the use of AI, organizations depend on humans to create programs and rules-based systems that guide software and hardware systems on how to operate. Where programs and rules can be somewhat effective in managing money, employees, time and other resources, they suffer from brittleness and rigidity. The systems are only as strong as the rules that a human creates, and the machine isn't really learning at all. Rather, it's the human intelligence incorporated into rules that makes the system work.

Goal-learning AI systems, on the other hand, are given very few rules, and need to learn how the system works on their own through iteration. In this way, AI can wholly optimize the entire system and not depend on human-set, brittle rules. Goal-driven systems have proved their worth, showing an uncanny ability to find the hidden rules that solve challenging problems. It isn't surprising just how useful goal-driven systems are in areas where resource optimization is a must.
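To make the trial-and-error idea concrete, below is a minimal sketch of tabular Q-learning on a toy grid world: the agent is told only where the reward is and discovers a route by iterating, with no hand-written routing rules. This is a generic textbook-style example, not any particular vendor's system; the grid size, rewards and hyperparameters are assumptions chosen purely for illustration.

```python
# Toy Q-learning example: an agent learns a path to a goal cell on a small
# grid purely from trial and error and a reward signal (no hand-coded rules).
import random

WIDTH, HEIGHT = 4, 4
GOAL = (3, 3)                                   # hypothetical goal cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]    # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1           # learning rate, discount, exploration

Q = {((x, y), a): 0.0 for x in range(WIDTH) for y in range(HEIGHT) for a in ACTIONS}

def step(state, action):
    x, y = state
    dx, dy = action
    nx = min(max(x + dx, 0), WIDTH - 1)
    ny = min(max(y + dy, 0), HEIGHT - 1)
    next_state = (nx, ny)
    reward = 1.0 if next_state == GOAL else -0.01   # small cost per move
    return next_state, reward

def choose_action(state):
    if random.random() < EPSILON:                        # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])     # otherwise exploit

for episode in range(500):
    state = (0, 0)
    while state != GOAL:
        action = choose_action(state)
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Read off the learned (greedy) path from the start to the goal.
state, path = (0, 0), [(0, 0)]
while state != GOAL and len(path) < 20:
    state, _ = step(state, max(ACTIONS, key=lambda a: Q[(state, a)]))
    path.append(state)
print(path)
```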

AI can be efficiently used in scenario simulation and resource optimization. By applying this generalized approach to learning, AI-enabled systems can be set to optimize a particular goal or scenario and find many solutions to getting there, some not even obvious to their more-creative human counterparts. In this way, while the goal-driven systems pattern hasn't seen as much implementation as other patterns such as the recognition, predictive analytics, or conversational patterns, the potential is just as enormous across a wide range of industries.

Reinforcement learning-based goal-driven systems are being utilized in the financial sector for such use cases as robo-advising, which uses learning to identify savings and investment plans tailored to the specific needs of individuals. Other applications of the goal-driven systems pattern are in use in the control of traffic light systems, finding the best way to control traffic lights without causing disruptions. Other uses are in the supply chain and logistics industries, finding the best way to package and deliver goods. Further uses include helping to train physical robots, creating mechanisms and algorithms by which robots can run and jump.

Goal-driven systems are even being used in e-commerce and advertising, finding optimal prices for goods and automating bids on advertising space. Goal-driven systems are even used in the pharmaceutical industry to perform protein folding and discover new and innovative treatments for illnesses. These systems are capable of selecting the best reagent and reaction parameters in order to achieve the intended product, making it an asset during the complex and delicate drug or therapeutic making process.

Is the goal-driven systems pattern the key to Artificial General Intelligence (AGI)?

The idea of learning through trial-and-error is a potent one, and possibly can be applied to any problem. Notably, DeepMind, the organization that brought to reality the machine that could solve the once-thought unsolvable problem of a machine beating a human Go player, believes that reinforcement learning-based goal-driven systems could be the key to unlocking the ultimate goal of a machine that can learn anything and accomplish any task. The concept of a general intelligence is one that is like our human brain. Rather than being focused on a narrow, single learning task, as is the case with all real-world AI systems today, an artificial general intelligence (AGI) can learn any task and apply learning from one domain to another, without requiring extensive retraining.

DeepMind, established in the United Kingdom and acquired by Google in 2014, is aiming to solve some of the most complicated problems for machine intelligence by pushing the boundaries of what is possible with goal-driven systems and other patterns of AI. Starting with AlphaGo, which was purpose-built to learn how to play the game Go against a human opponent, the company rapidly branched out with AlphaZero, which could learn from scratch any game by playing itself. What had previously taken AlphaGo months to learn, AlphaZero could now do in a matter of days using reinforcement learning. From scratch, with the only goal of increasing its win rate, AlphaZero triumphed over AlphaGo in all 100 test games. AlphaZero had achieved this by simply playing games against itself and learning by trial and error. It is by this simple method that general-learning systems are able to not only create patterns but essentially devise optimal conditions and outcomes for any input given to them. This predictably became the crowning glory of DeepMind and the holy grail of the AI industry.

Naturally, as those in the tech industry have often done with new technology, they turned their minds towards possible real-world applications. AlphaZero was created with the best techniques available at the time, such as machine learning, and by drawing on other domains such as neuroscience and behavioral psychology research. These techniques are channelled into the development of powerful general-purpose learning algorithms, and we might be only years away from a real breakthrough in AGI research.

The AI industry is at a bit of a crossroads with regard to machine learning research. The most widely used algorithms today are solving important, but relatively simple, problems. While machines have proven their ability to recognize images, understand speech, find patterns, spot anomalies, and make predictions, they depend on training data and narrow learning tasks to be able to achieve their tasks with any level of accuracy. In these situations, machine learning is very data and compute hungry. If you have a sufficiently complicated learning task, you might need petabytes or more of training data, hundreds of thousands of dollars of GPU-intensive computing, and months of training. Clearly, the solution to AGI is not achievable through brute-force approaches alone.

The goal-driven systems pattern, while today one of the least implemented of the seven patterns, might hold a key to learning that isn't so data- and compute-intensive. Goal-driven systems are increasingly being implemented in projects with real-life use cases. It is therefore one of the most interesting patterns to look into due to its potential promise.


Case Medical Awarded Patent for Multi Enzymatic Solution for Cleaning Medical Devices and Food Industry Utensils and Surfaces Exposed to Brain Wasting…

Patent is a significant step toward commercializing cleaning products to effectively inactivate and degrade prions

Case Medical today announced that it was awarded U.S. patent number 10,699,513 B2, by the U.S. Patent and Trademark office for "compositions and methods for handling potential prion contamination." The patent is a significant step for the company toward commercializing cleaning products that will enable prion contaminated devices and surfaces to be processed without resorting to the extraordinary methods required today.

Prions are a type of protein that can cause abnormal folding of normal prion proteins, which are most commonly found in the brain but also in the spine, eye, spleen, and lymphoid tissues. Prion diseases are described by the CDC as "a family of rare progressive neurodegenerative disorders that affect both humans and animals. They are distinguished by long incubation periods, characteristic spongiform changes associated with neuronal loss, and a failure to induce inflammatory response." The CDC also indicates that "the abnormal folding of the prion proteins leads to brain damage... Prion diseases are usually rapidly progressive and always fatal."

Prions are transmitted by eating meat infected with prions, but also in healthcare settings through blood transfusions and medical devices, especially surgical instruments, even apparently cleaned devices with residual prion contamination.

"The challenge with prions is that they are almost impossible to detect before a fatal occurrence of the disease and they are also extremely hard to remove from contaminated devices and surfaces," said Marcia Frieze, CEO of Case Medical. "The logical solution would be to make prion decontamination a standard part of medical device processing but the current options are extremely time consuming and so harsh that they significantly reduce the useful life of the devices themselves."

Currently, prion-contaminated materials are either incinerated or pre-treated with sodium hypochlorite, oxidizing agents, peracetic acid, sterilization, or temperatures above 100°C for extended periods of time. These methods and materials are environmentally unfriendly and excessively corrosive to the materials being cleaned. The cleaning solution patented by Case Medical uses a multi enzymatic formulation to achieve a safer, more thorough result and requires much less time and effort, suggesting a feasible process for healthcare settings and the food processing industry.

In brief, Case Medical's formulation uses specific enzymes combined with a surfactant. The enzymes effectively digest or inactivate prions, rendering them ineffective, and the surfactant lowers the level of friction to allow easy rinsing. The process is easy, biodegradable, and environmentally preferred.

"While prion diseases are currently rare and a much bigger issue in Europe than in the U.S., the coronavirus pandemic has hopefully taught us the value of being prepared," said Frieze. "We still have many regulatory steps before we can fully commercialize this product and process, but we are continuing to work as fast as we can."

Testing and validation were performed in conjunction with the U.S. Geological Survey (USGS) through their National Wildlife Health Center at the Class III prion lab in Madison, Wisc.

About Case Medical

Case Medical is an FDA-registered, ISO-certified manufacturer of validated, sustainable, and cost-effective products for instrument processing. Our reusable sterilization containers and instrument chemistries meet the highest standards for patient safety and environmental preference. Case Medical was an inaugural recipient of the U.S. EPA Safer Choice Partner of the Year award. Visit http://www.casemed.com for more information.

View source version on businesswire.com: https://www.businesswire.com/news/home/20200616005865/en/

Contacts

Lisa Forsell, Director of Marketing
Phone: 201-313-1999 x302
Email: lforsell@casemed.com
Web: http://www.casemed.com


After this COVID winter comes an AI spring – VentureBeat

During boom times, companies focus on growth. In tough times, they seek to improve efficiency. History shows us that after every major economic downturn since the 1980s, businesses relied on digital technology and, specifically, innovations in software technology to return to full productivity with fewer repetitive jobs and less bloat.

The years I've spent as a VC have convinced me that this is the best time to start an AI-first enterprise, not despite the recession, but because of it. The next economic recovery will both be driven by artificial intelligence and accelerate its adoption.

While the Great Recession is often thought of as a jobless recovery, economists at the National Bureau of Economic Research (NBER) found that the downturn accelerated the shift from repetitive to non-routine jobs at both the high and low ends of the spectrum. So, yes, existing tasks were automated, but companies empowered their employees with data and analytics to augment their judgment to improve productivity and quality, in a virtuous cycle of data and judgment that both increased profitability and created more rewarding work.

Indeed, the highest levels of unemployment during the Great Recession were followed by a surge in enrollment in post-secondary education in analytics and data science as people sought out opportunities to upskill. And the period was followed by a recovery in which, despite increased automation, unemployment fell to historic lows.

Through no fault of our own, we're again thrust into the cycle of recession and recovery. Industries already expect to benefit from improved AI and machine learning in the next recovery. That expectation will create new opportunities for AI entrepreneurs.

Every economic recovery is defined by an emerging software technology and set of applications.

The companies that grew in the lackluster economy of the early 1980s staged the first software IPOs when the economy rebounded in the middle of that decade: Lotus, Microsoft, Oracle, Adobe, Autodesk and Borland.

Packaged software signified a unique turning point in the history of commercial enterprise; the category required little in the way of either CAPEX or personnel costs. Software companies had gross margins of 80% or more, which gave them amazing resilience to grow or shrink without endangering their existence. If entrepreneurs were willing to work for lower wages, software companies could be started quickly with minimal to no outside investment, and if they could find early product-market fit, they could often bootstrap and grow organically.

Those new software companies were perfectly adapted to foster innovation when recessions hit, because high-quality people were available and less expensive, and office space was abundant. At the same time, established companies put new product development on hold while they tried to service and keep existing customers.

I started working as a VC in 1990 for the first venture firm that focused purely on investing in software, Hummer Winblad. While it took hard work and tenacity for John Hummer and Ann Winblad to raise that first fund, their timing as investors turned out to be perfect. A recession began in the second quarter of that year and lasted through Q1 1991.

The software companies coming out of that recession pioneered cost-effective client-server computing. Sybase, which established this trend with its Open Client-Server Interfaces went public in 1991, after growing 54% in the previous year.

By then, universities had graduated many programmers, creating a talent pool for startups. New software developer platforms made those programmers more productive. The 1990s became the first golden era for enterprise computing. One Hummer Winblad company, Arbor Software, invented the category of Online Analytical Processing (OLAP). Another, Powersoft, became the dominant no-code client-server development platform. It was the industry's first billion-dollar software acquisition.

The first CRM companies, spawned in that recession, held successful IPOs from 1993 to 1999. This class included Remedy, a company that BusinessWeek breathlessly called America's Number One Top Hot Growth Company in 1996. Scopus, Vantive, and Clarify all grew rapidly and went public or were acquired in this period or shortly thereafter.

That exuberance ended with the dot-com bust in March 2000.

At that time, Salesforce had existed for only a year. Concur was a relatively new company, forced to reinvent itself when its packaged software business collapsed. Many people would have thought their timing was terrible, but they were unhindered by the obligation to service an installed base during the 2001 recession that followed the bust. That left them free to innovate, and they became two of the very first SaaS businesses.

Salesforce went public in 2004, and now has a market cap of about $135 billion. In 2014, Concur sold to SAP for $8.3 billion. Amazon Web Services was also conceived during that recession and launched in July 2002. SaaS and cloud computing leveraged each other for the rest of the decade.

When the sub-prime mortgage crisis brought the entire economy down, companies had to retain customers and improve efficiency goals that are often at odds with each other. The idea of a big data future had already taken root, and forward-thinking executives suspected that the solution was already in their data, if they could only find it. But at the same time, established software companies also cut R&D spending. That opened up fertile ground for newer and more agile analytics companies.

Most software companies saw no growth in 2009, but Omniture, a leader in web analytics, grew more than 80% that year, prompting its acquisition by Adobe for $1.9 billion. Tableau had been founded back in 2003, but it grew slowly until the recession. From 2008 to 2010, it grew from $13 million to $34 million in sales. Over the same period Splunk went from $9 million to $35 million. Ayasdi, Cloudera, MapR and Datameer were all launched in the depths of the Great Recession.

Of course, none of those companies could have flourished without data scientists. Just as universities accelerated the creation of software developers in the early 1990s, they again accelerated the creation of analytics experts and data scientists during the Great Recession, which again helped to spur the recovery and drive a decade of economic expansion, job growth, and the longest bull market in American history.

Even before the pandemic, many economists and corporate CFOs felt there was at least a 50% chance of recession in 2020.

Over a year ago, The Parliament, the policy magazine published by the EU Parliament, predicted that the next recession would usher in a wave of AI. The magazine quoted Mirko Draca of the London School of Economics as saying, "We expect to see another technology surge in the next 10 to 15 years, based on AI and robotics technology."

Those who predicted a mere recession were, to say the least, insufficiently pessimistic. Companies have reduced their labor costs more aggressively than ever to match the suddenness and seriousness of the situation. Once again, they'll rely on automation to boost production when the recovery begins.

The Atlantic Council surveyed over 100 technology experts on the impact that COVID-19 would have on global innovation. Even in the midst of the pandemic, those experts felt that over the next two to five years, data and AI would have more impact than medical bioengineering. The two are not mutually exclusive; Google's DeepMind Technologies recently used its AlphaFold tool to predict complex protein folding patterns, useful in the search for a vaccine.

Companies emerging from this recession will adapt processes to vaccinate their systems against the next pandemic. In response to supply-chain disruptions, Volkswagen is considering increasing its 3D printing capabilities in Germany, which would give the automaker a redundant parts source. The government-run Development Bank of Japan will subsidize the costs of companies that move production back to Japan.

Bringing production back onshore while controlling costs will require significant investment in robotics and AI. Even companies that dont have their own production capacity, such as online retailers, plan to use AI to improve the reliability of complex global supply chains. So a surge in demand for AI talent is inevitable.

In 2018, several major universities announced initiatives to develop that talent. MIT announced the largest-ever commitment to AI from a university: a $1 billion initiative to create a College of Computing. Carnegie Mellon created the first bachelor of science in artificial intelligence degree program. UC Berkeley announced a new division of data science. And Stanford announced a human-centered AI initiative.

Dozens more schools have followed suit. Machine learning has moved from obscurity to ubiquity, just as software development did 30 years ago and data science did 10 years ago.

Back in 2017, a couple of my colleagues wrote about the AI risk curve, arguing that the adoption of AI is held back not by technology but by managers' perception of the risks involved in replacing a worker (whose performance is known) with an unfamiliar software process.

Recessions increase the pressure on managers to reduce labor costs, and thus increase their tolerance for the risks associated with adopting new technology. Over the next year or two, companies will be more willing to take risks and integrate new technologies into their infrastructure. But the challenges of surviving in the recession will mean that AI-first companies must deliver measurable improvements in quality and productivity.

One relatively new risk that managers must tolerate pertains to data. Even companies that are not yet exploiting their data effectively now recognize it as a valuable resource. As startups deploy AI software systems that prove more accurate and cost-effective than human beings, their early-adopter customers must be more willing to trust them with proprietary data. That will allow AI companies to train new products and make them even smarter. And in return for taking this risk, companies must make their models more transparent, more easily reproducible, and more explainable to their customers, auditors, and regulators.

In the area of food and agriculture, AI will help us to understand and adapt to a changing climate. In infrastructure and security, machine learning models will improve the efficiency, reliability and performance of cloud infrastructure. Better and more dynamic risk models will help companies and the entire financial market handle the next crisis.

A host of new applied-AI companies will be needed in order to accomplish all this and, especially, AI-enabling companies creating better developer tools and infrastructure, continuous optimization systems, and products that help disciplines improve data quality, security, and privacy.

Boom times favor established companies. They have the cash flow to fund skunkworks and conduct pure research. But it's a truism that R&D spending is one of the first things big companies cut in a recession. As an entrepreneur, you may find the idea of starting a company now, of all times, scary, but that retrenchment by established competitors leaves fresh ground open for you to seed with new ideas.

The first sign of AI spring will come when companies again forecast increased demand and seek to improve productivity. The only way to be there when that opportunity presents itself is to start now.

The best part is that you won't just profit from the recovery; you'll help to create it.

[VentureBeat's Transform 2020 event in July will feature a host of disruptive new AI technologies and companies.]

Mark Gorenberg is founder and managing director at Zetta Venture Partners.

Continued here:
After this COVID winter comes an AI spring - VentureBeat

Site-specific glycan analysis of the SARS-CoV-2 spike – Science Magazine

SARS-CoV-2 spike protein, elaborated

Vaccine development for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is focused on the trimeric spike protein that initiates infection. Each protomer in the trimeric spike has 22 glycosylation sites. How these sites are glycosylated may affect which cells the virus can infect and could shield some epitopes from antibody neutralization. Watanabe et al. expressed and purified recombinant glycosylated spike trimers, proteolysed them to yield glycopeptides containing a single glycan, and determined the composition of the glycan sites by mass spectrometry. The analysis provides a benchmark that can be used to measure antigen quality as vaccines and antibody tests are developed.

Science, this issue p. 330

The emergence of the betacoronavirus, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative agent of coronavirus disease 2019 (COVID-19), represents a considerable threat to global human health. Vaccine development is focused on the principal target of the humoral immune response, the spike (S) glycoprotein, which mediates cell entry and membrane fusion. The SARS-CoV-2 S gene encodes 22 N-linked glycan sequons per protomer, which likely play a role in protein folding and immune evasion. Here, using a site-specific mass spectrometric approach, we reveal the glycan structures on a recombinant SARS-CoV-2 S immunogen. This analysis enables mapping of the glycan-processing states across the trimeric viral spike. We show how SARS-CoV-2 S glycans differ from typical host glycan processing, which may have implications in viral pathobiology and vaccine design.

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the causative pathogen of coronavirus disease 2019 (COVID-19) (1, 2), induces fever, severe respiratory illness, and pneumonia. SARS-CoV-2 uses an extensively glycosylated spike (S) protein that protrudes from the viral surface to bind to angiotensin-converting enzyme 2 (ACE2) to mediate host-cell entry (3). The S protein is a trimeric class I fusion protein, composed of two functional subunits, responsible for receptor binding (S1 subunit) and membrane fusion (S2 subunit) (4, 5). The surface of the envelope spike is dominated by host-derived glycans, with each trimer displaying 66 N-linked glycosylation sites. The S protein is a key target in vaccine design efforts (6), and understanding the glycosylation of recombinant viral spikes can reveal fundamental features of viral biology and guide vaccine design strategies (7, 8).

Viral glycosylation has wide-ranging roles in viral pathobiology, including mediating protein folding and stability and shaping viral tropism (9). Glycosylation sites are under selective pressure as they facilitate immune evasion by shielding specific epitopes from antibody neutralization. However, we note the low mutation rate of SARS-CoV-2 and that, as yet, there have been no observed mutations to N-linked glycosylation sites (10). Surfaces with an unusually high density of glycans can also enable immune recognition (9, 11, 12). The role of glycosylation in camouflaging immunogenic protein epitopes has been studied for other coronaviruses (10, 13, 14). Coronaviruses form virions by budding into the lumen of endoplasmic reticulum-Golgi intermediate compartments (15, 16). However, observations of complex-type glycans on virally derived material suggest that the viral glycoproteins are subjected to Golgi-resident processing enzymes (13, 17).

High viral glycan density and local protein architecture can sterically impair the glycan maturation pathway. Impaired glycan maturation resulting in the presence of oligomannose-type glycans can be a sensitive reporter of native-like protein architecture (8), and site-specific glycan analysis can be used to compare different immunogens and monitor manufacturing processes (18). Additionally, glycosylation can influence the trafficking of recombinant immunogen to germinal centers (19).

To resolve the site-specific glycosylation of the SARS-CoV-2 S protein and visualize the distribution of glycoforms across the protein surface, we expressed and purified three biological replicates of recombinant soluble material in an identical manner to that which was used to obtain the high-resolution cryoelectron microscopy (cryo-EM) structure, albeit without a glycan-processing blockade using kifunensine (4). This variant of the S protein contains all 22 glycans on the SARS-CoV-2 S protein (Fig. 1A). Stabilization of the trimeric prefusion structure was achieved by using the 2P stabilizing mutations (20) at residues 986 and 987, a GSAS (Gly-Ser-Ala-Ser) substitution at the furin cleavage site (residues 682 to 685), and a C-terminal trimerization motif. This helps to maintain quaternary architecture during glycan processing. Before analysis, supernatant containing the recombinant SARS-CoV-2 S was purified by size exclusion chromatography to ensure that only native-like trimeric protein was analyzed (Fig. 1B and fig. S1). The trimeric conformation of the purified material was validated by using negative-stain EM (Fig. 1C).

(A) Schematic representation of the SARS-CoV-2 S glycoprotein. The positions of N-linked glycosylation sequons (N-X-S/T, where X ≠ P) are shown as branches (N, Asn; X, any residue; S, Ser; T, Thr; P, Pro). Protein domains are illustrated: N-terminal domain (NTD), receptor binding domain (RBD), fusion peptide (FP), heptad repeat 1 (HR1), central helix (CH), connector domain (CD), and transmembrane domain (TM). (B) SDS-polyacrylamide gel electrophoresis analysis of the SARS-CoV-2 S protein (indicated by the arrowhead) expressed in human embryonic kidney (HEK) 293F cells. Lane 1: filtered supernatant from transfected cells; lane 2: flow-through from StrepTactin resin; lane 3: wash from StrepTactin resin; lane 4: elution from StrepTactin resin. (C) Negative-stain EM 2D class averages of the SARS-CoV-2 S protein. 2D class averages of the SARS-CoV-2 S protein are shown, confirming that the protein adopts the trimeric prefusion conformation matching the material used to determine the structure (4).

To determine the site-specific glycosylation of SARS-CoV-2 S, we used trypsin, chymotrypsin, and α-lytic protease to generate three glycopeptide samples. These proteases were selected to generate glycopeptides that contain a single N-linked glycan sequon. The glycopeptides were analyzed by liquid chromatography-mass spectrometry, and the glycan compositions were determined for all 22 N-linked glycan sites (Fig. 2). To convey the main processing features at each site, the abundances of each glycan are summed into oligomannose-type, hybrid-type, and categories of complex-type glycosylation based on branching and fucosylation. The detailed, expanded graphs showing the diverse range of glycan compositions are presented in table S1 and fig. S2.

The schematic illustrates the color code for the principal glycan types that can arise along the maturation pathway from oligomannose- to hybrid- to complex-type glycans. The graphs summarize quantitative mass spectrometric analysis of the glycan population present at individual N-linked glycosylation sites simplified into categories of glycans. The oligomannose-type glycan series (M9 to M5; Man9GlcNAc2 to Man5GlcNAc2) is colored green, afucosylated and fucosylated hybrid-type glycans (hybrid and F hybrid) are dashed pink, and complex glycans are grouped according to the number of antennae and presence of core fucosylation (A1 to FA4) and are colored pink. Unoccupancy of an N-linked glycan site is represented in gray. The pie charts summarize the quantification of these glycans. Glycan sites are colored according to oligomannose-type glycan content, with the glycan sites labeled in green (80 to 100%), orange (30 to 79%), and pink (0 to 29%). An extended version of the site-specific analysis showing the heterogeneity within each category can be found in table S1 and fig. S2. The bar graphs represent the mean quantities of three biological replicates, with error bars representing the standard error of the mean.
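To make the categorization concrete, here is a minimal sketch (not the authors' pipeline) of how per-site glycan abundances might be summed into oligomannose-, hybrid-, and complex-type categories and banded by oligomannose content as in the figure; the composition labels and numbers are illustrative placeholders.

```python
# Minimal sketch (not the authors' pipeline): aggregate per-site glycan
# abundances into oligomannose-, hybrid-, and complex-type categories and
# band each site by its oligomannose content, mirroring the classification
# described above. Composition labels and numbers are illustrative.

from collections import defaultdict

def categorize(glycan_label: str) -> str:
    """Map a glycan label to a coarse processing category."""
    if glycan_label.startswith("M"):          # M5 ... M9 -> oligomannose-type
        return "oligomannose"
    if "hybrid" in glycan_label.lower():      # hybrid / F hybrid
        return "hybrid"
    return "complex"                          # A1 ... FA4 and others

def summarize_site(abundances: dict) -> dict:
    """Sum raw abundances per category and report percent of total."""
    totals = defaultdict(float)
    for glycan, value in abundances.items():
        totals[categorize(glycan)] += value
    grand_total = sum(totals.values()) or 1.0
    return {cat: 100.0 * v / grand_total for cat, v in totals.items()}

def oligomannose_band(percent_oligomannose: float) -> str:
    """Color band used in the figures: green 80-100%, orange 30-79%, pink 0-29%."""
    if percent_oligomannose >= 80:
        return "green"
    if percent_oligomannose >= 30:
        return "orange"
    return "pink"

# Hypothetical abundances for a single sequon (arbitrary units).
site_example = {"M9": 120, "M8": 80, "M5": 40, "F hybrid": 10, "FA2": 15}
summary = summarize_site(site_example)
print(summary, oligomannose_band(summary.get("oligomannose", 0.0)))
```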

Two sites on SARS-CoV-2 S are principally oligomannose-type: N234 and N709. The predominant oligomannose-type glycan structure observed across the protein, with the exception of N234, is Man5GlcNAc2 (Man, mannose; GlcNAc, N-acetylglucosamine), which demonstrates that these sites are largely accessible to α-1,2-mannosidases but are poor substrates for GlcNAcT-I, which is the gateway enzyme in the formation of hybrid- and complex-type glycans in the Golgi apparatus. The stage at which processing is impeded is a signature related to the density and presentation of glycans on the viral spike. For example, the more densely glycosylated spikes of HIV-1 Env and Lassa virus (LASV) GPC exhibit numerous sites dominated by Man9GlcNAc2 (21–24).

A mixture of oligomannose- and complex-type glycans can be found at sites N61, N122, N603, N717, N801, and N1074 (Fig. 2). Of the 22 sites on the S protein, 8 contain substantial populations of oligomannose-type glycans, highlighting how the processing of the SARS-CoV-2 S glycans is divergent from host glycoproteins (25). The remaining 14 sites are dominated by processed, complex-type glycans.

Although unoccupied glycosylation sites were detected on SARS-CoV-2 S, when quantified they were revealed to form a very minor component of the total peptide pool (table S2). In HIV-1 immunogen research, the holes generated by unoccupied glycan sites have been shown to be immunogenic and potentially give rise to distracting epitopes (26). The high occupancy of N-linked glycan sequons of SARS-CoV-2 S indicates that recombinant immunogens will not require further optimization to enhance site occupancy.

Using the cryo-EM structure of the trimeric SARS-CoV-2 S protein [Protein Data Bank (PDB) ID 6VSB] (4), we mapped the glycosylation status of the coronavirus spike mimetic onto the experimentally determined three-dimensional (3D) structure (Fig. 3). This combined mass spectrometric and cryo-EM analysis reveals how the N-linked glycans occlude distinct regions across the surface of the SARS-CoV-2 spike.

Representative glycans are modeled onto the prefusion structure of the trimeric SARS-CoV-2 S glycoprotein (PDB ID 6VSB) (4), with one RBD in the up conformation and the other two RBDs in the down conformation. The glycans are colored according to oligomannose content as defined by the key. ACE2 receptor binding sites are highlighted in light blue. The S1 and S2 subunits are rendered with translucent surface representation, colored light and dark gray, respectively. The flexible loops on which the N74 and N149 glycan sites reside are represented as gray dashed lines, with glycan sites on the loops mapped at their approximate regions.

Shielding of the receptor binding sites on the SARS-CoV-2 spike by proximal glycosylation sites (N165, N234, N343) can be observed, especially when the receptor binding domain is in the down conformation. The shielding of receptor binding sites by glycans is a common feature of viral glycoproteins, as observed on SARS-CoV-1 S (10, 13), HIV-1 Env (27), influenza hemagglutinin (28, 29), and LASV GPC (24). Given the functional constraints of receptor binding sites and the resulting low mutation rates of these residues, there is likely selective pressure to use N-linked glycans to camouflage one of the most conserved and potentially vulnerable areas of their respective glycoproteins (30, 31).

We note the dispersion of oligomannose-type glycans across both the S1 and S2 subunits. This is in contrast to other viral glycoproteins; for example, the dense glycan clusters in several strains of HIV-1 Env induce oligomannose-type glycans that are recognized by antibodies (32, 33). In SARS-CoV-2 S, the oligomannose-type structures are likely protected by the protein component, as exemplified by the N234 glycan, which is partially sandwiched between the N-terminal and receptor binding domains (Fig. 3).

We characterized the N-linked glycans on extended flexible loop structures (N74 and N149) and at the membrane-proximal C terminus (N1158, N1173, N1194) that were not resolved in the cryo-EM maps (4). These were determined to be complex-type glycans, consistent with steric accessibility of these residues.

Whereas the oligomannose-type glycan content (28%) (table S2) is above that observed on typical host glycoproteins, it is lower than other viral glycoproteins. For example, one of the most densely glycosylated viral spike proteins is HIV-1 Env, which exhibits ~60% oligomannose-type glycans (21, 34). This suggests that the SARS-CoV-2 S protein is less densely glycosylated and that the glycans form less of a shield compared with other viral glycoproteins, including HIV-1 Env and LASV GPC, which may be beneficial for the elicitation of neutralizing antibodies.

Additionally, the processing of complex-type glycans is an important consideration in immunogen engineering, especially considering that epitopes of neutralizing antibodies against SARS-CoV-2 S can contain fucosylated glycans at N343 (35). Across the 22 N-linked glycosylation sites, 52% are fucosylated and 15% of the glycans contain at least one sialic acid residue (table S2 and fig. S3). Our analysis reveals that N343 is highly fucosylated with 98% of detected glycans bearing fucose residues. Glycan modifications can be heavily influenced by the cellular expression system used. We have previously demonstrated for HIV-1 Env glycosylation that the processing of complex-type glycans is driven by the producer cell but that the levels of oligomannose-type glycans were largely independent of the expression system and are much more closely related to the protein structure and glycan density (36).

Highly dense glycan shields, such as those observed on LASV GPC and HIV-1 Env, feature so-called mannose clusters (22, 24) on the protein surface (Fig. 4). Whereas small mannose-type clusters have been characterized on the S1 subunit of Middle East respiratory syndrome coronavirus (MERS-CoV) S (10), no such phenomenon has been observed for the SARS-CoV-1 or SARS-CoV-2 S proteins. The site-specific glycosylation analysis reported here suggests that the glycan shield of SARS-CoV-2 S is consistent with other coronaviruses and similarly exhibits numerous vulnerabilities throughout the glycan shield (10). Last, we detected trace levels of O-linked glycosylation at Thr323/Ser325 (T323/S325), with over 99% of these sites unmodified (fig. S4), suggesting that O-linked glycosylation of this region is minimal when the structure is native-like.

From left to right, MERS-CoV S (10), SARS-CoV-1 S (10), SARS-CoV-2 S, LASV GPC (24), and HIV-1 Env (8, 21). Site-specific N-linked glycan oligomannose quantifications are colored according to the key. All glycoproteins were expressed as soluble trimers in HEK 293F cells apart from LASV GPC, which was derived from virus-like particles from Madin-Darby canine kidney II cells.

Our glycosylation analysis of SARS-CoV-2 offers a detailed benchmark of site-specific glycan signatures characteristic of a natively folded trimeric spike. As an increasing number of glycoprotein-based vaccine candidates are being developed, their detailed glycan analysis offers a route for comparing immunogen integrity and will also be important to monitor as manufacturing processes are scaled for clinical use. Glycan profiling will therefore also be an important measure of antigen quality in the manufacture of serological testing kits. Last, with the advent of nucleotide-based vaccines, it will be important to understand how those delivery mechanisms affect immunogen processing and presentation.

See the original post:
Site-specific glycan analysis of the SARS-CoV-2 spike - Science Magazine

Disordered proteins follow diverse transition paths as they fold and bind to a partner – Science Magazine

Shedding light on disordered proteins

Disordered proteins often fold as they bind to a partner protein. There could be many different molecular trajectories between the unbound proteins and the bound complex. Most methods to measure transition paths rely on monitoring a single distance, making it difficult to resolve complex pathways. Kim and Chung used fast three-color single-molecule Förster resonance energy transfer (FRET) to simultaneously probe distance changes between the two ends of an unfolded protein and between each end and a probe on the partner protein. They show that binding can be initiated by diverse conformations and that the molecules are held together by non-native interactions as the disordered protein folds. This allows the association to be diffusion limited because most collisions lead to binding.

Science, this issue p. 1253

Transition paths of macromolecular conformational changes such as protein folding are predicted to be heterogeneous. However, experimental characterization of the diversity of transition paths is extremely challenging because it requires measuring more than one distance during individual transitions. In this work, we used fast three-color single-molecule Förster resonance energy transfer spectroscopy to obtain the distribution of binding transition paths of a disordered protein. About half of the transitions follow a path involving strong non-native electrostatic interactions, resulting in a transition time of 300 to 800 microseconds. The remaining half follow more diverse paths characterized by weaker electrostatic interactions and more than 10 times shorter transition path times. The chain flexibility and non-native interactions make diverse binding pathways possible, allowing disordered proteins to bind faster than folded proteins.

Read the original here:
Disordered proteins follow diverse transition paths as they fold and bind to a partner - Science Magazine

Coupling chromatin structure and dynamics by live super-resolution imaging – Science Advances

INTRODUCTION

The three-dimensional organization of the eukaryotic genome plays a central role in gene regulation (1). Its spatial organization has been prominently characterized by molecular and cellular approaches including high-throughput chromosome conformation capture (Hi-C) (2) and fluorescent in situ hybridization (3). Topologically associated domains (TADs), genomic regions that display a high degree of interaction, were revealed and found to be a key architectural feature (4). Direct three-dimensional localization microscopy of the chromatin fiber at the nanoscale (5) confirmed the presence of TADs in single cells but also, among others, revealed great structural variation of chromatin architecture (3). To comprehensively resolve the spatial heterogeneity of chromatin, super-resolution microscopy must be used. Previous work showed that nucleosomes are distributed as segregated, nanometer-sized accumulations throughout the nucleus (6–8) and that the epigenetic state of a locus has a large impact on its folding (9, 10). However, to resolve the fine structure of chromatin, high labeling densities, long acquisition times, and, often, cell fixation are required. This precludes capturing dynamic processes of chromatin in single live cells, yet chromatin moves at different spatial and temporal scales.

The first efforts to relate chromatin organization and its dynamics were made using a combination of photoactivated localization microscopy (PALM) and tracking of single nucleosomes (11). It could be shown that nucleosomes mostly move coherently with their underlying domains, in accordance with conventional microscopy data (12); however, a quantitative link between the observed dynamics and the surrounding chromatin structure could not yet be established in real time. Although it is becoming increasingly clear that chromatin motion and long-range interactions are key to genome organization and gene regulation (13), tools to detect and to define bulk chromatin motion simultaneously at divergent spatiotemporal scales and high resolution are still missing.

Here, we apply deep learning-based PALM (Deep-PALM) for temporally resolved super-resolution imaging of chromatin in vivo. Deep-PALM acquires a single resolved image in a few hundred milliseconds with a spatial resolution of ~60 nm. We observed elongated ~45- to 90-nm-wide chromatin domain blobs. Using a computational chromosome model, we inferred that blobs are highly dynamic entities, which dynamically assemble and disassemble. Consisting of chromatin in close physical and genomic proximity, our chromosome model indicates that blobs, nevertheless, adopt TAD-like interaction patterns when chromatin configurations are averaged over time. Using a combination of Deep-PALM and high-resolution dense motion reconstruction (14), we simultaneously analyzed both structural and dynamic properties of chromatin. Our analysis emphasizes the presence of spatiotemporal cross-correlations between chromatin structure and dynamics, extending several micrometers in space and tens of seconds in time. Furthermore, extraction and statistical mapping of multiple parameters from the dynamic behavior of chromatin blobs show that chromatin density regulates local chromatin dynamics.

Super-resolution imaging of complex and compact macromolecules such as chromatin requires dense labeling of the chromatin fiber to resolve fine features. We use Deep-STORM, a method that uses a deep convolutional neural network (CNN) to predict super-resolution images from stochastically blinking emitters (Fig. 1A; see Materials and Methods) (15). The CNN was trained to specific labeling densities for live-cell chromatin imaging using a photoactivated fluorophore (PATagRFP); we therefore refer to the method as Deep-PALM. We chose three labeling densities (4, 6, and 9 emitters/μm² per frame in the ON-state) to test on the basis of the comparison of simulated and experimental wide-field images (fig. S1A). The CNN trained with 9 emitters/μm² performed significantly worse than the other CNNs and was thus excluded from further analysis (fig. S1B; see Materials and Methods). We applied Deep-PALM to reconstruct an image set of labeled histone protein (H2B-PATagRFP) in human bone osteosarcoma (U2OS) cells using the networks trained on 4 and 6 emitters/μm² per frame (see Materials and Methods). A varying number of predictions by the CNN of each frame of the input series were summed to reconstruct a temporal series of super-resolved images (fig. S1C). The predictions made by the CNN trained with 4 emitters/μm² show large spaces devoid of signal intensity, especially at the nuclear periphery, making this CNN inadequate for live-cell super-resolution imaging of chromatin. While collecting photons from long acquisitions for super-resolution imaging is desirable in fixed cells, Deep-PALM is a live imaging approach. Summing over many individual predictions leads to considerable motion blur and thus loss in resolution. Quantitatively, the Nyquist criterion states that the image resolution R = 2/√(ρτ) depends on ρ, the localization density per second, and the time resolution τ (16). In contrast, motion blur strictly depends on the diffusion constant D of the underlying structure, R = √(4Dτ). There is thus an optimum resolution due to the trade-off between increased emitter sampling and the avoidance of motion blur, which was at a time resolution of 360 ms for our experiments (Fig. 1B and fig. S1D).
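The trade-off can be made concrete with a short sketch. Assuming the two terms above, R = 2/√(ρτ) for localization sampling and R = √(4Dτ) for motion blur, the optimal time resolution is the τ that minimizes the larger of the two; the value of ρ below is a placeholder chosen so that the optimum falls near the 360 ms reported in the text, and D is taken from the figure legend.

```python
# Sketch of the resolution trade-off described in the text: localization
# sampling improves with longer integration time tau, while motion blur
# worsens. The effective resolution is taken here as the worse (larger)
# of the two terms; rho below is an illustrative placeholder.

import numpy as np

def nyquist_resolution(rho, tau):
    """Nyquist-type limit from accumulated localization density (um)."""
    return 2.0 / np.sqrt(rho * tau)

def motion_blur(D, tau):
    """Blur from diffusion of the underlying structure over tau (um)."""
    return np.sqrt(4.0 * D * tau)

rho = 2.4e3      # localizations per um^2 per second (placeholder)
D = 3.4e-3       # diffusion constant in um^2/s (as quoted in the figure legend)
taus = np.linspace(0.03, 2.0, 500)   # candidate time resolutions in seconds

effective = np.maximum(nyquist_resolution(rho, taus), motion_blur(D, taus))
best = taus[np.argmin(effective)]
print(f"optimal time resolution ~{best:.2f} s, "
      f"resolution ~{effective.min()*1e3:.0f} nm")
```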

(A) Wide-field images of U2OS nuclei expressing H2B-PATagRFP are input to a trained CNN, and predictions from multiple input frames are summed to construct a super-resolved image of chromatin in vivo. (B) The resolution trade-off between the prolonged acquisition of emitter localizations (green line) and motion blur due to diffusion of the underlying structure (purple line). For our experimental data, the localization density per second is ρ = (2.4 ± 0.1) μm⁻²s⁻¹, the diffusion constant is D = (3.4 ± 0.8) × 10⁻³ μm²s⁻¹ (see fig. S8B), and the acquisition time per frame is τ = 30 ms. The spatial resolution assumes a minimum (69 ± 5 nm) at a time resolution of 360 ms. (C) Super-resolution images of a single nucleus at time intervals of about 10 s. Scale bars, 2 μm. (D) Magnification of segregated accumulations of H2B within a chromatin-rich region. Scale bar, 200 nm. (E) Magnification of a stable but dynamic structure (arrows) over three consecutive images. Scale bars, 500 nm. (F) Fourier ring correlation (FRC) for super-resolved images resulting in a spatial resolution of 63 ± 2 nm. FRC was conducted on the basis of 332 consecutive super-resolved images from two cells. a.u., arbitrary units.

Super-resolution imaging of H2B-PATagRFP in live cells at this temporal resolution shows a pronounced nuclear periphery, while fluorescent signals in the interior vary in intensity (Fig. 1C). This likely corresponds to chromatin-rich and chromatin-poor regions (8). These regions rearrange over time, reflecting the dynamic behavior of bulk chromatin. Chromatin-rich and chromatin-poor regions were visible not only at the scale of the whole nucleus but also at the resolution of a few hundred nanometers (Fig. 1D). Within chromatin-rich regions, the intensity distribution was not uniform but exhibited spatially segregated accumulations of labeled histones of variable shape and size, reminiscent of nucleosome clutches (6), nanodomains (9, 11), or TADs (17). At the nuclear periphery, prominent structures arise. Certain chromatin structures could be observed for ~1 s, which underwent conformational changes during this period (Fig. 1E). The spatial resolution at which structural elements can be observed (see Materials and Methods) in time-resolved super-resolution data of chromatin was 63 ± 2 nm (Fig. 1E), slightly more optimistic than the theoretical prediction (Fig. 1B) (18).

We compared images of H2B reconstructed from 12 frames (super-resolved images) by Deep-PALM in living cells to super-resolution images reconstructed by 8000 frames of H2B in fixed cells (fig. S2, A and B). Overall, the contrast in the fixed sample appears higher, and the nuclear periphery appears more prominent than in images from living cells. However, in accordance with the previous super-resolution images of chromatin in fixed cells (6, 8, 9, 11, 17) and Deep-PALM images, we observe segregated accumulations of signal throughout the nucleus. Thus, Deep-PALM identifies spatially heterogeneous coverage of chromatin, as previously reported (6, 8, 9, 11, 17). We further monitor chromatin temporally at the nanometer scale in living cells.

To quantitatively assess the spatial distribution of H2B, we developed an image segmentation scheme (see Materials and Methods; fig. S3), which allowed us to segment spatially separated accumulations of H2B signal with high fidelity (note S1 and figs. S4 and S5). Applying our segmentation scheme, ~10,000 separable elements, blob-like structures, were observed for each super-resolved image (166 resolved images per movie; Fig. 2A). The experimental resolution does not enable us to elucidate their origin and formation because tracking of blobs in three dimensions would be necessary to do so (see Discussion). We therefore turned to a transferable computational model introduced by Qi and Zhang (19), which is based on one-dimensional genomics and epigenomics data, including histone modification profiles and binding sites of CTCF (CCCTC-binding factor). To compare our data to the simulations, super-resolution images were generated from the modeled chromosomes. Within these images, we could identify and characterize chromatin blobs analogously to those derived from experimental data (see Materials and Methods; Fig. 2B).

(A) Super-resolved images show blobs of chromatin (left). These blobs are segmented (see Materials and Methods and note S1) and individually labeled by random color (right). Magnifications of the boxed regions are shown. Scale bars, 2 μm (whole nucleus); magnifications, 200 nm. (B) Generation of super-resolution images and blob identification and characterization for a 25-million base pair (Mbp) segment of chromosome 1 from GM12878 cells, as simulated in Qi and Zhang (19). Beads (5-kb genomic length) of a simulated polymer configuration within a 200-nm-thick slab are projected to the imaging plane, resembling experimental super-resolved images of live chromatin. Blobs are identified as on experimental data. (C) From the centroid positions, the NND distributions are computed for up to 40 nearest neighbors (blue to red). The envelope of the k-NND distributions (black line) shows peaks at approximately 95, 235, 335, and 450 nm (red dots). (D) k-NND distributions as in (B) for simulated data. (E) Area distribution of experimental and simulated blobs. The distribution is, in both cases, well described by a lognormal distribution with parameters (3.3 ± 2.8) × 10⁻³ μm² for experimental blobs and (3.1 ± 3.2) × 10⁻³ μm² for simulated blobs (means ± SD). PDF, probability density function. (F) Eccentricity distribution for experimental and simulated chromatin blobs. Selected eccentricity values are illustrated by ellipses with the corresponding eccentricity. Eccentricity values range from 0, describing a circle, to 1, describing a line. Prominent peaks arise because of the discretization of chromatin blobs in pixels. The data are based on 332 consecutive super-resolved images from two cells, in each of which ~10,000 blobs were identified.

For imaged (in living and fixed cells) and modeled chromatin, we first computed the kth nearest-neighbor distance (NND; centroid-to-centroid) distributions, taking into account the nearest 1st to 40th neighbors (Fig. 2C and fig. S2, C and D, blue to red). Centroids of the nearest neighbors are (95 ± 30) nm (means ± SD) apart, consistent with previous and our own super-resolution images of chromatin in fixed cells (9) and slightly further than what was found for clutches of nucleosomes (6). The envelope of all NND distributions (Fig. 2C, black line) shows several weak maxima at ~95, 235, 335, and 450 nm, which roughly coincide with the peaks of the 1st, 7th, 14th, and 25th nearest neighbors, respectively (Fig. 2C, red dots). In contrast, simulated data exhibit a prominent first nearest-neighbor peak at a slightly smaller distance, and higher-order NND distributions decay quickly and appear washed out (Fig. 2D). This hints toward greater levels of spatial organization of chromatin in vivo, which is not readily recapitulated in the used state-of-the-art chromosome model.
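A minimal sketch of the k-NND computation, assuming blob centroids are available as an N×2 array (here synthetic stand-ins); scipy's cKDTree is used for the neighbor queries.

```python
# Sketch (assumptions noted): compute k-nearest-neighbor centroid distances
# for segmented blobs, as used for the k-NND distributions in Fig. 2 (C and D).
# `centroids` stands in for the segmented blob centroids of one frame.

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
centroids = rng.uniform(0, 10_000, size=(10_000, 2))  # nm; synthetic stand-in

k_max = 40
tree = cKDTree(centroids)
# query returns the point itself as neighbor 0, so ask for k_max + 1 neighbors
dists, _ = tree.query(centroids, k=k_max + 1)
knn_dists = dists[:, 1:]              # column j-1 holds the j-th NND per blob

for k in (1, 7, 14, 25):
    print(f"median {k}-th NND: {np.median(knn_dists[:, k - 1]):.0f} nm")
```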

Next, we were interested in the typical size of chromatin blobs. Their area distribution (Fig. 2E) fit a log-normal distribution with parameters (3.3 ± 2.8) × 10⁻³ μm² (means ± SD), which is in line with the area distribution derived from fixed samples (fig. S2E) and modeled chromosomes. Notably, blob areas vary considerably, as indicated by the high SD and the prominent tail of the area distribution toward large values. Following this, we calculated the eccentricity of each blob to resolve their shape (Fig. 2F and fig. S2F). The eccentricity is a measure of the elongation of a region reflecting the ratio of the longest chord of the shape and the shortest chord perpendicular to it (Fig. 2F; illustrated shapes at selected eccentricity values). The distribution of eccentricity values shows an accumulation of values close to 1, with a peak value of ~0.9, which shows that most blobs have an elongated, fiber-like shape and are not circular. In particular, the eccentricity value of 0.9 corresponds to a ratio between the short and long axes of the ellipse of 1:2 (see Materials and Methods), which results, considering the typical area of blobs in experimental and simulated data, in roughly 92-nm-long and 46-nm-wide blobs on average. A highly similar value was found in fixed cells (fig. S2F). The length coincides with the value found for the typical NND [Fig. 2C; (95 ± 30) nm]. However, because of the segregation of chromatin into blobs, their elongated shape, and their random orientation (Fig. 2A), the blobs cannot be closely packed throughout the nucleus. We find that chromatin has a spatially heterogeneous density, occupying 5 to 60% of the nuclear area (fig. S6, A and B), which is supported by a previous electron microscopy study (20).
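As an illustration of how area and eccentricity can be read out from a labeled segmentation, here is a hedged sketch using scikit-image's regionprops on a synthetic binary image; the segmentation scheme itself (see Materials and Methods) is not reproduced here.

```python
# Sketch using scikit-image (assumed available): given a labeled segmentation
# of blobs, extract area and eccentricity per blob and relate eccentricity to
# the short/long axis ratio discussed for the ~0.9 eccentricity peak.

import numpy as np
from skimage.data import binary_blobs
from skimage.measure import label, regionprops

# `binary_blobs` stands in for a segmented super-resolved frame.
labels = label(binary_blobs(length=512, blob_size_fraction=0.02))

pixel_size_um = 0.0135                      # 13.5 nm pixels, as in the text
areas, eccs = [], []
for region in regionprops(labels):
    areas.append(region.area * pixel_size_um**2)   # um^2
    eccs.append(region.eccentricity)

print(f"mean blob area: {np.mean(areas):.4f} um^2")
print(f"median eccentricity: {np.median(eccs):.2f}")

# For an ellipse, eccentricity e relates the short/long axis ratio b/a:
e = 0.9
print(f"axis ratio b/a at e={e}: {np.sqrt(1 - e**2):.2f}")   # roughly the 1:2 ratio quoted in the text
```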

Blob dimensions derived from live-cell super-resolution imaging using Deep-PALM are consistent with those found in fixed cells, thereby further validating our method, and in agreement with previously determined size ranges (6, 9). A previously published chromosome model based on Hi-C data (and thus not tuned to display blob-like structures per se) also displays blobs with dimensions comparable to those found here, in living cells. Together, these data strongly suggest the existence of spatially segregated chromatin structures in the sub-100-nm range.

The simulations make it possible to track each monomer (chromatin locus) unambiguously, which is currently not possible with experimental data. Since the simulations show blobs comparable to those found in experiments (Fig. 2), they can help to indicate possible mechanisms leading to the observation of chromatin blobs. For instance, because of the projection of the nuclear volume onto the imaging plane, the observed blobs could simply be overlays of noninteracting genomic loci that are distant along the one-dimensional genome. To examine this possibility, we analyzed the gap length between beads belonging to the same blob along the simulated chromosome. Beads constitute the monomers of the simulated chromosome, and each bead represents roughly 5 kb (19).
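As a minimal illustration of this gap-length measure (the bead indices below are hypothetical), consecutive member beads give gaps of 1, and larger gaps flag loci that are distant along the genome:

```python
# Sketch: genomic gap lengths between consecutive beads assigned to one blob.
# A blob made entirely of consecutive beads yields gaps of 1 throughout.
def gap_lengths(member_bead_indices):
    ordered = sorted(member_bead_indices)
    return [j - i for i, j in zip(ordered, ordered[1:])]

print(gap_lengths([101, 102, 103, 107, 108]))   # -> [1, 1, 4, 1]
```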

The analysis showed that the blobs are mostly made of consecutive beads along the genome, thus implying an underlying domain-like structure, similar to TADs (Fig. 3A). Using the affiliation of each bead to an intrinsic chromatin state of the model (Fig. 3B), it became apparent that blobs along the simulated chromosome consisting mostly of active chromatin are significantly larger than those formed by inactive and repressive chromatin (Fig. 3C). These findings are in line with experimental results (10) and results from the simulations directly (19), thereby validating the projection and segmentation process.

(A) Gap length between beads belonging to the same blob. An exemplary blob with small gap length is shown. The blob is mostly made of consecutive beads being in close spatial proximity. (B) A representative polymer configuration is colored according to chromatin states (red, active; green, inactive; and blue, repressive). (C) The cumulative distribution function (CDF) of clusters within active, inactive, and repressive chromatin. Inset: Mean area of clusters within the three types of chromatin. The distributions are all significantly different from each other, as determined by a two-sample Kolmogorov-Smirnov test (P < 10⁻⁵⁰). (D) Distribution of the continuous residence time of any monomer within a cluster (0.5 ± 0.3 s; means ± SD). Inset: Continuous residence time of any monomer within a slab of 200-nm thickness (1.5 ± 1.6 s; means ± SD). (E) The blob association strength between any two beads is measured as the frequency at which any two beads are found in one blob. The association map is averaged over all simulated configurations (upper triangular matrix; from simulations), and experimental Hi-C counts are shown for the same chromosome segment [lower triangular matrix; from Rao et al. (40)]. The association and Hi-C maps are strongly correlated [Pearson's correlation coefficient (PCC) = 0.76]. (F) Close-up views around the diagonal of Hi-C-like matrices. The association strength is shown together with the inverse distance between beads (top; PCC = 0.85) and with experimental Hi-C counts [bottom; as in (E)]. The data are based on 20,000 polymer configurations.

Since chromatin is dynamic in vivo and in computer simulations, each bead can diffuse in and out of the imaging volume from frame to frame. We estimated that, on average, each bead spent approximately 1.5 s continuously within a slab of 200-nm thickness (Fig. 3D). Furthermore, a bead is, on average, found only 0.55 ± 0.33 s continuously within a blob, which corresponds to one to two experimental super-resolved images (Fig. 3D). These results suggest that chromatin blobs are highly dynamic entities, which usually form and disassemble within less than 1 s. We thus constructed a time-averaged association map for the modeled chromosomes, quantifying the frequency at which each locus is found with any other locus within one blob. The association map is comparable to interaction maps derived from Hi-C (Fig. 3E). Notably, interlocus association and Hi-C maps are strongly correlated, and the association map shows similar patterns as those identified as TADs in Hi-C maps, even for relatively distant genomic loci [>1 million base pairs (Mbp)]. A similar TAD-like organization is also apparent when the average inverse distance between loci is considered (Fig. 3F, top), suggesting that blobs could be identified in super-resolved images because of the proximity of loci within blobs in physical space. The computational chromosome model indicates that chromatin blobs identified by Deep-PALM are mostly made of continuous regions along the genome and cannot be attributed to artifacts originating from the projection of the three-dimensional genome structure to the imaging plane. The simulations further indicate that the blobs associate and dissociate within less than 1 s, but loci within blobs are likely to belong to the same TAD. Their average genomic content is 75 kb, only a fraction of typical TAD lengths in mammalian cells (average size, 880 kb) (4), suggesting that blobs likely correspond to sub-TADs or TAD nanocompartments (17).
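A hedged sketch of how such a time-averaged association map could be accumulated from per-frame blob assignments (the assignments below are synthetic stand-ins, not the simulation output):

```python
# Sketch (not the authors' code): build a time-averaged "association map"
# by counting how often two loci (beads) fall into the same blob across
# configurations, analogous to Fig. 3E. Blob assignments here are synthetic.

import numpy as np

n_beads, n_frames = 200, 500
rng = np.random.default_rng(2)

association = np.zeros((n_beads, n_beads))
for _ in range(n_frames):
    # Synthetic assignment: contiguous stretches of beads share a blob id;
    # -1 marks beads outside the imaged slab / not in any blob.
    blob_ids = np.repeat(rng.integers(0, 40, size=n_beads // 5), 5)
    blob_ids[rng.random(n_beads) < 0.3] = -1
    for b in np.unique(blob_ids):
        if b < 0:
            continue
        members = np.flatnonzero(blob_ids == b)
        association[np.ix_(members, members)] += 1

association /= n_frames   # frequency of co-occurrence within one blob
print(association.shape, association.max())
```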

To quantify the experimentally observed chromatin dynamics at the nanoscale, down to the size of one pixel (13.5 nm), we used a dense reconstruction of flow fields, optical flow (Fig. 4A; see Materials and Methods), which was previously used to analyze images taken on confocal (12, 14) and structured illumination (8) microscopes. We examined the suitability of optical flow for super-resolution data based on single-molecule localization images using simulations. We find that the accuracy of optical flow is slightly enhanced on super-resolved images compared to conventional fluorescence microscopy images (note S2 and fig. S7, A to C). Experimental super-resolution flow fields are illustrated on the basis of two subsequent images, between which the dynamics of structural features are apparent to the eye (fig. S7, D and E). On the nuclear periphery, connected regions spanning up to ~500 nm can be observed [fig. S7D (i and ii), marked by arrows]. These structures are stable for at least 360 ms but move from frame to frame. The flow field is shown on top of an overlay of the two super-resolved images and color-coded [fig. S7D (iii); the intensity in frame 1 is shown in green, the intensity in frame 2 is shown in purple, and colocalization of both is white]. Displacement vectors closely follow the redistribution of intensity from frame to frame (roughly from green to purple). Similarly, structures within the nuclear interior (fig. S7E) can be followed by eye, thus further validating and justifying the use of a dense motion reconstruction as a quantification tool of super-resolved chromatin motion.
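For readers who want to experiment, a generic dense optical flow (OpenCV's Farneback implementation) can stand in for the dense motion reconstruction of ref. (14); this sketch is an assumption-laden substitute, not the method used in the study.

```python
# Sketch: dense optical flow between two consecutive super-resolved frames.
# The study uses its own dense motion reconstruction (ref. 14); OpenCV's
# Farneback flow is used here only as a generic, readily available stand-in.

import numpy as np
import cv2

def flow_magnitude(frame1: np.ndarray, frame2: np.ndarray, pixel_nm: float = 13.5):
    """Return per-pixel displacement magnitude in nanometers."""
    f1 = cv2.normalize(frame1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    f2 = cv2.normalize(frame2, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(
        f1, f2, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    return np.hypot(flow[..., 0], flow[..., 1]) * pixel_nm

# Usage with two synthetic frames (stand-ins for super-resolved images):
rng = np.random.default_rng(0)
a = rng.random((256, 256)).astype(np.float32)
b = np.roll(a, shift=2, axis=1)      # uniform 2-pixel shift to the right
print(flow_magnitude(a, b).mean())   # roughly the imposed ~27 nm shift, up to estimation error
```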

(A) A time series of super-resolution images (left) is subject to optical flow (right). (B) Blobs of a representative nucleus (see movie S1) are labeled by their NND (left), area (middle), and flow magnitude (right). Colors denote the corresponding parameter magnitude. (C) The average blob area, (D) NND, (E) density, and (F) flow magnitude are shown versus the normalized distance from the nuclear periphery (lower x axis; 0 is at the periphery and 1 is at the center of the nucleus) and versus the absolute distance (upper x axis). Line and shaded area denote the means ± SE from 322 super-resolved images of two cells. Scale bars, (A) and (B): 3 μm.

Using optical flow fields, we linked the spatial appearance of chromatin to its dynamics. Effectively, the blobs were characterized with two structural parameters (NND and area) and their flow magnitude (Fig. 4B). Movie S1 shows the time evolution of those parameters for an exemplary nucleus. Blobs at the nuclear periphery showed a distinct behavior from those in the nuclear interior. In particular, the periphery exhibits a lower density of blobs, but those appear slightly larger and are less mobile than in the nuclear interior (Fig. 4, C to F), in line with previous findings using conventional microscopy (14). The peripheral blobs are reminiscent of dense and relatively immobile heterochromatin and lamina-associated domains (21), which extend only up to 0.5 μm inside the nuclear interior. In contrast, blob dynamics increase gradually within 1 to 2 μm from the nuclear rim.

To further elucidate the relationship between chromatin structure and dynamics, we analyzed the correlation between each pair of parameters in space and time. Therefore, we computed the auto- and cross-correlation of parameter maps with a given time lag across the entire nucleus (in space) (Fig. 5A). In general, a positive correlation denotes a low-low or a high-high relationship (a variable de-/increases when another variable de-/increases), while, analogously, a negative correlation denotes a high-low relationship. The autocorrelation of NND maps [Fig. 5A (i)] shows a positive correlation; thus, regions exist spanning 2 to 4 μm, in which chromatin is either closely packed (low-low) or widely dispersed (high-high). Likewise, blobs of similar size tend to be in spatial proximity [Fig. 5A (iii)]. These regions are not stable over time but rearrange continuously, an observation bolstered by the fact that the autocorrelation diminishes with increasing time lag. The cross-correlation between NND and area [Fig. 5A (ii)] shows a negative correlation for short time lags, suggesting that large blobs appear with a high local density while small ones are more isolated. The correlation becomes slightly positive for time lags ≥ 20 s, indicating that big blobs are present in regions that were sparsely populated before and small blobs tend to accumulate in previously densely populated regions. This is in line with dynamic reorganization and reshaping of chromatin domains on a global scale, as observed in snapshots of the Deep-PALM image series (Fig. 1A).
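One way to obtain such correlation-versus-space-lag curves is an FFT-based cross-correlation of two parameter maps followed by a radial average; the sketch below uses synthetic maps and is only meant to illustrate the computation, not to reproduce the exact estimator of the study.

```python
# Sketch (assumptions noted): spatial cross-correlation between two parameter
# maps (e.g., NND at time t and area at time t + lag), radially averaged to
# give correlation as a function of space lag, loosely following Fig. 5A.

import numpy as np

def radial_cross_correlation(map_a: np.ndarray, map_b: np.ndarray, n_bins: int = 50):
    a = map_a - np.nanmean(map_a)
    b = map_b - np.nanmean(map_b)
    a, b = np.nan_to_num(a), np.nan_to_num(b)
    # FFT-based cross-correlation, normalized to a correlation coefficient at zero lag
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    corr = np.fft.fftshift(corr) / (a.std() * b.std() * a.size)
    # Radial average around the center (zero space lag)
    cy, cx = np.array(corr.shape) // 2
    y, x = np.indices(corr.shape)
    r = np.hypot(y - cy, x - cx)
    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.digitize(r.ravel(), bins) - 1
    profile = np.array([corr.ravel()[which == i].mean() for i in range(n_bins)])
    return 0.5 * (bins[:-1] + bins[1:]), profile

# Usage with synthetic maps standing in for NND / flow-magnitude maps:
rng = np.random.default_rng(1)
base = rng.random((128, 128))
lags, profile = radial_cross_correlation(base, base + 0.1 * rng.random((128, 128)))
print(profile[:5])   # high at short space lags, decaying with distance
```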

(A) The spatial auto- and cross-correlation between parameters were computed for different time lags. The graphs depict the correlation over space lag for each parameter pair, and different colors denote the time lag (increasing from blue to red). (B) Illustration of the instantaneous relationship between local chromatin density and dynamics. The blob density is shown in blue; the magnitude of chromatin dynamics is shown by red arrows. The consistent negative correlation between NND and flow magnitude is expressed by increased dynamics in regions of high local blob density. Data represent the average over two cells. The cells behave similarly such that error bars are omitted for the sake of clarity.

The flow magnitude is positively correlated for all time lags, while the correlation displays a slight increase for time lags ≥ 20 s [Fig. 5A (vi)], which has also been observed previously (8, 12, 22). The spatial autocorrelation of dynamic and structural properties of chromatin are in stark contrast. While structural parameters are highly correlated at short but not at long time scales, chromatin motion is still correlated at a time scale exceeding 30 s. At very short time scales (<100 ms), stochastic fluctuations determine the local motion of the chromatin fiber, while coherent motion becomes apparent at longer times (22). However, there exists a strong cross-correlation between structural and dynamic parameters: The cross-correlation between the NND and flow magnitude shows notable negative correlation at all time lags [Fig. 5A (iv)], strongly suggesting that sparsely distributed blobs appear less mobile than densely packed ones. The area seems to play a negligible role for short time lags, but there is a modest tendency that regions with large blobs tend to exhibit increased dynamics at later time points [≥ 10 s; Fig. 5A (v)], likely due to the strong relationship between area and NND.

In general, parameter pairs involving chromatin dynamics exhibit an extended spatial auto- or cross-correlation (up to ~6 μm; the lower row of Fig. 5A) compared to correlation curves including solely structural parameters (up to 3 to 4 μm). Furthermore, the cross-correlation of flow magnitude and NND does not considerably change for increasing time lag, suggesting that the coupling between those parameters is characterized by an unexpectedly resilient memory, lasting for at least tens of seconds (23). Concomitantly, the spatial correlation of time-averaged NND maps and maps of the local diffusion constant of chromatin for the entire acquisition time enforces their negative correlation at the time scale of ~1 min (fig. S8). Such resilient memory was also proposed by a computational study that observed that interphase nuclei behave similarly to concentrated solutions of unentangled ring polymers (24). Our data support the view that chromatin is mostly unentangled since entanglement would influence the anomalous exponent of genomic loci in regions of varying chromatin density (24). However, our data do not reveal a correlation between the anomalous exponent and the time-averaged chromatin density (fig. S8), in line with our previous results using conventional microscopy (14).

Overall, the spatial cross-correlation between chromatin structure and dynamics indicates that the NND between blobs and their mobility stand in a strong mutual, negative relationship. This relationship, however, concerns chromatin density variations at the nanoscale, but not global spatial density variations such as in euchromatin or heterochromatin (14). These results support a model in which regions with high local chromatin density, i.e., larger blobs are more prevalent and are mobile, while small blobs are sparsely distributed and less mobile (Fig. 5B). Blob density and dynamics in the long-time limit are, to an unexpectedly large extent, influenced by preceding chromatin conformations.

The spatial correlations above were only evaluated pairwise, while the behavior of every blob is likely determined by a multitude of factors in the complex energy landscape of chromatin (19, 22). Here, we aim to take a wider range of available information into account to reveal the principal parameters driving the observed chromatin structure and dynamics. Using a microscopy-based approach, we have access to a total of six relevant structural, dynamic, and global parameters, which potentially shape the chromatin landscape in space and time (Fig. 6A). In addition to the parameters used above, we included the confinement level as a relative measure, allowing the quantification of transient confinement (see Materials and Methods). We further included the bare signal intensity of super-resolved images and, as the only static parameter, the distance from the periphery since it was shown that dynamic and structural parameters show some dependence on this parameter (Fig. 4). We then used t-distributed stochastic neighbor embedding (t-SNE) (25), a state-of-the-art dimensionality reduction technique, to map the six-dimensional chromatin features (the six input parameters) into two dimensions (Fig. 6A and see note S3). The t-SNE algorithm projects data points such that neighbors in high-dimensional space likely stay neighbors in two-dimensional space (25). Visually apparent grouping of points (Fig. 6B) implies that grouped points exhibit great similarity with respect to all input features, and it is of interest to reveal which subset of the input features can explain the similarity among chromatin blobs best. It is likely that points appear grouped because their value of a certain input feature is considerably higher or lower than the corresponding value of other data points. We hence labeled points in t-SNE maps which are smaller than the first quartile point or larger than the third quartile point. Data points falling in either of the low/high partition of one input feature are colored accordingly for visualization (Fig. 6D; blue/red points, respectively). We then assigned a rank to each of the input features according to their nearest-neighbor fraction (n-n fraction): Since the t-SNE algorithm conserves nearest neighbors, we described the extent of grouping in t-SNE maps by the fraction of nearest neighbors, which fall in either one of the subpopulations of low or high points (illustrated in fig. S9). A high n-n fraction (Fig. 6C) therefore indicates that many points marked as low/high are indeed grouped by t-SNE and are therefore similar. The ranking (from low to high n-n fraction) reflects the potency of a given parameter to induce similar behavior between chromatin blobs with respect to all input features.
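A hedged sketch of this analysis using scikit-learn: embed per-blob feature vectors with t-SNE and rank features by how often points flagged as below the first or above the third quartile have nearest neighbors flagged for the same feature (one plausible reading of the n-n fraction; the data below are synthetic).

```python
# Sketch (not the authors' implementation): t-SNE embedding of per-blob
# features and a simple nearest-neighbor-fraction ranking of the features.

import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
feature_names = ["NND", "area", "flow magnitude", "confinement",
                 "intensity", "distance to periphery"]
X = rng.random((2000, len(feature_names)))          # synthetic blob features

embedding = TSNE(n_components=2, init="pca", perplexity=30,
                 random_state=0).fit_transform(X)

# Nearest neighbors in the 2D embedding (skip the point itself)
nbrs = NearestNeighbors(n_neighbors=2).fit(embedding)
_, idx = nbrs.kneighbors(embedding)
nearest = idx[:, 1]

scores = {}
for j, name in enumerate(feature_names):
    q1, q3 = np.quantile(X[:, j], [0.25, 0.75])
    flagged = (X[:, j] < q1) | (X[:, j] > q3)
    # Fraction of flagged points whose nearest neighbor is also flagged
    scores[name] = flagged[nearest][flagged].mean()

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:>22s}: n-n fraction = {score:.2f}")
```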

(A) The six-dimensional parameter space is input to the t-SNE algorithm and projected to two dimensions. (B) The two-dimensional embedding of an exemplary dataset is shown and colored according to the magnitude of each input feature (blue to red; the parameter average is shown in beige). (C) Points below the first (blue) and above the third (red) quartile points of the corresponding parameter are marked, and the parameters are ranked according to the fraction of nearest neighbors that fall in one of the marked regions. (D) Data points marked below the first or above the third quartile points are labeled according to the feature in which they were marked. Priority is given to the feature with the higher n-n fraction if necessary. (E) t-SNE analysis is carried out for each nucleus over the whole time series, and it is counted how often a parameter ranked first. The results are visualized as a pie chart. The NND predominantly ranks first in about two-thirds of all cases. (F) Marked points in (C) and (D) are mapped back onto the corresponding nuclei, and the CDF over space is shown (means ± SE). Pie chart and CDF computations are based on 322 super-resolved images from two cells.

The relative frequency at which each parameter ranked first provides an intuitive feeling for the most influential parameters in the dataset (Fig. 6E). The signal intensity plays a negligible role, suggesting that our data are free of potential artifacts related to the bare signal intensity. Furthermore, the blob area and the distance from the periphery likewise do not considerably shape chromatin blobs. In contrast, the NND between blobs was found to be the main factor inducing the observed characteristics in 67% of all time frames across all nuclei. The flow magnitude and confinement level together rank first in 26% of all cases (11 and 17%, respectively). These numbers suggest that the local chromatin density is a universal key regulator of instantaneous chromatin dynamics. Note that no temporal dependency is included in the t-SNE analysis and, thus, the feature extraction concerns only short-term (360 ms) relationships. The characteristics of roughly one-fourth of all blobs at each time point are mainly determined by similar dynamical features. Mapping chromatin blobs as marked in Fig. 6 (C and D) back to their respective positions inside the nucleus (Fig. 6F) shows that blobs with low/high flow magnitude or confinement level are also markedly grouped in physical space, which is highly reminiscent of coherent motion of chromatin (12). In contrast, blobs with extraordinarily low or high NND were found interspersed throughout the nucleus, in line with spatial correlation analysis between structural and dynamic features (Fig. 5). Our results point toward a large influence of the local chromatin density on the dynamics of chromatin at the scale of a few hundred nanometers and within a few hundred milliseconds. At longer time and length scales, however, previous results suggest that this relationship is lost (14).

With Deep-PALM, we present temporally resolved super-resolution images of chromatin in living cells. Our technique identified chromatin nanodomains, named blobs, which mostly have an elongated shape, consistent with the curvilinear arrangement of chromatin as revealed by structured illumination microscopy (8), with typical axis lengths of 45 to 90 nm. A previous study reported ~30-nm-wide clutches of nucleosomes in fixed mammalian cells using STORM nanoscopy (6), while the larger value obtained using Deep-PALM may be attributed to the motion blurring effect in live-cell imaging. However, histone acetylation and methylation marks were shown to form nanodomains of diameter 60 and 140 nm, respectively (9), which includes the computed dimensions for histone H2B using Deep-PALM.

To elucidate the origin of chromatin blobs, we turned to a simulated chromosome model, which displays chromatin blobs similar to our experimental data when seen in a super-resolution reconstruction. The simulations suggest that chromatin blobs consist of continuous genomic regions with an average length of 75 kb, assembling and disassembling dynamically within less than 1 s. Monomers within blobs display a distinct TAD-like association pattern in the long-time limit, suggesting that the identified blobs represent sub-TADs. Transient formation is consistent with recent findings that TADs are not stable structural elements but exhibit extensive heterogeneity and dynamics (3, 5). To experimentally probe the transient assembly of chromatin blobs, it would be interesting to track individual blobs over time. However, this is a nontrivial task. While the size (area/volume) or shape of blobs could be used to establish correspondences between blobs in subsequent frames, the framework needs to be flexible enough to allow for blob deformations since blobs likely arise stochastically and are not rigid bodies. Achieving an even shorter acquisition time per frame in the future could help minimize the influence of blob deformations and make tracking feasible. The second challenge is to distinguish between disassembly and out-of-focus diffusion of a blob. Three-dimensional imaging at sufficient spatial and temporal resolution will be needed to overcome this hurdle.

Using an optical flow approach to determine the blob dynamics instead, we found that structural and dynamic parameters exhibit extended spatial and temporal (cross-) correlations. Structural parameters such as the local chromatin density (expressed as the NND between blobs) and area lose their correlation after 3 to 4 µm and roughly 40 s in the spatial and temporal dimension, respectively. In contrast, chromatin mobility correlations extend over ~6 µm and persist during the whole acquisition period (40 s). Extensive spatiotemporal correlation of chromatin dynamics has been presented previously, both experimentally (12) and in simulations (22), but was not linked to the spatiotemporal behavior of the underlying chromatin structure until now. We found that the chromatin dynamics are closely linked not only to the instantaneous but also to past local structural characteristics of chromatin. In other words, the instantaneous local chromatin density influences chromatin dynamics in the future and vice versa. On the basis of these findings, we suggest that chromatin dynamics exhibit an extraordinarily long memory. This strong temporal relationship might be established by the fact that stress propagation is affected by the folded chromosome organization (26). Fiber displacements cause structural reconfiguration, ultimately leading to a local amplification of chromatin motion in local high-density environments. This observation is also supported by the fact that increased nucleosome mobility grants chromatin accessibility even within regions of high nucleosome density (27).

Given the persistence with which correlations of chromatin structure and, foremost, dynamics occur in a spatiotemporal manner, we speculate that the interplay of chromatin structure and dynamics could involve a functional relationship (28): Transcriptional activity is closely linked to chromatin accessibility and the epigenomic state (29). Because chromatin structure and dynamics are related, dynamics could also correlate with transcriptional activity (14, 30, 31). However, it is currently unknown whether the structure-dynamics relationship revealed here is strictly mutual or whether it may be causal. Simulations hint that chromatin dynamics follow from structure (22, 23); this question will be exciting to answer experimentally, also in the light of active chromatin remodelers, to elucidate a potential functional relationship to transcription. Chromatin regions that are switched from inactive to actively transcribing, for instance, undergo a structural reorganization accompanied by epigenetic modifications (32). The mechanisms driving recruitment of enzymes inducing histone modifications, such as histone acetyltransferases, deacetylases, or methyltransferases, are largely unknown but often involve association with proteins (33). Their accessibility to the chromatin fiber is, inter alia, determined by local dynamics (27). Such a structure-dynamics feedback loop would constitute a quick and flexible way to transiently alter gene expression patterns upon reaction to external stimuli or to coregulate distant genes (1). Future work will study how structure-dynamics correlations differ in regions of different transcriptional activity and/or epigenomic states. Furthermore, probing the interactions between key transcriptional machines such as RNA polymerases and the local chromatin structure, and recording their (possibly collective) dynamics, could shed light on the target search and binding mechanisms of RNA polymerases with respect to the local chromatin structure. Deep-PALM in combination with optical flow paves the way to answering these questions by enabling the analysis of time-resolved super-resolution images of chromatin in living cells.

Human osteosarcoma U2OS cells expressing H2B-PATagRFP were a gift from S. Huet (CNRS, UMR 6290, Institut Génétique et Développement de Rennes, Rennes, France); the histone H2B was cloned as described previously (34). U2OS cells were cultured in Dulbecco's modified Eagle's medium [with glucose (4.5 g/liter)] supplemented with 10% fetal bovine serum (FBS), 2 mM glutamine, penicillin (100 µg/ml), and streptomycin (100 U/ml) in 5% CO2 at 37°C. Cells were plated 24 hours before imaging on 35-mm petri dishes with a no. 1.5 coverslip-like bottom (ibidi, Biovalley) at a density of 2 × 10^5 cells per dish. Just before imaging, the growth medium was replaced by Leibovitz's L-15 medium (Life Technologies) supplemented with 20% FBS, 2 mM glutamine, penicillin (100 µg/ml), and streptomycin (100 U/ml).

Imaging of H2B-PATagRFP in living U2OS cells was carried out on a fully automated Nikon Ti-E/B PALM (Nikon Instruments) microscope. The microscope is equipped with a full incubator enclosure with gas regulation to maintain a temperature of ~37°C for normal cell growth during live-cell imaging. Image sequences of 2000 frames were recorded with an exposure time of 30 ms per frame (33.3 frames/s). For Deep-PALM imaging, a relatively low power (~50 W/cm² at the sample) was applied for H2B-PATagRFP excitation at 561 nm and was combined with 405-nm illumination (~2 W/cm² at the sample) to photoactivate the molecules. Note that for Deep-PALM imaging, switched fluorophores are not required to stay in the dark state as long as for conventional PALM imaging. We used oblique illumination microscopy (11) combined with total internal reflection fluorescence (TIRF) mode to illuminate a thin layer of 200 nm (axial resolution) across the nucleus. The reconstruction of super-resolved images improves the axial resolution only marginally (fig. S1, E and F). Laser beam powers were controlled by acousto-optic modulators (AA Opto-Electronic). Both wavelengths were coupled into an oil immersion 1.49-NA (numerical aperture) TIRF objective (100×; Nikon). An oblique illumination was applied to acquire image series with a high signal-to-noise ratio. The fluorescence emission signal was collected through the same objective and spectrally filtered by a Quad-Band beam splitter (ZT405/488/561/647rpc-UF2, Chroma Technology) with a Quad-Band emission filter (ZET405/488/561/647m-TRF, Chroma Technology). The signal was recorded on an electron-multiplying charge-coupled device camera (Andor iXon X3 DU-897, Andor Technology) with a pixel size of 108 nm. The Perfect Focus System was applied to correct for axial drift (defocusing) during acquisition. NIS-Elements software was used for acquiring the images.

The same cell line (U2OS cells expressing H2B-PATagRFP) as in live-cell imaging was used for conventional PALM imaging. Before fixation, cells were washed with phosphate-buffered saline (PBS) (three times for 5 min each) and then fixed with 4% paraformaldehyde (Sigma-Aldrich) diluted in PBS for 15 min at room temperature. A movie of 8000 frames was acquired with an exposure time of 30 ms per frame (33.3 frames/s). In comparison to Deep-PALM imaging, a relatively higher 561-nm excitation power (~60 W/cm² at the sample) was applied to photobleach H2B-PATagRFP and was combined with 405-nm illumination (~2.5 W/cm² at the sample) for photoactivating the molecules. We used the same oblique illumination microscopy combined with TIRF system as applied in live-cell imaging.

PALM images from fixed cells were analyzed using ThunderSTORM (35). Super-resolution images were constructed by binning emitter localizations into 13.5 × 13.5 nm pixels and were blurred by a Gaussian to match Deep-PALM images. The image segmentation was carried out as on images from living cells (see below).

The CNN was trained using simulated data following Nehme et al. (15) for three labeling densities (4, 6, and 9 emitters/µm² per frame). Raw imaging data were checked for drift, as previously described (12). The detected drift in raw images is in the range of <10 nm and therefore negligible. The accuracy of the trained net was evaluated by constructing ground truth images from the simulated emitter positions. The structural similarity index (SSIM) is computed to assess the similarity between reconstructed and ground truth images (36):

$$\mathrm{SSIM} = \sum_{x,y} \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \quad (1)$$

where x and y are windows of the predicted and ground truth images, respectively, μx, μy and σx, σy denote their local means and SDs, respectively, and σxy denotes their cross-variance. C1 = (0.01L)² and C2 = (0.03L)² are regularization constants, where L is the dynamic range of the input images. The second quantity to assess CNN accuracy is the root mean square error between the ground truth image G and the reconstructed image R:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{N} (R - G)^2} \quad (2)$$

where N is the number of pixels in the images. After training, sequences of experimental images were processed by the trained network, and predictions of single Deep-PALM images were summed to obtain a final super-resolved image. An up-sampling factor of 8 was used, resulting in an effective pixel size of 108 nm/8 = 13.5 nm. A blind/referenceless image spatial quality evaluator (37) was used to determine the optimal number of predictions to be summed. For visualization, super-resolved images were convolved with a Gaussian kernel (σ = 1 pixel) and represented using a false red, green, and blue colormap. The parameters of the three trained networks are available at https://github.com/romanbarth/DeepPALM-trained-models.
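
For readers who want to reproduce these two quality metrics, the following is a minimal Python sketch (not the authors' code). It assumes `pred` and `gt` are 2D NumPy arrays of equal shape and relies on scikit-image's `structural_similarity`, which uses the same C1 = (0.01L)² and C2 = (0.03L)² constants but reports the mean rather than the sum over windows.

```python
import numpy as np
from skimage.metrics import structural_similarity

def reconstruction_metrics(pred, gt, L=None):
    """Compare a reconstructed super-resolution image to its ground truth.

    pred, gt : 2D float arrays of identical shape.
    L        : dynamic range of the images; defaults to the ground-truth range.
    """
    L = (gt.max() - gt.min()) if L is None else L
    # SSIM with C1 = (0.01 L)^2 and C2 = (0.03 L)^2, cf. Eq. 1
    # (scikit-image averages over windows instead of summing)
    ssim = structural_similarity(gt, pred, data_range=L)
    # Root mean square error over all N pixels, cf. Eq. 2
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    return ssim, rmse
```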

Fourier ring correlation (FRC) is an unbiased method to estimate the spatial resolution in microscopy images. We follow an approach similar to the one described by Nieuwenhuizen et al. (38). For localization-based super-resolution techniques, the set of localizations is divided into two statistically independent subsets, and two images from these subsets are generated. The FRC is computed as the statistical correlation of the Fourier transforms of both subimages over the perimeter of circles of constant frequency in the frequency domain. Deep-PALM, however, does not result in a list of localizations, but in predicted images directly. The set of 12 predictions from Deep-PALM were thus split into two statistically independent subsets, and the method described by Nieuwenhuizen et al. (38) was applied.
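
A minimal sketch of the splitting-and-correlation step is given below, assuming the Deep-PALM predictions have already been divided and summed into two statistically independent images `img1` and `img2`; the binning scheme and the function name are our own illustration, not the implementation of ref. (38).

```python
import numpy as np

def fourier_ring_correlation(img1, img2, n_bins=100):
    """Fourier ring correlation between two independent reconstructions.

    Returns the radial frequency bins and the FRC curve; the resolution can be
    read off where the curve drops below a chosen threshold (e.g., 1/7).
    """
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))

    # Radial spatial frequency of every Fourier-space pixel
    ny, nx = img1.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx // 2, y - ny // 2)

    bins = np.linspace(0, r.max(), n_bins + 1)
    which = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)

    # Accumulate the cross-spectrum and the two power spectra per ring
    num = np.zeros(n_bins, dtype=complex)
    den1 = np.zeros(n_bins)
    den2 = np.zeros(n_bins)
    np.add.at(num, which, (f1 * np.conj(f2)).ravel())
    np.add.at(den1, which, (np.abs(f1) ** 2).ravel())
    np.add.at(den2, which, (np.abs(f2) ** 2).ravel())

    frc = np.real(num) / np.sqrt(den1 * den2 + 1e-12)
    return bins[:-1], frc
```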

The super-resolved images displayed isolated regions of accumulated emitter density. To quantitatively assess the structural information implied by this accumulation of emitters in the focal plane, we developed a segmentation scheme that aims to identify individual blobs (fig. S3). A marker-assisted watershed segmentation was adapted to accurately determine blob boundaries. For this purpose, we use the raw predictions from the deep CNN without convolution (fig. S3A). The foreground in this image is marked by regional maxima and pixels with very high density (i.e., those with I > 0.99 Imax; fig. S3B). Since blobs are characterized by surrounding pixels of considerably lower density, the Euclidean distance transform is computed on the binary foreground markers. Background pixels (i.e., those pixels not belonging to any blobs) are expected to lie far away from any blob center, and thus a good estimate for background markers are those pixels lying furthest from any foreground pixel. We hence compute the watershed transform on the distance transform of the foreground markers, and the resulting watershed lines depict background pixels (fig. S3C). Equipped with fore- and background markers (fig. S3D), we apply a marker-controlled watershed transform on the gradient of the input image (fig. S3E). The marker-controlled watershed imposes minima on marker pixels, preventing the formation of watershed lines across marker pixels. Therefore, the marker-controlled watershed accurately detects boundaries and blobs that might not have been previously marked as foreground (fig. S3F). Last, spurious blobs whose median or mean intensity is below 10% of the maximum intensity are discarded, and each blob is assigned a unique label for further correspondence (fig. S3G). The area and centroid position are computed for each identified blob for further analysis. This automated segmentation scheme performs considerably better than other state-of-the-art algorithms for image segmentation because of the reliable identification of fore- and background markers accompanied by the watershed transform (note S1).
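
The pipeline above maps fairly directly onto standard image-processing primitives. The sketch below, using SciPy and scikit-image, is an illustrative approximation of the described marker-controlled watershed rather than the authors' implementation; the thresholds follow the text, but details such as the choice of gradient filter are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.measure import label, regionprops
from skimage.morphology import local_maxima
from skimage.segmentation import watershed

def segment_blobs(pred, min_rel_intensity=0.10):
    """Marker-controlled watershed segmentation of chromatin blobs in a raw
    (unconvolved) Deep-PALM prediction `pred` (2D float array)."""
    # Foreground markers: regional maxima plus very bright pixels (I > 0.99 Imax)
    foreground = local_maxima(pred) | (pred > 0.99 * pred.max())
    fg_labels = label(foreground)

    # Background markers: watershed lines of the distance transform, i.e.
    # pixels lying furthest from any foreground marker
    distance = ndi.distance_transform_edt(~foreground)
    ridge = watershed(distance, markers=fg_labels, watershed_line=True)
    background = ridge == 0

    # Marker-controlled watershed on the image gradient
    markers = np.zeros_like(fg_labels)
    markers[background] = 1                      # background gets label 1
    markers[foreground] = fg_labels[foreground] + 1
    blobs = watershed(sobel(pred), markers=markers)
    blobs[blobs == 1] = 0                        # drop the background label

    # Discard spurious blobs whose mean intensity is below 10% of the maximum
    keep = np.zeros_like(blobs)
    for region in regionprops(blobs, intensity_image=pred):
        if region.mean_intensity >= min_rel_intensity * pred.max():
            keep[blobs == region.label] = region.label
    return keep
```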

Centroid position, area, and eccentricity were computed. The eccentricity is computed by describing the blobs as an ellipse:

$$E = \sqrt{1 - a^2/b^2} \quad (3)$$

where a and b are the short and long axes of the ellipse, respectively.

We chose to use a computational chromatin model, recently introduced by Qi and Zhang (19), to elucidate the origin of experimentally determined chromatin blobs. Each bead of the model covers a sequence length of 5 kb and is assigned 1 of 15 chromatin states to distinguish promoters, enhancers, quiescent chromatin, etc. Starting from the simulated polymer configurations, we consider monomers within a 200-nm-thick slab through the center of the simulated chromosome. To generate super-resolved images like those from the Deep-PALM analysis, fluorescence intensity is ascribed to each monomer. Monomer positions are subsequently discretized on a grid with 13.5-nm spacing and convolved with a narrow point-spread function, which results in images closely resembling experimental Deep-PALM images of chromatin. Chromatin blobs were then identified and characterized as on experimental data (Fig. 2, A and B). Mapping back the association of each bead to a blob (if any) allows us to analyze principles of blob formation and maintenance using the distance and the association strength between each pair of monomers, averaged over all 20,000 simulated polymer configurations.
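
As an illustration of how simulated monomer positions can be turned into Deep-PALM-like images, here is a hedged sketch; the 13.5-nm grid follows the text, while the field of view and the PSF width are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_monomers(xy_nm, pixel_nm=13.5, psf_sigma_nm=20.0, field_nm=3000.0):
    """Histogram monomer positions (in nm) onto a 13.5-nm grid and blur with a
    narrow Gaussian point-spread function to mimic a Deep-PALM reconstruction.

    xy_nm : (n_monomers, 2) array of in-slab positions in nanometers.
    """
    n_px = int(field_nm / pixel_nm)
    img, _, _ = np.histogram2d(
        xy_nm[:, 1], xy_nm[:, 0],               # rows = y, columns = x
        bins=n_px, range=[[0, field_nm], [0, field_nm]],
    )
    # Assumed PSF width of ~20 nm; the text only specifies a "narrow" PSF
    return gaussian_filter(img, sigma=psf_sigma_nm / pixel_nm)
```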

The radial distribution function g(r) (also called the pair correlation function) is calculated (in two dimensions) by counting the number of blobs in an annulus of radius r and thickness dr. The result is normalized by the bulk density ρ = n/A, with n the total number of blobs and A the area of the nucleus, and by the area of the annulus, 2πr dr:

$$dn(r) = g(r)\,\rho\,2\pi r\,dr \quad (4)$$
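
A compact way to compute Eq. 4 from blob centroid positions is sketched below; this is our own illustration, and the bin width and maximum radius are arbitrary choices (edge effects near the nuclear boundary are ignored).

```python
import numpy as np
from scipy.spatial.distance import pdist

def radial_distribution(centroids, area, dr=0.05, r_max=3.0):
    """Two-dimensional pair correlation g(r) of blob centroids, cf. Eq. 4.

    centroids : (n, 2) array of positions in µm
    area      : nuclear area in µm², used for the bulk density rho = n / A
    """
    n = len(centroids)
    rho = n / area
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(pdist(centroids), bins=edges)
    counts = 2.0 * counts / n            # each pair counted once -> per reference blob
    r = 0.5 * (edges[:-1] + edges[1:])
    shell_area = 2.0 * np.pi * r * dr    # area of each annulus
    return r, counts / (rho * shell_area)
```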

Super-resolved images of chromatin showed spatially distributed blobs of varying size, but the resolved structure is too dense for state-of-the-art single-particle tracking methods. Furthermore, blobs are highly dynamic structures, assembling and disassembling within one to two super-resolved frames (Fig. 3D), which makes a single-particle tracking approach unsuitable. Instead, we used optical flow, a method for reconstructing the dynamics of bulk macromolecules with dense labeling. Optical flow builds on the computation of flow fields between two successive frames of an image series. The integration of these flow fields from super-resolution images results in trajectories displaying the local motion of bulk chromatin with high temporal and spatial resolution. Further, the trajectories are classified into various diffusion models, and parameters describing the underlying motion are computed (14). Here, we use the effective diffusion coefficient D (in units of µm²/s), which reflects the magnitude of displacements between successive frames (the velocity of particles or monomers in the continuous limit), and the anomalous exponent α (14). The anomalous exponent reflects whether the diffusion is free (α = 1, e.g., for noninteracting particles in solution), directed (α > 1, e.g., as the result of active processes), or hindered (α < 1, e.g., because of obstacles or an effective back-driving force). Furthermore, we compute the length of constraint Lc, which is defined as the SD of the trajectory positions with respect to their time-averaged position. Denoting R(t; R0) the trajectory at time t originating from R0, the expression reads Lc(R0) = var(R(t; R0))^{1/2}, where var denotes the variance. The length of constraint is a measure of the length scale explored by the monomer during the observation period. A complementary measure is the confinement level (39), which computes the inverse of the variance of displacements within a sliding window of length ω: C ∝ var(R(t; R0))^{-1}, where the sliding window length ω is set to four frames (1.44 s). Larger values of C denote a more confined state than small ones.
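
The two trajectory-level quantities defined above can be computed directly from a reconstructed trajectory. The sketch below is one possible interpretation in which the positional variance is summed over the x and y coordinates; the four-frame window follows the text, while the function names are ours.

```python
import numpy as np

def length_of_constraint(traj):
    """Lc: SD of the trajectory positions around their time-averaged position.

    traj : (T, 2) array of positions over time for one starting point R0.
    """
    return np.sqrt(np.var(traj - traj.mean(axis=0), axis=0).sum())

def confinement_level(traj, window=4):
    """Confinement level C ~ 1 / var(R(t)) within a sliding window (here 4 frames)."""
    C = np.empty(len(traj) - window)
    for i in range(len(C)):
        seg = traj[i:i + window]
        C[i] = 1.0 / np.var(seg - seg.mean(axis=0), axis=0).sum()
    return C
```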

The NND and the area, as well as the flow magnitude, were calculated and assigned to the blob's centroid position. To calculate the spatial correlation between parameters, the parameters were interpolated from the scattered centroid positions onto a regular grid spanning the entire nucleus. Because not every pixel in the original super-resolved images is assigned a parameter value, we chose an effective grid spacing of five pixels (67.5 nm) for the interpolated parameter maps. After interpolation, the spatial correlation was computed between parameter pairs: Let r = (x, y)^T denote a position on a regular two-dimensional grid and f(r, t) and g(r, t) two scalar fields with mean zero and variance one, at time t on that grid. The time series of parameter fields consist of N time points. The spatial cross-correlation between the fields f and g, which lie a time lag τ apart, is then calculated as

$$C(\rho, \tau) = \frac{1}{N}\sum_t \frac{\sum_{x,y} f(r, t)\, g(r + \rho, t + \tau)}{\sum_{x,y} f(r, t)\, g(r, t + \tau)} \quad (5)$$

where the space lag is a two-dimensional vector ρ = (ρx, ρy)^T. The sums in the numerator and denominator are taken over the spatial dimensions; the outer sum is taken over time. The average is thus taken over all time points that are compliant with the time lag τ. Subsequently, the radial average in space is taken over the correlation, thus effectively calculating the spatial correlation C(ρ, τ) over the space lag ρ = (ρx² + ρy²)^{1/2}. If f = g, the spatial autocorrelation is computed.
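
Equation 5 translates almost literally into code. The sketch below evaluates C(ρ, τ) for a single space lag on z-scored parameter fields; it is our own illustration, and the periodic shift used here is only an approximation of the spatial offset r + ρ.

```python
import numpy as np

def spatial_cross_correlation(f, g, rho, tau):
    """Spatial cross-correlation C(rho, tau) of two parameter fields, cf. Eq. 5.

    f, g : arrays of shape (T, ny, nx), each field with zero mean and unit variance
    rho  : space lag (dy, dx) in grid pixels; tau : time lag in frames
    """
    dy, dx = rho
    n_usable = f.shape[0] - tau
    vals = []
    for t in range(n_usable):
        g_t = g[t + tau]
        # periodic shift as an approximation of sampling g at r + rho
        g_shifted = np.roll(np.roll(g_t, dy, axis=0), dx, axis=1)
        num = np.sum(f[t] * g_shifted)
        den = np.sum(f[t] * g_t)
        vals.append(num / den)
    # average over all time points compliant with the lag tau
    return np.mean(vals)
```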

We denote as global parameters those that reflect the structural and dynamic behavior of chromatin spatially resolved in a time-averaged manner. Examples include the diffusion constant, the anomalous exponent, and the length of constraint, but also time-averaged NND maps, etc. (fig. S8). Those parameters are useful to determine time-universal characteristics. The spatial correlation between those parameters is equivalent to the expression given for temporally varying parameters when the temporal dimension is omitted, effectively resulting in a correlation curve C(ρ).

The distance from the periphery, the intensity, the NND, the area, the flow magnitude, and the confinement level of each identified blob form the six-dimensional input feature space for the t-SNE analysis. The parameters for each blob (n = 3,260,232; divided into subsets of approximately 10,000) were z-transformed before the t-SNE analysis. The t-SNE analysis was performed using MATLAB and the Statistics and Machine Learning Toolbox (Release 2017b; The MathWorks Inc., Natick, MA, USA) with the Barnes-Hut approximation. The algorithm was tested using different distance metrics and perplexity values and showed robust results within the examined ranges (note S3 and fig. S10).
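
The text describes MATLAB's Barnes-Hut t-SNE; an equivalent sketch in Python with scikit-learn is shown below. The subset size mirrors the text, while the perplexity and random seed are assumptions.

```python
import numpy as np
from scipy.stats import zscore
from sklearn.manifold import TSNE

def embed_blob_features(features, perplexity=30, subset=10_000, seed=0):
    """Barnes-Hut t-SNE embedding of the six-dimensional blob feature space.

    features : (n_blobs, 6) array with columns distance-to-periphery, intensity,
               NND, area, flow magnitude, and confinement level.
    Returns the indices of the sampled subset and their 2D embedding.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(features), size=min(subset, len(features)), replace=False)
    X = zscore(features[idx], axis=0)            # z-transform each feature
    tsne = TSNE(n_components=2, perplexity=perplexity, method="barnes_hut")
    return idx, tsne.fit_transform(X)
```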

Acknowledgments: We acknowledge support from the Pôle Scientifique de Modélisation Numérique, ENS de Lyon for providing computational resources. We thank B. Zhang (Massachusetts Institute of Technology) for providing data of simulated chromosomes and S. Kocanova (LBME, CBI-CNRS; University of Toulouse) for providing PALM videos for fixed cells. We thank H. Babcock (Harvard University), A. Seeber (Harvard University), and M. Tamm (Moscow State University) for valuable feedback on the manuscript. Funding: This publication is based upon work from COST Action CA18127, supported by COST (European Cooperation in Science and Technology). This work is supported by Agence Nationale de la Recherche (ANR) ANDY and Sinfonie grants. Author contributions: H.A.S. designed and supervised the project. R.B. designed the data analysis and wrote the code. H.A.S. carried out experimental work. R.B. carried out the data analysis. H.A.S. and R.B. interpreted results. H.A.S., R.B., and K.B. wrote the manuscript. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials. Additional data related to this paper may be requested from the authors.

View post:
Coupling chromatin structure and dynamics by live super-resolution imaging - Science Advances

Why the buzz around DeepMind is dissipating as it transitions from games to science – CNBC

Google Deepmind head Demis Hassabis speaks during a press conference ahead of the Google DeepMind Challenge Match in Seoul on March 8, 2016.

Jung Yeon-Je | AFP | Getty Images

In 2016, DeepMind, an Alphabet-owned AI unit headquartered in London, was riding a wave of publicity thanks to AlphaGo, its computer program that took on the best player in the world at the ancient Asian board game Go and won.

Photos of DeepMind's leader, Demis Hassabis, were splashed across the front pages of newspapers and websites, and Netflix even went on to make a documentary about the five-game Go match between AlphaGo and world champion Lee Sedol. Fast-forward four years, and things have gone surprisingly quiet around DeepMind.

"DeepMind has done some of the most exciting things in AI in recent years. It would be virtually impossible for any company to sustain that level of excitement indefinitely," said William Tunstall-Pedoe, a British entrepreneur who sold his AI start-up Evi to Amazon for a reported $26 million. "I expect them to do further very exciting things."

AI pioneer Stuart Russell, a professor at the University of California, Berkeley, agreed it was inevitable that excitement around DeepMind would tail off after AlphaGo.

"Go was a recognized milestone in AI, something that some commentators said would take another 100 years," he said. "In Asia in particular, top-level Go is considered the pinnacle of human intellectual powers. It's hard to see what else DeepMind could do in the near term to match that."

DeepMind's army of 1,000-plus people, which includes hundreds of highly paid PhD graduates, continues to pump out academic paper after academic paper, but only a smattering of the work gets picked up by the mainstream media. The research lab has churned out over 1,000 papers, and 13 of them have been published by Nature or Science, which are widely seen as the world's most prestigious academic journals. Nick Bostrom, the author of Superintelligence and the director of the University of Oxford's Future of Humanity Institute, described DeepMind's team as world-class, large, and diverse.

"Their protein folding work was super impressive," said Neil Lawrence, a professor of machine learning at the University of Cambridge, whose role is funded by DeepMind. He's referring to a competition-winning DeepMind algorithm that can predict the structure of a protein based on its genetic makeup. Understanding the structure of proteins is important as it could make it easier to understand diseases and create new drugs in the future.

The World's top human Go player, 19-year-old Ke Jie (L) competes against AI program AlphaGo, which was developed by DeepMind, the artificial intelligence arm of Google's parent Alphabet. Machine won the three-game match against man in 2017. The AI didn't lose a single game.

VCG | Visual China Group | Getty Images

DeepMind is keen to move away from developing relatively "narrow" so-called "AI agents," that can do one thing well, such as master a game. Instead, the company is trying to develop more general AI systems that can do multiple things well, and have real world impact.

It's particularly keen to use its AI to leverage breakthroughs in other areas of science including healthcare, physics and climate change.

But the company's scientific work seems to be of less interest to the media. In 2016, DeepMind was mentioned in 1,842 articles, according to media tracker LexisNexis. By 2019, that number had fallen to 1,363.

One ex-DeepMinder said the buzz around the company is now more in line with what it should be. "The whole AlphaGo period was nuts," they said. "I think they've probably got another few milestones ahead, but progress should be more low key. It's a marathon not a sprint, so to speak."

DeepMind denied that excitement surrounding the company has tailed off since AlphaGo, pointing to the fact that it has had more papers in Nature and Science in recent years.

"We have created a unique environment where ambitious AI research can flourish. Our unusually interdisciplinary approach has been core to our progress, with 13 major papers in Nature and Science including 3 so far this year," a DeepMind spokesperson said. "Our scientists and engineers have built agents that can learn to cooperate, devise new strategies to play world-class chess and Go, diagnose eye disease, generate realistic speech now used in Google products around the world, and much more."

"More recently, we've been excited to see early signs of how we could use our progress in fundamental AI research to understand the world around us in a much deeper way. Our protein folding work is our first significant milestone applying artificial intelligence to a core question in science, and this is just the start of the exciting advances we hope to see more of over the next decade, creating systems that could provide extraordinary benefits to society."

The company, which competes with Facebook AI Research and OpenAI, did a good job of building up hype around what it was doing in the early days.

Hassabis and Mustafa Suleyman, the intellectual co-founders who have been friends since school, gave inspiring speeches where they would explain how they were on a mission to "solve intelligence" and use that to solve everything else.

There was also plenty of talk of developing "artificial general intelligence" or AGI, which has been referred to as the holy grail in AI and is widely viewed as the point when machine intelligence passes human intelligence.

But the speeches have become less frequent (partly because Suleyman left DeepMind and works for Google now), and AGI doesn't get mentioned anywhere near as much as it used to.

Larry Page, left, and Sergey Brin, co-founders of Google Inc.

JB Reed | Bloomberg | Getty Images

Google co-founders Larry Page and Sergey Brin were huge proponents of DeepMind and its lofty ambitions, but they left the company last year and it's less obvious how Google CEO Sundar Pichai feels about DeepMind and AGI.

It's also unclear how much free rein Pichai will give the company, which cost Alphabet $571 million in 2018. Just one year earlier, the company had losses of $368 million.

"As far as I know, DeepMind is still working on the AGI problem and believes it is making progress," Russell said. "I suspect the parent company (Google/Alphabet) got tired of the media turning every story about Google and AI into the Terminator scenario, complete with scary pictures."

One academic who is particularly skeptical about DeepMind's achievements is AI entrepreneur Gary Marcus, who sold a machine-learning start-up to Uber in 2016 for an undisclosed sum.

"I think they realize the gulf between what they're doing and what they aspire to do," he said. "In their early years they thought that the techniques they were using would carry us all the way to AGI. And some of us saw immediately that that wasn't going to work. It took them longer to realize but I think they've realized it now."

Marcus said he's heard that DeepMind employees refer to him as the "anti-Christ" because he has questioned how far the "deep learning" AI technique that DeepMind has focused on can go.

"There are major figures now that recognize that the current techniques are not enough," he said. "It's very different from two years ago. It's a radical shift."

He added that while DeepMind's work on games and biology had been impressive, it's had relatively little impact.

"They haven't used their stuff much in the real world," he said. "The work that they're doing requires an enormous amount of data and an enormous amount of compute, and a very stable world. The techniques that they're using are very, very data greedy and real-world problems often don't supply that level of data."

Read the original post:
Why the buzz around DeepMind is dissipating as it transitions from games to science - CNBC

What’s the Difference Between Prokaryotic and Eukaryotic Cells? – HowStuffWorks

Advertisement

You know when you hear somebody start a sentence with, "There are two kinds of people..." and you think to yourself "Oh boy, here it comes." Because reducing the whole of humanity down to "two kinds of people" seems like an odious activity at best.

But what if I were to tell you that there are just two kinds of organisms?

According to scientists, the world is split into two kinds of organisms, prokaryotes and eukaryotes, which have two different types of cells. An organism can be made up of either one type or the other. Some organisms consist of only one measly cell, but even so, that cell will be either prokaryotic or eukaryotic. It's just the way things are.

The difference between eukaryotic and prokaryotic cells has to do with the little stuff-doing parts of the cell, called organelles. Prokaryotic cells are simpler and lack the eukaryote's membrane-bound organelles and nucleus, which encapsulate the cell's DNA. Though more primitive than eukaryotes, prokaryotic bacteria are the most diverse and abundant group of organisms on Earth; we humans are literally covered in prokaryotes, inside and out. On the other hand, all humans, animals, plants, fungi and protists (organisms made up of a single cell) are eukaryotes. And though some eukaryotes are single celled (think amoebas and paramecium), there are no prokaryotes that have more than one cell.

"I think of a prokaryote as a one-room efficiency apartment and a eukaryote as a $6 million mansion," says Erin Shanle, a professor in the Department of Biological and Environmental Sciences at Longwood University, in an email interview. "The size and separation of functional 'rooms,' or organelles, in eukaryotes is similar to the many rooms and complex organization of a mansion. Prokaryotes have to get similar jobs done in a single room without the luxury of organelles."

One reason this analogy is helpful is because all cells, both prokaryotes and eukaryotes, are surrounded by a selectively permeable membrane which allows only certain molecules to get in and out, much like the windows and doors of our home. You can lock your doors and windows to keep out stray cats and burglars (the cellular equivalent to viruses or foreign materials), but you unlock the doors to bring in groceries and to take out the trash. In this way, all cells maintain internal homeostasis, or stability.

"Prokaryotes are much simpler with respect to structure," says Shanle. "They have a single 'room' to perform all the necessary functions of life, namely producing proteins from the instructions stored in DNA, which is the complete set of instructions for building a cell. Prokaryotes don't have separate compartments for energy production, protein packaging, waste processing or other key functions."

In contrast, eukaryotes have membrane-bound organelles that are used to separate all these processes, which means the kitchen is separate from the master bathroom; there are dozens of walled-off rooms, all of which serve a different function in the cell.

For example, DNA is stored, replicated, and processed in the eukaryotic cell's nucleus, which is itself surrounded by a selectively permeable membrane. This protects the DNA and allows the cell to fine-tune the production of proteins necessary to do its job and keep the cell alive. Other key organelles include the mitochondria, which process sugars to generate energy; the lysosome, which processes waste; and the endoplasmic reticulum, which helps organize proteins for distribution around the cell. Prokaryotic cells have to do a lot of this same stuff, but they just don't have separate rooms to do it in. They're more of a two-bit operation in this sense.

"Many eukaryotic organisms are made up of multiple cell types, each containing the same set of DNA blueprints, but which perform different functions," says Shanle. "By separating the large DNA blueprints in the nucleus, certain parts of the blueprint can be utilized to create different cell types from the same set of instructions."

You might be wondering how organisms got to be divided in this way. Well, according to endosymbiotic theory, it all started about 2 billion years ago, when some large prokaryote managed to create a nucleus by folding its cell membrane in on itself.

"Over time, a smaller prokaryotic cell was engulfed by this larger cell," says Shanle. "The smaller prokaryote could perform aerobic respiration, or process sugars into energy using oxygen, similar to the mitochondria we see in eukaryotes that are living today. This smaller cell was maintained within the larger host cell, where it replicated and was passed on to subsequent generations. This endosymbiotic relationship ultimately led to the smaller cell becoming a part of the larger cell, eventually losing its autonomy and much of its original DNA."

However, the mitochondria of today's eukaryotes have their own DNA blueprints that replicate independently from the DNA in the nucleus, and mitochondrial DNA has some similarity to prokaryotic DNA, which supports the endosymbiotic theory. A similar model is thought to have led to the evolution of chloroplasts in plants, but that story begins with a eukaryotic cell containing a mitochondrion engulfing a photosynthetic prokaryote.

Eukaryotes and prokaryotes: they're different! But even though it can be hard to see the similarities between humans and bacteria, we are all made of the same stuff: DNA, proteins, sugars and lipids.

Read more:
What's the Difference Between Prokaryotic and Eukaryotic Cells? - HowStuffWorks

University of Hull Supercomputer Supporting Global COVID-19 Research – HPCwire

June 1, 2020 A multi-million-pound high-performance computer (HPC) at the University of Hull is playing a crucial role in global COVID-19 research. Known as Viper, the supercomputer became the fastest machine of any northern university when it arrived in Hull back in 2016.

Four years on, Viper is now helping researchers around the world better understand and tackle the spread of COVID-19. The University has partnered with HPC specialist OCF to support global research into COVID-19 on a project called Folding@home.

Chris Collins, Research Systems Manager at the University of Hull, said: "It has been humbling to see how the University has responded to the challenges posed by COVID-19. From a team producing face shields for the NHS, to helping re-train former NHS staff, the University is doing everything it can in this difficult time."

"Folding@home is another example of this. Using spare compute capacity on Viper, which is constantly supporting other research projects within the University, is us doing our bit to help tackle COVID-19. Viper is able to download and process bitesize chunks of huge computer simulations, and the final results can then be accessed by researchers across the world."

OCF is helping the University of Hull and other research institutions to donate any spare capacity in their existing solutions to the COVID-19 research effort through Folding@home. Spare capacity can be utilised when users are not using all HPC resources, and any donation of clock cycles doesn't need to impact any current workloads that are being worked on.

"HPC is one of the most powerful tools we have in the fight against disease, giving us detailed insight into the building blocks of viruses," said Russell Slack, managing director at OCF. "This is an opportunity for anyone with an x86 Slurm cluster to get involved in combating COVID-19. GPU capacity is the most sought after at this time, but all donated resources help."

Folding@home is a distributed computing project for simulating protein dynamics, including the process of protein folding and the movements of proteins implicated in a variety of diseases, developed by Stanford University in California to focus on disease research. The project brings together personal computers, as well as those donated by larger companies and institutions from across the world, and enables them to join together to run huge simulations to provide new opportunities for developing therapeutics and treatments for COVID-19.

"Breaking up and distributing large tasks across personal computers is not a new concept, with projects using this approach since the 1990s," Collins said. "Supercomputers like Viper are normally used to tackle the grand challenges of science and engineering on their own rather than as part of distributed projects like this; however, COVID-19 has really brought computers like Viper to the forefront of the Folding@home project."

The University's HPC team is working hard to dedicate any resources not currently being used for University research to the project. Other OCF customers also joining the Folding@home effort include the University of Aberdeen, the University of East Anglia and Plymouth Marine Laboratory.

Source: OCF and Hull University

Original post:
University of Hull Supercomputer Supporting Global COVID-19 Research - HPCwire

AMD COVID-19 HPC Fund delivers supercomputing to researchers – Scientific Computing World

AMD and Penguin Computing

AMD and Penguin Computing have donated seven petaflops of compute power as part of the AMD HPC Fund for COVID-19 research. New York University (NYU), Massachusetts Institute of Technology (MIT) and Rice University are the first universities named to receive complete AMD-powered, high-performance computing systems.

Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, commented on the importance of computing resources in the fight against the current viral outbreak: "Across MIT we are engaged in work to address the global COVID-19 pandemic, from that with immediate impact such as modelling, testing, and treatment, to that with medium and longer term impact such as discovery of new therapeutics and vaccines. Nearly all of this work involves computing, and much of it requires the kind of high performance computing that AMD is so generously providing with this gift of a Petaflop machine."

At the Center for Theoretical Biological Physics, Rice researcher José Onuchic is using his previous studies on influenza A as a guide to explore how the coronavirus's surface proteins facilitate entrance to human cells, the critical first step of infection. Another scientist, Peter Wolynes, is using principles from his foundational theories of protein folding to screen thousands of drug molecules and identify the best candidates for clinical tests based upon how well they bind to the virus's surface proteins.

Peter Rossky, dean of Rice's Wiess School of Natural Sciences, said: "The AMD gift will be truly transformational for Rice's computational attack on COVID-19. We have the methods to progress, but studies of large, complex systems are at the cutting edge of computational feasibility. The AMD contribution of dedicated, state-of-the-art computational power will be a game changer in accelerating progress toward defeating this virus."

AMD also announced it will contribute a cloud-based system powered by AMD EPYC and AMD Radeon Instinct processors located on-site at Penguin Computing, providing remote supercomputing capabilities for selected researchers around the world.

"Penguin Computing is looking forward to supporting and contributing to the COVID-19 research efforts through this AMD collaboration. We are committed to providing our applications and technology expertise in high performance computing, artificial intelligence and data analytics to both the University on-premises and our remote POD cloud environments," said Sid Mair, president of Penguin Computing.

Combined, the donated systems will collectively provide researchers with more than seven petaflops of compute power that can be applied to fight COVID-19. Contributions from Penguin Computing, NVIDIA, Gigabyte, and others are helping the AMD HPC Fund advance COVID-19 research.

"Ultra-fast data speeds and smart data-processing are key to delivering insights that science demands, particularly in these challenging times," said Gilad Shainer, senior vice-president of marketing for Mellanox networking at NVIDIA. "NVIDIA Mellanox HDR 200 gigabit InfiniBand solutions provide high data throughput, extremely low latency, and application offload engines that accelerate bio-science simulations and further the development of treatments against the coronavirus."

The AMD COVID-19 HPC fund was established to provide research institutions with computing resources to accelerate medical research on COVID-19 and other diseases. In addition to the initial donations of $15 million of high-performance computing systems, AMD has contributed technology and technical resources to nearly double the peak performance of the Corona system at Lawrence Livermore National Laboratory, which is being used to provide additional computing power for molecular modelling in support of COVID-19 research.

See the original post:
AMD COVID-19 HPC Fund delivers supercomputing to researchers - Scientific Computing World

QCI Achieves Best-in-Class Performance with its Mukai Quantum-Ready Application Platform – GlobeNewswire

LEESBURG, Va., June 02, 2020 (GLOBE NEWSWIRE) -- Quantum Computing Inc. (OTCQB:QUBT) (QCI), a technology leader in quantum-ready applications and tools, reported in a newly released scientific paper that QCI qbsolv, a component of its Mukai software execution platform for quantum computers, has delivered on its promise of immediate performance benefits from quantum-ready methods running on classical computers.

These performance benefits eliminate one of the greatest obstacles to the development and adoption of quantum-ready applications, since up until now they have been slower than traditional methods running classically. The results show that Mukai provides better results than currently used software to solve complex optimization problems faced by nearly every major company and government agency worldwide.

While future quantum computers are expected to deliver even greater performance benefits, Mukai delivers today the best-known quality of results, time-to-solution, and diversity of solutions in a commercially available service. This superior capability enables business and government organizations to become quantum-ready today and realize immediate benefits from improved performance.

Optimization problems can occur in logistics routing, where timely delivery, reduced fuel consumption, and driver safety all come into play. Optimization solutions can significantly mitigate the impact to revenue or business operations posed by events such as flooding or power outages. Companies can leverage the robust and diverse solutions offered by Mukai to minimize disruptive high-impact events in real-time.

Optimization can also be achieved in R&D contexts like drug design, where better predicted protein folding can speed the design process, increase the efficacy of drugs, and guide the search for patient cohorts who might benefit. Optimization of business processes generated by solvers like Mukai can result in savings of hundreds of billions of dollars annually.

The technical study used MIT's MQlib, a well-established combinatorial optimization benchmark, to compare QCI qbsolv performance with those of a variety of solvers. QCI qbsolv delivered better quality or energy of results for most problems (27 of 45) and often ran more than four times faster than the best MQlib solver (21 of 45 problems).

In terms of diversity of results (finding, for example, logistics routes that are quite different from each other), QCI qbsolv often found dozens of binary results that differed in more than 350 positions (i.e., route segments). Known to researchers as Hamming distance, diversity of results is another important advantage expected of quantum computing.
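
For readers unfamiliar with the metric, Hamming distance simply counts the positions at which two binary solution vectors differ. The short sketch below is our own illustration (not QCI's code) of how such a diversity matrix could be computed for a set of candidate solutions.

```python
import numpy as np

def solution_diversity(solutions):
    """Pairwise Hamming distances between binary solution vectors.

    solutions : (k, n) array of 0/1 assignments, one row per candidate solution.
    Large entries (e.g., > 350 differing positions) indicate genuinely distinct
    solutions, such as alternative logistics routes.
    """
    s = np.asarray(solutions, dtype=int)
    # Hamming distance = number of positions at which two vectors differ
    return np.count_nonzero(s[:, None, :] != s[None, :, :], axis=-1)
```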

The paper, "QCI Qbsolv Delivers Strong Classical Performance for Quantum-Ready Formulation," describes the full results and discusses their impact, and is available at arxiv.org/abs/2005.11294.

"These results demonstrate that Mukai-powered applications can exploit quantum computing concepts to solve real-world problems effectively using classical computers," noted QCI CTO Mike Booth. "More importantly, the quality, speed, and diversity of solutions offered by Mukai means government and corporate organizations can use Mukai to adopt quantum-ready approaches today without sacrificing performance. Mukai is also hardware-agnostic, enabling adopters to exploit whichever hardware delivers the quantum advantage. We're confident that leading companies can leverage Mukai today to achieve a competitive advantage."

"To be sure, we are very early in the quantum computing and software era," continued Booth. "Just as the vectorizing compilers for Cray's processors improved radically over time, we are planning to introduce further performance improvements to Mukai over the coming months. Some of these advancements will benefit application performance using classical computers as well as hybrid quantum-classical scenarios, but all will be essential to delivering the quantum advantage. We expect Mukai to play an integral role in the quantum computing landscape by enabling organizations to tap into quantum-inspired insights today to better answer their high-value problems."

The Mukai software execution platform for quantum computers enables users and application developers to solve complex discrete constrained-optimization problems that are at the heart of some of the most difficult computing challenges in industry, government and academia. This includes, for example, scheduling technicians, parts and tools for aircraft engine repair, or designing proteins for coronavirus vaccines and therapies.

QCI recently announced version 1.1 of Mukai, which introduced higher performance and greater ease-of-use for subject-matter experts who develop quantum-ready applications and need superior performance today. Local software connects users to the Mukai cloud service for solving extremely complex optimization problems. It enables developers to create and execute quantum-ready applications on classical computers today that are ready to run on the quantum computers of tomorrow when these systems achieve performance superiority.

Mukai addresses the fast-growing market for quantum computing, which is expected to grow at a 23.2% CAGR to $9.1 billion by 2030, according to Tractica.

For more information about Mukai or a demonstration of the platform, contact John Dawson at (703) 436-2161 or info@quantumcomputinginc.com.

About Quantum Computing Inc.
Quantum Computing Inc. (QCI) is focused on developing novel applications and solutions utilizing quantum and quantum-ready computing techniques to solve difficult problems in various industries. The company is leveraging its team of experts in finance, computing, security, mathematics and physics to develop commercial applications for industries and government agencies that will need quantum computing power to solve their most challenging problems. For more information about QCI, visit http://www.quantumcomputinginc.com.

Important Cautions Regarding Forward-Looking Statements
This press release contains forward-looking statements as defined within Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended. By their nature, forward-looking statements and forecasts involve risks and uncertainties because they relate to events and depend on circumstances that will occur in the near future. Those statements include statements regarding the intent, belief or current expectations of Quantum Computing (Company), and members of its management as well as the assumptions on which such statements are based. Prospective investors are cautioned that any such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, and that actual results may differ materially from those contemplated by such forward-looking statements.

The Company undertakes no obligation to update or revise forward-looking statements to reflect changed conditions. Statements in this press release that are not descriptions of historical facts are forward-looking statements relating to future events, and as such all forward-looking statements are made pursuant to the Securities Litigation Reform Act of 1995. Statements may contain certain forward-looking statements pertaining to future anticipated or projected plans, performance and developments, as well as other statements relating to future operations and results. Any statements in this presentation that are not statements of historical fact may be considered to be forward-looking statements. Words such as "may," "will," "expect," "believe," "anticipate," "estimate," "intends," "goal," "objective," "seek," "attempt," aim to, or variations of these or similar words, identify forward-looking statements. These risks and uncertainties include, but are not limited to, those described in Item 1A in the Companys Annual Report on Form 10-K, which is expressly incorporated herein by reference, and other factors as may periodically be described in the Companys filings with the SEC.

Company Contact
Robert Liscouski, CEO
Tel (703) 436-2161
info@quantumcomputinginc.com

Investor & Media Relations Contact
Ron Both or Grant Stude
CMA Investor Relations
Tel (949) 432-7566
Email Contact

Read the original here:
QCI Achieves Best-in-Class Performance with its Mukai Quantum-Ready Application Platform - GlobeNewswire

Global Lab Automation in Protein Engineering Market Industry Analysis and Forecast… – Azizsalon News

The Global Lab Automation in Protein Engineering Market is expected to reach US$ 2,710 Mn by 2026, up from US$ 1,073.01 Mn in 2018, at a CAGR of 14.15%.

The lab automation in protein engineering market report covers the marketplace and the internal and external factors that could impact the automation industry. The increasing demand for protein drugs over non-protein drugs, along with high incidences of lifestyle diseases, is one of the key drivers for automation in the protein engineering market globally. Other drivers include favorable government regulation of protein engineering and the need for consistency in quality.

REQUEST FOR FREE SAMPLE REPORT: https://www.maximizemarketresearch.com/request-sample/22288

Lack of planning for technology development, improperly trained personnel, high initial setup costs, and the low priority given to lab automation among small and medium-sized laboratories are hampering the Global Lab Automation in Protein Engineering Market.

The monoclonal antibodies segment is expected to grow at the highest CAGR of XX% during the forecast period. Monoclonal antibodies are widely used as diagnostic and research reagents as well as in human therapy. This growth is attributed to their rising adoption for various therapies, such as those for cancer and autoimmune diseases.

The software and informatics segment is expected to grow at the highest CAGR of XX% during the forecast period. The software can be used to improve electron density maps through a statistical approach, combining experimental X-ray diffraction data with information about the expected characteristics of an electron density map. Instrument automation helps researchers understand and solve the mysteries of protein dysfunction, including misfolding, aggregation, and abnormal movement.

On the basis of region, the Global Lab Automation in Protein Engineering Market is divided into five regions: Asia Pacific, North America, Europe, Latin America, and Middle East & Africa. Among all the regions, North America held a XX% market share in 2018 and is projected to lead the market during the forecast period, owing to its dominance of the lab automation in protein engineering market globally and the growing outsourcing of pharmaceutical manufacturing due to the availability of cheaper labour and resources. Strict regulations imposed by the US government and the FDA, increasing demand in the diagnostics market, a growing emphasis on drug discovery and research labs, and the rising presence of numerous diseases in North America have fueled the growth.

Key players operating in the global lab automation in protein engineering market include Thermo Fisher Scientific, Danaher, Hudson Robotics, Becton Dickinson, Synchron Lab Automation, Agilent Technologies, Siemens Healthcare, Tecan Group Ltd, PerkinElmer, Honeywell International, Bio-Rad, Roche Holding AG, Eppendorf AG, Shimadzu, and Aurora Biomed.

The objective of the report is to present a comprehensive analysis of the Global Lab Automation in Protein Engineering Market to all stakeholders in the industry. The past and current status of the industry, with forecasted market size and trends, is presented with complicated data analysed in simple language. The report covers all aspects of the industry, with a dedicated study of key players that includes market leaders, followers, and new entrants by region. PORTER, SVOR, and PESTEL analyses, with the potential impact of micro-economic factors by region, are presented in the report. External and internal factors that are expected to affect the business positively or negatively have been analysed, giving decision makers a clear, forward-looking view of the industry. The report also helps in understanding the market dynamics and structure by analysing the market segments, and projects the Global Lab Automation in Protein Engineering Market size. A clear competitive analysis of key players by type, price, financial position, product portfolio, growth strategies, and regional presence makes the report an investor's guide.

MAKE AN INQUIRY BEFORE PURCHASING THE REPORT HERE: https://www.maximizemarketresearch.com/inquiry-before-buying/22288

Scope of Global Lab Automation in Protein Engineering Market

Global Lab Automation in Protein Engineering Market, by Software

Automated liquid handling; Microplate readers; Standalone robots; Software and informatics; ASRS

Global Lab Automation in Protein Engineering Market, by Protein Type

Monoclonal Antibodies; Interferon; Growth Hormone

Global Lab Automation in Protein Engineering Market, by Application

Clinical diagnostics; Drug discovery; Genomics solutions; Proteomics solutions; Protein engineering

Global Lab Automation in Protein Engineering Market, by Type of Automation

Modular automation; Total lab automation

Global Lab Automation in Protein Engineering Market, by Region

North America; Europe; Asia Pacific; Middle East and Africa; South America

Key Players Operating in Global Lab Automation in Protein Engineering Market

Thermo Fisher Scientific; Danaher; Hudson Robotics; Becton Dickinson; Synchron Lab Automation; Agilent Technologies; Siemens Healthcare; Tecan Group Ltd; PerkinElmer; Honeywell International; Bio-Rad; Roche Holding AG; Eppendorf AG; Shimadzu; Aurora Biomed

MAJOR TOC OF THE REPORT

Chapter One: Lab Automation in Protein Engineering Market Overview

Chapter Two: Manufacturers Profiles

Chapter Three: Global Lab Automation in Protein Engineering Market Competition, by Players

Chapter Four: Global Lab Automation in Protein Engineering Market Size by Regions

Chapter Five: North America Lab Automation in Protein Engineering Revenue by Countries

Chapter Six: Europe Lab Automation in Protein Engineering Revenue by Countries

Chapter Seven: Asia-Pacific Lab Automation in Protein Engineering Revenue by Countries

Chapter Eight: South America Lab Automation in Protein Engineering Revenue by Countries

Chapter Nine: Middle East and Africa Lab Automation in Protein Engineering Revenue by Countries

Chapter Ten: Global Lab Automation in Protein Engineering Market Segment by Type

Chapter Eleven: Global Lab Automation in Protein Engineering Market Segment by Application

Chapter Twelve: Global Lab Automation in Protein Engineering Market Size Forecast (2019-2026)

Browse Full Report with Facts and Figures of Lab Automation in Protein Engineering Market Report at: https://www.maximizemarketresearch.com/market-report/global-lab-automation-in-protein-engineering-market/22288/

About Us:

Maximize Market Research provides B2B and B2C market research on 20,000 high growth emerging technologies & opportunities in Chemical, Healthcare, Pharmaceuticals, Electronics & Communications, Internet of Things, Food and Beverages, Aerospace and Defense and other manufacturing sectors.

Contact info:

Name: Vikas Godage

Organization: MAXIMIZE MARKET RESEARCH PVT. LTD.

Email: sales@maximizemarketresearch.com

Contact: +919607065656/ +919607195908

Website: www.maximizemarketresearch.com

Monster or Machine? A Profile of the Coronavirus at 6 Months – Seattle Times

A virus, at heart, is information, a packet of data that benefits from being shared.

The information at stake is genetic: instructions to make more virus. Unlike a truly living organism, a virus cannot replicate on its own; it cannot move, grow, persist or perpetuate. It needs a host. The viral code breaks into a living cell, hijacks the genetic machinery and instructs it to produce new code: new virus.

President Donald Trump has characterized the response to the pandemic as a medical war, and described the virus behind it as, by turns, a "genius," a "hidden enemy" and a "monster." It would be more accurate to say that we find ourselves at odds with a microscopic photocopy machine. Not even that: an assembly manual for a photocopier, model SARS-CoV-2.

For at least six months now, the virus has replicated among us. The toll has been devastating. Officially, more than 6 million people worldwide have been infected so far, and 370,000 have died. (The actual numbers are certainly higher.) The United States, which has seen the largest share of cases and casualties, recently surpassed 100,000 deaths, one-quarter the number of all Americans who died in World War II. Businesses are shuttered; in 10 weeks, some 40 million Americans have lost their jobs, and food banks are overrun. The virus has fueled widespread frustration and exposed our deepest faults: of color, class and privilege, between the deliverers and the delivered to.

Still, summer (summer!) has all but arrived. We step out to look, breathe, vent. The pause is illusory. Cases are falling in New York, the epicenter in the United States, but firmly rising in Wisconsin, Virginia, Alabama, Arkansas, North and South Carolina, and other states. China, where the pandemic originated, and South Korea saw recent resurgences. Health officials fear another major wave of infections in the fall, and a possible wave train beyond.

"We are really early in this disease," Dr. Ashish Jha, director of the Harvard Global Health Institute, told The New York Times recently. "If this were a baseball game, it would be the second inning."

There may be trillions of species of virus in the world. They infect bacteria, mostly, but also abalone, bats, beans, beetles, blackberries, cassavas, cats, dogs, hermit crabs, mosquitoes, potatoes, pangolins, ticks and the Tasmanian devil. They give birds cancer and turn bananas black. Of the trillions, a few hundred thousand kinds of viruses are known, and fewer than 7,000 have names. Only about 250, including SARS-CoV-2, have the mechanics to infect us.

In our information age, we have grown familiar with computer viruses and with memes going viral; now here is the real thing to remind us what the metaphor means. A mere wisp of data has grounded more than half of the world's commercial airplanes, sharply reduced global carbon emissions and doubled the stock price of Zoom. It has infiltrated our language ("social distancing," "immunocompromised shoppers") and our dreams. It has postponed sports, political conventions, and the premieres of the next Spider-Man, Black Widow, Wonder Woman and James Bond films. Because of the virus, the U.S. Supreme Court renders rulings by telephone, and wild boars roam the empty streets of Barcelona, Spain.

It also has prompted a collaborative response unlike any our species has seen. Teams of scientists, working across national boundaries, are racing to understand the virus's weaknesses, develop treatments and vaccine candidates, and accurately forecast its next moves. Medical workers are risking their lives to tend to the sick. Those of us at home do what we can: share instructions for how to make a surgical mask from a pillowcase; sing and cheer from windows and doorsteps; send condolences; offer hope.

"We're mounting a reaction against the virus that is truly unprecedented," said Dr. Melanie Ott, director of the Gladstone Institute of Virology in San Francisco.

So far the match is deadlocked. We gather, analyze, disseminate, probe: What is this thing? What must be done? When can life return to normal? And we hide while the latest iteration of an ancient biochemical cipher ticks on, advancing itself at our expense.

A Fearsome Envelope

Who knows when viruses first came about. Perhaps, as one theory holds, they began as free-living microbes that, through natural selection, were stripped down and became parasites. Maybe they began as genetic cogs within microbes, then gained the ability to venture out and invade other cells. Or maybe viruses came first, shuttling and replicating in the primordial protein soup, gaining shades of complexity (enzymes, outer membranes) that gave rise to cells and, eventually, us. They are sacks of code (double- or single-stranded, DNA or RNA), sometimes called capsid-encoding organisms, or CEOs.

As viruses go, SARS-CoV-2 is big: its genome is more than twice the size of that of the average flu virus and about one-half larger than Ebola's. But it is still tiny: 10,000 times smaller than a millimeter, barely one-thousandth the width of a human hair, smaller even than the wavelength of light from a germicidal lamp. If a person were the size of Earth, the virus would be the size of a person. Picture a human lung cell as a cramped office just big enough for a desk, a chair and a copy machine. SARS-CoV-2 is an oily envelope stuck to the door.
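Those comparisons hold up to a rough, order-of-magnitude check. The sketch below uses typical reference values for the flu and Ebola genome sizes, hair width, particle diameter and Earth's diameter; none of these numbers are given in the article, so treat them as assumptions.

```python
# Order-of-magnitude check of the size comparisons above.
# Genome sizes (flu, Ebola) and the hair/Earth/person dimensions are typical
# reference values assumed here, not figures quoted in the article.

GENOME_NT = {"SARS-CoV-2": 29_900, "influenza A": 13_600, "Ebola": 19_000}
print(GENOME_NT["SARS-CoV-2"] / GENOME_NT["influenza A"])  # ~2.2 -> "more than twice"
print(GENOME_NT["SARS-CoV-2"] / GENOME_NT["Ebola"])        # ~1.6 -> "about one-half larger"

virus_m  = 100e-9       # ~100 nm particle diameter
hair_m   = 100e-6       # ~100 micrometre human hair
person_m = 1.8
earth_m  = 12_742_000   # mean Earth diameter

print(1e-3 / virus_m)      # ~10,000 -> "10,000 times smaller than a millimeter"
print(virus_m / hair_m)    # ~0.001  -> "one-thousandth the width of a human hair"
print(earth_m / person_m)  # ~7.1e6
print(person_m / virus_m)  # ~1.8e7  -> same order: person is to Earth as virus is to person
```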

It was formally identified on Jan. 7 by scientists in China. For weeks beforehand, a mysterious respiratory ailment had been circulating in the city of Wuhan. Health officials were worried that it might be a reappearance of severe acute respiratory syndrome, or SARS, an alarming viral illness that emerged abruptly in 2002, infected more than 8,000 people and killed nearly 800 in the next several months, then was quarantined into oblivion.

The scientists had gathered fluid samples from three patients and, with nucleic-acid extractors and other tools, compared the genome of the pathogen with that of known ones. A transmission electron microscope revealed the culprit: spherical, with quite distinctive spikes reminiscent of a crown or the corona of the sun. It was a coronavirus, and a novel one.

In later colorized images, the virus resembles small garish orbs of lint or the papery eggs of certain spiders, adhering by the dozens to much larger cells. Recently a visual team, working closely with researchers, created the most accurate model of the SARS-CoV-2 viral particle currently available: a barbed, multicolored globe with the texture of fine moss, like something out of Dr. Seuss, or a sunken naval mine draped in algae and sponges.

Once upon a time, our pathogens were crudely named: Spanish flu, Asian flu, yellow fever, Black Death. Now we have H1N1, MERS (Middle East respiratory syndrome), HIV strings of letters as streamlined as the viruses themselves, codes for codes. The new coronavirus was temporarily named 2019-nCoV. On Feb. 11, the International Committee on Taxonomy of Viruses officially renamed it SARS-CoV-2, to indicate that it was very closely related to the SARS virus, another coronavirus.

Before the emergence of the original SARS, the study of coronaviruses was a professional backwater. "There has been such a deluge of attention on we coronavirologists," said Susan R. Weiss, a virologist at the University of Pennsylvania. "It is quite in contrast to previously being mostly ignored."

There are hundreds of kinds of coronaviruses. Two, SARS-CoV and MERS-CoV, can be deadly; four cause one-third of common colds. Many infect animals with which humans associate, including camels, cats, chickens and bats. All are RNA viruses. Our coronavirus, like the others, is a string of roughly 30,000 biochemical building blocks called nucleotides enclosed in a membrane of both protein and lipid.

"I've always been impressed by coronaviruses," said Anthony Fehr, a virologist at the University of Kansas. "They are extremely complex in the way that they get around and start to take over a cell. They make more genes and more proteins than most other RNA viruses, which gives them more options to shut down the host cell."

The core code of SARS-CoV-2 contains genes for as many as 29 proteins: the instructions to replicate the code. One protein, S, provides the spikes on the surface of the virus and unlocks the door to the target cell. The others, on entry, separate and attend to their tasks: turning off the cell's alarm system; commandeering the copier to make new viral proteins; folding viral envelopes; and helping new viruses bubble out of the cell by the thousands.

"I usually picture it as an entity that comes into the cell and then it falls apart," Ott of the Gladstone Institute said. "It has to fall apart to build some mini-factories in the cell to reproduce itself, and has to come together as an entity at the end to infect other cells."

For medical researchers, these proteins are key to understanding why the virus is so successful, and how it might be neutralized. For instance, to break into a cell, the S protein binds to a receptor called angiotensin converting enzyme 2, or ACE2, like a hand on a doorknob. The S protein on this coronavirus is nearly identical in structure to the one in the first SARS ("SARS Classic"), but some data suggest that it binds to the target enzyme far more strongly. Some researchers think this may partly explain why the new virus infects humans so efficiently.

Every pathogen evolves along a path between impact and stealth. Too mild and the illness does not spread from person to person; too visible and the carrier, unwell and aware, stays home or is avoided and the illness does not spread. SARS infected 8,000 people, and was contained quickly, in part because it didnt spread before symptoms appeared, Weiss noted.

By comparison, SARS-CoV-2 seems to have achieved an admirable balance. "No aspect of the virus is extraordinary," said Dr. Pardis Sabeti, a computational geneticist at the Broad Institute who helped sequence the Ebola virus in 2014. "It's the combination of things that makes it extraordinary."

SARS Classic settled quickly into human lung cells, causing a person to cough but also announcing its presence. In contrast, its successor tends to colonize first the nose and throat, sometimes causing few initial symptoms. Some cells there are thought to be rich in the surface enzyme ACE2, the doorknob that SARS-CoV-2 turns so readily. The virus replicates quietly, and quietly spreads: one study found that a person carrying SARS-CoV-2 is most contagious two to three days before they are aware that they might be ill.

From there, the virus can move into the lungs. The delicate alveoli, which gather oxygen essential to the body, become inflamed and struggle to do their job. The texture of the lungs turns from airy froth to gummy marshmallow. The patient may develop pneumonia; some, drowning internally and desperate for oxygen, go into acute respiratory distress and require a ventilator.

The virus can settle in still further: damaging the muscular walls of the heart; attacking the lining of the blood vessels and generating clots; inducing strokes, seizures and inflammation of the brain; and damaging the kidneys. Often the greatest damage is inflicted not by the virus but by the bodys attempt to fight it off with a dangerous cytokine storm of immune system molecules.

The result is an illness with a perplexing array of faces. A dry cough and a low fever at the outset, sometimes. Shortness of breath or difficulty breathing, sometimes. Maybe you lose your sense of smell or taste. Maybe your toes become red and inflamed, as if you had frostbite. For some patients it feels like a heart attack, or it causes delusion or disorientation.

Often it feels like nothing at all; according to the Centers for Disease Control and Prevention, 35% of people who contract the virus experience few to no symptoms, although they can continue to spread it. "The virus acts like no pathogen humanity has ever seen," the journal Science notes.

More to the point, the pathogen has gone largely unseen. "It has these perfect properties to spread throughout the entire human population," Fehr said. "If we didn't know what a virus was and didn't take proper precautions, this virus would infect virtually every human on the planet. It still might do that."

Data vs. Data

On Jan. 10, the Wuhan health commission in China reported that in the previous weeks, 41 people had contracted the illness caused by the coronavirus, and that one had died the first known casualty at the time.

That same day, Chinese scientists publicly released the complete genome of the virus. The blueprint, which could be simulated and synthesized in the lab, was almost as good as a physical sample, and easier for researchers worldwide to obtain. Analyses appeared in journals and on preprint servers like bioRxiv, and on sites like nextstrain.org and virological.org: clues to the virus's origin, its errors and its weaknesses. From then on, the new coronavirus began to replicate not only physically in human cells but also figuratively, and likely to its own detriment, in the human mind.
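Anyone can still retrieve and inspect that blueprint. The sketch below is one way to do so, assuming Biopython is installed and that the NCBI reference accession NC_045512.2 (the Wuhan-Hu-1 sequence) is the record of interest; the email address is a placeholder that NCBI asks Entrez users to set.

```python
# Minimal sketch: fetch the published SARS-CoV-2 reference genome and list
# its annotated protein-coding genes. Assumes Biopython and network access;
# NC_045512.2 is the NCBI reference accession for the Wuhan-Hu-1 sequence.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; use your own address

handle = Entrez.efetch(db="nucleotide", id="NC_045512.2",
                       rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.description, len(record.seq))  # ~29,903 nucleotides
for feature in record.features:
    if feature.type == "CDS":  # coding sequences: ORF1ab, S (spike), N, etc.
        print(feature.qualifiers.get("gene", ["?"])[0],
              "-", feature.qualifiers.get("product", ["?"])[0])
```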

Ott entered medicine in the 1980s, when AIDS was still new and terrifyingly unknown. "Compare that time to today, there are a lot of similarities," she said. A new virus, a rush to understand, a rush to a cure or a vaccine. "What's fundamentally different now is that we have generated this community of collaboration and data-sharing. It's really mind-blowing."

Three hours after the virus's code was published, Inovio Pharmaceuticals, based in San Diego, began work on a vaccine against it, one of more than 100 such efforts now underway around the world. Sabeti's lab quickly got to work developing diagnostic tests. Ott and Weiss soon managed to obtain samples of live virus, which allowed them to "actually look at what's going on when it infects cells in the lab," Ott said.

"The cell is mounting a profound battle to prevent the virus from entering or, on entering, to alarm everyone around it so it can't spread," she said. "The virus's intent is to overcome this initial surge of defense, to set up shop long enough to reproduce itself and to spread."

With so many proteins in its tool kit, the virus has many ways to counter our immune system; these also offer targets for potential vaccines and drugs. Researchers are working every angle. Most vaccine efforts are focused on disrupting the spike proteins, which allow entry into the cell. The drug remdesivir targets the viruss replication machinery. Fehr studies how the virus disables our immune system.

"I use the analogy of Star Wars," he said. "The virus is the Dark Side. We have a cellular defense system of hundreds of antiviral proteins, Jedi knights, to defend ourselves. Our lab is studying one specific Jedi that uses one particular weapon, and how the virus fights back."

These battles, fought on the field of biochemistry, strain the alphabet to describe. The Jedi in this analogy are particular enzymes (poly-ADP-ribose polymerases, or PARPs, if you must know) that are produced in infected cells and wield a molecule that attaches to certain invading proteins ("We don't know what these are yet," Fehr said) and disrupts them. In response, the virus has an enzyme of its own that sweeps away our Jedi like dust from a sandcrawler.

Carolyn Machamer, a cell biologist at the Johns Hopkins School of Medicine, is studying the later stages of the process, to learn how the virus manages to navigate and assemble itself within a host cell and depart it. Among the research topics listed on her university webpage are coronaviruses but also intracellular protein trafficking and exocytosis of large cargo.

On entering the cell, components of the virus set up shop in a subregion, or organelle, called the Golgi complex, which resembles a stack of pancakes and serves as the cells mail-sorting center. Machamer has been working to understand how the virus commandeers the unit to route all the newly replicated viral bits, scattered throughout the cell, for final assembly.

The subject was poorly studied, she conceded. Most drug research has focused on the early stages, like blocking infection at the very outset or disrupting replication inside the cell. "Like I said, it hasn't gotten a whole lot of attention," she said. "But I think it will now, because I think we have some really interesting targets that could possibly yield new types of drugs."

The line of inquiry dates back to her postdoctoral days. She was studying the Golgi complex ("The organelle is really bizarre") even then. "It's following what you're interested in; that's what basic science is about. It's, like, you don't actually set out to cure the world or anything, but you follow your nose."

For all the attention the virus has received, it is still new to science and rich in unknowns. "I'm still very focused on the question: How does the virus get into the body?" Ott said. "Which cells does it infect in the upper airway? How does it get into the lower airway, and from there to other organs? It's absolutely not clear what the path is, or what the vulnerable cell types are."

And most pressing: Why are so many of us asymptomatic? "How does the virus manage to do this without leaving traces in some people, but in others there's a giant reaction?" she said. "That's the biggest question currently, and the most urgent."

Mistakes Are Made

Even a photocopier is imperfect, and SARS-CoV-2 is no exception. When the virus commandeers a host cell to copy itself, invariably mistakes are made, an incorrect nucleotide swapped for the right one, for instance. In theory, such mutations, or an accumulation of them, could make a virus more infectious or deadly, or less so, but in a vast majority of cases, they do not affect a viruss performance.

Whats important to note is that the process is random and incessant. Humans describe the contest between host and virus as a war, but the virus is not at war. Our enemy has no agency; it does not develop strategies for escaping our medicines or the activity of our immune systems.

Unlike some viruses, SARS-CoV-2 has a proofreading protein, NSP14, that clips out mistakes. Even so, errors slip through. The virus acquires two mutations a month, on average, which is less than half the error rate of the flu and increases the possibility that a vaccine or drug treatment, once developed, will not be quickly outdated. "So far it's been relatively faithful," Ott said. "That's good for us."
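To put that rate in perspective, a back-of-the-envelope calculation shows the expected drift from the original reference over time; the flu figure used below is only the lower bound implied by the "less than half" comparison, not a number given in the article.

```python
# Expected accumulated mutations, using the ~2 substitutions/month figure above.
# The flu rate is only the lower bound implied by "less than half" (>= ~4/month).
SARS_COV_2_PER_MONTH = 2.0
FLU_LOWER_BOUND_PER_MONTH = 2 * SARS_COV_2_PER_MONTH

for months in (1, 6, 12):
    sars = SARS_COV_2_PER_MONTH * months
    flu = FLU_LOWER_BOUND_PER_MONTH * months
    print(f"after {months:>2} months: ~{sars:.0f} mutations (flu: at least ~{flu:.0f})")
```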

By March, at least 1,388 variants of the coronavirus had been detected around the world, all functionally identical as far as scientists could tell. Arrayed as an ancestral tree, these lineages reveal where and when the virus spread. For instance, the first confirmed case of COVID-19 in New York was announced on March 1, but an analysis of samples revealed that the virus had begun to circulate in the region weeks earlier. Unlike early cases on the West Coast, which were seeded by people arriving from China, these cases were seeded from Europe, and in turn seeded cases throughout much of the country.

The roots can be traced back still further. The first known patient was hospitalized in Wuhan on Dec. 16, 2019, and first felt ill on Dec. 1; the first infection would have occurred still earlier. Sometime before that, the virus, or its progenitor, was in a bat; the genome is 96% similar to a bat virus. How long ago it made that jump, and acquired the mutations necessary to do so, is unclear. In any case, and contrary to certain conspiracy theories, SARS-CoV-2 was not engineered in a laboratory.

"Those scenarios are so unlikely as to be impossible," said Dr. Robert Garry, a microbiologist at Tulane University and an expert on emerging diseases. In March, a team of researchers including Garry published a paper in Nature Medicine comparing the genome and protein structures of the novel virus with those of other coronaviruses. The novel distinctions were most likely the result of natural selection, they concluded: "Our analyses clearly show that SARS-CoV-2 is not a laboratory construct or a purposefully manipulated virus."

In our species, the virus has found prime habitat. It seems to do most of its replicating in the upper respiratory tract, Garry noted: "That makes it easier to spread with your voice, so there may be more opportunities for it to spread casually, and perhaps earlier in the course of the disease."

And there we have it: an organism, or whatever the right word is, ideally adapted to human conversation, the louder the better. Our communication is its transmission. Consider where so many outbreaks have begun: funerals, parties, call centers, sports arenas, meatpacking plants, dorm rooms, cruise ships, prisons. In February, a medical conference in Boston led to more than 70 cases in two weeks. In Arkansas, several cases were linked to a high school swim party that "I'm sure everybody thought was harmless," Gov. Asa Hutchinson said. After a choir rehearsal in Mount Vernon, Washington, 28 members of the choir fell ill. Not even song is safe anymore.

The virus has no trouble finding us. But we are still struggling to find it; a recent model by epidemiologists at Columbia University estimated that for every documented infection in the United States, 12 more go undetected. Who has it, or had it, and who does not? A firm grasp of the virus's whereabouts, using diagnostic tests, antibody tests and contact tracing, is essential to our bid to return to normal life. But humanity's immune response has been uneven.
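The Columbia estimate implies a simple multiplier: every confirmed case stands in for roughly thirteen actual infections. The snippet below just makes that arithmetic explicit; the case count fed in is a hypothetical example, not a figure from the article.

```python
# Implied undercount from the Columbia model quoted above: for every documented
# infection, ~12 more go undetected, i.e. a ~13x multiplier on confirmed counts.
UNDETECTED_PER_DOCUMENTED = 12

def estimated_total_infections(documented: int) -> int:
    """Rough total infections implied by a documented case count."""
    return documented * (1 + UNDETECTED_PER_DOCUMENTED)

print(estimated_total_infections(100_000))  # hypothetical input -> 1,300,000
```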

In late May, in an open letter, a group of former White House science advisers warned that, to prepare for an anticipated resurgence of the pandemic later this year, the federal government needed to begin preparing immediately to avoid the extraordinary shortage of supplies that occurred this spring.

"The virus is here, it's everywhere," Dr. Rick Bright, former director of the Biomedical Advanced Research and Development Authority, told the U.S. Senate in mid-May. "We need to unleash the voices of the scientists in our public health system in the United States, so they can be heard." Right now, he added, "there is no master coordinated plan on how to respond to this outbreak."

The SARS-CoV-2 virus has no plan. It doesn't need one; absent a vaccine, the virus is here to stay.

"This is a pretty efficient pathogen," Garry said. "It's very good at what it does."

The Next Wave

"The virus spreads because of an intrinsic, latent quality in the culture," media theorist Douglas Rushkoff, who two decades ago coined the phrase "going viral," wrote recently. "Both biological and media viruses say less about themselves than they do about their hosts."

To know SARS-CoV-2 is to know ourselves in reflection. It is mechanical, unreflecting, consistently on-message: the purest near-living expression of data management to be found on Earth. It is, and does, and is more. There is no I in a virus.

We are exactly its opposite: human, and everything that implies. Masters of information, suckers for misinformation; slaves to emotion, ego and wishful thinking. But also: inquiring, willful, optimistic. In our best moments, we strive to learn, and to advance more than our individual selves.

"The best thing to come out of this pandemic is that everyone has become a virologist in some way," Ott said. She has a regular trivia night with her family in Germany, over Zoom. Lately, the topic has centered on viruses, and she has been impressed by how much they know. "There's so much more knowledge around," she said. "A lot of wrong info around, also. But people have become so literate, because we all want it to go away."

Sabeti agreed, up to a point. She expressed a deep curiosity about viruses (they are formidable opponents to understand) but said that, this time around, she found herself less interested in the purely intellectual pursuit.

"For me right now, the place that I'm in, I really just most want to stop this virus," she said. "It's so frustrating and disappointing, to say the least, to be in this position in which we have stopped the world, in which we've created social distancing, in which we have created mass amounts of human devastation and collateral damage because we just weren't prepared."

"I don't care to understand it," she said. "For me, it's: I get up in the morning and my motivation is just: Stop this thing, and figure out how to never have this happen again."

How to do PewDiePies workout for abs and biceps: Cheap equipment and other YouTubers to follow! – HITC – Football, Gaming, Movies, TV, Music

Although PewDiePie deleted his Twitter account a while back, he was recently all over social media thanks to a shredded picture posted to Instagram by his wife Marzia. This shredded image resulted in fans wanting to know his biceps and abs workout routine so they can do it as well, and thankfully the Swedish YouTube star has shared his routine and methods. Here you'll discover how to do the workout, as well as what cheap equipment you can buy and other YouTubers you can follow to expand upon Pewds' advice.

PewDiePie has been a controversial figure on YouTube over the years for some, and he has explained some of his heated acts as the result of being pretty irresponsible in the past. However, he has lately been one of the more honest and straightforward celebrities on the mega platform, and this has helped his popularity continue despite him no longer being the horror video game squealer he was before.

His transformation shows in him acting a lot wiser and more mature, but it is also now embodied by his ripped physique. Here you'll discover how to do his workout routine for abs and biceps, even with cheap equipment.

PewDiePie has shared his five day dumbbell workout routine for abs, biceps, and more.

According to PewDiePie, his five day dumbbell workout routine for abs, biceps, and more begins on Monday with him focusing heavily on his chest and finishing on his shoulders.

Tuesday is a leg day where he does squats, dead-lifts, and lunges, while Wednesday is reserved for pull exercises.

Thursday is another leg day whereas Friday is a mix of both push and pull exercises.

Although the YouTuber didn't show himself performing any of the exercises, he did share a diagram of the moves he consistently performs; the week's split is summarised in the sketch below.
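For readers who want the split at a glance, here it is written out as a simple schedule; only the focus areas and exercises named in the paragraphs above are included, nothing beyond them.

```python
# PewDiePie's five-day dumbbell split, as described in the article above.
# Only the focus areas and exercises the article names are listed here.
WEEKLY_SPLIT = {
    "Monday":    "Push - heavy chest focus, finishing on shoulders",
    "Tuesday":   "Legs - squats, dead-lifts, lunges",
    "Wednesday": "Pull exercises",
    "Thursday":  "Legs",
    "Friday":    "Mixed push and pull",
}

for day, focus in WEEKLY_SPLIT.items():
    print(f"{day:<10} {focus}")
```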

Of course, anyone will tell you that to build muscle and to burn fat you need to do a lot more than just lift weights.

PewDiePie himself admitted this by stating that he is now eating a greater amount of protein. Not only that, but he has also largely quit alcohol with the exception of social gatherings.

You can buy workout equipment used by PewDiePie to follow his five day dumbbell workout routine.

His DTX Fitness Folding Weight Bench is currently unavailable on Amazon, but you can find other benches that are just as good for as little as 109.99.

PewDiePie says he uses a PowerBlock Sports Series Interchangeable Dumbbell that goes up to 90 pounds, and you can buy one of these from the Powerblock website.

If you really want to hone in on your abs, one machine you could buy is a Wonder Core for just 89.99. This is a great piece of equipment that allows you to do multiple ab exercises as well as some arm work.

You could also instead buy an adjustable power tower for the same price. This is an extremely effective tool, as it allows you to do ab exercises as well as pull exercises.

As for weights, you can do PewDiePie's pull and lift exercises in the five-day dumbbell workout routine with dumbbells or his PowerBlocks, but you may wish to invest in a barbell with a set of weight plates.

This is because it helps you become stronger and lift heavier thanks to both your arms sharing and lifting the load.

For squats and other leg exercises you may want to buy some resistance bands for extra tension.

Lastly, PewDiePie also states that he uses Wrist Wraps to help prevent injury when lifting and these can be bought for as cheap as 9.

If you're interested in changing your figure like PewDiePie, there are other YouTubers you can watch for workout routines.

Athlean X is particularly good as he shares routines that can be done at home as well as in the gym, with expensive equipment or with just DIY resources such as a towel.

WWE wrestler Sheamus is also good as he showcases a wide variety of different workout routines from heavy lifting to crossfit. And yes, a lot of his can be performed at home too.

If you want to burn body fat, then youll also be interested in performing HIIT exercises as these burn more calories than lifting weights.

YouTubers/figures who are helpful in this area include Joe Wicks as well as CrossFit's Lauren Fisher, who has her own virtual fitness classes.

In other news, TikTok: What is the Pause Challenge? And how can I do it?
